TITLE: Solving a gradient equation for a harmonic function. QUESTION [0 upvotes]: The vector field $$\frac{\vec{A}\times\vec{x}}{(\vec{A}\times\vec{x})^2}$$ has zero curl, so a scalar field $\varphi$ exists such that $$\nabla\varphi=\frac{\vec{A}\times\vec{x}}{(\vec{A}\times\vec{x})^2},$$ where $\vec{A}$ is constant and $\vec{x}$ is the position vector. It also turns out that the divergence is zero, $\nabla \cdot \dfrac{\vec{A}\times\vec{x}}{(\vec{A}\times\vec{x})^2} = 0$. Then $\varphi$ is harmonic, $$\Delta \varphi=0.$$ How do I find $\varphi$? I tried to write the three PDEs in Maple but did not succeed in finding a solution this way. REPLY [1 votes]: The curl is not zero at the origin. Choose a coordinate system in which the $z$-axis aligns with $\vec{A}$. Let $\gamma$ be a circle of radius $\epsilon$ in the $xy$-plane and $D$ the disk in the $xy$-plane enclosed by $\gamma$. Then Stokes' theorem gives us $$ \int_D \nabla\times \left(\frac{\vec{A}\times \vec{x}}{|\vec{A}\times \vec{x}|^2}\right)\cdot d\vec{S}= \oint_\gamma \frac{\vec{A}\times \vec{x}}{|\vec{A}\times \vec{x}|^2}\cdot d\vec{x}=\frac{1}{|\vec{A}|\epsilon}\int_{0}^{2\pi} \epsilon\, d\theta =\frac{2\pi}{|\vec{A}|}, $$ meaning the curl cannot be zero everywhere. So what is going on here? Let $\vec{A}=A\hat{z}$; then $$ \vec{V}=\frac{\vec{A}\times \vec{x}}{|\vec{A}\times \vec{x}|^2}=\frac{1}{A}\frac{-y\hat{x}+x\hat{y}}{x^2+y^2}=\frac{1}{A}\frac{\hat{\phi}}{\rho}, $$ where $(\rho, \phi, z)$ are cylindrical coordinates. As you can see, $\vec{V}$ is ill-defined on the $z$-axis (where $\rho=0$ and $\hat{\phi}$ has no meaning), so the domain of definition of $\vec{V}$ is in fact $\mathbb{R}^3-(\text{the line through the origin along }\vec{A})$. But this space is not simply connected, so $\nabla\times \vec{F}=0$ does not necessarily imply $\vec{F}=\nabla \varphi$. However, suppose we still want to naively find a function $\varphi$ such that $\vec{V}=\nabla\varphi$.
In cylindrical coordinates we have $$ \begin{aligned} \nabla f&=\frac{\partial f}{\partial \rho}\hat{\rho}+\frac{1}{\rho}\frac{\partial f}{\partial \phi}\hat{\phi}+ \frac{\partial f}{\partial z}\hat{z}\\ \nabla\cdot \vec{F}&=\frac{1}{\rho}\frac{\partial (\rho F_\rho)}{\partial \rho}+\frac{1}{\rho}\frac{\partial F_\phi}{\partial \phi}+ \frac{\partial F_z}{\partial z}\\ \nabla \times \vec{F}&=\left(\frac{1}{\rho}\frac{\partial F_z}{\partial \phi}-\frac{\partial F_\phi}{\partial z}\right)\hat{\rho}+\left(\frac{\partial F_\rho}{\partial z}-\frac{\partial F_z}{\partial \rho}\right)\hat{\phi}+ \frac{1}{\rho}\left(\frac{\partial (\rho F_\phi)}{\partial \rho}-\frac{\partial F_\rho}{\partial \phi}\right)\hat{z} \end{aligned} $$ So your vector field is in fact (naively) equal to $$ \vec{V}= \nabla \left(\frac{\phi}{A}\right). $$ Can you see the problem now? Whatever $\phi$ is, it is not a single-valued function. As you go once around the $z$-axis, $\phi$ picks up an extra $2\pi$. The curl of $\vec{V}$, if you still want to stay faithful to this naive picture, should be interpreted as $\nabla \times \vec{V}=\frac{2\pi\hat{z}}{A}\int_{-\infty}^\infty dt\,\delta^3(x\hat{x}+y\hat{y}+(t-z)\hat{z})$, where $\delta^3(\vec{x})$ is the three-dimensional Dirac delta function vanishing everywhere but the origin; the $t$-integral concentrates the curl on the $z$-axis [hopefully I haven't made any mistakes in determining this curl]. Physicists use these (naive) tricks all the time, by the way. In their language, the origin is an (infinitesimal) vortex, and this vector field can be thought of as the flow field circulating around the vortex.
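To make the nonzero circulation concrete, here is a minimal numerical check (a sketch; the value $A=2$ and the loop radius $r=0.5$ are arbitrary choices) that the line integral of $\vec{V}$ around a circle about the $z$-axis equals $2\pi/|\vec{A}|$, independently of the radius:

```python
import numpy as np

A = 2.0                                  # |A|, an arbitrary test value
r = 0.5                                  # loop radius (the result must not depend on it)
theta = np.linspace(0.0, 2.0 * np.pi, 20001)

# points on the circle gamma and the field V = (A x x)/|A x x|^2 = (1/A) phi_hat / rho
x, y = r * np.cos(theta), r * np.sin(theta)
Vx = -y / (A * (x**2 + y**2))
Vy = x / (A * (x**2 + y**2))

# dl/dtheta = (-r sin(theta), r cos(theta)); Riemann sum of V . dl over the loop
integrand = Vx * (-r * np.sin(theta)) + Vy * (r * np.cos(theta))
circulation = np.sum(integrand[:-1] * np.diff(theta))
print(circulation, 2.0 * np.pi / A)      # both approximately 3.14159
```

The same value comes out for every radius, which is exactly the obstruction to writing $\vec{V}$ as the gradient of a single-valued function on any domain encircling the $z$-axis.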
TITLE: If $b > 1$ and $B(r)$ is the set of all numbers $b^t$, where $t$ is rational and $t \leq r$, prove that $b^r = \sup B(r)$ where $r$ is rational. QUESTION [0 upvotes]: I'm working through Rudin's "Principles of Mathematical Analysis" on my own, so I don't want the full answer. I'm only looking for a hint on this problem. As a follow-up to this question, Rudin asks me to prove that if $b > 1$ and $B(r)$ is the set of all numbers $b^t$, where $t$ is rational and $t \leq r$, then $b^r = \sup B(r)$ where $r$ is rational. I know that $B(r) \subset \mathbb{R}$ and that $\mathbb{R}$ has the least upper bound property, so to prove the existence of $\sup B(r)$, I just need to show that $B(r) \neq \emptyset$ and that $B(r)$ is bounded above. $B(r)$ is obviously nonempty because $\forall r \in \mathbb{Q}$, $ r \leq r \Rightarrow b^r \in B(r)$. Since $b > 1$, $t \leq r \Rightarrow b^t \leq b^r$, so $b^r$ is an upper bound of $B(r)$: if $b^y > b^r$ for some $y \in \mathbb{Q}$, then $y > r$, so $b^y \notin B(r)$. $B(r)$ is nonempty and bounded above, so $\sup B(r)$ exists. I got stuck here proving that $b^r = \sup B(r)$. I'm thinking about this approach to prove that if $x < b^r$, then $x$ isn't an upper bound of $B(r)$. Pick some $x \in \mathbb{R}, x > 0, x < b^r$. I know (though I'm not sure how I know...) that $\exists s \in \mathbb{R}$ such that $b^s = x$, so $x = b^s < b^r$. Using the fact that $\mathbb{Q}$ is dense in $\mathbb{R}$, I know that there is some $p \in \mathbb{Q}$ such that $s < p < r$. Therefore $x = b^s < b^p < b^r$, so $x$ isn't an upper bound of $B(r)$, which implies that $b^r = \sup B(r)$. I'm stuck here because I don't know how to do this without assuming that if $x \in \mathbb{R}, x > 0$, then $\exists s \in \mathbb{R}$ such that $b^s = x$. If this proof isn't correct or assumes too much, can someone give me a hint in the right direction?
REPLY [2 votes]: For a given rational $r$, to prove $b^r=\sup B(r)$, you first need to pick $x \in B(r)$ and show $x \leq b^r$, so that $b^r$ is an upper bound according to Definition 1.7. You know $x=b^t$ for some rational $t \leq r$, you know how rational exponents are defined in terms of integer exponents because of the work done in part (a) of this problem, and you know how order works when the exponents are integers. You need to put these things together to get (i) in Definition 1.8. After that, you pick any real $x$ with $x<b^r$ and you need to show there is some $y \in B(r)$ such that $x<y$ (so that ''$x \geq z$ for every $z \in B(r)$'' is a false statement, and $x$ is not an upper bound, according to Definition 1.7). This shows (ii) in Definition 1.8, which will mean that $b^r$ is the least upper bound. If you think about it correctly, you won't need hints for this part.
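Not a proof, but a quick numerical sanity check of the claim may help build intuition (a sketch; the choices $b=2$ and $r=3/2$ are arbitrary): rational exponents $t \leq r$ give values $b^t$ that climb toward $b^r$ from below.

```python
from fractions import Fraction

b = 2.0                       # any b > 1; an arbitrary test value
r = Fraction(3, 2)            # a rational exponent r

# rational t_k = r - 1/k <= r increases to r, so b^{t_k} climbs toward b^r
for k in (1, 10, 100, 1000):
    t = r - Fraction(1, k)
    print(float(t), b ** float(t))
print("b^r =", b ** float(r))
```

Every printed value $b^{t_k}$ lies in $B(r)$ and below $b^r$, consistent with $b^r = \sup B(r)$.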
TITLE: Why are irreducible elements non-units? QUESTION [5 upvotes]: I know this may seem trivial, but I'm trying to grasp why irreducible elements are non-units. If an element $p$ is a unit in a ring $R$ and $b \in R$ is its inverse, then $pb = 1$. Does this imply that $b$ is a factor of $p$, thus making it reducible? REPLY [12 votes]: Here is another example: suppose your teacher instructs you to factor $p(x) = x + 1$ in the rational number system. You might reply, "it's already factored", but then your teacher says: "No, $p(x) = 2\cdot \left(\dfrac{x}{2} + \dfrac{1}{2}\right)$". You realize, with a sinking feeling, that you'll never "be finished factoring". Intuitively, factoring ought to stop "at some basic level". Factoring "up to units" IS that basic level, an opt-out that allows us to finally be "done". (Note that $2$ is a unit in $\Bbb Q$, so the two factorizations above are "really the same".)
\begin{document} \title*{A McKean optimal transportation perspective on Feynman-Kac formulae with application to data assimilation} \titlerunning{A McKean optimal transportation perspective on data assimilation} \author{Yuan Cheng and Sebastian Reich} \institute{Yuan Cheng \at Institut f\"ur Mathematik, Universit\"at Potsdam, Am Neuen Palais 10, D-14469 Potsdam, Germany, \email{yuan.cheng@uni-potsdam.de} \and Sebastian Reich \at Institut f\"ur Mathematik, Universit\"at Potsdam, Am Neuen Palais 10, D-14469 Potsdam, Germany, and Department of Mathematics and Statistics, University of Reading, Whiteknights, PO Box 220, Reading, RG6 6AX, UK, \email{sreich@math.uni-potsdam.de} } \maketitle \date{26/09/2013} \abstract{Data assimilation is the task of combining mathematical models with observational data. From a mathematical perspective data assimilation leads to Bayesian inference problems which can be formulated in terms of Feynman-Kac formulae. In this paper we focus on the sequential nature of many data assimilation problems and their numerical implementation in the form of Monte Carlo methods. We demonstrate how sequential data assimilation can be interpreted in terms of time-dependent Markov processes, an approach often referred to as the McKean approach to Feynman-Kac formulae. It is shown that the McKean approach has very natural links to coupling of random variables and optimal transportation. This link allows one to propose novel sequential Monte Carlo methods/particle filters. In combination with localization these novel algorithms have the potential to beat the curse of dimensionality, which has so far prevented particle filters from being applied to spatially extended systems.} \section{Introduction} \label{sec:1} This paper is concerned with Monte Carlo methods for approximating expectation values for sequences of probability density functions (PDFs) $\pi^n(z)$, $n\ge 0$, $z\in {\cal Z}$.
We assume that these PDFs arise sequentially from a Markov process with given transition kernel $\pi(z|z')$ and are modified by weight functions $G^n(z)\ge0$ at each iteration index $n\ge1$. More precisely, the PDFs satisfy the recursion \begin{equation} \label{sec1:FK} \pi^{n}(z) = \frac{1}{C} G^n(z) \int_{\cal Z} \pi(z|z') \pi^{n-1}(z'){\rm d}z' \end{equation} with the constant $C$ chosen such that $\int_{\cal Z} \pi^n(z){\rm d}z = 1$. A general mathematical framework for such problems is provided by the Feynman-Kac formalism as discussed in detail in \cite{sr:DelMoral}.\footnote{The classic Feynman-Kac formulae provide a connection between stochastic processes and solutions to partial differential equations. Here we use a generalization which links discrete-time stochastic processes to sequences of marginal distributions and associated expectation values. In addition to sequential Bayesian inference, which primarily motivates this review article, applications of discrete-time Feynman-Kac formulae of type (\ref{sec1:FK}) can, for example, be found in non-equilibrium molecular dynamics, where the weight functions $G^n$ in (\ref{sec1:FK}) correspond to the incremental work exerted on a molecular system at time $t_n$. See \cite{sr:stoltz} for more details.} In order to apply Monte Carlo methods to (\ref{sec1:FK}) it is useful to reformulate (\ref{sec1:FK}) in terms of modified Markov processes with transition kernel $\pi^n(z|z')$, which satisfy the consistency condition \begin{equation} \label{sec1:comp} \pi^{n}(z) = \int_{\cal Z} \pi^n(z|z') \pi^{n-1}(z'){\rm d}z'. \end{equation} This reformulation has been called the McKean approach to Feynman-Kac models in \cite{sr:DelMoral}. \footnote{\cite{sr:McKean66} pioneered the study of stochastic processes which are generated by stochastic differential equations for which the diffusion term depends on the time-evolving marginal distributions $\pi(z,t)$.
Here we utilize a generalization of this idea to discrete-time Markov processes which allows for transition kernels $\pi^n(z|z')$ to depend on the marginal distributions $\pi^n(z)$.} Once a particular McKean model is available, a Monte Carlo implementation reduces to sequences of particles $\{z_i^n \}_{i=1}^M$ being generated sequentially by \begin{equation} \label{sec1:MC} z_i^{n} \sim \pi^n(\cdot|z_i^{n-1}), \qquad i=1,\ldots,M, \end{equation} for $n = 0,1,\ldots,N$. In other words, $z^{n}_i$ is the realization of a random variable with (conditional) PDF $\pi^n(z|z_i^{n-1})$. Such a Monte Carlo method constitutes a particular instance of the far more general class of sequential Monte Carlo methods (SMCMs) \citep{sr:Doucet}. While there are many applications that naturally give rise to Feynman-Kac formulae \citep{sr:DelMoral}, we will focus in this paper on Markov processes for which the underlying transition kernel $\pi(z|z')$ is determined by a deterministic dynamical system whose current state $z^n$ we wish to estimate from partial and noisy observations $y^n_{\rm obs}$. The weight function $G^n(z)$ of the Feynman-Kac recursion (\ref{sec1:FK}) is in this case given by the likelihood of observing $y_{\rm obs}^n$ given $z^n$, and we encounter a particular application of \emph{Bayesian inference} \citep{sr:jazwinski,sr:stuart10a}. The precise mathematical setting and the Feynman-Kac formula for the associated data assimilation problem will be discussed in Section \ref{sec:2}. Some of the standard Monte Carlo approaches to Feynman-Kac formulae will be summarized in Section \ref{sec:3}. It is important to note that the consistency condition (\ref{sec1:comp}) does not specify a McKean model uniquely. In other words, given a Feynman-Kac recursion (\ref{sec1:FK}) there are many options to define an associated McKean model $\pi^n(z|z')$.
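For a scalar state, the recursion (\ref{sec1:FK}) can be checked directly on a grid. The following minimal sketch assumes an illustrative Gaussian random-walk kernel $\pi(z|z')$ and Gaussian weights $G^n$; these particular choices are our own and are not prescribed by the formalism.

```python
import numpy as np

# Minimal 1-D grid evaluation of the Feynman-Kac recursion (sec1:FK).
# The Gaussian random-walk kernel pi(z|z') and the weights G^n(z) below
# are illustrative assumptions; any kernel/weight pair would do.
z = np.linspace(-5.0, 5.0, 401)
dz = z[1] - z[0]

kernel = np.exp(-0.5 * (z[:, None] - z[None, :]) ** 2 / 0.25)  # pi(z|z'), sigma = 0.5
kernel /= kernel.sum(axis=0, keepdims=True) * dz               # each column integrates to 1

pi = np.exp(-0.5 * z ** 2)
pi /= pi.sum() * dz                                            # pi^0: standard normal
for n in range(1, 4):
    G = np.exp(-0.5 * (z - 1.0) ** 2)                          # assumed weight G^n(z)
    pi = G * (kernel @ pi) * dz                                # G^n(z) * int pi(z|z') pi^{n-1}(z') dz'
    pi /= pi.sum() * dz                                        # fixes the constant C
mean = (z * pi).sum() * dz
print("mean of pi^3:", mean)
```

With these choices the weights repeatedly pull the density toward $z=1$, so the mean of $\pi^3$ lies between the prior mean $0$ and $1$.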
It has been suggested independently by \cite{sr:reich10,sr:cotterreich} and \cite{sr:marzouk11} in the context of Bayesian inference that optimal transportation \citep{sr:Villani} can be used to couple the prior and posterior distributions. This idea generalizes to all Feynman-Kac formulae and leads to McKean models which are optimal in the sense of optimal transportation. This optimal transportation approach to McKean models will be developed in detail in Section \ref{sec:4} of this paper. Optimal transportation problems lead to a nonlinear elliptic PDE, called the Monge-Amp\`ere equation \citep{sr:Villani}, which is very hard to tackle numerically in space dimensions larger than one. On the other hand, optimal transportation is an infinite-dimensional generalization \citep{sr:mccann95,sr:Villani2} of the classic linear transport problem \citep{sr:strang}. This interpretation is very attractive in terms of Monte Carlo methods and gives rise to a novel SMCM of type (\ref{sec1:MC}), which we call the ensemble transform particle filter (ETPF) \citep{sr:reich13}. The ETPF is based on a linear transformation of the forecast particles \begin{equation}\label{sec1:forecast} z_i^f \sim \pi(\cdot|z_i^{n-1}), \qquad i = 1,\ldots,M, \end{equation} of type \begin{equation} \label{sec1:trans} z_j^{n} = \sum_{i=1}^M z_i^f s_{ij} \end{equation} with the entries $s_{ij}\ge 0$ of the transform matrix $S\in \mathbb{R}^{M\times M}$ being determined by an appropriate linear transport problem. Even more remarkably, it turns out that SMCMs which resample in each iteration as well as the popular class of ensemble Kalman filters (EnKFs) \citep{sr:evensen} also fit into the linear transform framework of (\ref{sec1:trans}). We will discuss particle/ensemble-based sequential data assimilation algorithms within the unifying framework of linear ensemble transform filters in Section \ref{sec:5}.
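As a first concrete instance of the linear transform framework (\ref{sec1:trans}), multinomial resampling with importance weights $w_i$ can be written as a (random) matrix $S$ with nonnegative entries whose columns are one-hot vectors drawn according to the $w_i$. The sketch below uses toy forecast particles and weights of our own choosing; it illustrates the resampling instance of the framework, not the deterministic optimal-transport construction of the ETPF.

```python
import numpy as np

# Sketch: multinomial resampling written as a linear ensemble transform
# z_j^n = sum_i z_i^f s_ij with s_ij >= 0, one instance of (sec1:trans).
# Forecast particles and weights below are toy data, not from the text.
rng = np.random.default_rng(0)

M = 5
zf = rng.normal(size=M)                  # forecast particles z_i^f
w = rng.random(M)
w /= w.sum()                             # importance weights, sum_i w_i = 1

S = np.zeros((M, M))
for j in range(M):                       # column j is one-hot at an index drawn with prob. w
    S[rng.choice(M, p=w), j] = 1.0

zn = zf @ S                              # analysis particles z_j^n
print(zn)                                # each entry is a copy of some forecast particle
```

The ETPF replaces this random $S$ by a deterministic nonnegative matrix obtained from a linear transport problem, which also fits (\ref{sec1:trans}).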
This section also includes an extension to spatially extended dynamical systems using the concept of localization \citep{sr:evensen}. Section \ref{sec:6} provides numerical results for the Lorenz-63 \citep{sr:lorenz63} and the Lorenz-96 \citep{sr:lorenz96} models. The results for the 40-dimensional Lorenz-96 model indicate that the ensemble transform particle filter with localization can beat the curse of dimensionality which has so far prevented SMCMs from being used for high-dimensional systems \citep{sr:bengtsson08}. A brief historical account of data assimilation and filtering is given in Section 7. We mention that, while the focus of this paper is on state estimation for deterministic dynamical systems, the results can easily be extended to stochastic models as well as to combined state and parameter estimation problems. Furthermore, possible applications include all areas in which SMCMs have been used successfully. We refer, for example, to navigation, computer vision, and cognitive sciences (see, e.g., \cite{sr:Doucet,sr:mumford03} and references therein). \section{Data assimilation and Feynman-Kac formula} \label{sec:2} Consider a deterministic \emph{dynamical system}\footnote{Even though this review article assumes deterministic evolution equations, the results presented here can easily be generalized to evolution equations with stochastic model errors.} \begin{equation} \label{sec2:DS} z^{n+1} = \Psi (z^n) \end{equation} with state variable $z\in \mathbb{R}^{N_z}$, iteration index $n\ge 0$ and given initial condition $z^0 \in {\cal Z} \subset \mathbb{R}^{N_z}$. We assume that $\Psi$ is a diffeomorphism on $\mathbb{R}^{N_z}$ and that $\Psi({\cal Z}) \subseteq {\cal Z}$, which implies that the iterates $z^n$ stay in ${\cal Z}$ for all $n\ge 0$. Dynamical systems of the form (\ref{sec2:DS}) often arise as the time-$\Delta t$-flow maps of differential equations \begin{equation} \label{sec2:ODE} \frac{{\rm d}z}{{\rm d}t} = f(z).
\end{equation} In many practical applications the initial state $z^0$ is not precisely known. We may then assume that our uncertainty about the correct initial state can, for example, be quantified in terms of ratios of frequencies of occurrence. More precisely, the \emph{ratio of frequency of occurrence} (RFO) of two initial conditions $z_a^0 \in {\cal Z}$ and $z_b^0 \in {\cal Z}$ is defined as the ratio of frequencies of occurrence for the two associated $\varepsilon$-neighborhoods ${\cal U}_\varepsilon (z_a^0)$ and ${\cal U}_\varepsilon (z_b^0)$, respectively, in the limit $\varepsilon \to 0$. Here ${\cal U}_\varepsilon (z)$ is defined as \[ {\cal U}_\varepsilon (z) = \{z' \in \mathbb{R}^{N_z}: \|z'-z\|^2 \le \varepsilon\}. \] It is important to note that the volumes of both neighborhoods are identical, \emph{i.e.}, $V({\cal U}_\varepsilon(z_a^0)) = V({\cal U}_\varepsilon (z_b^0))$. From a frequentist perspective, RFOs can be thought of as arising from repeated experiments with one and the same dynamical system (\ref{sec2:DS}) under varying initial conditions and upon counting how often $z^0 \in {\cal U}_\varepsilon(z^0_a)$ relative to $z^0 \in {\cal U}_{\varepsilon}(z^0_b)$. There will, of course, be many instances for which $z^0$ is neither in ${\cal U}_\varepsilon(z_a^0)$ nor in ${\cal U}_\varepsilon (z_b^0)$. Alternatively, one can take a Bayesian perspective and think of RFOs as our subjective belief that $z_a^0$, rather than another initial condition $z_b^0$, actually arises as the initial condition in (\ref{sec2:DS}). The latter interpretation is also applicable in case only a single experiment with (\ref{sec2:DS}) is conducted. Independently of such a statistical interpretation of the RFO, we assume the availability of a function $\tau (z)>0$ such that the RFO can be expressed as \begin{equation} \label{sec2:fittomodel} \mbox{RFO} = \frac{\tau(z_a^0)}{\tau(z_b^0)} \end{equation} for all pairs of initial conditions from ${\cal Z}$.
Provided that \[ \int_{\cal Z} \tau(z) {\rm d}z < \infty \] we can introduce the probability density function (PDF) \[ \pi_{Z^0}(z) = \frac{\tau(z)}{\int_{\cal Z} \tau(z) {\rm d}z} \] and interpret initial conditions $z^0$ as realizations of a random variable $Z^0:\Omega \to \mathbb{R}^{N_z}$ with PDF $\pi_{Z^0}$.\footnote{We have assumed the existence of an underlying probability space $(\Omega,{\cal F},\mathbb{P})$. The specific structure of this probability space does not play a role in the subsequent discussions.} We remark that most of our subsequent discussions carry through even if $\int_{\cal Z}\tau(z) {\rm d}z$ is unbounded as long as the RFOs remain well-defined. So far we have discussed RFOs for initial conditions. But one can also consider such ratios for any iteration index $n\ge 0$, \emph{i.e.}, for solutions \[ z^n_a = \Psi^{n} (z^0_a) \] and \[ z^n_b = \Psi^{n} (z^0_b). \] Here $\Psi^{n}$ denotes the $n$-fold application of $\Psi$. The RFO at iteration index $n$ is now defined as the ratio of frequencies of occurrence for the two associated $\varepsilon$-neighborhoods ${\cal U}_\varepsilon(z_a^n)$ and ${\cal U}_\varepsilon (z_b^n)$, respectively, in the limit $\varepsilon \to 0$. We pull this ratio back to $n=0$ and find that \[ \mbox{RFO}(n) \approx \frac{ \tau(\Psi^{-n}(z^n_a)) V_a}{\tau(\Psi^{-n}(z^n_b))V_b} \] for $\varepsilon$ sufficiently small, where \[ V_{a/b} = V(\Psi^{-n}({\cal U}_\varepsilon (z_{a/b}^n))) := \int_{\Psi^{-n}({\cal U}_\varepsilon (z_{a/b}^n))} {\rm d}z \] denote the volumes of $\Psi^{-n}({\cal U}_\varepsilon(z^n_{a/b}))$ and $\Psi^{-n}$ refers to the inverse of $\Psi^{n}$. These two volumes can be approximated as \[ V_{a/b} \approx V({\cal U}_\varepsilon (z^0_{a/b})) \times |D\Psi^{-n}(z^n_{a/b})| \] for $\varepsilon>0$ sufficiently small. Here $D\Psi^{-n}(z) \in \mathbb{R}^{N_z\times N_z}$ stands for the Jacobian matrix of partial derivatives of $\Psi^{-n}$ at $z$ and $|D\Psi^{-n}(z)|$ for its determinant. 
Hence, upon taking the limit $\varepsilon \to 0$, we obtain \begin{align*} \mbox{RFO}(n) &= \frac{\tau(\Psi^{-n}(z^n_a)) |D\Psi^{-n}(z^n_a)|}{\tau(\Psi^{-n}(z^n_b)) |D\Psi^{-n}(z^n_b)|} \\ &= \frac{\pi_{Z^0}(\Psi^{-n}(z^n_a)) |D\Psi^{-n}(z^n_a)|}{\pi_{Z^0}(\Psi^{-n}(z^n_b)) |D\Psi^{-n}(z^n_b)|}. \end{align*} Therefore we may interpret solutions $z^n$ for fixed iteration index $n\ge 1$ as realizations of a random variable $Z^n:\Omega \to \mathbb{R}^{N_z}$ with PDF \begin{equation} \label{sec2:marginal} \pi_{Z^n}(z) = \pi_{Z^0}(\Psi^{-n}(z)) |D\Psi^{-n}(z)|. \end{equation} These PDFs can also be defined recursively using \begin{equation} \label{sec2:Markov1} \pi_{Z^{n+1}}(z) = \int_{\cal Z} \delta (z - \Psi(z'))\,\pi_{Z^n}(z')\, {\rm d}z' . \end{equation} Here $\delta(\cdot)$ denotes the Dirac delta function, which satisfies \[ \int_{\mathbb{R}^{N_z}} f(z) \delta (z-\bar z) {\rm d}z = f(\bar z) \] for all smooth functions $f:\mathbb{R}^{N_z} \to \mathbb{R}$. \footnote{The Dirac delta function $\delta (z-\bar z)$ provides a convenient short-hand for the point measure $\mu_{\bar z}({\rm d}z)$.} In other words, the dynamical system (\ref{sec2:DS}) induces a \emph{Markov process}, which we can also write as \[ Z^{n+1} = \Psi(Z^n) \] in terms of the random variables $Z^n$, $n\ge 0$. The sequence of random variables $\{Z^n\}_{n=0}^N$ for fixed $N\ge 1$ gives rise to the finite-time stochastic process $Z^{0:N}:\Omega \to {\cal Z}^{N+1}$ with realizations \[ z^{0:N} := (z^0,z^1,\ldots,z^N) = Z^{0:N}(\omega), \quad \omega \in \Omega, \] that satisfy (\ref{sec2:DS}). 
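The change-of-variables identity (\ref{sec2:marginal}) can be illustrated numerically. The sketch below assumes a toy one-dimensional invertible map $\Psi(z)=2z+1$ and a standard normal $\pi_{Z^0}$ (both purely illustrative): a histogram of samples pushed through $\Psi$ should match $\pi_{Z^0}(\Psi^{-1}(z))\,|D\Psi^{-1}(z)|$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch of (sec2:marginal) for the assumed map Psi(z) = 2 z + 1 with N_z = 1:
# Psi^{-1}(z) = (z - 1)/2 and |D Psi^{-1}(z)| = 1/2.
phi = lambda z: np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)   # pi_{Z^0} = N(0,1)

z0 = rng.normal(size=200_000)          # realizations of Z^0
z1 = 2.0 * z0 + 1.0                    # Z^1 = Psi(Z^0)

hist, edges = np.histogram(z1, bins=80, range=(-3.0, 5.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
density = phi((centers - 1.0) / 2.0) * 0.5   # pi_{Z^0}(Psi^{-1}(z)) |D Psi^{-1}(z)|
err = np.max(np.abs(hist - density))
print("max density error:", err)       # small for this sample size
```

The histogram agrees with the pushed-forward density up to Monte Carlo error, which is the content of (\ref{sec2:marginal}) in this toy setting.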
The joint distribution of $Z^{0:N}$, denoted by $\pi_{Z^{0:N}}$, is formally\footnote{To be mathematically precise one should talk about the joint measure \[ \mu_{Z^{0:N}}({\rm d}z^0,\dots,{\rm d}z^N) = \mu_{Z^0}({\rm d}z^0) \mu_{\Psi(z^0)}({\rm d}z^1) \cdots \mu_{\Psi(z^{N-1})}({\rm d}z^N) \] with initial measure $\mu_{Z^0}({\rm d}z^0) = \pi_{Z^0}(z^0){\rm d}z^0$.} given by \begin{equation} \label{sec2:joint} \pi_{Z^{0:N}}(z^0,\ldots,z^N) = \pi_{Z^0}(z^0)\,\delta (z^1-\Psi(z^0)) \cdots \delta (z^N -\Psi(z^{N-1})) \end{equation} and (\ref{sec2:marginal}) is the marginal of $\pi_{Z^{0:N}}$ in $z^n$, $n=1,\ldots,N$. Let us now consider the situation where (\ref{sec2:DS}) serves as a model for an unknown physical process with realization \begin{equation} \label{sec2:truth} z^{0:N}_{\rm ref} = (z^0_{\rm ref},z^1_{\rm ref},\ldots,z^N_{\rm ref}) . \end{equation} In the classic filtering/smoothing setting \citep{sr:jazwinski,sr:crisan} one assumes that there exists an $\omega_{\rm ref} \in \Omega$ such that \[ z^{0:N}_{\rm ref} = Z^{0:N}(\omega_{\rm ref}). \] In practice such an assumption is highly unrealistic and the reference trajectory (\ref{sec2:truth}) may instead follow an iteration \begin{equation} \label{sec2:DST} z^{n+1}_{\rm ref} = \Psi_{\rm ref}(z^n_{\rm ref}) \end{equation} with unknown initial condition $z^0_{\rm ref}$ and unknown $\Psi_{\rm ref}$. Of course, it should hold that $\Psi$ in (\ref{sec2:DS}) is close to $\Psi_{\rm ref}$ in an appropriate mathematical sense. Independently of such assumptions, we assume that $z^{0:N}_{\rm ref}$ is accessible to us through partial and noisy observations of the form \[ y^n_{\rm obs} = h(z_{\rm ref}^n) + \xi^n, \quad n=1,\ldots,N, \] where $h:{\cal Z}\to\mathbb{R}^{N_y}$ is called the \emph{forward} or \emph{observation map} and the $\xi^n$'s are realizations of independent and identically distributed Gaussian random variables with mean zero and covariance matrix $R \in \mathbb{R}^{N_y \times N_y}$.
Estimating $z_{\rm ref}^n$ from $y^n_{\rm obs}$ constitutes a classic inverse problem \citep{sr:Tarantola}. The \emph{ratio of fits to data} (RFD) of two realizations $z_a^{0:N}$ and $z_b^{0:N}$ from the stochastic process $Z^{0:N}$ is defined as \[ \mbox{RFD} = \frac{\prod_{n=1}^N e^{-\frac{1}{2}(h(z^n_a)-y^n_{\rm obs})^T R^{-1} (h(z^n_a)-y_{\rm obs}^n) }}{\prod_{n=1}^N e^{-\frac{1}{2}(h(z^n_b)-y^n_{\rm obs})^T R^{-1} (h(z^n_b)-y_{\rm obs}^n) }} . \] Finally we define the \emph{ratio of fits to model and data} (RFMD) of a $z_a^{0:N}$ versus a $z_b^{0:N}$ given the model and the observations as follows: \begin{align} \nonumber \mbox{RFMD} &= \mbox{RFD} \times \mbox{RFO}(0)\\ \nonumber &= \frac{\prod_{n=1}^N e^{-\frac{1}{2} (h(z^n_a)-y^n_{\rm obs})^T R^{-1} (h(z^n_a)-y_{\rm obs}^n) }}{\prod_{n=1}^N e^{-\frac{1}{2} (h(z^n_b)-y^n_{\rm obs})^T R^{-1} (h(z^n_b)-y_{\rm obs}^n) }} \frac{\pi_{Z^0}(z_a^0)}{\pi_{Z^0}(z_b^0)} \\ \label{sec2:fitmd} &=\frac{\prod_{n=1}^N e^{-\frac{1}{2}(h(z^n_a)-y^n_{\rm obs})^T R^{-1} (h(z^n_a)-y_{\rm obs}^n) }}{\prod_{n=1}^N e^{-\frac{1}{2}(h(z^n_b)-y^n_{\rm obs})^T R^{-1} (h(z^n_b)-y_{\rm obs}^n) }} \frac{\pi_{Z^{0:N}}(z_a^{0:N})}{\pi_{Z^{0:N}}(z_b^{0:N})} . \end{align} The simple product structure arises since the uncertainty in the initial conditions is assumed to be independent of the measurement errors and the last identity follows from the fact that our dynamical model is deterministic. Again we may translate this combined ratio into a PDF \begin{equation} \label{sec2:feynman} \pi_{Z^{0:N}}(z^{0:N}|y_{\rm obs}^{1:N}) = \frac{1}{C} \prod_{n=1}^N e^{-\frac{1}{2} (h(z^n)-y^n_{\rm obs})^T R^{-1} (h(z^n)-y_{\rm obs}^n) } \,\pi_{Z^{0}}(z^0), \end{equation} where $C>0$ is a normalization constant depending only on $y_{\rm obs}^{1:N}$. This PDF gives the probability distribution in $z^{0:N}$ conditioned on the given set of observations \[ y_{\rm obs}^{1:N} = (y^1_{\rm obs},\ldots,y_{\rm obs}^N). 
\] The PDF (\ref{sec2:feynman}) is, of course, also conditioned on (\ref{sec2:DS}) and the initial PDF $\pi_{Z^0}$. This dependence is not made explicit in order to avoid additional notational clutter. The formulation (\ref{sec2:feynman}) is an instance of Bayesian inference on the one hand, and an instance of the Feynman-Kac formalism on the other. Within the Bayesian perspective, $\pi_{Z^{0}}$ (or, equivalently, $\pi_{Z^{0:N}}$) represents the prior distribution, \[ \pi_{Y^{1:N}}(y^{1:N}|z^{0:N}) = \frac{1}{(2\pi)^{N_y N/2} |R|^{N/2}} \prod_{n=1}^N e^{-\frac{1}{2} (h(z^n)-y^n)^T R^{-1} (h(z^n)-y^n) } \] the compounded likelihood function, and (\ref{sec2:feynman}) the posterior PDF given an actually observed $y^{1:N} = y^{1:N}_{\rm obs}$. The Feynman-Kac formalism is more general and includes a wide range of applications for which an underlying stochastic process is modified by weights $G^n(z^n)\ge 0$. These weights then replace the likelihood functions \[ \pi_{Y}(y^n_{\rm obs}|z^n) = \frac{1}{(2\pi)^{N_y/2} |R|^{1/2}} e^{-\frac{1}{2} (h(z^n)- y^n_{\rm obs})^T R^{-1} (h(z^n)-y^n_{\rm obs}) } \] in (\ref{sec2:feynman}). The functions $G^n:\mathbb{R}^{N_z} \to \mathbb{R}$ can depend on the iteration index, as in \[ G^n(z) := \pi_{Y}(y_{\rm obs}^n|z), \] or may be independent of the iteration index. See \cite{sr:DelMoral} for further details on the Feynman-Kac formalism and \cite{sr:stoltz} for a specific (non-Bayesian) application in the context of non-equilibrium molecular dynamics.
Formula (\ref{sec2:feynman}) is hardly ever explicitly accessible and one needs to resort to numerical approximations whenever one wishes to either compute the expectation value \[ \mathbb{E}_{Z^{0:N}}[f(z^{0:N})|y_{\rm obs}^{1:N}] = \int_{{\cal Z}^{N+1}} f(z^{0:N}) \,\pi_{Z^{0:N}}(z^{0:N}|y_{\rm obs}^{1:N}) {\rm d}z^0 \cdots {\rm d}z^N \] of a given function $f:{\cal Z}^{N+1} \to \mathbb{R}$ or the \[ \mbox{RFMD} = \frac{\pi_{Z^{0:N}}(z_a^{0:N}|y_{\rm obs}^{1:N})}{\pi_{Z^{0:N}}(z_b^{0:N}|y_{\rm obs}^{1:N})} \] for given trajectories $z^{0:N}_a$ and $z^{0:N}_b$. Basic Monte Carlo approximation methods will be discussed in Section \ref{sec:3}. Alternatively, one may seek the \emph{maximum a posteriori (MAP) estimator} $z^0_{\rm MAP}$, which is defined as the initial condition $z^0$ that maximizes (\ref{sec2:feynman}) or, formulated alternatively, \[ z^0_{\rm MAP} = \mbox{arg} \,\inf L(z^0) ,\qquad L(z^0) := -\log \pi_{Z^{0:N}}(z^{0:N}|y_{\rm obs}^{1:N}) \] \citep{sr:kaipio,sr:Lewis,sr:Tarantola}. The MAP estimator is closely related to \emph{variational data assimilation techniques}, such as 3D-Var and 4D-Var, widely used in meteorology \citep{sr:daley,sr:kalnay}. In many applications expectation values need to be computed for functions $f$ which depend only on a single $z^n$. Those expectation values can be obtained by first integrating out all components in (\ref{sec2:feynman}) except for $z^n$. We denote the resulting marginal PDF by $\pi_{Z^n}(z^n|y_{\rm obs}^{1:N})$. The case $n=0$ plays a particular role since \[ \mbox{RFMD} = \frac{\pi_{Z^{0}}(z_a^{0}|y_{\rm obs}^{1:N})}{\pi_{Z^{0}}(z_b^{0}|y_{\rm obs}^{1:N})} \] and \[ \pi_{Z^n}(z^n|y_{\rm obs}^{1:N}) = \pi_{Z^0}(\Psi^{-n}(z^n)|y_{\rm obs}^{1:N}) |D\Psi^{-n}(z^n)| \] for $n=1,\ldots,N$. These identities hold because our dynamical system (\ref{sec2:DS}) is deterministic and invertible. 
In Section \ref{sec:4} we will discuss recursive approaches for determining the marginal $\pi_{Z^N}(z^N|y_{\rm obs}^{1:N})$ in terms of Markov processes. Computational techniques for implementing such recursions will be discussed in Section \ref{sec:5}. \section{Monte Carlo methods in path space} \label{sec:3} In this section, we briefly summarize two popular Monte Carlo strategies for computing expectation values with respect to the complete conditional distribution $\pi_{Z^{0:N}}(\cdot|y_{\rm obs}^{1:N})$. We start with the classic importance sampling Monte Carlo method. \subsection{Ensemble prediction and importance sampling} Ensemble prediction is a Monte Carlo method for assessing the marginal PDFs (\ref{sec2:marginal}) for $n=0,\ldots,N$. One first generates $M$ independent samples $z_i^0$, $i=1,\ldots,M$, from the initial PDF $\pi_{Z^0}$. Here samples are generated such that the probability of being in ${\cal U}_\varepsilon(z)$ is \[ \int_{{{\cal U}_\varepsilon}(z)} \pi_{Z^0}(z') {\rm d}z' \approx V({\cal U}_\varepsilon (z)) \times \pi_{Z^0}(z). \] Since the samples $z_i^0$ are generated such that their RFOs are all equal, their ratio of fits to model is identical to one. Furthermore, the expectation value of a function $f$ with respect to $Z^0$ is approximated by the familiar empirical estimator \[ \bar f_M := \frac{1}{M} \sum_{i=1}^M f(z_i^0). \] The initial ensemble $\{z_i^0\}_{i=1}^M$ is propagated independently under the dynamical system (\ref{sec2:DS}) for $n=0,\ldots,N-1$. This yields $M$ trajectories \[ z_i^{0:N} = (z_i^0,z_i^1,\ldots,z_i^N), \] which provide independent samples from the $\pi_{Z^{0:N}}$ distribution. With each of these samples we associate the weight \[ w_i = \frac{1}{C} \prod_{n=1}^N e^{-\frac{1}{2} (h(z_i^n)-y_{\rm obs}^n)^T R^{-1} (h(z_i^n)-y_{\rm obs}^n) } \] with the constant of proportionality chosen such that $\sum_{i=1}^M w_i = 1$.
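The ensemble prediction and weighting steps above can be sketched for a toy scalar model. The map $\Psi(z)=0.9z$, the observation map $h(z)=z$, the noise level $R$ and the observation values are all illustrative assumptions of ours, not part of the text; log-weights are used for numerical stability.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sketch of ensemble prediction plus importance weighting for a toy scalar
# model; Psi, h, R and the observations y_obs^n are illustrative assumptions.
M, N, R = 1000, 5, 0.25
Psi = lambda z: 0.9 * z            # toy dynamical system z^{n+1} = Psi(z^n)
h = lambda z: z                    # observation map

z = rng.normal(size=M)             # z_i^0 ~ pi_{Z^0}
logw = np.zeros(M)
y_obs = [0.5] * N                  # assumed observations y_obs^n
for n in range(N):
    z = Psi(z)                     # propagate the ensemble
    logw += -0.5 * (h(z) - y_obs[n]) ** 2 / R
w = np.exp(logw - logw.max())
w /= w.sum()                       # normalized weights, sum_i w_i = 1

f_bar = np.sum(w * z)              # importance-sampling estimate of E[f(z^N)|y], f(z) = z
M_eff = 1.0 / np.sum(w ** 2)       # effective sample size (sec3:SS)
print(f_bar, M_eff)
```

Monitoring $M_{\rm eff}$ relative to $M$ indicates when the weighted ensemble has degenerated and importance sampling becomes inefficient.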
The ratio of fits to model and data for any pair of samples $z_i^{1:N}$ and $z_j^{1:N}$ from $\pi_{Z^{0:N}}$ is now simply given by $w_i/w_j$ and the expectation value of a function $f$ with respect to $\pi_{Z^{0:N}}(z^{0:N}|y_{\rm obs}^{1:N})$ can be approximated by the empirical estimator \[ \bar f = \sum_{i=1}^M w_i f(z^{0:N}_i) . \] This estimator is an instance of \emph{importance sampling} since samples from a distribution different from the target distribution, here $\pi_{Z^{0:N}}$, are used to approximate the statistics of the target distribution, here $\pi_{Z^{0:N}}(\cdot|y_{\rm obs}^{1:N})$. See \cite{sr:liu} and \cite{sr:Robert2004} for further details. \subsection{Markov chain Monte Carlo (MCMC) methods} Importance sampling becomes inefficient whenever the \emph{effective sample size} \begin{equation} \label{sec3:SS} M_{\rm eff} = \frac{1}{\sum_{i=1}^M w_i^2} \in [1,M] \end{equation} becomes much smaller than the sample size $M$. Under those circumstances it can be preferable to generate dependent samples $z_i^{0:N}$ using MCMC methods. MCMC methods rely on a proposal step and a Metropolis-Hastings acceptance criterion. Note that only $z_i^0$ needs to be stored since the whole trajectory is then uniquely determined by $z^n_i = \Psi^n(z_i^0)$. Consider, for simplicity, the reversible proposal step \[ z^0_p = z_i^0 + \xi, \] where $\xi$ is a realization of a random variable $\Xi$ with PDF $\pi_{\Xi}$ satisfying $\pi_{\Xi}(\xi) = \pi_{\Xi}(-\xi)$ and $z^0_i$ denotes the last accepted sample. The associated trajectory $z_p^{0:N}$ is computed using (\ref{sec2:DS}). Next one evaluates the ratio of fits to model and data (\ref{sec2:fitmd}) for $z_a^{0:N} = z_p^{0:N}$ versus $z_b^{0:N} = z_i^{0:N}$, \emph{i.e.}, \[ \alpha := \frac{\prod_{n=1}^N e^{-\frac{1}{2} (h(z^n_p)-y^n_{\rm obs})^T R^{-1} (h(z^n_p)-y_{\rm obs}^n) }}{\prod_{n=1}^N e^{-\frac{1}{2} (h(z^n_i)-y^n_{\rm obs})^T R^{-1} (h(z^n_i)-y_{\rm obs}^n) }} \frac{\pi_{Z^0}(z_p^0)}{\pi_{Z^0}(z_i^0)} . 
\] If $\alpha\ge 1$, then the proposal is accepted and the new sample for the initial condition is $z^0_{i+1} = z^0_p$. Otherwise the proposal is accepted with probability $\alpha$ and rejected with probability $1-\alpha$. In case of rejection one sets $z_{i+1}^0 = z_i^0$. Note that the accepted samples follow the $\pi_{Z^0}(z^0|y_{\rm obs}^{1:N})$ distribution and not the initial PDF $\pi_{Z^0}(z^0)$. A potential problem with MCMC methods lies in low acceptance rates and/or highly dependent samples. In particular, if the distribution $\pi_\Xi$ is too narrow, then exploration of phase space can be slow, while too wide a distribution can potentially lead to low acceptance rates. Hence one should compare the effective sample size (\ref{sec3:SS}) from an importance sampling approach to the effective sample size of an MCMC implementation, which is inversely proportional to the integrated autocorrelation time of the samples. See \cite{sr:liu} and \cite{sr:Robert2004} for more details. We close this section by referring to \cite{sr:sarkka} for further algorithms for filtering and smoothing. \section{McKean optimal transportation approach} \label{sec:4} We now restrict the discussion of the Feynman-Kac formalism to the marginal PDFs $\pi_{Z^n}(z^n|y_{\rm obs}^{1:N})$. We have already seen in Section \ref{sec:2} that the marginal PDF with $n=N$ plays a particularly important role. We show that this marginal PDF can be recursively defined starting from the PDF $\pi_{Z^0}$ for the initial condition $z^0$ of (\ref{sec2:DS}). For that reason we introduce the forecast and analysis PDF at iteration index $n$ and denote them by $\pi_{Z^n}(z^n|y_{\rm obs}^{1:n-1})$ and $\pi_{Z^n}(z^n|y_{\rm obs}^{1:n})$, respectively.
Those PDFs are defined recursively by \begin{equation} \label{sec4:Step1} \pi_{Z^n}(z^n|y_{\rm obs}^{1:n-1}) = \pi_{Z^{n-1}}(\Psi^{-1}(z^n)|y_{\rm obs}^{1:n-1})| D\Psi^{-1}(z^n)| \end{equation} and \begin{equation} \label{sec4:Step2} \pi_{Z^n}(z^n|y_{\rm obs}^{1:n}) = \frac{\pi_Y(y_{\rm obs}^n|z^n) \pi_{Z^n}(z^n|y_{\rm obs}^{1:n-1}) }{\int_{\cal Z} \pi_Y(y_{\rm obs}^n|z) \pi_{Z^n}(z|y_{\rm obs}^{1:n-1}) {\rm d}z}. \end{equation} Here (\ref{sec4:Step1}) simply propagates the analysis from index $n-1$ to the forecast at index $n$ under the action of the dynamical system (\ref{sec2:DS}). Bayes' formula is then applied in (\ref{sec4:Step2}) in order to transform the forecast into the analysis at index $n$ by assimilating the observed $y_{\rm obs}^n$. \begin{theorem} Consider the sequence of forecast and analysis PDFs defined by the recursion (\ref{sec4:Step1})-(\ref{sec4:Step2}) for $n=1,\ldots,N$ with $\pi_{Z^0}(z^0)$ given. Then the analysis PDF at $n=N$ is equal to the Feynman-Kac PDF (\ref{sec2:feynman}) marginalized down to the single variable $z^N$. \end{theorem} \begin{proof} We prove the theorem by induction in $N$. We first verify the claim for $N=1$. Indeed, by definition, \begin{align*} \pi_{Z^{1}}(z^1|y_{\rm obs}^1) &= \frac{1}{C_1} \int_{\cal Z} \pi_{Y}(y_{\rm obs}^1|z^1) \delta(z^1-\Psi(z^0)) \pi_{Z^0}(z^0) {\rm d}z^0 \\ &= \frac{1}{C_1} \pi_{Y}(y_{\rm obs}^1|z^1) \pi_{Z^0}(\Psi^{-1}(z^1))|D\Psi^{-1}(z^1)|\\ &= \frac{1}{C_1} \pi_{Y}(y_{\rm obs}^1|z^1) \pi_{Z^1}(z^1) \end{align*} and $\pi_{Z^1}(z^1)$ is the forecast PDF at index $n=1$. Here $C_1$ denotes the constant of proportionality which only depends on $y_{\rm obs}^1$. The induction step from $N$ to $N+1$ follows from the following line of arguments.
We know by the induction assumption that the marginal PDF at $N+1$ can be written as \begin{align*} \pi_{Z^{N+1}}(z^{N+1}|y_{\rm obs}^{1:N+1}) &= \int_{\cal Z} \pi_{Z^{N:N+1}}(z^{N:N+1}|y_{\rm obs}^{1:N+1}) {\rm d}z^N\\ &= \frac{1}{C_{N+1}} \pi_Y(y_{\rm obs}^{N+1}|z^{N+1}) \int_{\cal Z} \delta (z^{N+1}-\Psi(z^N)) \pi_{Z^N}(z^N|y_{\rm obs}^{1:N}) {\rm d}z^N\\ &= \frac{1}{C_{N+1}} \pi_Y(y_{\rm obs}^{N+1}|z^{N+1}) \pi_{Z^{N+1}}(z^{N+1}|y_{\rm obs}^{1:N}) \end{align*} in agreement with (\ref{sec4:Step2}) for $n=N+1$. Here $C_{N+1}$ denotes the constant of proportionality that depends only on $y_{\rm obs}^{N+1}$ and we have made use of the fact that the forecast PDF at index $n = N+1$ is, according to (\ref{sec4:Step1}), defined by \[ \pi_{Z^{N+1}}(z^{N+1}|y_{\rm obs}^{1:N}) = \pi_{Z^{N}}(\Psi^{-1} (z^{N+1})|y_{\rm obs}^{1:N})| D\Psi^{-1} (z^{N+1})|. \] \hfill $\qed$ \end{proof} While the forecast step (\ref{sec4:Step1}) is in the form of a Markov process with transition kernel \[ \pi_{\rm model}(z|z') = \delta(z-\Psi(z')), \] this does not hold for the analysis step (\ref{sec4:Step2}). The \emph{McKean approach} to (\ref{sec4:Step1})-(\ref{sec4:Step2}) is based on introducing Markov transition kernels $\pi_{\rm data}^n(z|z')$, $n=1,\ldots,N$, for the analysis step (\ref{sec4:Step2}). In other words, the transition kernel $\pi_{\rm data}^n$ at iteration index $n$ has to satisfy the consistency condition \begin{equation} \label{sec4:consistency} \pi_{Z^n}(z|y_{\rm obs}^{1:n}) = \int_{\cal Z} \pi_{\rm data}^n(z|z') \pi_{Z^n}(z'|y_{\rm obs}^{1:n-1}) {\rm d}z' . \end{equation} These transition kernels are not unique. The following kernel \begin{equation} \label{sec4:DelMoral} \pi_{\rm data}^n(z|z') = \epsilon \pi_{Y}(y^n_{\rm obs}|z') \delta(z-z') + \left(1-\epsilon \pi_{Y}(y^n_{\rm obs}|z') \right) \pi_{Z^n}(z|y_{\rm obs}^{1:n}) \end{equation} has, for example, been considered in \cite{sr:DelMoral}. 
Here $\epsilon \ge 0$ has to be chosen such that \[ 1-\epsilon \pi_{Y}(y^n_{\rm obs}|z) \ge 0 \] for all $z\in \mathbb{R}^{N_z}$. Indeed, we find that \begin{align*} \int_{\cal Z} \pi_{\rm data}(z|z') \pi_{Z^n}(z'|y_{\rm obs}^{1:n-1}){\rm d}z' &= \pi_{Z^n}(z|y_{\rm obs}^{1:n}) + \epsilon \pi_Y(y^n_{\rm obs}|z)\pi_{Z^n}(z|y_{\rm obs}^{1:n-1}) \,\,- \\ & \qquad \qquad \qquad \epsilon \pi_{Z^n}(z|y^{1:n}_{\rm obs}) \pi_Y(y^n_{\rm obs}|y_{\rm obs}^{1:n-1}) \\ & = \pi_{Z^n}(z|y_{\rm obs}^{1:n}). \end{align*} The intuitive interpretation of this transition kernel is that one stays at $z'$ with probability $p= \epsilon \pi_{Y}(y^n_{\rm obs}|z')$ while with probability $(1-p)$ one samples from the analysis PDF $\pi_{Z^n}(z|y_{\rm obs}^{1:n})$. Let us define the combined McKean-Markov transition kernel \begin{align} \nonumber \pi^n(z^n|z^{n-1}) &:= \int_{\cal Z} \pi_{\rm data}^n(z^n|z) \pi_{\rm model}(z|z^{n-1}) {\rm d}z \\ \nonumber &= \int_{\cal Z} \pi_{\rm data}^n(z^n|z) \delta (z-\Psi(z^{n-1})) {\rm d}z \\ &= \pi_{\rm data}^n(z^n|\Psi(z^{n-1})) \label{sec3:kernel} \end{align} for the propagation of the analysis PDF from iteration index $n-1$ to $n$. The combined McKean-Markov transition kernels $\pi^n$, $n=1,\ldots,N$, define a finite-time stochastic process $\hat Z^{0:N} = \{\hat Z^n\}_{n=0}^N$ with $\hat Z^0 = Z^0$. The marginal PDFs satisfy \[ \pi_{{\hat Z}^n}(z^n|y_{\rm obs}^{1:n}) = \int_{\cal Z} \pi^n(z^n|z^{n-1}) \pi_{{\hat Z}^{n-1}} (z^{n-1}|y_{\rm obs}^{1:n-1}) {\rm d}z^{n-1}. \] \begin{corollary} The final time marginal distribution $\pi_{Z^N}(z^N|y_{\rm obs}^{1:N})$ of the Feynman-Kac formulation (\ref{sec2:feynman}) is identical to the final time marginal distribution $\pi_{\hat Z^N}(z^N|y_{\rm obs}^{1:N})$ of the finite-time stochastic process $\hat Z^{0:N}$ induced by the McKean-Markov transition kernels $\pi^n(z^n|z^{n-1})$. 
\end{corollary} We summarize our discussion on the McKean approach in terms of analysis and forecast random variables, which constitute the basic building blocks for most current sequential data assimilation methods. \begin{definition} Given a dynamic iteration (\ref{sec2:DS}) with PDF $\pi_{Z^0}$ for the initial conditions and observations $y_{\rm obs}^n$, $n=1,\ldots,N$, the McKean approach leads to a recursive definition of forecast $Z^{n,f}$ and analysis $Z^{n,a}$ random variables. The iteration is started by declaring $Z^0$ the analysis $Z^{0,a}$ at $n=0$. The forecast at iteration index $n>0$ is defined by propagating the analysis at $n-1$ forward under the dynamic model (\ref{sec2:DS}), \emph{i.e.}, \begin{equation} \label{sec4:McKean1} Z^{n,f} = \Psi(Z^{n-1,a}). \end{equation} The analysis $Z^{n,a}$ at iteration index $n$ is defined by applying the transition kernel $\pi_{\rm data}^{n}(z|z')$ to $Z^{n,f}$. In particular, if $z^{n,f} = Z^{n,f}(\omega)$ is a realized forecast at iteration index $n$, then the analysis is distributed according to \begin{equation} \label{sec4:McKean2} Z^{n,a}|z^{n,f} \sim \pi_{\rm data}^{n}(z|z^{n,f}). \end{equation} If the marginal PDFs of $Z^{n,f}$ and $Z^{n,a}$ are denoted by $\pi_{Z^{n,f}}(z)$ and $\pi_{Z^{n,a}}(z)$, respectively, then the transition kernel $\pi_{\rm data}^n$ has to satisfy the compatibility condition (\ref{sec4:consistency}), \emph{i.e.}, \begin{equation} \label{sec4:compatible} \int_{\cal Z} \pi_{\rm data}^n (z|z') \pi_{Z^{n,f}}(z'){\rm d}z' = \pi_{Z^{n,a}}(z) \end{equation} with \[ \pi_{Z^{n,a}}(z) = \frac{\pi_Y(y_{\rm obs}^n|z) \pi_{Z^{n,f}}(z)}{\int_{\cal Z} \pi_Y(y_{\rm obs}^n|z) \pi_{Z^{n,f}}(z){\rm d}z}. \] \end{definition} The data related transition step (\ref{sec4:McKean2}) introduces randomness into the analysis of a given forecast value. 
This appears counterintuitive and, indeed, the main purpose of the rest of this section is to demonstrate that the transition kernel $\pi^n_{\rm data}$ can be chosen such that \begin{equation} \label{sec4:McKean2b} Z^{n,a} = \nabla_z \phi^{n}(Z^{n,f}), \end{equation} where $\phi^n:\mathbb{R}^{N_z} \to \mathbb{R}$ is an appropriate convex potential. In other words, the data-driven McKean update step can be reduced to a (deterministic) map and the stochastic process $\hat Z^{0:N}$ is induced by the deterministic recursion (or dynamical system) \[ \hat Z^n = \nabla_z \phi^n(\Psi( \hat Z^{n-1})) \] with $\hat Z^0 = Z^0$. The compatibility condition (\ref{sec4:compatible}) with $\pi_{\rm data}^n(z|z') = \delta (z-\nabla_z \phi^n(z'))$ reduces to \begin{equation} \label{sec4:Monge} \pi_{Z^{n,a}}(\nabla_z \phi^n(z)) |D\nabla_z \phi^n(z)| = \pi_{Z^{n,f}}(z), \end{equation} which constitutes a highly non-linear elliptic PDE for the potential $\phi^n$. In the remainder of this section we discuss under which conditions this PDE has a solution. This discussion will also guide us towards a numerical approximation technique that circumvents the need for directly solving (\ref{sec4:Monge}) either analytically or numerically. Consider the forecast PDF $\pi_{Z^{n,f}}$ and the analysis PDF $\pi_{Z^{n,a}}$ at iteration index $n$. For simplicity of notation we drop the iteration index and simply write $\pi_{Z^f}$ and $\pi_{Z^a}$, respectively. \begin{definition} A \emph{coupling} of $\pi_{Z^f}$ and $\pi_{Z^a}$ consists of a pair $Z^{f:a} = (Z^f,Z^a)$ of random variables such that $Z^f \sim \pi_{Z^f}$, $Z^a \sim \pi_{Z^a}$, and $Z^{f:a} \sim \pi_{Z^{f:a}}$. The joint PDF $\pi_{Z^{f:a}}(z^f,z^a)$ on the product space $\mathbb{R}^{N_z}\times \mathbb{R}^{N_z}$ is called the {\it transference plan} for this coupling. The set of all transference plans is denoted by $\Pi(\pi_{Z^f},\pi_{Z^a})$.\footnote{Couplings should be properly defined in terms of probability measures.
A coupling between two measures $\mu_{Z^f}({\rm d}z^f)$ and $\mu_{Z^a}({\rm d}z^a)$ consists of a pair of random variables with joint measure $\mu_{Z^{f:a}}({\rm d}z^f,{\rm d}z^a)$ such that $\mu_{Z^f}({\rm d}z^f) = \int_{\cal Z} \mu_{Z^{f:a}}({\rm d}z^f,{\rm d}z^a)$ and $\mu_{Z^a}({\rm d}z^a) = \int_{\cal Z} \mu_{Z^{f:a}}({\rm d}z^f,{\rm d}z^a)$, respectively.} \end{definition} Clearly, couplings always exist since one can use the trivial product coupling \[ \pi_{Z^{f:a}}(z^f,z^a) = \pi_{Z^f}(z^f) \pi_{Z^a}(z^a), \] in which case the associated random variables $Z^f$ and $Z^a$ are independent and each random variable follows its given marginal distribution. Once a coupling has been found, a McKean transition kernel is determined by \[ \pi_{\rm data}(z|z') = \frac{\pi_{Z^{f:a}}(z',z)}{\int_{\cal Z} \pi_{Z^{f:a}}(z',z''){\rm d}z''} . \] Conversely, any transition kernel $\pi_{\rm data}(z|z')$, such as (\ref{sec4:DelMoral}), also induces a coupling. A diffeomorphism $T: {\cal Z} \to {\cal Z}$ is called a \emph{transport map} if the induced random variable $Z^a= T(Z^f)$ satisfies \[ \int_{{\cal Z}} f(z^a) \pi_{Z^a}(z^a){\rm d}z^a = \int_{{\cal Z}} f(T(z^f)) \pi_{Z^f}(z^f){\rm d}z^f \] for all suitable functions $f:{\cal Z} \to \mathbb{R}$. The associated coupling \[ \pi_{Z^{f:a}}(z^f,z^a) = \delta (z^a - T(z^f)) \pi_{Z^f}(z^f) \] is called a \emph{deterministic coupling}. Indeed, one finds that \[ \int_{\cal Z} \pi_{Z^{f:a}}(z^f,z^a){\rm d}z^a = \pi_{Z^f}(z^f) \] and \[ \pi_{Z^a}(z^a) = \int_{\cal Z} \pi_{Z^{f:a}}(z^f,z^a){\rm d}z^f = \pi_{Z^f}(T^{-1}(z^a)) |DT^{-1}(z^a)|, \] respectively. When it comes to actually choosing a particular coupling from the set $\Pi(\pi_{Z^f},\pi_{Z^a})$ of all admissible ones, it appears preferable to pick the one that maximizes the covariance or correlation between $Z^f$ and $Z^a$. But maximizing their covariance for given marginals has an important geometric interpretation.
Consider, for simplicity, univariate random variables $Z^f$ and $Z^a$, then \begin{align*} \mathbb{E}_{Z^{f:a}}[|z^f-z^a |^2] &= \mathbb{E}_{Z^f}[|z^f|^2] + \mathbb{E}_{Z^a}[|z^a |^2] - 2 \mathbb{E}_{Z^{f:a}}[z^f z^a] \\ &= \mathbb{E}_{Z^f}[|z^f|^2] + \mathbb{E}_{Z^a}[|z^a|^2] - 2 \mathbb{E}_{Z^{f:a}}[(z^f-\bar z^f) (z^a-\bar z^a)] -2 \bar z^f \bar z^a \\ &= \mathbb{E}_{Z^f}[|z^f|^2] + \mathbb{E}_{Z^a}[|z^a|^2] -2\bar z^f \bar z^a - 2 \mbox{cov}(Z^f,Z^a), \end{align*} where $\bar z^{f/a} = \mathbb{E}_{Z^{f/a}}[z^{f/a}]$. Hence finding a joint PDF $\pi_{Z^{f:a}} \in \Pi(\pi_{Z^f},\pi_{Z^a})$ that minimizes the expectation of $|z^f-z^a|^2$ simultaneously maximizes the covariance between $Z^f$ and $Z^a$. This geometric interpretation leads to the celebrated Monge-Kantorovitch problem. \begin{definition} Let $\Pi(\pi_{Z^f},\pi_{Z^a})$ denote the set of all possible couplings between $\pi_{Z^f}$ and $\pi_{Z^a}$. A transference plan $\pi_{Z^{f:a}}^\ast \in \Pi(\pi_{Z^f},\pi_{Z^a})$ is called the solution to the \emph{Monge-Kantorovitch problem} with cost function $c(z^f,z^a) = \|z^f-z^a\|^2$ if \begin{equation} \label{sec4:eq:MK} \pi_{Z^{f:a}}^\ast = \arg \inf_{\pi_{Z^{f:a}}\in \Pi(\pi_{Z^f},\pi_{Z^a})} \mathbb{E}_{Z^{f:a}} [ \|z^f-z^a\|^2]. \end{equation} The associated functional $W(\pi_{Z^f},\pi_{Z^a})$, defined by \begin{equation} \label{sec4:WD} W(\pi_{Z^f},\pi_{Z^a})^2 = \mathbb{E}_{Z^{f:a}} [ \|z^f-z^a\|^2] \end{equation} is called the \emph{$L^2$-Wasserstein distance} between $\pi_{Z^f}$ and $\pi_{Z^a}$. \end{definition} \begin{example} Let us consider the discrete set \begin{equation} \label{sec4:td} \mathbb{Z} = \{z_1,z_2,\ldots, z_M\}, \qquad z_i \in \mathbb{R}, \end{equation} and two probability vectors $\mathbb{P}(z_i) = 1/M$ and $\mathbb{P}(z_i) = w_i$, respectively, on $\mathbb{Z}$ with $w_i\ge 0$, $i=1,\ldots,M$, and $\sum_i w_i =1$. 
Any coupling between these two probability vectors is characterized by a matrix $T \in \mathbb{R}^{M\times M}$ such that its entries $t_{ij} = (T)_{ij}$ satisfy $t_{ij} \ge 0$ and \begin{equation} \label{sec4:margLP} \sum_{i=1}^M t_{ij} = 1/M, \qquad \sum_{j=1}^M t_{ij} = w_i . \end{equation} These matrices characterize the set of all couplings $\Pi$ in the definition of the Monge-Kantorovitch problem. Given a coupling $T$ and the mean values \[ \bar z^f = \frac{1}{M}\sum_{i=1}^M z_i, \qquad \bar z^a = \sum_{i=1}^M w_i z_i, \] the covariance between the associated discrete random variables ${\rm Z}^f:\Omega \to \mathbb{Z}$ and ${\rm Z}^a:\Omega \to \mathbb{Z}$ is defined by \begin{equation} \label{sec4:maxLP} \mbox{cov}({\rm Z}^f,{\rm Z}^a) = \sum_{i,j=1}^M (z_i-\bar z^a) t_{ij} (z_j-\bar z^f). \end{equation} The particular coupling defined by $t_{ij} = w_i/M$ leads to zero correlation between ${\rm Z}^f$ and ${\rm Z}^a$. On the other hand, maximizing the correlation leads to a \emph{linear transport problem} in the $M^2$ unknowns $\{t_{ij}\}$. More precisely, the unknowns $t_{ij}$ have to satisfy the inequality constraints $t_{ij}\ge 0$, the equality constraints (\ref{sec4:margLP}), and should minimize \[ J(T) = \sum_{i,j=1}^M t_{ij} |z_i - z_j |^2, \] which is equivalent to maximizing (\ref{sec4:maxLP}). See \cite{sr:strang} and \cite{sr:wright99} for an introduction to linear transport problems and, more generally, to linear programming. \end{example} We now return to continuous random variables and the desired coupling between forecast and analysis PDFs. The following theorem is an adaptation of a more general result on optimal couplings from \cite{sr:Villani}. 
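The linear transport problem of the example above can also be solved numerically; for scalar points $z_i$ and the squared-distance cost, the optimal coupling is known to be the monotone (north-west corner) coupling of the sorted points, which avoids a general-purpose linear programming solver. The following Python sketch uses hypothetical values for the set $\mathbb{Z}$ and the weights $w_i$.

```python
import numpy as np

# Monotone (north-west corner) coupling between the uniform probability
# vector (1/M,...,1/M) and a weight vector w on scalar points z_i; for
# squared-distance cost this coupling solves the linear transport problem.
def monotone_coupling(z, w):
    M = len(z)
    order = np.argsort(z)            # sort the support points
    T = np.zeros((M, M))
    a = w[order].copy()              # remaining row mass (marginal w_i)
    b = np.full(M, 1.0 / M)          # remaining column mass (marginal 1/M)
    i = j = 0
    while i < M and j < M:
        m = min(a[i], b[j])          # move as much mass as both allow
        T[order[i], order[j]] += m
        a[i] -= m
        b[j] -= m
        if a[i] <= 1e-15:
            i += 1
        else:
            j += 1
    return T

z = np.array([0.0, 1.0, 2.0, 4.0])   # hypothetical support points
w = np.array([0.1, 0.2, 0.3, 0.4])   # hypothetical weights
T = monotone_coupling(z, w)
cost = sum(T[i, j] * (z[i] - z[j]) ** 2
           for i in range(4) for j in range(4))
```

By construction the transport cost of this coupling is no larger than that of the trivial product coupling $t_{ij} = w_i/M$.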
\begin{theorem} If the forecast PDF $\pi_{Z^f}$ has bounded second-order moments, then the optimal transference plan that solves the Monge-Kantorovitch problem gives rise to a deterministic coupling with transport map \[ Z^a = \nabla_z\phi(Z^f), \] where $\phi:\mathbb{R}^{N_z} \to \mathbb{R}$ is a convex potential. \end{theorem} Below we sketch the basic line of arguments that lead to this theorem. We first introduce the \emph{support} of a coupling $\pi_{Z^{f:a}}$ on $\mathbb{R}^{N_z} \times \mathbb{R}^{N_z}$ as the smallest closed set on which $\pi_{Z^{f:a}}$ is concentrated, \emph{i.e.}, \[ \mbox{supp} \,(\pi_{Z^{f:a}}) := \bigcap \{ S \subset \mathbb{R}^{N_z}\times \mathbb{R}^{N_z}: \,S\, \,\mbox{closed and}\,\, \mu_{Z^{f:a}}(\mathbb{R}^{N_z} \times \mathbb{R}^{N_z} \setminus S) = 0 \} \] with the measure of $\mathbb{R}^{N_z}\times \mathbb{R}^{N_z} \setminus S$ defined by \[ \mu_{Z^{f:a}}(\mathbb{R}^{N_z} \times \mathbb{R}^{N_z} \setminus S) = \int_{\mathbb{R}^{N_z} \times \mathbb{R}^{N_z} \setminus S} \pi_{Z^{f:a}}(z^f,z^a) \,{\rm d}z^f {\rm d}z^a . \] The support of $\pi_{Z^{f:a}}$ is called \emph{cyclically monotone} if for every set of points $(z^f_i,z^a_i) \in \mbox{supp}\,(\pi_{Z^{f:a}}) \subset \mathbb{R}^{N_z} \times \mathbb{R}^{N_z}$, $i=1,\ldots,I$, and any permutation $\sigma$ of $\{1,\ldots,I\}$ one has \begin{equation} \label{sec4:cyclical} \sum_{i=1}^I \|z_i^f - z_i^a\|^2 \le \sum_{i=1}^I \|z_i^f - z^a_{\sigma (i)}\|^2 . \end{equation} Note that (\ref{sec4:cyclical}) is equivalent to \[ \sum_{i=1}^I (z_i^f)^T (z_{\sigma (i)}^a - z_i^a) \le 0. \] It can be shown that any coupling whose support is not cyclically monotone can be modified into another coupling with lower transport cost. Hence it follows that a solution $\pi^\ast_{Z^{f:a}}$ of the Monge-Kantorovitch problem (\ref{sec4:eq:MK}) has cyclically monotone support. 
A fundamental theorem (\emph{Rockafellar's theorem} \citep{sr:Villani}) of convex analysis now states that cyclically monotone sets $S \subset \mathbb{R}^{N_z}\times \mathbb{R}^{N_z}$ are contained in the \emph{subdifferential} of a convex function $\phi:\mathbb{R}^{N_z} \to \mathbb{R}$. Here the subdifferential $\partial \phi$ of a convex function $\phi$ at a point $z\in \mathbb{R}^{N_z}$ is defined as the compact, non-empty and convex set of all $m \in \mathbb{R}^{N_z}$ such that \[ \phi(z') \ge \phi(z) + m^T (z'-z) \] for all $z' \in \mathbb{R}^{N_z}$. We write $m \in \partial \phi(z)$. An optimal transport map is obtained whenever the convex potential $\phi$ for a given optimal coupling $\pi^\ast_{Z^{f:a}}$ is sufficiently regular, in which case the subdifferential $\partial \phi (z)$ reduces to the classic gradient $\nabla_z \phi$ and $z^a = \nabla_z \phi(z^f)$. This regularity is ensured by the assumptions of the above theorem. See \cite{sr:mccann95} and \cite{sr:Villani} for more details. We summarize the \emph{McKean optimal transportation approach} in the following definition. \begin{definition} Given a dynamic iteration (\ref{sec2:DS}) with PDF $\pi_{Z^0}$ for the initial conditions and observations $y_{\rm obs}^n$, $n=1,\ldots,N$, the forecast $Z^{n,f}$ at iteration index $n>0$ is defined by (\ref{sec4:McKean1}) and the analysis $Z^{n,a}$ by (\ref{sec4:McKean2b}). The convex potential $\phi^n$ is the solution to the Monge-Kantorovitch optimal transportation problem for coupling $\pi_{Z^{f,n}}$ and $\pi_{Z^{a,n}}$. The iteration is started at $n=0$ with $Z^{0,a} = Z^0$. \end{definition} The application of optimal transportation to Bayesian inference and data assimilation was first discussed by \cite{sr:reich10}, \cite{sr:marzouk11}, and \cite{sr:cotterreich}. In the following section we discuss data assimilation algorithms from a McKean optimal transportation perspective.
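A univariate Gaussian example may help to illustrate the map $z^a = \nabla_z \phi(z^f)$: for one-dimensional Gaussian marginals the optimal transport map is the monotone affine map $T(z) = \bar z^a + (\sigma^a/\sigma^f)(z-\bar z^f)$, the gradient of a convex quadratic potential. The Python sketch below, with hypothetical means and standard deviations, checks the pushforward property empirically.

```python
import numpy as np

# Forecast N(mf, sf^2) and analysis N(ma, sa^2); the optimal transport map
# is the monotone affine map T(z) = ma + (sa/sf)*(z - mf), i.e. the gradient
# of the convex potential phi(z) = ma*z + (sa/sf)*(z - mf)**2 / 2.
mf, sf = 1.0, 2.0     # hypothetical forecast mean and standard deviation
ma, sa = 0.5, 1.0     # hypothetical analysis mean and standard deviation

def T(z):
    return ma + (sa / sf) * (z - mf)

rng = np.random.default_rng(0)
zf = mf + sf * rng.standard_normal(100000)   # samples from the forecast
za = T(zf)                                   # pushforward samples
# za should be (approximately) distributed as N(ma, sa^2)
```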
\section{Linear ensemble transform methods} \label{sec:5} In this section, we discuss SMCMs, EnKFs, and the recently proposed \citep{sr:reich13} ETPF from a coupling perspective. All three data assimilation methods have in common that they are based on an ensemble $z_i^n$, $i=1,\ldots,M$, of model states. In the absence of observations the $M$ ensemble members propagate independently under the model dynamics (\ref{sec2:DS}), \emph{i.e.}, an analysis ensemble at time-level $n-1$ gives rise to a forecast ensemble at time-level $n$ via \[ z_i^{n,f} = \Psi(z_i^{n-1,a}), \qquad i=1,\ldots,M. \] The three methods differ in the way the forecast ensemble $\{z_i^{n,f}\}_{i=1}^M$ is transformed into an analysis ensemble $\{z_i^{n,a}\}_{i=1}^M$ under an observation $y_{\rm obs}^n$. However, all three methods share a common mathematical structure which we outline next. We drop the iteration index in order to simplify the notation. \begin{definition} The class of \emph{linear ensemble transform filters} (LETFs) is defined by \begin{equation} \label{sec5:transform} z^{a}_j = \sum_{i=1}^M z_i^{f} s_{ij}, \end{equation} where the coefficients $s_{ij}$ are the $M^2$ entries of a matrix $S \in \mathbb{R}^{M\times M}$. \end{definition} The concept of LETFs is well established for EnKF formulations \citep{sr:tippett03}. It will be shown below that SMCMs and the ETPF also belong to the class of LETFs. In other words, these three methods differ only in the definition of the corresponding transform matrix $S$. \subsection{Sequential Monte Carlo methods (SMCMs)} \label{sec51} A central building block of a SMCM is the \emph{proposal density} $\pi_{\rm prop}(z|z')$, which produces a \emph{proposal ensemble} $\{z^p_i\}_{i=1}^M$ from the last analysis ensemble. Here we assume, for simplicity, that the proposal density is given by the model dynamics itself, \emph{i.e.}, \[ \pi_{\rm prop}(z|z') = \delta (z-\Psi(z')), \] and then \[ z_i^p = z_i^f ,\qquad i=1,\ldots,M.
\] One associates with the proposal/forecast ensemble two discrete measures on \begin{equation} \label{sec5:SMCM1a} \mathbb{Z} = \{z_1^{f},z_2^{f},\ldots,z_M^{f} \}, \end{equation} namely the uniform measure $\mathbb{P}(z_i^{f}) = 1/M$ and the non-uniform measure \[ \mathbb{P}(z_i^{f}) = w_i, \] defined by the importance weights \begin{equation} \label{sec5:SMCM1b} w_i = \frac{\pi_Y(y_{\rm obs}|z_i^{f})} {\sum_{j=1}^M \pi_Y(y_{\rm obs}| z_j^{f})}. \end{equation} The \emph{sequential importance resampling} (SIR) filter \citep{sr:gordon93} resamples from the weighted forecast ensemble in order to produce a new equally weighted analysis ensemble $\{z_i^a\}$. Here we only consider SIR filter implementations with resampling performed after each data assimilation cycle. An in-depth discussion of the SIR filter and more general SMCMs can be found, for example, in \cite{sr:Doucet,sr:doucet11}. Here we focus on the coupling-of-discrete-measures perspective on the resampling step. We first note that any resampling strategy effectively leads to a coupling of the uniform and the non-uniform measure on (\ref{sec5:SMCM1a}). As previously discussed, a coupling is defined by a matrix $T \in \mathbb{R}^{M\times M}$ such that $t_{ij} \ge 0$, and \begin{equation} \label{sec5:SMCM1c} \sum_{i=1}^M t_{ij} = 1/M, \qquad \sum_{j=1}^M t_{ij} = w_i. \end{equation} The resampling strategy (\ref{sec4:DelMoral}) leads to \[ t_{ij} = \frac{1}{M}\left(\epsilon w_j \delta_{ij} + (1- \epsilon w_j)w_i \right) \] with $\epsilon \ge 0$ chosen such that $\epsilon w_j \le 1$ for all $j=1,\ldots,M$. Multinomial resampling corresponds to the special case $\epsilon = 0$, \emph{i.e.}~$t_{ij} = w_i/M$. The associated transformation matrix $S$ in (\ref{sec5:transform}) is the realization of a random matrix with entries $s_{ij} \in \{0,1\}$ such that each column of $S$ contains exactly one entry $s_{ij} = 1$.
Given a coupling $T$, the probability of the entry $s_{ij}$ being the one selected in the $j$th column is \[ \mathbb{P}(s_{ij} = 1) = M t_{ij} \] and $Mt_{ij} = w_i$ in case of multinomial resampling. Any such resampling procedure based on a coupling matrix $T$ leads to a consistent coupling for the underlying forecast and analysis probability measures as $M\to \infty$, which, however, is non-optimal in the sense of the Monge-Kantorovitch problem (\ref{sec4:eq:MK}). We refer to \cite{sr:crisan} for resampling strategies which satisfy alternative optimality conditions. We emphasize that the transformation matrix $S$ of a SIR particle filter analysis step satisfies \begin{equation} \label{sec5:LETF1a} \sum_{i=1}^M s_{ij} = 1 \end{equation} and \begin{equation} \label{sec5:LETF1b} s_{ij}\in [0,1]. \end{equation} In other words, each realization of the resampling step yields a Markov transition matrix $S$. Furthermore, the weights $\hat w_i = M^{-1} \sum_{j=1}^M s_{ij}$ satisfy $\mathbb{E}[\hat w_i] = w_i$ and the analysis ensemble defined by $z_j^a = z_i^f$ if $s_{ij} = 1$, $j=1,\ldots,M$, is contained in the convex hull of the forecast ensemble (\ref{sec5:SMCM1a}). A forecast ensemble $\{z_i^f\}_{i=1}^M$ leads to the following estimator \[ \bar z^f = \frac{1}{M} \sum_{i=1}^M z_i^f \] for the mean and \[ P_{zz}^f = \frac{1}{M-1} \sum_{i=1}^M (z_i^f-\bar z^f)(z_i^f -\bar z^f)^T \] for the covariance matrix. In order to increase the \emph{robustness} of a SIR particle filter one often augments the resampling step by the \emph{particle rejuvenation step} \citep{sr:pham01} \begin{equation} \label{sec5:rejuvenation1} z_j^a = z_i^f + \xi_j, \end{equation} where the $\xi_j$'s are realizations of $M$ independent and identically distributed Gaussian random variables ${\rm N}(0,h^2P_{zz}^f)$ and the index $i$ in (\ref{sec5:rejuvenation1}) is determined by $s_{ij} = 1$. Here $h>0$ is the \emph{rejuvenation parameter} which determines the magnitude of the stochastic perturbations.
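A minimal Python sketch of the resampling step with particle rejuvenation for a scalar ensemble; the forecast values, weights, and rejuvenation parameter are hypothetical, and the coupling is the special case $t_{ij} = w_i/M$, so that each column of $S$ selects index $i$ with probability $w_i$.

```python
import numpy as np

rng = np.random.default_rng(2)

M = 5
zf = np.array([0.0, 0.5, 1.0, 1.5, 2.0])   # hypothetical forecast ensemble
w = np.array([0.05, 0.1, 0.2, 0.3, 0.35])  # hypothetical importance weights

# resampling with t_ij = w_i/M: in column j, P(s_ij = 1) = w_i
S = np.zeros((M, M))
rows = rng.choice(M, size=M, p=w)
S[rows, np.arange(M)] = 1.0                # one unit entry per column

za = zf @ S                                # z_j^a = sum_i z_i^f s_ij

# particle rejuvenation with parameter h
Pf = np.var(zf, ddof=1)                    # empirical forecast variance
h = 0.2
za = za + h * np.sqrt(Pf) * rng.standard_normal(M)
```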
Rejuvenation helps to avoid the creation of identical analysis ensemble members which would remain identical under the deterministic model dynamics (\ref{sec2:DS}) for all times. Furthermore, rejuvenation can be used as a heuristic tool in order to compensate for model errors as encoded, for example, in the difference between (\ref{sec2:DS}) and (\ref{sec2:DST}). In this paper we only discuss SMCMs which are based on the proposal step (\ref{sec1:forecast}). Alternative proposal steps are possible and recent work on alternative implementations of SMCMs includes \cite{sr:leeuwen10}, \cite{sr:chorin10}, \cite{sr:chorin12}, \cite{sr:chorin12b}, \cite{sr:leeuwen12}, \cite{sr:reich13b}. \subsection{Ensemble Kalman filter (EnKF)} The historically first version of the EnKF uses perturbed observations in order to transform a forecast ensemble into an analysis ensemble. The key requirement of any EnKF is that the transformation step is consistent with the classic Kalman update step in case the forecast and analysis PDFs are Gaussian. The so-called \emph{EnKF with perturbed observations} is explicitly given by the simple formula \[ z_j^a = z_j^f - K (y_j^f + \xi_j - y_{\rm obs}), \qquad j=1,\ldots,M, \] where $y_j^f = h(z_j^f)$, the $\xi_j$'s are realizations of independent and identically distributed Gaussian random variables with mean zero and covariance matrix $R$, and $K$ denotes the Kalman gain matrix, which in case of the EnKF is determined by the forecast ensemble as follows: \[ K = P_{zy}^f (P_{yy}^f + R)^{-1} \] with empirical covariance matrices \[ P^f_{zy} = \frac{1}{M-1} \sum_{i=1}^M (z_i^f - \bar z^f)(y_i^f - \bar y^f)^T \] and \[ P^f_{yy} = \frac{1}{M-1} \sum_{i=1}^M (y_i^f - \bar y^f)(y_i^f - \bar y^f)^T, \] respectively. Here the ensemble mean in observation space is defined by \[ \bar y^f = \frac{1}{M} \sum_{i=1}^M y_i^f.
\] In order to shorten subsequent notations, we introduce the $N_y \times M$ matrix of ensemble deviations \[ A_y^f = [ y_1^f-\bar y^f, y_2^f -\bar y^f, \dots, y_M^f-\bar y^f] \] in observation space and the $N_z \times M$ matrix of ensemble deviations \[ A_z^f = [ z_1^f-\bar z^f, z_2^f -\bar z^f, \dots, z_M^f-\bar z^f] \] in state space, respectively. In terms of these ensemble deviation matrices, it holds that \[ P_{zz}^f = \frac{1}{M-1} A_z^f (A_z^f)^T \quad \mbox{and} \quad P_{zy}^f = \frac{1}{M-1} A_z^f (A_y^f)^T, \] respectively. It can be verified by explicit calculations that the EnKF with perturbed observations fits into the class of LETFs with \[ z_j^a = \sum_{i=1}^M z_i^f \left( \delta_{ij} - \frac{1}{M-1} (y_i^f -\bar y^f)^T (P_{yy}^f + R)^{-1} ( y_j^f + \xi_j - y_{\rm obs}) \right) \] and, therefore, \[ s_{ij} = \delta_{ij} - \frac{1}{M-1}(y_i^f - \bar y^f)^T (P_{yy}^f + R)^{-1} (y_j^f + \xi_j - y_{\rm obs}). \] Here we have used that \[ \frac{1}{M-1} \sum_{i=1}^M (z_i^f - \bar z^f)(y_i^f - \bar y^f)^T = \frac{1}{M-1} \sum_{i=1}^M z_i^f (y_i^f - \bar y^f)^T. \] We note that the transform matrix $S$ is the realization of a random matrix. The class of \emph{ensemble square root filters} (ESRF) leads instead to deterministic transformation matrices $S$. More precisely, an ESRF uses separate transformation steps for the ensemble mean $\bar z^f$ and the ensemble deviations $z_i^f - \bar z^f$. The mean is simply updated according to the classic Kalman formula, \emph{i.e.} \begin{equation} \label{sec5:Kalmanmean} \bar z^a = \bar z^f - K(\bar y^f - y_{\rm obs}) \end{equation} with the Kalman gain matrix defined as before.
Upon introducing the analysis matrix of ensemble deviations $A_z^a \in \mathbb{R}^{N_z\times M}$, one obtains \begin{align} \nonumber P_{zz}^a &= \frac{1}{M-1} A_z^a (A_z^a)^T \\ &= P_{zz}^f - K (P_{zy}^f)^T = \frac{1}{M-1} A_z^f Q (A_z^f)^T \label{sec5:Kalmancovariance} \end{align} with the $M\times M$ matrix $Q$ defined by \[ Q = I - \frac{1}{M-1} (A^f_y)^T (P_{yy}^f + R)^{-1} A_y^f . \] Let us denote the matrix square root\footnote{The matrix square root of a symmetric positive semi-definite matrix $Q$ is the unique symmetric positive semi-definite matrix $D$ which satisfies $D D = Q$.} of $Q$ by $D$ and its entries by $d_{ij}$. We note that $\sum_{i=1}^M d_{ij} = 1$ and it follows that \begin{align} \nonumber z_j^a &= \bar z^f - K(\bar y^f - y_{\rm obs}) + \sum_{i=1}^M (z_i^f -\bar z^f) d_{ij} \\ &= \sum_{i=1}^M z_i^f \left( \frac{1}{M-1}(y_i^f -\bar y^f)^T(P_{yy}^f + R)^{-1} (y_{\rm obs} - \bar y^f ) + d_{ij} \right) . \label{sec5:ESRF} \end{align} The appropriate entries for the transformation matrix $S$ of an ESRF can now be read off of (\ref{sec5:ESRF}). See \cite{sr:tippett03,sr:wang04,sr:nichols08,sr:ott04,sr:nerger12} and \cite{sr:evensen} for further details on other ESRF implementations such as the ensemble adjustment Kalman filter. We mention in particular that an application of the Sherman-Morrison-Woodbury formula \citep{sr:golub} leads to the equivalent square root formula \begin{align} D &= \left\{ I + \frac{1}{M-1} (A_y^f)^T R^{-1} A_y^f \right\}^{-1/2}, \label{sec5:eq:Stransform} \end{align} which avoids inverting the $N_y \times N_y$ matrix $P_{yy}^f + R$ and is therefore desirable whenever $N_y \gg M$.
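The ESRF update can be condensed into a few lines of linear algebra. The following Python sketch uses a synthetic two-dimensional state with a one-dimensional linear forward operator (all values hypothetical) and forms the symmetric square root $D$ of $Q$ via an eigendecomposition; for $N_y \gg M$ one would work with (\ref{sec5:eq:Stransform}) instead of inverting $P_{yy}^f + R$.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy ESRF analysis step: Nz = 2 state dimensions, Ny = 1 observation.
M, Nz, Ny = 10, 2, 1
Zf = rng.standard_normal((Nz, M))          # forecast ensemble (columns z_i^f)
H = np.array([[1.0, 0.0]])                 # linear forward operator h(z) = Hz
R = np.array([[0.5]])                      # hypothetical observation error
y_obs = np.array([0.3])                    # hypothetical observation

zbar = Zf.mean(axis=1, keepdims=True)
Az = Zf - zbar                             # state ensemble deviations A_z^f
Yf = H @ Zf
ybar = Yf.mean(axis=1, keepdims=True)
Ay = Yf - ybar                             # observed ensemble deviations A_y^f

Pyy = Ay @ Ay.T / (M - 1)
Pzy = Az @ Ay.T / (M - 1)
K = Pzy @ np.linalg.inv(Pyy + R)           # Kalman gain

# symmetric square root D of Q = I - Ay^T (Pyy+R)^{-1} Ay / (M-1)
Q = np.eye(M) - Ay.T @ np.linalg.inv(Pyy + R) @ Ay / (M - 1)
lam, U = np.linalg.eigh(Q)
D = U @ np.diag(np.sqrt(np.clip(lam, 0.0, None))) @ U.T

zbar_a = zbar - K @ (ybar - y_obs.reshape(-1, 1))  # Kalman mean update
Za = zbar_a + Az @ D                               # analysis ensemble
```

The analysis ensemble reproduces the Kalman covariance update exactly, since the deviations satisfy $A_z^a = A_z^f D$.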
Furthermore, using the equivalent Kalman gain matrix representation \[ K = P^a_{zy}R^{-1}, \] the Kalman update formula (\ref{sec5:Kalmanmean}) for the mean becomes \begin{align*} \bar z^a &= \bar z^f - P^a_{zy} R^{-1} (\bar y^f - y_{\rm obs})\\ &= \bar z^f + \frac{1}{M-1} A_z^f Q (A^f_y)^T R^{-1} (y_{\rm obs} - \bar y^f). \end{align*} This reformulation gives rise to \begin{equation} \label{sec5:letkf} s_{ij} = \frac{1}{M-1} \sum_{k=1}^M q_{ik} (y^f_k - \bar y^f)^T R^{-1} (y_{\rm obs} - \bar y^f) + d_{ij} , \end{equation} which forms the basis of the \emph{local ensemble transform Kalman filter} (LETKF) \citep{sr:ott04,sr:hunt07} to be discussed in more detail in Section \ref{sec54}. We mention that the EnKF with perturbed observations or an ESRF implementation leads to transformation matrices $S$ which satisfy (\ref{sec5:LETF1a}) but the entries $s_{ij}$ can take positive as well as negative values. This can be problematic in case the state variable $z$ should be non-negative. Then it is possible that a forecast ensemble $z_i^f \ge 0$, $i=1,\ldots,M$, is transformed into an analysis $z_i^a$, which contains negative entries. See \cite{sr:janjic13} for modifications to EnKF type algorithms in order to preserve positivity. One can discuss the various EnKF formulations from an optimal transportation perspective. Here the coupling is between two Gaussian distributions; the forecast PDF ${\rm N}(\bar z^f,P_{zz}^f)$ and analysis PDF ${\rm N}(\bar z^a,P_{zz}^a)$, respectively, with the analysis mean given by (\ref{sec5:Kalmanmean}) and the analysis covariance matrix by (\ref{sec5:Kalmancovariance}). We know that the optimal coupling must be of the form \[ z^a = \nabla_z \phi (z^f) \] and, in case of Gaussian PDFs, the convex potential $\phi:\mathbb{R}^{N_z} \to \mathbb{R}$ is furthermore quadratic, \emph{i.e.}, \[ \phi(z) = b^T z + \frac{1}{2} z^T A z \] with the vector $b$ and the matrix $A$ appropriately defined.
The choice \[ z^a = b + Az^f = \bar z^a + A(z^f-\bar z^f) \] leads to \[ b = \bar z^a - A\bar z^f \] for the vector $b \in \mathbb{R}^{N_z}$. The matrix $A \in \mathbb{R}^{N_z \times N_z}$ then needs to satisfy \[ P_{zz}^a = AP_{zz}^f A^T. \] The matrix $A$ which is optimal in the sense of Monge-Kantorovitch with cost function $c(z^f,z^a) = \|z^f-z^a\|^2$ is given by \[ A = (P_{zz}^a)^{1/2} \left[ (P_{zz}^a)^{1/2} P_{zz}^f (P_{zz}^a)^{1/2} \right]^{-1/2} (P_{zz}^a)^{1/2} . \] See \cite{sr:olkin82}. An efficient implementation of this optimal coupling in the context of ESRFs has been discussed in \cite{sr:cotterreich}. The essential idea is to replace the matrix square root of $P_{zz}^a$ by the analysis matrix of ensemble deviations $A_z^a = A_z^f D$ scaled by $1/\sqrt{M-1}$. Note that different cost functions $c(z^f,z^a)$ lead to different solutions of the associated Monge-Kantorovitch problem (\ref{sec4:eq:MK}). Of particular interest is the weighted inner product \[ c(z^f,z^a) = (z^f-z^a)^T B^{-1} (z^f - z^a) \] for an appropriate positive definite matrix $B \in \mathbb{R}^{N_z \times N_z}$ \citep{sr:cotterreich}. As for SMCMs, particle rejuvenation can be applied to the analysis from an EnKF or ESRF. However, the more popular method for increasing the robustness of EnKFs is to apply \emph{multiplicative ensemble inflation} \begin{equation} \label{sec5:inflation} z_i^f \to \bar z^f + \alpha (z_i^f - \bar z^f), \qquad \alpha\ge 1, \end{equation} to the forecast ensemble prior to the application of an EnKF or ESRF. The parameter $\alpha$ is called the inflation factor. Adaptive strategies for determining the factor $\alpha$ have, for example, been proposed by \cite{sr:anderson07,sr:miyoshi11}. The inflation factor $\alpha$ can formally be related to the rejuvenation parameter $h$ in (\ref{sec5:rejuvenation1}) through \[ \alpha = \sqrt{1+h^2}. \] This relation becomes exact as $M\to \infty$.
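The optimal matrix $A$ can be formed directly from matrix square roots. The sketch below, with arbitrary symmetric positive definite test covariances (illustrative assumptions), verifies the defining property $A P_{zz}^f A^T = P_{zz}^a$ numerically.

```python
import numpy as np
from scipy.linalg import sqrtm

# Illustrative check of the optimal coupling matrix between two Gaussians.
rng = np.random.default_rng(0)

def random_spd(n):
    X = rng.normal(size=(n, n))
    return X @ X.T + n * np.eye(n)

Pf = random_spd(4)          # stands in for P_zz^f
Pa = random_spd(4)          # stands in for P_zz^a

Pa_half = sqrtm(Pa)
A = Pa_half @ np.linalg.inv(sqrtm(Pa_half @ Pf @ Pa_half)) @ Pa_half
A = A.real                  # sqrtm may return a tiny spurious imaginary part

print(np.allclose(A @ Pf @ A.T, Pa))   # A transports N(0, Pf) to N(0, Pa)
print(np.allclose(A, A.T))             # A is symmetric (gradient of a convex potential)
```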
We mention that the \emph{rank histogram filter} of \cite{sr:anderson10}, which uses a nonlinear filter in observation space and linear regression from observation onto state space, also fits into the framework of the LETFs. See \cite{sr:reichcotter14} for more details. The \emph{nonlinear ensemble adjustment filter} of \cite{sr:lei11}, on the other hand, falls outside the class of LETFs. \subsection{Ensemble transform particle filter (ETPF)} \label{secETPF} We now return to the SIR filter described in Section \ref{sec51}. Recall that a SIR filter relies on importance resampling which we have interpreted as a coupling between the uniform measure on (\ref{sec5:SMCM1a}) and the measure defined by (\ref{sec5:SMCM1b}). Any coupling is characterized by a matrix $T$ such that its entries are non-negative and (\ref{sec5:SMCM1c}) hold. \begin{definition} \label{sec5:ETPF} The ETPF is based on choosing the $T$ which minimizes \begin{equation} \label{sec5:functional1} J(T) = \sum_{i,j=1}^M t_{ij} \|z_i^f - z_j^f\|^2 \end{equation} subject to (\ref{sec5:SMCM1c}) and $t_{ij}\ge 0$. Let us denote the minimizer by $T^\ast$. Then the transform matrix $S$ of an ETPF is defined by \[ S = MT^\ast, \] which satisfies (\ref{sec5:LETF1a}) and (\ref{sec5:LETF1b}). \end{definition} Let us give a geometric interpretation of the ETPF transformation step. Since $T^\ast$ from Definition \ref{sec5:ETPF} provides an optimal coupling, Rockafellar's theorem implies the existence of a convex potential $\phi_M :\mathbb{R}^{N_z} \to \mathbb{R}$ such that \[ z_i^f \in \partial \phi_M (z_j^f) \quad \mbox{for all}\quad i \in {\cal I}_j := \{i' \in \{1,\ldots,M\}: t^\ast_{i'j} > 0\}, \] $j=1,\ldots,M$. In fact, $\phi_M$ can be chosen to be piecewise affine and a constructive formula can be found in \cite{sr:Villani}. 
The ETPF transformation step \begin{equation} \label{sec5:etpftransform} z_j^a = M \sum_{i=1}^M z_i^f t^\ast_{ij} = \sum_{i=1}^M z_i^f s_{ij} \end{equation} corresponds to a particular selection from the convex set $\partial \phi_M (z_j^f)$, $j=1,\ldots,M$; namely the expectation value of the discrete random variable \[ {\rm Z}_j^a: \Omega \to \{z_1^f,z_2^f,\ldots,z_M^f\} \] with probabilities $\mathbb{P}({\rm Z}_j^a = z_i^f) = s_{ij}$, $i=1,\ldots,M$. Hence it holds that \[ \bar z^a := \frac{1}{M}\sum_{j=1}^M z_j^a = \sum_{i=1}^M w_i z_i^f . \] See \cite{sr:reich13} for more details, where it has also been shown that the potentials $\phi_M$ converge to the solution of the underlying continuous Monge-Kantorovitch problem as the ensemble size $M$ approaches infinity. It should be noted that standard algorithms for finding the minimizer of (\ref{sec5:functional1}) suffer from an ${\cal O}(M^3 \log M)$ computational complexity. This complexity has been reduced to ${\cal O}(M^2 \log M)$ by \cite{sr:Pele-iccv2009}. There are also fast iterative methods for finding approximate minimizers of (\ref{sec5:functional1}) using the \emph{Sinkhorn distance} \citep{sr:cuturi13a}. The particle rejuvenation step (\ref{sec5:rejuvenation1}) for SMCMs can be extended to the ETPF as follows: \begin{equation} \label{sec5:rejuvenation2} z_j^a = \sum_{i=1}^M z_i^f s_{ij} + \xi_j, \qquad j=1,\ldots,M. \end{equation} As before, the $\xi_j$'s are realizations of $M$ independent Gaussian random variables with mean zero and appropriate covariance matrices $P_j^a$. We use $P_j^a = h^2 P_{zz}^f$ with rejuvenation parameter $h>0$ for the numerical experiments conducted in this paper. Another possibility would be to locally estimate $P_j^a$ from the coupling matrix $T^\ast$, \emph{i.e.}, \[ P_j^a = \sum_{i=1}^M s_{ij} (z_i^f - \bar z_j^a)(z_i^f - \bar z_j^a)^T \] with mean $\bar z_j^a = \sum_{i=1}^M s_{ij} z_i^f$.
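For small $M$ the minimizer $T^\ast$ can be computed with a generic linear programming solver. The following sketch uses illustrative random data and SciPy's `linprog`; it verifies that $S = MT^\ast$ has unit column sums and reproduces the weighted analysis mean.

```python
import numpy as np
from scipy.optimize import linprog

# Hedged sketch of the ETPF transform: minimize sum_ij t_ij ||z_i - z_j||^2
# subject to column sums 1/M, row sums w_i, and t_ij >= 0.
rng = np.random.default_rng(3)
M, Nz = 6, 2
zf = rng.normal(size=(Nz, M))
w = rng.uniform(size=M); w /= w.sum()           # importance weights

# pairwise squared distances C[i, j] = ||z_i^f - z_j^f||^2
diff = zf[:, :, None] - zf[:, None, :]
C = np.sum(diff**2, axis=0)

# equality constraints on the flattened vector t (row-major: t[i*M + j])
A_eq = np.zeros((2 * M, M * M))
for j in range(M):                              # column sums: sum_i t_ij = 1/M
    A_eq[j, j::M] = 1.0
for i in range(M):                              # row sums: sum_j t_ij = w_i
    A_eq[M + i, i * M:(i + 1) * M] = 1.0
b_eq = np.concatenate([np.full(M, 1.0 / M), w])

res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
T = res.x.reshape(M, M)
S = M * T                                       # ETPF transform matrix

za = zf @ S
print(np.allclose(S.sum(axis=0), 1.0))          # columns of S sum to one
print(np.allclose(za.mean(axis=1), zf @ w))     # analysis mean = sum_i w_i z_i^f
```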
\subsection{Quasi-Monte Carlo (QMC) convergence} \label{sec55} The expected rate of convergence for standard Monte Carlo methods is $M^{-1/2}$, where $M$ denotes the ensemble size. QMC methods come with an upper bound of $\log(M)^d M^{-1}$, where $d$ stands for the dimension \citep{sr:caflisch88}; for the purpose of this paper, $d=N_z$. Unlike the Monte Carlo rate, the QMC bound thus depends on the dimension of the space, which suggests better performance for small $N_z$ and/or large $M$. In practice, however, QMC methods perform considerably better than this theoretical bound and outperform Monte Carlo methods even for small ensemble sizes and in very high dimensional models. The latter may be explained by the concept of \textit{effective dimension} introduced by \cite{sr:caflisch97}. The following simulation investigates the convergence rate of the estimators for the first and second moment of the posterior distribution after applying a single analysis step of a SIR particle filter and an ETPF. The prior is chosen to be a uniform distribution on the unit square and the sum of both components is observed with additive noise drawn from a centered Gaussian distribution with variance equal to two. Reference values for the posterior moments are generated using Monte Carlo importance sampling with sample size $M=2^{26}$. QMC samples of different sizes are drawn from the prior distribution and a single residual resampling step is compared to a single transformation step using an optimal coupling $T^\ast$. Fig.~\ref{fig_QMC} shows the root mean square errors (RMSEs) of the different posterior estimates with respect to their reference values. We find that the transform method preserves the optimal $M^{-1}$ convergence rate of the prior QMC samples while resampling reduces the convergence rate to $M^{-1/2}$.
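The basic setup of this experiment can be sketched on a much smaller scale. In the sketch below the observed value $y_{\rm obs}=1.2$ and the sample size $2^{10}$ are assumptions for illustration only, and no convergence rates are estimated; posterior means are computed by importance weighting for a pseudo-random and a scrambled Sobol sample.

```python
import numpy as np
from scipy.stats import qmc

# Illustrative miniature of the QMC experiment: uniform prior on the unit
# square, observation of the component sum with Gaussian noise of variance 2.
y_obs, R = 1.2, 2.0          # assumed observed value and error variance

def posterior_mean(samples):
    w = np.exp(-0.5 * (samples.sum(axis=1) - y_obs) ** 2 / R)
    w /= w.sum()
    return w @ samples        # importance-weighted posterior mean estimate

rng = np.random.default_rng(7)
mc = posterior_mean(rng.uniform(size=(2 ** 10, 2)))        # Monte Carlo
sobol = qmc.Sobol(d=2, scramble=True, seed=7)
qmc_est = posterior_mean(sobol.random_base2(m=10))          # quasi-Monte Carlo
print(mc, qmc_est)            # both estimates lie inside the unit square
```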
\begin{figure}[h] \centering \subfigure{\includegraphics[clip=TRUE,scale=.3,trim =.5cm 0cm .5cm 0cm]{qmc_mean}} \hfill \subfigure{\includegraphics[clip=TRUE,scale=.3,trim =.5cm 0cm .5cm 0cm]{qmc_var1}} \hfill \subfigure{\includegraphics[clip=TRUE,scale=.3,trim =0.5cm 0cm .5cm 0cm]{qmc_cor}} \hfill \subfigure{\includegraphics[clip=TRUE,scale=.3,trim =0.5cm 0cm .5cm 0cm]{qmc_var2}} \caption{RMSEs of estimates for the posterior mean, variances (var), and correlation (cor) using importance resampling (SIR) and optimal transformations (ETPF) plotted on a log-log scale as a function of ensemble sizes $M$.} \label{fig_QMC} \end{figure} We mention that replacing the deterministic transformation step in (\ref{sec5:etpftransform}) by drawing ensemble member $j$ from the prior ensemble according to the weights given by the $j$-th column of $S$ leads to a stochastic version of the ETPF. This variant, despite being stochastic like the importance resampling step, results again in a QMC convergence rate. \section{Spatially extended dynamical systems and localization} \label{sec54} Let us start this section with a simple thought experiment on the curse of dimensionality. Consider a state space of dimension $N_z = 100$ and a prior Gaussian distribution with mean zero and covariance matrix $P^f = I$. The reference solution is $z_{\rm ref} = 0$ and we observe every component of the state vector subject to independent measurement errors with mean zero and variance $R = 0.16$. If one applies a single importance resampling step to this problem with ensemble size $M=10$, one finds that the effective sample size collapses to $M_{\rm eff} \approx 1$ and the resulting analysis ensemble is unable to recover the reference solution. However, one also quickly realizes that the stated problem can be decomposed into $N_z$ independent data assimilation problems in each component of the state vector alone. 
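The weight collapse in this thought experiment can be reproduced in a few lines. The sketch below (illustrative seed) compares the effective sample size $M_{\rm eff} = 1/\sum_i w_i^2$ of the global weights with that of component-wise localized weights.

```python
import numpy as np

# Thought experiment: Nz = 100, prior N(0, I), every component observed
# with error variance R = 0.16, ensemble size M = 10.
rng = np.random.default_rng(11)
Nz, M, R = 100, 10, 0.16

zf = rng.normal(size=(M, Nz))                       # prior ensemble
y_obs = rng.normal(scale=np.sqrt(R), size=Nz)       # observation of z_ref = 0

# global weights: one likelihood over the full state vector
logw = -0.5 * np.sum((zf - y_obs) ** 2, axis=1) / R
w = np.exp(logw - logw.max()); w /= w.sum()
M_eff_global = 1.0 / np.sum(w ** 2)

# localized weights: one independent analysis per component
logw_loc = -0.5 * (zf - y_obs) ** 2 / R             # shape (M, Nz)
w_loc = np.exp(logw_loc - logw_loc.max(axis=0))
w_loc /= w_loc.sum(axis=0)
M_eff_local = 1.0 / np.sum(w_loc ** 2, axis=0)      # one ESS per component

print(M_eff_global)          # collapses to roughly 1
print(M_eff_local.mean())    # stays well above 1
```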
If importance resampling is now performed in each component of the state vector independently, then the effective sample size for each of the $N_z$ analysis problems remains close to $M =10$ and the reference solution can be recovered from the given set of observations. This is the idea of \emph{localization}. Note that localization has increased the total sample size to $M \times N_z = 1000$ for this problem! We now formally extend LETFs to spatially extended systems, which may be viewed as infinite-dimensional dynamical systems \citep{sr:robinson}, and formulate an appropriate localization strategy. Consider the linear advection equation \[ u_t + u_x = 0 \] as a simple example for such a scenario. If $u_0(x)$ denotes the solution at time $t=0$, then \[ u(x,t) = u_0(x-t) \] is the solution of the linear advection equation for all $t\ge 0$. Given a time-increment $\Delta t>0$, the associated dynamical system maps a function $u(x)$ into $u(x-\Delta t)$. A finite-dimensional dynamical system is obtained by discretizing in space with mesh-size $\Delta x>0$. For example, the Box scheme \citep{sr:Morton} leads to \[ \frac{u_j^{k+1}+u_{j+1}^{k+1} - u_j^k-u_{j+1}^k}{2\Delta t} + \frac{u_{j+1}^{k+1} + u_{j+1}^k - u_j^{k+1} - u_j^k}{2\Delta x} = 0 \] and, for $J$ spatial grid points, the state vector at $t_k = k\Delta t$ becomes \[ z^k = (u_1^k,u_2^k,\ldots,u_J^k)^T \in \mathbb{R}^J. \] We may take the formal limit $J\to \infty$ and $\Delta x\to 0$ in order to return to functions $z^k(x)$. The dynamical system (\ref{sec2:DS}) is then defined as the map that propagates such functions (or their finite-difference approximations) from one observation instance to the next in accordance with the specified numerical method. Here we assume that observations are taken in intervals of $\Delta t_{\rm obs} = N_{\rm out} \Delta t$ with $N_{\rm out} \ge 1$ a fixed integer. The index $n\ge 1$ in (\ref{sec2:DS}) is the counter for those observation instances.
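One step of the Box scheme amounts to a periodic bidiagonal solve. The sketch below (a dense solve for simplicity, with illustrative grid sizes) uses the special case $\Delta t = \Delta x$, for which the scheme transports the profile by exactly one grid cell, as a consistency check.

```python
import numpy as np

# One Box scheme step for u_t + u_x = 0 on a periodic grid of J points.
def box_scheme_step(u, dt, dx):
    J = u.size
    a, b = 1.0 / dt - 1.0 / dx, 1.0 / dt + 1.0 / dx
    # equation j couples u_j^{k+1} (coefficient a) and u_{j+1}^{k+1} (coefficient b)
    A = a * np.eye(J) + b * np.roll(np.eye(J), 1, axis=1)
    rhs = b * u + a * np.roll(u, -1)
    return np.linalg.solve(A, rhs)

u0 = np.sin(2 * np.pi * np.arange(16) / 16)
u1 = box_scheme_step(u0, dt=1.0 / 16, dx=1.0 / 16)
print(np.allclose(u1, np.roll(u0, 1)))   # exact transport by one cell
```

For general step-size ratios the scheme remains conservative: summing the difference equation over $j$ shows that $\sum_j u_j^k$ is preserved exactly.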
In other words, forecast or analysis ensemble members, $z^{f/a}(x)$, now become functions of $x\in \mathbb{R}$, belong to some appropriate function space ${\cal H}$, and the dynamical system (\ref{sec2:DS}) is formally replaced by a map or evolution equation on ${\cal H}$ \citep{sr:robinson}. For simplicity of exposition we assume periodic boundary conditions, \emph{i.e.},~$z(x) = z(x+L)$ for some appropriate $L>0$. The curse of dimensionality \citep{sr:bengtsson08} implies that, generally speaking, none of the LETFs discussed so far is suitable for data assimilation of spatially extended systems. In order to overcome this situation, we now discuss the concept of \emph{localization} as first introduced in \cite{sr:houtekamer01,sr:houtekamer05} for EnKFs. While we will focus on a particular localization, called \emph{R-localization}, suggested by \cite{sr:hunt07}, our methodology can be extended to \emph{B-localization} as proposed by \cite{sr:hamill01}. In the context of the LETFs, R-localization amounts to modifying (\ref{sec5:transform}) to \[ z^{a}_j(x) = \sum_{i=1}^M z_i^{f}(x) s_{ij}(x), \] where the associated transform matrices $S(x) \in \mathbb{R}^{M \times M}$ now depend on the spatial location $x \in [0,L]$. It is crucial that the transformation matrices $S(x)$ are sufficiently smooth in $x$ in order to produce analysis ensembles with sufficient regularity for the evolution problem under consideration and, in particular, $z^a_j \in {\cal H}$. In the case of an SMCM with importance resampling, the resulting $S(x)$ would, in general, fail to be continuous in $x$. Hence we only discuss localization for the ESRF and the ETPF. Let us, for simplicity, assume that the forward operator $h:{\cal H} \to \mathbb{R}^{N_y}$ for the observations $y_{\rm obs}$ is defined by \[ h_k(z) = z({\rm x}_k), \qquad k=1,\ldots,N_y. \] Here the ${\rm x}_k\in [0,L)$ denote the spatial locations at which the observations are taken.
The measurement errors are Gaussian with mean zero and covariance matrix $R \in \mathbb{R}^{N_y\times N_y}$. We assume for simplicity that $R$ is diagonal. In the sequel we assume that $z(x)$ has been extended to $x\in \mathbb{R}$ by periodic extension from $x\in [0,L)$ and introduce the time-averaged and normalized \emph{spatial correlation function} \begin{equation} \label{sec5:spatialcorr} C(x,s) := \frac{\sum_{n=0}^N z^n(x+s)z^n(x) }{\sum_{n=0}^N (z^n(x))^2} \end{equation} for $x \in [0,L)$ and $s \in [-L/2,L/2)$. Here we have assumed that the underlying solution process is stationary ergodic. In the case of spatial homogeneity the spatial correlation function becomes furthermore independent of $x$ for $N$ sufficiently large. We also introduce a localization kernel ${\cal K}(x,x';r_{\rm loc})$ in order to define $R$-localization for an ESRF and the ETPF. The localization kernel can be as simple as \begin{equation} \label{SEC5:locrho1} {\cal K}(x,x';r_{\rm loc}) = \left\{ \begin{array}{ll} 1-\frac{1}{2} s & \mbox{for} \,\, s \le 2,\\ 0 & \mbox{else}, \end{array} \right. \end{equation} with \[ s := \frac{\min \{|x - x' -L|,|x-x'|,|x-x'+L|\}}{r_{\rm loc}} \ge 0, \] or a higher-order polynomial such as \begin{equation} \label{sec5:locrho2} {\cal K}(x,x';r_{\rm loc}) = \left\{ \begin{array}{ll} 1 - \frac{5}{3} s^2 + \frac{5}{8} s^3 + \frac{1}{2} s^4 - \frac{1}{4} s^5 & \mbox{for} \,\, s \le 1,\\ -\frac{2}{3} s^{-1} + 4 - 5s + \frac{5}{3} s^2 + \frac{5}{8} s^3 - \frac{1}{2} s^4 + \frac{1}{12} s^5 & \mbox{for}\,\, 1 \le s \le 2,\\ 0 & \mbox{else} . \end{array} \right. \end{equation} See \cite{sr:gaspari99}. In order to compute the transformation matrix $S(x)$ for given $x$, we modify the $k$th diagonal entry in the measurement error covariance matrix $R \in \mathbb{R}^{N_y\times N_y}$ and define \begin{equation} \label{sec5:localization1} \frac{1}{\tilde r_{kk} (x)} := \frac{{\cal K}(x,{\rm x}_k;r_{{\rm loc},R})}{r_{kk}} \end{equation} for $k=1,\ldots,N_y$.
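Both kernels, together with the periodic distance, can be implemented as follows; the domain length and radius in the example are arbitrary. The higher-order kernel is continuous at $s=1$ and vanishes at $s=2$, which the sketch checks.

```python
import numpy as np

# Localization kernels with periodic distance; L and r_loc are illustrative.
def periodic_s(x, xp, L, r_loc):
    return min(abs(x - xp - L), abs(x - xp), abs(x - xp + L)) / r_loc

def kernel_linear(s):                # the simple triangular kernel
    return 1.0 - 0.5 * s if s <= 2.0 else 0.0

def kernel_gc(s):                    # the higher-order (Gaspari-Cohn) polynomial
    if s <= 1.0:
        return 1 - (5/3)*s**2 + (5/8)*s**3 + (1/2)*s**4 - (1/4)*s**5
    if s <= 2.0:
        return (-(2/3)/s + 4 - 5*s + (5/3)*s**2 + (5/8)*s**3
                - (1/2)*s**4 + (1/12)*s**5)
    return 0.0

print(kernel_gc(0.0))                          # 1.0
print(abs(kernel_gc(2.0)) < 1e-12)             # kernel vanishes at s = 2
print(periodic_s(0.1, 9.9, 10.0, 1.0))         # roughly 0.2: distance wraps around
```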
Given a localization radius $r_{{\rm loc},R}>0$, this results in a matrix $\tilde R^{-1}(x)$ which replaces $R^{-1}$ in an ESRF and the ETPF. More specifically, the LETKF is based on the following modifications to the ESRF. First one replaces (\ref{sec5:eq:Stransform}) by \[ Q(x) = \left\{ I + \frac{1}{M-1} (A_y^f)^T \tilde R^{-1}(x) A_y^f \right\}^{-1} \] and defines $D(x) = Q(x)^{1/2}$. Finally, the localized transformation matrix $S(x)$ is given by \begin{equation} \label{sec5:LETKF} s_{ij}(x) = \frac{1}{M-1} \sum_{k=1}^M q_{ik}(x) (y^f_k - \bar y^f)^T \tilde R^{-1}(x) (y_{\rm obs} - \bar y^f) + d_{ij}(x), \end{equation} which replaces (\ref{sec5:letkf}). We mention that \cite{sr:anderson12b} discusses practical methods for choosing the localization radius $r_{{\rm loc},R}$ for EnKFs. In order to extend the concept of $R$-localization to the ETPF, we also define the localized cost function \begin{equation} \label{sec5:localization2} c_x(z^f,z^a) = \int_0^L {\cal K}(x,x';r_{{\rm loc},c}) \|z^f(x')-z^a(x')\|^2 {\rm d}x' \end{equation} with a localization radius $r_{{\rm loc},c}\ge 0$, which can be chosen independently of the localization radius for the measurement error covariance matrix $R$. The ETPF with R-localization can now be implemented as follows. At each spatial location $x\in [0,L)$ one determines the desired transformation matrix $S(x)$ by first computing the weights \begin{equation} \label{sec5:locweights} w_i \propto e^{-\frac{1}{2}(h(z_i^f) - y_{\rm obs})^T \tilde R^{-1}(x) (h(z_i^f)-y_{\rm obs}) } \end{equation} and then minimizing the cost function \begin{equation} \label{sec5:locdist} J(T) = \sum_{i,j=1}^M c_x(z_i^f,z_j^f) t_{ij} \end{equation} over all admissible couplings. One finally sets $S(x) = M T^\ast$. As discussed earlier, any infinite-dimensional evolution equation such as the linear advection equation will be truncated in practice to a computational grid $x_j = j\Delta x$.
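The localized weights (\ref{sec5:locweights}) can be sketched as follows; the observation network, ensemble values and the triangular kernel below are illustrative assumptions. Observations farther than $2 r_{\rm loc}$ from $x$ receive zero inverse variance and hence drop out of the local analysis.

```python
import numpy as np

# Illustrative R-localization: tilde R^{-1}(x) and localized ETPF weights.
rng = np.random.default_rng(5)
L, Ny, M, r_loc = 10.0, 8, 4, 1.5
x_k = np.linspace(0.0, L, Ny, endpoint=False)   # observation locations
r_kk = np.full(Ny, 0.5)                         # diagonal entries of R

def kernel(x, xp):                              # triangular localization kernel
    s = min(abs(x - xp - L), abs(x - xp), abs(x - xp + L)) / r_loc
    return 1.0 - 0.5 * s if s <= 2.0 else 0.0

hz = rng.normal(size=(M, Ny))                   # h(z_i^f) evaluated at the sites
y_obs = rng.normal(size=Ny)

def localized_weights(x, y):
    inv_r = np.array([kernel(x, xk) for xk in x_k]) / r_kk   # tilde R^{-1}(x)
    logw = -0.5 * np.sum((hz - y) ** 2 * inv_r, axis=1)
    w = np.exp(logw - logw.max())
    return w / w.sum()

w0 = localized_weights(0.0, y_obs)
print(w0.sum())   # normalized local weights
```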
The transform matrices $S(x)$ then need to be computed at each grid point only, and the integral in (\ref{sec5:localization2}) is replaced by a simple Riemann sum. We mention that alternative filtering strategies for spatio-temporal processes have been proposed by \cite{sr:majda} in the context of turbulent systems. One of their strategies is to perform localization in spectral space in case of regularly spaced observations. Another spatial localization strategy for particle filters can be found in \cite{sr:vanhandel14}. \section{Applications} \label{sec:6} In this section we present numerical results comparing the different LETFs for the chaotic Lorenz-63 \citep{sr:lorenz63} and Lorenz-96 \citep{sr:lorenz96} models. While the highly nonlinear Lorenz-63 model can be used to investigate the behavior of different DA algorithms for strongly non-Gaussian distributions, the forty-dimensional Lorenz-96 model is a prototype ``spatially extended'' system which demonstrates the need for localization in order to achieve skillful filter results for moderate ensemble sizes. We begin with the Lorenz-63 model. We mention that theoretical results on the long time behavior of filtering algorithms for chaotic systems, such as the Lorenz-63 model, have been obtained, for example, by \cite{sr:hunt13} and \cite{sr:law13}. \subsection{Lorenz-63 model} The Lorenz-63 model is given by the differential equation (\ref{sec2:ODE}) with state variable $z = ({\rm x},{\rm y},{\rm z})^T \in \mathbb{R}^3$, right hand side \begin{align*} f(z) &= \left( \begin{array}{l} \sigma ({\rm y}-{\rm x})\\ {\rm x}(\rho-{\rm z}) - {\rm y} \\ {\rm xy} - \beta {\rm z} \end{array} \right), \end{align*} and parameter values $\sigma = 10$, $\rho = 28$, and $\beta = 8/3$.
The resulting ODE (\ref{sec2:ODE}) is discretized in time by the implicit midpoint method \citep{sr:ascher08}, \emph{i.e.}, \begin{equation} \label{sec5:IM} z^{n+1} = z^n + \Delta t f(z^{n+1/2}) ,\qquad z^{n+1/2} = \frac{1}{2} (z^{n+1} + z^n) \end{equation} with step-size $\Delta t = 0.01$. Let us abbreviate the resulting map by $\Psi_{\rm IM}$. Then the dynamical system (\ref{sec2:DS}) is defined as \[ \Psi = \Psi_{\rm IM}^{[12]}. \] In other words, observations are assimilated every 12 time-steps. We only observe the ${\rm x}$ variable with a Gaussian measurement error of variance $R = 8$. We used different ensemble sizes from $10$ to $80$ as well as different inflation factors ranging from $1.0$ to $1.12$ by increments of $0.02$ for the EnKF and rejuvenation parameters ranging from $0$ to $0.4$ by increments of $0.04$ for the ETPF. Note that a rejuvenation parameter of $h = 0.4$ corresponds to an inflation factor $\alpha = \sqrt{1+h^2} \approx 1.0770$. The following variant of the ETPF with localized cost function has also been implemented. We first compute the importance weights $w_i$ for a given observation. Then each component of the state vector is updated using only the distance in that component in the cost function $J(T)$. For example, the ${\rm x}_i^f$ components of the forecast ensemble members $z_i^f = ({\rm x}_i^f,{\rm y}_i^f,{\rm z}_i^f)^T$, $i=1,\ldots,M$, are updated according to \[ {\rm x}_j^a = M \sum_{i=1}^M {\rm x}_i^f t_{ij}^\ast \] with the coefficients $t_{ij}^\ast\ge 0$ minimizing the cost function \[ J(T) = \sum_{i,j=1}^M t_{ij} |{\rm x}_i^f - {\rm x}_j^f|^2 \] subject to (\ref{sec5:SMCM1c}). We use ETPF\_R0 as the shorthand form for this method. This variant of the ETPF is of special interest from a computational point of view since the linear transport problem in $\mathbb{R}^3$ reduces to three simple one-dimensional problems.
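A minimal implementation of the implicit midpoint step for the Lorenz-63 right hand side is sketched below; the fixed-point iteration used as the nonlinear solver is an assumption of this sketch (it converges comfortably for $\Delta t = 0.01$).

```python
import numpy as np

# Implicit midpoint step for Lorenz-63, solved by fixed-point iteration.
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0

def f(z):
    x, y, zz = z
    return np.array([sigma * (y - x), x * (rho - zz) - y, x * y - beta * zz])

def implicit_midpoint(z, dt=0.01, iters=50):
    znew = z + dt * f(z)                    # explicit Euler predictor
    for _ in range(iters):
        znew = z + dt * f(0.5 * (z + znew))
    return znew

z0 = np.array([1.0, 1.0, 1.0])
z1 = implicit_midpoint(z0)
residual = z1 - z0 - 0.01 * f(0.5 * (z0 + z1))
print(np.linalg.norm(residual) < 1e-12)     # midpoint equation satisfied
```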
\begin{figure} \centering \subfigure{\includegraphics[clip=TRUE,scale=.3,trim =.5cm 0cm .5cm 0cm]{EnKF_L63}} \hfill \subfigure{\includegraphics[clip=TRUE,scale=.3,trim =.5cm 0cm .5cm 0cm]{ETPF_L63}} \hfill \subfigure{\includegraphics[clip=TRUE,scale=.3,trim =0.5cm 0cm .5cm 0cm]{ETPF1d_L63}} \hfill \subfigure{\includegraphics[clip=TRUE,scale=.3,trim =0.5cm 0cm .5cm 0cm]{Bestof_L63}} \caption{a)-c): Heatmaps showing the RMSEs for different parameters for the EnKF, ETPF and ETPF\_R0 for the Lorenz-63 model. d): RMSEs for different ensemble sizes using 'optimal' inflation factors and rejuvenation.} \label{L63} \end{figure} The model is run over $N=20,000$ assimilation steps after discarding 200 steps to lose the influence of the initial conditions. The resulting \textit{root-mean-square errors} averaged over time (RMSEs), \[ \text{RMSE} = \frac1N\sum_{n = 1}^{N} \|\bar{z}^{n,a}-z_{\text{ref}}^n\| , \] are reported in Fig. 2 a)-c). We dropped the results for the ETPF and ETPF\_R0 with ensemble size $M=10$ as they indicated strong divergence. We see that the EnKF produces stable results while the other filters are more sensitive to different choices for the rejuvenation parameter. However, with increasing ensemble size and 'optimal' choice of parameters the ETPF and the ETPF\_R0 outperform the EnKF, which reflects the bias of the EnKF. Fig. 2 d) shows the RMSEs for each ensemble size using the parameters that yield the lowest RMSE. Here we see again that the stability of the EnKF leads to good results even for very small ensemble sizes. The downside is also evident: while the ETPFs fail to track the reference solution as well as the EnKF for very small ensemble sizes, a small increase in ensemble size leads to much lower RMSEs. The asymptotically consistent ETPF outperforms the ETPF\_R0 for large ensemble sizes but is less stable otherwise. We also included RMSEs for the SIR filter with rejuvenation parameters chosen from the same range of values as for the ETPFs.
Although not shown here, this range seems to cover the 'optimal' choice for the rejuvenation parameter. The comparison with the EnKF is as expected: for small ensemble sizes the SIR performs worse, but it beats the EnKF for larger ensemble sizes due to its asymptotic consistency. However, the equally consistent ETPF yields lower RMSEs throughout for the ensemble sizes considered here. Interestingly, the SIR only catches up with the inconsistent but computationally cheap ETPF\_R0 for the largest ensemble size in this experiment. We mention that the RMSE drops to around 1.4 for the SIR filter with an ensemble size of 1000. At this point we note that the computational burden increases considerably for the ETPF for larger ensemble sizes due to the need to solve increasingly large linear transport problems. See the discussion in Section \ref{secETPF}. \subsection{Lorenz-96 model} Given a periodic domain $x \in [0,L]$ and $N_z$ equally spaced grid-points $x_j = j\Delta x$, $\Delta x = L/N_z$, we denote by $u_j$ the approximation to $z(x)$ at the grid points $x_j$, $j=1,\ldots,N_z$. The following system of differential equations \begin{equation} \label{sec5:L96} \frac{{\rm d}u_j}{{\rm d} t} = -\frac{u_{j-1}u_{j+1}-u_{j-2}u_{j-1}}{3\Delta x} - u_j + F, \qquad j = 1,\ldots,40, \end{equation} is due to \cite{sr:lorenz96} and is called the Lorenz-96 model. We set $F=8$ and apply periodic boundary conditions $u_j = u_{j+40}$. The state variable is defined by $z = (u_1,\ldots,u_{40})^T \in \mathbb{R}^{40}$. The Lorenz-96 model (\ref{sec5:L96}) can be seen as a coarse spatial approximation to the PDE \[ \frac{\partial u}{\partial t} = -\frac{1}{2} \frac{\partial (u)^2}{ \partial x} - u + F, \qquad x \in [0,40/3], \] with mesh-size $\Delta x = 1/3$ and $N_z=40$ grid points. The implicit midpoint method (\ref{sec5:IM}) is used with a step-size of $\Delta t = 0.005$ to discretize the differential equations (\ref{sec5:L96}) in time.
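In the advective form above (with $3\Delta x = 1$) the quadratic term leaves $\sum_j u_j^2$ unchanged, which provides a simple correctness check for an implementation; the sketch below is illustrative.

```python
import numpy as np

# Lorenz-96 right hand side in the advective form used above (3*dx = 1).
F, dx = 8.0, 1.0 / 3.0

def quad(u):
    # -(u_{j-1} u_{j+1} - u_{j-2} u_{j-1}) / (3 dx), periodic indices via roll
    return -(np.roll(u, 1) * np.roll(u, -1)
             - np.roll(u, 2) * np.roll(u, 1)) / (3 * dx)

def lorenz96(u):
    return quad(u) - u + F

u = np.random.default_rng(9).normal(size=40)
print(abs(np.dot(u, quad(u))) < 1e-9)    # advection term conserves sum(u^2)
```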
Observations are assimilated every 22 time-steps and we observe every other grid point with a Gaussian measurement error of variance $R=8$. The large assimilation interval and variance of the measurement error are chosen because of a desired non-Gaussian ensemble distribution. We used ensemble sizes from 10 to 80, inflation factors from $1.0$ to $1.12$ with increments of $0.02$ for the EnKF and rejuvenation parameters between $0$ and $0.4$ with increments of $0.05$ for the ETPFs. As mentioned before, localization is required and we take (\ref{sec5:locrho2}) as our localization kernel. For each value of $M$ we fixed a localization radius $r_{{\rm loc},R}$ in (\ref{sec5:localization1}). The particular choices can be read off of the following table: \begin{table} \centering \begin{tabular}{c *8{ |c }} \hline M& 10&20&30&40&50&60&70&80 \\ \hline \hline $r_{\text{loc},R}^{EnKF}$ &2&4&6&6&7&7&8&8 \\ \hline $r_{\text{loc},R}^{ETPF}$&1&2&3&4&5&6&6&6 \\ \hline \end{tabular} \end{table} These values have been found by trial and error and we do not claim that these values are by any means 'optimal'. As for localization of the cost function (\ref{sec5:locdist}) for the ETPF we used the same kernel as for the measurement error and implemented different versions of the localized ETPF which differ in the choice of the localization radius: ETPF\_R1 corresponds to the choice of $r_{\text{loc},c}= 1$ and ETPF\_R2 is used for the ETPF with $r_{\text{loc},c}=2$. As before we denote the computationally cheap version with cost function $c_{x_j}(z^f,z^a) = |u_j^f - u_j^a|^2$ at grid point $x_j$ by ETPF\_R0. \begin{figure} \centering \includegraphics[scale = .3]{corr} \caption{Time averaged spatial correlation between solution components depending on their distance.} \end{figure} The localization kernel as well as the localization radii $r_{\text{loc},c}$ are not chosen by any optimality criterion but rather by convenience and simplicity. 
A better kernel or localization radii may be derived from looking at the time averaged spatial correlation coefficients (\ref{sec5:spatialcorr}) as shown in Fig. 3. Our kernel gives higher weights to components closer to the one to be updated, even though the correlation with the immediate neighbor is relatively low. \begin{figure} \centering \subfigure{\includegraphics[clip=TRUE,scale=.3,trim =.5cm 0cm .5cm 0cm]{EnKF_L96}} \hfill \subfigure{\includegraphics[clip=TRUE,scale=.3,trim =.5cm 0cm .5cm 0cm]{ETPF1d_L96}} \hfill \subfigure{\includegraphics[clip=TRUE,scale=.3,trim =0.5cm 0cm .5cm 0cm]{ETPF_L96}} \hfill \subfigure{\includegraphics[clip=TRUE,scale=.3,trim =0.5cm 0cm .5cm 0cm]{Bestof_L96_2}} \caption{a)-c): Heatmaps showing the RMSEs for different parameters for the EnKF, ETPF\_R0 and ETPF\_R1 for the Lorenz-96 model. d): RMSEs for different ensemble sizes using 'optimal' inflation factors and rejuvenation.} \end{figure} The model is run over $N=10,000$ assimilation steps after discarding 500 steps to lose the influence of the initial conditions. The resulting time averaged RMSEs are displayed in Fig. 4. We dropped the results for the smallest rejuvenation parameters as the filters showed strong divergence. Similar to the results for the Lorenz-63 model, the EnKF shows the most stable overall performance for various parameters but fails to keep up with the ETPFs for higher ensemble sizes, though the difference between the filters is much smaller than for the Lorenz-63 system. This is no surprise since the Lorenz-96 system does not exhibit the highly non-linear dynamics of the Lorenz-63 system, which cause the involved distributions to be strongly non-Gaussian. The important point here is that the ETPF as a particle filter is able to compete with the EnKF even for small ensemble sizes. Traditionally, high dimensional systems have required very large ensemble sizes for particle filters to perform reasonably well.
Hundreds of particles are necessary for the SIR to be even close to the true state. \section{Historical comments} The notion of data assimilation was coined in the field of meteorology and, more widely, in the geosciences to collectively denote techniques for combining computational models and physical observations in order to estimate the current state of the atmosphere or any other geophysical process. Perhaps the first occurrence of the concept of data assimilation can be found in the work of \cite{sr:Richardson}, where observational data needed to be interpolated onto a grid in order to initialize the computational forecast process. With the rapid increase in computational resolution starting in the 1960s, it quickly became necessary to replace simple data interpolation by an optimal combination of first guess estimates and observations. This gave rise to techniques such as the \emph{successive correction method}, \emph{nudging}, \emph{optimal interpolation} and \emph{variational least square techniques} (3D-Var and 4D-Var). See \cite{sr:daley,sr:kalnay} for more details. \cite{sr:leith74} proposed \emph{ensemble} (or Monte Carlo) \emph{forecasting} as an alternative to conventional single forecasts. However, ensemble forecasting did not become operational before 1993 due to limited computer resources \citep{sr:kalnay}. The availability of ensemble forecasts subsequently led to the invention of the EnKF by \cite{sr:evensen94}, with a later correction by \cite{sr:burgers98}, and to many subsequent developments, which have been summarized in \cite{sr:evensen}. We mention that the analysis step of an EnKF with perturbed observations is closely related to a method now called the \emph{randomized likelihood method} \citep{sr:kitanidis95,sr:oliver96,sr:oliver96b}.
In a completely independent line of research, the problem of optimal estimation of stochastic processes from data has led to the theory of \emph{filtering} and \emph{smoothing}, which started with the work of \cite{sr:wiener}. The state space approach to filtering of linear systems gave rise to the celebrated \emph{Kalman filter} and, more generally, to the \emph{stochastic PDE formulations} of Zakai and Kushner-Stratonovitch in the case of continuous-time filtering. See \cite{sr:jazwinski} for the theoretical developments up to 1970. Monte Carlo techniques were first introduced to the filtering problem by \cite{sr:handschin69}, but it was not until the work of \cite{sr:gordon93} that the SMCM became widely used \citep{sr:Doucet}. The \emph{McKean interacting particle approach} to SMCMs has been pioneered by \cite{sr:DelMoral}. The theory of particle filters for time-continuous filtering problems is summarized in \cite{sr:crisan}. Standard SMCMs suffer from the \emph{curse of dimensionality} in that the necessary number of ensemble members $M$ increases exponentially with the dimension $N_z$ of state space \citep{sr:bengtsson08}. This limitation has prevented SMCMs from being used in meteorology and the geosciences. On the other hand, it is known that EnKFs lead to inconsistent estimates, which is problematic when multimodal forecast distributions are to be expected. Current research work is therefore focused on a theoretical understanding of EnKFs and related sequential assimilation techniques (see, for example, \cite{sr:hunt13,sr:law13}), on extensions of particle filters/SMCMs to PDE models (see, for example, \cite{sr:chorin12b,sr:leeuwen12,sr:beskov13,sr:snyder13}), on Bayesian inference on function spaces (see, for example, \cite{sr:stuart10a,sr:CDRS09,sr:dashti13}), and on hybrid variational methods (see, for example, \cite{sr:bonavita12,sr:clayton13}). A historical account of optimal transportation can be found in \cite{sr:Villani2}.
The work of \cite{sr:mccann95} provides the theoretical link between the classic linear assignment problem and the Monge-Kantorovitch problem of coupling PDFs. The ETPF is a computational procedure for approximating such couplings using importance sampling and linear transport. \section{Summary and Outlook} We have discussed various ensemble/particle-based algorithms for sequential data assimilation in the context of LETFs. Our starting point was the McKean interpretation of Feynman-Kac formulae. The McKean approach requires a coupling of measures, which can be discussed in the context of optimal transportation. This approach leads to the ETPF when applied in the context of SMCMs. We have furthermore discussed extensions of LETFs to spatially extended systems in the form of $R$-localization. The presented work can be continued along several lines. First, one may replace the empirical forecast measure \begin{equation} \label{sec8:emp} \pi_{\rm emp}^f(z) := \frac{1}{M} \sum_{i=1}^M \delta(z-z_i^f), \end{equation} which forms the basis of SMCMs and the ETPF, by a \emph{Gaussian mixture} \begin{equation} \label{sec8:GM} \pi_{\rm GM}^f(z) := \frac{1}{M} \sum_{i=1}^M {\rm n}(z;z_i^f,B), \end{equation} where $B\in \mathbb{R}^{N_z\times N_z}$ is a given covariance matrix and \[ {\rm n}(z;m,B) := \frac{1}{(2\pi)^{N_z/2} |B|^{1/2}} e^{ - \frac{1}{2} (z-m)^T B^{-1} (z-m)} . \] Note that the empirical measure (\ref{sec8:emp}) is recovered in the limit $B\to 0$. While the weighted empirical measure \[ \pi_{\rm emp}^a(z) := \sum_{i=1}^M w_i \delta(z-z_i^f) \] with weights given by (\ref{sec5:SMCM1b}) provides the analysis in the case of an empirical forecast measure (\ref{sec8:emp}) and an observation $y_{\rm obs}$, a Gaussian mixture forecast PDF (\ref{sec8:GM}) leads to an analysis PDF in the form of a weighted Gaussian mixture provided the forward operator $h(z)$ is linear in $z$. This fact allows one to extend the ETPF to Gaussian mixtures.
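As a concrete illustration of the importance weights behind the weighted empirical analysis measure and of the Gaussian mixture forecast density, the following sketch can be run (illustrative only: scalar state, linear forward operator $h(z)=z$, and all numerical values are our own assumptions, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
M = 100
zf = rng.standard_normal(M)          # forecast ensemble z_i^f (scalar state)
y_obs, R = 0.5, 0.25                 # observation and error variance (assumed)

# Importance weights of the weighted empirical analysis measure,
# w_i proportional to exp(-(h(z_i^f) - y_obs)^2 / (2R)) with h(z) = z here.
logw = -0.5 * (zf - y_obs) ** 2 / R
w = np.exp(logw - logw.max())
w /= w.sum()

# Gaussian mixture forecast density pi_GM^f with a scalar bandwidth B;
# B -> 0 recovers the empirical measure, as noted in the text.
B = 0.1
def pi_gm(z):
    return np.mean(np.exp(-0.5 * (z - zf) ** 2 / B) / np.sqrt(2 * np.pi * B))

assert abs(w.sum() - 1.0) < 1e-12 and pi_gm(0.0) > 0.0
```

In the multivariate case $B$ becomes a covariance matrix and the kernel the full ${\rm n}(z;z_i^f,B)$; the scalar version above only illustrates the structure of the construction.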
See \cite{sr:reichcotter14} for more details. Alternative implementations of Gaussian mixture filters can, for example, be found in \cite{sr:stordal11} and \cite{sr:frei11}. Second, one may factorize the likelihood function $\pi_{Y^{1:N}}(y^{1:N}|z^{0:N})$ into $L> 1$ identical copies \[ \hat \pi_{Y^{1:N}}(y^{1:N}|z^{0:N}) := \frac{1}{(2\pi)^{N_yN/2} |R/L|^{N/2}} \prod_{n=1}^N e^{- \frac{1}{2L} (h(z^n)-y^n)^T R^{-1} (h(z^n)-y^n) }, \] \emph{i.e.}, \[ \pi_{Y^{1:N}}(y^{1:N}|z^{0:N}) = \prod_{l=1}^L \hat \pi_{Y^{1:N}}(y^{1:N}|z^{0:N}) \] and one obtains a sequence of $L$ ``incremental'' Feynman-Kac formulae. Each of these formulae can be approximated numerically by any of the methods discussed in this review. For example, one obtains the continuous EnKF formulation of \cite{sr:br10} in the limit $L\to \infty$ in the case of an ESRF. We also mention the continuous Gaussian mixture ensemble transform filter \citep{sr:reich11}. An important advantage of an incremental approach is the fact that the associated weights (\ref{sec5:SMCM1b}) remain closer to the uniform reference value $1/M$ in each iteration step. See also related methods such as \emph{running in place} (RIP) \citep{sr:kalnayyang10}, the iterative EnKF approach of \cite{sr:bocquet12,sr:sakov12}, and the embedding approach of \cite{sr:beskov13} for SMCMs. Third, while this paper has been focused on discrete-time algorithms, most of the presented results can be extended to differential equations with observations arriving continuously in time, such as \[ {\rm d}y_{\rm obs}(t) = h(z_{\rm ref}(t)){\rm d} t + \sigma {\rm d}W(t), \] where $W(t)$ denotes standard Brownian motion and $\sigma> 0$ determines the amplitude of the measurement error. The associated marginal densities satisfy the Kushner-Stratonovitch stochastic PDE \citep{sr:jazwinski}. Extensions of the McKean approach to continuous-in-time filtering problems can be found in \cite{sr:crisan10} and \cite{sr:meyn13}.
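The effect of the likelihood factorization on the weights can be checked numerically. The sketch below compares a full likelihood update with a single incremental factor (variance inflated by $L$) for a scalar ensemble; all values are assumptions for illustration, and the effective sample size is used as a proxy for closeness to the uniform weights $1/M$:

```python
import numpy as np

rng = np.random.default_rng(2)
M, L = 50, 10
z = rng.standard_normal(M)            # forecast ensemble (scalar state, assumed)
y_obs, R = 2.0, 1.0                   # observation and error variance (assumed)

def ess(w):                           # effective sample size 1 / sum_i w_i^2
    return 1.0 / np.sum(w ** 2)

def normalize(logw):
    w = np.exp(logw - logw.max())
    return w / w.sum()

# One full likelihood step vs. one incremental step with variance R * L.
logw_full = -0.5 * (z - y_obs) ** 2 / R
logw_inc = -0.5 * (z - y_obs) ** 2 / (R * L)

# Each incremental factor keeps the weights closer to uniform (higher ESS).
assert ess(normalize(logw_inc)) >= ess(normalize(logw_full))
```

The inequality reflects tempering: weights proportional to $\exp(\beta \ell_i)$ become more uniform as the exponent $\beta = 1/L$ shrinks, which is exactly the advantage of the incremental approach noted above.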
We also mention the continuous-in-time formulation of the EnKF by \cite{sr:br11}. More generally, a reformulation of LETFs in terms of continuously-in-time arriving observations is of the abstract form \begin{equation} \label{sec6:ip} {\rm d}z_j = f(z_j){\rm d}t + \sum_{i=1}^M z_i {\rm d}s_{ij}, \qquad j = 1,\ldots,M . \end{equation} Here $S(t) = \{s_{ij}(t)\}$ denotes a matrix-valued stochastic process which depends on the ensemble $\{z_i(t)\}$ and the observations $y_{\rm obs}(t)$. In other words, (\ref{sec6:ip}) leads to a particular class of interacting particle systems and we leave further investigations of its properties for future research. We only mention that the continuous-in-time EnKF formulation of \cite{sr:br11} leads to \[ {\rm d}s_{ij} = \frac{1}{M-1} (y_i-\bar y)\sigma^{-1}({\rm d}y_{\rm obs} - y_j{\rm d}t + \sigma^{1/2} {\rm d}W_j ), \] where the $W_j(t)$'s denote standard Brownian motions, $y_j = h(z_j)$, and $\bar y = \frac{1}{M}\sum_{i=1}^M y_i$. See also \cite{sr:akir11} for related reformulations of ESRFs. \begin{acknowledgement} We would like to thank Yann Brenier, Dan Crisan, Mike Cullen and Andrew Stuart for inspiring discussions on ensemble-based filtering methods and the theory of optimal transportation. \end{acknowledgement} \bibliographystyle{plainnat} \bibliography{survey} \end{document}
TITLE: Does this sequence $x_{n}=(\sqrt[n]{e}-1)\cdot n$ converge? QUESTION [2 upvotes]: Does the sequence defined by $$x_{n} =\left(\sqrt[n]{e}-1\right)\cdot n$$ converge? For finding the limit one has to evaluate $\displaystyle\lim_{n \to \infty} x_{n}$, which I think I can do, but how do I prove that it converges or diverges? REPLY [0 votes]: Without using any information about the number $e$, it is possible to prove that if $x>0$ then the sequence $n(\sqrt[n]{x}-1)$ converges. The proof is based on algebraic inequalities and is given in my post (see the topic "Logarithm as a Limit"). The main idea is to show that the sequence is monotone and bounded.
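For what it is worth, the value of the limit also follows from the exponential series (this is supplementary to the answer above, which deliberately avoids properties of $e$):

```latex
x_n = n\left(e^{1/n}-1\right)
    = n\left(\frac{1}{n}+\frac{1}{2!\,n^{2}}+\frac{1}{3!\,n^{3}}+\cdots\right)
    = 1+\frac{1}{2n}+\frac{1}{6n^{2}}+\cdots \;\longrightarrow\; 1 .
```

In particular $1 \le x_n \le 1+\frac{e-2}{n}$, since $e^{1/n}\ge 1+\frac1n$ and $\sum_{k\ge 2}\frac{1}{k!\,n^{k-1}}\le\frac{e-2}{n}$, so convergence to $1$ also follows from the squeeze theorem.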
TITLE: Help on a specific power series expansion: I cannot see what the author did here QUESTION [3 upvotes]: I am working through the next chapter of my quantum mechanics book over winter break, and admittedly series are my weakest point as far as calculus is concerned. The author starts with the expression: $$ \sqrt{\frac{\omega _1^2}{\omega _0^2}+\frac{2 \omega _1}{\omega _0}+\frac{\omega _2^2}{\omega _0^2}+1} $$ He then explains that we wish to expand this expression to second order in a power series in the small parameters $\frac{\omega _1}{\omega _0}$ and $\frac{\omega _2}{\omega _0}$. The next line (his expansion) reads: $$ \left(1+\frac{\omega _1}{\omega _0}+\frac{\omega _1^2}{2 \omega _0^2}+\frac{\omega _2^2}{2 \omega _0^2}-\frac{1}{8} \left(\frac{\omega _1^2}{\omega _0^2}+\frac{2 \omega _1}{\omega _0}+\frac{\omega _2^2}{\omega _0^2}\right){}^2+\text{...}\right) $$ Can anyone explain what steps he took to do this expansion? I have been trying to figure it out and I can't see what he has done. Any help is greatly appreciated. Thanks! If anyone is curious, the book is Quantum Mechanics by David H. McIntyre, and this is from Section 10.1 (Spin-1/2 example), page 315. REPLY [2 votes]: This is Taylor's expansion: $$f(x) = \sum_{n=0}^\infty \frac{f^{(n)}(a)}{n!} (x-a)^n$$ Let $f(x) = \sqrt{x}$, and let us only look at the terms up to second order around $1$, i.e. $$f(x) \approx \frac{f(1)}{0!} + \frac{f'(1)}{1!} (x-1) + \frac{f''(1)}{2!}(x-1)^2.$$ Now $f(1) = 1$ can be seen immediately. $f'(x) = \frac{1}{2 \sqrt{x}}$ and thus $f'(1) = \frac{1}{2}$. Lastly, $f''(x) = -\frac{1}{4\sqrt{x^3}} $ and thus $f''(1) = - \frac{1}{4}.$ Putting it all together yields the desired expression.
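Spelling out the substitution behind the answer: write the radicand as $1+u$ with $u := \frac{\omega_1^2}{\omega_0^2}+\frac{2\omega_1}{\omega_0}+\frac{\omega_2^2}{\omega_0^2}$, which is second order small. Taylor's expansion of $\sqrt{1+u}$ around $u=0$ gives

```latex
\sqrt{1+u} \;\approx\; 1+\frac{u}{2}-\frac{u^{2}}{8},
\qquad
\frac{u}{2} \;=\; \frac{\omega_1}{\omega_0}
   +\frac{\omega_1^{2}}{2\omega_0^{2}}
   +\frac{\omega_2^{2}}{2\omega_0^{2}} .
```

This reproduces the quoted line term by term: the $-\frac18(\cdots)^2$ is exactly $-\frac{u^2}{8}$, which is then truncated at second order in the small parameters.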
\section{Matrix perturbation analysis} \label{app:sec_perturb_theory} In this section, we recall several standard tools from matrix perturbation theory for studying the perturbation of the spectra of Hermitian matrices. The reader is referred to \cite{stewart1990matrix} for a more comprehensive overview of this topic. Let $A \in \mathbb{C}^{n \times n}$ be Hermitian with eigenvalues $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n$ and corresponding eigenvectors $v_1,v_2,\dots,v_n \in \mathbb{C}^n$. Let $\widetilde{A} = A + W$ be a perturbed version of $A$, with the perturbation matrix $W \in \mathbb{C}^{n \times n}$ being Hermitian. Let us denote the eigenvalues of $\widetilde{A}$ and $W$ by $\tilde{\lambda}_1 \geq \cdots \geq \tilde{\lambda}_n$, and $\epsilon_1 \geq \epsilon_2 \geq \cdots \geq \epsilon_n$, respectively. To begin with, one can quantify the perturbation of the eigenvalues of $\widetilde{A}$ with respect to the eigenvalues of $A$. Weyl's inequality \cite{Weyl1912} is a very useful result in this regard. \begin{theorem} [Weyl's Inequality \cite{Weyl1912}] \label{thm:Weyl} For each $i = 1,\dots,n$, it holds that \begin{equation} \lambda_i + \epsilon_n \leq \tilde{\lambda}_i \leq \lambda_i + \epsilon_1. \end{equation} In particular, this implies that $\tilde{\lambda}_i \in [\lambda_i - \norm{W}, \lambda_i + \norm{W}]$. \end{theorem} One can also quantify the perturbation of the subspaces spanned by the eigenvectors of $A$; such bounds were established by Davis and Kahan \cite{daviskahan}. Before introducing the theorem, we need some definitions. Let $U,\widetilde{U} \in \mathbb{C}^{n \times k}$ (for $k \leq n$) each have orthonormal columns, and let $\sigma_1 \geq \dots \geq \sigma_k$ denote the singular values of $U^{*}\widetilde{U}$. Also, let $\calR(U)$ denote the range space of the columns of $U$, and similarly for $\calR(\widetilde U)$.
Then the $k$ principal angles between $\calR(U), \calR(\widetilde{U})$ are defined as $\theta_i := \cos^{-1}(\sigma_i)$ for $1 \leq i \leq k$, with each $\theta_i \in [0,\pi/2]$. It is usual to define $k \times k$ diagonal matrices $\Theta(\calR(U), \calR(\widetilde{U})) := \text{diag}(\theta_1,\dots,\theta_k)$ and $\sin \Theta(\calR(U), \calR(\widetilde{U})) := \text{diag}(\sin \theta_1,\dots,\sin \theta_k)$. Denoting by $||| \cdot |||$ any unitarily invariant norm (Frobenius, spectral, etc.), the following relation holds (see, e.g., \cite[Lemma 2.1]{li94}, \cite[Corollary I.5.4]{stewart1990matrix}). \begin{equation*} ||| \sin \Theta(\calR(U), \calR(\widetilde{U})) ||| = ||| (I - \tilde{U} \tilde{U}^{*} ) U |||. \end{equation*} With the above notation in mind, we now introduce a version of the Davis-Kahan theorem taken from \cite[Theorem 1]{dkuseful} (see also \cite[Theorem V.3.6]{stewart1990matrix}). \begin{theorem}[Davis-Kahan] \label{thm:DavisKahan} Fix $1 \leq r \leq s \leq n$, let $d = s-r+1$, and let $U = (u_r,u_{r+1},\dots,u_s) \in \mathbb{C}^{n \times d}$ and $\widetilde{U} = (\widetilde{u}_r,\widetilde{u}_{r+1},\dots,\widetilde{u}_s) \in \mathbb{C}^{n \times d}$. Write \begin{equation*} \delta = \inf\set{\abs{\hat\lambda - \lambda}: \lambda \in [\lambda_s,\lambda_r], \hat\lambda \in (-\infty,\widetilde\lambda_{s+1}] \cup [\widetilde \lambda_{r-1},\infty)} \end{equation*} where we define $\widetilde\lambda_0 = \infty$ and $\widetilde\lambda_{n+1} = -\infty$ and assume that $\delta > 0$. Then \begin{equation*} ||| \sin \Theta(\calR(U), \calR(\widetilde{U}))||| = ||| (I - \tilde{U} \tilde{U}^{*} ) U ||| \leq \frac{ ||| W ||| }{ \delta}.
\end{equation*} \end{theorem} For instance, if $r = s = j$, then by using the spectral norm $\norm{\cdot}$, we obtain \begin{equation} \label{eq:dk_useful} \sin \Theta(\calR(\widetilde{v}_j), \calR(v_j)) = \norm{(I - v_j v_j^{*})\widetilde{v}_j} \leq \frac{\norm{W}}{\min\set{\abs{\widetilde{\lambda}_{j-1}-\lambda_j},\abs{\widetilde\lambda_{j+1}-\lambda_j}}}. \end{equation} Finally, we recall the following standard result which states that given any pair of $k$-dimensional subspaces with orthonormal basis matrices $U, \tilde{U} \in \mathbb{R}^{n \times k}$, there exists an alignment of $U, \tilde{U}$ with the error after alignment bounded by the distance between the subspaces. We provide the proof for completeness. \begin{proposition} \label{prop:orth_basis_align} Let $U, \tilde{U} \in \mathbb{R}^{n \times k}$ respectively consist of orthonormal vectors. Then there exists a $k \times k$ rotation matrix $O$ such that $$\norm{\tilde{U} - U O} \leq 2\norm{(I -UU^T) \tilde{U}}.$$ \end{proposition} \begin{proof} Write the SVD as $U^T \tilde{U} = V \Sigma (V')^T$, where we recall that the $i$th largest singular value $\sigma_i = \cos \theta_i$ with $\theta_i \in [0,\pi/2]$ denoting the principal angles between $\mathcal{R}(U)$ and $\mathcal{R}(\tilde{U})$. Choosing $O = V (V')^T$, we then obtain \begin{align*} \norm{\tilde{U} - U V(V')^T} &\leq \norm{\tilde{U} - UU^T \tilde{U}} + \norm{UU^T \tilde{U} - U V(V')^T} \\ &= \norm{(I - UU^T) \tilde{U}} + \norm{U^T \tilde{U} - V(V')^T} \\ &= \norm{(I - UU^T) \tilde{U}} + \norm{I - \Sigma} \\ &\leq 2\norm{(I -UU^T) \tilde{U}}, \end{align*} where the last inequality follows from the fact that $\norm{I-\Sigma} = 1-\cos \theta_k \leq \sin \theta_k$. \end{proof}
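As a quick numerical sanity check of Weyl's inequality and the alignment bound above, one can run the following sketch (sizes, perturbation scale, and tolerances are our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 3

# Random symmetric A and a small symmetric perturbation W.
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
W = 1e-3 * rng.standard_normal((n, n)); W = (W + W.T) / 2
At = A + W

lam = np.linalg.eigvalsh(A)[::-1]      # eigenvalues in descending order
lam_t = np.linalg.eigvalsh(At)[::-1]

# Weyl: |lambda_i(A + W) - lambda_i(A)| <= ||W||_2 for every i.
assert np.all(np.abs(lam_t - lam) <= np.linalg.norm(W, 2) + 1e-12)

# Alignment bound: with O = V (V')^T from the SVD U^T U~ = V Sigma (V')^T,
# ||U~ - U O||_2 <= 2 ||(I - U U^T) U~||_2.
U = np.linalg.eigh(A)[1][:, -k:]       # top-k eigenvectors of A
Ut = np.linalg.eigh(At)[1][:, -k:]
V, _, Vh = np.linalg.svd(U.T @ Ut)     # Vh equals (V')^T
O = V @ Vh
lhs = np.linalg.norm(Ut - U @ O, 2)
rhs = 2 * np.linalg.norm((np.eye(n) - U @ U.T) @ Ut, 2)
assert lhs <= rhs + 1e-12
```

Note that the alignment through $O$ absorbs the sign/basis ambiguity of the computed eigenvectors, which is exactly the point of the proposition.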
TITLE: Projective dimension of direct summand QUESTION [1 upvotes]: I want to prove that if $M$ is an $A$-module with finite projective dimension and $N$ is an $A$-module that is a direct summand of $M$, then the projective dimension of $N$ is less than or equal to that of $M$. I tried using the exact sequence $$0\rightarrow N \rightarrow M \rightarrow N'\rightarrow 0$$ with no results. REPLY [2 votes]: Write $M=N\oplus N'$ and set $n=\mathrm{pd}(M)$. Let $$0\to X_n\to P_{n-1}\to\cdots\to P_0\to N\to 0$$ be an exact sequence with $P_i$ projective modules, and similarly $$0\to X'_n\to P'_{n-1}\to\cdots\to P'_0\to N'\to 0$$ be an exact sequence with $P'_i$ projective modules. Then take the direct sum of these two sequences; since $\mathrm{pd}(M)=n$, conclude that $X_n\oplus X'_n$ is projective, and hence both summands are projective (a direct summand of a projective module is projective). It now follows that $\mathrm{pd}(N)\le n$.
TITLE: Convergence of partial sums and their inverses QUESTION [3 upvotes]: If a sequence $s_{k}$ of partial sums converges to a nonzero limit, and we assume that $s_{k} \neq 0$ for all $k \in \mathbb{N}$, then also the sequence $\left \{ \frac{1}{s_{k}} \right \}$ converges. In my book, $s_{k}$ is defined as $\sum_{j = 1}^{k}\frac{a_{j}}{10^{j}}$ which is a decimal expansion. I can't immediately see why this sequence converges - Maybe I'm just braindead today, but I can't think of any examples in my head that are making sense to me. Can anyone point me in the right direction about why this is true? REPLY [1 votes]: In general, if $\{a_n\}$ is any convergent sequence with a limit $a\neq 0$, then $\dfrac{1}{a_n}$ converges to $\dfrac{1}{a}$. Proof. Let $\epsilon>0$. Then $$ \left|\frac{1}{a_n}-\frac{1}{a}\right|=\left|\frac{a-a_{n}}{aa_n}\right| $$ Since $a_n\to a$ as $n\to\infty$, we can choose a positive integer $N_1$ such that $|a-a_n|\leq \dfrac{|a|}{2}$ for all $n\ge N_1$. This implies $|a_n|\ge\dfrac{|a|}{2}$ for all $n\ge N_1$. Now choose a positive integer $N_2$ such that $|a-a_n|\leq \dfrac{|a|^2}{2}\epsilon$ for all $n\ge N_2$. Let $N=\max(N_1, N_2)$. It follows that $$ \left|\frac{1}{a_n}-\frac{1}{a}\right|=\left|\frac{a-a_{n}}{aa_n}\right|=\frac{|a-a_n|}{|a||a_n|}\leq \frac{\frac{|a|^2}{2} \epsilon}{\frac{|a|^2}{2}}=\epsilon $$ for each $n\ge N$. So $\dfrac{1}{a_n}\to\dfrac{1}{a}$ as desired.
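To make this concrete for the decimal-expansion setting of the question, here is a quick numerical check with an assumed digit sequence (the digits are purely illustrative, not from the book):

```python
# s_k = sum_{j=1}^k a_j / 10^j for an assumed digit sequence; the partial
# sums converge to a nonzero limit, so the inverses 1/s_k converge as well.
digits = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]        # illustrative digits
s = 0.0
partial = []
for j, a in enumerate(digits, start=1):
    s += a / 10 ** j
    partial.append(s)
inv = [1 / x for x in partial]
# The inverses settle down: successive differences shrink rapidly.
assert abs(inv[-1] - inv[-2]) < abs(inv[1] - inv[0])
```

Each new digit changes $s_k$ by at most $9 \cdot 10^{-k}$, and since $s_k$ stays bounded away from $0$ (here $s_k \ge 0.3$), the change in $1/s_k$ is also of order $10^{-k}$, mirroring the $|a_n| \ge |a|/2$ step in the proof.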
TITLE: Limit of $\frac{\sin(xy^2)}{x^2+y^2}$ as $x\to0$ and $y\to0$ QUESTION [0 upvotes]: Does anybody have an idea how to solve this limit? I can't figure it out. I wanted to try some algebraic magic to get something with $\frac{\sin(x)}{x}$, but without success. Any suggestions? Thank you. REPLY [4 votes]: Try writing it as $$\frac{\sin(xy^2)}{x^2+y^2}=\frac{\sin(xy^2)}{xy^2}\frac{xy^2}{x^2+y^2}$$ Then $$\frac{\sin(xy^2)}{xy^2} \rightarrow 1$$ So you just have to evaluate the limit of $$\frac{xy^2}{x^2+y^2}$$ which is zero since $$\left|\frac{xy^2}{x^2+y^2}\right| = |x|\,\frac{y^2}{x^2+y^2} \leq |x|.$$
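A quick numerical check of the bound $|f(x,y)| \le |x|$ along a few paths to the origin (not a proof, just a sanity check; the paths and step sizes are arbitrary choices):

```python
import math

def f(x, y):
    return math.sin(x * y * y) / (x * x + y * y)

# |f(x, y)| <= |x| because |sin t| <= |t| and y^2 <= x^2 + y^2,
# so f -> 0 along every path into the origin.
for t in [1e-1, 1e-2, 1e-3]:
    assert abs(f(t, t)) <= abs(t)
    assert abs(f(-t, 2 * t)) <= abs(t)
```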
TITLE: Alternative definition of and opposite concept to a matroid? QUESTION [3 upvotes]: From Wikipedia: In terms of independence, a finite matroid $M$ is a pair $(E,\mathcal{I})$, where $E$ is a finite set (called the ground set) and $\mathcal{I}$ is a family of subsets of $E$ (called the independent sets) with the following properties: The empty set is independent, i.e., $\emptyset\in\mathcal{I}$. Alternatively, at least one subset of $E$ is independent, i.e., $\mathcal{I}\neq\emptyset$. Every subset of an independent set is independent, i.e., for each $A'\subset A\subset E$, if $A\in\mathcal{I}$ then $A'\in\mathcal{I}$. This is sometimes called the hereditary property. If $A$ and $B$ are two independent sets of $\mathcal{I}$ and $A$ has more elements than $B$, then there exists an element in $A$ that when added to $B$ gives a larger independent set. This is sometimes called the augmentation property or the independent set exchange property. A subset of the ground set $E$ that is not independent is called dependent. A maximal independent set—that is, an independent set which becomes dependent on adding any element of $E$—is called a basis for the matroid. Can a matroid be defined equivalently by replacing the augmentation property with the following one: $\forall A \in \mathcal{I}$, $A$ has a maximal superset in $\mathcal{I}$, and the cardinality of the maximal superset of $A$ is the same for all members of $\mathcal{I}$. I was wondering whether there already exists a concept opposite to a matroid? For example, for a pair $(E, \mathcal{J})$, $E \in \mathcal{J}$, for each $A'\subset A\subset E$, if $A' \in\mathcal{J}$ then $A \in\mathcal{J}$. If $A$ and $B$ are both in $\mathcal{J}$ and $A$ has fewer elements than $B$, then there exists an element in $B$ that, when removed from $B$, gives a smaller member of $\mathcal{J}$. ($A$ may or may not be helpful in finding such an element in $B$.)
Or can the third point be replaced by: $\forall A \in \mathcal{J}$, $A$ has a minimal subset in $\mathcal{J}$, and the cardinality of the minimal subset of $A$ is the same for all members of $\mathcal{J}$. Based on the definition, can we further define a concept just opposite to a basis of a matroid, something like "a minimal set in $\mathcal{J}$ is called a basis for $(E, \mathcal{J})$"? An example of such $(E, \mathcal{J})$ would be the collection of all bases of a topology. Thanks and regards! REPLY [0 votes]: My favorite "alternative" definition of a matroid is: A matroid is a simplicial complex whose every induced subcomplex is pure.
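The three independence axioms quoted from Wikipedia can be verified mechanically for small ground sets; a brute-force checker (our own sketch, exponential in $|E|$ and only suitable for tiny examples) looks like this:

```python
from itertools import combinations

def is_matroid(E, I):
    """Brute-force check of the independence axioms (small E only)."""
    I = {frozenset(s) for s in I}
    if frozenset() not in I:                          # non-emptiness
        return False
    for A in I:                                       # hereditary property
        for r in range(len(A)):
            if any(frozenset(s) not in I for s in combinations(A, r)):
                return False
    for A in I:                                       # augmentation property
        for B in I:
            if len(A) > len(B) and not any(B | {x} in I for x in A - B):
                return False
    return True

# Uniform matroid U_{2,3}: all subsets of {1, 2, 3} with at most 2 elements.
E = {1, 2, 3}
U23 = [set(s) for r in range(3) for s in combinations(E, r)]
assert is_matroid(E, U23)
```

Dropping, say, the singletons from `U23` breaks the hereditary property, and the checker reports failure; the checker is only meant to make the axioms concrete, not to settle the equivalence asked about in the question.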
TITLE: A version of the double-slit experiment: Was it done? What are the results? QUESTION [0 upvotes]: My motivation is the role of consciousness in quantum measurement. Imagine I perform the double-slit experiment with a device that records on paper which slit each photon went through (so there is a detector in each slit). Then, without looking at the photograph (where the photons landed) and without looking at the paper, I burn the paper so that the which-path information is completely discarded. Only afterwards do I look at the photograph. What will I see? My guess is that I will see two spots, yet I am wondering what the experimental results are. REPLY [0 votes]: Detecting a photon requires that the detector absorb all of the photon's energy. If you are working with a single photon and you detect it at a slit, then nothing will arrive at the film beyond the slit. If a single photon passes through one (or more) slits, it can be detected at only one point beyond the slits. The electromagnetic wave associated with a photon passes through your slits and produces an interference pattern on any surface beyond the slits. At each point in the pattern, the field strength determines the probability that the photon will be absorbed at that point.
TITLE: Is the set $K$ finite or infinite? QUESTION [0 upvotes]: Say you have $\Phi(x)=2^{sA_n}$, for some fixed parameter $s\in \Bbb R$, $n \in \Bbb Z^+ $ with $A_n=1/\log(p_n(x)) $, where $p_n(x) $ is a polynomial with degree $n$. How many forms $p_n(x)$ yield concave $\Phi$, for all $s$, for, $(-\infty<x<\infty)$ and $\Phi<1.$ So for example $A_1=\{\},$ $A_2=\{p_2(x)=x^2-x+1\}$, $...$ etc. Let $K=\{A_1,A_2,...\}$. Is $K$ finite or infinite? Edit: I think the $\log()$ brings unneeded complexity to the problem. You could just have $A_n=1/p_n(x)$ or even $A_n=p_n(x).$ REPLY [1 votes]: A function $\Phi(x)=2^{sg(x)}$ is concave for $\Phi(x)<1$ if and only if $(\ln 2)s^2 (g')^2+sg'' \geq 0$ whenever $sg(x)<0$. Dividing by $sg$ implies that $(\ln 2)(s/g)(g')^2+g''/g \leq 0$. Given any $x$ with $g(x) \neq 0$, one could choose $s$ such that $sg(x) \rightarrow 0^-$ and so it follows that $g''/g \leq 0$ for all $x$ with $g(x) \neq 0$. Equivalently $g''(x)g(x) \leq 0$ for all $x$. If $g(x)=p(x)$ is a polynomial, the condition $g''(x)g(x) \leq 0$ for all $x$ cannot hold except if $p(x)$ is linear (this can be seen by taking $x\rightarrow \infty$). In this case, for $g(x)=ax+b$, the function $\Phi(x)=2^{sg(x)}$ is concave for all $x$. If $g(x)=1/p(x)$ for a polynomial $p(x)$ which is nonzero for all $x$, then $g''(x)g(x)\leq 0$ is equivalent to $$ {2(p')^2} \leq pp'',$$ for all $x$. Comparing the leading terms shows that this is not possible for any nonconstant polynomial. If $g(x)=1/\log(p(x))$, where $p(x)>0$ for all $x$, then $g''(x)g(x)\leq 0$ is equivalent to $$ (p')^2 \log p+2p'\leq p''p(\log p).$$ Again comparing the largest terms on both sides and their leading coefficients, we see that this inequality cannot hold for all $x$
The main idea of this section is that while the key estimate for the proof of decoupling for the parabola in \cite{li-efficient} follows from Plancherel (see \cite[Lemma 3.8]{GLYZK} with $k = 2$, \cite[Remark 4]{li-efficient}, or \cite[Proposition 19]{tao-247}), the key estimate here will follow from \eqref{maininput}. \begin{lemma}[Key estimate]\label{key} If $0 \leq \delta(j) \leq \delta(i_1), \delta(i_{1}'), \delta(i_2) \leq \delta(k) \leq 1$ with $\delta(i_2)^{2} \leq \delta(i_{1}') \leq \delta(i_1)$, then for any $\vep > 0$, \begin{align*} M_{p}(j, k, i_1, i_2) \lsm_{p, \vep, \dim(C), N(1)} \delta(k)^{-O(1)}M_{p}(j, k, i_{1}', i_2)N(i_{1}' - i_{1})^{\kappa_{p}(C)/3 + \vep/3} \end{align*} where $\kappa_{p}(C)$ is defined in \eqref{maininput}. \end{lemma} \begin{proof} Fix arbitrary $\vep > 0$ and arbitrary $I_1 \in P_{\delta(i_1)}(C_{i_1})$ and $I_2 \in P_{\delta(i_2)}(C_{i_2})$ such that $d(I_1, I_2) \geq \delta(k)$. Next fix arbitrary Schwartz functions $f$ and $g$ with Fourier support in $\bigcup_{J_1 \in P_{\delta(j)}(I_1 \cap C_j)}\Om_{J_1}$ and $\bigcup_{J_2 \in P_{\delta(j)}(I_2 \cap C_j)}\Om_{J_2}$, respectively. We may normalize $f$ and $g$ so that \begin{align}\label{fgnorm} \sum_{J_1 \in P_{\delta(j)}(I_1 \cap C_j)}\nms{f_{\Om_{J_1}}}_{L^{3p}(\R^2)}^{2} = \sum_{J_2 \in P_{\delta(j)}(I_2 \cap C_j)}\nms{g_{\Om_{J_2}}}_{L^{3p}(\R^2)}^{2} = 1. \end{align} Thus we need to show that \begin{equation*} \begin{aligned} \int_{\R^2}|\sum_{J_1 \in P_{\delta(j)}(I_1 \cap C_j)}f_{\Om_{J_1}}|^{p}&|\sum_{J_2 \in P_{\delta(j)}(I_2 \cap C_j)}g_{\Om_{J_2}}|^{2p}\\ & \lsm_{p, \vep, \dim(C), N(1)} \delta(k)^{-O(p)}N(i_{1}' - i_{1})^{p\kappa_{p}(C) + p\vep}M_{p}(j, k, i_{1}', i_2)^{3p}. \end{aligned} \end{equation*} Write $I_1 := [a, a + \delta(i_1)]$ and $I_2 := [b, b + \delta(i_2)]$. Assume that $I_2$ is to the left of $I_1$ and so $a - b > \delta(k)$; the case when $I_2$ is to the right of $I_1$ is similar. We now essentially reduce to the case when $b = 0$. 
To see this, let $T_{I_2} = (\begin{smallmatrix} 1 & 0\\-2b & 1 \end{smallmatrix})$, $\wt{f}_{I_2}(y) := f(T_{I_2}^{\top}y)e(-y \cdot T_{I_2}(b, b^2))$, and $\wt{g}_{I_2}(y) := g(T_{I_2}^{\top}y)e(-y \cdot T_{I_2}(b, b^2))$. By an argument similar to the one in the proof of \Cref{parab}, it suffices to show that \begin{equation}\label{keyeq0a} \begin{aligned} \int_{\R^2}|\sum_{J_1 \in P_{\delta(j)}((I_1 - b) \cap (C_{j} - b))}(\wt{f}_{I_2})_{\Om_{J_1}}|^{p}|&\sum_{J_2 \in P_{\delta(j)}([0, \delta(i_2)] \cap (C_{j} - b))}(\wt{g}_{I_2})_{\Om_{J_2}}|^{2p}\\ & \lsm_{p, \vep, \dim(C), N(1)} \delta(k)^{-O(p)}N(i_{1}' - i_{1})^{p\kappa_{p}(C) + p\vep}M_{p}(j, k, i_{1}', i_2)^{3p} \end{aligned} \end{equation} where \begin{align*} \sum_{J_1 \in P_{\delta(j)}((I_1 - b) \cap (C_{j} - b))}\nms{(\wt{f}_{I_2})_{\Om_{J_1}}}_{L^{3p}(\R^2)}^{2} = \sum_{J_2 \in P_{\delta(j)}([0, \delta(i_2)] \cap (C_{j} - b))}\nms{(\wt{g}_{I_2})_{\Om_{J_2}}}_{L^{3p}(\R^2)}^{2} = 1 \end{align*} since $\det T_{I_2} = 1$. \begin{figure} \centering \includegraphics[width=.8\textwidth]{curved_cantor_v3.png} \caption{Scheme of the key estimate. Since $I_1$ is away from the origin and the parabola is Lipschitz on $I_1$ with Lipschitz constant $\gtrsim \delta(k)^{-O(1)}$, we know we can decouple vertically. Multiplying by $G^2$ amounts, on the Fourier side, to convolving against $\wh{G} \ast \wh{G}$, which adds an uncertainty of size $O(\delta(i_2)^{2})$ on each vertical level. This is acceptable because we can cover the overlap by $\delta(k)^{-1}$ many copies of the orange sets (these copies are in shades of blue, purple and maroon in the picture).} \label{fig:curved_cantor} \end{figure} Let $$G := \sum_{J_2 \in P_{\delta(j)}([0, \delta(i_2)] \cap (C_{j} - b))}(\wt{g}_{I_2})_{\Om_{J_2}}.$$ Then $G$ (and hence $G^2$) is Fourier supported in an $O(\delta(i_2)) \times O(\delta(i_2)^{2} + \delta(j))$ rectangle centered at the origin.
For each $J \in P_{\delta(i_{1}')}((I_1 - b) \cap (C_{i_{1}'} - b))$, let $$F_{J} := \sum_{J_1 \in P_{\delta(j)}(J \cap (C_{j} - b))}(\wt{f}_{I_2})_{\Om_{J_1}}.$$ The Fourier transform of $F_J$ is supported in the vertical strip $\{(\xi_1, \xi_2): \xi_{2} = \gamma_{J}^{2} + O(\delta(i_{1}'))\}$ where $\gamma_{J}$ is the center of $J$ and $\gamma_{J}$ is a distance $\gtrsim \delta(k)$ away from the origin. Since $\delta(j), \delta(i_{2})^{2} \leq \delta(i_{1}')$, $F_{J}G^{2}$ has Fourier transform supported in the vertical strip $\{(\xi_1, \xi_2): \xi_{2} = \gamma_{J}^{2} + O(\delta(i_{1}'))\}$ as well. Using this notation, showing \eqref{keyeq0a} is equivalent to showing that \begin{align} \begin{aligned}\label{reduction} \int_{\R^2}|\sum_{J \in P_{\delta(i_{1}')}((I_1 - b) \cap (C_{i_{1}'} - b))}F_{J}G^{2}|^{p} \lsm_{p, \vep, \dim(C), N(1)} \delta(k)^{-O(p)}N(i_{1}' - i_{1})^{p\kappa_{p}(C) + p\vep}M_{p}(j, k, i_{1}', i_2)^{3p}. \end{aligned} \end{align} We now claim that \begin{equation} \begin{aligned}\label{lowerdim} \|&\sum_{J \in P_{\delta(i_{1}')}((I_1 - b) \cap (C_{i_{1}'} - b))}F_{J}G^{2}\|_{L^{p}(\R^2)}\\ &\hspace{0.5in}\lsm_{p, \vep, \dim(C), N(1)} \delta(k)^{-O(1)}N(i_{1}' - i_{1})^{\kappa_{p}(C) + \vep}(\sum_{J \in P_{\delta(i_{1}')}((I_1 - b) \cap (C_{i_{1}'} - b))}\nms{F_{J}G^{2}}_{L^{p}(\R^2)}^{2})^{1/2} \end{aligned} \end{equation} which, as we will show, follows from an application of Cantor set decoupling for the line given by \eqref{maininput}. Let us see how to use \eqref{lowerdim} to prove \eqref{reduction}. 
Reversing the change of variables used to obtain \eqref{keyeq0a} and applying the definition of $M_{p}(j, k, i_{1}', i_{2})$ along with the normalization of $g$ in \eqref{fgnorm} gives \begin{align}\label{keyeq1} \nms{F_{J}G^2}_{L^{p}(\R^2)} \leq M_{p}(j, k, i_{1}', i_2)^{3}(\sum_{J_1 \in P_{\delta(j)}((J + b) \cap C_j)}\nms{f_{\Om_{J_1}}}_{L^{3p}(\R^2)}^{2})^{1/2} \end{align} for each $J \in P_{\delta(i_{1}')}((I_1 - b) \cap (C_{i_{1}'} - b))$. Combining \eqref{lowerdim} with \eqref{keyeq1} and using our normalization of $f$ in \eqref{fgnorm} then proves \eqref{reduction}. Thus it remains to prove \eqref{lowerdim}. First since $p \geq 2$, by Minkowski's inequality, it suffices to prove that for fixed $x \in \R^2$, \begin{equation}\label{keyeq2} \begin{aligned} &\int_{\R}|\sum_{J \in P_{\delta(i_{1}')}((I_1 - b) \cap (C_{i_{1}'} - b))}F_{J}(x, y)G(x, y)^{2}|^{p}\, dy\\ & \lsm_{p, \vep, \dim(C), N(1)} \delta(k)^{-O(p)}N(i_{1}' - i_{1})^{p\kappa_{p}(C) + p\vep}(\sum_{J \in P_{\delta(i_{1}')}((I_1 - b) \cap (C_{i_{1}'} - b))}(\int_{\R}|F_{J}(x, y)G(x, y)^{2}|^{p}\, dy)^{2/p})^{p/2}. \end{aligned} \end{equation} Indeed, once we obtain the above inequality, we can prove \eqref{lowerdim} by just integrating in $x$. For fixed $x$, the Fourier transform in $y$ of $F_{J}(x, y)G(x, y)^{2}$ is supported on an interval of length $O(\delta(i_{1}'))$ centered at $\gamma_{J}^{2}$ where $\gamma_J \gtrsim \delta(k)$ is the center of the interval $J \in P_{\delta(i_{1}')}((I_1 - b) \cap (C_{i_{1}'} - b))$. Note that the implied constant in $O(\delta(i_{1}'))$ is independent of $J$. Now suppose $F_{J_1}G^2$ and $F_{J_2}G^2$ had overlapping Fourier supports. Then $\gamma_{J_1}^{2} = \gamma_{J_2}^{2} + O(\delta(i_{1}'))$ and hence $\gamma_{J_1} = \gamma_{J_2} + O(\delta(i_{1}')\delta(k)^{-O(1)})$ since $\gamma_{J_1}, \gamma_{J_2} \gtrsim \delta(k)$. 
Thus \eqref{keyeq2} now follows if we can show that \begin{align*} \int_{\R}|&\sum_{J \in P_{\delta(i_{1}')}((I_1 - b) \cap (C_{i_{1}'} - b))}f_{c J}(y)|^{p}\, dy\\ &\lsm_{p, \vep, \dim(C), N(1)} \delta(k)^{-O(p)}N(i_{1}' - i_{1})^{p\kappa_{p}(C) + p\vep}(\sum_{J \in P_{\delta(i_{1}')}((I_1 - b) \cap (C_{i_{1}'} - b))}(\int_{\R}|f_{c J}(y)|^{p}\, dy)^{2/p})^{p/2} \end{align*} for $1 \leq c \lsm \delta(k)^{-O(1)}$ and for arbitrary Schwartz functions $f$. Here, $cJ$ denotes the interval having the same center as $J$ but of length $c|J|$. By rescaling $I_1$ and using the fact that decoupling constants are translation invariant, this then reduces to showing that \begin{align} \label{eq:key-est-last-reduction} \nms{\sum_{J \in P_{\delta(i)}(C_{i})}f_{cJ}}_{L^{p}(\R)} \lsm_{p, \vep, \dim(C), N(1)} c N(i)^{\kappa_{p}(C) + \vep}(\sum_{J \in P_{\delta(i)}(C_{i})}\nms{f_{cJ}}_{L^{p}(\R)}^{2})^{1/2} \end{align} for $c \geq 1$ and for arbitrary Schwartz functions $f$. (Here $i = i_1' - i_1$.) To show \eqref{eq:key-est-last-reduction}, we can assume that $c\geq 1$ is an integer. We can find translations $\{\tau_{k} : 1 \leq k \leq c\}$ such that for any $J \in P_{\delta(i)}(C_{i})$, the interval $cJ$ is covered by the union of $\{\tau_{k}(J) : 1 \leq k \leq c\}$. 
Therefore \begin{align*} \nms{\sum_{J \in P_{\delta(i)}(C_{i})}f_{cJ}}_{L^{p}(\R)} &= \nms{\sum_{k=1}^c\sum_{J \in P_{\delta(i)}(C_{i})}(f_{cJ})_{\tau_{k}(J)}}_{L^{p}(\R)}\\ &\leq c\sup_{k}\nms{\sum_{J \in P_{\delta(i)}(C_{i})}(f_{cJ})_{\tau_{k}(J)}}_{L^{p}(\R)}\\ &\lsm_{p, \vep, \dim(C), N(1)} c N(i)^{\kappa_{p}(C) + \vep}\sup_{k}(\sum_{J \in P_{\delta(i)}(C_{i})}\nms{(f_{cJ})_{\tau_{k}(J)}}_{L^{p}(\R)}^{2})^{1/2}\\ &\lsm_{p, \vep, \dim(C), N(1)} c N(i)^{\kappa_{p}(C) + \vep}(\sum_{J \in P_{\delta(i)}(C_{i})}\nms{f_{cJ}}_{L^{p}(\R)}^{2})^{1/2} \end{align*} where the third inequality is because decoupling is invariant under translation and \eqref{maininput}, and the last inequality is by boundedness of the Hilbert transform in $L^{p}(\R)$, $1 < p < \infty$, (see for example \cite[p. 59]{duoandikoetxea}). This completes the proof of \eqref{eq:key-est-last-reduction} and hence the proof of \Cref{key}. \end{proof} \subsection{The iteration} We first have the following lemma which allows us to interchange the last two indices in $M_{p}(j, k, i_1, i_2)$. \begin{lemma}\label{interchange} If $0 \leq \delta(j) \leq \delta(i_1) \leq \delta(i_2) \leq \delta(k) \leq 1$, then \begin{align*} M_{p}(j, k, i_1, i_2) \leq M_{p}(j, k, i_2, i_1)^{1/2}D_{3p}(\delta(j - i_2))^{1/2}. \end{align*} \end{lemma} \begin{proof} This lemma follows from $\int F^{p}G^{2p} \leq (\int F^{2p}G^{p})^{1/2}(\int G^{3p})^{1/2}$ and applying the definition of $M_{p}(j, k, i_2, i_1)$ and parabolic rescaling. \end{proof} We are now in a good position to conclude the proof of Theorem \ref{main}. After normalization, the iteration is essentially the same as in \cite{li-efficient}. The proof follows via a contradiction argument, combining the previous lemmas and using an iteration argument. We start normalizing the main objects that we have been considering in order to simplify our argument. 
Let $$D_{p}'(\delta(i)) := N(i)^{-\kappa_{p}(C)}D_{p}(\delta(i))$$ and \begin{align*} M_{p}'(j, k, i_1, i_2) := M_{p}(j, k, i_1, i_2)(N(j - i_1)N(j - i_2)^{2})^{-\kappa_{p}(C)/3}. \end{align*} With this definition, after multiplying both sides of Lemma \ref{bilinear} by $N(j - i)^{-\kappa_{p}(C)}$, we have that if $0 \leq \delta(j) \leq \delta(i) \leq 1$, then \begin{align}\label{reduction-n} D_{3p}'(\delta(j)) \lsm N(i)^{-\kappa_{p}(C)}D_{3p}'(\delta(j - i)) + N(i)^{O(1)}M_{p}'(j, i, i, i). \end{align} The key estimate Lemma \ref{key} now becomes that if $0 \leq \delta(j) \leq \delta(i_1), \delta(i_{1}'), \delta(i_2) \leq \delta(k) \leq 1$ with $\delta(i_{2})^2 \leq \delta(i_{1}') \leq \delta(i_1)$, then for any $\vep > 0$, \begin{align}\label{key-n} M_{p}'(j, k, i_1, i_2) \lsm_{p, \vep, \dim(C), N(1)}\delta(k)^{-A}N(i_{1}' - i_1)^{\vep/3}M_{p}'(j, k, i_{1}', i_2) \end{align} for some absolute constant $A$. Also, Lemma \ref{interchange} above becomes \begin{align}\label{interchange-n} M_{p}'(j, k, i_1, i_2) \leq M_{p}'(j, k, i_2, i_1)^{1/2}D_{3p}'(\delta(j - i_2))^{1/2}. \end{align} \begin{proof}[Proof of Theorem \ref{main}] Let $\ld$ be the least exponent for which the following statement is true: \begin{equation}\label{lambda} D'_{3p}(\delta(j)) \lsm_{p, \vep, \dim(C), N(1)} N(j)^{\ld + \vep} \qquad\text{for all $j \geq 0$ and $\vep > 0$.} \end{equation} Trivially, $D_{3p}'(\delta(i)) \leq N(i)^{\frac{1}{2} - \kappa_{p}(C)}$ and so \eqref{lambda} is equivalent to the statement that \begin{equation*} D'_{3p}(\delta(j)) \lsm_{p, \vep, \dim(C), N(1)} N(j)^{\ld + \vep} \qquad\text{for all $j \gtrsim 1$ and $0 < \vep \lsm 1$.} \end{equation*} If $\ld = 0$, then we are done, so we assume towards a contradiction that $\lambda > 0$. Fix an arbitrary $\vep > 0$; we may assume that $\vep < 1$. If $1 \leq a \leq \frac{j}{4i}$, then $j \geq 4ai \geq 2ai \geq ai \geq i$, which implies that the quantities $M_{p}'(j, i, 2ai, ai)$ and $M_{p}'(j, i, 4ai, 2ai)$ are well defined.
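For completeness, we note why \eqref{interchange-n} follows from Lemma \ref{interchange}: by the definition of $M_{p}'$, it suffices to check that the normalizing weights match, namely \begin{align*} (N(j - i_1)N(j - i_2)^{2})^{-\kappa_{p}(C)/3} = (N(j - i_2)N(j - i_1)^{2})^{-\kappa_{p}(C)/6}\,N(j - i_2)^{-\kappa_{p}(C)/2}, \end{align*} which holds since the exponents of $N(j - i_1)$ and $N(j - i_2)$ on both sides are $-\kappa_{p}(C)/3$ and $-2\kappa_{p}(C)/3$, respectively.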
Applying \eqref{interchange-n}, \eqref{key-n}, and \eqref{lambda} in that order yields \begin{align*} M'_{p}(j,i,2ai,ai) &\leq M'_{p}(j,i,ai,2ai)^{1/2}D'_{3p}(\delta(j-ai))^{1/2} \\ &\lesssim_{p, \vep, \dim(C), N(1)} M'_{p}(j,i,4ai,2ai)^{1/2}\delta(i)^{-A/2}N(4ai-ai)^{\vep/6}D'_{3p}(\delta(j-ai))^{1/2} \\ &\lesssim_{p, \vep, \dim(C), N(1)} M'_{p}(j,i,4ai,2ai)^{1/2}\delta(i)^{-A/2}N(4ai-ai)^{\vep/6}N(j-ai)^{\frac{\lambda}{2}+\frac{\vep}{2}} \\ &= M'_{p}(j,i,4ai,2ai)^{1/2}\delta(i)^{-A/2}N(j)^{\frac{\ld + \vep}{2}}N(i)^{-a\ld/2}. \end{align*} Hence we have shown that, for $1 \leq a \leq \frac{j}{4i}$, \begin{align*} M'_{p}(j,i,2ai,ai)\leq C_{p, \vep, \dim(C), N(1)} M'_{p}(j,i,4ai,2ai)^{1/2}\delta(i)^{-A/2}N(i)^{-a\ld/2}N(j)^{\frac{\ld + \vep}{2}} \end{align*} for some constant $C_{p, \vep, \dim(C), N(1)}$ depending only on $p$, $\vep$, $\dim(C)$, and $N(1)$, where $A$ is an absolute constant. Then, we multiply both sides of the previous inequality by $N(j)^{-\lambda}$ and raise both sides to the $1/a$ power to obtain that for every integer $a$ such that $1 \leq a \leq \frac{j}{4i}$, \begin{equation*} \begin{split} (N(j)^{-\ld}&M_{p}'(j, i, 2ai, ai))^{1/a}\\ & \leq (C_{p, \vep, \dim(C), N(1)}\delta(i)^{-A/2}N(j)^{\vep/2})^{1/a}N(i)^{-\ld/2}(N(j)^{-\ld}M_{p}'(j, i, 4ai, 2ai))^{1/(2a)}.
\end{split} \end{equation*} Therefore, for all $k\in\N$ with $2^{k + 1} \leq j/i$, the following inequality holds: \begin{align}\label{iteration core} N(&j)^{-\lambda} M'_{p}(j,i,2i,i)\nonumber\\ &\leq \left( \prod_{n=0}^{k-1} (C_{p, \vep, \dim(C), N(1)} \delta(i)^{-A/2}N(j)^{\vep/2})^{1/2^n} \right) N(i)^{-k\lambda/2} \left( N(j)^{-\lambda} M'_{p}(j, i,{2^{k+1}i}, {2^k}i) \right)^{1/2^k} \nonumber\\ &\lsm_{p, \vep, \dim(C), N(1)} (\delta(i)^{-A/2} N(j)^{\vep/2})^{\sum_{n = 0}^{k - 1}\frac{1}{2^n}} N(i)^{-k\lambda/2} N(j)^{\vep/2^k} \nonumber\\ &\lesssim_{p, \vep, \dim(C), N(1)} \delta(i)^{-O(1)}N(i)^{-k\ld/2}N(j)^{\vep} \end{align} where in the second inequality we have used that \begin{align*} M_{p}'(j, i, 2^{k + 1}i, 2^{k}i) &\leq D_{3p}'(\delta(j - 2^{k + 1}i))^{1/3}D_{3p}'(\delta(j - 2^{k}i))^{2/3}\\ &\lsm_{p, \vep, \dim(C), N(1)} N(j - 2^{k + 1}i)^{(\ld + \vep)/3}N(j - 2^{k}i)^{2(\ld + \vep)/3} \leq N(j)^{\ld + \vep} \end{align*} which follows from \eqref{trivialmult} and that $N$ is increasing. Suppose $i$, $j$, and $k$ are such that $N(i) = N(j)^{1/2^{k + 1}}$ and so by multiplicativity of $N(\cdot)$, $2^{k + 1}i = j$. Using \eqref{deltaN}, \eqref{reduction-n}, \eqref{key-n}, \eqref{lambda} and \eqref{iteration core} we conclude that \begin{align*} D'_{3p}(\delta(j))&\lesssim_{p, \vep, \dim(C), N(1)} N(i)^{-\kappa_{p}(C)}D_{3p}'(\delta(j - i)) + \delta(i)^{-O(1)}N(i)^{\vep}M_{p}'(j, i, 2i, i)\notag \\ &\lesssim_{p, \vep, \dim(C), N(1)} N(i)^{-\kappa_{p}(C)}N(j - i)^{\ld + \vep} + \delta(i)^{-O(1)}N(i)^{\vep - k\ld/2}N(j)^{\ld + \vep}\notag\\ &\lsm_{p, \vep, \dim(C), N(1)} N(j)^{\ld + \vep}N(i)^{- \ld} + N(i)^{O(\frac{1}{\dim(C)}) + \vep - k\ld/2}N(j)^{\ld + \vep}\notag \\ &\lesssim_{p, \vep, \dim(C), N(1)} N(j)^{\ld(1 - \frac{1}{2^{k + 1}}) + \vep}+N(j)^{\ld[1 - \frac{1}{2^{k + 1}}(\frac{k}{2} - \frac{O(\frac{1}{\dim(C)})}{\ld} - \frac{\vep}{\ld})]}N(j)^{\vep}. 
\end{align*} Choose $K$ so that $\frac{K}{2} - \frac{O(\frac{1}{\dim(C)})}{\ld} - \frac{\vep}{\ld} \geq 1$. We have then shown that if $j \in 2^{K + 1}\N$, then for every $\vep > 0$, $$D'_{3p}(\delta(j)) \lsm_{p, \vep, \dim(C), N(1)} N(j)^{\ld(1 - \frac{1}{2^{K + 1}}) + \vep}.$$ We now upgrade this to be a statement for all $j \geq 0$. We use almost multiplicativity, \Cref{almostmult}. Fix $n \geq 0$ and let $j$ be such that $2^{K + 1}n \leq j \leq 2^{K + 1}(n + 1)$. Note that $$N(2^{K + 1}n) \leq N(j) \leq N(2^{K + 1}(n + 1))$$ and $$\delta(2^{K + 1}n) \geq \delta(j) \geq \delta(2^{K + 1}(n + 1)).$$ From almost multiplicativity and the trivial bound, \begin{align*} D'_{3p}(\delta(j)) &\leq D'_{3p}(\delta(2^{K + 1}n))D'_{3p}(\delta(j - 2^{K + 1}n))\\ &\lsm_{p, \vep, \dim(C), N(1)} N(2^{K + 1}n)^{\ld(1 - \frac{1}{2^{K + 1}}) + \vep}N(j - 2^{K + 1}n)^{1/2}\\ &\lsm_{p, \vep, \dim(C), N(1)} N(j)^{\ld(1 - \frac{1}{2^{K + 1}}) + \vep}(\frac{N(2^{K + 1}(n + 1))}{N(2^{K + 1}n)})^{1/2}\\ &\lsm_{p, \vep, \dim(C), N(1)} N(j)^{\ld(1 - \frac{1}{2^{K + 1}}) + \vep}N(1)^{2^K}. \end{align*} Therefore we have upgraded this estimate so that for all $j \geq 0$, $$D'_{3p}(\delta(j)) \lsm_{p, \vep, \dim(C), N(1), \ld}N(j)^{\ld(1 - \frac{1}{2^{K + 1}}) + \vep}.$$ This contradicts the minimality of $\ld$. \end{proof} Following the same ideas from the iteration in \cite{li-efficient}, if there is no dependence on $\dim(C)$ and $N(1)$ in \eqref{maininput} (as is the case for our examples in Section \ref{sub:Examples}), the dependence on $\dim(C)$ and $N(1)$ in $D_{3p}(\delta(i))$ is $\exp(\exp(O(\frac{1}{\vep\dim(C)}))\log N(1))$. If there is some dependence on $\dim(C)$ and $N(1)$ in \eqref{maininput}, then an examination of the proof above shows that the same dependence shows up again in $D_{3p}(\delta(i))$.
\begin{document} \maketitle \begin{abstract} Approximation properties of the sampling-type quasi-projection operators $Q_j(f,\phi, \w\phi)$ for functions $f$ from anisotropic Besov spaces are studied. Error estimates in $L_p$-norm are obtained for a large class of tempered distributions $\w\phi$ and a large class of functions $\phi$ under the assumptions that $\phi$ has enough decay, satisfies the Strang-Fix conditions and a compatibility condition with $\w\phi$. The estimates are given in terms of moduli of smoothness and best approximations. \end{abstract} \bigskip \textbf{Keywords.} Sampling-type operators, Error estimate, Best approximation, Moduli of smoothness, Anisotropic Besov space. \medskip \textbf{AMS Subject Classification.} 41A35, 41A25, 41A17, 41A15, 42B10, 94A20, 97N50 \section{Introduction} The well-known sampling theorem (Kotel'nikov's or Shannon's formula) states that \be f(x)=\sum_{k\in\z} f(-2^{-j}k)\,\frac{\sin\pi(2^jx+k)}{\pi (2^jx+k)} \label{0} \ee for signals (functions) $f$ band-limited to $[-2^{j-1},2^{j-1}]$. Equality~(\ref0) holds only for functions $f\in L_2(\R)$ whose Fourier transform is supported on $[-2^{j-1},2^{j-1}]$. However, the right-hand side of~(\ref0) (the sampling expansion of $f$) has meaning for every continuous $f$ with sufficiently good decay, which has attracted mathematicians to study approximation properties of classical sampling and sampling-type expansions as $j\to\infty$. The most general form of such expansions is as follows: $$ Q_j(f,\phi,\w\phi)=|\det M|^{j} \sum_{k\in\zd} \langle f,\w\vp(M^j\cdot+k)\rangle \vp(M^j\cdot+k), $$ where $\phi$ is a function and $\w\phi$ is a distribution or function, $M$ is a matrix, and the inner product $\langle f,\w\vp(M^j\cdot+k) \rangle$ has meaning in some sense. The operator $Q_j(f,\phi,\w\phi)$ is usually called a quasi-projection or sampling operator. The class of quasi-projection operators is very large.
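For instance, in the one-dimensional dyadic case $d=1$, $M=2$, taking $\w\phi$ to be the Dirac delta-function and $\phi(x)=\frac{\sin \pi x}{\pi x}$, one formally recovers the classical sampling expansion~(\ref0): since $\langle f,\delta(2^j\cdot+k)\rangle=2^{-j}f(-2^{-j}k)$, we have $$ Q_j(f,\phi,\w\phi)(x)=2^{j}\sum_{k\in\z} 2^{-j}f(-2^{-j}k)\,\frac{\sin\pi(2^jx+k)}{\pi (2^jx+k)}=\sum_{k\in\z} f(-2^{-j}k)\,\frac{\sin\pi(2^jx+k)}{\pi (2^jx+k)}. $$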
In particular, it includes operators with a regular function $\w\phi$, such as scaling expansions associated with wavelet constructions (see~\cite{BDR, DB-DV1, v58, Jia2, KS, Sk1} and others) and Kantorovich-Kotelnikov operators and their generalizations (see, e.g.,~\cite{CV0, CV2, KS3, VZ2, OT15}). One has an essentially different class of operators $Q_j(f,\phi,\w\phi)$ if $\w\phi$ is a distribution. This class includes the classical sampling operators, where $\w\phi$ is the Dirac delta-function. There are a lot of papers devoted to the study of approximation properties of such operators for different classes of functions $\phi$ (see, e.g.,~\cite{Butz4, Brown, BD, Butz5, KKS, KS, SS, Si2, Unser} and the references therein). Consideration of functions $\phi$ with good decay, in particular compactly supported ones, is very useful for applications. The case where $\phi$ is a certain linear combination of $B$-splines and $\w\phi$ is the Dirac delta-function was studied, e.g., in~\cite{Butz6, Butz7, SS, Si2}. For a wide class of fast-decaying functions $\phi$ and a wide class of tempered distributions $\w\phi$, quasi-projection operators were considered in~\cite{KS}, in which error estimates in the $L_p$-norm, $2\le p\le\infty$, were given in terms of the Fourier transform of the approximated function $f$. The goal of the present paper is to improve the results of~\cite{KS} in several directions. In particular, we obtain error estimates for the case $1\le p<2$. \section{Notation} We use the standard multi-index notations. Let $\n$ be the set of positive integers, $\rd$ be the $d$-dimensional Euclidean space, $\zd$ be the integer lattice in $\rd$, $\td=\rd\slash\zd$ be the $d$-dimensional torus.
Let $x = (x_1,\dots, x_d)^{T}$ and $y =(y_1,\dots, y_d)^{T}$ be column vectors in $\rd$; then $(x, y):=x_1y_1+\dots+x_dy_d$, $|x| := \sqrt {(x, x)}$; $\nul=(0,\dots, 0)^T\in \rd$; $\z_+^d:=\{x\in\zd:~x_k\geq~{0}, k=1,\dots, d\}.$ If $r>0$, then $B_r$ denotes the ball of radius $r$ centered at $\nul$. If $\alpha\in\zd_+$, $a,b\in\rd$, we set $$[\alpha]=\sum\limits_{j=1}^d \alpha_j, \quad \alpha!=\prod\limits_{j=1}^d\alpha_j!,\quad a^b=\prod\limits_{j=1}^d a_j^{b_j}, \,\, D^{\alpha}f=\frac{\partial^{[\alpha]} f}{\partial x^{\alpha}}=\frac{\partial^{[\alpha]} f}{\partial^{\alpha_1}x_1\dots \partial^{\alpha_d}x_d}.$$ If $A$ is a $d\times d$ matrix, then $\|A\|$ denotes its operator norm in $\rd$; $A^*$ denotes the conjugate matrix to $A$; the identity matrix is denoted by $I$. A $d\times d$ matrix $M$ whose eigenvalues are bigger than 1 in modulus is called a dilation matrix. Throughout the paper we assume that such a matrix $M$ is fixed and that $m=|\det M|$, unless otherwise specified. Since the spectrum of the operator $M^{-1}$ is located in $B_r$, where $r=r(M^{-1}):=\lim_{j\to+\infty}\|M^{-j}\|^{1/j}$ is the spectral radius of $M^{-1}$, and there exists at least one point of the spectrum on the boundary of this ball, so that $r<1$, we have $$ \lim_{j\to\infty}\|M^{-j}\|=0. $$ A matrix $M$ is called isotropic if it is similar to a diagonal matrix with its eigenvalues $\lambda_1,\dots,\lambda_d$ on the main diagonal and $|\lambda_1|=\cdots=|\lambda_d|$. Note that if the matrix $M$ is isotropic then $M^*$ and $M^j$ are isotropic for all $j\in\z.$ It is well known that for an isotropic matrix $M$ and for any $j\in\z$ we have \be C^M_1 |\lambda|^j \le \|M^j\| \le C^M_2 |\lambda|^j, \label{10} \ee where $\lambda$ is one of the eigenvalues of $M$ and the constants $C^M_1, C^M_2$ do not depend on $j$.
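For example, besides the matrices $M=\lambda I$ with $|\lambda|>1$, the quincunx matrix $$ M=\begin{pmatrix} 1 & -1\\ 1 & 1 \end{pmatrix} $$ is an isotropic dilation matrix for $d=2$: its eigenvalues are $1\pm i$, so that $|\lambda_1|=|\lambda_2|=\sqrt 2>1$ and $m=|\det M|=2$.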
By $L_p$ we denote the space $L_p(\rd)$, $1\le p\le\infty$, with the usual norm $\|f\|_p=\left(\int_{\rd}|f|^p\right)^{1/p}$ for $p<\infty$, and $\|f\|_\infty={\rm ess\,sup }\,|f|$. If $f, g$ are functions defined on $\rd$ and $f\overline g\in L_1$, then $$\langle f, g\rangle:=\int\limits_{\rd}f\overline g.$$ If $f\in L_1$, then its Fourier transform is $$ \mathcal{F}f(\xi)=\widehat f(\xi)=\int\limits_{\rd} f(x)e^{-2\pi i (x,\xi)}\,dx. $$ Denote by $\mathcal{S}$ the Schwartz class of functions defined on $\rd$. The dual space of $\mathcal{S}$ is $\mathcal{S}'$, i.e. $\mathcal{S}'$ is the space of tempered distributions. If $f\in \mathcal{S}$ and $\phi \in \mathcal{S}'$, then $\langle \phi, f\rangle:= \overline{\langle f, \phi\rangle}:=\phi(f)$. If $\phi\in \mathcal{S}',$ then $\h \phi$ denotes its Fourier transform defined by $\langle \h f, \h \phi\rangle=\langle f, \phi\rangle$, $f\in \mathcal{S}$. If $\phi$ is a function defined on $\rd$, we set $$ \phi_{jk}(x):=m^{j/2}\phi(M^jx+k),\quad j\in\z,\quad k\in\rd. $$ If $\w\phi\in \mathcal{S}'$, $j\in\z, k\in\zd$, then we define $\w\phi_{jk}$ by $$ \langle f, \w\phi_{jk}\rangle:= \langle f_{-j,-M^{-j}k},\w\phi\rangle,\quad \forall f\in \mathcal{S}. $$ Denote by $\mathcal{S}_N'$, $N\ge 0$, the set of tempered distributions $\w\phi$ whose Fourier transform $\h{\w\phi}$ is a function on $\rd$ such that $|\h{\w\phi}(\xi)|\le C_{\w\phi} |\xi|^{N}$ for almost all $\xi\notin\td$ and $|\h{\w\phi}(\xi)|\le C_{{\w\phi}}$ for almost all $\xi\in\td$. Let $1\le p \le \infty$. Denote by ${\cal L}_p$ the set $$ {\cal L}_p:= \left\{ \phi\in L_p\,:\, \|\phi\|_{{\cal L}_p}:= \bigg\|\sum_{k\in\zd} \left|\phi(\cdot+k)\right|\bigg\|_{L_p(\td)}<\infty \right\}.
$$ It is known (see, e.g., \cite{v58}) that ${\cal L}_p$ is a Banach space with the norm $\|\cdot\|_{{\cal L}_p}$, and the following properties hold: ${\cal L}_1=L_1,$ $\|\phi\|_p\le \|\phi\|_{{\cal L}_p}$, $\|\phi\|_{{\cal L}_q}\le \|\phi\|_{{\cal L}_p}$ for $1\le q \le p \le\infty.$ Therefore, ${\cal L}_p\subset L_p$ and ${\cal L}_p\subset {\cal L}_q$ for $1\le q \le p \le\infty.$ If $\phi\in L_p$ and there exist constants $C>0$ and $\varepsilon>0$ such that $|\phi(x)|\le C( 1+|x|)^{-d-\varepsilon}$ for all $x\in\rd,$ then $\phi\in {\cal L}_\infty$. If $A$ is a $d\times d$ matrix and $1\le p\le \infty$, then $$ \mathcal{B}_{A,p}:=\{g\in L_p\,:\, \supp \h g\subset A\T^d \} $$ is a class of functions band-limited to $A\td$, and $$ E_{A} (f)_p:=\inf\{\Vert f-g\Vert_p\,:\, g\in \mathcal{B}_{A, p}\} $$ is the best approximation of $f\in L_p$ by band-limited functions $g$ from the class $\mathcal{B}_{A,p}$. Additionally, we consider the following special best approximation $$ E_{A}^*(f)_p:=\inf\{\Vert f-g\Vert_p\,:\, g\in \mathcal{B}_{A, p}\cap L_2\}, $$ which is equivalent to $E_{A} (f)_p$ for all $1\le p<\infty$ (see Lemma~\ref{lemE} below). We will use the following anisotropic Besov spaces with respect to the matrix~$M$. We say that $f\in \mathbb{B}_{p,q}^s (M)$, $1\le p\le\infty$, $0<q\le \infty$, and $s>0$, if $f\in L_p$ and $$ \Vert f\Vert_{\mathbb{B}_{p,q}^s (M)}:=\Vert f\Vert_p+\(\sum_{\nu=1}^\infty m^{\frac sd q\nu} E_{M^\nu} (f)_p^q\)^\frac1q<\infty. $$ If $A$ is a $d\times d$ nonsingular matrix, $f\in L_p$ and $s\in \N$, then $$ \Omega_s(f,A^{-1})_p:=\sup_{|At|<1, t\in \R^d} \Vert \Delta_t^s f\Vert_p, $$ where $$ \Delta_t^s f(x):=\sum_{\nu=0}^s (-1)^\nu \binom{s}{\nu} f(x+t\nu). $$ This is the so-called (total) anisotropic modulus of smoothness associated with $A$. The classical modulus of smoothness is defined by $$ \omega_s(f,h)_p:=\sup_{|t|<h} \Vert \Delta_t^s f\Vert_p,\quad h>0. 
$$ As usual, if $\{a_k\}_{k}$ is a sequence, then $$ \left\Vert \{a_k\}_{k}\right\Vert_{\ell_{p}}:=\left\{ \begin{array}{ll} \displaystyle\bigg(\sum\limits_{k\in \Z^d}|a_k|^p\bigg)^\frac1p, & \hbox{if $1\le p<\infty$,} \\ \displaystyle\sup\limits_{k\in \Z^d}|a_k|, & \hbox{if $p=\infty$.} \end{array} \right. $$ Finally, for an appropriate function $h\,:\, \R^d \to \mathbb{C}$ we define the Fourier multiplier-type operator $\Lambda_h$ by $$ \Lambda_h (f) := \mathcal{F}^{-1}(h \h f ),\quad f\in L_2. $$ \section{Preliminary information and auxiliary results} \label{sa} Sampling-type expansions $\sum_{k\in\zd} \langle f, {\w\phi}_{jk}\rangle \phi_{jk}$ are elements of the shift-invariant spaces generated by~$\phi$. It is well known that a function $f$ can be approximated by elements of such spaces only if $\phi$ satisfies a special property, the so-called Strang-Fix conditions. \begin{defi} \label{d2} A function $\phi$ is said to satisfy {\em the Strang-Fix conditions} of order $s\in \N$ if there exists $\delta\in(0,1/2)$ such that for any $k\in\zd,$ $k \neq \nul$, $\h\phi$ is boundedly differentiable up to order~$s$ on $\{|\xi+k|<\delta\}$ and $D^{\beta}\h{\phi}(k) = 0$ for $[\beta]<s$. \end{defi} Approximation properties of the quasi-projection operators $$ Q_j(f,\phi,\w\phi)=\sum_{k\in\zd} \langle f, {\w\phi}_{jk}\rangle \phi_{jk}, $$ with $\w\phi\in S'_N$, were studied in~\cite{Sk1}, \cite{KS} and~\cite{KKS} for different classes of functions $\phi$. Since $\w\phi$ is a tempered distribution, the inner product $\langle f, \w\phi_{jk}\rangle $ has meaning only for functions $f$ in $\mathcal{S}$. To extend the class of functions $f$, the inner product $\langle f, {\w\phi}_{jk}\rangle $ was replaced by $\langle \h f, \h{\w\phi_{jk}}\rangle$ in these papers. In particular, for $\phi\in {\cal L}_p$ the following result was obtained in~\cite{KS} (see Theorems 4 and 5).
\begin{theo} \label{theoQj} Let $2\le p \le \infty$, $1/p+1/q=1$, $s\in\n$, $N\in\z_+$, $\delta\in(0, 1/2)$, $\varepsilon>0$. Let also $M$ be an isotropic dilation matrix, $\phi \in {\cal L}_p$, and $\w\phi \in \mathcal{S}_N'$. Suppose \begin{itemize} \setlength{\itemsep}{0cm} \setlength{\parskip}{0cm} \item[$\star$] there exists $B_{\phi}>0$ such that $\sum\limits_{k\in\zd} |\h\phi(\xi+k)|^q<B_{\phi}$ for all $\xi\in\rd$; \item[$\star$] $\h\phi$ is boundedly differentiable up to order $s$ on $\{|\xi+l|<\delta\}$ for all $l\in\zd\setminus\{\nul\}$; the function $\sum\limits_{l\in\zd,\, l\neq\nul}|D^\beta \h \phi (\xi+l)|^q$ is bounded on $\{|\xi|<\delta\}$ for $[\beta]=s$, and the Strang-Fix conditions of order $s$ hold for $\phi$; \item[$\star$] $\h\phi\h{\w\phi}$ is boundedly differentiable up to order $s$ on $\{|\xi|<\delta\}$, and $D^{\beta}(1-\h\phi\h{\w\phi})({\bf 0}) = 0$ for all $[\beta]<s$. \end{itemize} \noindent If $f\in L_p$, $\h f\in L_q$, and $\h f(\xi)=\mathcal{O}(|\xi|^{-N-d-\varepsilon})$ as $|\xi|\to\infty$, then $$ \bigg\|f-\sum_{k\in\zd} \langle \widehat{f}, \h{\w\phi_{jk}}\rangle \phi_{jk}\bigg\|_p\le \begin{cases} C |\lambda|^{-j(N+\frac dp + \varepsilon)} &\mbox{if } s> N+\frac dp + \varepsilon\\ C (j+1)^{1/q} |\lambda|^{-js} &\mbox{if } s= N+\frac dp + \varepsilon \\ C|\lambda|^{-js} &\mbox{if } s< N+\frac dp + \varepsilon \end{cases}, $$ where $\lambda$ is an eigenvalue of $M$ and $C$ does not depend on $j$. \end{theo} There are several drawbacks in Theorem~\ref{theoQj}. First, an error estimate is obtained only for the case $p\ge 2$. Second, there are additional restrictions on the function $f$. Even in the case of regular functions $\w\phi$ for which the inner product $\langle{f}, {\w\phi_{jk}}\rangle $ has meaning for every $f\in L_p$, the error estimate is obtained only under the additional assumption $\h f(\xi)=\mathcal{O}(|\xi|^{-d-\varepsilon})$ as $|\xi|\to\infty$.
Third, although Theorem~\ref{theoQj} provides an approximation order for $Q_j(f,\phi,\w\phi)$, more accurate error estimates were not obtained, in contrast to common results in approximation theory, where estimates are usually given in terms of moduli of smoothness. The described shortcomings of Theorem~\ref{theoQj} were avoided in~\cite{KS3}, where the Kantorovich-type quasi-projection operators $Q_j(f,\phi,\w\phi)$ with regular functions $\w\phi$ and band-limited functions $\phi \in {\cal B}$ were studied. Here ${\cal B}$ is the class of functions $\phi$ given by $$ \phi(x)=\int\limits_{\rd}\theta(\xi)e^{2\pi i(x,\xi)}\,d\xi, $$ where $\theta$ is supported in a parallelepiped $\Pi:=[a_1, b_1]\times\dots\times[a_d, b_d]$ and such that $\theta\big|_\Pi\in C^d(\Pi)$. In particular, the following result was obtained in~\cite[Theorem~17]{KS3}. \begin{theo} \label{KSth2} Let $f\in L_p$, $1<p<\infty$, and $s\in\n$. Suppose $\phi\in\cal B$, ${\rm supp\,} \h\phi\subset B_{1-\varepsilon}$ for some $\varepsilon\in (0,1)$, $\h\phi\in C^{s+d+1}(B_\delta)$ for some $\delta>0$; $\w\phi \in\mathcal{B}\cup \mathcal{L}_q$, $1/p+1/q=1$, $\h{\w\phi}\in C^{s+d+1}(B_\delta)$ and $D^{\beta}(1-\h\phi\overline{\h{\w\phi}})({\bf 0}) = 0$ for all $\beta\in\zd_+$, $[\beta]<s$. Then \begin{equation*} \bigg\|f-\sum\limits_{k\in\zd} \langle f,\widetilde\phi_{jk}\rangle \phi_{jk}\bigg\|_p\le C \omega_s\(f,\|M^{-j}\|\)_p, \end{equation*} where $C$ does not depend on $f$ and $j$. \end{theo} A similar estimate for other classes of functions $\phi$ and $\w\phi$ follows from the results of Jia. Namely, the next theorem can be obtained by combining Theorem~3.2 in~\cite{Jia2} with Lemma~3.2 in~\cite{Jia1}. \begin{theo} \label{Jia} Let $f\in L_p$, $1\le p<\infty$ or $f\in C(\rd)$ and $p=\infty$, $M=mI$, and $s\in\n$.
Suppose $\phi$ and $\w\phi$ are compactly supported functions, the Strang-Fix conditions of order $s$ hold for $\phi$, and $D^{\beta}(1-\h\phi\overline{\h{\w\phi}})({\bf 0}) = 0$ for all $\beta\in\zd_+$, $[\beta]<s$. Then \begin{equation*} \bigg\|f-\sum\limits_{k\in\zd} \langle f,\widetilde\phi_{jk}\rangle \phi_{jk}\bigg\|_p\le C \omega_s\(f,m^{-j}\)_p, \end{equation*} where $C$ does not depend on $f$ and $j$. \end{theo} Our goal is to avoid the above-mentioned drawbacks of Theorem~\ref{theoQj} in the case of $\phi\in {\cal L}_p$ and $\w\phi\in S'_N$. For this, we need to specify the tempered distributions $\w\phi$. We will say that a tempered distribution $\w\phi$ belongs to the class $\mathcal{S}_{N,p}'$ for some $N\ge 0$ and $1\le p\le \infty$ if $\w\phi \in \mathcal{S}_N'$ and there exists a constant $C_{\w\phi,p}$ such that \begin{equation}\label{DefS} \Vert \Lambda_{{\overline{\h{\w\phi}}}}(T_\mu) \Vert_p \le C_{\w\vp,p} m^{\frac{N\mu}d} \Vert T_\mu \Vert_p \end{equation} for all $\mu\in \Z_+$ and $T_\mu\in \mathcal{B}_{M^\mu,p}\cap L_2$. Obviously,~\eqref{DefS} is satisfied if $\w\phi$ is the Dirac delta-function. If $M$ is an isotropic matrix, then any distribution $\w\phi$ corresponding to some differential operator (see~\cite{KS}, \cite{KKS}), namely $$ \h{\w\phi} (\xi) = \sum_{[\beta]\le N} c_\beta (2\pi i \xi)^{\beta}, \quad N\in \z_+, $$ is also in $\mathcal{S}_{N,p}'$. This easily follows from the well-known Bernstein inequality (see, e.g.,~\cite[p.~115]{Nik}) $$ \Vert g'\Vert_{L_p} \le \sigma \Vert g \Vert_{L_p}, \quad g\in L_p(\R),\quad \supp \h g \subset [-\sigma,\sigma]. $$ To extend the operator $Q_j(f,\phi,\w\phi)$ with $\w\phi \in \mathcal{S}_{N,p}'$ to the space $ \mathbb{B}_{p,1}^{d/p+N}(M)$, we need the following lemmas. \begin{lem}\label{lemMZ} {\sc (\cite[Theorem 4.3.1]{TB})} Let $1\le p<\infty$, $f\in L_p$, and $\supp \h f\subset [-\sigma,\sigma]^d$, $\sigma>0$.
Then $$ \sum_{k\in \Z^d} \max_{x\in Q_{k,\sigma}} |f(x)|^p \le C(p,d) \sigma^d\Vert f\Vert_p^p, $$ where $Q_{k,\sigma}=[-\frac1\sigma,\frac1\sigma]^d+\frac{2k}\sigma$. \end{lem} \begin{lem} {\sc (\cite[Theorem 2.1]{v58})} \label{lemKK1} Let $1\le p\le\infty$, $\phi\in {\cal L}_p$, and $\{a_k\}_{k\in\zd}\subset \mathbb{C}$. Then \begin{equation*} \bigg\Vert \sum_{k\in \Z^d} a_k \phi_{0k}\bigg\Vert_p \le \|\phi\|_{\mathcal{L}_p} \left\Vert \{a_k\}_{k}\right\Vert_{\ell_{p}}. \end{equation*} \end{lem} \begin{lem} {\sc (\cite[Proposition 5]{NU})} \label{lemNU} Let $1\le p\le\infty$, $1/p+1/q=1$, $f\in L_p$, and $\phi\in {\cal L}_q$. Then \begin{equation*} \left\Vert \{\langle f, \phi_{0k}\rangle\}_{k}\right\Vert_{\ell_{p}} \le \|\phi\|_{\mathcal{L}_q}\|f\|_p . \end{equation*} \end{lem} \begin{lem}\label{lemE} Let $1\le p<\infty$ and $A$ be a $d\times d$ matrix. Then \begin{equation}\label{ee} E_{A}(f)_p\le E_{A}^*(f)_p\le C(p,d) E_{A}(f)_p. \end{equation} \end{lem} {\bf Proof.} We need to prove only the second inequality in~\eqref{ee}. Let $Q\in \mathcal{B}_{A,p}$ be such that $\Vert f-Q\Vert_p\le 2E_A(f)_p$. If $1\le p\le 2$, then using the embedding $\mathcal{B}_{A,p}\subset \mathcal{B}_{A,2}$ (see, e.g.,~\cite[Theorem~3.3.5]{Nik}), we get $ E_A^*(f)_p\le \Vert f-Q\Vert_p\le 2E_A(f)_p. $ Now consider the case $p >2$. Let $T\in \mathcal{S}$ be such that $\Vert f-T\Vert_p\le E_A(f)_p$. Denote $S_A g:=\mathcal{F}^{-1} \chi_{A\T^d} * g$. It is clear that $S_A T\in L_p\cap L_2$ and $S_A Q=Q$.
Thus, taking into account that $S_A$ is bounded in $L_p$ for all $1<p<\infty$ (see, e.g.,~\cite[Section~1.5.7]{Nik}), we derive \begin{equation*} \begin{split} E_A^*(f)_p&\le \Vert f-S_A T\Vert_p\le \Vert f-Q\Vert_p+\Vert S_A(Q-T)\Vert_p\\ &\le \Vert f-Q\Vert_p+C_1 \Vert Q-T\Vert_p\le (1+C_1)\Vert f-Q\Vert_p+C_1 \Vert f-T\Vert_p\le C_2 E_A(f)_p, \end{split} \end{equation*} which proves the lemma.~~$\Diamond$ In particular, Lemma~\ref{lemE} implies that for any $f\in L_p$, $1\le p<\infty$, there exists a function $T \in \mathcal{B}_{A,p}\cap L_2$ such that $\Vert f-T\Vert_p\le C(p,d) E_{A}(f)_p$. \begin{lem} \label{lem1} Let $1\le p<\infty$, $N\ge 0$, $\delta \in (0,1]$, $\w\vp\in \mathcal{S}_{N,p}'$, and $f\in \mathbb{B}_{p,1}^{d/p+N}(M)$. Suppose that the functions $T_\mu \in \mathcal{B}_{\delta M^\mu,p}\cap L_2$, $\mu\in\z_+$, are such that $$ \Vert f-T_\mu \Vert_p\le C(p,d) E_{\delta M^\mu}(f)_p. $$ Then the sequence $\{\{\langle \h T_\mu, \h{\w\phi}_{0k}\rangle\}_k\}_{\mu=0}^\infty$ converges in $\ell_p$ as $\mu\to\infty$; a fortiori the sequence $\{\langle \h T_\mu, \h{\w\phi}_{0k}\rangle\}_{\mu=0}^\infty$ converges uniformly with respect to $k\in\zd$, and the limit does not depend on the choice of functions~$T_\mu$; $$ \sum_{\mu=n}^\infty \Vert \{\langle \h T_{\mu+1} -\h T_\mu, \h{\w\phi}_{0k}\rangle \}_{k} \Vert_{\ell_p}\le C\sum_{\mu=n}^\infty m^{\mu(\frac{N}d+\frac1p)}E_{\delta M^\mu}(f)_p,\quad \forall n\in\z_+ $$ where the constant $C$ depends only on $\w\vp$, $d$, $p$, and $M$. \end{lem} {\bf Proof.} Set $$ F(x):=\int\limits_{\rd}\Big(\h T_{\mu+1}({M^*}^{\mu+1}\xi)-\h T_{\mu}({M^*}^{\mu+1}\xi)\Big) \overline{\h{\w \phi}({M^*}^{\mu+1}\xi)}e^{2\pi i(\xi,x)}\,d\xi.
$$ We have \begin{equation} \label{121} \begin{split} \Vert \{\langle \h T_{\mu+1}, \h{\w\phi}_{0k}\rangle - \langle \h T_\mu, \h{\w\phi}_{0k}\rangle \}_{k} \Vert_{\ell_p}^p&=\sum_{k\in\zd}\bigg|\int\limits_{\rd}\Big(\h T_{\mu+1}(\xi)-\h T_{\mu}(\xi)\Big) \overline{\h{\w \phi}(\xi)}e^{-2\pi i(\xi,k)}\,d\xi\bigg|^p\\ &=m^{p(\mu+1)}\sum_{k\in\zd}|F(M^{\mu+1}k)|^p. \end{split} \end{equation} Since $$ \supp\h T_{\mu+1}({M^*}^{\mu+1}\cdot)\subset \td,\quad \supp\h T_{\mu}({M^*}^{\mu+1}\cdot)\subset \sqrt d\|M^{-1}\|\td, $$ one can easily see that $$ \supp \h F\subset [-\sigma, \sigma]^d,\quad \text{where}\ \ \sigma=\sigma(M,d). $$ Thus, by Lemma~\ref{lemMZ}, taking into account that each set $Q_{r,\sigma}$ contains only a finite number (depending only on $M$ and $d$) of points $M^{\mu+1}k$, $k\in\zd$, we obtain $$ \sum_{k\in\zd}|F(M^{\mu+1}k)|^p\le C_1\sigma^d\int\limits_{\rd}|F(x)|^p\,dx. $$ After the change of variables, the above inequality yields $$ m^{p(\mu+1)}\sum_{k\in\zd}|F(M^{\mu+1}k)|^p\le C_1\sigma^dm^{\mu+1}\Vert \Lambda_{\overline{\h{\w\phi}}}(T_{\mu+1}-T_{\mu}) \Vert_p^{p}. $$ Using this estimate together with~\eqref{121} and~\eqref{DefS}, we get $$ \Vert \{\langle \h T_{\mu+1}, \h{\w\phi}_{0k}\rangle - \langle \h T_\mu, \h{\w\phi}_{0k}\rangle \}_{k} \Vert_{\ell_p}\le C_2 m^{\mu(\frac{N}d+\frac1p)}\|T_{\mu+1}-T_\mu\|_p. $$ Hence, for any $n\in\z_+$, we have \begin{equation*} \begin{split} \sum_{\mu=n}^\infty \Vert \{\langle \h T_{\mu+1}, \h{\w\phi}_{0k}\rangle -\langle \h T_\mu, \h{\w\phi}_{0k}\rangle \}_{k} \Vert_{\ell_p}&\le C_4\sum_{\mu=n}^\infty m^{\mu(\frac{N}d+\frac1p)}\|T_{\mu+1}-T_\mu\|_p\\ &\le 2 C_4\sum_{\mu=n}^\infty m^{\mu(\frac{N}d+\frac1p)}E_{\delta M^\mu}(f)_p, \end{split} \end{equation*} and it is easy to see that $C_4$ depends only on $\w\phi$, $d$, $p$, and $M$.
The latter series is convergent because $f\in \mathbb{B}_{p,1}^{d/p+N}(M)$, which yields that $\{\{\langle \h T_\mu, \h{\w\phi}_{0k}\rangle\}_k\}_{\mu=0}^\infty$ is a Cauchy sequence in $\ell_p$. In particular, this implies that the sequence $\{\langle \h T_\mu, \h{\w\phi}_{0k}\rangle\}_{\mu=0}^\infty$ is convergent for every $k\in\zd$. It remains to check that the limit does not depend on the choice of functions $T_\mu$. Let $\{T'_\mu\}_\mu$ be another sequence of functions with the same properties. Repeating all of the above arguments with $T'_\mu$ in place of $T_{\mu+1}$, we obtain $$ \Vert \{\langle \h T'_{\mu}, \h{\w\phi}_{0k}\rangle -\langle\h T_\mu, \h{\w\phi}_{0k}\rangle \}_{k} \Vert_{\ell_p}\le C_5 m^{\mu(\frac{N}d+\frac1p)}\|T'_{\mu}-T_\mu\|_p\le 2 C_5 m^{\mu(\frac{N}d+\frac1p)}E_{\delta M^\mu}(f)_p, $$ where $C_5$ depends only on $\w\phi$, $d$, $p$, and $M$. It follows that $\langle \h T'_{\mu}, \h{\w\phi}_{0k}\rangle -\langle \h T_\mu, \h{\w\phi}_{0k}\rangle\to 0$ as $\mu\to\infty$ for any $k\in\zd$, which completes the proof.~~$\Diamond$ \bigskip Consider the functional $\langle f,\w\vp_{jk}\rangle$. If $\w\phi\in \mathcal{S}'_N$, then $\langle f,\w\vp_{jk}\rangle$ has meaning whenever $f\in \mathcal{S}$. Due to Lemma~\ref{lem1} we can extend the linear functional $\langle f,\w\vp_{jk}\rangle$ from $\mathcal{S}$ to $\mathbb{B}_{p,1}^{d/p+N}(M)$ as follows. \begin{defi} \label{def0} Let $1\le p<\infty$, $N\ge 0$, $\w\phi\in S'_{N, p}$, and the functions $T_\mu$ be as in Lemma~\ref{lem1}. For every $f\in \mathbb{B}_{p,1}^{d/p+N}(M)$ and $k\in\zd$, we set $$ \langle f,\w\vp_{0k}\rangle:=\lim\limits_{\mu\to\infty}\langle\h T_\mu,\h{\w\phi}_{0k}\rangle $$ and $$ \langle f,\w\vp_{jk}\rangle:= m^{-j/2}\langle f(M^{-j}\cdot),\w\vp_{0k}\rangle, \quad j\in\z_+.
$$ \end{defi} Now, the quasi-projection operators $$ Q_j(f,\phi,\w\phi)=\sum_{k\in\zd}\langle f,\w\vp_{jk}\rangle\phi_{jk} $$ are defined on the space $\mathbb{B}_{p,1}^{d/p+N}(M)$ for a wide class of appropriate functions $\phi$. \begin{rem} \label{prop002} If $\phi\in {\cal L}_p$, then the operator $Q_j(f,\phi,\w\phi)$ is well defined because the series $\sum\limits_{k\in\zd}\langle f,\w\vp_{jk}\rangle\phi_{jk}$ converges unconditionally in $L_p$, which follows from Lemmas~\ref{lemKK1} and~\ref{lem1}. \end{rem} \begin{rem} \label{prop003} If the Fourier transform of a function $f$ has enough decay for the inner product $\langle \h f,\h{{\w \vp}_{0k}}\rangle$ to have meaning, then it is natural to define $Q_j(f,\phi,\w\phi)$ setting $\langle f,\w\vp_{0k}\rangle:=\langle \h f,\widehat{{\w \vp}_{0k}}\rangle$ (see Theorem~\ref{theoQj}). Such an operator $Q_j(f,\phi,\w\phi)$ coincides with the one given by Definition~\ref{def0}. \end{rem} {\bf Proof.} We assume that $f\in L_p$, $\h f\in L_q$, $1/p+1/q=1$, $\h f(\xi)=\mathcal{O}(|\xi|^{-N-d-\varepsilon})$ as $|\xi|\to\infty$, $\varepsilon>0$, $f\in \mathbb{B}_{p,1}^{d/p+N}(M)$, $\w\phi\in S'_N$, and $M$ is an isotropic matrix. First we consider the case $p<2$. Let $T_\mu \in\mathcal B_{\delta M^\mu, p}$ be such that $ \|f-T_\mu\|_p\le 2E_{\delta M^\mu}(f)_p $. Since $\mathcal{B}_{{\delta M^\mu},p}\subset \mathcal{B}_{{\delta M^\mu},2}$ (see, e.g.,~\cite[Theorem~3.3.5]{Nik}), we have $T_\mu\in\mathcal B_{\delta M^\mu, p}\cap L_2$ and, due to the Hausdorff-Young inequality, \be \label{301} \|\h f-\h T_{\mu}\|_q\le \|f-T_{\mu}\|_p\le 2E_{\delta M^\mu}(f)_p. \ee Next we have \begin{equation*} \begin{split} |\langle \h f,\h{\w\vp_{0k}}\rangle-\langle \h T_\mu,\h{\w\vp_{0k}}\rangle|\le C_1 &\int\limits_{|\xi|\le \delta\sqrt d\|M^\mu\|} (1+|\xi|)^N |\h f(\xi)-\h T_\mu(\xi)| d\xi \\ &\qquad+ C_2 \int\limits_{|\xi|\ge \delta\sqrt d\|M^\mu\|} (1+|\xi|)^N |\h f(\xi)-\h T_\mu(\xi)| d\xi =:C_1I_1+C_2 I_2.
\end{split} \end{equation*} If $|\xi|\ge \delta\sqrt d\|M^\mu\|$, then $\xi\not\in \delta M^\mu\td$, and hence $\h T_\mu(\xi)=0$, which yields $$ I_2=\int\limits_{|\xi|\ge \delta\sqrt d\|M^\mu\|} (1+|\xi|)^N |\h f(\xi)| d\xi \too_{\mu\to\infty}0. $$ Using H\"older's inequality and~\eqref{301}, we derive $$ I_1\le 2\Bigg(\int\limits_{|\xi|\le \delta\sqrt d\|M^\mu\|} (1+|\xi|)^{pN}\,d\xi\Bigg)^{1/p}E_{\delta M^\mu}(f)_p \le C_3\|M^\mu\|^{N+\frac dp}E_{\delta M^\mu}(f)_p. $$ Taking into account that $f\in \mathbb{B}_{p,1}^{d/p+N}(M)$ and the inequality $\|M^\mu\|\le C(d, M) m^{\mu/d}$, which holds for any isotropic matrix, we obtain that $I_1\to0$ as $\mu\to\infty$. Thus \be \label{302} |\langle \h f,\h{\w\vp_{0k}}\rangle-\langle \h T_\mu,\h{\w\vp_{0k}}\rangle| \too_{\mu\to\infty}0. \ee Let now $p\ge2$. Denote $T_\mu=\mathcal{F}^{-1}(\chi_{\delta M^\mu\td})*{g_\mu}$, where $g_\mu \in \mathcal{S}$ is such that \be \label{303} \Vert \h f-\h{g_\mu}\Vert_q\le E_{\delta M^\mu}(f)_p. \ee Obviously, $T_\mu\in\mathcal B_{\delta M^\mu, p}\cap L_2$ and $\h T_\mu =\chi_{\delta M^\mu\td}\h{g_\mu}$. Next we have \begin{equation*} \begin{split} &|\langle \h f,\h{\w\vp_{0k}}\rangle-\langle \h T_\mu,\h{\w\vp_{0k}}\rangle|\\ &\le \int\limits_{\rd} (1+|\xi|)^N |\h f(\xi)-\chi_{\delta M^\mu\td}(\xi)\h f(\xi)| d\xi +\int\limits_{\rd} (1+|\xi|)^N |\chi_{\delta M^\mu\td}(\xi)(\h f(\xi)-\h{g_\mu}(\xi))| d\xi\\ &= \int\limits_{\xi\not \in\delta M^\mu\td} (1+|\xi|)^N |\h f(\xi)| d\xi +\int\limits_{\xi\in\delta M^\mu\td} (1+|\xi|)^N |\h f(\xi)-\h{g_\mu}(\xi)| d\xi. \end{split} \end{equation*} Due to the same arguments as above, using~\eqref{303} instead of~\eqref{301}, we conclude again that relation~\eqref{302} holds true.
To complete the proof it remains to note that the limit of the sequence $\{\langle \h T_\mu,\h{\w\vp_{0k}}\rangle\}_\mu$ does not depend on the choice of appropriate functions $T_\mu$.~~$\Diamond$ \medskip \medskip \begin{lem} \label{thKS} Let $1\le p< \infty$, $1/p+1/q=1$, $\delta\in(0,1]$, $\phi\in {\cal L}_p$, and the functions $T_\nu$, $\nu\in \Z_+$, be as in Lemma~\ref{lem1}. \noindent $(i)$ If $N\ge 0$, $\w\phi\in S'_{N,p}$, $f\in \mathbb{B}_{p,1}^{d/p+N}(M)$, and $\nu\in\z_+$, then \begin{equation}\label{KS000} \begin{split} \|f-Q_0(f,\vp,\w\vp)\|_p\le \|T_\nu-Q_0(T_\nu,\vp,\w\vp)\|_p+ C\sum_{\mu=\nu}^\infty m^{\mu(\frac{N}d+\frac1p)}E_{\delta M^\mu}(f)_p, \end{split} \end{equation} where the constant $C$ does not depend on $f$ and $\nu$. \noindent $(ii)$ If $\w\vp \in \mathcal{L}_q$, $f\in L_p$, and $\nu\in\z_+$, then \begin{equation}\label{KS000_K} \begin{split} \|f-Q_0(f,\vp,\w\vp)\|_p\le \|T_\nu-Q_0(T_\nu,\vp,\w\vp)\|_p+ CE_{\delta M^\nu}(f)_p, \end{split} \end{equation} where the constant $C$ does not depend on $f$ and $\nu$. \end{lem} {\bf Proof.} We have \begin{equation}\label{_1} \begin{split} \|f-Q_0(f,\vp,\w\vp)\|_p\le \|f-T_\nu\|_p+\|T_\nu-Q_0(T_\nu,\vp,\w\vp)\|_p+\|Q_0(f-T_\nu,\vp,\w\vp)\|_p. \end{split} \end{equation} Under assumptions of item $(i)$, by Definition~\ref{def0}, using Lemmas~\ref{lemKK1} and~\ref{lem1}, we derive \begin{equation}\label{_2} \begin{split} \|Q_0(f-T_\nu,\vp,\w\vp)\|_p&\le C_1\|\{\langle f-T_\nu, \w\phi_{0k}\rangle\}_k\|_{\ell_p}\le C_2\sum_{\mu=\nu}^\infty\|\{\langle T_{\mu+1}-T_\mu, \w\phi_{0k}\rangle\}_k\|_{\ell_p}\\ &\le C_3\sum_{\mu=\nu}^\infty m^{\mu(\frac{N}d+\frac1p)}E_{\delta M^\mu}(f)_p. \end{split} \end{equation} Combining~\eqref{_1} and~\eqref{_2}, we get~\eqref{KS000}. 
Under assumptions of item $(ii)$, it follows from Lemmas~\ref{lemKK1} and~\ref{lemNU} that \begin{equation}\label{_3} \begin{split} \|Q_0(f-T_\nu,\vp,\w\vp)\|_p&\le \|\phi\|_{\mathcal{L}_p}\|\{\langle f-T_\nu, \w\phi_{0k}\rangle\}_k\|_{\ell_p}\\ &\le \|\phi\|_{\mathcal{L}_p}\Vert \w\vp \Vert_{\mathcal{L}_q}\Vert f-T_\nu\Vert_p\le C_4E_{\delta M^\nu}(f)_p. \end{split} \end{equation} Combining~\eqref{_1} and~\eqref{_3}, we obtain~\eqref{KS000_K}.~~$\Diamond$ \bigskip We will use the following Jackson-type theorem (see, e.g.,~\cite[Theorem 5.2.1 (7)]{Nik} or~\cite[5.3.2]{Timan}). \begin{lem}\label{lemJ} Let $f\in L_p$, $1\le p\le \infty$, and $s\in \N$. Then \begin{equation*} E_{A}(f)_p\le C \Omega_{s}(f,A^{-1})_p, \end{equation*} where $C$ is a constant independent of $f$ and $A$. \end{lem} \begin{lem}\label{lem1'} {\sc (See~\cite{Wil}.)} Let $1\le p\le\infty$, $s\in \N$, and $T\in \mathcal{B}_{I,p}$. Then \begin{equation*} \sum_{[\beta]=s} \Vert D^\beta T\Vert_p\le C\omega_s(T,1)_p, \end{equation*} where the constant $C$ depends only on $p$ and $s$. \end{lem} \section{Main results} \begin{theo} \label{corMOD1'+} Let $1\le p<\infty$, $N\ge 0$, $s\in \N$, $\delta\in (0,1/2)$. Suppose that $\vp \in \mathcal{L}_p$ and $\w\phi\in S'_{N,p}$ satisfy the following conditions: \begin{itemize} \item[1)] $\h{\w\phi}\in C^{s+d+1}({2\delta\td)}$; \item[2)] $\h\phi\in C^{s+d+1}\(\cup_{l\in\zd}(2\delta\td+l)\)$; \item[3)] the Strang-Fix conditions of order $s$ hold for $\phi$; \item[4)] $D^{\beta}(1-\h\phi\overline{\h{\w\phi}})({\bf 0}) = 0$ for all $\beta\in\zd_+$, $[\beta]<s$; \item[5)]$ \sum\limits_{l\ne\nul}\sup\limits_{\xi\in{2\delta\td}}|D^{\beta}\h\phi(\xi+l)|\le B_\phi$ for all $\beta\in\zd_+$, $s\le [\beta]\le s+d+1$. 
\end{itemize} Then, for every $f\in \mathbb{B}_{p,1}^{d/p+N}(M)$ and $j\in \z_+$, we have \begin{equation}\label{110} \begin{split} \Vert f - Q_j(f,\phi,\w\phi) \Vert_p\le C \(\Omega_s(f, M^{-j})_p+m^{-j(\frac1p+\frac Nd)}\sum_{\nu=j}^\infty m^{(\frac1p+\frac Nd)\nu} E_{\delta M^\nu}(f)_p\), \end{split} \end{equation} and if, moreover, $\w\vp \in \mathcal{L}_q$, $1/p+1/q=1$, then for every $f\in L_p$ \begin{equation}\label{110K} \Vert f - Q_j(f,\phi,\w\phi) \Vert_p\le C \Omega_s(f,M^{-j})_p, \end{equation} where the constant $C$ does not depend on $f$ and $j$. \end{theo} {\bf Proof.} First of all note that it suffices to prove~\eqref{110} and~\eqref{110K} for $j=0$. Indeed, $$ \bigg\|f-\sum\limits_{k\in\zd}\langle f,\widetilde\phi_{jk}\rangle \phi_{jk}\bigg\|_p= \bigg\|m^{-j/p}f(M^{-j}\cdot)-\sum\limits_{k\in\zd}\langle m^{-j/p}f(M^{-j}\cdot),\widetilde\phi_{0k}\rangle \phi_{0k}\bigg\|_p. $$ Obviously, $m^{-j/p}f(M^{-j}\cdot)\in \mathbb{B}_{p,1}^{d/p+N}(M^0)$ whenever $f\in \mathbb{B}_{p,1}^{d/p+N}(M)$. We have also that $$ E_{\delta M^\nu}(m^{-j/p}f(M^{-j}\cdot))_p\le E_{\delta M^{\nu+j}}(f)_p $$ and $$ \omega_s(f(M^{-j}\cdot), 1)_p= m^{j/p}\Omega_s(f, M^{-j})_p, $$ which yield~\eqref{110} and~\eqref{110K} whenever these relations hold true for $j=0$. Let $T \in \mathcal{B}_{\delta I,p}\cap L_2$ be such that $\Vert f-T\Vert_p\le 2 E_{\delta I}(f)_p$. Due to Lemma~\ref{thKS}, we need only to check that \be \label{104} \bigg\|T-\sum\limits_{k\in\zd}\langle T,\widetilde\phi_{0k}\rangle \phi_{0k}\bigg\|_p\le C_1\omega_s(f, 1)_p. \ee Since $\supp \h T\subset \delta\td$, without loss of generality, in what follows we assume that $\h{\w\phi}\in C^{s+d+1}(\rd)$ and $\supp\h{\w\phi}\subset 2\delta\td$. 
Using Lemma~1 from~\cite{KS} and Carleson's theorem (with cubic convergence of the Fourier series), we derive \be \label{102} \mathcal{F}\Big(\sum_{k\in\zd}\langle T, \w\phi_{0k}\rangle\phi_{0k}\Big)=G\h\phi, \ee where $G(\xi):=\sum_{k\in\zd}\h T(\xi+k)\overline{\h{\w\phi}(\xi+k)}$. Consider auxiliary functions $\phi^*$ and $\phi^{(\beta)}$, $[\beta]=s$, $\b\in \Z_+^d$, such that $\h{\phi^*}, \h{\phi^{(\beta)}}\in C^{s+d+1}(\rd)$, $$ \h{\phi^*}(\xi)=\left\{ \begin{array}{ll} \h\phi(\xi), & \xi\in\delta\td, \\ 0, & \xi\in\td\setminus 2\delta\td, \end{array} \right. $$ $$ \h{\phi^{(\beta)}}(\xi)=0\quad\text{for}\quad \xi\in\td, $$ and for any $l\in \Z^d\setminus \{0\}$ \be \label{107} \h{\phi^{(\beta)}}(\xi+l)= \left\{ \begin{array}{ll} \displaystyle \frac{s}{\beta!} \xi^{\beta} \int\limits_0^1 (1-t)^{s-1} D^{\beta}\h{\phi}( t\xi+l) d t, & \xi\in\delta\td, \\ \displaystyle 0, & \xi\in\td\setminus 2\delta\td. \end{array} \right. \ee In view of condition 5), we can also provide that \be \label{106} \sum\limits_{l\ne\nul}\sup\limits_{\xi\in2\delta\td}\left| D^{\beta'}\left(\frac{\h{\phi^{(\beta)}}(\xi+l)}{\xi^\beta}\right)\right|\le B'_\phi, \quad\beta'\in\zd_+, \ 0\le[\beta']\le d+1. \ee By condition 3), taking into account the fact that $\supp \h{T}\subset \delta \T^d$ and that, by Taylor's formula with the integral remainder, $$ \h{\phi} (\xi+l) = \sum_{[\beta]=s} \frac{s}{\beta!} \xi^{\beta} \int\limits_0^1 (1-t)^{s-1} D^{\beta}\h{\phi}( t\xi+l) d t, $$ we have \be \label{103} G(\xi)\h\phi(\xi)=\h T(\xi)\overline{\h{\w\phi}(\xi)}\h{\phi^*}(\xi)+G(\xi) \sum_{[\beta]=s}\h{\phi^{(\beta)}}(\xi). \ee It is easy to see that $D^{\beta'}\h{\phi^*},D^{\beta'}\h{\phi^{(\beta)}}\in L$ for all $\beta'\in\zd_+$, \ $0\le[\beta']\le d+1$. It follows that the functions ${\phi^*}=\mathcal{F}^{-1}(\h{\phi^*})$ and ${\phi^{(\beta)}}=\mathcal{F}^{-1}(\h{\phi^{(\beta)}})$ are in $\mathcal{L}_p$.
Using~\eqref{102} and \eqref{103}, we have \be \begin{split} \bigg\|T-\sum\limits_{k\in\zd}\langle T,\widetilde\phi_{0k}\rangle \phi_{0k}\bigg\|_p&\le\bigg\|T-\sum\limits_{k\in\zd}\langle T,\widetilde\phi_{0k}\rangle \phi^*_{0k}\bigg\|_p+\sum_{[\beta]=s} \bigg\|\sum\limits_{k\in\zd}\langle T,\widetilde\phi_{0k}\rangle \phi^{(\beta)}_{0k}\bigg\|_p\\ &=: I^*+ \sum_{[\beta]=s}I^{(\beta)}. \end{split} \label{101} \ee By condition 4), using the Jensen inequality and taking into account the fact that $\supp \h{T}\subset \delta \T^d$ and that by Taylor's formula with the integral remainder \begin{equation*} \h{\phi} (\xi)\overline{\h{\w\phi}(\xi)} = 1 + \sum_{[\beta]=s} \frac{s}{\beta!} \xi^{\beta} \int\limits_0^1 (1-t)^{s-1} D^{\beta}\h{\phi}\overline{\h{\w\phi}}( t \xi) d t, \end{equation*} we obtain \begin{equation*} \begin{split} I^*&=\left\Vert \mathcal{F}^{-1}\left\{ \({1-(\overline{\h{\w\vp}}\h{\vp^*})}\) \h{T}\right\}\right\Vert_{p}\\ &\le \sum_{[\beta]=s} \frac{s}{\beta!} \int\limits_0^1\,dt \bigg( \int\limits_{\rd}\bigg| \int\limits_{\rd} D^{\beta}\h{\phi^*}\overline{\h{\w\phi}}( t \xi) \xi^\beta\h{T}(\xi) e^{2\pi i(x,\xi)}\,d\xi\bigg|^p\,dx\bigg)^{1/p} \\ &=\sum_{[\beta]=s} \frac{s}{\beta!} \int\limits_0^1\,dt \bigg( \int\limits_{\rd}\bigg| \int\limits_{\rd}t^{-d} \mathcal{F}^{-1}(D^{\beta}\h{\phi^*}\overline{\h{\w\phi}})\Big( \frac yt\Big) {D^\beta T}(x-y) \,dy\bigg|^p\,dx\bigg)^{1/p}. \end{split} \end{equation*} Since, obviously, the function $ \mathcal{F}^{-1}(D^{\beta}\h{\phi^*}\overline{\h{\w\phi}})$ is summable on $\rd$, this yields \be \label{105} I^*\le\sum_{[\beta]=s} \frac{s}{\beta!}\|\mathcal{F}^{-1}(D^{\beta}\h{\phi^*} \overline{\h{\w\phi}})\|_1 \|D^\beta T\|_p \le C_2\sum_{[\beta]=s}\|D^\beta T\|_p. \ee Next, we fix $\beta$ and estimate $I^{(\beta)}$.
Set $$ \h\psi(\xi)=\frac{\h{\phi^{(\beta)}}(\xi)}{\gamma(\xi)}, $$ where $\gamma$ is a $1$-periodic function (in each variable) such that $\gamma\in C^{d+1}(\rd)$ and $\gamma(\xi)=\xi^\beta$ on ${2\delta\td}$. Using~\eqref{107} and~\eqref{106}, we have $$ \int\limits_{\rd}|\h\psi(\xi)| \,d\xi =\sum_{k\in\zd}\int\limits_{\td-k} \frac{|\h{\phi^{(\beta)}}(\xi)|}{|\gamma(\xi)|}d\xi =\int\limits_{2\delta\td}\frac{\sum_{k\in\zd}|\h{\phi^{(\beta)}}(\xi+k)|}{|\xi^\beta|}d\xi\le B'_\phi. $$ Thus $\h\psi\in L_1$, and similarly $D^{\beta'}\h\psi\in L_1$ for all $\beta'$ such that $[\beta']\le d+1$. It follows that $\psi\in\mathcal{L}_p$. Next, we set $$ \w\psi=\mathcal{F}^{-1}\left(\h{\w\phi}\, \gamma \right) $$ and note that $\w\psi\in L_1$. Since $\psi\in\mathcal{L}_p$, using Lemmas~\ref{lemKK1} and~\ref{lemMZ}, we obtain $$ \bigg\|\sum\limits_{k\in\zd}\langle T,\widetilde\psi_{0k}\rangle \psi_{0k}\bigg\|_p \le \|\psi\|_{\mathcal{L}_p}\lll \sum\limits_{k\in\zd} | \langle T,\widetilde\psi_{0k}\rangle|^p\rrr^{1/p}\le C_{\beta}\big\| T*\overline{\w\psi}\big\|_p= C_{\beta}\Big\|\mathcal{F}^{-1}(\h T\overline{\h{\w\psi}})\Big\|_p $$ and \begin{equation*} \begin{split} \Big\|\mathcal{F}^{-1}(\h T\overline{\h{\w\psi}})\Big\|_p &=\Big\|\mathcal{F}^{-1}\lll\h T\,\overline{\h{\w\phi}}\,\gamma\rrr\Big\|_p =\Big\|\mathcal{F}^{-1}\lll\h{D^\beta T}\,\,\overline{\h{\w\phi}}\rrr\Big\|_p\\ &=\Big\|D^\beta T*\mathcal{F}^{-1}\overline{\h{\w\phi}}\Big\|_p \le \Big\|\mathcal{F}^{-1}\overline{\h{\w\phi}}\Big\|_1 \|D^\beta T\|_p. \end{split} \end{equation*} Since the function $\mathcal{F}^{-1}\overline{\h{\w\phi}}$ is obviously summable on $\rd$, it follows that \be \label{108} \bigg\|\sum\limits_{k\in\zd}\langle T,\widetilde\psi_{0k}\rangle \psi_{0k}\bigg\|_p\le C'_\beta \|D^\beta T\|_p. \ee Now let us show that \begin{equation}\label{ad1} \sum\limits_{k\in\zd}\langle T,\widetilde\psi_{0k}\rangle \psi_{0k} =\sum\limits_{k\in\zd}\langle T,\widetilde\phi_{0k}\rangle \phi^{(\beta)}_{0k}.
\end{equation} Since the functions $\sum_{k\in\zd}\langle T,\widetilde\psi_{0k}\rangle \psi_{0k}$ and $\sum_{k\in\zd}\langle T,\widetilde\phi_{0k}\rangle \phi^{(\beta)}_{0k}$ are locally summable, it suffices to check that their Fourier transforms coincide almost everywhere. Due to Lemma~1 from~\cite{KS} and Carleson's theorem, we derive $$ \mathcal{F}\bigg(\sum_{k\in\zd}\langle T, \w\psi_{0k}\rangle\psi_{0k}\bigg)(\xi) =\sum_{k\in\zd}\h T(\xi+k)\overline{\h{\w\psi}(\xi+k)}\h\psi(\xi), $$ $$ \mathcal{F}\bigg(\sum_{k\in\zd}\langle T, \w\phi_{0k}\rangle\phi^{(\beta)}_{0k}\bigg)(\xi) =\sum_{k\in\zd}\h T(\xi+k)\overline{\h{\w\phi}(\xi+k)}\h{\phi^{(\beta)}}(\xi). $$ Since $ \supp \h T\subset \delta \td\subset\td$, we have for every $\xi\in\td$ and $l\in\zd$ that \ban \begin{split} \mathcal{F}\bigg(\sum_{k\in\zd}\langle T, \w\psi_{0k}\rangle\psi_{0k}\bigg)(\xi+l)&= \h T(\xi)\overline{\h{\w\psi}(\xi)}\h\psi(\xi+l) \\ &=\h T(\xi)\overline{\h{\w\phi}(\xi)}\h{\phi^{(\beta)}}(\xi+l)= \mathcal{F}\bigg(\sum_{k\in\zd}\langle T, \w\phi_{0k}\rangle\phi^{(\beta)}_{0k}\bigg)(\xi+l), \end{split} \ean which implies~\eqref{ad1}. Therefore, by~\eqref{108} we have $$ I^{(\beta)}= \bigg\|\sum\limits_{k\in\zd}\langle T,\widetilde\phi_{0k}\rangle \phi^{(\beta)}_{0k}\bigg\|_p\le C_\beta \|D^\beta T\|_p. $$ Combining this with~\eqref{105} and~\eqref{101}, we obtain $$ \bigg\|T-\sum\limits_{k\in\zd}\langle T,\widetilde\phi_{0k}\rangle \phi_{0k}\bigg\|_p\le C\sum_{[\beta]=s} \|D^\beta T\|_p. $$ To prove~\eqref{104} it remains to apply Lemmas~\ref{lem1'} and~\ref{lemJ}, which completes the proof.~~$\Diamond$ \begin{theo} \label{corMOD1'''} Let $1\le p<\infty$, $N\ge 0$, $s\in \N$, and $\delta\in (0,1/2)$.
Suppose that $\w\phi\in S'_{N,p}$ and $\phi\in L_p$ satisfy the following conditions: \begin{itemize} \item[1)] $\supp \vp$ is compact; \item[2)] the Strang-Fix conditions of order $s$ hold for $\phi$; \item[3)] $D^{\beta}(1-\h\phi\overline{\h{\w\phi}})({\bf 0}) = 0$ for all $\beta\in\zd_+$, $[\beta]<s$; \item[4)] $\h{\w\vp}\in C^{s+d+1}(2\delta\td)$. \end{itemize} Then, for every $f\in \mathbb{B}_{p,1}^{d/p+N}(M)$ and $j\in \z_+$, we have \begin{equation*} \begin{split} \Vert f - Q_j(f,\phi,\w\phi) \Vert_p\le C \(\Omega_s(f, M^{-j})_p+m^{-j(\frac1p+\frac Nd)}\sum_{\nu=j}^\infty m^{(\frac1p+\frac Nd)\nu} E_{\delta M^\nu}(f)_p\), \end{split} \end{equation*} and if, moreover, $\w\vp \in \mathcal{L}_q$, $1/p+1/q=1$, then for every $f\in L_p$ we have \begin{equation*} \Vert f - Q_j(f,\phi,\w\phi) \Vert_p\le C \Omega_s(f,M^{-j})_p, \end{equation*} where the constant $C$ does not depend on $f$ and $j$. \end{theo} {\bf Proof.} It is sufficient to prove the theorem for $j=0$ (see explanations in the proof of Theorem~\ref{corMOD1'+}), and due to Lemma~\ref{thKS}, we need only to check that \be \label{116} \bigg\|T-\sum\limits_{k\in\zd}\langle T,\widetilde\phi_{0k}\rangle \phi_{0k}\bigg\|_p\le C_1\omega_s(f, 1)_p, \ee where $T \in \mathcal{B}_{\delta I,p}\cap L_2$ is such that $\Vert f-T\Vert_p\le C E_{\delta I}(f)_p$. Let us choose a compactly supported and locally summable function $\w\t$ such that $$ D^{\beta}\h{\w\t}(\nul)=D^{\beta}\h{\w\phi}(\nul)\quad\text{for all}\quad \beta\in\zd_+,\,\, [\beta]<s. $$ An appropriate function $\w\t$ can be easily constructed, e.g., as a linear combination of $B$-splines. Setting $\w\psi=\w\phi-\w\t$, we get $D^{\beta}\h{\w\psi}(\nul) = 0$ whenever $[\beta]<s$.
We have \begin{equation}\label{yu1} \bigg\|T-\sum\limits_{k\in\zd}\langle T,\widetilde\phi_{0k}\rangle \phi_{0k}\bigg\|_p\le \bigg\|T-\sum\limits_{k\in\zd}\langle T,\w\t_{0k}\rangle \phi_{0k}\bigg\|_p+\bigg\|\sum\limits_{k\in\zd}\langle T,\w\psi_{0k}\rangle \phi_{0k}\bigg\|_p=I_1+I_2. \end{equation} Since both the functions $\phi$ and $\w\t$ are compactly supported, due to Theorem~\ref{Jia} and Lemma~\ref{lemJ}, we derive \begin{equation}\label{yu2} I_1\le C_2 \omega_s(T,1)_p\le C_2\(2^s \Vert f-T\Vert_p+\omega_s(f,1)_p\)\le C_3 \omega_s(f,1)_p. \end{equation} Let us estimate $I_2$. Since $\vp$ has compact support and $\supp\h{T}\subset \delta\T^d$, it follows from Lemmas~\ref{lemKK1} and~\ref{lemMZ} that \begin{equation}\label{yu3} \begin{split} I_2^p\le C_4 \sum_{k\in \Z^d} |\langle T, \w\psi_{0k}\rangle|^p \le C_5 \Vert T*\w\psi\Vert_p^p. \end{split} \end{equation} Now we check that \be \label{111} \Vert T*\w\psi\Vert_p\le C_6\sum_{[\beta]=s} \Vert D^\b T \Vert_p. \ee Taking again into account that $\supp\h{T}\subset \delta\T^d$, without loss of generality we can assume that $\h{\w\psi}\in C^{s+d+1}(\rd)$. By Taylor's formula with the integral remainder, we have \begin{equation*} \h{\w\psi} (\xi)= \sum_{[\beta]=s} \frac{s}{\beta!} \xi^{\beta} \int\limits_0^1 (1-t)^{s-1} D^{\beta}\h{\w\psi}(t\xi) dt.
\end{equation*} Hence, \begin{equation} \label{113} \begin{split} \Vert T*\w\psi\Vert_p&= \left\Vert \mathcal{F}^{-1}\Big( \overline{\h{\w\psi}} \h{T}\Big)\right\Vert_{p} \\ &\le \sum_{[\beta]=s}\frac{s}{\beta!} \bigg(\int\limits_{\rd}\,dx\bigg| \int\limits_0^1 \, dt (1-t)^{s-1} \int\limits_{\rd}\overline{D^\beta{\h{\w\psi}}( t\xi)} \xi^\beta\h T(\xi) e^{2\pi i(\xi, x)}\,d\xi \bigg|^p\bigg)^{1/p} \\ &\le\sum_{[\beta]=s}\frac{s}{\beta!}\int\limits_0^1 \, dt \bigg(\int\limits_{\rd}\,dx \bigg|\int\limits_{\rd}\overline{D^\beta{\h{\w\psi}}( t\xi)}\h{D^\beta T}(\xi) e^{2\pi i(\xi, x)}\,d\xi\bigg|^p\bigg)^{1/p} \\ &\le\sum_{[\beta]=s}\frac{s}{\beta!}\int\limits_0^1 \, \frac{dt}{ t^{d-d/p}} \bigg(\int\limits_{\rd}\,dx \bigg|\int\limits_{\rd}\overline{D^\beta{\h{\w\psi}}( \xi)}\h{D^\beta T}\Big(\frac \xi t\Big) e^{2\pi i(\xi, x)}\,d\xi\bigg|^p\bigg)^{1/p} \\ &=\sum_{[\beta]=s}\frac{s}{\beta!}\int\limits_0^1 \, t^{d/p} \bigg\Vert \mathcal{F}^{-1} \left\{\overline{D^\beta{\h{\w\psi}}}\,\h{Q_{\beta,t}} \right\}\bigg\Vert_p\, dt , \end{split} \end{equation} where $Q_{\beta,t}(x)=D^\beta T(tx)$. Taking into account that $D^\beta{\h{\w\psi}}\in C^{d+1}(\rd)$ whenever $[\beta]=s$, i.e., $\mathcal{F}^{-1}\big\{D^\beta\h{\w\psi}\big\}\in L_1$, we have \begin{equation}\label{emult} \begin{split} \bigg\Vert \mathcal{F}^{-1}\left\{\overline{D^\beta{\h{\w\psi}}}\,\h{Q_{\beta,t}} \right\}\bigg\Vert_p &=\|\mathcal{F}^{-1}\big\{D^\beta\h{\w\psi}\big\} * Q_{\beta,t}\|_p \\ &\le\|\mathcal{F}^{-1}\big\{D^\beta\h{\w\psi}\big\}\|_{1} t^{-d/p}\|D^\beta T\|_p=C_8 t^{-d/p}\|D^\beta T\|_p. \end{split} \end{equation} Combining these estimates with~\eqref{113}, we obtain~\eqref{111}. Next, similarly as in~\eqref{yu2}, using~\eqref{111}, \eqref{yu3}, and Lemmas~\ref{lem1'} and~\ref{lemJ}, we get \begin{equation}\label{fine} I_2 \le C_9 \omega_s(f,1)_p.
\end{equation} Finally, combining~\eqref{yu1}, \eqref{yu2}, and~\eqref{fine}, we complete the proof.~~$\Diamond$ \begin{rem} The conditions on smoothness of $\h{\w\vp}$ in Theorem~\ref{corMOD1'''} can be relaxed and given in other terms using different sufficient conditions for Fourier multipliers in $L_p$ spaces; see, e.g.,~\cite{LST} and~\cite{Kol}. For this, one needs to estimate the corresponding multiplier norm of the function $D^\beta\h{\w\psi}$ instead of showing that $\mathcal{F}^{-1}\big\{D^\beta\h{\w\psi}\big\}\in L_1$ in inequality~\eqref{emult}. A similar conclusion is valid also for the smoothness conditions on $\h\vp$ and $\h{\w\vp}$ in Theorem~\ref{corMOD1'+}. \end{rem} \begin{rem} It is not difficult to verify that Theorems~\ref{corMOD1'+} and~\ref{corMOD1'''} are valid also in the space $L_\infty$ if we additionally suppose that $f\in L_2\cap L_\infty$ and replace the best approximation $E_{\delta M^\mu}(f)_p$ with $E_{\delta M^\mu}^*(f)_p$. See also~\cite[Theorem 17]{KSa} for the corresponding analogue of Lemma~\ref{lemJ}. \end{rem} Let us compare Theorems~\ref{corMOD1'+} and~\ref{corMOD1'''} with Theorem~\ref{theoQj} in the case of the isotropic matrix $M$, $\w\phi\in S'_{N,p}$, and $p\ge2$. First, let us show that the class of functions $f$ considered in Theorem~\ref{theoQj} is smaller than $\mathbb{B}_{p,1}^{d/p+N}(M)$. Let $\l$ be an eigenvalue of $M$, $f\in L_p$, $\h f\in L_q$, and $\h f(\xi)=O(|\xi|^{-N-d-\varepsilon})$ as $|\xi|\to\infty$, $\varepsilon>0$.
Setting $V_\mu f=\mathcal{F}^{-1}v *{f}$, where $v\in C^\infty(\rd)$, $v(\xi)\le1$, $v(\xi)=1$ if $|M^{-\mu}\xi|\le 1/2$ and $v(\xi)=0$ if $|M^{-\mu}\xi|\ge 1$, using Pitt's inequality (see, e.g., \cite[inequality (1.1)]{GT}) $$ \Vert f\Vert_{p}\le C(p)\bigg(\,\int\limits_{\R^d} |\xi|^{d(p-2)}|\h f(\xi)|^p d\xi\bigg)^\frac1p,\quad 2<p<\infty, $$ and taking into account~\eqref{10}, we obtain \begin{equation*} \begin{split} E_{M^\mu}(f)_p&\le \Vert f-V_\mu(f)\Vert_p \le C(p)\(\,\int\limits_{|M^{-\mu}\xi|\ge 1/2}|\xi|^{d(p-2)} |(1-v(\xi))\h f(\xi)|^p d\xi\)^\frac1p \\ &\le 2C(p)\(\,\int\limits_{|\xi|\ge 1/2\|M^{-\mu}\|}|\xi|^{d(p-2)} |\h f(\xi)|^p d\xi\)^\frac1p =\mathcal{O}\(|\lambda|^{-\mu(d/p+N+\varepsilon)}\),\quad \mu\to\infty. \end{split} \end{equation*} It follows that $f\in \mathbb{B}_{p,1}^{d/p+N}(M)$. Next, let us compare the error estimates. Using~\eqref{10}, we have $$ \Omega_s(f, M^{-j})_p\le C_1\omega_s(f, |\lambda|^{-j})_p, $$ where $C_1$ does not depend on $f$ and $j$, and $$ E_{|\lambda|^\mu I}(f)_p\le C_2 E_{\sigma M^\mu}(f)_p\le C_3 E_{M^{\mu-\mu_0}}(f)_p, $$ where $C_2$, $C_3$, $\sigma$, and $\mu_0$ do not depend on $f$ and $\mu$. Hence $$ E_{|\lambda|^\mu I}(f)_p =\mathcal{O}\(|\lambda|^{-\mu(d/p+N+\varepsilon)}\),\quad \mu\to\infty. $$ Using these relations and the following inverse approximation inequality (see~\cite{DDT}): \begin{equation}\label{eqMarch} \omega_s(f,2^{-j})_p\le C2^{-s j}\(\sum_{k=0}^j 2^{2s k} E_{{2^k}I}(f)_p^2\)^\frac12, \end{equation} we can easily see that inequality~\eqref{110} provides the following approximation order $$ \Vert f - Q_j(f,\phi,\w\phi) \Vert_p\le \begin{cases} C |\lambda|^{-j(N+\frac dp + \varepsilon)} &\mbox{if } s> N+\frac dp + \varepsilon\\ C (j+1)^{1/2} |\lambda|^{-js} &\mbox{if } s= N+\frac dp + \varepsilon \\ C|\lambda|^{-js} &\mbox{if } s< N+\frac dp + \varepsilon \end{cases}, $$ which is better than the one given in Theorem~\ref{theoQj} in the case $s= N+\frac dp + \varepsilon $, and the same in the other cases.
On the other hand, there exist functions in $\mathbb{B}_{p,1}^{d/p+N}(M)$ which do not satisfy the assumptions of Theorem~\ref{theoQj}. Indeed, let $b\le N+d$, $\varepsilon >0$, $$ a>\max\left\{1,\ (d+N+\varepsilon-b)\lll\frac12-\frac1p\rrr^{-1}\right\}, $$ $$ \h f(\xi) = \kappa(|\xi|)\frac{e^{i|\xi|^a}}{|\xi|^{b}}, $$ where $\kappa\in C^{\infty}(\Bbb R)$, $ \kappa(u)=0$ for $u\in[0,1]$, $ \kappa(u)=1$ for $u\ge2$. Obviously, the decay of $\h f$ is not enough for Theorem~\ref{theoQj}. Let us verify that $f:=\mathcal{F}^{-1}(\h f)$ is in $L_p$ and $E_{M^\nu}(f)_p=\mathcal{O}(|\lambda|^{-\gamma \nu})$, where $\gamma=\frac dp+N+\varepsilon$, i.e., $f\in \mathbb{B}_{p,1}^{d/p+N}(M)$. Setting $g(\xi):=|\xi|^\gamma\h f(\xi)$ and using Proposition~5.1 from~\cite{Mi}, we conclude that the functions $\mathcal{F}^{-1} \h f$ and $\mathcal{F}^{-1} g$ are smooth throughout $\R^d$ and $$ \mathcal{F}^{-1}(\h f)(x)=d_0 |x|^{-\frac{b-d+da/2}{a-1}}\exp\(id_1 |x|^{\frac{a}{a-1}}\)+o\(|x|^{-\frac{b-d+da/2}{a-1}}\)\quad\text{as}\quad |x|\to \infty, $$ $$ \mathcal{F}^{-1}(g)(x)=d_0 |x|^{-\frac{b-\gamma-d+da/2}{a-1}} \exp\(id_1 |x|^{\frac{a}{a-1}}\)+o\(|x|^{-\frac{b-\gamma-d+da/2}{a-1}}\)\quad\text{as}\quad |x|\to \infty. $$ It follows that $f\in L_p$ and $\mathcal{F}^{-1} g\in L_p$, which yields that $f\in \dot H_p^\gamma$ (the homogeneous Sobolev space). Then, using the embedding $\dot H_p^\gamma \subset \dot B_{p,\infty}^\gamma$ (the homogeneous Besov space, see, e.g.,~\cite[Theorem~6.3.1]{BL}) and Jackson's theorem, we obtain $E_{M^\nu}(f)_p\le C\omega_{[\gamma]+1}(f,|\lambda|^{-\nu})_p\le C\Vert f\Vert_{\dot B_{p,\infty}^\gamma}|\lambda|^{-\gamma \nu} $. Finally, we consider an anisotropic case. Let $d=2$, $M={\rm diag} \(m_1,m_2\)$, and $\frac{\partial^{r_1}}{\partial x_1^{r_1}} f, \frac{\partial^{r_2}}{\partial x_2^{r_2}} f \in L_p$, where $r_1>(\frac1p+\frac Nd)(1+\frac{\ln m_2}{\ln m_1})$ and $r_2>(\frac1p+\frac Nd)(1+\frac{\ln m_1}{\ln m_2})$.
Then, taking into account that for $1<p<\infty$ and $s>\max(r_1,r_2)$ (see, e.g.,~\cite{timanIz}) $$ E_{M^\nu}(f)_p\le C \Omega_s(f,M^{-\nu})_p\le C\(\omega_{s}^{(1)}(f,m_1^{-\nu})_p+\omega_{s}^{(2)}(f,m_2^{-\nu})_p\)=\mathcal{O}\({m_1^{-r_1 \nu}}+{m_2^{-r_2 \nu}}\), $$ where $\omega_{s}^{(\ell)}(f,h)_p$ is the partial modulus of smoothness with respect to the $\ell$-th variable, we have that $f\in \mathbb{B}_{p,1}^{d/p+N}(M)$, and by Theorem~\ref{corMOD1'+} with $s>\max(r_1,r_2)$, we get $$ \Vert f-Q_j(f,\vp,\w\vp)\Vert_p=\mathcal{O}\({m_1^{-r_1 j}}+{m_2^{-r_2 j}}\). $$ That is, the order of approximation by $Q_j(f,\vp,\w\vp)$ essentially depends on the anisotropic nature of the function $f$ and the matrix $M$. \section{Conclusions} Error estimates in the $L_p$-norm are obtained for a large class of sampling-type quasi-projection operators $Q_j(f,\phi, \w\phi)$, including the classical case, where $\w\phi$ is the Dirac delta-function. Theorems~\ref{corMOD1'+} and~\ref{corMOD1'''} provide essential improvements of the recent results in~\cite{KS}. First, the estimates are given for a wider class of approximated functions, namely for functions from anisotropic Besov spaces. Second, only the case $p\ge2$ was considered in~\cite{KS}, while Theorems~\ref{corMOD1'+} and~\ref{corMOD1'''} cover $1\le p<\infty$. Third, the estimates are given in terms of moduli of smoothness and best approximations, while the results in~\cite{KS} provide only an approximation order. \bigskip \noindent {\bf ACKNOWLEDGMENTS} The first author was partially supported by DFG project KO 5804/1-1 (Theorem~\ref{corMOD1'''} and the corresponding auxiliary results belong to this author). The second author was supported by the Russian Science Foundation under grant No. 18-11-00055 (Theorem~\ref{corMOD1'+} and the corresponding auxiliary results belong to this author).
TITLE: General formula needed for this product rule expression (differential operator) QUESTION [8 upvotes]: Let $D_i^t$, $D_i^0$ for $i=1,\dots,n$ be differential operators. (For example $D_1^t = D_x^t$, $D_2^t = D_y^t,\dots$, where $x$, $y$ are the coordinates). Suppose I am given the identity $${D}_a^t (F_t u) = \sum_{j=1}^n F_t({D}_j^0 u){D}_a^t\varphi_j$$ where $\varphi_j$ are smooth functions and $F_t$ is some nice map. So $$ D^t_bD^t_a(F_t u) = \sum_j D^t_b\left(F_t({D}_j^0 u)\right){D}_a^t\varphi_j+\sum_j F_t({D}_j^0 u)D^t_b{D}_a^t\varphi_j $$ and because $${D}_b^t (F_t (D_j^0u)) = \sum_{k=1}^n F_t({D}_k^0 D_j^0u){D}_b^t\varphi_k,$$ we have $$D^t_bD^t_a(F_t u) = \sum_{j,k=1}^n F_t({D}_k^0 D_j^0u){D}_b^t\varphi_k{D}_a^t\varphi_j+\sum_j F_t({D}_j^0 u)D^t_b{D}_a^t\varphi_j .$$ My question is how do I generalise this and obtain a rule for $$D^t_{\alpha} (F_t u)$$ where $\alpha$ is a multi-index of order $n$ (or order $m$)? My intention is to put the derivatives on $u$ and put the $F_t$ outside, like I demonstrated above. Can anyone help me with getting the formula for this? It's really tedious to write out multiple derivatives, so it's hard for me to tell. REPLY [3 votes]: This is probably better served as a comment, but I can't add one because of lack of reputation. If I understand correctly, you are probably looking for the multivariate version of Faà di Bruno's formula; beware that the Wikipedia entry uses slightly different notation.
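To see the structure concretely, here is a small SymPy check (with arbitrarily chosen concrete functions standing in for $F_t$ and a single inner function; they are illustrative only) that the two-derivative expansion above is the $|\alpha|=2$ case of Faà di Bruno's formula: one term carries both derivatives on the outer function, the other leaves a mixed derivative on the inner function.

```python
import sympy as sp

x, y = sp.symbols('x y')
g = x * y**2 + x      # illustrative inner function (plays the role of a phi_j)
F = sp.sin            # illustrative outer function (plays the role of F_t)

# Mixed second derivative of the composition, computed directly.
lhs = sp.diff(F(g), x, y)

# Two-derivative Faa di Bruno expansion: F''(g) * g_x * g_y + F'(g) * g_xy.
# For F = sin we have F' = cos and F'' = -sin.
rhs = (-sp.sin(g) * sp.diff(g, x) * sp.diff(g, y)
       + sp.cos(g) * sp.diff(g, x, y))

assert sp.simplify(lhs - rhs) == 0
```

For higher-order $\alpha$ the same pattern repeats: the general formula produces one term for each way of grouping the derivatives between the outer function and the inner functions, which is exactly what the multivariate Faà di Bruno formula enumerates.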
\begin{document} \maketitle \begin{abstract} We consider inference of the parameters of the diffusion term for Cox-Ingersoll-Ross and similar processes with a power type dependence of the diffusion coefficient on the underlying process. We suggest some original pathwise estimates for this coefficient and for the power index based on an analysis of an auxiliary continuous time complex valued process generated by the underlying real valued process. These estimates do not rely on the distribution of the underlying process or on a particular choice of the drift. Some numerical experiments are used to illustrate the feasibility of the suggested method. \\ {\bf Key words}: diffusion processes, power index, Cox-Ingersoll-Ross process, CKLS model, pathwise estimation. \\ {\bf Mathematics Subject Classification (2010):} 91G70, 65C30, 65C50, 65C60 \end{abstract} \def\MM{{\scriptscriptstyle M}} \section{Introduction} In this paper, we consider inference of the diffusion term for Cox-Ingersoll-Ross and similar processes with a power type dependence of the diffusion coefficient on the underlying process. These processes are important for applications; in particular, they are used for interest rate models and for volatility models in finance; see, e.g., Heston (1993), Gibbons and Ramaswamy (1993), Lewis (2000), Zhou (2001), Carr and Sun (2007), Andersen and Lund (1997), Gourieroux and Monfort (2013), Fergusson and Platen (2015), Hin and Dokuchaev (2016), and the bibliography therein. Estimation of the parameters for these models has been widely studied; see e.g. Gibbons and Ramaswamy (1993), Andersen and Lund (1997), Kessler (1997), S{\o}rensen (2000), Zhou (2001), Fan {\em et al} (2003), Ait-Sahalia (1996), De Rossi (2010), Gourieroux and Monfort (2013). We readdress the problem of inference for these processes.
We suggest a new method that allows one to obtain pathwise estimates of the diffusion coefficient and the power index represented as explicit functions defined on an auxiliary continuous time complex valued process generated by the underlying real valued process. An attractive feature of the method is that it requires estimating neither the parameters of the drift nor the distribution of the underlying process. In particular, one does not need to know the shape of the likelihood function. In addition, our method allows one to consider models with a large number of parameters for the drift; therefore, it covers cases where the Maximum Likelihood method is not feasible due to high dimension. This is especially beneficial for financial applications, where the trend of the prices is usually difficult to estimate since it is overshadowed by a relatively large volatility. Since the drift is excluded from the analysis, our method does not lead to an estimation of the drift. However, this could be a useful supplement to the existing more comprehensive methods such as those described in Gibbons and Ramaswamy (1993), Andersen and Lund (1997), S{\o}rensen (2000), Zhou (2001), Fan {\em et al} (2003), Ait-Sahalia (1996), De Rossi (2010), Gourieroux and Monfort (2013), and Kessler (1997). These works used estimation of the parameters of the drift term; on the other hand, the method discussed in the present paper allows one to bypass this task. Feasibility and robustness of the suggested method are illustrated with some numerical experiments. \section{The model} Let $\t\in\R$ and $T\in (\t,+\infty)$ be given. We are also given a standard complete probability space $(\O,\F,\P)$ and a right-continuous filtration $\{\F_t\}_{t\in[\t,T]}$ of complete $\s$-algebras of events. In addition, we are given a one-dimensional Wiener process $w(t)|_{t\in[\t,T]}$, that is, a Wiener process such that $w(\t)=0$, adapted to $\{\F_t\}$ and such that $\F_t$ is independent of $w(s)-w(q)$ if $t\ge s>q\ge \t$.
Consider a continuous time one-dimensional random process $y(t)|_{t\ge \t}$ such that $y(\t)>0$ and \baa dy(t)=f(y(\cdot),t)dt+\s(t) y(t)^{\g} d{w}(t),\qquad t\in(\t,T).\label{CT} \eaa Here $\g\in [0,1]$, $\s(t)$ is a bounded $\F_t$-adapted process, $f(s,t):C([\t,T])\times[\t,T]\to\R$ is a measurable function such that $f(s,t)$ is $\F_t$-adapted for any $s\in C([\t,T])$, and that $f(s_1,t)=f(s_2,t)$ if $(s_1-s_2)|_{[\t,t]}\equiv 0$. In addition, we assume that, for any $\d>0$, $|f(s_1,t)-f(s_2,t)|\le c_1\|s_1-s_2\|_{C([\t,T])}$ and $|f(s,t)|\le c_2(\|s\|_{C([\t,T])}+1)$ for some constants $c_k=c_k(\d)>0$ a.s. (almost surely) for all $t\in[\t,T]$, $s_1,s_2\in C([\t,T])$, such that $\inf_{t\in[\t,T],k=1,2}s_k(t)>\d$. Under these assumptions, there exists a Markov time $\tau$ with respect to $\{\F_t\}$ with values in $(\t,T]$ such that there exists a unique almost surely continuous solution $y(s)|_{s\in[\t,\tau]}$ such that $\inf_{s\in[\t,\tau]}y(s)>0$. \subsection*{Examples of applications in financial modelling} The assumptions on the process $y$ allow it to be used for a variety of financial models. In particular, the assumption on the drift coefficient $f$ allows one to consider a path-dependent evolution such as that described by equations with delay; see some examples in Stoica (2005) and Luong and Dokuchaev (2016).\index{LD,S} The assumptions on the diffusion coefficient allow one to cover many important financial models. In particular, the so-called Cox-Ingersoll-Ross process is used for interest rate models and the volatility of stock prices (Heston (1993)). The corresponding equation has the form \baa dy(t)=a[b-y(t)]dt+\s y(t)^{1/2} d{w}(t),\qquad t>0,\label{CIR} \eaa where $a>0$, $b>0$, and $\s>0$ are some constants. A more general model introduced in Chan et al. (1992) \baa dy(t)=a[b-y(t)]dt+\s y(t)^{\g} d{w}(t),\qquad t>0,\label{CKLS} \eaa is called a Chan-Karolyi-Longstaff-Sanders (CKLS) model in the econometric literature; see e.g. Iacus (2008).
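For the numerical experiments mentioned in the introduction, trajectories of (\ref{CKLS}) can be generated by a straightforward Euler--Maruyama scheme. The following Python sketch is an illustration only: the parameter values, the time grid, and the crude positivity safeguard are ad hoc choices and not part of the model.

```python
import numpy as np

def simulate_ckls(a, b, sigma, gamma, y0, T, m, seed=0):
    """Euler-Maruyama discretization of dy = a(b - y) dt + sigma * y^gamma dw."""
    rng = np.random.default_rng(seed)
    dt = T / m
    y = np.empty(m + 1)
    y[0] = y0
    for k in range(m):
        dw = rng.normal(0.0, np.sqrt(dt))
        y[k + 1] = y[k] + a * (b - y[k]) * dt + sigma * y[k] ** gamma * dw
        y[k + 1] = max(y[k + 1], 1e-8)  # keep the discrete path positive
    return y

# gamma = 1/2 gives the Cox-Ingersoll-Ross case (CIR)
path = simulate_ckls(a=1.0, b=1.0, sigma=0.3, gamma=0.5, y0=1.0, T=1.0, m=10_000)
```

For $\g=1/2$ and $2ab\ge\s^2$ the continuous-time CIR path stays strictly positive, so the explicit safeguard only guards against discretization overshoot.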
Equation (\ref{CT}) with $\g=2/3$ is also used for volatility modelling; see, e.g., Carr and Sun (2007) and Lewis (2000). \section{The main result}\label{SecMain} Up to the end of the paper, we assume that the conditions on $f$ and $\s$ formulated above hold. We assume below that $\tau$ is a Markov time with respect to $\{\F_t\}$ such that $\tau\in (\t,T]$ a.s. and that $\inf_{s\in[\t,\tau]} y(s)>0$ a.s. In particular, one can select $\tau= T\land\inf\{s>\t:\ y(s)\le M\}$ for any given $M\in (0,y(\t))$. Our main tool for the estimation of the pair $(\s,\g)$ is provided by the following theorem. \begin{theorem}\label{ThC} For any $h\in [0,1]$, \baa \int^{\tau}_{\t} y(s)^{2(\g-h)} \s(s)^2 ds = 2\log |Y_h(\tau)|\quad\hbox{a.s.}, \label{hY} \eaa where $Y_h(s)$ is a complex-valued process defined for $s\in[\t,\tau]$ such that \baa &&dY_h(s)=iY_h(s)\frac{dy(s)}{y(s)^{h}},\quad s\in(\t,\tau),\nonumber\\&& Y_h(\t)=1.\label{Y}\eaa \end{theorem} In (\ref{Y}), $i=\sqrt{-1}$ is the imaginary unit. \begin{corollary}\label{corr2} (a) We have that \baa \int^{\tau}_{\t} \s(s)^2ds = 2\log |Y_\g(\tau)|\quad\hbox{a.s.} \label{sY} \eaa (b) If $\s(t)=\s$ is constant, then, for any $h\in [0,1]$, \baa \s^2= 2\left(\int^{\tau}_{\t} y(s)^{2(\g-h)}ds \right)^{-1} \log |Y_h(\tau)|\quad\hbox{a.s.}
\label{sX} \eaa \end{corollary} \section{Applications of Theorem \ref{ThC} to estimation of $(\g,\s)$}\label{SecAppl} Up to the end of this paper, we assume that $\s(t)\equiv \s$ is an unknown positive constant. We present below estimates of $(\s,\g)$ based on available samples $\{y(t_k)\}$, where $t_k\in[\t,\tau\land T]$, such that $t_{k+1}=t_k+\d$ for $k=\mm0,\mm0+1,...,m-1$, $\d =(\tau\land T-\t)/(m- \mm0)$, $t_{\mm0}=\t$, and $t_m=\tau\land T$. In this setting, $y(t_{k})>0$ for $k= \mm0,...,m$. For $h\in[0,1]$, let \baaa \eta_{h,k}= \frac{y(t_k)-y(t_{k-1})}{y(t_{k-1})^{h}}. \label{eta}\eaaa \subsection*{Estimation of $\s$ under the assumption that $\g$ is known} Let us first suggest an estimate for $\s$ under the assumption that $\g$ is known. \begin{corollary}\label{corr3} For any $h\in[0,1]$, the value $\s$ can be estimated as $\ww\s_{\g,h}$, where \baa \ww\s_{\g,h}^2=\left(\d\sum_{k= \mm0+1}^m y(t_k)^{2(\g-h)}\right)^{-1}\sum_{k= \mm0+1}^m \log (1+\eta_{h,k}^2). \label{ourx} \eaa \end{corollary} \subsection*{Estimation of $\g$ with excluded $\s$}\label{SecU} It appears that Theorem \ref{ThC} implies some useful properties of the process $Y_h(t)$ that allow us to estimate $\g$ in a setting with an unknown constant $\s$. \begin{proposition}\label{lemmag} For any $h_1,h_2\in [0,1]$, \baa \frac{\int^{\tau}_{\t} y(s)^{2(\g-h_1)}ds}{\int^{\tau}_{\t} y(s)^{2(\g-h_2)}ds}=\frac{\log |Y_{h_1}(\tau)|}{\log |Y_{h_2}(\tau)|}\quad\hbox{a.s.} \label{ghh}\eaa \end{proposition} \par Since the calculation of $Y_{h_1}(\tau)$ and $Y_{h_2}(\tau)$ does not require knowledge of the values of $f$, $\g$, and $\s$, property (\ref{ghh}) allows us to calculate $\g$, as shown below.
\begin{corollary}\label{corr4} An estimate $\w\g$ of $\g$ can be found as a solution of the equation \baa \frac{\sum_{k= \mm0+1}^m y(t_k)^{2(\g-h_1)}}{\sum_{k= \mm0+1}^m y(t_k)^{2(\g-h_2)}}= \frac{\sum_{k= \mm0+1}^m \log (1+\eta_{h_1,k}^2)}{\sum_{k= \mm0+1}^m \log (1+\eta_{h_2,k}^2)}, \label{gh} \eaa for any pair of pre-selected $h_1$ and $h_2$. \end{corollary} It can be noted that $\s$ remains unused and excluded from the analysis for the method described in Proposition \ref{lemmag} and Corollary \ref{corr4}; accordingly, this method does not lead to an estimate of $\s$. \subsection*{Estimation of the pair $(\s,\g)$}\label{Secg} \begin{proposition}\label{lemmag2} The process \baa \frac{1}{t\land \tau-\t}\log |Y_{\g}(t\land\tau)| \label{cY}\eaa is a.s. constant in $t\in[\t,T]$. \end{proposition} \par Let \baaa v_{h,k}=\log(1+\eta_{h,k}^2), \quad k=\mm0+1,...,m, \label{Loov} \eaaa and \baaa \oo v_h=\frac{1}{m-\mm0}\sum_{k= \mm0+1}^m v_{h,k}. \label{oL} \eaaa \begin{corollary}\label{corr5} An estimate of $\g$ can be found as the solution of the optimization problem \baa \hbox{Minimize}\quad \sum_{k=\mm0+1}^m\left(v_{h,k}- \oo v_h\right)^2\quad \hbox{over}\quad h\in[0,1]. \label{L}\eaa In this case, $\s$ can be estimated as \baa \w\s=\sqrt{\oo v_{\w\g}/\d}, \label{wvol} \eaa where $\w\g$ is the estimate of $\g$ obtained as a solution of (\ref{L}). \end{corollary} \begin{remark}\label{corr05} Corollary \ref{corr5} allows the following modification for the special case where $\s$ is a known constant: an estimate $\w\g$ of $\g$ can be found as the solution of the optimization problem \baaa \hbox{Minimize}\quad \sum_{k=\mm0+1}^m\left(v_{h,k}/\d- \s^2\right)^2\quad \hbox{over}\quad h\in[0,1]. \label{L2}\eaaa \end{remark} \section{Proofs} For $M\in (0,y(\t))$, let $\tau_M=\tau\land\sup\{s\in[\t,T]:\ \inf_{q\in[\t,s]} y(q)\ge M\}$.
Clearly, \baa \tau_M\to \tau\quad\hbox{as}\quad M\to 0\quad\hbox{a.s.} \label{lim}\eaa \par {\em Proof of Theorem \ref{ThC}.} The proof follows the idea of the proof of Lemma 3.2 from Dokuchaev (2014), where less general log-normal underlying processes were considered. Let $\ww a(t)=f(y(t),t)y(t)^{-h}$. We have, for any $M\in (0,y(\t))$, \baaa dY_h(t)=iY_h(t)[\ww a(t)dt+y(t)^{\g-h}\s(t) d{w}(t)], \quad t\in(\t,\tau_M).\label{CT22} \eaaa By the Ito formula again, for any $M\in (0,y(\t))$, \baaa Y_h(\tau_M)&=&Y_h(\t)\exp\left(i\int_{\t}^{\tau_M}\ww a(s)ds-\frac{i^2}{2}\int^{\tau_M}_{\t}y(s)^{2(\g-h)}\s(s)^2ds +i\int^{\tau_M}_{\t}y(s)^{\g-h}\s(s)dw(s)\right)\nonumber\\&=&\exp\left(i\int_{\t}^{\tau_M}\ww a(s)ds+\frac{1}{2}\int^{\tau_M}_{\t}y(s)^{2(\g-h)}\s(s)^2ds +i\int^{\tau_M}_{\t}y(s)^{\g-h}\s(s)dw(s)\right)\quad\hbox{a.s.}. \label{solSx} \eaaa Hence \baaa |Y_h(\tau_M)|=\exp\left(\frac{1}{2}\int^{\tau_M}_{\t}y(s)^{2(\g-h)}\s(s)^2ds \right) \quad\hbox{a.s.}\label{SYxx} \eaaa and \baaa\int^{\tau_M}_{\t}y(s)^{2(\g-h)}\s(s)^2ds =2\log |Y_h(\tau_M)|\quad\hbox{a.s.}. \label{sX1}\eaaa Hence (\ref{hY}) follows from (\ref{lim}). $\Box$ {\em Proof of Corollary \ref{corr2}} follows immediately from Theorem \ref{ThC}. $\Box$ \par {\em Proof of Corollary \ref{corr3}}. Let $t_m=\tau_M$, $t_{ \mm0}=\t$, and let $t_k=t_{\mm0}+(k-\mm0)\d$ if $\mm0\le k\le m$. Let $\eta_{h,k}$ be defined by (\ref{eta}). The Euler-Maruyama time discretization of (\ref{Y}) leads to the stochastic difference equation \def\YY{{\mathcal{Y}}} \baaa &&\YY_h(t_k)=\YY_h(t_{k-1})+i\YY_h(t_{k-1})\eta_{h,k},\quad k\ge \mm0+1, \nonumber\\&& \YY_h(t_{ \mm0})=1. \label{Xd}\eaaa (See, e.g., Kloeden and Platen (1992), Ch. 9). This equation can be rewritten as \baaa &&\YY_h(t_k)=\YY_h(t_{k-1})(1+i\eta_{h,k}),\quad k\ge \mm0+1,\nonumber \\&& \YY_h(t_{ \mm0})=1. \label{Xdd}\eaaa Hence \baaa \YY_h(t_m)=\prod_{k= \mm0+1}^m (1+i\eta_{h,k}). 
\eaaa Clearly, \baaa |\YY_h(t_m)|=\prod_{k= \mm0+1}^m |1+i\eta_{h,k}|=\prod_{k= \mm0+1}^m (1+\eta_{h,k}^2)^{1/2}, \eaaa and \baa \log|\YY_h(t_m)|=\sum_{k= \mm0+1}^m \log [(1+\eta_{h,k}^2)^{1/2}]=\frac{1}{2}\sum_{k= \mm0+1}^m \log (1+\eta_{h,k}^2). \label{Y=}\eaa Then (\ref{sY}) leads to estimate (\ref{ourx}). $\Box$ {\em Proofs of Proposition \ref{lemmag} and Proposition \ref{lemmag2}} follow immediately from Theorem \ref{ThC} and Corollary \ref{corr2}(b). $\Box$ {\em Proof of Corollary \ref{corr4}} follows from the natural discretization of the integrals and (\ref{Y=}). $\Box$ {\em Proof of Corollary \ref{corr5}}. It follows from (\ref{Y=}) that the sequence $\{\log|\YY_{h}(t_k)|\}$ represents the discretization of the continuous time process $\log|Y_h(t\land\tau)|$ at the points $t=t_k$; this process is linear in time for $h=\g$, and $2\log|Y_\g(t\land\tau)|\equiv (t\land\tau-\t)\s^2$. Hence \baaa 2\log|Y_\g(t_{k+1})|-2\log|Y_\g(t_k)|=\d\s^2.\eaaa On the other hand, (\ref{Y=}) implies that \baaa 2\log|\YY_h(t_{k+1})|-2\log|\YY_h(t_k)|=v_{h,k+1},\quad h\in[0,1]. \eaaa This leads to the optimization problem \baaa \hbox{Minimize}\quad \sum_{k=\mm0+1}^m (v_{h,k}/\d- c)^2 \quad \hbox{over}\quad h\in[0,1],\quad c>0. \label{Lu}\eaaa By the properties of quadratic optimization, the optimal $c$ for a given $h$ is $\oo v_h/\d$, so this problem can be replaced by the problem \baa \hbox{Minimize}\quad \sum_{k=\mm0+1}^m (v_{h,k}/\d- \oo v_h/\d)^2 \quad \hbox{over}\quad h\in[0,1], \label{Lp2}\eaa which is equivalent to (\ref{L}). Then the proof follows. $\Box$ {\em Proof of Remark \ref{corr05}} repeats the previous proof without optimization over $c$. $\Box$ \section{Numerical experiments} To illustrate the numerical implementation of the algorithms described above, we applied them to discretized Monte-Carlo simulations of a generalized version of the Cox-Ingersoll-Ross process (\ref{CIR}). We consider a toy example of a process with a large number of parameters.
Presumably, estimation of all these parameters is not feasible for a method of moments or for the Maximum Likelihood method due to high dimensionality. We consider a process evolving as follows: \baa dy(t)=H\left(y(t),y(\max(t-\lambda,0))\right)dt +\s y(t)^{\g} d{w}(t),\qquad t>0,\label{CIRn} \eaa where \baaa &&H(x,y)=\sum_{k=1}^N\left[F_k(x)+G_k(y)\right],\\ &&F_k(x)=a_k[b_k-x^{\nu_k+1/2}]+c_k\cos(d_k x+e_k), \quad G_k(x)=0.1\,\w a_k[\w b_k-x^{\w\nu_k+1/2}]. \eaaa The parameters $N,a_k,b_k,\nu_k,c_k,d_k,e_k,\w a_k,\w b_k,\w\nu_k,\lambda$ are randomly selected in each experiment. In particular, the integers $N$ are selected randomly from the set $\{1,2,3,4,5\}$ with equal probability. The delay parameter $\lambda$ has the uniform distribution on the interval $[0,0.2]$. The parameters $a_k,b_k,\nu_k,c_k,d_k,e_k,\w a_k,\w b_k,\w\nu_k$ are uniformly distributed on the interval $[0,1]$. For the Monte-Carlo simulation, we considered the corresponding discrete time process $\{y(t_k)\}$ evolving as \baa y(t_{k+1})=y(t_{k})+H\left(y(t_k), y(t_{\max(k-\ell,0)})\right)\d +\s y(t_k)^{\g}\d^{1/2}\,\xi_{k+1}, \quad k=0,...,n,\label{myy}\eaa with mutually independent random variables $\xi_k$ from the standard normal distribution $N(0,1)$. Here $\d=t_{k+1}-t_k=1/n$; this corresponds to $[\t,T]=[0,1]$ for the continuous time underlying model. The delay $\ell$ is the integer part of $\lambda/\d$. We considered $n\in\{52,250, 10000,20000\}$. For financial applications, the choice of $n=52$ corresponds to weekly sampling, and the choice of $n=250$ corresponds to daily sampling. In the Monte-Carlo simulation trials, we considered random $y(t_{\mm0})$ uniformly distributed on $[0.1,10]$ and truncated paths $y(t_k)|_{\mm0\le k\le m}$, with the Markov stopping time $m=n\land \inf\{k:\ y(t_k)\le 0.001y(t_{\mm0})\}$. In this case, $y(t_{k})>0$ for $k= \mm0,...,m$.
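The truncation of a simulated path at the Markov stopping time $m$ can be sketched as follows; the function and argument names are illustrative, not the code used for the experiments.

```python
import numpy as np

def truncate_path(y, floor_frac=0.001):
    """Cut the sample at the first index where y(t_k) <= floor_frac * y(t_0),
    mirroring the stopping time m = n ^ inf{k : y(t_k) <= 0.001 y(t_0)}."""
    y = np.asarray(y, dtype=float)
    below = np.nonzero(y <= floor_frac * y[0])[0]
    m = int(below[0]) if below.size else len(y) - 1
    return y[: m + 1]

truncated = truncate_path([1.0, 0.5, 0.2, 0.0005, 0.3])
# the path is cut at the first value <= 0.001 * y[0]
```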
To exclude the possibility that $y(t_m)\le 0$ (which may happen for our discrete time process since the values of $\xi_k$ are unbounded), we replace $y(t_m)$ defined by (\ref{myy}) by $y(t_m)=y(t_{m-1})>0$ whenever the event $m<n$ occurs. It can be noted that, for our choice of parameters, occurrences of the event $m<n$ were very rare and had no impact on the statistics. We used 10,000 Monte-Carlo trials for each experiment (i.e., for each entry in each of Tables \ref{tab1}-\ref{tab3} below). We found that enlarging the sample does not improve the results; in fact, experiments with 5,000 or even 1,000 Monte-Carlo trials produced the same results. The parameters of the errors obtained in these experiments are quite robust with respect to changes of the other parameters as well. We denote by $\EE$ the sample mean of the corresponding values over all Monte-Carlo simulation trials. For the estimates $(\w\s,\w\g)$ of $(\s,\g)$, we evaluated the root mean-squared errors (RMSE) $\sqrt{\EE \left|\w\s-\s\right|^2}$ and $\sqrt{\EE \left|\w\g-\g\right|^2}$, the mean errors $\EE |\w\s-\s|$ and $\EE \left|\w\g-\g\right|$, and the biases $\EE (\w\s-\s)$ and $\EE \left(\w\g-\g\right)$. In the experiments described below, we used $\s=0.3$ and either $\g=1/2$ or $\g=0.6$. \subsection*{Estimation of $\s$ using Corollary \ref{corr3}} The numerical implementation of Corollary \ref{corr3} requires the value of $\g$. In other words, one has to adopt a certain hypothesis about the value of $\g$, for instance, based on an estimate of $\g$ obtained separately. This setting leads to an error caused by misspecification of $\g$. To illustrate the dependence of the error of the estimate of $\s$ on the error in the hypothesis on $\g$, we considered estimates for inputs simulated with $\g=1/2$ and with different $h$.
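Estimator (\ref{ourx}) is straightforward to evaluate from a sample path. The sketch below (with illustrative names, and a synthetic driftless test path with $\g=1$ and $h=\g$) is one possible implementation, not the authors' code.

```python
import numpy as np

def sigma_hat(y, delta, gamma, h):
    """Estimate sigma via (ourx), given the power index gamma; h in [0,1] is free."""
    y = np.asarray(y, dtype=float)
    eta = np.diff(y) / y[:-1] ** h                     # eta_{h,k}
    denom = delta * np.sum(y[1:] ** (2.0 * (gamma - h)))
    return np.sqrt(np.sum(np.log1p(eta ** 2)) / denom)

# Synthetic check on a driftless path dy = sigma * y dW (gamma = 1):
rng = np.random.default_rng(1)
n, delta, sigma = 5000, 1.0 / 5000, 0.3
y = np.empty(n + 1)
y[0] = 1.0
for k in range(n):
    y[k + 1] = y[k] * (1.0 + sigma * np.sqrt(delta) * rng.standard_normal())
est = sigma_hat(y, delta, gamma=1.0, h=1.0)   # should be close to 0.3
```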
Tables \ref{tab1} (a),(b) show the parameters of the errors in the experiments described above for estimate (\ref{ourx}) with different $h$ and with $\g=1/2$, for $\d=1/52$ and $\d=1/250$, respectively. Numerical experiments show that these estimates are robust with respect to small errors in $\g$; however, the estimation error for $\s$ caused by misidentification of $\g$ can be significant. \subsection*{Estimation of $\g$ with unknown $\s$ using (\ref{gh}) and (\ref{L})} In these experiments, we used a simulated process with $\g=0.6$ and estimates (\ref{gh}) and (\ref{L}). To solve equation (\ref{gh}) and optimization problem (\ref{L}), we used a simple search over a finite set $\{h_k\}_{k=1}^N=\{k/N\}_{k=1}^N$. We used $N=300$ for (\ref{gh}) and $N=30$ for (\ref{L}). Increasing $N$ further does not improve the results but slows down the calculation. It appears that estimation of $\g$ is numerically more challenging than estimation of $\s$ using (\ref{ourx}) with known $\g$. In our experiments, we observed that the criterion function in (\ref{L}) depends on $h$ smoothly, and its dependence on $h$ for each particular Monte-Carlo trial is represented by a U-shaped smooth convex function. However, the minimum point of this function deviates significantly across Monte-Carlo trials, especially in the case of low-frequency sampling. High-frequency sampling is required to reduce the error $\w\g-\g$. Table \ref{tab2} shows the parameters of the error $\w\g-\g$. We found that these parameters are quite robust with respect to changes of the other parameters of the simulated process. \subsubsection*{Estimation of $\s$ using (\ref{L})} The solution of optimization problem (\ref{L}) gives an estimate of $\s$, in addition to an estimate of $\g$, in the setting with unknown $\s$, via (\ref{wvol}). This gives a method for estimation of $\s$ that can be an alternative to estimator (\ref{ourx}).
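One possible numerical implementation of the grid search for (\ref{L}) combined with (\ref{wvol}) can be sketched as follows. This is an illustrative sketch, not the code used for the experiments. The sanity check at the end replaces the noisy increments by deterministic increments $c\,y^{1/2}$, so that the variance criterion vanishes exactly at $h=\g=1/2$ and the search recovers this value.

```python
import numpy as np

def estimate_gamma_sigma(y, delta, grid_size=30):
    """Grid search for (L): choose h in [0,1] minimizing the sample variance
    of v_{h,k} = log(1 + eta_{h,k}^2); then recover sigma via (wvol)."""
    y = np.asarray(y, dtype=float)
    best = None
    for h in np.linspace(0.0, 1.0, grid_size + 1):
        eta = np.diff(y) / y[:-1] ** h
        v = np.log1p(eta ** 2)
        score = np.sum((v - v.mean()) ** 2)
        if best is None or score < best[0]:
            best = (score, h, v.mean())
    _, gamma_hat, v_bar = best
    return gamma_hat, np.sqrt(v_bar / delta)

# Noiseless sanity check: for y_{k+1} = y_k + c * y_k**0.5 the criterion is
# (up to rounding) zero at h = 1/2, so the grid search recovers gamma = 1/2,
# and (wvol) returns approximately c / sqrt(delta) = 0.1.
y = [1.0]
for _ in range(100):
    y.append(y[-1] + 0.01 * y[-1] ** 0.5)
gamma_hat, sigma_hat = estimate_gamma_sigma(y, delta=1.0 / 100)
```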
Table \ref{tab3} shows the parameters of the error $\w \s-\s$. It appears that the RMSE is larger than for estimator (\ref{ourx}) applied with the correct $h=\g$ and is of the same order as the RMSE for this estimator applied with $h\neq \g$, i.e., with a "miscalculated" $\g$. \begin{table}[ht] \vspace{-0.cm}\begin{center}(a) $\d=1/52$\\\begin{tabular} {|c|c|c|c|}\hline $\hphantom{xxxx}$ &$\hphantom{\Biggl|}$ $\sqrt{\EE \left|\w\s-\s\right|^2}$ &$\EE \left|\w\s-\s\right|$ &$\EE \left(\w\s-\s\right)$ \\ \hline $\g=0.5$, $h=0.5$ & 0.0312 & 0.0248 & 0.0034 \\ \hline $\g=0.4$, $h=0.5$ & 0.0458 & 0.0365 & 0.0281 \\ \hline $\g=0.6$, $h=0.5$ & 0.0358 & 0.0290 & -0.0183 \\ \hline $\g=0.7$, $h=0.5$ & 0.0495 & 0.0413 & -0.0370\\ \hline \end{tabular}\end{center} \begin{center}(b) $\d=1/250$\\\begin{tabular} {|c|c|c|c|}\hline $\hphantom{xxxx}$ &$\hphantom{\Biggl|}\sqrt{\EE \left|\w\s-\s\right|^2}$ &$\EE \left|\w\s-\s\right|$ &$\EE \left(\w\s-\s\right)$ \\ \hline $\g=0.5$, $h=0.5$ &0.0136 & 0.0109 & 0.0006 \\ \hline $\g=0.4$, $h=0.5$ & 0.0328 & 0.0272 & 0.0259 \\ \hline $\g=0.6$, $h=0.5$ &0.0269 & 0.0227 & -0.0215 \\ \hline $\g=0.7$, $h=0.5$ & 0.0468 & 0.0416 & -0.0414\\ \hline \end{tabular} \end{center}\vspace{0mm} \vspace{-0mm}\caption{Parameters of the error $\w\s-\s$ for $\w\s$ obtained from estimates (\ref{ourx}) with $\d=1/52$ and $\d=1/250$. In the first column, $\gamma$ is the "true" power used for simulation, and $h$ is the parameter of (\ref{ourx}) used for estimation; mismatching of $\g$ and $h$ leads to a larger bias and a larger estimation error.
} \label{tab1} \end{table} \begin{table}[ht] \vspace{-0.cm}\begin{center}\begin{tabular} {|c|c|c|c|c|c|c|}\hline $\hphantom{xxxx}$ &$\hphantom{\w{\Biggl|}}$ $\sqrt{\EE \left|\w\g-\g\right|^2}$& $\EE |\w\g-\g|$ &$\EE \left(\w\g-\g\right)$ &$\sqrt{\EE \left|\w\g-\g\right|^2}$ &$\EE \left|\w\g-\g\right|$ &$\EE (\w\g-\g)$\\ & for (\ref{gh}) & for (\ref{gh})& for (\ref{gh}) & for (\ref{L}) & for (\ref{L}) & for (\ref{L})\\ \hline $\d=1/250$ & 0.2078 &0.1736 & 0.1078 & 0.2304 &0.1946 & 0.1166 \\ \hline $\d=1/10,000$ & 0.0309 &0.0182 & 0.0039 & 0.0356 &0.0221 & 0.0042 \\ \hline $\d=1/20,000$ & 0.0222 &0.0109 & 0.0020 & 0.0483 &0.0294 & 0.0004 \\ \hline \end{tabular} \end{center}\vspace{0mm} \caption{Parameters of the error $\w\g-\g$ for the solution of (\ref{gh}) and (\ref{L}) with an unknown $\s$.} \label{tab2} \end{table} \begin{table}[ht] \vspace{-0.cm} \begin{center}\begin{tabular} {|c|c|c|c|}\hline $\hphantom{xxxx}$ &$\hphantom{\Biggl|}\sqrt{\EE \left|\w\s-\s\right|^2}$ &$\EE \left|\w\s-\s\right|$ &$\EE (\w\s-\s)$\\ \hline $\d=1/250$ & 0.0515& 0.0264 &0.0092\\ \hline $\d=1/10,000$ & 0.0063& 0.0038& 0.0001\\ \hline $\d=1/20,000$ & 0.0168& 0.0108& 0.00003 \\ \hline \end{tabular} \end{center}\vspace{0mm} \caption{Parameters of the error $\w\s-\s$ for $\w\s$ obtained from (\ref{wvol}) and (\ref{L}) with an unknown $\g$.} \label{tab3} \end{table} \newpage \subsection*{Comparison with the performance of other methods} S{\o}rensen (2000) and Zhou (2001) reported results of tests of a variety of estimators based on the maximum likelihood method or the method of moments for special cases of (\ref{CT}). These works considered simulated processes with a preselected structure for the drift term and with a low dimension of the vector of parameters. Due to the numerical challenges of the methods used, the number of Monte Carlo trials was relatively small in these works (100 trials in S{\o}rensen (2000) and 1,000 trials in Zhou (2001)).
S{\o}rensen (2000) considered model (\ref{CKLS}) with one fixed set of parameters $(a,b)$ for the drift, and Zhou (2001) considered model (\ref{CIR}) for a variety of the parameters $(a,b)$ for the drift. S{\o}rensen (2000) considered estimation of $(\s,\gamma)$ as well as estimation of the drift parameters, while Zhou (2001) considered estimation of $\s$ and of the drift parameters with fixed $\g=1/2$. The results for $\s$ in Table 5 from Zhou (2001), reported for $\d=1/500$, depend significantly on the choice of the drift parameters $(a,b)$ in (\ref{CIR}) (in our notation). The minimal RMSE for estimates of $\s$ among all pairs $(a,b)$ is of the same order as the RMSE reported in our Table \ref{tab1}(a) for $\d=1/250$ for the case of known $h$; for other choices of the drift, the RMSE in Table 5 from Zhou (2001) is much larger. Recall that the RMSE is smaller for smaller $\d$. The RMSE for $\s$ reported in Table II.1 from S{\o}rensen (2000) for $\d=1/500$ (in our notation) is approximately the same as in our Table \ref{tab3} for $\d=1/250$. The RMSE for $\g$ with $\d=1/500$ is, for some estimators, three times smaller in Table II.1 from S{\o}rensen (2000) than in our Table \ref{tab2} with $\d=1/250$. However, it may happen that the performance of the estimators in Table II.1 from S{\o}rensen (2000) is not robust with respect to different choices of the drift parameters, similarly to the case presented in Table 5 from Zhou (2001) for $\g=1/2$. On the other hand, our method allows a wide variety of drift models of almost unlimited dimension, and, as we found in some unreported experiments, the choice of a particular drift does not affect the performance of the estimator. \section{On the consistency of the method} Let us briefly describe the consistency of the method. Clearly, one cannot expect that real-life data such as market prices are generated by model (\ref{CT}). Hence we restrict our consideration to the error for simulated data.
The equations in continuous time used for our method are exact and hold almost surely for the continuous time underlying processes (\ref{CT}). Therefore, the only source of error is the time discretization error. This error is inevitable since the method requires pathwise evaluation of stochastic integrals. Let us briefly discuss the consistency of the method understood as convergence of the estimates to the true values as the sampling frequency increases, i.e. as $\d\to 0$. A rigorous analysis of the convergence of the time discretization requires significant analytical effort beyond the scope of this paper; see, e.g., Kloeden and Platen (1992) and Jourdain and Kohatsu-Higa (2011), where a review of the recent literature can be found. We leave this analysis for future research and give below a short sketch of two possible approaches. First, one can consider the Euler-Maruyama time discretization for the pair $(y,Y_h)$ as described for the numerical experiments above. In this case, $f$ and the sampling frequency $\d$ have to be such that a satisfactory approximation is achieved. In particular, by Theorem 9.6.2 from Kloeden and Platen (1992), p. 324, these conditions are satisfied for CIR models as well as for the case where $f(y(\cdot),t)=f(y(t),t)$. Some analysis and conditions for convergence in more general cases can be found in Jourdain and Kohatsu-Higa (2011). The numerical experiments described above demonstrate that the required convergence takes place for the equations with delay modelled there. Another option is to consider convergence of the method for $\d\to0$ given that $\YY(t_k)$ are constructed from the "true" entries $y(t)$. We presume here that it is possible to produce an arbitrarily close approximation of a continuous path $y(t)$ via Monte-Carlo simulation by increasing the simulation frequency.
Let $t_\d(t)$ be selected as the point $t_k$ such that $|t-t_k|=\min_p|t-t_p|$ (for definiteness, let it be the smaller of the two points if $t$ is in the middle of a sampling interval). Clearly, $\E \sup_{t\in [0,\tau]}|y(t_\d(t))-y(t)|^2\to 0$ as $\d\to 0$. Hence $\d\sum_{k= \mm0+1}^m y(t_k)^{2(\g-h)}\to \int^{\tau}_{\t} y(s)^{2(\g-h)}ds$ in probability as $\d\to 0$. Further, it can be shown that $\log|\YY_h(t_\d(t))|\to \log|Y_h(t)|$ in probability as $\d\to 0$. This leads to convergence of the estimates to their true values in probability. \section{Discussion} \begin{enumerate} \item The estimates listed in Section \ref{SecAppl} use neither $f$ nor the probability distribution of the process $\{y(t)\}$. In particular, they are invariant with respect to the choice of an equivalent probability measure. This is an attractive feature that makes it possible to consider models with a large number of parameters for the drift. \item It appears that estimation of the power index $\g$ with unknown $\s$ is numerically challenging and requires high-frequency sampling to reduce the error. Perhaps this can be improved using other modifications of (\ref{L}) and other estimates of the degree of nonlinearity in the implementation of Proposition \ref{lemmag2}. In particular, standard criteria of linearity for first order regressions could be used, and $L_2$-type criteria could be replaced by $L_p$-type criteria with $p\neq 2$. So far, we have been unable to find a way to reduce the error for lower sampling frequencies. We leave this for future research. \item Our approach does not cover the estimation of the drift $f$, which is a more challenging problem. However, the estimates for $(\s,\g)$ suggested above can be used to simplify statistical inference for $f$ by reducing the dimension of the numerical problems arising in the maximum likelihood method, methods of moments, or least squares estimators for $(f,\s,\g)$. This can be illustrated as follows.
\par Assume that $\g=1/2$ is given and that the evolution of $y(t)$ is described by the Cox-Ingersoll-Ross equation (\ref{CIR}) with $\t=0$. It is known that \baaa &&\E y(T)=b(1-e^{-aT})+e^{-aT}y(0),\nonumber\\ && \Var y(T)=\frac{\s^2 b}{2a}(1-e^{-aT})^2+\frac{\s^2}{a}e^{-aT}(1-e^{-aT})y(0). \label{V}\eaaa (See, e.g., Gourieroux and Monfort (2013).) This system can be solved with respect to $(a,b)$ given that $\E y(T)$ and $\Var y(T)$ are estimated by their sample values and $\s$ is estimated as suggested above. \item The paper focuses on the case where $\s(t)$ is constant. However, some results can be extended to the case of time-dependent and random $\s(t)$. For example, the proofs given above imply that $\int^{t_m}_{\t} \s(s)^2ds$ can be estimated as \baaa \ww\s_{\g,\g}^2= \sum_{k= \mm0+1}^m \log (1+\eta_{\g,k}^2). \label{oury} \eaaa \end{enumerate} \subsection*{Acknowledgments} The author gratefully acknowledges the support provided by the Australian Research Council grant DP120100928.
\begin{document} \title[Witten Genus and String Complete Intersections]{Witten Genus and String Complete Intersections} \author{Qingtao Chen} \address{Q. Chen, Department of Mathematics, University of California, Berkeley, CA, 94720-3840} \email{chenqtao@math.berkeley.edu} \date{Jan 11, 2007} \author{Fei Han} \address{F. Han, \ Department of Mathematics, University of California, Berkeley, CA, 94720-3840} \email{feihan@math.berkeley.edu} \subjclass{Primary 53C20, 57R20; Secondary 53C80, 11Z05} \maketitle \begin{abstract}We prove that the Witten genus of certain nonsingular string complete intersections in products of complex projective spaces vanishes. Our result generalizes a known result of Landweber and Stong (cf. [HBJ]). \end{abstract} \section {Introduction} Let $M$ be a $4k$ dimensional closed oriented smooth manifold. Let $E$ be a complex vector bundle over $M$. For any complex number $t$, set $\Lambda_t(E)=\CC|M+tE+t^2\Lambda^2(E)+\cdots$ and $S_t(E)=\CC|M+tE+t^2S^2(E)+\cdots$, where for any integer $j\geq 1, \Lambda^j(E)$ (resp. $S^j(E)$) is the $j$-th exterior (resp. symmetric) power of $E$ (cf. [At]). Set $\widetilde{E}=E-\CC^{\mathrm{rk}(E)}$. Let $q=e^{\pi i \tau}$ with $\tau\in \mathbb{H}$, the upper half plane. Define after Witten ([W]) \be \Theta_q(E)=\bigotimes_{n\geq 1}S_{q^{2n}}(E).\ee The Witten genus ([W]) $\varphi_W(M)$ for $M$ is defined as \be \varphi_W(M)=\langle\widehat{A}(M)\mathrm{ch}(\Theta_q(\widetilde{TM\otimes \CC})), [M]\rangle,\ee where $\widehat{A}(M)$ is the $\widehat{A}$-characteristic class of $M$ and $[M]$ is the fundamental class of $M$. Let $\{\pm 2\pi \sqrt{-1} x_j, 1\leq j \leq 2k\} $ be the formal Chern roots of $TM\otimes \CC$. Then the Witten genus can be written by using Chern roots as (cf. [L2][L3])\be \varphi_W(M)=\langle\prod_{j=1}^{2k}x_j\frac{\theta'(0, \tau)}{\theta(x_j, \tau)}, [M]\rangle,\ee where $\theta(v, \tau)$ is the Jacobi theta function (see (2.1) below). The Witten genus was first introduced by E. 
Witten [W] in studying quantum field theory and can be viewed as the loop space analogue of the $\widehat{A}$ genus. According to the Atiyah-Singer index theorem, when $M$ is spin, $\varphi_W(M)\in \mathbb{Z}[[q]]$ (cf. [HBJ]). Moreover, when the spin manifold $M$ is string, i.e. $\frac{p_1(TM)}{2}=0$ ($\frac{p_1(TM)}{2}$ is a dimension 4 characteristic class, twice of which is the first integral Pontryagin class $p_1(TM)$), or, even weaker, when the first rational Pontryagin class $p_1(M)=0$, $\varphi_W(M)$ is a modular form of weight $2k$ with integral Fourier expansion (cf. [HBJ]). Let $V_{(d_{pq})}$ be a nonsingular $4k$ dimensional complete intersection in the product of complex projective spaces ${\CC P}^{n_1}\times {\CC P}^{n_2}\times \cdots \times {\CC P}^{n_s}$, which is dual to $\prod_{p=1}^t(\sum_{q=1}^sd_{pq}x_q)\in H^{2t}({\CC P}^{n_1}\times {\CC P}^{n_2}\times \cdots \times {\CC P}^{n_s}, \mathbb{Z})$, where $x_q\in H^2({\CC P}^{n_q}, \mathbb{Z}), 1\leq q \leq s,$ is the generator of $H^*({\CC P}^{n_q}, \mathbb{Z})$ and $d_{pq}, 1\leq p\leq t, 1\leq q \leq s,$ are integers. Let $P_q: {\CC P}^{n_1}\times {\CC P}^{n_2}\times \cdots \times {\CC P}^{n_s}\rightarrow {\CC P}^{n_q}, 1\leq q \leq s,$ be the $q$-th projection. Then $V_{(d_{pq})}$ is the intersection of the zero loci of smooth global sections of the line bundles $\otimes_{q=1}^{s}P_q^*(\mathcal{O}(d_{pq})), 1 \leq p \leq t$, where $\mathcal{O}(d_{pq})=\mathcal{O}(1)^{d_{pq}}$ is the $d_{pq}$-th power of the canonical line bundle $\mathcal{O}(1)$ over ${\CC P}^{n_q}$. We should point out that we have abused the terminology ``complete intersection'' from algebraic geometry. Actually, since we do not require that the integers $d_{pq}$ be nonnegative, $V_{(d_{pq})}$ might not be an algebraic variety. However, by transversality, $V_{(d_{pq})}$ can always be chosen to be smooth.
Putting relevant conditions (see Proposition 3.1 below) on the data $n_q, 1 \leq q \leq s$ and $d_{pq}, 1\leq p\leq t, 1\leq q \leq s$, the complete intersection $V_{(d_{pq})}$ can be made string. This systematically provides a wealth of interesting examples of string manifolds. This paper is devoted to the study of the Witten genus of string manifolds generated in this way. See also [GM] and [GO] for a study of elliptic genera of complete intersections and the Landau-Ginzburg/Calabi-Yau correspondence. Let $$D=\left[\begin{array}{cccc} d_{11}&d_{12}& \cdots &d_{1s}\\ d_{21}&d_{22}&\cdots &d_{2s}\\ \cdots &\cdots & \cdots &\cdots \\ d_{t1}&d_{t2}&\cdots&d_{ts} \end{array}\right].$$ Let $m_q$ be the number of nonzero elements in the $q$-th column of $D$. The main result of this paper is \begin{theorem} If $m_q+2 \leq n_q, 1\leq q \leq s$ and $V_{(d_{pq})}$ is string, then the Witten genus $\varphi_W(V_{(d_{pq})})$ vanishes. \end{theorem} Our result generalizes a known result, due to Landweber and Stong (cf. [HBJ]), that the Witten genus of string nonsingular complete intersections of hypersurfaces of degrees $d_1, \cdots, d_t$ in a single complex projective space vanishes. In [HBJ], the proof of the result of Landweber and Stong is roughly sketched by applying the properties of the sigma function. Also, in this special case, our result is broader than Landweber-Stong's result, since we do not require $d_1, \cdots, d_t$ to be all positive. Explicitly expanding $\Theta_q(\widetilde{TM\otimes \CC})$, we get \be\widehat{A}(M)\mathrm{ch}(\Theta_q(\widetilde{TM\otimes \CC}))=\widehat{A}(M)+\widehat{A}(M)(\mathrm{ch}(TM\otimes \CC)-4k)q^2+\cdots.\ee Therefore it is not hard to obtain the following corollary from Theorem 1.1.
\begin{corollary} If $m_q+2 \leq n_q, 1\leq q \leq s$ and $V_{(d_{pq})}$ is string, then $$\langle \widehat{A}(V_{(d_{pq})}), [V_{(d_{pq})}]\rangle=0, \ \langle \widehat{A}(V_{(d_{pq})})\mathrm{ch}(TV_{(d_{pq})}\otimes \CC), [V_{(d_{pq})}]\rangle=0.$$ \end{corollary} Let $M$ be a 12 dimensional oriented closed smooth manifold. The signature of $M$ can be expressed via the $\widehat{A}$-genus and the twisted $\widehat{A}$-genus as follows [AGW][L1]: \be L(M)=8\widehat{A}(M)\mathrm{ch}(T_\CC M)-32\widehat{A}(M). \ee Combining Corollary 1.1 and (1.5), we obtain \begin{corollary} If $m_q+2 \leq n_q, 1\leq q \leq s$, and $V_{(d_{pq})}$ is 12-dimensional and string, then the signature of $V_{(d_{pq})}$ vanishes. \end{corollary} Let $M$ be a 16 dimensional oriented closed smooth manifold. One has the following formula [CH]: \be L(M)\mathrm{ch}(T_\CC M)=-2048\{\widehat{A}(M)\mathrm{ch}(T_\CC M)-48\widehat{A}(M)\}.\ee Combining Corollary 1.1 and (1.6), for the twisted signature $\mathrm{Sig}(M, T)\triangleq \langle L(TM)\mathrm{ch}(T_\CC M), [M]\rangle$, we obtain \begin{corollary} If $m_q+2 \leq n_q, 1\leq q \leq s$, and $V_{(d_{pq})}$ is 16-dimensional and string, then the twisted signature $\mathrm{Sig}(V_{(d_{pq})}, T)$ vanishes. \end{corollary} In the following, we review some necessary preliminaries in Section 2 and prove Theorem 1.1 in Section 3. \section{Some preliminaries} In this section, we review some tools and facts that we are going to apply in the proof of Theorem 1.1. First, let us review the relevant concepts and results on residues in complex geometry; see Chapter 5 of [GH] for details. Let $U$ be the ball $\{z\in {\CC}^s:\|z\|<\varepsilon\}$ and $f_1, \cdots, f_s\in \mathcal{O}(\overline{U})$ functions holomorphic in a neighborhood of the closure $\overline{U}$ of $U$. We assume that the $f_i(z)$ have the origin as an isolated common zero.
Set $$D_i=(f_i)=\mathrm{divisor\ of}\ f_i,$$ $$D=D_1+\cdots+D_s.$$ Let $$\omega=\frac{g(z)dz_1\wedge \cdots \wedge dz_s}{f_1(z)\cdots f_s(z)}$$ be a meromorphic $s$-form with polar divisor $D$. The {\bf residue} of $\omega$ at the origin is defined as $$ \mathrm{Res}_{\{0\}}\omega=\left(\frac{1}{2\pi \sqrt{-1}} \right)^s\int_\Gamma \omega,$$ where $\Gamma$ is the real $s$-cycle defined by $$\Gamma=\{z:|f_i(z)|=\varepsilon, 1 \leq i \leq s\}$$ and oriented by $$d(\mathrm{arg}f_1)\wedge\cdots \wedge d(\mathrm{arg} f_s)\geq0.$$ Let $M$ be a compact complex manifold of dimension $s$. Suppose that $D_1, \cdots, D_s$ are effective divisors whose intersection is a finite set of points. Let $D=D_1+\cdots+D_s.$ Let $\omega$ be a meromorphic $s$-form on $M$ with polar divisor $D$. For each point $P\in D_1\cap\cdots \cap D_s$, we may restrict $\omega$ to a neighborhood $U_P$ of $P$ and define the residue $\mathrm{Res}_P\omega$ as above. One has (cf. [GH], Chapter 5) \begin{lemma}(\bf Residue Theorem) $$\sum_{P\in D_1\cap\cdots \cap D_s}\mathrm{Res}_P\omega=0.$$ \end{lemma} We also need some knowledge of the Jacobi theta functions. Although we are going to use only one of them, for the sake of completeness, we list the definitions and transformation laws of all of them. The four Jacobi theta functions are defined as follows (cf.
[Ch]): \be\theta(v,\tau)=2q^{1/4}\sin(\pi v) \prod_{j=1}^\infty\left[(1-q^{2j})(1-e^{2\pi \sqrt{-1}v}q^{2j})(1-e^{-2\pi \sqrt{-1}v}q^{2j})\right]\ ,\ee \be \theta_1(v,\tau)=2q^{1/4}\cos(\pi v) \prod_{j=1}^\infty\left[(1-q^{2j})(1+e^{2\pi \sqrt{-1}v}q^{2j}) (1+e^{-2\pi \sqrt{-1}v}q^{2j})\right]\ ,\ee \be \theta_2(v,\tau)=\prod_{j=1}^\infty\left[(1-q^{2j}) (1-e^{2\pi \sqrt{-1}v}q^{2j-1})(1-e^{-2\pi \sqrt{-1}v}q^{2j-1})\right]\ ,\ee \be \theta_3(v,\tau)=\prod_{j=1}^\infty\left[(1-q^{2j}) (1+e^{2\pi \sqrt{-1}v}q^{2j-1})(1+e^{-2\pi \sqrt{-1}v}q^{2j-1})\right]\ ,\ee where $q=e^{\pi i \tau}$, $\tau \in \mathbb{H}$, the upper half plane, and $v\in \CC$. They are all holomorphic functions for $(v,\tau)\in \mathbb{C}\times \mathbb{H}$. Let $\theta^{'}(0,\tau)=\frac{\partial}{\partial v}\theta(v,\tau)|_{v=0}$. They satisfy the following relations (cf. [Ch]): \be \theta(v+1, \tau)=-\theta(v, \tau),\ \theta(v+\tau, \tau)=-{1\over q}e^{-2\pi i v}\theta(v, \tau),\ee \be \theta_1(v+1, \tau)=-\theta_1(v, \tau),\ \theta_1(v+\tau, \tau)={1\over q}e^{-2\pi i v}\theta_1(v, \tau),\ee \be \theta_2(v+1, \tau)=\theta_2(v, \tau),\ \theta_2(v+\tau, \tau)=-{1\over q}e^{-2\pi i v}\theta_2(v, \tau),\ee \be \theta_3(v+1, \tau)=\theta_3(v, \tau),\ \theta_3(v+\tau, \tau)={1\over q}e^{-2\pi i v}\theta_3(v, \tau).\ee From these relations it is not hard to deduce how the theta functions vary along the lattice $\Gamma=\{m+n\tau| m,n\in \mathbb{Z}\}$. We have \be \theta(v+m, \tau)=(-1)^m\theta(v, \tau)\ee and \be \begin{split} &\theta(v+n\tau, \tau)\\ =&-{1\over q}e^{-2\pi i(v+(n-1)\tau)}\theta(v+(n-1)\tau, \tau)\\ =&-{1\over q}e^{-2\pi i(v+(n-1)\tau)}\left(-{1\over q}\right)e^{-2\pi i(v+(n-2)\tau)}\theta(v+(n-2)\tau, \tau)\\ =&(-1)^n\frac{1}{q^n}e^{-2\pi i[(v+(n-1)\tau)+(v+(n-2)\tau)+\cdots +v]}\theta(v,\tau)\\ =&(-1)^n\frac{1}{q^n}e^{-2\pi i nv-\pi i n(n-1)\tau}\theta(v,\tau)\\ =&(-1)^ne^{-2\pi inv-\pi i n^2\tau}\theta(v,\tau).
\end{split} \ee Similarly, we have \be \theta_1(v+m, \tau)=(-1)^m\theta_1(v, \tau), \ \theta_1(v+n\tau, \tau)=e^{-2\pi inv-\pi i n^2\tau}\theta_1(v,\tau);\ee \be \theta_2(v+m, \tau)=\theta_2(v, \tau), \ \theta_2(v+n\tau, \tau)=(-1)^ne^{-2\pi inv-\pi i n^2\tau}\theta_2(v,\tau);\ee \be \theta_3(v+m, \tau)=\theta_3(v, \tau), \ \theta_3(v+n\tau, \tau)=e^{-2\pi inv-\pi i n^2\tau}\theta_3(v,\tau).\ee \section{Proof of Theorem 1.1} Let $i:V_{(d_{pq})}\rightarrow {\CC P}^{n_1}\times {\CC P}^{n_2}\times \cdots \times {\CC P}^{n_s}$ be the inclusion. It's not hard to see that \be i^* T_\mathbb{R}({\CC P}^{n_1}\times {\CC P}^{n_2}\times \cdots \times {\CC P}^{n_s})\cong TV_{(d_{pq})}\oplus i^*\left(\oplus_{p=1}^{t}(\otimes_{q=1}^{s}P_q^* \mathcal{O}(d_{pq}))\right),\ee where we forget the complex structure of the line bundles $\otimes_{q=1}^{s}P_q^* \mathcal{O}(d_{pq}), 1\leq p \leq t.$ Therefore for the total Stiefel-Whitney class, we have\be i^*w( T_\mathbb{R}({\CC P}^{n_1}\times {\CC P}^{n_2}\times \cdots \times {\CC P}^{n_s}))= w(TV_{(d_{pq})})\prod_{p=1}^{t}i^*w(\otimes_{q=1}^{s}P_q^* \mathcal{O}(d_{pq})),\ee or more precisely \be i^*\left(\prod_{q=1}^{s}(1+x_q)^{n_q+1}\right)\equiv w(TV_{(d_{pq})})\prod_{p=1}^{t}i^*\left(1+\sum_{q=1}^{s}d_{pq}x_q\right)\ \ \ \ \ \ \mathrm{mod}\, 2. 
\ee By (3.3), we can easily see that \be w_1(TV_{(d_{pq})})=0, \ee \be w_2(TV_{(d_{pq})})\equiv \sum_{q=1}^{s}\left(n_q+1-\sum_{p=1}^{t}d_{pq}\right)i^*x_q\ \ \ \ \mathrm{mod}\,2.\ee As for the total rational Pontryagin class, we have \be i^*p( T_\mathbb{R}({\CC P}^{n_1}\times {\CC P}^{n_2}\times \cdots \times {\CC P}^{n_s}))= p(TV_{(d_{pq})})\prod_{p=1}^{t}i^*p(\otimes_{q=1}^{s}P_q^* \mathcal{O}(d_{pq})),\ee or \be p(V_{(d_{pq})})=\prod_{q=1}^s(1+(i^*x_q)^2)^{n_q+1}\prod_{p=1}^t\left(1+\left(\sum_{q=1}^sd_{pq}i^*x_q\right)^2\right)^{-1}.\ee Hence we have \be \begin{split} p_1(V_{(d_{pq})})&=\sum_{q=1}^{s}(n_q+1)(i^*x_q)^2-\sum_{p=1}^{t}\left(\sum_{q=1}^sd_{pq}i^*x_q\right)^2\\ &=\sum_{q=1}^s(n_q+1-\sum_{p=1}^{t}d_{pq}^2)(i^*x_q)^2-\sum_{1\leq u, v \leq s, u\neq v}\left(\sum_{p=1}^{t}d_{pu}d_{pv}i^*x_ui^*x_v\right).\end{split}\ee Let $i_!: H^*(V_{(d_{pq})}, \mathbb{Q})\rightarrow H^{*+2t}({\CC P}^{n_1}\times {\CC P}^{n_2}\times \cdots \times {\CC P}^{n_s}, \mathbb{Q})$ be the push forward. Thus if $p_1(V_{(d_{pq})})=0$, then $$i_!p_1(V_{(d_{pq})})=i_!i^*\left(\sum_{q=1}^s(n_q+1-\sum_{p=1}^{t}d_{pq}^2)x_q^2-\sum_{1\leq u, v \leq s, u\neq v}\left(\sum_{p=1}^{t}d_{pu}d_{pv}x_ux_v\right)\right)=0,$$ i.e. $$\left(\prod_{p=1}^t\left(\sum_{q=1}^sd_{pq}x_q\right)\right)\left(\sum_{q=1}^s(n_q+1-\sum_{p=1}^{t}d_{pq}^2)x_q^2-\sum_{1\leq u, v \leq s, u\neq v}\left(\sum_{p=1}^{t}d_{pu}d_{pv}x_ux_v\right)\right)=0 $$ in $ H^{2t+4}({\CC P}^{n_1}\times {\CC P}^{n_2}\times \cdots \times {\CC P}^{n_s}, \mathbb{Q}). $ If $m_q+2 \leq n_q, 1\leq q \leq s$, then each $x_q$ occurs in the left hand side with exponent at most $m_q+2\leq n_q$, so no monomial is divisible by $x_q^{n_q+1}$; hence the left hand side must vanish not only as an element of the cohomology ring but also as a polynomial. Note that the polynomial ring is an integral domain. Therefore at least one of its factors must be zero. But $\prod_{p=1}^t\left(\sum_{q=1}^sd_{pq}x_q\right)$ is nonzero.
This means $$\sum_{q=1}^s(n_q+1-\sum_{p=1}^{t}d_{pq}^2)x_q^2-\sum_{1\leq u, v \leq s, u\neq v}(\sum_{p=1}^{t}d_{pu}d_{pv}x_ux_v)=0$$ and consequently the following identities hold, \be n_q+1-\sum_{p=1}^{t}d_{pq}^2=0, \ 1\leq q\leq s;\ee \be \sum_{p=1}^{t}d_{pu}d_{pv}=0,\ 1\leq u, v \leq s, u\neq v.\ee Note that \be n_q+1-\sum_{p=1}^{t}d_{pq}^2\equiv n_q+1-\sum_{p=1}^{t}d_{pq}\ \ \ \ \mathrm{mod}\,2,\ \ \ \ 1\leq q\leq s. \ee Hence, combining (3.5) with this congruence, (3.9) implies that $w_2(TV_{(d_{pq})})=0$. In summary, we have the following proposition. \begin{proposition} When $m_q+2 \leq n_q, 1\leq q \leq s$, $p_1(V_{(d_{pq})})=0$ implies that $V_{(d_{pq})}$ is spin. Therefore when $m_q+2 \leq n_q, 1\leq q \leq s$, $V_{(d_{pq})}$ is string if and only if one of the following equivalent conditions holds \newline (1) $p_1(V_{(d_{pq})})=0$; \newline (2) the following identities hold, \be n_q+1-\sum_{p=1}^{t}d_{pq}^2=0, \ 1\leq q\leq s, \ee \be \sum_{p=1}^{t}d_{pu}d_{pv}=0,\ 1\leq u, v \leq s, u\neq v;\ee \newline (3) in the matrix $$D=\left[\begin{array}{cccc} d_{11}&d_{12}& \cdots &d_{1s}\\ d_{21}&d_{22}&\cdots &d_{2s}\\ \cdots &\cdots & \cdots &\cdots \\ d_{t1}&d_{t2}&\cdots&d_{ts} \end{array}\right]$$ $\parallel\mathrm{col}_q D\parallel^2=n_q+1, 1 \leq q \leq s$, and any two columns are orthogonal to each other; or equivalently, $$D^tD=\mathrm{diag}(n_1+1, \cdots, n_s+1). $$ \end{proposition} With the above preparations, we are able to prove Theorem 1.1. $$ $$ \noindent{\it Proof of Theorem 1.1:} Let $[V_{(d_{pq})}]$ be the fundamental class of $V_{(d_{pq})}$ in $H_{4k}(V_{(d_{pq})}, \mathbb{Z})$.
Then according to (1.3) and the multiplicative property of the Witten genus, up to a constant scalar, \be \begin{split}&\varphi_W(V_{(d_{pq})})\\ =&\left(\left(\prod_{q=1}^{s}\left[\frac{i^*x_q}{\frac{\theta(i^*x_q,\tau)}{\theta'(0, \tau)}}\right]^{n_q+1}\right)\left(\prod_{p=1}^{t}\left[\frac{\sum_{q=1}^sd_{pq}i^*x_q}{\frac{\theta(\sum_{q=1}^sd_{pq}i^*x_q,\tau)}{\theta'(0, \tau)}}\right]^{-1}\right)\right)[V_{(d_{pq})}]\\ =&\left(\left(\prod_{q=1}^{s}\left[\frac{x_q}{\frac{\theta(x_q,\tau)}{\theta'(0, \tau)}}\right]^{n_q+1}\right)\left(\prod_{p=1}^{t}\left[\frac{1}{\frac{\theta(\sum_{q=1}^sd_{pq}x_q,\tau)}{\theta'(0, \tau)}}\right]^{-1}\right)\right)[{\CC P}^{n_1}\times {\CC P}^{n_2}\times \cdots \times {\CC P}^{n_s}]\\ =&\mathrm{coefficient\ of}\ x_1^{n_1}\cdots x_s^{n_s}\ \mathrm{in}\ \left(\frac{(\prod_{q=1}^{s}{x_q}^{n_q+1})\left(\prod_{p=1}^{t}\frac{\theta(\sum_{q=1}^sd_{pq}x_q,\tau)}{\theta'(0, \tau)}\right)}{\prod_{q=1}^{s}\left[\frac{\theta(x_q,\tau)}{\theta'(0, \tau)}\right]^{n_q+1}}\right)\\ =&\mathrm{Res}_{0}\left(\frac{\left(\prod_{p=1}^{t}\frac{\theta(\sum_{q=1}^sd_{pq}x_q,\tau)}{\theta'(0, \tau)}\right)dx_1\wedge\cdots \wedge dx_s}{\prod_{q=1}^{s}\left[\frac{\theta(x_q,\tau)}{\theta'(0, \tau)}\right]^{n_q+1}}\right).\\ \end{split}\ee Note that in (3.14), we have used Poincar\'{e} duality to deduce the second equality.
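The transformation laws (2.5), and hence the lattice behavior (2.9)-(2.10) applied repeatedly below, can be sanity-checked numerically from the product definition (2.1). A minimal sketch in Python (not part of the paper; the sample point and the truncation depth of the infinite product are arbitrary choices):

```python
import cmath

def theta(v, tau, terms=60):
    # theta(v,tau) = 2 q^{1/4} sin(pi v) prod_{j>=1} (1-q^{2j})(1-e^{2 pi i v} q^{2j})(1-e^{-2 pi i v} q^{2j}),
    # with q = e^{pi i tau}; the product is truncated after `terms` factors.
    # It converges rapidly since |q| < 1 for tau in the upper half plane.
    q = cmath.exp(cmath.pi * 1j * tau)
    w = cmath.exp(2 * cmath.pi * 1j * v)
    prod = complex(1)
    for j in range(1, terms + 1):
        q2j = q ** (2 * j)
        prod *= (1 - q2j) * (1 - w * q2j) * (1 - q2j / w)
    # q^{1/4} is taken as e^{pi i tau / 4} to fix the branch of the root
    return 2 * cmath.exp(cmath.pi * 1j * tau / 4) * cmath.sin(cmath.pi * v) * prod

v, tau = 0.23 + 0.11j, 0.37 + 1.5j   # arbitrary sample point with Im(tau) > 0
q = cmath.exp(cmath.pi * 1j * tau)

# (2.5): theta(v+1) = -theta(v)  and  theta(v+tau) = -(1/q) e^{-2 pi i v} theta(v)
assert abs(theta(v + 1, tau) + theta(v, tau)) < 1e-10
lhs = theta(v + tau, tau)
rhs = -(1 / q) * cmath.exp(-2 * cmath.pi * 1j * v) * theta(v, tau)
assert abs(lhs - rhs) < 1e-10 * abs(rhs)
```

The same loop confirms the lattice formula (2.9)-(2.10) for small $m, n$ by iterating the two relations, which is exactly how the derivation above proceeds.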
Set $$g(x_1, \cdots, x_s)=\prod_{p=1}^{t}\frac{\theta(\sum_{q=1}^sd_{pq}x_q,\tau)}{\theta'(0, \tau)}, \ \ \ \ \ \ f_q(x_q)=\left[\frac{\theta(x_q,\tau)}{\theta'(0, \tau)}\right]^{n_q+1}, 1\leq q \leq s,$$ and $$\omega=\frac{g(x_1, \cdots, x_s) dx_1\wedge\cdots \wedge dx_s}{f_1(x_1)\cdots f_s(x_s)}.$$ Then up to a constant scalar, \be \varphi_W(V_{(d_{pq})})=\mathrm{Res}_{(0, 0, \cdots, 0)}\omega.\ee By (2.9), \be \begin{split} &g(x_1+1, x_2, \cdots, x_s)\\ =&\prod_{p=1}^{t}\frac{\theta(\sum_{q=1}^sd_{pq}x_q+d_{p1},\tau)}{\theta'(0, \tau)}\\ =&(-1)^{d_{11}+\cdots+d_{t1}}\prod_{p=1}^{t}\frac{\theta(\sum_{q=1}^sd_{pq}x_q,\tau)}{\theta'(0, \tau)}\end{split}\ee and $$f_1(x_1+1)=\left[\frac{\theta(x_1+1,\tau)}{\theta'(0, \tau)}\right]^{n_1+1}=(-1)^{n_1+1}\left[\frac{\theta(x_1,\tau)}{\theta'(0, \tau)}\right]^{n_1+1}.$$ Thus $\frac{g(x_1+1, \cdots, x_s) }{f_1(x_1+1)\cdots f_s(x_s)}=(-1)^{(d_{11}+\cdots+d_{t1})-(n_1+1)}\frac{g(x_1, \cdots, x_s) }{f_1(x_1)\cdots f_s(x_s)}.$ Note that by (3.12), $$(d_{11}+\cdots+d_{t1})-(n_1+1)\equiv (d_{11}^2+\cdots+d_{t1}^2)-(n_1+1)=0\ \mathrm{mod}\,2.$$ Thus one obtains that $\frac{g(x_1+1, \cdots, x_s) }{f_1(x_1+1)\cdots f_s(x_s)}=\frac{g(x_1, \cdots, x_s) }{f_1(x_1)\cdots f_s(x_s)}.$ Similarly, we have \be \frac{g(x_1, \cdots,x_q+1, \cdots, x_s) }{f_1(x_1)\cdots f_q(x_q+1)\cdots f_s(x_s)}=\frac{g(x_1, \cdots, x_s) }{f_1(x_1)\cdots f_s(x_s)}, 1\leq q \leq s.\ee On the other hand, by (2.9) and (2.10), \be \begin{split}&g(x_1+\tau, x_2, \cdots, x_s)\\=&\prod_{p=1}^{t}\frac{\theta(\sum_{q=1}^sd_{pq}x_q+d_{p1}\tau,\tau)}{\theta'(0, \tau)}\\ =&\prod_{p=1}^{t}(-1)^{d_{p1}}e^{-2\pi i d_{p1}(\sum_{q=1}^sd_{pq}x_q)-\pi i d_{p1}^2\tau}\frac{\theta(\sum_{q=1}^sd_{pq}x_q,\tau)}{\theta'(0, \tau)}\\ =&(-1)^{d_{11}+\cdots+d_{t1}}e^{-2\pi i\sum_{p=1}^{t}d_{p1}(\sum_{q=1}^sd_{pq}x_q)-\pi i\tau (\sum_{p=1}^{t}d_{p1}^2)}\frac{\theta(\sum_{q=1}^sd_{pq}x_q,\tau)}{\theta'(0, \tau)}\\ =&(-1)^{d_{11}+\cdots+d_{t1}}e^{-2\pi 
i\sum_{p=1}^{t}d_{p1}(\sum_{q=1}^sd_{pq}x_q)-\pi i\tau (\sum_{p=1}^{t}d_{p1}^2)}g(x_1, x_2, \cdots, x_s)\\ \end{split}\ee and \be \begin{split} &f_1(x_1+\tau)\\ =&\left[\frac{\theta(x_1+\tau,\tau)}{\theta'(0, \tau)}\right]^{n_1+1}\\ =&\left[-e^{-2\pi i x_1-\pi i\tau}\frac{\theta(x_1,\tau)}{\theta'(0, \tau)}\right]^{n_1+1}\\ =&(-1)^{n_1+1}e^{-2\pi i (n_1+1)x_1-\pi i\tau(n_1+1)}\left[\frac{\theta(x_1,\tau)}{\theta'(0, \tau)}\right]^{n_1+1}\\ =&(-1)^{n_1+1}e^{-2\pi i (n_1+1)x_1-\pi i\tau(n_1+1)}f_1(x_1).\end{split}\ee Therefore \be \begin{split}&\frac{g(x_1+\tau, \cdots, x_s) }{f_1(x_1+\tau)\cdots f_s(x_s)}\\ =& (-1)^{d_{11}+\cdots+d_{t1}-n_1-1}e^{-2\pi i\sum_{p=1}^{t}d_{p1}(\sum_{q=1}^sd_{pq}x_q)-\pi i\tau (\sum_{p=1}^{t}d_{p1}^2)+2\pi i (n_1+1)x_1+\pi i\tau(n_1+1)}\\ &\cdot \frac{g(x_1, \cdots, x_s) }{f_1(x_1)\cdots f_s(x_s)}.\end{split}\ee However \be d_{11}+\cdots+d_{t1}-n_1-1\equiv d_{11}^2+\cdots+d_{t1}^2-n_1-1 \ \mathrm{mod} 2 \ee and \be \begin{split}&-2\pi i\sum_{p=1}^{t}d_{p1}\left(\sum_{q=1}^sd_{pq}x_q\right)-\pi i\tau (\sum_{p=1}^{t}d_{p1}^2)+2\pi i (n_1+1)x_1+\pi i\tau(n_1+1)\\ =&\pi i \tau \left[(n_1+1)-\sum_{p=1}^{t}d_{p1}^2\right]+2\pi i\left[(n_1+1)-\sum_{p=1}^{t}d_{p1}^2\right]x_1-2\pi i\sum_{q=2}^{s}\left(\sum_{p=1}^{t}d_{p1}d_{pq}x_q\right).\end{split}\ee Therefore by (3.12) and (3.13), $$ d_{11}+\cdots+d_{t1}-n_1-1\equiv 0\ \mathrm{mod}\,2,$$ and $$-2\pi i\sum_{p=1}^{t}d_{p1}\left(\sum_{q=1}^sd_{pq}x_q\right)-\pi i\tau (\sum_{p=1}^{t}d_{p1}^2)+2\pi i (n_1+1)x_1+\pi i\tau(n_1+1)=0.$$ Consequently, by (3.20), we obtain that \be\frac{g(x_1+\tau, \cdots, x_s) }{f_1(x_1+\tau)\cdots f_s(x_s)}=\frac{g(x_1, \cdots, x_s) }{f_1(x_1)\cdots f_s(x_s)}.\ee Similarly, one also obtains that \be \frac{g(x_1, \cdots,x_q+\tau, \cdots, x_s) }{f_1(x_1)\cdots f_q(x_q+\tau)\cdots f_s(x_s)}=\frac{g(x_1, \cdots, x_s) }{f_1(x_1)\cdots f_s(x_s)}, \ 1\leq q \leq s. 
\ee Therefore from (3.17) and (3.24), we see that $\omega$ can be viewed as a meromorphic $s$-form defined on the product of $s$ copies of the torus, $\left(\CC/\Gamma\right)^s$, which is a compact complex manifold. $\theta(v, \tau)$ has the lattice points $m+n\tau$, $m,n\in \mathbb{Z}$, as its simple zeros [Ch]. We therefore see that $\omega$ has polar divisors $\{0\}\times \left(\CC/\Gamma\right)^{s-1}, \left(\CC/\Gamma\right)\times \{0\}\times \left(\CC/\Gamma\right)^{s-2}, \cdots, \left(\CC/\Gamma\right)^{s-1}\times \{0\}$. So $(0, 0, \cdots, 0)$ is the unique intersection point of these polar divisors. Therefore by the residue theorem on compact complex manifolds, we directly deduce that $\mathrm{Res}_{(0, 0, \cdots, 0)}\omega=0$. By (3.15), we obtain that $\varphi_W(V_{(d_{pq})})=\mathrm{Res}_{(0, 0, \cdots, 0)}\omega=0$. Q.E.D. \section{Acknowledgements} F. Han is grateful to Professor Peter Teichner for many helpful discussions. He also thanks Professor Kefeng Liu for inspiring suggestions. Q. Chen is grateful to Professor Nicolai Reshetikhin for his interest and support. Thanks also go to Professor Friedrich Hirzebruch, Professor Stephan Stolz and Professor Weiping Zhang for their interest and many discussions with us. We would like to thank Professor Michael Joachim and Professor Serge Ochanine for inspiring communications with us. The paper was finished when the second author was visiting the Max-Planck-Institut f$\ddot{\mathrm{u}}$r Mathematik at Bonn.
\begin{document} \title{Fixed Point Sets in Digital Topology, 1} \author{Laurence Boxer \thanks{ Department of Computer and Information Sciences, Niagara University, Niagara University, NY 14109, USA; and Department of Computer Science and Engineering, State University of New York at Buffalo email: boxer@niagara.edu } \and{P. Christopher Staecker \thanks{ Department of Mathematics, Fairfield University, Fairfield, CT 06823-5195, USA email: cstaecker@fairfield.edu} } } \date{} \maketitle \begin{abstract} In this paper, we examine some properties of the fixed point set of a digitally continuous function. The digital setting requires new methods that are not analogous to those of classical topological fixed point theory, and we obtain results that often differ greatly from standard results in classical topology. We introduce several measures related to fixed points for continuous self-maps on digital images, and study their properties. Perhaps the most important of these is the fixed point spectrum $F(X)$ of a digital image: that is, the set of all numbers that can appear as the number of fixed points for some continuous self-map. We give a complete computation of $F(C_n)$ where $C_n$ is the digital cycle of $n$ points. For other digital images, we show that, if $X$ has at least 4 points, then $F(X)$ always contains the numbers 0, 1, 2, 3, and the cardinality of $X$. We give several examples, including $C_n$, in which $F(X)$ does not equal $\{0,1,\dots,\#X\}$. We examine how fixed point sets are affected by rigidity, retraction, deformation retraction, and the formation of wedges and Cartesian products. We also study how fixed point sets in digital images can be arranged; e.g., in some cases the fixed point set is always connected. \end{abstract} \section{Introduction} Digital images are often used as mathematical models of real-world objects. 
A digital model of the notion of a continuous function, borrowed from the study of topology, is often useful for the study of digital images. However, a digital image is typically a finite, discrete point set. Thus, it is often necessary to study digital images using methods not directly derived from topology. In this paper, we introduce several such methods to study properties of the fixed point set of a continuous self-map on a digital image. \section{Preliminaries} Let $\N$ denote the set of natural numbers; and $\Z$, the set of integers. $\#X$ will be used for the number of elements of a set~$X$. \subsection{Adjacencies} A digital image is a pair $(X,\kappa)$ where $X \subset \Z^n$ for some $n$ and $\kappa$ is an adjacency on $X$. Thus, $(X,\kappa)$ is a graph for which $X$ is the vertex set and $\kappa$ determines the edge set. Usually, $X$ is finite, although there are papers that consider infinite $X$. Usually, adjacency reflects some type of ``closeness" in $\Z^n$ of the adjacent points. When these ``usual" conditions are satisfied, one may consider the digital image as a model of a black-and-white ``real world" digital image in which the black points (foreground) are represented by the members of $X$ and the white points (background) by members of $\Z^n \setminus X$. We write $x \adj_{\kappa} y$, or $x \adj y$ when $\kappa$ is understood or when it is unnecessary to mention $\kappa$, to indicate that $x$ and $y$ are $\kappa$-adjacent. Notations $x \adjeq_{\kappa} y$, or $x \adjeq y$ when $\kappa$ is understood, indicate that $x$ and $y$ are $\kappa$-adjacent or are equal. The most commonly used adjacencies are the $c_u$ adjacencies, defined as follows. Let $X \subset \Z^n$ and let $u \in \Z$, $1 \le u \le n$. 
Then for points \[x=(x_1, \ldots, x_n) \neq (y_1,\ldots,y_n)=y\] we have $x \adj_{c_u} y$ if and only if \begin{itemize} \item for at most $u$ indices $i$ we have $|x_i - y_i| = 1$, and \item for all indices $j$, $|x_j - y_j| \neq 1$ implies $x_j=y_j$. \end{itemize} The $c_u$-adjacencies are often denoted by the number of adjacent points a point can have in the adjacency. E.g., \begin{itemize} \item in $\Z$, $c_1$-adjacency is 2-adjacency; \item in $\Z^2$, $c_1$-adjacency is 4-adjacency and $c_2$-adjacency is 8-adjacency; \item in $\Z^3$, $c_1$-adjacency is 6-adjacency, $c_2$-adjacency is 18-adjacency, and $c_3$-adjacency is 26-adjacency. \end{itemize} We will often discuss the \emph{digital $n$-cycle}, the $n$-point image $C_n =\{x_0,\dots,x_{n-1}\}$ in which each $x_i$ is adjacent only to $x_{i+1}$ and $x_{i-1}$, and subscripts are always read modulo $n$. The literature also contains several adjacencies to exploit properties of Cartesian products of digital images. These include the following. \begin{definition} \cite{Berge} Let $(X,\kappa)$ and $(Y, \lambda)$ be digital images. The {\em normal product adjacency} or {\em strong adjacency} on $X \times Y$, denoted $NP(\kappa, \lambda)$, is defined as follows. Given $x_0, x_1 \in X$, $y_0, y_1 \in Y$ such that \[p_0=(x_0,y_0) \neq (x_1,y_1)=p_1, \] we have $p_0 \adj_{NP(\kappa,\lambda)} p_1$ if and only if one of the following is valid: \begin{itemize} \item $x_0 \adj_{\kappa} x_1$ and $y_0=y_1$, or \item $x_0 = x_1$ and $y_0 \adj_{\lambda} y_1$, or \item $x_0 \adj_{\kappa} x_1$ and $y_0 \adj_{\lambda} y_1$. \end{itemize} \end{definition} \begin{thm} {\rm \cite{BK12}} \label{NPm+n} Let $X \subset \Z^m$, $Y \subset \Z^n$. Then \[(X \times Y, NP(c_m,c_n)) = (X \times Y, c_{m+n}), \] i.e., the $c_{m+n}$-adjacency on $X \times Y \subset \Z^{m+n}$ coincides with the normal product adjacency based on $c_m$ and $c_n$. \end{thm} Building on the normal product adjacency, we have the following.
\begin{definition} {\rm \cite{BxNormal}} Given $u, v \in \N$, $1 \le u \le v$, and digital images $(X_i, \kappa_i)$, $1 \le i \le v$, let $X = \Pi_{i=1}^v X_i$. The adjacency $NP_u(\kappa_1, \ldots, \kappa_v)$ for $X$ is defined as follows. Given $x_i, x_i' \in X_i$, let \[p=(x_1, \ldots, x_v) \neq (x_1', \ldots, x_v')=q. \] Then $p \adj_{NP_u(\kappa_1, \ldots, \kappa_v)} q$ if for at least 1 and at most $u$ indices $i$ we have $x_i \adj_{\kappa_i} x_i'$ and for all other indices $j$ we have $x_j = x_j'$. \end{definition} Notice $NP(\kappa, \lambda)= NP_2(\kappa, \lambda)$ \cite{BxNormal}. \begin{comment} \begin{definition} {\rm \cite{HararyTrauth}} Given digital images $(X_i, \kappa_i)$, $1 \le i \le v$, let $X = \Pi_{i=1}^v X_i$, the {\em tensor product adjacency} $T(\kappa_1, \ldots, \kappa_v)$ on $X$ is defined as follows. Given $x_i, x_i' \in X_i$, let \[p=(x_1, \ldots, x_v) \neq (x_1', \ldots, x_v')=q. \] Then $p \adj_{T(\kappa_1, \ldots, \kappa_v)} q$ if and only if for all~$i$, $x_i \adj_{\kappa_i} x_i'$. \end{definition} \end{comment} \subsection{Digitally continuous functions} We denote by $\id$ or $\id_X$ the identity map $\id(x)=x$ for all $x \in X$. \begin{definition} {\rm \cite{Rosenfeld, Bx99}} Let $(X,\kappa)$ and $(Y,\lambda)$ be digital images. A function $f: X \to Y$ is {\em $(\kappa,\lambda)$-continuous}, or {\em digitally continuous} when $\kappa$ and $\lambda$ are understood, if for every $\kappa$-connected subset $X'$ of $X$, $f(X')$ is a $\lambda$-connected subset of $Y$. If $(X,\kappa)=(Y,\lambda)$, we say a function is {\em $\kappa$-continuous} to abbreviate ``$(\kappa,\kappa)$-continuous." \end{definition} \begin{thm} {\rm \cite{Bx99}} A function $f: X \to Y$ between digital images $(X,\kappa)$ and $(Y,\lambda)$ is $(\kappa,\lambda)$-continuous if and only if for every $x,y \in X$, if $x \adj_{\kappa} y$ then $f(x) \adjeq_{\lambda} f(y)$. 
\end{thm} \begin{thm} \label{composition} {\rm \cite{Bx99}} Let $f: (X, \kappa) \to (Y, \lambda)$ and $g: (Y, \lambda) \to (Z, \mu)$ be continuous functions between digital images. Then $g \circ f: (X, \kappa) \to (Z, \mu)$ is continuous. \end{thm} A {\em path} is a continuous function $r: [0,m]_{\Z} \to X$. We use the following notation. For a digital image $(X,\kappa)$, \[ C(X,\kappa) = \{f: X \to X \, | \, f \mbox{ is continuous}\}. \] \begin{definition} {\rm (\cite{Bx99}; see also \cite{Khalimsky})} \label{htpy-2nd-def} Let $X$ and $Y$ be digital images. Let $f,g: X \rightarrow Y$ be $(\kappa,\kappa')$-continuous functions. Suppose there is a positive integer $m$ and a function $h: X \times [0,m]_{\Z} \rightarrow Y$ such that \begin{itemize} \item for all $x \in X$, $h(x,0) = f(x)$ and $h(x,m) = g(x)$; \item for all $x \in X$, the induced function $h_x: [0,m]_{\Z} \rightarrow Y$ defined by \[ h_x(t) ~=~ h(x,t) \mbox{ for all } t \in [0,m]_{\Z} \] is $(c_1,\kappa')-$continuous. That is, $h_x(t)$ is a path in $Y$. \item for all $t \in [0,m]_{\Z}$, the induced function $h_t: X \rightarrow Y$ defined by \[ h_t(x) ~=~ h(x,t) \mbox{ for all } x \in X \] is $(\kappa,\kappa')-$continuous. \end{itemize} Then $h$ is a {\em digital $(\kappa,\kappa')-$homotopy between} $f$ and $g$, and $f$ and $g$ are {\em digitally $(\kappa,\kappa')-$homotopic in} $Y$, denoted $f \simeq_{\kappa,\kappa'} g$ or $f \simeq g$ when $\kappa$ and $\kappa'$ are understood. If $(X,\kappa)=(Y,\kappa')$, we say $f$ and $g$ are {\em $\kappa$-homotopic} to abbreviate ``$(\kappa,\kappa)$-homotopic" and write $f \simeq_{\kappa} g$ to abbreviate ``$f \simeq_{\kappa,\kappa} g$". If there is a $\kappa$-homotopy between $\id_X$ and a constant map, we say $X$ is {\em $\kappa$-contractible}, or just {\em contractible} when $\kappa$ is understood. \end{definition} \begin{definition} Let $A \subseteq X$. 
A $\kappa$-continuous function $r: X \to A$ is a {\em retraction}, and {\em $A$ is a retract of $X$}, if $r(a)=a$ for all $a \in A$. If such a map $r$ satisfies $i \circ r \simeq_{\kappa} \id_X$ where $i: A \to X$ is the inclusion map, then $r$ is a {\em $\kappa$-deformation retraction} and $A$ is a {\em $\kappa$-deformation retract} of $X$. \end{definition} A topological space $X$ has the {\em fixed point property (FPP)} if every continuous $f: X \to X$ has a fixed point. A similar definition has appeared in digital topology: a digital image $(X,\kappa)$ has the {\em fixed point property (FPP)} if every $\kappa$-continuous $f: X \to X$ has a fixed point. However, this property turns out to be trivial, in the sense of the following. \begin{thm} {\rm \cite{BEKLL}} \label{BEKLLFPP} A digital image $(X,\kappa)$ has the FPP if and only if $\#X=1$. \end{thm} The proof of Theorem~\ref{BEKLLFPP} in~\cite{BEKLL} rests on the following lemma. \begin{lem} \label{howBEKLLFPP} Let $(X,\kappa)$ be a digital image, where $\#X > 1$. Let $x_0,x_1 \in X$ be such that $x_0 \adj_{\kappa} x_1$. Then the function $f: X \to X$ given by $f(x_0)=x_1$ and $f(x)=x_0$ for $x \neq x_0$ is $\kappa$-continuous and has no fixed points. \end{lem} A function $f: (X,\kappa) \to (Y,\lambda)$ is an {\em isomorphism} (called a {\em homeomorphism} in~\cite{Bx94}) if $f$ is a continuous bijection such that $f^{-1}$ is continuous. \section{Rigidity} We will say a function $f:X\to Y$ is \emph{rigid} when no continuous map is homotopic to $f$ except $f$ itself. This generalizes a definition in \cite{hmps}. When the identity map $\id:X\to X$ is rigid, we say $X$ is rigid. Many digital images are rigid, though it can be difficult to show directly that a given example is rigid. A computer search described in \cite{stae15} has shown that no rigid images in $\Z^2$ with 4-adjacency exist having fewer than 13 points, and no rigid images in $\Z^2$ with 8-adjacency exist having fewer than 10 points.
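The adjacency criterion for continuity and the fixed-point-free map of Lemma~\ref{howBEKLLFPP} are easy to experiment with computationally. A minimal sketch in Python (not part of the paper; the choice of the digital cycle $C_5$ and the dictionary encoding are ours), which also brute-forces the set of realizable fixed point counts:

```python
from itertools import product

# A digital image encoded as a vertex set with a symmetric adjacency
# relation; here the digital 5-cycle C_5 (an illustrative choice).
n = 5
X = range(n)
adj = {(i, j) for i in X for j in X if (i - j) % n in (1, n - 1)}

def is_continuous(f):
    # f is kappa-continuous iff x ~ y implies f(x) = f(y) or f(x) ~ f(y).
    return all(f[x] == f[y] or (f[x], f[y]) in adj for (x, y) in adj)

# The map of the lemma: for adjacent x0 ~ x1, send x0 to x1 and every
# other point to x0; it is continuous and has no fixed points.
x0, x1 = 0, 1
f = {x: (x1 if x == x0 else x0) for x in X}
assert is_continuous(f)
assert all(f[x] != x for x in X)   # C_5 does not have the FPP

# Brute force the fixed point spectrum of C_5 over all 5^5 self-maps.
spectrum = {sum(1 for x in X if g[x] == x)
            for vals in product(X, repeat=n)
            for g in [dict(zip(X, vals))]
            if is_continuous(g)}
print(sorted(spectrum))   # the fixed point counts realizable on C_5
```

Since the criterion only inspects adjacent pairs, the same encoding works for any finite digital image once `X` and `adj` are replaced.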
We will demonstrate some methods for showing that a given image is rigid. For example, the digital image in Figure \ref{rigidwedge} is rigid, as shown below in Example \ref{wedgeOfCycles}. \begin{figure} \[ \begin{tikzpicture} \draw[densely dotted] (-3,-1) grid (3,1); \foreach \x in {-2,-1,2,1} { \node at (\x,-1) [vertex] {}; \node at (\x,1) [vertex] {}; } \foreach \x in {-3,0,3} \node at (\x,0) [vertex] {}; \draw (-3,0)--(-2,1)--(-1,1)--(0,0)--(1,1)--(2,1)--(3,0); \draw (-3,0)--(-2,-1)--(-1,-1)--(0,0)--(1,-1)--(2,-1)--(3,0); \end{tikzpicture} \] \caption{A rigid image in $\Z^2$ with 8-adjacency\label{rigidwedge}} \end{figure} An immediate consequence of the definition of rigidity is the following. \begin{prop} Let $(X,\kappa)$ be a rigid digital image such that $\#X > 1$. Then $X$ is not $\kappa$-contractible. \end{prop} Rigidity of functions is preserved when composing with an isomorphism, as the following theorems demonstrate: \begin{thm} \label{isoOfRigidIsRigid} Let $f:X\to Y$ be rigid and $g:Y \to Z$ be an isomorphism. Then $g \circ f:X \to Z$ is rigid. \end{thm} \begin{proof} Suppose otherwise. Then there is a homotopy $h: X \times [0,m]_{\Z} \to Z$ from $g \circ f$ to a map $G: X \to Z$ such that $g \circ f \neq G$. Then by Theorem~\ref{composition}, $g^{-1} \circ h: X \times [0,m]_{\Z} \to X$ is a homotopy from $f$ to $g^{-1} \circ G$, and since $g^{-1}$ is one-to-one, $f \neq g^{-1} \circ G$. This contradiction of the assumption that $f$ is rigid completes the proof. \end{proof} \begin{thm} \label{RigidOfIsoIsRigid} Let $f:X\to Y$ be rigid. Let $g: W \to X$ be an isomorphism. Then $f \circ g$ is rigid. \end{thm} \begin{proof} Suppose otherwise. Then there is a homotopy $h: W \times [0,m]_{\Z} \to Y$ from $f \circ g$ to some $G: W \to Y$ such that $G \neq f \circ g$. Thus, for some $w \in W$, $G(w) \neq f \circ g(w)$. Now consider the function $h': X \times [0,m]_{\Z} \to Y$ defined by $h'(x,t) = h(g^{-1}(x),t)$. 
By Theorem~\ref{composition}, $h'$ is a homotopy from $f \circ g \circ g^{-1} = f$ to $G \circ g^{-1}$. Since \[ f \circ g (w) \neq G(w) = (G \circ g^{-1})(g(w)), \] the homotopic functions $f$ and $G \circ g^{-1}$ differ at $g(w)$, contrary to the assumption that $f$ is rigid. The assertion follows. \end{proof} As an immediate corollary, we obtain: \begin{cor}\label{rigidiso} If $f:X\to Y$ is an isomorphism and one of $X$ and $Y$ is rigid, then $f$ is rigid. \end{cor} \begin{proof} In the case where $X$ is rigid, the identity map $\id_X$ is rigid. Then by Theorem \ref{RigidOfIsoIsRigid} we have $f \circ \id_X= f$ is rigid. In the case where $Y$ is rigid, similarly by Theorem \ref{isoOfRigidIsRigid} we have $\id_Y \circ f=f$ is rigid. \end{proof} The corollary above can be stated equivalently as follows: \begin{cor} A digital image $X$ is rigid if and only if every digital image $Y$ that is isomorphic to $X$ is rigid. \end{cor} It is easy to see that no connected digital image in $\Z$ of more than one point is rigid: \begin{prop} If $X \subset \Z$ is a connected digital image with $c_1$ adjacency and $\#X>1$, then $X$ is not rigid. \end{prop} \begin{proof} A connected subset of $\Z$ having more than one point takes one of the forms \[ [a,b]_{\Z},~\{z \in \Z \, | \, z \ge a\},~\{z \in \Z \, | \, z \le b\},~\Z. \] In all of these cases, it is easily seen that there is a deformation retraction of $X$ to a proper subset of $X$. Therefore, $X$ is not rigid. \end{proof} We also show that a normal product of images is rigid if and only if all of its factors are rigid. \begin{thm} \label{rigidProd} Let $(X_i,\kappa_i)$ be digital images for each $1 \le i \le v$, and $(\prod_{i=1}^v X_i, \kappa)$ be the product image, where $\kappa = NP_u(\kappa_1,\ldots, \kappa_v)$ for some $1 \le u \le v$. Then $X$ is rigid if and only if $X_i$ is rigid for each $i$. \end{thm} \begin{proof} First we assume $X$ is rigid, and we will show that $X_i$ is rigid for each $i$.
For some $i$, let $h_i: X_i \times [0,m]_{\Z} \to X_i$ be a $\kappa_i$-homotopy from $\id_{X_i}$ to $f_i: X_i \to X_i$. Without loss of generality we may assume $m = 1$, since a homotopy of length $m$ is a concatenation of homotopies of length 1, and we will show that $h_i(x_i,1)=x_i$, and thus $f_i = \id_{X_i}$. The function $h: X \times [0,1]_{\Z} \to X$ defined by \[ h(x_1, \ldots, x_v, t) = (x_1, \ldots, x_{i-1},h_i(x_i,t), x_{i+1}, \dots, x_v), \] is a homotopy. Since $X$ is rigid we must have $h(x_1, \ldots, x_v, 1) = (x_1, \ldots, x_v)$, and this means $h_i(x_i,1)=x_i$ as desired. Now we prove the converse: assume each $X_i$ is rigid, and we will show $X$ is rigid. Let $p_i: X \to X_i$ be the projection function, \[p_i(y_1,\ldots, y_v)=y_i. \] For the sake of a contradiction, suppose $X$ is not rigid. Then there is a homotopy $h: (X,\kappa) \times [0,m]_{\Z} \to X$ between $\id_X$ and a function $g$ such that for some $y = (y_1,\ldots,y_v) \in X$, $g(y) \neq y$. Then for some index $j$, $p_j(y) \neq p_j(g(y))$. Let $J_j: X_j \to X$ be the function \[ J_j(x) = (y_1, \ldots, y_{j-1}, x, y_{j+1}, \ldots, y_v). \] Then $J_j$ is $(\kappa_j, \kappa)$-continuous, and $p_j$ is $(\kappa, \kappa_j)$-continuous~\cite{BxNormal,BxAlt}. Then the function $h': X_j \times [0,m]_{\Z} \to X_j$ defined by $h'(x,t)= p_j(h(J_j(x), t))$, is a homotopy from $\id_{X_j}$ to a function $g_j$, with \[ g_j(y_j) = p_j(h(J_j(y_j), m)) = p_j(h(y,m)) = p_j(g(y)) \neq p_j(y)= y_j, \] contrary to the assumption that $X_j$ is rigid. We conclude that $X$ is rigid. \end{proof} We have a similar result when $X$ is a disjoint union of digital images. Let $X$ be a digital image of the form $X=A\cup B$ where $A$ and $B$ are disjoint and no point of $A$ is adjacent to any point of $B$. We say $X$ is the disjoint union of $A$ and $B$, and we write $X=A\sqcup B$. \begin{thm} Let $X = A\sqcup B$. Then $X$ is rigid if and only if $A$ and $B$ are rigid. \end{thm} \begin{proof} First we assume that $X$ is rigid, and we will show that $A$ is rigid.
(It will follow from a similar argument that $B$ is rigid.) Let $f:A\to A$ be any self-map homotopic to $\id_A$, and we will show that $f=\id_A$. Define $g:X\to X$ by \[ g(x) = \begin{cases} f(x) \quad &\text{ if $x\in A$;} \\ x \quad &\text{ if $x \in B$.} \end{cases} \] Then $g$ is continuous and homotopic to $\id_X$, and since $X$ is rigid we must have $g=\id_X$, which means that $f=\id_A$. Now for the converse, assume that $A$ and $B$ are both rigid. Take some self-map $f:X\to X$ homotopic to $\id_X$, and we will show that $f=\id_X$. Since $f$ is homotopic to the identity, we must have $f(A)\subseteq A$ and $f(B)\subseteq B$. This is because there will always be a path from any point $x$ to $f(x)$ given by the homotopy from $\id_X$ to $f$. Thus if $x\in A$ we must also have $f(x)\in A$ since there are no paths from points of $A$ to points of $B$. Since $f(A)\subseteq A$ and $f(B)\subseteq B$, there are well-defined restrictions $f_A:A\to A$ and $f_B:B\to B$, and the homotopy from $\id_X$ to $f$ induces homotopies from $\id_A$ to $f_A$ and $\id_B$ to $f_B$. Since $A$ and $B$ are rigid we must have $f_A=\id_A$ and $f_B=\id_B$, and thus $f=\id_X$ as desired. \end{proof} Since every digital image is a disjoint union of its connected components, we have: \begin{cor}\label{componentsrigid} A digital image $X$ is rigid if and only if every connected component of $X$ is rigid. \end{cor} Let $X$ be some digital image of the form $X=A \cup B$, where $A \cap B$ is a single point $x_0$, and no point of $A$ is adjacent to any point of $B$ except $x_0$. We say $X=A \cup B$ is the {\em wedge of $A$ and $B$}, denoted $X =A\wedge B$, and $x_0$ is called the \emph{wedge point} of $A\wedge B$. We have the following. \begin{thm} \label{wedgeThm} If $X=A\wedge B$ and $A$ and $B$ are rigid, then $X$ is rigid. \end{thm} \begin{proof} Let $x_0$ be the wedge point of $A\wedge B$, and let $A_0$ and $B_0$ be the components of $A$ and $B$ which include $x_0$.
If $\#A_0=1$ or $\#B_0=1$, then the components of $A\wedge B$ are in one-to-one correspondence with the components of $A$ and $B$, and the result follows by Corollary \ref{componentsrigid}. Thus we assume $\#A_0>1$ and $\#B_0>1$. Let $h: A \wedge B \times [0,m]_{\Z} \to A \wedge B$ be a homotopy such that $h(x,0)=x$ for all $x \in A \wedge B$. Without loss of generality, $m=1$. If the induced map $h_1$ is not $\id_{X}$ then there is a point $x' \in X$ such that $h_1(x')=h(x',1) \neq x'$. Without loss of generality, $x' \in A$. Let $p_A: X \to A$ be the projection \[ p_A(x) = \left \{ \begin{array}{ll} x & \mbox{ for } x \in A; \\ x_0 & \mbox{ for } x \in B. \end{array} \right . \] Since $p_A \circ h$ is a homotopy from $\id_A$ to $p_A \circ h_1$, and $A$ is rigid, we have \begin{equation} \label{idA} p_A \circ h_1 = \id_A. \end{equation} Were $h_1(x') \in A$ then it would follow that \[ h_1(x') = p_A \circ h_1(x') = x', \] contrary to our choice of $x'$. Therefore we have $h_1(x') \in B \setminus \{x_0\}$. But $x' \adjeq h_1(x')$ by continuity, and since $h_1(x') \neq x'$ this means $x' \adj h_1(x')$, so $x'=x_0$. Since $A_0$ is connected and has more than 1 point, there exists $x_1 \in A$ such that $x_1 \adj_{\kappa} x_0$. By continuity, $h_1(x_1) \adjeq h_1(x_0) \in B \setminus \{x_0\}$; since no point of $A \setminus \{x_0\}$ is adjacent to any point of $B \setminus \{x_0\}$, this forces $h_1(x_1) \in B$, and therefore $p_A \circ h_1(x_1)=x_0 \neq x_1$. This contradicts statement~(\ref{idA}), so the assumption that $h_1$ is not $\id_{X}$ is incorrect, and the assertion follows. \end{proof} A {\em loop} is a continuous function $p: C_m \to X$. The converse of Theorem~\ref{wedgeThm} is not generally true. In \cite{hmps} it was mentioned (without proof) that a wedge of two long cycles is in general rigid. We give a specific example: \begin{exl} \label{wedgeOfCycles} Let $A$ and $B$ be non-contractible simple closed curves. Then $A$ and $B$ are non-rigid~\cite{hmps}. However, $X=A \wedge B$ is rigid.
E.g., using $c_2=8$-adjacency in $\Z^2$, let $A = $ \[ \{a_0=(0,0), a_1=(1,-1), a_2=(2,-1), a_3=(3,0), a_4=(2,1), a_5=(1,1)\} \] and let $B = $ \[ \{b_0=a_0, b_1=(-1,-1), b_2=(-2,-1), b_3=(-3,0), b_4=(-2,1), b_5=(-1,1)\}. \] By continuity, if there is a homotopy $h: X \times [0,m]_{\Z} \to X$ (without loss of generality, $m=1$) such that $h_0 = \id_{X}$ and $h(x,1) \neq x$ for some $x \in X$, then $h$ ``pulls''~\cite{hmps} every point of $A$ or of $B$ and therefore ``breaks'' one of the loops of $X$, a contradiction since ``breaking'' a loop is a discontinuity. Thus no such homotopy exists. $\Box$ \end{exl} \section{Homotopy fixed point spectrum} The paper \cite{bs19} gave a brief treatment of homotopy-invariant fixed point theory, defining two quantities $M(f)$ and $X(f)$, respectively the minimum and maximum possible number of fixed points among all maps homotopic to $f$. When $f:X\to X$, clearly we will have: \[ 0 \le M(f) \le X(f) \le \#X. \] We will see in the examples below that each of these inequalities may be strict or may be an equality, depending on the example. More generally, for some map $f:X\to X$, we may consider the following set $S(f)$, which we call the \emph{homotopy fixed point spectrum} of $f$: \[ S(f) = \{ \# \Fix(g) \mid g \simeq f \} \subseteq \{0,\dots,\#X\}. \] An immediate consequence of Lemma~\ref{howBEKLLFPP}: \begin{cor} \label{0inSpectrum} Let $(X,\kappa)$ be a connected digital image, where $\#X > 1$. Then $0 \in S(c)$, where $c \in C(X,\kappa)$ is a constant map. \end{cor} We can also consider the \emph{fixed point spectrum} of $X$, defined as: \[ F(X) = \{ \#\Fix(f) \mid f:X\to X \text{ is continuous} \}. \] \begin{remark} {\rm The following assertions are immediate consequences of the relevant definitions.} \begin{itemize} \item If $X$ is a digital image of only one point, then $F(X) = \{1\}$. \item If $f: X \to X$ is rigid, then $S(f) = \{\#\Fix(f)\}$. If $X$ is rigid, then $S(\id) = \{\#X\}$.
\end{itemize} Since every image $X$ has a constant map and an identity map, we always have: \[ \{1,\#X\} \subseteq F(X). \] \end{remark} The number of fixed points is always preserved by isomorphism: \begin{lem} \label{isoMatches} Let $X$ and $Y$ be isomorphic digital images. Let $f: X \to X$ be continuous. Then there is a continuous $g: Y \to Y$ such that $\#\Fix(f) = \#\Fix(g)$. \end{lem} \begin{proof} Let $G: X \to Y$ be an isomorphism. Let $A = \Fix(f)$. Since $G$ is one-to-one, $\#G(A) = \#A$. Let $g: Y \to Y$ be defined by $g=G \circ f \circ G^{-1}$. For $y_0 \in G(A)$, let $x_0=G^{-1}(y_0)$. Then \[ g(y_0) = G \circ f \circ G^{-1}(y_0) = G \circ f(x_0) = G(x_0) = y_0. \] Let $B = \Fix(g)$. It follows that $G(A) \subseteq B$, so $\#A \le \#B$. Similarly, let $y' \in B$ and let $x' = G^{-1}(y')$. Then \[ f(x') = G^{-1} \circ g \circ G(x') = G^{-1} \circ g(y') = G^{-1}(y') = x'. \] It follows that $G^{-1}(B) \subseteq A$, so $\#B \le \#A$. Thus, $\#\Fix(f) = \#\Fix(g)$. \end{proof} As an immediate consequence, we have the following. \begin{cor} Let $X$ and $Y$ be isomorphic digital images. Then $F(X)=F(Y)$. \end{cor} There is a certain regularity to the fixed point spectrum for connected digital images. When $X$ has only a single point, we have already remarked that $F(X)=\{1\}$. For images of more than 1 point, we will show that $F(X)$ always includes 0, 1, and $\#X$, and, provided the image is large enough, the set $F(X)$ also includes 2 and 3. The following statements hold for connected images. We discuss the fixed point spectrum of disconnected images in terms of their connected components in Theorem \ref{duF} and its corollary. We begin with a simple lemma: \begin{lem}\label{nbhdF(X)} Let $X$ be any connected digital image with $\#X > 1$. Let $x_0 \in X$, and let $0\le k \le \#N^*(x_0)$. Then $k \in S(c) \subseteq F(X)$, where $c$ is the constant map with value $x_0$.
\end{lem} \begin{proof} By Corollary~\ref{0inSpectrum}, a constant map is homotopic to a map with no fixed points, so $0\in S(c)$ as desired. For $k>0$, let $n=\#N^*(x_0)$ and write \[ N^*(x_0) = \{x_0,x_1,\dots,x_{n-1}\}. \] Then define $f:X\to X$ by: \[ f(x) = \begin{cases} x & \text{ if $x=x_i$ for some $i<k$,} \\ x_0 &\text{ otherwise.} \end{cases} \] Then $f$ is continuous with $\Fix(f) = \{x_0,\dots,x_{k-1}\}$ and thus $k\in F(X)$. Furthermore, $f$ is homotopic to the constant map at $x_0$, and so in fact $k\in S(c)$. \end{proof} \begin{thm}\label{0123} Let $X$ be a connected digital image, and let $c:X\to X$ be any constant map. If $\#X \ge 2$ then \[ \{0,1,2\}\subseteq S(c). \] If $\#X \ge 3$, then \[ \{0,1,2,3\} \subseteq S(c). \] \end{thm} \begin{proof} If $\#X = 2$, then $X$ consists simply of two adjacent points. Thus $\#N^*(x)=2$ for each $x\in X$, and so Lemma \ref{nbhdF(X)} implies that $\{0,1,2\}\subseteq S(c)$. When $\#X\ge 3$, there must be some $x\in X$ with $\#N^*(x) \ge 3$. (Otherwise the image would consist only of disjoint pairs of adjacent points, which would not be connected.) Thus by Lemma \ref{nbhdF(X)} we have $\{0,1,2,3\}\subseteq S(c)$. \end{proof} Since we always have $\#X \in S(\id)$ and \[S(c)\cup S(\id) \subseteq F(X) \subseteq \{0,1,\dots,\#X\},\] the theorem above directly gives: \begin{cor}\label{0123X} Let $X$ be a connected digital image. If $\#X = 2$ then \[ F(X) = \{0,1,2\}. \] If $\#X > 2$, then \[ \{0,1,2,3,\#X\} \subseteq F(X). \] \end{cor} We have already seen that $\#X \in F(X)$ in all cases. There is an easy condition that determines whether or not $\#X-1\in F(X)$. \begin{lem} \label{1off} Let $X$ be connected with $n=\#X >1$. Then $n-1 \in F(X)$ if and only if there are distinct points $x_1,x_2 \in X$ with $N(x_1) \subseteq N^*(x_2)$. \end{lem} \begin{proof} Suppose there are points $x_1,x_2 \in X$, $x_1 \neq x_2$, such that $N(x_1) \subseteq N^*(x_2)$. 
Then the map \[ f(x_1)=x_2,\quad f(x)=x \text{ for all } x\neq x_1, \] is a self-map on $X$ with exactly $n-1$ fixed points. That $f$ is continuous is seen as follows. Suppose $x,x' \in X$ with $x \adj x'$. \begin{itemize} \item If $x_1 \not \in \{x,x'\}$, then \[ f(x)=x\adj x' = f(x'). \] \item If, say, $x = x_1$, then $x' \in N(x_1) \subseteq N^*(x_2)$, so \[ f(x')=x' \adjeq x_2 = f(x_1). \] \end{itemize} Thus $f$ is continuous, and we conclude $n-1 \in F(X)$. Now assume that $n-1 \in F(X)$. Thus there is some continuous self-map $f$ with exactly $n-1$ fixed points. Let $x_1$ be the single point not fixed by $f$, and let $x_2 = f(x_1)$. Then let $x \in X$ with $x \adj x_1$. Then \[ x=f(x) \adjeq f(x_1) = x_2, \] so $N(x_1) \subseteq N^*(x_2)$. \end{proof} Lemma~\ref{1off} can be used to show that a large class of digital images will satisfy $n-1 \not \in F(X)$. For example when $X=C_n$ for $n>4$, no $N(x_i)$ is contained in $N^*(x_j)$ for $j \neq i$. Thus we have: \begin{cor} Let $n>4$. Then $n-1 \not \in F(C_n)$. \end{cor} In particular this means that $4\not \in F(C_5)$, so the result of Corollary~\ref{0123X} cannot in general be improved to state that $4\in F(X)$ for all images of more than 4 points. \section{Pull indices} Let $\overline \Fix(f)$ be the complement of the fixed point set, that is, \[ \overline \Fix(f) = \{ x \in X \mid f(x)\neq x\}. \] When $f(x)\neq x$, we say \emph{$f$ moves $x$}. \begin{definition} Let $(X,\kappa)$ be a digital image with $\#X>1$ and let $x \in X$. The {\em pull index of $x$}, $P(x)$ or $P(x,X)$ or $P(x,X,\kappa)$, is \[ P(x) = \min \{\#\overline\Fix(f) \mid f:X\to X \text{ is continuous and } f(x)\neq x\}. \] \end{definition} When $f(x)\neq x$, the set $\overline\Fix(f)$ always contains at least the point $x$, and so $P(x)\ge 1$ for any $x$ that is moved by some~$f$. \begin{exl} Let $X=[1,3]_{\Z}$ with $c_1$-adjacency. To compute $P(3)$, consider the function $f(x)=\min\{x,2\}$.
This is continuous, not the identity, and $\Fix(f) = \{1,2\}$, and thus $P(3) = 1$. Similarly we can show that $P(1)=1$. But we have $P(2)=2$, since any continuous self-map~$f$ on~$X$ that moves 2 must also move at least one other point: if $f(2)=1$ we must have $f(3) \in \{1,2\}$, and if $f(2)=3$ we must have $f(1)\in\{2,3\}$. \end{exl} \begin{prop} \label{pullThm} Let $(X,\kappa)$ be a connected digital image with $n=\#X >1$. Let $m \in \N$, $1 \le m \le n$. Suppose, for all $x \in X$, we have $P(x) \ge m$. Then \[ F(X) \cap \{i\}_{i=n-m+1}^{n-1} = \emptyset. \] \end{prop} \begin{proof} By hypothesis, $f \in C(X,\kappa) \setminus \{\id_X\}$ implies $f$ moves at least $m$ points, hence $\#\Fix(f) \le n-m$. The assertion follows. \end{proof} \begin{thm} \label{F(X)andP(x)} Let $(X,\kappa)$ be a connected digital image with $n=\#X > 1$. The following are equivalent. {\rm 1)} $n-1 \in F(X)$. {\rm 2)} There are distinct $x_1,x_2 \in X$ such that $N(x_1) \subseteq N^*(x_2)$. {\rm 3)} There exists $x \in X$ such that $P(x)=1$. \end{thm} \begin{proof} $1) \Leftrightarrow 2)$ is shown in Lemma~\ref{1off}. $1) \Leftrightarrow 3)$: We have $n-1 \in F(X)$ $\Leftrightarrow$ there exists $f \in C(X)$ with exactly~$n-1$ fixed points, i.e., the only $x \in X$ not fixed by $f$ has $P(x)=1$. \end{proof} The following generalizes $1) \Rightarrow 3)$ of Theorem~\ref{F(X)andP(x)}. \begin{prop} Let $(X,\kappa)$ be a connected digital image with $n=\#X > 1$. Let $k \in [1,n-1]_{\Z}$. Then $k \in F(X)$ implies there exist distinct $x_1, \ldots, x_{n-k} \in X$ such that $P(x_i) \le n-k$. \end{prop} \begin{proof} $k \in F(X)$ implies there exists $f \in C(X)$ with exactly $k$ fixed points, hence there are distinct $x_1, \ldots, x_{n-k} \in X$ such that $x_i \not \in \Fix(f)$. For each~$i$, the map $f$ moves $x_i$, and moves only the $n-k$ points $x_1, \ldots, x_{n-k}$. Thus $P(x_i) \le n-k$. \end{proof} \section{Retracts} In this section, we study how retractions interact with fixed point spectra.
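The interaction of retractions with spectra can be previewed computationally. The following sketch (our own illustration, not part of the paper's argument) uses the fact that $[1,2]_{\Z}$ is a $c_1$-retract of $[1,3]_{\Z}$, and verifies the containment $F([1,2]_{\Z}) \subseteq F([1,3]_{\Z})$ by exhaustively enumerating the continuous self-maps of each interval:

```python
# An illustrative sketch (ours, not from the paper): with c_1-adjacency,
# [1,2] is a retract of [1,3]; we verify F([1,2]) <= F([1,3]) by
# enumerating every continuous self-map of each interval.
from itertools import product

def spectrum(points):
    # fixed point spectrum F(X) of a digital interval with c_1-adjacency
    adjeq = lambda u, v: abs(u - v) <= 1
    spec = set()
    for vals in product(points, repeat=len(points)):
        f = dict(zip(points, vals))
        # digital continuity: adjacent points have adjacent-or-equal images
        if all(adjeq(f[x], f[x + 1]) for x in points[:-1]):
            spec.add(sum(1 for x in points if f[x] == x))
    return spec

FA, FX = spectrum([1, 2]), spectrum([1, 2, 3])
print(FA, FX)  # {0, 1, 2} and {0, 1, 2, 3}
assert FA <= FX
```

Theorem~\ref{retractF} below guarantees this containment for any retract.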
\begin{thm} {\rm \cite{Bx94}} \label{extension} Let $(X,\kappa)$ be a digital image and let $A \subseteq X$. Then $A$ is a retract of $X$ if and only if for every continuous $f: (A,\kappa) \to (Y,\lambda)$ there is an extension of $f$ to a continuous $g: (X,\kappa) \to (Y,\lambda)$. \end{thm} In the proof of Theorem~\ref{extension}, an extension of $f$ is obtained by using $g=f \circ r$, where $r: X \to A$ is a retraction. We use this in the proof of the next assertion. \begin{thm} \label{retractF} Let $A$ be a retract of $(X,\kappa)$. Then $F(A) \subseteq F(X)$. \end{thm} \begin{proof} Let $f: A \to A$ be $\kappa$-continuous. Let $r: X \to A$ be a $\kappa$-retraction. Let $i: A \to X$ be the inclusion function. By Theorem~\ref{composition}, $G=i \circ f \circ r: X \to X$ is continuous. Further, $G(X) \subseteq A$ and $G(x)=f(x)$ for $x \in A$, so any fixed point of $G$ must lie in $A$, and thus $\Fix(G) = \Fix(f)$. Since $f$ was taken arbitrarily, the assertion follows. \end{proof} \begin{remark} {\rm We do not have an analog to Theorem~\ref{retractF} obtained by replacing fixed point spectra by spectra of identity maps. E.g., in Example~\ref{wedgeOfCycles} we have $\{0,\#A\} \subseteq S(\id_A)$, and $A$ is a retract of $X$, but $X$ is rigid, so $S(\id_X) = \{\#X\}$. However, we have the following Corollaries~\ref{FforDeformation} and~\ref{FforInterval}.} \end{remark} \begin{cor} \label{FforDeformation} Let $A$ be a deformation retract of $X$. Then $S(\id_A) \subseteq S(\id_X) \subseteq F(X)$. In particular, $\#A \in S(\id_X)$. \end{cor} \begin{cor} \label{FforInterval} Let $a,b \in \Z$, $a < b$. Then \[S(\id_{[a,b]_{\Z}}, c_1) = F([a,b]_{\Z}, c_1) = \{0,1,\dots,b-a+1\}. \] \end{cor} \begin{proof} Since $a<b$ and $[a,b]_{\Z}$ is $c_1$-contractible, it follows from Theorem~\ref{BEKLLFPP} that $0 \in S(\id_{[a,b]_{\Z}}, c_1)$. Since for each $d \in [a,b]_{\Z}$ there is a $c_1$-deformation retraction of $[a,b]_{\Z}$ to $[a,d]_{\Z}$, it follows from Corollary~\ref{FforDeformation} that $\#[a,d]_{\Z} \in S(\id_{[a,b]_{\Z}}, c_1)$.
Thus, \[F([a,b]_{\Z}, c_1) \subseteq \{i\}_{i=0}^{b-a+1} = S(\id_{[a,b]_{\Z}}, c_1) \subseteq F([a,b]_{\Z}, c_1). \] The assertion follows. \end{proof} We can generalize this result about intervals to a two-dimensional box in $\Z^2$. \begin{thm} \label{Sbox} Let $X = [1,a]_{\Z} \times [1,b]_{\Z}$, with adjacency $\kappa \in \{c_1,c_2\}$. Then \[ S(\id_X) = F(X) = \{0,1,\dots,ab\} \] \end{thm} \begin{proof} Since $X$ is contractible, all self-maps on $[1,a]_\Z \times [1,b]_\Z$ are homotopic to the identity, so it suffices only to show that $F(X) = \{0,1,\dots,ab\}$. The proof is by induction on $b$. For $b=1$, our image $X$ is isomorphic to the one-dimensional image $([1,a]_\Z, c_1)$. Thus by Corollary \ref{FforInterval} we have \[ F(X) = \{0,1,\dots,a\} = \{0,1,\dots,ab\} \] as desired. For the inductive step, first note that $[1,a]_\Z \times [1,b-1]_\Z$ is a retract of $X$ (using either $\kappa=c_1$ or $c_2$). Thus by induction and Theorem \ref{retractF} we have \[ \{0,1,\dots,a(b-1) \} \subseteq F(X). \] It remains only to show that \[ \{a(b-1)+1, a(b-1)+2, \dots, ab\} \subseteq F(X). \] We do this by exhibiting a family of self-maps of $X$ having these numbers of fixed points. \begin{figure} \[ \begin{tikzpicture} \draw[densely dotted] (1,1) grid (3,4); \foreach \x in {1,...,3} { \foreach \y in {1,...,4} { \node at (\x,\y) [vertex] {}; } } \newcommand{\pad}{.1} \foreach \x in {1,2} { \draw[->] (1+\pad,\x+\pad) -- (2-\pad,\x+1-\pad); } \end{tikzpicture} \] \caption{The map $f_t$ from Theorem \ref{Sbox}, pictured in the case $t=2$. All points are fixed except those with arrows indicating where they map to.\label{f_tfig}} \end{figure} Let $t\in \{0,\dots, b-1\}$, and define $f_t:X\to X$ as follows: \[ f_t(x,y) = \begin{cases} (x,y) \quad &\text{ if $x>1$ or $y>t$}, \\ (x+1,y+1) &\text{ if $x=1$ and $y\le t$} \end{cases} \] See Figure \ref{f_tfig} for an illustration of $f_t$.
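The $c_1$-continuity of $f_t$ and its fixed point count are easy to confirm by brute force on a small box. The following sketch (our own check, not part of the proof) verifies the case $a=3$, $b=4$ for every $t$:

```python
# A brute-force check (ours, not part of the proof) that the map f_t above
# is c_1-continuous and has a*b - t fixed points, for the small box with
# a = 3, b = 4 and every t in {0,...,b-1}.
from itertools import product

a, b = 3, 4
X = [(x, y) for x in range(1, a + 1) for y in range(1, b + 1)]

def adjeq_c1(p, q):
    # c_1: adjacent or equal iff the points differ by at most 1 in one coordinate
    return abs(p[0] - q[0]) + abs(p[1] - q[1]) <= 1

def f_t(t, p):
    x, y = p
    return (x, y) if x > 1 or y > t else (x + 1, y + 1)

for t in range(b):
    # continuity: c_1-adjacent points must have adjacent-or-equal images
    assert all(adjeq_c1(f_t(t, p), f_t(t, q))
               for p, q in product(X, repeat=2) if adjeq_c1(p, q))
    assert sum(1 for p in X if f_t(t, p) == p) == a * b - t
print("f_t is c_1-continuous with a*b - t fixed points for each t")
```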
This $f_t$ is $c_1$-continuous for each $t\in \{0,\dots, b-1\}$ and has $ab-t$ fixed points. (If $a=1$ then $X$ is isomorphic to $([1,b]_\Z,c_1)$ and the result follows as in the base case, so we may assume $a>1$, and thus $f_t$ is well-defined.) For $\kappa=c_2$, the map which instead sends $(1,y)$ to $(2,y)$ for $y \le t$ and fixes all other points is $c_2$-continuous, and also has $ab-t$ fixed points. Thus we have \[ \{ab, ab-1, \dots, ab-(b-1) = a(b-1)+1\} \subseteq F(X) \] as desired. \end{proof} \section{Cartesian products and disjoint unions} In the following, assume $A_i \subset \N$, $1 \le i \le v$. Define \[ \bigotimes_{i=1}^v A_i = \left\{\prod_{i=1}^v a_i \mid a_i \in A_i \right\}. \] and \[ \bigoplus_{i=1}^v A_i = \left\{\sum_{i=1}^v a_i \mid a_i \in A_i \right\}. \] If $f_i: X_i \to Y_i$, let $\Pi_{i=1}^v f_i: \Pi_{i=1}^v X_i \to \Pi_{i=1}^v Y_i$ be the product function defined by \[ \Pi_{i=1}^v f_i(x_1, \ldots, x_v) = (f_1(x_1), \ldots, f_v(x_v)) \mbox{ for } x_i \in X_i. \] \begin{thm} \label{bigOandNP} Suppose $(X_i,\kappa_i)$ is a digital image, $1 \le i \le v$. Let $X = \Pi_{i=1}^v X_i$. Then $\bigotimes_{i=1}^v F(X_i,\kappa_i) \subseteq F(X,NP_v(\kappa_1, \ldots, \kappa_v))$. \end{thm} \begin{proof} Let $f_i: X_i \to X_i$ be $\kappa_i$-continuous. Let $X = \Pi_{i=1}^v X_i$. Then the product function \[ f = \Pi_{i=1}^v f_i: X \to X \] is $NP_v(\kappa_1, \ldots, \kappa_v)$-continuous~\cite{BxNormal}. If $A_i=\{y_{i,j}\}_{j=1}^{p_i}$ is the set of distinct fixed points of $f_i$, then each point $(y_{1,j_1}, \ldots, y_{v,j_v})$, for $1 \le j_i \le p_i$, is a fixed point of $f$. The assertion follows. \end{proof} We note that the conclusion of Theorem \ref{bigOandNP} cannot in general be strengthened to say that $\bigotimes_{i=1}^v F(X_i) = F(X)$. For example, if $X = [1,3]_\Z \times [1,3]_\Z$, we have $F(X) = \{0,1,\dots,9\}$ by Theorem \ref{Sbox}, but \[ F([1,3]_\Z) \otimes F([1,3]_\Z) = \{0,1,2,3\} \otimes \{0,1,2,3\} = \{0,1,2,3,4,6,9\}. \] We do have a similar result, this time with equality, for a disjoint union of digital images. \begin{thm}\label{duF} Let $X = A \sqcup B$. If $A$ and $B$ both have at least 2 points, then \[ F(X) = F(A) \oplus F(B).
\] \end{thm} \begin{proof} First we show that $F(A) \oplus F(B) \subseteq F(X)$. Take some $k\in F(A) \oplus F(B)$, say $k=m+n$ with $m\in F(A)$ and $n\in F(B)$. That means there are two continuous self-maps $f:A\to A$ and $g:B\to B$ with $\#\Fix(f) = m$ and $\#\Fix(g) = n$. Let $h:X\to X$ be defined by: \[ h(x) = \begin{cases} f(x) \text{ if $x\in A$} \\ g(x) \text{ if $x \in B$} \end{cases} \] Since no point of $A$ is adjacent to any point of $B$, the map $h$ is continuous. Then \[ \#\Fix(h) = \#\Fix(f) + \#\Fix(g) = m + n = k \] and so $k \in F(X)$ as desired. Next we show $F(X) \subseteq F(A) \oplus F(B)$. Take some $k\in F(X)$, so there is some continuous self-map $f$ with $\#\Fix(f) = k$. Let $f_A:A \to X$ and $f_B:B\to X$ be the restrictions of $f$ to $A$ and $B$. Since $X=A\cup B$, we have \[ \Fix(f) = \Fix(f_A) \cup \Fix(f_B), \] and $\Fix(f_A) = \Fix(f) \cap A$ and $\Fix(f_B) = \Fix(f)\cap B$. Since $A$ and $B$ are disjoint, the union of the fixed point sets above is disjoint. Thus we have $k = \#\Fix(f_A) + \#\Fix(f_B)$. Since continuous functions preserve connectedness, we must have $f_A(A) \subseteq A$ or $f_A(A) \subseteq B$. Similarly $f_B(B)\subseteq A$ or $f_B(B)\subseteq B$. We show that $k\in F(A)\oplus F(B)$ in several cases. In the case where $f_A(A) \subseteq B$ and $f_B(B) \subseteq A$, there are no fixed points of $f_A$ or $f_B$, and thus no fixed points of $f$. Thus $k=0$, and it is true that $k\in F(A)\oplus F(B)$ since $0 \in F(A)$ and $0\in F(B)$ by Theorem \ref{0123}. In the case where $f_A(A)\subseteq B$ and $f_B(B)\subseteq B$, there are no fixed points of $f_A$, and thus $\Fix(f) =\Fix(f_B)$. In this case in fact $f_B$ is a self-map of $B$, and so \[ k = \#\Fix(f) = 0 + \#\Fix(f_B) \in F(A) \oplus F(B) \] since $0 \in F(A)$ by Theorem \ref{0123} and $\#\Fix(f_B) \in F(B)$ since $f_B$ is a self-map on $B$. The case where $f_A(A)\subseteq A$ and $f_B(B)\subseteq A$ is similar. The final case is when $f_A(A) \subseteq A$ and $f_B(B) \subseteq B$. In this case $f_A$ is a self-map of $A$ and $f_B$ is a self-map of $B$.
Since $\Fix(f) = \Fix(f_A) \cup \Fix(f_B)$, the $k$ fixed points of $f$ must partition into $m$ fixed points of $f_A$ and $n$ fixed points of $f_B$, where $m+n=k$. Thus $m\in F(A)$ and $n\in F(B)$, and so $k=m+n \in F(A)\oplus F(B)$. \end{proof} The assumption above that $A$ and $B$ have at least 2 points is necessary. For example if $A$ and $B$ are each a single point, then $F(X) = \{0,1,2\}$ while $F(A)=F(B)=\{1\}$ and thus $F(A)\oplus F(B) = \{2\}$. Since any digital image is a disjoint union of its connected components, we have: \begin{cor} Let $X_1, \dots, X_k$ be the connected components of a digital image $X$, and assume that $\#X_i > 1$ for all $i$. Then we have: \[ F(X) = \bigoplus_{i=1}^k F(X_i) \] \end{cor} \section{Locations of fixed points} In many cases, the existence of two fixed points will imply that other fixed points must exist in certain locations. In some cases we will show that $\Fix(f)$ must be connected. We do not have $\Fix(f)$ connected in general, as shown by the following. \begin{exl} Let $X=\{p_0=(0,0), p_1=(1,0), p_2=(2,0), p_3=(1,1)\}$. Let $f: X \to X$ be defined by \[f(p_0)=p_0,~~~f(p_1)=p_3,~~~f(p_2)=p_2,~~~f(p_3)=p_1. \] Then $X$ is $c_2$-connected, $f \in C(X,c_2)$, and $\Fix(f) = \{p_0,p_2\}$ is $c_2$-disconnected. \end{exl} \begin{lem} \label{intermediate} Let $(X,\kappa)$ be a digital image and $f:X\to X$ be continuous. Suppose that $x,x'\in \Fix(f)$ and that $y\in X$ lies on every path of minimal length between $x$ and $x'$. Then $y\in \Fix(f)$. \end{lem} \begin{proof} Let $k$ be the minimal length of a path from $x$ to $x'$. First we show that $y$ must occur at the same location along any minimal path from $x$ to $x'$. That is, we show that there is some $i \in [0,k]_\Z$ with $p(i)=y$ for every minimal path $p$ from $x$ to $x'$. This we prove by contradiction: assume we have two minimal paths $p$ and $q$ with $p(i)=y=q(j)$ for some $j<i$. 
Then construct a new path $r$ by traveling from $x$ to $y$ along $q$, and then from $y$ to $x'$ along $p$. Then this path $r$ has length less than the length of $p$, contradicting the minimality of $p$. Thus we have some $i \in [0,k]_\Z$ with $p(i)=y$ for every minimal path $p$ from $x$ to $x'$. Now let $p$ be some minimal path from $x$ to $x'$. Since the endpoints of $p$ are fixed, $f(p)$ is also a path from $x$ to $x'$. Furthermore the length of $f(p)$ is at most $k$, and thus equals $k$, since $k$ is the minimal possible length of a path from $x$ to $x'$. Since both $p$ and $f(p)$ are minimal paths from $x$ to $x'$, we have $f(p(i)) = p(i) = y$, and thus $y=f(y)$ as desired. \end{proof} A vertex $v$ of a connected graph $(X,\kappa)$ is an {\em articulation point} of $X$ if $(X \setminus \{v\},\kappa)$ is disconnected. We have the following immediate consequences of Lemma~\ref{intermediate}. \begin{cor} Let $(X,\kappa)$ be a connected digital image. Let $v$ be an articulation point of $X$. Suppose $f \in C(X,\kappa)$ has fixed points in distinct components of $X \setminus \{v\}$. Then $v$ is a fixed point of $f$. \end{cor} \begin{cor} \label{uniqueShortestLem} Let $(X,\kappa)$ be a digital image and $f \in C(X,\kappa)$. Suppose $x,x' \in \Fix(f)$ are such that there is a unique shortest $\kappa$-path $P$ in~$X$ from $x$ to $x'$. Then $P \subseteq \Fix(f)$. \end{cor} \begin{proof} This follows immediately from Lemma~\ref{intermediate}. \end{proof} \begin{cor} Let $(X,\kappa)$ be a digital image that is a tree. Then $f \in C(X,\kappa)$ implies $\Fix(f)$ is $\kappa$-connected. \end{cor} \begin{proof} This follows from Corollary~\ref{uniqueShortestLem}, since given $x,x'$ in a tree $X$, there is a unique shortest path in~$X$ from $x$ to $x'$. \end{proof} For a digital cycle, the fixed point set is typically connected. The only exception is in a very particular case, as we see below.
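The tree corollary above can also be confirmed by exhaustive search on a small example. The following sketch (our own illustration, using a hypothetical 5-point tree; not part of the paper's argument) enumerates every continuous self-map and checks that its fixed point set is connected:

```python
# An exhaustive check (ours, not from the paper) of the tree corollary
# above: on a small 5-point tree, Fix(f) is connected for every
# continuous self-map f.
from itertools import product

V = [0, 1, 2, 3, 4]
E = {(0, 1), (1, 2), (2, 3), (1, 4)}  # edge set of a small tree

def adjeq(u, v):
    return u == v or (u, v) in E or (v, u) in E

def connected(S):
    # graph search restricted to the subset S of vertices
    if not S:
        return True
    seen, todo = set(), [next(iter(S))]
    while todo:
        u = todo.pop()
        if u not in seen:
            seen.add(u)
            todo.extend(w for w in S if w not in seen and adjeq(u, w))
    return seen == set(S)

for vals in product(V, repeat=len(V)):
    f = dict(zip(V, vals))
    # digital continuity: adjacent vertices have adjacent-or-equal images
    if all(adjeq(f[u], f[v]) for (u, v) in E):
        fix = {u for u in V if f[u] == u}
        assert connected(fix)
print("Fix(f) is connected for every continuous self-map")
```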
\begin{thm}\label{cycleconnected} Let $f:C_n\to C_n$ be any continuous map. Then $\Fix(f)$ is connected, or is a set of 2 nonadjacent points. The latter case occurs only when $n$ is even and the two fixed points are at opposite positions in the cycle. \end{thm} \begin{proof} If $\#\Fix(f) \in \{0,1\}$, then $\Fix(f)$ is connected. When $\#\Fix(f)>1$, we show that if $x_i,x_j\in \Fix(f)$ are two distinct fixed points, then either there is a path from $x_i$ to $x_j$ through other fixed points, or no other points are fixed. There are two canonical paths $p$ and $q$ from $x_i$ to $x_j$: the two injective paths going in either ``direction'' around the cycle. Without loss of generality assume $|p| \ge |q|$. This means that $|q|$ is the shortest possible length of a path from $x_i$ to $x_j$. Consider the case in which $|p| > |q|$. In this case $q$ is the unique shortest path from $x_i$ to $x_j$, and by Corollary~\ref{uniqueShortestLem}, $q \subseteq \Fix(f)$, and so $x_i$ and $x_j$ are connected by a path of fixed points as desired. Now consider the case in which $|p|=|q|$. In this case again $|q|$ is the shortest possible length of a path from $x_i$ to $x_j$, and $p$ and $q$ are the only two paths from $x_i$ to $x_j$ having this length. Then $f(q)$ is a path from $x_i$ to $x_j$ of length $|q|$, and so we must have either $f(q)=q$ or $f(q)=p$. In the former case, $q$ is a path of fixed points connecting $x_i$ and $x_j$ as desired. In the latter case, $\Fix(f) \cap q = \{x_i,x_j\}$. Similarly considering the path $f(p)$, we must have either $f(p)=p$ (in which case $p$ is a path of fixed points connecting $x_i$ and $x_j$); or $f(p)=q$, in which case $\Fix(f) \cap p = \{x_i,x_j\}$. Considering all cases, either a minimal-length path from $x_i$ to $x_j$ is contained in $\Fix(f)$, or $\Fix(f) = \{x_i,x_j\}$. The second sentence of the theorem follows from our analysis of the various cases.
The only case which gives 2 nonadjacent fixed points requires $x_i$ and $x_j$ to be opposite points on the cycle, which requires $n$ to be even. \end{proof} \section{Remarks and examples} In classical topology $M(f)$ is the only interesting homotopy invariant count of the number of fixed points. $S(f)$ is not studied in classical topology, since in all typical cases (all continuous maps on polyhedra) we would have $S(f) = [M(f),\infty)_\Z$. In classical topology the value of $M(f)$ is generally hard to compute. The Lefschetz number gives a very rough indication of homotopy invariant fixed point information, and the more sophisticated Nielsen number is a homotopy invariant lower bound for $M(f)$. See \cite{jiang}. When $X$ is contractible, all self-maps are homotopic, so $S(f)=F(X)$ for any self-map $f$. It is natural to suspect that when $X$ is contractible with $\#X > 1$, we will always have $F(X) = \{0,1,\dots,\#X\}$. This is false, however, as the following example shows: \begin{figure} \[ \begin{tikzpicture}[scale=1.5] \node at (0,0,0) [vertex, label=above right:{$x_3$}] {}; \node at (1,0,0) [vertex, label=right:{$x_2$}] {}; \node at (0,1,0) [vertex, label={$x_7$}] {}; \node at (1,1,0) [vertex, label={$x_6$}] {}; \node at (0,0,1) [vertex, label=below:{$x_0$}] {}; \node at (1,0,1) [vertex, label=below:{$x_1$}] {}; \node at (0,1,1) [vertex, label={$x_4$}] {}; \node at (1,1,1) [vertex, label={$x_5$}] {}; \draw[] (0,0,0) rectangle (1,1,0); \draw[] (0,0,1) rectangle (1,1,1); \draw[] (0,0,0) -- (0,0,1); \draw[] (1,0,0) -- (1,0,1); \draw[] (0,1,0) -- (0,1,1); \draw[] (1,1,0) -- (1,1,1); \end{tikzpicture} \] \caption{A contractible image for which $F(X) \neq \{0,1,\dots, \#X\}$.\label{unitcubefig}} \end{figure} \begin{exl} Let $X \subset \Z^3$ be the unit cube of 8 points with $c_1$ adjacency, shown in Figure \ref{unitcubefig}. Then $X$ is contractible, so $S(f) = F(X)$ for any self-map $f$. 
By projecting the cube onto one of its faces, we see that $X$ retracts to $C_4$, and since $F(C_4) = \{0,1,2,3,4\}$, we have $\{0,1,2,3,4\} \subseteq F(X)$ by Theorem \ref{retractF}. In fact there are also continuous maps having 5 or 6 fixed points: Let: \[ g(x_5) = x_0, \quad g(x_6) = x_3, \quad g(x_i) = x_i \text{ for }i\not \in \{5,6\} \] Then $g$ is continuous with $6$ fixed points. Let: \[ h(x_5) = h(x_7) = x_0, \quad h(x_6) = x_3, \quad h(x_i) = x_i \text{ for }i \not \in \{5,6,7\}\] Then $h$ is continuous with $5$ fixed points. Since of course the identity map has 8 fixed points, we have so far shown that $\{0,1,2,3,4,5,6,8\} \subseteq F(X)$. In fact $7\not \in F(X)$: no two distinct points $x_1,x_2 \in X$ satisfy $N(x_1) \subseteq N^*(x_2)$, so this follows from Lemma~\ref{1off}. We have shown that: \[ F(X) = \{0,1,2,3,4,5,6,8\}. \] \end{exl} The computation of $S(f)$ in general seems to be a difficult and interesting problem. Even in the case of self-maps on the cycle $C_n$, the results are interesting. First we show that every self-map on $C_n$ is homotopic to one of 3 maps: the identity map $\id(x_i) = x_i$, the constant map $c(x_i) = x_0$, or the flip map $l(x_i) = x_{-i}$. \begin{thm} \label{3htpyTypes} Given $f \in C(C_n)$, $f$ is homotopic to one of: a constant map, the identity map, or the flip map. \end{thm} \begin{proof} We have noted above that if $n \le 4$ then $C_n$ is contractible, so every $f \in C(C_n)$ is homotopic to a constant map. Thus, in the following, we assume $n > 4$. We can compose $f$ with some rotation $r$ to obtain $g = r\circ f \simeq f$ such that $g(x_0)=x_0$. We will show that $g$ is either the identity, the flip map, or homotopic to a constant map. If $g$ is not a surjection, then its continuity implies $g(C_n)$ is a connected proper subset of $C_n$, hence is contractible. Therefore, $g$ is homotopic to a constant map. If $g$ is a surjection, then $g$ is a bijection because the domain and codomain of $g$ both have cardinality $n$. By continuity, $g(x_1) \adj g(x_0) = x_0$.
Therefore, either $g(x_1)=x_{-1}$ or $g(x_1)=x_{1}$. If $g(x_1)=x_{-1}$, then continuity and the fact that $g$ is a bijection yield an easy induction showing that $g(x_i)=x_{-i}$, $0 \le i < n$. Therefore, $g$ is the flip map. If $g(x_1)=x_{1}$, a similar argument shows that $g$ is the identity. \end{proof} In fact the proof of Theorem \ref{3htpyTypes} demonstrates the following stronger statement. Let $r_d:C_n\to C_n$ be the rotation map $r_d(x_i) = x_{i+d}$. The following generalizes Theorem~3.4 of \cite{Bx10}, which states that any map homotopic to the identity must be a rotation. \begin{thm}\label{3maps} Let $f:C_n \to C_n$ be continuous. Then one of the following is true: \begin{itemize} \item $f$ is homotopic to a constant map \item $f$ is homotopic to the identity, and $f = r_d$ for some $d$ \item $f$ is homotopic to the flip map $l$, and $f = r_d \circ l$ for some $d$ \end{itemize} \end{thm} The proof of Theorem \ref{3htpyTypes} also demonstrated that all non-isomorphisms on $C_n$ must be nullhomotopic. Thus, for $n>4$, all isomorphisms on $C_n$ fall into the latter two categories above, and in fact all maps in those two categories are isomorphisms. Thus we obtain: \begin{cor} Let $n>4$, and $f:C_n\to C_n$ be an isomorphism with $f\simeq g$ for some $g$. Then $g$ is an isomorphism. \end{cor} Now we are ready to compute the values of $S(f)$ for our three classes of self-maps on $C_n$. \begin{thm}\label{Scycle} We have $S(f)=\{1\}$ for every $f:C_1\to C_1$. When $1<n\le 4$, we have $S(f)=\{0,\dots,n\}$ for any $f:C_n \to C_n$. When $n>4$, let $c$ be any constant map, $\id$ be the identity map, and $l$ be the flip map on $C_n$.
We have: \begin{align*} S(\id) &= \{0,n\} \\ S(c) &= \{0,1,\dots,\lfloor n/2 \rfloor + 1\} \\ S(l) &= \begin{cases} \{1\} &\text{ if $n$ is odd} \\ \{0,2\} &\text{ if $n$ is even} \end{cases} \end{align*} \end{thm} \begin{proof} When $n=1$, our image is a single point, and the constant map (which is also the identity map) is the only continuous self-map. Thus $S(f) =F(X)= \{1\}$ for every $f:C_1\to C_1$. When $1<n\le 4$, again all maps are homotopic, and we have $S(f) = F(X)=\{0,\dots,n\}$ for any $f$ by Corollary \ref{0123X}. Now we consider $C_n$ with $n>4$, which is the interesting case. By Theorem \ref{3maps} the only maps homotopic to $\id$ are rotation maps $r_d$. Since $\#\Fix(r_0) = n$ and $\#\Fix(r_d)=0$ for $d\neq 0$, we have \[ S(\id) = \{0,n\}. \] Now we consider the constant map $c(x_i)=x_0$. Let $f \in C(C_n)$ be defined as follows. \[ f(x_i) = \left \{ \begin{array}{ll} x_i & \mbox{for } 0 \le i \le \lfloor n/2 \rfloor; \\ x_{-i} & \mbox{for } \lfloor n/2 \rfloor < i < n. \end{array} \right .
\] \begin{figure} \[ \begin{tikzpicture} \foreach \t in {0,...,4} { \draw [densely dotted] ({72*\t+90}:1) -- ({72*\t+72+90}:1); } \foreach \t in {0,1,4} { \node at ({\t*72+90}:1) [vertex, label=$x_\t$] {}; } \foreach \t in {2,3} { \node at ({\t*72+90}:1) [vertex, label=below:$x_\t$] {}; } \foreach \t in {3,4} { \draw [very thick] ({72*\t+90}:1) -- ({72*\t+72+90}:1); } \newcommand{\pad}{.1} \draw [->] (72*1+90:1-\pad) -- (72*4+90:1-\pad); \draw [->] (72*2+90:1-\pad) -- (72*3+90:1-\pad); \end{tikzpicture} \qquad \begin{tikzpicture} \foreach \t in {0,...,5} { \draw [densely dotted] ({60*\t+90}:1) -- ({60*\t+60+90}:1); } \foreach \t in {0,1,5} { \node at ({\t*60+90}:1) [vertex, label=$x_\t$] {}; } \foreach \t in {2,3,4} { \node at ({\t*60+90}:1) [vertex, label=below:$x_\t$] {}; } \foreach \t in {3,4,5} { \draw [very thick] ({60*\t+90}:1) -- ({60*\t+60+90}:1); } \newcommand{\pad}{.1} \draw [->] (60*1+90:1-\pad) -- (60*5+90:1-\pad); \draw [->] (60*2+90:1-\pad) -- (60*4+90:1-\pad); \end{tikzpicture} \] \caption{The map $f$ from Theorem \ref{Scycle} pictured in the cases $n=5$ and $n=6$. All points are fixed except those with arrows indicating where they map to. The path $f(C_n)$ is in bold.\label{Scyclefig}} \end{figure} This map ``folds'' the cycle onto a path that is ``about half" of the cycle, with $\lfloor n/2 \rfloor+1$ fixed points. See Figure \ref{Scyclefig}. This can be taken as the first step of a homotopy, in which successive steps shrink the path and the number of fixed points by one per step, until a constant map is reached at the end of the homotopy. \begin{comment} For intermediate values of $t$ the map folds the cycle onto half of itself and pushes the values towards $x_0$. The various values of $t \in \{0,\dots, \lfloor n/2 \rfloor\}$ give maps $h(x_i,t)$ with $\#\Fix(h(x_i,t)) = t+1$, and all these maps are homotopic to $c$. \end{comment} Thus $\{1,\dots,\lfloor n/2 \rfloor +1\} \subseteq S(c)$, and of course $0\in S(c)$ also by Theorem \ref{0123}. 
We wish to show that in fact $S(c) = \{0,\dots,\lfloor n/2 \rfloor + 1\}$. We show this by contradiction: take some nullhomotopic $f$, assume that $k \in S(f)$ with $k>\lfloor n/2 \rfloor + 1$, and we will show in fact that all points are fixed; this would be a contradiction since $f \neq \id$. Since $f$ has more than $\lfloor n/2 \rfloor + 1$ fixed points, repeated application of Lemma \ref{uniqueShortestLem} gives a fixed path $p$ between fixed points $x_i,x_j$ of $f$ of length at least $\lfloor n/2 \rfloor + 1$. Choose any $x \in C_n \setminus p$. Then $x$ lies on the unique shortest path from $x_i$ to $x_j$, and so $x\in \Fix(f)$ by Lemma \ref{uniqueShortestLem}; this gives our desired contradiction. Finally we consider the flip map $l(x_i) = x_{-i}$. By Theorem \ref{3maps}, all maps homotopic to $l$ have the form $f(x_i) = r_d \circ l (x_i) = x_{d-i}$. Such a map has a fixed point at $x_i$ if and only if $d=2i \pmod n$. When $n$ is odd, $2$ is invertible modulo $n$, and so for each $d$ there is exactly one solution and $\#\Fix(f) = 1$. When $n$ is even and $d$ is odd, there are no solutions, and so $\#\Fix(f) = 0$. When $n$ is even and $d$ is even, say $d=2a$, there are two solutions: $i=a$ and $i=a+n/2$, and so $\#\Fix(f) = 2$. We conclude: \[ S(l) = \begin{cases} \{1\} &\text{ if $n$ is odd} \\ \{0,2\} &\text{ if $n$ is even} \end{cases} \qedhere \] \end{proof} By Theorem \ref{3htpyTypes}, any self-map on $C_n$ is homotopic to the constant, identity, or flip. Thus by taking unions of the sets above, we have: \begin{cor} \[ F(C_n) = \begin{cases} \{1\} &\text{ if } n=1, \\ \{0,\dots,n\} &\text{ if } 1<n\le 4, \\ \{0,1,\dots,\lfloor n/2 \rfloor + 1, n\} &\text{ if } n>4.\end{cases} \] \end{cor} From the Corollary above we see that $F(C_5)=\{0,1,2,3,5\}$ and $F(C_6) = \{0,1,2,3,4,6\}$, and thus the formula of Corollary \ref{0123X} is exact in the case of $C_5$; since $4 \in F(C_6)$, it is not exact for $C_6$. The image $C_5$ is the only example known to the authors in which this occurs.
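Since a map between digital images is continuous precisely when adjacent points are sent to equal or adjacent points, the sets $F(C_n)$ can be verified by exhaustive search for small $n$. The following sketch (Python; the helper names are ours, not from the paper) enumerates all continuous self-maps of $C_n$ and collects their fixed point counts:

```python
from itertools import product

def is_continuous(f, n):
    # Rosenfeld continuity on the digital cycle C_n: adjacent points must
    # map to equal or adjacent points (indices taken mod n).
    return all((f[i] - f[(i + 1) % n]) % n in (0, 1, n - 1) for i in range(n))

def F(n):
    # F(C_n): the set of fixed-point counts realized by continuous self-maps,
    # where f is encoded as a tuple with f[i] the image of x_i.
    return {sum(1 for i in range(n) if f[i] == i)
            for f in product(range(n), repeat=n) if is_continuous(f, n)}

print(sorted(F(4)))  # [0, 1, 2, 3, 4]
print(sorted(F(5)))  # [0, 1, 2, 3, 5]
print(sorted(F(6)))  # [0, 1, 2, 3, 4, 6]
```

For $n=5$ this recovers $F(C_5)=\{0,1,2,3,5\}$, in agreement with the Corollary; note in particular that no continuous self-map of $C_5$ has exactly four fixed points.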
\begin{question} Is there any digital image $X \not\in\{C_5,C_6\}$ with $\#X > 4$ and $F(X) = \{0,1,2,3,\#X\}$? \end{question} \begin{figure} \[ \begin{tikzpicture} \draw[densely dotted] (0,0) grid (6,2); \foreach \x in {0,...,6} { \node at (\x,0) [vertex] {}; \node at (\x,2) [vertex] {}; } \foreach \x in {0,2,4,6} \node at (\x,1) [vertex] {}; \draw[step=2] (0,0) grid (6,2); \end{tikzpicture} \] \caption{An image having a self-map with $X(f)=0$\label{xexample}} \end{figure} We conclude this section with two interesting examples showing the wide variety of fixed point sets that can be exhibited for other digital images. Tools we use in our discussion include the following. A path $r: [0,m]_{\Z} \to X$ that is an isomorphism onto $r([0,m]_{\Z})$ is a {\em simple path}. If a loop $p$ is an isomorphism onto $p(C_m)$, then $p$ is a {\em simple loop}. \begin{definition} {\rm \cite{hmps}} A simple path or a simple loop in a digital image $X$ {\em has no right angles} if no pair of consecutive edges of the path or loop belongs to a loop of length~4 in $X$. \end{definition} \begin{definition} {\rm \cite{hmps}} A {\em lasso in $X$} is a simple loop $p: C_m \to X$ and a path $r: [0,k]_{\Z} \to X$ such that $k>0$, $m \ge 5$, $r(k)=p(x_0)$, and neither $p(x_1)$ nor $p(x_{m-1})$ is adjacent to $r(k-1)$. The lasso {\em has no right angles} if neither $p$ nor $r$ has a right angle, and no right angle is formed where $r$ meets $p$; i.e., the final edge of $r$ does not form, together with either of the edges of $p$ at $p(x_0)$, two edges of a loop of length 4 in~$X$. \end{definition} \begin{thm} {\rm \cite{hmps}} \label{lassoThm} Let $X$ be an image in which, for any two adjacent points $x \adj x' \in X$, there is a lasso with no right angles having path $r: [0, k]_{\Z} \to X$ with $r(0) = x$ and $r(1) = x'$. Then $X$ is rigid.
\end{thm} \begin{exl} \label{has1MiddleBar} Let $X$ be the digital image \[ X = \left ([0,6]_{\Z} \times \{0,2\} \right ) \cup \{(0,1), (2,1), (4,1), (6,1)\} \] (see Figure~\ref{xexample}), with 4-adjacency. It is easy, though a bit tedious, to verify that the hypothesis of Theorem \ref{lassoThm} is satisfied by $X$, and so this image is rigid. For example, in Figure \ref{lassofig} we exhibit a lasso with no right angles for two adjacent points. It is easy to construct such lassos for any pair of adjacent points. \begin{figure} \[ \begin{tikzpicture} \draw[densely dotted] (0,0) grid (6,2); \foreach \x in {0,...,6} { \node at (\x,0) [vertex] {}; \node at (\x,2) [vertex] {}; } \node at (0,1) [label=left:$x$] {}; \node at (0,0) [label=left:$x'$] {}; \foreach \x in {0,2,4,6} \node at (\x,1) [vertex] {}; \draw[step=2] (0,0) grid (6,2); \draw [ultra thick] (0,1) -- (0,0); \foreach \x in {0,...,3} { \draw [ultra thick] (\x,0) -- (\x+1,0); } \foreach \x in {2,3} { \draw [ultra thick] (\x,2) -- (\x+1,2); } \foreach \y in {0,1} { \draw [ultra thick] (2,\y) -- (2,\y+1); \draw [ultra thick] (4,\y) -- (4,\y+1); } \end{tikzpicture} \] \caption{A lasso for the points $x=(0,1)$ and $x'=(0,0)$\label{lassofig}} \end{figure} Since $X$ is rigid, we have $S(\id) = \{\#X\} = \{18\}$. Let $f:X\to X$ be the 180-degree rotation of $X$. Then $f$ is an isomorphism, and so by Theorem~\ref{isoOfRigidIsRigid}, $f=f \circ \id_X$ is rigid. Thus $S(f) = \{\#\Fix(f)\} = \{0\}$. In particular this answers the question posed in~\cite{bs19} of whether $X(f)$ could ever equal $0$ for a connected image.
\end{exl} \begin{figure} \[ \begin{tikzpicture} \draw[densely dotted] (0,0) grid (5,2); \foreach \x in {0,...,5} { \node at (\x,0) [vertex] {}; \node at (\x,2) [vertex] {}; } \foreach \x in {0,2,5} \node at (\x,1) [vertex] {}; \draw (0,0) rectangle (2,2); \draw (2,0) rectangle (5,2); \end{tikzpicture} \] \caption{An image with many different values for $S(f)$\label{sexample}} \end{figure} The following example demonstrates an image which has many different possible sets which can occur as $S(f)$ for various self-maps $f$. \begin{exl} \label{self-mapExamples} Let \[X = \left ([0,5]_{\Z} \times \{0,2\} \right ) \cup \{(0,1), (2,1), (5,1)\} \] (see Figure \ref{sexample}), with 4-adjacency. In this image we have several different homotopy classes of maps. We will derive sufficient information about $S$ for some of these to compute $F(X)$. By Theorem \ref{lassoThm}, $X$ is rigid, so $S(\id) = \{\#X\} = \{15\}$. Let $f$ be a vertical reflection. Then $f$ is rigid by Corollary \ref{rigidiso}, and has 3 fixed points, so $S(f) = \{3\}$. Let $g$ be the function that maps the bottom horizontal bar onto the top one, and fixes all other points. Then $g$ has 9 fixed points, and is homotopic to a constant map. We can retract the image of $g$ down to a point one point at a time, and so $\{0,1,\dots, 9\} \subseteq S(g)$. Let $h$ be the function which maps the left vertical bar into the middle vertical bar and fixes all other points. Then $h$ has 12 fixed points. We can additionally map one or both of the next two points into the middle vertical bar to obtain maps homotopic to $h$ with 11 or 10 fixed points. We can do these retractions followed by a rotation around the 10-cycle on the right to obtain a map homotopic to $h$ with no fixed points. Thus $\{0,10,11,12\} \subseteq S(h)$. We have $F(X) \cap \{13,14\} = \emptyset$ by Proposition~\ref{pullThm}, since for all $x \in X$ we see easily that $P(x) \ge 3$. We therefore have \[ F(X) = \{0,1,2,3,4,5,6,7,8,9,10,11,12,15\}. 
\] \end{exl} \section{Further remarks} We have introduced and studied several measures concerning the fixed point set of a continuous self-map on a digital image. We anticipate further research in this area. \thebibliography{11} \bibitem{Berge} C. Berge, {\em Graphs and Hypergraphs}, 2nd edition, North-Holland, Amsterdam, 1976. \bibitem{Bx94} L. Boxer, Digitally continuous functions, {\em Pattern Recognition Letters} 15 (1994), 833-839. https://www.sciencedirect.com/science/article/abs/pii/0167865594900124 \bibitem{Bx99} L. Boxer, A classical construction for the digital fundamental group, {\em Journal of Mathematical Imaging and Vision} 10 (1999), 51-62. https://link.springer.com/article/10.1023/A \bibitem{Bx10} L. Boxer, Continuous maps on digital simple closed curves, {\em Applied Mathematics} 1 (2010), 377-386. https://www.scirp.org/Journal/PaperInformation.aspx?PaperID=3269 \bibitem{BxNormal} L. Boxer, Generalized normal product adjacency in digital topology, {\em Applied General Topology} 18 (2) (2017), 401-427. https://polipapers.upv.es/index.php/AGT/article/view/7798/8718 \bibitem{BxAlt} L. Boxer, Alternate product adjacencies in digital topology, {\em Applied General Topology} 19 (1) (2018), 21-53. https://polipapers.upv.es/index.php/AGT/article/view/7146/9777 \bibitem{BEKLL} L. Boxer, O. Ege, I. Karaca, J. Lopez, and J. Louwsma, Digital fixed points, approximate fixed points, and universal functions, {\em Applied General Topology} 17 (2) (2016), 159-172. https://polipapers.upv.es/index.php/AGT/article/view/4704/6675 \bibitem{BK12} L. Boxer and I. Karaca, Fundamental groups for digital products, {\em Advances and Applications in Mathematical Sciences} 11 (4) (2012), 161-180. http://purple.niagara.edu/boxer/res/papers/12aams.pdf \bibitem{bs19} L. Boxer and P.C. Staecker, Remarks on fixed point assertions in digital topology, {\em Applied General Topology}, to appear. https://arxiv.org/abs/1806.06110 \bibitem{hmps} J. Haarmann, M.P. Murphy, C.S. Peters, and P.C.
Staecker, Homotopy equivalence in finite digital images, {\em Journal of Mathematical Imaging and Vision} 53 (2015), 288-302. https://link.springer.com/article/10.1007/s10851-015-0578-8 \bibitem{jiang} B. Jiang, Lectures on Nielsen fixed point theory, {\em Contemporary Mathematics} 18 (1983). \bibitem{Khalimsky} E. Khalimsky, Motion, deformation, and homotopy in finite spaces, in {\em Proceedings IEEE Intl. Conf. on Systems, Man, and Cybernetics}, 1987, 227-234. \bibitem{Rosenfeld} A. Rosenfeld, `Continuous' functions on digital pictures, {\em Pattern Recognition Letters} 4 (1986), 177-184. https://www.sciencedirect.com/science/article/pii/0167865586900176 \bibitem{stae15} P.C. Staecker, Some enumerations of binary digital images, arXiv:1502.06236, 2015. \end{document}
TITLE: Solve $(x-2)^6+(x-4)^6=64$ using a substitution. QUESTION [9 upvotes]: Solve $(x-2)^6+(x-4)^6=64$ using a substitution. I tried using $t=x-2$. But again I have to know the expansion of the power $6$. Can it be transformed to a quadratic equation? I know that it can be somehow solved by expanding the powers. But I'm trying to get a good transformation by a sub. REPLY [1 votes]: Let $x-2=a$ and $4-x=b$. Hence, $a+b=2$ and $a^6+b^6=64$, or $$(a^2+b^2)(a^4+b^4-a^2b^2)=64.$$ Using $a^2+b^2=(a+b)^2-2ab=4-2ab$ and $a^4+b^4-a^2b^2=(a^2+b^2)^2-3a^2b^2$, this becomes $$(2-ab)(a^2b^2-16ab+16)=32,$$ which expands to $$ab(a^2b^2-18ab+48)=0.$$ $ab=0$ gives $x=2$ or $x=4$. Neither $ab=9-\sqrt{33}$ nor $ab=9+\sqrt{33}$ gives real roots: real $a,b$ with $a+b=2$ satisfy $ab\le 1$, while $9\pm\sqrt{33}>1$. That is, the answer is $\{2,4\}$.
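The solution set can be double-checked symbolically (a sketch using sympy, added for verification; it is not part of the original answer):

```python
from sympy import symbols, expand, Poly, real_roots, sqrt

x = symbols('x')
p = Poly(expand((x - 2)**6 + (x - 4)**6 - 64), x)

# The only real roots are x = 2 and x = 4; the remaining quartic factor
# has no real zeros.
print(real_roots(p))  # [2, 4]

# Consistency with the argument above: ab = 9 +/- sqrt(33) both exceed 1,
# while real a, b with a + b = 2 force ab = a(2 - a) <= 1.
print(bool(9 - sqrt(33) > 1), bool(9 + sqrt(33) > 1))  # True True
```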
\begin{document} \begin{frontmatter} \title{The G-invariant spectrum and non-orbifold singularities} \author{Ian M.~Adelstein \\ M.~R.~Sandoval} \address{Department of Mathematics, Trinity College\\ Hartford, CT 06106 United States} \begin{abstract} We consider the $G$-invariant spectrum of the Laplacian on an orbit space $M/G$ where $M$ is a compact Riemannian manifold and $G$ acts by isometries. We generalize the Sunada-Pesce-Sutton technique to the $G$-invariant setting to produce pairs of isospectral non-isometric orbit spaces. One of these spaces is isometric to an orbifold with constant sectional curvature whereas the other admits non-orbifold singularities and therefore has unbounded sectional curvature. We therefore show that constant sectional curvature and the presence of non-orbifold singularities are inaudible to the G-invariant spectrum. \end{abstract} \begin{keyword} Spectral geometry, Laplace operator, orbit spaces, group actions \MSC[2010] 58J50 \sep 58J53 \sep 22D99 \sep 53C12 \end{keyword} \end{frontmatter} \section*{Acknowledgements} The authors would like to thank Carolyn Gordon and David Webb for many helpful conversations throughout the course of this project, as well as Emilio Lauret for providing valuable feedback. We would also like to acknowledge the support of the National Science Foundation, Grant DMS-1632786. \section{Introduction} Given a compact Riemannian manifold $M$ and a compact subgroup of its isometry group $G \leq Isom(M)$ we consider the spectral geometry of the orbit space $M/G$. We are interested in the $G$-invariant spectrum: the spectrum of the Laplacian restricted to the smooth functions on $M$ which are constant on the orbits. We let $C^{\infty}(M)^G$ denote the space of such functions. 
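The $G$-invariant spectrum can be made concrete in a classical example (added here for illustration; the example is standard and is not part of this paper's results). For the rotation action of $SO(2)$ on the round sphere $S^2$, the invariant functions are the functions of the polar angle $\theta$ alone, and the $SO(2)$-invariant spectrum is $\{l(l+1)\}_{l \ge 0}$, with invariant eigenfunctions the Legendre polynomials $P_l(\cos\theta)$: under $x=\cos\theta$, Legendre's equation $\big((1-x^2)P_l'\big)' + l(l+1)P_l = 0$ is exactly the eigenvalue equation $\Delta\psi = l(l+1)\psi$ for $\psi(\theta)=P_l(\cos\theta)$, with the sign convention $\Delta = -\mathrm{div}\circ\mathrm{grad}$. A sympy check of the polynomial identity:

```python
from sympy import symbols, legendre, simplify

x = symbols('x')
for l in range(6):
    P = legendre(l, x)
    # Legendre's equation: ((1 - x^2) P')' + l(l + 1) P = 0, so that
    # P_l(cos(theta)) is an SO(2)-invariant eigenfunction on S^2 with
    # eigenvalue l(l + 1).
    assert simplify(((1 - x**2) * P.diff(x)).diff(x) + l * (l + 1) * P) == 0
print("Legendre equation verified for l = 0..5")
```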
The orbit space $M/G$ has a natural smooth structure given by the algebra $C^{\infty}(M/G)$ consisting of functions $f \colon M/G \to \mathbb{R}$ whose pullback via $\pi \colon M \to M/G$ is a smooth $G$-invariant function on $M$, i.e.~$f \in C^{\infty}(M/G)$ if and only if $\pi^* f \in C^{\infty}(M)^G$. A map $\phi \colon M_1/G_1 \to M_2/G_2$ is said to be \emph{smooth} if the pullback of every smooth function on $M_2/G_2$ is a smooth function on $M_1/G_1$, i.e.~$\phi^*f \in C^{\infty}(M_1/G_1)$ for every $f \in C^{\infty}(M_2/G_2)$. The notion of equivalence between orbit spaces that we use in this paper is the following: \begin{defn}\label{d:srfiso} A map $\phi \colon M_1/G_1 \to M_2/G_2$ is said to be a \emph{smooth SRF isometry} if it is an isometry of metric spaces that is smooth (in the above sense) with smooth inverse. \end{defn} We note that the orbit space $M/G$ is an example of a singular Riemannian foliation, and the term \emph{SRF isometry} comes from the literature on singular Riemannian foliations, cf.~\cite{AS2017, AL2011}. If the orbit spaces $M_1/G_1$ and $M_2/G_2$ have the structure of Riemannian orbifolds then the above definition is equivalent to the standard notion of smooth isometry between Riemannian orbifolds. We are interested in the following inverse spectral questions: What information about the singular set of an orbit space $M/G$ is encoded in its $G$-invariant spectrum? In particular, can one hear the existence of non-orbifold singularities, i.e.~whether or not an orbit space is an orbifold? We note that the negative inverse spectral results from the manifold and orbifold settings hold in the more general setting of orbit spaces. It is therefore known that isotropy type \cite{SSW2006} and the order of the maximal isotropy groups \cite{RSW2008} are inaudible (i.e.~not determined by the $G$-invariant spectrum).
Positive results exist for the $G$-equivariant spectrum where it has been shown that dimension and volume of the orbit space are spectrally determined \cite{BH1984}. In the Riemannian foliation setting it follows that the spectrum of the basic Laplacian determines dimension and volume of the space of leaf closures \cite{R1998}. In order to address the above spectral questions we first generalize the Sunada-Pesce-Sutton \cite{Sut2002} technique to the $G$-invariant setting (see Theorem~\ref{generalization}). We then use this generalization to produce pairs of isospectral non-isometric orbit spaces. As with all Sunada isospectral pairs our examples are quotients of a manifold $M$ by representation equivalent subgroups $H_1$ and $H_2$ of the isometry group. The quotients $M/H_i$ are isospectral in the sense that the $H_i$-invariant spectra of the Laplacian on $M$ are equivalent. \begin{thm}\label{main} Let $n \geq 3$ be an odd integer and set $m= (n-1)/2 $ with $H_1 = U(n)$ and $H_2 = Sp( m) \times SO(2n - 2 m)$. Embed $H_1$ into $U(2n)$ by $st \oplus st^*$ where $st$ is the standard representation of $H_1$ and $st^*$ is the $\mathbb{R}$-contragredient representation. Embed $H_2$ into $U(2n)$ by $st' \otimes id \oplus id \otimes st''$ where $st'$ and $st''$ are the standard representations of $Sp(m)$ and $SO(2n-2m)$ respectively. Then the orbit spaces $S^{4n-1}/H_1$ and $S^{4n-1}/H_2$ are isospectral yet non-isometric. \end{thm} The representations of the groups $H_i$ used above are shown to be representation equivalent in \cite{AYY}. The authors then let these groups act on $SU(2n)$ producing isospectral pairs of homogeneous Riemannian manifolds which are simply connected, yet non-homeomorphic \cite[Theorem 1.7]{AYY}. In Section~\ref{examples} we focus on the case $n=3$ and consider the isospectral pair $O_1 = S^{11}/U(3)$ and $O_2 = S^{11}/Sp(1) \times SO(4)$. 
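A quick arithmetic consistency check on the pairs in Theorem~\ref{main} (an added sketch, not part of the paper's argument): the two subgroups have equal dimension, since with $m=(n-1)/2$ we get $2n-2m=n+1$, and $\dim Sp(m)+\dim SO(n+1)=\frac{n(n-1)}{2}+\frac{n(n+1)}{2}=n^2=\dim U(n)$. In code:

```python
def dim_U(n):  return n * n               # dim U(n)  = n^2
def dim_Sp(m): return m * (2 * m + 1)     # dim Sp(m) = m(2m + 1)
def dim_SO(k): return k * (k - 1) // 2    # dim SO(k) = k(k - 1)/2

for n in range(3, 51, 2):                 # odd n >= 3, as in Theorem main
    m = (n - 1) // 2
    h1 = dim_U(n)                            # H_1 = U(n)
    h2 = dim_Sp(m) + dim_SO(2 * n - 2 * m)   # H_2 = Sp(m) x SO(2n - 2m)
    assert h1 == h2 == n * n
print("dim H_1 = dim H_2 = n^2 for all odd n checked")
```

For the case $n=3$ studied below this reads $\dim U(3) = 9 = \dim Sp(1) + \dim SO(4)$.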
We use principal isotropy reduction to study these spaces and show that they have different isotropy type, maximal isotropy dimension, and quotient codimension of the strata. The most striking differences between these isospectral orbit spaces are given in the following theorem. \begin{thm}\label{main2} The orbit space $O_1$ is smoothly SRF isometric to a 3-hemisphere of constant sectional curvature whereas $O_2$ admits a non-orbifold point and therefore has unbounded sectional curvature. We conclude that constant sectional curvature and the presence of non-orbifold singularities are inaudible properties of the $G$-invariant spectrum. \end{thm} It is important to note that although the isometry between $O_1$ and the 3-hemisphere is a smooth SRF isometry, it does not preserve leaf codimension and we therefore cannot conclude that these spaces are isospectral, cf.~\cite{AS2017}. Indeed, it can be shown by direct computation that the Neumann spectrum on the 3-hemisphere is distinct from the $U(3)$-invariant spectrum on $S^{11}$. We note that the question of whether orbifold singularities are audible, i.e.~whether there exists an isospectral pair of orbifolds one of which admits singular points whereas the other does not (and is therefore a manifold), is an important open question in the field of spectral geometry, cf.~\cite{Sut2010}. Our examples demonstrate that this question can be reformulated in the more general orbit space setting: Does there exist an isospectral pair of orbit spaces one of which admits singular points whereas the other does not (and is therefore a manifold)? The fact that constant sectional curvature is not determined by the $G$-invariant spectrum should be viewed in light of the positive inverse spectral results on constant sectional curvature in the manifold setting.
By manipulating the terms in the asymptotic expansion of the heat trace, Berger et al.~\cite{Berger} showed that constant sectional curvature is an audible property of the Laplace spectrum for manifolds of dimension two or three. Using similar techniques Tanno \cite{Tanno1973} extended these results to manifolds of dimension less than six. Although asymptotic expansions for the heat trace on orbifolds \cite{DGGW2008} and Riemannian foliations \cite{R2010} are known, the terms in these expansions are more complicated than in the manifold setting as they include information from the singular sets of these spaces. The methods of Berger et al.~and Tanno do not readily apply in these more exotic settings. Indeed, as noted in Theorem~\ref{main2}, the isospectral orbit spaces from Theorem~\ref{main} demonstrate that constant sectional curvature is an inaudible property of the $G$-invariant spectrum. The paper proceeds as follows. In Section~\ref{sunada} we generalize the Sunada-Pesce-Sutton \cite{Sut2002} technique in order to prove Theorem~\ref{main}. In Section~\ref{smooth} we review a classification of orbifold singularities. In Section~\ref{examples} we study the inaudible properties of the isospectral non-isometric orbit spaces provided by Theorem~\ref{main} and prove Theorem~\ref{main2}. \section{The $G$-invariant Sunada-Pesce-Sutton technique}\label{sunada} Negative inverse spectral results are realized by studying pairs of isospectral non-isometric spaces. The celebrated Sunada technique \cite{Sun1985} provides a systematic method for producing such pairs. This technique has been generalized and applied in many different settings by modifying its various hypotheses. In this section we generalize the Sunada-Pesce-Sutton \cite{Sut2002} technique to the $G$-invariant setting. We start by reviewing the original Sunada technique and the various generalizations that lead to our $G$-invariant formulation.
The original theorem concerns finite group actions and is stated in terms of \emph{almost conjugate} subgroups of the isometry group. \begin{defn} Let $G$ be a finite group and let $\Gamma_1$ and $\Gamma_2$ be subgroups of $G$. Then $\Gamma_1$ is said to be \emph{almost conjugate} to $\Gamma_2$ in $G$ if $$( \# [g]_G \cap \Gamma_1) =( \# [g]_G \cap \Gamma_2) $$ for each $G$-conjugacy class $[g]_G$. We call the triple $(G, \Gamma_1, \Gamma_2)$ a \emph{Sunada triple}. \end{defn} \begin{thm}[Sunada's Theorem] Let $(G, \Gamma_1, \Gamma_2)$ be a Sunada triple and $M$ a compact Riemannian manifold on which $G$ acts by isometries. Assume that $\Gamma_1$ and $\Gamma_2$ act freely. Then $M / \Gamma_1$ and $M / \Gamma_2$ with the induced Riemannian metrics are strongly isospectral as Riemannian manifolds. \end{thm} A simple generalization of this theorem allows for the action to have fixed points (i.e.~not necessarily free). One need only note that the hypothesis that the action be free is unnecessary in the proof of the theorem. Using this generalization one can produce isospectral pairs of orbifolds, cf.~\cite[Theorem 2.5]{SSW2006}. It is precisely the hypothesis that the action be free that we will modify in the Sunada-Pesce-Sutton generalization to produce pairs of isospectral non-isometric orbit spaces. One of the earliest and most fruitful generalizations of the original Sunada theorem was provided by DeTurck and Gordon \cite{DeTGL}. They allow $G$ to be a Lie group and $\Gamma_1$ and $\Gamma_2$ to be representation equivalent cocompact discrete subgroups. If $G$ acts by isometries on a Riemannian manifold $M$ in such a way that $M / \Gamma_1$ and $M / \Gamma_2$ are compact manifolds then these manifolds are strongly isospectral. The main advantage of their generalization is that the quotients no longer need be by finite subgroups of the isometry group. 
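The notion of almost conjugacy in the finite setting is illustrated by the classical example of Gassmann in $S_6$, commonly used to illustrate Sunada's construction: $\Gamma_1=\{e,(12)(34),(13)(24),(14)(23)\}$ and $\Gamma_2=\{e,(12)(34),(12)(56),(34)(56)\}$. Conjugacy classes of $S_6$ are determined by cycle type, and every non-identity element of either subgroup is a product of two disjoint transpositions, so the subgroups are almost conjugate; they are not conjugate, since every element of $\Gamma_1$ fixes the letters $5$ and $6$ while the elements of $\Gamma_2$ have no common fixed letter. A brute-force check (an added sketch, not from the paper):

```python
from itertools import permutations

def compose(p, q):                    # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(6))

def inverse(p):
    inv = [0] * 6
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def cycle_type(p):
    seen, lengths = set(), []
    for i in range(6):
        if i not in seen:
            j, c = i, 0
            while j not in seen:
                seen.add(j)
                j = p[j]
                c += 1
            lengths.append(c)
    return tuple(sorted(lengths))

def perm(*transpositions):            # product of disjoint transpositions,
    p = list(range(6))                # given on 1-indexed letters
    for a, b in transpositions:
        p[a - 1], p[b - 1] = p[b - 1], p[a - 1]
    return tuple(p)

e = tuple(range(6))
H1 = {e, perm((1, 2), (3, 4)), perm((1, 3), (2, 4)), perm((1, 4), (2, 3))}
H2 = {e, perm((1, 2), (3, 4)), perm((1, 2), (5, 6)), perm((3, 4), (5, 6))}

# Almost conjugate: equal intersection counts with every conjugacy class
# of S_6 (classes correspond to cycle types).
count = lambda H: sorted(cycle_type(h) for h in H)
print(count(H1) == count(H2))  # True

# Not conjugate: no g in S_6 carries H1 onto H2.
conjugate = any(
    {compose(compose(g, h), inverse(g)) for h in H1} == H2
    for g in permutations(range(6)))
print(conjugate)  # False
```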
Pesce \cite{P} weakened the hypothesis that the discrete subgroups $\Gamma_1$ and $\Gamma_2$ be representation equivalent, applying the idea of Frobenius reciprocity in his proof. Although the subgroups we use in Theorem~\ref{main} are representation equivalent, so that we do not need the generality of Pesce's result, we mention it here because the version of Sunada that we employ in the proof of Theorem~\ref{main} follows from Sutton's generalization of Pesce's result. Sutton \cite[Theorem 2.3]{Sut2002} generalized the Sunada-Pesce technique from the setting of discrete subgroups to connected subgroups, allowing for the construction of the first locally non-isometric, simply-connected isospectral homogeneous manifolds. His theorem requires the action of the $H_i$ on $M$ to be free, but this assumption is made to ensure that the quotient spaces are manifolds, and is not necessary in his proof. Indeed, the spaces $M / H_i$ in Theorem~\ref{main} need not be Riemannian manifolds or orbifolds, but more generalized quotient spaces. Sutton additionally requires in his theorem that the Riemannian submersions $\pi_i \colon M \to M / H_i$ have totally geodesic fibers in order to conclude that the eigenfunctions on $M / H_i$ lift to eigenfunctions on $M$. This assumption is unnecessary in our setting as we consider the $H_i$-invariant spectrum on $C^{\infty}(M)$ as opposed to the spectrum of the Laplacian acting on $C^{\infty}(M / H_i)$. We now state the $G$-invariant version of the Sunada theorem that we will apply in this paper, noting that it follows directly from the Pesce-Sutton generalizations. Indeed, we need only note that the hypothesis that the action be free in \cite[Theorem 2.3]{Sut2002} is unnecessary in the proof of the theorem.
Moreover, Sutton generalizes his own theorem in \cite[Theorem 2.7]{Sut2010} and the version stated below is a special case of this result, where again his assumption that $\pi_i \colon M \to M / H_i$ have minimal fibers is unnecessary in the $G$-invariant setting. \begin{thm}[$G$-invariant Sunada-Pesce-Sutton technique]\label{generalization} Let $M$ be a compact Riemannian manifold and $G \leq Isom(M)$ a compact Lie group. Suppose $H_1, H_2 \leq G$ are closed, representation equivalent subgroups. Then the orbit spaces $M/H_1$ and $M/H_2$ are isospectral in the sense that the $H_i$-invariant spectra of the Laplacian on $M$ are equivalent. \end{thm} \begin{proof}[Proof of Theorem~\ref{main}] Under the given embeddings, \cite[Theorem 1.5]{AYY} states that $H_1$ and $H_2$ are representation equivalent as subgroups of $SU(2n) \leq Isom(S^{4n-1})$. The isospectrality of the spaces $O_1 = S^{4n-1}/H_1$ and $O_2 = S^{4n-1}/H_2$ then follows immediately from Theorem~\ref{generalization}. Finally, we note that there are many geometric properties that distinguish these spaces as non-isometric. We catalogue these properties in Section~\ref{examples}. \end{proof} \begin{section}{Orbifold Points and Non-orbifold Points in an Orbit Space}\label{smooth} Here we make note of an important characterization of orbifolds in terms of slice representations from \cite{LT2010} that we will use to prove Theorem \ref{main2}. We begin by recalling the notion of polar actions and polar representations, comprehensively studied in \cite{D1985}. \begin{defn} An action of a Lie group $G$ by isometries on a complete Riemannian manifold $M$ is said to be {\it polar} if there exists a complete submanifold $\Sigma$ that meets every orbit orthogonally. Such a submanifold is known as a {\it section}. Note that such submanifolds are necessarily totally geodesic.
\end{defn} \begin{rem}\label{s2}\normalfont By way of illustration of this idea, we note that the action of $SO(2)$ on $S^2$ by rotations about an axis is polar. One may choose $\Sigma$ to be any meridian great circle. The quotient space is an interval of the form $[0,\pi].$ This has an orbifold structure with $\mathbb{Z}_2$ action at the endpoints. In fact, it is a good orbifold, $S^1/\mathbb{Z}_2.$ This is more generally true for all polar actions, as we see from the proposition below, which follows easily from Chapter 5 of \cite{PT1988}. \end{rem} \begin{prop} If the action of $G$ on $M$ is polar, then the quotient $M/G$ has the smooth structure of a good orbifold. \end{prop} \begin{proof} If the action of $G$ on $M$ is polar, then we can generate $M$ from the action of $G$ on $\Sigma.$ In other words, $G\cdot \Sigma=M$. Now recall the generalized Weyl group of $\Sigma$ from \cite{PT1988}: Let $N(\Sigma)=\{g\in G\,|\,g\cdot\Sigma=\Sigma\}$. Note that this is the largest subgroup of $G$ which induces an action on $\Sigma$. Let $Z(\Sigma)=\{g\in G\,|\,g\cdot s=s,\, \forall s\in \Sigma\},$ which is the kernel of the induced action. Then the {\it generalized Weyl group of }$\Sigma$ is $W(\Sigma)=N(\Sigma)/Z(\Sigma)$. Let $W:=W(\Sigma).$ From Proposition 5.6.15 of \cite{PT1988}, $W$ is discrete in general, and thus, for $G$ compact, $W$ is finite, so $\Sigma/W$ is a good orbifold. Furthermore, $M/G=\Sigma/W.$ Thus, the quotient space $M/G$ corresponds to the good orbifold $\Sigma/W.$ Finally, we note that there is a correspondence between the sheaf of $G$-invariant functions on $M$ and the $W$-invariant functions on $\Sigma$.
If $f\in C^\infty(M)^G$ then the restriction of $f$ to $\Sigma$ is a function in $C^\infty(\Sigma)^W.$ Conversely, for any $h\in C^\infty(\Sigma)^W,$ we can extend it to all of $M$ by defining it to be constant on the orbits of $G.$ Hence, $h\in C^\infty(M)^G.$ As noted in Theorem 5.6.25 of \cite{PT1988}, the above defines an isomorphism between $C^\infty(M)^G$ and $C^\infty(\Sigma)^W.$ \end{proof} \begin{rem}\normalfont Notice that in Remark \ref{s2}, the action of $S^1$ is effective and thus $W$ is the group generated by 180-degree rotations, and hence is isomorphic to $\mathbb{Z}_2$ with the previously claimed action on the endpoints. \end{rem} The arguments in the previous proof are global, rather than local, in nature and produce a stronger result than we seek--that of a good orbifold. One can ask if weaker, more local conditions on the action would guarantee a more generic orbifold structure in the quotient. Indeed, that is known to be the case in the literature of orbit spaces, as we see in what follows. Before discussing these conditions, we will make note of some additional definitions related to polar actions that are necessary to the results cited in Section \ref{examples}. We also note an important connection between polar actions and symmetric spaces. The examples of Theorem \ref{main} are quotients of spheres. Polar actions on spheres arise from the restriction to the spheres $S^n$ of certain representations of actions by $G$ on the ambient space $\mathbb{R}^{n+1}.$ These representations are said to be {\it polar representations}, which we will state in general terms below. If $M=V$ is a Euclidean space with some inner product $\langle\cdot,\cdot\rangle$ then the analog of a global section, known as a linear cross section, is defined as follows. \begin{defn} Let $T_v(G\cdot v)$ be the tangent space to the orbit of $v$. We say $v$ is {\it regular} if $T_v(G\cdot v)$ is of maximal dimension.
If $v$ is regular, we define the linear space $\mathfrak{a}_v=\{u\in V\,|\, \langle u,T_v(G\cdot v)\rangle=0\}$ to be a {\it linear cross-section}. Note that $\mathfrak{a}_v$ meets every orbit, by Lemma 1 of \cite{D1985}. \end{defn} \begin{defn} A representation $\rho:G\rightarrow O(V)$ is said to be a {\it polar representation} if for any regular point $v_0$, and for any $u\in \mathfrak{a}_{v_0},$ $\langle T_u (G\cdot u), \mathfrak{a}_{v_0}\rangle=0.$ Thus, we have a linear cross section of minimal dimension that meets the tangent space to every orbit orthogonally. \end{defn} Polar representations are closely related to symmetric spaces, as noted in \cite{D1985}. The relationship is as follows: if $G/H$ is a symmetric space, then the action of $H$ on $T_{eH}(G/H)$ at the identity coset $eH$ is polar. On the other hand, if $\rho: G'\rightarrow O(V)$ is any polar representation on a real vector space $V$, then there exists a symmetric space $G/H$ and an isometry $\varphi: V\rightarrow T_{eH}(G/H)$ that sends the $G'$-orbits injectively onto the $H$-orbits (Proposition 6 of \cite{D1985}). Now we turn to the infinitesimal version of the definition of a polar action which is essential to determining when an orbit space has the smooth structure of an orbifold. \begin{defn} Let $x\in M$ and let $H_x$ be the normal space to the orbit through $x$. The isotropy group $G_x$ acts on $H_x.$ This is called the {\it slice representation} at $x$. If the slice representations at every point $x\in M$ are polar, then the action is said to be {\it infinitesimally polar}, as in \cite{GL2015}. We note that every polar representation is infinitesimally polar. \end{defn} The characterization of orbifold points that we will use comes from Theorem 1.1 of \cite{LT2010}, which we state below: \begin{thm}[Theorem 1.1 of \cite{LT2010}]\label{22} Let $M$ be a Riemannian manifold and let $G$ be a closed group of isometries of $M$.
Let $B=M/G$ be the quotient, which is stratified by orbit type. Let $B_0$ be the maximal stratum of $B$. Let $x\in M$ be a point with isotropy group $G_x$ acting on the normal space $H_x$ of the orbit $G\cdot x\subset M$. Set $y=G\cdot x\in B.$ Let $z\in B_0$, and let $\overline{\kappa}(z)$ be the maximum of the sectional curvatures of the tangent planes at $z$. Then the following are equivalent: \begin{enumerate} \item $\displaystyle{\limsup_{z\in B_0, z\rightarrow y}\overline{\kappa}(z)<\infty.}$ \item $\displaystyle{\limsup_{z\in B_0, z\rightarrow y}\overline{\kappa}(z)\cdot (d(z,y))^2=0.}$ \item The action of $G_x$ on $H_x$ is polar. (I.e., the slice representation at $x$ is polar.) \item A neighborhood of $y$ in $B$ is a smooth Riemannian orbifold. (I.e., there exists a neighborhood $\overline{U}$ of $y$ in $B$ that is isometric as a metric space to $U/\Gamma$ where $U$ is a neighborhood in a smooth Riemannian manifold and $\Gamma$ is a finite group of isometries of $U$.) \end{enumerate} \end{thm} It is these last two equivalent conditions that are of particular interest here. As a consequence, an orbit space has an orbifold structure if and only if the action of $G$ on $M$ is infinitesimally polar. We note that the proof of the last two equivalences above relies on fundamental properties and results relating to singular Riemannian foliations and polar actions, and thus illustrates how one may augment the local understanding of orbifolds via the connection with quotients of manifolds by isometric group actions (or, equivalently, with the leaf spaces of singular Riemannian foliations with closed leaves). \begin{defn} A point $y$ in the quotient satisfying the last condition of the above is said to be an {\it orbifold point}. Otherwise, $y$ is a non-orbifold point. From the above, a point $y\in M/G$ is an orbifold point if and only if the slice representation is polar. 
\end{defn} As noted in Corollary 1.2 of \cite{LT2010}, the set of non-orbifold points in the quotient has relatively high quotient codimension--at least 3. \end{section} \section{The Examples}\label{examples} In this section we examine the pair of isospectral spaces $O_1 = S^{11}/U(3)$ and $O_2 = S^{11}/Sp(1) \times SO(4)$ provided by Theorem~\ref{main} to determine inaudible properties of the $G$-invariant spectrum. In particular, we prove Theorem~\ref{main2} demonstrating that the existence of non-orbifold singularities and constant sectional curvature are not determined by the $G$-invariant spectrum. Additionally we have the following proposition which follows immediately from the tables below. \begin{prop}\label{props} As demonstrated by the isospectral pair $O_1 = S^{11}/U(3)$ and $O_2 = S^{11}/Sp(1) \times SO(4)$ the following properties of orbit spaces are inaudible, i.e.~not determined by the $G$-invariant spectrum:~isotropy type, maximal isotropy dimension, and quotient codimension of the strata. \end{prop} We note that some of these results are known in the orbifold setting where the inaudibility of isotropy type \cite{SSW2006} and the order of the maximal isotropy groups \cite{RSW2008} have been demonstrated. Quotient codimension of the strata is a topological property of the quotient and is known to be preserved under smooth SRF isometry \cite{AS2017}. It is nevertheless not determined by the $G$-invariant spectrum. The first step in our study of the pairs of isospectral non-isometric orbit spaces provided by Theorem~\ref{main} is an application of principal isotropy reduction, cf.~\cite{GL2015} or \cite{GL2014}. 
\begin{lemma}\label{pid} Principal isotropy reduction yields the following smooth SRF isometries (note that these isometries do not preserve the spectra): $$O_1= S^{11}/U(3)=S^7/U(2)$$$$O_2= S^{11}/Sp(1) \times SO(4)= S^7/Sp(1)\times O(2).$$ \end{lemma} \begin{proof} Here we view $S^{11} \subset \mathbb{C}^6$ so that points $(z_1, \ldots, z_6) \in S^{11}$ are such that $z_i \in \mathbb{C}$ and $|z_1|^2 + \dots + |z_6|^2 = 1$. We consider each of the reductions separately. First we apply principal isotropy reduction to $S^{11}/U(3)$. The principal isotropy for this action is $K=U(1)\times Id_2$. We choose a specific copy of $K \leq U(3)$, say \[ K= \bigg{ \{ } \left[ \begin{array}{c | c c} z & & \\ \hline & 1 & 0 \\ & 0 & 1 \\ \end{array} \right] : z \in U(1) \bigg{ \} } \] \noindent and then use the embedding of $U(3)$ into $Isom(S^{11})$ given in Theorem~\ref{main} to realize the principal isotropy as \[ K= \bigg{ \{ } \left[ \begin{array}{ c | c} \begin{array}{c | c c} z & & \\ \hline & 1 & 0 \\ & 0 & 1 \\ \end{array} & \\ \hline & \begin{array}{c | cc } \bar{z} & & \\ \hline & 1 & 0 \\ & 0 & 1 \\ \end{array} \end{array} \right] \colon z \in U(1) \bigg{ \} } \leq SU(6) \leq Isom(S^{11}). \] \noindent We see that the vectors $v=(0,z_2,z_3,0,z_5,z_6) \in S^{11}$ are fixed by $K$ and that the fixed point set is therefore a copy of $S^7 \subset S^{11}$. Let $N$ denote the induced action on this fixed point set of the normalizer of $K$ in $U(3)$. Because $N$ acts on the vectors $v=(0,z_2,z_3,0,z_5,z_6) \in S^{11}$ the largest it can be is \[ N= \bigg{ \{ } \left[ \begin{array}{ c | c} \begin{array}{c | c } z & \begin{array}{c c} & \end{array} \\ \hline \begin{array}{c } \\ \end{array} & \\ & A \end{array} & \\ \hline & \begin{array}{c | c } \bar{z} & \begin{array}{c c} & \end{array} \\ \hline \begin{array}{c } \\ \end{array} & \\ & \bar{A} \end{array} \end{array} \right] : z \in U(1) , ~ A \in U(2) \bigg{ \} }. 
\] \noindent It is straightforward to check that $U(1) \times U(2)$ normalizes $K=U(1) \times Id_2$ so that indeed $N=U(1) \times U(2)$. Principal isotropy reduction therefore yields the metric space isometry $S^{11}/U(3)= S^7/ (N/K) = S^7/U(2)$. Finally, as these orbit spaces are all three dimensional, \cite[Theorem 1.5]{AL2011} implies that these metric space isometries are smooth SRF isometries. Next we apply principal isotropy reduction to $S^{11}/Sp(1)\times SO(4)$. The principal isotropy for this action is $K=Id_4 \times (SO(2) \times Id_2)$. We choose a specific copy of $K \leq Sp(1)\times SO(4)$, say \[ K= \bigg{ \{ } \left[ \begin{array}{c | c c } Id_4 & & \\ \hline & A & 0 \\ & 0 & Id_2 \\ \end{array} \right] : A \in SO(2) \bigg{ \} } \] \noindent and then use the embedding of $Sp(1) \times SO(4)$ into $Isom(S^{11})$ given in Theorem~\ref{main} to realize the principal isotropy as \[ K= \bigg{ \{ } \left[ \begin{array}{c | c c | c c} Id_4 & & & & \\ \hline & A & 0 & & \\ & 0 & Id_2 & & \\ \hline & & & A & 0 \\ & & & 0 & Id_2 \\ \end{array} \right] : A \in SO(2) \bigg{ \} } \leq SU(6) \leq Isom(S^{11}). \] \noindent We see that the vectors $v=(z_1, z_2, 0, z_4, 0, z_6) \in S^{11}$ are fixed by $K$ and that the fixed point set is therefore a copy of $S^7 \subset S^{11}$. Let $N$ denote the induced action on this fixed point set of the normalizer of $K$ in $Sp(1) \times SO(4)$. Because $N$ acts on the vectors $v=(z_1, z_2, 0, z_4, 0, z_6) \in S^{11}$ the largest it can be is \[ N= \bigg{ \{ } \left[ \begin{array}{c | c c | c c} B & & & & \\ \hline & C & 0 & & \\ & 0 & D & & \\ \hline & & & C & 0 \\ & & & 0 & D \\ \end{array} \right] : B \in Sp(1), \left[ \begin{array}{ c c} C & 0 \\ 0 & D \end{array} \right] \in SO(4) \bigg{ \} } \] \noindent It is straightforward to check that the above matrices normalize $K=Id_4 \times (SO(2) \times Id_2)$ so that indeed $N= Sp(1) \times S(O(2) \times O(2))$. 
We have that $N/K = Sp(1) \times S(O(1) \times O(2)) = Sp(1) \times O(2)$ so that principal isotropy reduction yields the metric space isometry $$S^{11}/Sp(1) \times SO(4)= S^7/(N/K)= S^7/Sp(1) \times O(2) .$$ Again, as these orbit spaces are all three dimensional, \cite[Theorem 1.5]{AL2011} implies that these metric space isometries are smooth SRF isometries. \end{proof} \begin{rem}\normalfont Theorem~\ref{main} states that the spaces $S^{4n-1}/H_1$ and $S^{4n-1}/H_2$ are isospectral for odd integers $n \geq 3$. Analogous to Lemma~\ref{pid}, we note without proof that for such $n$ principal isotropy reduction yields the following smooth SRF isometries (again note that these isometries do not preserve the spectra): $$S^{4n-1}/U(n)=S^7/U(2)$$$$S^{4n-1}/Sp(m) \times SO(2n-2m)= S^7/Sp(1)\times O(2).$$ \end{rem} We now study the isotropy type and quotient codimension of these reduced spaces. We note that the embeddings of $U(2)$ and $Sp(1)\times O(2)$ into $Isom(S^7)$ are the same as the embeddings given in Theorem~\ref{main}. We therefore consider the vector decomposition $v=(v_1,v_2) \in S^7 \subset \mathbb{C}^2 \oplus \mathbb{C}^2$ when discussing the first quotient and $v=(v_1,v_2,v_3) \in S^7 \subset \mathbb{C}^2 \oplus \mathbb{C} \oplus \mathbb{C}$ when discussing the second quotient. The following two tables catalogue the isotropy and quotient codimension of these reduced spaces using this vector decomposition. Within each table, rows with the same letter have the same isotropy and orbit (hence the same slice representation), whereas the numerical subscripts distinguish between zero and non-zero values for the scalars $z \in \mathbb{C}$ and $\lambda \in \mathbb{R}$. 
\vspace{-.09in} \begin{center} \captionof{table}{$O_1 =S^7/U(2)$} \begin{tabular}{ l L{2.8cm} c L{3.5cm}} \toprule {Row} & {Isotropy} & {qcodim} & {Points} \\ \midrule $A$ & $Id$ & 0 & $v_1 \ne z \cdot \bar{v}_2$ \\ $B_1$ & $U(1)$ & 1 & $v_1= z \cdot \bar{v}_2$ \\ $B_2$ & $U(1)$ & $3$ & $v_1 \ne 0,~ v_2=0$ \\ $B_3$ & $U(1)$ & $3$ & $v_1=0, ~v_2 \ne 0$ \\ \bottomrule \end{tabular} $\text{Note: } v=(v_1,v_2) \in S^7 \subset \mathbb{C}^2 \oplus \mathbb{C}^2$ and $z \in \mathbb{C}^*$ \end{center} \vspace{-.15in} \begin{center} \captionof{table}{$O_2 =S^7/Sp(1)\times O(2)$} \begin{tabular}{l L{3.3cm} c L{3.5cm}} \toprule {Row} & {Isotropy} & {qcodim} & {Points} \\ \midrule $A$ & $Id \times Id$ & 0 & $v_1 \ne 0, ~ v_2 \ne \lambda \cdot v_3$ \\ $B_1$ & $Id \times O(1)$ & 1 & $v_1 \ne 0, ~ v_2 = \lambda \cdot v_3$ \\ $B_2$ & $Id \times O(1)$ & 2 & $v_1 \ne 0, ~ v_2 \ne 0,~ v_3= 0$ \\ $B_3$ & $Id \times O(1)$ & 2 & $v_1 \ne 0, ~ v_2 =0,~ v_3 \ne 0$ \\ $C$ & $Sp(1) \times Id$ & 1 & $v_1 = 0, ~ v_2 \ne \lambda \cdot v_3$ \\ $D$ & $Id \times O(2) $ & 3 & $v_1 \ne 0, ~ v_2 =v_3=0$ \\ $E_1$ & $Sp(1) \times O(1)$ & 2 & $v_1 = 0, ~ v_2 = \lambda \cdot v_3$ \\ $E_2$ & $Sp(1) \times O(1)$ & 3 & $v_1 = 0, ~ v_2 \ne 0,~ v_3= 0$ \\ $E_3$ & $Sp(1) \times O(1)$ & 3 & $v_1 = 0, ~ v_2 =0,~ v_3 \ne 0$ \\ \bottomrule \end{tabular} $\text{Note: } v=(v_1,v_2,v_3) \in S^7 \subset \mathbb{C}^2 \oplus \mathbb{C} \oplus \mathbb{C}$ and $\lambda \in \mathbb{R}^*$ \end{center} \vspace{.2in} We hope to provide some intuition for the structure of the quotient space $O_2$. As the space is 3-dimensional we choose the following three variables to parameterize the space $$r_1= |v_1|,~ r_2 = |v_2|,~ \alpha= \measuredangle{(v_2,v_3)}$$ where $~r_1,~ r_2\in [0,1]$ and $\alpha \in [0,\pi]$. Note that $r_3 = |v_3|$ is determined by the equation $r_1^2 + r_2^2 + r_3^2 = 1$. 
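To make the parameterization concrete, here is a small Python sketch (not part of the paper) that computes $(r_1, r_2, \alpha)$ from a point $v=(v_1,v_2,v_3)\in S^7$; our interpretation of $\measuredangle(v_2,v_3)$ as the planar angle between $v_2, v_3\in\mathbb{C}\cong\mathbb{R}^2$ is an assumption, and the function name is ours.

```python
import numpy as np

def quotient_params(v1, v2, v3):
    """Parameters (r1, r2, alpha) of a point v = (v1, v2, v3) in S^7,
    with v1 in C^2 and v2, v3 in C."""
    r1 = np.linalg.norm(v1)
    r2, r3 = abs(v2), abs(v3)
    if r2 == 0 or r3 == 0:
        alpha = None  # the alpha parameter collapses here
    else:
        # planar angle between v2 and v3 viewed as vectors in R^2
        c = (v2.real * v3.real + v2.imag * v3.imag) / (r2 * r3)
        alpha = np.arccos(np.clip(c, -1.0, 1.0))
    return r1, r2, alpha

# A point in the r1 = 0 slice with v2 and v3 orthogonal: alpha = pi/2
r1, r2, alpha = quotient_params(np.zeros(2, dtype=complex), 1 + 0j, 1j)
print(r1, r2, alpha)
```

The `None` branch reflects the collapse of the $\alpha$ parameter at the points $E_2$ and $E_3$ described above.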
In the figure below we let the $z$-direction correspond to the $r_1$ parameter, the $y$-direction correspond to the $r_2$ parameter, and the $x$-direction correspond to the $\alpha$ parameter. It is important to note that this is not a metric picture, but instead a schematic intended to illustrate how the pieces of the quotient space fit together. Indeed, we have arranged so that in any horizontal slice ($r_1$ constant), the $\alpha$ constant curves are half-ellipses which foliate the circular cross section (right side of the figure). The boundary circle of any such horizontal slice will therefore have $\alpha=0$ along the $x$-negative half of the circle and $\alpha=\pi$ along the $x$-positive half of the circle. The $x=0$ line in any such cross section corresponds to $\alpha=\pi/2$. This means that all values of $\alpha \in [0,\pi]$ are represented where these ellipses meet (for example at the points $E_2$ and $E_3$ in the $r_1=0$ slice), but this is precisely where $v_2=0$ or $v_3=0$ and the $\alpha$ parameter collapses. \begin{figure}[!ht] \includegraphics[scale=.43]{both} \caption{Schematic of the quotient space on the left and the $r_1=0$ horizontal cross section with $\alpha$ constant curves on the right.} \end{figure} Let us now describe how each piece of the parameterization of the quotient corresponds to the stratification by isotropy given in the table. The interior of the cone corresponds to the open dense 3-dimensional regular region with trivial isotropy from row $A$. The bottom face of the cone has $r_1=0$ and therefore corresponds to the 2-dimensional region from row $C$. The circular boundary of this bottom face is where $r_1=0$ and $\alpha = 0$ or $\pi$ and corresponds to the 1-dimensional region from row $E_1$. The two labeled points on this circle are where $v_3=0$ or $v_2=0$ and correspond to $0$-dimensional regions $E_2$ and $E_3$, respectively. 
The conical boundary face is where $\alpha = 0$ or $\pi$ and corresponds to the 2-dimensional region from row $B_1$. The lines from $E_2$ to $D$ and from $E_3$ to $D$ are where $v_3=0$ and $v_2=0$, respectively, and correspond to the 1-dimensional regions $B_2$ and $B_3$, respectively. Finally, the vertex is where $r_1=1$ and corresponds to the 0-dimensional region from row $D$. We now show that the vertex is the only non-orbifold point in the quotient space. \begin{lemma}\label{two} The space $O_2=S^7/Sp(1)\times O(2)$ admits a non-orbifold point. \end{lemma} \begin{proof} We first show that the slice representation of the action is non-polar at points $v =(v_1,0,0) \in S^7$ from row D of Table 2. By Theorem~\ref{22} we can then conclude that the image of this stratum is a non-orbifold point. The slice representation of the action at $v$ is polar if and only if the slice representation of the restriction of the action to the connected component of the identity is polar, cf.~\cite[Section 2.4]{GL2015}. We therefore consider the action of $Sp(1)\times SO(2)$ on $S^7$ which acts with isotropy $Id \times SO(2)$ at $v =(v_1,0,0) \in S^7$. We recall from \cite[Lemma 2]{GL2015} that polar actions of connected groups with trivial principal isotropy must have a codimension 1 stratum with non-trivial isotropy. The slice representation of $Sp(1)\times SO(2)$ at $v =(v_1,0,0) \in S^7$ has non-trivial isotropy only at the zero vector, hence by \cite[Lemma 2]{GL2015} cannot be polar. \end{proof} \begin{lemma}[\cite{GL2015}, Theorem 1]\label{one} The space $O_1= S^{11}/U(3)$ is isometric to the 3-hemisphere of constant sectional curvature 4. \end{lemma} The proof of Lemma~\ref{one} is provided in \cite[Section 3.4]{GL2015} where $S^{11}/U(3)$ is first reduced to $S^7/U(2)$ via principal isotropy reduction as above and then shown to be isometric to the 3-hemisphere of constant curvature 4.
We note that O'Neill's formula can also be used to show that the space $S^7/U(2)$ has constant sectional curvature 4. \begin{proof}[Proof of Theorem~\ref{main2}] Lemmas~\ref{two} and \ref{one} show that the orbit space $O_1$ is isospectral to an orbifold, whereas $O_2$ admits a non-orbifold point, demonstrating the inaudibility of non-orbifold singularities. We can then apply Theorem~\ref{22} to the space $O_2$ to conclude that it has unbounded sectional curvature. We conclude that constant sectional curvature is not determined by the $G$-invariant spectrum. \end{proof} \bibliographystyle{alpha}
TITLE: Compact Riemann surfaces as holomorphically convex subsets of affine algebraic varieties QUESTION [1 upvotes]: Is there a simple argument (or a counterexample) to show that a holomorphically convex subset of an affine algebraic variety is a subvariety which is a compact Riemann surface? REPLY [7 votes]: Perhaps I should turn my comment into an "answer". Affine algebraic varieties over $\mathbb{C}$ are Stein spaces. That is, they are already holomorphically convex, and points can be separated by global holomorphic functions. The latter property implies that affine varieties can never contain compact Riemann surfaces.
TITLE: Is there a way to map $GF(2^m)$ to $GF(p)$ and back again? QUESTION [0 upvotes]: Is there a way to map a characteristic-2 field $GF(2^m)$ to a larger prime field $GF(p)$, perform arithmetic in that field, and then map it back to the original field as though I had stayed in the original field? Background: FPGA chips have a lot of fast multipliers built into the silicon, but when doing multiplications in, say, $GF(2^{10})$, these are useless to me and go unused. I could use these for multiplications in a non-extension prime field p, using one multiplier for the multiplication and another for reduction mod p. I'd like to map the original values from one field to another, do matrix multiplications, and then map that answer back to the extension field. REPLY [1 votes]: If you map some nonzero element $x \in GF(2^m)$ to $y \in GF(p)$, you'll have $y + y \ne 0$ but $x + x = 0$, so the result of arithmetic on the images does not map back to the result of arithmetic in the original field. Similarly for multiplication, $x^{2^m} = x$ but $y^{2^m} \ne y$.
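A quick numerical illustration of the answer (a Python sketch; the field polynomial below is just one common choice for $GF(2^3)$, and the helper name is ours): addition in $GF(2^m)$ is XOR, so every element is its own negative, which no prime field $GF(p)$ with $p > 2$ can reproduce.

```python
def gf2m_mul(a, b, poly, m):
    """Multiply a, b in GF(2^m), elements encoded as bit-polynomials."""
    # carry-less (GF(2)[x]) schoolbook multiply
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    # reduce modulo the degree-m field polynomial
    for i in range(2 * m - 2, m - 1, -1):
        if r & (1 << i):
            r ^= poly << (i - m)
    return r

# GF(2^3) with field polynomial x^3 + x + 1, encoded as 0b1011
m, poly = 3, 0b1011
print(gf2m_mul(0b110, 0b101, poly, m))  # (x^2+x)(x^2+1) mod (x^3+x+1) = x+1 = 3

# Addition already disagrees with any odd prime field:
x = 0b110
print(x ^ x)        # 0: in GF(2^m) every element satisfies x + x = 0
p, y = 11, 6
print((y + y) % p)  # 1: but 6 + 6 = 12 = 1 (mod 11), not 0
```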
TITLE: Dimension of Ker(T) when $T(A) = \operatorname{Tr} {(A)}$. QUESTION [3 upvotes]: For a positive integer $n > 1$, let $ T: \mathbb{R}^{n\times n} \to \mathbb{R}$ be the linear transformation defined by $T(A) = \operatorname{Tr} {(A)}$, where $A$ is an $n \times n$ matrix with real entries. Determine the dimension of $\ker{(T)}$. I can see just by looking at this transformation that it's not one-to-one. But I'm sort of stuck. I know the zero matrix and matrices with diagonal entries of 0 are examples of tr(A) = 0. But I still can't really see what the dimension is. Any help? REPLY [3 votes]: Eric's way is probably the easiest and most canonical. Here is another one, along the same lines, but more explicit: one can prove that $$ \ker T=\{x-\frac{T(x)}n\,I_n:\ x\in \mathbb R^{n\times n}\}. $$ Indeed, it is easy to see that the set on the right is in the kernel of $T$. And, conversely, if $T(x)=0$, then $x$ is of the required form. Also, any $x$ can be written as $\frac{T(x)}n\,I_n+(x-\frac{T(x)}n\,I_n)$, which shows that $\mathbb R^{n\times n}=\mbox{Im}\,T\,I_n+\ker T$. And this is a direct sum, because if $y=T(x)\,I_n$ and $T(y)=0$, then $y=0$. Thus we have that the dimensions of $\ker T$ and $\mbox{Im}\,T$ add to $n^2$. So $$ \dim\ker T=n^2-1. $$
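The rank–nullity argument behind the answer can be checked numerically; a short sketch using the identification $\operatorname{Tr}(A) = \langle I_n, A\rangle$, so that $T$ is represented by the row vector $\operatorname{vec}(I_n)^\top$:

```python
import numpy as np

n = 4
# The trace, viewed as a linear map R^{n x n} -> R, is represented by the
# row vector that picks out the diagonal entries of a flattened matrix.
T = np.eye(n).reshape(1, n * n)     # vec(I_n)^T
rank = np.linalg.matrix_rank(T)     # T is nonzero, so rank 1
kernel_dim = n * n - rank           # rank-nullity theorem
print(kernel_dim)                   # 15 = n^2 - 1
```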
TITLE: One-sided nilpotent ideal not in the Jacobson radical? QUESTION [5 upvotes]: Problem XVII.5a of Lang's Algebra, revised 3rd edition, is: Suppose $N$ is a two-sided nilpotent ideal of a ring $R$. Show that $N$ is contained in the Jacobson radical $J: = \{ \cap\, I: I \text{ a maximal left ideal of } R \}$. I put my solution below the fold. My question is: can't we generalize a bit more? It seems that all we need is that $N$ is a nil ideal; further, I don't see why $N$ can't be merely a one-sided ideal. I assume there's some error in my thinking here... Solution: Take $y \in N$, and show that $1-xy$ has a left inverse for all $x\in R$ (this is an equivalent characterization of the Jacobson radical, see here). The way to construct the left inverse is to note that $xy \in N$, so $\exists k$ s.t. $(xy)^k= 0$, so $(1 + xy + \dotsb + (xy)^{k-1})(1-xy)=1$. REPLY [3 votes]: That is totally correct. If you require further validation, then check out Lam's First course in noncommutative rings pg 53, lemma 4.11 which has exactly the generalization you describe, with the same proof.
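A concrete instance of the geometric-series trick from the solution (a small numerical sketch, with $xy$ played by a strictly upper-triangular, hence nilpotent, matrix $N$):

```python
import numpy as np

# N^3 = 0, so k = 3 in the solution's notation
N = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])
I = np.eye(3, dtype=int)

# 1 + xy + ... + (xy)^{k-1} is a left inverse of 1 - xy
inv = I + N + N @ N
print(inv @ (I - N))  # the identity matrix, since (1+N+N^2)(1-N) = 1 - N^3
```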
\section{Subset of Normed Vector Space is Everywhere Dense iff Closure is Normed Vector Space/Necessary Condition} Tags: Normed Vector Spaces, Denseness, Set Closures \begin{theorem} Let $\struct {X, \norm {\, \cdot \,}}$ be a [[Definition:Normed Vector Space|normed vector space]]. Let $D \subseteq X$ be a [[Definition:Subset|subset]] of $X$. Let $D^-$ be the [[Definition:Closure/Normed Vector Space|closure]] of $D$. Then $D$ is [[Definition:Everywhere Dense/Normed Vector Space|dense]] iff $D^- = X$. \end{theorem} \begin{proof} Let $x \in X \setminus D$. Suppose $D$ is [[Definition:Everywhere Dense/Normed Vector Space|dense]] in $X$. Then: :$\forall n \in \N : \exists d_n \in D : d_n \in \map {B_{\frac 1 n}} x$ where $\displaystyle \map {B_{\frac 1 n}} x$ is an [[Definition:Open Ball in Normed Vector Space|open ball]]. This yields a [[Definition:Sequence|sequence]] $\sequence {d_n}_{n \mathop \in \N}$ in $D$ such that: :$\forall n \in \N : \norm {x - d_n} < \frac 1 n$ Hence, $x$ is a [[Definition:Limit Point (Normed Vector Space)|limit point]] of $D$. In other words, $x \in D^-$. We have just shown that: :$x \in X \setminus D \implies x \in D^-$ Hence: :$X \setminus D \subseteq D^-$. By definition of [[Definition:Closure/Normed Vector Space|closure]]: :$D \subseteq D^-$ Therefore: {{begin-eqn}} {{eqn | l = X | r = D \cup \paren {X \setminus D} }} {{eqn | o = \subseteq | r = D^- }} {{eqn | o = \subseteq | r = X }} {{end-eqn}} Thus: :$X = D^-$. \end{proof}
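As a concrete instance of the construction in the proof (a sketch outside the page itself, taking $X = \R$ with the usual norm and $D = \Q$): rounding $x$ to denominator $n$ produces a rational $d_n$ inside the open ball $\map {B_{\frac 1 n}} x$.

```python
import math
from fractions import Fraction

# Pick d_n in D = Q with norm(x - d_n) < 1/n, as in the proof.
x = math.sqrt(2)
for n in (1, 10, 100, 1000):
    d_n = Fraction(round(x * n), n)           # a rational within 1/(2n) of x
    assert abs(x - float(d_n)) < 1 / n        # so d_n lies in B_{1/n}(x)
print(float(Fraction(round(x * 1000), 1000)))
```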
\begin{document} \newtheorem{property}{Property}[section] \newtheorem{proposition}{Proposition}[section] \newtheorem{append}{Appendix}[section] \newtheorem{definition}{Definition}[section] \newtheorem{lemma}{Lemma}[section] \newtheorem{theorem}{Theorem}[section] \newtheorem{example}{Example}[section] \newtheorem{corollary}{Corollary}[section] \newtheorem{condition}{Condition} \newtheorem{remark}{Remark}[section] \newtheorem{assumption}{Assumption}[section] \newtheorem{algorithm}{Algorithm}[section] \newtheorem{problem}{Problem}[section] \medskip \begin{center} {\large \bf Interval Optimization Problems on Hadamard manifolds} \vskip 0.6cm Le Tram Nguyen \footnote{E-mail: letram07st@gmail.com.} \\ Faculty of Mathematics,\ \\ The University of Da Nang - University of Science and Education \\ and \\ Department of Mathematics \\ National Taiwan Normal University \\ Taipei 11677, Taiwan \vskip 0.6cm Yu-Lin Chang \footnote{E-mail: ylchang@math.ntnu.edu.tw.} \\ Department of Mathematics \\ National Taiwan Normal University \\ Taipei 11677, Taiwan \vskip 0.6cm Chu-Chin Hu \footnote{E-mail: cchu@ntnu.edu.tw.} \\ Department of Mathematics \\ National Taiwan Normal University \\ Taipei 11677, Taiwan \vskip 0.6cm Jein-Shan Chen \footnote{Corresponding author. E-mail: jschen@math.ntnu.edu.tw. The research is supported by Ministry of Science and Technology, Taiwan.} \\ Department of Mathematics \\ National Taiwan Normal University \\ Taipei 11677, Taiwan \vskip 0.6cm May 2, 2022 \end{center} \medskip \noindent {\bf Abstract.}\ In this article, we introduce the interval optimization problems (IOPs) on Hadamard manifolds as well as study the relationship between them and the interval variational inequalities. To achieve the theoretical results, we build up some new concepts about $gH$-directional derivative and $gH$-Gâteaux differentiability of interval valued functions and their properties on the Hadamard manifolds.
The obtained results pave the way to further study on Riemannian interval optimization problems (RIOPs). \medskip \noindent {\bf Keywords.}\ Hadamard manifolds, interval variational inequalities, interval valued function, set valued function on manifolds. \medskip \section{Motivations} This paper studies a new class of problems, called the interval optimization problems on Hadamard manifolds. First, we elaborate on the motivation for focusing on this problem. Variational inequalities have been investigated since the early 1960s \cite{KS00}, and plenty of results have already been established; see \cite{FP97, C74} and references therein. From \cite{ZZML15}, it is known that the interval optimization problems (IOPs) and interval variational inequalities (IVIs) possess a close relationship under some assumptions. In addition, interval programming \cite{ALMV21, BP13, GCMD20, C22, S08, W07, W08, ZZML15} is one of the approaches to tackle uncertain optimization problems, in which an interval is used to characterize the uncertainty of a variable. Because the variation bounds of the uncertain variables can be obtained from only a small amount of uncertainty information, interval programming can easily handle some optimization problems. \medskip Nowadays, many important concepts and methods of optimization problems have been extended from Euclidean space to Riemannian manifolds, particularly to Hadamard manifolds \cite{AMS08, CH15, BFO10, BM12, C22, H13, B20}. In general, a manifold has no linear structure; nonetheless, it is locally identified with Euclidean space. In this setting, the Euclidean metric is replaced by a Riemannian metric, which is a smoothly varying inner product defined on the tangent space of the manifold at each point, and line segments are replaced by minimal geodesics.
This means that the generalization of optimization problems from Euclidean spaces to Riemannian manifolds is very important; for example, some nonconvex problems on Euclidean space can be viewed as convex problems on Riemannian manifolds. To the best of our knowledge, there is very limited study on Riemannian interval optimization problems (RIOPs) in the literature. In \cite{C22}, the authors studied the KKT conditions for optimization problems with interval valued objective functions on Hadamard manifolds, which is just a routine extension. \medskip In this paper, we further investigate the interval optimization problems on Hadamard manifolds, and characterize the relationship between them and the interval variational inequalities. To achieve the theoretical results, we build up some new concepts about $gH$-directional derivative and $gH$-Gâteaux differentiability of interval valued functions and their properties on the Hadamard manifolds. The analysis differs from the one used in traditional variational inequalities and nonlinear programming problems. The obtained results pave the way to further study on Riemannian interval optimization problems (RIOPs). \medskip The paper is organized as follows. In Section 2, we formulate the problem set, introduce the notations and recall the notions of Riemannian manifolds, tangent spaces, geodesic convexity and the exponential mapping. We also recall some background materials regarding the set of closed, bounded intervals, the $gH$-difference and some properties of interval valued functions as well as interval valued functions on Hadamard manifolds. In Section 3, we study the $gH$-continuity, the $gH$-directional derivative and $gH$-Gâteaux differentiability of interval valued functions on Hadamard manifolds. Then, we characterize the relationship between $gH$-directional differentiability and geodesic convexity of Riemannian interval valued functions.
In Section 4, we introduce the RIOPs and the necessary and sufficient conditions for efficient points of the RIOPs. Besides, we define the Riemannian interval variational inequality problems (RIVIPs) and establish the relationship between them and RIOPs. Finally, we draw a conclusion in Section 5. \medskip \section{Preliminaries} In this section, we review some background materials about Riemannian manifolds, with emphasis on the special case of Hadamard manifolds. In particular, we study intervals, interval valued functions, and interval valued functions on Hadamard manifolds. We first recall some definitions and properties about Riemannian manifolds, which will be used in subsequent analysis. These materials can be found in textbooks on Riemannian geometry, such as \cite{P16, D92, J05}. \medskip Let $\M$ be a Riemannian manifold. We denote by $T_x\M$ the tangent space of $\M$ at $x\in \M$, and the tangent bundle of $\M$ is denoted by $T\M=\cup_{x\in\M}T_x\M$. For every $x, y\in\M$, the Riemannian distance $d(x,y)$ on $\M$ is defined by the minimal length over the set of all piecewise smooth curves joining $x$ to $y$. Let $\nabla$ be the Levi-Civita connection on the Riemannian manifold $\M$ and let $\gamma: I\subset \mathbb{R}\longrightarrow\M$ be a smooth curve on $\M$. A vector field $X$ is called parallel along $\gamma$ if $ \nabla_{\gamma'}X=0$, where $\gamma'=\dfrac{\partial \gamma(t)}{\partial t}$. We say that $\gamma$ is a geodesic if $\gamma'$ is parallel along itself; in this case $\lVert \gamma'\rVert$ is a constant. When $\lVert \gamma'\rVert=1$, $\gamma$ is said to be normalized. A geodesic joining $x$ to $y$ in $\M$ is called minimal if its length equals $d(x,y)$. \medskip For any $x\in\M$, let $V$ be a neighborhood of $0_x\in T_x\M$. The exponential mapping $\exp_{x}:V \longrightarrow \M$ is defined by $\exp_x(v)=\gamma(1)$, where $\gamma$ is the geodesic such that $\gamma(0)=x$ and $\gamma'(0)=v$.
It is known that the derivative of $\exp_x$ at $0_x\in T_x\M$ is the identity map; furthermore, by the Inverse Function Theorem, it is a local diffeomorphism. The inverse map of $\exp_x$ is denoted by $\exp_x^{-1}$. A Riemannian manifold is complete if for any $x\in\M$, the exponential map $\exp_x$ is defined on $T_x\M$. A simply connected, complete Riemannian manifold of nonpositive sectional curvature is called a Hadamard manifold. If $\M$ is a Hadamard manifold, for all $x, y\in\M$, by the Hopf-Rinow Theorem and Cartan-Hadamard Theorem (see \cite{J05}), $\exp_x$ is a diffeomorphism and there exists a unique normalized geodesic joining $x$ to $y$, which is indeed a minimal geodesic. \medskip \begin{example} \textbf{Hyperbolic spaces.} We equip $\mathbb{R}^{n+1}$ with the Minkowski product defined by \[ \langle x, y\rangle_{1}=-x_0y_0+\sum\limits_{i=1}^{n}x_iy_i, \] where $x=(x_0, x_1, \cdots, x_n)$, $y=(y_0, y_1, \cdots, y_n)$; and define \[ \mathbb{H}^n:=\{x\in\mathbb{R}^{n+1} \, | \, \langle x, x \rangle_{1}=-1, x_0>0\}. \] Then, $\langle \cdot , \cdot \rangle_{1}$ induces a Riemannian metric $g$ on the tangent spaces $T_p\mathbb{H}^{n}\subset \mathbb{R}^{n+1}$, for all $p \in \mathbb{H}^{n}$. The sectional curvature of $(\mathbb{H}^{n}, g)$ is $-1$ at every point. \end{example} \medskip \begin{example} \textbf{Manifold of symmetric positive definite matrices (SPD).} \label{SPD-example} The space of $n\times n$ symmetric positive definite matrices with real entries, denoted by $S^{n}_{++}$, is a Hadamard manifold if it is equipped with the following Riemannian metric: \[ g_A(X, Y)=\Tr(A^{-1}XA^{-1}Y), \quad \forall A\in S^{n}_{++}, \quad X, Y\in T_{A}S^{n}_{++}. \] \end{example} \medskip For more examples, please refer to \cite{B14}. From now on, throughout the whole paper, when we mention $\M$, it means that $\M$ is a Hadamard manifold.
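To make the SPD example concrete, here is a small numerical sketch (Python, not part of the paper) of the metric $g_A$ and of the minimal geodesic $\gamma(t)=P^{1/2}(P^{-1/2}QP^{-1/2})^{t}P^{1/2}$ used in the example below; the helper names and the $2\times 2$ test matrices are ours.

```python
import numpy as np

def fmp(A, t):
    # fractional power of a symmetric positive definite matrix
    w, V = np.linalg.eigh(A)
    return V @ np.diag(w ** t) @ V.T

def spd_metric(A, X, Y):
    # g_A(X, Y) = Tr(A^{-1} X A^{-1} Y)
    Ainv = np.linalg.inv(A)
    return np.trace(Ainv @ X @ Ainv @ Y)

def spd_geodesic(P, Q, t):
    # gamma(t) = P^{1/2} (P^{-1/2} Q P^{-1/2})^t P^{1/2}
    Ph, Pmh = fmp(P, 0.5), fmp(P, -0.5)
    return Ph @ fmp(Pmh @ Q @ Pmh, t) @ Ph

P = np.array([[2.0, 0.0], [0.0, 0.5]])    # det P = 1
Q = np.array([[1.25, 0.5], [0.5, 1.0]])   # det Q = 1
G = spd_geodesic(P, Q, 0.5)
print(round(np.linalg.det(G), 8))  # determinant stays constant along the geodesic
```

Running the last line illustrates the determinant computation carried out in the example below.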
\begin{definition}[Totally convex set \cite{C94}] A subset $\D \subseteq \M$ is said to be totally convex if $\D$ contains every geodesic $\gamma_{xy}$ of $\M$ whose end points $x, y$ are in $\D$. \end{definition} \begin{definition}[Geodesically convex set \cite{C94}] A subset $\D \subseteq \M$ is said to be geodesically convex if $\D$ contains the minimal geodesic $\gamma_{xy}$ of $\M$ whose end points $x, y$ are in $\D$. \end{definition} It is easy to see that both total convexity and geodesic convexity are generalizations of convexity in Euclidean space. Total convexity is stronger than geodesic convexity, but when the geodesic between any two points is unique, they coincide. \medskip \begin{example} Consider $S^{n}_{++}$ as in Example \ref{SPD-example}. Given $a>0$, let \[ D_a=\{X\in S^{n}_{++} \, | \, \det{X}=a \}, \] then $D_a$ is a nonconvex subset of $S^{n}_{++}$ in the usual Euclidean sense. In fact, from \cite{V18}, the minimal geodesic joining $P, Q\in S^{n}_{++}$ is described by \[ \gamma (t)=P^{1/2}(P^{-1/2}QP^{-1/2})^{t}P^{1/2}, \quad \forall t\in[0, 1]. \] If $P, Q\in D_a$, then for all $t\in [0, 1]$ we have \begin{eqnarray*} \det(\gamma(t)) &=& \det(P^{1/2}(P^{-1/2}QP^{-1/2})^{t}P^{1/2}) \\ &=& \det(P)^{1/2}(\det(P)^{-1/2}\det(Q)\det(P)^{-1/2})^{t}\det(P)^{1/2} \\ &=& a^{1-t}a^t \\ &=& a. \end{eqnarray*} This means that $\gamma(t)\in D_a$, for all $t\in[0, 1]$, that is, $D_a$ is a geodesically convex subset of $S^{n}_{++}$. \end{example} \medskip Following the notations used in \cite{DK94}, let $\mathcal{I}(\mathbb{R})$ be the set of all closed, bounded intervals in $\mathbb{R}$, i.e., \[ \mathcal{I}(\mathbb{R})=\{[\underline{a}, \overline{a}] \, | \, \underline{a}, \overline{a}\in\mathbb{R}, \, \underline{a} \leq \overline{a}\}. \] The Hausdorff metric $d_H$ on $\mathcal{I}(\mathbb{R})$ is defined by \[ d_H(A,B)=\max\{|\underline{a}-\underline{b}|, |\overline{a}-\overline{b}| \}, \quad \forall A=[\underline{a}, \overline{a}], \ B=[\underline{b}, \overline{b}]\in\I(\mathbb{R}).
\] Then, $(\mathcal{I}(\mathbb{R}), d_H)$ is a complete metric space, see \cite{LBD05}. The Minkowski sum and scalar multiplication are given, respectively, by \begin{eqnarray*} A+B &=& [\underline{a}+\underline{b}, \overline{a}+\overline{b}], \\ \lambda A &=& \begin{cases} [\lambda\underline{a}, \lambda\overline{a}]& \text{ if } \lambda \geq 0, \\ [\lambda\overline{a}, \lambda\underline{a}]& \text{ if } \lambda< 0, \end{cases} \end{eqnarray*} where $A=[\underline{a}, \overline{a}]$, $B=[\underline{b}, \overline{b}]$. Note that $A-A=A+(-1)A \neq \textbf{0}$ in general. A crucial step toward a useful working definition of derivative for interval-valued functions is a suitable notion of difference between two intervals. \medskip \begin{definition}[$gH$-difference of intervals \cite{S08}] Let $A, B\in\mathcal{I}(\mathbb{R})$. The $gH$-difference between $A$ and $B$ is defined as the interval $C$ such that \[ C=A-_{gH}B \quad \Longleftrightarrow \quad \begin{cases} A =B+C \\ \text{or} \\ B =A-C. \end{cases} \] \end{definition} \medskip \begin{proposition}{\cite{S08}} For any two intervals $A=[\underline{a}, \overline{a}]$, $B=[\underline{b}, \overline{b}]$, the $gH$-difference $C=A-_{gH}B$ always exists and \[ C =\left[ \min\{\underline{a}-\underline{b},\overline{a}-\overline{b} \}, \ \max\{\underline{a}-\underline{b},\overline{a}-\overline{b} \} \right]. \] \end{proposition} \medskip \begin{proposition}{\cite{LBD05}} \label{property-dH} Suppose that $A, B, C, D\in\I(\mathbb{R})$. Then, the following properties hold. \begin{description} \item[(a)] $d_H(A, B)=0$ if and only if $A=B$. \item[(b)] $d_H(\lambda A, \lambda B)=|\lambda|d_H(A, B)$, for all $\lambda\in\mathbb{R}$. \item[(c)] $d_H(A+C, B+C)=d_H(A, B)$. \item[(d)] $d_H(A+B, C+D) \leq d_H(A, C)+d_H(B, D)$. \item[(e)] $d_H(A, B)= d_H(A-_{gH}B, \textbf{0})$. \item[(f)] $d_H(A-_{gH}B, A-_{gH}C)=d_H(B-_{gH}A, C-_{gH}A)=d_H(B, C)$.
\end{description} \end{proposition} Notice that, defining $||A||:=d_H(A, \textbf{0})$ for all $A\in \I(\mathbb{R})$, the map $||\cdot||$ is a norm on $\I(\mathbb{R})$ and $d_H(A, B)=||A-_{gH}B||$. There is no natural total ordering on $\mathcal{I}(\mathbb{R})$, so we equip it with the following partial order. \medskip \begin{definition}\label{ordering} \cite{W08} Let $A=[\underline{a}, \overline{a}]$ and $B=[\underline{b}, \overline{b}]$ be two elements of $\mathcal{I}(\mathbb{R})$. We write $A\preceq B$ if $\underline{a}\le \underline{b}$ and $\overline{a}\le \overline{b}$. We write $A\prec B$ if $A\preceq B$ and $A\ne B$. Equivalently, $A\prec B$ if and only if one of the following cases holds: \begin{itemize} \item $\underline{a}<\underline{b}$ and $\overline{a}\le \overline{b}$. \item $\underline{a}\le\underline{b}$ and $\overline{a}< \overline{b}$. \item $\underline{a}<\underline{b}$ and $\overline{a}< \overline{b}$. \end{itemize} We write $A\nprec B$ if none of the above three cases holds. If neither $A\prec B$ nor $B\prec A$, we say that none of $A$ and $B$ dominates the other. \end{definition} \medskip \begin{lemma} \label{property-sets} For any elements $A, B, C$ and $D$ of $\mathcal{I}(\mathbb{R})$, the following hold. \begin{description} \item[(a)] $A\preceq B \ \Longleftrightarrow \ A -_{gH}B\preceq \textbf{0}$. \item[(b)] $A\nprec B \ \Longleftrightarrow \ A-_{gH}B\nprec \textbf{0}$. \item[(c)] $A\preceq B \ \Longrightarrow \ A-_{gH}C\preceq B-_{gH}C$. \item[(d)] $A\preceq B-_{gH}C \ \Longrightarrow \ B\nprec A+C$. \item[(e)] $\textbf{0}\preceq (A-_{gH}B)+(C-_{gH}D)\Longrightarrow \textbf{0}\preceq (A+C)-_{gH}(B+D)$. \end{description} \end{lemma} \beginproof (a) The proof of part (a) can be found in \cite{GCMD20}. \medskip \noindent (b) Let $A=[\underline{a}, \overline{a}]$ and $B=[\underline{b}, \overline{b}]$. Since $A\nprec B$, we have $A=B$, or $\underline{a}>\underline{b}$, or $\overline{a}>\overline{b}$.
If $\underline{a}>\underline{b}$ or $\overline{a}>\overline{b}$, then $\max\{\underline{a}-\underline{b}, \overline{a}-\overline{b}\}>0$; if $A=B$, then $A-_{gH}B=\textbf{0}$. In either case, $A-_{gH}B\nprec \textbf{0}$. For the other direction, if $A-_{gH}B\nprec \textbf{0}$, then $A=B$ or $\max\{\underline{a}-\underline{b}, \overline{a}-\overline{b}\}>0$. This says that $A=B$, or $\underline{a}>\underline{b}$, or $\overline{a}>\overline{b}$, which implies $A\nprec B$. \medskip \noindent (c) Let $A=[\underline{a}, \overline{a}]$, $B=[\underline{b}, \overline{b}]$, and $C=[\underline{c}, \overline{c}]$. It is clear that \begin{eqnarray*} A-_{gH}C &=& \left[ \min\{\underline{a}-\underline{c}, \overline{a}-\overline{c}\}, \ \max\{\underline{a}-\underline{c}, \overline{a}-\overline{c}\} \right], \\ B-_{gH}C &=& \left[ \min\{\underline{b}-\underline{c}, \overline{b}-\overline{c}\}, \ \max\{\underline{b}-\underline{c}, \overline{b}-\overline{c}\} \right]. \end{eqnarray*} If $A\preceq B$, then $\underline{a} \leq \underline{b}$ and $\overline{a} \leq \overline{b}$, which yield \[ \begin{cases} \underline{a}-\underline{c} & \leq \underline{b}-\underline{c} \\ \overline{a}-\overline{c} & \leq \overline{b}-\overline{c} \end{cases} \quad \Longrightarrow \quad A-_{gH}C \preceq B-_{gH}C. \] \noindent (d) Assume that $B\preceq A+C$. By part (c), we know that \[ B-_{gH}C \preceq (A+C)-_{gH}C=A, \] which together with the assumption $A \preceq B-_{gH}C$ indicates $A= B-_{gH}C$. From the definition of the $gH$-difference, we have $B=A+C$ or $C=B-A$. If $C=B-A$, the assumption $B\preceq A+C$ gives \[ B\preceq A+B-A=B+(A-A) \ \Longrightarrow \ A-A=\textbf{0}, \] that is, $A$ is a singleton in $\mathbb{R}$, in which case $B=A+C$ holds as well. Therefore, whenever $B\preceq A+C$, we must have $B=A+C$, so $B\prec A+C$ is impossible. In other words, there holds \[ A \preceq B-_{gH}C \ \Longrightarrow \ B \nprec A+C, \] which is the desired result.
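Aside from the formal argument, the interval operations involved in parts (a)-(d) are easy to confirm on concrete intervals. The following Python sketch is only an illustration; the helper names \texttt{gh\_diff}, \texttt{leq} and \texttt{add} are ours, not from the text.

```python
# Intervals [lo, hi] are represented as tuples (lo, hi) with lo <= hi.
# Helper names (gh_diff, leq, add) are illustrative only.

def gh_diff(A, B):
    # gH-difference: [min(a_lo - b_lo, a_hi - b_hi), max(a_lo - b_lo, a_hi - b_hi)].
    d1, d2 = A[0] - B[0], A[1] - B[1]
    return (min(d1, d2), max(d1, d2))

def leq(A, B):
    # Partial order A ⪯ B: endpointwise comparison.
    return A[0] <= B[0] and A[1] <= B[1]

def add(A, B):
    # Minkowski sum.
    return (A[0] + B[0], A[1] + B[1])

ZERO = (0.0, 0.0)
A, B, C = (1.0, 2.0), (1.5, 4.0), (-1.0, 3.0)

# Part (a): A ⪯ B  iff  A -_gH B ⪯ 0.
assert leq(A, B) == leq(gh_diff(A, B), ZERO)
# Part (c): A ⪯ B  implies  A -_gH C ⪯ B -_gH C.
assert (not leq(A, B)) or leq(gh_diff(A, C), gh_diff(B, C))
```

Running such checks on randomly generated intervals gives a quick numerical confirmation of the lemma, though of course it does not replace the proof.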
\noindent (e) Let $A=[\underline{a}, \overline{a}]$, $B=[\underline{b}, \overline{b}]$, $C=[\underline{c}, \overline{c}]$ and $D=[\underline{d}, \overline{d}]$.\\ Case 1: If \[ \begin{cases} A-_{gH}B&=[\underline{a}-\underline{b}, \overline{a}-\overline{b}]\\ C-_{gH}D&=[\underline{c}-\underline{d}, \overline{c}-\overline{d}] \end{cases} \quad \text{or} \quad \begin{cases} A-_{gH}B&=[\overline{a}-\overline{b}, \underline{a}-\underline{b}]\\ C-_{gH}D&=[\overline{c}-\overline{d}, \underline{c}-\underline{d}] \end{cases}, \] then the assumption $\textbf{0}\preceq (A-_{gH}B)+(C-_{gH}D)$ gives \[ \begin{cases} \underline{a}-\underline{b}+\underline{c}-\underline{d}&\ge 0\\ \overline{a}-\overline{b}+\overline{c}-\overline{d}&\ge 0 \end{cases}\Longrightarrow \begin{cases} (\underline{a}+ \underline{c})-(\underline{b}+\underline{d})&\ge 0\\ (\overline{a}+ \overline{c})-(\overline{b}+\overline{d})&\ge 0. \end{cases} \] Case 2: If \[ \begin{cases} A-_{gH}B&=[\overline{a}-\overline{b}, \underline{a}-\underline{b}]\\ C-_{gH}D&=[\underline{c}-\underline{d}, \overline{c}-\overline{d}] \end{cases}\Longrightarrow \begin{cases} \overline{a}-\overline{b}+\underline{c}-\underline{d}&\ge 0\\ \underline{a}-\underline{b}+\overline{c}-\overline{d}&\ge 0, \end{cases} \] together with \[ \begin{cases} \underline{a}-\underline{b}&\ge \overline{a}-\overline{b}\\ \overline{c}-\overline{d}&\ge \underline{c}-\underline{d} \end{cases} \] we have \[ \begin{cases} (\underline{a}+ \underline{c})-(\underline{b}+\underline{d})&\ge 0\\ (\overline{a}+ \overline{c})-(\overline{b}+\overline{d})&\ge 0.
\end{cases} \] Case 3: If \[ \begin{cases} A-_{gH}B&=[\underline{a}-\underline{b}, \overline{a}-\overline{b}]\\ C-_{gH}D&=[\overline{c}-\overline{d}, \underline{c}-\underline{d}] \end{cases}\Longrightarrow \begin{cases} \overline{a}-\overline{b}+\underline{c}-\underline{d}&\ge 0\\ \underline{a}-\underline{b}+\overline{c}-\overline{d}&\ge 0, \end{cases} \] together with \[ \begin{cases} \overline{a}-\overline{b}&\ge \underline{a}-\underline{b}\\ \underline{c}-\underline{d}&\ge \overline{c}-\overline{d} \end{cases} \] we have \[ \begin{cases} (\underline{a}+ \underline{c})-(\underline{b}+\underline{d})&\ge 0\\ (\overline{a}+ \overline{c})-(\overline{b}+\overline{d})&\ge 0. \end{cases} \] In all cases, both endpoints of $(A+C)-_{gH}(B+D)$ are nonnegative, and hence $\textbf{0}\preceq (A+C)-_{gH}(B+D)$. \endproof \begin{remark} The converses of Lemma \ref{property-sets}(c)-(d) are not true. To see this, take $A=[1, 2]$, $B=[0, 5]$ and $C=[-1,3]$; then \[ A-_{gH}C=[-1, 2],\quad B-_{gH}C=[1, 2]. \] This means that $A-_{gH}C\preceq B-_{gH}C$, but we do not have $A\preceq B$. Taking $A=\textbf{0}$, $B=[0,3]$ and $C=[1, 2]$, we have \[ A+C=[1, 2], \quad B-_{gH}C=[-1, 1], \] which says $B\nprec A+C$, but we do not have $A\preceq B-_{gH}C$. \end{remark} Let $\D\subseteq \M$ be a nonempty set. A mapping $f: \D\longrightarrow \mathcal{I}(\mathbb{R})$ is called a Riemannian interval valued function (RIVF). We write $f(x)=[\underline{f}(x), \overline{f}(x)]$, where $\underline{f}, \overline{f}$ are real-valued functions satisfying $\underline{f}(x)\le \overline{f}(x)$ for all $x\in\D$. Since $\mathbb{R}^n$ is a Hadamard manifold, an interval valued function (IVF for short) $f:U\subseteq\mathbb{R}^n\longrightarrow \mathcal{I}(\mathbb{R})$ is also a RIVF. \medskip \begin{definition}\cite{W07} Let $U\subseteq\mathbb{R}^{n}$ be a convex set. An IVF $f: U\longrightarrow\mathcal{I}(\mathbb{R})$ is said to be convex on $U$ if \[ f(\lambda x_1+(1-\lambda)x_2)\preceq \lambda f(x_1)+(1-\lambda)f(x_2), \] for all $x_1, x_2 \in U$ and $\lambda\in [0, 1]$.
\end{definition} \medskip \begin{definition} \cite{GCMD20} Let $U\subseteq\mathbb{R}^n$ be a nonempty set. An IVF $f:U\longrightarrow \mathcal{I}(\mathbb{R})$ is said to be \textit{monotonically increasing} if for all $x, y\in U$ we have \[ x \leq y \ \Longrightarrow \ f(x)\preceq f(y). \] The function $f$ is said to be \textit{monotonically decreasing} if for all $x, y\in U$ we have \[ x \leq y \ \Longrightarrow \ f(y)\preceq f(x). \] \end{definition} It is clear that an IVF is monotonically increasing (or monotonically decreasing) if and only if both the real-valued functions $\underline{f}$ and $\overline{f}$ are monotonically increasing (or monotonically decreasing, respectively). \medskip \begin{definition}\cite{GCMD20} Let $\D\subseteq\M$ be a nonempty set. An RIVF $f:\D\longrightarrow \mathcal{I}(\mathbb{R})$ is said to be \textit{bounded below} on $\D$ if there exists an interval $A\in\mathcal{I}(\mathbb{R})$ such that \[ A\preceq f(x), \quad \forall x\in \D. \] The function $f$ is said to be \textit{bounded above} on $\D$ if there exists an interval $B\in\mathcal{I}(\mathbb{R})$ such that \[ f(x)\preceq B, \quad \forall x\in \D. \] The function $f$ is said to be \textit{bounded} if it is both bounded below and above. \end{definition} It is easy to verify that an RIVF $f$ is bounded below (or bounded above) if and only if both the real-valued functions $\underline{f}$ and $\overline{f}$ are bounded below (or bounded above, respectively). \medskip \begin{definition}\label{geodesically_convex_set} Let $\mathcal{D}\subseteq \M$ be a geodesically convex set and $f:\D\longrightarrow\I(\mathbb{R})$ be an RIVF. The function $f$ is called geodesically convex on $\D$ if \[ f(\gamma(t))\preceq (1-t) f(x)+tf(y), \quad \forall x, y\in \mathcal{D} \ {\rm and} \ \forall t\in [0, 1], \] where $\gamma:[0, 1]\longrightarrow \M$ is the minimal geodesic joining $x$ and $y$. \end{definition} \medskip \begin{proposition} Let $\D$ be a geodesically convex subset of $\M$ and $f$ be an RIVF on $\D$.
Then, $f$ is geodesically convex on $\D$ if and only if $\underline{f}$ and $\overline{f}$ are geodesically convex on $\D$. \end{proposition} \beginproof This is a direct consequence of Definitions \ref{geodesically_convex_set} and \ref{ordering}. \endproof \medskip \begin{example} Consider the set \[ \D=\{A\in S^{n}_{++} \, | \, \det(A)>1\}. \] \end{example} \noindent For all $X, Y\in \D$, the minimal geodesic joining $X$ and $Y$ is given by \[ \gamma(t)=X^{1/2}(X^{-1/2}YX^{-1/2})^{t}X^{1/2}, \quad \forall t\in [0, 1]. \] For any $t\in [0, 1]$, we also obtain \[ \det(X^{1/2}(X^{-1/2}YX^{-1/2})^{t}X^{1/2})=(\det(X))^{1-t}(\det(Y))^{t}>1, \] which says that $\D$ is a geodesically convex subset of $S^{n}_{++}$. Moreover, on the set $\D$, we define an RIVF as below: \begin{align} f:& \, \D\longrightarrow \mathcal{I}(\mathbb{R}) \nonumber \\ & X\longmapsto \left[0, \, \ln(\det(X)) \right] \nonumber \end{align} Then, for any $X, Y\in \D$ and $t\in [0, 1]$, we have \begin{eqnarray*} f(\gamma(t)) &=& \left[0, \, \ln\det(X^{1/2}(X^{-1/2}YX^{-1/2})^{t}X^{1/2}) \right] \\ &=& \left[0, \, (1-t)\ln(\det(X))+t\ln(\det(Y)) \right] \\ &=& (1-t) \left[0, \, \ln(\det(X)) \right] + t \left[0, \, \ln(\det(Y)) \right] \\ &=& (1-t)f(X) + tf(Y), \end{eqnarray*} which shows that $f$ is a geodesically convex RIVF on $\D$. \medskip \begin{proposition} An RIVF $f:\D\longrightarrow\I(\mathbb{R})$ is geodesically convex if and only if, for all $x, y\in\D$ with minimal geodesic $\gamma:[0, 1]\longrightarrow \M$ joining $x$ and $y$, the IVF $f\circ\gamma$ is convex on $[0, 1]$. \end{proposition} \beginproof Assume that $f$ is geodesically convex. For $x, y\in \D$, let $\gamma:[0, 1]\longrightarrow \M$ be the minimal geodesic joining $x$ to $y$; then the restriction of $\gamma$ to $[t_1, t_2]$, with $t_1, t_2\in [0, 1]$, joins the points $\gamma(t_1)$ and $\gamma(t_2)$. We re-parametrize this restriction as \[ \alpha(s)=\gamma(t_1+s(t_2-t_1)), \quad s\in[0, 1].
\] Since $f$ is geodesically convex, for all $s\in [0, 1]$, we have \begin{eqnarray*} & & f(\alpha(s))\preceq (1-s)f(\alpha(0))+sf(\alpha(1)) \\ & \Rightarrow & f(\gamma( (1-s)t_1+st_2))\preceq (1-s)f(\gamma(t_1))+sf(\gamma(t_2)) \\ & \Rightarrow & (f\circ\gamma)( (1-s)t_1+st_2)\preceq (1-s)(f\circ\gamma)(t_1)+s(f\circ\gamma)(t_2), \end{eqnarray*} which says the IVF $f\circ\gamma$ is convex on $[0, 1]$. \medskip \noindent Conversely, let $x, y\in \D$, let $\gamma:[0, 1]\longrightarrow \M$ be the minimal geodesic joining $x$ and $y$, and suppose that $f\circ\gamma:[0, 1]\longrightarrow \I(\mathbb{R})$ is a convex IVF. In other words, for all $t_1, t_2\in [0, 1]$, we have \[ (f\circ\gamma)((1-s)t_1+st_2) \preceq (1-s)(f\circ\gamma)(t_1)+s(f\circ\gamma)(t_2), \ \forall s\in[0, 1]. \] Letting $t_1=0$ and $t_2=1$ gives \[ (f\circ\gamma)(s)\preceq (1-s)(f\circ\gamma)(0)+s(f\circ\gamma)(1), \ \forall s\in[0, 1], \] or \[ f(\gamma(s))\preceq(1-s)f(x)+sf(y), \ \forall s\in[0, 1]. \] Then, $f$ is a geodesically convex RIVF. \endproof \medskip \begin{lemma} If $f$ is a geodesically convex RIVF on $\D$ and $A$ is an interval, then the sublevel set \[ \D^{A}=\{x\in\D: f(x)\preceq A\}, \] is a geodesically convex subset of $\D$. \end{lemma} \beginproof For all $x, y\in\D^{A}$, we have $f(x)\preceq A$ and $f(y)\preceq A$. Let $\gamma:[0, 1]\longrightarrow \M$ be the minimal geodesic joining $x$ and $y$. For all $t\in[0, 1]$, by the convexity of $f$, we have \[ f(\gamma(t))\preceq (1-t)f(x)+tf(y)\preceq (1-t)A+tA=A. \] Thus, $\gamma(t)\in \D^{A}$ for all $t\in[0, 1]$, which says that $\D^{A}$ is a geodesically convex subset of $\D$. \endproof \medskip \section{The $gH$-continuity and $gH$-differentiability of Riemannian interval valued functions} In this section, we generalize $gH$-continuity and $gH$-differentiability of interval valued functions to the setting of Hadamard manifolds.
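Since the limit notions in this section are phrased via the Hausdorff metric $d_H$, it may help to see the metric computed concretely. The short Python sketch below is an illustration only; the helper name \texttt{d\_H} is ours. It checks, on a simple interval-valued map, that $d_H$-convergence amounts to convergence of both endpoint functions.

```python
# Hausdorff metric on closed bounded intervals represented as (lo, hi);
# the helper name d_H is ours, used purely for illustration.

def d_H(A, B):
    return max(abs(A[0] - B[0]), abs(A[1] - B[1]))

# Take f(x) = [x, x + 1] near x0 = 0 and A = [0, 1]; then
# d_H(f(x), A) = max(|x - 0|, |x + 1 - 1|) = |x|,
# so f(x) -> A exactly when both endpoints converge.
A = (0.0, 1.0)
for x in (0.5, 0.25, 0.125):   # dyadic values: exact in floating point
    fx = (x, x + 1.0)
    assert d_H(fx, A) == x
```

The dyadic sample points are chosen so the floating-point arithmetic is exact and the assertion is not affected by rounding.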
The relationship between $gH$-differentiability and geodesic convexity of RIVFs is also established. \medskip \begin{definition} Let $f:\M\longrightarrow \mathcal{I}(\mathbb{R})$ be an RIVF, and let $x_0\in\M$ and $A=[\underline{a}, \, \overline{a}]\in \mathcal{I}(\mathbb{R})$. We say $\lim_{x\rightarrow x_0}f(x)=A$ if for every $\epsilon>0$, there exists $\delta>0$ such that, for all $x\in\M$ with $d(x, x_0)<\delta$, there holds $d_H(f(x), A)<\epsilon$. \end{definition} \medskip \begin{lemma} Let $f:\M\longrightarrow \mathcal{I}(\mathbb{R})$ be an RIVF, $A=[\underline{a}, \overline{a}]\in \mathcal{I}(\mathbb{R})$. Then, \[ \lim\limits_{x\rightarrow x_0}f(x)=A \quad \Longleftrightarrow \quad \begin{cases} \lim\limits_{x\rightarrow x_0}\underline{f}(x)=\underline{a}, \\ \lim\limits_{x\rightarrow x_0}\overline{f}(x)=\overline{a}. \end{cases} \] \end{lemma} \beginproof If $\lim_{x\rightarrow x_0}f(x)=A$, then for every $\epsilon>0$, there exists $\delta>0$ such that, for all $x\in\M$ with $d(x, x_0)<\delta$, we have \begin{eqnarray*} & & d_H(f(x), A)<\epsilon \\ & \Longrightarrow & \max\{|\underline{f}(x)-\underline{a}|, \, |\overline{f}(x)-\overline{a}|\} < \epsilon \\ & \Longrightarrow & \begin{cases} |\underline{f}(x)-\underline{a}|<\epsilon \\ |\overline{f}(x)-\overline{a}|<\epsilon \end{cases}. \end{eqnarray*} Consequently, we have \[ \begin{cases} \lim\limits_{x\rightarrow x_0}\underline{f}(x)=\underline{a}, \\ \lim\limits_{x\rightarrow x_0}\overline{f}(x)=\overline{a}. \end{cases} \] On the other hand, if we have \[ \begin{cases} \lim\limits_{x\rightarrow x_0}\underline{f}(x)=\underline{a}, \\ \lim\limits_{x\rightarrow x_0}\overline{f}(x)=\overline{a}, \end{cases} \] then for every $\epsilon>0$, there exists $\delta>0$ such that, for all $x\in\M$ with $d(x, x_0)<\delta$, we have \[ \begin{cases} |\underline{f}(x)-\underline{a}|<\epsilon \\ |\overline{f}(x)-\overline{a}|<\epsilon \end{cases} \quad \Longrightarrow \quad d_H(f(x), A)<\epsilon.
\] which says $\lim\limits_{x\rightarrow x_0}f(x)=A$. Thus, the proof is complete. \endproof \medskip \begin{remark} From Proposition \ref{property-dH}, we know that $d_H(f(x), A)=d_H(f(x)-_{gH}A, \textbf{0})$, which yields \[ \lim\limits_{x\rightarrow x_0}f(x)=A \quad \Longleftrightarrow \quad \lim\limits_{x\rightarrow x_0}(f(x)-_{gH}A)=\textbf{0}. \] \end{remark} \medskip \begin{definition}[$gH$-continuity] \label{gH-continuity} Let $f$ be an RIVF on a nonempty open subset $\mathcal{D}$ of $\M$, $x_0\in \D$. The function $f$ is said to be $gH$-continuous at $x_0$ if for all $v\in T_{x_0}\M$ with $\exp_{x_0}v\in \D$, we have \[ \lim\limits_{||v||\rightarrow 0} \left( f(\exp_{x_0}(v))-_{gH} f(x_0) \right)=\textbf{0}. \] We say that $f$ is $gH$-continuous on $\D$ if $f$ is $gH$-continuous at every $x\in \D$. \end{definition} \medskip \begin{remark} We point out a couple of remarks regarding $gH$-continuity. \begin{enumerate} \item When $\M\equiv\mathbb{R}^{n}$, $f$ becomes an IVF and $\exp_{x_0}(v)=x_0+v$. In other words, Definition \ref{gH-continuity} generalizes the concept of $gH$-continuity from the IVF setting, see \cite{G17}. \item By Lemma 3.1 and Remark 3.1, we can see that $f$ is $gH$-continuous if and only if $\underline{f}$ and $\overline{f}$ are continuous. \end{enumerate} \end{remark} \medskip \begin{theorem} Let $\D\subseteq \M$ be a geodesically convex set with nonempty interior and $f: \D\longrightarrow \I(\mathbb{R})$ be a geodesically convex RIVF. Then, $f$ is $gH$-continuous on $\intt\D$. \end{theorem} \beginproof Let $x_0\in\intt \D$ and let $B(x_0, r)$ be an open ball centered at $x_0$ with sufficiently small radius $r$. Choose $A\in\I(\mathbb{R})$ such that the geodesically convex set $\D^{A}=\{x\in \D: f(x)\preceq A\}$ contains $\overline{B}(x_0, r)$. Let $\gamma: [-1, 1]\longrightarrow\M$ be a minimal geodesic in $\overline{B}(x_0, r)$ such that $\gamma(-1)=x_1, \gamma(0)=x_0, \gamma(1)=x_2$.
For convenience, we denote $\gamma(t)=x$, where $t=\frac{d(x_0, x)}{r}\in[0, 1]$. By the convexity of $f$, we have \[ f(\gamma(t))\preceq (1-t)f(x_0)+tf(x_2)\preceq (1-t)f(x_0)+tA, \] which together with Lemma \ref{property-sets} implies \begin{equation} \label{theorem3.11} f(x)-_{gH}f(x_0) \preceq t(A-_{gH}f(x_0)). \end{equation} The minimal geodesic joining $x_1$ and $x$ is the restriction $\gamma(u), u\in[-1, t]$. Setting $u=-1+s(t+1), s\in[0, 1]$, we obtain the re-parametrization \[ \alpha(s)=\gamma(-1+s(t+1)), \ s\in[0, 1]. \] It is clear that \[ \alpha(0)=\gamma(-1)=x_1, \ \alpha\left(\dfrac{1}{t+1}\right)=\gamma(0)=x_0, \ \alpha(1)=\gamma(t)=x. \] Due to the convexity of $f$, we have \[ f(\alpha(s))\preceq (1-s)f(x_1)+sf(x)\preceq (1-s)A+sf(x), \ \forall s\in[0, 1]. \] Letting $s=\frac{1}{1+t}$ yields \[ f(x_0)\preceq \dfrac{t}{t+1}A+\dfrac{1}{t+1}f(x), \] which together with Lemma \ref{property-sets} further implies \begin{equation} \label{theorem3.12} f(x_0)-_{gH}f(x)\preceq \left[(tA+f(x))-_{gH}tf(x_0)\right]-_{gH}f(x). \end{equation} From (\ref{theorem3.11}) and (\ref{theorem3.12}), plugging in $t=\frac{d(x_0, x)}{r}$ and letting $x\rightarrow x_0$ (so that $t\rightarrow 0$), we obtain $\lim\limits_{x\rightarrow x_0}f(x)=f(x_0)$. Then, the proof is complete. \endproof \medskip \begin{definition} \cite{LMWY11} Let $\mathcal{D}\subseteq \M$ be a nonempty open set and consider a function $f:\mathcal{D}\longrightarrow\mathbb{R}$. We say that $f$ has a directional derivative at $x\in \mathcal{D}$ in the direction $v\in T_x\M$ if the limit \[ f'(x,v)=\lim\limits_{t \to 0^{+}}\dfrac{f(\exp_{x}(tv))-f(x)}{t} \] exists, where $f'(x,v)$ is called the directional derivative of $f$ at $x$ in the direction $v\in T_x\M$. If $f$ has a directional derivative at $x$ in every direction $v\in T_x\M$, we say that $f$ is directional differentiable at $x$. \end{definition} \medskip \begin{definition}[$gH$-directional differentiability \cite{C22}] Let $f$ be an RIVF on a nonempty open subset $\D$ of $\M$.
The function $f$ is said to have a $gH$-directional derivative at $x\in \D$ in the direction $v\in T_x\M$ if there exists a closed bounded interval $f'(x,v)$ such that the limit \[ f'(x,v)=\lim\limits_{t \to 0^{+}}\dfrac{1}{t}(f(\exp_x(tv))-_{gH}f(x)) \] exists, where $f'(x,v)$ is called the $gH$-directional derivative of $f$ at $x$ in the direction of $v$. If $f$ has a $gH$-directional derivative at $x$ in every direction $v\in T_x\M$, we say that $f$ is $gH$-directional differentiable at $x$. \end{definition} \medskip \begin{lemma}{\cite{C22}} Let $\mathcal{D}\subseteq \M$ be a nonempty open set and consider an RIVF $f: \mathcal{D}\longrightarrow \mathcal{I}(\mathbb{R})$. Then, $f$ has a $gH$-directional derivative at $x\in\D$ in the direction $v\in T_x\M$ if and only if $\underline{f}$ and $\overline{f}$ have directional derivatives at $x$ in the direction $v$. Furthermore, we have \[ f'(x,v)=\left[\min\{\underline{f}'(x, v), \overline{f}'(x,v)\}, \, \max\{\underline{f}'(x, v), \overline{f}'(x,v)\}\right], \] where $\underline{f}'(x, v)$ and $\overline{f}'(x,v)$ are the directional derivatives of $\underline{f}$ and $\overline{f}$ at $x$ in the direction $v$, respectively. \end{lemma} \medskip \begin{theorem}\label{Exitence_gH-directional_derivetive} Let $\mathcal{D}\subseteq \M$ be a nonempty open geodesically convex set. If $f:\D\longrightarrow \I(\mathbb{R})$ is a geodesically convex RIVF, then at any $x_0\in\D$ the $gH$-directional derivative $f'(x_0, v)$ exists for every direction $v\in T_{x_0}\M$. \end{theorem} \medskip \noindent To prove Theorem \ref{Exitence_gH-directional_derivetive}, we need two lemmas. \begin{lemma}\label{monotonically_increasing} Let $\D\subseteq \M$ be a nonempty geodesically convex set and consider a geodesically convex RIVF $f:\D\longrightarrow \mathcal{I}(\mathbb{R})$.
Then, for all $x_{0}\in\D$ and $v\in T_{x_0}\M$, the function $\phi:\mathbb{R}^{+}\backslash\{0\}\longrightarrow\mathcal{I}(\mathbb{R})$, defined by \begin{center} $\phi(t)=\dfrac{1}{t}(f(\exp_{x_0}(tv))-_{gH}f(x_0))$, \end{center} for all $t>0$ such that $\exp_{x_0}(tv)\in\D$, is monotonically increasing. \end{lemma} \beginproof For all $t, s$ such that $0< t\le s$, by the convexity of $f$, for all $\lambda\in[0, 1]$, we have \begin{center} $f(\exp_{x_0}(\lambda(sv)))\preceq (1-\lambda)f(x_0)+\lambda f(\exp_{x_0}(sv))$. \end{center} Since $\frac{t}{s}\in(0, 1]$, there holds \begin{center} $f(\exp_{x_0}(tv))\preceq\dfrac{s-t}{s}f(x_0)+\dfrac{t}{s}f(\exp_{x_0}(sv))$, \end{center} or \begin{align} &f(\exp_{x_0}(tv))-_{gH}f(x_0)\nonumber\\ \preceq&\left[\dfrac{s-t}{s}f(x_0)+\dfrac{t}{s}f(\exp_{x_0}(sv))\right]-_{gH}f(x_0)\nonumber\\ =&\left[\min\left\{\dfrac{s-t}{s}\underline{f}(x_0)+\dfrac{t}{s}\underline{f}(\exp_{x_0}(sv))-\underline{f}(x_0), \dfrac{s-t}{s}\overline{f}(x_0)+\dfrac{t}{s}\overline{f}(\exp_{x_0}(sv))-\overline{f}(x_0)\right\},\right.\nonumber\\ &\left.\max\left\{\dfrac{s-t}{s}\underline{f}(x_0)+\dfrac{t}{s}\underline{f}(\exp_{x_0}(sv))-\underline{f}(x_0), \dfrac{s-t}{s}\overline{f}(x_0)+\dfrac{t}{s}\overline{f}(\exp_{x_0}(sv))-\overline{f}(x_0)\right\}\right]\nonumber\\ =&\left[\min\left\{\dfrac{t}{s}(\underline{f}(\exp_{x_0}(sv))-\underline{f}(x_0)), \dfrac{t}{s}(\overline{f}(\exp_{x_0}(sv))-\overline{f}(x_0))\right\},\right.\nonumber\\ &\left.\max\left\{\dfrac{t}{s}(\underline{f}(\exp_{x_0}(sv))-\underline{f}(x_0)), \dfrac{t}{s}(\overline{f}(\exp_{x_0}(sv))-\overline{f}(x_0))\right\}\right]\nonumber\\ =&\dfrac{t}{s}(f(\exp_{x_0}(sv))-_{gH}f(x_0)).\nonumber \end{align} Multiplying both sides by $\frac{1}{t}>0$ yields $\phi(t)\preceq\phi(s)$, so $\phi$ is monotonically increasing and the proof is complete. \endproof \medskip \begin{lemma}\label{bounded_below} Let $\D\subseteq\M$ be an open geodesically convex set.
If $f:\D\longrightarrow\I(\mathbb{R})$ is a geodesically convex RIVF, then for all $x_0\in \D$ and $v\in T_{x_0}\M$, there exists $t_0>0$ such that $\phi(t)=\dfrac{1}{t}\left(f(\exp_{x_0}(tv))-_{gH}f(x_0) \right)$ is bounded below for all $t\in(0, t_0]$. \end{lemma} \beginproof For all $v\in T_{x_0}\M$, let $\gamma$ be the geodesic such that $\gamma(0)=x_0$ and $\gamma'(0)=v$. Since $\mathcal{D}\subseteq \M$ is a nonempty open geodesically convex set, there exist $t_1, t_2\in\mathbb{R}$ such that $t_1<0<t_2$ and the restriction of $\gamma$ to $[t_1, t_2]$ is contained in $\D$. Let $\lambda\in (0, t_2]$ and fix the point $\gamma(\lambda)$. The restriction of $\gamma$ to $[t_1, \lambda]$ joins $\gamma(t_1)$ and $\gamma(\lambda)$. We can re-parametrize this restriction as \[ \alpha(s)=\gamma(t_1+s(\lambda-t_1)), \ s\in [0, 1]. \] Using the convexity of $f$ gives \[ f(\alpha(s))\preceq (1-s)f(\alpha(0))+sf(\alpha(1)) \ \Longrightarrow \ f(\alpha(s))\preceq (1-s)f(\gamma(t_1))+sf(\gamma(\lambda)) . \] Plugging in $s=\frac{t_1}{t_1-\lambda}$ leads to \[ f(x_0)\preceq\dfrac{\lambda}{\lambda-t_1}f(\gamma(t_1)) + \dfrac{-t_1}{\lambda-t_1}f(\gamma(\lambda)). \] Then, we have \[ (\lambda-t_1)[\underline{f}(x_0), \overline{f}(x_0)]\preceq \left[ -t_1\underline{f}(\gamma(\lambda))+\lambda\underline{f}(\gamma(t_1)), \ -t_1\overline{f}(\gamma(\lambda))+\lambda\overline{f}(\gamma(t_1)) \right] \] or \[ \left \{ \begin{array}{l} \dfrac{1}{-t_1}(\underline{f}(x_0)-\underline{f}(\gamma(t_1))) \leq \dfrac{1}{\lambda}(\underline{f}(\gamma(\lambda))-\underline{f}(x_0)), \\ \dfrac{1}{-t_1}(\overline{f}(x_0)-\overline{f}(\gamma(t_1))) \leq \dfrac{1}{\lambda}(\overline{f}(\gamma(\lambda))-\overline{f}(x_0)). \end{array} \right. \] Since $\gamma(\lambda)=\exp_{x_0}(\lambda v)$, the left-hand sides are constants independent of $\lambda$, so both endpoint functions of $\phi(\lambda)$ are bounded below on $(0, t_2]$. Taking $t_0=t_2$ completes the proof. \endproof \medskip \noindent As below, we provide the proof of Theorem \ref{Exitence_gH-directional_derivetive}: \beginproof Take any $x_0\in\D$ and $v\in T_{x_0}\M$.
Define an IVF $\phi:\mathbb{R}^{+}\backslash\{0\}\longrightarrow \I(\mathbb{R})$ by \[ \phi(t)=\dfrac{1}{t}(f(\exp_{x_0}(tv))-_{gH}f(x_0)). \] Write $\phi(t)=[\underline{\phi}(t), \overline{\phi}(t)]$. By Lemma \ref{monotonically_increasing} and Lemma \ref{bounded_below}, both real-valued functions $\underline{\phi}$ and $\overline{\phi}$ are monotonically increasing and bounded below for $t$ small enough. Therefore, the limits $\lim_{t\rightarrow 0^{+}}\underline{\phi}(t)$ and $\lim_{t\rightarrow 0^{+}}\overline{\phi}(t)$ exist, and hence the limit $\lim_{t\rightarrow 0^+}\phi(t)$ exists. Thus, the function $f$ has a $gH$-directional derivative at $x_0\in\D$ in the direction $v$. \endproof \medskip \begin{theorem} \label{geodesical-convexity} Let $f:\D\longrightarrow \I(\mathbb{R})$ be a $gH$-directional differentiable RIVF. If $f$ is geodesically convex on $\D$, then \[ f'(x, \exp_{x}^{-1}y)\preceq f(y)-_{gH}f(x), \quad \forall x, y\in \D. \] \end{theorem} \beginproof For all $x, y\in \D$ and $t\in (0, 1]$, by the convexity of $f$, we have \[ f(\gamma(t))\preceq tf(y)+(1-t)f(x), \] where $\gamma:[0, 1]\longrightarrow \M$ is the minimal geodesic joining $x$ and $y$. Applying Lemma \ref{property-sets} yields \begin{eqnarray*} & & f(\gamma(t))-_{gH}f(x) \\ &\preceq & \left[tf(y)+(1-t)f(x)\right]-_{gH}f(x) \\ &=& \left[\min\{t\underline{f}(y)+(1-t)\underline{f}(x)-\underline{f}(x), t\overline{f}(y)+(1-t)\overline{f}(x)-\overline{f}(x)\},\right. \\ & & \max \left.\{t\underline{f}(y)+(1-t)\underline{f}(x)-\underline{f}(x), t\overline{f}(y)+(1-t)\overline{f}(x)-\overline{f}(x)\}\right] \\ &=& \left[ \min\{t(\underline{f}(y)-\underline{f}(x)), t(\overline{f}(y)-\overline{f}(x))\}, \, \max\{t(\underline{f}(y)-\underline{f}(x)), t(\overline{f}(y)-\overline{f}(x))\}\right] \\ &=& t \, [f(y)-_{gH}f(x)]. \end{eqnarray*} Then, we obtain \[ \dfrac{1}{t}\left[f(\gamma(t))-_{gH}f(x)\right]\preceq f(y)-_{gH}f(x), \quad \forall x, y\in\D \ {\rm and} \ t\in (0, 1].
\] Since $\gamma(t)=\exp_{x}(t\exp_{x}^{-1}y)$, letting $t\rightarrow 0^{+}$ we obtain \begin{center} $f'(x, \exp_{x}^{-1}y)\preceq f(y)-_{gH}f(x), \quad \forall x, y\in\D.$ \end{center} Thus, the proof is complete. \endproof \medskip \begin{corollary} Let $\D\subseteq\M$ be a nonempty open geodesically convex set and suppose that the RIVF $f:\D\longrightarrow \I(\mathbb{R})$ is $gH$-directional differentiable on $\D$. If $f$ is geodesically convex on $\D$, then \[ f(y)\nprec f'(x, \exp_{x}^{-1}y)+f(x), \quad \forall x, y\in \D. \] \end{corollary} \beginproof The result follows immediately from Theorem \ref{geodesical-convexity} and Lemma \ref{property-sets}. \endproof \medskip \begin{definition}\cite{GCMD20} Let $\V$ be a linear subspace of $\mathbb{R}^{n}$. The IVF $F:\V\longrightarrow \I(\mathbb{R})$ is said to be linear if \begin{description} \item[(a)] $F(\lambda v)=\lambda F(v)$, for all $v\in\V$, $\lambda\in\mathbb{R}$; and \item[(b)] for all $ v, w\in \V $, either $F(v)+F(w)=F(v+w)$ or none of $F(v)+F(w)$ and $F(v+w)$ dominates the other. \end{description} \end{definition} \medskip \begin{definition}[$gH$-Gâteaux differentiability] Let $f$ be an RIVF on a nonempty open subset $\D$ of $\M$ and $x_0\in \D$. The function $f$ is called $gH$-Gâteaux differentiable at $x_0$ if $f$ is $gH$-directional differentiable at $x_0$ and $f'(x_0, \cdot): T_{x_0}\M\longrightarrow \I(\mathbb{R})$ is a $gH$-continuous, linear IVF. The $gH$-Gâteaux derivative of $f$ at $x_0$ is defined by \[ f_G(x_0)(\cdot):=f'(x_0, \cdot). \] The function $f$ is called $gH$-Gâteaux differentiable on $\D$ if $f$ is $gH$-Gâteaux differentiable at every $x\in \D$. \end{definition} \medskip \begin{example} \label{example-linear-diff} Let $\M:=\mathbb{R}^2$ be equipped with the standard metric. Then, $\M$ is a flat Hadamard manifold.
We consider the RIVF given as below: \begin{align} f: & \, \M \longrightarrow \I(\mathbb{R}) \nonumber \\ & (x_1, x_2)\longmapsto \begin{cases} \dfrac{x_1x_2^2}{x_1^4+x_2^2}[1, 2] & \text{ if } (x_1, x_2)\neq (0, 0), \\ \textbf{0} & \text{ otherwise}. \end{cases} \nonumber \end{align} \end{example} \noindent For all $v=(v_1, v_2)\in T_{(0,0)}\M\equiv \mathbb{R}^{2}$, we compute \begin{eqnarray*} f'((0,0), v) &=& \lim\limits_{t\rightarrow 0^+}\dfrac{1}{t}\left(f( (0, 0)+tv)-_{gH}f((0,0))\right) \\ &=& \lim\limits_{t\rightarrow 0^+}\dfrac{1}{t}\dfrac{t^3v_1v_2^2}{t^4v_1^4+t^2v_2^2}[1, 2] \\ &=& \lim\limits_{t\rightarrow 0^+}\dfrac{v_1v_2^2}{t^2v_1^4+v_2^2}[1, 2] \\ &=& v_1[1,2]. \end{eqnarray*} On the other hand, for all $h=(h_1, h_2)\in \mathbb{R}^2$, we have \begin{eqnarray*} & & f'((0, 0), v+h)-_{gH}f'((0, 0), v) \\ &=& \left[ \min\{ v_1+h_1-v_1, 2(v_1+h_1)-2v_1\}, \, \max\{ v_1+h_1-v_1, 2(v_1+h_1)-2v_1\} \right] \\ &=& \left[ \min\{h_1, 2h_1\}, \, \max\{h_1, 2h_1\} \right], \end{eqnarray*} which says $\lim\limits_{||h||\rightarrow 0}\left(f'((0, 0), v+h)-_{gH}f'((0, 0), v)\right)=\textbf{0}$. In other words, $f'((0,0), \cdot)$ is a $gH$-continuous IVF. Hence, $f'((0,0), \cdot)$ is a linear, $gH$-continuous IVF; that is, $f$ is $gH$-Gâteaux differentiable at $(0, 0)$ and $f_G( (0, 0))(v)=v_1[1, 2]$ for all $v=(v_1, v_2)\in T_{(0,0)}\M$. \medskip \begin{example} We consider an RIVF defined by \begin{align} f: & \, S^{n}_{++}\longrightarrow \I(\mathbb{R}) \nonumber \\ & X\longmapsto \begin{cases} \left[ \ln(\det (X)), \ \ln(\det( X^2)) \right] & \text{ if } \det (X) \geq 1, \\ \left[ \ln(\det( X^2)), \ \ln(\det (X)) \right] & \text{ otherwise}.
\end{cases}\nonumber \end{align} \end{example} \noindent For all $ v\in T_IS^{n}_{++}\equiv S^n$, where $S^n$ is the space of $n\times n$ symmetric matrices and $I$ is the $n\times n$ identity matrix, denoting $Y=\exp_{I}(v)$, for all $t\in (0, 1]$ we have \[ \ln \left( \det (I^{1/2}(I^{-1/2}YI^{-1/2})^tI^{1/2}) \right) = t\ln(\det( Y)), \] which implies \begin{eqnarray*} & & f_G(I)(v) \\ &=& \lim\limits_{t\rightarrow 0+}\dfrac{1}{t}[f(\exp_{I}(tv))-_{gH}f(I)] \\ &=& \lim\limits_{t\rightarrow 0+}\dfrac{1}{t}\left[\min\{t\underline{f}(Y), t\overline{f}(Y)\}, \max\{t\underline{f}(Y), t\overline{f}(Y)\}\right], \\ &=& \left[ \min\{\ln(\det( Y)), 2\ln(\det( Y))\}, \max \{\ln(\det( Y)), 2\ln(\det (Y))\}\right] \\ &=& \begin{cases} [\ln(\det( Y)), 2\ln(\det( Y))]& \text{ if } \det( Y) \geq 1, \\ [2\ln(\det( Y)), \ln(\det( Y))]& \text{ otherwise}. \end{cases} \end{eqnarray*} This concludes that $f$ is $gH$-directional differentiable at $I$. \medskip \noindent On the other hand, for all $v\in S^n$ and $\lambda\in\mathbb{R}$, we know that \begin{eqnarray*} \exp_{I}(\lambda v) &=& I^{1/2}\Exp(I^{-1/2}(\lambda v)I^{-1/2})I^{1/2} \\ &=& \Exp(\lambda v), \end{eqnarray*} where $\Exp$ denotes the matrix exponential. From \cite{H15}, we also have \begin{eqnarray*} & & \det (\exp_{I}(\lambda v))=\det (\Exp(\lambda v)) =e^{\Tr (\lambda v)}=\left(e^{\Tr v}\right)^{\lambda} \\ &\Longrightarrow & \ln(\det (\exp_{I}(\lambda v)))=\lambda\ln(\det (\exp_{I}( v))). \end{eqnarray*} To sum up, the function $f_G(I)(\cdot)$ is a linear IVF. Moreover, for all $v, h\in S^n$, it follows from \cite{H15} that \[ \det(\Exp(v+h)) = e^{\Tr(v+h)} = \det (\Exp(v))\,\det(\Exp( h)). \] Thus, we obtain \begin{eqnarray*} & & \det (\exp_{I}(v+h))=\det\left(\Exp (v)\right)\det(\Exp (h)) \\ & \Longrightarrow & \ln(\det (\exp_{I}(v+h)))=\ln(\det (\Exp (v)))+\ln(\det(\Exp (h))).
\\ & \Longrightarrow & \lim\limits_{||h||\rightarrow 0}(f_G(I)(v+h)-_{gH}f_G(I)(v)) \\ & & =\lim\limits_{||h||\rightarrow 0}[\min\{\ln(\det(\Exp (h))), 2\ln(\det(\Exp (h)))\}, \\ & & \quad \ \max\{\ln(\det(\Exp (h))), 2\ln(\det(\Exp (h)))\}] \\ & & =\textbf{0}. \end{eqnarray*} This says that $f_G(I)(\cdot)$ is a linear, $gH$-continuous IVF. Thus, $f$ is $gH$-Gâteaux differentiable at $I$. \medskip \begin{remark} We point out that the $gH$-Gâteaux differentiability does not imply the $gH$-continuity of a RIVF. In fact, in Example \ref{example-linear-diff}, the function $f$ is $gH$-Gâteaux differentiable at $(0, 0)$, but \[ \lim\limits_{||h||\rightarrow 0}(f(h_1, h_2)-_{gH}f( (0, 0))) =\lim\limits_{||h||\rightarrow 0}\dfrac{h_1h_2^2}{h_1^4+h_2^2}[1, 2] \] does not exist, which indicates that $f$ is not $gH$-continuous at $(0, 0)$. \end{remark} \medskip \section{Interval optimization problems on Hadamard manifolds} This section is devoted to building up some theoretical results on interval optimization problems on Hadamard manifolds. To proceed, we introduce the so-called ``efficient point'' concept, which plays a role parallel to that of the traditional ``minimizer''. \medskip \begin{definition}(Efficient point) Let $\D\subseteq\M$ be a nonempty set and $f:\D\longrightarrow \I(\mathbb{R})$ be a RIVF. A point $x_0\in\D$ is said to be an efficient point of the Riemannian interval optimization problem (RIOP): \begin{equation}\label{RIOP} \min\limits_{x\in\D} f(x) \end{equation} if $f(x)\nprec f(x_0)$ for all $x\in \D$.
\end{definition} \medskip Since the objective function $f(x)=[\underline{f}(x), \overline{f}(x)]$ in the RIOP (\ref{RIOP}) is an interval-valued function, we can consider two corresponding scalar problems for (\ref{RIOP}) as follows: \begin{equation}\label{LRIOP} \min\limits_{x\in\D} \underline{f}(x) \end{equation} and \begin{equation}\label{URIOP} \min\limits_{x\in\D} \overline{f}(x). \end{equation} \medskip \begin{proposition} Consider problems (\ref{LRIOP}) and (\ref{URIOP}). \begin{description} \item[(a)] If $x_0\in\D$ is an optimal solution of problems (\ref{LRIOP}) and (\ref{URIOP}) simultaneously, then $x_0$ is an efficient point of the RIOP (\ref{RIOP}). \item[(b)] If $x_0\in\D$ is the unique optimal solution of problem (\ref{LRIOP}) or (\ref{URIOP}), then $x_0$ is an efficient point of the RIOP (\ref{RIOP}). \end{description} \end{proposition} \beginproof (a) If $x_0\in\D$ is an optimal solution of problems (\ref{LRIOP}) and (\ref{URIOP}) simultaneously, then \[ \begin{cases} \underline{f}(x_0)&\le \underline{f}(x)\\ \overline{f}(x_0)&\le \overline{f}(x) \end{cases}, \forall x\in\D\Rightarrow f(x)\nprec f(x_0), \forall x\in\D, \] that is, $x_0$ is an efficient point of the RIOP (\ref{RIOP}). \medskip \noindent (b) If $x_0\in\D$ is the unique optimal solution of problem (\ref{LRIOP}) or (\ref{URIOP}), then \[ \left[\begin{aligned} \underline{f}(x_0)<\underline{f}(x)\nonumber\\ \overline{f}(x_0)<\overline{f}(x)\nonumber \end{aligned}\right., \forall x\in\D\backslash\{x_0\}, \] which says $f(x)\nprec f(x_0)$ for all $x\in \D$, or equivalently, $x_0$ is an efficient point of the RIOP (\ref{RIOP}). \endproof \medskip \begin{proposition} Consider the RIOP (\ref{RIOP}) with $f(x)=[\underline{f}(x), \overline{f}(x)]$.
Given any $\lambda_1, \lambda_2>0$, if $x_0\in\D$ is an optimal solution of the following problem \begin{equation}\label{MP} \min\limits_{x\in\D} h(x)=\lambda_1\underline{f}(x)+\lambda_2\overline{f}(x), \end{equation} then $x_0$ is an efficient point of the RIOP (\ref{RIOP}). \end{proposition} \beginproof Assume that $x_0$ is not an efficient point of the RIOP (\ref{RIOP}). Then, there exists $x'\in\D$ such that \[ f(x')\prec f(x_0) \ \Longrightarrow \ \lambda_1\underline{f}(x')+\lambda_2\overline{f}(x') < \lambda_{1}\underline{f}(x_0)+\lambda_2\overline{f}(x_0). \] This says that $x_0$ is not an optimal solution of (\ref{MP}), which is a contradiction. Thus, $x_0$ is an efficient point of the RIOP (\ref{RIOP}). \endproof \medskip \begin{theorem}[Characterization I of efficient point] \label{characterization-I} Let $f: \D\longrightarrow \I(\mathbb{R})$ be a RIVF on a nonempty open subset $\D$ of $\M$ and $x_0\in \D$ such that $f$ is $gH$-directional differentiable at $x_0$. \begin{description} \item[(a)] If $x_0$ is an efficient point of the RIOP (\ref{RIOP}), then for all $x\in \D$ \begin{center} $f'(x_0, \exp_{x_0}^{-1}x)\nprec \textbf{0}$ or $f'(x_0, \exp_{x_0}^{-1}x)=[a, 0]$ for some $a<0$. \end{center} \item[(b)] If $\D$ is geodesically convex, $f$ is geodesically convex on $\D$ and \begin{center} $f'(x_0, \exp_{x_0}^{-1}x)\nprec \textbf{0} \quad \forall x\in \D$, \end{center} then $x_0$ is an efficient point of the RIOP (\ref{RIOP}). \end{description} \end{theorem} \beginproof For each $x\in\D$, let $v=\exp_{x_0}^{-1}x$. Since $f$ is $gH$-directional differentiable at $x_0$, we have \begin{center} $f'(x_0, \exp_{x_0}^{-1}x)=f'(x_0, v)=\lim\limits_{t\rightarrow 0^{+}}\dfrac{1}{t}(f(\exp_{x_0}(tv))-_{gH}f(x_0))$.
\end{center} (a) If $x_0$ is an efficient point of the RIOP (\ref{RIOP}), then \begin{align} &f(\exp_{x_0}(tv))\nprec f(x_0), \forall t>0 \nonumber \\ \Rightarrow & f(\exp_{x_0}(tv))-_{gH}f(x_0)\nprec \textbf{0}, \forall t>0 \ (\text{by Lemma \ref{property-sets}}) \nonumber\\ \Rightarrow & \dfrac{1}{t}(f(\exp_{x_0}(tv))-_{gH}f(x_0))\nprec \textbf{0}, \forall t>0 \nonumber\\ \Rightarrow &\left[\begin{aligned} & f'(x_0, v)\nprec \textbf{0} \nonumber\\ & f'(x_0, v)=[a, 0], \text{ for some } a<0 \nonumber \end{aligned}\right. \nonumber \\ \Rightarrow & \left[\begin{aligned} & f'(x_0, \exp_{x_0}^{-1}x)\nprec \textbf{0} \nonumber \\ & f'(x_0, \exp_{x_0}^{-1}x)=[a, 0], \text{ for some } a<0 \nonumber \end{aligned}\right. \nonumber \end{align} \medskip \noindent (b) For all $x\in\D$, by the convexity of $f$ and applying Proposition 3.1, we have \begin{equation}\label{pt2} f'(x_0, \exp_{x_0}^{-1}x)\preceq f(x)-_{gH}f(x_0)\Rightarrow f(x)-_{gH}f(x_0)\nprec \textbf{0}. \end{equation} On the other hand, by Lemma \ref{property-sets}, we have \begin{equation}\label{pt3} f(x)\nprec f(x_0)\Leftrightarrow f(x)-_{gH}f(x_0)\nprec \textbf{0}. \end{equation} From (\ref{pt2}) and (\ref{pt3}), it is clear that \begin{center} $f(x)\nprec f(x_0), \forall x\in\D$. \end{center} Then, $x_0$ is an efficient point of the RIOP (\ref{RIOP}). \endproof \medskip \begin{example} Consider the RIOP $\min\limits_{x\in D} f(x)$, where $f$ and $D$ are defined as in Example 2.4. For all $X, Y\in D$, we have \[ f'(X, \exp_{X}^{-1}Y)= \begin{cases} [0, \, \ln(\det (YX^{-1}))] & \text{ if } \det (Y)\ge\det (X), \\ [\ln(\det (YX^{-1})), \, 0] & \text{ otherwise}. \end{cases} \] Note that, for all $X\in D$, we can find $Y\in D$ such that $\det Y<\det X$, which indicates that this RIOP has no efficient point.
\end{example} \medskip \begin{theorem}[Characterization II of efficient point] \label{characterization-II} Let $f: \D\longrightarrow \I(\mathbb{R})$ be a RIVF on a nonempty open subset $\D$ of $\M$ and $x_0\in \D$ such that $f$ is $gH$-Gâteaux differentiable at $x_0$. \begin{description} \item[(a)] If $x_0$ is an efficient point of the RIOP (\ref{RIOP}), then \[ 0 \in f_{G}(x_0)(\exp_{x_0}^{-1}x), \quad \forall x \in\D. \] \item[(b)] If $\D$ is geodesically convex, $f$ is a geodesically convex RIVF on $\D$ and \[ 0 \in [\underline{f}_{G}(x_0)(\exp_{x_0}^{-1}x), \overline{f}_{G}(x_0)(\exp_{x_0}^{-1}x)), \quad \forall x \in\D, \] where $f_{G}(x_0)(\exp_{x_0}^{-1}x)=[\underline{f}_{G}(x_0)(\exp_{x_0}^{-1}x), \overline{f}_{G}(x_0)(\exp_{x_0}^{-1}x)]$, then $x_0$ is an efficient point of the RIOP (\ref{RIOP}). \end{description} \end{theorem} \beginproof For all $x\in\D$, letting $v=\exp_{x_0}^{-1}x$ and due to $f$ being $gH$-Gâteaux differentiable at $x_0$, the function $f$ has a $gH$-directional derivative at $x_0$ in the direction $v$, and by Theorem 4.1 we have \[ f_{G}(x_0)(\exp_{x_0}^{-1}x)=f'(x_0, v) \nprec \textbf{0} \text{ or } f_{G}(x_0)(\exp_{x_0}^{-1}x)=[a, 0] \text{ for some } a<0. \] Since $T_{x_0}\M$ is a linear space, we have $-v\in T_{x_0}\M$. Because $f$ is $gH$-Gâteaux differentiable at $x_0$, the map $f_G(x_0)(\cdot)$ is linear. Then, we obtain \[ f_G(x_0)(-v)=-f_G(x_0)(v). \] Assume that $\underline{f}_G(x_0)(v)>0$. Then, we have \begin{align} &-\underline{f}_G(x_0)(v)<0 \nonumber \\ \Rightarrow & \begin{cases} & f_G(x_0)(-v)=[-\overline{f}_G(x_0)(v), -\underline{f}_G(x_0)(v)]\prec\textbf{0} \\ & f_G(x_0)(-v)\ne [a, 0] \text{ for some } a<0 \end{cases}, \nonumber \end{align} which is a contradiction. Hence, $\underline{f}_G(x_0)(v) \leq 0$, and thus $0\in f_G(x_0)(\exp_{x_0}^{-1}x)$.
\medskip \noindent For the remaining part, let $x\in\D$. We have \[ 0\in [\underline{f}_{G}(x_0)(\exp_{x_0}^{-1}x), \overline{f}_{G}(x_0)(\exp_{x_0}^{-1}x)) \ \Longrightarrow \ f_G(x_0)(\exp_{x_0}^{-1}x)\nprec \textbf{0} \ \Longrightarrow \ f'(x_0, \exp_{x_0}^{-1}x)\nprec \textbf{0}, \] which together with the convexity of $f$ and Theorem \ref{characterization-I} proves that $x_0$ is an efficient point of the RIOP (\ref{RIOP}). \endproof \medskip \begin{example} Let $\M=\mathbb{R}_{++}:=\{x\in \mathbb{R} \, | \, x>0\}$ be endowed with the Riemannian metric given by \[ \langle u, v\rangle_x=\dfrac{1}{x^2}uv, \quad \forall u, v\in T_x\M\equiv \mathbb{R}. \] Then, it is known that $\M$ is a Hadamard manifold. For all $x\in\M$, $v\in T_x\M$, the geodesic $\gamma:\mathbb{R}\longrightarrow \M$ such that $\gamma(0)=x, \gamma'(0)=v$ is described by \[ \gamma(t)=\exp_{x}(tv)=xe^{(v/x)t} \quad {\rm and} \quad \exp_{x}^{-1}y=x\ln\dfrac{y}{x}, \quad \forall y\in \M. \] We consider the RIOP $\ds\min\limits_{x\in \M} f(x)$, where $f: \M\longrightarrow \I(\mathbb{R})$ is defined by \[ f(x)=\left[x, x+\dfrac{1}{x}\right], \quad \forall x\in \M. \] For all $x\in \M$, $v\in \mathbb{R}$, we compute \begin{eqnarray*} f'(x, v) &=& \lim\limits_{t\longrightarrow 0^{+}}\dfrac{1}{t}(f(\exp_{x}(tv))-_{gH}f(x)) \\ &=& \lim\limits_{t\longrightarrow 0^{+}}\dfrac{1}{t}\left[\min\left\{x(e^{(v/x)t}-1), x(e^{(v/x)t}-1)+\dfrac{1}{x}(e^{-(v/x)t}-1)\right\}\right., \\ & & \left. \max\left\{x(e^{(v/x)t}-1), x(e^{(v/x)t}-1)+\dfrac{1}{x}(e^{-(v/x)t}-1)\right\}\right] \\ &=& \left[\min\left\{v, v-\dfrac{1}{x^2}v \right\}, \max\left\{v, v-\dfrac{1}{x^2}v \right\}\right] \\ &=& v\left[1-\dfrac{1}{x^2}, 1\right], \end{eqnarray*} which says that $f$ is $gH$-directional differentiable on $\M$. We can also easily verify that $f'(x, \cdot)$ is $gH$-continuous and linear, and hence $f$ is $gH$-Gâteaux differentiable on $\M$.
\medskip \noindent On the other hand, by the Cauchy-Schwarz inequality, for all $x>0$, we have \begin{center} $x+\dfrac{1}{x}\ge 2$, and $x+\dfrac{1}{x}=2\Leftrightarrow x=1$, \end{center} and then \begin{center} $\left[x, x+\dfrac{1}{x}\right]\nprec [1, 2], \forall x>0$; \end{center} that is, $x=1$ is an efficient point of this RIOP. \medskip \noindent Particularly, at $x_0=1\in \M$, we have \begin{eqnarray*} f_G(1)(\exp_{1}^{-1}x) &=& \left[\min\left\{\ln x, 0\right\}, \, \max\left\{\ln x, 0\right\} \right] \\ & = & \begin{cases} [\ln x, 0]& \text{ if } x<1,\\ [0, \ln x]& \text{ if } x\ge 1. \end{cases} \end{eqnarray*} \end{example} \medskip \begin{remark} We point out that the similar results in \cite[Theorem 3.2 and Theorem 4.2]{GCMD20} are not correct. \begin{enumerate} \item From Example 4.1 and Example 4.2, we see that, at $x_0\in \D\subseteq \M$, if there exists $x\in \D$ such that $f'(x_0, \exp_{x_0}^{-1}x)=[a, 0]$ for some $a<0$, we still cannot decide whether $x_0$ is an efficient point. \item Theorem \ref{characterization-I} and Theorem \ref{characterization-II} generalize the Euclidean results in \cite[Theorem 3.2 and Theorem 4.2]{GCMD20}. As pointed out above, we think the original statements are not correct; hence, we fix the errors and provide corrected versions in Theorem \ref{characterization-I} and Theorem \ref{characterization-II}. \end{enumerate} \end{remark} \medskip The interval variational inequality problems (IVIPs) were introduced by Kinderlehrer and Stampacchia \cite{KS00}. There are some relationships between the IVIPs and the IOPs.
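Before formalizing these relationships, the one-dimensional example above can be sanity-checked numerically. The sketch below (Python; not part of the formal development) assumes the gH-difference $[a,b]-_{gH}[c,d]=[\min\{a-c,b-d\},\max\{a-c,b-d\}]$ used in the computations above, and the strict order $[a,b]\prec[c,d]\Leftrightarrow a<c$ and $b<d$; it approximates $f'(x,v)$ for $f(x)=[x,\,x+1/x]$ on $\mathbb{R}_{++}$ by a small $t$ and checks the efficiency of $x=1$:

```python
import math

# Interval-valued f(x) = [x, x + 1/x] on M = (0, inf); exp_x(tv) = x * exp(v*t/x).
f = lambda x: (x, x + 1.0 / x)

def gh_diff(A, B):
    """gH-difference [a,b] -_gH [c,d] = [min(a-c, b-d), max(a-c, b-d)]."""
    lo, hi = A[0] - B[0], A[1] - B[1]
    return (min(lo, hi), max(lo, hi))

def directional_derivative(x, v, t=1e-7):
    """Finite-t approximation of f'(x, v) = lim (1/t)(f(exp_x(tv)) -_gH f(x))."""
    y = x * math.exp(v * t / x)
    d = gh_diff(f(y), f(x))
    return (d[0] / t, d[1] / t)

x, v = 2.0, 1.0
num = directional_derivative(x, v)
exact = (min(v, v * (1 - 1 / x**2)), max(v, v * (1 - 1 / x**2)))  # v * [1 - 1/x^2, 1]
print(num, exact)  # the two pairs agree to roughly 1e-6

# x = 1 is efficient: no sampled x > 0 has f(x) strictly below f(1) = [1, 2].
strictly_below = lambda A, B: A[0] < B[0] and A[1] < B[1]
xs = [0.01 * k for k in range(1, 500)]
print(any(strictly_below(f(x), f(1.0)) for x in xs))  # False
```

Since $x+1/x\ge 2$ for every $x>0$, the upper endpoint of $f(x)$ can never drop below $2$, which is why the efficiency check always fails.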
Let $\D$ be a nonempty subset of $\M$ and $T:\D\longrightarrow\mathbb{L}_{gH}(T\M, \I(\mathbb{R}))$ be a mapping such that $T(x)\in \mathbb{L}_{gH}(T_x\M, \I(\mathbb{R}))$, where $\mathbb{L}_{gH}(T_x\M, \I(\mathbb{R}))$ denotes the space of $gH$-continuous linear mappings from $T_x\M$ to $\I(\mathbb{R})$ and $\mathbb{L}_{gH}(T\M, \I(\mathbb{R}))=\bigcup_{x\in\M}\mathbb{L}_{gH}(T_x\M, \I(\mathbb{R}))$. Now, we define the Riemannian interval variational inequality problems (RIVIPs) as follows: \begin{description} \item[(a)] The Stampacchia Riemannian interval variational inequality problem (RSIVIP) is the problem of finding $x_0\in\D$ such that \[ T(x_0)(\exp_{x_0}^{-1}y)\nprec \textbf{0}, \quad \forall y \in\D. \] \item[(b)] The Minty Riemannian interval variational inequality problem (RMIVIP) is the problem of finding $x_0\in\D$ such that \[ T(y)(\exp_{x_0}^{-1}y)\nprec \textbf{0}, \quad \forall y\in\D. \] \end{description} \medskip \begin{definition}[Pseudomonotone] With a mapping $T$ defined as above, $T$ is said to be pseudomonotone if for all $x, y\in \D, x\neq y$, there holds \[ T(x)(\exp_{x}^{-1}y)\nprec \textbf{0} \quad \Longrightarrow \quad T(y)(\exp_{x}^{-1}y)\nprec \textbf{0}. \] \end{definition} \medskip \begin{definition} [Pseudoconvex] Let $\D\subseteq \M$ be a nonempty geodesically convex set and $f:\M\longrightarrow \I(\mathbb{R})$ be a $gH$-Gâteaux differentiable RIVF. Then, $f$ is called pseudoconvex if for all $x, y\in \D$, there holds \[ f_G(x)(\exp_{x}^{-1}y)\nprec \textbf{0} \quad \Longrightarrow \quad f(y)\nprec f(x). \] \end{definition} \medskip \begin{proposition} Let $\D\subseteq \M$ be a nonempty set and consider a mapping $T: \D\longrightarrow \mathbb{L}_{gH}(T\M, \I(\mathbb{R}))$ such that $T(x)\in \mathbb{L}_{gH}(T_{x}\M, \I(\mathbb{R}))$. If $T$ is pseudomonotone, then every solution of the RSIVIP is a solution of the RMIVIP. \end{proposition} \beginproof Suppose that $x_0$ is a solution of the RSIVIP.
Then, we know that \[ T(x_0)(\exp_{x_0}^{-1}y)\nprec \textbf{0}, \quad \forall y\in \D, \] which together with the pseudomonotonicity of $T$ yields \[ T(y)(\exp_{x_0}^{-1}y)\nprec \textbf{0}, \quad \forall y\in\D. \] Then, $x_0$ is a solution of the RMIVIP. \endproof \medskip It is observed that if a RIVF $f:\D\longrightarrow \I(\mathbb{R})$ is $gH$-Gâteaux differentiable at $x\in\D$, then $f_{G}(x)\in \mathbb{L}_{gH}(T_x\M, \I(\mathbb{R}))$. This suggests relationships between the RIOPs and the RIVIPs. \medskip \begin{theorem} Let $\D\subseteq\M$ be a nonempty set, $x_0\in \D$ and $f:\D\longrightarrow\I(\mathbb{R})$ be a $gH$-Gâteaux differentiable RIVF at $x_0$. If $x_0$ is a solution of the RIOP (\ref{RIOP}) and $f_{G}(x_0)(\exp_{x_0}^{-1}y) \neq [a, 0]$ for all $a<0$ and all $y\in\D$, then $x_0$ is a solution of the RSIVIP with $T(x_0)=f_{G}(x_0)$. \end{theorem} \beginproof Since $f$ is $gH$-Gâteaux differentiable at $x_0$, the function $f$ is $gH$-directional differentiable at $x_0$. In light of Theorem \ref{characterization-I}, it follows that \[ f_G(x_0)(\exp_{x_0}^{-1}y)\nprec \textbf{0}, \quad \forall y\in \D. \] Consider $T: \D\longrightarrow \mathbb{L}_{gH}(T\M, \I(\mathbb{R}))$ with $T(x_0)=f_G(x_0)$. It is clear that \[ T(x_0)(\exp_{x_0}^{-1}y)\nprec \textbf{0}, \quad \forall y\in \D, \] which says that $x_0$ is a solution of the RSIVIP with $T(x_0)=f_{G}(x_0)$. \endproof \medskip \begin{theorem} Let $\D\subseteq\M$ be a nonempty geodesically convex set, $x_0\in\D$ and $f:\D\longrightarrow\I(\mathbb{R})$ be a pseudoconvex, $gH$-Gâteaux differentiable RIVF at $x_0$. If $x_0$ is a solution of the RSIVIP with $T(x_0)=f_G(x_0)$, then $x_0$ is an efficient point of the RIOP (\ref{RIOP}). \end{theorem} \beginproof Let $x_0$ be a solution of the RSIVIP with $T(x_0)=f_G(x_0)$. Suppose that $x_0$ is not an efficient point of the RIOP (\ref{RIOP}). Then, there exists $y\in\D$ such that $f(y)\prec f(x_0)$.
From the pseudoconvexity of $f$, we have \[ f_G(x_0)(\exp_{x_0}^{-1}y)\prec \textbf{0}, \] which is a contradiction. Thus, $x_0$ is an efficient point of the RIOP (\ref{RIOP}). \endproof \medskip \section{Conclusions} In this paper, we study the Riemannian interval optimization problems (RIOPs) on Hadamard manifolds, for which we establish necessary and sufficient conditions for efficient points. Moreover, we introduce a new concept of $gH$-Gâteaux differentiability of Riemannian interval-valued functions (RIVFs), which generalizes the $gH$-Gâteaux differentiability of interval-valued functions (IVFs). The Riemannian interval variational inequality problems (RIVIPs), as well as their relationship with the RIOPs, are also investigated in this article. Some examples are presented to illustrate the main results. \\ In our opinion, the obtained results are building blocks for further investigations on Riemannian interval optimization, in particular when the underlying Riemannian manifolds are Hadamard manifolds. For future research, we may either study the theory for more general Riemannian manifolds or design suitable algorithms to solve the RIOPs. \medskip
{"config": "arxiv", "file": "2205.11793.tex"}
TITLE: Small Galois group solution to Fermat quintic QUESTION [5 upvotes]: I have been looking into the Fermat quintic equation $a^5+b^5+c^5+d^5=0$. To exclude the trivial cases (e.g. $c=-a, d=-b$), I will take $a+b+c+d$ to be nonzero for the rest of the question. It can be shown that if $a+b+c+d=0$, then the only solutions in the rational numbers (or even the real numbers) are trivial. I had the idea of representing the values $a, b, c, d$ as roots of a quartic polynomial, and then trying to force the Galois group to have low order. As the Fermat quintic is symmetric, it yields an equation in the coefficients of the quartic, which is satisfied by the quartics $5x^4-5x^3+5qx^2-5rx+5(q-1)(q-r)+1=0$, so clearly the most general case of Galois group $S_4$ is attainable. One could also fix one or two of the roots, obtaining a smaller Galois group. However, what I have yet to discover is a quartic in the family above which has a square discriminant, forcing the Galois group to be a subgroup of the alternating group $A_4$. Is there an example of such a quartic? Is there one with Galois group a proper subgroup of $A_4$? Better yet, are there infinite families of such quartics? REPLY [1 votes]: Here is a construction that appears to give a 2-parameter family of solutions that lie in cubic extensions of $\mathbb{Q}$. The construction uses some basic algebraic geometry. Consider the line $L=V(a+b,c+d)$ in the projective surface $S=V(a^5+b^5+c^5+d^5)$ in $\mathbb{P}^3$. For each point $p$ of $L$ consider the (projective) tangent plane $P_p=\overline{T_p(S)}$ to $S$ at $p$; this plane contains $L$. The intersection of $P_p$ and $S$ is the union of $L$ and a quartic curve $C_p$ in the projective plane $P_p\cong\mathbb{P}^2$, which passes through $p$. Projecting from this point gives a map $C_p\to\mathbb{P}^1$ of degree $3$.
Thus, choosing a point $t$ of $\mathbb{P}^1$ gives an equation of degree $3$ over $\mathbb{Q}$ whose solutions give points on the curve $C_p$ and hence on the surface $S$. The choices of $p$ and $t$ give two parameters for this system of equations. The analogous construction for $a^3+b^3=c^3+d^3$ can be found in an article in "Resonance".
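Incidentally, the quartic family in the question can be checked numerically: if $a,b,c,d$ are the roots of $5x^4-5x^3+5qx^2-5rx+5(q-1)(q-r)+1=0$, Newton's identities give $a^5+b^5+c^5+d^5=0$, while $a+b+c+d=1\neq 0$, so the non-triviality condition holds. A short script (the parameter values $q=2$, $r=3$ are arbitrary):

```python
import numpy as np

# Monic form of 5x^4 - 5x^3 + 5q x^2 - 5r x + 5(q-1)(q-r) + 1 = 0.
q, r = 2.0, 3.0
coeffs = [1.0, -1.0, q, -r, (q - 1) * (q - r) + 0.2]   # constant term e4 = (q-1)(q-r) + 1/5
roots = np.roots(coeffs)

p1 = roots.sum()          # = e1 = 1, so a + b + c + d != 0 (non-trivial case)
p5 = (roots ** 5).sum()   # sum of fifth powers, should vanish
print(p1, p5)             # p1 ~ 1, p5 ~ 0 up to rounding
```

The same check works for any choice of $q, r$, confirming that the whole family sits on the Fermat quintic.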
{"set_name": "stack_exchange", "score": 5, "question_id": 430627}
TITLE: Proving that the solution set of $ax+by+cz=0$ and $dx+ey+fz=0$ is a line in $\mathbb{R}^3$ QUESTION [2 upvotes]: I have a question here which says: "The solution set of: $$ax+by+cz=0$$ $$dx+ey+fz=0$$ is a line in $\mathbb{R}^3$" I am supposed to show that this is true or false. If it's true, I have to explain why, and if it's false, I have to explain why it's false. I don't think it's true. We COULD have a line if the planes intersected, but if $a,b,c$ are scalar multiples of $d,e,f$, then we would just have the same plane, so there would be infinitely many solutions. Is that the right thought process, or did I mess up my reasoning? REPLY [3 votes]: When absolutely nothing is provided about $a,b,c,d,e,f$, you may, for one, just take all of them to be equal to zero, and then the set of solutions is the whole of $\mathbb R^3$, which is not a line. Of course, a plane is also obtainable: it is obtained exactly when the ratios $d:a,e:b,f:c$ are the same (else, the planes would not be parallel and hence would intersect in a line passing through zero). Then, the equations $ax+by+cz = 0$ and $dx+ey+fz = 0$ describe the same plane. This is clearly seen for example with $a,b,c,d,e,f$ all equal to $1$, or say $a,b,c=1$, $d,e,f = 2$. So, obviously the statement given can be false. Note, however, that a solution must exist. This is because of the rank-nullity theorem, applied to the linear transformation $(x,y,z) \to (ax+by+cz,dx+ey+fz)$. This is a map from $\mathbb R^3$ to $\mathbb R^2$. It has rank at most $2$, therefore nullity at least $1$, i.e., the null space contains at least a line.
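The three cases in the answer (a line, a coincident plane, all of $\mathbb R^3$) can be made concrete with a rank-nullity count; the specific matrices below are arbitrary examples:

```python
import numpy as np

def solution_dim(M):
    """Dimension of the solution set of M x = 0 in R^3 (rank-nullity theorem)."""
    return 3 - np.linalg.matrix_rank(M)

generic      = np.array([[1., 2., 3.], [4., 5., 6.]])   # independent rows: a line
proportional = np.array([[1., 1., 1.], [2., 2., 2.]])   # same plane twice: a plane
all_zero     = np.zeros((2, 3))                         # zero system: all of R^3

print(solution_dim(generic), solution_dim(proportional), solution_dim(all_zero))
# 1 2 3
```

Since the rank of a $2\times 3$ matrix is at most $2$, `solution_dim` is always at least $1$: a solution line always exists, exactly as the answer argues.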
{"set_name": "stack_exchange", "score": 2, "question_id": 3019598}
\section{Disjoint Compact Sets in Hausdorff Space have Disjoint Neighborhoods} Tags: Hausdorff Spaces, Compact Spaces \begin{theorem} Let $T = \struct {S, \tau}$ be a [[Definition:Hausdorff Space|Hausdorff space]]. Let $V_1$ and $V_2$ be [[Definition:Compact Topological Space|compact sets]] in $T$. Then $V_1$ and $V_2$ have [[Definition:Disjoint Sets|disjoint]] [[Definition:Neighborhood of Set|neighborhoods]]. \end{theorem} \begin{proof} Let $\FF$ be the [[Definition:Set|set]] of all [[Definition:Ordered Pair|ordered pairs]] $\tuple {Z, W}$ such that: :$Z, W \in \tau$ :$V_1 \subseteq Z$ :$Z \cap W = \O$ By the [[Disjoint Compact Sets in Hausdorff Space have Disjoint Neighborhoods/Lemma|lemma]], $\Img \FF$ [[Definition:Cover of Set|covers]] $V_2$. {{explain|It is not clear what mapping or relation $\Img \FF$ refers to.}} By the definition of [[Definition:Compact Topological Space|compact space]], there exists a [[Definition:Finite Set|finite]] [[Definition:Subset|subset]] $K$ of $\Img \FF$ which also [[Definition:Finite Cover|covers]] $V_2$. By the definition of [[Definition:Topology|topology]], $\displaystyle \bigcup K$ is [[Definition:Open Set (Topology)|open]]. By the [[Principle of Finite Choice]], there exists a [[Definition:Bijection|bijection]] $\GG \subseteq \FF$ such that $\Img \GG = K$. {{explain}} Then $\GG$, and hence its [[Definition:Preimage of Mapping|preimage]], will be [[Definition:Finite Set|finite]]. Let $\displaystyle J = \bigcap \Preimg \GG$. By [[Intersection is Largest Subset]], $V_1 \subseteq J$. By the definition of a [[Definition:Topology|topology]], $J$ is [[Definition:Open Set (Topology)|open]]. Then $\displaystyle \bigcup K$ and $J$ are [[Definition:Disjoint Sets|disjoint]] [[Definition:Open Set (Topology)|open sets]] such that $\displaystyle V_2 \subseteq \bigcup K$ and $V_1 \subseteq J$. {{qed}} \end{proof}
{"config": "wiki", "file": "thm_2866.txt"}
\begin{document} \begin{center} {\Large \bf Decay Of Correlation For Expanding Toral Endomorphisms\footnote{This paper was published in Dynamical Systems, {\em Proceedings of the International Conference in Honor of Professor Liao Shantao, Peking University, China, 9 – 12 August 1998.} Eds. L. Wen and Y. P. Jiang, World Scientific, 1999. However, it is not reviewed in Mathscinet and it is not easy to find the book. So, as asked by colleagues, I put it on arXiv. Some incomplete references are updated.} }\\ \smallskip Ai-hua FAN \\ \smallskip Department of Mathematics, University of Picardie, 80039 Amiens, France \ E-mail: {\tt Ai-Hua.Fan@u-picardie.fr} \\ \medskip ({\it In memory of Professor Liao Shantao}) \end{center} \begin{abstract} Let $A$ be an expanding endomorphism on the torus ${\Bbb T}^d = {\Bbb R}^d /{\Bbb Z}^d$ with its smallest eigenvalue $\lambda >1$. Consider the ergodic system $({\Bbb T}^d, A, \mu)$ where $\mu$ is Haar measure. We prove that the correlation $\rho_{f, g}(n)$ of a pair of functions $f, g \in L^2(\mu)$ is controlled by the modulus of $L^2$-continuity $\Omega_{f, 2}(\lambda^{-n})$ and that the estimate is to some extent optimal. We also prove the central limit theorem for the stationary process $f(A^n x)$ defined by a function $f$ satisfying $\Sigma_n \Omega_{f,2}(\lambda^{-n}) <\infty$. An application is given to the Ulam-von Neumann system. \end{abstract} \section*{1. Introduction and Main Results} \setcounter{section}{1} \setcounter{equation}{0} There has been much interest in the study of rates of correlation decay for various kinds of systems \cite{FJ,FP,Liv1,You} (see the references therein). However, few rates for a given class of test functions are known to be optimal. We intend in this note to study the simple dynamics of expanding torus endomorphisms, and we shall see that optimal rates may be obtained in this case.
Another motivation is to better understand a method introduced in \cite{FJ,FP} to study decay of correlations in a general setting, which provides rather precise decay rates but which seems unable to recover the classical result that there is an exponential decay rate for H\"{o}lder functions. We consider the dynamical system $({\Bbb T}^d, A)$ where ${\Bbb T}^d = {\Bbb R}^d /{\Bbb Z}^d$ ($d\geq 1$) is the $d$-dimensional torus and $A$ is an endomorphism on ${\Bbb T}^d $. We suppose that $A$ is {\it expanding}, that is, all of its eigenvalues have absolute value strictly larger than $1$. A central theme of the ergodic theory for such a system is to consider the behavior of $A^n$ as $n \rightarrow \infty$. A natural way is to describe the behavior of $A^n$ through an invariant measure. We take the Haar-Lebesgue measure $\mu = dx$ on ${\Bbb T}^d$. The system $({\Bbb T}^d, A, \mu )$ is strongly mixing. That means the {\it correlation} $$ \rho(n) = \rho_{f,g}(n):= \int f \cdot g\circ A^n d\mu - \int f d\mu \cdot \int g d \mu $$ tends to zero, as $n \rightarrow \infty$, for any $f\in L^2(\mu)$ and any $g \in L^2(\mu)$ (called {\it test functions}). Our purpose is to study the rate of correlation decay for a given pair of test functions $f, g \in L^2(\mu)$. It is well known that if $g$ and $f$ are H\"{o}lder functions, the correlation decays exponentially fast. We shall show that for less regular functions, the correlations decay more slowly and that different kinds of decay rates are possible. In order to state our results, we recall here the modulus of continuity and the modulus of $L^r$-continuity ($1\leq r < \infty$) of a function $f$ defined on the torus: $$ \Omega_f(\delta) = \sup_{|x-y|\leq \delta} |f(x) - f(y)| $$ $$ \Omega_{f, r}(\delta) = \sup_{|v|\leq \delta} \|f(\cdot + v) - f(\cdot)\|_r $$ where $\|f\|_r$ denotes the norm of $f \in L^r(\mu)$. Formally, we may write $\Omega_f(\delta)= \Omega_{f, \infty}(\delta)$.
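To make the modulus of $L^2$-continuity concrete: for the sawtooth $f(x)=x-1/2$ on ${\Bbb T}^1$, a direct computation gives $\|f(\cdot+v)-f(\cdot)\|_2=\sqrt{v(1-v)}$ for $0\le v\le 1$ (the shift changes $f$ by $v$ except on a set of measure $v$, where the change is $v-1$), so $\Omega_{f,2}(\delta)=\sqrt{\delta(1-\delta)}$ for $\delta\le 1/2$. A quick numerical check of this closed form (a sketch, not part of the paper's formal development):

```python
import numpy as np

# Sawtooth f(x) = {x} - 1/2 on the circle T^1 = R/Z.
f = lambda x: (x % 1.0) - 0.5

def l2_shift_norm(v, m=200_000):
    """||f(. + v) - f(.)||_2 computed on a uniform grid (midpoint rule)."""
    x = (np.arange(m) + 0.5) / m
    return np.sqrt(np.mean((f(x + v) - f(x)) ** 2))

for delta in [0.5, 0.1, 0.01]:
    closed_form = np.sqrt(delta * (1 - delta))
    print(delta, l2_shift_norm(delta), closed_form)  # last two columns agree
```

Note $\Omega_{f,2}(\delta)\asymp\sqrt{\delta}$ even though $f$ has a jump, which is precisely the kind of "less regular" test function the theorems below are designed to handle.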
\begin{Thm} Let $A$ be an expanding integral matrix with the least eigenvalue (in absolute value) $\lambda>1$. For $f, g\in L^2(\mu)$, we have $$ |\rho_{f, g}(n)| \leq C \|g\|_2 \Omega_{f, 2}(\lambda^{-n}) \qquad (\forall n \geq 1) $$ where $C>0$ is a constant depending only on $A$. \end{Thm} This is actually a consequence of an estimate on a transfer operator that we state as the following theorem. Let $D$ ($\subset {\Bbb Z}^d$) be a set of representatives of cosets in ${\Bbb Z}^d/A {\Bbb Z}^d$ (one and only one for each coset). We call $D$ a set of {\it digits}. The cardinality of $D$ is equal to $q = |{\rm det} A|$. For any function $f$ on ${\Bbb T}^d$, define $$ {\mathcal L} f(x) = \frac{1}{q} \sum_{\gamma \in D} f\left(A^{-1}(x + \gamma ) \right). $$ The operator ${\mathcal L}$ is called a {\it transfer operator}. Note that ${\mathcal L}$ doesn't depend on the choice of $D$. \begin{Thm} Let $A$ be an expanding integral matrix with the least eigenvalue (in absolute value) $\lambda>1$. Suppose $\int f d\mu =0$. If $f \in L^r(\mu)$ ($1\leq r \leq \infty$), we have $$ \|{\mathcal L}^n f \|_r \leq C \Omega_{f,r}(\lambda^{-n}) \qquad (\forall n \geq 1). $$ \end{Thm} Exponential decays are obtained for H\"{o}lder continuous functions and functions of bounded variation (not necessarily continuous). However, few results are known to be optimal and little work has been carried out for less regular test functions. As far as we know, an optimal decay is obtained for some systems studied in \cite{Huh} and less regular test functions are discussed for expanding systems in \cite{FJ,FP,Pol}. The method introduced in \cite{FJ,FP} gives rather precise estimates on the correlations for functions having Dini continuity ($\int_0^1 \Omega_f(t)/t \, d t <\infty$). For the endomorphisms discussed here, it seems that the modulus of $L^2$-continuity is a good tool to describe the decay of correlation.
It gives a decay rate for {\it every} pair of test functions in $L^2$ and the decay rate is optimal to some extent, as we shall see by considering lacunary trigonometric series. Another consequence of the above theorem is the following CLT (Central Limit Theorem). \begin{Thm} Let $A$ be an expanding integral matrix and let $f\in L^2 ({\Bbb T}^d)$ with $\int f =0$. Suppose $$ \int_0^1 \frac{\Omega_{f,2}(t)}{t} d t < \infty. $$ Then $$ \frac{1}{\sqrt{n}} \sum_{j=0}^{n-1} f(A^j x) $$ converges in law to a Gaussian variable of zero mean and finite variance $$ \sigma^2 = -\int f^2(x) d x + 2 \sum_{n=0}^\infty \int f(x) f(A^n x) d x. $$ \end{Thm} In the one-dimensional case, M. Kac \cite{Kac} first proved the CLT for a function of the class $ {\rm Lip}_\alpha$ with $\alpha >1/2$. I.A. Ibragimov \cite{Ibr} weakened the ${\rm Lip}_\alpha$ condition to the requirement that the modulus of $L^r$-continuity $\Omega_{f, r}(\delta)$ of $f$, for some $r >2$, be of order $\delta^\beta$ ($\beta>0$). Theorem 3 above significantly improves these results. (It should be pointed out, though, that a convergence rate for the CLT was obtained in \cite{Ibr}.) For the higher dimensional case, there were no similar satisfactory results. \section*{2. Tiling} We refer to \cite{GM,LW} for the facts recalled here and for further information about tilings. Given a measurable set $T \subset {\Bbb R}^d$, we use $1_T$ to denote its characteristic function and $|T|$ to denote its Lebesgue measure. Given two measurable sets $T$ and $S$, the notation $T \simeq S$ means that $T$ and $S$ are equal up to a set of null Lebesgue measure. An endomorphism of the torus is represented by an integral matrix. Suppose $A$ is a $d \times d$ integral matrix which is {\it expanding}, that is, all of its eigenvalues $\lambda_i$ have $|\lambda_i|>1$. Denote $\lambda = \inf |\lambda_i|$ and $q = |{\rm det} A|$ ($q\geq 2$ and is an integer). Take a digit set $D$.
Recall that it consists of representatives of cosets in ${\Bbb Z}^d/A {\Bbb Z}^d$ (one and only one for each coset). For each $\gamma \in D$, define $S_\gamma : {\Bbb R}^d \rightarrow {\Bbb R}^d$ by $$ S_\gamma x = A^{-1} (x + \gamma). $$ As for hyperbolic iterated function systems, it can be proved that there exists a unique compact set $T$ having the self-affinity $$ T = \bigcup_{\gamma \in D} S_\gamma(T). $$ Actually, $|S_{\gamma'}(T) \bigcap S_{\gamma''}(T)|=0$ when $\gamma' \not= \gamma''$. Therefore the self-affinity implies \begin{equation} \sum_{k \in D} 1_T(Ax -k) = 1_T(x) \qquad {\rm a.e.} \end{equation} It is also known that the compact set $T$ has the tiling property \begin{equation} \sum_{k \in {\Bbb Z}^d} 1_T(x -k) = 1 \qquad \qquad {\rm a.e.} \end{equation} Since $T$ satisfies (1) and (2), we say it generates an {\em integral self-affine tiling}. The compact set $T$ is called a self-affine tiling {\it tile}. We have $|T|=1$. The tiling property allows us to identify ${\Bbb T}^d$ with $T$ up to a null measure set. The self-affinity allows us to decompose $T$ into $q$ disjoint (up to a null measure set) self-affine parts. If $A$ is a similarity, we say $T$ is a self-similar tiling tile. \section*{3. Proofs of Theorems} {\it Notation}: For $\gamma=(\gamma_1, \cdots, \gamma_n) \in D^n$, write $$ S_\gamma x = S_{\gamma_n}\circ \cdots \circ S_{\gamma_2}\circ S_{\gamma_1}x, \qquad T_\gamma = S_\gamma(T). $$ Clearly $$ S_\gamma x = A^{-n}x +A^{-n}\gamma_1 +\cdots + A^{-2}\gamma_{n-1} + A^{-1} \gamma_n. $$ Denoting $b_\gamma =S_\gamma 0$, we get $$ {\mathcal L}^n f (x) = \frac{1}{q^n } \sum_{\gamma \in D^n} f\left( A^{-n}x + b_\gamma \right). $$ {\it Proof of Theorem 2}\ \ For $\gamma \in D^n$, write $$ f_\gamma = \frac{1}{|T_\gamma|}\int_{T_\gamma} f(x) d x. $$ Note that $|T_\gamma| = q^{-n} |T|= q^{-n}$ and that $$ \sum_{\gamma \in D^n} f_\gamma = q^{n} \int_T f = q^n \int_{{\Bbb T}^d} f=0.
$$ Then \begin{eqnarray*} {\mathcal L}^n f (x) & = & \frac{1}{q^n} \sum_{\gamma \in D^n} f\left( A^{-n} x + b_\gamma \right)\\ & = & \frac{1}{q^n} \sum_{\gamma \in D^n} \left[ f\left( A^{-n} x + b_\gamma \right) - f_\gamma \right]. \end{eqnarray*} Since $T_\gamma = A^{-n}T +b_\gamma$, we have immediately $$ |{\mathcal L}^n f (x)| \leq \frac{1}{q^n} \sum_{\gamma \in D^n} \Omega_f({\rm diam}\ A^{-n}T ) \leq C \Omega_f(\lambda^{-n}) $$ where ${\rm diam } B$ denotes the diameter of a set $B$. We used the fact that ${\rm diam} A^{-n} T \leq a \lambda^{-n}$ for some $a>0$ and the fact that $\Omega_f(2\delta) \leq 2 \Omega_f(\delta)$. The estimate on $\|{\mathcal L}^n f\|_\infty$ is thus proved. For the estimate on $\|{\mathcal L}^n f\|_r$, we first use the H\"{o}lder inequality to get $$ |{\mathcal L}^n f (x)|^r \leq \frac{1}{q^n} \sum_{\gamma \in D^n} \left| f\left(A^{-n} x + b_\gamma \right) - f_\gamma \right|^r . $$ By making the change of variables $ y = A^{-n} x + b_\gamma$, we get $$ \frac{1}{q^n} \int_T \left| f\left(A^{-n} x + b_\gamma \right) - f_\gamma \right|^r dx = \int_{T_\gamma} | f(y) - f_\gamma|^r d y. $$ Then $$ \|{\mathcal L}^n f\|_r^r \leq \sum_{\gamma \in D^n} \int_{T_\gamma} | f(y) - f_\gamma|^r d y. 
$$ However \begin{eqnarray*} \int_{T_\gamma} | f(y) - f_\gamma|^r d y & = & \int_{T_\gamma} \left| \int_{T_\gamma} \left(f(y) - f(x)\right) \frac{dx}{|T_\gamma|} \right|^r d y \\ &\leq & \int_{T_\gamma} dy \int_{T_\gamma} |f(y) - f(x)|^r \frac{d x}{|T_\gamma|}\\ & \leq & \ q^n \int_{T_\gamma} dy \int 1_{ T_\gamma - T_\gamma}(y-x) |f(y) - f(x)|^r dx\\ & = & \ q^n \int_{T_\gamma} dy \int 1_{ A^{-n} (T-T)}(y-x) |f(y) - f(x)|^r dx \end{eqnarray*} Then, if we make a change of variables $u = y-x$ and $v = x$, we get \begin{eqnarray*} \|{\mathcal L}^n f\|_r^r & \leq & \ q^n \int_T dy \int 1_{ A^{-n} (T-T)}(y-x) |f(y) - f(x)|^r dx \\ & \leq & q^n \int_T dv \int_{ A^{-n} (T-T)} |f(u+v) - f(v)|^r du\\ & = & \ q^n \int_{ A^{-n} (T-T)} du \int_T |f(u+v) - f(v)|^r dv\\ & \leq & \ q^n | A^{-n} (T-T)| \cdot \left[ \Omega_{f, r}({\rm diam} A^{-n}(T - T)) \right]^r. \end{eqnarray*} Since $|T-T| \leq C_1 |T|$ for some sufficiently large constant $C_1$ (which may depend on the digit set $D$), and since $|A^{-n}S| = q^{-n}|S|$ for any measurable set $S$, we have $$ |A^{-n} (T-T)| \leq C_1 q^{-n}. $$ There is another constant $C_2$ such that $$ {\rm diam} A^{-n}(T-T) \leq C_2 \lambda^{-n}. $$ Therefore for some constant $C_3 >0$, we have $$ \|{\mathcal L}^n f\|_r^r \leq C_3 \left( \Omega_{f, r} (\lambda^{-n}) \right)^r. $$ $\Box$ \bigskip {\it Proof of Theorem 1}\ \ Assume $\int f =0$, without loss of generality. Consider the operator $V: L^2 \rightarrow L^2$ defined by $V f = f \circ A$. It can be verified that ${\mathcal L}$ is just the adjoint operator of $V$. Therefore $$ | \rho_{f, g}(n)| = \left|\int g {\mathcal L}^n f \right| \leq \|g\|_2 \|{\mathcal L}^n f\|_2. $$ Then Theorem 1 follows from Theorem 2. $\Box$ \bigskip {\it Proof of Theorem 3}\ \ By Theorem 1.1 in \cite{Liv2}, it suffices to verify $$ \sum_{n=0}^\infty \left| \int f \cdot f \circ A^n \right|< \infty, \qquad \sum_{n=0}^\infty \int |{\mathcal L}^n f | < \infty. $$ However, both sums are finite whenever $\sum_{n=0}^\infty \Omega_{f, 2}(\lambda^{-n}) <\infty$.
This last condition is equivalent to $\int_0^1 \frac{ \Omega_{f, 2}(s)}{s} ds <\infty$. $\Box$ \section*{4. Transfer operator, Fourier series and Modulus of continuity} \begin{Prop} For $f \in L^1$, we have $$ {\mathcal L}^n f(x) = \sum_{k \in {\Bbb Z}^d} \hat{f}(A^{*n} k) e^{2 \pi i \langle k, x\rangle } \qquad (\forall n \geq 1). $$ \end{Prop} {\it Proof}\ \ It suffices to prove the expression for $n=1$. Take a digit set $D^*$ representing ${\Bbb Z}^d /A^*{\Bbb Z}^d $. Assume that $0 \in D^*$. Write $$ f(x) = \sum_{k \in {\Bbb Z}^d} \hat{f}(k) e^{2 \pi i \langle k, x\rangle} = \sum_{k \in {\Bbb Z}^d} \sum_{\beta \in D^*} \hat{f}(A^*k+\beta) e^{2 \pi i \langle A^*k+ \beta, x\rangle}. $$ We have $$ f(A^{-1}(x+\gamma)) = \sum_{k \in {\Bbb Z}^d} \sum_{\beta \in D^*} \hat{f}(A^* k+ \beta) e^{2 \pi i \langle A^* k+\beta,\ A^{-1}(x+\gamma)\rangle} $$ then \begin{eqnarray*} {\mathcal L} f(x) & = & \frac{1}{q} \sum_{k \in {\Bbb Z}^d} \sum_{\beta \in D^*} \sum_{\gamma \in D} \hat{f}(A^* k+ \beta) e^{2 \pi i \langle k+ A^{*-1}\beta, \ x+\gamma\rangle } \\ & = & \frac{1}{q} \sum_{k \in {\Bbb Z}^d} e^{2 \pi i \langle k,\ x\rangle} \sum_{\beta \in D^*} \hat{f}(A^* k+ \beta) e^{2 \pi i \langle A^{*-1}\beta,\ x\rangle} \sum_{\gamma \in D} e^{2 \pi i \langle A^{*-1}\beta, \ \gamma\rangle } \end{eqnarray*} So, in order to get $$ {\mathcal L} f(x) = \sum_{k \in {\Bbb Z}^d} \hat{f}(A^{*} k) e^{2 \pi i \langle k, x\rangle } $$ It suffices to note that $$ \frac{1}{q} \sum_{\gamma \in D} e^{2 \pi i \langle A^{*-1}\beta,\ \gamma\rangle} = \left\{ \begin{array}{ll} 1 & \quad {\rm if} \ \ \beta =0 \\ 0 & \quad {\rm if} \ \ \beta \not=0. \end{array} \right. $$ In fact, suppose the above sum is not zero. We have only to show that $\beta =0$. 
Since the group ${\Bbb Z}^d/A{\Bbb Z}^d$, represented by $D$, is a product of cyclic groups and $m^{-1}\sum_{j=0}^{m-1} e^{2 \pi i j x} = 1 $ or $0$ for a real number $x$ according to whether $x\in {\Bbb Z}$ or not, $\langle A^{*-1}\beta,\ \gamma'\rangle$ must be an integer for any cyclic group generator $\gamma'$. Then $\langle A^{*-1}\beta,\ \gamma\rangle$ is an integer for any $\gamma \in D$. Let $z \in {\Bbb Z}^d$. Write $z = \gamma + A k$ with $\gamma \in D$ and $k \in {\Bbb Z}^d$. Then $$ \langle A^{*-1}\beta,\ z\rangle = \langle A^{*-1}\beta,\ \gamma\rangle + \langle \beta,\ k \rangle = 0 \quad{\rm (mod \ {\Bbb Z})}. $$ It follows that $\beta = 0$ (mod $A^*{\Bbb Z}^d$). So $\beta = 0$. $\Box$ \bigskip {\it Notation}: "$a_n \approx b_n$" means there are constants $C_1>0, C_2>0$ such that $C_1 a_n \leq b_n \leq C_2 a_n$ for all $n \geq 1$. \bigskip From Theorem 2 and Proposition 1, we get $$ \|{\mathcal L}^n f\|_2 \approx \sqrt{\sum_{k \in {\Bbb Z}^d} |\widehat{f}(A^{*n} k)|^2 } \leq C \Omega_{f, 2}(\lambda^{-n}). $$ $$ \|{\mathcal L}^n f\|_\infty \leq \sum_{k \in {\Bbb Z}^d} |\widehat{f}(A^{*n} k)|, \qquad \|{\mathcal L}^n f\|_\infty \leq C \Omega_{f}(\lambda^{-n}). $$ These inequalities may become "$\approx$" for some functions $f$. Let us consider the class of functions defined by lacunary trigonometric series $$ H (x)= \sum_{k=1}^\infty a_k e^{ 2 \pi i \langle A^{*k} h, \ x \rangle} \qquad ( h \in {\Bbb Z}^d \setminus \{0\}). $$ Since $\{ A^{*k} h\}_{k\geq 1}$ is a Sidon set \cite{Kah}, we have $\|{\mathcal L}^n H\|_r \approx \|{\mathcal L}^n H\|_2 $ ($\forall 1\leq r <\infty$) and \begin{equation} \|{\mathcal L}^n H \|_\infty \approx \sum_{k=n+1}^\infty |a_{k}| \leq C \Omega_H(\lambda^{-n}) \end{equation} \begin{equation} \|{\mathcal L}^n H \|_2 \approx \sqrt{ \sum_{k=n+1}^\infty |a_{k}|^2} \leq C \Omega_{H, 2} (\lambda^{-n}). \end{equation} Now let us estimate the modulus of continuity of $H$ by its Fourier coefficients.
The following proposition is immediate, just because $$ |H(x+ y) -H(x)| \leq \sum_{k=1}^n |a_k| \left| e^{ 2 \pi i \langle A^{*k} h, \ y \rangle} - 1 \right| + 2\sum_{k=n+1}^\infty |a_k| $$ $$ \int |H(x+ y) -H(x)|^2 d x \leq \sum_{k=1}^n |a_k|^2 \left| e^{ 2 \pi i \langle A^{*k} h, \ y \rangle} - 1 \right|^2 + 4\sum_{k=n+1}^\infty |a_k|^2. $$ \begin{Prop} Let $A$ be an expanding integral similarity matrix with spectral radius $\lambda >1$. Let $H$ be the function defined by the above lacunary trigonometric series. We have $$ \Omega_H(\lambda^{-n}) \leq C \left( \lambda^{-n}\sum_{k=1}^n |a_k|\lambda^k + \sum_{k=n+1}^\infty |a_k| \right) $$ $$ \Omega_{H,2}(\lambda^{-n})^2 \leq C \left( \lambda^{-2 n}\sum_{k=1}^n |a_k|^2\lambda^{2k} + \sum_{k=n+1}^\infty |a_k|^2 \right) $$ where $C>0$ is a constant. \end{Prop} If $a_n = \frac{1}{n^\alpha}$ with $\alpha >1$, then $$ \|{\mathcal L}^n H\|_\infty \approx \Omega_H (\lambda^{-n}) \approx \frac{1}{n^{\alpha -1}} $$ $$ \|{\mathcal L}^n H\|_2 \approx \Omega_{H, 2} (\lambda^{-n}) \approx \frac{1}{n^{\alpha -1/2}}. $$ If $a_n = \frac{1}{n \log^\beta n}$ with $\beta >1$, then $$ \|{\mathcal L}^n H\|_\infty \approx \Omega_H (\lambda^{-n}) \approx \frac{1}{(\log n)^{\beta -1}} $$ $$ \|{\mathcal L}^n H\|_2 \approx \Omega_{H, 2} (\lambda^{-n}) \approx \frac{1}{ \sqrt{n}\, (\log n)^{\beta}}. $$ If $a_n = \theta^n$ with $\frac{1}{\lambda}< \theta <1$, then $$ \|{\mathcal L}^n H\|_\infty \approx \|{\mathcal L}^n H\|_2 \approx \Omega_H(\lambda^{-n}) \approx \Omega_{H,2}(\lambda^{-n}) \approx \theta^n. $$ In fact, we have only to use (3) and (4) for getting lower bounds and to use Proposition 2 for getting upper bounds.
When $a_n = \frac{1}{n^\alpha}$, it suffices to remark that (for any $\lambda >1$ and any $\alpha >1$) $$ \sum_{k=1}^n \frac{\lambda^k}{k^\alpha} = \sum_{k=1}^{[n/2]} \frac{\lambda^k}{k^\alpha} + \sum_{k=[n/2] +1}^n \frac{\lambda^k}{k^\alpha} \leq C\left( \lambda^{n/2} + \frac{\lambda^n}{n^\alpha}\right) $$ $$ \sum_{k=n+1}^\infty \frac{1}{k^\alpha} \approx \frac{1}{n^{\alpha-1}} $$ where $[n/2]$ denotes the integral part of $n/2$. In the same way, we treat the case $a_n = \frac{1}{n \log^\beta n}$. The case $a_n = \theta^n$ is simpler. More generally, suppose $|a_n|$ is a decreasing sequence such that $$ \limsup_{n \rightarrow \infty} \frac{|a_{[\delta n]}|}{|a_n|}< \infty, \qquad \limsup_{n \rightarrow \infty} \frac{\lambda^{-(1-\delta)n}}{|a_n|}< \infty $$ for some $0<\delta<1$. Then we have $$ \Omega_H(\lambda^{-n}) \approx \sum_{k= n+1}^\infty |a_k|, \qquad \Omega_{H,2}(\lambda^{-n}) \approx \sqrt{ \sum_{k= n+1}^\infty |a_k|^2 }. $$ \bigskip Let us finish our discussion by making some remarks:\\ 1. In all three cases of $H$ discussed above, the estimate provided by Theorem 2 is optimal. Let us point out that $H$ belongs to the class of functions with $\Omega_f(\delta) =O(1/|\log \delta|^{\alpha-1})$ when $a_n= \frac{1}{n^\alpha}$; $H$ belongs to the class of functions with $\Omega_f(\delta) =O(1/(\log |\log \delta|)^{\beta-1})$ when $a_n= \frac{1}{n \log^\beta n}$; $H$ belongs to the class of functions with $\Omega_f(\delta) =O(\delta^{ \frac{\log \theta}{\log \lambda} })$ when $a_n= \theta^n$. 2. For every $f \in L^r$ ($1\leq r <\infty$), $\|f(\cdot +y) -f(\cdot)\|_r$ is continuous as a function of $y$. It follows that $\lim_{\delta \rightarrow 0} \Omega_{f, r}(\delta)=0$. Then, by Theorem 2, $\|{\mathcal L}^n f - \int f \|_r$ tends to zero for any $f \in L^r$. 3. Let $0<\delta_n <1$ be an arbitrary decreasing sequence tending to zero. There is a function $H$ such that $\|{\mathcal L}^n H\|_\infty \approx \delta_n$.
In fact, it suffices to take the function $H$ defined by $a_n = \delta_{n-1}-\delta_n$ (with $\delta_0=1$). Also, if we take the function $H$ defined by $a_n = \sqrt{ \delta_{n-1}^2 - \delta_n^2 }$, we have $\|{\mathcal L}^n H\|_2 \approx \delta_n$. 4. The function $H$ defined above with $a_n = \frac{1}{n^\alpha}$ ($1<\alpha\leq 2$) or $a_n = \frac{1}{n \log^\beta n}$ ($\beta >1$) is a continuous function but not of summable variation. However, $\|{\mathcal L}^n H \|_\infty$ tends to zero with a precisely known convergence speed. Such a situation was not seen before. \section*{5. Ulam-von Neumann map} The map $Uy = 1 -2y^2$ from $I= [-1, 1]$ into itself was studied by Ulam and von Neumann \cite{UV}. This Ulam-von Neumann map $U$ is conjugate to the tent map $Tx = 1 - 2|x|$. More precisely, we have $U \circ h = h \circ T$ where $h$ is the conjugacy defined by $h(x) = \sin \frac{\pi}{2} x$. The Lebesgue measure $dx$ (normalized so that $I$ has measure $1$) is $T$-invariant and its image under $h$, $d \mu = \frac{1}{\pi} \frac{dy}{\sqrt{1-y^2}}$, is $U$-invariant. Consider now the system $(I, U, \mu)$. For this system, the transfer operator is defined by $$ {\mathcal L}_U f(y) = \sum_{z \in U^{-1}y} \frac{f(z)}{|U'(z)|}. $$ \begin{Thm} For any $f \in {\rm Lip}_\alpha$ ($0<\alpha<1$) with $\int f d\mu =0$, we have $\|{\mathcal L}_U^n f\|_2 \leq C 2^{-\alpha n}$. For $g(y) = \log |y| + \log 2$, we have $\int g d \mu =0$ and $\|{\mathcal L}_U^n g\|_2 \approx 2^{- n}$. \end{Thm} {\it Proof}\ \ First observe that Theorem 2 remains true for the system $(I, T, dx)$ because $I$ can be decomposed into $I= S_0(I) \bigcup S_1(I)$, where $S_0 x = \frac{x-1}{2}, S_1 x = \frac{1-x}{2}$ are inverses of $T$. Let ${\mathcal L}_T$ be the transfer operator associated to $T$, which is defined in the same way as ${\mathcal L}_U$.
By using the fact that ${\mathcal L}_U$ is the adjoint operator of $f \rightarrow f\circ U$ acting on $L^2(\mu)$ and the similar fact about ${\mathcal L}_T$, we get the relation between ${\mathcal L}_U$ and ${\mathcal L}_T$: for any $g, f \in L^2(\mu)$ \begin{eqnarray*} & & \int g \cdot {\mathcal L}_U^n f d \mu = \int g \circ U^n \cdot f d \mu \\ & = & \int g \circ U^n \circ h \cdot f \circ h dx = \int g \circ h \circ T^n \cdot f \circ h dx \\ & = & \int g \circ h \cdot {\mathcal L}_T^n (f \circ h) dx = \int g \cdot {\mathcal L}_T^n (f \circ h)\circ h^{-1} d\mu. \end{eqnarray*} It follows that $\|{\mathcal L}_U^n f\|_2 = \|{\mathcal L}_T^n (f\circ h)\|_2$. In fact, \begin{eqnarray*} & & \int {\mathcal L}_U^n f \cdot \overline{ {\mathcal L}_U^n f } d \mu = \int {\mathcal L}_U^n f \cdot {\mathcal L}_T^n (\overline{f}\circ h)\circ h^{-1} d \mu \\ & = &\int {\mathcal L}_T^n (f \circ h) \circ h^{-1} \cdot {\mathcal L}_T^n (\overline{f}\circ h)\circ h^{-1} d \mu \\ & = & \int {\mathcal L}_T^n (f \circ h) \cdot {\mathcal L}_T^n (\overline{f}\circ h) d x. \end{eqnarray*} So, we have only to estimate $\|{\mathcal L}_T^n(f \circ h)\|_2$. To this end, apply Theorem 2 ( its variant mentioned above). We are then led to estimate the modulus of continuity of $f\circ h$. Since $h$ is Lipschitz, we have $\Omega_{f\circ h, 2}(\delta)\leq \Omega_{f}(C' \delta)\leq C \Omega_{f}( \delta)$. However, $\log |y|$ is neither continuous nor of bounded variation. But it is in $L^2(\mu)$, equivalently $\log |h(x)| \in L^2(dx)$. Note that $$ \log |h(x)| =\log |\sin \frac{\pi}{2} x| = - \log 2 - \sum_{n=1}^\infty \frac{\cos \pi n x}{n}. $$ It follows that $\int \log |y| d \mu = \int \log |\sin \frac{\pi}{2} x| dx = -\log 2$. From the above series and Proposition 1, we get $\|{\mathcal L}^n_U g\|_2 = \|{\mathcal L}^n_T g\circ h\|_2 \approx 2^{-n}$. $\Box$ \bigskip Remark that we can also get $\Omega_{g\circ h, 2}(\delta)\leq C \sqrt{\delta}$. 
In fact, for any $u \not = 0$, take $N$ to be the integral part of $1/|u|$. According to Parseval's equality, we have \begin{eqnarray*} & & \int [ \log |h (x+u)|- \log |h(x)| ]^2 d x \\ &= &2 \sum_{n=1}^\infty \frac{\sin^2 \frac{\pi}{2} n u}{n^2} \leq \frac{\pi^2}{2}N |u|^2 + 2 \sum_{n=N +1}^\infty \frac{1}{n^2} \leq C |u|. \end{eqnarray*} \bigskip It is known that the Liapunov exponent of $(I, U, \mu)$ is equal to $$ \int \log |U'(y)| d\mu(y) = \log 2. $$ This means that for $\mu$-almost all points $y$, $\frac{1}{n} \log |(U^n)'(y)|$ converges to $\log 2$. As a consequence of the last theorem, we get that $\frac{1}{\sqrt{n}} [\log |(U^n)'(y)| - n \log 2]$ converges in law to a centered Gaussian variable (following the arguments in the proof of Theorem 3). \bigskip The above discussion on the Ulam-von Neumann map reveals the possibility of reducing the study of a general system to that of an endomorphism on the torus or a system like the tent map. This is the case when there is a conjugacy between the two systems and when the conjugacy has some smoothness so that the modulus of continuity of $f \circ h$ is small. \bigskip {\it Acknowledgement} \ \ The author would like to thank Yunping JIANG for his valuable discussions and Oliver JENKINSON for his careful reading of an earlier version of the paper. \bigskip
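As a quick numerical illustration of Proposition 1 above, consider the simplest one-dimensional setting (chosen here for concreteness, not taken from the paper): $d=1$, $A=(2)$, digit set $D=\{0,1\}$, for which the transfer operator becomes ${\mathcal L}f(x) = \frac{1}{2}\bigl(f(x/2)+f((x+1)/2)\bigr)$ and the proposition predicts that ${\mathcal L}$ sends the character $e_k(x)=e^{2\pi i k x}$ to $e_{k/2}$ when $k$ is even and to $0$ when $k$ is odd.

```python
import cmath

# Characters of the circle: e_k(x) = exp(2*pi*i*k*x).
def e(k, x):
    return cmath.exp(2j * cmath.pi * k * x)

# Transfer operator of the doubling map x -> 2x (mod 1), digits D = {0, 1}.
def L(f, x):
    return (f(x / 2) + f((x + 1) / 2)) / 2

xs = [j / 17 for j in range(17)]
for k in [4, 6]:                       # even frequencies: L e_k = e_{k//2}
    assert all(abs(L(lambda x: e(k, x), x) - e(k // 2, x)) < 1e-12 for x in xs)
for k in [3, 5]:                       # odd frequencies are annihilated
    assert all(abs(L(lambda x: e(k, x), x)) < 1e-12 for x in xs)
```

This matches the statement ${\mathcal L}f(x)=\sum_k \hat f(A^*k)e^{2\pi i kx}$: only Fourier modes on the sublattice $A^*{\Bbb Z}^d$ survive.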
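The Fourier expansion of $\log|\sin\frac{\pi}{2}x|$ used in the proof of Theorem 4 can also be checked numerically; the sample point and the truncation level below are arbitrary choices (the partial sums converge slowly, so many terms are taken).

```python
import math

# Check  log|sin(pi*x/2)| = -log(2) - sum_{n>=1} cos(pi*n*x)/n  at x = 0.37.
x = 0.37
partial = -math.log(2) - sum(math.cos(math.pi * n * x) / n
                             for n in range(1, 200001))
exact = math.log(abs(math.sin(math.pi * x / 2)))
assert abs(partial - exact) < 1e-4
```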
{"config": "arxiv", "file": "1511.06868.tex"}
TITLE: Proving reasoning by cases. $P(X \mid Y) = \sum_{z} P(X,z \mid Y)$ QUESTION [1 upvotes]: I am trying to prove the following statement: Let $X, Y , Z$ be random variables, then $P(X \mid Y) = \sum_{z} P(X,z \mid Y)$. I have a sketch of the proof but I do not know if it is correct: $$\sum_{z} P(X,z \mid Y) = \sum_{z} \frac{P(X,Y,z)}{P(Y)} =$$ $$\frac{\sum_{z}P(X,Y,z)}{P(Y)}$$ Then by the fact that the joint probability distribution must be consistent with the marginal probability: $$\sum_{z}P(X,Y,z) = P(X,Y) $$ Completing the proof. REPLY [2 votes]: Your proof is essentially correct except $X,Y,z$ are not events; they are two random variables and a value. $X=x, Y=y, Z=z$ are events. For discrete random variables $X,Y,Z$, we use the Law of Total Probability to state: $$\begin{align} \mathsf P(X {=} x \mid Y{=}y) \; & =\; \mathsf P(X{=}x,Y{=}y)\,/\,\mathsf P(Y{=}y) & \textsf{iff } \mathsf P(Y{=}y)\neq 0 \\[1ex] & = \; \sum\limits_{z\in \mathcal Z}\mathsf P(X {=} x, Z{=}z, Y{=}y)\,/\,\mathsf P(Y{=}y) &\textrm{where }\mathcal Z\textrm{ is the support of }Z \\[2ex]\therefore \mathsf P(X {=} x \mid Y{=}y) & =\; \sum\limits_{z\in \mathcal Z}\mathsf P(X {=} x, Z{=}z\mid Y{=}y) & \Box \end{align}$$
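The identity can also be sanity-checked numerically on a small discrete example; the joint table below is an arbitrary random one, used only for illustration.

```python
import itertools, random

# Random joint pmf of three binary variables (X, Y, Z).
random.seed(0)
raw = {t: random.random() for t in itertools.product([0, 1], repeat=3)}
total = sum(raw.values())
P = {t: v / total for t, v in raw.items()}

def P_Y(y):
    # Marginal P(Y = y).
    return sum(P[(x, y, z)] for x in (0, 1) for z in (0, 1))

for x in (0, 1):
    for y in (0, 1):
        lhs = sum(P[(x, y, z)] for z in (0, 1)) / P_Y(y)   # P(X=x | Y=y)
        rhs = sum(P[(x, y, z)] / P_Y(y) for z in (0, 1))   # sum_z P(X=x, Z=z | Y=y)
        assert abs(lhs - rhs) < 1e-12
```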
{"set_name": "stack_exchange", "score": 1, "question_id": 1577732}
TITLE: Minimal number of "words" that contain all possible pairs of letters in all position pairs QUESTION [7 upvotes]: Defining a word as a sequence of ordered letters ($1$..$q$ letters) of length $L$, what is the minimal number of words such that among the entire list of words, at every pair of positions, I can find any two letters? For example, for $q=3$ and $L=2$ here is the minimal list: $$1 1, 2 2, 3 3, 1 2, 2 3, 3 1, 1 3, 2 1, 3 2,$$ so in total $q^2$ words are needed. But for $L=3$ the minimal number is still $q^2$, obtained by: $$1 1 1, 2 2 1, 3 3 1, 1 2 2, 2 3 2, 3 1 2, 1 3 3, 2 1 3, 3 2 3,$$ for $L=4$ the number is different... What is the minimal number of words for $(q,L)$ and specifically, what is the asymptotic value for $L\gg 1$? Thanks for your answers! REPLY [2 votes]: First, let me elaborate the upper bound from @kodlu’s answer. If $q$ is a power of a prime, then $q^2$ words suffice for $L=q+1$ (for $L=q$ it is almost trivial, improving by $1$ needs just a bit more). Then, doubling $L$ increases the number of words by $q(q-1)$, so for those values of $L$ it suffices to take $q(q-1)\log_2\frac L{q+1}+q^2$ words. Let me show a somewhat close lower bound. Let $w$ be the number of words; set $k=w-q(q-1)+1$. Take any $k$ words. Let $v_i$ be a vector composed from all $i$th entries of the $k$ words. If two of those vectors, say $v_i$ and $v_j$, coincide, this means that at most $w-k=q(q-1)-1$ words differ in positions $i$ and $j$, so not all pairs are covered. Thus, we have $L$ distinct vectors in $[q]^k$, so $L\leq q^k$, or $w\geq q(q-1)+1+\log_qL$. Therefore, the growth rate is indeed logarithmic (but the constant at the logarithm is yet unclear). Addendum. Let me present an example for $L=q+1$, when $q$ is a power of a prime. Consider an affine plane $\mathbb F_q^2$. All lines in it are partitioned into $q+1$ classes $C_1,\dots,C_{q+1}$ of mutually parallel lines (one class consists of all lines with equations of the form $ax+by=c$ with fixed $(a:b)$).
Enumerate the lines in each class by numbers from 1 to $q$. For each point $p\in \mathbb F_q^2$, take a word $w_1\dots w_{q+1}$ where $w_i$ is the number of the line in $C_i$ passing through $p$. Then, for any two classes $C_i$ and $C_j$ and for any two lines in them, the lines meet at a unique point, which means exactly what we need to get in the $i$th and $j$th positions.
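The affine-plane construction above can be sketched in code for $q=3$ (a prime, taken for illustration); lines are numbered $0,\dots,q-1$ rather than $1,\dots,q$, which changes nothing.

```python
import itertools

q = 3
# Word for the point (a, b) of F_q^2: for each slope m, the line through
# (a, b) in the class {y = m*x + c} has number c = b - m*a (mod q);
# the last coordinate numbers the vertical line x = a.
words = [tuple((b - m * a) % q for m in range(q)) + (a,)
         for a in range(q) for b in range(q)]

assert len(words) == q * q            # q^2 words of length L = q + 1
for i, j in itertools.combinations(range(q + 1), 2):
    pairs = {(w[i], w[j]) for w in words}
    assert len(pairs) == q * q        # every letter pair occurs at (i, j)
```

Since two non-parallel lines meet in exactly one point, each of the $q^2$ letter pairs occurs exactly once at every pair of positions, so $q^2$ words (the trivial lower bound) suffice.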
{"set_name": "stack_exchange", "score": 7, "question_id": 358349}
TITLE: Poles of the L-series of an elliptic curve with CM QUESTION [1 upvotes]: Let $E/\mathbb{Q}$ be an elliptic curve with CM. It is well known that the L-series of an elliptic curve with CM is a Hecke L-series. On the other hand, it is pointed out that Hecke L-series have no poles. Is it right to consider that the L-series of $E/\mathbb{Q}$ has no pole? As usual, let the local part be $L_p(T) = 1 - a_pT + pT^2$ when the curve has good reduction at $p$. Setting $T = p^{-s}$, we will think of $L_p(s) = L_p(p^{-s})$. Let $L(E/\mathbb{Q}, s) = \Pi_pL_p(s)^{-1}$. Suppose that $L(E/\mathbb{Q}, s)$ has no pole. Fix a prime number $p_0$. If $L_{p_0}(s_0) = 0$ then it follows that $L_p(s_0) = 0$ for infinitely many primes $p$, because $\Pi_pL_p(s)^{-1}$ is absolutely convergent at the complex number $s = s_0$. Is this right? Cordially, M. Shimoinuda. REPLY [4 votes]: It is a general fact that if $E$ is any elliptic curve over $\mathbf{Q}$ (CM or otherwise) its Hasse-Weil $L$-series has no poles, at least if you multiply by the appropriate $\Gamma$ factor. (This follows from work of Hecke and Deuring for CM curves, and for non-CM curves it is a much deeper theorem relying on the fact that all elliptic curves over $\mathbf{Q}$ are modular.) But I think your argument regarding zeros of the local $L$-factors does not work, because although the $L$-function is defined and holomorphic for all $s$, that does not mean that the Euler product expansion converges for all $s$. The domain of holomorphy of the function isn't necessarily the same as the domain of convergence of the Euler product.
{"set_name": "stack_exchange", "score": 1, "question_id": 140001}
TITLE: Asymptotics of number of possibilities that $V$ dice rolls add up to $n V$ for $V\to\infty$ QUESTION [3 upvotes]: If we roll an $(s+1)$-sided die $V$ times (assuming the sides are labelled by $0$, $1$ up to $s$) the number of possibilities to get exactly $N$ is well-known to be given by (see derivation here) \begin{align} d(V,N,s)=\sum_j (-1)^j\binom{V}{j}\binom{V+N-j(s+1)-1}{V-1}\,. \end{align} We also know that the resulting probability distribution \begin{align} p(V,N,s)=(s+1)^{-V}d(V,N,s) \end{align} approaches a normal distribution centered at $\mu=\frac{s V}{2}$ with standard deviation $\sigma=\sqrt{\frac{V s (2+s)}{12}}$. However, I would like to find the asymptotics of $d(V,n V,s)$ as we take $V\to\infty$, where $n$ is a fixed fraction $n\in[0,s]$ and $s$ a fixed integer (of course!). I know that the Gaussian approximation is only exact right at the center $N=\frac{s V}{2}$, i.e., \begin{align} d(V,\frac{sV}{2},s)\sim \sqrt{\frac{6}{s(2+s)\pi V}}\,e^{\log(1+s)V}\quad\text{as}\quad V\to\infty\,. \end{align} I'm struggling to compute the asymptotics for other choices of $n$, as I do not know how to deal with the sum. I expect a structure akin to \begin{align} d(V,nV,s)\sim \frac{\alpha(s,n)}{\sqrt{V}}\,e^{f(s,n) V}\quad\text{as}\quad V\to\infty \end{align} with $f(s,\frac{s}{2})=\log(1+s)$, but I don't know how to find $f$ for other values of $n$. REPLY [1 votes]: Partial update: This is not a full solution, but based on the helpful comments I wanted to give an update. What I am looking for is clearly the asymptotic behavior of the extended binomial coefficients (also known as polynomial coefficients - not to be confused with the $q$-binomial coefficients). While the $q$-binomial coefficients usually have the $q$ as lower subscript, the extended binomial coefficients either have the $q$ separated by a comma or as superscript.
There appears to be a slight difference in convention, but I followed the convention where $$ \sum^\infty_{k=0}\binom{n,q}{k}x^k=(1+x+x^2+\dots+x^q)^n.$$ There is also the convention where the sum only goes up to $x^{q-1}$, so everything in this case $q\to q+1$ (with respect to my convention). The asymptotics of these coefficients has been studied, but to my knowledge the case considered by me has not been solved in generality (but rather only for $n=q/2$, which I had already found). Recent references include: JIYOU LI: ASYMPTOTIC ESTIMATE FOR THE POLYNOMIAL COEFFICIENTS Steffen Eger: Stirling's approximation for central extended binomial coefficients Thorsten Neuschel: A Note on Extended Binomial Coefficients I'm still trying to figure out if the integral representation suggested by leonbloy provides some help, but using a Gaussian approximation only allowed me to rederive the result for $n=q/2$. As this concerns integration, I started this separate question. I also found an interesting reference in the appendix of this book, where the trinomial case ($s=2$) is solved. The relevant asymptotics is given in Theorem D.4 and the form of the function $f(n,s=2)$ is explained in Corollary D.6.
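As a sanity check on the question's inclusion–exclusion formula (reading $\binom{V}{j}$ for the apparent typo $\binom{V}{t}$), one can compare it against direct expansion of $(1+x+\dots+x^s)^V$ for small parameters:

```python
from math import comb

def d_formula(V, N, s):
    # Inclusion-exclusion count of rolls of V dice with faces 0..s summing to N.
    return sum((-1) ** j * comb(V, j) * comb(V + N - j * (s + 1) - 1, V - 1)
               for j in range(N // (s + 1) + 1))

def d_convolution(V, N, s):
    # Coefficients of (1 + x + ... + x^s)^V by repeated convolution.
    counts = [1]
    for _ in range(V):
        new = [0] * (len(counts) + s)
        for i, c in enumerate(counts):
            for face in range(s + 1):
                new[i + face] += c
        counts = new
    return counts[N] if N < len(counts) else 0

for V in range(1, 6):
    for s in (2, 5):
        for N in range(V * s + 1):
            assert d_formula(V, N, s) == d_convolution(V, N, s)
```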
{"set_name": "stack_exchange", "score": 3, "question_id": 4521960}
TITLE: Eliminating parameters from polynomial relations QUESTION [1 upvotes]: Let $p =x^3+y^3+z^3$, $q= x^2y+y^2 z+z^2x$, $r=xy^2+yz^2+zx^2$, and $s=xyz$. I want to find some non-zero polynomial in $\phi$ in $p,q,r,s$ such that $\phi(p,q,r,s)=0$; that is, to eliminate $x,y,z$. I have $p^2-2qr-2ps+6s^2=x^6+y^6+z^6$ and $p^3-3q^3-3r^3-24s^3+18qrs=x^9+y^9+z^9$, but at this point it seems like I'm going in circles. The PDF I'm reading is on ring actions, if that gives any context. REPLY [2 votes]: Given the polynomials $$ p \!=\! x^3\!+\!y^3\!+\!z^3,\;\; q\!=\! x^2y\!+\!y^2 z\!+\!z^2x,\;\; r\!=\!xy^2\!+\!yz^2\!+\!zx^2,\;\; s\!=\!xyz \tag{1} $$ then we have the syzygy $$ 0 = q^3 + r^3 + 9 s^3 + p^2 s + 3 p s^2 - p q r- 6 r q s . \tag{2} $$ I used a general purpose tool for finding algebraic relations to find it. In this particular case, there is a homogeneous cubic polynomial with $20$ different possible monomials. If needed, you can use the undetermined coefficients method and solve for the integer coefficients. My tool just automates the method using a PARI/GP function I wrote. REPLY [1 votes]: Multiply $r$ and $q$ \begin{eqnarray*} rq &=& \sum x^3y^3+ 3(xyz)^2+xyz \sum x^3 \\ &=&\sum x^3y^3+ 3s^2 +sp. \\ \end{eqnarray*} So \begin{eqnarray*} \sum x^3y^3= rq -3s^2 -sp \\ \end{eqnarray*} Now multiply this by $p$ \begin{eqnarray*} p(rq-sp-3s^2) = \sum_{perms} x^6y^3+ 3(xyz)^3 \\ \sum_{perms} x^6y^3= p(rq-sp-3s^2) -3s^3 \\ \end{eqnarray*} Now cube $q$ \begin{eqnarray*} q^3 = \sum_{cyc} x^6 y^3 + 3xyz \sum x^3 y^3 + 3(xyz)^2 \sum x^3 + 6(xyz)^3 \\ = \sum_{cyc} x^6 y^3 + 3 s(rq-3s^2-sp) +3s^2p+6s^3. \end{eqnarray*} Do the same for $r$ add and then eliminate using $\sum_{cyc} x^6y^3 +\sum_{cyc} x^3y^6=\sum_{perms} x^6y^3$. This gives \begin{eqnarray*} q^3 - 3 srq +3s^3 +r^3 - 3 srq +3s^3 = pqr-sp^2 -3s^2p -3s^3 \\ \color{red}{q^3+r^3+9s^3 +sp^2+3s^2p-pqr-6srq=0}. \end{eqnarray*}
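The syzygy (2) is easy to check directly on random integer triples; exact integer arithmetic makes the equality literal rather than approximate.

```python
import random

def syzygy(x, y, z):
    # The relation phi(p, q, r, s) from the answers above.
    p = x**3 + y**3 + z**3
    q = x**2 * y + y**2 * z + z**2 * x
    r = x * y**2 + y * z**2 + z * x**2
    s = x * y * z
    return (q**3 + r**3 + 9 * s**3 + p**2 * s + 3 * p * s**2
            - p * q * r - 6 * r * q * s)

random.seed(1)
for _ in range(100):
    triple = [random.randint(-50, 50) for _ in range(3)]
    assert syzygy(*triple) == 0
```

Vanishing on all integer triples forces the polynomial identity, since a nonzero polynomial in three variables cannot vanish on all of $\mathbb{Z}^3$.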
{"set_name": "stack_exchange", "score": 1, "question_id": 3520237}
TITLE: How would I work out the Cayley table for $F_3 [x]$ modulo $x^2 +2$ with addition and multiplication. QUESTION [1 upvotes]: How would one display the Cayley table for $F_3 [x]/(x^2 +2)$ and show that it is a ring (I have assumed addition and multiplication are associative and that multiplication is distributive over addition). REPLY [0 votes]: If you are looking for a less powerful approach than Bernard's answer, you can just write down each element of $\frac{\mathbb{F}_3[x]}{x^2+2}$. $$\frac{\mathbb{F}_3[x]}{x^2+2} = \{ a_0 + a_1 \alpha \ | \ \alpha^2=-2 = 1 \textrm{ and } a_0,a_1\in \mathbb{F}_3\}.$$ When you take a quotient of a ring by an ideal, you always get a ring. But you can write down the multiplication table if you wanted to. For example: $(1 + \alpha)*\alpha = \alpha + \alpha^2 = \alpha + 1$. Another example: $(1 + 2\alpha)*(2+\alpha) = (2 + \alpha + 4\alpha + 2\alpha^2) = 2 +5\alpha + 2\alpha^2 = 2 + 2\alpha + 2 = 4 + 2\alpha = 1 + 2\alpha.$
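A short script can generate both Cayley tables and spot-check the worked example; elements are encoded as pairs $(a_0, a_1)$ standing for $a_0 + a_1\alpha$ with $\alpha^2 = -2 = 1$ in $\mathbb{F}_3$.

```python
import itertools

elements = list(itertools.product(range(3), repeat=2))   # (a0, a1)

def add(u, v):
    return ((u[0] + v[0]) % 3, (u[1] + v[1]) % 3)

def mul(u, v):
    # (a0 + a1*alpha)(b0 + b1*alpha): alpha^2 = 1, so the constant
    # term picks up a1*b1.
    return ((u[0] * v[0] + u[1] * v[1]) % 3,
            (u[0] * v[1] + u[1] * v[0]) % 3)

add_table = {(u, v): add(u, v) for u in elements for v in elements}
mul_table = {(u, v): mul(u, v) for u in elements for v in elements}

# The worked example: (1 + 2*alpha)(2 + alpha) = 1 + 2*alpha.
assert mul((1, 2), (2, 1)) == (1, 2)
# The quotient is a ring but not a field: x^2 + 2 = (x+1)(x+2) in F_3[x],
# so (alpha + 1)(alpha + 2) = 0 with both factors nonzero.
assert mul((1, 1), (2, 1)) == (0, 0)
```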
{"set_name": "stack_exchange", "score": 1, "question_id": 1771388}
TITLE: $\lim_{n\rightarrow\infty}\sup(a_n+b_n) = \lim_{n\rightarrow\infty}\sup a_n + \lim_{n\rightarrow\infty}\sup b_n$ for all bounded sequences $(b_n)$ QUESTION [1 upvotes]: Let $(a_n)$ be a bounded sequence. Suppose that for every bounded sequence $(b_n)$ we have $\lim_{n\rightarrow\infty}\sup(a_n+b_n) = \lim_{n\rightarrow\infty}\sup a_n + \lim_{n\rightarrow\infty}\sup b_n$. Prove that $(a_n)$ is convergent. We can take $b_n=-a_n$ to get $\lim_{n\rightarrow\infty}\sup a_n + \lim_{n\rightarrow\infty}\sup (-a_n) = 0$. For any subsequence of $a_n$ which converges to $L$, the corresponding subsequence of $-a_n$ converges to $-L$. How can we conclude from here? REPLY [2 votes]: Hint: $$\lim_{n\to\infty}\sup (-a_n)=-\lim_{n\to\infty}\inf a_n.$$
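To spell out the conclusion from the hint (a standard step, added for completeness): combining the hint with the identity obtained from $b_n=-a_n$ gives

```latex
\limsup_{n\to\infty} a_n + \limsup_{n\to\infty}(-a_n)
  = \limsup_{n\to\infty} a_n - \liminf_{n\to\infty} a_n = 0 .
```

Since $\limsup_{n\to\infty} a_n \geq \liminf_{n\to\infty} a_n$ always holds, this forces $\limsup_{n\to\infty} a_n = \liminf_{n\to\infty} a_n$, and for a bounded sequence the equality of these two finite values is equivalent to convergence of $(a_n)$.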
{"set_name": "stack_exchange", "score": 1, "question_id": 415873}
\begin{document} \centerline{\Large \bf Long waves instabilities} \bigskip \centerline{D. Bian\footnote{Beijing Institute of Technology, School of Mathematics and Statistics, Beijing, China}, E. Grenier\footnote{UMPA, CNRS UMR $5669$,Ecole Normale Sup\'erieure de Lyon, Lyon, France}} \subsubsection*{Abstract} The aim of this paper is to give a detailed presentation of long wave instabilities of shear layers for Navier Stokes equations, and in particular to give a simple and easy to read presentation of the study of Orr Sommerfeld equation and to detail the analysis of its adjoint. Using these analyses we prove the existence of long wave instabilities in the case of slowly rotating fluids, slightly compressible fluids and for Navier boundary conditions, under smallness conditions. \section{Introduction} Let us first consider incompressible Navier Stokes equations in an half space \beq \label{NS1} \partial_t u^\nu + (u^\nu \cdot \nabla) u^\nu - \nu \Delta u^\nu + \nabla p^\nu = f^\nu, \eeq \beq \label{NS2} \nabla \cdot u^\nu = 0, \eeq together with the Dirichlet boundary condition \beq \label{NS3} u^\nu = 0 \qquad \hbox{for} \qquad z = 0. \eeq The study of the inviscid limit $\nu \to 0$ of this system has been extensively studied (see for instance \cite{GR08,DGV2,GGN3, Mae,SammartinoCaflisch1,SammartinoCaflisch2} and the references therein). In this paper we are interested in the stability of a shear layer profile $U(z) = (U_s(z), 0)$. Note that this shear layer profile is a stationary solution of Navier Stokes equations provided we add the forcing term $f^\nu = (- \nu \Delta U_s, 0)$. The stability of such shear layer profiles has been intensively studied in physics since the beginning of the twentieth century, see \cite{Blasius,Reid,Hei,Lin0,LinBook,Schlichting} and the references therein. We recall that such shear layers are unstable in $3$ dimensions if and only if they are unstable in $2$ dimensions (so called Squire's theorem). 
We also recall that non trivial shear layer profiles are always linearly unstable with respect to Navier Stokes equations. Two classes of instabilities appear as follows: \begin{itemize} \item "Inviscid instabilities": instabilities which persist as $\nu$ goes to $0$. According to Rayleigh's criterium, such instabilities only occur for profiles $U_s$ with inflection points, and exhibit scales in $t$, $x$ and $z$ of order $O(1)$. This first kind of instabilities is now well understood, both at the linear and nonlinear levels \cite{GN, GN1}. \item "Long wave instabilities": these instabilities arise even in the case of concave profiles $U_s$, such that $U_s'' < 0$. They are characterized by a strong spatial anisotropy since their sizes are of order $O(1)$ in $z$ but of order $O(\nu^{-1/4})$ in $x$. Moreover they grow very slowly, within time scales of order $O(\nu^{-1/2})$. This second type of instabilities is more delicate to study, even at the linear level. \end{itemize} The purpose of this paper is to give an educational presentation of this second kind of instabilities, by recalling recent results on Orr Sommerfeld equations, and to give the first mathematical study of the adjoint of Orr Sommerfeld equation. We then apply these studies to systems close to incompressible Navier Stokes equations with Dirichlet boundary condition. We in particular formally prove that there exists such long wave instabilities for incompressible Navier Stokes equations with Navier boundary conditions, for slightly rotating fluids and for compressible Navier Stokes equations in the low Mach number regime, all under a smallness assumption. We only give formal proofs. Rigorous approaches are straightforward using the methods used in \cite{GN}. This present work is a first step in a general program to study the nonlinear instability in the viscous boundary layer \cite{Bian2}. \section{Orr Sommerfeld equations} We first recall the main results on Orr Sommerfeld equations. 
The corresponding proofs may be found for instance in \cite{GN1}. \subsection{Introduction} Let $L$ be the linearized Navier Stokes operator near the shear layer profile $U$, namely \beq \label{linearNS} L u = (U \cdot \nabla) u + (u \cdot \nabla) U - \nu \Delta u + \nabla q, \eeq with $\nabla \cdot u = 0$ and Dirichlet boundary condition. We want to study the resolvent of $L$, namely to study the equation \beq \label{resolvant} (L - \lambda) u = f \eeq where $f$ is a given forcing term. Following the classical analysis we take advantage of the incompressibility condition to introduce the stream function, take its Fourier transform in $x$ and its Laplace transform in time, and thus look for velocities of the form $$ u = \nabla^\perp \Bigl( e^{i \alpha (x - c t) } \psi(z) \Bigr)=e^{i \alpha (x - c t) }(\partial_z\psi, -i\alpha \psi). $$ Note that $\lambda = i \alpha c$. We also take the Fourier and Laplace transform of the forcing term $f$ $$ f = \Bigl( f_1(z),f_2(z) \Bigr) e^{i \alpha (x - c t) } . $$ Taking the curl of (\ref{resolvant}) we then get \beq \label{Orrmod} Orr_{c,\alpha,\nu}(\psi) := (U_s - c) (\partial_z^2 - \alpha^2) \psi - U_s'' \psi - { \nu \over i \alpha} (\partial_z^2 - \alpha^2)^2 \psi =- {\nabla \times f \over i \alpha}, \eeq where $$ \nabla \times (f_1,f_2) = i \alpha f_2 - \partial_z f_1. $$ The Dirichlet boundary condition gives \beq \label{condOrr} \psi(0) = \partial_z \psi(0) = 0. \eeq Let \beq \label{epsilon} \eps = {\nu \over i \alpha} . \eeq As $\nu$ goes to $0$, the $Orr$ operator degenerates into the classical Rayleigh operator \beq \label{Rayleigh} Ray_\alpha(\psi) = (U_s - c) (\partial_z^2 - \alpha^2) \psi - U_s'' \psi \eeq which is a second order operator, together with the boundary condition \beq \label{condRay} \psi(0) = 0. \eeq Two cases arise, depending on whether this Rayleigh operator has unstable eigenvalues or not. 
In the first case there exist $\alpha$, $c$ and $\psi_R$, which satisfy \eqref{condRay}, such that $$ Ray_\alpha(\psi_R) = 0. $$ Starting from this solution, through a perturbative analysis, it is possible to construct eigenmodes $\psi_\nu$ of the Orr Sommerfeld operator with corresponding eigenvalues $c_\nu$ which satisfy (\ref{condOrr}) and $Orr(\psi_\nu) = 0$, and it is also possible to prove the nonlinear instability of the corresponding shear layer profiles. This case is now well understood and we refer in particular to \cite{GN} and \cite{GN1} for further details. We will thus focus on the second case, where Rayleigh's operator has no unstable eigenvalue. \subsection{Rayleigh equation} Let us first focus on Rayleigh's equation. It is a second order differential equation. For $c$ close to zero, we note that $U_s - c$ almost vanishes. More precisely for small $c$, let $z_c$ be such that \beq \label{yc} U_s(z_c) = c . \eeq Such a complex number $z_c$ is called a "critical layer". At $z_c$, the Rayleigh operator degenerates from a second order operator to an operator of order $0$ and thus has a "turning point" at $z = z_c$. We therefore expect that there exist two independent solutions of the Rayleigh equation, one regular at $z_c$ and the other one singular near this point. It turns out that it is possible to get a very accurate description of these two independent solutions for small $\alpha$. Namely we first observe that for $\alpha = 0$, the Rayleigh operator degenerates into $$ Ray_0(\psi) = (U_s - c) \partial_z^2 \psi - U_s'' \psi. $$ In particular \beq \label{psimoins0} \psi_{-,0}(z) = U_s(z) - c \eeq is an explicit solution of this limiting operator. An independent solution $\psi_{+,0}$ can be computed using the Wronskian $$ W(\psi_{-,0},\psi_{+,0}) = 1. $$ More precisely, \beq \label{psiplus0} \psi_{+,0}(z) = C(z) \psi_{-,0}(z) \eeq where $$ C'(z) = (U_s(z) - c)^{-2} .
$$ This other solution behaves linearly at infinity and has a $(z - z_c) \log(z - z_c)$ singularity at $z = z_c$. Using a perturbative argument it is possible to prove that, for $\alpha$ small enough, the Rayleigh operator has two independent solutions, one, $\psi_{-,\alpha}$, which goes to $0$ at infinity and is close to $U_s - c$ for bounded $z$, and another one, $\psi_{+,\alpha}$, which diverges at $+ \infty$ and which has a $(z - z_c) \log(z - z_c)$ singularity at $z_c$. \subsection{Solutions of Orr Sommerfeld \label{sec23}} Let us go back to the Orr Sommerfeld operator. Note that $Orr$ is a fourth order differential operator. It can be proven that there exist four independent solutions to this operator (not taking into account any boundary condition), two with a "slow" behavior, called $\psi_{s,\pm}$, and two with a "fast" behavior, called $\psi_{f,\pm}$. The "-" subscript refers to solutions which go to $0$ at infinity and the "+" subscript to solutions diverging as $z$ goes to $+ \infty$. The "slow" solutions have bounded derivatives. For such solutions the $Orr$ operator reduces to Rayleigh's operator as $\nu$ goes to $0$. More precisely, $$ Orr = Ray - \eps Diff, $$ where $$ Diff(\psi) = (\partial_z^2 - \alpha^2)^2 \psi . $$ For "slow" solutions, $Diff$ may be treated as a small perturbation. For $\psi_{-,\alpha}$ we directly have $Orr(\psi_{-,\alpha}) = O(\eps)$. For $\psi_{+,\alpha}$ the situation is more delicate since $Orr(\psi_{+,\alpha})$ may be large near $z_c$. However, using an iterative scheme, treating $Diff$ as a perturbation, it is possible to construct genuine solutions of the $Orr$ operator, which are close to $\psi_{\pm,\alpha}$.
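The explicit solutions (\ref{psimoins0}) and (\ref{psiplus0}) of the limiting operator $Ray_0$, which anchor this construction, can be checked symbolically. A minimal sketch (Python with sympy, for a generic smooth profile $U$; this verification is not part of the original argument):

```python
import sympy as sp

z, c = sp.symbols('z c')
U = sp.Function('U')(z)          # generic shear profile U_s(z)

def ray0(psi):
    # limiting Rayleigh operator Ray_0(psi) = (U - c) psi'' - U'' psi
    return (U - c) * sp.diff(psi, z, 2) - sp.diff(U, z, 2) * psi

# psi_{-,0} = U - c is an exact solution
assert sp.simplify(ray0(U - c)) == 0

# psi_{+,0} = C(z) (U - c) with C'(z) = (U - c)^(-2)
C = sp.Function('C')(z)
expr = ray0(C * (U - c))
Cp = (U - c) ** -2               # impose C' = (U - c)^(-2)
expr = expr.subs({sp.Derivative(C, (z, 2)): sp.diff(Cp, z),
                  sp.Derivative(C, z): Cp})
assert sp.simplify(expr) == 0
print("both limiting Rayleigh solutions verified")
```

The second check also explains the Wronskian normalization: $W(\psi_{-,0}, \psi_{+,0}) = C'(z)(U_s - c)^2 = 1$.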
In particular it can be proven that $\psi_{s,-}$ satisfies \beq \label{psism} \psi_{s,-}(0) = - c + \alpha {U_+^2 \over U_s'(0)} + O(\alpha^2), \eeq \beq \label{psisd} \partial_z \psi_{s,-}(0) = U'_s(0) + O(\alpha), \eeq \beq \label{psisdd} \partial_z^2 \psi_{s,-}(0) = O(1), \eeq where $U_+=\lim_{z\rightarrow +\infty}U_s(z)$. On the contrary, "fast" solutions have very large gradients and higher derivatives. For these solutions Orr Sommerfeld may be approximated by its higher order derivatives, namely by \beq \label{modAiry} (U_s - c) \partial_z^2 - \eps \partial_z^4 = Airy \, \partial_z^2, \eeq where $Airy$ is the modified Airy operator $$ Airy = (U_s - c) - \eps \partial_z^2. $$ More precisely, $$ Orr = Airy \, \partial_z^2 + Rem, $$ where the remainder operator $Rem$ contains derivatives of lower order, and is therefore smaller with respect to $Airy \, \partial_z^2$ for "fast" solutions. The next step is to construct solutions of $Airy \, \partial_z^2 = 0$, or, up to two integrations, to construct solutions of $Airy = 0$. Near $z_c$, this operator may be approximated by the classical Airy operator \beq \label{linAiry} U_s'(z_c) (z - z_c) - \eps \partial_z^2 . \eeq Solutions of (\ref{linAiry}) may be explicitly expressed as combinations of the classical Airy functions $Ai$ and $Bi$, which are solutions of the classical Airy equation $z \psi = \partial_z^2 \psi$. More precisely, $Ai(\gamma (z - z_c))$ and $Bi(\gamma (z - z_c))$ are solutions of (\ref{linAiry}) provided we choose \beq \label{gamma} \gamma = \Bigl( {i \alpha U_s'(z_c) \over \nu} \Bigr)^{1/3}. \eeq Now solutions of $Airy = 0$ may be constructed, starting from solutions of (\ref{linAiry}) through the so-called Langer transformation (which is a transformation of the phase and amplitude of the solution). To go back to (\ref{modAiry}) we then have to integrate these solutions twice.
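The choice (\ref{gamma}) of $\gamma$ can be checked numerically: with $\eps = \nu / i\alpha$, the function $Ai(\gamma(z - z_c))$ annihilates the operator (\ref{linAiry}). A minimal sketch (Python with scipy; the values of $\nu$, $\alpha$, $U_s'(z_c)$ and the evaluation point are arbitrary test values):

```python
import numpy as np
from scipy.special import airy   # airy(w) -> (Ai, Ai', Bi, Bi'); complex arguments allowed

nu, alpha, Usp, zc = 1e-4, 0.1, 1.0, 0.0    # arbitrary test values, with U_s'(z_c) = 1
eps = nu / (1j * alpha)
gamma = (1j * alpha * Usp / nu) ** (1.0 / 3.0)

psi  = lambda z: airy(gamma * (z - zc))[0]           # Ai(gamma (z - z_c))
dpsi = lambda z: gamma * airy(gamma * (z - zc))[1]   # gamma Ai'(gamma (z - z_c))

z, h = 0.2, 1e-5
d2psi = (dpsi(z + h) - dpsi(z - h)) / (2 * h)        # psi'' by centered differences
residual = Usp * (z - zc) * psi(z) - eps * d2psi     # (linAiry) applied to psi
print(abs(residual) / abs(eps * d2psi))              # small relative residual
```

The cancellation is exact up to finite difference error, since $Ai''(s) = s\,Ai(s)$ and $\eps \gamma^3 = U_s'(z_c)$.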
As a consequence, fast solutions to the Orr Sommerfeld equation may be expressed in terms of second primitives of the classical Airy functions $Ai$ and $Bi$. Let us call $Ai(2,.)$ and $Bi(2,.)$ these second primitives of $Ai$ and $Bi$. It can be proven that $\psi_{f,-}$ satisfies \beq \label{psif} \psi_{f,-}(0) = Ai(2,-\gamma z_c) + O(\alpha), \eeq \beq \label{psifd} \partial_z \psi_{f,-}(0) = \gamma Ai(1, - \gamma z_c) + O(1), \eeq \beq \label{psifdd} \partial_z^2 \psi_{f,-}(0) = \gamma^2 Ai(- \gamma z_c) + O(\alpha^{-1}). \eeq \subsection{Dispersion relation} An eigenmode of the Orr Sommerfeld equation is a combination of these particular solutions which goes to $0$ at infinity and which vanishes, together with its first derivative, at $z = 0$. As an eigenmode must go to $0$ as $z$ goes to infinity, it is a linear combination of $\psi_{f,-}$ and $\psi_{s,-}$ only. There should exist nonzero constants $a$ and $b$ such that $$ a \psi_{f,-} (0) + b \psi_{s,-} (0) = 0 $$ and $$ a \partial_z \psi_{f,-} (0) + b \partial_z \psi_{s,-} (0) = 0. $$ Equivalently the determinant $$ E = \left| \begin{array}{cc} \psi_{f,-} (0) & \psi_{s,-} (0) \cr \partial_z \psi_{f,-} (0) & \partial_z \psi_{s,-} (0) \cr \end{array} \right| $$ must vanish. The dispersion relation is therefore \beq \label{disper} { \psi_{f,-} (0) \over \partial_z \psi_{f,-} (0)} = { \psi_{s,-} (0) \over \partial_z \psi_{s,-} (0)} \eeq or \beq \label{disper2} \alpha {U_+^2 \over U_s'(0)^2} - {c \over U_s'(0)} = \gamma^{-1} {Ai(2, - \gamma z_c) \over Ai(1,-\gamma z_c)} +O(\alpha^2). \eeq We will focus on the particular case where $\alpha$ and $c$ are both of order $\nu^{1/4}$. It turns out that this is an area where instabilities occur, and we conjecture that it is in this region that the most unstable modes may be found.
We rescale $\alpha$ and $c$ by $\nu^{1/4}$ and introduce $$ \alpha = \alpha_0 \nu^{1/4}, \quad c = c_0 \nu^{1/4}, \quad Z = \gamma z_c, $$ which leads to \beq \label{disper3} \alpha_0 {U_+^2 \over U_s'(0)^2} - {c_0 \over U_s'(0)} = {1 \over (i \alpha_0 U'_s(z_c))^{1/3}} {Ai(2, - Z) \over Ai(1,- Z)} + O(\nu^{1/4}). \eeq Note that as $U_s(z_c) = c$, $$ z_c = U_s'(0)^{-1} c + O(c^2) $$ and $$ Z = \Bigl(i U_s'(z_c) \Bigr)^{1/3} \alpha_0^{1/3} U_s'(0)^{-1} c_0 + O(\nu^{1/4}) . $$ Note that, for real $c_0$, the argument of $Z$ equals $\pi / 6$. We then introduce the following function, called the Tietjens function, of the real variable $z$ $$ Ti(z) = {Ai(2,z e^{- 5 i \pi / 6}) \over z e^{- 5 i \pi / 6} Ai(1,z e^{- 5 i \pi / 6})} . $$ At first order the dispersion relation becomes \beq \label{dispersionlimit} \alpha_0 {U_+^2 \over U_s'(0)} = c_0 \Bigl[ 1 - Ti(- Z e^{5 i \pi / 6} ) \Bigr] . \eeq This dispersion relation can be numerically investigated. Note that it only depends on the limit $U_+$ of the horizontal velocity at infinity and on $U_s'(0)$. It can be proven that, provided $\alpha_0$ is large enough, there exists $c_0$ with $\Im c_0 > 0$, leading to an unstable mode. Let us numerically illustrate this dispersion relation in the case $U_s'(0) = U_+ = 1$. Figure \ref{valeurc} shows the imaginary part of $c_0$ as a function of $\alpha_0$. We see that there exists a constant $\alpha_c$ such that $\Im c_0 > 0$ if $\alpha_0 > \alpha_c$ and $\Im c_0 < 0$ if $\alpha_0 < \alpha_c$. Figure \ref{valeurlambda} shows $\Re \lambda = \alpha_0 \Im c_0$ as a function of $\alpha_0$. We see that there exists a unique global maximum to this function, at $\alpha_0 = \alpha_M \sim 2.8$.
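The Tietjens function may be evaluated numerically. The sketch below (Python with mpmath; not part of the original analysis) assumes that $Ai(1,\cdot)$ and $Ai(2,\cdot)$ are the primitives of $Ai$ normalized to vanish at $+\infty$, so that $Ai(1,z) = -\int_z^\infty Ai(s)\,ds$ and $Ai(2,z) = \int_z^\infty (s-z)\,Ai(s)\,ds$; the truncation length $T$ is an arbitrary numerical parameter.

```python
import mpmath as mp

T = 40  # truncation of the integrals towards +infinity (arbitrary, Ai decays fast there)

def Ai1(z):
    # first primitive of Ai vanishing at +infinity: Ai(1,z) = -int_z^inf Ai(s) ds
    return -mp.quad(mp.airyai, [z, z + T])

def Ai2(z):
    # second primitive vanishing at +infinity: Ai(2,z) = int_z^inf (s - z) Ai(s) ds
    return mp.quad(lambda s: (s - z) * mp.airyai(s), [z, z + T])

def Ti(y):
    # Tietjens function of the real variable y
    w = y * mp.exp(-5j * mp.pi / 6)
    return Ai2(w) / (w * Ai1(w))

# sanity checks against the classical values int_0^inf Ai = 1/3
# and int_0^inf s Ai(s) ds = -Ai'(0) = 0.2588...
print(Ai1(0), Ai2(0), Ti(2.0))
```

With these helpers one can scan the dispersion relation (\ref{dispersionlimit}) in $c_0$ for each $\alpha_0$ and explore curves such as those of Figures \ref{valeurc} and \ref{valeurlambda}.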
\begin{figure} \includegraphics[width = 10cm]{valeurc.jpeg} \caption{$\Im c_0$ as a function of $\alpha_0$} \label{valeurc} \end{figure} \begin{figure} \includegraphics[width = 10cm]{valeurlambda.jpeg} \caption{$\Re \lambda$ as a function of $\alpha_0$} \label{valeurlambda} \end{figure} \subsection{Description of the linear instability} Let us now detail the linear instability. Its stream function $\psi_{lin}$ is of the form $$ \psi_{lin} = b \psi_{s,-} + a \psi_{f,-} . $$ Choosing $b = 1$ we see that $a = O(\nu^{1/4})$, hence \beq \label{psilin} \psi_{lin}(z) = U_s(z) - c + \alpha {U_+^2 \over U_s'(0)} + a Ai \Bigl(2, \gamma (z - z_c) \Bigr) + O(\nu^{1/2} ). \eeq The corresponding horizontal and vertical velocities $u_{lin}$ and $v_{lin}$ are given by \beq \label{vlin} u_{lin} = \partial_z \psi_{lin} = U_s'(z) +\gamma a Ai \Bigl(1, \gamma (z - z_c) \Bigr) + O(\nu^{1/4}), \eeq and \beq \label{ulin} v_{lin} = -i\alpha \psi_{lin} = O(\nu^{1/4}). \eeq Note that $\gamma a = O(1)$. The first term in (\ref{vlin}) may be seen as a "displacement velocity", corresponding to a translation of $U_s$. The second term is of order $O(1)$ and located in the boundary layer, namely within a distance $O(\nu^{1/4})$ to the boundary. Note that the vorticity \beq \label{check-use-number} \omega_{lin} = -(\partial_z^2 - \alpha^2) \psi_{lin}(z) = -U_s''(z) - \gamma^2 a Ai \Bigl( \gamma (z - z_c) \Bigr) \eeq is large in the critical layer (of order $O(\nu^{-1/4})$). \section{Adjoint Orr Sommerfeld operator} Let us now turn to the study of the adjoint of Orr Sommerfeld operator. \subsection{Definition of the adjoint} We will consider the classical $L^2$ product between two stream functions $\psi_1$ and $\psi_2$, namely $$ (\psi_1,\psi_2) = \int \psi_1 \bar \psi_2 dx . 
$$ We have $$ (Orr (\psi_1), \psi_2) = (\psi_1, TOrr (\psi_2)), $$ where $$ TOrr (\psi) = (\partial_z^2 - \alpha^2) (U_s - \bar c) \psi - U_s'' \psi + { \nu \over i \alpha} (\partial_z^2 - \alpha^2)^2 \psi . $$ Taking the complex conjugate, we define the adjoint of the Orr Sommerfeld equation to be \beq \label{Orrad} Orr^t_{c,\alpha,\nu}(\psi) := (\partial_z^2 - \alpha^2) (U_s - c) \psi - U_s'' \psi - { \nu \over i \alpha} (\partial_z^2 - \alpha^2)^2 \psi, \eeq with boundary conditions $\psi(0) = \partial_z \psi(0) = 0$. We also introduce the corresponding adjoint of the Rayleigh operator \beq \label{Rayad} Ray^t(\psi) = (\partial_z^2 - \alpha^2) (U_s - c) \psi - U_s'' \psi \eeq and the adjoint Airy operator \beq \label{Airyad} {\cal A} = (\partial_z^2 - \alpha^2) \, Airy . \eeq We already know that the spectra of the Orr Sommerfeld operator and of its adjoint are the same. Let $\psi_1$ be an eigenvector of $Orr$ with corresponding eigenvalue $\lambda_1$ and let $\psi_2$ be an eigenvector of $Orr^t$ with corresponding eigenvalue $\lambda_2$. Then, multiplying $Orr(\psi_1) = \lambda_1 \psi_1$ by $\bar \psi_2$, $Orr^t(\psi_2) = \lambda_2 \psi_2$ by $\bar \psi_1$ and combining we get (see \cite{Schensted}, chapter $2$) \beq \label{ortho} (\lambda_1 - \lambda_2) \int \psi_1 (\partial_z^2 - \alpha^2) \bar \psi_2 dx = 0. \eeq We can therefore normalize the eigenvectors $\psi_1$ and $\psi_2$ such that \beq \label{ortho2} \int \psi_1 (\partial_z^2 - \alpha^2) \bar \psi_2 dx = - \delta_{\lambda_1 = \lambda_2}. \eeq Let $v_1 = \nabla^\perp \psi_1$ and $v_2 = \nabla^\perp \psi_2$; then, integrating by parts, we get \beq \label{ortho3} \int v_1 \cdot \bar v_2 dx = \delta_{\lambda_1 = \lambda_2}. \eeq \subsection{Study of the adjoint Rayleigh operator} The main observation is that $Ray$ and $Ray^t$ are conjugated through $U_s - c$, namely \beq \label{conjugation} Ray^t (\psi) = (U_s - c)^{-1} Ray \Bigl( ( U_s - c) \psi \Bigr) .
\eeq As a consequence \beq \label{conjugation2} (Ray^t)^{-1}(f ) = (U_s - c)^{-1} Ray^{-1} \Bigl( ( U_s - c) f \Bigr) . \eeq The whole study of $Ray^t$ can thus be deduced from that of $Ray$, as developed in \cite{GN}, a reference that we now closely follow. \subsubsection{The case $\alpha = 0$} For $\alpha = 0$, the adjoint of the Rayleigh operator reduces to $$ Ray^t_0(\psi) = \partial_z^2 \Bigl[ (U_s - c) \psi \Bigr] - U''_s \psi . $$ Using (\ref{conjugation}), there exist two particular solutions to $Ray_0^t(\psi) = 0$, namely \beq \label{solRayt} \psi_{-,0}^t = 1, \qquad \psi_{+,0}^t = {\psi_{+,0} \over U_s(z) - c} , \eeq where $\psi_{+,0}$ is a growing solution to $Ray_0 = 0$. Note in particular that $\psi_{+,0}^t$ has a singularity of the form $(U_s(z) - c)^{-1}$, and is therefore much more singular than $\psi_{+,0}$, which is bounded near the critical layer. Note that $$ W \Bigl[ \psi^t_{-,0}, \psi^t_{+,0} \Bigr] = {1 \over (U_s - c)^2}. $$ The Green function of $Ray_0^t$ can be deduced from that of $Ray_0$ and is (see \cite{GN}) $$ G_{R,0}^t(x,z) = \left\{ \begin{array}{rrr} - (U_s(z)-c)^{-1} \psi_{-,0}(z) \psi_{+,0}(x), \quad \mbox{if}\quad z>x,\\ - (U_s(z)-c)^{-1} \psi_{-,0}(x) \psi_{+,0}(z), \quad \mbox{if}\quad z<x.\end{array}\right. $$ Note that this Green function is singular in $z$, with a $(U_s(z) - c)^{-1}$ singularity. The inverse of $Ray_0^t$ is explicitly given by \begin{equation}\label{def-RayS0} RaySolver_0^t(f) (z) := \int_0^{+\infty} G_{R,0}^t(x,z) f(x) dx. \end{equation} As in \cite{GN}, we define $$ X^\eta = L_\eta^{\infty} = \Bigl\{ f \quad | \quad \sup_{z \ge 0} | f(z) | e^{\eta z} < + \infty \Bigr\}. $$ and the spaces $Y^\eta$ defined by: a function $f$ lies in $Y^\eta$ if for any $z \ge 1$, $$ |f(z)| + |\partial_z f(z) | + | \partial_z^2 f (z) | \le C e^{-\eta z} $$ and if for $z \le 1$, $$ | \partial_z f(z) | \le C (1 + | \log (z - z_c) | ) , $$ and $$| \partial_z^2 f(z) | \le C (1 + | z - z_c |^{-1} ).
$$ The best constant $C$ in the previous bounds defines the norm $\| f \|_{Y^{\eta}}$. Note that, following Lemma 3.3 of \cite{GN}, if $(U_s(z) - c) f \in X^\eta$ then there exists a solution $\phi$ to $Ray_0^t(\phi) = f$ such that $(U_s(z) - c) \phi \in Y^0$, and $$ \| (U_s(z) - c) \phi \|_{Y^0} \le C ( 1 + | \log \Im c | ) \| (U_s(z) - c) f \|_{X^\eta}. $$ Again, note the singularity at $z = z_c$. \subsubsection{Approximate Green function when $\alpha \ll1$} We again follow \cite{GN}. Let $\psi^t_{-,0}$ and $\psi^t_{+,0}$ be the two solutions of $Ray^t_0(\psi) = 0$ that are constructed above. We now construct an approximate Green function to the adjoint Rayleigh equation for $\alpha > 0$. To proceed, let us introduce \begin{equation}\label{def-phia12} \phi_{-,\alpha } = \psi^t_{-,0} e^{-\alpha z} = e^{-\alpha z} ,\qquad \phi_{+,\alpha} = \psi^t_{+,0} e^{-\alpha z}. \end{equation} A direct computation shows that their Wronskian determinant equals $$ W[\phi_{-,\alpha},\phi_{+,\alpha}] = \partial_z \phi_{+,\alpha} \phi_{-,\alpha} - \phi_{+,\alpha} \partial_z \phi_{-,\alpha} $$ $$ = e^{-2\alpha z}W[\psi^t_{-,0},\psi^t_{+,0}] = {e^{-2 \alpha z} \over (U_s(z) - c)^2}. $$ Note that the Wronskian vanishes at infinity since both functions have the same behavior at infinity, and is singular at $z = z_c$. In addition, \begin{equation}\label{Ray-phia12} Ray^t_\alpha(\phi_{\pm,\alpha}) = - 2 \alpha \partial_z \psi_{\pm,0} e^{-\alpha z}. \end{equation} We are now led to introduce an approximate Green function $G_{R,\alpha}^t(x,z)$, which is defined as follows $$ G_{R,\alpha}^t(x,z) = \left\{ \begin{array}{rrr} - (U_s(z)-c)^{-1} e^{-\alpha (z-x)} \psi_{-,0}(z) \psi_{+,0}(x), \quad \mbox{if}\quad z>x\\ - (U_s(z)-c)^{-1} e^{-\alpha (z-x)} \psi_{-,0}(x) \psi_{+,0}(z), \quad \mbox{if}\quad z< x.\end{array}\right. $$ Note that $$ G^t_{R,\alpha} = { U_s(x) - c \over U_s(z) - c} G_{R,\alpha} .
$$ Similar to $G_{R,0}^t(x,z)$, the Green function $G_{R,\alpha}^t(x,z)$ is ``singular'' near $z_c$. In view of (\ref{Ray-phia12}), \begin{equation}\label{id-Gxz} Ray^t_\alpha (G^t_{R,\alpha}(x,z)) = \delta_{x} + E_{R,\alpha}^t(x,z), \end{equation} for each fixed $x$, where the error kernel $E_{R,\alpha}^t(x,z)$ is defined by $$ E_{R,\alpha}^t(x,z) = \left\{ \begin{array}{rrr} - 2 \alpha (U_s(z)-c)^{-1} e^{-\alpha (z-x)} \partial_z \psi_{-,0}(z) \psi_{+,0}(x), \quad \mbox{if}\quad z>x\\ - 2 \alpha (U_s(z)-c)^{-1}e^{-\alpha (z-x)}\ \psi_{-,0}(x) \partial_z \psi_{+,0}(z), \quad \mbox{if}\quad z< x.\end{array}\right. $$ Then an approximate inverse of the operator $Ray^t_\alpha$ can be defined by \begin{equation}\label{def-RaySa} RaySolver_\alpha^t(f)(z) := \int_0^{+\infty} G_{R,\alpha}^t(x,z) f(x) dx, \end{equation} and the related error operator takes the form \begin{equation}\label{def-ErrR} Err_{R,\alpha}^t(f)(z) := \int_0^{+\infty} E_{R,\alpha}^t(x,z) f(x) dx. \end{equation} Note that $$ RaySolver_\alpha^t (f) = (U_s - c)^{-1} RaySolver_\alpha \Bigl( (U_s - c) f \Bigr) , $$ and similarly for $E_{R,\alpha}^t$. Using (\ref{conjugation2}) and Lemma $3.4$ of \cite{GN}, we get the following Lemma. \begin{lemma}\label{lem-RaySa} Assume that $\Im c > 0$. For any $f\in {X^{\eta}}$, with $\alpha<\eta$, the function $RaySolver_\alpha^t(f)$ is well-defined in $Y^{\alpha}$, and satisfies $$ Ray_\alpha^t(RaySolver^t_\alpha(f)) = f + Err^t_{R,\alpha}(f). $$ Furthermore, there hold \begin{equation}\label{est-RaySa} \| (U_s - c) RaySolver_\alpha^t(f)\|_{Y^{\alpha}} \le C (1+|\log \Im c|) \| (U_s - c) f\|_{{X^{\eta}}}, \end{equation} and \begin{equation}\label{est-ErrRa} \| (U_s - c) Err_{R,\alpha}^t(f)\|_{Y^{\eta}} \le C | \alpha | (1+|\log (\Im c)|) \| (U_s - c) f\|_{X^{\eta}} , \end{equation} for some universal constant $C$.
\end{lemma} \subsection{Transposed Airy operator and construction of $\phi^{t,app}_{f,-}$} Near the critical layer we will have to study $$ {\cal A} = (\partial_z^2 - \alpha^2) Airy $$ where $$ Airy(\psi) = (U_s - c ) \psi - \eps \partial_z^2 \psi . $$ We have $$ Orr^t(\psi) = {\cal A}(\psi) - U_s'' \psi , $$ up to the lower order term $\eps \alpha^2 (\partial_z^2 - \alpha^2) \psi$. Note that ${\cal A}$ is easily inverted into $$ {\cal A}^{-1} = Airy^{-1} (\partial_z^2 - \alpha^2)^{-1} . $$ The study of $Airy$ and $Airy^{-1}$ has been fully detailed in \cite{GN}, where the authors construct two particular approximate solutions of $Airy = 0$, denoted by $\phi^{app}_{f,\pm}$. In particular, near $z = 0$, $$ \phi^{app}_{f,-} = {1 \over g'(z)} Ai(\gamma g(z)) $$ where $g(z)$ is Langer's transformation, which leads to $$ \phi^{app}_{f,-}(z) = Ai(\gamma (z - z_c)) + O(\nu^{1/4}) . $$ The corresponding formulas for $\phi^{app}_{f,+}$ are similar, with $Ai$ replaced by $Ci$, a linear combination of $Ai$ and $Bi$. We now construct an approximate Green's function for the $Airy$ operator. Let $$ G^{Ai}(x,z) = {1 \over \eps W^{Ai}(x)} \left\{ \begin{array}{c}{\phi^{app}_{f,+}(z) \phi^{app}_{f,-}(x)} \quad \hbox{if} \quad z < x, \\ {\phi^{app}_{f,-}(z) \phi^{app}_{f,+}(x)} \quad \hbox{if} \quad z > x, \end{array} \right. $$ where $W^{Ai}$ is the Wronskian determinant of $\phi^{app}_{f,\pm}$. Note that the Wronskian determinant is independent of $x$, since there is no first derivative term in $Airy$. In addition, we have $$ W^{Ai}(x) \sim \gamma = O(\nu^{-1/4}). $$ Therefore $G^{Ai}$ is rapidly decreasing in $z$ on both sides of $x$, within scales of order $\nu^{1/4}$. By construction, $$ Airy \, G^{Ai}(x,z) = \delta_x + O(\nu^{3/4}) G^{Ai}(x,z) . $$ Let us now turn to the study of the operator ${\cal A}$. Note that there exist four independent approximate solutions to ${\cal A} = 0$.
Two are simply $\psi_{1,2} = \phi^{app}_{f,\pm}$ (which are approximate solutions of $Airy = 0$), and two others are defined by $$ Airy(\psi_3) = e^{-\alpha z}, \qquad Airy(\psi_4) = {e^{\alpha z} - e^{- \alpha z} \over 2 \alpha} . $$ Note that $\psi_4$ does not decay to $0$. On the other hand, $\psi_3$ decays slowly and will be pivotal in the dispersion relation for the adjoint operator. We explicitly have $$ \psi_3 = G^{Ai} \star e^{- \alpha z} + O(\nu^{3/4}) . $$ Note that $\psi_3$ is explicitly given by $$ \psi_3(x) = {1 \over \eps W^{Ai}} \Bigl( \phi_{f,-}^{app}(x) \int_0^x \phi_{f,+}^{app}(y) e^{- \alpha y} dy + \phi_{f,+}^{app}(x) \int_x^{+ \infty} \phi_{f,-}^{app}(y) e^{- \alpha y} dy \Bigr) . $$ Note also that for bounded $z$, and in particular in the critical layer, $e^{-\alpha z}$ may be replaced by $1$, up to $O(\alpha)$ smaller terms. We define $\phi^{t,app}_{f,-} = \phi^{app}_{f,-}$, which is an approximate solution of ${\cal A} = 0$ and an approximate fast decaying mode. Note that \beq \label{partdisper1} {\partial_z \phi^{app}_{f,-}(0) \over \phi^{app}_{f,-}(0)} = \gamma {Ai'(- \gamma z_c) \over Ai(- \gamma z_c)} + O(\nu^{1/4}) . \eeq \subsection{Construction of $\phi^{t,app}_{s,-}$} The construction of $\phi^{t,app}_{s,-}$ is done by iteration. We start from $\psi^t_{-,0} = 1$, which leads to a first guess $$ \psi_0 = e^{- \alpha z}. $$ Note that the corresponding error is \beq \label{e0} e_0 = Orr^t(\psi_0) = - 2 \alpha U_s' e^{-\alpha z}, \eeq which is of order $O(\alpha)$. We thus introduce a first corrector to $\psi_0$ defined by $$ Ray^t_\alpha(\psi_1) = - e_0. $$ We then have \beq \label{iteration2} Orr^t(\psi_0 + \psi_1) = - \eps Diff(\psi_1), \eeq where $Diff(\psi_1)$ is defined in Section \ref{sec23}.
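The iteration just described relies on the conjugation identity (\ref{conjugation}) between $Ray_\alpha$ and $Ray^t_\alpha$, which can be verified symbolically for a generic profile. A minimal sympy sketch (not part of the original argument):

```python
import sympy as sp

z, c, a = sp.symbols('z c alpha')
U = sp.Function('U')(z)          # generic shear profile U_s(z)
psi = sp.Function('psi')(z)

def ray(phi):
    # Ray_alpha(phi) = (U - c)(phi'' - a^2 phi) - U'' phi
    return (U - c) * (sp.diff(phi, z, 2) - a**2 * phi) - sp.diff(U, z, 2) * phi

def ray_t(phi):
    # Ray^t_alpha(phi) = (d^2/dz^2 - a^2)[(U - c) phi] - U'' phi
    w = (U - c) * phi
    return sp.diff(w, z, 2) - a**2 * w - sp.diff(U, z, 2) * phi

# conjugation identity: Ray^t(psi) = (U - c)^(-1) Ray((U - c) psi)
diff_expr = sp.simplify(ray_t(psi) - ray((U - c) * psi) / (U - c))
assert diff_expr == 0
print("conjugation identity verified")
```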
We have $$ \psi_1 = - (U_s(z) - c)^{-1} f_1, $$ where $$ f_1 = RaySolver_\alpha \Bigl( (U_s - c) e_0(x) \Bigr) = - 2 \alpha G_{R,\alpha} \star (U_s - c) U_s' e^{-\alpha z}, $$ where $G_{R,\alpha}$ is the Green function of Rayleigh's operator (see \cite{GN}). In order to study the singularity of $\psi_1$ we write $f_1$ in the form $$ f_1 = f_1(z_c) + (U_s(z) - c) g_1(z), $$ where $g_1$ is a smooth function. In particular $Diff(g_1)$ is also smooth. Note that, as $e_0$ is of order $O(\alpha)$, $f_1(z_c)$ is also of order $O(\alpha)$. Therefore $\psi_1$ has a singularity at $z_c$, which is exactly of the form $$ \psi_1 = \psi_1^1 + \psi_1^2 := - {f_1(z_c) \over U_s (z) - c} - g_1(z). $$ In particular, the leading term in $-\eps Diff(\psi_1)$ is $$ -\eps Diff(\psi_1) = \eps f_1(z_c) Diff \Bigl( {1 \over U_s(z) - c} \Bigr) + O(\eps). $$ To remove the singularity at leading order we introduce $\psi_2$ defined by $$ \partial^2_z Airy \, \psi_2 = \eps \partial_z^4 \psi_1^1 = - \eps f_1(z_c) \partial_z^4 \Bigl ({ 1 \over U_s(z) - c} \Bigr). $$ This leads to \beq \label{theta2} Airy \, \psi_2 = - \eps f_1(z_c) \partial_z^2 \Bigl( { 1 \over U_s(z) - c } \Bigr). \eeq We recall that $$ Airy \, \psi_2 = \Bigl( (U_s(z) - c ) - \eps \partial_z^2 \Bigr) \psi_2. $$ Next we define $\psi_2^1$ by $$ \psi_2 = {f_1(z_c) \over U_s(z) -c } + \psi_2^1. $$ We have at leading order $$ Airy \, \psi_2^1 = - f_1(z_c), $$ thus at leading order $$ \psi_2^1 = - f_1(z_c) \psi_3(z), $$ leading to $$ \psi_2 = {f_1(z_c) \over U_s(z) -c } - f_1(z_c) \psi_3(z). $$ Summing up, this gives an approximate mode of the form \beq \label{approxmode} \phi^{t,app}_{s,-}(z) = e^{-\alpha z} - f_1(z_c) \psi_3(z) - g_1(z) +\cdots. \eeq Note in particular that \beq \label{disperstwo} {\partial_z \phi^{t,app}_{s,-}(0) \over \phi^{t,app}_{s,-}(0) } = O(\nu^{-1/4}).
\eeq \subsection{Dispersion relation} The dispersion relation is \beq \label{adjointdispersion} {\partial_z \phi^{t,app}_{s,-}(0) \over \phi^{t,app}_{s,-}(0) } = \gamma {Ai'(- \gamma z_c) \over Ai(- \gamma z_c)}, \eeq both sides being of size $O(\nu^{-1/4})$. This dispersion relation should be the same as that of Orr Sommerfeld, namely (\ref{disper}), but this is not obvious (and does not seem to be proven, even at a formal level, in the physics literature). \subsection{Use of the adjoint} Let us recall in a general setting how to use the adjoint in order to study the perturbation of the spectrum of an operator. Let $A(\eps)$ be a smooth family of linear operators, such that $A(\eps)$ has an eigenvector $u(\eps)$ with corresponding eigenvalue $\lambda(\eps)$. Assuming that $A(\eps)$, $u(\eps)$ and $\lambda(\eps)$ have expansions in $\eps$, of the form $$ A = A_0 + \eps A_1 + \eps^2 A_2 + \cdots, \qquad u = e_0 + \eps e_1 + \cdots, \qquad \lambda = \lambda_0 + \eps \lambda_1 + \cdots, $$ and writing $A(\eps) u(\eps) = \lambda(\eps) u(\eps)$ we get $$ A_0 e_0 = \lambda_0 e_0 $$ and at first order \beq \label{first1} (A_0 - \lambda_0) e_1 = \lambda_1 e_0 - A_1 e_0. \eeq Note that $A_0 - \lambda_0$ is not invertible. Therefore we must choose $\lambda_1$ such that $$ \lambda_1 e_0 - A_1 e_0 \in Range(A_0 - \lambda_0) = ker(A_0^t - \lambda_0)^{\perp}. $$ In particular for any $v \in ker(A_0^t - \lambda_0)$, $$ \lambda_1 (e_0 | v) = (A_1 e_0 | v) , $$ which gives $\lambda_1$. For this choice of $\lambda_1$, (\ref{first1}) may be solved, since the right hand side is in the range of $A_0 - \lambda_0$. The construction may then be iterated to get a whole expansion of $\lambda(\eps)$ and $u(\eps)$. The previous arguments are of course purely formal. Conversely, if we know that $A(\eps)$ has such an expansion in powers of $\eps$, then, for any arbitrarily large $N$ we may construct approximate eigenvalues and normalized eigenvectors such that $$ A(\eps) u(\eps) = \lambda(\eps) u(\eps) + O(\eps^N).
$$ Note that the adjoint is precisely used to solve equations involving $A_0 - \lambda_0$ when $\lambda_0$ is in the spectrum of $A_0$. \section{Other boundary conditions for Navier Stokes equations} The previous linear analysis may be extended to nearby systems, namely to genuine Navier Stokes equations with nearby boundary conditions like suction or Navier boundary conditions. Let us detail the case of the Navier boundary condition and let us consider incompressible Navier Stokes equations in a half plane \beq \label{NS1-navier-boundary} \partial_t u^\nu + (u^\nu \cdot \nabla) u^\nu - \nu \Delta u^\nu + \nabla p^\nu = f^\nu, \eeq \beq \label{NS2-navier-boundary} \nabla \cdot u^\nu = 0 , \eeq together with the Navier boundary condition \beq \label{navier} u_1 = \beta \omega, \qquad u_2 = 0 \qquad \hbox{for} \qquad z = 0, \eeq where $u_1$ is the tangential velocity, $u_2$ the normal velocity, $\beta$ a constant and $\omega = -(\partial_z^2 \psi - \alpha^2 \psi)$. (Note that $\omega=\nabla\times u=\partial_xu_2-\partial_zu_1$.) We are interested in the stability of a shear layer profile $U(z) = (U_s(z),0)$. The objective of this section is to prove that long wave instabilities persist provided $\beta$ is small enough. More precisely we will prove the following claim. \medskip {\it Claim: There exists $\beta_0$, small enough, such that if $\beta \le \beta_0 \nu^{1/4}$ then there exists a linearly unstable mode to Navier Stokes equations with Navier boundary condition.} \medskip As for the Dirichlet boundary condition we look for velocities of the form $$ u= \nabla^\perp \Bigl( e^{i \alpha (x - c t) } \psi(z) \Bigr)=e^{i \alpha (x - c t) } (\partial_z \psi, -i\alpha \psi), $$ which now leads to the same Orr Sommerfeld operator, except that the Navier boundary condition (\ref{navier}) leads to $$ \psi(0) =0, \qquad \partial_z \psi(0) =- \beta \partial_z^2\psi(0).
$$ As previously we must find a linear combination of $\psi_{s,-}$ and $\psi_{f,-}$ which satisfies (\ref{navier}). First note that $\omega = -(\partial_z^2 \psi - \alpha^2 \psi)$. Neglecting the $\alpha^2$ term, this gives $$ a \psi_{f,-}(0) + b \psi_{s,-} (0) = 0, $$ $$ a \partial_z \psi_{f,-}(0) + b \partial_z \psi_{s,-}(0) = - \beta \Bigl( a \partial_z^2 \psi_{f,-}(0) + b \partial_z^2 \psi_{s,-}(0) \Bigr) . $$ Note that at leading order $$ \partial_z^2 \psi_{f,-}(0) = \gamma^2 Ai(-\gamma z_c) . $$ Neglecting $\beta \partial_z^2 \psi_{s,-}(0)$ in front of $\partial_z \psi_{s,-}(0)$, this leads to \beq \label{disperbeta} { \psi_{f,-} (0) \over \partial_z \psi_{f,-} (0) +\beta \partial_z^2 \psi_{f,-}(0)} = { \psi_{s,-} (0) \over \partial_z \psi_{s,-} (0)} \eeq and to the dispersion relation \beq \label{disperbeta2} \alpha {U_+^2 \over U_s'(0)^2} - {c \over U_s'(0)} = \gamma^{-1} {Ai(2, - \gamma z_c) \over Ai(1,-\gamma z_c) +\beta \gamma Ai(- \gamma z_c)} +O(\alpha^2). \eeq The study of the dispersion relation continues as in the previous section, except that the Tietjens function is replaced by $$ Ti(z) = {Ai(2,z e^{- 5 i \pi / 6}) \over z e^{- 5 i \pi / 6} \Bigl( Ai(1,z e^{- 5 i \pi / 6}) +i^{1/3} \alpha_0^{1/3} \beta \nu^{-1/4} U_s'(z_c)^{1/3} Ai(z e^{-5 i \pi / 6}) \Bigr)} . $$ We therefore look at $\beta$ of order $\nu^{1/4}$ and define $$ \beta = \beta_0 \nu^{1/4} . $$ A continuity argument immediately gives that, provided $\beta_0$ is small enough, there exists an unstable mode to Navier Stokes equations with Navier boundary condition, which ends the proof of the claim. \section{Rotation or magnetic field} In this section we focus on Navier Stokes equations in a rotating frame, which read \beq \label{NS1C} \partial_t u^\nu + (u^\nu \cdot \nabla) u^\nu - \nu \Delta u^\nu + \eta e \times u^\nu + \nabla p^\nu = f^\nu, \eeq \beq \label{NS2C} \nabla \cdot u^\nu = 0 , \eeq where $e$ is a fixed vector and $\eta$ a scalar.
Note that $\eta e \times u$ is the usual Coriolis force (or magnetic force in the case of a charged fluid). We will focus on the case where $e$ is perpendicular to the plane $z = 0$. The case of an arbitrary $e$ can be treated using similar ideas. We will prove the following claim. \medskip {\it Claim: there exists $N$ large enough ($N \ge 3$) such that if $\eta \le \nu^N$, then there exists an approximate linearly unstable mode to \eqref{NS1C} and \eqref{NS2C}.} \medskip If $\eta \to + \infty$ as $\nu \to 0$, Ekman layers appear on the boundary, with a typical size $\sqrt{\nu / \eta}$. These layers are known to be linearly stable provided the associated Reynolds number $Re = \| u^\nu \|_{L^\infty} / \sqrt{\eta \nu}$ remains small enough. If $\eta^{-1}$ is of order $\nu$, they are known to be linearly and nonlinearly unstable when this Reynolds number is large enough. In this paper we are interested in the case where $\eta$ is small. In this case the size of the boundary layer is $O(\sqrt{\nu})$, as for a genuine Prandtl layer. We focus on the evolution of the stability of such layers as $\eta$ increases, but remains small. The case where $\eta$ is large, but negligible with respect to $\nu^{-1}$, remains open. We start with a shear layer, as for Navier Stokes equations. We first note that the Coriolis force $e \times u$ may be absorbed into the pressure term. For sufficiently small $\eta$ we construct a genuine three dimensional instability starting from a two dimensional one. Note that one of the main interests of this construction is that the instability is fully three dimensional, and not two dimensional as for genuine Navier Stokes equations. Namely, let us choose $e_1$, the direction of the $x$ variable, parallel to the plane $z = 0$. Using the Orr Sommerfeld equation, we may construct an instability $u_0$ which is independent of $y$ and only depends on $x$ and $z$.
This instability $u_0$ solves (\ref{NS1C}) up to the Coriolis term $\eta e \times u_0$, a term in the direction $y$. We therefore alter Orr Sommerfeld to take into account this velocity in the $y$ direction. More precisely we look for $(u,v,w)$ of the form $$ (u,v,w) = e^{i \alpha (x - c t)} ( \psi'(z), v(z), -i \alpha \psi(z) ). $$ This leads to the following modified Orr Sommerfeld equations \beq \label{OrrCor1} (U_s - c) (\partial_z^2 - \alpha^2) \psi - U_s'' \psi - { \nu \over i \alpha} (\partial_z^2 - \alpha^2)^2 \psi - {\eta \over i \alpha} \partial_z v = 0, \eeq \beq \label{OrrCor2} (U_s - c) v - \eps (\partial_z^2 - \alpha^2) v - {\eta \over i \alpha} \psi' = 0. \eeq Note that \eqref{OrrCor1}-\eqref{OrrCor2} is a coupling between a genuine Orr Sommerfeld equation and an Airy type equation, through terms of order $\eta / i \alpha$. Let $$ Airy(v) = (U_s - c + \eps \alpha^2) v - \eps \partial_z^2 v. $$ Let us study the evolution of an eigenvector $\psi(\eta)$, $v(\eta)$ with corresponding eigenvalue $c(\eta)$ as $\eta$ increases from $0$. We will only detail formal computations and look for an asymptotic expansion of these quantities in $\eta$. This leads to the construction of approximate unstable eigenmodes, a first step in the construction of nonlinear instabilities. We have, making explicit the dependency of the operators on $c$, $$ Orr(c,\psi) = {\eta \over i \alpha} \partial_z v, $$ $$ Airy(c,v) = {\eta \over i \alpha} \psi'. $$ We look for expansions of the form $$ c = c_0 + \eta^2 c_1 + \eta^4 c_2 + \cdots, $$ $$ \psi = \psi_0 + \eta^2 \psi_1 + \eta^4 \psi_2 + \cdots $$ and $$ v = \eta v_0 + \eta^3 v_1 + \cdots. $$ This leads to $Orr(c_0,\psi_0) = 0$, meaning that we construct a branch of solutions starting from a long wave instability $(c_0,\psi_0)$ of the non rotating Navier Stokes equations. Moreover, $$ Airy(c_0,v_0) = {1 \over i \alpha} \psi_0'.
$$ The next order gives \beq \label{nextorder} Orr(c_0,\psi_1) + Orr(c_1,\psi_0) = {1 \over i \alpha} \partial_z v_0 + c_0 (\partial_z^2 - \alpha^2) \psi_0. \eeq Let $\psi_1^t$ be the eigenvector of $Orr^t$ with the same eigenvalue $c_0$. Then (\ref{nextorder}) may be solved provided $$ \Bigl( Orr(c_1,\psi_0), \psi_1^t \Bigr) = \Bigl(-i \alpha^{-1} \partial_z v_0 + c_0 (\partial_z^2 - \alpha^2) \psi_0 , \psi_1^t \Bigr), $$ or, expanding the Orr Sommerfeld operator, and denoting by $v_1$, $\omega_1$, $v_1^t$ and $\omega_1^t$ the corresponding velocity fields and vorticities, we get \beq \label{c-1-solution} \begin{split} c_1 \int v_1 \bar v_1^t &= -i \alpha^{-1} ( \partial_z v_0, \psi_1^t) - \int U_s \omega_1 \bar \psi_1^t + \int U_s'' \psi_1 \bar \psi_1^t\\ & \quad+ {\nu \over i \alpha} \int \omega_1 \bar \omega_1^t+\int c_0 \omega_1 \bar \psi_1^t, \end{split} \eeq which explicitly gives $c_1$. The formal construction of an approximate eigenmode can then be continued, up to any arbitrarily high order. Keeping track of the various dependencies in $\nu$ we get that, provided $\eta \le \nu^N$ with $N$ large enough ($N \ge 3$), there exists an approximate unstable eigenmode to \eqref{NS1C} and \eqref{NS2C}. Using the arguments developed in \cite{GN1} it is then possible to get an exact description of the corresponding Green function, and an exact unstable eigenmode. We will not detail these points here, which are routine but lengthy. \section{Effect of density} Similar techniques may be used to study compressible Navier Stokes equations or inhomogeneous Navier Stokes equations, provided the effects of compressibility or inhomogeneity are very small. Let us detail the case of compressible fluids.
Let us focus on the classical barotropic compressible Navier Stokes equations \beq \label{NSC1} \partial_t \rho + \nabla \cdot (\rho u) = 0, \eeq \beq \label{NSC2} \partial_t u + (u \cdot\nabla) u - \mu \rho^{-1} \Delta u - \xi \rho^{-1} \nabla (\nabla \cdot u) + {\nabla p(\rho) \over \eta^2 \rho} = 0 \eeq with boundary condition $u = 0$ on the boundary $y = 0$. In these equations, $\eta$ is the Mach number, which is assumed to be small, and $\rho$ is a small perturbation of a constant, fixed to be $1$. As $\eta$ goes to $0$, formally $\rho$ goes to $1$, therefore $\nabla \cdot u$ goes to $0$ and we recover the incompressible Navier Stokes equations. Let $h(\rho)$ be such that $$ \nabla h(\rho) = {\nabla p(\rho) \over \rho} . $$ The objective of this paper is to study the stability of the shear layer profile $\rho_s = 1$, $U_s = (U_s(y),0)$. More precisely, we will prove formally that such a shear profile exhibits long wave instabilities provided $\eta$ is small enough. The linearized system reads \beq \label{NSC1lin} \partial_t \rho + \nabla \cdot (u + \rho U_s) = 0, \eeq \beq \label{NSC2lin} \partial_t u + ( U_s \cdot \nabla) u + ( u \cdot \nabla) U_s - \mu \Delta u + \mu \rho \Delta U_s - \xi \nabla (\nabla \cdot u) + h'(1) {\nabla \rho \over \eta^2} = 0. \eeq We will prove the following claim. \medskip {\it Claim: there exists $N$ large enough such that if $\eta \le \nu^N$, then there exists a linear unstable mode to \eqref{NSC1lin} and \eqref{NSC2lin}.} \medskip The strategy is to construct an approximate growing mode of \eqref{NSC1lin} and \eqref{NSC2lin} starting from a long wave growing mode of the genuine incompressible Navier Stokes equations. As usual in such incompressible limits, $\rho$ is of order $O(\eta^2)$. Moreover, the velocity field will only be divergence free at leading order.
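The order $O(\eta^2)$ of $\rho$ can be read off from the pressure term (a formal sketch, consistent with the construction that follows): for $h'(1) \nabla \rho / \eta^2$ in \eqref{NSC2lin} to remain of order one and balance the pressure gradient of the underlying incompressible instability (denoted $\nabla p_0$ below), one needs
$$
\rho = \eta^2 \rho_2 + O(\eta^4), \qquad h'(1) \nabla \rho_2 = - \nabla p_0 + O(\eta^2).
$$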
We will therefore look for velocities and density of the form $$ (u,w) = (u_0,w_0) + \eta^2 (u_2,w_2) + \eta^2 \nabla \theta_2 + \cdots, $$ $$ \rho = \eta^2 \rho_2 + \cdots $$ and $$ c = c_0 + \eta^2 c_2 + \cdots . $$ We start with a linear instability of the incompressible Navier Stokes equations, of the form $$ (u_0,w_0) = e^{i \alpha (x - c t)} ( \psi'(z), -i \alpha \psi(z) ) . $$ This instability has a corresponding pressure $p_0$. As $\rho$ will be of order $O(\eta^2)$, $\mu \rho \Delta U_s$ is negligible at this stage, and moreover $\nabla \cdot u = 0$, hence we define $\rho_2$ by $$ h'(1) \nabla \rho_2 = - \nabla p_0 . $$ Now using (\ref{NSC1lin}) we get $$ \Delta \theta_2 = - \partial_t \rho_2 - \nabla \cdot (\rho_2 U_s), $$ which defines $\theta_2$. We then project (\ref{NSC2lin}) on divergence free vector fields and on gradient vector fields. The projection on gradients allows us to define $\rho_4$. The projection on divergence free vector fields gives $(u_2,w_2)$, up to a solvability condition which gives $c_2$. This construction can be iterated at any order. This defines an approximate unstable mode. Using the techniques of \cite{GN1} we can study the corresponding Green functions and construct an exact unstable mode. This proves the claim, by keeping track of the $\nu$ dependency of the various operators (we do not detail the precise computation of $N$). \bigskip \noindent {\bf Acknowledgment.} D. Bian is supported by NSFC under the contract 11871005. \small
TITLE: Question about the substitution rule. Where did I go wrong? QUESTION [1 upvotes]: $$\int (f(x))' dx = f(x) + c$$ if $u=g(x)$ then $$\int (f(u))'du = f(u)+c$$ But $$\int (f(g(x)))'dx = f(g(x))+c$$ Where did I go wrong? REPLY [1 votes]: In short, you forgot the chain rule. That is to say that if $u = g(x)$, then $du = g'(x)dx$. It is this that lets us say that $$ \int [f(g(x))]'dx =\int f'(g(x))g'(x)dx = f(g(x)) + C$$
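A quick numerical check of the chain-rule identity above (the particular $f$ and $g$ here are illustrative choices, not taken from the question):

```python
import math

# illustrative choice: f = sin, g(x) = x^2
def f(x): return math.sin(x)
def fp(x): return math.cos(x)      # f'
def g(x): return x ** 2
def gp(x): return 2 * x            # g'

def numdiff(h, x, eps=1e-6):
    """Central finite difference approximation of h'(x)."""
    return (h(x + eps) - h(x - eps)) / (2 * eps)

x0 = 0.7
lhs = numdiff(lambda t: f(g(t)), x0)   # [f(g(x))]' computed numerically
rhs = fp(g(x0)) * gp(x0)               # f'(g(x)) g'(x), the chain rule
assert abs(lhs - rhs) < 1e-6
```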
TITLE: Matrix calculation for the Gauss-Newton method QUESTION [2 upvotes]: I'm looking into the derivation of the Gauss-Newton method. Why does $\frac 12(F_k+J_k\Delta x)^T(F_k+J_k\Delta x)$ become $\frac 12F_k^TF_k+F_k^TJ_k\Delta x+\frac12\Delta x^TJ_k^TJ_k\Delta x$? Is $F_k^TJ_k$ equal to $J_k^TF_k$? REPLY [1 votes]: Multiply the factors as if they were scalars, but with the additional observations that $(A + B)^T = A^T + B^T$ and $(AB)^T = B^T A^T$. If you work through the algebra you'll find that $$\frac 12(F_k+J_k\Delta x)^T(F_k+J_k\Delta x) = \frac 12F_k^TF_k+F_k^TJ_k\Delta x+\frac12\Delta x^TJ_k^TJ_k\Delta x\,.$$ As for your second question: $F_k^TJ_k$ and $J_k^TF_k$ are transposes of each other, not equal in general. But the two cross terms $\frac12 F_k^TJ_k\Delta x$ and $\frac12\Delta x^TJ_k^TF_k$ are $1\times 1$, i.e. scalars, so each equals its own transpose and they combine into the single term $F_k^TJ_k\Delta x$.
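The expansion is easy to verify numerically on a random instance (the sizes below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
F = rng.normal(size=(4, 1))    # residual vector F_k
J = rng.normal(size=(4, 3))    # Jacobian J_k
dx = rng.normal(size=(3, 1))   # step Delta x

# left- and right-hand sides of the quadratic expansion
lhs = 0.5 * (F + J @ dx).T @ (F + J @ dx)
rhs = 0.5 * F.T @ F + F.T @ J @ dx + 0.5 * dx.T @ J.T @ J @ dx
assert np.allclose(lhs, rhs)

# F^T J dx is 1x1, hence equal to its own transpose dx^T J^T F
assert np.allclose(F.T @ J @ dx, dx.T @ J.T @ F)
```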
TITLE: Showing a function has exactly one zero with IVT and Rolle's Theorem QUESTION [8 upvotes]: This is an exercise that appears on differential calculus exams at my university. I'm typing up a thorough response to this exercise here to share with my class, and maybe it'll help other students too. Let $f$ be the function given by $$f(x) = 3x -2\sin(x)+7\,.$$ Use the Intermediate Value Theorem to show that $f$ has at least one zero, and then use the Mean Value Theorem to show $f$ has exactly one zero. See the sidebar under the Linked header for similar questions. REPLY [19 votes]: First, let's recall what the Intermediate Value Theorem says. Informally, this theorem just says that if you have a function $f$ that is continuous on some interval $[a,b]$, every possible output value between $f(a)$ and $f(b)$ must get hit by $f$ for some input in that interval. A special case of this is when $f(a)$ and $f(b)$ have different sign (one is negative and the other is positive): then the graph of $f$ must cross the $x$-axis between $a$ and $b$ (so $f$ has a zero between $a$ and $b$). As an example, for the continuous function with the above graph (green), since the values of the function at $2$ and at $-2$ (red) have different sign, the function must have a zero (yellow) on the interval $(-2,2)$. Now it may be very difficult to calculate the actual value of that zero, but the strength of the Intermediate Value Theorem is that it allows us to say a zero exists without ever calculating it. So now to show that our function $f(x) = 3x -2\sin(x)+7$ has a zero, we just have to find a pair of inputs for $f$ that return outputs of different sign. First we'll input our favorite number, zero, and get that $f(0) = 7$. Great! So now we need to find an input for which $f$ outputs a negative number. This is a little tricky, but it helps to notice that $\sin(x)$ is bounded; it bounces between $1$ and $-1$.
So in our expression for $f(x)$, the $-2\sin(x)+7$ is bounded too; it's going to bounce between $-2(1)+7$ and $-2(-1)+7$, so between $5$ and $9$. To get a negative output then, we'll need to plug in a sufficiently large negative number so that the $3x$ summand "overpowers" the $-2\sin(x)+7$ summand and makes the value of the function negative. Notice that $x=-4$ works: $$f(-4) = -12 + (-2\sin(-4) + 7) < -12 + (9) = -3\,.$$ So $f(-4)$ must be negative, and we can say by the Intermediate Value Theorem that $f$ must have at least one zero on the interval $(-4,0)$. Arguing that $f$ has exactly one zero is a bit trickier. First let's recall what the Mean Value Theorem says. Informally, if you've got a function that is continuous and differentiable on some interval, then there must be an input in that interval such that the rate of change of the function at that input is equal to the average rate of change of the function over the entire interval. For our response though it'll turn out to be helpful to think of a special case of the Mean Value Theorem called Rolle's Theorem, where this average rate of change of the function is zero (i.e. when the values of the function at the endpoints are equal). In this case the theorem reduces to saying that there must be an input in the interval such that the rate of change of the function at that input is zero, which is the same as saying the slope of the tangent line through that corresponding point on the graph must be zero, which is the same as saying the value of the derivative at that point must be zero. For example, look at the function with its graph (green) above. The function is continuous and differentiable on the interval between its two zeros (yellow). So by Rolle's Theorem, there must be a point on the graph (red) such that the slope of the tangent line at that point (violet) is the same as the slope of the line through the two (yellow) endpoints of the interval, which is zero.
Thinking of the Mean Value Theorem this way is actually a helpful hint for how to show what we need to show: we can apply the Mean Value Theorem this way if we have at least two zeros of the function to work with. Showing that $f$ has exactly one zero is equivalent to showing that $f$ cannot have more than one zero, so the big idea is that we're going to (falsely) assume that $f$ has at least two zeros, apply the Mean Value Theorem (Rolle's Theorem specifically), and arrive at a contradiction. This sort of argument is called a proof by contradiction. So let's do it! Suppose for the sake of arriving at a contradiction that $f$ has at least two zeros. So there are at least two inputs $a$ and $b$ such that $f(a) = f(b) = 0$. The average rate of change of $f$ on the interval $[a,b]$ is $$\frac{f(b)-f(a)}{b-a} = 0\,,$$ and so by the Mean Value Theorem there must be some input $c$ in the interval $(a,b)$ such that the rate of change of $f$ at this point is zero too. I.e. $f'(c) = 0$. Now we can calculate the derivative of $f$, which is $f'(x) = 3-2\cos(x)$. Remember that the value of $\cos(x)$ bounces between $1$ and $-1$, so the value of $f'$ bounces between $1$ and $5$. In particular, $f'$ is always positive. But the Mean Value Theorem told us some input $c$ exists such that $f'(c)=0$. If $f'$ is always positive, this must be incorrect! No such $c$ can exist! And so our original assumption that led us to believe that such a value of $c$ existed, the assumption that $f$ has two or more zeros, must have been wrong! Since $f$ doesn't have more than one zero, it must have exactly one zero, which is exactly what we wanted to show. Now this is a lot to parse and condense into a response. After understanding everything above, here's how I would write it down if this question were asked on an exam: Notice that $f(0) = 7$, which is greater than zero, and that since $\sin(x)$ is bounded by $-1$ and $1$, $f(-4) = 3(-4) - 2\sin(-4) + 7 < 0$.
Then by the Intermediate Value Theorem, there must be some input $c$ in the interval $(-4,0)$ such that $f(c) = 0$. So $f$ has at least one zero. Next suppose that $f$ doesn't have exactly one zero, so it has at least two zeros, say $a$ and $b$. Since $f(a) = f(b) = 0$, the average rate of change of the function on the interval $[a,b]$ is $$\frac{f(b)-f(a)}{b-a} = 0\,.$$ By the Mean Value Theorem, there must be some $c$ in the interval $(a,b)$ such that $f'(c)=0$. But $f'(x) = 3-2\cos(x)$. Since $\cos(x)$ is bounded between $1$ and $-1$, the value of $f'$ is bounded between $1$ and $5$. In particular $f'$ is never zero, so no such value of $c$ can exist, and so it can't be the case that $f$ has at least two zeros! Therefore, $f$ must have exactly one zero.
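The two halves of the argument can also be checked numerically; the bisection below locates the single zero whose existence and uniqueness the proof establishes (the grid spacing and tolerance are illustrative choices):

```python
import math

def f(x):
    return 3 * x - 2 * math.sin(x) + 7

def fp(x):
    return 3 - 2 * math.cos(x)

# IVT hypotheses: continuity plus a sign change on [-4, 0]
assert f(-4) < 0 < f(0)

# f' stays in [1, 5] (spot-checked on a grid), so f is strictly increasing
assert all(1 <= fp(-4 + 0.01 * k) <= 5 for k in range(401))

# bisection locates the unique zero guaranteed by the argument above
a, b = -4.0, 0.0
for _ in range(60):
    m = (a + b) / 2
    if f(a) * f(m) <= 0:
        b = m
    else:
        a = m

root = (a + b) / 2
assert abs(f(root)) < 1e-9
```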
TITLE: Finite set of points as hypersurfaces in $\mathbb{A}^2(\mathbb{R})$ QUESTION [0 upvotes]: We know that a finite set of points $(a_1,b_1),\ldots,(a_n,b_n)$ in $\mathbb{A}^2(\mathbb{R})$ is a hypersurface. Indeed, if we consider $$f=\prod_{i=1}^n((x-a_i)^2+(y-b_i)^2),$$ then we have that $\{(a_1,b_1),\ldots,(a_n,b_n)\}=V(f)$. My question here is: is it possible to obtain the same result with $f$ irreducible? That is, is every finite set of points in $\mathbb{A}^2(\mathbb{R})$ the zero set of an irreducible polynomial? REPLY [2 votes]: Any finite set of points in the plane over an infinite field, after a change of variables, can be assumed to lie on $y=0$. So, they are of the form $(a_i,0)$. So, when the field is the reals, they lie on the irreducible curve $p(x)^2+y^2=0$, where $p(x)=\prod (x-a_i)$. I explain my first sentence as asked by the OP. Let $(a_i,b_i)$ be the points with coordinates in the infinite field $K$. For any pair $i\neq j$, there can be at most one $t\in K$ such that $a_i+tb_i=a_j+tb_j$. Since the field is infinite, there is some $u\in K$ so that $a_i+ub_i\neq a_j+ub_j$ for any $i\neq j$. So, after a change of variables, we may assume all the $a_i$s are distinct. Let $p(x)$ be a polynomial of degree $n-1$, where $n$ is the number of points. We solve for the coefficients of $p$ from the equations $b_i=p(a_i)$ for all $i$, which just uses the fact that Vandermonde matrices are invertible since the $a_i$s are distinct. Then your points lie on $y=p(x)$, and a change of variables makes this curve $y=0$.
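A small numeric sanity check of both constructions (the sample points are illustrative, not from the question):

```python
# product-of-squared-distances construction from the question
points = [(1.0, 2.0), (3.0, 5.0), (-2.0, 0.5)]

def f(x, y):
    val = 1.0
    for a, b in points:
        val *= (x - a) ** 2 + (y - b) ** 2
    return val

assert all(f(a, b) == 0.0 for a, b in points)  # vanishes on the chosen points
assert f(0.0, 0.0) > 0.0                       # positive away from them

# after moving the points onto the line y = 0, the answer's single polynomial
# p(x)^2 + y^2, with p(x) = prod_i (x - a_i), also cuts out exactly (a_i, 0)
a_vals = [1.0, 3.0, -2.0]

def g(x, y):
    p = 1.0
    for a in a_vals:
        p *= (x - a)
    return p ** 2 + y ** 2

assert all(g(a, 0.0) == 0.0 for a in a_vals)
assert g(0.5, 0.3) > 0.0
```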
TITLE: Numerical solution of a system of PDEs (first-order, linear, homogeneous) by the Lax-Wendroff method QUESTION [0 upvotes]: I'm trying to solve the linear system of PDEs $$\frac{\partial \bar{u}}{\partial t}+\begin{pmatrix} \frac{2}{5} & -\frac{72}{5} & \frac{64}{5} \\ \frac{6}{5} & \frac{-56}{5} & \frac{42}{5} \\ \frac{12}{5} & \frac{-52}{5} & \frac{34}{5} \\ \end{pmatrix} \frac{\partial \bar{u}}{\partial x}=0$$ with conditions $u_{1}(x,0)=3,x<0$ $u_{1}(x,0)=1,x>0$ $u_{2}(x,0)=1,x<0$ $u_{2}(x,0)=2,x>0$ $u_{3}(x,0)=2,x<0$ $u_{3}(x,0)=3,x>0$ numerically (with the help of the Lax-Wendroff method). So, I've written a program in MATLAB.

function [Xg,Yg,st,APlus,AMinus] = Sett21(N)
A=[2/5 -72/5 64/5 ; 6/5 -56/5 42/5 ; 12/5 -52/5 34/5]
A1=zeros(3,3);
for i=1:3
    for j=1:3
        if (A(i,j)>0)
            A1(i,j)=A(i,j);
        else
            A1(i,j)=0;
        end;
    end;
end;
A2=zeros(3,3);
for i=1:3
    for j=1:3
        if (A(i,j)<0)
            A2(i,j)=A(i,j);
        else
            A2(i,j)=0;
        end;
    end;
end;
for i=1:N
    x(i)=-3+(6/N)*(i-1);
end;
for i=1:(10*N)
    y(i)=(5/(10*N))*(i-1);
end;
Xg=x;
Yg=y;
st=3;
APlus=A1;
AMinus=A2;

That makes a grid on (X,T) and something more, but this won't be used this time. Then, I've made another program that calculates the values of $\bar{u}(x,t)$ at the points of the grid.
[Xg,Yg,st,APlus,AMinus] = Sett21(50);
N=50;
Ukt=zeros(3,N,10*N);
for t=1:3
    for i=1:N
        if (t==1)
            if (Xg(i)<=0)
                Ukt(1,i,1)=3;
            else
                Ukt(1,i,1)=1;
            end;
        elseif (t==2)
            if (Xg(i)<=0)
                Ukt(2,i,1)=1;
            else
                Ukt(2,i,1)=2;
            end;
        elseif (t==3)
            if (Xg(i)<=0)
                Ukt(3,i,1)=2;
            else
                Ukt(3,i,1)=3;
            end;
        end;
    end;
end;
h1=6/N;
h2=5/(10*N);
A=[2/5 -72/5 64/5; 6/5 -56/5 42/5; 12/5 -52/5 34/5];
T1=[-2/5+h1/h2 72/5 -64/5; -6/5 56/5+h1/h2 -42/5; -12/5 52/5 -34/5+h1/h2];
T2=[2/5+h1/h2 -72/5 64/5; 6/5 -56/5+h1/h2 42/5; 12/5 -52/5 34/5+h1/h2];
for j=2:10*N
    for i=2:N-1
        vec=(1/2)*(eye(3)+(h2/h1)*A)*[Ukt(1,i-1,j-1);Ukt(2,i-1,j-1);Ukt(3,i-1,j-1)] ...
            +(1/2)*(eye(3)-(h2/h1)*A)*[Ukt(1,i+1,j-1);Ukt(2,i+1,j-1);Ukt(3,i+1,j-1)];
        Ukt(1,i,j)=vec(1);
        Ukt(2,i,j)=vec(2);
        Ukt(3,i,j)=vec(3);
    end;
    vec1=-A*[Ukt(1,2,j); Ukt(2,2,j); Ukt(3,2,j)]+(h1/h2).*[Ukt(1,1,j-1); Ukt(2,1,j-1); Ukt(3,1,j-1)];
    vecx=T1\vec1;
    Ukt(1,1,j)=vecx(1);
    Ukt(2,1,j)=vecx(2);
    Ukt(3,1,j)=vecx(3);
    vec2=A*[Ukt(1,N-1,j);Ukt(2,N-1,j);Ukt(3,N-1,j)]+(h1/h2).*[Ukt(1,N,j-1);Ukt(2,N,j-1);Ukt(3,N,j-1)];
    vecxx=T2\vec2;
    Ukt(1,N,j)=vecxx(1);
    Ukt(2,N,j)=vecxx(2);
    Ukt(3,N,j)=vecxx(3);
end;
for i=1:N
    for j=1:10*N
        Ukt1(i,j)=Ukt(1,i,j);
    end;
end;
for i=1:N
    for j=1:10*N
        Ukt2(i,j)=Ukt(2,i,j);
    end;
end;
for i=1:N
    for j=1:10*N
        Ukt3(i,j)=Ukt(3,i,j);
    end;
end;
mesh(Yg,Xg,Ukt1);

Something is going wrong, and the solution between the characteristics of this system blows up to infinity. I know this is not right, because this is a Riemann problem, so the solution should be constant between characteristics. I've been searching for the mistake for a month and can't find it. Please help me.
REPLY [0 votes]:

N=400;
Ukt=zeros(3,N,10*N);
for i=1:N
    x(i)=-3+(6/N)*(i-1);
end;
for i=1:(10*N)
    y(i)=(5/(10*N))*(i-1);
end;
Xg=x;
Yg=y;
for t=1:3
    for i=1:N
        if (t==1)
            if (Xg(i)<=0)
                Ukt(1,i,1)=3;
            else
                Ukt(1,i,1)=1;
            end;
        elseif (t==2)
            if (Xg(i)<=0)
                Ukt(2,i,1)=1;
            else
                Ukt(2,i,1)=2;
            end;
        elseif (t==3)
            if (Xg(i)<=0)
                Ukt(3,i,1)=2;
            else
                Ukt(3,i,1)=3;
            end;
        end;
    end;
end;

REPLY [0 votes]:

h1=6/N;
h2=5/(10*N);
alfa=h2/h1;
A=[2/5 -72/5 64/5; 6/5 -56/5 42/5; 12/5 -52/5 34/5];
T1=[-2/5+h1/h2 72/5 -64/5; -6/5 56/5+h1/h2 -42/5; -12/5 52/5 -34/5+h1/h2];
T2=[2/5+h1/h2 -72/5 64/5; 6/5 -56/5+h1/h2 42/5; 12/5 -52/5 34/5+h1/h2];
for j=1:10*N-1
    for i=2:N-1
        vec=(eye(3)-(alfa^2)*A^2)*[Ukt(1,i,j); Ukt(2,i,j); Ukt(3,i,j)] ...
            +((1/2)*alfa*A+(1/2)*(alfa^2)*A^2)*[Ukt(1,i-1,j); Ukt(2,i-1,j); Ukt(3,i-1,j)] ...
            +((-1/2)*alfa*A+(1/2)*(alfa^2)*A^2)*[Ukt(1,i+1,j); Ukt(2,i+1,j); Ukt(3,i+1,j)];
        Ukt(1,i,j+1)=vec(1);
        Ukt(2,i,j+1)=vec(2);
        Ukt(3,i,j+1)=vec(3);
    end;
    vec1=-A*[Ukt(1,2,j+1); Ukt(2,2,j+1); Ukt(3,2,j+1)]+(h1/h2)*[Ukt(1,1,j); Ukt(2,1,j); Ukt(3,1,j)];
    vecx=T1\vec1;
    Ukt(1,1,j+1)=vecx(1);
    Ukt(2,1,j+1)=vecx(2);
    Ukt(3,1,j+1)=vecx(3);
    vec2=A*[Ukt(1,N-1,j+1); Ukt(2,N-1,j+1); Ukt(3,N-1,j+1)]+(h1/h2)*[Ukt(1,N,j); Ukt(2,N,j); Ukt(3,N,j)];
    vecxx=T2\vec2;
    Ukt(1,N,j+1)=vecxx(1);
    Ukt(2,N,j+1)=vecxx(2);
    Ukt(3,N,j+1)=vecxx(3);
end;
for i=1:N
    for j=1:10*N
        Ukt1(i,j)=Ukt(1,i,j);
    end;
end;
for i=1:N
    for j=1:10*N
        Ukt2(i,j)=Ukt(2,i,j);
    end;
end;
for i=1:N
    for j=1:10*N
        Ukt3(i,j)=Ukt(3,i,j);
    end;
end;
%mesh(Yg,Xg,Ukt3)
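For readers not using MATLAB: the interior update in the second answer is the standard Lax-Wendroff stencil $U_i^{n+1} = (I - a^2A^2)U_i^n + \tfrac12(aA + a^2A^2)U_{i-1}^n + \tfrac12(-aA + a^2A^2)U_{i+1}^n$ with $a = \Delta t/\Delta x$ (note that the question's own interior update is the first-order Lax-Friedrichs stencil instead). A minimal NumPy sketch of the same scheme, with illustrative grid sizes, CFL number, and a crude frozen-boundary treatment rather than the answers' boundary solves:

```python
import numpy as np

A = np.array([[2/5, -72/5, 64/5],
              [6/5, -56/5, 42/5],
              [12/5, -52/5, 34/5]])

N, steps = 200, 100                       # illustrative sizes
x = np.linspace(-3, 3, N)
dx = x[1] - x[0]
dt = 0.1 * dx / np.max(np.abs(np.linalg.eigvals(A)))  # CFL number 0.1
a = dt / dx

# Riemann initial data from the question, shape (3, N)
U = np.where(x <= 0,
             np.array([[3.0], [1.0], [2.0]]),
             np.array([[1.0], [2.0], [3.0]]))

I = np.eye(3)
C_m = 0.5 * (a * A + a**2 * A @ A)    # multiplies U_{i-1}
C_0 = I - a**2 * A @ A                # multiplies U_i
C_p = 0.5 * (-a * A + a**2 * A @ A)   # multiplies U_{i+1}

for _ in range(steps):
    V = U.copy()
    U[:, 1:-1] = C_0 @ V[:, 1:-1] + C_m @ V[:, :-2] + C_p @ V[:, 2:]
    U[:, 0], U[:, -1] = V[:, 0], V[:, -1]  # crude: freeze far-field states

assert np.all(np.isfinite(U))  # the solution stays bounded
```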
TITLE: If $x+iy=\sqrt\frac{1+i}{1-i}$, where $x$ and $y$ are real, prove that $x^2+y^2=1$ QUESTION [0 upvotes]: If $x+iy=\sqrt\frac{1+i}{1-i}$, where $x$ and $y$ are real, prove that $x^2+y^2=1$. I tried multiplying $\sqrt{(\frac{1+i}{1-i})(\frac{1+i}{1+i})}=\sqrt{i}$ but I'm not sure what to do after that. Thanks in advance :) REPLY [1 votes]: Just observe that the numerator and denominator have the same modulus, since $1+i$ and $1-i$ are conjugate, and the modulus is a multiplicative function.
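A direct numerical check of the identity, using Python's complex arithmetic:

```python
import cmath

# the question's expression, evaluated directly
z = cmath.sqrt((1 + 1j) / (1 - 1j))   # (1+i)/(1-i) = i, so z = sqrt(i)
x, y = z.real, z.imag
assert abs(x**2 + y**2 - 1) < 1e-12   # x^2 + y^2 = |z|^2 = 1, as claimed
```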
\begin{document} \preprint{APS/123-QED} \title{Ranking the information content of distance measures} \author { Aldo Glielmo$^{1, \ast}$, Claudio Zeni$^{1}$, Bingqing Cheng$^{2}$,\ G\'abor Cs\'anyi$^{3}$, Alessandro Laio$^{1}$\\ \normalsize{$^{1}$International School for Advanced Studies (SISSA)}, \normalsize{Via Bonomea 265, Trieste, Italy}\\ \normalsize{$^{2}$Department of Computer Science and Technology, University of Cambridge,}\\ \normalsize{J. J. Thomson Avenue, Cambridge, CB3 0FD, United Kingdom}\\ \normalsize{$^{3}$Engineering Laboratory, University of Cambridge, Trumpington St,}\\ \normalsize{Cambridge CB21PZ, United Kingdom}\\ \normalsize{$^\ast$aglielmo@sissa.it} } \begin{abstract} Real-world data typically contain a large number of features that are often heterogeneous in nature, relevance, and also units of measure. When assessing the similarity between data points, one can build various distance measures using subsets of these features. Using the fewest features but still retaining sufficient information about the system is crucial in many statistical learning approaches, particularly when data are sparse. We introduce a statistical test that can assess the relative information retained when using two different distance measures, and determine if they are equivalent, independent, or if one is more informative than the other. This in turn allows finding the most informative distance measure out of a pool of candidates. The approach is applied to find the most relevant policy variables for controlling the Covid-19 epidemic and to find compact yet informative representations of atomic structures, but its potential applications are wide ranging in many branches of science. 
\end{abstract} \maketitle \subsection*{Introduction} An open challenge in machine learning is to extract useful information from a database with relatively few data points, but with a large number of features available for each point~\cite{Ni2020_Few_shot_learning_survey,Lopes2017_Facial_expression_few_data,Valera2020_Handling_heterogeneous_data_with_VAEs}. For example, clinical databases typically include data for a few hundred patients with a similar clinical history, but an enormous amount of information for each patient: the results of clinical exams, imaging data, and a record of part of their genome \cite{Biobank2015}. In cheminformatics and materials science, molecules and materials can be described by a large number of features, but databases are limited in size by the great cost of the calculations or the experiments required to predict quantum properties~\cite{Pande2017_Low_data_drug_discovery,Yoshida2019_Predicting_materials_with_little_data}. In short, real-world data are often ``big data'', but in the wrong direction: instead of millions of data points, there are often too many features for a few samples. As such, training accurate learning models is challenging, and even more so when using deep neural networks, which typically require a large number of independent samples \cite{Shorten2019_Survey_data_augmentation}. One way to circumvent this problem is to perform preliminary feature selection, and discard features that appear irrelevant or redundant \cite{Yang2018_Feature_selection_new_perspective,Lopes2017_Facial_expression_few_data,Bogunovic_Feature_selection_review,Deng_Feature_selection_review}. Alternatively, one can perform a dimensional reduction aimed at finding a representation of the data with few variables built as functions of the original features \cite{vanderMaaten:2008tm, mcinnes2018umap, Bengio:2013bu}. In some cases, explicit features are not available, as in the case of raw text documents or genome sequences.
What one can always define, even in these cases, are \emph{distances} between data points, whose definition, however, can be rather arbitrary \cite{Kaya:2019fo,Kulis:2013kz}. How can one select, among an enormous amount of possible choices, the most appropriate distance measure for a given task? Finding the correct distance is of course as difficult as performing feature selection or dimensionality reduction. In fact, these tasks can be considered equivalent if explicit features are available, since in this case a particular choice of features naturally gives rise to a different distance function computed through the Euclidean norm. In this work, we approach feature/distance learning through a novel statistical and information theoretic concept. We pose the question: given two distance measures $A$ and $B$, can we identify whether one is more \emph{informative} than the other? If distance $A$ is more informative than distance $B$, even partial information on the distance $A$ can be predictive about $B$, while the reverse will not necessarily be true. If this happens, and if the two distances have the same complexity, e.g., they are built using the same number of features, $A$ should be generally preferred with respect to $B$ in any learning model. We introduce the concept of ``information imbalance'', a measure able to quantify the relative information content of one distance measure with respect to another. We show how this tool can be used for feature learning in different branches of science. For example, by optimizing the information content of a distance measure we are able to select, from a set of more than 300 material descriptors, a subset of around 10 that is sufficient to define the state of a material system, and predict its energy. Moreover, we find the combinations of national policy measures which are most effective in containing the Covid-19 epidemic.
In this case, the information imbalance also provides striking evidence on the \emph{causal} relationship between these policies and the severity of the epidemic. \begin{figure*} \centering \subfloat[]{{\includegraphics[width=0.4 \columnwidth]{figures/illustrazione_rango} }} \qquad \subfloat[]{{\includegraphics[width=0.35\columnwidth]{figures/illustrazione_rango2}}} \caption{ a): Illustration of the distance rank of two points in different feature spaces $A$ and $B$. The rank $r_{ij}$ of point $j$ relative to $i$ is equal to 1 in space $A$, meaning that $j$ is the first neighbor of $i$. This is not the case in space $B$, where point $j$ is the third neighbor of point $i$. b): Illustration of how ranks can be used to verify that space $x$ is less informative than space $xy$. The figure shows how a distance bound in the $xy$ space automatically implies a distance bound in the less informative $x$ space. The opposite is not necessarily true and, in principle, the first neighbor of a point in the $x$ space can be at any distance from the point in the $xy$ space. } \label{fig:rank_illustration} \end{figure*} \subsection*{The information imbalance} \begin{figure*} \centering \includegraphics[width=0.7\columnwidth]{figures/illustration_3d_gauss.png} \caption{ Illustration of the information imbalance calculation and usage on a 3D Gaussian dataset with a small variance along $z$. a), c), e): scatter plot of the rank between ordered pairs of points. The highlighted regions indicate the points considered for generating the bottom plots. b), d), f): Probability that two points have a given rank in one representation given that they are first neighbors in the other. The three columns represent different pairs of representations. g): The four different types of relationships that can characterize the relative information content of two metric spaces $A$ and $B$. h): Information imbalance plane for the 3D Gaussian dataset discussed.
The different colors roughly mark the regions corresponding to the four types of relationships listed in g. } \label{fig:illustration_on_3dgauss} \end{figure*} Inspired by the widespread idea of using local neighborhoods to perform dimensional reduction \cite{Hastie2001_Elements_of_stat_learning} and classification \cite{Gashler2008_Sculpting}, we quantify the relative quality of two distance measures by analyzing the \emph{ranks} of the first neighbors of each point. For each pair of points $i$ and $j$, the rank $r_{ij}$ of point $j$ relative to point $i$ is obtained by sorting the pairwise distances between $i$ and the rest of the points from smallest to largest. For example, $r_{ij}^{A}=1$ if point $j$ is the first neighbor of point $i$ according to the distance $d_A$. The rank of two points will be, in general, different when computed using a different distance measure $B$, as illustrated in Figure~\ref{fig:rank_illustration}a. The key idea of our approach is that distance ranks can be used to identify whether one metric is more informative than the other. Take the example given in Figure \ref{fig:rank_illustration}b, depicting a cartoon of a dataset represented either in the two-dimensional space $xy$ or in the less informative one-dimensional space $x$. Point $j$ is the first neighbor of $i$ in the space $xy$ and it becomes the third neighbor in space $x$ ($r_{ij}^{x}=3$). Similarly, point $k$ is the first neighbor of $i$ in space $x$ and it becomes the fifth neighbor in space $xy$ ($r_{ik}^{xy}=5$). In this case we find that $r_{ij}^x < r_{ik}^{xy}$, i.e., the rank in space $x$ of first neighbors in space $xy$ is smaller than the rank in space $xy$ of first neighbors in space $x$. To give a more quantitative example, let's consider a dataset of points harvested from a 3-dimensional Gaussian whose standard deviation along the $z$ direction is a tenth of those along $x$ and $y$.
In this case, one can define a Cartesian distance between data points either using all three features, $d_{xyz}^2=(x_i-x_j)^2+(y_i-y_j)^2+(z_i-z_j)^2$, or using a subset of these features ($d_{xy}$, $d_{yz}$ and so on). Intuitively, $d_{xyz}$ and $d_{xy}$ are almost equivalent since the standard deviation along $z$ is small, while there are information imbalances, say, between $d_{x}$ and $d_{xy}$, which would allow saying that $d_{xy}$ is more informative than $d_{x}$. In the first row of Figure \ref{fig:illustration_on_3dgauss}, we plot the ranks computed using one distance against the ranks computed using a second distance (for example the ranks in $d_{xy}$ as a function of those in $d_{xyz}$ for panel a). In the second row of the figure we show the probability distribution $p(r^{A} \mid r^{B} =1)$ of the ranks $r_{ij}^{A}$ in space $A$ restricted to those pairs for which $r_{ij}^{B}=1$, namely to the nearest neighbors according to distance $B$. In panels a and b, we compare the most informative distance containing all three coordinates to the one containing only $x$ and $y$ coordinates. Given the small variance along the $z$ direction, these two distance measures are practically equivalent, and this results in rank distributions strongly peaked around one. In panels c and d, we compare the two metrics $d_{xy}$ and $d_x$. In this case, the former is clearly more informative than the latter, and we find that the distribution of ranks when passing from $d_{xy}$ to $d_{x}$ is more peaked around small values than when going in the opposite direction. Finally, for two metrics built using independent coordinates ($x$ and $y$, in panels e and f) the rank distributions are completely uniform. We hence propose to assess the relationship between any two distance measures $d_A$ and $d_B$ by using the properties of the conditional rank distribution $p(r^B \mid r^A = 1)$.
The closer this distribution is to a delta function peaked at one, the more information about space $B$ is contained within space $A$. This intuition can be made more rigorous through the statistical theory of copula variables. We can define a copula variable $c_A$ as the cumulative distribution $c_A = \int_0^{d_A} p_A(w \mid x) dw$, where $p_A(w \mid x)$ is the probability density of sampling a data point at distance $w$ from $x$ in the $A$ space. The value of $c_A$ can be estimated from a finite dataset by counting the fraction of points that fall within distance $d_A$ of point $x$, $c_A \approx r_A/N$. Copula variables and distance ranks can be considered continuous-discrete analogues of each other. As a consequence, the distributions $p(r^{B} \mid r^{A} =1)$ shown in Figure~\ref{fig:illustration_on_3dgauss} are nothing but estimates of the copula distributions $p(c_{B} \mid c_{A})$ with $c_A$ conditioned to be very small. This is important, since Sklar's theorem guarantees that the copula distribution $p(c_A, c_B)$ contains the entire correlation structure of the metric spaces $A$ and $B$, independently of any details of the marginal distributions $p(d_A \mid x)$ and $p(d_B \mid x)$ \cite{Nelsen2006_Introduction_to_Copulas,Vincente_An_information_theoretic_approach,Panzeri_Information_estimation}. Using the copula variables, we define the ``information imbalance'' from space $A$ to space $B$ as \begin{equation} \Delta(A\rightarrow B) = 2 \lim_{\epsilon \rightarrow 0} \, \langle c_B \mid c_A = \epsilon \rangle , \label{eq:imbalance_definition_copulas} \end{equation} where we used the conditional expectation $\langle c_B \mid c_A = \epsilon \rangle = \int c_B \, p(c_B \mid c_A = \epsilon) dc_B$ to characterize the deviation of $p(c_B \mid c_A = \epsilon)$ from a delta function.
In the limiting cases where the two spaces are equivalent or completely independent we have $\langle c_B \mid c_A = \epsilon \rangle = \epsilon$ and $\langle c_B \mid c_A = \epsilon \rangle = 1/2$ respectively, so that the definition provided in Eq.~(\ref{eq:imbalance_definition_copulas}) statistically confines $\Delta$ to the range $(0,1)$. The information imbalance defined in Eq.~(\ref{eq:imbalance_definition_copulas}) is estimated on a dataset with $N$ data points as \begin{equation} \Delta(A\rightarrow B) \approx 2\langle r^B \mid r^A = 1 \rangle/N. \end{equation} We remark that the conditional expectation used in Eq.~(\ref{eq:imbalance_definition_copulas}) is only one of the possible quantities that can be used to characterize the deviation of the conditional copula distribution from a delta function. Another attractive option is the entropy of the distribution. In the Supplementary Information (SI) (S1.3), we show how these two quantities are related and demonstrate that the specific choice does not substantially affect the results. In the SI (S1.2), we also show how copula variables can be used to connect the information imbalance to the standard information-theoretic concept of mutual information. By measuring the information imbalances $\Delta(A \rightarrow B)$ and $\Delta(B\rightarrow A)$, we can identify four classes of relationships between the two spaces $A$ and $B$. We can find whether $A$ and $B$ are equivalent or independent, whether they symmetrically share both independent and equivalent information, or whether one space contains the information of the other. These relationships are presented in Figure \ref{fig:illustration_on_3dgauss}g, and they can be identified visually by plotting the two imbalances $\Delta(A\rightarrow B)$ and $\Delta(B\rightarrow A)$ against each other, as done in Figure \ref{fig:illustration_on_3dgauss}h. We will refer to graphs of this kind as \emph{information imbalance planes}.
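The rank-based estimator is straightforward to implement. The following minimal sketch is our own (function and variable names are ours, and the authors' actual implementation may differ); it reproduces the qualitative behavior of the anisotropic Gaussian example:

```python
# Minimal estimator of Delta(A->B) ~ 2 <r^B | r^A = 1> / N: for each point,
# find its nearest neighbor under distance A and average that neighbor's
# rank under distance B.  All names here are our own, not the paper's.
import numpy as np

def information_imbalance(XA, XB):
    """Estimate Delta(A -> B) from two coordinate views of the same N points."""
    N = len(XA)
    dA = np.linalg.norm(XA[:, None] - XA[None], axis=-1)
    dB = np.linalg.norm(XB[:, None] - XB[None], axis=-1)
    np.fill_diagonal(dA, np.inf)                 # exclude self-pairs
    np.fill_diagonal(dB, np.inf)
    nn_A = dA.argmin(axis=1)                     # the pairs with r^A = 1
    ranks_B = dB.argsort(1).argsort(1) + 1       # 1-based ranks in space B
    return 2.0 * ranks_B[np.arange(N), nn_A].mean() / N

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3)) * np.array([1.0, 1.0, 0.05])  # small variance along z
print(information_imbalance(X[:, :2], X))          # near 0: xy predicts xyz
print(information_imbalance(X[:, :1], X[:, 1:2]))  # near 1: x and y independent
```

When the two views are identical the estimator returns its minimum value $2/N$, and for statistically independent views it fluctuates around one, matching the limiting cases discussed above.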
In Figure~\ref{fig:illustration_on_3dgauss}h we present the information imbalance plane of the 3-dimensional Gaussian dataset discussed so far, and used for Figure~\ref{fig:illustration_on_3dgauss}a-f. Looking at this figure, one can immediately verify that the small variance along the $z$ axis makes the two spaces $xyz$ and $xy$ practically equivalent. Similarly, one can verify that space $x$ is correctly identified as contained in $xyz$ and that the two spaces $x$ and $y$ are classified as orthogonal. The figure also includes a point corresponding to a different dataset, sampled from a 4-dimensional isotropic Gaussian with dimensions $\tilde{x}$, $\tilde{y}$, $\tilde{z}$ and $\tilde{w}$. This point (black star) shows that the spaces $\tilde{x}\tilde{y}\tilde{z}$ and $\tilde{y}\tilde{z}\tilde{w}$ are correctly identified as sharing symmetric information. Importantly, the information imbalance only depends on the local neighborhood of each point and, for this reason, it is naturally suited to analyze data manifolds which are arbitrarily nonlinear. In the SI (S2.1), we show that our approach is able to correctly identify the best feature for describing a spiral of points wrapping around one axis, and a sinusoidal function. \subsection*{Identifying causal relationships in the spreading of the \mbox{Covid-19} epidemic} We now use the information imbalance measure to verify whether national policy measures have been useful in containing the Covid-19 epidemic, and to identify which measures have been the most effective.
The ``Covid-19 Data Hub'' provides comprehensive and up-to-date information on the Covid-19 epidemic~\cite{Guidotti:2020gz}, including epidemiological indicators such as the number of confirmed infections and the number of Covid-19 related deaths for nations where this is available, as well as policy indicators that quantify the severity of governmental measures such as school and workplace closures, restrictions on gatherings and movements of people, testing and contact tracing~\cite{Hale:wd}. More details on the dataset are available in the SI (S2.2.1). We define the space of policy measures $P_t$ as the set of policy indicators at week $t$, and the state of the epidemic $E_{t'}$ as a two-dimensional space composed of the number of weekly deaths $D_{t'}$ and the ratio $R_{t'} = C_{t'}/T_{t'}$ of confirmed cases $C_{t'}$ over the total number of tests $T_{t'}$ performed per week at time $t'$. Here we use a time lag ($t' - t$) of two weeks, but the analysis is similar for time lags of one and three weeks; these results are reported in the SI (S2.2.2). \begin{figure*} \centering \includegraphics[width=0.75\columnwidth]{figures/combined_plots_covid} \caption{ Information imbalances between sets of policy variables $P_t$ and the state of the epidemic after two weeks $E_{t+2}$. a): Minimum information imbalances from $P_t$ to $E_{t+2}$ achievable with a given number of policy measures. b): The corresponding information imbalance plane, with the number of policy variables going from 1 to 10 reported in the gray circles. Point 10 is not visible as it lies below point 9. The figure shows that the policy measures space $P_t$ can predict the state of the epidemic $E_{t+2}$, while $E_{t+2}$ cannot predict $P_t$. } \label{fig:covid_fig} \end{figure*} What is the information imbalance $\Delta(P_t \rightarrow E_{t'})$ between the space of policy indicators $P_t$ at time $t$ and the space of epidemiological variables $E_{t'}$ at a later time $t'>t$?
A low value of $\Delta(P_t \rightarrow E_{t'})$ means that $P_t$ can predict $E_{t'}$. We first compute the information imbalances between all possible combinations of $d$ policy variables among a total of ten. In Figure \ref{fig:covid_fig}a we present the minimum information imbalance $\Delta(P_t \rightarrow E_{t'})$ achievable with any set of $d$ policy measures. For $d \le 2$, $\Delta(P_t \rightarrow E_{t'})$ is close to one, indicating that no single policy measure or pair of measures is predictive of the state of the epidemic, consistent with \cite{Haug_Ranking_the_effectiveness}. When three or more policy measures are considered, the information imbalance decreases rapidly, reaching a value of about $0.28$ when almost all policy measures are considered. This sharp decrease and the low information imbalance clearly indicate that policy measures \emph{do contain} information on the future state of the epidemic. As a sanity check, a dummy policy variable was introduced for this test (blue hexagon). This variable is never selected by the algorithm, and its addition deteriorates the information content of the policy space. We finally note that the information imbalance $\Delta(E_{t+2} \rightarrow P_t)$ (shown in Figure \ref{fig:covid_fig}b) remains high for any number of policy variables. This is a clear indication of the asymmetry in the relationship between policy measures and the state of the epidemic, and of the sensitivity of the information imbalance to causality and to the arrow of time. Our analysis shows that policy interventions have been effective in containing the spreading of the Covid-19 epidemic, a result which has already been verified in a number of studies~\cite{Brauner_Inferring_the_effectiveness,Haug_Ranking_the_effectiveness,Hsiang_The_effect_of_large_scale,Flaxman_Estimating_the_effects}.
In accordance with these studies, we also find that multiple measures are necessary to effectively contain the epidemic, with no single policy being sufficient on its own \cite{Soltesz_Matters_arising}, and that the impact of policy measures increases monotonically with the number of measures put in place. We find that a small yet effective set of policy measures has been the combination of testing, stay-home restrictions and restrictions on international movement and gatherings. While our results are computed as averages over all nations considered, further analysis carried out in the SI (S2.2.3) on disjoint subsets of nations gives results which are consistent with our main findings. In the SI (S2.2.4), we also show that when building a model for predicting future Covid-19 related deaths, one can optimally choose the relative scale of heterogeneous epidemiological variables using the information imbalance. This is important in real-world applications, where features are often characterized by different units of measure and different scales of variation. \subsection*{Selection and compression of descriptors for atomistic systems} \begin{figure*} \subfloat[]{{\includegraphics[width=0.25\columnwidth]{figures/a_Si_imbalances}} } \subfloat[]{{\includegraphics[width=0.25\columnwidth]{figures/iterative_optimisation} }} \subfloat[]{{\includegraphics[width=0.25\columnwidth]{figures/aC_inf_convergence} }} \subfloat[]{{\includegraphics[width=0.25\columnwidth]{figures/aC_testerr_z4_N90} }} \caption{ Use of the information imbalance for the selection and compression of atomistic descriptors. a): Information imbalances between the ground truth ``rmsd'' distance metric and standard atomistic descriptors. b): Information imbalances between a full description and the most informative $d$-plet of components ($d=1,\dots,4$). c): Convergence of the ``symmetric'' information imbalance with the number of components for three different compression strategies.
The symmetric information imbalance is defined as $\bar{\Delta} (A, B) = [\Delta(A \rightarrow B) + \Delta(B \rightarrow A)] /\sqrt{2}$; more details can be found in the SI (S2.3.3). d): Force error on a validation set of a machine learning potential energy model built on the compressed descriptors. } \label{fig:test_materials} \end{figure*} We now show that the information imbalance criterion can be used to assess the information content of commonly used numerical descriptors of the geometric arrangement of atoms in materials and molecules, as well as to compress the dimension (number of features) of a given descriptor with minimal loss of information. Such atomistic descriptors are needed for applying any statistical learning algorithm to problems in physics and chemistry \cite{Physics:vk,Schutt:2020ww,Carleo2019_RevModPhys,Schmidt:2019iz,Butler:2018fla}. Example applications include the interpolation of potential energy surfaces \cite{Behler:2007fe,Bartok:2010fj}, the prediction of a variety of molecular and materials properties \cite{Ryan:2018jl,Wu:2018hx,Balachandran:2018kf}, and the visualization and exploration of atomistic databases \cite{Bartok:2017hz, cheng2020mapping}. We first consider a database consisting of an atomic trajectory of amorphous silicon generated from a molecular dynamics simulation at $500\,\mathrm{K}$ (see S2.3.1 of the SI for details). At each time step of this trajectory we select a single local environment by including all the neighboring atoms within a cutoff radius of $4.5\,\text{\AA}$ from a given central atom. In this simple system, which does not undergo any significant atomic rearrangement, one can define a \emph{fully informative} distance measure as the minimum over all rigid rotations of the root mean square deviation (rmsd) of two local environments (details in S2.3.2 of the SI).
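The rotation-minimized rmsd can be computed with the Kabsch algorithm. The sketch below is our own simplified version: it assumes both environments are already centered on the central atom and that the correspondence between atoms is fixed, details that the full metric in the supplementary material handles more carefully.

```python
# Rotationally invariant "rmsd" between two local environments via the
# Kabsch algorithm.  Our own minimal sketch: point sets are assumed centered
# on the central atom, with a fixed row-by-row atom correspondence.
import numpy as np

def rmsd_min_rotation(P, Q):
    """Minimum over proper rotations of the rmsd between (n, 3) point sets."""
    H = Q.T @ P                                # 3x3 correlation matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T    # optimal rotation applied to Q
    return np.sqrt(np.mean(np.sum((P - Q @ R.T) ** 2, axis=1)))

rng = np.random.default_rng(0)
P = rng.normal(size=(12, 3))                   # a centered "local environment"
theta = 1.1                                    # rotate it by a known rotation
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
print(rmsd_min_rotation(P, P @ Rz.T))          # ~0: the metric ignores rotations
```

Because the optimal rotation is found in closed form from a single $3\times 3$ SVD, this distance is cheap enough to evaluate for every pair of environments in the database.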
In Figure \ref{fig:test_materials}a, this ground truth distance measure is compared with some of the descriptors most commonly used for materials modeling: the ``Atom-centered Symmetry Functions'' (ACSF) \cite{Behler:2007fe,Behler:2011ita}, the ``Smooth Overlap of Atomic Positions'' (SOAP) \cite{Bartok:2013cs,Caro:2019eta} and the 2- and 3-body kernels \cite{Glielmo:2018bm,Zeni:2019hg}. Unsurprisingly, all descriptors are contained in the ground truth distance measure. For ACSF and SOAP representations, one can increase the resolution by increasing the size of the descriptor in a systematic way, and we found that doing this allows both representations to converge to the ground truth. A SOAP descriptor typically involves a few hundred components. Following a procedure similar to the one used in the last section to select policy measures, we use the information imbalance to efficiently compress this high-dimensional vector with minimal loss of information (more details are given in S2.3.3 of the SI). We perform this compression for a complex database of local atomic environments sampled from different phases of carbon \cite{Deringer:2017ea} \bibnote{The quantum reference data are freely available at http://www.libatoms.org}. As illustrated in Figure \ref{fig:test_materials}b and c, the selection leads to a rapid decrease of the information imbalance, and converges much more quickly than other strategies such as random selection (blue squares) and standard sequential selection (green triangles). Figure \ref{fig:test_materials}d depicts the test error of a potential energy model constructed using a state-of-the-art Gaussian process regression model \cite{Bartok:2010fj} (see S2.3.5 of the SI) on the compressed descriptors, as a function of the size of the descriptors and for the different compression strategies considered.
Remarkably, the graph shows that a very accurate model can be obtained using only 16 out of the 324 original components of the SOAP vector considered here \cite{Caro:2019eta}. In the SI (S2.3.6), we present more details on the components selected by our procedure, and show that they appear in an order that can be understood considering the fundamental structure of the SOAP descriptor. \subsection*{Conclusions} In this work we introduce the information imbalance, a new method to assess the relative information content of two distance measures. The key property which makes the information imbalance useful is its asymmetry: it is different when computed using a distance $A$ as a reference and a distance $B$ as a target, and when the two distances are swapped. This allows distinguishing three classes of similarity between two distance measures: a full equivalence, a partial but symmetric equivalence, and an asymmetric equivalence, in which one of the two distances is observed to contain the information of the other. The potential applications of the information imbalance criterion are multifaceted. The most important one is probably the long-standing and crucial problem of feature selection~\cite{vanderMaaten:2008tm, mcinnes2018umap, Bengio:2013bu}. Low-dimensional models typically allow for more robust predictions in supervised learning tasks \cite{Lopes2017_Facial_expression_few_data,Yang2018_Feature_selection_new_perspective}. Moreover, they are generally easier to interpret and can be used for direct data visualization if sufficiently low dimensional. We design feature selection algorithms by selecting the subset of features which minimizes the information imbalance with respect to a target property, or to the original feature space.
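Such a selection can be sketched as a greedy forward algorithm: at each step, add the feature whose inclusion most reduces the imbalance toward the target. The following toy illustration is our own (names and synthetic data are ours; the procedures actually used in the paper are described in the supplementary material):

```python
# Greedy forward feature selection driven by the information imbalance.
# Toy data: the target space is built from features 1 and 4, which the
# selection should therefore pick first.  All names are our own.
import numpy as np

def imbalance(XA, XB):
    """Delta(A->B) ~ 2 <r^B | r^A = 1> / N (nearest-neighbor rank estimator)."""
    dA = np.linalg.norm(XA[:, None] - XA[None], axis=-1)
    dB = np.linalg.norm(XB[:, None] - XB[None], axis=-1)
    np.fill_diagonal(dA, np.inf); np.fill_diagonal(dB, np.inf)
    rB = dB.argsort(1).argsort(1) + 1               # 1-based ranks in B
    return 2 * rB[np.arange(len(XA)), dA.argmin(1)].mean() / len(XA)

def greedy_select(X, Y, k):
    """Greedily pick k column indices of X minimizing the imbalance toward Y."""
    chosen = []
    for _ in range(k):
        rest = [j for j in range(X.shape[1]) if j not in chosen]
        chosen.append(min(rest, key=lambda j: imbalance(X[:, chosen + [j]], Y)))
    return chosen

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))
Y = X[:, [1, 4]].copy()            # target known to depend on features 1 and 4
print(greedy_select(X, Y, 3))      # features 1 and 4 should appear first
```

The greedy variant evaluates only $O(kD)$ subsets instead of all $\binom{D}{k}$, which is what makes the approach practical for descriptors with hundreds of components.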
As we have showcased, such algorithms can be ``exact'' if the distances to be compared are relatively few (as done for the Covid-19 database) or approximate, if one has to compare a very large number of distances (as done for the atomistic database). Such algorithms work well even in the presence of strong nonlinearities and correlations within the feature space. This is exemplified by the analysis of the Covid-19 dataset, where four policy measures that appear similarly irrelevant when taken individually were instead identified as maximally informative about the future state of the epidemic when taken together.\\ Other applications include dimensionality reduction, as the information imbalance could be used directly as an objective function. Admittedly, such a function will in general be non-differentiable and highly nonlinear but, in spite of this, efficient optimization algorithms could be developed exploiting recent results on the computation of approximate derivatives for sorting and ranking operations \cite{pmlr-v119-blondel20a}.\\ Another potentially fruitful line of research would be exploiting the information imbalance to optimize the performance of deep neural networks. For example, in the SI (S2.3.7), we show that one can reduce the size of the input layer of a neural network that predicts the energy of a material, yielding more computationally efficient and robust predictions. However, one can imagine going much further, and comparing distance measures built using the representations in different hidden layers, or in different architectures. This could allow for designing maximally informative and maximally compact neural network architectures.
We finally envision potential applications of the proposed method in the study of causal relationships: we have seen that in the Covid-19 database the use of information imbalance makes it possible to distinguish the future from the past, as the former contains information about the latter, but not vice-versa. We believe that this empirical observation can be made robust by dedicated theoretical investigations, and used in practical applications in other branches of science. \begin{acknowledgments} AG, CZ, and AL gratefully acknowledge support from the European Union’s Horizon 2020 research and innovation program (Grant No. 824143, MaX 'Materials design at the eXascale' Centre of Excellence). The authors would like to thank M. Carli, D. Doimo and I. Macocco (SISSA) for useful discussions and M. Caro (Aalto University) for precious help in using the TurboGap code. \end{acknowledgments} \bibliographystyle{unsrt} \bibliography{AG_1, AG_all, other_refs} \appendix \end{document}
TITLE: Prove that $\dfrac{f(x)^2}{1+f(x)^2}\le C$. QUESTION [1 upvotes]: Let $ f: \Bbb R \to \Bbb R $ be a continuous and non-zero function, and $ a, b $ real numbers with $ a <b $. Prove that there exists a constant $C$ with $ 0 <C <1 $ such that $$\dfrac{f(x)^2}{1+f(x)^2}\le C$$ My attempt is the following: after manipulating the inequality a little, it is equivalent to proving that $$f(x)^2\leq \dfrac{C}{1-C}.$$ Since $ f $ is a continuous and non-zero function, we can show that $ | f '(x) | <M $, where $ M <1 $ is a constant. In short, we can show that the derivative is bounded. Is there any way to say that if $ | f '(x) | <M $, then there is also a constant which satisfies $ | f (x) | <P $ for a certain constant $ P <1 $? REPLY [3 votes]: I suppose you have omitted a restriction $a\leq x\leq b$; otherwise it is very easy to construct a counterexample (e.g. $f(x) = x$). So then: $f$ is continuous on $[a,b]$, so $f^2$ attains a maximal value $M$. Now consider $$ \frac{x}{1+x}$$ which for $x>0$ is increasing (as can easily be seen by differentiation). Thus $$ \frac{f(x)^2}{1+f(x)^2} \leq \frac{M}{1+M} < 1$$ EDIT: Elaborating on the increasing behavior of $x/(1+x)$: its derivative is $$ \frac1{1+x} - \frac{x}{(1+x)^2} = \frac{1}{1+x}\left(1-\frac{x}{1+x}\right)$$ and since $x/(1+x)<1$ this is always positive. REPLY [1 votes]: Hint: let $ c>0$ be such that $$(\forall x\in[a,b]) \; f(x)^2\le c$$ As $ f $ is continuous, we can take $$c=\sup_{x\in[a,b]}\{f(x)^2\}$$ Then $$(\forall x\in [a,b])$$ $$\;1-\frac{1}{1+f(x)^2}\le 1-\frac{1}{1+c}<1$$
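To complement the accepted answer, here is a quick numerical sanity check (my own illustration; the choices $f(x)=\sin x + 2$ and $[a,b]=[0,3]$ are arbitrary):

```python
# Numerical check: on [a, b], f^2/(1+f^2) stays below C = M/(1+M) < 1,
# where M = max of f^2.  The function and interval are arbitrary choices.
import numpy as np

f = lambda x: np.sin(x) + 2.0          # continuous and non-zero on [0, 3]
x = np.linspace(0.0, 3.0, 10_001)
g = f(x)**2 / (1.0 + f(x)**2)
M = (f(x)**2).max()
C = M / (1.0 + M)
assert g.max() <= C < 1.0
print(round(g.max(), 4), round(C, 4))
```

Since $t \mapsto t/(1+t)$ is increasing, the maximum of $g$ is attained exactly where $f^2$ is maximal, so the bound $C$ is tight.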
\begin{document} \author{Mikhail Antipov \\[-1pt] \small Saint-Petersburg University\\[-3pt] {\tt \small hyperbor@list.ru} } \title{Derived equivalence of symmetric special biserial algebras} \maketitle{} \begin{abstract} We introduce the Brauer complex of a symmetric SB-algebra, and reformulate in terms of the Brauer complex the known invariants of stable and derived equivalence of symmetric SB-algebras. In particular, the genus of the Brauer complex turns out to be invariant under derived equivalence. We study transformations of Brauer complexes which preserve the class of derived equivalence. Additionally, we establish a new invariant of derived equivalence of symmetric SB-algebras. As a consequence, symmetric SB-algebras with Brauer complex of genus 0 are classified. Keywords: Brauer tree algebras, special biserial algebras, tilting complex \end{abstract} \tableofcontents \section{Introduction} The present paper lies within a series of papers devoted to the classification of symmetric special biserial algebras up to derived equivalence (i.e., up to equivalence of derived categories). Recall that a symmetric SB-algebra $\Lambda$ is uniquely determined by a pair $(\Gamma(\Lambda), f)$, where $\Gamma(\Lambda)$ is the Brauer graph of $\Lambda$ and $f:V(\Gamma(\Lambda))\to \mathbb{N}$ maps vertices of $\Gamma(\Lambda)$ to their multiplicities (see, e.g.,~\cite{1} and Proposition~\ref{bijection}). \begin{itemize} \item We show that the multiset of multiplicities of vertices of $\Gamma(\Lambda)$ is invariant under derived equivalence (Proposition~\ref{multiplicities}). In order to prove this, we determine the center $Z(\Lambda)$. \item In section 3 we introduce the {\it Brauer} $\Cw$-{\it complex} $C(\Lambda)$ --- a relevant tool for studying derived equivalence. Topologically, $C(\Lambda)$ is a sphere with handles. We reformulate in terms of $C(\Lambda)$ the basic notions related to the algebra $\Lambda$ and the invariants of stable equivalence which appeared in~\cite{2}.
In particular, the genus of $C(\Lambda)$ turns out to be invariant under stable equivalence. By a celebrated theorem of Rickard \cite{6}, these invariants are invariants of derived equivalence, too. \item We introduce {\it elementary tilting complexes} over symmetric special biserial algebras --- a generalization of the tilting complexes which were treated in~\cite{3} (section 4). Equivalences of algebras corresponding to elementary tilting complexes can be reformulated in terms of 'elementary transformations' of the Brauer $\Cw$-complexes of these algebras (Proposition~\ref{correspondence}). One sees that the algebra which corresponds to the $\Cw$-complex obtained from $C(\Lambda)$ by an elementary transformation is derived equivalent to $\Lambda$. Thus we obtain a direct graphical way of proving derived equivalence. \item In the last section we show that if the geometric realization of $C(\Lambda)$ is a sphere, then the invariants which we discuss in this paper determine $\Lambda$ up to derived equivalence. \end{itemize} \section{The center $Z(\Lambda)$ and the multiplicities of $A$-cycles} Let $\Lambda$ be a symmetric SB-algebra over a field $K$. Consider the extended quiver $Q_e=Q_e(\Lambda)$ and the partitions of its arrow set into $A$-cycles and into $G$-cycles (see~\cite{1}). Recall that $A$-cycles (and their multiplicities) correspond to the vertices of the Brauer graph $\Gamma(\Lambda)$. We denote $A$-cycles by lower-case Latin letters and vertices of $\Gamma(\Lambda)$ by the corresponding upper-case Latin letters. Let $\{c_1,c_2,\dots, c_k\}$ be the set of $A$-cycles. For each $i=1,\dots, k$ consider a cyclic sequence $(\alpha_{i,1},\alpha_{i,2},\dots,\alpha_{i, l_i})$ of arrows of the cycle $c_i$. Let $f(c_1),f(c_2),\dots,f(c_k)\in \mathbb{N}$ denote the multiplicities of the $A$-cycles.
For each loop $\alpha=\alpha_{i,k}$ which is not formal, set $$ q_{\alpha}=(\alpha_{i,k+1}\alpha_{i,k+2}\dots\alpha_{i,l_i}\alpha_{i,1}\dots\alpha_{i,k})^{f(c_i)-1} \alpha_{i,k+1}\alpha_{i,k+2}\dots\alpha_{i,l_i}\alpha_{i,1}\dots\alpha_{i,k-1}. $$ \begin{proposition} \label{multiplicities} {\bf 1.} The center $Z(\Lambda)$ is generated as a vector space over $K$ by $1$ and by the elements of the following three forms: \begin{enumerate} \item[a.] Elements $ m_{i,t}=(\alpha_{i,1}\alpha_{i,2}\dots\alpha_{i,l_i})^{t} +(\alpha_{i,2}\alpha_{i,3}\dots\alpha_{i,1})^{t}+\dots +(\alpha_{i,l_i}\alpha_{i,1}\dots\alpha_{i,l_i-1})^{t}$ for all $i=1,2,\dots, k$ and $t=1,\dots , f(c_i)-1$. \item[b.] Elements $q_{\alpha}$ for each non-formal loop $\alpha$. \item[c.] Elements $s_r=(\alpha_{i_r,1}\alpha_{i_r,2}\dots\alpha_{i_r,l_{i_r}})^{f(c_{i_r})}$ for each vertex $r$ of $Q_e$, where $c_{i_r}$ is one of the two $A$-cycles passing through $r$. \end{enumerate} \smallskip\noindent {\bf 2.} $Z/(\Soc Z)\cong K[x_1,x_2,\dots,x_k]/\lan \{x_i^{f(c_i)},(x_ix_j)_{i\neq j}\} \ran$, where $i, j \in \{1, \dots, k\}$. \smallskip \noindent {\bf 3.} The multiset $(f(c_1),f(c_2),\dots,f(c_k))$ is invariant under derived equivalence. \end{proposition} \begin{proof} {\bf 1.} Recall that the value of $s_r$ doesn't depend on the choice of the $A$-cycle $c_{i_r}$ and that the elements $s_1, s_2,\dots ,s_n$ form a $K$-basis of $\Soc(\Lambda)$ (see, e.g.,~\cite{1}). Since $\Lambda$ is a symmetric algebra, the socle $\Soc(\Lambda)$ is contained in $Z$, so $s_r\in Z$. Moreover, for a non-formal loop $\alpha$ at vertex $r$ and for the corresponding idempotent $e_r$ and any path $p \notin \{e_r, \mbox{ } \alpha\}$ we get $e_rq_{\alpha}=q_{\alpha}=q_{\alpha}e_r$, $\alpha q_{\alpha}=s_r=q_{\alpha}\alpha$, and $q_{\alpha}p=0=pq_{\alpha}$. Thus $q_{\alpha}\in Z$. Similarly, for all $i,t,r$ we get $e_rm_{i,t}=m_{i,t}e_r$, since the summands in $m_{i,t}$ are circuits.
Furthermore, for all $l_1,l_2,t_1$ \begin{multline*} (\alpha_{i,l_1}\alpha_{i,l_1+1}\dots\alpha_{i,l_1-1})^{t_1}\alpha_{i,l_1}\alpha_{i,l_1+1}\dots\alpha_{i,l_2}m_{i,t}=\\ (\alpha_{i,l_1}\alpha_{i,l_1+1}\dots\alpha_{i,l_1-1})^{t+t_1}\alpha_{i,l_1}\alpha_{i,l_1+1}\dots\alpha_{i,l_2}= \\ m_{i,t}(\alpha_{i,l_1}\alpha_{i,l_1+1}\dots\alpha_{i,l_1-1})^{t_1}\alpha_{i,l_1}\alpha_{i,l_1+1}\dots\alpha_{i,l_2}\end{multline*} \noindent Since for the remaining paths $p$ (subpaths of other $A$-cycles) we have $m_{i,t}p=pm_{i,t}=0$, we get $m_{i,t}\in Z$. Each $z \in Z$ can be uniquely represented as \begin{equation} \label{2} z=\sum_{j=1}^{N} a_jp_j+s, \end{equation} \noindent where $0\neq a_j\in K$, the paths $p_j$ are distinct nonzero paths in the quiver $Q_e$ which are not contained in the socle, and $s\in \Soc (\Lambda)$. By induction on the number of summands in the sum~(\ref{2}) we show that $z$ can be represented as a linear combination of the elements $m_{i,t}$ and $q_{\alpha}$. Fix $i \in \{1, \dots, N\}$ and write \begin{equation*} p_i=\alpha_1\alpha_2\dots\alpha_{m}, \end{equation*} \noindent where $\alpha_1,\alpha_2,\dots,\alpha_{m}$ are consecutive arrows of an $A$-cycle $c_i$. Let $\alpha_{m+1}$ be the next arrow of $c_i$. There are two cases: \smallskip Case 1: $\alpha_1\alpha_2\dots\alpha_{m}\alpha_{m+1}\notin \Soc(\Lambda)$. In this case the path $\alpha_{m+1}\alpha_1\alpha_2\dots\alpha_{m}$ has coefficient $a_i$ in the sum $\sum a_j\alpha_{m+1}p_j$. Since $z\alpha_{m+1}=\alpha_{m+1}z$, we obtain $\alpha_{m+1}=\alpha_1$, i.e. $p_i=(\alpha_{u,1}\alpha_{u,2}\dots\alpha_{u,l_u})^{t}$ for some $A$-cycle $c_u$, $t < f(c_u)$. Moreover, the other summands of $m_{u,t}$ also have coefficient $a_i$ in the sum~(\ref{2}). We see that the sum representing the element $z-a_im_{u,t}\in Z$ has fewer summands than the sum representing $z$, so the inductive hypothesis applies. \smallskip Case 2: $\alpha_1\alpha_2\dots\alpha_m\alpha_{m+1}=s_l\in \Soc(\Lambda)$ for some $l$.
Consider an idempotent $e_r$ such that $\alpha_m e_r\alpha_{m+1}\neq 0$. The expressions for $ze_r$ and $e_rz$ must contain $p_i$ as a summand. Therefore $p_i$ is a closed path. It follows that $\alpha_{m+1}$ is a loop and $p_i=q_{\alpha_{m+1}}$, and we apply the inductive hypothesis to $z-a_ip_i$. \smallskip {\bf 2.} Observe that $\Soc(Z)$ is generated by the elements $s_r$ and $q_{\alpha}$, for all loops $\alpha$ which are not separate $A$-cycles (i.e., $\alpha q_{\alpha }=0$). Moreover, $m_{i,t}m_{j,t_1}=\delta_{ij}m_{i,t+t_1}$ and $m_{i,t}^{f(c_i)}\in \Soc(Z)$. These two observations imply the claim. \smallskip {\bf 3.} The claim follows directly from p.{\bf 2}, since $Z(\Lambda)$ is invariant under derived equivalence (see~\cite{5}). The maximal element $f(c_i)$ equals the maximal index of nilpotency of nilpotents in $Z/\Soc(Z)$; the remaining proof is by induction. \end{proof} \section{Brauer complex} \subsection {Definitions and constructions} In this section we define a 2-dimensional $\Cw$-complex corresponding to a symmetric SB-algebra $\Lambda$. Associate with each $G$-cycle $z$ of length $k$ a $k$-gon $F_z$ with an oriented border. The sides of $F_z$ are labeled with the vertices of $Q_e$ which lie on $z$ (in the counter-clockwise order in the orientation of $F_z$). Consider a $\Cw$-complex $C=C(\Lambda)$ which is obtained from the resulting set of polygons by identifying oppositely oriented edges labeled by the same vertex. Since each vertex of $Q_e$ belongs to exactly two $G$-cycles, $C$ is an oriented manifold (without boundary). \begin{definition} $\Cw$-complex $C(\Lambda)$ is called {\it Brauer complex} of $\Lambda$. \end{definition} Denote by $\Gamma=\Gamma(\Lambda)$ the Brauer graph of $\Lambda$. For a vertex $V\in V(\Gamma)$, consider a cyclic permutation $\pi_V$ of half-edges, incident with $V$, which is defined by passing along the corresponding $A$-cycle. 
A 'picture' of a graph $\Gamma$ on an oriented surface also determines, for any vertex of the graph, a cyclic permutation on the set of incident half-edges which agrees with the orientation. There exists an embedding $i_\Gamma$ of $\Gamma$ into an oriented surface $M$ which preserves the cyclic permutations ($i_\Gamma$ and $M$ are uniquely defined up to a homeomorphism). Note that we consider {\it strict} embeddings, i.e. embeddings such that each connectivity component of $M \setminus \Gamma$ is homeomorphic to an open disk. See~\cite{7} for the construction of the embedding. It follows from the construction of the embedding that the connectivity components of $M \setminus \Gamma$ correspond to the $G$-cycles of $\Lambda$. Now it is clear that $M$ is a geometric realization of $C(\Lambda)$ and that the 1-skeleton $S_M$ of $C(\Lambda)$ is isomorphic as a graph to $\Gamma$ (we will refer to $S_M$ as $\Gamma$). In particular, the vertices (edges) of $C(\Lambda)$ are in one-to-one correspondence with the $A$-cycles (resp., vertices) of $Q_e$. It is to be mentioned that the arrows of $Q_e$ are in one-to-one correspondence with the angles of the 2-dimensional faces of $C(\Lambda)$. \begin{definition} The {\it perimeter} of a 2-dimensional face of $C(\Lambda)$ is the number of its edges, taking multiplicities into account (i.e., the perimeter is the length of the corresponding $G$-cycle). \end{definition} \subsection{Invariants of stable equivalence} Observe that $C(\Lambda)$ is an oriented surface. The following statement holds since the Euler characteristic of an oriented surface is even. \begin{proposition} If in the extended quiver $Q_e$ of $\Lambda$ the number of $A$-cycles is $k$, the number of $G$-cycles is $g$ and the number of vertices is $n$, then $k+g-n$ is even. \end{proposition} \begin{remark} This statement was proved in~\cite{2} without topological arguments (Lemma 3.2).
\end{remark} \begin{definition} The value $1-\frac{k+g-n}{2}$, i.e. the genus of the oriented surface $C(\Lambda)$ (whose Euler characteristic equals $k+g-n$), is called the {\it genus} of $\Lambda$ (and of $C(\Lambda)$). \end{definition} \noindent In~\cite{2} it is proved that the multiset of lengths of $G$-cycles, as well as the number of $A$-cycles, is invariant under stable equivalence. By Rickard's Theorem, derived equivalence of self-injective algebras implies stable equivalence (see~\cite{6}). The number of isomorphism classes of simple modules (i.e., the number of vertices of $Q_e$) is also a stable invariant (see~\cite{4}). Therefore we get \begin{proposition} The multiset of perimeters of faces, the number of vertices and the genus of $C(\Lambda)$ are invariant under derived equivalence. \end{proposition} \noindent It was shown in~\cite {2} that the free rank of the Grothendieck group of the stable category $\text{stmod-}\Lambda$ equals $n-k$ if and only if $\Gamma(\Lambda)$ is not bipartite. Therefore, we have \begin{proposition} \label{bipartite} Derived (stable) equivalence preserves the property of the Brauer graph to be bipartite. \end{proposition} It should be mentioned that for algebras of genus $0$ this invariant gives nothing new, since a graph embedded into a sphere is bipartite if and only if the perimeters of all its faces are even. But there are algebras of genus 1 whose derived categories are not distinguished by the previously discussed invariants, but which are not equivalent by Proposition~\ref{bipartite}.
\begin{example} Consider the following symmetric SB-algebras $\Lambda_1$ and $\Lambda_2$: \smallskip \noindent \begin{tabular}{rp{12cm}} $\xymatrix{ &&3 \ar@<2pt>[ddl]^\gamma \ar@<-1pt>[ddl]_\eta \\ &&&\\ &1 \ar@<2pt>[rr]^\alpha \ar@<-1pt>[rr]_\delta &&2 \ar@<1pt>[uul]^\varepsilon \ar@<-2pt>[uul]_\beta }$ & The quiver $Q_1$ of $\Lambda_1$ consists of vertices $1,2,3$ and arrows $$\alpha, \delta: 1\to 2, \text{ } \beta, \varepsilon: 2\to 3, \text{ } \gamma, \eta:3\to 1.$$ The ideal $I_1$ of relations of $\Lambda_1$ is generated by the elements $$\alpha\beta, \text{ }\beta\gamma,\text{ }\gamma\delta, \text{ }\delta\varepsilon,\text{ }\varepsilon\eta,\text{ }\eta\alpha,\text{ } \alpha\varepsilon\gamma-\delta\beta\eta,\text{ }\varepsilon\gamma\alpha-\beta\eta\delta,\text{ } \gamma\alpha\varepsilon-\eta\delta\beta.$$ \end{tabular} \noindent\begin{tabular}{p{12cm}r} \noindent The quiver $Q_2$ of $\Lambda_2$ consists of vertices $1, 2, 3$ and arrows $$\alpha_1:1\to 2, \text{ } \beta_1: 2\to 3, \text{ } \gamma_1:3\to 1, \text{ } \delta_1:1\to 3, \text{ } \varepsilon_1: 3\to 2, \text{ } \eta_1:2\to 1.$$ The ideal $I_2$ of relations of $\Lambda_2$ is generated by the elements \begin{multline*}\alpha_1\beta_1, \text{ } \beta_1\gamma_1, \text{ } \gamma_1\delta_1, \text{ } \delta_1\varepsilon_1, \text{ } \varepsilon_1\eta_1, \text{ } \eta_1\alpha_1, \text{ } \alpha_1\eta_1\delta_1\gamma_1-\delta_1\gamma_1\alpha_1\eta_1, \\\text{ } \beta_1\varepsilon_1- \eta_1\delta_1\gamma_1\alpha_1, \text{ } \varepsilon_1\beta_1-\gamma_1\alpha_1\eta_1\delta_1.\end{multline*} & $\xymatrix{ &&3 \ar@<2pt>[ddl]^{\gamma_1} \ar@<2pt>[ddr]^{\varepsilon_1} \\ &&&\\ &1 \ar@<2pt>[rr]^{\alpha_1} \ar@<1pt>[uur]^{\delta_1} &&2 \ar@<1pt>[ll]^{\eta_1} \ar@<1pt>[uul]^{\beta_1} }$ \end{tabular} \noindent It is easy to see that $\Lambda_1$ and $\Lambda_2$ are algebras with 3 simple modules, with one $G$-cycle of length $6$ ($(\alpha\beta\gamma\delta\varepsilon\eta)$ and
$(\alpha_1\beta_1\gamma_1\delta_1\varepsilon_1\eta_1)$, respectively) and with 2 $A$-cycles of multiplicity $1$ ($c^1_1=(\alpha\varepsilon\gamma)$, $c^1_2=(\delta\beta\eta)$ and $c^2_1=(\varepsilon_1\beta_1)$, $c^2_2=(\gamma_1\alpha_1\eta_1\delta_1)$). In particular, $\Lambda_1$ and $\Lambda_2$ have genus 1. But $\Gamma(\Lambda_1)$ is bipartite (it consists of 2 vertices connected by 3 edges) whereas $\Gamma(\Lambda_2)$ is not (the edge corresponding to the vertex 1 of $Q_2$ is a loop). Therefore, $\Lambda_1$ and $\Lambda_2$ are not derived equivalent. \end{example} Despite the existence of an 'additional' invariant, the invariants and equivalences discussed in this paper are not enough to classify algebras of positive genus, in contrast to the 'spherical' case, which is treated in section 5 (see also example~\ref{example}). \begin{proposition}\label{bijection} The correspondence $\Lambda\mapsto C(\Lambda)$ gives a bijection from the set of (pairwise non-isomorphic) indecomposable symmetric $\Sb$-algebras to the set of (pairwise non-isomorphic) pairs $(C$, $f)$, where \begin{enumerate} \item $C$ is a $\Cw$-complex homeomorphic to a 2-dimensional oriented manifold with a fixed orientation; \item $f$ is an arbitrary map from the $0$-skeleton of $C$ to $\mathbb{N}$. \end{enumerate} \end{proposition} \begin{proof} It remains to show that a Brauer complex uniquely determines a symmetric $\Sb$-algebra. This follows from the fact that the 1-skeleton of the Brauer complex has the structure of a Brauer graph, which uniquely determines a symmetric SB-algebra (see~\cite{1})\footnote{ In~\cite{1} it was shown that a symmetric SB-algebra is uniquely determined by the (labeled) Brauer graph and certain parameters. It can easily be shown that these parameters are redundant and can be eliminated.}.
\end{proof} \section{Elementary tilting complexes} \subsection{Definition of elementary tilting complex} Fix an edge $i$ of $C$ (equivalently, fix a vertex $i$ in the quiver $Q_e$), and suppose that there are other edges in $C$. We distinguish three cases. \begin{enumerate} \item $i$ is a leaf of $\Gamma$. Equivalently, in the quiver $Q_e$ there is a loop $\alpha_i$ at vertex $i$ and this loop is an $A$-cycle (i.e., it annihilates all other arrows of $Q_e$). \item $i$ is a loop which bounds some face of $C$. Equivalently, in the quiver $Q_e$ there is a loop $\alpha_i$ at vertex $i$ and this loop is a $G$-cycle. In this case there is a unique $A$-cycle passing through $i$ (this cycle contains at least 3 arrows, one of which is $\alpha_i$). \item For $r=1,2$ the end $C_{i,r}$ of the edge $i$ is incident with an edge $i_r \neq i$, such that $\pi_{C_{i,r}}(i_r)=i$. We permit $i_1=i_2$ and we permit $i$ to be a loop (i.e., $C_{i,1}=C_{i,2}$). Equivalently, there is no loop at vertex $i$ of $Q_e$, i.e., the vertices $i_1, i_2$ which precede $i$ on the two $A$-cycles passing through $i$ ($c_{i,1}$ and $c_{i,2}$) are different from $i$. \end{enumerate} In each of these cases we associate to the edge $i$ a complex $T_i$ as follows. For a vertex $j\in V(Q_e)$ we denote by $P_j$ the indecomposable left projective $\Lambda$-module which corresponds to $j$. For $i \neq j$, denote by $T_{ij}$ the complex $\dots\to 0\to P_j\to 0\to\dots$ concentrated in degree $0$. If $i$ is a leaf of $\Gamma$, define the complex $T_{ii}$ by $$ T_{ii}:\dots\to 0\to P_j\stackrel{\beta_i}{\longrightarrow} P_i \to 0\to \dots $$ where $j \in V(Q_e)$, $j\neq i$ is the vertex preceding the vertex $i$ on the (unique) $G$-cycle which contains $i$, and $\beta_i \neq \alpha_i$ is the arrow preceding $\alpha_i$ on the same $G$-cycle.
\noindent If $i$ is a loop which bounds some face of $C$, define $T_{ii}$ by $$ T_{ii}:\dots\to 0\to P_j\bigoplus P_j\xrightarrow{(\beta_i,\text{ }\beta_i\alpha_i)} P_i \to 0\to \dots $$ where $j \in V(Q_e)$, $j\neq i$ is the vertex preceding the vertex $i$ on the (unique) $A$-cycle which contains $i$, and $\beta_i \neq \alpha_i$ is the arrow preceding $\alpha_i$ on the same $A$-cycle. \noindent Otherwise, define $T_{ii}$ by $$ T_{ii}:\dots\to 0\to P_{i_1}\bigoplus P_{i_2}\xrightarrow {(\beta_{i}^1,\text{ }\beta_{i}^2)} P_i \to 0\to \dots $$ where $i_1, i_2$ are the vertices preceding $i$ on the $A$-cycles $c_{i,1}$ and $c_{i,2}$, respectively, and $\beta_i^1,\beta_i^2$ are the corresponding arrows $i_1\to i$ and $i_2\to i$ on these cycles. Finally, set $T_i=\bigoplus_{j=1}^{n}T_{ij}$. \begin{proposition} $T_i$ is a tilting complex over $\Lambda$. \end{proposition} \begin{proof} We verify that $T_i$ satisfies the two conditions from the definition of a tilting complex. In the definition of $T_i$ we distinguished three cases. We carry out the verification only in the third case; the other cases are treated in the same way. First, we must verify that $D^b(\Lambda)= Add(T_i)$, where $Add(T_i)$ is the smallest triangulated subcategory which contains all direct summands of the object $T_i$. It is enough to verify that all objects of the form $0\to P_j\to 0$ belong to $Add(T_i)$. For $i\neq j$ this holds by the definition of $T_i$. For $i=j$ it is easy to see that $P_i[-1]$ is the third term of the triangle which corresponds to the natural embedding of $T_{ii_1}\bigoplus T_{ii_2}$ into $T_{ii}$. It follows that $T_i$ satisfies the first condition. Now we verify that $\text{Hom}_{D^b(\Lambda)}(T_i,T_i[r])=0$ for $r\in \mathbb{Z}\setminus\{0\}$. It is enough to prove that for each $j \in V(Q_e)$ we have $\text{Hom}_{D^b(\Lambda)}(T_{ii},T_{ij}[-1])=\text{Hom}_{D^b(\Lambda)}(T_{ij}[-1],T_{ii})=0$.
Each morphism from $T_{ij}$ to $T_{ii}$ is determined by a morphism $f:P_j\to P_i$, where $f$ is multiplication by a linear combination of paths with starting point $j$ and endpoint $i$. Each of these paths ends either with $\beta_{i}^1$ or with $\beta_{i}^2$. Therefore $f$ factors through $(\beta_{i}^1,\beta_{i}^2):P_{i_1}\bigoplus P_{i_2}\to P_i$. It follows that $f$ is homotopic to zero. Similarly, each morphism from $T_{ii}$ to $T_{ij}$ is determined by a morphism $f:P_i\to P_j$, where $f$ is multiplication by a linear combination $S$ of paths with starting point $i$ and endpoint $j$. Suppose that $S$ has nonzero summands. Since $i\neq j$, the underlying paths are not maximal. Multiplying $S$ by $\beta_{i}^1$ or by $\beta_{i}^2$ from the left, we again get a nontrivial sum of linearly independent summands, which contradicts the fact that for a morphism of complexes the composition with $(\beta_{i}^1,\beta_{i}^2)$ must vanish. Therefore $f=0$ and $\text{Hom}_{D^b(\Lambda)}(T_{ii},T_{ij}[-1])=0$. \end{proof} \subsection{Elementary transformations of Brauer complexes} Now we define {\it elementary transformations} of Brauer complexes. We will prove below that, in terms of algebras, an elementary transformation takes an algebra $\Lambda$ to the endomorphism algebra of one of the above defined tilting complexes over $\Lambda$. We fix the convention that under an elementary transformation the vertices are fixed, while the configuration of edges (labeled with vertices of the quiver) --- and therefore the configuration of faces (labeled with $G$-cycles) --- is changed. In other words, we identify the edges (and faces) by their labels, not by the vertices incident to them. The pictures below illustrate the simplest cases; in general the configurations can be quite different. \begin{definition} Let $C$ be a Brauer complex, let $Q_e$ be the corresponding extended quiver. Let $a \in E(C)$, $V\in V(C)$, let $F$ be a face of $C$.
The permutations $\Next_F: V(C) \rightarrow V(C)$ and $\Next_F: E(C) \rightarrow E(C)$ are induced by the counter-clockwise order of vertices and edges in the orientation of $F$. Recall that $\pi_V$ denotes the permutation of half-edges incident with the vertex $V\in V(C)$, which is defined by passing along the corresponding $A$-cycle $v$. By abuse of language, we will name half-edges after the corresponding edges. Thus for a loop $a$ both situations $\pi_V(a)=a$ and $\pi_V(a)\neq a$ can happen; however, from the context it will always be clear which half-edge is meant. \end{definition} \subsubsection{Transformation of type 1: shift of a leaf} \begin{figure}[ht] \begin{center} \includegraphics{leaf.eps} \caption{Shift of a leaf} \label{fig:leaf} \end{center} \end{figure} Let $V \in V(C)$ be a dangling vertex. Suppose that the edge (the face) incident with $V$ is labeled by $a$ (resp., by $F$). Let $V_1$ be the second vertex incident with $a$. Put $V_2 = \Next_F(V_1)$, $a_1= \Next_F(a)$. Now shift the edge $a$, so that $a$ becomes incident with $V$ and $V_2$ and $a=\Next_F(a_1)$. \subsubsection{Transformation of type 2: shift of a loop} Let $a$ be a loop at vertex $V_1$, bounding some face $F_1$. Let $F_2$ be the second face incident with $a$; put $V_2 = \Next_{F_2}(V_1)$, $a_1= \Next_{F_2}(a)$. Replace the loop $a$ with a loop at vertex $V_2$ which lies inside $F_2$ after $a_1$. Note that $F_1$ is again bounded by a loop, which separates it from $F_2$. \begin{figure}[ht] \begin{center} \includegraphics{loop.eps} \caption{Shift of a loop} \label{fig:loop} \end{center} \end{figure} \subsubsection{Transformation of type 3: the general case} \label{mainshift} Let $a$ be an edge. Suppose that the vertices (faces) incident with $a$ are labeled by $V_1, V_2$ (resp., by $F_1$ and $F_2$; we permit $F_1=F_2$). For $i=1,2$ put ${V_i}'=\Next^{-1}_{F_i}(V_i)$, $a_i=\Next^{-1}_{F_i}(a)$.
Shift $a$ so that it becomes incident with ${V_1}'$ and ${V_2}'$, separates $F_1$ from $F_2$ and lies after $a_i$ on the new boundary of $F_{3-i}$. \begin{figure}[ht] \begin{center} \includegraphics{general.eps} \caption{The general case} \label{fig:general} \end{center} \end{figure} \begin{definition} We call the transformations of types 1--3 {\it tilting transformations}. The resulting complex is denoted by $C(a)$. \end{definition} \subsection{Correspondence} \begin{proposition} \label{correspondence} Let $\Lambda$ be an $\Sb$-algebra, $C=C(\Lambda)$, $a\in E(C)$. Let $T_a$ be the tilting complex which corresponds to $a$. Then $End_{D^b(\Lambda)} T_a$ is a symmetric $\Sb$-algebra with Brauer complex $C(a)$ ($C$ and $C(a)$ have the same multiplicities of vertices). \end{proposition} \begin{proof} Denote by $Q_e$ the extended quiver of $\Lambda$. By Rickard's theorem \cite{5}, $\Lambda_a=End_{D^b(\Lambda)} T_a$ is derived equivalent to $\Lambda$. Since $\Lambda$ is a symmetric algebra, $\Lambda_a$ is a symmetric algebra, too. By a result of Pogorjaly, an algebra which is stably equivalent to an SB-algebra is an SB-algebra, too \cite{4}. Therefore, by another theorem of Rickard \cite{6}, $\Lambda_a$ is an SB-algebra. Let $e=\sum_{i=1}^n e_i$ be the decomposition of the identity of $\Lambda_a$ which corresponds to the decomposition $T_a=\bigoplus _{i=1}^n T_{ai}$. Since the number of simple modules is invariant under derived equivalence, $\Lambda_a$ is an algebra with $n$ simple modules and therefore $\{e_i\}$ is a set of primitive orthogonal idempotents. Set $f_a=1-e_a\in\Lambda_a$, and denote $\Lambda_{-a}=f_a\Lambda_a f_a$. Since for $i\neq a$ the complexes $T_{ai}$ are concentrated in degree $0$, we have $\Lambda_{-a}=\text{End}_{\Lambda}\bigoplus_{i\neq a}P_i=\text{End}_{D^b(\Lambda)} \bigoplus_{i\neq a}T_{ai}$. Consider the Brauer complex $C_{-a}$ obtained from $C$ by deletion of the edge $a$ (if $a$ is a leaf, we delete it together with the incident dangling vertex).
The marks on the remaining vertices are preserved. We need the following lemma. \begin{lemma} \label{lemma} The symmetric $\Sb$-algebra which corresponds to $C_{-a}$ is isomorphic to $\Lambda_{-a}$. \end{lemma} \begin{proof} We consider the case when $C(a)$ is obtained from $C$ by a transformation of type 3 (i.e., $a$ is neither a loop which bounds a face nor a leaf). The other cases are treated in the same way. Denote the arrows of $Q$ incident with $a$ by $\alpha,\beta,\gamma,\delta$, so that $\alpha\beta\neq 0$ and $\gamma\delta\neq 0$. The elements of $\Lambda_{-a}$ are linear combinations of paths whose starting points and endpoints differ from $a$. It is clear that $\Lambda_{-a}$ is generated as an algebra by the idempotents $e_i$, where $i\neq a$, by the arrows of $Q$ different from $\alpha,\beta,\gamma,\delta$ and by the elements $\alpha\beta,\gamma\delta$. Observe that in terms of quivers $\Lambda_{-a}$ can be obtained from $\Lambda$ in the following way: the arrows $\alpha$ and $\beta$, lying on a common $A$-cycle, are replaced with an arrow $\alpha\beta$ on the same $A$-cycle (respectively, the arrows $\gamma$ and $\delta$ are replaced with an arrow $\gamma\delta$). This implies the claim. \end{proof} \noindent We return to the proof of Proposition~\ref{correspondence}. Observe that the symmetric $\Sb$-algebra which corresponds to $C(a)_{-a} = C_{-a}$ is isomorphic to $\Lambda_{-a}$. To obtain the Brauer complex of $\Lambda_a$ from $C_{-a}$ we need to add an edge in some face of $C_{-a}$ (the multiplicities of vertices are preserved). It should be noted that all arrows of the quiver of $\Lambda_{-a}$, except at most two, coincide with the respective arrows of the quiver of $\Lambda_{a}$. The arrows which do not coincide are products of two or three arrows of the quiver of $\Lambda_{a}$. Again, we finish the proof only in the case when $C(a)$ is obtained from $C$ by a tilting transformation of type 3; the other cases are treated in the same way.
For $i=1,2$ denote by $b_i$ the edge which precedes $a_i$ on $F_i$ in counter-clockwise order, i.e., $b_i$ precedes $a_i$ on a $G$-cycle (see the notation of~\ref{mainshift}). Denote by $\mu$ (by $\rho$) the arrow in $Q_e$ which corresponds to the angle at the vertex $V_1'$ included between $a_1$ and $b_1$ (resp., to the angle at $V_2'$ included between $a_2$ and $b_2$). Define elements $\alpha_1,\beta_1,\gamma_1,\delta_1\in End_{D^b(\Lambda)} T_a$ such that $\alpha_1\beta_1=\mu$, $\gamma_1\delta_1=\rho$ in $\text{End}_{D^b(\Lambda)}\bigoplus_{i\neq a}T_{ai}$. Each of these elements is induced by a morphism between two indecomposable summands of $T_a$: \noindent$ \alpha_1: $ $$ \begin{CD} \dots @>>> 0 @>>> P_{b_1} @>>> 0 @>>> \dots @.\\ @. @VVV @VV\binom{\mu}0 V @VVV @. @.\\ \dots @>>> 0 @>>> P_{a_1}\bigoplus P_{a_2} @>(\alpha,\gamma)>> P_a @>>> 0 @>>> \dots \end{CD} $$ $ \beta_1: $ $$ \begin{CD} \dots @>>> 0 @>>> P_{a_1}\bigoplus P_{a_2} @>(\alpha,\gamma)>> P_a @>>> 0 @>>> \dots\\ @. @VVV @VV(id,0) V @VVV @. @.\\ \dots @>>> 0 @>>> P_{a_1} @>>> 0 @>>> \dots @.\\ \end{CD} $$ $ \gamma_1: $ $$ \begin{CD} \dots @>>> 0 @>>> P_{b_2} @>>> 0 @>>> \dots @.\\ @. @VVV @VV\binom0{\rho} V @VVV @. @.\\ \dots @>>> 0 @>>> P_{a_1}\bigoplus P_{a_2} @>(\alpha,\gamma)>> P_a @>>> 0 @>>> \dots \end{CD} $$ $ \delta_1: $ $$ \begin{CD} \dots @>>> 0 @>>> P_{a_1}\bigoplus P_{a_2} @>(\alpha,\gamma)>> P_a @>>> 0 @>>> \dots\\ @. @VVV @VV(0,id) V @VVV @. @.\\ \dots @>>> 0 @>>> P_{a_2} @>>> 0 @>>> \dots @.\\ \end{CD} $$ The elements $\alpha_1,\beta_1,\gamma_1,\delta_1$ are not invertible, since $H^*(T_{aa})\neq H^*(T_{ai})$ for $i\neq a$. Therefore $\mu$ and $\rho$ are exactly the arrows of $\Lambda_{-a}$ which are products of two arrows of $\Lambda_{a}$. Now observe that in terms of Brauer complexes, the passage from the quiver of $\Lambda_{-a}$ to that of $\Lambda_{a}$ is the insertion of an edge labeled by $a$, incident with $V_1'$ and $V_2'$, into the union of the faces $F_1$ and $F_2$.
\end{proof} \begin{corollary} Let $\Lambda_1$ and $\Lambda_2$ be symmetric $\Sb$-algebras, let $C_1$ and $C_2$ be their Brauer complexes. Suppose that $C_2$ can be obtained from $C_1$ by a sequence of tilting transformations. Then $\Lambda_1$ and $\Lambda_2$ are derived equivalent. \end{corollary} \begin{proof} The statement follows from Proposition~\ref{correspondence}, Lemma~\ref{lemma} and Rickard's Theorem. \end{proof} \begin{example} \label {example} Consider two decagons $D_1$ and $D_2$. Fix an orientation on each of the decagons. Mark the edges of $D_1$ (of $D_2$) with letters $a$, $b$, $c$, $d$, $e$ so that they form the word $abcdeabcde$ (resp., $abcdeadebc$) in counter-clockwise order. In each decagon, identify the edges which are marked by the same letter in such a way that the resulting manifolds are oriented. It is easy to see that both complexes (we call them $C_1$ and $C_2$) have 2 vertices, 5 edges and one face, so they are homeomorphic to a sphere with two handles. Moreover, the 1-skeletons of $C_1$ and $C_2$ are bipartite graphs. But these complexes cannot be obtained from each other by tilting transformations: any complex $C'$ obtained from the complex $C_1$ by a tilting transformation is isomorphic to $C_1$. This construction gives pairs of symmetric $\Sb$-algebras of genus 2 for which the methods given in the present paper are not enough to determine whether they are derived equivalent or not. \end{example} \section{Algebras of genus 0} Now we prove that if the Brauer complex of $\Lambda$ is homeomorphic to a sphere, then the multiset of perimeters of its faces and the multiset of multiplicities of vertices determine the class of derived equivalence of $\Lambda$. To begin with, we do not take the multiplicities of vertices into consideration, i.e., we consider graphs with non-labeled vertices.
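The genus-0 criterion just announced amounts to comparing two multisets. The following Python sketch (our illustration, with invented names; not the paper's construction) states the test explicitly: only the multisets matter, not the order of faces or vertices.

```python
from collections import Counter

def genus0_derived_equivalent(perimeters1, mults1, perimeters2, mults2):
    """Sketch of the genus-0 criterion: two symmetric SB-algebras whose
    Brauer complexes are spheres are derived equivalent iff the multisets
    of face perimeters and of vertex multiplicities coincide."""
    return (Counter(perimeters1) == Counter(perimeters2)
            and Counter(mults1) == Counter(mults2))

# The order of faces and vertices is irrelevant, only the multisets matter:
print(genus0_derived_equivalent([2, 4, 4], [1, 1, 2],
                                [4, 2, 4], [2, 1, 1]))  # True
```

Note that bipartiteness need not be compared separately here: for a spherical complex the graph is bipartite if and only if all face perimeters are even, so it is already captured by the perimeter multiset.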
We fix plane graphs $\Gamma_1$ and $\Gamma_2$ with the same multisets of perimeters of faces and show that $\Gamma_2$ can be obtained from $\Gamma_1$ by a sequence of tilting transformations (statements from Lemma~\ref{lemma1} to Proposition~\ref{itog1}). \begin{definition} Graphs which can be obtained from each other by a sequence of tilting transformations will be called {\it chain equivalent} graphs. \end{definition} \begin{lemma} {\label {lemma1}} Let $\Gamma$ be a plane graph, $A \in V(\Gamma)$. There exists a plane graph $\Gamma'$, chain equivalent to $\Gamma$, in which the vertex $A$ is incident with all edges and one of the following conditions holds: \begin{enumerate} \item $\Gamma'$ has no loops; \item each edge of $\Gamma'$ is either a leaf or a loop at vertex $A$ (i.e., there are no multiedges in $\Gamma'$ except for loops). \end{enumerate} \end{lemma} \begin{definition} A plane graph of this form is called a {\it reduced graph}. \end{definition} \begin{proof} Among the graphs chain equivalent to $\Gamma$, consider a graph $\Gamma'$ in which the degree of $A$ is maximal. Observe that all edges of $\Gamma'$ are incident with $A$. Indeed, otherwise there are vertices $B, C \neq A$ and an edge $e\in E(B,C)$ such that either $B$ or $C$ is adjacent to $A$ (without loss of generality, $B$) and such that the edge $\pi_B(e) \in E(A,B)$. If $B\neq C$, we apply to $e$ a transformation of type 3. If $B=C$, we apply to $e$ a transformation of type 2 so that $e$ shifts from $B$ to $A$. Thus the degree of $A$ can be increased, a contradiction. It follows that there are three types of edges in $\Gamma'$: \begin{enumerate} \item[a)] a loop at vertex $A$; \item[b)] edges which form a multiedge incident with $A$; \item[c)] a leaf $(A,X)$. \end{enumerate} For convenience, we agree that edges of type a) do not belong to type b). We show that in $\Gamma'$ edges of types a) and b) cannot exist simultaneously.
Suppose that there is a loop $a$, leaves $a_1=\pi_A(a)$, $a_2=\pi_A(a_1), \dots$, $a_s=\pi_A(a_{s-1})$ and an edge $b=\pi_A(a_s)$ of type b). By transformations of type 1, we shift $a_1, \dots, a_s$ along $a$. Now there are no edges between $a$ and $b$ around $A$, and we can apply a transformation of type 3 to the edge $b$, so that $b$ becomes a loop. This increases the degree of $A$, a contradiction. \end{proof} \begin{definition} A reduced graph which has no loops is called a {\it reduced graph of type 1}. \end{definition} \noindent Observe that the border of any face of a reduced graph of type 1 is formed by several pairs of edges $(A,B_1), \dots, (A,B_k)$ and by several leaves (any leaf is counted in the perimeter of the face twice). Observe that a reduced graph of type 1 is bipartite. \begin{definition} A reduced graph which has loops is called a {\it reduced graph of type 2}. \end{definition} \noindent In a reduced graph of type 2, any edge which is not a leaf is a loop. Observe that a reduced graph of type 2 is not bipartite. Let $\Gamma_1'$ and $\Gamma_2'$ be reduced graphs, chain equivalent to $\Gamma_1$ and to $\Gamma_2$, respectively. By Proposition~\ref{bipartite}, $\Gamma_1'$ and $\Gamma_2'$ are of the same type. We will show that all reduced graphs of the same type, with the same multisets of perimeters of faces, are chain equivalent: \smallskip \noindent {\bfseries I. Reduced graphs of type 1.} Fix a graph $\Gamma$ of type 1. \begin{lemma} \label{reduced1} Each reduced graph $\Gamma$ of type 1 is chain equivalent to a reduced graph $\Gamma'$ of type 1 which has at most two non-dangling vertices. \end{lemma} \begin{proof} Among the reduced graphs chain equivalent to $\Gamma$, consider a graph $\Gamma'$ with the maximal number of dangling vertices. Let $A$ be the vertex of $\Gamma'$ which is incident with all edges. We show that $\Gamma'$ has at most two non-dangling vertices (including $A$).
Indeed, let $b\in E(A,B)$ and $c\in E(A,C)$ be two edges of type b) ($B\neq C$) such that there are only leaves between $b$ and $c$ in clockwise order around $A$. As above, by transformations of type 1 we obtain a graph in which there are no leaves between $b$ and $c$ (around $A$). Suppose that $C$ has degree $2$. Applying the transformation of type 3 to $c$ (shift along $b$), we get a reduced graph with a greater number of dangling vertices, since $C$ becomes a dangling vertex; this contradicts the maximality of $\Gamma'$. In order to turn $C$ into a dangling vertex when deg$(C)=r$, we carry out the same operations with the $r-1$ edges incident with $C$. \end{proof} Consider a reduced graph $\Gamma'$ which was obtained in Lemma~\ref{reduced1}. It is easy to see that the faces of $\Gamma'$ and the edges of $\Gamma'$ which are not leaves can be cyclically numbered by $1, 2, \dots, g$ so that the border of the face number $i$ consists of the edges numbered $i$ and $i+1$ and of several inner leaves. It should be mentioned that if $g=1$ then $\Gamma'$ is a tree in the form of a star, and we get Brauer trees, which were studied by Rickard in~\cite{6} as a first application of the criterion of derived equivalence. \begin{lemma}\label{canonical} The graph $\Gamma'$ is chain equivalent to a graph of the same form (i.e., as in Lemma~\ref{reduced1}) in which the perimeters of faces are in ascending order. \end{lemma} \begin{proof} It is enough to show how to 'transpose' two faces, see Figure~\ref{fig:circles}. \begin{figure}[ht] \begin{center} \includegraphics{circles.eps} \caption{to Lemma~\ref{canonical}} \label{fig:circles} \end{center} \end{figure} \end{proof} We see that any bipartite plane graph is chain equivalent to a (unique) {\it canonical representative} (we will also say "a {\it graph in canonical form}") --- a graph in which the perimeters of faces are in ascending order.
Two graphs with the same multisets of perimeters are chain equivalent to the same canonical representative, and therefore they are chain equivalent to each other. \smallskip \noindent {\bfseries II. Reduced graphs of type 2.} Consider a reduced graph $\Gamma$ of type 2. First suppose that $A$ is the only vertex of $\Gamma$, i.e., {\bf all edges of $\Gamma$ are loops} and $n=g-1$, where $n$ is the number of vertices of $Q_e$ and $g$ is the number of $G$-cycles. Consider the graph $T=T(\Gamma)$ which is plane dual to $\Gamma$. Then $T$ is a tree with $g-1$ edges and $g$ vertices. Observe that transformations of type 1 cannot be applied to $\Gamma$. The transformations of types 2 and 3 can be described in terms of $T$ as follows. \begin{itemize} \item Transformation of type 2. A leaf $V_1V_2$ of $T$ (with dangling vertex $V_1$) is shifted around $V_2$ in an arbitrary way. This transformation of a plane labeled tree will be called a {\it flip-over}. \item Transformation of type 3. Suppose that $\pi^{-1}_{V_1}(V_1V_2)=V_1V_3$ and that $\pi^{-1}_{V_2}(V_1V_2)=V_2V_4$. Replace the edges $V_1V_3$ and $V_2V_4$ with edges $V_1V_4$ and $V_2V_3$ in such a way that $\pi_{V_1}(V_1V_2)=V_1V_4$ and $\pi_{V_2}(V_1V_2)=V_2V_3$. This transformation of a plane labeled tree will be called a {\it flip} (see Figure~\ref{fig:flip}; an arc between two edges in the pictures denotes the absence of other edges). \end{itemize} \begin{figure}[ht] \begin{center} \includegraphics{flip.eps} \caption{Flip} \label{fig:flip} \end{center} \end{figure} \begin{definition} Plane trees with labeled vertices which can be obtained from each other by flips and flip-overs are called {\it equivalent}. Clearly, equivalent trees are dual to chain-equivalent graphs. \end{definition} \begin{proposition} \label{equivalent} Two plane trees with the same multisets of vertex labels and the same degrees of corresponding vertices are equivalent. \end{proposition} \noindent We need the following lemma.
\begin{lemma} \label{incident} Let $V_1V_2$ be a leaf in a plane tree $T$ with dangling vertex $V_1$. Let $V_1,V_2,\dots, V_r$ be a path in $T$ such that $V_r$ is a non-dangling vertex. Then $T$ is equivalent to a tree in which $V_1$ is adjacent to $V_r$. \end{lemma} \begin{proof} The proof is by induction on $r$. For $r=2$ the claim is trivial. Suppose that there is a number $i\in\{1,\dots,r\}$ such that deg$(V_i)\geq 3$. Consider the minimal such $i$. Without loss of generality we assume that $V_{i+1}\neq V$, where $V$ is the vertex such that $\pi_{V_i}(V_iV_{i-1})=V_iV$. If $i\neq 2$, replace the edges $V_{i-1}V_{i-2}$ and $V_iV$ with $V_{i}V_{i-2}$ and $V_{i-1}V$ by a flip. Otherwise, we make $V_2V_3$ follow $V_2V_1$ by several flip-overs, and then make the above flip. The distance between $V_1$ and $V_r$ decreases, and we apply the inductive hypothesis. If no such $i$ exists, consider the unique vertex $V_{r+1}\neq V_{r-1}$ adjacent to $V_r$. Replace $V_{r-1}V_{r-2}$ and $V_rV_{r+1}$ with $V_{r}V_{r-2}$ and $V_{r-1}V_{r+1}$ by a flip. Again, the distance between $V_1$ and $V_r$ is decreased, and we apply the inductive hypothesis. \end{proof} Now we prove Proposition~\ref{equivalent}. \begin{proof} The proof is by induction on the number of vertices. For $g=1$ the claim is trivial. Let $T_1$ and $T_2$ be two plane trees with $g$ vertices. Let a dangling vertex $V$ be adjacent to $V_1$ in $T_1$ and to $V_2$ in $T_2$. By Lemma~\ref{incident}, we can replace $T_1$ with an equivalent tree $T_3$ in which $V$ is adjacent to $V_2$. Let $T_3^1$ and $T_2^1$ be the trees obtained from $T_3$ and $T_2$ by removing $V$ together with the corresponding edge. They have the same degrees of corresponding vertices, and therefore they are equivalent by the inductive hypothesis. It remains to show that it is still possible to carry out the sequence of transformations which takes $T_3^1$ to $T_2^1$ when the edge $V_2V$ is not deleted.
After these transformations we will be able to flip-over the edge $V_2V$ to the required place. Start to apply the above sequence of transformations to $T_3$. We can encounter difficulties in the following cases: \begin{itemize} \item When in $T_3$ the edge $V_2V$ is between two subsequent edges (around $V_2$) of $T_3^1$ and does not allow us to make a flip. We cope with this by an arbitrary flip-over of $V_2V$. \item If $V_2$ is a dangling vertex in $T_3^1$, incident with an edge $V_2V_3$, and in $T_3^1$ it is possible to make a flip-over of $V_2V_3$. In $T_3$, instead of this flip-over, we make the following sequence of transformations (Figure~\ref{fig:krakozabra}). \end{itemize} \begin{figure}[ht] \begin{center} \includegraphics{krakozabra.eps} \caption{to Proposition~\ref{equivalent}} \label{fig:krakozabra} \end{center} \end{figure} \noindent This finishes the proof. \end{proof} \noindent Now suppose that {\bf there are dangling vertices in $\Gamma$}. \begin{definition} The {\it external perimeter} of a face is the number of its edges which separate it from other faces (in our case, these are loops). \end{definition} \begin{definition} The {\it reduction} of a graph $\Gamma$ is the graph $\mathcal{R}(\Gamma)$ obtained from $\Gamma$ by removing all dangling vertices. \end{definition} \begin{proposition} \label{analogue} Let $\Gamma_1$ and $\Gamma_2$ be reduced graphs of type 2. Suppose that there is a tilting transformation $p$ which takes $\mathcal{R}(\Gamma_1)$ to $\mathcal{R}(\Gamma_2)$. Suppose also that the corresponding labeled faces of $\Gamma_1$ and of $\Gamma_2$ have the same number of edges. Then the graphs $\Gamma_1$ and $\Gamma_2$ are chain equivalent. \end{proposition} \begin{proof} Let $l$ be the loop shifted by $p$ and let $F_1$ and $F_2$ be the faces separated by $l$. We need to obtain a sequence of transformations which would serve as an analogue of $p$ for $\Gamma_1$.
Figure~\ref{fig:butterfly} illustrates the case when $F_1$ has inner leaves and $l$ is the only loop on the border of $F_1$. The case when there are other loops on the border of $F_1$ is even easier: the analogue of $p$ is a transformation of type 3, performed after the necessary flip-overs of leaves. \begin{figure}[ht] \begin{center} \includegraphics{butterfly.eps} \caption{to Proposition ~\ref{analogue}} \label{fig:butterfly} \end{center} \end{figure} \end{proof} \begin{remark}\label{mainremark} It follows from Propositions~\ref{equivalent} and~\ref{analogue} that the class of chain equivalence of a reduced graph of type 2 is determined by the multiset of pairs $(P(F_i), p(F_i))$, where $P(F_i)$ is the perimeter and $p(F_i)$ is the external perimeter of the face $F_i$. \end{remark} \begin{definition} The multiset of pairs $(P(F_i), p(F_i))$ will be called the {\it multiset of double perimeters} of the graph $\Gamma$. \end{definition} \begin{proposition} \label{double} Let $\Gamma_1$ and $\Gamma_2$ be reduced graphs of type 2 with the same multisets of perimeters of faces. Then there exists a reduced graph $\Gamma_3$ of type 2, chain equivalent to $\Gamma_1$, such that the multisets of double perimeters of $\Gamma_2$ and $\Gamma_3$ are the same. \end{proposition} \begin{proof} Let $\{(P_i, p_i)\}$ be the multiset of double perimeters of $\Gamma_2$ and let $\{(P_i, p^1_i)\}$ be the multiset of double perimeters of $\Gamma_1$, for $i=1, \dots, g$. Observe that $p_i\equiv P_i\equiv p^1_i\pmod 2$ for each $i\in \{1,\dots, g\}$ and that $\sum_i p_i=2g-2=\sum_i p^1_i$. Set $q_i=p_i^1$ for each $i$. Consider the following algorithm of 'transformation' of the multiset $\{q_i\}$ to the multiset $\{p_i\}$. Below we will show that for each step of this algorithm there is a chain equivalence of graphs which properly changes their external perimeters. Consider the maximal $k$ such that $q_i=p_i$ for all $i<k$. \begin{enumerate} \item If $q_k<p_k$ then $q_j>p_j$ for some $j>k$.
Replace $q_k$ with $q_k+2$ and replace $q_j$ with $q_j-2$. \item Otherwise $q_k>p_k\geq 1$ and $q_j<p_j$ for some $j>k$. In this case we replace $q_k$ with $q_k-2$ and replace $q_j$ with $q_j+2$. \end{enumerate} Observe that at each step the number which is decreased is greater than two, so the resulting numbers are positive. Moreover, since $q_i \leq \max(p_i,p_i^1)$, at each step $q_i \leq P_i$ for all $i$. Clearly, the multiset of numbers $q_i$ can be transformed to the multiset of numbers $p_i$ by these operations. To find the chain equivalences which correspond to these operations, we need the following lemma. \begin{lemma} \label{triv} Let $T$ be a tree, let $V_1, V_2 \in V(T)$. If $V_1$ and $V_2$ are not both dangling vertices, then there exists a tree with the same multiset of vertex degrees as $T$, in which vertices of the same degrees as $V_1$ and $V_2$ are adjacent. \end{lemma} \begin{proof} The proof is by induction on the number of vertices in $T$. \end{proof} \noindent We return to the proof of Proposition~\ref{double}. We need a sequence of tilting transformations under which the multiset of external perimeters changes in accordance with the above algorithm. Suppose that we are to change the external perimeters $q_i$ and $q_k$ of faces $F_i$ and $F_k$, respectively, in a graph $\Gamma$. By Lemma~\ref{triv} and Remark~\ref{mainremark}, $\Gamma$ can be transformed to a chain equivalent graph $\Gamma'$ with the same multiset of double perimeters, such that in the dual tree $T(\Gamma')$ the vertices of degrees $q_i$ and $q_k$ are adjacent. Without loss of generality, we are to increase $q_i$. In this case $q_i<P_i$ and $q_k \geq 3$. Consider faces $F_1$ and $F_2$ of $\Gamma'$ which can be described in terms of the dual tree $T(\Gamma')$ as follows: $F_1=\pi_{F_k}^{-1}F_i$, $F_2=\pi^{-1}_{F_k}F_1$ (all faces $F_1$, $F_2$ and $F_i$ are different, since $q_k \geq 3$). 
Since $q_i<P_i$, there is at least one leaf in $F_i$. The following sequence of transformations finishes the proof (see Figure~\ref{fig:klever}; in the picture the shifts of leaves are omitted). \begin{figure}[ht] \begin{center} \includegraphics{klever.eps} \caption{to Proposition~\ref{double}} \label{fig:klever} \end{center} \end{figure} Thus $q_i$ is increased by $2$ and $q_k$ is decreased by $2$, as required. \end{proof} \noindent Altogether, we get \begin{proposition}\label{itog1} Two plane graphs with the same multiset of perimeters of faces are chain equivalent. \end{proposition} Now we again consider graphs with labeled vertices, i.e., we return the multiplicities of vertices into consideration. In statements from Lemma~\ref{lem1} to Theorem~\ref{maintheorem} we prove that {\it if two plane Brauer graphs with the same multisets of labels of vertices are isomorphic as non-labeled graphs, then they are chain equivalent as labeled graphs. } In view of the above arguments, it's enough to prove this for reduced graphs. Moreover, in the case of bipartite graphs we may restrict ourselves to considering graphs in canonical form. Recall that the process of putting a graph into reduced form (and into canonical form, for graphs of type 1) started with choosing an {\it arbitrary} vertex $A$. Recall also that we can arbitrarily shift leaves in a face, by a tilting transformation of type 1. \noindent{\bf I. Reduced graphs of type 1.} For reduced graphs of type 1, it suffices to prove the following lemmas: \begin{lemma} \label{lem1} Let $\Gamma$ be a graph in canonical form, let $B\neq A$ be the second non-dangling vertex of $\Gamma$, let $F$ be a face. Then $\Gamma$ is chain equivalent to a graph $\Gamma_1$ in canonical form, in which \begin{enumerate} \item $B$ is a dangling vertex in the face $F$. \item Some vertex $C \neq A$ which belongs in $\Gamma$ to $F$ is a non-dangling vertex. \item The other dangling vertices belong in $\Gamma$ and in $\Gamma_1$ to the same faces. 
\end{enumerate} \end{lemma} \begin{lemma} \label{lem2} Let $\Gamma$ be a graph in canonical form, let $B\neq A$ be the second non-dangling vertex of $\Gamma$, let faces $F_1$ and $F_2$ be adjacent. Then $\Gamma$ is chain equivalent to a graph in canonical form $\Gamma_1$, in which \begin{enumerate} \item There is a dangling vertex which belongs in $\Gamma$ to $F_1$ and in $\Gamma_1$ to $F_2$, and there is another dangling vertex which belongs in $\Gamma$ to $F_2$ and in $\Gamma_1$ to $F_1$. \item The other dangling vertices belong in $\Gamma$ and in $\Gamma_1$ to the same faces. \end{enumerate} \end{lemma} \noindent For the proof of Lemma~\ref{lem1} see Figure~\ref{fig:lemma1}. For the proof of Lemma~\ref{lem2} see Figure~\ref{fig:lemma2}. \begin{figure}[ht] \begin{center} \includegraphics{lemma1.eps} \caption{Proof of Lemma~\ref{lem1}} \label{fig:lemma1} \end{center} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics{lemma2.eps} \caption{Proof of Lemma~\ref{lem2}} \label{fig:lemma2} \end{center} \end{figure} \noindent{\bf II. Reduced graphs of type 2.} Since the dangling vertices in a face can be shifted in an arbitrary way, it's enough to show how to interchange dangling vertices belonging to different faces (say, to $F_1$ and $F_2$). First consider the case when the external perimeter of $F_1$ or $F_2$ is greater than 1. Then by Lemma~\ref{triv} and Remark~\ref{mainremark}, there is a sequence of tilting transformations making $F_1$ and $F_2$ adjacent. Moreover, this sequence preserves the faces to which the dangling vertices belong (see Figure~\ref{fig:butterfly}). Therefore, in this case it's enough to show how to interchange dangling vertices which belong to adjacent faces: see Figure~\ref{fig:short}. 
\begin{figure}[ht] \begin{center} \includegraphics{short.eps} \caption{Interchange between adjacent faces} \label{fig:short} \end{center} \end{figure} \noindent Now consider the case when the dangling vertices which we want to interchange belong to faces which correspond to dangling vertices of the dual tree. \begin{lemma} Let $T$ be a plane tree with labeled vertices, let $V_1$ and $V_2$ be dangling vertices of $T$. Suppose that $T$ is not a chain. Then $T$ is equivalent to a tree, in which the edges which are incident with $V_1$ and $V_2$ are incident to a common vertex $V$. Moreover, $\pi_{V}(VV_1)=VV_2$. \end{lemma} \begin{proof} By Remark~\ref{mainremark} it is enough to find a tree with the same multiset of degrees as in $T$, in which some two leaves are adjacent to a common vertex. Denote the degrees of $T$ by $r_1,\dots, r_g$ in such a way that $r_1=r_2=1$, $r_3\geq 3$. Observe that the sum of the numbers $r_3-2,r_4,\dots,r_g$ equals $2g-6$. It can be shown by induction on $g$ that there is a tree $T'$ in which these numbers are the degrees of the vertices. To obtain the needed tree, we add two leaves to the vertex of $T'$ of degree $r_3-2$. \end{proof} \begin{figure}[!ht] \begin{center} \includegraphics[scale=0.9]{long1.eps} \caption{Interchange between "dangling" faces} \label{fig:long1} \end{center} \end{figure} We see that if $T$ is not a chain, then it suffices to show how to interchange dangling vertices between two "dangling" faces, which have a common adjacent face: see Figure~\ref{fig:long1}. \begin{figure}[!ht] \begin{center} \includegraphics{long2.eps} \caption{Chain case} \label{fig:long2} \end{center} \end{figure} It remains to examine the case when $T(\Gamma)$ is a chain, and $F_1$ and $F_2$ correspond to the two dangling vertices of $T(\Gamma)$. If some other face of $\Gamma$ contains a dangling vertex, the needed interchange comes down to three interchanges of the above form. 
In Figure~\ref{fig:long2} it is shown how to interchange leaves in the case when the remaining faces have perimeter 2. (For the graph in the picture $g=4$, and this case fully represents the general case.) \noindent This finishes the proof of the main theorem in this section: \begin{theorem} \label{maintheorem} Let $\Lambda_1$ and $\Lambda_2$ be symmetric $SB$-algebras of genus $0$. Then $\Lambda_1$ and $\Lambda_2$ are derived equivalent if and only if their Brauer complexes have the same multisets of perimeters of faces and the same multisets of labels on vertices. \end{theorem} \newpage
\begin{document} \title{Superposition principle and Kirchhoff's integral theorem} \author{M. I. Krivoruchenko} \affiliation{ Institute for Theoretical and Experimental Physics$\mathrm{,}$ B. Cheremushkinskaya 25\\ 117218 Moscow, Russia} \affiliation{ Moscow Institute of Physics and Technology$\mathrm{,}$ 141700 Dolgoprudny$\mathrm{,}$ Russia} \affiliation{ Bogoliubov Laboratory of Theoretical Physics$\mathrm{,}$ Joint Institute for Nuclear Research\\ 141980 Dubna$\mathrm{,}$ Russia } \begin{abstract} The need for modification of the Huygens-Fresnel superposition principle arises even in the description of the free fields of massive particles and, more extensively, in nonlinear field theories. A wide range of formulations and superposition schemes for secondary waves are captured by Kirchhoff's integral theorem. We discuss various versions of this theorem as well as its connection with the superposition principle and the method of Green's functions. A superposition scheme inherent in linear field theories, which is not based on Kirchhoff's integral theorem but instead relies on the completeness condition, is also discussed. \end{abstract} \maketitle \setcounter{page}{1} \pagebreak \baselineskip= 14pt \tableofcontents \newpage \baselineskip= 15pt \section{Introduction} \renewcommand{\theequation}{I.\arabic{equation}} \setcounter{equation}{0} An excellent and detailed explanation of Huygens' principle for undergraduate students, together with the optical-mechanical analogy and the Hamilton-Jacobi method, can be found in the monograph by Arnold \cite{Arno89}. Students are introduced to a generalization of Huygens' principle, viz. the Huygens-Fresnel superposition principle, in the study of general physics (see, e.g., \cite{Save82}), and this principle is presented in greater detail in the study of theoretical physics (see, e.g., \cite{Land71}). 
The method of Green's functions (GF), which has found numerous applications in a wide variety of fields, is discussed in the first volume of a two-volume monograph by Bjorken and Drell \cite{Bjor64,Bjor65}, where, in particular, the superposition principle is used in~\S\S~21~-~22 to derive the equation for the Green's function. Further development of concepts related to the superposition principle has led to the emergence in quantum theory of the path integral formalism, an excellent overview of which can be found in the monograph by Dittrich and Reuter \cite{DITT2001}. This material is intended for advanced students studying quantum field theory. A detailed presentation of the superposition principle for electromagnetic fields, its rationale and its generalizations, based on Kirchhoff's integral theorem \cite{Kirch1883}, is given in the monograph by Born and Wolf \cite{Born99}. This monograph is intended for postgraduate students and researchers specializing in the theory of the propagation of electromagnetic waves and light phenomena. Thus, it is clear that the superposition principle is closely related to the GF method which, in turn, lies at the heart of quantum field theory and the diagram technique. In the literature, this relationship is typically mentioned only in passing, while the mathematical aspects, modifications, and physical meaning of the generalized schemes of superposition are treated as matters beyond dispute. A rigorous formulation of the superposition principle is based on Kirchhoff's integral theorem. The generalizations to which it leads are also used in the theory of interacting fields. In this paper, we attempt to specify the precise place of the superposition principle in classical and quantum field theory and discuss its relationship with the GF method and Kirchhoff's integral theorem. Surprisingly, the answers to the main questions can be obtained by analyzing the dynamics of the one-dimensional oscillator. 
The oscillator problem from the viewpoint of Kirchhoff's integral theorem, as well as its connections with the superposition principle and the GF method, is discussed in the next section. In Sect. III, we consider a free massive scalar field. For massive fields, the superposition scheme includes an integral over three-dimensional space. Both in the limit of zero mass and for monochromatic fields, the canonical superposition scheme, in which the summation of the sources of secondary waves is limited to a two-dimensional surface, arises. The statement of Kirchhoff's theorem depends on the asymptotic conditions imposed on the propagator at $t\to \pm \infty $. In quantum field theory, the Feynman asymptotic conditions are used. Emphasis is therefore placed on the versions of the theorem that satisfy the Feynman asymptotic conditions. In Sect. IV, we discuss a charged scalar field in an external electromagnetic field, prove the appropriate version of Kirchhoff's integral theorem, and demonstrate that in an external electromagnetic field, the superposition schemes are not fundamentally modified. In nonlinear theories the superposition principle holds in relation to the secondary waves. In Sect. V, we consider a class of nonlinear scalar field theories. The physical meaning of Kirchhoff's integral theorem is discussed, including its connections with the GF method and the superposition principle. Vectorial generalizations of Kirchhoff's integral theorem for the retarded Green's function are discussed in the Appendix. The conclusions section summarizes the discussion. \section{The Huygens-Fresnel superposition principle and Kirchhoff's integral theorem in the oscillator problem} \renewcommand{\theequation}{II.\arabic{equation}} \setcounter{equation}{0} A free scalar field obeys the Klein-Gordon equation: \begin{equation} (\Box +m^{2} )\phi _{0} ^{} (x)=0. 
\label{1} \end{equation} Of interest are the general features of solutions of the wave equation, which extend to its nonlinear modifications. The main consequences of Kirchhoff's theorem and the physical content of the Huygens-Fresnel superposition principle can be explained using the example of the one-dimensional oscillator; thus, we begin by considering the evolution of a one-dimensional harmonic oscillator. This problem can also be regarded as a problem of the evolution of a free scalar field in momentum space. \subsection{Harmonic oscillator} We write the equation in the form \begin{equation} \left(\frac{d^{2} }{dt^{2} } +m^{2} \right)\phi _{0} (t)=0. \label{2} \end{equation} Here, $m$ is the frequency of the oscillator and $\phi _{0} (t)$ is its coordinate. If $\phi _{0} (t)$ is a spatially homogeneous field in the Klein-Gordon equation, then $m$ is the mass of the particle. \subsubsection{Complete orthonormal basis functions} A complete set of solutions to Eq.~(\ref{2}) is formed by the two functions \begin{equation} f^{(+)} (t)=\frac{e^{-imt} }{\sqrt{2m} } \, \, \, \, \textrm{and} \, \, \, \, f^{(-)} (t)=\frac{e^{imt} }{\sqrt{2m} } . \label{3} \end{equation} The normalization and completeness conditions are expressed in terms of the Wronskian. If $\varphi $ and $\chi $ are two functions, then their Wronskian is equal to \begin{equation} W[\varphi ,\chi ]=\det \left\| \begin{array}{cc} {\varphi } & {\chi } \\ {\dot{\varphi }} & {\dot{\chi }} \end{array}\right\| =\varphi \dot{\chi }-\dot{\varphi }\chi . \end{equation} The notation \[\varphi \stackrel{\leftrightarrow}{\partial }_{t} \chi =W[\varphi ,\chi ]\] is often used. The normalization and orthogonality of the basis functions are represented as follows: \begin{equation} iW[f^{(\pm )*} ,f^{(\pm )} ]=\pm 1 \, \, \, \, \textrm{and} \, \, \, \, W[f^{(\pm )*} ,f^{(\mp )} ]=0. 
\label{4} \end{equation} If the functions for which we compute the Wronskian are solutions of Eq.~(\ref{2}), then the Wronskian is independent of time. Let $\phi_{0} (t)$ be a solution of Eq.~(\ref{2}). We define the following time-independent complex numbers: \begin{equation} a=iW[f^{(+)*} ,\phi _{0} ] \, \, \, \, \textrm{and} \, \, \, \, a^{*} =-iW[f^{(-)*} ,\phi _{0} ]. \label{5} \end{equation} After quantization, the values $a$ and $a^*$ become annihilation and creation operators. The completeness condition takes the form \begin{equation} \phi _{0} (t)=f^{(+)} (t)iW[f^{(+)*} ,\phi _{0} ]-f^{(-)} (t)iW[f^{(-)*} ,\phi _{0} ]. \label{6} \end{equation} This equation also allows for the decomposition of the solution into its positive- and negative-frequency components: \begin{equation} \phi _{0}^{} (t)=\phi _{0}^{(+)} (t)+\phi _{0}^{(-)} (t), \label{7} \end{equation} where \begin{equation} \phi_{0}^{(\pm )} (t)=\pm f^{(\pm )} (t)iW[f^{(\pm )*} ,\phi _{0} ].\end{equation} Equation (\ref{6}) is valid not only in the linear vector space spanned by the basis functions (\ref{3}), but also for any function evaluated at time $t$. The right-hand side of Eq.~(\ref{6}) for an arbitrary function $\chi(t)$ has the form \begin{equation*} \textrm{r.h.s.} =i\left( f^{(+)}(t)f^{(+)\ast }(t)-f^{(-)}(t)f^{(-)\ast }(t)\right) \dot{\chi}(t) -i\left( f^{(+)}(t)\dot{f}^{(+)\ast }(t)-f^{(-)}(t)\dot{f} ^{(-)\ast }(t)\right) \chi(t). \end{equation*} Using the explicit form of $f^{(\pm)}(t)$, one can see that $\textrm{r.h.s.} = \chi(t)$. Although this property appears fortuitous, it is rather fundamental. Let us consider the Poisson bracket relations \begin{eqnarray} \{\phi _{0}(t),\phi _{0}(t)\} &=&0, \label{PB1} \\ \{\phi _{0}(t),\pi _{0}(t)\} &=&1, \label{PB2} \end{eqnarray} where $\pi_{0}(t) = \dot{\phi}_{0}(t)$ is the canonical momentum. 
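As a side remark, the normalization conditions (\ref{4}) and the completeness relation (\ref{6}) are straightforward to check numerically. The NumPy sketch below is purely illustrative and is not part of the derivation; the values of $m$, $t$ and the amplitude $a$ are arbitrary choices:

```python
import numpy as np

m, t = 1.7, 0.83                           # arbitrary frequency and time

fp, fm = np.exp(-1j*m*t)/np.sqrt(2*m), np.exp(1j*m*t)/np.sqrt(2*m)
dfp, dfm = -1j*m*fp, 1j*m*fm               # time derivatives of f^(+), f^(-)

def W(a, da, b, db):                       # Wronskian a*db - da*b
    return a*db - da*b

# Eq. (4): i W[f+*, f+] = 1,  i W[f-*, f-] = -1,  W[f+*, f-] = 0
assert np.isclose(1j*W(np.conj(fp), np.conj(dfp), fp, dfp), 1)
assert np.isclose(1j*W(np.conj(fm), np.conj(dfm), fm, dfm), -1)
assert np.isclose(W(np.conj(fp), np.conj(dfp), fm, dfm), 0)

# Eq. (6): the two projections reassemble an arbitrary solution phi0
a = 0.4 - 1.1j                             # arbitrary complex amplitude
phi0 = a*fp + np.conj(a)*fm
dphi0 = a*dfp + np.conj(a)*dfm
rhs = fp*1j*W(np.conj(fp), np.conj(dfp), phi0, dphi0) \
    - fm*1j*W(np.conj(fm), np.conj(dfm), phi0, dphi0)
assert np.isclose(rhs, phi0)
```

Since $f^{(-)}=f^{(+)*}$, the solution $af^{(+)}+a^{*}f^{(-)}$ is real, and the two projections in Eq.~(\ref{6}) reproduce it exactly.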
A simple calculation using Eq.~(\ref{6}) gives \begin{eqnarray} \{\phi _{0}(t^{\prime }),\phi _{0}(t)\} &=&f^{(+)}(t)i\{\phi _{0}(t^{\prime }),W[f^{(+)\ast },\phi _{0}]\}-f^{(-)}(t)i\{\phi _{0}(t^{\prime }),W[f^{(-)\ast },\phi _{0}]\} \nonumber \\ &=&f^{(+)}(t)i\{\phi _{0}(t^{\prime }),f^{(+)\ast }(t^{\prime })\pi _{0}(t^{\prime })-\dot{f}^{(+)\ast }(t^{\prime })\phi _{0}(t^{\prime })\} \nonumber \\ &-&f^{(-)}(t)i\{\phi _{0}(t^{\prime }),f^{(-)\ast }(t^{\prime })\pi _{0}(t^{\prime })-\dot{f}^{(-)\ast }(t^{\prime })\phi _{0}(t^{\prime })\} \nonumber \\ &=& i\left( f^{(+)}(t)f^{(+)\ast}(t^{\prime})-f^{(-)}(t)f^{(-)\ast}(t^{\prime}) \right), \label{bracket1} \\ \{\phi _{0}(t^{\prime }),\pi _{0}(t)\} &=& i\left( \dot{f}^{(+)}(t)f^{(+)\ast }(t^{\prime }) - \dot{f}^{(-)}(t) f^{(-)\ast }(t^{\prime}) \right). \label{bracket2} \end{eqnarray} By virtue of Eqs.~(\ref{PB1}) and (\ref{PB2}), \begin{eqnarray} f^{(+)}(t)f^{(+)\ast}(t)-f^{(-)}(t)f^{(-)\ast}(t) &=&0, \label{Fund1} \\ f^{(+)}(t)\dot{f}^{(+)\ast }(t) - f^{(-)}(t)\dot{f}^{(-)\ast }(t) &=& i, \label{Fund2} \\ \dot{f}^{(+)}(t)f^{(+)\ast }(t) - \dot{f}^{(-)}(t)f^{(-)\ast }(t) &=& -i. \label{Fund3} \end{eqnarray} Identity $\textrm{r.h.s.} = \chi(t)$ is, therefore, a consequence of the completeness condition (\ref{6}) for functions $\phi_0(t)$, which are solutions of Eq.~(\ref{2}), and the Poisson bracket relations for the canonical variables. \subsubsection{The Green's functions} A Green's function is defined by the equation \begin{equation} \left(\frac{d^{2} }{dt^{2} } +m^{2} \right)\Delta_X (t)=-\delta (t). \label{8} \end{equation} By performing the Fourier transform in time, we obtain the Green's function in frequency space: $\Delta_X (\omega )= (\omega ^{2} -m^{2})^{-1}$. 
For the inverse Fourier transformation, \begin{equation} \Delta_X (t)=\int _{-\infty }^{+\infty }\frac{d\omega }{2\pi } \, e^{-i\omega t} \frac{1}{\omega ^{2} -m^{2} } , \label{10} \end{equation} it is necessary to bypass the poles on the real axis that arise for $\omega =\pm m.$ There are four possibilities, which correspond to four Green's functions: \begin{eqnarray} \Delta_{F}(t'-t) &=& \int_{-\infty}^{+\infty}\frac{d\omega }{2\pi} e^{-i\omega (t'-t)} \frac{1}{ \omega^{2} - m^{2} +i0} \nonumber \\ &=&-i\left( f^{(+)}(t') f^{(+)*}(t)\theta(t'-t) + f^{(-)}(t')f^{(-)*}(t)\theta(-t'+t)\right), \label{11} \\ \Delta _{F} ^{c} (t'-t)&=&\int _{-\infty }^{+\infty }\frac{d\omega }{2\pi } \, e^{-i\omega (t'-t)} \frac{1}{\omega ^{2} -m^{2} -i0} \nonumber \\ &=& i\left(f^{(-)} (t')f^{(-)*} (t)\theta (t'-t)+f^{(+)} (t')f^{(+)*} (t)\theta (-t'+t)\right), \label{12} \\ \Delta _{\mathrm{ret}} (t'-t)&=&\int _{-\infty }^{+\infty }\frac{d\omega }{2\pi } e^{-i\omega (t'-t)} \frac{1}{\omega ^{2} -m^{2} +i0\mathrm{sgn}(\omega )} \nonumber \\ &=&-i\left( f^{(+)} (t')f^{(+)*} (t)-f^{(-)} (t')f^{(-)*} (t)\right)\theta (t'-t), \label{13} \\ \Delta _{\mathrm{adv}} (t'-t)&=&\int _{-\infty }^{+\infty }\frac{d\omega }{2\pi } e^{-i\omega (t'-t)} \frac{1}{\omega ^{2} -m^{2} -i0\mathrm{sgn}(\omega )} \nonumber \\ &=& i\left(f^{(+)} (t')f^{(+)*} (t)-f^{(-)} (t')f^{(-)*} (t)\right)\theta (-t'+t). \label{14} \end{eqnarray} Each of these functions satisfies Eq.~(\ref{8}). The difference between any two Green's functions is a solution of the free equation (\ref{2}). It is instructive to verify by the direct calculation that the representation (\ref{11}) satisfies Eq.~(\ref{8}). 
With the help of equation \begin{equation*} f(x)\delta ^{\prime }(x)= f(0)\delta ^{\prime }(x)-f^{\prime }(0)\delta (x), \end{equation*} one finds \begin{eqnarray} \left( \frac{d^{2}}{dt^{\prime 2}} + m^{2} \right) i\Delta_F (t^{\prime }-t) &=& 2 \left( \dot{f}^{(+)}(t^{\prime })f^{(+)\ast }(t) - \dot{f}^{(-)}(t^{\prime })f^{(-)\ast }(t)\right) \delta (t^{\prime }-t) \nonumber \\ &+&\left( f^{(+)}(t^{\prime })f^{(+)\ast}(t) -f^{(-)}(t^{\prime })f^{(-)\ast}(t)\right) \delta^{\prime }(t^{\prime }-t) \nonumber \\ &=& \left( \dot{f}^{(+)}(t^{\prime })f^{(+)\ast }(t) -\dot{f}^{(-)}(t^{\prime })f^{(-)\ast }(t) \right) \delta (t^{\prime }-t) \nonumber \\ &+& \left( f^{(+)}(t)f^{(+)\ast }(t) -f^{(-)}(t)f^{(-)\ast }(t)\right) \delta ^{\prime }(t^{\prime }-t). \label{delta1} \end{eqnarray} Using Eqs.~(\ref{Fund1}) and (\ref{Fund3}), we arrive at Eq.~(\ref{8}). In terms of quantized variables, the Feynman propagator is defined by \begin{equation} i\Delta _{F}(t^{\prime }-t)=\langle 0|T\hat{\phi}_{0}(t^{\prime })\hat{\phi}_{0}(t)|0\rangle. \label{A7} \end{equation} The $T$ product entering this expression occurs naturally in solutions of the evolution equation $i\partial_{t} \Psi(t) = \hat{H}(t) \Psi(t)$ of systems with a time-dependent Hamiltonian. If, at various times, $\hat{H}$ does not commute with itself, namely $[\hat{H}(t'),\hat{H}(t)] \neq 0$, then the solution $\Psi(t) = U(t,0) \Psi(0)$ is expressed in terms of the time-ordered exponential $U(t,0) = T \exp (-i\int^{t}_{0} \hat{H}(t^{\prime})dt^{\prime})$. In perturbation theory $\Delta _{F}(t^{\prime }-t)$ then arises by Wick's theorem, which explains why $\Delta _{F}(t^{\prime }-t)$ plays a special role in quantum theory. The definition (\ref{A7}) is consistent with the definition (\ref{11}). \subsubsection{Superposition principle from Kirchhoff's integral theorem} Let us compute the Wronskian of the Feynman propagator $\Delta _{F}^{} (t^{\prime} -t)$ and a solution $\phi _{0} (t)$ of Eq.~(\ref{2}). 
By taking the derivative with respect to $t$ of $W[\Delta _{F} (t^{\prime} -t),\phi _{0} (t)]$ and integrating the result over the interval $(t_{1} ,t_{2} )$, the following equation is obtained for $t_{1} < t^{\prime} < t_{2} $: \begin{equation} \phi _{0} (t^{\prime} )=W[\Delta _{F}^{} (t^{\prime} -t_{2} ),\phi _{0} (t_{2} )]-W[\Delta _{F}^{} (t^{\prime} -t_{1} ),\phi _{0} (t_{1} )]. \label{15} \end{equation} This relation is the harmonic oscillator analog of Kirchhoff's integral theorem. Despite the drastic simplification, the fundamental meaning is maintained and is amenable to interpretation. According to Eq.~(\ref{15}), the coordinate $\phi_{0}(t)$ is determined by both the past and the future. From the past, the Wronskian selects the positive-frequency component of $\phi _{0} (t_{1} )$ and propagates it into the future up to the moment $t=t^{\prime} >t_{1} $. From the future, the Wronskian selects the negative-frequency component of $\phi _{0} (t_{2} )$ and propagates it into the past up to the moment $t=t^{\prime} <t_{2} $. The result is a superposition of the two \textit{waves}. Equation (\ref{2}) is commonly regarded as the equation of motion of a particle (an oscillator) in one-dimensional space. A less obvious interpretation of this equation as an evolution equation of a wave in zero-dimensional space is also possible. Equation (\ref{15}) underscores the second interpretation. The analogy with quantum field theory is apparent: particles are identified with positive-frequency solutions of wave equations, and antiparticles are identified with negative-frequency solutions. Particles move \textit{forward in time}, whereas antiparticles move \textit{backward in time}. 
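Equation (\ref{15}) can also be checked numerically. The sketch below is an illustration, not part of the text; the values of $m$, $A$, $B$ and the time points are arbitrary. It uses the closed form $\Delta_F(\tau)=-i\,e^{-im|\tau|}/(2m)$, which follows from the representation (\ref{11}) and the basis functions (\ref{3}), and takes the $t$-derivative inside the Wronskian by central differences:

```python
import numpy as np

m = 1.3
A, B = 0.7, -0.4                                 # arbitrary real solution phi0

phi0 = lambda t: A*np.cos(m*t) + B*np.sin(m*t)
dphi0 = lambda t: m*(-A*np.sin(m*t) + B*np.cos(m*t))

DF = lambda tau: -1j*np.exp(-1j*m*np.abs(tau))/(2*m)  # closed form of Eq. (11)

def W(tp, t, h=1e-6):
    """Wronskian W[DF(tp - .), phi0] evaluated at time t."""
    dDF = (DF(tp - (t + h)) - DF(tp - (t - h)))/(2*h)  # d/dt of DF(tp - t)
    return DF(tp - t)*dphi0(t) - dDF*phi0(t)

t1, t2 = -1.0, 2.0
# Eq. (15): tp inside (t1, t2) reproduces phi0(tp)
assert abs(W(0.3, t2) - W(0.3, t1) - phi0(0.3)) < 1e-5
# Eq. (16): tp outside (t1, t2) gives zero
assert abs(W(3.5, t2) - W(3.5, t1)) < 1e-5
```

For $t^{\prime}$ inside $(t_1,t_2)$ the two boundary Wronskians reassemble $\phi_0(t^{\prime})$; for $t^{\prime}$ outside they cancel.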
In accordance with the Huygens-Fresnel superposition principle adapted here for the Feynman asymptotic conditions, the wave $\phi _{0} (t^{\prime} )$ is equal to the sum of the negative-frequency component of $\phi _{0} (t_{2} )$, propagating backward in time, and the positive-frequency component of $\phi _{0} (t_{1} )$, propagating forward in time. Equation (\ref{15}) can thus be interpreted both in the spirit of the Huygens-Fresnel superposition principle and in the spirit of the GF method, thereby establishing the close relationship between them. According to Eq.~(\ref{15}), the coordinate $\phi _{0} (t^{\prime} )$ is determined by its value and its first derivative at the other two time points. Arguing in reverse, this suggests that the evolution equation contains time derivatives of no higher than second order. If $t^{\prime} \notin (t_{1} ,t_{2} )$, then the left-hand side of Eq.~(\ref{15}) is zero: \begin{equation} 0=W[\Delta _{F}(t^{\prime} -t_{2} ),\phi _{0} (t_{2} )] - W[\Delta _{F}(t^{\prime} -t_{1} ),\phi _{0} (t_{1} )]. \label{16} \end{equation} Equations (\ref{15}) and (\ref{16}) remain valid after the replacement of $\Delta _{F}$ with any other propagator. For the retarded Green's function, the analog of Eqs.~(\ref{15}) and (\ref{16}) for $t_2 \to +\infty$ reads \begin{equation} \phi _{0}^{} (t^{\prime} )\theta (t^{\prime} - t_{1})=-W[\Delta _{\mathrm{ret}}^{} (t^{\prime} -t_{1}),\phi _{0}^{} (t_{1})]. \label{17} \end{equation} Here, the positive- and negative-frequency components propagate forward in time, corresponding to the usual formulation of the Huygens-Fresnel superposition principle, so that $\phi_{0}(t)$ is determined by the past only. \subsubsection{Superposition principle from the completeness condition} Here, we present a different formulation of the superposition principle. To begin, let us find the Wronskian $W$ of $\Delta _{F} (t^{\prime} -t)$ and $\phi _{0} (t)$. 
The expression (\ref{11}), when substituted into $W$, yields \begin{eqnarray} W[\Delta _{F}(t^{\prime} -t),\phi _{0} (t)] &=& - if^{(+)}(t^{\prime} )W[f^{(+)*} (t)\theta(t^{\prime} -t),\phi _{0} (t)] -if^{(-)}(t^{\prime} )W[f^{(-)*} (t)\theta(t-t^{\prime} ),\phi _{0}] \nonumber \\ &=&-if^{(+)}(t^{\prime} )\theta(t^{\prime} -t)W[f^{(+)*} ,\phi _{0} ]-if^{(-)} (t^{\prime} )\theta (t-t^{\prime} )W[f^{(-) *} ,\phi _{0} ] \nonumber \\ && + \Delta(t^{\prime} - t) \phi _{0} (t)\delta (t^{\prime} -t), \label{18} \end{eqnarray} where \begin{equation} i\Delta(t^{\prime} - t) = f^{(+)}(t^{\prime} )f^{(+) *} (t)-f^{(-)} (t^{\prime} )f^{(-) *} (t). \label{comm} \end{equation} By virtue of Eq.~(\ref{bracket1}) \begin{equation*} \Delta(t^{\prime } - t) = \{\phi _{0}(t^{\prime }),\phi _{0}(t)\}. \end{equation*} In the transition to the last lines of Eq.~(\ref{18}), the properties of the Wronskian and the definitions of the basis functions (\ref{3}) are used. According to Eq.~(\ref{Fund1}), the term $\sim \Delta(t^{\prime } - t) \delta(t^{\prime } - t)$ vanishes, yielding \begin{equation} \phi _{0}^{(+)} (t^{\prime} )\theta (t^{\prime} -t)-\phi _{0}^{(-)} (t^{\prime} )\theta (t-t^{\prime} )=-W[\Delta _{F}^{} (t^{\prime} -t),\phi _{0}^{} (t)]. \label{19} \end{equation} Equation (\ref{19}) can be regarded as an equation for $\Delta _{F}(t^{\prime} -t)$. By taking the time ($t$) derivative of both sides, we obtain Eq.~(\ref{8}). The superposition principle, formalized as in (\ref{19}), thus determines the Green's function up to a solution of the free equation. To obtain a unique Green's function, the asymptotic behavior must be fixed. By taking the differences between both sides of Eq.~(\ref{19}) for $t=t_{2} $ and $t=t_{1}< t_{2}$, we obtain Eq.~(\ref{15}), provided that $t^{\prime} \in (t_{1} ,t_{2} )$. If the inverse condition, $t^{\prime} \notin (t_{1} ,t_{2} )$, holds, then we obtain Eq.~(\ref{16}). 
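The frequency-selection property expressed by Eq.~(\ref{19}) can likewise be checked numerically: for $t^{\prime}>t$ the Wronskian with $\Delta_F$ picks out the positive-frequency part $\phi_0^{(+)}(t^{\prime})=af^{(+)}(t^{\prime})$, and for $t^{\prime}<t$ it picks out $-\phi_0^{(-)}(t^{\prime})$. A short sketch (illustrative only; the numerical values are arbitrary):

```python
import numpy as np

m, a = 1.3, 0.4 - 1.1j                 # frequency; positive-frequency amplitude

fp = lambda t: np.exp(-1j*m*t)/np.sqrt(2*m)
fm = lambda t: np.exp(1j*m*t)/np.sqrt(2*m)
phi0 = lambda t: a*fp(t) + np.conj(a)*fm(t)        # real solution of Eq. (2)
dphi0 = lambda t: -1j*m*a*fp(t) + 1j*m*np.conj(a)*fm(t)

DF = lambda tau: -1j*np.exp(-1j*m*np.abs(tau))/(2*m)

def minus_W(tp, t, h=1e-6):            # -W[DF(tp - .), phi0] at time t
    dDF = (DF(tp - (t + h)) - DF(tp - (t - h)))/(2*h)
    return -(DF(tp - t)*dphi0(t) - dDF*phi0(t))

tp = 0.6
assert abs(minus_W(tp, -2.0) - a*fp(tp)) < 1e-5          # tp > t: phi0^(+)(tp)
assert abs(minus_W(tp, 3.0) + np.conj(a)*fm(tp)) < 1e-5  # tp < t: -phi0^(-)(tp)
```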
Finally, by taking the time ($t^{\prime} $) derivative, we obtain the superposition principle for the canonical momentum $\pi _{0}^{} (t)=\dot{\phi }_{0}^{} (t)$: \begin{equation} \pi _{0}^{(+)} (t^{\prime} )\theta (t^{\prime} -t)-\pi _{0}^{(-)} (t^{\prime} )\theta (t-t^{\prime} )=-W[\Delta _{F}^{} (t^{\prime} -t),\pi _{0}^{} (t)]. \label{19momentum} \end{equation} The proof of Eq.~(\ref{19}) is based neither on Kirchhoff's theorem nor on its obvious modification. For the retarded Green's function, the completeness condition does not lead to a new equation (compared with (\ref{17})). In quantum field theory, the diagram technique is based on the Feynman propagator; thus, what is of interest to us here is the superposition principle formalized as in (\ref{15}), (\ref{16}) and (\ref{19}). \subsubsection{Path integral} Kirchhoff's integral theorem can also be used as a starting point for developing the path integral method. To show this, we note a useful relation \begin{eqnarray} iW[\Delta _{F}(t_{3}-t_{2}),\Delta _{F}(t_{2}-t_{1})]&=&-\theta (t_{3}-t_{2})\theta (t_{2}-t_{1})f^{(+)}(t_{3})f^{(+)\ast }(t_{1}) \nonumber \\ &&+\theta(t_{1}-t_{2})\theta (t_{2}-t_{3})f^{(-)}(t_{3})f^{(-)\ast }(t_{1}). \label{twopro} \end{eqnarray} This relation tells us that a wave propagating toward the future continues to propagate forward in time. A similar property holds for waves propagating backward in time. We choose a sequence of intervals $(t_{1},t_{2})\subset (t_{3},t_{4})\subset \ldots \subset (t_{2n-1},t_{2n})$ and consider $t^{\prime} \in (t_{1},t_{2})$. Iterating Eq.~(\ref{15}) $n$ times gives \begin{eqnarray} \phi _{0}(t^{\prime})&=& W[\Delta _{F}(t^{\prime }-t_{2}),W[\Delta _{F}(t_{2}-t_{4}),W[\ldots ,W[\Delta _{F}(t_{2n}-t_{2n + 2}),\phi_{0}(t_{2n + 2})]\ldots ]]] \label{pi} \\ &+&(-)^{n + 1}W[\Delta _{F}(t^{\prime }-t_{1}),W[\Delta_{F}(t_{1}-t_{3}),W[\ldots ,W[\Delta _{F}(t_{2n-1}-t_{2n+1}),\phi_{0}(t_{2n+1})]\ldots ]]]. 
\nonumber \end{eqnarray} According to this equation, $\phi _{0}(t_{2n+2})$ generates a secondary wave that propagates into the past. At the neighboring instant of time $t=t_{2n}< t_{2n+2}$, it generates a new secondary wave, and so on. The same interpretation is valid for the wave propagating forward in time. Equation~(\ref{15}) is reproduced with $n=0$ for $t_{-1} = t_{0} = t^{\prime}$. The mixed terms containing forward and backward propagation do not arise, as a consequence of (\ref{twopro}). In the limit of $n \rightarrow \infty $, $t_2 - t_1 \to 0$ and $(t_{l+3} - t_{l+2}) \to (t_{l+1} - t_{l})$, we arrive at the continuous product \textit{over history}. Equation (\ref{pi}) can be regarded as a path-integral representation in the space $\mathbb{R}^{1,0}$. The path integral in the space $\mathbb{R}^{1,3}$ is discussed in Sect. III.E. \subsection{Harmonic oscillator with a time-dependent frequency } A field theoretical version of the evolution problem with a time-dependent oscillator frequency, in light of the superposition principle, is discussed in Sect. IV, where proofs are presented. Here, we restrict ourselves to statements of the main assertions. We consider the equation \begin{equation} \left(\frac{d^{2} }{dt^{2} } +m^{2} +\Delta m^{2} (t)\right)\phi (t)=0, \label{20} \end{equation} where $\Delta m^{2} (\pm \infty )=0.$ The perturbation $\Delta m^{2} (t)$ is switched on and off adiabatically. Let $\Delta _{F} (t',t)$ be the Feynman propagator for Eq.~(\ref{20}). 
The following superposition schemes hold: As a consequence of Kirchhoff's integral theorem, \begin{eqnarray*} \phi (t^{\prime} )&=& W[\Delta_{F}(t^{\prime} ,t_{2}),\phi (t_{2} )] - W[\Delta_{F}(t^{\prime} ,t_{1}),\phi (t_{1} )] \;\;\;\;\textrm{for} \;\;\;\; t^{\prime} \in (t_{1} ,t_{2} ), \\ 0&=& W[\Delta_{F}(t^{\prime} ,t_{2}),\phi (t_{2} )] -W[\Delta_{F}(t^{\prime} ,t_{1}),\phi (t_{1} )] \;\;\;\; \textrm{for} \;\;\;\; t^{\prime} \notin (t_{1} ,t_{2} ), \end{eqnarray*} and, as a consequence of the completeness condition, \begin{equation*} \phi _{}^{(+)} (t^{\prime} )\theta (t^{\prime} -t)-\phi _{}^{(-)} (t^{\prime} )\theta (t-t^{\prime} )=-W[\Delta _{F}^{} (t^{\prime} ,t),\phi (t)], \end{equation*} where $\phi^{(\pm )} (t) \sim f^{(\pm )} (t)$ at $t\to \pm \infty $. The expansion of $\phi (t)$ into positive- and negative-frequency components $ \phi^{(\pm)} (t)$ has an objective meaning because the evolution equation is linear. \subsection{Anharmonic oscillator} Let us consider a more general case. We add to the oscillator potential an arbitrary potential $V(\phi )$. The equation of motion takes the form \begin{equation} \left(\frac{d^{2} }{dt^{2} } +m^{2} \right)\phi (t)=-V'(\phi (t)). \label{21} \end{equation} \subsubsection{Superposition principle from Kirchhoff's integral theorem} Equation (\ref{15}) is modified as follows: \begin{equation} \phi (t^{\prime} )=W[\Delta _{F}^{} (t^{\prime} -t_{2} ),\phi (t_{2} )]-W[\Delta _{F}^{} (t^{\prime} -t_{1} ),\phi (t_{1} )]+\int _{t_{1} }^{t_{2} }dt\Delta _{F}^{} (t^{\prime} -t)V'(\phi (t)) . \label{22} \end{equation} The propagator $\Delta _{F}^{} (t)$ is determined from Eq.~(\ref{8}). On the interval $(t_{1} ,t_{2} )$ the sum of the first two terms satisfies the evolution equation of the harmonic oscillator. We denote this sum as \begin{equation} \phi _{0} (t^{\prime} )\equiv W[\Delta _{F}^{} (t^{\prime} -t_{2} ),\phi (t_{2} )]-W[\Delta _{F}^{} (t^{\prime} -t_{1} ),\phi (t_{1} )]. 
\label{23} \end{equation} The solution takes the form \begin{equation} \phi (t^{\prime} )=\phi _{0} (t^{\prime} )+\int _{t_{1} }^{t_{2} }dt\Delta _{F}^{} (t^{\prime} -t)V'(\phi (t)) . \label{24} \end{equation} Given that the Green's function properties of the harmonic oscillator are known, the solution can be written immediately. If $t^{\prime} \notin (t_{1} ,t_{2} )$, then we obtain \begin{equation} 0=\phi _{0} (t^{\prime} )+\int _{t_{1} }^{t_{2} }dt\Delta _{F}^{} (t^{\prime} -t)V'(\phi (t)) . \label{25} \end{equation} The last two equations constitute a version of Kirchhoff's integral theorem for the one-dimensional anharmonic oscillator. Equation (\ref{24}) cannot be interpreted canonically. Although the first term has the standard meaning under the Fresnel superposition scheme, the second term indicates that a component arises among the secondary waves that is generated continuously in time. According to the Huygens-Fresnel superposition principle, to describe the propagation of a wave, it is sufficient to know its phase and amplitude at a fixed time. However, this is true only in linear theories. In nonlinear theories, the propagation of a wave is determined by its entire history (for retarded solutions, its prehistory), even if the original wave equation is local. The dependence of the wave observables on the entire history of the wave indicates, in general, the nonlocal nature of its evolution. Only a narrow family of representations that contain an integral over time corresponds to local but nonlinear theories. The derivative of the potential is an additional source of secondary waves (corrections to the coordinate), and the potential depends on the exact coordinate. This means that Eq. (\ref{24}) is self-consistent and that its solution is tractable only in the context of perturbation theory. In quantum field theory, an equation similar to Eq.~(\ref{24}) serves as the starting point for the development of the diagram technique (see, e.g., \cite{Bjor64}).
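As a direct consistency check (assuming, in accordance with the field-theoretical convention of Eq.~(\ref{29}) below, that the propagator obeys $(d^{2}/dt^{2}+m^{2})\Delta _{F}(t)=-\delta (t)$), applying the oscillator operator to Eq.~(\ref{24}) gives, for $t^{\prime} \in (t_{1},t_{2})$,
\begin{equation*}
\left(\frac{d^{2}}{dt^{\prime 2}}+m^{2}\right)\phi (t^{\prime})
=0-\int _{t_{1}}^{t_{2}}dt\,\delta (t^{\prime}-t)V'(\phi (t))
=-V'(\phi (t^{\prime})),
\end{equation*}
since $\phi _{0}$ solves the free equation; Eq.~(\ref{21}) is thus recovered.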
The equations obtained by replacing the Feynman propagator in Eq.~(\ref{24}) with the retarded and advanced propagators are used to develop the axiomatic scattering theory (see, e.g., \cite{Bjor65}). \subsubsection{Positive- and negative-frequency solutions} In the theory of interacting fields, the decomposition of solutions into positive- and negative-frequency components makes sense only asymptotically for outgoing and incoming states. We assume that the nonlinear interaction is adiabatically switched on at $t \to - \infty$ and adiabatically switched off at $t \to + \infty$. If positive- and negative-frequency components $\phi^{(\pm )} (t)$ are somehow defined, then the subsequent modification of Eq.~(\ref{19}) is obvious: \begin{equation} \phi^{(+)}(t^{\prime} )\theta (t^{\prime} -t)-\phi _{}^{(-)} (t^{\prime} )\theta (t-t^{\prime} )=-W[\Delta _{F}^{} (t^{\prime} -t),\phi (t)] +\int _{t}^{t^{\prime} }d\tau\Delta _{F} (t^{\prime} - \tau)V'(\phi (\tau)) . \label{26} \end{equation} By taking the time ($t$) derivative, after some simple transformations, we obtain $\phi (t)=\phi^{(+)} (t)+\phi^{(-)} (t)$ and Eq.~(\ref{8}). The difference in this equation at two unequal time points leads to Eqs.~(\ref{22}) - (\ref{25}). It might seem, therefore, that Eq. (\ref{26}) is no less general than Eqs.~(\ref{22}) - (\ref{25}). However, we do not have an independent definition of the decomposition into positive- and negative-frequency components. We are forced, therefore, to regard Eq.~(\ref{26}) as a definition of $\phi^{(\pm)} (t)$. According to this equation, $\phi _{}^{(\pm )} (t)\sim f^{(\pm )} (t)$ at $t\to \pm \infty $. 
Differentiating Eq.~(\ref{26}) with respect to $t^{\prime} $ leads to the superposition principle for the canonical momentum \begin{equation} \pi^{(+)} (t^{\prime} )\theta (t^{\prime} -t)-\pi _{}^{(-)} (t^{\prime} )\theta (t-t^{\prime} )=-W[\Delta _{F}(t^{\prime} -t),\pi (t)] +\int _{t}^{t^{\prime} }d\tau\Delta _{F}^{} (t^{\prime} -\tau)V''(\phi (\tau))\pi (\tau) . \end{equation} This equation is consistent with the evolution equation for $\pi^{(\pm )} (t)=\dot{\phi }^{(\pm )}(t)$. Obviously, in nonlinear theories, a full generalization of (\ref{19}) does not exist. A field-theoretical version of the anharmonic oscillator problem is discussed in Sect. V. \subsubsection{Numerical example} We use a numerical example to demonstrate the application of the superposition scheme (\ref{24}) for the description of radial motion in the Keplerian problem. After separation of the angular variables, the evolution problem reduces to solving a problem of one-dimensional motion in an effective potential \[ U=-\frac{\alpha }{r}+\frac{L^{2}}{2\mu r^{2}}, \] where $\alpha =GM_{\odot }\mu $, $M_{\odot }$ is the solar mass, $\mu $ is the mass of a celestial body, and $L$ is the angular momentum. We add and subtract from the potential $U$ an oscillator potential \[ U_{osc}=\frac{1}{2}\mu m^{2}(r-a)^{2} \] and treat $U_{osc}$ as the undisturbed potential. The perturbation potential is thus $V=U-U_{osc}.$ In order to improve convergence and eliminate the need to determine an optimized $U_{osc}$, the frequency parameter $m$ is chosen in agreement with the exact solution (see, e.g., \cite{Arno89}): $m=2\pi/T$, where $T=2\pi \mu ab/L$ is the orbital period, $a = (r_{\min } + r_{\max })/2$ and $b = \sqrt{pa}$ are the semi-major and semi-minor axes of the ellipse and $L = \sqrt{p \alpha \mu}$; the variable $r$ lies in the interval $(r_{\min },r_{\max})$, where $r_{\min } = p/(1 + e)$, $r_{\max } = p/(1 - e)$, $p$ is the semi-latus rectum, and $e$ is the eccentricity.
As a zeroth-order approximation for $\phi(t) \equiv r(t) - a$, we choose a free solution \begin{equation} \phi^{[0]}(t) = C_{+}^{[0]}f^{(+)}(t)+C_{-}^{[0]}f^{(-)}(t) \end{equation} with unknown coefficients $C_{\pm}^{[0]}$ and $f^{(\pm)}(t)$ defined by Eq.~(\ref{3}). The motion begins at perihelion $\phi^{[0]}(0)=r_{\min } - a$, with vanishing velocity $\dot{\phi}^{[0]}(0)=0$. These conditions allow $C_{\pm}^{[0]}$ to be fixed. Given the $l$th-order approximation, $r^{[l]} (t) = a + \phi^{[l]}(t)$ can be substituted in place of the argument of $V^{\prime }$ in Eq.~(\ref{24}) to produce the next-order iteration \begin{equation} \phi^{[l+1]}(t) = C_{+}^{[l+1]}f^{(+)}(t)+C_{-}^{[l+1]}f^{(-)}(t)+\int_{t_{1}}^{t_{2}}d \tau \Delta _{F}(t-\tau )V^{\prime }(a + \phi^{[l]}(\tau )), \label{25 bis} \end{equation} where $\Delta _{F}(t)$ is defined by (\ref{11}). The interval $(t_{1},t_{2})$ covers the region within which we seek the solution. The coefficients $C_{\pm}^{[l+1]}$ are fixed by the conditions $\phi^{[l+1]}(0)=r_{\min } - a$ and $\dot{\phi}^{[l+1]}(0)=0$. \begin{table}[h] \caption{Expansion coefficients of free solutions in the unperturbed potential for the first two iterations and for the exact solution ($l=\infty$).} \label{tab:2} \centering \vspace{2pt} \begin{tabular}{|c|l|l|} \hline\hline $l$ & $~~~~~~~~~~~~~C^{[l]}_{+}$ & $~~~~~~~~~~~~~C^{[l]}_{-}$ \\ \hline\hline $0$ & $-0.142872$ & $-0.142872$ \\ $1$ & $-0.155969 - i0.040544$ & $-0.155322 - i0.068246$ \\ $\infty$ & $-0.151619 - i0.033743$ & $-0.151875 - i0.033990$ \\ \hline\hline \end{tabular} \end{table} The numerical convergence of the recursion is a subtle issue that should be studied separately. Assuming the convergence of the approximate sequence, we should obtain an identity when using $r(t)$ to evaluate the integral in Eq.~(\ref{24}): \begin{equation} \phi^{[\infty]}(t) = C_{+}^{[\infty]}f^{(+)}(t)+C_{-}^{[\infty]}f^{(-)}(t)+\int_{t_{1}}^{t_{2}}d \tau \Delta _{F}(t-\tau )V^{\prime }(a + \phi(\tau )).
\label{N2} \end{equation} The exact solutions are parameterized in terms of the eccentric anomaly $E$: $r=a(1-e\cos E )$ and $t=\sqrt{\mu a^{3}/\alpha }(E -e\sin E )$, where $t$ is time. For our numerical estimates, we choose $\alpha = \mu = p = 1$ and $e=0.2$. The values $t_{1}$ and $t_{2}$ are taken arbitrarily; they correspond to $E_{1} = -1$ and $E_{2} = 7.2$. The coefficients $C_{\pm}^{[l]}$ for $l=0,1,\infty$, found as described above, are presented in Table \ref{tab:2}. Table \ref{tab:1} shows $r^{[0]}$, $r^{[1]}$ and $r^{[\infty]}$ for seven values of $E \in [ 0,2\pi ].$ The inclusion of the secondary waves generated by the nonlinear source $V^{\prime}$ reduces the sum of squared deviations $\chi^2 = \sum (r^{[l]} - r)^2$ from $0.0038$ to $0.0015$, whereas $r^{[\infty]}$ coincides with $r$. Equation~(\ref{24}) can also be derived directly, under the assumption of $t \in (t_1,t_2)$, by using the Green's function method, whereas Eqs.~(\ref{23}) and (\ref{25}) are specific consequences of Kirchhoff's integral theorem. We verified numerically that the free term in Eq.~(\ref{N2}) fulfills Eq.~(\ref{23}) and checked Eq.~(\ref{25}) for a sample set of time points $t \notin (t_1,t_2)$ as well. \vspace{4pt} Summarizing, the idea of Kirchhoff’s integral theorem was explained in this section with a one-dimensional toy model (a harmonic oscillator). Such a pedagogical approach illustrates the formalism, while the attempt to draw a physical analogy with well-known phenomena leads to a seemingly paradoxical observation: there are no waves in the space $\mathbb{R}^{1,0}$, yet the superposition principle holds, and even a problem of celestial mechanics was solved using Kirchhoff’s integral theorem in a technically consistent manner. A parallelism between classical mechanics and geometrical optics was regarded as purely formal until the advent of quantum mechanics. The possibility of solving the problems of classical mechanics using the methods of wave optics seems to be a surprising circumstance.
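The exact parametrization above is easy to cross-check numerically. The following sketch (our own illustration; the variable names are ours) reproduces the $r^{[\infty]}$ column of Table \ref{tab:1} and the orbital period for $\alpha = \mu = p = 1$ and $e=0.2$:

```python
import numpy as np

# Exact Kepler solution in the eccentric-anomaly parametrization:
# r = a (1 - e cos E),  t = sqrt(mu a^3 / alpha) (E - e sin E)
alpha = mu = p = 1.0
e = 0.2
a = p / (1 - e**2)                       # semi-major axis, a = (r_min + r_max)/2
r_min, r_max = p / (1 + e), p / (1 - e)

r = lambda E: a * (1 - e * np.cos(E))
t = lambda E: np.sqrt(mu * a**3 / alpha) * (E - e * np.sin(E))

# perihelion and aphelion are reached at E = 0 and E = pi
assert abs(r(0.0) - r_min) < 1e-12
assert abs(r(np.pi) - r_max) < 1e-12

# sample values of the exact solution r = r^[infinity]
assert abs(r(np.pi / 3) - 0.9375) < 1e-9   # cf. the tabulated value 0.9375
assert abs(r(np.pi) - 1.25) < 1e-9         # cf. the tabulated value 1.2500

# one revolution E -> E + 2 pi takes the orbital period T = 2 pi mu a b / L
b, L = np.sqrt(p * a), np.sqrt(p * alpha * mu)
T = 2 * np.pi * mu * a * b / L
assert abs(t(2 * np.pi) - t(0.0) - T) < 1e-12
```

The assertions confirm that perihelion and aphelion occur at $E=0$ and $E=\pi$ and that one revolution takes exactly the period $T$ quoted above.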
\begin{table}[t] \caption{First two iterations $r^{[l]}$ for the approximate solution of the radial equation of motion as compared to the exact solution $r=r^{[\infty]}$ for seven values of $E \in [0,2\pi]$. } \label{tab:1} \centering \vspace{2pt} \begin{tabular}{|c|c|c|c|} \hline\hline $E$ & $r^{[0]}$ & $r^{[1]}$ & $r^{[\infty]}$ \\ \hline\hline $0$ & 0.8333 & 0.8333 & 0.8333 \\ $\pi/3$ & 0.9079 & 0.9410 & 0.9375 \\ $2\pi/3$ & 1.1131 & 1.1619 & 1.1458 \\ $\pi$ & 1.2500 & 1.2500 & 1.2500 \\ $4\pi/3$ & 1.1132 & 1.1232 & 1.1458 \\ $5\pi/3$ & 0.9079 & 0.9094 & 0.9375 \\ $2\pi$ & 0.8333 & 0.8333 & 0.8333 \\ \hline\hline \end{tabular} \end{table} \section{ Kirchhoff's integral theorem for a free scalar field} \renewcommand{\theequation}{III.\arabic{equation}} \setcounter{equation}{0} \subsection{Complete orthonormal basis functions} A complete set of solutions to the Klein-Gordon equation is formed by the functions \begin{equation*} f_{\mathbf k}^{(+)}(x) = \frac{e^{-ikx}} {\sqrt{2\omega_{\mathbf k}}} \;\;\;\; \mathrm{and} \;\;\;\; f_{\mathbf k}^{(-)}(x) = \frac{e^{ ikx}} {\sqrt{2\omega_{\mathbf k}}}, \end{equation*} where $k=(\omega_{\mathbf k},{\mathbf k})$, $\omega _{\mathbf k} = \sqrt{{\mathbf k}^{2} + m^{2} }$, $x=(t,{\mathbf x}) \in \mathbb{R}^{1,3}$, and $kx = \omega_{\mathbf k}t - {\mathbf k}{\mathbf x}$. These functions correspond to the positive- and negative-frequency solutions in the oscillator problem. The orthonormality conditions are \begin{eqnarray} i\int d{\mathbf x} W[f_{{\mathbf k'}}^{(\pm )*} (x),f_{{\mathbf k}}^{(\pm )} (x)]&=&\pm (2\pi )^{3} \delta ({\mathbf k'}-{\mathbf k}), \nonumber \\ \int d{\mathbf x} W[f_{{\mathbf k'}}^{(\mp )*} (x),f_{{\mathbf k}}^{(\pm )} (x)]&=&0. 
\label{27} \end{eqnarray} For any function $\phi _{0} (x)$ that is a solution of the Klein-Gordon equation, \begin{equation} \phi _{0} (x)=\int \frac{d{\mathbf k}}{(2\pi )^{3} } \left(f_{{\mathbf k}}^{(+)} (x)i\int d{\mathbf y} W[f_{{\mathbf k}}^{(+)*} (y),\phi _{0} (y)] -f_{{\mathbf k}}^{(-)} (x)i\int d{\mathbf y} W[f_{{\mathbf k}}^{(-)*} (y),\phi _{0} (y)]\right). \label{28} \end{equation} After the second quantization, the time-independent quantities \begin{equation*} a({\mathbf k}) = i\int d{\mathbf y} W[f_{\mathbf k}^{(+)*}(y),\phi_{0}(y)]\;\;\;\; \mathrm{and} \;\;\;\; a^{*}({\mathbf k})=-i\int d{\mathbf y} W[f_{\mathbf k}^{(-)*}(y),\phi_{0}(y)] \end{equation*} become annihilation and creation operators. The first and the second terms in Eq.~(\ref{28}) are identified with the positive- and negative-frequency components of $\phi_{0} (x)$. According to the completeness condition (\ref{28}), the solutions of the free equation thereby split into the sum \begin{equation*} \phi_{0} (x)=\phi _{0} ^{(+)} (x)+\phi _{0} ^{(-)} (x). \end{equation*} This decomposition is analogous to the decomposition of Eq.~(\ref{7}). The orthonormality conditions (\ref{27}) and the completeness condition (\ref{28}) are the generalized equivalents to Eqs. (\ref{4}) and (\ref{6}), respectively, for the oscillator problem. 
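The first of the conditions (\ref{27}) is verified directly: since $\partial _{t} f_{{\mathbf k}}^{(\pm )} (x)=\mp i\omega _{{\mathbf k}} f_{{\mathbf k}}^{(\pm )} (x)$, the Wronskian reduces to a product of plane waves, and
\begin{equation*}
i\int d{\mathbf x}\, W[f_{{\mathbf k}'}^{(+)*} (x),f_{{\mathbf k}}^{(+)} (x)]
=\frac{\omega _{{\mathbf k}} +\omega _{{\mathbf k}'} }{2\sqrt{\omega _{{\mathbf k}} \omega _{{\mathbf k}'} } }\,
e^{i(\omega _{{\mathbf k}'} -\omega _{{\mathbf k}} )t} \int d{\mathbf x}\, e^{i({\mathbf k}-{\mathbf k}'){\mathbf x}}
=(2\pi )^{3} \delta ({\mathbf k}'-{\mathbf k}),
\end{equation*}
where the delta function sets $\omega _{{\mathbf k}'} =\omega _{{\mathbf k}}$ in the prefactor; the remaining conditions in (\ref{27}) are checked in the same way.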
Using the analogy with equations (\ref{PB1}) - (\ref{Fund3}) and the Poisson bracket relations \begin{eqnarray} \{\phi _{0}(x),\phi _{0}(y)\}|_{x^0 = y^0} &=&0, \label{PB3} \\ \{\phi _{0}(x),\pi _{0}(y)\}|_{x^0 = y^0} &=& \delta(\mathbf{x} - \mathbf{y}), \label{PB4} \end{eqnarray} one can prove that \begin{eqnarray} \int \frac{d\mathbf{k}}{(2\pi)^3} \left( f^{(+)}_{\mathbf{k}}(x)f^{(+)\ast}_{\mathbf{k}}(y) - f^{(-)}_{\mathbf{k}}(x)f^{(-)\ast}_{\mathbf{k}}(y) \right)|_{x^0 = y^0} &=&0, \label{Fund4} \\ \int \frac{d\mathbf{k}}{(2\pi)^3} \left( f^{(+)}_{\mathbf{k}}(x)\dot{f}^{(+)\ast }_{\mathbf{k}}(y) - f^{(-)}_{\mathbf{k}}(x)\dot{f}^{(-)\ast }_{\mathbf{k}}(y) \right)|_{x^0 = y^0} &=& i\delta(\mathbf{x} - \mathbf{y}), \label{Fund5} \\ \int \frac{d\mathbf{k}}{(2\pi)^3} \left( \dot{f}^{(+)}_{\mathbf{k}}(x)f^{(+)\ast }_{\mathbf{k}}(y) - \dot{f}^{(-)}_{\mathbf{k}}(x)f^{(-)\ast }_{\mathbf{k}}(y) \right)|_{x^0 = y^0} &=& -i\delta(\mathbf{x} - \mathbf{y}). \label{Fund6} \end{eqnarray} Equations (\ref{Fund4}) and (\ref{Fund5}) can be used to show that the completeness condition (\ref{28}) holds for arbitrary functions at $x^0 = y^0$. \subsection{Feynman propagator} The equation for the Feynman propagator is \begin{equation} (\Box +m^{2} )\Delta_F (x)=-\delta ^{4} (x). \label{29} \end{equation} It is easiest to find the solution in four-momentum space and then apply the Fourier transform to convert it into coordinate space. Here, as in the oscillator problem, we must shift the contour of the integral over $k^0$ from the real axis in the vicinity of $k^0 = \pm \omega_{\mathbf k}$. The four possible ways to do so correspond to four Green's functions. 
The Feynman propagator can be written as follows: \begin{eqnarray} \Delta _{F}(x-y)&=&\int \frac{d^{4} k}{(2\pi )^{4} } \frac{e^{-ik(x-y)} }{k^{2} -m^{2} +i0} \nonumber \\ &=&-i\int \frac{d{\mathbf k}}{(2\pi )^{3} } \left( f_{{\mathbf k}}^{(+)} (x)f_{{\mathbf k}}^{(+)*} (y)\theta (x^{0} -y^{0} )+f_{{\mathbf k}}^{(-)} (x)f_{{\mathbf k}}^{(-)*} (y)\theta (- x^{0} + y^{0} ) \right). \label{Fpro} \end{eqnarray} In comparison with Eq.~(\ref{11}), the phase space integral is added here. After the replacement $f^{(\pm)}(t) \to f_{\mathbf k}^{(\pm)}(x)$ and the integration over the phase space in Eqs.~(\ref{12}), (\ref{13}), and (\ref{14}), the form of the other propagators is restored. Using the analogy with Eq.~(\ref{delta1}) and Eqs.~(\ref{Fund4}) and (\ref{Fund6}), one can verify that the propagator (\ref{Fpro}) satisfies Eq.~(\ref{29}). \subsection{Superposition principle from Kirchhoff's integral theorem} \subsubsection{General form of the superposition principle} We start from the identity \begin{equation} \phi _{0} (\xi )\delta ^{4} (\xi -x)=\Delta _{F} (x-\xi )\left((\Box _{\xi } +m^{2} )\phi _{0} (\xi )\right)-\left((\Box _{\xi } +m^{2} )\Delta _{F} (x-\xi )\right)\phi _{0} (\xi ). \label{30} \end{equation} The right-hand side can be written in divergence form as follows: \begin{equation} \phi _{0} (\xi )\delta ^{4} (\xi -x)=\frac{\partial }{\partial \xi _{\mu } } \left(\Delta _{F} (x-\xi )\frac{\stackrel{\leftrightarrow}{\partial }}{\partial \xi ^{\mu } } \phi _{0} (\xi )\right). 
\label{31} \end{equation} By taking the integral over a four-dimensional region $\Omega $ and transforming the right-hand side into a surface integral, the equation \begin{equation} \phi _{0} (x)\theta (x\in \Omega )=\int _{\partial \Omega }^{}dS_{\xi }^{\mu } \left(\Delta _{F} (x-\xi )\frac{\stackrel{\leftrightarrow}{\partial }}{\partial \xi ^{\mu } } \phi _{0} (\xi )\right), \label{32} \end{equation} is obtained, where $\theta (x\in \Omega )$ is the indicator function of $\Omega $: \begin{equation*} \theta (x\in \Omega )=\left\{\begin{array}{cc} {1,} & {x\in \Omega \, ,} \\ {0,} & {x\notin \Omega \, .} \end{array}\right. \end{equation*} By choosing for the surface $\partial \Omega $ a hyperplane $\xi ^{0} =y^{0} $ in the past, i.e., three-dimensional space at a time $\xi ^{0} = y^{0} <x^{0} $, and a three-dimensional space $\xi ^{0} =z^{0} $ at a time $\xi ^{0} = z^{0} > x^{0} $ in the future, and then combining these spaces at infinity, where the integral vanishes, we arrive at \begin{equation} \phi _{0} (x) =\int d{\mathbf z}W[\Delta _{F} (x-z),\phi _{0} (z)] -\int d{\mathbf y}W[\Delta _{F} (x-y),\phi _{0} (y)] . \label{33} \end{equation} If $x\notin \Omega $, we obtain \begin{equation} 0 =\int d{\mathbf z}W[\Delta _{F} (x-z),\phi _{0} (z)] -\int d{\mathbf y}W[\Delta _{F} (x-y),\phi _{0} (y)] . \label{34} \end{equation} Equation (\ref{33}) states that $\phi _{0} (x)$ is determined by its past and future. Equation (\ref{34}) suggests that the interference of secondary waves outside the interval $(y^{0} ,z^{0} )$ is strictly destructive. Equation (\ref{32}) and its consequences (\ref{33}) and (\ref{34}) constitute a version of Kirchhoff's theorem in the most general form; these equations hold for any choice of propagator. \subsubsection{Monochromatic field } The Fourier transform simplifies the superposition scheme of secondary waves. We restrict ourselves to the case of monochromatic, spatially inhomogeneous waves. 
Consider the following Fourier transforms in time of the scalar field and the Green's function: \begin{equation} \phi _{0} (\omega ,{\mathbf x})=\int _{-\infty }^{+\infty }dte^{i\omega t} \phi _{0} (t,{\mathbf x}),\, \, \, \, \Delta _{F} (\omega ,{\mathbf x})=\int _{-\infty }^{+\infty }dte^{i\omega t} \Delta _{F} (t,{\mathbf x}). \label{36} \end{equation} They satisfy the equations \begin{equation*} (\Delta +{\mathbf k}^{2} )\phi _{0} (\omega ,{\mathbf x})=0,\, \, \, \, (\Delta +{\mathbf k}^{2} )\Delta _{F} (\omega ,{\mathbf x})=\delta ({\mathbf x}), \end{equation*} where ${\mathbf k}^{2} =\omega ^{2} -m^{2} $. The right-hand side of the identity \begin{eqnarray} \phi _{0} (\omega ,{\boldsymbol\xi })\delta ({\boldsymbol \xi}-{\mathbf x}) &=& -\Delta _{F} (\omega ,{\mathbf x}-{\boldsymbol\xi})\left((\Delta _{{\xi}} +{\mathbf k}^{2} )\phi _{0} (\omega ,{\boldsymbol\xi})\right) \nonumber \\ && +\left((\Delta _{{\xi}} +{\mathbf k}^{2} )\Delta _{F} (\omega ,{\mathbf x}-{\boldsymbol\xi})\right)\phi _{0} (\omega ,{\boldsymbol\xi}). \label{G2I} \end{eqnarray} can be represented as the divergence \begin{equation*} \phi _{0} (\omega ,{\boldsymbol\xi })\delta ({\boldsymbol\xi }-{\mathbf x})=-\frac{\partial }{\partial \xi ^{\alpha } } \left(\Delta _{F} (\omega ,{\mathbf x}-{\boldsymbol\xi })\frac{\stackrel{\leftrightarrow}{\partial }}{\partial \xi ^{\alpha } } \phi _{0} (\omega ,{\boldsymbol\xi })\right). 
\end{equation*} Integrating over the region $\Omega _{3} $, we obtain von Helmholtz's theorem for the monochromatic field \cite{Helm1860}: \begin{equation} \phi _{0} (\omega ,{\mathbf x})\theta ({\mathbf x}\in \Omega _{3} )=-\int _{\partial \Omega _{3} }dS^{\alpha }_{\xi } \Delta _{F} (\omega ,{\mathbf x}-{\boldsymbol\xi })\frac{\stackrel{\leftrightarrow}{\partial }}{\partial \xi ^{\alpha } } \phi _{0} (\omega ,{\boldsymbol\xi }), \label{37} \end{equation} which is a particular case of the third Green's identity \cite{GGreen1828} and a precursor of Kirchhoff's integral theorem. The integration is performed over the surface $\partial \Omega _{3} $, which is the boundary of $\Omega _{3} $. The equation shows that the field at the point ${\mathbf x}$ is determined by its values on any surrounding surface. This surface is not required to be the wave surface. If the point ${\mathbf x}$ lies outside the closed surface, then the integral vanishes. Regardless of the specific form of $\Delta _{F} (\omega ,{\mathbf x})$, we can conclude from the form of the equation alone that if the field $\phi _{0} (\omega ,{\mathbf x})$ satisfies a differential equation, then this equation contains derivatives with respect to the spatial coordinates of no higher than second order. Equation (\ref{37}) is used to describe the diffraction phenomena of light \cite{Land71,Born99}. In the monochromatic, spatially inhomogeneous case, the integration is over the surface rather than over the volume, as in Eq.~(\ref{33}). However, because we are discussing the calculation of the Fourier transform in time, an implicit time integration enters the problem. \subsubsection{Massless field } For massless particles, the interference scheme for secondary waves simplifies.
Let us apply the inverse Fourier transform in Eq.~(\ref{37}): \begin{equation} \phi _{0} (t,{\mathbf x})\theta ({\mathbf x}\in \Omega _{3} )=-\int _{\partial \Omega _{3} }dS_{{\xi }}^{\alpha } \int _{-\infty }^{+\infty }dt' \Delta _{F} (t-t',{\mathbf x}-{\boldsymbol\xi })\frac{\stackrel{\leftrightarrow}{\partial }}{\partial \xi ^{\alpha } } \phi _{0} (t',{\boldsymbol\xi }). \label{38} \end{equation} This equation follows from Eq.~(\ref{32}) if we select for $\Omega $ an infinite cylinder whose spatial section $\Omega _{3} $ is bounded by the surface of integration $\partial \Omega _{3} $ and whose axis is parallel to the time axis. As is well known, the propagator $\Delta _{F} (t,{\mathbf x})$ does not vanish outside of the light cone $t^{2} -{\mathbf x}^{2} <0$. This property does not generally violate causality, as $\Delta _{F} (t,{\mathbf x})$ also describes the propagation of the wave surfaces at which the phase remains constant. In the relativistic theory, the phase velocity $v_{p} \equiv \omega _{{\mathbf k}} /\left|{\mathbf k}\right|\ge 1$ is not less than the speed of light; however, it is the group velocity $v_{g} \equiv \partial \omega _{{\mathbf k}} /\partial \left|{\mathbf k}\right| =\left|{\mathbf k}\right|/\omega _{{\mathbf k}} \le 1$ with which the propagation of signals is associated. In the limit of zero mass, the propagator $\Delta _{F} (t,{\mathbf x})$ takes the following form (see \cite{Bjor65}, Appendix B or Eq.~(29) in Ref. \cite{Zavi79} in the massless limit): \begin{equation} \Delta _{F} (t,{\mathbf x})=\frac{i}{4\pi ^{2} } \frac{1}{t^{2} -|{\mathbf x}|^{2} -i0}.
\label{39} \end{equation} Substituting (\ref{39}) into (\ref{38}) and taking into account that \begin{equation*} \int _{-\infty }^{+\infty }dt' \Delta _{F} (t-t',{\mathbf x})e^{\mp i\omega _{{\mathbf k}} t'} = \frac{1}{4\pi |{\mathbf x}| } e^{\mp i\omega_{{\mathbf k}} (t \mp |{\mathbf x}|) }, \end{equation*} we obtain \begin{eqnarray} \phi _{0} (t,{\mathbf x})\theta ({\mathbf x}\in \Omega _{3} ) = \frac{1}{4\pi } \int _{\partial \Omega _{3} }dS_{{\boldsymbol\xi }}^{\alpha } \left[ - \frac{1}{\rho } \frac{\partial }{\partial \xi ^{\alpha } } \right. \left(\phi _{0}^{(+)} (t-\rho /c,{\boldsymbol\xi })+\phi _{0}^{(-)} (t+\rho/c ,{\boldsymbol\xi }) \right)&& \nonumber \\ +\left(\frac{\partial }{\partial \xi ^{\alpha } } \frac{1}{\rho } \right)\left(\phi _{0}^{(+)} (t-\rho /c ,{\boldsymbol\xi })+\phi _{0}^{(-)} (t+\rho /c,{\boldsymbol\xi }) \right)&& \nonumber \\ - \left. \frac{1}{\rho } \frac{\partial \rho }{\partial \xi ^{\alpha } } \frac{\partial }{c \partial t} \left(\phi _{0}^{(+)} (t-\rho /c,{\boldsymbol\xi })-\phi _{0}^{(-)} (t+\rho /c,{\boldsymbol\xi }) \right) \right],&& \label{40} \end{eqnarray} where $\rho =\left|{\boldsymbol\xi }-{\mathbf x}\right|$ and in the first term the differentiation with respect to $\xi ^{\alpha } $ does not apply to $\rho $. The dependence on the speed of light $c$ is here made explicit. Equation (\ref{40}) represents a general form of Kirchhoff's integral theorem for the Feynman asymptotic conditions. The function is determined by its values on the selected arbitrary closed surface, taking into account the delay of the positive-frequency component and the advancement of the negative-frequency component. This representation is possible because massless particles travel at the speed of light, regardless of their momentum. \footnote{In Euclidean space of dimension $n \geq 3$ Green's function has the form $\Delta(x) \sim 1/(x^2)^{(n-2)/2}$. 
Performing a Wick rotation, we find that the Green's function as an analytic function of the variable $t=x^0$ has two isolated poles in the spaces of even dimension and two square-root branch points in the spaces of odd dimension. This means that in the massless case the Green's function is effectively localized on the light cone in the spaces of even dimension only. Here an analogue of the representation (\ref{40}) holds. In the spaces of odd dimension, the superposition scheme involves the integration over all spatial coordinates. This property of the Green's function suggests that the requirement that the phase and group velocities be equal to each other and to the speed of light is a necessary but not sufficient condition for the representation of the superposition scheme in the form of a surface integral.} By contrast, the speed of a massive particle depends on its momentum; therefore, the more general representation (\ref{38}) includes the integral over time delay and advance. Kirchhoff's theorem is a precise mathematical formulation of the Huygens-Fresnel superposition principle. A special feature of the Feynman asymptotic conditions is that the negative-frequency components are determined by the future. An analogue of Eq.~(\ref{40}) for the retarded solutions is the original version of Kirchhoff's integral theorem. It is briefly outlined in the Appendix and discussed in detail in Ref.~\cite{Born99}. \subsection{Superposition principle from the completeness condition} As a formalization of the superposition principle for the Feynman asymptotic conditions, by analogy with Eq.~(\ref{19}), we can consider \begin{equation} \phi _{0} ^{(+)} (x)\theta ( x^{0} - y^{0} ) -\phi _{0} ^{(-)} (x)\theta (-x^{0} + y^{0} ) =-\int d{\mathbf y}W[\Delta _{F} (x-y),\phi _{0} (y)] . \label{35} \end{equation} The physical content of this equation is quite traditional: At the moment $y^{0} $, the wave is a source of secondary waves, and the propagation from point $y$ to point $x$ is described by $\Delta _{F} (x-y)$.
To construct the positive-frequency waves, the past $y^{0} <x^{0} $ must be known, and to construct the negative-frequency waves, the future $x^{0} <y^{0} $ must be known. This property is reflected in the presence of the theta functions on the left-hand side of the equation. The proof of Eq.~(\ref{35}) is similar to the proof of Eq.~(\ref{19}). It is not based on Kirchhoff's theorem but instead relies on the completeness condition (\ref{28}) and the expansion of the Feynman propagator into plane waves. Given that Eq.~(\ref{35}) is postulated, the Green's function is uniquely determined. Indeed, let us take the derivative with respect to $y^{0} $ on both sides of the equation. After the transformation of the integrand, we obtain Eq.~(\ref{29}); it must then be supplemented by asymptotic conditions. We take the difference of Eq.~(\ref{35}) at two instants of time, $z^{0}$ and $y^{0} $, such that $y^{0} < x^{0} < z^{0} $. The result is Eq.~(\ref{33}). If $x^{0} \notin (y^{0} ,z^{0} )$, then we obtain (\ref{34}). According to equations (\ref{35}) and (\ref{33}), the field is determined by its values and first derivatives at two time points. This property indicates the local nature of the evolution equation. Arguing backward, since the initial conditions required to determine the field are the field values and the first derivatives, the evolution equation may contain time derivatives of no higher than second order. Additionally, in Eq.~(\ref{32}), a hypersurface $\partial \Omega $ in the form of an infinite cylinder with its axis parallel to the time axis can be chosen. In such a case, Kirchhoff's theorem would assert that the wave is determined by its values and gradients on a two-dimensional surface at all times. This version of the theorem indicates the local nature of the evolution equation in the spatial coordinates. The corresponding differential equation may contain derivatives of the spatial coordinates of no higher than second order.
\subsection{Path integral} The path integral representation is a consequence of Eq.~(\ref{32}). We choose a set of four-dimensional regions $\Omega _{1}\subset \Omega _{2}\subset \ldots \subset \Omega _{n} \subset \mathbb{R}^{1,3}$. By iterating Eq.~(\ref{32}), we obtain \begin{eqnarray} \phi _{0}(x)\theta (x \in \Omega _{1}) &=& \int_{\partial \Omega _{1}}dS_{\xi _{1}}^{\mu _{1}}\int_{\partial \Omega _{2}}dS_{\xi _{2}}^{\mu _{2}}\ldots \int_{\partial \Omega _{n}}dS_{\xi _{n}}^{\mu _{n}} \label{pir13} \\ &\times& \Delta _{F}(x-\xi _{1})\frac{\overset{\leftrightarrow }{\partial }}{\partial \xi _{1}^{\mu _{1}}} \Delta _{F}(\xi _{1}-\xi _{2})\frac{\overset{\leftrightarrow }{\partial }}{\partial \xi _{2}^{\mu _{2}}} \ldots \Delta _{F}(\xi _{n-1}-\xi _{n})\frac{\overset{ \leftrightarrow }{\partial }}{\partial \xi _{n}^{\mu _{n}}}\phi _{0}(\xi _{n}). \nonumber \end{eqnarray} There exists considerable freedom in choosing $\Omega_{i}$. A similar freedom exists in the factorization of the unitary evolution operator $U(t_2,t_1)$ in quantum mechanics, where the equation $U(t_2,t_1) = U (t_2,t) U (t,t_1)$ holds for any instant of time $t \in (t_1,t_2)$. While the evolution operator is factorized in time, the integration in the path integral goes over the coordinates in three-dimensional space. Such a representation easily follows from Eq.~(\ref{pir13}). Indeed, choosing $\Omega_{i}$ to be cylinders with infinite radii and axes parallel to the time axis, we arrive at a representation of this kind. The broken lines connecting the points $x$ and $\xi_{n} \in \partial \Omega_{n}$ through $\xi_{i} \in \partial \Omega_{i}$ ($i=1,\ldots,n-1$) form in the continuum limit the class of paths over which the path integral is defined. The comparison of Eqs.~(\ref{32}) and (\ref{pir13}) also yields, in the limit of $n \to \infty$, an integral representation for the Green's function in the form of a path integral.
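In the oscillator analogue, the consistency of such nested representations rests on the composition property (\ref{twopro}). A numerical spot check of its forward-in-time branch (our own sketch; the sample frequency and times are arbitrary):

```python
import numpy as np

m = 1.3
f = lambda t, s: np.exp(-1j * s * m * t) / np.sqrt(2 * m)        # f^(+/-)(t)
Delta_F = lambda t: -1j * np.exp(-1j * m * np.abs(t)) / (2 * m)  # oscillator Feynman propagator

def W(g, h, t, eps=1e-6):
    """Wronskian W[g, h](t) = g h' - g' h, with central-difference derivatives."""
    dg = (g(t + eps) - g(t - eps)) / (2 * eps)
    dh = (h(t + eps) - h(t - eps)) / (2 * eps)
    return g(t) * dh - dg * h(t)

t1, t2, t3 = -1.0, 0.5, 2.0             # ordered sample times, t3 > t2 > t1
lhs = 1j * W(lambda t: Delta_F(t3 - t), lambda t: Delta_F(t - t1), t2)
rhs = -f(t3, +1) * np.conj(f(t1, +1))   # -theta theta f^(+)(t3) f^(+)*(t1)
assert abs(lhs - rhs) < 1e-6
```

The backward-in-time branch is verified in the same way by reversing the ordering of the three times.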
\section{Charged scalar field in an external electromagnetic field} \renewcommand{\theequation}{IV.\arabic{equation}} \setcounter{equation}{0} Equations (\ref{32}) and (\ref{35}) and their particular cases were obtained for a free field. The following question arises: which relations can be generalized in the presence of an external field? We restrict ourselves to scalar electrodynamics. \subsection{Complete orthonormal basis functions} Replacing the ordinary derivatives with respect to the space-time coordinates in the Klein-Gordon equation by gauge-covariant derivatives, \begin{equation} \partial _{\mu } \to D_{\mu } =\partial _{\mu } +ieA_{\mu } \label{41} \end{equation} yields the evolution equation for a complex scalar field in an external electromagnetic field, \begin{equation} (D_{\mu } D^{\mu } +m^{2} )\phi (x)=0. \label{42} \end{equation} The external field is adiabatically switched on at $t\to -\infty $ and off at $t\to +\infty $. The set of positive- and negative-frequency asymptotic solutions $f_{{\mathbf k}}^{(\pm )} (x)$ is complete and orthonormal. The second-order Eq.~(\ref{42}) has a set of independent solutions $F_{{\mathbf k}}^{(\pm )} (x)$. The asymptotic conditions can be taken as \begin{equation*} F_{{\mathbf k}}^{(\pm )} (x)\to f_{{\mathbf k}}^{(\pm )} (x)\equiv \frac{e^{\mp ikx} }{\sqrt{2\omega_{ \mathbf{k} } } } \, \, \, \, {\mathrm {for}}\, \, \, \, t \to -\infty . \end{equation*} All other solutions of Eq.~(\ref{42}) are expressed as linear superpositions of the basis functions $F_{{\mathbf k}}^{(\pm )} (x)$. It would be natural to use the prescription (\ref{41}) for extending the Huygens-Fresnel superposition principle. It can be assumed that in an external electromagnetic field, the suitable generalization of the Wronskian is given by \begin{equation*} W_{A} [\varphi ^{*} ,\chi ]\equiv \varphi ^{*} (\stackrel{\leftrightarrow}{\partial }_{t} +2ieA_{0} )\chi =\varphi ^{*} (D_{t} \chi )-(D_{t} \varphi )^{*} \chi . 
\end{equation*} We note a useful property: \begin{eqnarray} \partial _{t} W_{A} [\varphi ^{*} ,\chi ] &=& \partial _{t} (\varphi ^{*} (D_{t} \chi )-(D_{t}^{} \varphi )^{*} \chi ) \nonumber \\ &=& \varphi^{*} (D_{t} D_{t} \chi )-(D_{t}^{} D_{t}^{} \varphi )^{*} \chi . \label{43} \end{eqnarray} It is not difficult to show that if $\varphi $ and $\chi $ are two solutions of Eq.~(\ref{42}), then the following condition holds: \begin{equation*} \partial_{t} \int d \mathbf{x} W_{A} [\varphi ^{*} ,\chi ]=0. \end{equation*} This condition allows us to calculate the normalization integral by sending the time variable to negative infinity, where the solutions are represented as plane waves. The orthonormality conditions thus take the form \begin{eqnarray*} i\int d {\mathbf x} W_{A} [F_{{\mathbf k}'}^{(\pm )*} (x),F_{{\mathbf k}}^{(\pm )} (x)]&=&\pm (2\pi )^{3} \delta ({\mathbf k}'-{\mathbf k}), \\ \int d {\mathbf x} W_{A} [F_{{\mathbf k}'}^{(\pm )*} (x),F_{{\mathbf k}}^{(\mp )} (x)]&=&0. \end{eqnarray*} The completeness condition is evident: \begin{equation} \phi (x)=\int \frac{d\mathbf{k}}{(2\pi )^{3} } \left(F_{{\mathbf k}}^{(+)} (x)i\int d{\mathbf y} W_A [F_{\mathbf k}^{(+)*} (y),\phi (y)]-F_{{\mathbf k}}^{(-)} (x)i\int d{\mathbf y} W_A [F_{{\mathbf k}}^{(-)*} (y),\phi (y)]\right). \label{extf} \end{equation} In the theory of a charged scalar field, the canonical momenta are defined by the equations $\pi^{\ast} (x) = D_t \phi (x)$ and $\pi (x) = (D_t \phi (x))^{\ast}$. The canonically conjugate variables satisfy \begin{equation} \{\phi(x),\pi(y)\}|_{x^0 = y^0} = \{\phi(x)^{\ast},\pi^{\ast}(y)\}|_{x^0 = y^0} = \delta(\mathbf{x} - \mathbf{y}), \label{PB6} \end{equation} while all other pairs have vanishing Poisson brackets. 
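Property (\ref{43}) is an algebraic identity and can be verified symbolically. In the one-dimensional sketch below (sympy assumed available), \texttt{phis} stands for the conjugated field $\varphi^{*}$, treated as an independent function, so that $(D_{t}\varphi)^{*}$ becomes the conjugated covariant derivative acting on \texttt{phis}:

```python
import sympy as sp

# Symbolic check of property (43): the time derivative of the covariant
# Wronskian density W_A = phi*(D_t chi) - (D_t phi)* chi contains no
# first-derivative cross terms.  `phis` plays the role of phi^*; e and A0
# are illustrative placeholders for the charge and the potential.

t, e = sp.symbols('t e', real=True)
A0 = sp.Function('A0')(t)
phis = sp.Function('phis')(t)   # the conjugated field phi^*
chi = sp.Function('chi')(t)

D  = lambda f: sp.diff(f, t) + sp.I*e*A0*f   # D_t
Dc = lambda f: sp.diff(f, t) - sp.I*e*A0*f   # conjugated covariant derivative

W = phis*D(chi) - Dc(phis)*chi
identity = sp.diff(W, t) - (phis*D(D(chi)) - Dc(Dc(phis))*chi)
assert sp.expand(identity) == 0
```

The cross terms $(D_{t}\varphi)^{*}(D_{t}\chi)$ cancel between the two products, which is what makes the subsequent orthonormality argument work.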
The generalization of the corresponding relations of a free scalar field can be written as follows \begin{eqnarray} \int \frac{d\mathbf{k}}{(2\pi)^3} \left( F^{(+)}_{\mathbf{k}}(x)F^{(+)\ast}_{\mathbf{k}}(y) - F^{(-)}_{\mathbf{k}}(x)F^{(-)\ast}_{\mathbf{k}}(y) \right)|_{x^0 = y^0} &=&0, \label{Fund7} \\ \int \frac{d\mathbf{k}}{(2\pi)^3} \left( F^{(+)}_{\mathbf{k}}(x)D_t^{\ast } F^{(+)\ast }_{\mathbf{k}}(y) - F^{(-)}_{\mathbf{k}}(x)D_t^{\ast } F^{(-)\ast }_{\mathbf{k}}(y) \right)|_{x^0 = y^0} &=& i \delta(\mathbf{x} - \mathbf{y}), \label{Fund8} \\ \int \frac{d\mathbf{k}}{(2\pi)^3} \left( D_t F^{(+)}_{\mathbf{k}}(x)F^{(+)\ast }_{\mathbf{k}}(y) - D_t F^{(-)}_{\mathbf{k}}(x) F^{(-)\ast }_{\mathbf{k}}(y) \right)|_{x^0 = y^0} &=& -i \delta(\mathbf{x} - \mathbf{y}). \label{Fund9} \end{eqnarray} Equations (\ref{Fund7}) and (\ref{Fund8}) show that the completeness condition (\ref{extf}) holds for arbitrary functions evaluated at $x^0 = y^0$. In conclusion, we note that the zeroth component of the vector potential can be removed by a gauge transformation, in which case $W_A = W$ and the other relations and their proofs take a form closer to the free case. \subsection{Feynman propagator} The decomposition of the Feynman propagator over the basis functions has the form \begin{equation} \Delta _{F} (x,y)=-i\int \frac{d\mathbf{k}}{(2\pi )^{3} } \left(F_{{\mathbf k}}^{(+)} (x)F_{{\mathbf k}}^{(+)*} (y)\theta (x^{0} -y^{0} )+F_{{\mathbf k}}^{(-)} (x)F_{{\mathbf k}}^{(-)*} (y)\theta (-x^{0} +y^{0} )\right). \label{proext} \end{equation} The use of Eqs.~(\ref{Fund7}) and (\ref{Fund9}) allows one to verify by direct calculation that \begin{equation} (D_{\mu } D^{\mu } +m^{2} )\Delta _{F} (x,\xi )=-\delta ^{4} (x-\xi ). \label{44} \end{equation} \subsection{Superposition principle from Kirchhoff's integral theorem } To derive Eq.~(\ref{31}), the identity (\ref{30}) was used. 
After recapitulating the arguments used in the proof of Eq.~(\ref{43}), we rewrite the divergence of \begin{equation*} \varphi \stackrel{\leftrightarrow}{D}_{\mu } \chi \equiv \varphi (D_{\mu } \chi )-(D_{\mu }^{*} \varphi )\chi , \end{equation*} where $\varphi $ and $\chi $ are arbitrary functions, in the form \begin{equation*} \partial _{\mu } (\varphi \stackrel{\leftrightarrow}{D^{\mu }} \chi )=\varphi (D_{\mu } D^{\mu } \chi )-(D^{*} _{\mu } D^{*\mu } \varphi )\chi . \end{equation*} Substituting $\Delta _{F} (x,\xi )$ and $\phi (\xi )$ in place of $\varphi $ and $\chi $, respectively, we obtain \begin{equation} \phi (\xi )\delta ^{4} (x-\xi )=\frac{\partial }{\partial \xi _{\mu } } \left( \Delta _{F} (x,\xi )(D_{\mu } \, \phi (\xi ))-(D_{\mu }^{*} \Delta _{F} (x,\xi ))\phi (\xi )\right). \label{45} \end{equation} By choosing as the integration region a four-dimensional space with the variable $\xi ^{0} $ running in the interval $(y^{0} ,z^{0} )$, we find for $x^{0} \in (y^{0} ,z^{0} )$ \begin{eqnarray} \phi (x) = \int d{\mathbf z}W_{A} [\Delta _{F} (x,z),\phi (z)] -\int d{\mathbf y}W_{A} [\Delta _{F} (x,y),\phi (y)]. \end{eqnarray} In the opposite case of $x^{0} \notin (y^{0} ,z^{0} )$ the left-hand side vanishes. \subsection{Superposition principle from the completeness condition} The linearity of the evolution equation allows for the generalization of the superposition principle (\ref{35}) in the presence of an external electromagnetic field. The completeness condition leads to the following scheme: \begin{equation} \phi ^{(+)} (x)\theta (x^{0} -y^{0} )-\phi ^{(-)} (x)\theta (-x^{0} +y^{0} )=-\int d{\mathbf y}W_{A} [\Delta _{F} (x,y),\phi (y)] . 
\label{46} \end{equation} Under the integral sign, the derivative entering $W_A$ also generates the term \begin{equation*} \Delta(x,y) = -i \int \frac{d\mathbf{k}}{(2\pi )^{3} } \left( F_{\mathbf k}^{(+)} (x) F_{\mathbf k}^{(+)*} (y) - F_{\mathbf k}^{(-)} (x) F_{\mathbf k}^{(-)*} (y) \right) \end{equation*} multiplied by $\phi (y) \delta (x^{0} -y^{0})$. In view of the condition $x^{0} = y^{0}$ and Eq.~(\ref{Fund7}), this term vanishes. By calculating the derivative of Eq.~(\ref{46}) with respect to $y^{0} $, one can prove that the propagator obeys the equation \begin{equation} (D_{\mu }^{*} D^{*\mu } +m^{2} )\Delta _F (x,\xi )=-\delta ^{4} (x-\xi ), \label{47} \end{equation} where the differentiation is over $\xi $. This equation is equivalent to Eq.~(\ref{44}), where $D^{\mu } $ acts on $x$. The superposition scheme for the retarded propagator is as follows \begin{equation} \phi (x)\theta (x^{0} -y^{0} )=-\int d{\mathbf y} W_{A} [\Delta _{\mathrm{ret}} (x,y),\phi (y)] . \label{48} \end{equation} This equation is the analog of Eq.~(\ref{17}). It can also be derived from Eq.~(\ref{45}). To conclude, the superposition schemes for a free scalar field remain essentially valid for a complex scalar field in an external electromagnetic field. \section{Nonlinear field theory} \renewcommand{\theequation}{V.\arabic{equation}} \setcounter{equation}{0} The superposition principle for secondary waves, which is the consequence of the GF method, should be distinguished from the superposition principle as a manifestation of the linearity of the problem. In linear theory, the wave is a source of secondary waves. In nonlinear theory, two sources of secondary waves exist: the wave itself plus a function $V^{\prime}(\phi)$. In both cases, secondary waves satisfy free linear wave equations, so the superposition principle applies to secondary waves universally. 
\subsection{Superposition principle from Kirchhoff's integral theorem} For a Lagrangian ${\rm {\mathcal L}}={\rm {\mathcal L}}_{{\rm free}} - V$, which contains a term $V=V(\phi )$ of general form, the identity (\ref{30}) is modified as follows: \begin{eqnarray} \phi(\xi )\delta^{4} (\xi - x) &=& \Delta_{F} (x-\xi ) ((\Box_{\xi } + m^{2} ) \phi(\xi )+ V'(\phi (\xi ))) - ((\Box_{\xi } + m^{2} ) \Delta_{F} (x-\xi ))\phi (\xi ) \nonumber \\ &=& \frac{\partial }{\partial \xi _{\mu}} \left( \Delta _{F} (x - \xi ) \frac{\stackrel{\leftrightarrow}{\partial }}{\partial \xi ^{\mu } } \phi (\xi ) \right) +\Delta _{F} (x-\xi )V'(\phi (\xi )). \end{eqnarray} For $x^{0} \in (y^{0} ,z^{0} )$, this equation gives \begin{equation} \phi (x)=\phi _{0} (x)-\int d^{4} \xi \Delta _{F} (x-\xi )V'(\phi (\xi )), \label{49} \end{equation} where the integration over $\xi ^{0} $ runs over $\xi ^{0} \in (y^{0},z^{0})$ and the integral in $\boldsymbol\xi$ extends over all space. The field $\phi _{0} (x)$ is defined by the relation \begin{equation} \phi _{0} (x)=\int d{\mathbf z}W[\Delta _{F} (x-z),\phi (z)] -\int d{\mathbf y}W[\Delta _{F} (x-y),\phi (y)] . \label{50} \end{equation} For $x^{0} \in (y^{0} ,z^{0} )$, $\phi _{0} (x)$ satisfies the free Klein-Gordon equation. If $x^{0} \notin (y^{0} ,z^{0} )$, then \begin{equation} 0=\phi_{0}(x) - \int d^{4} \xi \Delta _{F} (x-\xi ) V'(\phi (\xi )). \label{51} \end{equation} In quantum field theory, Eq.~(\ref{49}) in the infinite limits $(y^{0} ,z^{0} )=(-\infty ,+\infty )$ is used in the development of perturbation theory. Unlike in the canonical formulation of the Fresnel superposition scheme, the integrand contains the nonlinear term $V'(\phi (\xi ))$ as an additional source of secondary waves and the integration spans the entire four-dimensional space. Equations (\ref{49}) - (\ref{51}) in nonlinear scalar field theory are analogous to Eqs.~(\ref{22}) - (\ref{24}) in the anharmonic oscillator problem. 
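The mechanical analog of Eqs.~(\ref{49})--(\ref{51}) mentioned above, the anharmonic oscillator, lends itself to a direct numerical illustration. The sketch below iterates the integral form $x(t)=x_{0}(t)-\int_{0}^{t}G_{\mathrm{ret}}(t-s)V'(x(s))\,ds$ with $V'(x)=\lambda x^{3}$ (a Picard/Born iteration) and cross-checks against a direct integration of the equation of motion; all numerical parameters are illustrative choices, not taken from the text.

```python
import numpy as np

# Picard/Born iteration of the integral form of the anharmonic oscillator
# x'' + w^2 x = -lam x^3, the mechanical analog of Eq. (49):
#   x(t) = x0(t) - \int_0^t G_ret(t - s) V'(x(s)) ds,  G_ret(t) = sin(w t)/w.

w, lam = 1.0, 0.05
t = np.linspace(0.0, 10.0, 2001)
dt = t[1] - t[0]
x0 = np.cos(w*t)                 # free solution with x(0) = 1, x'(0) = 0
G = np.sin(w*t)/w                # retarded Green's function, t >= 0

x = x0.copy()
for _ in range(25):              # fixed-point iteration of the integral equation
    conv = np.convolve(G, lam*x**3)[:len(t)]*dt   # \int_0^t G(t-s) V'(x(s)) ds
    x = x0 - conv

# cross-check against a direct RK4 integration of the equation of motion
def accel(x_):
    return -w**2*x_ - lam*x_**3

xc, vc = 1.0, 0.0
x_rk = np.empty_like(t)
x_rk[0] = xc
for i in range(len(t) - 1):
    k1x, k1v = vc, accel(xc)
    k2x, k2v = vc + dt/2*k1v, accel(xc + dt/2*k1x)
    k3x, k3v = vc + dt/2*k2v, accel(xc + dt/2*k2x)
    k4x, k4v = vc + dt*k3v, accel(xc + dt*k3x)
    xc += dt/6*(k1x + 2*k2x + 2*k3x + k4x)
    vc += dt/6*(k1v + 2*k2v + 2*k3v + k4v)
    x_rk[i + 1] = xc

assert np.max(np.abs(x - x_rk)) < 1e-2
```

For small $\lambda$ the iteration reproduces the Born series; for a Volterra equation of this type it converges on any finite interval.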
The mass term of ${\rm {\mathcal L}}$ can be attributed either to ${\rm {\mathcal L}}_{{\rm free}}$ or to the potential $V$. In the last case, ${\rm {\mathcal L}}_{{\rm free}}$ describes massless particles. This might seem disadvantageous, because asymptotic states of ${\rm {\mathcal L}}$ are massive in general. The positive feature is that the retarded Green's function of massless particles, being localized on the light cone (see Eq.~(\ref{A1})), ensures reduction of four-dimensional integrals in Eqs.~(\ref{49}) and (\ref{51}) to three-dimensional integrals and transformation of integrals in Eq.~(\ref{50}) to surface integrals. \subsection{Positive- and negative-frequency solutions} Interacting fields can be decomposed into a sum of positive- and negative-frequency solutions only asymptotically. In Sect. II.C.2, we demonstrated that the straightforward generalization of the Fresnel superposition scheme to nonlinear dynamical systems is possible and consistent; however, its value is limited to only providing the definitions of positive- and negative-frequency solutions for arbitrary $t$. For the sake of completeness, we present here a field theoretical version of the nonlinear superposition scheme (\ref{26}): \begin{eqnarray} \phi^{(+)}(x)\theta ( x^{0} - y^{0}) - \phi^{(-)}(x)\theta (- x^{0} + y^{0}) = &-&\int d\mathbf{y}W[\Delta _{F}(x - y),\phi (y)] \nonumber \\ &+&\int d^4 \xi \Delta _{F}(x - \xi)V'(\phi(\xi)), \label{261} \end{eqnarray} where the integral over $ \xi^0 $ runs from $y^0$ to $x^0$. The derivative over $y^{0}$ leads to the relation $\phi (t)=\phi^{(+)} (t)+\phi^{(-)} (t)$ and Eq.~(\ref{29}). The difference in Eq.~(\ref{261}) at two different time points leads to Eqs.~(\ref{49}) - (\ref{51}). Equation (\ref{261}) ensures that $\phi^{(\pm )} (x)$ is a linear superposition in $\mathbf{k}$ of the basis functions $f^{(\pm )}_{\mathbf k} (x)$ at $t\to \pm \infty $. 
The calculation of the first derivative of Eq.~(\ref{261}) in $x^{0} $ yields a superposition scheme for the canonical momentum: \begin{eqnarray} \pi^{(+)}(x)\theta ( x^{0} - y^{0}) - \pi^{(-)}(x)\theta (- x^{0} + y^{0}) =& -&\int d\mathbf{y}W[\Delta _{F}(x - y),\pi (y)] \nonumber \\ &+&\int d^4\xi \Delta _{F}(x - \xi)V''(\phi(\xi))\pi(\xi), \label{262} \end{eqnarray} where the integral over $\xi^0$ runs from $y^0$ to $x^0$. \section{Conclusions} \renewcommand{\theequation}{VI.\arabic{equation}} \setcounter{equation}{0} The evolution of the ideas underlying the Huygens-Fresnel superposition principle from geometrical and wave optics to the theory of interacting fields is highly instructive: In geometrical optics, a \textit{wave front} refers to the two-dimensional surface that defines the farthest extent to which the wave has arrived after a certain period of time. Huygens' principle (1678), based on the Fermat principle, allows for the determination of how the wave front propagates. In wave optics, the term wave front has no strict definition. Instead, the term \textit{wave surface} is used. The wave surface is the two-dimensional surface on which the phase of the wave is constant. A.-J. Fresnel proposed the principle of superposition (1816), which details the wave process. A wave is the result of interference of secondary waves emitted at an earlier time. At any fixed point, it is determined by the phase and amplitude at a wave surface corresponding to a preceding instant of time. The wave surface in the past can be chosen arbitrarily. The superposition principle anticipates the informal content of the GF method (1828). Kirchhoff's integral theorem (1883) is a dynamic, four-dimensional extension of Green's third identity of static potential theory. More than half a century separates this theorem from Green's major work \cite{GGreen1828}, which introduced the basic concepts of the GF method. \footnote{In 1839, G. 
Green came close to the notion of the four-dimensional Green's function. The value of the GF method in quantum field theory is widely appreciated \cite{Schw93}.} Kirchhoff's integral theorem provides a mathematical proof of the superposition principle, clarifying and quantifying it. First, the theorem demonstrates that the amplitude of the secondary waves is determined by the Wronskian of the Green's function and the field at a previous time. Second, the wave surfaces play no distinguished role; this is perfectly consistent with the fact that they are not necessarily observable (in the massive theory, e.g., the speed of a wave surface of a plane wave is always greater than the speed of light). The surface must be closed and contain the point at which the wave is calculated; otherwise, it can be arbitrarily chosen. Outside the closed surface, the interference of the secondary waves is strictly destructive: for any exterior point, the calculation of the surface integral yields zero. The reasoning used in the proof can be regarded as a standard piece of the GF method; it is of high generality, goes beyond the problem of propagation of electromagnetic waves, and allows for an understanding of how the superposition principle should be modified in the theory of interacting fields. Note the most significant modifications: i)~~According to the Huygens-Fresnel superposition principle, a wave at a given point is expressed as a superposition of secondary waves emitted from centers located on a two-dimensional surface. This property arises only in massless theories, including the theory of electromagnetic fields, where the group and phase velocities coincide with the speed of light; this is the necessary condition for the absence of integration over time delays and advances in Eq.~(\ref{40}). Kirchhoff's integral theorem for massive particles, Eq.~(\ref{38}), states that a wave is determined by its values on a closed surface at all times. 
The physical interpretation of this fact is quite transparent. The Fourier expansion of a massive field contains components of various momenta corresponding to various group and phase velocities, which leads to a spread in time lags. As a result, the two-dimensional integral over the sources of secondary waves is transformed into a three-dimensional integral. ii)~~In the nonlinear theory, there is a need for a more extensive modification of the superposition scheme. In addition to the wave itself, a nonlinear function of the field $V'(\phi (\xi ))$ becomes a source of secondary waves. The summation runs over distributed sources: over a two-dimensional surface in theories with massless particles, over a two-dimensional surface extended along the time axis in theories with massive particles, and over the entirety of four-dimensional space in nonlinear theories. This type of representation holds for both local nonlinear and nonlocal theories. We see that after each modification, the effectiveness of the superposition principle weakens. In the most general nonlinear case, the modified principle certainly does not promise fast results. To determine the field, it is necessary to calculate a four-dimensional integral in a self-consistent manner. In linear theories, the superposition principle solves the evolution problem, but in nonlinear theories, it only offers a different formulation of the problem. Nevertheless, relations of this type are still useful when searching for solutions within the framework of perturbation theory, when the nonlinearity is small. In other cases, they can be used to check solutions found by other techniques. The four-dimensional representation given by Eq.~(\ref{49}) is a consequence of Kirchhoff's integral theorem, but in quantum field theory, it is typically derived directly from the properties of Green's functions. In the context of field theory, the original form of the superposition principle has only heuristic value. 
The superposition schemes for the secondary waves that are used to solve specific problems are unified by Kirchhoff's integral theorem, which exploits the properties of the Wronskian of the Green's functions and solutions of the wave equation under consideration. The spectrum of such problems is quite broad: from the harmonic oscillator to scalar electrodynamics and nonlinear field theories. In addition to the use of the GF method, which has found a variety of applications in quantum field theory, Kirchhoff's theorem has a wider range of corollaries. Equation (\ref{38}), which represents one version of Kirchhoff's theorem, does not arise in quantum field theory because of the boundary conditions, which are atypical of a scattering problem. However, the superposition scheme represented by Eq. (\ref{35}), which is not based on Kirchhoff's theorem, is not sufficiently general because it does not extend to theories with interaction. The statement of Kirchhoff's integral theorem depends on the asymptotic conditions imposed on the Green's function. In the main part of the paper, because we were interested in the place of this remarkable principle and well-known theorem in quantum field theory, we applied the Feynman asymptotic conditions almost universally. \begin{acknowledgments} This work was supported in part by RFBR Grant No.~16-02-01104 and Grant~No.~HLP-2015-18~of the Heisenberg-Landau~Program. \end{acknowledgments} \appendix \renewcommand{\theequation}{A.\arabic{equation}} \renewcommand{\thesection}{} \setcounter{equation}{0} \section{Kirchhoff's integral theorem and its vector extensions with the retarded Green's function} In the main sections of the paper, emphasis is placed on the Feynman asymptotic conditions, which play a special role in quantum field theory. Here, we formulate Kirchhoff's integral theorem and its vectorial generalizations for the retarded Green's function. 
\renewcommand{\thesubsection}{A.1} \subsection{Free massless scalar field} Equation~(\ref{32}) is essentially the third Green's identity for time-dependent solutions of the wave equation; its proof is outlined in Sect. III.A. As noted earlier, Eq.~(\ref{32}) holds for any Green's function. The retarded Green's function in the coordinate representation has the following form (see e.g., \cite{Bjor65} Appendix B) \begin{eqnarray} \label{A1} \Delta _{\mathrm{ret}}(t,{\mathbf x}) &=&\int \frac{d^{4}q}{(2\pi )^{4}}e^{-iqx}\frac{1}{q^{2}+i0\mathrm{sgn}(q_{0})} \nonumber \\ &=&-\frac{1}{4\pi |\mathbf{x}|}\left( \delta (|\mathbf{x}|-t)-\delta (|\mathbf{x}|+t)\right) \theta (t). \end{eqnarray} The product of generalized functions of a single variable is not defined. The propagator depends on four space-time coordinates. Generalized functions of four variables allow for products of up to four generalized functions of one variable, provided their arguments are independent. $\Delta _{\mathrm{ret}}(t,{\mathbf x})$ is thus a well-defined generalized function. $\Delta _{\mathrm{ret}}(t,{\mathbf x})$ is localized on the upper half of the light cone $t^2 - {\mathbf x}^2 = 0$. Substituting (\ref{A1}) in place of $\Delta _{F} (t,{\mathbf x})$ in Eq.~(\ref{38}), one arrives at the original Kirchhoff representation \cite{Kirch1883,Born99} \begin{eqnarray} \phi _{0} (t,{\mathbf x})\theta ({\mathbf x}\in \Omega _{3} ) = \frac{1}{4\pi } \int _{\partial \Omega _{3} }dS_{{\boldsymbol\xi }}^{\alpha } \left[ - \frac{1}{\rho } \frac{\partial }{\partial \xi ^{\alpha } } \right. \phi _{0} (t - \rho/c ,{\boldsymbol\xi }) +\left(\frac{\partial }{\partial \xi ^{\alpha } } \frac{1}{\rho } \right) \phi _{0} (t - \rho/c ,{\boldsymbol\xi }) && \nonumber \\ - \left. 
\frac{1}{\rho } \frac{\partial \rho }{\partial \xi ^{\alpha } } \frac{\partial }{c \partial t} \phi _{0} (t - \rho/c ,{\boldsymbol\xi }) \right],&& \label{401} \end{eqnarray} where $\rho =\left|{\boldsymbol\xi }-{\mathbf x}\right|$ and where the differentiation in $\xi ^{\alpha } $ does not affect $\rho $ in the first term. The dependence on the speed of light $c$ is made explicit here. The wave $\phi _{0} (t,{\mathbf x})$ at ${\mathbf x}\in \Omega _{3}$ is determined by its values on the closed surface $\partial \Omega _{3}$ with allowance for the delay $\rho/c$. Linear second-order hyperbolic partial differential equations possessing this property are well studied from a mathematical point of view \cite{Cour1962,Gunt1991,Duis1992}. \renewcommand{\thesubsection}{A.2} \subsection{Monochromatic electromagnetic fields with sources} A generalization of Kirchhoff's integral theorem that takes into account the vectorial character of the electromagnetic field and the presence of electromagnetic currents was obtained by von Ignatowsky \cite{Igna1907}. First, however, we consider a generalization of von Helmholtz's theorem, following Stratton and Chu \cite{Stra1939}. Most methods used in Sect.~III.C.2 for a free monochromatic scalar field apply to a monochromatic electromagnetic field with sources after some slight modifications. We replace the scalar field $\phi _{0}$ with the electromagnetic field tensor $F^{\mu \nu }=\partial ^{\nu }A^{\mu }-\partial ^{\mu }A^{\nu }$. In the Lorentz gauge $\partial _{\mu }A^{\mu }=0$, the evolution equations $\partial _{\nu }F^{\mu \nu }=j^{\mu }$ become $\square A^{\mu }=j^{\mu }$, where $j^{\mu }$ is the electromagnetic current. It is assumed that the fields are harmonic and that all quantities contain a factor $\exp (-i\omega t)$, so that $\partial ^{0}j^{\mu }=-i\omega j^{\mu }$, $(\omega ^{2}+\triangle )A^{\mu }=-j^{\mu }$, and \begin{equation} \notag (\omega ^{2}+\triangle )F^{\mu \nu }=-\partial ^{\nu }j^{\mu }+\partial ^{\mu }j^{\nu }. 
\end{equation} Since the right-hand side of this equation is different from zero, the analogue of Eq.~(\ref{G2I}) takes a more complicated form: \begin{eqnarray} F^{\mu \nu }(\omega ,{\boldsymbol{\xi }})\delta ({\boldsymbol{\xi }}-{ \mathbf{x}}) &=&\Delta _{\mathrm{ret}}(\omega ,{\mathbf{x}}-{\boldsymbol{\xi }})(-\partial ^{\nu }j^{\mu }(\omega ,{\boldsymbol{\xi }})+\partial ^{\mu }j^{\nu }(\omega ,{\boldsymbol{\xi }})) \notag \\ &&-\frac{\partial }{\partial \xi ^{\alpha }}\left( \Delta _{\mathrm{ret} }(\omega ,{\mathbf{x}}-{\boldsymbol{\xi }})\frac{\overset{\leftrightarrow }{ \partial }}{\partial \xi ^{\alpha }}F^{\mu \nu }(\omega ,{\boldsymbol{\xi }} )\right) . \label{30 bis} \end{eqnarray} The sum in $\alpha$ runs from 1 to 3, while $\mu,\nu = 0, 1, 2, 3$. By integrating over a three-dimensional region $\Omega _{3}$, one gets \begin{eqnarray} F^{\mu \nu }(\omega ,{\mathbf{x}}) \theta ({\mathbf x}\in \Omega _{3} ) &=& \int_{\Omega _{3}}d{\boldsymbol{\xi }}\Delta _{\mathrm{ret}}(\omega ,{\mathbf{x}}-{\boldsymbol{\xi }}) (-\partial^{\nu }j^{\mu }(\omega ,{\boldsymbol{\xi }})+\partial ^{\mu }j^{\nu }(\omega ,{\boldsymbol{\xi }})) \notag \\ &&-\int_{\partial \Omega _{3}}dS_{\xi }^{\beta }\Delta _{\mathrm{ret} }(\omega ,{\mathbf{x}}-{\boldsymbol{\xi }})\frac{\overleftrightarrow{ \partial }}{\partial \xi ^{\beta }}F^{\mu \nu }(\omega ,{\boldsymbol{\xi }}). \label{31 bisbis} \end{eqnarray} The dependence on the derivatives of $F^{\mu \nu }$ can be eliminated. 
\cite{Stra1939} \renewcommand{\thesubsection}{A.3} \subsection{Non-monochromatic electromagnetic fields with sources} In the presence of external currents, electromagnetic fields satisfy the identity \begin{eqnarray} F^{\mu \nu }(\xi )\delta ^{4}(\xi -x) &=& \Delta _{\mathrm{ret}}(x-\xi)(-\partial ^{\nu }j^{\mu }(\xi )+\partial ^{\mu }j^{\nu }(\xi )) \notag \\ &&-\frac{\partial }{\partial \xi_{\sigma }}\left( \Delta _{\mathrm{ret}}(x-\xi )\frac{\overset{\leftrightarrow }{\partial }}{\partial \xi ^{\sigma }}F^{\mu \nu }(\xi )\right) . \label{30 time} \end{eqnarray} The sum in $\sigma$ runs from 0 to 3. By taking the integral over a four-dimensional region $\Omega $, we obtain \begin{eqnarray} F^{\mu \nu }(x)\theta (x \in \Omega ) &=& \int_{\Omega }d^{4}\xi \Delta _{ \mathrm{ret}}(x-\xi )(-\partial ^{\nu }j^{\mu }(\xi )+\partial ^{\mu }j^{\nu }(\xi )) \notag \\ &&-\int_{\partial \Omega }dS_{\xi }^{\sigma }\left( \Delta _{\mathrm{ret} }(x-\xi )\frac{\overset{\leftrightarrow }{\partial }}{\partial \xi ^{\sigma}} F^{\mu \nu }(\xi )\right). \label{32 bis} \end{eqnarray} The representation becomes linear in $F^{\mu \nu }$ after replacing $j^{\mu }$ with $\partial _{\nu }F^{\mu \nu }$. The field derivatives are assumed to be smooth. Equation~(\ref{32 bis}) can be simplified by choosing $\Omega $ to be an infinite cylinder, $\Omega = \mathbb{R}^1 \times \Omega _{3} $, whose cross section is a three-dimensional space-like region $\Omega_{3}$. 
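The subsequent reduction of the time integration rests on the light-cone support of $\Delta_{\mathrm{ret}}$ in Eq.~(\ref{A1}): every contribution propagates as an outgoing spherical wave $f(t-\rho)/\rho$. As a quick symbolic check (a sketch assuming sympy), one can verify that any such wave solves the homogeneous wave equation away from the origin, using the spherically symmetric form of the Laplacian, $\triangle\phi=(1/r)\,\partial_{r}^{2}(r\phi)$:

```python
import sympy as sp

# Check that an outgoing spherical wave f(t - r)/r (units with c = 1)
# solves the wave equation for r > 0.  For spherically symmetric fields
# the Laplacian reduces to (1/r) d^2/dr^2 (r*phi), so the check is 1D.

t = sp.symbols('t', real=True)
r = sp.symbols('r', positive=True)
f = sp.Function('f')

phi = f(t - r)/r
wave_op = sp.diff(phi, t, 2) - sp.diff(r*phi, r, 2)/r
assert sp.simplify(wave_op) == 0
```

The same structure underlies the retarded arguments $t-\rho$ appearing in the equation that follows.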
With the use of Eq.~(\ref{A1}), the integration over the time coordinate gives \cite{Igna1907} \begin{eqnarray} F^{\mu \nu }(t,{\mathbf{x}})\theta ({\mathbf{x}} \in \Omega _{3})&=& -\frac{1}{4\pi }\int_{\Omega _{3}}d{\boldsymbol{\xi }}\frac{1}{\rho } (-\frac{\partial}{\partial \xi_{\nu}}j^{\mu }(t-\rho,{\boldsymbol{\xi }})+\frac{\partial}{\partial \xi_{\mu}}j^{\nu}(t-\rho,{\boldsymbol{\xi }})) \nonumber \\ &&+\frac{1}{4\pi }\int_{\partial \Omega _{3}}dS_{{\boldsymbol{\xi }} }^{\alpha }\left[ -\frac{1}{\rho }\frac{\partial }{\partial \xi ^{\alpha }} \right. F^{\mu \nu }(t-\rho,{\boldsymbol{\xi }}) \nonumber \\ &&+ \left( \frac{\partial }{ \partial \xi ^{\alpha }}\frac{1}{\rho }\right) F^{\mu \nu }(t-\rho,{ \boldsymbol{\xi }})- \left. \frac{1}{\rho }\frac{\partial \rho }{\partial \xi ^{\alpha }}\frac{\partial }{\partial t}F^{\mu \nu }(t-\rho,{\boldsymbol{ \xi }})\right], \label{a8} \end{eqnarray} where $\rho =\left\vert {\boldsymbol{\xi }}-{\mathbf{x}}\right\vert$. In the first two lines, the differentiation with respect to $\xi ^{\alpha }$ does not apply to $\rho $. Kirchhoff's integral theorem (\ref{401}) and Eq.~(\ref{31 bisbis}) extend von Helmholtz's theorem (\ref{37}) in different directions. Equation (\ref{a8}) constitutes, on the one hand, a generalization of Kirchhoff's integral theorem that takes into account the vectorial character of the electromagnetic field and includes the effect of electromagnetic currents and, on the other hand, a generalization of Eq.~(\ref{31 bisbis}) that goes beyond the monochromatic field assumption.
\begin{document} \title{Differential invariants of Kundt waves} \author[B. Kruglikov, D. McNutt, E. Schneider]{Boris Kruglikov$^{\dagger\ddagger}$, David McNutt$^\ddagger$, Eivind Schneider$^\dagger$} \date{} \address{\hspace{-17pt} $^\dagger$Department of Mathematics and Statistics, UiT the Arctic University of Norway, Troms\o\ 9037, Norway.\newline E-mails: {\tt boris.kruglikov@uit.no, eivind.schneider@uit.no}. \newline $^\ddagger$Department of Mathematics and Natural Sciences, University of Stavanger, 4036 Stavanger, Norway.\newline E-mail: {\tt david.d.mcnutt@uis.no}. } \keywords{Lorentzian metric, scalar curvature invariant, Cartan invariant, differential invariant, invariant derivation, Poincar\'e function} \vspace{-14.5pt} \begin{abstract} Kundt waves belong to the class of spacetimes which are not distinguished by their scalar curvature invariants. We address the equivalence problem for the metrics in this class via scalar differential invariants with respect to the equivalence pseudo-group of the problem. We compute and finitely represent the algebra of such invariants on the generic stratum and also specify the behavior for vacuum Kundt waves. The results are then compared to the invariants computed by the Cartan-Karlhede algorithm. \end{abstract} \maketitle \section*{Introduction}\label{S0} The Kundt waves can be written in local coordinates as follows \begin{equation}\label{KW} g = dx^2+dy^2-du\,\Bigl(dv-\tfrac{2v}x\,dx+\bigl(8xh-\tfrac{v^2}{4x^2}\bigr)\,du\Bigr), \end{equation} where $h=h(x,y,u)$ is an arbitrary function. In order for $g$ to be vacuum, $h$ must be harmonic in $x,y$. These metrics were originally defined by Kundt \cite{Ku} in 1961, as a special class of pure radiation spacetimes of Petrov type III or higher, admitting a non-twisting, non-expanding, shear-free null congruence $\ell$ \cite{ESEFE}: $g(\ell,\ell)=0$, $\op{Tr}_g(\nabla\ell)=0$, $\|\nabla\ell\|_g^2=0$. 
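The component form of the metric (\ref{KW}) is easy to tabulate, and two basic properties can then be checked mechanically. The sketch below (assuming sympy; the matrix layout in the coordinate order $(x,y,u,v)$ is our own bookkeeping choice) confirms that $\ell=\partial_v$ is null and that $\det g=-1/4$, as required for a Lorentzian metric of this form:

```python
import sympy as sp

# Components of (KW) in coordinates (x, y, u, v): expanding
# g = dx^2 + dy^2 - du(dv - (2v/x)dx + (8xh - v^2/(4x^2))du)
# gives the symmetric matrix below.

x, y, u, v = sp.symbols('x y u v', real=True)
h = sp.Function('h')(x, y, u)

g = sp.zeros(4, 4)
g[0, 0] = 1                                  # dx^2
g[1, 1] = 1                                  # dy^2
g[2, 3] = g[3, 2] = -sp.Rational(1, 2)       # -du dv
g[0, 2] = g[2, 0] = v/x                      # +(2v/x) du dx
g[2, 2] = v**2/(4*x**2) - 8*x*h              # -(8xh - v^2/(4x^2)) du^2

l = sp.Matrix([0, 0, 0, 1])                  # the congruence l = d/dv
assert (l.T*g*l)[0] == 0                     # l is null
assert sp.simplify(g.det()) == sp.Rational(-1, 4)   # independent of h, v, x
```

The remaining congruence conditions ($\op{Tr}_g(\nabla\ell)=0$, $\|\nabla\ell\|_g^2=0$) require the Christoffel symbols and are not checked in this sketch.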
All Weyl curvature invariants \cite{W}, i.e.\ scalars constructed from tensor products of covariant derivatives of the Riemann curvature tensor by complete contractions, vanish for these spacetimes. Thus, these plane-fronted metrics belong to the collection of VSI spacetimes, for which all polynomial scalar curvature invariants vanish \cite{PPCM}. These spaces have been extensively explored in the literature \cite{CHP,CHPP}. Since it is impossible to distinguish Kundt waves from Minkowski spacetime by Weyl curvature invariants, other methods have been applied. In \cite{MMC}, Cartan invariants were computed for vacuum Kundt waves and the maximum number of iteration steps in the Cartan-Karlhede algorithm was determined. Cartan invariants allow one to distinguish all metrics, but initially they are functions on the Cartan bundle, also known as the orthonormal frame bundle, not on the original spacetime. Cartan invariants are polynomials in the structure functions of the canonical frame (Cartan connection) and their derivatives along the frame \cite{C}. Thus they are obtained from the components of the Riemann curvature tensor and its covariant derivatives without complete contractions. Absolute invariants are chosen among those that are invariant with respect to the structure group of the Cartan bundle. This is usually achieved by a normalization of the group parameters \cite{C,O}. When the frame is fixed (the structure group becomes trivial), the Cartan invariants descend to the base of the Cartan bundle, i.e.\ the spacetime (in some cases, which we do not consider, the frame cannot be completely fixed, but then the form of the curvature tensor and its covariant derivatives is unaffected by the frame freedom). The Cartan-Karlhede algorithm \cite{Kar,ESEFE} specifies when the normalization terminates and how many derivatives of the curvature along the frame are involved in the final list of invariants. In this paper we propose another approach, which originates from the works of Sophus Lie. 
Namely we distinguish spacetimes by scalar differential invariants of their metrics. The setup is different: we first determine the equivalence group of the problem, that is, the group preserving the class of metrics under consideration. It is indeed infinite-dimensional and local, so it is more proper to talk of a Lie pseudogroup, or its Lie algebra sheaf. Then we compute invariants of this pseudogroup and its prolonged action. The invariants live on the base of the Cartan bundle, i.e.\ the spacetime, but they are allowed to be rational rather than polynomial in jet-variables (derivatives of the metric components). We recall the setup in Section \ref{S1}. Recently \cite{KL2} it was established that the whole infinite-dimensional algebra of invariants can be finitely generated in the Lie-Tresse sense. This opens up an algebraic approach to the classification, and that is what we implement here. We compute explicitly the generating differential invariants and invariant derivations, organize their count in Poincar\'e series, and resolve the equivalence problem for generic metrics within the class. We also specify how this restricts to vacuum Kundt waves. This is done in Sections \ref{S2}-\ref{S3}. More singular spaces can be treated in a manner analogous to our computations. Since vacuum Kundt waves have already been investigated via the Cartan method \cite{MMC}, we include a discussion on the correspondence of the invariants in this case. This correspondence does not preserve the order of invariants, because the approaches differ, and we include a general comparison of the two methods. This is done in Section \ref{S4}. \section{Setup of the problem: actions and invariants}\label{S1} Metrics of the form (\ref{KW}) are defined on an open subset of the manifold $M= (\R \setminus \{0\}) \times \R^3\subset\R^4$. Thus a metric $g$ can be identified with a (local) section of the bundle $\pi\colon M \times \R\to M$ with the coordinates $x,y,u,v,h$.
We denote the total space of the bundle by $E$. The Kundt waves then satisfy the condition $h_v=0$. This partial differential equation (PDE) determines a hypersurface $\E_1$ in $J^1 \pi$. Here $J^k\pi$ denotes the $k$-th order jet bundle. This space is diffeomorphic to $M \times \mathbb R^N$, where $N=\tbinom{k+4}{4}$, and we will use the standard coordinates $h,h_x,h_y,...,h_{u v^{k-1}},h_{v^{k}}$ on $\mathbb R^N$. A function $h=h(x,y,u,v)$ determines the section $j^kh$ of $J^k\pi$, along which those standard coordinates become the usual partial derivatives of $h$. The space $J^k\pi$ comes equipped with a distribution (a sub-bundle of the tangent bundle), called the Cartan distribution. A PDE of order $k$ is considered as a submanifold of $J^k \pi$, and its solutions correspond to maximal integral manifolds of the Cartan distribution restricted to the PDE. For a detailed review of jets, we refer to \cite{O,KL1}. The prolongation $\E_k\subset J^k\pi$ is the locus of differential corollaries of the defining equation of $\E_1$ up to order $k$. We also let $\E_0=J^0\pi=E$. The vanishing of the Ricci tensor is equivalent to the condition $h_{xx}+h_{yy}=0$. This yields a sub-equation $\ric_2 \subset \E_2 \subset J^2\pi$, whose prolongations we denote by $\ric_k \subset J^k \pi$. Since this case of vacuum Kundt waves was considered thoroughly in \cite{MMC}, we will focus here mostly on general Kundt waves. However, after finding the differential invariants in the general case it is not difficult to describe the differential invariants in the vacuum case. This will be done in Section \ref{S3}.
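The binomial counts appearing here (and the equation counts used in Section \ref{S1.3} below) amount to enumerating multi-indices $\sigma$, so they can be confirmed by brute force. The following Python sketch (our own illustration, not part of the formal development) checks them:

```python
from itertools import combinations_with_replacement as cwr
from math import comb

def jet_coords(k):
    """All multi-indices sigma in the variables (x, y, u, v) with |sigma| <= k."""
    return [s for r in range(k + 1) for s in cwr('xyuv', r)]

for k in range(8):
    # fiber dimension of J^k pi over M: N = C(k+4, 4)
    assert len(jet_coords(k)) == comb(k + 4, 4)
    # corollaries h_{sigma v} = 0 of h_v = 0: one per sigma containing v
    with_v = [s for s in jet_coords(k) if 'v' in s]
    assert len(with_v) == comb(k + 3, 4)
    # remaining fiber coordinates on E_k: C(k+3, 3)
    assert len(jet_coords(k)) - len(with_v) == comb(k + 3, 3)
```

The last identity is just Pascal's rule $\binom{k+4}{4}=\binom{k+3}{4}+\binom{k+3}{3}$, realized by splitting the coordinates according to whether $\sigma$ contains the index $v$.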
\subsection{Lie pseudogroup}\label{S1.1} The Lie pseudogroup of transformations preserving the shape (i.e.\ form of the metric) can be found by pulling back $g$ from (\ref{KW}) through a general transformation $(\tilde x,\tilde y,\tilde u,\tilde v) \mapsto (x,y,u,v)$, and then requiring that the obtained metric is of the same shape: \[ d \tilde x^2+d \tilde y^2-d \tilde u\,\Bigl(d \tilde v-\tfrac{2\tilde v}{\tilde x}\,d\tilde x+ \bigl(8\tilde x \tilde h-\tfrac{\tilde v^2}{4\tilde x^2}\bigr)\,d\tilde u \Bigr). \] This requirement can be given in terms of differential equations on $x,y,u,v$ as functions of $\tilde x,\tilde y,\tilde u,\tilde v$, with the (invertible) solutions described below. The obtained differential equations are independent of whether the Kundt wave is Ricci-flat or not, so the shape-preserving Lie pseudogroup is the same for both general and Ricci-flat Kundt waves. A pseudogroup preserving shape (\ref{KW}) contains transformations of the form (we also indicate their lift to $J^0\pi=E$) \begin{align} x \mapsto x, \quad y &\mapsto y+C, \quad u \mapsto F(u), \quad v \mapsto \frac{v}{F'(u)}-2 \frac{F''(u)}{F'(u)^2} x^2, \label{e2}\\ h &\mapsto \frac{h}{F'(u)^2}+\frac{2 F'''(u) F'(u)-3 F''(u)^2}{8 F'(u)^4}x, \label{e3} \end{align} where $F$ is a local diffeomorphism of the real line, i.e.\ $F'(u)\neq0$. This Lie pseudogroup was already described in \cite{PPCM}, formula (A.37). Transformations \eqref{e2}-\eqref{e3} form the Zariski connected component $\Sym_0$ of the entire Lie pseudogroup $\Sym$ of shape-preserving transformations. (Note that $\Sym_0$ differs from the topologically connected component of unity given by $F'(u)>0$.) The pseudogroup $\Sym$ is generated, in addition to transformations \eqref{e2}-\eqref{e3}, by the maps $y \mapsto - y$ and $(x,h) \mapsto (-x, -h)$ preserving shape (\ref{KW}). Note that $\Sym/\Sym_0=\mathbb Z_2 \times \mathbb Z_2$. 
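The shape-preservation property of \eqref{e2}-\eqref{e3} can also be confirmed by a direct pullback computation: substituting the transformation into the metric and using the transformed $h$ reproduces \eqref{KW} in the old coordinates. A sympy sketch of this check (the code and its names are ours, purely illustrative):

```python
import sympy as sp

x, y, u, v, C = sp.symbols('x y u v C')
F = sp.Function('F')
h = sp.Function('h')

def G(X, Y, U, V, H):
    """Matrix of the Kundt-wave metric (KW) in coordinates (X, Y, U, V)."""
    g = sp.zeros(4, 4)
    g[0, 0] = g[1, 1] = 1                       # dx^2 + dy^2
    g[0, 2] = g[2, 0] = V/X                     # from (2V/X) dX dU
    g[2, 3] = g[3, 2] = -sp.Rational(1, 2)      # from -dU dV
    g[2, 2] = -(8*X*H - V**2/(4*X**2))
    return g

Fp, Fpp, Fppp = (F(u).diff(u, j) for j in (1, 2, 3))
new = [x, y + C, F(u), v/Fp - 2*Fpp/Fp**2*x**2]             # transformation (e2)
Ht = h(x, y, u)/Fp**2 + (2*Fppp*Fp - 3*Fpp**2)/(8*Fp**4)*x  # formula (e3)

J = sp.Matrix([[sp.diff(ni, q) for q in (x, y, u, v)] for ni in new])
diff = (J.T*G(*new, Ht)*J - G(x, y, u, v, h(x, y, u))).applyfunc(sp.simplify)
assert diff == sp.zeros(4, 4)   # the pullback has the shape (KW) again
```

All ten independent components cancel identically, for an arbitrary local diffeomorphism $F$ and arbitrary $h=h(x,y,u)$.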
The Lie algebra sheaf $\sym$ of vector fields corresponding to $\Sym$ (and $\Sym_0$) is spanned by the vector fields \begin{equation} X = \partial_y, \quad Y(f) = 4 f \partial_u-(4 v f'+8x^2 f'') \partial_v+(xf'''-8hf') \partial_h \end{equation} where $f=f(u) \in C_{\text{loc}}^\infty(\mathbb R)$ is an arbitrary function. When looking for differential invariants, it is important to distinguish between $\Sym$ and $\Sym_0$. Firstly, differential $\Sym_0$-invariants need not be $\Sym$-invariant. Secondly, a set of differential invariants that separates $\Sym$-orbits will, as a rule, not separate $\Sym_0$-orbits. We will restrict our attention to the $\Sym$-action while outlining the changes needed for the other choices of the Lie pseudogroup. \subsection{Differential invariants and the global Lie-Tresse theorem}\label{S1.2} A differential invariant of order $k$ is a function on $\E_k$ which is constant on orbits of $\Sym$. In accordance with \cite{KL2} we consider only invariants that are rational in the fibers of $\pi_k:\E_k\to E$ for every $k$. The global Lie-Tresse theorem states that for algebraic transitive Lie pseudogroups, rational differential invariants separate orbits in general position in $\E_\infty$ (i.e.\ orbits in the complement of a Zariski-closed subset), and the field of rational differential invariants is generated by a finite number of differential invariants and invariant derivations. In fact it suffices to consider the (sub)algebra of invariants that are rational on fibers of $\pi_{\ell}:\E_\ell\to E$ and polynomial on fibers of $\pi_{k,\ell}:\E_k\to\E_\ell$ for some $\ell$. In the case of Kundt waves we will show that $\ell=2$. For simplicity we will mostly discuss the field of rational invariants in what follows. We refer to \cite{KL2} for the details of the theory, which holds for transitive Lie pseudogroups. The Lie pseudogroup we consider is not transitive: the $\Sym$-orbit foliation of $E$ is $\{x=\op{const}\}$.
Let us justify the validity of a version of the Lie-Tresse theorem for our Lie pseudogroup action. For every $a\in E$ the action of the stabilizer of $a$ in $\Sym_0$ is algebraic on the fiber $\pi_{\infty,0}^{-1}(a)$, and so for every $k$ and $a$ we have an algebraic action of a Lie group on the algebraic manifold $\pi_{k,0}^{-1}(a)$. By Rosenlicht's theorem rational invariants separate orbits in general position. It is important that the dependence of the action on $a$ is algebraic. From the description of the $\Sym_0$ action on $E$ it is clear that orbits in general position intersect the fiber over $a(x)=(x,0,0,0,1)$ for a unique $x\in\R\setminus\{0\}$. A $\Sym$-orbit in $\E_\infty$ intersecting the fiber over $a(x)$ intersects the fiber over $a(-x)$ as well. Thus we can separate orbits with scalar differential invariants, in addition to the invariant $x$ or $x^2$, for $\Sym_0$ or $\Sym$ respectively. It is not difficult to see, following \cite{KL2}, that in our case the field of differential invariants is still finitely generated. We skip the details because this will be apparent from our explicit description of the generators of this field in what follows. \subsection{The Hilbert and Poincar\'e functions}\label{S1.3} The transcendence degree of the field of rational differential invariants of order $k$ (that is the minimal number of generators of this field, possibly up to algebraic extensions) is equal to the codimension of the $\sym$-orbits in general position in $\E_k$. The results in this section are valid for both $\Sym_0$ and $\Sym$ and all intermediate Lie pseudogroups (there are three of them since the quotient $\Sym/\Sym_0$ is the Klein four-group). For $k\geq 0$, the dimension of $J^k \pi$ is given by \[\dim J^k \pi = 4+ \binom{k+4}{4}.
\] The number of independent equations defining $\E_k$ is $\binom{k+3}{4}$ which yields \[\dim \E_k = \dim J^k \pi- \binom{k+3}{4} = 4+ \binom{k+3}{3}, \quad k \geq 0.\] For small $k$, the dimension of a $\sym$-orbit in $J^k \pi$ in general position may be found by computing the dimension of the span of $\sym|_{\theta_k} \subset T_{\theta_k} J^k\pi$ for a general point $\theta_k \in J^k\pi$. It turns out that the equation $\E_k$ intersects with regular orbits, so we get the same results by choosing $\theta_k\in\E_k$. \begin{theorem}\label{counting} The dimension of a $\sym$-orbit in general position in $\E_k$ is $4$ for $k=0$ and it is equal to $k+5$ for $k>0$. \end{theorem} \begin{proof} We need to compute the dimension of the span of $X^{(k)}$ and $Y(f)^{(k)}$ at a point in general position in $\E_k$. The $k$-th prolongation of the vector field $Y(f)$ is given by \begin{equation} Y(f)^{(k)}= 4 f \D_u^{(k+1)}-(4 v f'+8x^2 f'') \D_v^{(k+1)} + \sum_{|\sigma| \leq k} \D_\sigma (\phi) \partial_{h_\sigma} \label{prolongation} \end{equation} where $\sigma=(i_1,\dots,i_t)$ is a multi-index of length $|\sigma|=t$ ($i_j$ corresponds to one of the base coordinates $x,y,u,v$), $\D_\sigma=\D_{i_1}\cdots\D_{i_t}$ is the iterated total derivative, $\D_i^{k+1}$ is the truncated total derivative as a derivation on $J^k\pi$, and \begin{align*} \phi =\,& Y(f)\lrcorner\,(dh-h_xdx-h_ydy-h_udu-h_vdv)\\ =\,& x f'''-8hf'-4f\,h_u+(4vf'+8x^2 f'')\,h_v \end{align*} is the generating function for $Y(f)$; we refer to Section 1.5 in \cite{KL1}. We see that the $k$-th prolongation depends on $f,f',...,f^{(k+3)}$. We can without loss of generality assume that the $u$-coordinate of our point in general position is $0$, since $\partial_u$ is contained in $\sym$. At $u=0$ the vector field $Y(f)^{(k)}$ depends only on the $(k+3)$-degree Taylor polynomial of $f$ at $u=0$, which implies that there are at most $k+4$ independent vector fields among these. 
Adding the vector field $X^{(k)}$ to them gives $k+5$ as an upper bound of the dimension of an orbit. Let $\theta_k \in \E_k$ be the point defined by $x=1, h=1$, with all other jet-variables set to $0$ and let $Z_m=Y(u^m)$. It is clear from (\ref{prolongation}) that the $k$-th prolongations of $X, Z_0,...,Z_{k+3}$ span a $(k+5)$-dimensional subspace of $T_{\theta_k} \E_k$, implying that $k+5$ is also a lower bound for the dimension of an orbit in general position and verifying the claim of the theorem. \end{proof} Let $s_k^\E$ denote the codimension of an orbit in general position inside of $\E_k$, i.e.\ the number of independent differential invariants of order $k$. It is given by \[ s^\E_0=1\ \text{ and }\ s_k^\E= \frac{k}{6}(k+5)(k+1) \text{ for } k \geq 1. \] The Hilbert function $H_k^\E=s_k^\E-s_{k-1}^\E$ is given by $$ H_0^\E=H_1^\E=1\ \text{ and }\ H_k^\E = \frac{k(k+3)}{2} \text{ for } k\geq 2. $$ This counts the number of independent differential invariants of ``pure'' order $k$. For small $k$ the results are summed up in the following table. \[ \begin{array}{|c|rrrrrrr|} \hline k & 0 & 1 & 2 & 3 & 4 & 5 & 6\\ \hline \dim J^k \pi & 5 & 9 & 19 & 39 & 74 & 130 & 214 \\ \dim \E_k & 5 & 8 & 14 & 24 & 39 & 60 & 88 \\ \dim \mathcal{O}_k & 4 & 6 & 7 & 8 & 9 & 10 & 11 \\ s_k^{\E} & 1 & 2 & 7 & 16 & 30 & 50 & 77 \\ H_k^{\E} & 1 & 1 & 5 & 9 & 14 & 20 & 27 \\\hline \end{array} \] The corresponding Poincar\'e function $P_\E(z)=\sum_{k=0}^\infty H_k^\E z^k$ is given by \[ P_\E(z) = \frac{1-2z+5z^2-4z^3+z^4}{(1-z)^3}. \] \section{Differential invariants of Kundt waves}\label{S2} We give a complete description of the field of rational differential invariants. We will focus on the action of the entire Lie pseudogroup $\Sym$ (with four Zariski connected components), while also describing what to do if one wants to consider only one (or two) connected components. 
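The dimension and invariant counts of Section \ref{S1.3} can be cross-checked mechanically. The following sympy sketch (our own illustration; the helper names are ours) verifies the table and the Poincar\'e function:

```python
import sympy as sp

z = sp.symbols('z')

def H(k):
    """Hilbert function: H_0 = H_1 = 1, H_k = k(k+3)/2 for k >= 2."""
    return 1 if k <= 1 else k*(k + 3)//2

def s(k):
    """Codimension of a generic orbit in E_k."""
    dim_E = 4 + sp.binomial(k + 3, 3)
    dim_orbit = 4 if k == 0 else k + 5   # orbit dimension from Theorem 1
    return dim_E - dim_orbit

assert [s(k) for k in range(7)] == [1, 2, 7, 16, 30, 50, 77]
assert all(s(k) - s(k - 1) == H(k) for k in range(1, 20))

# the Poincare function generates the Hilbert function
P = (1 - 2*z + 5*z**2 - 4*z**3 + z**4)/(1 - z)**3
coeffs = sp.Poly(sp.series(P, z, 0, 12).removeO(), z).all_coeffs()[::-1]
assert coeffs == [H(k) for k in range(12)]
```

The same pattern applies verbatim to the vacuum counts of Section \ref{S3.1}.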
\subsection{Generators}\label{S2.1} The second order differential invariants of the $\Sym$-action are generated by the following seven functions \begin{align*} I_0 &= x^2, & I_1 &= \frac{(x h_x-h)^2}{h_y^2}, & I_{2a} &= \frac{h_{xx}}{x h_x-h}, \\ I_{2b} &= \frac{x h_{xy}}{h_y}, & I_{2c} &= \frac{h_{yy}}{x h_x-h}, & I_{2d} &= \frac{(x^2 h_{yu}-v h_y)^2}{x (x h_x-h)^3 }, \\ & \hphantom{a}\hskip20pt I_{2e} = \frac{(x^3 h_{xu}-v x h_x-x^2 h_u+v h)(x h_x-h)}{(x^2 h_{yu}-v h_y) h_y} \hspace{-7cm} \end{align*} and these invariants separate orbits of general position in $\E_2$. They are independent as functions on $\E_2$, and one verifies that the number of invariants agrees with the Hilbert function $H_k^\E$ for $k=0,1,2$. Note that $\sqrt{I_0}=x$ and $\sqrt{I_1} = \tfrac{x h_x-h}{h_y}$ are not invariant under the discrete transformations $(x,h)\mapsto (-x,-h)$ and $y \mapsto -y$. They are however invariant under the Zariski connected pseudogroup $\Sym_0$ and should be used for generating the field of differential $\Sym_0$-invariants, since the invariants above do not separate $\Sym_0$-orbits on $\E_2$. \begin{remark} If $\mathcal A_2$ denotes the field of second order differential $\Sym$-invariants and $\mathcal B_2$ the field of second order differential $\Sym_0$-invariants, then $\mathcal B_2$ is an algebraic field extension of $\mathcal A_2$ of degree $4$ and its Galois group is $\Sym/\Sym_0= \mathbb Z_2 \times \mathbb Z_2$. Intermediate pseudogroups lying between $\Sym_0$ and $\Sym$ are in one-to-one correspondence with subgroups of $\mathbb Z_2 \times \mathbb Z_2$ that, by Galois theory, are in one-to-one correspondence with algebraic field extensions of $\mathcal A_2$ that are contained in $\mathcal B_2$. Including $\mathcal B_2$ there are four such nontrivial algebraic extensions of $\mathcal A_2$, and they are the splitting fields of the polynomials $t^2-I_0$, $t^2-I_1$, $t^2-I_0 I_1$ and $(t^2-I_0)(t^2-I_1)$ over $\mathcal A_2$, respectively. 
Higher-order invariants are generated by second-order invariants and invariant derivations, so the field of all differential invariants depends solely on the chosen field extension of $\mathcal A_2$. \end{remark} In order to generate higher-order differential invariants we use invariant derivations, i.e.\ derivations on $\E_\infty$ commuting with the $\Sym$-action. It is not difficult to check that the following derivations are invariant. \begin{gather*} \nabla_1 =x D_x+2v D_v,\quad \nabla_2= \frac{x h_x-h}{h_y}\,D_y,\quad \nabla_4=\frac{x^2 h_{yu}-v h_y}{h_y}\,D_v,\\ \nabla_3=\frac{h_y}{x^2 h_{yu}-v h_y} \left(D_u-\Bigl(8x^2 h_x-\frac{v^2}{4x^2}\Bigr) D_v \right). \end{gather*} \begin{theorem}\label{generators} The field of rational scalar differential invariants of $\Sym$ is generated by the second-order invariants $I_0,I_1,I_{2a},I_{2b},I_{2c},I_{2d},I_{2e}$ together with the invariant derivations $\nabla_1,\nabla_2,\nabla_3,\nabla_4$. The algebra of rational differential invariants that are polynomial starting from the jet-level $\ell=2$, taken over $\mathcal A_2$, $\mathcal B_2$ or an intermediate field (depending on the choice of Lie pseudogroup), is generated by the above seven second-order invariants (with possible passage from $I_0$ to $\sqrt{I_0}$ and from $I_1$ to $\sqrt{I_1}$) and the above four invariant derivations. \end{theorem} \begin{proof} We shall prove that the field generated by the indicated differential invariants and invariant derivations contains, for every $k>2$, $H_k^\E=\frac{k(k+3)}{2}$ functionally independent invariants, and moreover that these invariants are quasilinear with independent symbols. This together with the fact that the indicated invariants generate all differential invariants of order $\leq2$ implies the statement of the theorem.
We demonstrate by induction on $k$ a more general claim that there are $H_k^\E$ quasilinear differential invariants of order $k$ with the symbols at generic $\theta_{k-1}\in J^{k-1}\pi$ proportional to $h_{x^i y^{j} u^l}$, where $i+j+l=k$ and $0 \leq l <k$. The number of such $k$-jets is indeed equal to the value of the Hilbert function $H_k^\E$. The base $k=3$ follows by direct computation of the symbols of $\nabla_1I_{2a}, \nabla_1I_{2b}, \nabla_1I_{2c}, \nabla_1I_{2d}, \nabla_1I_{2e}, \nabla_2I_{2c}, \nabla_2I_{2d}, \nabla_3I_{2d}, \nabla_3I_{2e}$. Assuming the $k$-th claim, application of $\nabla_1$ gives $k(k+3)/2$ differential invariants of order $k+1$, and $\nabla_2$ adds $k$ additional differential invariants, covering the symbols $h_{x^i y^{j} u^l}$ with $i+j+l=k+1$ and $0 \leq l <k$. Further application of $\nabla_3$ gives $2$ more differential invariants with symbols $h_{xu^k}$, $h_{yu^k}$. Thus the invariants are independent and the calculation \[ \frac{k(k+3)}{2} + k+2= \frac{(k+1)(k+4)}{2} \] completes the induction step. For the algebra of invariants it is enough to note that our generating set produces invariants that are quasi-linear in jets of order $\ell=2$ or higher, and so any differential invariant can be modified by elimination to an element in the base field $\mathcal A_2$, $\mathcal B_2$ or an intermediate field. \end{proof} \begin{remark} As follows from the proof it suffices to have only the derivations $\nabla_1,\nabla_2,\nabla_3$. Indeed, $\nabla_4$ is obtained from them via commutators. \end{remark} It is possible to give a more concise description of the field/algebra of differential invariants than that of Theorem \ref{generators}. Let $\alpha_i$ denote the horizontal coframe dual to the derivations $\nabla_i$, i.e.
\begin{gather*} \alpha_1 =\frac1x\,dx,\quad \alpha_2= \frac{h_y}{x h_x-h}\,dy,\quad \alpha_3=\frac{x^2 h_{yu}-v h_y}{h_y}\,du,\\ \alpha_4=\frac{h_y}{x^2 h_{yu}-v h_y} \left(dv-\frac{2v}x\,dx+\Bigl(8x^2 h_x-\frac{v^2}{4x^2}\Bigr) du \right). \end{gather*} Then we have: $$ \alpha_1 \wedge \alpha_2 \wedge \alpha_3 \wedge \alpha_4= (I_0 I_1)^{-1/2} dx \wedge dy\wedge du\wedge dv. $$ Metric (\ref{KW}) written in terms of this coframe has coefficients $g_{ij}=g(\nabla_i,\nabla_j)$ and therefore has the form \[ g=I_0 \alpha_1^2+I_1 \alpha_2^2+8 (I_1 I_{2d})^{-1} \alpha_3^2 - \alpha_3 \alpha_4. \] This suggests that $\nabla_i$ and $I_0,I_1,I_{2d}$ generate the field of differential invariants. This is indeed true, and can be demonstrated as follows. The differential invariants appearing as nonzero coefficients in the commutation relations $[\nabla_i, \nabla_j]=K_{ij}^k \nabla_k$ are given by \begin{gather*} K_{12}^2 = (I_0 I_{2a}-I_{2b}),\ K_{13}^3 = - (I_0 \nabla_3(I_{2b}) +2),\ K_{13}^4 =-\frac{8I_0 I_{2a}}{I_1 I_{2d}},\\ K_{23}^2 = - \frac{\nabla_3(I_1)}{2I_1},\, K_{23}^3 = I_{2c}(I_1-I_{2e})-I_0 I_1 \nabla_3(I_{2c}) = -K_{24}^4,\, K_{34}^3 = -1,\\ K_{14}^4 = I_0 \nabla_3(I_{2b}),\ K_{23}^4 = -\frac{8I_{2b}}{I_1I_{2d}},\ K_{34}^4 = \frac{I_{2e}}{2I_0I_1} -\frac{I_1 I_{2d}}{2} \nabla_3\Bigl(\frac1{I_1 I_{2d}}\Bigr).
\end{gather*} In particular we can get the differential invariants $I_{2a},I_{2b},I_{2c},I_{2e}$ from $K_{13}^4, \nabla_1(I_1), \nabla_2(I_1), \nabla_3(I_1)$, thereby verifying that $I_0,I_1,I_{2d}$ are in fact sufficient as a generating set of differential invariants. \begin{remark} For the $\Sym_0$-action, the invariant derivations $D_x+\frac{2v}{x} D_v$ and $D_y$ should be used instead of $\nabla_1,\nabla_2$ (they are not invariant under the reflections). In this case only one coefficient of $g$ is nonconstant, suggesting that one differential invariant and four invariant derivations are sufficient for generating the field of differential invariants. \end{remark} \subsection{Syzygies}\label{S2.2} Differential relations among the generators of the algebra of differential invariants are called differential syzygies. They enter the quotient equation, describing the equivalence classes $\E_\infty/\Sym$. To simplify notation let us rename the generators $a=I_0,b=I_1,c=I_1 I_{2d}$ and use the iterated derivatives $f_{i_1...i_r} = (\nabla_{i_r} \circ \cdots \circ \nabla_{i_1})(f)$ for $f=a,b,c$. We can generate all differential invariants of order $k$ by using only these and $\nabla_1^{k-2} (K_{13}^4)$. The syzygies coming from the commutation relations of $\nabla_i$ have been described in the previous section. Thus it is sufficient to consider only iterated derivatives that satisfy $i_1 \leq \cdots \leq i_r$.
These are generated by some simple syzygies \begin{align*} a_1=2 a, \quad a_2=0, \quad a_3=0, \quad a_4=0, \quad b_4=0, \quad c_4=-2 c \end{align*} and by two more complicated syzygies that involve differentiation of $b,c$ with respect to $\nabla_1$, $\nabla_2$, $\nabla_3$ up to order three: \begin{align*} 0 = &2a^2c^2(2b^2b_{3}b_{233}-2b^2b_{23}b_{33}-3bb_{3}^2b_{23}+3b_{2}b_{3}^3) -ab(4b^2b_{3}cc_{13}\\ -&4b^2b_{3}cc_{23}-4b^2b_{3}c_{1}c_{3}+4b^2b_{3}c_{2}c_{3}+8b^2b_{33}c^2 -4b^2b_{33}cc_{1} \\+&4b^2b_{33}cc_{2}-2bb_{1}b_{33}c^2-4bb_{3}^2c^2+2bb_{3}^2cc_{1}-4bb_{3}^2cc_{2} +2bb_{3}b_{13}c^2 \\ +&2bb_{3}b_{23}c^2-b_{1}b_{3}^2c^2-3b_{2}b_{3}^2c^2) -b^2b_{3}c(4bc-2bc_{1}+2bc_{2}-b_{1}c), \end{align*} \begin{align*} 0 = &8ab^2c^2(b_{3}b_{123}-b_{3}b_{223}-b_{13}b_{23}+b_{23}^2) \\ +&4abc^2(b_{2}b_{3}b_{13}-b_{2}b_{3}b_{23}-2b_{3}^2b_{12}+4b_{3}^2b_{22})\\ +&ac^2(4b_{1}b_{2}b_{3}^2-12b_{2}^2b_{3}^2) +16b^3c^2(b_{23}-b_{13}-b_{3}) \\ +&8b^3c((2 c_{1}-2 c_{2}-c_{11}+2 c_{12}-c_{22})b_{3}+(b_{13}-b_{23})(c_{1}-c_{2})) \\ +&b^3(4b_{3}c_{1}^2-8b_{3}c_{1}c_{2}+4b_{3}c_{2}^2) +bc^2(b_{1}^2b_{3}+2b_{1}b_{2}b_{3})\\ +&b^2c^2(16b_{1}b_{3}+4b_{1}b_{13}-4b_{1}b_{23}-24b_{2}b_{3}-4b_{3}b_{11}+4b_{3}b_{12})\\ +&b^2c(-8b_{1}b_{3}c_{1}+12b_{1}b_{3}c_{2}+12b_{2}b_{3}c_{1}-12b_{2}b_{3}c_{2}). \end{align*} \subsection{Comparing Kundt waves}\label{S2.3} In order to compare two Kundt waves of the form (\ref{KW}) choose four independent differential invariants $J_1,...,J_4$ of order $k$ such that $\hat d J_1 \wedge \hat d J_2 \wedge \hat d J_3 \wedge \hat d J_4 \neq 0$, where $\hat d$ is the horizontal differential defined by $(\hat d f) \circ j^k h = d (f \circ j^k h)$ for a function $f$ on $\E_k$. Then rewrite the metric in terms of the obtained invariant coframe, similar to what we did in Section \ref{S2.1}: \[ g=G_{ij} \hat d J_i \hat d J_j \] where $G_{ij}$ are differential invariants of order $k+1$. 
For a given Kundt wave metric $g$ the ten invariants $G_{ij}$, expressed as functions of $J_i$, determine its equivalence class. In practice one can proceed as follows. Let $\hat \partial_i$ be the horizontal frame dual to the coframe $\hat{d}J_j$. These are commuting invariant derivations, called Tresse derivatives. In terms of them $G_{ij}=g(\hat \partial_i,\hat \partial_j)$. Together the 14 functions $(J_a,G_{ij})$ determine a map $\sigma_g:M^4\to\R^{14}$ (for a Zariski dense set of $g$) whose image, called the signature manifold, is the complete invariant of a generic Kundt wave $g$. In particular, we can take the four \textit{second-order} differential invariants $I_0,I_1,I_{2d},I_{2e}$ that are independent for generic Kundt waves. Then $G_{ij}$ are differential invariants of third order, implying that third order differential invariants are sufficient for classifying generic Kundt waves. \begin{remark} The four-dimensional submanifold $\sigma_g(M^4)\subset\R^{14}$ is not arbitrary. Indeed, the differential syzygies of the generators $(J_a,G_{ij})$ can be interpreted as a system of PDE (the quotient equation) with independent $J_a$ and dependent $G_{ij}$. The signature manifolds, encoding the equivalence classes of Kundt waves, are solutions to this system. \end{remark} \subsection{Example}\label{S2.4} Consider the class of Kundt waves parametrized by two functions of one variable: \begin{equation}\label{Skea+} h=E(u)-\tfrac14\,\mathcal{S}\bigl(F(u)\bigr)x+F''(u)^2(x^3\pm y), \end{equation} where $\mathcal{S}(F)=\frac{F'''}{F'}-\frac32\bigl(\frac{F''}{F'}\bigr)^2$ is the Schwarzian derivative. This class is $\Sym$-invariant, and using the action \eqref{e2}-\eqref{e3} the pseudogroup can be almost fully normalized, bringing this class to \begin{equation}\label{Skea} h(x,y,u)= A(u)+x^3+y.
\end{equation} The metric $g$ corresponding to this $h$ was found by Skea in \cite{S} as an example of a class of spacetimes whose invariant classification requires the fifth covariant derivative of the Riemann tensor (so up to order seven in the metric coefficients $g_{ij}$, equivalently given by $j^7h$). However with our approach they can be classified via third order differential invariants, and we will demonstrate how to do it for this simple example. The transformations from $\Sym_0$ preserving \eqref{Skea} form the two-dimensional non-connected group $\Sym_0'$: $(x,y,u,A)\mapsto (x,y+c,\pm u+b,A-c)$, and those of $\Sym$ form the group $\Sym'$ extending $\Sym_0'$ by the map $(x,y,u,A)\mapsto(-x,-y,u,-A)$. Distinguishing the Kundt waves given by \eqref{Skea+} with respect to the pseudogroup $\Sym$ (or $\Sym_0$) is equivalent to distinguishing the Kundt waves given by \eqref{Skea} with respect to the group $\Sym'$ (or $\Sym_0'$). The differential invariants from Section \ref{S2.1} can be used for this purpose. However the normalization of \eqref{Skea+} to \eqref{Skea} allows for a reduction from 4-dimensional signature manifolds to signature curves as follows. The metrics with $A_{uu} \equiv 0$ are easy to classify, so assume $A_{uu} \neq 0$. The invariants $\sqrt{I_0}=x$, $\sqrt{I_1}=\frac{x h_x-h}{h_y}$, $I_{2d}$, $I_{2e}$ are basic for the action of $\Sym_0$, and their combination gives simpler invariants $J_1=x$, $J_2=A+y$, $J_3=v^2$, $J_4= A_u/v$ with $\frac{\hat dJ_1 \wedge \hat d J_2 \wedge \hat d J_3 \wedge \hat d J_4} {dx \wedge dy \wedge du \wedge dv}= -2A_{uu}$.
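Restricted to a solution of the form \eqref{Skea}, the horizontal differentials reduce to ordinary ones, so this wedge coefficient is an ordinary Jacobian determinant. The following sympy sketch (ours, purely illustrative) verifies it, together with the analogous coefficient $-4x^3A_{uu}$ for the $\Sym$-adapted base $J_1=x^2$, $J_2=(A+y)x$, $J_3=v^2$, $J_4=xA_u/v$ used below:

```python
import sympy as sp

x, y, u, v = sp.symbols('x y u v')
A = sp.Function('A')(u)

def jac_det(J):
    """Jacobian determinant of (J1,...,J4) w.r.t. (x, y, u, v) on a solution."""
    M = sp.Matrix([[sp.diff(Ji, q) for q in (x, y, u, v)] for Ji in J])
    return sp.simplify(M.det())

# Sym_0-adapted base restricted to h = A(u) + x^3 + y
assert sp.simplify(jac_det([x, A + y, v**2, A.diff(u)/v])
                   + 2*A.diff(u, 2)) == 0

# Sym-adapted base
assert sp.simplify(jac_det([x**2, (A + y)*x, v**2, x*A.diff(u)/v])
                   + 4*x**3*A.diff(u, 2)) == 0
```

In particular both bases are functionally independent precisely where $A_{uu}\neq0$, as assumed.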
The nonzero coefficients $G_{ij}$ are given by \begin{gather*} G_{11}=1=G_{22}, \quad G_{13}=\frac{J_4}{2 J_1 A_{uu}},\quad G_{14} = \frac{J_3}{J_1 A_{uu}}, \quad G_{23} = -\frac{J_4^2}{2A_{uu}}, \\ G_{33} = -\frac{J_4 (32 J_1^6 J_4-4 J_1^2 J_3 J_4^3+32 J_1^3 J_2 J_4+4 J_1^2 A_{uu}-J_3 J_4)}{16 J_3 A_{uu}^2 J_1^2},\\ G_{34} = \frac{-32 J_1^6 J_4-32 J_4 J_2 J_1^3+(4 J_3 J_4^3-2 A_{uu}) J_1^2+J_4 J_3}{8A_{uu}^2 J_1^2},\\ G_{24} = -\frac{J_3 J_4}{A_{uu}}, \quad G_{44} = \frac{-32 J_1^6 J_3+4 J_1^2 J_3^2 J_4^2-32 J_1^3 J_2 J_3+J_3^2}{4 A_{uu}^2 J_1^2}. \end{gather*} There are five functionally independent invariants, and they are expressed in terms of $J_1$, $J_2$, $J_3$, $J_4$, $A_{uu}$. Restricted to the specific Kundt wave \eqref{Skea}, only four of them are independent, yielding one relation. This can be interpreted as a relation between the invariants $A_u^2$ and $A_{uu}$, giving a curve in the plane due to constraints $A_x=A_y=A_v=0$, and completely determining the equivalence class. In addition, $A+y$ is a $\Sym_0$-invariant of order 0. Consequently, two Skea metrics given by \eqref{Skea} are $\Sym_0$-equivalent if their signatures $\{(A_u(u)^2, A_{uu}(u))\} \subset \mathbb R^2$ coincide as unparametrized curves. Indeed, let $A_{uu}=f(A_u^2)$ be a signature curve (there are no restrictions on this curve, but for simplicity we consider one that projects injectively onto the first component). Viewed as an ODE on $A=A(u)$ it has a solution uniquely given by the initial data $(A(0),A_u(0))$. This can be arbitrarily changed using the freedom $(u,y)\mapsto(u+b,y+c)$ of $\Sym_0'$, whence the data encoding $g$ is restored uniquely. For the $\Sym$-action, we combine the invariants $I_0, I_1 I_{2a}, I_{2d}, I_{2e}$ to construct a simpler base $J_1=x^2, J_2=(A+y) x, J_3=v^2, J_4=x A_u/v$ of invariants.
In this case we again get $\frac{\hat dJ_1 \wedge \hat d J_2 \wedge \hat d J_3 \wedge \hat d J_4} {dx \wedge dy \wedge du \wedge dv}= -4x^3 A_{uu}\neq0$, and basic order 0, 1 and 2 differential invariants for the dimension reduction are $(A+y)^2$, $A_u^2$, $A_{uu}/(A+y)$. Proceeding as before we obtain a signature curve $\{(A_u(u)^2, A_{uu}(u)^2)\} \subset \mathbb R^2$ that, as an unparametrized curve, is a complete $\Sym$-invariant of the Kundt waves of Skea type \eqref{Skea}. \section{Specification to the vacuum case}\label{S3} It was argued in Section \ref{S1.1} that the Lie pseudogroup preserving vacuum Kundt waves of the form (\ref{KW}) is the same as the one preserving general Kundt waves of the same form. The PDE $\ric_k = \{h_{xx}+h_{yy}=0\}^{(k-2)}\cap\E_k$ defining vacuum Kundt waves contains some orbits in $\E_k$ of maximal dimension. This follows from the proof of Theorem \ref{counting}, since the point $\theta_k \in \E_k$ chosen there belongs also to $\ric_k$. This implies that orbits in general position in $\ric_k$ are also orbits in general position in $\E_k$. Generic vacuum Kundt waves are separated by the invariants found in Section \ref{S2}, and all previous results are easily adapted to the vacuum case. \subsection{Hilbert and Poincar\'e function}\label{S3.1} For vacuum Kundt waves we have additional $\binom{k+1}{3}$ independent differential equations of order $k$ defining $\ric_k\subset\E_k$, so the dimension of $\ric_k$ is $4+(k+1)^2$ for $k\geq 0$. The codimension of orbits in general position in $\ric_k$ is thus given by $$ s^\ric_0=1\ \text{ and }\ s_k^\ric= k(k+1) \text{ for }k\geq1. $$ Consequently the Hilbert function $H_k^\ric = s_k^\ric-s_{k-1}^\ric$ is given by $$ H_0^\ric=H_1^\ric=1\ \text{ and }\ H_k^\ric = 2k \text{ for } k\geq2. $$ The corresponding Poincar\'e function $P_\ric(z)=\sum_{k=0}^\infty H_k^\ric z^k$ is equal to \[ P_\ric(z)= \frac{1-z+3z^2-z^3}{(1-z)^2}.
\] \subsection{Differential invariants}\label{S3.2} The differential invariants of second order from Section \ref{S2.1} are still differential invariants in the vacuum case. The only difference is that two second order invariants $I_{2a},I_{2c}$ become dependent since the vacuum condition implies $I_{2a}+I_{2c}=0$; in higher order we add differential corollaries of this relation. It follows that we can generate all $\Sym$-invariants of higher order by using the differential invariants $I_0, I_1, I_{2d}$ and invariant derivations $\nabla_i$ above. The differential syzygies found in Section \ref{S2.2} will still hold, but we get some new ones obtained by $\nabla_i$ differentiations of the Ricci-flat condition $I_{2a}+I_{2c}=0$. In terms of the differential invariants $a,b,c,K_{13}^4$ from Section \ref{S2.2}, the syzygy on $\ric_2$ takes the form \begin{equation*} K_{13}^4 b c (a + b)+4 a (2b+ b_{1}+ b_{2}) = 0. \end{equation*} The case of $\Sym_0$-invariants is treated similarly. \subsection{Comparing vacuum Kundt waves} \label{S3.3} For the basis of differential invariants we can take the same second-order invariants as for the general Kundt waves: $I_0,I_1,I_{2d},I_{2e}$. Then we express the metric coefficients $G_{ij}$ in terms of this basis of invariants. The corresponding four-dimensional signature manifold $\sigma_g(M^4)$ is restricted by differential syzygies of the general case plus the vacuum constraint. Considered as an unparametrized submanifold in $\R^{14}$ it completely classifies the metric $g$. \section{The Cartan-Karlhede algorithm}\label{S4} Next, we would like to compare the Lie-Tresse approach to differential invariants with Cartan's equivalence method. We outline the Cartan-Karlhede algorithm for finding differential invariants. The general description of the algorithm can be found in \cite{Kar}. Its application to vacuum Kundt waves has been recently treated in \cite{MMC}. 
\subsection{The algorithm for vacuum Kundt waves}\label{S4.1} Consider the following null-coframe in which metric \eqref{KW} has the form $g=2 m \odot \bar{m}-2\ell \odot n$ (as before $h_v=0=h_{xx}+h_{yy}$): \[ \ell = du,\quad n= \frac{1}{2}dv-\frac{v}{x} dx +\left(4xh-\frac{v^2}{8x^2}\right) du,\quad \begin{array}{l}m=\frac{1}{\sqrt{2}\mathstrut} (dx+i dy),\\ \bar m=\frac{1\mathstrut }{\sqrt{2}}(dx-i dy).\end{array} \] Let $\Delta, D, \delta, \bar\delta$ be the frame dual to coframe $\ell,n,m,\bar m$: \[ \Delta = \p_u-\left(8xh-\frac{v^2}{4x^2}\right)\,\p_v,\quad D= 2\p_v,\quad \begin{array}{l}\delta=\frac{1}{\sqrt{2}\mathstrut} (\p_x-i\p_y)+\frac{v\sqrt{2}}{x}\,\p_v,\\ \bar\delta=\frac{1}{\sqrt{2}\mathstrut} (\p_x+i\p_y)+\frac{v\sqrt{2}}{x}\,\p_v.\end{array} \] There is a freedom in choosing the (co)frame, encoded as the Cartan bundle. The general orthonormal frame bundle $\tilde\rho:\tilde{\mathcal P}\to M$ is a principal bundle with the structure group $O(1,3)$. For Kundt waves the non-twisting non-expanding shear-free null congruence $\ell$ is up to scale unique, and this reduces the structure group to the stabilizer $H \subset O(1,3)$ of the line direction $\R\cdot\ell$, yielding the reduced frame bundle $\rho:\mathcal P\to M$, which is a principal $H$-subbundle of $\tilde{\mathcal{P}}$. This so-called parabolic subgroup $H$ has dimension four and the $H$-action on our null (co)frame is given by boosts $(\ell,n)\mapsto (B \ell, B^{-1} n)$, spins $m \mapsto e^{i\theta} m$ and null rotations $(n,m)\mapsto (n+cm+\bar c \bar m+|c|^2 \ell,m+\bar c \ell)$ about $\ell$, where parameters $B,\theta$ are real and the parameter $c$ is complex. Let $\nabla$ denote the Levi-Civita connection of $g$, and let $R$ be the Riemann curvature tensor. Written in terms of the frame, the components of $R$ and its covariant derivatives are invariant functions on $\mathcal P$, but they are not invariants on $M$. 
The structure group $H$ acts on them and their $H$-invariant combinations are absolute differential invariants. In practice $H$ is used to set as many components of $\nabla^k R$ as possible to constants, as this is a coordinate independent condition for the parameters of $H$. In the Newman-Penrose formalism \cite{PR}, the Ricci ($\Phi$) and Weyl ($\Psi$) spinors for the Kundt waves are given by \[ \Phi_{22}=2x (h_{xx}+h_{yy}), \qquad \Psi_{4} = 2x(h_{xx}-h_{yy}-2i h_{xy}). \] A boost and spin transform $\Psi_4$ to $B^{-2} e^{-2i\theta} \Psi_4$. Thus if $\Psi_4\neq0$ it can be made equal to $1$ by choosing $B^2=4x\sqrt{h_{xx}^2+h_{xy}^2}$ and $e^{2i\theta}= \frac{h_{xx}-i h_{xy}}{\sqrt{h_{xx}^2+h_{xy}^2}}$. This reduces the frame bundle and the new structure group $H$ is two-dimensional. In the next step of the Cartan-Karlhede algorithm we use the null-rotations to normalize components of the first covariant derivative of the Weyl spinor. The benefit of setting $\Psi_4=1$ is that components of the Weyl spinor and its covariant derivatives can be written in terms of the spin-coefficients and their derivatives. For example, the nonzero components of the first derivative of the Weyl spinor are \[ (D \Psi)_{50} = 4 \alpha, \quad (D \Psi)_{51} = 4 \gamma, \quad (D \Psi)_{41} = \tau . \] The null-rotations, with complex parameter $c$, send $\gamma$ to $\gamma+c \alpha+\frac{5}{4} \bar c \tau$, but leave $\alpha$ and $\tau$ unchanged. Assuming that $|\alpha| \neq \frac{5}{4} |\tau|$ it is possible to set $\gamma=0$, and this fixes the frame. In this case there will be four Cartan invariants of first order in curvature components, namely the real and imaginary parts of $\alpha$ and $\tau$. 
They can be expressed in terms of differential invariants as follows: \begin{align*} \alpha &= \frac{-\sqrt{2i}}{8\sqrt{I_0}} \frac{J_-^{1/4}}{J_+^{5/4}} \left(i\sqrt{I_0I_1}(2I_0I_{2a}^2-I_{2a}+2\nabla_1I_{2a})+2I_{2b}^2-3I_{2b}+2\nabla_1I_{2b}\right)\\ \tau &= \frac1{\sqrt{2iI_0}} \frac{J_+^{1/4}}{J_-^{1/4}}, \qquad\text{ where }\quad J_\pm=I_{2b}\pm i\sqrt{I_0I_1}I_{2a}. \end{align*} These give four independent invariant functions on $\mathcal R_\infty$, but when restricted to a vacuum Kundt wave metric (to the section $j^\infty_Mg\subset \mathcal R_\infty$) at most three of them are independent: \[ \hat d (\alpha+\bar \alpha) \wedge \hat d(\alpha-\bar \alpha) \wedge \hat d (\tau+\bar \tau) \wedge \hat d (\tau-\bar \tau)=0. \] The generic stratum of this case corresponds to the invariant branch (0,3,4,4) of the Cartan-Karlhede algorithm in \cite{MMC}. At the next step of this algorithm the derivatives of the three Cartan invariants from the last step are computed, resulting in the invariants $\Delta|\tau|, \bar\delta\alpha, \mu, \nu$ (the latter again complex-valued). One more derivative gives the invariant $\Delta(\Delta|\tau|)$ as a component of the third covariant derivative of the curvature tensor. Further invariants (when restricted to $j^\infty_Mg$) will depend on those already constructed, so only 12 real-valued Cartan invariants are required to classify vacuum Kundt waves. \begin{remark} In Section \ref{S2.3} it was stated that 14 differential invariants ($J_a, G_{ij}$) are sufficient for classifying Kundt waves, but choosing $J_1=I_0,J_2=I_1,J_3=I_{2d},J_4=I_{2e}$ it turns out that we get precisely 12 functionally independent differential invariants among them. \end{remark} \subsection{Cartan invariants vs.\ absolute differential invariants}\label{S4.2} Let us take a closer look at the relationship between the Cartan invariants and the differential invariants from Section \ref{S2}. 
Differential invariants are functions on $J^\infty \pi$, or on a PDE therein, which are constant on orbits of the Lie pseudogroup $\mathcal G$. Cartan invariants, on the other hand, are components of the curvature tensor and its covariant derivatives. These components are dependent on the point in $M$ and the frame. If we normalize the group parameters and hence fix the frame, i.e.\ a section of the Cartan bundle, then the Cartan invariants restricted to this section are invariant functions on $J^\infty \pi$. The following commutative diagram explains the situation. \begin{center} \begin{tikzcd} &\mathcal{P} \arrow[d, "\rho"] & \arrow[l]\arrow[d] \pi_\infty^* \mathcal P & \\ &M &\arrow[l, "\pi_\infty"] \E_\infty \subset J^\infty \pi \end{tikzcd} \end{center} Initially the Cartan invariants are functions on \[ \pi_\infty^* \mathcal P = \{(\omega,g_\infty) \in \mathcal P \times \E_\infty \mid \rho(\omega)=\pi_\infty(g_\infty) \} \] and they suffice to solve the equivalence problem because $\mathcal P$ is equipped with an absolute parallelism $\Omega$ (Cartan connection) whose structure functions generate all invariants on the Cartan bundle. Indeed, an equivalence of two Lorentzian spaces $(M_1,g_1)$ and $(M_2,g_2)$ lifts to an equivalence between $(\mathcal P_1,\Omega_1)$ and $(\mathcal P_2,\Omega_2)$ and vice versa, the equivalence upstairs projects to an equivalence downstairs. Projecting the algebra of invariants on the Cartan bundle to the base we obtain the algebra of absolute differential invariants consisting of $\mathcal G$-invariant functions on $\E_\infty$. This is achieved by invariantization of the invariants on $\mathcal P$ with respect to the structure group. This is done in steps by normalizing the group parameters, effecting a further reduction of the structure group. 
When the frame is fully normalized (or normalized to a group acting trivially on invariants) the Cartan bundle is reduced to a section of $\mathcal P$, and restricting the components of $\nabla^k R$ to this section gives scalar differential invariants on $M$. Often these functions and their algebraic combinations that are absolute differential invariants, evaluated on the metric, are called Cartan invariants. \subsection{A comparison of the two methods}\label{S4.3} The definite advantage of Cartan's invariants is their universality. A basic set of invariants can be chosen for almost the entire class of metrics simultaneously. The syzygies are also fully determined by the commutator relations, the Bianchi and Ricci identities in the Newman-Penrose formalism \cite{PR}. Yet this basic set is large, and algebraically dependent invariants should be removed, resulting in a splitting of the class into different branches of the Cartan-Karlhede algorithm. See the invariant-count tree for the class of vacuum Kundt waves in \cite{MMC}. The normalization of group parameters, however, usually introduces algebraic extensions into the algebra of invariants. The underlying assumption at the first normalization step in Section \ref{S4.1} is that $\Psi_4$ is nonzero. This means that also for Cartan invariants we must restrict to the complement of a Zariski-closed set in $\E_k$. Setting $\Psi_4$ to $1$ introduces radicals into the expressions of Cartan invariants. Sufficient care must be taken here in the real domain, because the square root is not everywhere defined and is multi-valued. At this stage it is the choice of the $\pm$ sign, but the multi-valuedness becomes more restrictive with further invariants. For instance, the expressions for $\alpha$ and $\tau$ contain radicals of $J_\pm$ depending on $\sqrt{I_0I_1}$. 
Recall that even though the invariants $I_0$ and $I_1$ are squares, the extraction of the square root cannot be made $\Sym$-equivariantly and is related to a choice of domain for the pseudogroup $\Sym_0$. Changing the sign of $\sqrt{I_0I_1}$ results in the interchange $J_-\leftrightarrow J_+$, modifying the formula for $\alpha$ and $\tau$ (which, as presented, is also subject to some sign choices). The complex radicals carry further multi-valuedness issues: choosing branch-cuts and restricting to simply connected domains. Thus Cartan's invariants computed via the normalization technique are only locally defined. In addition, the domains where they are defined are not Zariski open, in particular they are not dense. In contrast, elements of the algebra of rational-polynomial differential invariants described in Section \ref{S2} are defined almost everywhere, on a Zariski-open dense set. The above radicals are avoidable because we know from Section \ref{S1.2} that generic Kundt waves, as well as vacuum Kundt waves, can be separated by rational invariants. Another aspect of comparison is coordinate independence. The class of metrics \eqref{KW} is given in specific Kundt coordinates, from which we derived the pseudogroup $\Sym$. Changing the coordinates does not change the pseudogroup, but only its coordinate expression. In other words, this is equivalent to a conjugation of $\Sym$ in the pseudogroup $\op{Diff}_\text{loc}(M)$. The Cartan-Karlhede algorithm is manifestly coordinate independent, i.e.\ the invariants are computed independently of the form in which a Kundt wave is written. However, a normalization of parameters is required to obtain a canonical frame, from which Kundt coordinates can be derived by a simple integration. The integration can also be skipped in the differential invariants approach, since jets are abstractly coordinate independent objects. This would give an equivalent output. 
\section{Conclusion}\label{S5} In this paper we discussed Kundt waves, a class of metrics that are not distinguished by Weyl's scalar curvature invariants. We computed the algebra of scalar differential invariants that separate generic metrics in the class and showed that this algebra is finitely generated in the Lie-Tresse sense globally. These invariants also separate the important sub-class of vacuum Kundt waves. The latter class of metrics was previously investigated via Cartan's curvature invariants in \cite{MMC} and we compared the two approaches. In particular, we pointed out that normalization in the Cartan-Karlhede algorithm leads to multi-valuedness of invariants. Moreover, the obtained Cartan's invariants are local even in jets-variables (derivatives of the metric components). This leads to restrictions of the domains of definition, which in general may not even be invariant with respect to the equivalence group, see \cite{KL2}. With the differential invariant approach the signature manifold can be reduced in dimension, as we saw in Section \ref{S2.4}. For the general class of Kundt waves where $h_v=0$, the $v$-variable can be removed from consideration and furthermore it is not difficult to remove the $y$-variable too. This dimension reduction leads to a much simpler setup and classification algorithm. We left additional independent variables to match the traditional approach via curvature invariants. The two considered approaches are not in direct correspondence and each method has its own specifications. For instance, the invariant-count tree in the Cartan-Karlhede algorithm has an ideological counterpart in the Poincar\'e function for the Lie-Tresse approach. However, the orders of the invariants in the two methods are not related, which obstructs aligning the filtrations on the algebras of invariants. For simplicity, in this paper we restricted to generic metrics in the class of Kundt waves. 
This manifests in a choice of four functionally independent differential invariants, which is not always possible. For instance, metrics admitting a Killing vector never admit four independent invariants. With the Cartan-Karlhede approach this corresponds to invariant branches like (0,1,3,3) that do not end with 4, and for the vacuum case all such possibilities were classified in \cite{MMC}. With the differential invariants approach we treated metrics specified by explicit inequalities: $h_y\neq0$, $I_0I_1\neq0$, $\dots$, such that the basic invariants and derivations are defined. It is possible to restrict to the singular strata, and find the algebra of differential invariants with respect to the restricted pseudogroup. Thus differential invariants also allow one to distinguish more special metrics in the class of Kundt waves. To summarize, the classical Lie-Tresse method of differential invariants is a powerful alternative to the Cartan equivalence method traditionally used in relativity applications. \bigskip \textsc{Acknowledgement.} {\footnotesize BK is grateful to the BFS/TFS project Pure Mathematics in Norway for funding a seminar on the topic of the current research. The work of DM was supported by the Research Council of Norway, Toppforsk grant no.\,250367: Pseudo-Rieman\-nian Geometry and Polynomial Curvature Invariants: Classification, Characterisation and Applications. ES acknowledges partial support of the same grant and hospitality of the University of Stavanger.}
\section{Introduction} The Italian school of algebraic geometry in the early 20th century established a classification theory for smooth complex surfaces, which was generalised by Kodaira, Shafarevich and Bombieri--Mumford. In particular, Bombieri--Mumford showed the abundance theorem for a smooth projective surface $X$ over an algebraically closed field $k$, i.e. if $K_X$ is nef, then it is semi-ample. After that, Fujita succeeded in extending it to the logarithmic case \cite{Fuj84}, which was extended to $\Q$-factorial surfaces by Fujino and the author (\cite{Fujino}, \cite{minimal}). As a consequence, Fujita established the abundance theorem for log canonical surfaces over any algebraically closed field. The purpose of this paper is to prove the abundance theorem for log canonical surfaces over any field of positive characteristic. \begin{thm}\label{0abundance} Let $k$ be a field of positive characteristic. Let $(X, \Delta)$ be a log canonical surface over $k$, where $\Delta$ is an effective $\R$-divisor. Let $f:X \to S$ be a proper $k$-morphism to a scheme $S$ which is separated and of finite type over $k$. If $K_X+\Delta$ is $f$-nef, then $K_X+\Delta$ is $f$-semi-ample. \end{thm} If $k$ is a perfect field, then Theorem~\ref{0abundance} immediately follows from the case where $k$ is algebraically closed. Thus the essential difficulty appears only when $k$ is an imperfect field. Even if we work over algebraically closed fields of positive characteristic, we often encounter varieties over imperfect fields. For instance, given a fibration between smooth varieties of positive characteristic, general fibres are not smooth in general, whilst the generic fibre is always a regular scheme (e.g. quasi-elliptic surfaces). On the other hand, there is a possibility of finding new phenomena appearing only in positive characteristic by studying the surface theory over imperfect fields. 
Indeed, Schr\"{o}er discovered a $3$-dimensional del Pezzo fibration $f:X \to C$ with $R^1f_*\MO_X \neq 0$ by studying del Pezzo surfaces over imperfect fields (cf. \cite{Schroer}, \cite{Maddock}). \subsection{Proof of Theorem~\ref{0abundance}} Let us look at some of the ideas of the proof of Theorem~\ref{0abundance}. To this end, we assume that $S=\Spec\,k$ and $\Delta$ is a $\Q$-divisor throughout this subsection. \subsubsection{Klt case} If $k$ is perfect, then we can show Theorem~\ref{0abundance} just by taking the base change to the algebraic closure. However, if the base field is imperfect, the situation is subtler. First, normality and reducedness might fail under base change. Second, for the case where $k$ is algebraically closed, the proof of the abundance theorem for smooth surfaces depends on Noether's formula and Albanese morphisms. These techniques cannot be used in general for varieties over non-closed fields. In particular, it seems to be difficult to imitate the proof for the case over algebraically closed fields. Let us overview the proof of Theorem~\ref{0abundance}. To clarify the idea, we treat the following typical situation: $X$ is a projective regular surface over a field of characteristic $p>0$ and $K_X \equiv 0$. We would like to show that $K_X \sim_{\Q} 0$. First we can assume that $k$ is separably closed by taking the base change to the separable closure. Such a base change is harmless because, for example, a finite separable extension is \'etale. Second, we take the normalisation $Y$ of $(X\times_k \overline{k})_{\red}$. Set $f:Y \to X$ to be the induced morphism: $$f:Y \to (X\times_k \overline{k})_{\red} \to X\times_k \overline{k} \to X.$$ We can check that $Y$ is $\Q$-factorial (Lemma~\ref{purely-basic}(3)). By \cite[Theorem~1.1]{purely} (cf. Theorem~\ref{purely}), we can find an effective $\Z$-divisor $E$ such that $$K_Y+E=f^*K_X.$$ If $E$ is a prime divisor, then we can apply the following known result. 
\begin{thm}[Theorem~1.2 of \cite{minimal}]\label{Q-fac-abundance} Let $Y$ be a projective normal $\Q$-factorial surface over an algebraically closed field of characteristic $p>0$. Let $\Delta_Y$ be a $\Q$-divisor on $Y$ whose coefficients are contained in $[0, 1]$. If $K_Y+\Delta_Y$ is nef, then $K_Y+\Delta_Y$ is semi-ample. \end{thm} Thus, we focus on the simplest but non-trivial case: $E=2C$ where $C$ is a prime divisor: $$K_Y+2C=f^*K_X.$$ Set $C_X:=f(C)$. We have the three cases: $C^2>0$, $C^2=0$, and $C^2<0$. If $C^2>0$, then $C$ is nef and big, hence we can apply Theorem~\ref{Q-fac-abundance}. If $C^2<0$, then we contract a curve $C_X$ in advance and may exclude this case. Thus, we can assume that $C^2=0$, i.e. $C_X^2=0$. We consider the following equation $$K_Y+(2+\alpha)C=f^*(K_X+C_X),$$ where $\alpha$ is the positive integer satisfying $\alpha C=f^*C_X$. We apply adjunction to $(X, C_X)$ and $(Y, C)$ (\cite[Proposition 2.15]{MMP}). Then we obtain $(K_X+C_X)|_{C_X}\sim_{\Q} 0$ and $(K_Y+C)|_C \sim_{\Q} 0$. We deduce that the $\Q$-divisor $C|_C$ is torsion, which implies that $C$ is semi-ample by Mumford's result (cf. Proposition~\ref{icct}). If $(X, \Delta)$ is klt, then we can apply a similar argument, although we need some modifications. For more details, see Section~\ref{section-klt-abundance}. \subsubsection{Log canonical case} We overview the proof of the log canonical case assuming the klt case. After we get the abundance theorem for klt surfaces, the main difficulty is to show the non-vanishing theorem (Theorem~\ref{non-vanishing}): for a log canonical surface $(X, \Delta)$, if $K_X+\Delta$ is nef, then $\kappa(X, K_X+\Delta) \geq 0$. To this end, we may replace $X$ by its minimal resolution, hence assume that $X$ is regular. Since we may assume that $\kappa(X, K_X)=-\infty$, it follows from the abundance theorem for the regular case that a $K_X$-MMP induces a Mori fibre space structure $Z \to B$ where $Z$ is the end result. 
Roughly speaking, our idea is to prove that the base change of the Mori fibre space to the algebraic closure is again a Mori fibre space, although the singularities could no longer be log canonical. Based on this idea, we extend Fujita's result over algebraically closed fields to the case over imperfect fields (cf. Lemma~\ref{semi-positivity}). For more details, see Subsection~\ref{Subsection-nonvanishing}. \begin{rem} In \cite{BCZ}, Birkar--Chen--Lei independently obtained Theorem~\ref{0abundance} under the assumption that $(X, \Delta)$ is klt, $\Delta$ is a $\Q$-divisor, $S=\Spec\,k$ and $\kappa(X, K_X+\Delta) \geq 0$. The strategies are different. \end{rem} \begin{ack} Part of this work was done whilst the author visited National Taiwan University in December 2013 with the support of the National Center for Theoretical Sciences. He would like to thank Professor Jungkai Alfred Chen for his generous hospitality. The author would like to thank Professors Caucher~Birkar, Yoshinori~Gongyo, Paolo~Cascini, J\'anos~Koll\'ar, Mircea~Musta\c{t}\u{a}, Chenyang~Xu for very useful comments and discussions. This work was partially supported by JSPS KAKENHI (no. 24224001) and EPSRC. \end{ack}
TITLE: Least square method in general QUESTION [0 upvotes]: If I have these points given: $f(\frac{1}{2})=-1$, $f(1)=2$, $f(3)=4$, $f(9)=5$, $f(81)=8$. How can I fit the curve with the following form: $f(x)=a_0+a_1\log_3 x$? Also, in general: $f(x)=a_0 g(x)+a_1 h(x)$ REPLY [0 votes]: If you have $n$ data points $(x_i,y_i)$ and you want to fit the model $$y=a_0 \,g(x)+a_1\, h(x)$$ where $g(x)$ and $h(x)$ are known, define two variables $u_i=g(x_i)$, $v_i=h(x_i)$ and your model is simply $$y=a_0\,u+a_1\,v$$ which is a multilinear regression with no intercept. This is easy to solve using matrix calculations. Otherwise, write the so-called normal equations $$\sum_{i=1}^n u_i\,y_i=a_0\sum_{i=1}^n u_i^2+a_1\sum_{i=1}^n u_i\,v_i$$ $$\sum_{i=1}^n v_i\,y_i=a_0\sum_{i=1}^n u_i\,v_i+a_1\sum_{i=1}^n v_i^2$$ Solving two linear equations in two unknowns is quite simple.
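The recipe in the answer can be sketched numerically with numpy, using the data points from the question and the substitution $u=\log_3 x$ (here $g(x)=1$ plays the role of the intercept column):

```python
import numpy as np

# Data points from the question: f(1/2) = -1, f(1) = 2, f(3) = 4, f(9) = 5, f(81) = 8.
x = np.array([0.5, 1.0, 3.0, 9.0, 81.0])
y = np.array([-1.0, 2.0, 4.0, 5.0, 8.0])

# Substitution from the answer: the model f(x) = a0 + a1*log_3(x) becomes
# linear in the transformed variable u = log_3(x).
u = np.log(x) / np.log(3.0)  # logarithm base 3

# Design matrix with columns [1, log_3(x)]; solve the least-squares problem.
A = np.column_stack([np.ones_like(u), u])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# The same coefficients from the normal equations A^T A a = A^T y,
# which are exactly the two sums written in the answer.
coef_normal = np.linalg.solve(A.T @ A, A.T @ y)

print("a0, a1 via lstsq :", coef)
print("a0, a1 via normal:", coef_normal)
```

Both routes agree; `lstsq` is just a numerically safer way of solving the normal equations.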
TITLE: The thought process of derivatives explained (intermediate calculus) "derivatives with respect to what" QUESTION [2 upvotes]: My intention here is to contribute; if there is a problem with my solution or explanation--if it is wrong--please add a comment and don't just downvote. My answer represents my understanding and I spent a lot of time writing it; it would be very helpful to know if there is a flaw so I may fix it or take down my answer so nobody learns a mistaken concept. I had asked a question on here previously about trying to find a deeper understanding of derivatives. There was just a missing link in the whole picture. My question can be found here. Recently I had an epiphany and developed a better understanding of derivatives and I'd like to share that here with an example problem. Since this is a question and answer format forum, I pose the question: "When we take derivatives, how do we know what we have to take them with respect to?" Note that this information assumes that one already understands the general concept of a derivative... i.e. $f'(x) = \lim_{h\rightarrow 0 } \frac{f(x+h) - f(x)}{(x+h)-x}$ and how a derivative finds the instantaneous change of a function REPLY [9 votes]: It turns out there are two separate issues to consider. In functional notation, derivatives are things that are applied to functions, not variables. The derivative of a univariate function (i.e. a function with one argument) is always the derivative of the value of the function with respect to the argument of the function. i.e. if $f$ is the function defined by $f(x) = x^2$, then $f'$ is the function defined by $f'(z) = 2z$. In the equations above, $x$ and $z$ are dummy variables; they have no meaning on their own, and their only purpose is to let us write down an equation for the value of $f$ at a point. In dependent variable notation, the variables you use all have some intrinsic meaning. (e.g. you might use $t$ to refer to "time"). 
You can't differentiate variables, but you can take their differentials. The differential of $x$ is $dx$. The differential of $x^2$ is $d(x^2) = 2x~dx$. Sometimes, two differentials can be proportional. For example, if $x$ and $t$ depend on one another via the equation $x = t^2 + 1$, then taking the differential on both sides gives $dx = 2t~dt$. In Leibniz notation, when we have such a proportion, we use $dx/dt$ to express the ratio. So if $dx = 2t~dt$, then we say $dx/dt = 2t$. And $dt/dx = 1/(2t)$. If the relationship between $x$ and $t$ is $x = f(t)$, then fortunately we have $dx = f'(t) dt$, and so in Leibniz notation, $dx/dt = f'(t)$. If we have two equations, such as $$ \frac{8.5}{10-x} = \frac{1.5}{y} $$ and $$ x = 2.2t $$ then we can get two equations between the differentials. Let me first simplify the first equation to $$ \frac{10-x}{8.5} = \frac{y}{1.5} $$ Now, when we take the differential, we get two equations $$ dx = 2.2~dt $$ $$ -\frac{1}{8.5} dx = \frac{1}{1.5} dy $$ and if we want, we can solve the first for $dx$ and plug it into the second: $$ -\frac{2.2}{8.5} dt = \frac{1}{1.5} dy $$ We can't always write differentials as proportions. e.g. if $A = xy$, then $dA = x dy + y dx$. If $x$ and $y$ aren't functionally related to each other, then $dA/dx$ and $dA/dy$ simply don't make sense.
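The worked relation between the two equations can be cross-checked symbolically with sympy (assuming sympy is available); solving the simplified proportion for $y$ and differentiating both $x$ and $y$ with respect to the common parameter $t$ should reproduce $dx/dt = 2.2$ and $dy/dt = -\frac{2.2}{8.5}\cdot 1.5 = -\frac{33}{85}$:

```python
import sympy as sp

t = sp.symbols('t')

# Relations from the answer, kept as exact rationals:
# x = 2.2 t, and 8.5/(10 - x) = 1.5/y, i.e. y = 1.5*(10 - x)/8.5.
x = sp.Rational(11, 5) * t
y = sp.Rational(3, 2) * (10 - x) / sp.Rational(17, 2)

# Differentials expressed as derivatives with respect to t.
dx_dt = sp.diff(x, t)   # expected: 2.2 = 11/5
dy_dt = sp.diff(y, t)   # expected: -(2.2/8.5)*1.5 = -33/85

print(dx_dt, dy_dt)
```

This matches the proportion $-\frac{2.2}{8.5}\,dt = \frac{1}{1.5}\,dy$ derived above.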
\subsection{Regularisation of $\psi$} For the purposes of building a weak solution to the system (S), it is inconvenient to work directly with the singular map $\psi$. We work instead with a regularised map $\psi_{N}$ parameterised by the mollification index $N=1, 2, 3, ...$, such that $\psi$ can be recovered in the limit $N\rightarrow\infty$. Firstly, for $J=1, 2, 3, ...$ we define $\psi_{J}:\symn\rightarrow \mathbb{R}$ to be the Yosida-Moreau regularisation of $\psi$, namely \begin{equation}\label{yosidamoreau} \psi_{J}(Q):=\min_{A\in\symn}\left(J|A-Q|^{2}+\psi(A)\right). \end{equation} Secondly, for fixed $J$ and for any $K=1, 2, 3, ...$ we define $\psi_{J, K}$ to be the standard mollification of the map $\psi_{J}$, namely \begin{equation}\label{mollification} \psi_{J, K}(Q):=K^{d^{2}}\int_{\mathbb{R}^{d\times d}}\psi_{J}(K(Q-R))\Phi(R)\,dR, \end{equation} where $\Phi\in C^{\infty}_{c}(\mathbb{R}^{d\times d}, \mathbb{R}_{+})$ has the unit mass property $\int_{\mathbb{R}^{d\times d}}\Phi(R)\,dR=1$. Finally, we define $\psi_{N}:=\psi_{N, N}$ for each $N\geq 1$. We now quote without proof a number of properties of $\psi_{N}$, taken from \textsc{Feireisl et al.} \cite{feireisl}, which are of use to us in the construction of weak solutions. \begin{thm} For each $N\geq 1$, the regularisation $\psi_{N}$ of the Ball-Majumdar potential has the following properties: \begin{itemize} \item[\textbf{(M1)}]\label{mone} The map $\psi_{N}$ is both $C^{\infty}$ and convex on $\mathbb{R}^{d\times d}$; \item[\textbf{(M2)}] It is bounded from below, i.e. 
$-\psi_{0}\leq \psi_{N}(X)$ for all $X\in \mathbb{R}^{d\times d}$ and for all $N\geq 1$, where $\psi_{0}>0$ is the same constant appearing in \emph{\textbf{(P2)}}; \item[\textbf{(M3)}] $\psi_{N}\leq \psi_{N+1}\leq\psi$ on $\mathbb{R}^{d\times d}$ for $N\geq 1$; \item[\textbf{(M4)}] $\psi_{N}\rightarrow \psi$ in $L^{\infty}_{\mathrm{loc}}(\mathsf{D}(\psi))$ as $N\rightarrow\infty$, and $\psi_{N}$ is uniformly divergent on $\symn\setminus\mathsf{D}(\psi)$ as $N\rightarrow\infty$;\vspace{1mm} \item[\textbf{(M5)}] $\displaystyle\frac{\partial\psi_{N}}{\partial Q}\rightarrow \frac{\partial\psi}{\partial Q}$ in $L^{\infty}_{\mathrm{loc}}(\mathsf{D}(\psi))$ as $N\rightarrow\infty$; \vspace{1mm} \item[\textbf{(M6)}] The regularised map $\psi_{N}$ satisfies \begin{equation*} c_{N}^{1}|X|- c_{N}^{2}\leq \left|\frac{\partial\psi_{N}}{\partial Q}(X)\right|\leq C_{N}^{1}|X|+C_{N}^{2} \end{equation*} for all $X\in \mathbb{R}^{d\times d}$ and positive constants $c_{N}^{i}$ and $C_{N}^{i}$ which depend on the mollification parameter $N\geq 1$. \end{itemize} \end{thm}
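To illustrate the two-step construction on a simple example not taken from the main text, consider $d=1$ and $\psi(Q)=|Q|$. The minimisation in \eqref{yosidamoreau} can then be carried out explicitly and yields the Huber-type function
\begin{equation*}
\psi_{J}(Q)=\min_{A\in\mathbb{R}}\left(J|A-Q|^{2}+|A|\right)=
\begin{cases}
J\,Q^{2}, & |Q|\leq \frac{1}{2J},\\
|Q|-\frac{1}{4J}, & |Q|>\frac{1}{2J},
\end{cases}
\end{equation*}
which is convex and $C^{1}$ (the subsequent mollification \eqref{mollification} restores $C^{\infty}$ smoothness), satisfies $\psi_{J}\leq\psi_{J+1}\leq|\cdot|$, and converges locally uniformly to $|\cdot|$ as $J\rightarrow\infty$, in line with properties \textbf{(M1)}--\textbf{(M4)} above.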
TITLE: On how to solve the ODE $v'-\frac{1}{v}=x$... QUESTION [4 upvotes]: I've been having trouble finding the general solution of $v$ for $v'-\frac{1}{v}=x$. I've attempted various substitutions in an attempt to obtain separation of variables or a recognizable form to apply the method of the integrating factor. A couple of substitutions I've attempted: $$\alpha=\frac{1}{v}$$ $$\beta=\frac{1}{\alpha^2}$$ I tried others but threw out the scratch paper (yeah...would've helped now to see the other substitutions that didn't work so I don't reattempt them...). Does anyone have an idea of how to get this DE into a form we can play with? Please don't supply the final solution to the DE. I'm simply looking for direction down the proper path. REPLY [4 votes]: A change of function leads to a Riccati ODE. The usual method of solving leads to an ODE of Bessel kind. Finally the general solution is expressed in parametric form. While I was typing my answer, "Start wearing purple" posted his answer, which is based on the same method.
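Since the asker explicitly does not want the closed form, a numerical sanity check is still a fair addition: any candidate solution path can be compared against a direct integration of $v' = x + 1/v$. Below is a minimal hand-rolled RK4 sketch (pure Python; the initial condition $v(0)=1$ is chosen purely for illustration):

```python
# Numerical sketch: integrate v' = x + 1/v with classical RK4,
# starting from the illustrative initial condition v(0) = 1.
def rhs(x, v):
    return x + 1.0 / v

def rk4(v0, x0, x1, n):
    h = (x1 - x0) / n
    x, v = x0, v0
    for _ in range(n):
        k1 = rhs(x, v)
        k2 = rhs(x + h / 2, v + h * k1 / 2)
        k3 = rhs(x + h / 2, v + h * k2 / 2)
        k4 = rhs(x + h, v + h * k3)
        v += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return v

# Two step sizes; for a smooth right-hand side the values should agree closely.
v_coarse = rk4(1.0, 0.0, 1.0, 100)
v_fine = rk4(1.0, 0.0, 1.0, 200)
print(v_coarse, v_fine)
```

Since $v' = x + 1/v > 0$ along this trajectory, $v$ is strictly increasing, which gives a quick qualitative check on any closed-form answer obtained via the Riccati/Bessel route.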
TITLE: Some calculations about divergence of vector field on Riemannian manifold QUESTION [1 upvotes]: Assume first that we have a Riemannian manifold $(M,g)$. Furthermore, $X$ is a vector field on $M$ and $\nabla$ is the Levi-Civita connection as usual. Let $\{e_i\}$ be an orthonormal basis on $M$. Then how can we get $\operatorname{div} X =\sum_i\langle e_i,\nabla_{e_i}X\rangle$? Here $\operatorname{div}$ denotes the divergence of $X$. In other words, $\operatorname{div} X=\operatorname{tr}(\nabla X)$. I tried to use the definition of the covariant differential, which gives $$\operatorname{div} X = \sum_{i}\nabla X(e_i,e_i)=\sum_{i}\nabla_{e_i}X(e_i)$$ but what's next? The books I have found just omit the details, so can someone help me? Many thanks. REPLY [3 votes]: $\DeclareMathOperator{\div}{div} \DeclareMathOperator{\tr}{tr}$ We consider the following definition of the divergence. Let $(M,g)$ be an orientable Riemannian manifold, with Riemannian volume form $\mathrm{d}vol_g$, and $X$ be a vector field. The divergence of $X$ is the unique smooth function $\div X$ such that $L_X\mathrm{d}vol_g = (\div X) \mathrm{d}vol_g$. Let $\{e_1,\ldots,e_n\}$ be a local orthonormal frame and consider its dual frame $\{\theta^1,\ldots,\theta^n\}$, that is $\theta^i = g(\cdot,e_i)$. Then we have the equality $$ \mathrm{d}vol_g = \theta^1\wedge\cdots\wedge \theta^n. $$ Indeed, these two volume forms are proportional and coincide on the orthonormal frame $\{e_1,\ldots,e_n\}$. Now, since $L_X$ is a derivation of the exterior algebra, it follows that $$ L_X\mathrm{d}vol_g = \sum_{i=1}^n \theta^1\wedge\cdots\wedge L_X\theta^i \wedge \cdots \wedge \theta^n. 
$$ For any vector field $Y$, we have \begin{align} (L_X\theta^i)(Y) &= X\theta^i(Y) - \theta^i([X,Y]) & \text{by Leibniz rule}\\ &= Xg(Y,e_i) - g([X,Y],e_i)\\ &= g(\nabla_XY-[X,Y],e_i) + g(Y,\nabla_Xe_i) &\text{from the compatibility of $g$ and $\nabla$}\\ &= g(\nabla_YX,e_i) + g(Y,\nabla_Xe_i) & \text{since $\nabla$ is torsion-free}\\ &= (\theta^i\circ \nabla X)(Y) + g(Y,\nabla_Xe_i). \end{align} Applying $\nabla_X$ to the equality $\|e_i\|^2 = 1$ gives $\nabla_Xe_i\perp e_i$, so that the 1-form $g(\cdot,\nabla_Xe_i)$ vanishes on $e_i$, and is then a linear combination of $\{\theta^j\}_{j\neq i}$. It then disappears in the wedge product, and it follows that $$ L_X\mathrm{d}vol_g = \sum_{i=1}^n \theta^1\wedge\cdots\wedge (\theta^i\circ \nabla X) \wedge \cdots \wedge \theta^n. $$ Finally, evaluating on the orthonormal frame $\{e_1,\ldots,e_n\}$ yields \begin{align} \div X &= \sum_{i=1}^n \theta^1(e_1)\times \cdots \times \theta^i(\nabla_{e_i}X) \times \cdots \theta^n(e_n)\\ &= \sum_{i=1}^n \theta^i(\nabla_{e_i}X)\\ &= \sum_{i=1}^n g(\nabla_{e_i}X,e_i)\\ &= \tr \nabla X. \end{align} There are plenty of other proofs (on this website also) relying on the coordinate expression of the Riemannian volume form. This method is known as the method of moving frames (referring to the local orthonormal (co-)frame). It surprisingly turns out that it is less known whilst it is (in my opinion) simpler. Some people (including me) prefer to work in a coordinate-free manner, and to that end, moving frames are good friends.
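As a concrete complement to the coordinate-free proof, the identity $\div X = \tr \nabla X$ can be verified symbolically in a simple curved-frame setting: the Euclidean plane in polar coordinates, where the divergence also has the classical form $\frac{1}{\sqrt{g}}\partial_i(\sqrt{g}\,X^i)$. This sympy sketch is an illustration with arbitrary component functions, not part of the proof above:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
coords = [r, th]

# Euclidean metric in polar coordinates: g = diag(1, r^2).
g = sp.diag(1, r**2)
ginv = g.inv()
detg = sp.simplify(g.det())

# Christoffel symbols Gamma^k_{ij} of the Levi-Civita connection.
def christoffel(k, i, j):
    return sp.simplify(sum(
        ginv[k, l] * (sp.diff(g[l, i], coords[j])
                      + sp.diff(g[l, j], coords[i])
                      - sp.diff(g[i, j], coords[l])) / 2
        for l in range(2)))

# An arbitrary vector field X = X^r d_r + X^theta d_theta.
Xr = sp.Function('Xr')(r, th)
Xt = sp.Function('Xt')(r, th)
X = [Xr, Xt]

# tr(nabla X) = d_i X^i + Gamma^i_{ik} X^k.
trace_nabla = (sum(sp.diff(X[i], coords[i]) for i in range(2))
               + sum(christoffel(i, i, k) * X[k]
                     for i in range(2) for k in range(2)))

# Classical divergence: (1/sqrt(g)) d_i (sqrt(g) X^i).
div_classical = sum(sp.diff(sp.sqrt(detg) * X[i], coords[i])
                    for i in range(2)) / sp.sqrt(detg)

print(sp.simplify(trace_nabla - div_classical))  # expect 0
```

Here the only nonvanishing Christoffel symbols are $\Gamma^r_{\theta\theta}=-r$ and $\Gamma^\theta_{r\theta}=\Gamma^\theta_{\theta r}=1/r$, and both expressions reduce to $\partial_r X^r + X^r/r + \partial_\theta X^\theta$.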
TITLE: Using Lagrange multipliers to find the extrema of $f(x,y) = e^{2xy}$ subject to $x^2+y^2 = 16$ QUESTION [2 upvotes]: Find the maximum and minimum values of $f = e^{2xy}$ with respect to $x^2+y^2 = 16$. Using Lagrange multipliers, $\nabla f = \lambda\nabla g$. This, together with the constraint, gives the following system: $$x^2+y^2 = 16$$ $$2ye^{2xy} = \lambda2x$$ $$2xe^{2xy} = \lambda2y$$ How would I solve this system of equations? REPLY [1 votes]: First note that $x\neq 0$ and $y\neq 0$: if, say, $x = 0$, the second equation forces $y = 0$, which violates the constraint (and symmetrically if $y=0$); the same argument shows $\lambda\neq0$. You can therefore divide the second equation by the third and get: $$\frac{2ye^{2xy}}{2xe^{2xy}} = \frac{2\lambda x}{2\lambda y} \ ,$$ which after simplification says that $x^2=y^2$. Taking the first equation, you can then write $$x^2 + y^2 = 2x^2 = 16$$ This gives $x=\pm 2\sqrt{2}$ and therefore four possible solutions $(x,y)=(\pm 2\sqrt{2},\pm 2\sqrt{2})$, where all sign combinations are possible. Evaluating $f$ at these points: the same-sign points give $xy=8$ and the maximum $f=e^{16}$, while the opposite-sign points give $xy=-8$ and the minimum $f=e^{-16}$.
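As a numerical cross-check (a sketch, not part of the original answer): parametrizing the constraint circle by $x=4\cos t$, $y=4\sin t$ gives $2xy=16\sin 2t$, so the extrema of $f$ on the circle can be scanned directly:

```python
import math

def f(t):
    # f restricted to the constraint circle x = 4 cos t, y = 4 sin t
    x, y = 4*math.cos(t), 4*math.sin(t)
    return math.exp(2*x*y)      # note 2xy = 16*sin(2t)

ts = [k*math.pi/50000 for k in range(100000)]
vals = [f(t) for t in ts]

# maximum e^16 at t = pi/4 (i.e. x = y), minimum e^{-16} at t = 3pi/4 (x = -y)
assert abs(max(vals) - math.exp(16)) < 1e-3
assert abs(min(vals) - math.exp(-16)) < 1e-9
```

The scan confirms the values found by the Lagrange-multiplier analysis.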
TITLE: Rewriting a statement in logical form QUESTION [1 upvotes]: I am trying to rewrite the following statement using logical symbols: Regular work is not necessary to pass the course. I know how to do this if the "not" wasn't there: P: Regular work was done. Q: The course was passed. P is necessary for Q, so Q -> P. However, the inclusion of the "not" is confusing. How can I, in a manner similar to above, write the statement with the "not" included? REPLY [1 votes]: $P$ : regular work is done. $Q$ : the course is passed. The material implication, $( Q\to P)$, that is "regular work is done if the course is passed," is quite reasonably interpreted as stating "regular work is necessary to pass the course".   (Also written as $P\gets Q$ though not often.) However, saying "not necessary" is problematic in classical propositional logic.   The simple negation, $\neg(Q\to P)$, is (classically) equivalent to $\neg P\wedge Q$, which is "regular work is not done yet the course is passed."   Yet we merely want to assert that this may happen, rather than that it will happen; that it is "possible". Trying to express a negation duality between "necessarily" and "possibly" is exactly the inspiration for developing "modal logics". "It is not necessarily so that regular work is done whenever the course is passed", "It is possibly so that regular work is not done yet the course is passed." $$\neg\Box(Q\to P)\iff \Diamond(\neg P\wedge Q)$$
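The duality $\neg\Box(Q\to P)\iff\Diamond(\neg P\wedge Q)$ can be checked mechanically on toy models (a sketch, not part of the original answer; here a "model" is just a nonempty set of $(P,Q)$ valuations, with $\Box$ read as "true in every world" and $\Diamond$ as "true in some world"):

```python
from itertools import product, combinations

# all possible (P, Q) valuations
valuations = list(product([False, True], repeat=2))

def implies(a, b):
    return (not a) or b

# check the duality over every nonempty set of worlds
for r in range(1, len(valuations) + 1):
    for worlds in combinations(valuations, r):
        box = all(implies(Q, P) for P, Q in worlds)        # Box(Q -> P)
        diamond = any((not P) and Q for P, Q in worlds)    # Diamond(~P & Q)
        assert (not box) == diamond
```

Every world set satisfies the equivalence, reflecting that $\neg\forall$ and $\exists\neg$ are dual.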
TITLE: Volume and orientation application - Calculus on Manifolds QUESTION [1 upvotes]: If $\omega$ is the volume element of $V$ determined by $T$ and $\mu$, and $f\colon \mathbb{R}^n\to V$ is an isomorphism such that $f^*T=\langle,\rangle$, and such that $[f(e_1),\ldots,f(e_n)]=\mu$, show that $f^*\omega=\det$. I am trying to do exercise 4-4 of Spivak's Calculus on Manifolds. The hypothesis of the exercise is very similar to Theorem 4-2 (Calculus on Manifolds, Spivak). What I have tried: taking an orthonormal basis $[f(e_1),\ldots,f(e_n)]$, so that the pullback satisfies $f^*T(e_i,e_j)=\delta_{ij}$, and $f^*\omega(e_1,\ldots,e_n)=\omega(f(e_1),\ldots,f(e_n))$. These are only ideas, since I have not been able to solve it; any help or suggestion is welcome. REPLY [1 votes]: The theorem you attached isn't really what you should be looking at. This exercise amounts to reading carefully the pages on how an inner product and orientation determine a volume element. Let us briefly recall how this goes. First a definition: if $W$ is an $n$-dimensional real vector space then a volume element is by definition a non-zero alternating $(0,n)$ tensor over $W$; i.e. an element $\eta \in \mathcal{A}^n(W)\setminus\{0\}$ (recall that this space is $1$-dimensional so we're essentially choosing a basis for this space). On an arbitrary vector space $W$ we don't have a specific way of deciding which volume element to take: if we choose one, then any non-zero scalar multiple will do. However, if we have an oriented inner-product space $(W,g,\nu)$ then there is a unique choice, because there is a unique $\eta \in \mathcal{A}^n(W)\setminus\{0\}$ such that for some (and hence any) ordered orthonormal basis $\{w_1,\dots, w_n\}$ of $W$ with the correct orientation $[w_1,\dots, w_n]=\nu$, we have \begin{align} \eta(w_1,\dots, w_n)&=+1.
\end{align} I intentionally avoided the notation $V$ in this discussion because for your question we'll be taking $W=\Bbb{R}^n$, $g=\langle \cdot,\cdot\rangle$ the standard inner product, and the usual orientation $\nu=[e_1,\dots, e_n]$. For your question, you're given an isomorphism $f$ along with a volume element $\omega$ on $V$. You should know (pretty much by basic definition unwinding) that $f^*\omega$ is an alternating $(0,n)$ tensor on $\Bbb{R}^n$, i.e $f^*\omega\in \mathcal{A}^n(\Bbb{R}^n)$. But of course we already know that $\det$ spans this $1$-dimensional vector space (because as mentioned in the comments this is usually how one defines the determinant... or if you take a more abstract definition of the determinant this also follows easily). Hence, there is a $c\in\Bbb{R}$ such that \begin{align} f^*\omega &= c\cdot \det \end{align} Ideally, we'd like $c=1$, so how do we show that? Simple. Take the standard basis $\{e_1,\dots, e_n\}$ of $\Bbb{R}^n$ then we have \begin{align} c&= c\cdot 1\\ &= c\cdot \det(e_1,\dots, e_n)\\ &=(f^*\omega)(e_1,\dots, e_n)\\ &= \omega(f(e_1),\dots, f(e_n)) \end{align} Now, based on everything I've mentioned above can you justify why this last line equals $1$? (Of course you need to use your assumptions on $f$ at this stage).
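A numerical illustration of the result (a sketch, not part of the original answer; the matrix $G$ and the choice $f=G^{-1/2}$ are assumptions made for the example): represent the inner product $T$ on $W=\Bbb{R}^n$ by a symmetric positive-definite matrix $G$, so the volume element determined by $T$ and the standard orientation is $\omega(v_1,\dots,v_n)=\sqrt{\det G}\,\det[v_1\cdots v_n]$; then $f=G^{-1/2}$ satisfies $f^*T=\langle\,,\rangle$, preserves orientation, and the constant $c$ from the argument above comes out as $1$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
G = A @ A.T + n*np.eye(n)        # symmetric positive-definite: T(u,v) = u^T G v

# volume element determined by T and the standard orientation:
# omega(v_1,...,v_n) = sqrt(det G) * det[v_1 ... v_n]
def omega(M):
    return np.sqrt(np.linalg.det(G)) * np.linalg.det(M)

# f = G^{-1/2} (symmetric square root), so (f u)^T G (f v) = u^T v
w, U = np.linalg.eigh(G)
f = U @ np.diag(w**-0.5) @ U.T

# pullback condition f^* T = <,>
assert np.allclose(f.T @ G @ f, np.eye(n))

# c = (f^* omega)(e_1,...,e_n) = omega(f e_1,..., f e_n) = 1 = det(e_1,...,e_n)
c = omega(f @ np.eye(n))
assert np.isclose(c, 1.0)
```

This matches the computation in the answer: $c=\omega(f(e_1),\dots,f(e_n))=\sqrt{\det G}\,\det G^{-1/2}=1$.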
TITLE: Test irreducibility of polynomial over finite field QUESTION [4 upvotes]: I am reading a text, where it says that $X^5-X-1$ is irreducible modulo 3. I am not sure how I can know that. Could someone help? By the way, is there some practical trick to judge whether a polynomial is irreducible over a finite field in general? When I calculate the Galois group I find this information is very important, so I am very curious. Thanks! REPLY [3 votes]: Hint $ $ It has no roots so no linear factors, so if it splits it has an irreducible quadratic factor $g.\,$ In $\,\Bbb F_9 = \Bbb F_3[x]/g\!:\,$ $x^8 = 1\,$ so $\,\color{#c00}{x^4 = \pm1}\,$ so $\,0 =\color{#c00}{x^4}\color{#0a0}x\!-x-1 = \color{#c00}{\pm}\color{#0a0}x\!-\! x\! -\! 1\,$ contra $\,\deg g > 1$. Remark $ $ We used $\,0\neq f\in \Bbb F_9\Rightarrow f^{\large \color{#c00}8}\! = 1\,$ (for $\,f =x),\,$ the analogue of Fermat's little Theorem, which is true because $\,\Bbb F_9$ has multiplicative group $\,\Bbb F_9^*$ of size $\,9-\color{#0a0}1 = \color{#c00}8\,$ (all $\rm\color{#0a0}{nonzero}$ elements are invertible in a field), so Lagrange's Theorem $\Rightarrow f^8 = 1\,$ for all $\,f\neq 0.$
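Both steps of the hint (no roots, hence no linear factors; and full irreducibility over $\Bbb F_3$) can be confirmed computationally with SymPy (a quick check, not part of the original answer):

```python
from sympy import symbols, Poly

x = symbols('x')
p = Poly(x**5 - x - 1, x, modulus=3)

# no roots in GF(3): rules out linear factors
assert all(p.eval(a) % 3 != 0 for a in range(3))

# full irreducibility test over GF(3)
assert p.is_irreducible
```

In general, for a degree-$5$ polynomial over $\Bbb F_3$, ruling out degree-$1$ and degree-$2$ factors (as the hint does) already forces irreducibility, since the only remaining factorization pattern would be $2+3$.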
TITLE: Coproduct in the category of metric spaces QUESTION [6 upvotes]: While discussing categories without coproducts, we stumbled with the category $\mathbf{Met}$ that takes metric spaces as its objects and short maps as its morphisms. It is claimed that $\mathbf{Met}$ does not have coproducts. I'm puzzled since the disjoint union of metric spaces is also a metric space so that would make it a candidate for the coproduct in the category but I'm unable to see why it fails to be one. Any ideas? REPLY [7 votes]: An easy argument proceeds by contradiction: Suppose that there were coproducts, and consider $X+Y$ for nonempty $X,Y$. Now consider the metric spaces $2_N =\{-N,N\} \subseteq \Bbb R$ together with the (obviously short) maps $$\begin{align}+&:X \to 2_N, x \mapsto N\\-&:Y\to2_N,y\mapsto -N\end{align}$$ Then $d(+(x),-(y)) = 2N$ for arbitrary $x,y,N$; it is easy to see that this contradicts the simultaneous shortness of all induced maps $X+Y\to 2_N$ (finitely many go fine, but the distance between $x \in X$ and $y \in Y$ in $X+Y$ needs to be infinite to make them all short). Therefore, the coproduct $X+Y$ cannot exist. The fundamental reason behind this is the inability to make the distance between $X$ and $Y$ large enough. REPLY [5 votes]: EDIT: I wrote this answer a few seconds before Lord_Farin's answer appeared, so in the first sentence below I was referring to his comment on the question, not to his answer. I guess I'll keep this here for the comment about allowing $\infty$ as a distance, but regard this more as a comment now that he's posted a full proof. @Lord_Farin hit on the difficulty. The easiest way to see what goes wrong is to try to make the coproduct of a point with itself: you might think it's two points, but at what distance? If the distance is $d$, the universal property fails for maps from this coproduct to any space with some pair of points at a distance greater than $d$. 
If you change the definition of metric slightly to allow $d(x_1,x_2)=\infty$, then the resulting category of slightly generalized metric spaces and short maps does have coproducts: in the coproduct of $X_1$ and $X_2$ you make the distance from points of $X_1$ to points of $X_2$ infinite.
TITLE: $d\mid a,b \iff d\mid\gcd(a,b) \ $ [GCD Universal Property] QUESTION [5 upvotes]: I know that the definition of gcd of two numbers $ a, b $ is $G$ ,where $G\mid a$, $\;G\mid b $ and If $d\mid a$, $\;d\mid b $, then $ d\mid G.$ Now, using this how do I prove that in $\mathbb Z$, this definition coincides with the definition that $G$ is the GREATEST (w.r.t. usual ordering) common divisor? So to prove that I write, say $G=dq+ r,\; 0\le r<d$. My aim is to show that $ r$ is $0$. So I have to construct some $G+x $ which is also a divisor of both $ a$ and $b$, to get a contradiction. How do I get $ x$? REPLY [1 votes]: The definition says that the gcd $G$ is a common divisor that is divisibly greatest, i.e. if $\,d\,$ is any common divisor then $\,d\mid G,\,$ so $\, d\le G,\,$ thus $G$ is a greatest common divisor. Combining both directions we obtain the following handy bidirectional form of the general definition of a gcd $$g\,\text{ is a gcd of }\,a,b\,\text{ in }R\ \ \text{ if }\ \ \bbox[5px,border:1px solid #c00]{d\mid a,b\iff d\mid g}\ \text{ holds for all}\ \ d\in R\qquad\qquad\ \ \ \ \ \ \ \ $$ Indeed putting $\,d=g\,$ in $(\Leftarrow)$ yields $\,g\mid a,b,\,$ so $\,g\,$ is a common divisor of $\,a,b,\,$ and necessarily divisibly greatest since direction $(\Rightarrow)$ shows every common divisor $\,d\,$ divides $\,g.$ Below is a proof of the "divisibly greatest" form of the gcd in $\Bbb Z,\,$ via Bezout. Theorem $\ \ \ \ d\mid a,b\iff d\mid (a,b)\ \ \ $ [GCD Universal Property] ${\bf Proof}\ \ (\Rightarrow)\ \ \ d\mid a,b\,\Rightarrow\, d\mid (a,b) = i\:\!a\!+\!j\:\!b,\, $ some $\, i,j\in\Bbb Z,\,$ by Bezout. $(\Leftarrow)\ \ \ \ d\mid (a,b)\mid a,b\,\Rightarrow\, d\mid a,b\ $ by transitivity of $ $ "divides". 
Remark $\ $ Dually we have the universal property of LCM Lemma $\ \ \ a,b\mid m\iff [a,b]\mid m\ \ \ $ [LCM Universal Property] In more general UFDs such as $\,\Bbb Z[x]\,$ and $\,\Bbb Q[x,y]\,$ there generally is no Bezout equation for gcds, so the above proof of the gcd universal property fails. But instead we can use prime factorizations to prove the gcd universal property (then it boils down to the universal property of min & max on exponents of primes, e.g. see here). Or we can prove the GCD Universal Property directly by induction on $\,\color{#90f}{{\rm size} := a\!+\!b}.\,$ It's clearly true if $\,a\!=\!0\,$ or $\,b\!=\!0,\,$ or if $\,a\! =\! b\!:\ c\mid a,a\!\iff\! c\mid (a,a)=a.\,$ Else $\,a\!\neq\! b\!\neq\!0.\,$ By symmetry, wlog $\,a>b,\,$ so $\, c\mid a,b\!\iff\! \color{#0a0}{c\mid a\!-\!b,b}\!\iff\! c\mid(a\!-\!b,b)=(a,b)\,$ since the $\,\color{#0a0}{\rm green}\,$ instance has smaller $\,\color{#90f}{{\rm size}} = (a\!-\!b)+b = a < \color{#90f}{a+b},\,$ so induction applies.
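The Bezout-based proof can be illustrated computationally (a sketch with arbitrarily chosen $a,b$): the extended Euclidean algorithm produces the coefficients $i,j$ with $(a,b)=ia+jb$, and the universal property $d\mid a,b\iff d\mid(a,b)$ can then be verified by brute force:

```python
def ext_gcd(a, b):
    # extended Euclid: returns (g, i, j) with g = i*a + j*b
    if b == 0:
        return a, 1, 0
    g, i, j = ext_gcd(b, a % b)
    return g, j, i - (a // b) * j

a, b = 840, 612                      # 840 = 2^3*3*5*7, 612 = 2^2*3^2*17
g, i, j = ext_gcd(a, b)
assert g == 12 and i*a + j*b == g    # Bezout equation for the gcd

# GCD universal property: d | a,b  <=>  d | (a,b)
for d in range(1, max(a, b) + 1):
    assert (a % d == 0 and b % d == 0) == (g % d == 0)
```

The forward direction of the check is exactly the Bezout argument: any $d$ dividing both $a$ and $b$ divides $ia+jb=(a,b)$.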
TITLE: Let $T:V\to V$ over $\mathbb F$, $\dim V=n$, $\dim \operatorname{Im}T=1$. Prove that there's a scalar $a$ such that $T^2(v)=aT(v)$ for all $v\in V$ QUESTION [0 upvotes]: Let $T:V\to V$ over $\mathbb F$, $\dim V=n$, $\dim \operatorname{Im}T=1$. Prove that there's a scalar $a$ such that $T^2(v)=aT(v)$ for all $v\in V$. The official solution is: Because $\dim \operatorname{Im}T=1$ there's a vector $w\neq 0$ such that $\operatorname{Im}T=\operatorname{span}\{w\}$. Then for each $v\in V$ there's a scalar $k\in \mathbb F$ such that $T(v)=kw$, and in particular there's $a\in \mathbb F$ such that $T(w)=aw\quad \ast$. Then: $$ T^2(v)=T(kw)=kT(w)=kaw=akw=aT(v) $$ Why do we know that there's an $a$ such that $T(w)=aw$? Is it just because every vector in the domain must have an image? REPLY [2 votes]: Basically, $$T(w)\in Im\, T = span\{w\}\Longrightarrow \exists a: T(w) = aw.$$ You can do this for any given $w\in V.$
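A quick numerical illustration (a sketch, not part of the official solution; the vectors are arbitrary choices): any rank-$1$ operator can be written as $T=wv^t$, so $\operatorname{Im}T=\operatorname{span}\{w\}$, and then $T(w)=(v\cdot w)\,w$ gives $a=v\cdot w$ with $T^2=aT$:

```python
import numpy as np

# a rank-1 operator T = w v^T on R^3, so dim Im T = 1 and Im T = span{w}
w = np.array([1.0, 2.0, -1.0])
v = np.array([3.0, 0.0, 1.0])
T = np.outer(w, v)

# T w = (v . w) w, so the scalar in the problem is a = v . w
a = v @ w
assert np.allclose(T @ w, a * w)     # the relation (*) from the solution
assert np.allclose(T @ T, a * T)     # T^2 = a T on all of V
```

Here `T @ T == a * T` holds as a matrix identity, which is exactly the statement $T^2(v)=aT(v)$ for all $v$.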
\begin{document} \title{Polynomial Supersymmetry for Matrix Hamiltonians} \author{A.V. Sokolov\footnote{E-mail: avs\_avs@rambler.ru.}\\ \\{\it Department of Theoretical Physics, Saint-Petersburg State University,}\\{\it Ulianovskaya ul., 1, Saint-Petersburg 198504, Russia}} \maketitle \abstract{We study intertwining relations for matrix one-dimensional, in general, non-Hermitian Hamiltonians by matrix differential operators of arbitrary order. It is established that for any matrix intertwining operator $Q_N^-$ of minimal order $N$ there is a matrix operator $Q_{N'}^+$ of different, in general, order $N'$ that intertwines the same Hamiltonians as $Q_N^-$ in the opposite direction and such that the products $Q_{N'}^+Q_N^-$ and $Q_N^-Q_{N'}^+$ are identical polynomials of the corresponding Hamiltonians. The related polynomial algebra of supersymmetry is constructed. The problems of minimization and of reducibility of a matrix intertwining operator are considered and the criteria of minimizability and of reducibility are presented. It is shown that there are absolutely irreducible matrix intertwining operators, in contrast to the scalar case.} \ \noindent{\it Keywords}: {Supersymmetry; Intertwining operator; Matrix non-Hermitian Hamiltonian.} \section{Introduction} Supersymmetric matrix models in Quantum Mechanics arise in spectral design, in the description of the motion of spin particles in external fields, and in the scattering of particles with strong coupling of channels. The simplest cases of Darboux transformations for matrix Hamiltonians and the corresponding supersymmetry algebras are considered in several papers, in particular, in \cite{acd88,acd90-1,aisv91,canio92,canio93,takahiro93,hgb95,lrsfv98,dc99,tr99,iknn06,fmn10,nk11}. Matrix $n\times n$ differential operators of the first order and matrix $2\times2$ differential operators of the second order which intertwine Hermitian Hamiltonians, and the corresponding algebras of supersymmetry, were studied in \cite{acni97}.
In \cite{gove98} formulae were proposed that allow us to construct, for a given matrix $n\times n$, in general, non-Hermitian Hamiltonian, a new matrix $n\times n$ Hamiltonian and a matrix $n\times n$ linear differential operator of the $N$-th order with the identity matrix coefficient at $({d\over{dx}})^N$ intertwining these Hamiltonians. The formulae of \cite{gove98} are very general since they provide us with the opportunity to build any matrix intertwining operator with the identity matrix coefficient at $d\over{dx}$ in the highest degree, but at the same time the results of \cite{gove98} have some shortcomings. Firstly, the formulae considered in \cite{gove98} are constructed in terms of a basis in a subspace which is invariant with respect to the initial Hamiltonian, {\it i.e.} in terms of the columns of an $n\times nN$ matrix-valued solution ${\bf\Psi}(x)$ of the equation \begin{equation}H_+{\bf\Psi}={\bf\Psi}\Lambda,\la{gove1}\end{equation} where $H_+$ is the initial Hamiltonian and $\Lambda$ is an $nN\times nN$ constant matrix. In the present work we demonstrate that in order to construct any matrix $n\times n$ intertwining operator of arbitrary order $N$ with an arbitrary constant nondegenerate matrix coefficient at $({d\over{dx}})^N$ it is in fact sufficient to use only such matrices ${\bf\Psi}(x)$ for which the matrix $\Lambda^t$ has a normal (Jordan) form. It is evident that in this case the columns of ${\bf\Psi}(x)$ are formal vector-eigenfunctions or formal associated vector-functions of the given initial Hamiltonian~$H_+$, where the word ``formal'' means that these vector-functions are not necessarily normalizable. Secondly, the formulae of \cite{gove98} are rather complicated, in particular, because of the use of {\it quasideterminants} introduced in \cite{gere}.
More useful formulae for a matrix intertwining operator with the identity matrix coefficient at $d\over{dx}$ in the highest degree and for the potential of the related new Hamiltonian, in terms of usual determinants, were obtained in a rather involved way in \cite{sampe03} for the particular case where all columns of ${\bf\Psi}(x)$ are formal vector-eigenfunctions of $H_+$. We emphasize that the use of only formal vector-eigenfunctions of $H_+$ as columns in ${\bf\Psi}(x)$ means that the set of intertwining operators which can be constructed with the help of the formulae of \cite{sampe03} is much narrower than the set of intertwining operators which can be obtained from the formulae of \cite{gove98}. In the present paper we derive in a simple way the formulae for any matrix $n\times n$ intertwining operator of arbitrary order $N$ with an arbitrary constant nondegenerate matrix coefficient at $({d\over{dx}})^N$ and for the potential of the related new Hamiltonian in terms of usual determinants. In the particular case of \cite{sampe03} our formulae reduce to the formulae of \cite{sampe03}. The third shortcoming of \cite{gove98}, which is present in \cite{sampe03} as well, is the absence of any condition that guarantees (i) implementability of the described procedure for constructing a matrix intertwining operator and the corresponding new matrix Hamiltonian and (ii)~smoothness (the absence of pole(s)) for the matrix coefficients of the constructed intertwining operator and for the potential of the corresponding new Hamiltonian. Moreover, Theorem~1 from \cite{gove98}, containing a sufficient condition for the existence of a matrix intertwining operator, is wrong in view of the following arguments.
This theorem asserts that for any $nN$-dimensional subspace $V$ of $n$-dimensional vector-functions that is invariant with respect to a matrix $n\times n$ initial Hamiltonian $H_+$ there exist an intertwining matrix $n\times n$ differential operator $Q_N^-$ of the order $N$ and a final matrix $n\times n$ Hamiltonian $H_-$ such that $\ker Q_N^-= V$ and $Q_N^-H_+=H_-Q_N^-$. This statement fails in view of the following simple counterexample. Let us assume that $h_1=-({d\over{dx}})^2+v_1(x)$ and $h_2=-({d\over{dx}})^2+v_2(x)$ are scalar, in general, non-Hermitian Hamiltonians and $\varphi_1(x)$ and $\varphi_2(x)$ are formal ({\it i.e.} not necessarily from $L_2(\Bbb R)$) eigenfunctions of $h_1$ for the spectral values $\lambda_1$ and $\lambda_2$ respectively such that \[h_1\varphi_j=\lambda_j\varphi_j,\quad\lambda_j\in{\Bbb C},\quad j=1,2,\qquad \varphi'_1(x)\varphi_2(x)-\varphi_1(x)\varphi'_2(x)\not\equiv0.\] Then, the subspace with the basis \begin{equation}\Phi_j(x)=\begin{pmatrix}\varphi_j(x)\\0\end{pmatrix},\qquad j=1,2\la{bas1}\end{equation} is obviously invariant with respect to the matrix Hamiltonian \[H_+={\rm{diag}}\,(h_1,h_2),\] but it is not hard to see that there is no matrix $2\times2$ linear differential operator of the first order with the identity matrix coefficient, or with any other nonzero coefficient, at ${d\over{dx}}$ whose kernel has the basis \gl{bas1}. In addition, for the solution \[{\bf\Psi}(x)=\begin{pmatrix}\varphi_1(x)&\varphi_2(x)\\0&0\end{pmatrix},\qquad\Lambda=\begin{pmatrix}\lambda_1&0\\0&\lambda_2\end{pmatrix}\] of the equation \gl{gove1} the procedures from \cite{gove98} and \cite{sampe03} for constructing a matrix intertwining operator and the corresponding new Hamiltonian are not implementable.
One can check that the mentioned theorem from \cite{gove98} and its proof would be correct if the condition that the Wronskian of the elements of a basis in $V$ does not vanish identically were added to the properties of $V$ in the formulation of this theorem. It will be shown in our paper that the condition that the Wronskian of a set of formal vector-functions and formal associated vector-functions of $H_+$ does not vanish on the real axis, together with the condition that the matrix $\Lambda^t$ corresponding to this set has a normal (Jordan) form, guarantees (i) implementability of the procedure, described in our paper, for constructing a matrix intertwining operator and the corresponding new matrix Hamiltonian and (ii) smoothness (absence of pole(s)) for the matrix coefficients of the constructed intertwining operator and for the potential of the corresponding new Hamiltonian. The fourth shortcoming of \cite{gove98} is the absence of a direct answer to the question about the possibility of constructing any matrix intertwining operator with the identity matrix coefficient at $d\over{dx}$ in the highest degree with the help of the procedure described there, though the positive answer to this question follows from the corrected version of the theorem from \cite{gove98} mentioned above and from the fact that a matrix linear differential operator with the identity matrix coefficient at $d\over{dx}$ in the highest degree is uniquely defined by its kernel. The paper \cite{pepusa11} is devoted to the generalization of the results of \cite{sampe03} to the case of a degenerate matrix coefficient at $d\over{dx}$ in the highest degree in an intertwining operator.
In \cite{tanaka11} it is proposed to consider a supersymmetry for two matrix $n\times n$, in general, non-Hermitian Hamiltonians of Schr\"odinger form $H_+$ and $H_-$ generated by two matrix $n\times n$ linear differential operators $Q_N^+$ and $Q_N^-$ of the same order $N$ with constant coefficients proportional to the identity matrix at $({d\over{dx}})^N$. It is supposed that the operators $Q_N^+$ and $Q_N^-$ intertwine the Hamiltonians $H_+$ and $H_-$ in the opposite directions so that the products $Q_N^+Q_N^-$ and $Q_N^-Q_N^+$ are the same polynomials with matrix coefficients of the Hamiltonians $H_+$ and $H_-$ respectively. Moreover, the intertwining operators $Q_N^+$ and $Q_N^-$ are assumed to be mutually conjugated by a certain unnatural operation which is, in general, neither Hermitian conjugation nor transposition. Thus, intertwining of the Hamiltonians by one of these operators is not, in general, a consequence of intertwining of the Hamiltonians by the other operator, even in the case where both Hamiltonians are Hermitian conjugated or symmetric with respect to transposition. Hence, these intertwining operators lead, in general, to independent restrictions on the considered system. No method was offered in \cite{tanaka11} for finding the potentials of the considered Hamiltonians or the coefficients of the intertwining operators. Only in the case $n=N=2$ was a general solution of the intertwining relations found, under the additional assumption that both potentials and all coefficients of the intertwining operators are Hermitian.
One of the main results of the present paper is that for any matrix $n\times n$ linear differential operator $Q_N^-$ of arbitrary order $N$ with a nondegenerate matrix coefficient at $({d\over{dx}})^N$ that intertwines two matrix $n\times n$, in general, non-Hermitian Hamiltonians $H_+$ and $H_-$ of Schr\"odinger form, there is a matrix $n\times n$ linear differential operator $Q_{N'}^+$ of order $N'$, different in general from $N$, that intertwines the same Hamiltonians in the opposite direction and such that the product $Q_{N'}^+Q_N^-$ is a polynomial (the coefficients of which are numbers and not matrices as in \cite{tanaka11}) of $H_+$. Moreover, if there is no nonzero intertwining operator of order less than $N$ then the product $Q_N^-Q_{N'}^+$ is the same polynomial of $H_-$. The polynomial supersymmetry algebra constructed from the Hamiltonians $H_+$ and $H_-$ and the intertwining operators $Q_N^-$ and $Q_{N'}^+$ is presented in this paper as well. All papers mentioned above are devoted in fact (in regard to Darboux transformations) to the case with one spatial variable; only \cite{aisv91} also briefly touches on the cases with two and three spatial variables. The latter cases are considered in \cite{ione03} too. It is evident that a product of an intertwining operator and a polynomial of the corresponding Hamiltonian with coefficients that are symmetry operators for this Hamiltonian is again an intertwining operator for the same pair of Hamiltonians. Thus, there is the problem of minimization of an intertwining operator by means of removing from it a superfluous factor polynomial in the corresponding Hamiltonian. A criterion is presented in this paper for the weak minimizability of a matrix differential intertwining operator, {\it i.e.} for the possibility of separating from this operator a nonconstant polynomial of the corresponding Hamiltonian whose coefficients are numbers (not matrices). This criterion is analogous to that of \cite{anso03,ancaso07}.
The problem of reducibility of a differential intertwining operator, {\it i.e.} of the possibility of representing it as a product of differential intertwining operators of lower orders with smooth coefficients\footnote{If the intertwined Hamiltonians are either Hermitian or symmetric with respect to transposition or all elements of their potentials are real-valued then one can impose the respective restrictions on the potentials of all intermediate Hamiltonians and suitable additional restrictions on the coefficients of all intertwining operators of lower orders.}, is important for the theory of intertwining operators and the theory of spectral design (see, for example, \cite{acdi95,samsonov99,anca04,anso06,sokolov07,sokolov10} and references therein), since reducibility of an intertwining operator allows us to reduce a complicated transformation of an initial Hamiltonian into a final Hamiltonian, described by this intertwining operator, to a chain of simpler transformations described by intertwining operators of lower orders. We present in this paper a criterion of reducibility for a matrix differential intertwining operator. In addition, it is shown that, in contrast to the scalar case $n=1$ (see, for example, Lemma 1 in \cite{anso03}), in the matrix case there are absolutely irreducible matrix differential intertwining operators, {\it i.e.} intertwining operators which cannot be represented as a product of intertwining operators of lower orders even with pole singularity(-ies) in the coefficients. This paper is based on the report \cite{sokolov12} and its goal is to present briefly, and mostly without proofs, our new results on the intertwining of matrix Hamiltonians of Schr\"odinger form by matrix linear differential operators and on the polynomial supersymmetry algebra constructed from such operators and Hamiltonians. The paper is organized as follows. The basic definitions and notation are presented in Section 2.
Section~3 is devoted to the construction of any matrix $n\times n$ differential intertwining operator of the $N$-th order with an arbitrary constant nondegenerate matrix coefficient at $\big({d\over{dx}}\big)^N$, and of the corresponding final Hamiltonian, in terms of formal vector-eigenfunctions and formal associated vector-functions of the initial matrix $n\times n$ Hamiltonian. Section 4 includes the definition of minimizability of a matrix differential intertwining operator (in the sense mentioned above as weak minimizability) and the criterion of minimizability for such an operator. Section 5 contains the results on the existence, for any matrix intertwining operator $Q_N^-$ of order $N$, of a ``conjugate'' matrix intertwining operator $Q_{N'}^+$, and on the polynomial algebra of supersymmetry with these operators. The definitions of reducible, irreducible and absolutely irreducible matrix differential intertwining operators, the criterion of reducibility for such operators and an example of an absolutely irreducible matrix $2\times 2$ differential intertwining operator of the $2$-nd order (the generalization of this example to arbitrary $n\geqslant2$ and $N\geqslant2$ is straightforward) are given in Section 6. The Conclusions contain a list of problems which can be investigated in future papers. \section{Basic definitions and notation} Let us consider two matrix $n\times n$ Hamiltonians defined on the entire axis, \[H_+=-I_n\partial^2+V_+(x),\quad H_-=-I_n\partial^2+V_-(x),\qquad \partial\equiv{d/{dx}},\la{h+h-2.1}\] where $I_n$ is the identity matrix and $V_+(x)$ and $V_-(x)$ are square matrices, all elements of which are sufficiently smooth and, in general, complex-valued functions.
We suppose that these Hamiltonians are {\it intertwined} by a matrix linear differential operator $Q_N^-$, so that \begin{equation} Q_N^-H_+=H_-Q^-_N,\qquad Q_N^-=\sum\nolimits_{j=0}^NX^-_j(x)\partial^j,\la{splet}\end{equation} where $X^-_j(x)$, $j=0$, \dots, $N$ are also square matrices of the $n$-th order, all elements of which are sufficiently smooth and, in general, complex-valued functions. Expanding the left- and right-hand sides of \gl{splet} in powers of $\partial$ and equating the coefficients of equal powers of $\partial$ on both sides, we obtain the following equations for the coefficients of $\partial^{N+1}$ and $\partial^N$: \[-X^-_{N-1}(x)=-2X^{-\,\prime}_{N}(x)-X^-_{N-1}(x),\] \[-X^-_{N-2}(x)+X^-_N(x)V_+(x)=-X^{-\,\prime\prime}_N(x) -2X^{-\,\prime}_{N-1}(x)-X^-_{N-2}(x)+V_-(x)X^-_N(x).\] It follows from the first of these equations that $X^-_N(x)$ is a constant matrix and, thus, the second of these equations takes the form \begin{equation}X^-_NV_+(x)=-2X^{-\,\prime}_{N-1}(x)+V_-(x)X^-_N.\la{v12}\end{equation} In what follows we restrict ourselves to the case $\det X_N^-\ne0$.
In this case one can find from \gl{v12} the matrix potential $V_-(x)$ in terms of $V_+(x)$ and $X^-_{N-1}(x)$, \begin{equation}V_-(x)=X^-_NV_+(x)(X^-_N)^{-1}+2X^{-\,\prime}_{N-1}(x)(X^-_N)^{-1}.\la{vmp2.1}\end{equation} Existence of a ``conjugate'' matrix $n\times n$ intertwining operator $Q_M^+$ for a given matrix intertwining operator $Q_N^-$ such that \begin{equation} H_+Q_M^+=Q^+_MH_-,\qquad Q_M^+=\sum\nolimits_{j=0}^MX^+_j(x)\partial^j,\la{q+int}\end{equation} is not evident in general but is evident in the following cases: \renewcommand{\labelenumi}{(\theenumi)} \begin{enumerate} \item $H_+^\dag\!=\!H_+,\,\,H^\dag_-\!=\!H_-\quad\Rightarrow\quad H_+Q_N^+\!=\!Q_N^+H_-,\quad Q_N^+\!=\!Q_N^{-\,\dag}\!=\!\sum_{j=0}^N(-\partial)^jX_j^{-\,\dag}(x),$\hfill\\ where $\dag$ denotes Hermitian conjugation; \item $H_+^t\!=\!H_+,\,\,H^t_-\!=\!H_-\quad\Rightarrow\quad H_+Q_N^+\!=\!Q_N^+H_-,\quad Q_N^+\!=\!Q_N^{-\,t}\!=\!\sum_{j=0}^N(-\partial)^jX_j^{-\,t}(x),$\hfill\\ where $t$ denotes transposition; \item \qquad$H_+^\ast=H_-\quad\Rightarrow\quad H_+Q_N^+=Q_N^+H_-,\qquad Q_N^+=Q_N^{-\,\ast}=\sum_{j=0}^NX_j^{-\,\ast}(x)\partial^j,$\\ where $*$ denotes complex conjugation. \end{enumerate} Existence of a ``conjugate'' matrix intertwining operator of the type \gl{q+int} for any matrix intertwining operator~$Q_N^-$ is guaranteed by the results of Section \ref{ecio}. By virtue of the intertwining \gl{splet} the kernel of $Q_N^-$ is an invariant subspace for~$H_+$: \[H_+\ker Q_N^-\subset\ker Q_N^-.\] Hence, for any basis $\Phi^-_1(x)$, \dots, $\Phi^-_d(x)$ in $\ker Q_N^-$, $d=\dim\ker Q_N^-=nN$ there is a constant square matrix $T^+\equiv\|T^+_{ij}\|$ of the $d$-th order such that \[H_+\Phi^-_i=\sum\nolimits_{j=1}^dT^+_{ij}\Phi^-_j,\qquad i=1,\ldots,d. \la{tm}\] A basis in the kernel of an intertwining operator $Q_N^-$ in which the matrix $T^+$ has a normal (Jordan) form is called a {\it canonical basis}. 
Elements of a canonical basis are called {\it transformation vector-functions}. If a Jordan form of the matrix $T^+$ has block(s) of order higher than one, then the corresponding canonical basis contains not only formal vector-eigenfunctions of $H_+$ but also formal associated vector-function(s) of $H_+$ which are defined as follows (see \cite{naim}). A finite or infinite set of vector-functions $\Phi_{m,i}(x)$, $i=0$, 1, 2, \dots\, is called a {\it chain of formal associated vector-functions} of $H_+$ for a spectral value $\lambda_m$ if \[H_+\Phi_{m,0}\!=\!\lambda_m\Phi_{m,0}, \quad\Phi_{m,0}(x)\!\not\equiv\!0, \qquad (H_+\!-\!\lambda_mI_n)\Phi_{m,i}\!=\!\Phi_{m,i-1},\quad i\!=\!1,2,3,\ldots\,.\] The vector-function $\Phi_{m,i}(x)$ in this case is called {\it a formal associated vector-function of $i$-th order} of the Hamiltonian $H_+$ for the spectral value $\lambda_m$, $i=0$, 1, 2, \dots\, and $\Phi_{m,0}(x)$ is also called a formal vector-eigenfunction of $H_+$ for the same spectral value. The term ``formal'' emphasizes that the vector-function $\Phi_{m,i}(x)$ is not necessarily normalizable, $i=0$, 1, 2, \dots\,. \section{Construction of a matrix intertwining operator in terms of transformation vector-functions \la{ioNgc}} Let us consider a set of formal associated vector-functions \[\Phi_l^-(x)\equiv\big( \varphi^-_{l1}(x),\varphi^-_{l2}(x),\ldots,\varphi^-_{ln}(x)\big)^t,\qquad l=1,\ldots,nN,\qquad n,N\in{\Bbb N}\la{phl5}\] of a matrix $n\times n$ Hamiltonian $H_+$ such that this set can be divided into chains of formal associated vector-functions of $H_+$ for different, in general, spectral values of $H_+$ and the Wronskian $W(x)$ of all $\Phi^-_l(x)$, $l=1$, \dots, $nN$ does not vanish on the real axis. 
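The chain conditions just stated are easy to exhibit in the scalar case $n=1$. Below, $H_+=-d^2/dx^2$ and the spectral value $\lambda_m=1$ are an illustrative choice, not data from the text; note that neither function is normalizable, in line with the term ``formal'':

```python
import sympy as sp

x = sp.symbols('x')
H = lambda g: -sp.diff(g, x, 2)   # scalar free Hamiltonian (n = 1, V_+ = 0)

lam  = 1
phi0 = sp.sin(x)                  # formal eigenfunction: H phi0 = phi0
phi1 = x*sp.cos(x)/2              # formal associated function of first order

assert sp.simplify(H(phi0) - lam*phi0) == 0            # eigenvalue equation
assert sp.simplify(H(phi1) - lam*phi1 - phi0) == 0     # (H - lam) phi1 = phi0
```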
There is a unique matrix $n\times n$ linear differential operator $Q_N^-$ of the $N$-th order with an arbitrarily fixed constant nondegenerate matrix coefficient at $\partial^N$ such that $\ker Q_N^-$ contains all vector-functions $\Phi_l^-(x)$, $l=1$, \dots, $nN$. This operator can be found with the help of the following explicit formula, \[Q_N^-={1\over{W(x)}}\, X_N^-\begin{vmatrix}\varphi^-_{11}&\dots&\varphi^-_{1n}\quad \varphi^{-\prime}_{11}&\dots&\varphi^{-\prime}_{1n}&\ldots&(\varphi^-_{11})^{(N-1)}&\dots&(\varphi^-_{1n})^{(N-1)}&(\Phi^-_1)^{(N)}\\ \varphi^-_{21}&\dots&\varphi^-_{2n}\quad \varphi^{-\prime}_{21}&\dots&\varphi^{-\prime}_{2n}&\ldots& (\varphi^-_{21})^{(N-1)}&\dots&(\varphi^-_{2n})^{(N-1)}&(\Phi^-_2)^{(N)}\\ \vdots&\ddots&\vdots\qquad\vdots&\ddots&\vdots&\ddots&\vdots&\ddots&\vdots&\vdots\\ \varphi^-_{nN,1}\!\!\!\!&\dots&\!\!\!\!\varphi^-_{nN,n}\,\,\varphi^{-\prime}_{nN,1}\!\!\!\!&\dots&\!\!\!\!\varphi^{-\prime}_{nN,n}\!\!\!\!&\ldots&\!\!\!\!(\varphi^-_{nN,1})^{(N-1)}\!\!\!\!&\dots&\!\!\!\!(\varphi^-_{nN,n})^{(N-1)}\!\!&\!\!(\Phi^-_{nN})^{(N)}\\ P_1&\ldots&P_n\quad P_1\partial&\ldots&P_n\partial&\ldots&P_1\partial^{N-1}&\ldots&P_n\partial^{N-1}&I_n\partial^N \end{vmatrix}\!,\] \begin{equation}P_l\Phi=\varphi_l,\qquad\forall\,\,\Phi(x)\equiv\big(\varphi_1(x),\varphi_2(x),\ldots,\varphi_n(x)\big)^t,\qquad l=1,\ldots,n,\la{qNrep5}\end{equation} where, during the calculation of the determinant, in each of its terms the corresponding one of the operators $P_1$, \dots, $P_n$, $P_1\partial$, \dots, $P_n\partial$, $P_1\partial^{N-1}$, \dots, $P_n\partial^{N-1}$, $I_n\partial^N$ must be placed in the last position. 
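For orientation, formula \gl{qNrep5} can be tested in the simplest matrix setting $n=2$, $N=1$ with $X_1^-=I_2$, where it reduces to $Q_1^-=I_2\,\partial-F'(x)F^{-1}(x)$, $F(x)$ being the matrix whose columns are the transformation vector-functions. The kernel below is an illustrative choice: a single Jordan chain of $H_+=-\partial^2 I_2$ for the spectral value $0$, so that $T^+$ is one Jordan block of order two:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Matrix([sp.Function('f1')(x), sp.Function('f2')(x)])

# A Jordan chain of H_+ = -d^2/dx^2 I_2 for the spectral value 0:
Phi1 = sp.Matrix([1, 0])           # H_+ Phi1 = 0
Phi2 = sp.Matrix([-x**2/2, 1])     # (H_+ - 0) Phi2 = Phi1

F = sp.Matrix.hstack(Phi1, Phi2)   # columns span ker Q_1^-
assert F.det() == 1                # the Wronskian W(x) never vanishes

X0 = -sp.diff(F, x)*F.inv()        # Q_1^- = I_2 d/dx + X_0
Vm = 2*sp.diff(X0, x)              # relation (vmp2.1) with X_1^- = I_2, V_+ = 0

Qm = lambda g: sp.diff(g, x) + X0*g
Hp = lambda g: -sp.diff(g, x, 2)
Hm = lambda g: -sp.diff(g, x, 2) + Vm*g

# Q_1^- annihilates its kernel and intertwines H_+ with H_-
assert Qm(Phi1).applyfunc(sp.simplify) == sp.zeros(2, 1)
assert Qm(Phi2).applyfunc(sp.simplify) == sp.zeros(2, 1)
assert (Qm(Hp(f)) - Hm(Qm(f))).applyfunc(sp.simplify) == sp.zeros(2, 1)
```

In this sketch $V_-$ comes out constant, nilpotent and non-diagonalizable, which is consistent with the potentials being allowed non-Hermitian complex-valued elements.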
It is not hard to see in view of \gl{qNrep5} that the $l$-th column of the matrix coefficient $X_j^-(x)$ of $Q_N^-$ (see \gl{splet}) is equal to \[-{1\over{W(x)}}\, X_N^-\,\, \begin{matrix}\big|&\!\!\!\varphi^-_{11}&\dots&\varphi^-_{1n}&\varphi^{-\prime}_{11}&\dots& \varphi^{-\prime}_{1n}&\ldots\\ \Big|&\!\!\!\varphi^-_{21}&\dots&\varphi^-_{2n}&\varphi^{-\prime}_{21}&\dots&\varphi^{-\prime}_{2n}&\ldots\\ \Bigg|&\!\!\!\vdots&\ddots&\vdots&\vdots&\ddots&\vdots&\ddots\\ \Big|&\!\!\!\varphi^-_{nN,1}&\dots&\varphi^-_{nN,n}&\varphi^{-\prime}_{nN,1}&\dots&\varphi^{-\prime}_{nN,n}&\ldots \end{matrix}\qquad\qquad\qquad\qquad\qquad\qquad\quad\] \[\quad\qquad \begin{matrix}(\varphi^-_{1,l-1})^{(j)}&(\Phi^-_1)^{(N)}& (\varphi^-_{1,l+1})^{(j)}&\ldots&(\varphi^-_{11})^{(N-1)}&\dots&(\varphi^-_{1n})^{(N-1)}&\Big|\\ (\varphi^-_{2,l-1})^{(j)}&(\Phi^-_2)^{(N)}&(\varphi^-_{2,l+1})^{(j)}&\ldots& (\varphi^-_{21})^{(N-1)}&\dots&(\varphi^-_{2n})^{(N-1)}&\Big|\\ \vdots&\vdots&\vdots&\ddots&\vdots&\ddots&\vdots&\Bigg|\\ (\varphi^-_{nN,l-1})^{(j)}&(\Phi^-_{nN})^{(N)}&(\varphi^-_{nN,l+1})^{(j)}&\ldots&(\varphi^-_{nN,1})^{(N-1)}\!&\dots&(\varphi^-_{nN,n})^{(N-1)}\!\!\!&\Big| \end{matrix}\] \begin{equation} l=1,\ldots,n,\qquad j=0,\dots,N-1.\la{tilstx5}\end{equation} We emphasize that the condition that the Wronskian $W(x)$ does not vanish on the real axis ensures, in view of \gl{qNrep5} and \gl{tilstx5}, the existence of $Q_N^-$ and the smoothness (absence of pole(s)) of the matrix-valued functions $X_0^-(x)$, \dots, $X_{N-1}^-(x)$. Existence of a matrix $n\times n$ Hamiltonian $H_-$ of Schr\"odinger form which is intertwined with $H_+$ by $Q_N^-$ in accordance with \gl{splet} can be proved in the same way as in the proof of Theorem 1 in \cite{gove98}. The potential $V_-(x)$ of the Hamiltonian $H_-$ can be found with the help of \gl{tilstx5} for $j=N-1$ and the relation \gl{vmp2.1}. Thus, the potential $V_-(x)$ is smooth as well. 
Particular cases of the representation of $Q_N^-\Phi$ for an arbitrary $n$-dimensional vector-function $\Phi(x)$ with the help of \gl{qNrep5} and of the representation of $V_-(x)$ with the help of \gl{tilstx5} for $j=N-1$ and \gl{vmp2.1}, when $X_N^-=I_n$ and all vector-functions $\Phi_l^-(x)$, $l=1$, \dots, $nN$ are formal vector-eigenfunctions of the Hamiltonian $H_+$, are contained in \cite{sampe03}. The fact that for any matrix $n\times n$ initial Hamiltonian $H_+$ of Schr\"odinger form any matrix $n\times n$ linear differential intertwining operator of arbitrary order $N$ with arbitrary constant nondegenerate matrix coefficient at $\partial^N$ can be constructed with the help of the procedure described in this section follows from the facts that (i) for any such operator there is a canonical basis in its kernel, the Wronskian of which does not vanish on the real axis and (ii) a matrix $n\times n$ linear differential operator of $N$-th order with a given constant nondegenerate matrix coefficient at $\partial^N$ is uniquely determined by a basis in its kernel. \section{Minimizability of a matrix intertwining operator} It is evident that if one multiplies $Q_N^-$ by a polynomial in the Hamiltonian, \[Q_N^-\Big[\sum\nolimits_{l=0}^La_lH_+^l\Big]\equiv\Big[\sum\nolimits_{l=0}^{L}a_lH_-^l\Big]Q_N^-,\qquad a_l\in{\Bbb C},\quad l=0,\ldots,L,\] then the product is again an intertwining operator for the same Hamiltonians: \[\Big\{Q_N^-\Big[\sum\nolimits_{l=0}^La_lH_+^l\Big]\Big\}H_+=Q_N^-H_+\Big[\sum\nolimits_{l=0}^La_lH_+^l\Big]=H_-\Big\{Q_N^-\Big[\sum\nolimits_{l=0}^La_lH_+^l\Big]\Big\}.\] Thus, the question arises about the possibility of simplifying an intertwining operator by separating from it a superfluous factor polynomial in the corresponding Hamiltonian. Let us present the definition of minimizable and non-minimizable matrix intertwining operators. 
\vskip1pc \noindent{\bf Definition 1.} An intertwining operator $Q_N^-$ is called {\it minimizable} if this operator can be represented in the form \[Q_N^-\!=\!P_M^-\Big[\sum\nolimits_{l=0}^La_lH_+^l\Big],\qquad a_l\!\in\!\Bbb C,\quad l\!=\!0, \ldots,L,\quad a_L\!\ne\!0,\quad1\!\leqslant\! L\!\leqslant\! N/2,\] where $P_M^-$ is a matrix $n\times n$ linear differential operator of the $M$-th order, $M=N-2L$, that intertwines the Hamiltonians $H_+$ and $H_-$, so that $P_M^-H_+=H_-P_M^-$. Otherwise, the operator $Q_N^-$ is called {\it non-minimizable}. \vskip1pc The following criterion of minimizability holds. A~matrix $n\times n$ nonzero intertwining operator $Q_N^-$ can be represented in the form \[Q_N^-=P_M^-\prod_{l=1}^s(\lambda_lI_n-H_+)^{ k_l},\qquad\lambda_l\in{\Bbb C},\,\, k_l\in\Bbb N,\,\, l=1, \ldots, s,\quad\lambda_l\ne \lambda_{l'}\Leftrightarrow l\ne l',\la{minim8}\] where $P_M^-$ is a non-minimizable matrix $n\times n$ linear differential operator of the $M$-th order, $M=N-2\sum_{l=1}^sk_l$, that intertwines the Hamiltonians $H_+$ and $H_-$, so that $P_M^-H_+=H_-P_M^-$, \vskip0.5pc \noindent if and only if \renewcommand{\labelenumi}{\rm{(\theenumi)}} \begin{enumerate} \item all numbers $\lambda_l$, $l=1$, \dots, $s$ belong to the spectrum of the matrix $T^+$ and there are no equal numbers among them$;$ \item there are $2n$ Jordan blocks in a normal $($Jordan$)$ form of the matrix $T^+$ for any eigenvalue from the set $\lambda_l$, $l=1$, \dots, $s;$ \item there are no $2n$ Jordan blocks in a normal $($Jordan$)$ form of $T^+$ for any eigenvalue of this matrix that does not belong to the set $\lambda_l$, $l=1$, \dots, $s;$ \item $k_l$ is the minimal of the orders of Jordan blocks corresponding to the eigenvalue $\lambda_l$ in a normal $($Jordan$)$ form of the matrix $T^+$, $l=1$, \dots, $s.$ \end{enumerate} \section{On existence of a ``conjugate'' intertwining operator and polynomial SUSY\la{ecio}} Suppose that 
\renewcommand{\labelenumi}{\rm{(\theenumi)}} \begin{enumerate} \item $\lambda_l$, $l=1$, \dots, $L$ is the set of all different eigenvalues of $T^+;$ \item $g_l^-$ is the geometric multiplicity of $\lambda_l$ in the spectrum of $T^+$, $l=1$, \dots, $L;$ \item $k_{l,j}^-$, $j=1$, \dots, $g_l^-$ are the orders of Jordan blocks corresponding to $\lambda_l$ in a Jordan form of $T^+$, $l=1$, \dots, $L;$ \item $\varkappa_l=\max_{1\leqslant j\leqslant g_l^-}k_{l,j}^-$, $l=1$,\dots, $L.$ \end{enumerate} Then there is a non-minimizable linear differential operator $Q_{N'}^+$ of the order $N'=2(\varkappa_1+\ldots+\varkappa_L)-N$ with smooth coefficients that intertwines $H_+$ and $H_-$ as follows, \[H_+Q_{N'}^+=Q_{N'}^+H_-\la{int'10}\] and such that$:$ \[Q_{N'}^+Q_N^-=\prod\nolimits_{l=1}^L(H_+-\lambda_lI_n)^{\varkappa_l}.\la{pol10}\] Moreover, if there is no nonzero matrix linear differential operator $P_M^-$ of the order $M$, $M<N$ such that the following intertwining holds, \[P_M^-H_+=H_-P_M^-,\la{splet10.6}\] then \[Q_{N'}^+Q_N^-\!=\!{\cal P}_{(N+N')/2}(H_+),\quad Q_{N}^-Q_{N'}^+\!=\!{\cal P}_{(N+N')/2}(H_-),\quad{\cal P}_{(N+N')/2}(\lambda)\!\equiv\!\prod\nolimits_{l=1}^L(\lambda\!-\!\lambda_l)^{\varkappa_l}.\] In the considered case, with the help of the super-Hamiltonian \[{\bf H}=\begin{pmatrix}H_+&0\\0&H_-\end{pmatrix}\] and the nilpotent supercharges \[{\bf Q}=\begin{pmatrix}0&Q_{N'}^+\\0&0\end{pmatrix},\quad{\bf \bar Q}=\begin{pmatrix}0&0\\Q_N^-&0\end{pmatrix}, \qquad{\bf Q}^2={\bf \bar Q}^2=0\] one can construct the following polynomial algebra of supersymmetry: \[\{{\bf Q},{\bf \bar Q}\}={\cal P}_{(N+N')/2}({\bf H}),\qquad[{\bf H},{\bf Q}]=[{\bf H},{\bf \bar Q}]=0.\la{supalg10}\] \section{(Ir)reducibility of a matrix intertwining operator} Let us present definitions of reducible, irreducible and absolutely irreducible matrix intertwining operators. 
\vskip1pc \noindent{\bf Definition 2.} An intertwining operator $Q_N^-$ is called {\it reducible} if there are matrix $n\times n$ linear differential operators $K_{N-M}^-$ and~$P_M^-$ of the orders $N-M$ and $M$, $0<M<N$, respectively, with smooth coefficients and a matrix $n\times n$ intermediate Hamiltonian of Schr\"odinger form $H_M$ with smooth potential such that the following relations hold, \begin{equation}Q_N^-=K_{N-M}^-P_M^-,\qquad P_M^-H_+=H_MP_M^-,\qquad K_{N-M}^-H_M=H_-K_{N-M}^-.\la{interm7}\end{equation} Otherwise the operator $Q_N^-$ is called {\it irreducible}. \vskip1pc \noindent{\bf Definition 3.} An intertwining operator $Q_N^-$ is called {\it absolutely irreducible} if for any $M$, $0<M<N$ there are no matrix $n\times n$ linear differential intertwining operators $K_{N-M}^-$ and $P_M^-$ of the orders $N-M$ and $M$ respectively and no matrix $n\times n$ intermediate Hamiltonian of Schr\"odinger form $H_M$, even if the potential of $H_M$ and the coefficients of $K_{N-M}^-$ and $P_M^-$ are allowed to possess pole singularity(-ies), such that \gl{interm7} hold. \vskip1pc The following criterion of reducibility of a matrix intertwining operator holds. A matrix $n\times n$ nonzero intertwining operator $Q_N^-$ is reducible if and only if there are a natural number $M$, $1\leqslant M<N$, and vector-functions $\Phi_l^-(x)$, $l=1$, \dots, $nM$ belonging to $\ker Q_N^-$ such that these vector-functions can be divided into chains of formal associated vector-functions of $H_+$ and their Wronskian does not vanish on the real axis. \vskip1pc In contrast to the scalar case $n=1$, where absolutely irreducible intertwining operators are absent (see, for example, Lemma 1 in \cite{anso03}), in the matrix case for any $n\geqslant2$ there exist absolutely irreducible intertwining operators of any order. 
Let us restrict ourselves to the case $n=N=2$ and consider two chains of associated functions of two scalar Hamiltonians $h_1$ and $h_2$ for the same spectral value $\lambda_0\in\Bbb C$: \begin{eqnarray}h_1\varphi_{1,0}=\lambda_0\varphi_{1,0},&\qquad&(h_1-\lambda_0)\varphi_{1,l}=\varphi_{1,l-1},\quad l=1,2,3,\nonumber\\ h_2\varphi_{2,0}=\lambda_0\varphi_{2,0},&\qquad&(h_2-\lambda_0)\varphi_{2,1}=\varphi_{2,0}\nonumber\end{eqnarray} such that the Wronskians \[W_{1}(x)\equiv\varphi_{1,0}\varphi'_{1,1}-\varphi'_{1,0}\varphi_{1,1},\qquad W_{2}(x)\equiv\varphi_{2,0}\varphi'_{2,1}-\varphi'_{2,0}\varphi_{2,1}\] do not vanish on the real axis. Then the vector-functions \[\Phi_0^-=\begin{pmatrix}\varphi_{1,0}\\0\end{pmatrix},\quad\Phi_1^-=\begin{pmatrix}\varphi_{1,1}\\0\end{pmatrix},\quad\Phi_2^-=\begin{pmatrix}\varphi_{1,2}\\\varphi_{2,0}\end{pmatrix},\quad\Phi_3^-=\begin{pmatrix}\varphi_{1,3}\\\varphi_{2,1}\end{pmatrix},\] form a chain of formal associated vector-functions of the matrix Hamiltonian \[H_+={\rm{diag}}\,(h_1,h_2)\] for the spectral value $\lambda_0$, \[H_+\Phi_0^-=\lambda_0\Phi_0^-,\qquad (H_+-\lambda_0I_2)\Phi_l^-=\Phi_{l-1}^-,\qquad l=1,2,3\] and the following equalities for the Wronskian of $\Phi_0^-(x)$, \dots, $\Phi_3^-(x)$ hold, \[{\large\begin{vmatrix}\varphi_{1,0}&0&\varphi'_{1,0}&0\\ \varphi_{1,1}&0&\varphi'_{1,1}&0\\ \varphi_{1,2}&\varphi_{2,0}&\varphi'_{1,2}&\varphi'_{2,0}\\ \varphi_{1,3}&\varphi_{2,1}&\varphi'_{1,3}&\varphi'_{2,1} \end{vmatrix}}=-{\large\begin{vmatrix}\varphi_{1,0}&\varphi'_{1,0}&0&0\\ \varphi_{1,1}&\varphi'_{1,1}&0&0\\ \varphi_{1,2}&\varphi'_{1,2}&\varphi_{2,0}&\varphi'_{2,0}\\ \varphi_{1,3}&\varphi'_{1,3}&\varphi_{2,1}&\varphi'_{2,1} \end{vmatrix}}=-W_{1}(x)W_{2}(x).\] Thus, the Wronskian of $\Phi_0^-(x)$, \dots, $\Phi_3^-(x)$ does not vanish on the real axis and there exist (see Section~\ref{ioNgc}) the matrix $2\times2$ Hamiltonian of Schr\"odinger form $H_-$ and the matrix $2\times 2$ linear differential operator of the second order 
$Q_2^-$ intertwining $H_+$ and $H_-$ such that $\Phi_0^-(x)$, \dots, $\Phi_3^-(x)$ form a canonical basis in $\ker Q_2^-$. The operator $Q_2^-$ is absolutely irreducible because any canonical basis in the kernel of an intertwining operator that could be separated from the right-hand side of $Q_2^-$ would have to be constructed from $\Phi_0^-(x)$ and $\Phi_1^-(x)$, whereas their Wronskian vanishes identically: \[\begin{vmatrix}\varphi_{1,0}&0\\\varphi_{1,1}&0\end{vmatrix}\equiv0.\] Generalization of this construction to arbitrary $n$ and $N$ is straightforward. \section*{Conclusions} Let us present in conclusion the following list of questions and problems which could be investigated in future papers. \renewcommand{\labelenumi}{\rm{(\theenumi)}} \begin{enumerate} \item By analogy with \cite{anso03} to introduce the notion of (in)dependence for matrix differential intertwining operators, to find a criterion of dependence for such operators and to investigate the questions on the maximal number of independent matrix differential intertwining operators and on a basis of these operators. \item To investigate the question on the existence in the matrix case of a matrix differential symmetry operator with properties analogous to those of a non-minimizable symmetry operator antisymmetric with respect to transposition in the scalar case \cite{anso03,anso09}. \item To investigate in detail different particular cases of intertwining relations, in particular, the case where both intertwined Hamiltonians are Hermitian, the case where both intertwined Hamiltonians are symmetric with respect to transposition and the case where all elements of the potentials of both intertwined Hamiltonians are real-valued. \item By analogy with \cite{acdi95,anca04,samsonov99,anso06,sokolov10,ferneni00,tr89,dun98,khsu99,fermura03,fermiros02',fersahe03,samsonov06} to investigate and classify irreducible and, in particular, absolutely irreducible matrix differential intertwining operators. 
\item To generalize the results to the case of a degenerate matrix coefficient at the highest power of $\partial$ in a matrix differential intertwining operator. \end{enumerate} \section*{Acknowledgments} The author is grateful to A.A. Andrianov for critical reading of this paper and valuable comments. This work was supported by the SPbSU project 11.0.64.2010.
\begin{document} \maketitle \markboth{F. Diacu, E. P\'erez-Chavela, and M. Santoprete}{The $n$-Body Problem in Spaces of Constant Curvature} \author{\begin{center} Florin Diacu\\ \smallskip {\footnotesize Pacific Institute for the Mathematical Sciences\\ and\\ Department of Mathematics and Statistics\\ University of Victoria\\ P.O.~Box 3060 STN CSC\\ Victoria, BC, Canada, V8W 3R4\\ diacu@math.uvic.ca\\ }\end{center} \begin{center} Ernesto P\'erez-Chavela\\ \smallskip {\footnotesize Departamento de Matem\'aticas\\ Universidad Aut\'onoma Metropolitana-Iztapalapa\\ Apdo.\ 55534, M\'exico, D.F., M\'exico\\ epc@xanum.uam.mx\\ }\end{center} \begin{center} Manuele Santoprete\\ \smallskip {\footnotesize Department of Mathematics\\ Wilfrid Laurier University\\ 75 University Avenue West,\\ Waterloo, ON, Canada, N2L 3C5.\\ msantopr@wlu.ca\\ }\end{center} } \vskip0.5cm \begin{center} \today \end{center} \bigskip \begin{abstract} {We generalize the Newtonian $n$-body problem to spaces of curvature $\kappa={\rm constant}$, and study the motion in the 2-dimensional case. For $\kappa>0$, the equations of motion encounter non-collision singularities, which occur when two bodies are antipodal. This phenomenon leads, on one hand, to hybrid solution singularities for as few as 3 bodies, whose corresponding orbits end up in a collision-antipodal configuration in finite time; on the other hand, it produces non-singularity collisions, characterized by finite velocities and forces at the collision instant. We also point out the existence of several classes of relative equilibria, including the hyperbolic rotations for $\kappa<0$. In the end, we prove Saari's conjecture when the bodies are on a geodesic that rotates elliptically or hyperbolically. 
We also emphasize that fixed points are specific to the case $\kappa>0$, hyperbolic relative equilibria to $\kappa<0$, and Lagrangian orbits of arbitrary masses to $\kappa=0$---results that provide new criteria towards understanding the large-scale geometry of the physical space.} \end{abstract} \newpage \tableofcontents \section{Introduction} The goal of this paper is to extend the Newtonian $n$-body problem of celestial mechanics to spaces of constant curvature. Though attempts of this kind existed for two bodies in the 19th century, they faded away after the birth of special and general relativity, to be resurrected several decades later, but only in the case $n=2$. As we will further argue, the topic we are opening here is important for understanding particle dynamics in spaces other than Euclidean, for shedding some new light on the classical case, and perhaps helping us understand the nature of the physical space. \subsection{History of the problem} The first researcher who took the idea of gravitation beyond ${\bf R}^3$ was Nikolai Lobachevsky. In 1835, he proposed a Kepler problem in the 3-dimensional hyperbolic space, ${\bf H}^3$, by defining an attractive force proportional to the inverse area of the 2-dimensional sphere of the same radius as the distance between bodies, \cite{Lob}. Independently of him, and at about the same time, J\'anos Bolyai came up with a similar idea, \cite{Bol}. These co-discoverers of the first non-Euclidean geometry had no followers in their pre-relativistic attempts until 1860, when Paul Joseph Serret\footnote{Paul Joseph Serret (1827-1898) should not be confused with another French mathematician, Joseph Alfred Serret (1819-1885), known for the Frenet-Serret formulas of vector calculus.} extended the gravitational force to the sphere ${\bf S}^2$ and solved the corresponding Kepler problem, \cite{Ser}. 
Ten years later, Ernst Schering revisited Lobachevsky's law, for which he obtained an analytic expression given by the cotangent potential we study in this paper, \cite{Sche}. Schering also wrote that Lejeune Dirichlet had told some friends that he had dealt with the same problem during his last years in Berlin\footnote{This must have happened around 1852, as claimed by Rudolph Lipschitz, \cite{Lip72}.}, \cite{Sche1}. In 1873, Rudolph Lipschitz considered the same problem in ${\bf S}^3$, but defined a potential proportional to $1/\sin({r/R})$, where $r$ denotes the distance between bodies and $R$ is the curvature radius, \cite{Lip}. He obtained the general solution of this problem in terms of elliptic functions, but his failure to provide an explicit formula invited new approaches. In 1885, Wilhelm Killing adapted Lobachevsky's idea to ${\bf S}^3$ and defined an extension of the Newtonian force given by the inverse area of a 2-dimensional sphere (in the spirit of Schering), for which he proved a generalization of Kepler's three laws, \cite{Kil10}. Another contributor was Heinrich Liebmann.\footnote{Although he signed his works as Heinrich Liebmann, his full name was Karl Otto Heinrich Liebmann (1874-1939). He did most of his work in Heidelberg and Munich.} In 1902, he showed that the orbits of the two-body problem are conics in ${\bf S}^3$ and ${\bf H}^3$ and generalized Kepler's three laws to $\kappa\ne 0$, \cite{Lie1}. One year later, Liebmann proved ${\bf S}^2$- and ${\bf H}^2$-analogues of Bertrand's theorem, \cite{Ber}, \cite{Win}, which states that there exist only two analytic central potentials in the Euclidean space for which all bounded orbits are closed, \cite{Lie2}. He also summed up his results in a book published in 1905, \cite{Lie3}. Unfortunately, this direction of research was neglected in the decades following the birth of special and general relativity. 
Starting with 1940, however, Erwin Schr\"odinger developed a quantum-mechanical analogue of the Kepler problem in ${\bf S}^2$, \cite{Schr}. Schr\"odinger used the same cotangent potential of Schering and Liebmann, which he deemed to be the natural extension of Newton's law to the sphere\footnote{``The correct form of [the] potential (corresponding to $1/r$ of the flat space) is known to be $\cot\chi$,'' \cite{Schr}, p.~14.}. Further results in this direction were obtained by Leopold Infeld, \cite{Inf}, \cite{Ste}. In 1945, Infeld and his student Alfred Schild extended this problem to spaces of constant negative curvature using a potential given by the hyperbolic cotangent of the distance. A list of the above-mentioned works also appears in \cite{Sh}, except for Serret's book, \cite{Ser}. A bibliography of works about mechanical problems in spaces of constant curvature is given in \cite{Shch2}. Several members of the Russian school of celestial mechanics, including Valeri Kozlov and Alexander Harin, \cite{Koz}, \cite{Koz2}, Alexey Borisov, Ivan Mamaev, and Alexander Kilin, \cite{Bor}, \cite{Bor1}, \cite{Bor2}, \cite{Bor3}, \cite{Kilin}, Alexey Shchepetilov, \cite{Shc}, \cite{Shc1}, \cite{Shch2}, and Tatiana Vozmischeva, \cite{Voz}, revisited the idea of the cotangent potential for the 2-body problem and considered related problems in spaces of constant curvature starting with the 1990s. The main reason for which Kozlov and Harin supported this approach was mathematical. They pointed out, as Schering, Liebmann, Schr\"odinger, Infeld, and others had insisted earlier, that (i) the potential of the classical one-body problem satisfies Laplace's equation (i.e.~it is a harmonic function), which also means that the equations of the problem are equivalent to those of the harmonic oscillator; (ii) its potential generates a central field in which all bounded orbits are closed---according to Bertrand's theorem. 
Then they showed that the cotangent potential is the only one that satisfies these properties in spaces of constant curvature and has, at the same time, meaning in celestial mechanics. The results they obtained support the idea that the cotangent potential is, so far, the best extension found for the Newtonian potential to spaces of nonzero constant curvature. Our paper brings new arguments that support this view. The latest contribution to the case $n=2$ belongs to Jos\'e Cari\~nena, Manuel Ra\~nada, and Mariano Santander, who provided a unified approach in the framework of differential geometry, emphasizing the dynamics of the cotangent potential in ${\bf S}^2$ and ${\bf H}^2$, \cite{Car} (see also \cite{Car2}, \cite{Gut}). They also proved that, in this unified context, the conic orbits known in Euclidean space extend naturally to spaces of constant curvature, in agreement with the results obtained by Liebmann, \cite{Sh}. \subsection{Relativistic $n$-body problems} Before trying to approach this problem with contemporary tools, we were compelled to ask why the direction of research proposed by Lobachevsky was neglected after the birth of relativity. Perhaps this phenomenon occurred because relativity hoped not only to answer the questions this research direction had asked, but also to regard them from a better perspective than classical mechanics, whose days seemed to be numbered. Things, however, didn't turn out this way. Research on the classical Newtonian $n$-body problem continued and even flourished in the decades to come, and the work on the case $n=2$ in spaces of constant curvature was revived after several decades. But how did relativity fare with respect to this fundamental problem of any gravitational theory? Although the most important success of relativity was in cosmology and related fields, there were attempts to discretize Einstein's equations and define an $n$-body problem. 
Remarkable in this direction were the contributions of Jean Chazy, \cite{Cha}, Tullio Levi-Civita, \cite{Civ}, \cite{Civita}, Arthur Eddington, \cite{Edd}, Albert Einstein, Leopold Infeld\footnote{A vivid description of the collaboration between Einstein and Infeld appears in \cite{Infeld}.}, and Banesh Hoffmann, \cite{Ein}, and Vladimir Fock, \cite{Fock}. Subsequent efforts led to refined post-Newtonian approximations (see, e.g., \cite{Soff1}, \cite{Soff2}, \cite{Soff3}), which prove useful in practice, from understanding the motion of artificial satellites---a field with applications in geodesy and geophysics---to using the Global Positioning System (GPS), \cite{Soff4}. But the equations of the $n$-body problem derived from relativity prove complicated even for $n=2$, and they are not prone to analytical studies similar to the ones done in the classical case. This is probably the reason why the need of some simpler equations revived the research on the motion of two bodies in spaces of constant curvature. Nobody, however, considered the general $n$-body problem\footnote{One of us (Ernesto P\'erez-Chavela), together with his student Luis Franco-P\'erez, recently analyzed a restricted 3-body problem in ${\bf S}^1$, \cite{Fra}, in a more restrained context than the one we provide here.} for $n\ge 3$. The lack of developments in this direction may again rest with the complicated form the equations of motion take if one starts from the idea of defining the potential in terms of the intrinsic distance in the framework of differential geometry. Such complications might have discouraged all the attempts to generalize the problem to more than two bodies. 
\subsection{Our approach} The present paper overcomes the above-mentioned difficulties encountered in defining a meaningful $n$-body problem amenable to the same mathematical depth as achieved in the classical case, by replacing the differential-geometric approach used for $n=2$ in the case of the cotangent potential with the variational method of constrained Lagrangian dynamics. Also, the technical complications that arise in understanding the motion within the standard models of the Bolyai-Lobachevsky plane (the Klein-Beltrami disk, the Poincar\'e upper-half-plane, and the Poincar\'e disk) are bypassed through the less known Weierstrass hyperboloidal model (see Appendix), which often provides analogies with the results we obtain in the spherical case. This model also allows us to use hyperbolic rotations---a class of isometries---to bring to light some unexpected solutions of the equations of motion. The history of the problem shows that there is no unique way of extending the classical idea of gravitation to spaces of constant curvature, but that the cotangent potential is the most natural candidate. Therefore we take this potential as a starting point of our approach, though some of our results---as for example Saari's conjecture in the geodesic case---do not use this potential explicitly, only its property of being a homogeneous function of degree zero. Our generalization recovers the Newtonian law when the curvature is zero. Moreover, it provides a unified context, in which the potential varies continuously with the curvature $\kappa$. The same continuity occurs for the basic results when the curvature tends to zero. For instance, the set of closed orbits of the Kepler problem on non-zero-curvature surfaces tends to the set of ellipses in the Euclidean plane when $\kappa\to 0$ (see, e.g., \cite{Car} or \cite{Lie1}). 
\section{Summary of results} \subsection{Equations of motion} In Section 3, we extend the Newtonian potential of the $n$-body problem to spaces of constant curvature, $\kappa$, for any finite dimension. For $\kappa\ne 0$, the potential turns out to be a homogeneous function of degree zero. We also show the existence of an energy integral as well as of the integrals of the angular momentum. Like in general relativity, there are no integrals of the center of mass and linear momentum. But unlike in relativity, where---in the passage from continuous matter to discrete bodies---the fact that forces don't cancel at the center of mass leads to difficulties in defining infinitesimal sizes for finite masses, \cite{Civ}, we do not encounter such problems here. We assume that the laws of classical mechanics hold for point masses moving on manifolds, so we can apply the results of constrained Lagrangian dynamics to derive the equations of motion. Thus two kinds of forces act on bodies: (i) those given by the mutual interaction between particles, represented by the gradient of the potential, and (ii) those that occur due to the constraints, which involve both position and velocity terms. \subsection{Singularities} In Section 4 we focus on singularities, and distinguish between singularities of the equations of motion and solution singularities. For any $\kappa\ne 0$, the equations of motion become singular at collisions, the same as in the Euclidean case. The case $\kappa>0$, however, introduces some new singularities, which we call antipodal because they occur when two bodies are at the opposite ends of a diameter of the sphere. The set of singularities is endowed with a natural dynamical structure. When the motion of three bodies takes place along a geodesic, solutions close to binary collisions and away from antipodal singularities end up in collision, so binary collisions are attractive. 
But antipodal singularities are repulsive in the sense that no matter how close two bodies are to an antipodal singularity, they never reach it if the third body is far from a collision with any of them. Solution singularities arise naturally from the question of existence and uniqueness of initial value problems. For nonsingular initial conditions, standard results of the theory of differential equations ensure local existence and uniqueness of an analytic solution defined in some interval $[0,t^+)$. This solution can be analytically extended to an interval $[0,t^*)$, with $0<t^+\le t^*\le\infty$. If $t^*=\infty$, the solution is globally defined. If $t^*<\infty$, the solution is called singular and is said to have a singularity at time $t^*$. While the existence of solutions ending in collisions is obvious for any value of $\kappa$, the occurrence of other singularities is not easy to demonstrate. Nevertheless, we prove that some hybrid singular solutions exist in the 3-body problem with $\kappa>0$. These orbits end up in finite time in a collision-antipodal singularity. Whether other types of non-collision singularities exist, like the pseudocollisions of the Euclidean case, remains an open question. The main reason why this problem is not easy to answer rests with the nonexistence of the center-of-mass integrals. Another class of solutions connected to collision-antipodal configurations is particularly interesting. We show that, for $n=3$, there are orbits that reach such a configuration at some instant $t^*$ but remain analytic at this point because the forces and the velocities involved remain finite at $t^*$. Such a motion can, of course, be analytically continued beyond $t^*$. To our knowledge, this is the first natural example of a collision that is not a singularity.
\subsection{Relative equilibria} The rest of this paper, except for the Appendix, focuses on the results we obtained in ${\bf S}^2$ and ${\bf H}^2$, mainly because these two surfaces are representative of the cases $\kappa>0$ and $\kappa<0$, respectively. Indeed, the results we proved for these surfaces can be extended to different curvatures of the same sign by a mere rescaling. Sections 5 and 6 deal with relative equilibria in ${\bf S}^2$ and ${\bf H}^2$. In ${\bf S}^2$ we have only elliptic relative equilibria. By contrast, the relative equilibria in ${\bf H}^2$ are of two kinds: elliptic relative equilibria, generated by elliptic rotations, and hyperbolic relative equilibria, generated by hyperbolic rotations (see Appendix). Parabolic relative equilibria, generated by parabolic rotations, do not exist. Some of the results we obtain in ${\bf S}^2$ have analogues in ${\bf H}^2$; others are specific to each case. Theorems \ref{ngonS} and \ref{ngonH}, for instance, are dual to each other, whereas Theorem \ref{fix} takes place only in ${\bf S}^2$. The latter identifies a class of fixed points of the equations of motion. More precisely, we prove that if an odd number $n$ of equal masses are placed, initially at rest, at the vertices of a regular $n$-gon inscribed in a great circle, then the bodies won't move. The same is true for four equal masses placed at the vertices of a regular tetrahedron inscribed in ${\bf S}^2$, but---due to the occurrence of antipodal singularities---fails to hold for the other regular polyhedra: octahedron (6 bodies), cube (8 bodies), dodecahedron (12 bodies), and icosahedron (20 bodies), as well as in the case of geodesic $n$-gons with an even number of bodies. Theorem \ref{nofixS} shows that there are no fixed points for $n$ bodies within any hemisphere of ${\bf S}^2$. Its hyperbolic analogue, stated in Theorem \ref{nofixH}, proves the nonexistence of fixed points in ${\bf H}^2$.
These two results are in agreement with the Euclidean case in the sense that the $n$-body problem has no fixed points within distances, say, not larger than the radius of the visible universe. It is also natural to ask whether fixed points can generate relative equilibria. Theorem \ref{fixrel} shows that if $n$ masses $m_1,m_2, \dots, m_n$ lie initially on a great circle of ${\bf S}^2$ such that the mutual forces are in equilibrium, then any uniform rotation applied to the system generates a relative equilibrium. Theorem \ref{rengon} states that the only way to generate an elliptic relative equilibrium from an initial $n$-gon configuration taken on a great circle, as in Theorem \ref{fix}, is to assign suitable velocities in the plane of the $n$-gon. So a regular polygon of this kind can rotate only in a plane orthogonal to the rotation axis. Theorem \ref{ngonS} and its hyperbolic analogue, Theorem \ref{ngonH}, show that $n$-gons of any admissible size can rotate on the same circle, both in ${\bf S}^2$ and ${\bf H}^2$. Again, these results agree with the Euclidean case. But something interesting happens with the equilateral (Lagrangian) solutions. Unlike in Euclidean space, elliptic relative equilibria moving in the same plane of ${\bf R}^3$ can be generated only when the masses move on the same circle and are therefore equal, as we prove in Theorems \ref{lagranS} and \ref{lagranH}. Thus Lagrangian solutions with unequal masses are specific to the Euclidean case. Theorems \ref{regeo3} and \ref{regeo3H} show that analogues of the collinear (Eulerian) orbits in the 3-body problem of the classical case exist in ${\bf S}^2$ and ${\bf H}^2$, respectively. While nothing surprising happens in ${\bf H}^2$, where we prove the existence of such solutions of any size, an interesting phenomenon takes place in ${\bf S}^2$.
Assume that one body lies on the rotation axis (which contains one height of the triangle), while the other two are at the opposite ends of a rotating diameter on some non-geodesic circle of ${\bf S}^2$. Then elliptic relative equilibria exist while the bodies are at initial positions within the same hemisphere. When the rotating bodies are placed on the equator, however, they encounter an antipodal singularity. Below the equator, solutions exist again until the bodies form an equilateral triangle. By Theorem \ref{rengon}, any $n$-gon with an odd number of sides can rotate only in its own plane, so the (vertical) equilateral triangle is a fixed point but cannot lead to an elliptic relative equilibrium. If the rotating bodies are then placed below the equilateral position, solutions fail to exist. But the masses don't have to be all equal. Eulerian solutions exist if, say, the non-rotating body has mass $m$ and the other two have mass $M$. If $M\ge 4m$, these orbits occur for all $z\ne 0$. Again, these results prove that, as long as we do not exceed reasonable distances, such as the radius of the visible universe, the behavior of elliptic relative equilibria lying on a rotating geodesic is similar to that of the Eulerian solutions in the Euclidean case. We then study hyperbolic relative equilibria around a point and along a (usually non-geodesic) hyperbola. Theorem \ref{noreH} proves that such orbits do not exist on fixed geodesics of ${\bf H}^2$, so the bodies cannot chase each other along a geodesic while maintaining the same initial distances. But Theorem \ref{hyp} proves the existence of hyperbolic relative equilibria in ${\bf H}^2$ for three equal masses. The bodies move along hyperbolas of the hyperboloid that models ${\bf H}^2$, remaining all the time on a moving geodesic and maintaining the initial distances among themselves. These orbits resemble fighter planes flying in formation more than celestial bodies moving under the action of gravity alone.
The result also holds if the mass in the middle differs from the other two. The last result of this section, Theorem \ref{thpar}, shows that parabolic relative equilibria do not exist. \subsection{Saari's conjecture} Our extension of the Newtonian $n$-body problem to spaces of constant curvature also reveals new aspects of Saari's conjecture. Proposed in 1970 by Don Saari in the Euclidean case, Saari's conjecture claims that solutions with constant moment of inertia are relative equilibria. This problem generated a lot of interest from the very beginning, but also several failed attempts at a proof. The discovery of the figure-eight solution, which has an almost constant moment of inertia, and whose existence was proved in 2000 by Alain Chenciner and Richard Montgomery, \cite{Che}, renewed the interest in this conjecture. Several results showed up not long thereafter. The case $n=3$ was solved in 2005 by Rick Moeckel, \cite{Moe}; the collinear case, for any number of bodies and the more general potentials that involve only mutual distances, was settled the same year by the authors of this paper, \cite{Dia2}. Saari's conjecture is also connected to the Chazy-Wintner-Smale conjecture, \cite{Sma}, \cite{Win}, which asks whether the number of central configurations is finite for $n$ given bodies in Euclidean space. Since relative equilibria have elliptic and hyperbolic versions in ${\bf H}^2$, Saari's conjecture raises new questions for $\kappa<0$. We answer them in Theorem \ref{Saari} of Section 7 for the case when the bodies are restricted to a geodesic that rotates elliptically or hyperbolically. \bigskip An Appendix in which we present some basic facts about the Weierstrass model of the hyperbolic plane, together with some historical remarks, closes our paper. We suggest that readers unfamiliar with this model take a look at the Appendix before getting into the technical details related to our results.
\subsection{Some physical remarks} Does our gravitational model have any connection with the physical reality? Since there is no unique extension of the Newtonian $n$-body problem to spaces of constant curvature, is our generalization meaningful from the physical point of view or does it lead only to some interesting mathematical properties? We followed the tradition of the cotangent potential, which seems the most natural candidate. But since the debate on the nature of the physical space is open, the only way to justify this model is through mathematical results. As we will further argue, the properties we obtained not only match the Euclidean ones, but also provide a classical explanation of the cosmological scenario, in agreement with the basic conclusions of general relativity. But before getting into the physical aspect, let us remark that our model is based on mathematical principles, which lead to a meaningful physical interpretation. As we already mentioned, the cotangent potential preserves two fundamental properties: (i) it is harmonic for the one-body problem and (ii) it generates a central field in which all bounded orbits are closed. Other results that support the cotangent potential are based on the idea of central (or gnomonic) projection, \cite{App}. By taking the central projection on the sphere for the planar Kepler problem, Paul Appell obtained the cotangent potential. This idea can be generalized by projecting the planar Kepler problem to any surface of revolution, as one of us (Manuele Santoprete) proved, \cite{Santoprete}. In 1992, Kozlov and Harin showed that the only central potential that satisfies the fundamental properties (i) and (ii) in ${\bf S}^2$ and has meaning in celestial mechanics is the cotangent of the distance, \cite{Koz}. This fact had been known to Infeld for the quantum mechanical version of the potential, \cite{Inf}.
But since any continuously differentiable and non-constant harmonic function attains no maximum or minimum on the sphere, the existence of two distinct singularities (the collisional and the antipodal---in our case) is not unexpected. And though a force that becomes infinite for points at opposite poles may seem counterintuitive in a gravitational framework, it explains the cosmological scenario. Indeed, while there is no doubt that $n$ point masses ejecting from a total collapse would move forever in spaces with $\kappa\le0$ for large initial conditions, in agreement with general relativity, it is not clear what happens for $\kappa>0$. But the energy relation \eqref{enerS} shows that, in spherical space, the current expansion of the universe cannot last forever. For a fixed energy constant, $h$, the potential energy, $-U$, would become positive and very large if one or more pairs of particles were to come close to antipodal singularities. Therefore in a homogeneous universe, highly populated with non-colliding particles, the system could never expand beyond the equator (assuming that the initial ejection took place at one pole). No matter how large (but fixed) the energy constant is, when the potential energy reaches the value $h$, the kinetic energy becomes zero, so all the particles stop simultaneously and the motion reverses. Thus, for $\kappa>0$, the cotangent potential recovers the growth of the system to a maximum size and the reversal of the expansion independently of the value of the energy constant. Without antipodal singularities, the reversal could take place only for certain initial conditions. This conclusion is reached without introducing a cosmological force and differently from how it was obtained in the classical model proposed by \'Elie Cartan, \cite{Cart1}, \cite{Cart2}, and shown by Frank Tipler to be as rigorous as Friedmann's cosmology, \cite{Tip1}, \cite{Tip2}.
Another result that suggests the validity of the cotangent potential is the nonexistence of fixed points. They don't show up in the Euclidean case, and neither do they appear in this model within the observable universe. The properties we proved for relative equilibria are also in agreement with the classical $n$-body problem, the only exception being the Lagrangian solutions for $\kappa\ne 0$, which, unlike in the Euclidean case, must have equal masses and move on the same circle. This distinction adds to the strength of the model because, even in the Euclidean case, the arbitrariness of the masses in the Lagrangian solutions is a peculiar property. At least two arguments support this point of view. First, relative equilibria generated from regular polygons, except the equilateral triangle, exist only if the masses are equal. The second argument is related to central configurations, which generate relative equilibria in the Euclidean case. One of us (Florin Diacu) proved that among attraction forces given by symmetric laws of masses, $\gamma(m_i,m_j)=\gamma(m_j,m_i)$, equilateral central configurations with unequal masses occur only when $\gamma(m_i,m_j)= c\ \! m_im_j$, where $c$ is a nonzero constant, \cite{Diacu}. The fact that, for $\kappa\ne 0$, relative equilibria are equilateral only if the masses are equal thus means that Lagrangian solutions of arbitrary masses characterize the Euclidean space. Such orbits exist in nature, the best-known example being the equilateral triangle formed by the Sun, Jupiter, and the Trojan asteroids. Therefore our result reinforces the fact that space is Euclidean within distances comparable to those of our solar system.
This fact was not known during the time of Gauss, who apparently tried to determine the nature of space by measuring the angles of triangles with vertices some tens of kilometers apart.\footnote{Arthur Miller argues that these experiments never took place, \cite{Mill}.} Since we cannot measure the angles of cosmic triangles, our result opens up a new possibility. Any evidence of a Lagrangian solution involving galaxies (or clusters of galaxies) of unequal masses could be used as an argument for the flatness of the physical space for distances comparable to the size of the triangle. Similarly, hyperbolic relative equilibria would show that space has negative curvature. \section{Equations of motion} We derive in this section a Newtonian $n$-body problem on surfaces of constant curvature. The equations of motion we obtain are simple enough to allow an analytic approach. At the end, we provide a straightforward generalization of these equations to spaces of constant curvature of any finite dimension. \subsection{Unified trigonometry} Let us first consider what, following \cite{Car}, we will call trigonometric $\kappa$-functions, which unify elliptical and hyperbolic trigonometry. We define the $\kappa$-sine, ${\rm sn}_\kappa$, as $$ {\rm sn}_{\kappa}(x):=\left\{ \begin{array}{rl} {\kappa}^{-1/2}\sin{\kappa}^{1/2}x & {\rm if }\ \ \kappa>0\\ x & {\rm if }\ \ \kappa=0\\ ({-\kappa})^{-{1/2}}\sinh({-\kappa})^{1/2}x & {\rm if }\ \ \kappa<0, \end{array} \right. $$ the $\kappa$-cosine, ${\rm csn}_\kappa$, as $$ {\rm csn}_{\kappa}(x):=\left\{ \begin{array}{rl} \cos{\kappa}^{1/2}x & {\rm if }\ \ \kappa>0\\ 1 & {\rm if }\ \ \kappa=0\\ \cosh({-\kappa})^{1/2}x & {\rm if }\ \ \kappa<0, \end{array} \right.
$$ as well as the $\kappa$-tangent, ${\rm tn}_\kappa$, and $\kappa$-cotangent, ${\rm ctn}_\kappa$, as $${\rm tn}_{\kappa}(x):={{\rm sn}_{\kappa}(x)\over {\rm csn}_{\kappa}(x)}\ \ \ {\rm and}\ \ \ {\rm ctn}_{\kappa}(x):={{\rm csn}_{\kappa}(x)\over {\rm sn}_{\kappa}(x)},$$ respectively. The entire trigonometry can be rewritten in this unified context, but the only identity we will further need is the fundamental formula $$ {\kappa}\ {\rm sn}_{\kappa}^2(x)+{\rm csn}_{\kappa}^2(x)=1. $$ \subsection{Differential-geometric approach} In a 2-dimensional Riemann space, we can define geodesic polar coordinates, $(r,\phi)$, by fixing an origin and an oriented geodesic through it. If the space has constant curvature $\kappa$, the range of $r$ depends on $\kappa$; namely $r\in[0,{\pi/(2\kappa^{1/2})}]$ for $\kappa>0$ and $r\in[0,\infty)$ for $\kappa\le 0$; in all cases, $\phi\in[0,2\pi]$. The line element is given by $$ds_{\kappa}^2=dr^2+{\rm sn}_{\kappa}^2(r)d\phi^2.$$ In ${\bf S}^2, {\bf R}^2$, and ${\bf H}^2$, the line element corresponds to $\kappa=1,0,$ and $-1$, respectively, and reduces therefore to $$ds_1^2=dr^2+(\sin^2 r)d\phi^2, \ \ \ ds_0^2=dr^2+r^2d\phi^2,\ \ {\rm and}\ \ ds_{-1}^2=dr^2+(\sinh^2 r)d\phi^2.$$ In \cite{Car}, the Lagrangian of the Kepler problem is defined as $$L_{\kappa}(r,\phi, v_r, v_{\phi})={1\over 2}[v_r^2+{\rm sn}_{\kappa}^2(r)v_{\phi}^2]+ U_{\kappa}(r),$$ where $v_r$ and $v_{\phi}$ represent the polar components of the velocity, and $-U$ is the potential, where $$U_{\kappa}(r)=G\ {\rm ctn}_{\kappa}(r)$$ is the force function, $G>0$ being the gravitational constant. This means that the corresponding force functions in ${\bf S}^2, {\bf R}^2$, and ${\bf H}^2$ are, respectively, $$U_1(r)={G\cot r}, \ \ \ U_0(r)={G r^{-1}},\ \ \ {\rm and} \ \ \ U_{-1}(r)={G\coth r}.$$ In this setting, the case $\kappa=0$ separates the potentials with $\kappa>0$ and $\kappa<0$ into classes exhibiting different qualitative behavior. 
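As a quick numerical aside (illustrative only, not part of the mathematical development), the defining properties of the $\kappa$-functions are easy to confirm in a few lines of Python; the function names \texttt{sn}, \texttt{csn}, \texttt{ctn} below are ad hoc choices. The sketch checks the fundamental formula, the Newtonian limit ${\rm ctn}_\kappa(r)\to r^{-1}$ as $\kappa\to 0$, and the derivative identity ${d\over dr}\,{\rm ctn}_\kappa(r)=-\,{\rm sn}_\kappa^{-2}(r)$:

```python
import math

def sn(kappa, x):
    """kappa-sine: sin, identity, or sinh, depending on the sign of kappa."""
    if kappa > 0:
        return math.sin(math.sqrt(kappa)*x)/math.sqrt(kappa)
    if kappa < 0:
        return math.sinh(math.sqrt(-kappa)*x)/math.sqrt(-kappa)
    return x

def csn(kappa, x):
    """kappa-cosine: cos, 1, or cosh."""
    if kappa > 0:
        return math.cos(math.sqrt(kappa)*x)
    if kappa < 0:
        return math.cosh(math.sqrt(-kappa)*x)
    return 1.0

def ctn(kappa, x):
    """kappa-cotangent csn/sn; for kappa = 0 this is the Newtonian 1/x."""
    return csn(kappa, x)/sn(kappa, x)

# Fundamental formula: kappa*sn^2 + csn^2 = 1 for every kappa.
for kappa in (2.0, 1.0, 0.0, -1.0, -2.0):
    assert abs(kappa*sn(kappa, 0.7)**2 + csn(kappa, 0.7)**2 - 1.0) < 1e-12

# Continuity at kappa = 0: ctn_kappa(r) -> 1/r as kappa -> 0 from either side.
assert abs(ctn(1e-9, 0.5) - 2.0) < 1e-6
assert abs(ctn(-1e-9, 0.5) - 2.0) < 1e-6

# Derivative identity (d/dr) ctn_kappa(r) = -1/sn_kappa(r)^2, checked by
# central differences.
h = 1e-6
for kappa in (1.0, -1.0):
    for r in (0.4, 0.9, 1.3):
        d = (ctn(kappa, r + h) - ctn(kappa, r - h))/(2*h)
        assert abs(sn(kappa, r)**2 * d + 1.0) < 1e-4
```

The derivative identity is what makes the flux $4\pi\,{\rm sn}_\kappa^2(r)\,U_\kappa'(r)$ of the force field independent of $r$ in the 3-dimensional case.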
The passage from $\kappa>0$ to $\kappa<0$ through $\kappa=0$ takes place continuously. Moreover, the potential is spherically symmetric and satisfies Gauss's law in a 3-dimensional space of constant curvature $\kappa$. This law requires that the flux of the radial force field across a sphere of radius $r$ be a constant independent of $r$. Since the area of this sphere is $4\pi\,{\rm sn}_{\kappa}^2(r)$ and ${d\over dr}U_{\kappa}(r)=-G\,{\rm sn}_{\kappa}^{-2}(r)$, the flux, $4\pi\,{\rm sn}_{\kappa}^2(r)\times{d\over dr}U_{\kappa}(r)=-4\pi G$, is indeed independent of $r$, so the potential satisfies Gauss's law. As in the Euclidean case, this generalized potential does not satisfy Gauss's law in the 2-dimensional space. The results obtained in \cite{Car} show that the force function $U_{\kappa}$ leads to the expected conic orbits on surfaces of constant curvature, and thus justify this extension of the Kepler problem to $\kappa\ne 0$. \subsection{The potential} To generalize the above setting of the Kepler problem to the $n$-body problem on surfaces of constant curvature, let us start with some notations. Consider $n$ bodies of masses $m_1,\dots,m_n$ moving on a surface of constant curvature $\kappa$. When $\kappa>0$, the surfaces are spheres of radii ${\kappa}^{-1/2}$ given by the equation $x^2+y^2+z^2=\kappa^{-1}$; for $\kappa=0$, we recover the Euclidean plane; and if $\kappa<0$, we consider the Weierstrass model of hyperbolic geometry (see Appendix), which is devised on the sheets with $z>0$ of the hyperboloids of two sheets $x^2+y^2-z^2={\kappa}^{-1}.$ The coordinates of the body of mass $m_i$ are given by ${\bf q}_i=(x_i,y_i,z_i)$ and a constraint, depending on $\kappa$, that restricts the motion of this body to one of the above-described surfaces.
In this paper, ${\widetilde\nabla}_{{\bf q}_i}$ denotes either of the gradient operators $$ \nabla_{{\bf q}_i}=(\partial_{x_i},\partial_{y_i},\partial_{z_i}),\ {\rm for}\ \ \kappa\ge0,\ \ {\rm or}\ \ {\overline\nabla}_{{\bf q}_i}=(\partial_{x_i},\partial_{y_i},-\partial_{z_i}),\ {\rm for}\ \ \kappa<0, $$ with respect to the vector ${\bf q}_i$, and $\widetilde\nabla$ stands for the operator $(\widetilde\nabla_{{\bf q}_1},\dots,\widetilde\nabla_{{\bf q}_n})$. For ${\bf a}=(a_x,a_y,a_z)$ and ${\bf b}=(b_x,b_y,b_z)$ in ${\bf R}^3$, we define ${\bf a}\odot{\bf b}$ as either of the inner products $${\bf a}\cdot{\bf b}:=(a_xb_x+a_yb_y+a_zb_z) \ \ {\rm for}\ \ \kappa\ge0,$$ $${\bf a}\boxdot{\bf b}:=(a_xb_x+a_yb_y-a_zb_z) \ \ {\rm for}\ \ \kappa<0,$$ the latter being the Lorentz inner product (see Appendix). We also define ${\bf a}\otimes{\bf b}$ as either of the cross products $${\bf a}\times{\bf b}:=(a_yb_z-a_zb_y, a_zb_x-a_xb_z, a_xb_y-a_yb_x) \ \ {\rm for}\ \ \kappa\ge0,$$ $${\bf a}\boxtimes{\bf b}:=(a_yb_z-a_zb_y, a_zb_x-a_xb_z, a_yb_x-a_xb_y)\ \ {\rm for}\ \ \kappa<0.$$ The distance between $\bf a$ and $\bf b$ on the surface of constant curvature $\kappa$ is then given by $$ d_{\kappa}({\bf a},{\bf b}):= \begin{cases} \kappa^{-1/2}\cos^{-1}(\kappa{\bf a}\cdot{\bf b}),\ \ \ \ \ \ \ \ \ \kappa >0\cr |{\bf a}-{\bf b}|, \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \! \ \hspace{2 pt} \kappa=0\cr ({-\kappa})^{-1/2}\cosh^{-1}(\kappa{\bf a}\boxdot{\bf b}),\hspace{4 pt} \kappa<0,\cr \end{cases} $$ where the vertical bars denote the standard Euclidean norm. In particular, the distances in ${\bf S}^2$ and ${\bf H}^2$ are $$ d_1({\bf a},{\bf b})=\cos^{-1}({\bf a}\cdot{\bf b}),\ \ d_{-1}({\bf a},{\bf b})=\cosh^{-1}(-{\bf a}\boxdot{\bf b}), $$ respectively. Notice that $d_0$ is the limiting case of $d_\kappa$ when $\kappa\to 0$. 
Indeed, for both $\kappa>0$ and $\kappa<0$, the vectors $\bf a$ and $\bf b$ tend to infinity and become parallel, while the surfaces tend to a Euclidean plane; therefore the length of the arc between the vectors tends to the Euclidean distance. We will further define a potential in ${\bf R}^3$ if $\kappa>0$, and in the $3$-dimensional Minkowski space $\mathcal{M}^3$ (see Appendix) if $\kappa<0$, such that we can use a variational method to derive the equations of motion. For this purpose we need to extend the distance to these spaces. We do this by redefining the distance as $$ d_{\kappa}({\bf a},{\bf b}):= \begin{cases} \kappa^{-1/2}\cos^{-1}{\kappa{\bf a}\cdot{\bf b}\over\sqrt{\kappa{\bf a}\cdot{\bf a}} \sqrt{\kappa{\bf b}\cdot{\bf b}}},\ \ \ \ \ \ \ \ \! \ \ \kappa >0\cr |{\bf a}-{\bf b}|, \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \! \ \hspace{0.82cm} \kappa=0\cr ({-\kappa})^{-1/2}\cosh^{-1}{\kappa{\bf a}\boxdot{\bf b}\over\sqrt{\kappa{\bf a}\boxdot{\bf a}}\sqrt{\kappa{\bf b}\boxdot{\bf b}}},\hspace{4.5 pt} \kappa<0.\cr \end{cases} $$ Notice that this new definition is identical to the previous one when we restrict the vectors ${\bf a}$ and ${\bf b}$ to the spheres $x^2+y^2+z^2=\kappa^{-1}$ or the hyperboloids $x^2+y^2-z^2=\kappa^{-1}$, but is also valid for any vectors $\bf a$ and $\bf b$ in ${\bf R}^3$ and $\mathcal{M}^3$, respectively. From now on we will rescale the units such that the gravitational constant $G$ is $1$. We thus define the potential of the $n$-body problem as the function $-U_\kappa({\bf q})$, where \begin{equation*} U_\kappa({\bf q}):={1\over 2}\sum_{i=1}^n\sum_{j=1,j\ne i}^n{m_im_j{\rm ctn}_\kappa ({d}_\kappa({\bf q}_i,{\bf q}_j))} \label{defpot} \end{equation*} stands for the force function, and ${\bf q}=({\bf q}_1,\dots, {\bf q}_n)$ is the configuration of the system. Notice that ${\rm ctn}_0({d}_0({\bf q}_i,{\bf q}_j))=|{\bf q}_i-{\bf q}_j|^{-1}$, which means that we recover the Newtonian potential in the Euclidean case.
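The behavior of $d_\kappa$ as $\kappa\to 0$ can also be illustrated numerically. The following Python sketch (all names are ad hoc; only the representative values $\kappa=\pm 1$ and small $\kappa>0$ are sampled) evaluates $d_\kappa$ on points constrained to the corresponding surfaces and checks that, for a fixed arc length, the Euclidean chord approaches the intrinsic distance as $\kappa\to 0$:

```python
import math

def dist(kappa, a, b):
    """Geodesic distance d_kappa for points on x^2+y^2+z^2 = 1/kappa
    (kappa > 0) or on x^2+y^2-z^2 = 1/kappa (kappa < 0)."""
    if kappa > 0:
        dot = a[0]*b[0] + a[1]*b[1] + a[2]*b[2]        # Euclidean product
        return math.acos(kappa*dot)/math.sqrt(kappa)
    if kappa < 0:
        lor = a[0]*b[0] + a[1]*b[1] - a[2]*b[2]        # Lorentz product
        return math.acosh(kappa*lor)/math.sqrt(-kappa)
    return math.dist(a, b)

def on_sphere(kappa, theta):
    """A point on the circle z = 0 of the sphere of radius kappa^{-1/2}."""
    R = 1/math.sqrt(kappa)
    return (R*math.cos(theta), R*math.sin(theta), 0.0)

# Fix the arc length s; the intrinsic distance equals s for every kappa > 0.
s = 1.0
for kappa in (1.0, 0.1, 0.001):
    a, b = on_sphere(kappa, 0.0), on_sphere(kappa, s*math.sqrt(kappa))
    assert abs(dist(kappa, a, b) - s) < 1e-9

# As kappa -> 0 the Euclidean chord |a - b| approaches the arc length s.
a, b = on_sphere(0.001, 0.0), on_sphere(0.001, s*math.sqrt(0.001))
assert abs(math.dist(a, b) - s) < 1e-3

# Hyperbolic check: points on the hyperboloid x^2+y^2-z^2 = -1 (kappa = -1).
def on_hyperboloid(t):
    return (math.sinh(t), 0.0, math.cosh(t))

a, b = on_hyperboloid(0.3), on_hyperboloid(1.1)
assert abs(dist(-1.0, a, b) - 0.8) < 1e-9
```

The hyperbolic assertion reflects the identity ${\bf a}\boxdot{\bf b}=-\cosh(t_2-t_1)$ for two points parametrized by hyperbolic angle on the same branch.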
Therefore the potential $U_\kappa$ varies continuously with the curvature $\kappa$. Now that we defined a potential that satisfies the basic continuity condition we required of any extension of the $n$-body problem beyond the Euclidean space, we will focus on the case $\kappa\ne0$. A straightforward computation shows that \begin{equation} U_\kappa({\bf q})={1\over 2}\sum_{i=1}^n\sum_{j=1,j\ne i}^n{m_im_j (\sigma\kappa)^{1/2}{\kappa{\bf q}_i\odot{\bf q}_j\over\sqrt{\kappa{\bf q}_i \odot{\bf q}_i}\sqrt{\kappa{\bf q}_j\odot{\bf q}_j}}\over \sqrt{\sigma-\sigma\Big({\kappa{\bf q}_i\odot{\bf q}_j\over \sqrt{\kappa{\bf q}_i \odot{\bf q}_i}\sqrt{\kappa{\bf q}_j\odot{\bf q}_j}}\Big)^2}}, \ \ \kappa\ne 0, \label{pothom} \end{equation} where $$ \sigma= \begin{cases} +1, \ \ {\rm for} \ \ \kappa>0\cr -1, \ \ {\rm for} \ \ \kappa<0.\cr \end{cases} $$ \subsection{Euler's formula} Notice that $U_{\kappa}(\eta{\bf q})=U_\kappa({\bf q})=\eta^0U_{\kappa}({\bf q})$ for any $\eta\ne 0$, which means that the potential is a homogeneous function of degree zero. But for ${\bf q}$ in ${\bf R}^{3n}$, homogeneous functions $F:{\bf R}^{3n}\to{\bf R}$ of degree $\alpha$ satisfy Euler's formula, ${\bf q}\cdot\nabla F({\bf q})=\alpha F({\bf q})$. With our notations, Euler's formula can be written as ${\bf q}\odot\widetilde\nabla{F({\bf q})}=\alpha F({\bf q})$. Since $\alpha=0$ for $U_{\kappa}$ with $\kappa\ne 0$, we conclude that \begin{equation} {\bf q}\odot\widetilde\nabla U_{\kappa}({\bf q})=0. 
\end{equation} We can also write the force function as $U_\kappa({\bf q})={1\over 2}\sum_{i=1}^nU_\kappa^i({\bf q}_i)$, where $$ U_\kappa^i({\bf q}_i):= \sum_{j=1,j\ne i}^n{m_im_j(\sigma\kappa)^{1/2}{\kappa{\bf q}_i\odot{\bf q}_j\over\sqrt{\kappa{\bf q}_i\odot{\bf q}_i}\sqrt{\kappa{\bf q}_j\odot{\bf q}_j}}\over\sqrt{\sigma-\sigma\Big({\kappa{\bf q}_i\odot{\bf q}_j\over\sqrt{\kappa{\bf q}_i\odot{\bf q}_i}\sqrt{\kappa{\bf q}_j\odot{\bf q}_j}}\Big)^2}}, \ \ i=1,\dots,n, $$ are also homogeneous functions of degree $0$. Applying Euler's formula for functions $F:{\bf R}^3\to{\bf R}$, we obtain that ${\bf q}_i\odot\widetilde\nabla_{{\bf q}_i} U_\kappa^i({\bf q})=0$. Then using the identity $\widetilde\nabla_{{\bf q}_i}U_\kappa({\bf q})=\widetilde\nabla_{{\bf q}_i} U_\kappa^i({\bf q}_i)$, we can conclude that \begin{equation} {\bf q}_i\odot\widetilde\nabla_{{\bf q}_i} U_\kappa({\bf q})=0, \ \ i=1,\dots,n. \label{eul} \end{equation} \subsection{Derivation of the equations of motion} To obtain the equations of motion for $\kappa\ne 0$, we will use a variational method applied to the force function (\ref{pothom}). The Lagrangian of the $n$-body system has the form $$ L_\kappa({\bf q}, \dot{\bf q})=T_\kappa({\bf q}, \dot{\bf q})+U_\kappa({\bf q}), $$ where $T_\kappa({\bf q},\dot{\bf q}):={1\over 2}\sum_{i=1}^nm_i(\dot{\bf q}_i\odot\dot{\bf q}_i)(\kappa{\bf q}_i\odot{\bf q}_i)$ is the kinetic energy of the system. (The reason for introducing the factors $\kappa{\bf q}_i\odot{\bf q}_i=1$ into the definition of the kinetic energy will become clear in Section 3.8.) 
Then, according to the theory of constrained Lagrangian dynamics (see, e.g., \cite{Gel}), the equations of motion are \begin{equation} {d\over dt}\Bigg({\partial L_\kappa\over\partial\dot{\bf q}_i}\Bigg)-{\partial L_\kappa\over\partial{\bf q}_i}-\lambda_\kappa^i(t){\partial f_\kappa^i\over\partial{\bf q}_i}={\bf 0},\ \ \ i=1,\dots,n,\label{eqLagrangianS2} \end{equation} where $f_\kappa^i={\bf q}_i\odot{\bf q}_i-{\kappa}^{-1}$ is the function that gives the constraint $f_\kappa^i=0$, which keeps the body of mass $m_i$ on the surface of constant curvature $\kappa$, and $\lambda_\kappa^i$ is the Lagrange multiplier corresponding to the same body. Since ${\bf q}_i\odot{\bf q}_i=\kappa^{-1}$ implies that $\dot{\bf q}_i\odot{\bf q}_i=0$, it follows that $$ {d\over dt}\Bigg({\partial L_\kappa\over\partial\dot{\bf q}_i}\Bigg)= m_i\ddot{\bf q}_i(\kappa{\bf q}_i\odot{\bf q}_i)+2m_i\kappa(\dot{\bf q}_i\odot{\bf q}_i)\dot{\bf q}_i= m_i\ddot{\bf q}_i. $$ This relation, together with $$ {\partial L_\kappa\over\partial{\bf q}_i}=m_i\kappa(\dot{\bf q}_i\odot\dot{\bf q}_i){\bf q}_i+\widetilde\nabla_{{\bf q}_i} U_{\kappa}({\bf q}), $$ implies that equations (\ref{eqLagrangianS2}) are equivalent to \begin{equation} m_i\ddot{\bf q}_i-m_i\kappa(\dot{\bf q}_i\odot\dot{\bf q}_i){\bf q}_i-\widetilde\nabla_{{\bf q}_i} U_{\kappa}({\bf q})-2\lambda_\kappa^i(t){\bf q}_i={\bf 0},\ \ \ i=1,\dots, n.\label{equations} \end{equation} To determine $\lambda_\kappa^i$, notice that $0=\ddot{f}_\kappa^i=2\dot{\bf q}_i\odot\dot{\bf q}_i+ 2({\bf q}_i\odot\ddot{\bf q}_i),$ so \begin{equation} {\bf q}_i\odot\ddot{\bf q}_i=-\dot{\bf q}_i\odot\dot{\bf q}_i.\label{h1} \end{equation} Let us also remark that $\odot$-multiplying equations (\ref{equations}) by ${\bf q}_i$ and using (\ref{eul}), we obtain that $$ m_i({\bf q}_i\odot\ddot{\bf q}_i)-m_i\kappa(\dot{\bf q}_i\odot\dot{\bf q}_i)({\bf q}_i\odot{\bf q}_i)-{\bf q}_i\odot\widetilde\nabla_{{\bf q}_i} U_{\kappa}({\bf q})=2\lambda_\kappa^i{\bf q}_i\odot{\bf q}_i=2\kappa^{-1}\lambda_\kappa^i, $$ which,
via (\ref{h1}), implies that $\lambda_\kappa^i=-\kappa m_i(\dot{\bf q}_i\odot\dot{\bf q}_i)$. Substituting these values of the Lagrange multipliers into equations (\ref{equations}), the equations of motion and their constraints become \begin{multline} m_i \ddot {\bf q}_i=\widetilde\nabla_{{\bf q}_i} U_{\kappa}({\bf q})-m_i\kappa(\dot{\bf q}_i\odot\dot{\bf q}_i){\bf q}_i, \ \ {\bf q}_i\odot{\bf q}_i=\kappa^{-1}, \ \ \kappa\ne 0,\\ \ \ i=1,\dots, n.\label{eqmotion} \end{multline} The ${\bf q}_i$-gradient of the force function, obtained from (\ref{pothom}), has the form \begin{equation} \widetilde\nabla_{{\bf q}_i} U_\kappa({\bf q})=\sum_{j=1,j\ne i}^n {{m_im_j(\sigma\kappa)^{1/2}\left(\sigma\kappa{\bf q}_j -\sigma{\kappa^2{\bf q}_i\odot{\bf q}_j\over\kappa{\bf q}_i\odot{\bf q}_i}{\bf q}_i \right)\over\sqrt{\kappa{\bf q}_i\odot{\bf q}_i}\sqrt{\kappa{\bf q}_j\odot{\bf q}_j}} \over \left[\sigma-\sigma\left({\kappa{\bf q}_i\odot{\bf q}_j}\over{\sqrt{\kappa{\bf q}_i\odot{\bf q}_i}\sqrt{\kappa{\bf q}_j\odot{\bf q}_j}}\right)^2\right]^{3/2}},\ \kappa\ne 0, \label{gra} \end{equation} and using the fact that $\kappa{\bf q}_i\odot{\bf q}_i=1$, we can write this gradient as \begin{equation} \widetilde\nabla_{{\bf q}_i}U_\kappa({\bf q})=\sum_{j=1,j\ne i}^n{{m_im_j}(\sigma \kappa)^{3/2} \left[{\bf q}_j - (\kappa{\bf q}_i\odot{\bf q}_j){\bf q}_i \right] \over \left[\sigma-\sigma\left(\kappa{\bf q}_i\odot{\bf q}_j\right)^2\right]^{3/2}}, \ \kappa\ne 0. \label{grad} \end{equation} Sometimes we can use the simpler form \eqref{grad} of the gradient, but whenever we need to exploit the homogeneity of the gradient or have to differentiate it, we must use its original form \eqref{gra}. Thus equations (\ref{eqmotion}) and (\ref{gra}) describe the $n$-body problem on surfaces of constant curvature for $\kappa\ne0$. Though more complicated than the equations of motion Newton derived for the Euclidean space, system (\ref{eqmotion}) is simple enough to allow an analytic approach. 
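As a consistency check, the closed-form gradient \eqref{grad} can be compared with a finite-difference gradient of the force function written in its normalized, degree-zero form. The Python sketch below (ad hoc names; two unit masses on ${\bf S}^2$, i.e., $\kappa=1$, where $U=\cot d_1$) also confirms the degree-zero homogeneity and the tangency relation ${\bf q}_1\cdot\widetilde\nabla_{{\bf q}_1}U=0$ from \eqref{eul}:

```python
import math

def U2(q1, q2, m1=1.0, m2=1.0):
    """Two-body force function on S^2 (kappa = 1) in degree-zero form."""
    c = (q1[0]*q2[0] + q1[1]*q2[1] + q1[2]*q2[2]) \
        / math.sqrt(sum(x*x for x in q1)) \
        / math.sqrt(sum(x*x for x in q2))
    return m1*m2*c/math.sqrt(1 - c*c)

def grad1(q1, q2, m1=1.0, m2=1.0):
    """Closed-form gradient w.r.t. q1, valid when q1.q1 = q2.q2 = 1."""
    c = q1[0]*q2[0] + q1[1]*q2[1] + q1[2]*q2[2]
    f = m1*m2/(1 - c*c)**1.5
    return tuple(f*(q2[k] - c*q1[k]) for k in range(3))

q1 = (1.0, 0.0, 0.0)
q2 = (math.cos(1.0), math.sin(1.0), 0.0)

# degree-zero homogeneity: scaling q1 leaves U unchanged
assert abs(U2(tuple(2.5*x for x in q1), q2) - U2(q1, q2)) < 1e-12

# tangency (Euler's identity): q1 . grad_{q1} U = 0
g = grad1(q1, q2)
assert abs(sum(g[k]*q1[k] for k in range(3))) < 1e-12

# finite-difference gradient of the normalized U matches the closed form
h = 1e-6
for k in range(3):
    qp = list(q1); qp[k] += h
    qm = list(q1); qm[k] -= h
    num = (U2(tuple(qp), q2) - U2(tuple(qm), q2))/(2*h)
    assert abs(num - g[k]) < 1e-5
```

The finite-difference check works precisely because the normalized form remains well-defined off the sphere, which is the reason the paper keeps the original expression \eqref{gra} whenever differentiation or homogeneity is needed.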
Let us first provide some of its basic properties. \subsection{First integrals} The equations of motion have the energy integral \begin{equation} T_\kappa({\bf q},{\bf p})-U_\kappa({\bf q})=h,\label{energy} \end{equation} where, recall, $T_\kappa({\bf q}, {\bf p}):={1\over 2}\sum_{i=1}^nm_i^{-1}({\bf p}_i\odot{\bf p}_i)(\kappa{\bf q}_i\odot{\bf q}_i)$ is the kinetic energy, ${\bf p}:=({\bf p}_1,\dots,{\bf p}_n)$ denotes the momentum of the $n$-body system, with ${\bf p}_i:=m_i\dot{\bf q}_i$ representing the momentum of the body of mass $m_i, i=1,\dots,n$, and $h$ is a real constant. Indeed, $\odot$-multiplying equations (\ref{eqmotion}) by $\dot{\bf q}_i$ and summing, we obtain $$ \sum_{i=1}^nm_i\ddot{\bf q}_i\odot\dot{\bf q}_i= \sum_{i=1}^n[\widetilde\nabla_{{\bf q}_i} U_\kappa({\bf q})]\odot\dot{\bf q}_i- \sum_{i=1}^n{m_i\kappa}(\dot{\bf q}_i\odot\dot{\bf q}_i){\bf q}_i\odot{\dot{\bf q}_i}= {d\over dt}U_\kappa({\bf q}(t)), $$ where the last equality uses the fact that ${\bf q}_i\odot\dot{\bf q}_i=0$, so the middle sum vanishes. Then equation (\ref{energy}) follows by integrating the first and last term in the above equation. The equations of motion also have the integrals of the angular momentum, \begin{equation} \sum_{i=1}^n{\bf q}_i\otimes{\bf p}_i={\bf c},\label{angular} \end{equation} where $\bf c$ is a constant vector. Relations (\ref{angular}) follow by integrating the identity formed by the first and last term of the equations \begin{multline} \sum_{i=1}^nm_i\ddot{\bf q}_i\otimes{\bf q}_i=\sum_{i=1}^n\sum_{j=1,j\ne i}^n{m_im_j(\sigma\kappa)^{3/2}{\bf q}_i\otimes{\bf q}_j \over [\sigma-\sigma(\kappa{\bf q}_i\odot{\bf q}_j)^2]^{3/2}}\\ -\sum_{i=1}^n\left[\sum_{j=1,j\ne i}^n{m_im_j(\sigma\kappa)^{3/2}(\kappa{\bf q}_i\odot{\bf q}_j) \over [\sigma-\sigma(\kappa{\bf q}_i\odot{\bf q}_j)^2]^{3/2}}- m_i{\kappa}(\dot{\bf q}_i\odot\dot{\bf q}_i)\right]{\bf q}_i\otimes{\bf q}_i ={\bf 0}, \end{multline} obtained by $\otimes$-multiplying the equations of motion (\ref{eqmotion}) by ${\bf q}_i$.
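The first integrals \eqref{energy} and \eqref{angular} offer a convenient test for any numerical integration of system \eqref{eqmotion}. The following self-contained Python sketch (a plain fourth-order Runge-Kutta integrator with ad hoc names; two unit masses on ${\bf S}^2$ with $\kappa=1$ and generic tangent initial velocities, chosen here purely for illustration) monitors the energy, the angular momentum, and the constraint ${\bf q}_i\odot{\bf q}_i=1$ along a short trajectory:

```python
import math

def grad(qi, qj):
    # gradient (w.r.t. qi) of the cotangent force function on S^2, unit masses
    c = sum(a*b for a, b in zip(qi, qj))
    f = 1.0/(1 - c*c)**1.5
    return [f*(b - c*a) for a, b in zip(qi, qj)]

def rhs(y):
    # y = q1 (3) + v1 (3) + q2 (3) + v2 (3); equations of motion for kappa = 1
    q1, v1, q2, v2 = y[0:3], y[3:6], y[6:9], y[9:12]
    a1 = [g - sum(v*v for v in v1)*q for g, q in zip(grad(q1, q2), q1)]
    a2 = [g - sum(v*v for v in v2)*q for g, q in zip(grad(q2, q1), q2)]
    return v1 + a1 + v2 + a2

def rk4_step(y, dt):
    k1 = rhs(y)
    k2 = rhs([a + 0.5*dt*b for a, b in zip(y, k1)])
    k3 = rhs([a + 0.5*dt*b for a, b in zip(y, k2)])
    k4 = rhs([a + dt*b for a, b in zip(y, k3)])
    return [a + dt*(b + 2*c + 2*d + e)/6
            for a, b, c, d, e in zip(y, k1, k2, k3, k4)]

def invariants(y):
    q1, v1, q2, v2 = y[0:3], y[3:6], y[6:9], y[9:12]
    c = sum(a*b for a, b in zip(q1, q2))
    energy = 0.5*(sum(v*v for v in v1) + sum(v*v for v in v2)) \
             - c/math.sqrt(1 - c*c)              # T - U, with U = cot d_1
    def cross(a, b):
        return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0]]
    ang = [p + q for p, q in zip(cross(q1, v1), cross(q2, v2))]
    return energy, ang

# tangent initial data: q1.v1 = q2.v2 = 0, bodies a quarter circle apart
y = [1.0, 0, 0,  0, 0.3, 0.2,  0, 1.0, 0,  0.1, 0, -0.25]
e0, c0 = invariants(y)
for _ in range(1000):
    y = rk4_step(y, 0.001)
e1, c1 = invariants(y)

assert abs(e1 - e0) < 1e-5                            # energy integral
assert all(abs(a - b) < 1e-5 for a, b in zip(c0, c1)) # angular momentum
assert abs(sum(q*q for q in y[0:3]) - 1.0) < 1e-6     # constraint preserved
```

As expected from the derivation, no projection step is needed: the velocity-dependent constraint forces in \eqref{eqmotion} keep the bodies on the sphere to within the integrator's truncation error.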
The last of the above identities follows from the skew-symmetry of $\otimes$ and the fact that ${\bf q}_i\otimes{\bf q}_i={\bf 0}, \ i=1,\dots,n$. \subsection{Motion of a free body} A consequence of the integrals of motion is the analogue of the well-known result from Euclidean space concerning the motion of a single body in the absence of any gravitational interactions. Though simple, the proof of this property is not as trivial as in the classical case. \begin{proposition} A free body on a surface of constant curvature is either at rest or it moves uniformly along a geodesic. Moreover, for $\kappa>0$, every orbit is closed. \end{proposition} \begin{proof} Since there are no gravitational interactions, the equations of motion take the form \begin{equation} \ddot{\bf q}=-\kappa(\dot{\bf q}\odot\dot{\bf q}){\bf q}, \end{equation} where ${\bf q}=(x,y,z)$ is the vector describing the position of the body of mass $m$. If $\dot{\bf q}(0)={\bf 0}$, then $\ddot{\bf q}(0)={\bf 0}$, so no force acts on $m$. Therefore the body will be at rest. If $\dot{\bf q}(0)\ne{\bf 0}$, then $\ddot{\bf q}(0)$ and ${\bf q}(0)$ are collinear, having the same sense if $\kappa<0$, but the opposite sense if $\kappa>0$. So the resultant of $\ddot{\bf q}(0)$ and $\dot{\bf q}(0)$ pulls the body along the geodesic corresponding to the direction of these vectors. We still need to show that the motion is uniform. This fact follows immediately from the integral of energy, but we can also derive it from the integrals of the angular momentum. Indeed, for $\kappa>0$, these integrals lead us to $$c=({\bf q}\times\dot{\bf q})\cdot({\bf q}\times\dot{\bf q})=({\bf q}\cdot{\bf q}) (\dot{\bf q}\cdot\dot{\bf q})\sin^2\alpha,$$ where $c$ is the square of the length of the angular momentum vector of the body of unit mass and $\alpha$ is the angle between ${\bf q}$ and $\dot{\bf q}$ (namely $\pi/2$, by the constraint ${\bf q}\odot\dot{\bf q}=0$). So since ${\bf q}\cdot{\bf q}=\kappa^{-1}$ and $c$ is constant, we conclude that the speed of the body is constant. 
For $\kappa<0$, we can write that $$ c=({\bf q}\boxtimes\dot{\bf q})\boxdot ({\bf q}\boxtimes\dot{\bf q})= -\left|\begin{array}{cc} {\bf q}\boxdot{\bf q} & {\bf q}\boxdot\dot{\bf q}\\ {\bf q}\boxdot\dot{\bf q} & \dot{\bf q}\boxdot\dot{\bf q}\\ \end{array}\right|=-\left|\begin{array}{cc} \kappa^{-1} & 0\\ 0 & \dot{\bf q}\boxdot\dot{\bf q}\\ \end{array}\right|=-\kappa^{-1}\dot{\bf q}\boxdot\dot{\bf q}. $$ Therefore the speed is constant in this case too, so the motion is uniform. Since for $\kappa>0$ the body moves on geodesics of a sphere, every orbit is closed. \end{proof} \subsection{Hamiltonian form} The equations of motion (\ref{eqmotion}) are Hamiltonian. Indeed, the Hamiltonian function $H_\kappa$ is given by $$ \begin{cases} H_\kappa({\bf q},{\bf p})= {1\over 2}\sum_{i=1}^nm_i^{-1}({\bf p}_i\odot{\bf p}_i) (\kappa{\bf q}_i\odot{\bf q}_i)-U_\kappa({\bf q}),\cr {\bf q}_i\odot{\bf q}_i={\kappa}^{-1}, \ \kappa\ne 0, \ \ i=1,\dots,n. \end{cases} $$ Equations (\ref{equations}) thus take the form of a $6n$-dimensional first order system of differential equations with $2n$ constraints, \begin{equation} \begin{cases} \dot{\bf q}_i= \widetilde\nabla_{{\bf p}_i} H_\kappa({\bf q},{\bf p})=m_i^{-1}{\bf p}_i,\cr \dot{\bf p}_i= -\widetilde\nabla_{{\bf q}_i} H_\kappa({\bf q},{\bf p})= \widetilde\nabla_{{\bf q}_i} U_\kappa({\bf q}) -m_i^{-1}{\kappa}({\bf p}_i\odot{\bf p}_i){\bf q}_i,\cr {\bf q}_i\odot{\bf q}_i={\kappa}^{-1}, \ \ {\bf q}_i\odot{\bf p}_i=0, \ \ \kappa\ne 0, \ \ i=1,\dots,n.\label{Ham} \end{cases} \end{equation} It is interesting to note that, independently of whether the kinetic energy is defined as $$T_\kappa({\bf p}):={1\over 2}\sum_{i=1}^nm_i^{-1}{\bf p}_i\odot{\bf p}_i\ \ {\rm or} \ \ T_\kappa({\bf q}, {\bf p}):={1\over 2}\sum_{i=1}^nm_i^{-1}({\bf p}_i\odot{\bf p}_i)(\kappa{\bf q}_i\odot{\bf q}_i),$$ (which, though identical since $\kappa{\bf q}_i\odot{\bf q}_i=1$, does not come to the same thing when differentiating $T_\kappa$), the form of equations 
(\ref{eqmotion}) remains the same. But in the former case, system (\ref{eqmotion}) cannot be put in Hamiltonian form in spite of having an energy integral, while in the latter case it can. This is why we chose the latter definition of $T_\kappa$. These equations describe the motion of the $n$-body system for any $\kappa\ne 0$, the case $\kappa=0$ corresponding to the classical Newtonian equations. The representative non-zero-curvature cases, however, are $\kappa=1$ and $\kappa=-1$, which characterize the motion for $\kappa>0$ and $\kappa<0$, respectively. Therefore we will further focus on the $n$-body problem in ${\bf S}^2$ and ${\bf H}^2$. \subsection{Equations of motion in ${\bf S}^2$} In this case, the force function (\ref{pothom}) takes the form \begin{equation} U_1({\bf q})={1\over 2}\sum_{i=1}^n\sum_{j=1,j\ne i}^n\frac{m_im_j~ \frac{{\bf q}_i\cdot{\bf q}_j}{\snorm}}{ \sqrt{1-\left(\frac{{\bf q}_i\cdot{\bf q}_j}{\snorm}\right)^2}} ,\label{potS} \end{equation} while the equations of motion (\ref{eqmotion}) and their constraints become \begin{equation} m_i\ddot{\bf q}_i=\nabla_{{\bf q}_i} U_1({\bf q})-m_i(\dot{\bf q}_i\cdot\dot{\bf q}_i){\bf q}_i,\ \ \ {\bf q}_i\cdot{\bf q}_i=1,\ \ \ {\bf q}_i\cdot\dot{\bf q}_i=0,\ \ \ i=1,\dots,n.\label{eqS} \end{equation} In terms of coordinates, the equations of motion and their constraints can be written as \begin{equation} \begin{cases} m_i\ddot{x}_i={\partial U_1\over\partial x_i}-m_i(\dot{x}_i^2+\dot{y}_i^2+\dot{z}_i^2)x_i,\cr m_i\ddot{y}_i={\partial U_1\over\partial y_i}-m_i(\dot{x}_i^2+\dot{y}_i^2+\dot{z}_i^2)y_i,\cr m_i\ddot{z}_i={\partial U_1\over\partial z_i}-m_i(\dot{x}_i^2+\dot{y}_i^2+\dot{z}_i^2)z_i,\cr x_i^2+y_i^2+z_i^2=1,\ \ x_i\dot{x}_i+y_i\dot{y}_i+z_i\dot{z}_i=0,\ \ i=1,\dots,n,\label{eqcoordS} \end{cases} \end{equation} and by computing the gradients they become \begin{equation} \begin{cases} \ddot{x}_i=\sum_{j=1,j\ne i}^n{m_j{x_j-{x_ix_j+y_iy_j+z_iz_j\over x_i^2+y_i^2+z_i^2}x_i \over 
\sqrt{x_i^2+y_i^2+z_i^2}\sqrt{x_j^2+y_j^2+z_j^2}}\over \bigg[1-\Big({x_ix_j+y_iy_j+z_iz_j\over\sqrt{x_i^2+y_i^2+z_i^2}\sqrt{x_j^2+y_j^2+z_j^2}}\Big)^2\bigg]^{3/2}}-(\dot{x}_i^2+\dot{y}_i^2+\dot{z}_i^2)x_i,\cr \ddot{y}_i=\sum_{j=1,j\ne i}^n{m_j{y_j-{x_ix_j+y_iy_j+z_iz_j\over x_i^2+y_i^2+z_i^2}y_i \over \sqrt{x_i^2+y_i^2+z_i^2}\sqrt{x_j^2+y_j^2+z_j^2}}\over \bigg[1-\Big({x_ix_j+y_iy_j+z_iz_j\over\sqrt{x_i^2+y_i^2+z_i^2}\sqrt{x_j^2+y_j^2+z_j^2}}\Big)^2\bigg]^{3/2}}-(\dot{x}_i^2+\dot{y}_i^2+\dot{z}_i^2)y_i,\cr \ddot{z}_i=\sum_{j=1,j\ne i}^n{m_j{z_j-{x_ix_j+y_iy_j+z_iz_j\over x_i^2+y_i^2+z_i^2}z_i \over \sqrt{x_i^2+y_i^2+z_i^2}\sqrt{x_j^2+y_j^2+z_j^2}}\over \bigg[1-\Big({x_ix_j+y_iy_j+z_iz_j\over\sqrt{x_i^2+y_i^2+z_i^2}\sqrt{x_j^2+y_j^2+z_j^2}}\Big)^2\bigg]^{3/2}}-(\dot{x}_i^2+\dot{y}_i^2+\dot{z}_i^2)z_i,\cr x_i^2+y_i^2+z_i^2=1,\ \ x_i\dot{x}_i+y_i\dot{y}_i+z_i\dot{z}_i=0,\ \ i=1,\dots,n.\label{coordSfull} \end{cases} \end{equation} Since we will neither need the homogeneity of the gradient, nor will we differentiate it, we can use the constraints to write the above system as \begin{equation} \begin{cases} \ddot{x}_i=\sum_{j=1,j\ne i}^n{m_j[x_j-(x_ix_j+y_iy_j+z_iz_j)x_i]\over [1-(x_ix_j+y_iy_j+z_iz_j)^2]^{3/2}}-(\dot{x}_i^2+\dot{y}_i^2+\dot{z}_i^2)x_i,\cr \ddot{y}_i=\sum_{j=1,j\ne i}^n{m_j[y_j-(x_ix_j+y_iy_j+z_iz_j)y_i]\over [1-(x_ix_j+y_iy_j+z_iz_j)^2]^{3/2}}-(\dot{x}_i^2+\dot{y}_i^2+\dot{z}_i^2)y_i,\cr \ddot{z}_i=\sum_{j=1,j\ne i}^n{m_j[z_j-(x_ix_j+y_iy_j+z_iz_j)z_i]\over [1-(x_ix_j+y_iy_j+z_iz_j)^2]^{3/2}}-(\dot{x}_i^2+\dot{y}_i^2+\dot{z}_i^2)z_i,\cr x_i^2+y_i^2+z_i^2=1,\ \ x_i\dot{x}_i+y_i\dot{y}_i+z_i\dot{z}_i=0,\ \ i=1,\dots,n.\label{coordS} \end{cases} \end{equation} The Hamiltonian form of the equations of motion is \begin{equation} \begin{cases} \dot{\bf q}_i= m_i^{-1}{\bf p}_i,\cr \dot{\bf p}_i= \sum_{j=1, j\ne i}^n{m_im_j[{\bf q}_j-({\bf q}_i\cdot{\bf q}_j){\bf q}_i]\over [1-({\bf q}_i\cdot{\bf q}_j)^2]^{3/2}}-m_i^{-1} ({\bf p}_i\cdot{\bf p}_i){\bf 
q}_i,\cr {\bf q}_i\cdot{\bf q}_i=1, \ \ {\bf q}_i\cdot{\bf p}_i=0, \ \ i=1,\dots,n.\label{HamS} \end{cases} \end{equation} Consequently the integral of energy has the form \begin{equation} \sum_{i=1}^nm_i^{-1}({\bf p}_i\cdot{\bf p}_i)-\sum_{i=1}^n\sum_{j=1,j\ne i}^n\frac{m_im_j~ \frac{{\bf q}_i\cdot{\bf q}_j}{\snorm}}{ \sqrt{1-\left(\frac{{\bf q}_i\cdot{\bf q}_j}{\snorm}\right)^2}}=2h, \label{eneS} \end{equation} which, via ${\bf q}_i\cdot{\bf q}_i=1,\ i=1,\dots,n$, becomes \begin{equation} \sum_{i=1}^nm_i^{-1}({\bf p}_i\cdot{\bf p}_i)-\sum_{i=1}^n\sum_{j=1,j\ne i}^n {m_im_j\q_i\cdot\q_j\over\sqrt{1-(\q_i\cdot\q_j)^2}}=2h, \label{enerS} \end{equation} and the integrals of the angular momentum take the form \begin{equation} \sum_{i=1}^n{\bf q}_i\times{\bf p}_i={\bf c}. \end{equation} Notice that sometimes we can use the simpler form \eqref{enerS} of the energy integral, but whenever we need to exploit the homogeneity of the potential or have to differentiate it, we must use the more complicated form \eqref{eneS}. 
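System \eqref{coordS} also lends itself to direct numerical experiments. The following minimal sketch (step size, masses, and initial data are arbitrary illustrative choices, and a fixed-step Runge--Kutta scheme stands in for a proper integrator) checks that the constraints and the energy integral \eqref{enerS} are preserved along a short two-body orbit:

```python
import numpy as np

def accel(q, v, m):
    """Right-hand side of the S^2 equations (coordS, kappa = 1):
    gravitational attraction plus the constraint term -(v_i . v_i) q_i."""
    a = np.zeros_like(q)
    for i in range(len(m)):
        for j in range(len(m)):
            if j != i:
                c = np.dot(q[i], q[j])
                a[i] += m[j] * (q[j] - c * q[i]) / (1.0 - c * c) ** 1.5
        a[i] -= np.dot(v[i], v[i]) * q[i]
    return a

def rk4_step(q, v, m, h):
    """One classical Runge-Kutta step for the first-order system (q, v)."""
    k1q, k1v = v, accel(q, v, m)
    k2q, k2v = v + h / 2 * k1v, accel(q + h / 2 * k1q, v + h / 2 * k1v, m)
    k3q, k3v = v + h / 2 * k2v, accel(q + h / 2 * k2q, v + h / 2 * k2v, m)
    k4q, k4v = v + h * k3v, accel(q + h * k3q, v + h * k3v, m)
    return (q + h / 6 * (k1q + 2 * k2q + 2 * k3q + k4q),
            v + h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v))

def energy(q, v, m):
    """Energy integral (enerS) in the form T - U = h, for two bodies."""
    T = 0.5 * (m[0] * np.dot(v[0], v[0]) + m[1] * np.dot(v[1], v[1]))
    c = np.dot(q[0], q[1])
    return T - m[0] * m[1] * c / np.sqrt(1.0 - c * c)

m = np.array([1.0, 1.0])
q = np.array([[1.0, 0.0, 0.0], [-0.5, np.sqrt(3.0) / 2.0, 0.0]])  # 120 degrees apart
v = np.array([[0.0, 0.0, 0.3], [0.0, 0.0, -0.3]])                 # tangent: q_i . v_i = 0

E0 = energy(q, v, m)
for _ in range(500):                       # integrate up to t = 0.5
    q, v = rk4_step(q, v, m, 1e-3)
print(abs(np.linalg.norm(q[0]) - 1.0), abs(energy(q, v, m) - E0))  # both remain tiny
```

The constraints are preserved here only up to the truncation error of the integrator; a structure-preserving scheme would enforce them exactly.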
\subsection{Equations of motion in ${\bf H}^2$} In this case, the force function (\ref{pothom}) takes the form \begin{equation} U_{-1}({\bf q})=-{1\over 2}\sum_{i=1}^n\sum_{j=1,j\ne i}^n\frac{m_im_j~\frac{\q_i\boxdot\q_j}{\hnorm}}{\sqrt{\left(\frac{\q_i\boxdot\q_j}{\hnorm} \right)^2-1}} ,\label{potH} \end{equation} so the equations of motion and their constraints become \begin{multline} m_i\ddot{\bf q}_i=\overline\nabla_{{\bf q}_i}U_{-1}({\bf q})+m_i(\dot{\bf q}_i\boxdot\dot{\bf q}_i){\bf q}_i,\ {\bf q}_i\boxdot{\bf q}_i=-1,\ {\bf q}_i\boxdot\dot{\bf q}_i=0,\\ i=1,\dots,n.\label{eqH} \end{multline} In terms of coordinates, the equations of motion and their constraints can be written as \begin{equation} \begin{cases} m_i\ddot{x}_i=\ \ {\partial U_{-1}\over\partial x_i}+m_i(\dot{x}_i^2+\dot{y}_i^2-\dot{z}_i^2)x_i,\cr m_i\ddot{y}_i=\ \ {\partial U_{-1}\over\partial y_i}+m_i(\dot{x}_i^2+\dot{y}_i^2-\dot{z}_i^2)y_i,\cr m_i\ddot{z}_i=-{\partial U_{-1}\over\partial z_i}+m_i(\dot{x}_i^2+\dot{y}_i^2-\dot{z}_i^2)z_i,\cr x_i^2+y_i^2-z_i^2=-1,\ \ x_i\dot{x}_i+y_i\dot{y}_i-z_i\dot{z}_i=0,\ \ i=1,\dots,n, \label{eqcoordH} \end{cases} \end{equation} and by computing the gradients they become \begin{equation} \begin{cases} \ddot{x}_i=\sum_{j=1,j\ne i}^n{m_j{x_j+{x_ix_j+y_iy_j-z_iz_j\over -x_i^2-y_i^2+z_i^2}x_i \over \sqrt{-x_i^2-y_i^2+z_i^2}\sqrt{-x_j^2-y_j^2+z_j^2}}\over \bigg[\Big({x_ix_j+y_iy_j-z_iz_j\over\sqrt{-x_i^2-y_i^2+z_i^2}\sqrt{-x_j^2-y_j^2+z_j^2}}\Big)^2-1\bigg]^{3/2}}+(\dot{x}_i^2+\dot{y}_i^2-\dot{z}_i^2)x_i,\cr \ddot{y}_i=\sum_{j=1,j\ne i}^n{m_j{y_j+{x_ix_j+y_iy_j-z_iz_j\over -x_i^2-y_i^2+z_i^2}y_i \over \sqrt{-x_i^2-y_i^2+z_i^2}\sqrt{-x_j^2-y_j^2+z_j^2}}\over \bigg[\Big({x_ix_j+y_iy_j-z_iz_j\over\sqrt{-x_i^2-y_i^2+z_i^2}\sqrt{-x_j^2-y_j^2+z_j^2}}\Big)^2-1\bigg]^{3/2}}+(\dot{x}_i^2+\dot{y}_i^2-\dot{z}_i^2)y_i,\cr \ddot{z}_i=\sum_{j=1,j\ne i}^n{m_j{z_j+{x_ix_j+y_iy_j-z_iz_j\over -x_i^2-y_i^2+z_i^2}z_i \over \sqrt{-x_i^2-y_i^2+z_i^2}\sqrt{-x_j^2-y_j^2+z_j^2}}\over 
\bigg[\Big({x_ix_j+y_iy_j-z_iz_j\over\sqrt{-x_i^2-y_i^2+z_i^2}\sqrt{-x_j^2-y_j^2+z_j^2}}\Big)^2-1\bigg]^{3/2}}+(\dot{x}_i^2+\dot{y}_i^2-\dot{z}_i^2)z_i,\cr x_i^2+y_i^2-z_i^2=-1,\ \ x_i\dot{x}_i+y_i\dot{y}_i-z_i\dot{z}_i=0,\ \ i=1,\dots,n.\label{coordHfull} \end{cases} \end{equation} For the same reasons described in the previous subsection, we can use the constraints to write the above system from now on as \begin{equation} \begin{cases} \ddot{x}_i=\sum_{j=1,j\ne i}^n{m_j[x_j+(x_ix_j+y_iy_j-z_iz_j)x_i]\over [(x_ix_j+y_iy_j-z_iz_j)^2-1]^{3/2}}+(\dot{x}_i^2+\dot{y}_i^2-\dot{z}_i^2)x_i,\cr \ddot{y}_i=\sum_{j=1,j\ne i}^n{m_j[y_j+(x_ix_j+y_iy_j-z_iz_j)y_i]\over [(x_ix_j+y_iy_j-z_iz_j)^2-1]^{3/2}}+(\dot{x}_i^2+\dot{y}_i^2-\dot{z}_i^2)y_i,\cr \ddot{z}_i=\sum_{j=1,j\ne i}^n{m_j[z_j+(x_ix_j+y_iy_j-z_iz_j)z_i]\over [(x_ix_j+y_iy_j-z_iz_j)^2-1]^{3/2}}+(\dot{x}_i^2+\dot{y}_i^2-\dot{z}_i^2)z_i,\cr x_i^2+y_i^2-z_i^2=-1,\ \ x_i\dot{x}_i+y_i\dot{y}_i-z_i\dot{z}_i=0,\ \ i=1,\dots,n.\label{coordH} \end{cases} \end{equation} The Hamiltonian form of the equations of motion is \begin{equation} \begin{cases} \dot{\bf q}_i= m_i^{-1}{\bf p}_i,\cr \dot{\bf p}_i= \sum_{j=1, j\ne i}^n{m_im_j[{\bf q}_j+({\bf q}_i\boxdot{\bf q}_j){\bf q}_i]\over [({\bf q}_i\boxdot{\bf q}_j)^2-1]^{3/2}}+m_i^{-1} ({\bf p}_i\boxdot{\bf p}_i){\bf q}_i,\cr {\bf q}_i\boxdot{\bf q}_i=-1, \ \ {\bf q}_i\boxdot{\bf p}_i=0, \ \ i=1,\dots,n.\label{HamH} \end{cases} \end{equation} Consequently the integral of energy takes the form \begin{equation} \sum_{i=1}^nm_i^{-1}({\bf p}_i\boxdot{\bf p}_i)+\sum_{i=1}^n\sum_{j=1,j\ne i}^n\frac{m_im_j~\frac{\q_i\boxdot\q_j}{\hnorm}}{\sqrt{\left(\frac{\q_i\boxdot\q_j}{\hnorm} \right)^2-1}}=2h, \label{eneH} \end{equation} which, via ${\bf q}_i\boxdot{\bf q}_i=-1, \ i=1,\dots,n$, becomes \begin{equation} \sum_{i=1}^nm_i^{-1}({\bf p}_i\boxdot{\bf p}_i)+\sum_{i=1}^n\sum_{j=1,j\ne i}^n {m_im_j\q_i\boxdot\q_j\over\sqrt{(\q_i\boxdot\q_j)^2-1}}=2h, \label{enerH} \end{equation} 
and the integrals of the angular momentum can be written as \begin{equation} \sum_{i=1}^n{\bf q}_i\boxtimes{\bf p}_i={\bf c}. \end{equation} Notice that sometimes we can use the simpler form \eqref{enerH} of the energy integral, but whenever we need to exploit the homogeneity of the potential or have to differentiate it, we must use the more complicated form \eqref{eneH}. \subsection{Equations of motion in ${\bf S}^{\mu}$ and ${\bf H}^{\mu}$} The formalism we adopted in this paper allows a straightforward generalization of the $n$-body problem to ${\bf S}^{\mu}$ and ${\bf H}^{\mu}$ for any integer $\mu\ge 1$. The equations of motion in $\mu$-dimensional spaces of constant curvature have the form (\ref{eqmotion}) for vectors ${\bf q}_i$ and ${\bf q}_j$ of ${\bf R}^{\mu+1}$ constrained to the corresponding manifold. It is then easy to see from any coordinate form of the system that ${\bf S}^\nu$ and ${\bf H}^\nu$ are invariant sets for the equations of motion in ${\bf S}^{\mu}$ and ${\bf H}^{\mu}$, respectively, for any integer $\nu<\mu$. Indeed, this is the case, say, for equations \eqref{coordS}, if we take $x_i(0)=0, \dot{x}_i(0)=0, \ i=1,\dots,n$. Then the equations for $\ddot{x}_i$ are identically satisfied, and the motion takes place on the circle $y^2+z^2=1$. The generalization of this idea from one component to any number $\nu$ of components in a $(\mu+1)$-dimensional space, with $\nu<\mu$, is straightforward. Therefore the study of the $n$-body problem on surfaces of constant curvature is fully justified. The only aspect of this generalization that is not obvious from our formalism is how to extend the cross product to higher dimensions. But this extension can be done as in general relativity with the help of the exterior product. However, we will not get into higher dimensions in this paper. Our further goal is to study the 2-dimensional case. 
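The invariance property just described can also be verified numerically for \eqref{coordS}: if all $x_i$ and $\dot{x}_i$ vanish, the computed accelerations $\ddot{x}_i$ vanish as well. A minimal sketch (the data below are merely illustrative):

```python
import numpy as np

def accel_x(q, v, m):
    """x-components of the accelerations in system (coordS) on S^2 (kappa = 1)."""
    ax = np.zeros(len(m))
    for i in range(len(m)):
        for j in range(len(m)):
            if j != i:
                c = np.dot(q[i], q[j])
                ax[i] += m[j] * (q[j][0] - c * q[i][0]) / (1.0 - c * c) ** 1.5
        ax[i] -= np.dot(v[i], v[i]) * q[i][0]
    return ax

# Three bodies on the great circle x = 0, with velocities tangent to it.
t = np.array([0.3, 1.9, 4.0])                       # arbitrary angular positions
q = np.stack([np.zeros(3), np.cos(t), np.sin(t)], axis=1)
v = np.stack([np.zeros(3), -np.sin(t), np.cos(t)], axis=1) * 0.2
m = np.array([1.0, 2.0, 3.0])
print(accel_x(q, v, m))   # all zeros: the circle x = 0 is invariant
```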
\section{Singularities} Singularities have always been a rich source of research in the theory of differential equations. The $n$-body problem we derived in the previous section seems to be no exception to this rule. In what follows, we will point out the singularities that occur in our problem and prove some results related to them. The most surprising of these seems to be the existence of a class of solutions with hybrid singularities, which are both collisional and non-collisional. \subsection{Singularities of the equations} The equations of motion (\ref{Ham}) have restrictions. First, the variables are constrained to a surface of constant curvature, i.e.\ $({\bf q},{\bf p})\in {\bf T}^*({\bf M}_\kappa^2)^n$, where ${\bf M}^2_\kappa$ is the surface of curvature $\kappa\ne 0$ (in particular, ${\bf M}^2_1={\bf S}^2$ and ${\bf M}^2_{-1}={\bf H}^2$), and ${\bf T}^*({\bf M}_\kappa^2)^n$ is the cotangent bundle of the configuration space $({\bf M}_\kappa^2)^n$. Second, system (\ref{Ham}), which contains the gradient \eqref{gra}, is undefined in the set ${\bf \Delta}:=\cup_{1\le i<j\le n}{\bf \Delta}_{ij}$, with $${\bf \Delta}_{ij}:=\{{\bf q}\in({\bf M}^2_\kappa)^n\ |\ (\kappa{\bf q}_i\odot{\bf q}_j)^2=1\},$$ where both the force function \eqref{pothom} and its gradient \eqref{gra} become infinite. Thus the set $\bf\Delta$ contains the singularities of the equations of motion. The singularity condition, $(\kappa{\bf q}_i\odot{\bf q}_j)^2=1$, suggests that we consider two cases, and thus write ${\bf \Delta}_{ij}={\bf \Delta}_{ij}^+\cup{\bf \Delta}_{ij}^-$, where $$ {\bf \Delta}_{ij}^+:=\{{\bf q}\in({\bf M}^2_\kappa)^n\ |\ \kappa{\bf q}_i\odot{\bf q}_j=1\}\ \ {\rm and}\ \ {\bf \Delta}_{ij}^-:=\{{\bf q}\in({\bf M}^2_\kappa)^n\ |\ \kappa{\bf q}_i\odot{\bf q}_j=-1\}. $$ Accordingly, we define $$ {\bf \Delta}^+:=\cup_{1\le i<j\le n}{\bf \Delta}_{ij}^+\ \ {\rm and}\ \ {\bf \Delta}^-:=\cup_{1\le i<j\le n}{\bf \Delta}_{ij}^-. $$ Then ${\bf \Delta}={\bf \Delta}^+\cup{\bf \Delta}^-$. 
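For $\kappa=1$, deciding which component of ${\bf \Delta}$, if any, a pair of bodies belongs to reduces to evaluating ${\bf q}_i\cdot{\bf q}_j$; a minimal sketch (function name, tolerance, and labels are merely illustrative):

```python
import numpy as np

def pair_singularity(qi, qj, tol=1e-12):
    """Decide whether a pair on S^2 (kappa = 1) lies in Delta_ij^+,
    in Delta_ij^-, or in neither."""
    a = np.dot(qi, qj)
    if abs(a - 1.0) < tol:
        return "Delta^+"
    if abs(a + 1.0) < tol:
        return "Delta^-"
    return "regular"

e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])
print(pair_singularity(e1, e1))    # Delta^+
print(pair_singularity(e1, -e1))   # Delta^-
print(pair_singularity(e1, e2))    # regular
```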
The elements of ${\bf \Delta}^+$ correspond to collisions for any $\kappa\ne 0$, whereas the elements of ${\bf \Delta}^-$ correspond to what we will call antipodal singularities when $\kappa>0$. The latter occur when two bodies are at the opposite ends of the same diameter of a sphere. For $\kappa<0$, such singularities do not exist because $\kappa{\bf q}_i\odot{\bf q}_j\ge 1$. In conclusion, the equations of motion are undefined for configurations that involve collisions on spheres or hyperboloids, as well as for configurations with antipodal bodies on spheres of any curvature $\kappa>0$. In both cases, the gravitational forces become infinite. In the 2-body problem, ${\bf \Delta}^+$ and ${\bf \Delta}^-$ are disjoint sets. Indeed, with only two bodies, a configuration in ${\bf \Delta}^+\cap{\bf \Delta}^-$ would require $\kappa{\bf q}_1\odot{\bf q}_2$ to be both $+1$ and $-1$, which is impossible. The set ${\bf \Delta}^+\cap{\bf \Delta}^-$, however, is not empty for $n\ge 3$. In the 3-body problem, for instance, the configuration in which two bodies are at collision and the third lies at the opposite end of the corresponding diameter is, what we will call from now on, a collision-antipodal singularity. The theory of differential equations merely regards singularities as points at which the equations break down and which must therefore be avoided. But singularities sometimes exhibit a dynamical structure. In the $3$-body problem in $\bf R$, for instance, the set of binary collisions is attractive in the sense that for any given initial velocities, there are initial positions such that if two bodies come close enough to each other but far enough from other collisions, then the collision will take place. (Things are more complicated with triple collisions. Two of the bodies coming close to a triple collision may form a binary while the third gets expelled with high velocity away from the other two, \cite{McGe}.) Something similar happens for binary collisions in the 3-body problem on a geodesic of ${\bf S}^2$. 
Given some initial velocities, one can choose initial positions that put $m_1$ and $m_2$ close enough to a binary collision, and $m_3$ far enough from an antipodal singularity with either $m_1$ or $m_2$, such that the binary collision takes place. This is indeed the case because the attraction between $m_1$ and $m_2$ can be made as large as desired by placing the bodies close enough to each other. Since $m_3$ is far enough from an antipodal position, and no comparable force can oppose the attraction between $m_1$ and $m_2$, these bodies will collide. Antipodal singularities lead to a new phenomenon on geodesics of ${\bf S}^2$. Given initial velocities, no matter how close to an antipodal singularity one chooses the initial positions, the corresponding solution is repelled in future time from this singularity, as long as no collision force compensates for the repulsion. So while binary collisions can be regarded as attractive if far away from binary antipodal singularities, binary antipodal singularities can be seen as repulsive if far away from collisions. But what happens when collision and antipodal singularities are close to each other? As we will see in the next subsection, the behavior of solutions in that region is sensitive to the choice of masses and initial conditions. In particular, we will prove the existence of some hybrid singular solutions in the 3-body problem, namely those that end in finite time in a collision-antipodal singularity, as well as of solutions that reach a collision-antipodal configuration but remain analytic at this point. \subsection{Solution singularities} The set $\bf\Delta$ is related to the singularities that arise from the question of existence and uniqueness of initial value problems. 
For initial conditions $({\bf q},{\bf p})(0)\in{\bf T}^*({\bf M}_\kappa^2)^n$ with ${\bf q}(0)\notin\bf\Delta$, standard results of the theory of differential equations ensure local existence and uniqueness of an analytic solution $({\bf q},{\bf p})$ defined on some interval $[0,t^+)$. Since the surfaces ${\bf M}^2_\kappa$ are connected, this solution can be analytically extended to an interval $[0,t^*)$, with $0<t^+\le t^*\le\infty$. If $t^*=\infty$, the solution is globally defined. But if $t^*<\infty$, the solution is called singular, and we say that it has a singularity at time $t^*$. There is a close connection between singular solutions and singularities of the equations of motion. In the classical case ($\kappa=0$), this connection was pointed out by Paul Painlev\'e towards the end of the 19th century. In his famous lectures given in Stockholm, \cite{Pai}, he showed that every singular solution $({\bf q},{\bf p})$ is such that ${\bf q}(t)\to{\bf\Delta}$ when $t\to t^*$, for otherwise the solution would be globally defined. In the Euclidean case, $\kappa=0$, the set $\bf\Delta$ is formed by all configurations with collisions, so when ${\bf q}(t)$ tends to an element of $\bf\Delta$, the solution ends in a collision singularity. But it is also possible that ${\bf q}(t)$ tends to $\bf\Delta$ without asymptotic phase, i.e.~by oscillating among various elements without ever reaching a definite position. Painlev\'e conjectured that such noncollision singularities, which he called pseudocollisions, exist. In 1908, Hugo von Zeipel showed that a necessary condition for a solution to experience a pseudocollision is that the motion becomes unbounded in finite time, \cite{Zei}, \cite{McG}. Zhihong (Jeff) Xia produced the first example of this kind in 1992, \cite{Xia}. Historical accounts of this development appear in \cite{Diac} and \cite{Dia0}. 
The results of Painlev\'e do not remain intact in our problem, \cite{Dia1}, \cite{Dia2008}, so it is not clear whether pseudocollisions exist for $\kappa\ne 0$. Nevertheless, we will now show that there are solutions ending in collision-antipodal singularities of the equations of motion, solutions that these singularities repel, as well as solutions that are not singular at such configurations. To prove these facts, we need the result stated below, which provides a criterion for determining the direction of motion along a great circle in the framework of an isosceles problem defined in an invariant set ${\bf S}^1$. \begin{lemma} Consider the $n$-body problem in ${\bf S}^2$, and assume that a body of mass $m$ is at rest at time $t_0$ on the geodesic $z=0$, within the quadrant $x, y>0$. Then, if (a) $\ddot{x}(t_0)> 0$ and $\ddot{y}(t_0)< 0$, the force pulls the body along the circle toward the point $(x,y)=(1,0)$. (b) $\ddot{x}(t_0)< 0$ and $\ddot{y}(t_0)> 0$, the force pulls the body along the circle toward the point $(x,y)=(0,1)$. (c) $\ddot{x}(t_0)\le 0$ and $\ddot{y}(t_0)\le 0$, the force pulls the body toward the point $(1,0)$ if $\ddot{y}(t_0)/\ddot{x}(t_0)>y(t_0)/x(t_0)$, toward $(0,1)$ if $\ddot{y}(t_0)/\ddot{x}(t_0)<y(t_0)/x(t_0)$, but no force acts on the body if neither of the previous inequalities holds. 
(d) $\ddot{x}(t_0)> 0$ and $\ddot{y}(t_0)> 0$, the motion is impossible.\label{singlemma} \end{lemma} \def\JPicScale{0.5} \ifx\JPicScale\undefined\def\JPicScale{1}\fi \unitlength \JPicScale mm \begin{figure} \begin{picture}(110,100)(0,0) \linethickness{0.3mm} \put(90,44.75){\line(0,1){0.5}} \multiput(89.99,44.25)(0.01,0.5){1}{\line(0,1){0.5}} \multiput(89.98,43.75)(0.01,0.5){1}{\line(0,1){0.5}} \multiput(89.97,43.25)(0.02,0.5){1}{\line(0,1){0.5}} \multiput(89.94,42.75)(0.02,0.5){1}{\line(0,1){0.5}} \multiput(89.92,42.24)(0.03,0.5){1}{\line(0,1){0.5}} \multiput(89.88,41.74)(0.03,0.5){1}{\line(0,1){0.5}} \multiput(89.84,41.24)(0.04,0.5){1}{\line(0,1){0.5}} \multiput(89.8,40.75)(0.04,0.5){1}{\line(0,1){0.5}} \multiput(89.75,40.25)(0.05,0.5){1}{\line(0,1){0.5}} \multiput(89.69,39.75)(0.06,0.5){1}{\line(0,1){0.5}} \multiput(89.63,39.25)(0.06,0.5){1}{\line(0,1){0.5}} \multiput(89.56,38.75)(0.07,0.5){1}{\line(0,1){0.5}} \multiput(89.49,38.26)(0.07,0.5){1}{\line(0,1){0.5}} \multiput(89.41,37.76)(0.08,0.5){1}{\line(0,1){0.5}} \multiput(89.33,37.27)(0.08,0.49){1}{\line(0,1){0.49}} \multiput(89.24,36.77)(0.09,0.49){1}{\line(0,1){0.49}} \multiput(89.15,36.28)(0.09,0.49){1}{\line(0,1){0.49}} \multiput(89.05,35.79)(0.1,0.49){1}{\line(0,1){0.49}} \multiput(88.94,35.3)(0.11,0.49){1}{\line(0,1){0.49}} \multiput(88.83,34.81)(0.11,0.49){1}{\line(0,1){0.49}} \multiput(88.72,34.32)(0.12,0.49){1}{\line(0,1){0.49}} \multiput(88.59,33.84)(0.12,0.49){1}{\line(0,1){0.49}} \multiput(88.47,33.35)(0.13,0.48){1}{\line(0,1){0.48}} \multiput(88.33,32.87)(0.13,0.48){1}{\line(0,1){0.48}} \multiput(88.2,32.39)(0.14,0.48){1}{\line(0,1){0.48}} \multiput(88.05,31.91)(0.14,0.48){1}{\line(0,1){0.48}} \multiput(87.9,31.43)(0.15,0.48){1}{\line(0,1){0.48}} \multiput(87.75,30.95)(0.15,0.48){1}{\line(0,1){0.48}} \multiput(87.59,30.48)(0.16,0.48){1}{\line(0,1){0.48}} \multiput(87.43,30)(0.16,0.47){1}{\line(0,1){0.47}} \multiput(87.26,29.53)(0.17,0.47){1}{\line(0,1){0.47}} 
\multiput(87.08,29.06)(0.17,0.47){1}{\line(0,1){0.47}} \multiput(86.9,28.59)(0.09,0.23){2}{\line(0,1){0.23}} \multiput(86.72,28.13)(0.09,0.23){2}{\line(0,1){0.23}} \multiput(86.53,27.66)(0.1,0.23){2}{\line(0,1){0.23}} \multiput(86.33,27.2)(0.1,0.23){2}{\line(0,1){0.23}} \multiput(86.13,26.74)(0.1,0.23){2}{\line(0,1){0.23}} \multiput(85.92,26.29)(0.1,0.23){2}{\line(0,1){0.23}} \multiput(85.71,25.83)(0.11,0.23){2}{\line(0,1){0.23}} \multiput(85.5,25.38)(0.11,0.23){2}{\line(0,1){0.23}} \multiput(85.28,24.93)(0.11,0.22){2}{\line(0,1){0.22}} \multiput(85.05,24.48)(0.11,0.22){2}{\line(0,1){0.22}} \multiput(84.82,24.04)(0.12,0.22){2}{\line(0,1){0.22}} \multiput(84.58,23.59)(0.12,0.22){2}{\line(0,1){0.22}} \multiput(84.34,23.15)(0.12,0.22){2}{\line(0,1){0.22}} \multiput(84.1,22.72)(0.12,0.22){2}{\line(0,1){0.22}} \multiput(83.85,22.28)(0.13,0.22){2}{\line(0,1){0.22}} \multiput(83.59,21.85)(0.13,0.22){2}{\line(0,1){0.22}} \multiput(83.33,21.42)(0.13,0.21){2}{\line(0,1){0.21}} \multiput(83.06,21)(0.13,0.21){2}{\line(0,1){0.21}} \multiput(82.79,20.58)(0.13,0.21){2}{\line(0,1){0.21}} \multiput(82.52,20.16)(0.14,0.21){2}{\line(0,1){0.21}} \multiput(82.24,19.74)(0.14,0.21){2}{\line(0,1){0.21}} \multiput(81.96,19.33)(0.14,0.21){2}{\line(0,1){0.21}} \multiput(81.67,18.92)(0.14,0.21){2}{\line(0,1){0.21}} \multiput(81.38,18.51)(0.15,0.2){2}{\line(0,1){0.2}} \multiput(81.08,18.11)(0.15,0.2){2}{\line(0,1){0.2}} \multiput(80.78,17.71)(0.1,0.13){3}{\line(0,1){0.13}} \multiput(80.47,17.31)(0.1,0.13){3}{\line(0,1){0.13}} \multiput(80.16,16.91)(0.1,0.13){3}{\line(0,1){0.13}} \multiput(79.85,16.53)(0.11,0.13){3}{\line(0,1){0.13}} \multiput(79.53,16.14)(0.11,0.13){3}{\line(0,1){0.13}} \multiput(79.2,15.76)(0.11,0.13){3}{\line(0,1){0.13}} \multiput(78.87,15.38)(0.11,0.13){3}{\line(0,1){0.13}} \multiput(78.54,15)(0.11,0.13){3}{\line(0,1){0.13}} \multiput(78.21,14.63)(0.11,0.12){3}{\line(0,1){0.12}} \multiput(77.87,14.26)(0.11,0.12){3}{\line(0,1){0.12}} 
\multiput(77.52,13.9)(0.11,0.12){3}{\line(0,1){0.12}} \multiput(77.17,13.54)(0.12,0.12){3}{\line(0,1){0.12}} \multiput(76.82,13.18)(0.12,0.12){3}{\line(0,1){0.12}} \multiput(76.46,12.83)(0.12,0.12){3}{\line(1,0){0.12}} \multiput(76.1,12.48)(0.12,0.12){3}{\line(1,0){0.12}} \multiput(75.74,12.13)(0.12,0.11){3}{\line(1,0){0.12}} \multiput(75.37,11.79)(0.12,0.11){3}{\line(1,0){0.12}} \multiput(75,11.46)(0.12,0.11){3}{\line(1,0){0.12}} \multiput(74.62,11.13)(0.13,0.11){3}{\line(1,0){0.13}} \multiput(74.24,10.8)(0.13,0.11){3}{\line(1,0){0.13}} \multiput(73.86,10.47)(0.13,0.11){3}{\line(1,0){0.13}} \multiput(73.47,10.15)(0.13,0.11){3}{\line(1,0){0.13}} \multiput(73.09,9.84)(0.13,0.11){3}{\line(1,0){0.13}} \multiput(72.69,9.53)(0.13,0.1){3}{\line(1,0){0.13}} \multiput(72.29,9.22)(0.13,0.1){3}{\line(1,0){0.13}} \multiput(71.89,8.92)(0.13,0.1){3}{\line(1,0){0.13}} \multiput(71.49,8.62)(0.2,0.15){2}{\line(1,0){0.2}} \multiput(71.08,8.33)(0.2,0.15){2}{\line(1,0){0.2}} \multiput(70.67,8.04)(0.21,0.14){2}{\line(1,0){0.21}} \multiput(70.26,7.76)(0.21,0.14){2}{\line(1,0){0.21}} \multiput(69.84,7.48)(0.21,0.14){2}{\line(1,0){0.21}} \multiput(69.42,7.21)(0.21,0.14){2}{\line(1,0){0.21}} \multiput(69,6.94)(0.21,0.13){2}{\line(1,0){0.21}} \multiput(68.58,6.67)(0.21,0.13){2}{\line(1,0){0.21}} \multiput(68.15,6.41)(0.21,0.13){2}{\line(1,0){0.21}} \multiput(67.72,6.15)(0.22,0.13){2}{\line(1,0){0.22}} \multiput(67.28,5.9)(0.22,0.13){2}{\line(1,0){0.22}} \multiput(66.85,5.66)(0.22,0.12){2}{\line(1,0){0.22}} \multiput(66.41,5.42)(0.22,0.12){2}{\line(1,0){0.22}} \multiput(65.96,5.18)(0.22,0.12){2}{\line(1,0){0.22}} \multiput(65.52,4.95)(0.22,0.12){2}{\line(1,0){0.22}} \multiput(65.07,4.72)(0.22,0.11){2}{\line(1,0){0.22}} \multiput(64.62,4.5)(0.22,0.11){2}{\line(1,0){0.22}} \multiput(64.17,4.29)(0.23,0.11){2}{\line(1,0){0.23}} \multiput(63.71,4.08)(0.23,0.11){2}{\line(1,0){0.23}} \multiput(63.26,3.87)(0.23,0.1){2}{\line(1,0){0.23}} \multiput(62.8,3.67)(0.23,0.1){2}{\line(1,0){0.23}} 
\multiput(62.34,3.47)(0.23,0.1){2}{\line(1,0){0.23}} \multiput(61.87,3.28)(0.23,0.1){2}{\line(1,0){0.23}} \multiput(61.41,3.1)(0.23,0.09){2}{\line(1,0){0.23}} \multiput(60.94,2.92)(0.23,0.09){2}{\line(1,0){0.23}} \multiput(60.47,2.74)(0.47,0.17){1}{\line(1,0){0.47}} \multiput(60,2.57)(0.47,0.17){1}{\line(1,0){0.47}} \multiput(59.52,2.41)(0.47,0.16){1}{\line(1,0){0.47}} \multiput(59.05,2.25)(0.48,0.16){1}{\line(1,0){0.48}} \multiput(58.57,2.1)(0.48,0.15){1}{\line(1,0){0.48}} \multiput(58.09,1.95)(0.48,0.15){1}{\line(1,0){0.48}} \multiput(57.61,1.8)(0.48,0.14){1}{\line(1,0){0.48}} \multiput(57.13,1.67)(0.48,0.14){1}{\line(1,0){0.48}} \multiput(56.65,1.53)(0.48,0.13){1}{\line(1,0){0.48}} \multiput(56.16,1.41)(0.48,0.13){1}{\line(1,0){0.48}} \multiput(55.68,1.28)(0.49,0.12){1}{\line(1,0){0.49}} \multiput(55.19,1.17)(0.49,0.12){1}{\line(1,0){0.49}} \multiput(54.7,1.06)(0.49,0.11){1}{\line(1,0){0.49}} \multiput(54.21,0.95)(0.49,0.11){1}{\line(1,0){0.49}} \multiput(53.72,0.85)(0.49,0.1){1}{\line(1,0){0.49}} \multiput(53.23,0.76)(0.49,0.09){1}{\line(1,0){0.49}} \multiput(52.73,0.67)(0.49,0.09){1}{\line(1,0){0.49}} \multiput(52.24,0.59)(0.49,0.08){1}{\line(1,0){0.49}} \multiput(51.74,0.51)(0.5,0.08){1}{\line(1,0){0.5}} \multiput(51.25,0.44)(0.5,0.07){1}{\line(1,0){0.5}} \multiput(50.75,0.37)(0.5,0.07){1}{\line(1,0){0.5}} \multiput(50.25,0.31)(0.5,0.06){1}{\line(1,0){0.5}} \multiput(49.75,0.25)(0.5,0.06){1}{\line(1,0){0.5}} \multiput(49.25,0.2)(0.5,0.05){1}{\line(1,0){0.5}} \multiput(48.76,0.16)(0.5,0.04){1}{\line(1,0){0.5}} \multiput(48.26,0.12)(0.5,0.04){1}{\line(1,0){0.5}} \multiput(47.76,0.08)(0.5,0.03){1}{\line(1,0){0.5}} \multiput(47.25,0.06)(0.5,0.03){1}{\line(1,0){0.5}} \multiput(46.75,0.03)(0.5,0.02){1}{\line(1,0){0.5}} \multiput(46.25,0.02)(0.5,0.02){1}{\line(1,0){0.5}} \multiput(45.75,0.01)(0.5,0.01){1}{\line(1,0){0.5}} \multiput(45.25,0)(0.5,0.01){1}{\line(1,0){0.5}} \put(44.75,0){\line(1,0){0.5}} \multiput(44.25,0.01)(0.5,-0.01){1}{\line(1,0){0.5}} 
\multiput(43.75,0.02)(0.5,-0.01){1}{\line(1,0){0.5}} \multiput(43.25,0.03)(0.5,-0.02){1}{\line(1,0){0.5}} \multiput(42.75,0.06)(0.5,-0.02){1}{\line(1,0){0.5}} \multiput(42.24,0.08)(0.5,-0.03){1}{\line(1,0){0.5}} \multiput(41.74,0.12)(0.5,-0.03){1}{\line(1,0){0.5}} \multiput(41.24,0.16)(0.5,-0.04){1}{\line(1,0){0.5}} \multiput(40.75,0.2)(0.5,-0.04){1}{\line(1,0){0.5}} \multiput(40.25,0.25)(0.5,-0.05){1}{\line(1,0){0.5}} \multiput(39.75,0.31)(0.5,-0.06){1}{\line(1,0){0.5}} \multiput(39.25,0.37)(0.5,-0.06){1}{\line(1,0){0.5}} \multiput(38.75,0.44)(0.5,-0.07){1}{\line(1,0){0.5}} \multiput(38.26,0.51)(0.5,-0.07){1}{\line(1,0){0.5}} \multiput(37.76,0.59)(0.5,-0.08){1}{\line(1,0){0.5}} \multiput(37.27,0.67)(0.49,-0.08){1}{\line(1,0){0.49}} \multiput(36.77,0.76)(0.49,-0.09){1}{\line(1,0){0.49}} \multiput(36.28,0.85)(0.49,-0.09){1}{\line(1,0){0.49}} \multiput(35.79,0.95)(0.49,-0.1){1}{\line(1,0){0.49}} \multiput(35.3,1.06)(0.49,-0.11){1}{\line(1,0){0.49}} \multiput(34.81,1.17)(0.49,-0.11){1}{\line(1,0){0.49}} \multiput(34.32,1.28)(0.49,-0.12){1}{\line(1,0){0.49}} \multiput(33.84,1.41)(0.49,-0.12){1}{\line(1,0){0.49}} \multiput(33.35,1.53)(0.48,-0.13){1}{\line(1,0){0.48}} \multiput(32.87,1.67)(0.48,-0.13){1}{\line(1,0){0.48}} \multiput(32.39,1.8)(0.48,-0.14){1}{\line(1,0){0.48}} \multiput(31.91,1.95)(0.48,-0.14){1}{\line(1,0){0.48}} \multiput(31.43,2.1)(0.48,-0.15){1}{\line(1,0){0.48}} \multiput(30.95,2.25)(0.48,-0.15){1}{\line(1,0){0.48}} \multiput(30.48,2.41)(0.48,-0.16){1}{\line(1,0){0.48}} \multiput(30,2.57)(0.47,-0.16){1}{\line(1,0){0.47}} \multiput(29.53,2.74)(0.47,-0.17){1}{\line(1,0){0.47}} \multiput(29.06,2.92)(0.47,-0.17){1}{\line(1,0){0.47}} \multiput(28.59,3.1)(0.23,-0.09){2}{\line(1,0){0.23}} \multiput(28.13,3.28)(0.23,-0.09){2}{\line(1,0){0.23}} \multiput(27.66,3.47)(0.23,-0.1){2}{\line(1,0){0.23}} \multiput(27.2,3.67)(0.23,-0.1){2}{\line(1,0){0.23}} \multiput(26.74,3.87)(0.23,-0.1){2}{\line(1,0){0.23}} \multiput(26.29,4.08)(0.23,-0.1){2}{\line(1,0){0.23}} 
\multiput(25.83,4.29)(0.23,-0.11){2}{\line(1,0){0.23}} \multiput(25.38,4.5)(0.23,-0.11){2}{\line(1,0){0.23}} \multiput(24.93,4.72)(0.22,-0.11){2}{\line(1,0){0.22}} \multiput(24.48,4.95)(0.22,-0.11){2}{\line(1,0){0.22}} \multiput(24.04,5.18)(0.22,-0.12){2}{\line(1,0){0.22}} \multiput(23.59,5.42)(0.22,-0.12){2}{\line(1,0){0.22}} \multiput(23.15,5.66)(0.22,-0.12){2}{\line(1,0){0.22}} \multiput(22.72,5.9)(0.22,-0.12){2}{\line(1,0){0.22}} \multiput(22.28,6.15)(0.22,-0.13){2}{\line(1,0){0.22}} \multiput(21.85,6.41)(0.22,-0.13){2}{\line(1,0){0.22}} \multiput(21.42,6.67)(0.21,-0.13){2}{\line(1,0){0.21}} \multiput(21,6.94)(0.21,-0.13){2}{\line(1,0){0.21}} \multiput(20.58,7.21)(0.21,-0.13){2}{\line(1,0){0.21}} \multiput(20.16,7.48)(0.21,-0.14){2}{\line(1,0){0.21}} \multiput(19.74,7.76)(0.21,-0.14){2}{\line(1,0){0.21}} \multiput(19.33,8.04)(0.21,-0.14){2}{\line(1,0){0.21}} \multiput(18.92,8.33)(0.21,-0.14){2}{\line(1,0){0.21}} \multiput(18.51,8.62)(0.2,-0.15){2}{\line(1,0){0.2}} \multiput(18.11,8.92)(0.2,-0.15){2}{\line(1,0){0.2}} \multiput(17.71,9.22)(0.13,-0.1){3}{\line(1,0){0.13}} \multiput(17.31,9.53)(0.13,-0.1){3}{\line(1,0){0.13}} \multiput(16.91,9.84)(0.13,-0.1){3}{\line(1,0){0.13}} \multiput(16.53,10.15)(0.13,-0.11){3}{\line(1,0){0.13}} \multiput(16.14,10.47)(0.13,-0.11){3}{\line(1,0){0.13}} \multiput(15.76,10.8)(0.13,-0.11){3}{\line(1,0){0.13}} \multiput(15.38,11.13)(0.13,-0.11){3}{\line(1,0){0.13}} \multiput(15,11.46)(0.13,-0.11){3}{\line(1,0){0.13}} \multiput(14.63,11.79)(0.12,-0.11){3}{\line(1,0){0.12}} \multiput(14.26,12.13)(0.12,-0.11){3}{\line(1,0){0.12}} \multiput(13.9,12.48)(0.12,-0.11){3}{\line(1,0){0.12}} \multiput(13.54,12.83)(0.12,-0.12){3}{\line(1,0){0.12}} \multiput(13.18,13.18)(0.12,-0.12){3}{\line(1,0){0.12}} \multiput(12.83,13.54)(0.12,-0.12){3}{\line(0,-1){0.12}} \multiput(12.48,13.9)(0.12,-0.12){3}{\line(0,-1){0.12}} \multiput(12.13,14.26)(0.11,-0.12){3}{\line(0,-1){0.12}} \multiput(11.79,14.63)(0.11,-0.12){3}{\line(0,-1){0.12}} 
\multiput(11.46,15)(0.11,-0.12){3}{\line(0,-1){0.12}} \multiput(11.13,15.38)(0.11,-0.13){3}{\line(0,-1){0.13}} \multiput(10.8,15.76)(0.11,-0.13){3}{\line(0,-1){0.13}} \multiput(10.47,16.14)(0.11,-0.13){3}{\line(0,-1){0.13}} \multiput(10.15,16.53)(0.11,-0.13){3}{\line(0,-1){0.13}} \multiput(9.84,16.91)(0.11,-0.13){3}{\line(0,-1){0.13}} \multiput(9.53,17.31)(0.1,-0.13){3}{\line(0,-1){0.13}} \multiput(9.22,17.71)(0.1,-0.13){3}{\line(0,-1){0.13}} \multiput(8.92,18.11)(0.1,-0.13){3}{\line(0,-1){0.13}} \multiput(8.62,18.51)(0.15,-0.2){2}{\line(0,-1){0.2}} \multiput(8.33,18.92)(0.15,-0.2){2}{\line(0,-1){0.2}} \multiput(8.04,19.33)(0.14,-0.21){2}{\line(0,-1){0.21}} \multiput(7.76,19.74)(0.14,-0.21){2}{\line(0,-1){0.21}} \multiput(7.48,20.16)(0.14,-0.21){2}{\line(0,-1){0.21}} \multiput(7.21,20.58)(0.14,-0.21){2}{\line(0,-1){0.21}} \multiput(6.94,21)(0.13,-0.21){2}{\line(0,-1){0.21}} \multiput(6.67,21.42)(0.13,-0.21){2}{\line(0,-1){0.21}} \multiput(6.41,21.85)(0.13,-0.21){2}{\line(0,-1){0.21}} \multiput(6.15,22.28)(0.13,-0.22){2}{\line(0,-1){0.22}} \multiput(5.9,22.72)(0.13,-0.22){2}{\line(0,-1){0.22}} \multiput(5.66,23.15)(0.12,-0.22){2}{\line(0,-1){0.22}} \multiput(5.42,23.59)(0.12,-0.22){2}{\line(0,-1){0.22}} \multiput(5.18,24.04)(0.12,-0.22){2}{\line(0,-1){0.22}} \multiput(4.95,24.48)(0.12,-0.22){2}{\line(0,-1){0.22}} \multiput(4.72,24.93)(0.11,-0.22){2}{\line(0,-1){0.22}} \multiput(4.5,25.38)(0.11,-0.22){2}{\line(0,-1){0.22}} \multiput(4.29,25.83)(0.11,-0.23){2}{\line(0,-1){0.23}} \multiput(4.08,26.29)(0.11,-0.23){2}{\line(0,-1){0.23}} \multiput(3.87,26.74)(0.1,-0.23){2}{\line(0,-1){0.23}} \multiput(3.67,27.2)(0.1,-0.23){2}{\line(0,-1){0.23}} \multiput(3.47,27.66)(0.1,-0.23){2}{\line(0,-1){0.23}} \multiput(3.28,28.13)(0.1,-0.23){2}{\line(0,-1){0.23}} \multiput(3.1,28.59)(0.09,-0.23){2}{\line(0,-1){0.23}} \multiput(2.92,29.06)(0.09,-0.23){2}{\line(0,-1){0.23}} \multiput(2.74,29.53)(0.17,-0.47){1}{\line(0,-1){0.47}} \multiput(2.57,30)(0.17,-0.47){1}{\line(0,-1){0.47}} 
\multiput(2.41,30.48)(0.16,-0.47){1}{\line(0,-1){0.47}} \multiput(2.25,30.95)(0.16,-0.48){1}{\line(0,-1){0.48}} \multiput(2.1,31.43)(0.15,-0.48){1}{\line(0,-1){0.48}} \multiput(1.95,31.91)(0.15,-0.48){1}{\line(0,-1){0.48}} \multiput(1.8,32.39)(0.14,-0.48){1}{\line(0,-1){0.48}} \multiput(1.67,32.87)(0.14,-0.48){1}{\line(0,-1){0.48}} \multiput(1.53,33.35)(0.13,-0.48){1}{\line(0,-1){0.48}} \multiput(1.41,33.84)(0.13,-0.48){1}{\line(0,-1){0.48}} \multiput(1.28,34.32)(0.12,-0.49){1}{\line(0,-1){0.49}} \multiput(1.17,34.81)(0.12,-0.49){1}{\line(0,-1){0.49}} \multiput(1.06,35.3)(0.11,-0.49){1}{\line(0,-1){0.49}} \multiput(0.95,35.79)(0.11,-0.49){1}{\line(0,-1){0.49}} \multiput(0.85,36.28)(0.1,-0.49){1}{\line(0,-1){0.49}} \multiput(0.76,36.77)(0.09,-0.49){1}{\line(0,-1){0.49}} \multiput(0.67,37.27)(0.09,-0.49){1}{\line(0,-1){0.49}} \multiput(0.59,37.76)(0.08,-0.49){1}{\line(0,-1){0.49}} \multiput(0.51,38.26)(0.08,-0.5){1}{\line(0,-1){0.5}} \multiput(0.44,38.75)(0.07,-0.5){1}{\line(0,-1){0.5}} \multiput(0.37,39.25)(0.07,-0.5){1}{\line(0,-1){0.5}} \multiput(0.31,39.75)(0.06,-0.5){1}{\line(0,-1){0.5}} \multiput(0.25,40.25)(0.06,-0.5){1}{\line(0,-1){0.5}} \multiput(0.2,40.75)(0.05,-0.5){1}{\line(0,-1){0.5}} \multiput(0.16,41.24)(0.04,-0.5){1}{\line(0,-1){0.5}} \multiput(0.12,41.74)(0.04,-0.5){1}{\line(0,-1){0.5}} \multiput(0.08,42.24)(0.03,-0.5){1}{\line(0,-1){0.5}} \multiput(0.06,42.75)(0.03,-0.5){1}{\line(0,-1){0.5}} \multiput(0.03,43.25)(0.02,-0.5){1}{\line(0,-1){0.5}} \multiput(0.02,43.75)(0.02,-0.5){1}{\line(0,-1){0.5}} \multiput(0.01,44.25)(0.01,-0.5){1}{\line(0,-1){0.5}} \multiput(0,44.75)(0.01,-0.5){1}{\line(0,-1){0.5}} \put(0,44.75){\line(0,1){0.5}} \multiput(0,45.25)(0.01,0.5){1}{\line(0,1){0.5}} \multiput(0.01,45.75)(0.01,0.5){1}{\line(0,1){0.5}} \multiput(0.02,46.25)(0.02,0.5){1}{\line(0,1){0.5}} \multiput(0.03,46.75)(0.02,0.5){1}{\line(0,1){0.5}} \multiput(0.06,47.25)(0.03,0.5){1}{\line(0,1){0.5}} \multiput(0.08,47.76)(0.03,0.5){1}{\line(0,1){0.5}} 
\multiput(0.12,48.26)(0.04,0.5){1}{\line(0,1){0.5}} \multiput(0.16,48.76)(0.04,0.5){1}{\line(0,1){0.5}} \multiput(0.2,49.25)(0.05,0.5){1}{\line(0,1){0.5}} \multiput(0.25,49.75)(0.06,0.5){1}{\line(0,1){0.5}} \multiput(0.31,50.25)(0.06,0.5){1}{\line(0,1){0.5}} \multiput(0.37,50.75)(0.07,0.5){1}{\line(0,1){0.5}} \multiput(0.44,51.25)(0.07,0.5){1}{\line(0,1){0.5}} \multiput(0.51,51.74)(0.08,0.5){1}{\line(0,1){0.5}} \multiput(0.59,52.24)(0.08,0.49){1}{\line(0,1){0.49}} \multiput(0.67,52.73)(0.09,0.49){1}{\line(0,1){0.49}} \multiput(0.76,53.23)(0.09,0.49){1}{\line(0,1){0.49}} \multiput(0.85,53.72)(0.1,0.49){1}{\line(0,1){0.49}} \multiput(0.95,54.21)(0.11,0.49){1}{\line(0,1){0.49}} \multiput(1.06,54.7)(0.11,0.49){1}{\line(0,1){0.49}} \multiput(1.17,55.19)(0.12,0.49){1}{\line(0,1){0.49}} \multiput(1.28,55.68)(0.12,0.49){1}{\line(0,1){0.49}} \multiput(1.41,56.16)(0.13,0.48){1}{\line(0,1){0.48}} \multiput(1.53,56.65)(0.13,0.48){1}{\line(0,1){0.48}} \multiput(1.67,57.13)(0.14,0.48){1}{\line(0,1){0.48}} \multiput(1.8,57.61)(0.14,0.48){1}{\line(0,1){0.48}} \multiput(1.95,58.09)(0.15,0.48){1}{\line(0,1){0.48}} \multiput(2.1,58.57)(0.15,0.48){1}{\line(0,1){0.48}} \multiput(2.25,59.05)(0.16,0.48){1}{\line(0,1){0.48}} \multiput(2.41,59.52)(0.16,0.47){1}{\line(0,1){0.47}} \multiput(2.57,60)(0.17,0.47){1}{\line(0,1){0.47}} \multiput(2.74,60.47)(0.17,0.47){1}{\line(0,1){0.47}} \multiput(2.92,60.94)(0.09,0.23){2}{\line(0,1){0.23}} \multiput(3.1,61.41)(0.09,0.23){2}{\line(0,1){0.23}} \multiput(3.28,61.87)(0.1,0.23){2}{\line(0,1){0.23}} \multiput(3.47,62.34)(0.1,0.23){2}{\line(0,1){0.23}} \multiput(3.67,62.8)(0.1,0.23){2}{\line(0,1){0.23}} \multiput(3.87,63.26)(0.1,0.23){2}{\line(0,1){0.23}} \multiput(4.08,63.71)(0.11,0.23){2}{\line(0,1){0.23}} \multiput(4.29,64.17)(0.11,0.23){2}{\line(0,1){0.23}} \multiput(4.5,64.62)(0.11,0.22){2}{\line(0,1){0.22}} \multiput(4.72,65.07)(0.11,0.22){2}{\line(0,1){0.22}} \multiput(4.95,65.52)(0.12,0.22){2}{\line(0,1){0.22}} 
\multiput(5.18,65.96)(0.12,0.22){2}{\line(0,1){0.22}} \multiput(5.42,66.41)(0.12,0.22){2}{\line(0,1){0.22}} \multiput(5.66,66.85)(0.12,0.22){2}{\line(0,1){0.22}} \multiput(5.9,67.28)(0.13,0.22){2}{\line(0,1){0.22}} \multiput(6.15,67.72)(0.13,0.22){2}{\line(0,1){0.22}} \multiput(6.41,68.15)(0.13,0.21){2}{\line(0,1){0.21}} \multiput(6.67,68.58)(0.13,0.21){2}{\line(0,1){0.21}} \multiput(6.94,69)(0.13,0.21){2}{\line(0,1){0.21}} \multiput(7.21,69.42)(0.14,0.21){2}{\line(0,1){0.21}} \multiput(7.48,69.84)(0.14,0.21){2}{\line(0,1){0.21}} \multiput(7.76,70.26)(0.14,0.21){2}{\line(0,1){0.21}} \multiput(8.04,70.67)(0.14,0.21){2}{\line(0,1){0.21}} \multiput(8.33,71.08)(0.15,0.2){2}{\line(0,1){0.2}} \multiput(8.62,71.49)(0.15,0.2){2}{\line(0,1){0.2}} \multiput(8.92,71.89)(0.1,0.13){3}{\line(0,1){0.13}} \multiput(9.22,72.29)(0.1,0.13){3}{\line(0,1){0.13}} \multiput(9.53,72.69)(0.1,0.13){3}{\line(0,1){0.13}} \multiput(9.84,73.09)(0.11,0.13){3}{\line(0,1){0.13}} \multiput(10.15,73.47)(0.11,0.13){3}{\line(0,1){0.13}} \multiput(10.47,73.86)(0.11,0.13){3}{\line(0,1){0.13}} \multiput(10.8,74.24)(0.11,0.13){3}{\line(0,1){0.13}} \multiput(11.13,74.62)(0.11,0.13){3}{\line(0,1){0.13}} \multiput(11.46,75)(0.11,0.12){3}{\line(0,1){0.12}} \multiput(11.79,75.37)(0.11,0.12){3}{\line(0,1){0.12}} \multiput(12.13,75.74)(0.11,0.12){3}{\line(0,1){0.12}} \multiput(12.48,76.1)(0.12,0.12){3}{\line(0,1){0.12}} \multiput(12.83,76.46)(0.12,0.12){3}{\line(0,1){0.12}} \multiput(13.18,76.82)(0.12,0.12){3}{\line(1,0){0.12}} \multiput(13.54,77.17)(0.12,0.12){3}{\line(1,0){0.12}} \multiput(13.9,77.52)(0.12,0.11){3}{\line(1,0){0.12}} \multiput(14.26,77.87)(0.12,0.11){3}{\line(1,0){0.12}} \multiput(14.63,78.21)(0.12,0.11){3}{\line(1,0){0.12}} \multiput(15,78.54)(0.13,0.11){3}{\line(1,0){0.13}} \multiput(15.38,78.87)(0.13,0.11){3}{\line(1,0){0.13}} \multiput(15.76,79.2)(0.13,0.11){3}{\line(1,0){0.13}} \multiput(16.14,79.53)(0.13,0.11){3}{\line(1,0){0.13}} \multiput(16.53,79.85)(0.13,0.11){3}{\line(1,0){0.13}} 
\multiput(16.91,80.16)(0.13,0.1){3}{\line(1,0){0.13}} \multiput(17.31,80.47)(0.13,0.1){3}{\line(1,0){0.13}} \multiput(17.71,80.78)(0.13,0.1){3}{\line(1,0){0.13}} \multiput(18.11,81.08)(0.2,0.15){2}{\line(1,0){0.2}} \multiput(18.51,81.38)(0.2,0.15){2}{\line(1,0){0.2}} \multiput(18.92,81.67)(0.21,0.14){2}{\line(1,0){0.21}} \multiput(19.33,81.96)(0.21,0.14){2}{\line(1,0){0.21}} \multiput(19.74,82.24)(0.21,0.14){2}{\line(1,0){0.21}} \multiput(20.16,82.52)(0.21,0.14){2}{\line(1,0){0.21}} \multiput(20.58,82.79)(0.21,0.13){2}{\line(1,0){0.21}} \multiput(21,83.06)(0.21,0.13){2}{\line(1,0){0.21}} \multiput(21.42,83.33)(0.21,0.13){2}{\line(1,0){0.21}} \multiput(21.85,83.59)(0.22,0.13){2}{\line(1,0){0.22}} \multiput(22.28,83.85)(0.22,0.13){2}{\line(1,0){0.22}} \multiput(22.72,84.1)(0.22,0.12){2}{\line(1,0){0.22}} \multiput(23.15,84.34)(0.22,0.12){2}{\line(1,0){0.22}} \multiput(23.59,84.58)(0.22,0.12){2}{\line(1,0){0.22}} \multiput(24.04,84.82)(0.22,0.12){2}{\line(1,0){0.22}} \multiput(24.48,85.05)(0.22,0.11){2}{\line(1,0){0.22}} \multiput(24.93,85.28)(0.22,0.11){2}{\line(1,0){0.22}} \multiput(25.38,85.5)(0.23,0.11){2}{\line(1,0){0.23}} \multiput(25.83,85.71)(0.23,0.11){2}{\line(1,0){0.23}} \multiput(26.29,85.92)(0.23,0.1){2}{\line(1,0){0.23}} \multiput(26.74,86.13)(0.23,0.1){2}{\line(1,0){0.23}} \multiput(27.2,86.33)(0.23,0.1){2}{\line(1,0){0.23}} \multiput(27.66,86.53)(0.23,0.1){2}{\line(1,0){0.23}} \multiput(28.13,86.72)(0.23,0.09){2}{\line(1,0){0.23}} \multiput(28.59,86.9)(0.23,0.09){2}{\line(1,0){0.23}} \multiput(29.06,87.08)(0.47,0.17){1}{\line(1,0){0.47}} \multiput(29.53,87.26)(0.47,0.17){1}{\line(1,0){0.47}} \multiput(30,87.43)(0.47,0.16){1}{\line(1,0){0.47}} \multiput(30.48,87.59)(0.48,0.16){1}{\line(1,0){0.48}} \multiput(30.95,87.75)(0.48,0.15){1}{\line(1,0){0.48}} \multiput(31.43,87.9)(0.48,0.15){1}{\line(1,0){0.48}} \multiput(31.91,88.05)(0.48,0.14){1}{\line(1,0){0.48}} \multiput(32.39,88.2)(0.48,0.14){1}{\line(1,0){0.48}} 
\multiput(32.87,88.33)(0.48,0.13){1}{\line(1,0){0.48}} \multiput(33.35,88.47)(0.48,0.13){1}{\line(1,0){0.48}} \multiput(33.84,88.59)(0.49,0.12){1}{\line(1,0){0.49}} \multiput(34.32,88.72)(0.49,0.12){1}{\line(1,0){0.49}} \multiput(34.81,88.83)(0.49,0.11){1}{\line(1,0){0.49}} \multiput(35.3,88.94)(0.49,0.11){1}{\line(1,0){0.49}} \multiput(35.79,89.05)(0.49,0.1){1}{\line(1,0){0.49}} \multiput(36.28,89.15)(0.49,0.09){1}{\line(1,0){0.49}} \multiput(36.77,89.24)(0.49,0.09){1}{\line(1,0){0.49}} \multiput(37.27,89.33)(0.49,0.08){1}{\line(1,0){0.49}} \multiput(37.76,89.41)(0.5,0.08){1}{\line(1,0){0.5}} \multiput(38.26,89.49)(0.5,0.07){1}{\line(1,0){0.5}} \multiput(38.75,89.56)(0.5,0.07){1}{\line(1,0){0.5}} \multiput(39.25,89.63)(0.5,0.06){1}{\line(1,0){0.5}} \multiput(39.75,89.69)(0.5,0.06){1}{\line(1,0){0.5}} \multiput(40.25,89.75)(0.5,0.05){1}{\line(1,0){0.5}} \multiput(40.75,89.8)(0.5,0.04){1}{\line(1,0){0.5}} \multiput(41.24,89.84)(0.5,0.04){1}{\line(1,0){0.5}} \multiput(41.74,89.88)(0.5,0.03){1}{\line(1,0){0.5}} \multiput(42.24,89.92)(0.5,0.03){1}{\line(1,0){0.5}} \multiput(42.75,89.94)(0.5,0.02){1}{\line(1,0){0.5}} \multiput(43.25,89.97)(0.5,0.02){1}{\line(1,0){0.5}} \multiput(43.75,89.98)(0.5,0.01){1}{\line(1,0){0.5}} \multiput(44.25,89.99)(0.5,0.01){1}{\line(1,0){0.5}} \put(44.75,90){\line(1,0){0.5}} \multiput(45.25,90)(0.5,-0.01){1}{\line(1,0){0.5}} \multiput(45.75,89.99)(0.5,-0.01){1}{\line(1,0){0.5}} \multiput(46.25,89.98)(0.5,-0.02){1}{\line(1,0){0.5}} \multiput(46.75,89.97)(0.5,-0.02){1}{\line(1,0){0.5}} \multiput(47.25,89.94)(0.5,-0.03){1}{\line(1,0){0.5}} \multiput(47.76,89.92)(0.5,-0.03){1}{\line(1,0){0.5}} \multiput(48.26,89.88)(0.5,-0.04){1}{\line(1,0){0.5}} \multiput(48.76,89.84)(0.5,-0.04){1}{\line(1,0){0.5}} \multiput(49.25,89.8)(0.5,-0.05){1}{\line(1,0){0.5}} \multiput(49.75,89.75)(0.5,-0.06){1}{\line(1,0){0.5}} \multiput(50.25,89.69)(0.5,-0.06){1}{\line(1,0){0.5}} \multiput(50.75,89.63)(0.5,-0.07){1}{\line(1,0){0.5}} 
\multiput(51.25,89.56)(0.5,-0.07){1}{\line(1,0){0.5}} \multiput(51.74,89.49)(0.5,-0.08){1}{\line(1,0){0.5}} \multiput(52.24,89.41)(0.49,-0.08){1}{\line(1,0){0.49}} \multiput(52.73,89.33)(0.49,-0.09){1}{\line(1,0){0.49}} \multiput(53.23,89.24)(0.49,-0.09){1}{\line(1,0){0.49}} \multiput(53.72,89.15)(0.49,-0.1){1}{\line(1,0){0.49}} \multiput(54.21,89.05)(0.49,-0.11){1}{\line(1,0){0.49}} \multiput(54.7,88.94)(0.49,-0.11){1}{\line(1,0){0.49}} \multiput(55.19,88.83)(0.49,-0.12){1}{\line(1,0){0.49}} \multiput(55.68,88.72)(0.49,-0.12){1}{\line(1,0){0.49}} \multiput(56.16,88.59)(0.48,-0.13){1}{\line(1,0){0.48}} \multiput(56.65,88.47)(0.48,-0.13){1}{\line(1,0){0.48}} \multiput(57.13,88.33)(0.48,-0.14){1}{\line(1,0){0.48}} \multiput(57.61,88.2)(0.48,-0.14){1}{\line(1,0){0.48}} \multiput(58.09,88.05)(0.48,-0.15){1}{\line(1,0){0.48}} \multiput(58.57,87.9)(0.48,-0.15){1}{\line(1,0){0.48}} \multiput(59.05,87.75)(0.48,-0.16){1}{\line(1,0){0.48}} \multiput(59.52,87.59)(0.47,-0.16){1}{\line(1,0){0.47}} \multiput(60,87.43)(0.47,-0.17){1}{\line(1,0){0.47}} \multiput(60.47,87.26)(0.47,-0.17){1}{\line(1,0){0.47}} \multiput(60.94,87.08)(0.23,-0.09){2}{\line(1,0){0.23}} \multiput(61.41,86.9)(0.23,-0.09){2}{\line(1,0){0.23}} \multiput(61.87,86.72)(0.23,-0.1){2}{\line(1,0){0.23}} \multiput(62.34,86.53)(0.23,-0.1){2}{\line(1,0){0.23}} \multiput(62.8,86.33)(0.23,-0.1){2}{\line(1,0){0.23}} \multiput(63.26,86.13)(0.23,-0.1){2}{\line(1,0){0.23}} \multiput(63.71,85.92)(0.23,-0.11){2}{\line(1,0){0.23}} \multiput(64.17,85.71)(0.23,-0.11){2}{\line(1,0){0.23}} \multiput(64.62,85.5)(0.22,-0.11){2}{\line(1,0){0.22}} \multiput(65.07,85.28)(0.22,-0.11){2}{\line(1,0){0.22}} \multiput(65.52,85.05)(0.22,-0.12){2}{\line(1,0){0.22}} \multiput(65.96,84.82)(0.22,-0.12){2}{\line(1,0){0.22}} \multiput(66.41,84.58)(0.22,-0.12){2}{\line(1,0){0.22}} \multiput(66.85,84.34)(0.22,-0.12){2}{\line(1,0){0.22}} \multiput(67.28,84.1)(0.22,-0.13){2}{\line(1,0){0.22}} \multiput(67.72,83.85)(0.22,-0.13){2}{\line(1,0){0.22}} 
\multiput(68.15,83.59)(0.21,-0.13){2}{\line(1,0){0.21}} \multiput(68.58,83.33)(0.21,-0.13){2}{\line(1,0){0.21}} \multiput(69,83.06)(0.21,-0.13){2}{\line(1,0){0.21}} \multiput(69.42,82.79)(0.21,-0.14){2}{\line(1,0){0.21}} \multiput(69.84,82.52)(0.21,-0.14){2}{\line(1,0){0.21}} \multiput(70.26,82.24)(0.21,-0.14){2}{\line(1,0){0.21}} \multiput(70.67,81.96)(0.21,-0.14){2}{\line(1,0){0.21}} \multiput(71.08,81.67)(0.2,-0.15){2}{\line(1,0){0.2}} \multiput(71.49,81.38)(0.2,-0.15){2}{\line(1,0){0.2}} \multiput(71.89,81.08)(0.13,-0.1){3}{\line(1,0){0.13}} \multiput(72.29,80.78)(0.13,-0.1){3}{\line(1,0){0.13}} \multiput(72.69,80.47)(0.13,-0.1){3}{\line(1,0){0.13}} \multiput(73.09,80.16)(0.13,-0.11){3}{\line(1,0){0.13}} \multiput(73.47,79.85)(0.13,-0.11){3}{\line(1,0){0.13}} \multiput(73.86,79.53)(0.13,-0.11){3}{\line(1,0){0.13}} \multiput(74.24,79.2)(0.13,-0.11){3}{\line(1,0){0.13}} \multiput(74.62,78.87)(0.13,-0.11){3}{\line(1,0){0.13}} \multiput(75,78.54)(0.12,-0.11){3}{\line(1,0){0.12}} \multiput(75.37,78.21)(0.12,-0.11){3}{\line(1,0){0.12}} \multiput(75.74,77.87)(0.12,-0.11){3}{\line(1,0){0.12}} \multiput(76.1,77.52)(0.12,-0.12){3}{\line(1,0){0.12}} \multiput(76.46,77.17)(0.12,-0.12){3}{\line(1,0){0.12}} \multiput(76.82,76.82)(0.12,-0.12){3}{\line(0,-1){0.12}} \multiput(77.17,76.46)(0.12,-0.12){3}{\line(0,-1){0.12}} \multiput(77.52,76.1)(0.11,-0.12){3}{\line(0,-1){0.12}} \multiput(77.87,75.74)(0.11,-0.12){3}{\line(0,-1){0.12}} \multiput(78.21,75.37)(0.11,-0.12){3}{\line(0,-1){0.12}} \multiput(78.54,75)(0.11,-0.13){3}{\line(0,-1){0.13}} \multiput(78.87,74.62)(0.11,-0.13){3}{\line(0,-1){0.13}} \multiput(79.2,74.24)(0.11,-0.13){3}{\line(0,-1){0.13}} \multiput(79.53,73.86)(0.11,-0.13){3}{\line(0,-1){0.13}} \multiput(79.85,73.47)(0.11,-0.13){3}{\line(0,-1){0.13}} \multiput(80.16,73.09)(0.1,-0.13){3}{\line(0,-1){0.13}} \multiput(80.47,72.69)(0.1,-0.13){3}{\line(0,-1){0.13}} \multiput(80.78,72.29)(0.1,-0.13){3}{\line(0,-1){0.13}} 
\multiput(81.08,71.89)(0.15,-0.2){2}{\line(0,-1){0.2}} \multiput(81.38,71.49)(0.15,-0.2){2}{\line(0,-1){0.2}} \multiput(81.67,71.08)(0.14,-0.21){2}{\line(0,-1){0.21}} \multiput(81.96,70.67)(0.14,-0.21){2}{\line(0,-1){0.21}} \multiput(82.24,70.26)(0.14,-0.21){2}{\line(0,-1){0.21}} \multiput(82.52,69.84)(0.14,-0.21){2}{\line(0,-1){0.21}} \multiput(82.79,69.42)(0.13,-0.21){2}{\line(0,-1){0.21}} \multiput(83.06,69)(0.13,-0.21){2}{\line(0,-1){0.21}} \multiput(83.33,68.58)(0.13,-0.21){2}{\line(0,-1){0.21}} \multiput(83.59,68.15)(0.13,-0.22){2}{\line(0,-1){0.22}} \multiput(83.85,67.72)(0.13,-0.22){2}{\line(0,-1){0.22}} \multiput(84.1,67.28)(0.12,-0.22){2}{\line(0,-1){0.22}} \multiput(84.34,66.85)(0.12,-0.22){2}{\line(0,-1){0.22}} \multiput(84.58,66.41)(0.12,-0.22){2}{\line(0,-1){0.22}} \multiput(84.82,65.96)(0.12,-0.22){2}{\line(0,-1){0.22}} \multiput(85.05,65.52)(0.11,-0.22){2}{\line(0,-1){0.22}} \multiput(85.28,65.07)(0.11,-0.22){2}{\line(0,-1){0.22}} \multiput(85.5,64.62)(0.11,-0.23){2}{\line(0,-1){0.23}} \multiput(85.71,64.17)(0.11,-0.23){2}{\line(0,-1){0.23}} \multiput(85.92,63.71)(0.1,-0.23){2}{\line(0,-1){0.23}} \multiput(86.13,63.26)(0.1,-0.23){2}{\line(0,-1){0.23}} \multiput(86.33,62.8)(0.1,-0.23){2}{\line(0,-1){0.23}} \multiput(86.53,62.34)(0.1,-0.23){2}{\line(0,-1){0.23}} \multiput(86.72,61.87)(0.09,-0.23){2}{\line(0,-1){0.23}} \multiput(86.9,61.41)(0.09,-0.23){2}{\line(0,-1){0.23}} \multiput(87.08,60.94)(0.17,-0.47){1}{\line(0,-1){0.47}} \multiput(87.26,60.47)(0.17,-0.47){1}{\line(0,-1){0.47}} \multiput(87.43,60)(0.16,-0.47){1}{\line(0,-1){0.47}} \multiput(87.59,59.52)(0.16,-0.48){1}{\line(0,-1){0.48}} \multiput(87.75,59.05)(0.15,-0.48){1}{\line(0,-1){0.48}} \multiput(87.9,58.57)(0.15,-0.48){1}{\line(0,-1){0.48}} \multiput(88.05,58.09)(0.14,-0.48){1}{\line(0,-1){0.48}} \multiput(88.2,57.61)(0.14,-0.48){1}{\line(0,-1){0.48}} \multiput(88.33,57.13)(0.13,-0.48){1}{\line(0,-1){0.48}} \multiput(88.47,56.65)(0.13,-0.48){1}{\line(0,-1){0.48}} 
\multiput(88.59,56.16)(0.12,-0.49){1}{\line(0,-1){0.49}} \multiput(88.72,55.68)(0.12,-0.49){1}{\line(0,-1){0.49}} \multiput(88.83,55.19)(0.11,-0.49){1}{\line(0,-1){0.49}} \multiput(88.94,54.7)(0.11,-0.49){1}{\line(0,-1){0.49}} \multiput(89.05,54.21)(0.1,-0.49){1}{\line(0,-1){0.49}} \multiput(89.15,53.72)(0.09,-0.49){1}{\line(0,-1){0.49}} \multiput(89.24,53.23)(0.09,-0.49){1}{\line(0,-1){0.49}} \multiput(89.33,52.73)(0.08,-0.49){1}{\line(0,-1){0.49}} \multiput(89.41,52.24)(0.08,-0.5){1}{\line(0,-1){0.5}} \multiput(89.49,51.74)(0.07,-0.5){1}{\line(0,-1){0.5}} \multiput(89.56,51.25)(0.07,-0.5){1}{\line(0,-1){0.5}} \multiput(89.63,50.75)(0.06,-0.5){1}{\line(0,-1){0.5}} \multiput(89.69,50.25)(0.06,-0.5){1}{\line(0,-1){0.5}} \multiput(89.75,49.75)(0.05,-0.5){1}{\line(0,-1){0.5}} \multiput(89.8,49.25)(0.04,-0.5){1}{\line(0,-1){0.5}} \multiput(89.84,48.76)(0.04,-0.5){1}{\line(0,-1){0.5}} \multiput(89.88,48.26)(0.03,-0.5){1}{\line(0,-1){0.5}} \multiput(89.92,47.76)(0.03,-0.5){1}{\line(0,-1){0.5}} \multiput(89.94,47.25)(0.02,-0.5){1}{\line(0,-1){0.5}} \multiput(89.97,46.75)(0.02,-0.5){1}{\line(0,-1){0.5}} \multiput(89.98,46.25)(0.01,-0.5){1}{\line(0,-1){0.5}} \multiput(89.99,45.75)(0.01,-0.5){1}{\line(0,-1){0.5}} \put(88,60){\line(-1,2){15}} \linethickness{0.3mm} \put(45,45){\line(1,0){55}} \linethickness{0.3mm} \put(45,45){\line(0,1){50}} \linethickness{0.3mm} \multiput(45,45)(0.24,0.12){167}{\line(1,0){0.24}} \linethickness{0.3mm} \multiput(85,65)(0.24,0.12){83}{\line(1,0){0.24}} \linethickness{0.3mm} \put(85,65){\line(1,0){25}} \linethickness{0.3mm} \put(85,65){\line(0,1){25}} \linethickness{0.3mm} \put(45,65){\line(1,0){40}} \linethickness{0.3mm} \put(85,45){\line(0,1){20}} \linethickness{0.3mm} \multiput(60,60)(0.6,0.12){83}{\line(1,0){0.6}} \linethickness{0.3mm} \multiput(80,50)(0.12,0.36){83}{\line(0,1){0.36}} \put(66,70){\makebox(0,0)[cc]{$x$}} \put(39,56){\makebox(0,0)[cc]{$y$}} \put(41,41){\makebox(0,0)[cc]{O}} \put(92,60){\makebox(0,0)[cc]{P}} 
\put(76,50){\makebox(0,0)[cc]{B}} \put(55,60){\makebox(0,0)[cc]{C}} \put(99,39){\makebox(0,0)[cc]{(1,0)}} \put(36,96){\makebox(0,0)[cc]{(0,1)}} \end{picture} \caption{The relative positions of the force acting on $m$ while the body is on the geodesic $z=0$.}\label{circ} \end{figure}

\begin{proof} By equation \eqref{h1}, $x\ddot x+y\ddot y=-(\dot x^2+\dot y^2) \le 0$, which means that the force acting on $m$ is always directed along the tangent at $m$ to the geodesic circle $z=0$ or into the half-plane, bounded by this tangent, that contains the circle. Assuming that an $xy$-coordinate system is fixed with its origin at the base point of the acceleration vector (the point P in Figure \ref{circ}), this vector always lies in the half-plane below the line of slope $-x(t_0)/y(t_0)$ (i.e.~the tangent to the circle at the point P in Figure \ref{circ}). We now prove each case separately.

(a) If $\ddot{x}(t_0)> 0$ and $\ddot{y}(t_0)<0$, the force acting on $m$ is represented by a vector that lies in the region given by the intersection of the fourth quadrant (counted counterclockwise) and the half-plane below the line of slope $-x(t_0)/y(t_0)$. Then, obviously, the force pulls the body along the circle in the direction of the point $(1,0)$.

(b) If $\ddot{x}(t_0)< 0$ and $\ddot{y}(t_0)> 0$, the force acting on $m$ is represented by a vector that lies in the region given by the intersection of the second quadrant and the half-plane below the line of slope $-x(t_0)/y(t_0)$. Then, obviously, the force pulls the body along the circle in the direction of the point $(0,1)$.

(c) If $\ddot{x}(t_0)\le 0$ and $\ddot{y}(t_0)\le 0$, the force acting on $m$ is represented by a vector lying in the third quadrant. Then the direction in which this force acts depends on whether the acceleration vector lies: (i) below the line of slope $y(t_0)/x(t_0)$ (PB is below OP in Figure \ref{circ}); (ii) above it (PC is above OP); or (iii) on it (i.e.~on the line OP). Case (iii) includes the case when the acceleration is zero.
In case (i), the acceleration vector lies on a line whose slope is larger than $y(t_0)/x(t_0)$, i.e.~$\ddot{y}(t_0)/\ddot{x}(t_0)>y(t_0)/x(t_0)$, so the force pulls $m$ toward $(1,0)$.

In case (ii), the acceleration vector lies on a line whose slope is smaller than $y(t_0)/x(t_0)$, i.e.~$\ddot{y}(t_0)/\ddot{x}(t_0)<y(t_0)/x(t_0)$, so the force pulls $m$ toward $(0,1)$.

In case (iii), the acceleration vector is either zero or lies on the line of slope $y(t_0)/x(t_0)$, i.e.~$\ddot{y}(t_0)/\ddot{x}(t_0)=y(t_0)/x(t_0)$. But the latter alternative never happens. This fact follows from the equations of motion \eqref{eqmotion}, which show that the acceleration is the difference between the gradient of the force function and a multiple of the position vector. According to Euler's formula for homogeneous functions, \eqref{eul}, and the fact that the velocities are zero, these two vectors are orthogonal, so their difference can have the same direction as one of them only if the other vanishes. This vectorial argument agrees with the kinematic facts, which show that if $\dot{x}(t_0)=\dot{y}(t_0)=0$ and the acceleration has the same direction as the position vector, then $m$ does not move, so $\dot{x}(t)=\dot{y}(t)=0$, and therefore $\ddot{x}(t)=\ddot{y}(t)=0$ for all $t$. In particular, this means that when $\ddot{x}(t_0)=\ddot{y}(t_0)=0$, no force acts on $m$, so the body remains fixed.

(d) If $\ddot{x}(t_0)> 0$ and $\ddot{y}(t_0)> 0$, the force acting on $m$ is represented by a vector that lies in the region given by the intersection between the first quadrant and the half-plane below the line of slope $-x(t_0)/y(t_0)$. But this region is empty, so this motion cannot take place. \end{proof}

We will further prove the existence of solutions that end in collision-antipodal singularities, of solutions repelled from collision-antipodal singularities in positive time, and of solutions that remain analytic at a collision-antipodal configuration.
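Before proceeding, it may help to spell out the elementary fact behind case (iii) of the preceding proof. The notation here is generic ($U$ standing for the force function, $\vec q$ for the position vector, $\lambda,\mu$ for scalars), as a sketch of the orthogonality argument rather than a reproduction of the formulas in \eqref{eqmotion} and \eqref{eul}. If $\vec q\cdot\nabla U=0$ and the acceleration $\nabla U-\lambda\vec q$ is parallel to $\vec q$, say $\nabla U-\lambda\vec q=\mu\vec q$, then taking the inner product with $\vec q$ gives
\[
0=\vec q\cdot\nabla U=(\lambda+\mu)\,\vec q\cdot\vec q,
\]
so $\lambda+\mu=0$ and hence $\nabla U=(\lambda+\mu)\vec q=0$. The acceleration then reduces to $-\lambda\vec q$, a multiple of the position vector, which the kinematic argument above rules out unless it is zero.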
These solutions show that the dynamics of ${\bf\Delta}^+\cap{\bf\Delta}^-$ is more complicated than the dynamics of ${\bf\Delta}^+$ and ${\bf\Delta}^-$ away from the intersection, since solutions can go both towards and away from this set for $t>0$, and can even avoid singularities. This result represents a first example of a non-collision singularity reached by only three bodies, as well as a first example of a non-singularity collision.

\begin{theorem} Consider the 3-body problem in ${\bf S}^2$ with the bodies $m_1$ and $m_2$ having mass $M>0$ and the body $m_3$ having mass $m>0$. Then

(i) there are values of $m$ and $M$, as well as initial conditions, for which the solutions end in finite time in a collision-antipodal singularity;

(ii) other choices of masses and initial conditions lead to solutions that are repelled from a collision-antipodal singularity;

(iii) and yet other choices of masses and initial data correspond to solutions that reach a collision-antipodal configuration but remain analytic at this point.
\label{singularity} \end{theorem} \def\JPicScale{0.5} \ifx\JPicScale\undefined\def\JPicScale{1}\fi \unitlength \JPicScale mm \begin{figure} \begin{picture}(95,95)(0,0) \linethickness{0.3mm} \put(90,44.75){\line(0,1){0.5}} \multiput(89.99,44.25)(0.01,0.5){1}{\line(0,1){0.5}} \multiput(89.98,43.75)(0.01,0.5){1}{\line(0,1){0.5}} \multiput(89.97,43.25)(0.02,0.5){1}{\line(0,1){0.5}} \multiput(89.94,42.75)(0.02,0.5){1}{\line(0,1){0.5}} \multiput(89.92,42.24)(0.03,0.5){1}{\line(0,1){0.5}} \multiput(89.88,41.74)(0.03,0.5){1}{\line(0,1){0.5}} \multiput(89.84,41.24)(0.04,0.5){1}{\line(0,1){0.5}} \multiput(89.8,40.75)(0.04,0.5){1}{\line(0,1){0.5}} \multiput(89.75,40.25)(0.05,0.5){1}{\line(0,1){0.5}} \multiput(89.69,39.75)(0.06,0.5){1}{\line(0,1){0.5}} \multiput(89.63,39.25)(0.06,0.5){1}{\line(0,1){0.5}} \multiput(89.56,38.75)(0.07,0.5){1}{\line(0,1){0.5}} \multiput(89.49,38.26)(0.07,0.5){1}{\line(0,1){0.5}} \multiput(89.41,37.76)(0.08,0.5){1}{\line(0,1){0.5}} \multiput(89.33,37.27)(0.08,0.49){1}{\line(0,1){0.49}} \multiput(89.24,36.77)(0.09,0.49){1}{\line(0,1){0.49}} \multiput(89.15,36.28)(0.09,0.49){1}{\line(0,1){0.49}} \multiput(89.05,35.79)(0.1,0.49){1}{\line(0,1){0.49}} \multiput(88.94,35.3)(0.11,0.49){1}{\line(0,1){0.49}} \multiput(88.83,34.81)(0.11,0.49){1}{\line(0,1){0.49}} \multiput(88.72,34.32)(0.12,0.49){1}{\line(0,1){0.49}} \multiput(88.59,33.84)(0.12,0.49){1}{\line(0,1){0.49}} \multiput(88.47,33.35)(0.13,0.48){1}{\line(0,1){0.48}} \multiput(88.33,32.87)(0.13,0.48){1}{\line(0,1){0.48}} \multiput(88.2,32.39)(0.14,0.48){1}{\line(0,1){0.48}} \multiput(88.05,31.91)(0.14,0.48){1}{\line(0,1){0.48}} \multiput(87.9,31.43)(0.15,0.48){1}{\line(0,1){0.48}} \multiput(87.75,30.95)(0.15,0.48){1}{\line(0,1){0.48}} \multiput(87.59,30.48)(0.16,0.48){1}{\line(0,1){0.48}} \multiput(87.43,30)(0.16,0.47){1}{\line(0,1){0.47}} \multiput(87.26,29.53)(0.17,0.47){1}{\line(0,1){0.47}} \multiput(87.08,29.06)(0.17,0.47){1}{\line(0,1){0.47}} \multiput(86.9,28.59)(0.09,0.23){2}{\line(0,1){0.23}} 
\multiput(86.72,28.13)(0.09,0.23){2}{\line(0,1){0.23}} \multiput(86.53,27.66)(0.1,0.23){2}{\line(0,1){0.23}} \multiput(86.33,27.2)(0.1,0.23){2}{\line(0,1){0.23}} \multiput(86.13,26.74)(0.1,0.23){2}{\line(0,1){0.23}} \multiput(85.92,26.29)(0.1,0.23){2}{\line(0,1){0.23}} \multiput(85.71,25.83)(0.11,0.23){2}{\line(0,1){0.23}} \multiput(85.5,25.38)(0.11,0.23){2}{\line(0,1){0.23}} \multiput(85.28,24.93)(0.11,0.22){2}{\line(0,1){0.22}} \multiput(85.05,24.48)(0.11,0.22){2}{\line(0,1){0.22}} \multiput(84.82,24.04)(0.12,0.22){2}{\line(0,1){0.22}} \multiput(84.58,23.59)(0.12,0.22){2}{\line(0,1){0.22}} \multiput(84.34,23.15)(0.12,0.22){2}{\line(0,1){0.22}} \multiput(84.1,22.72)(0.12,0.22){2}{\line(0,1){0.22}} \multiput(83.85,22.28)(0.13,0.22){2}{\line(0,1){0.22}} \multiput(83.59,21.85)(0.13,0.22){2}{\line(0,1){0.22}} \multiput(83.33,21.42)(0.13,0.21){2}{\line(0,1){0.21}} \multiput(83.06,21)(0.13,0.21){2}{\line(0,1){0.21}} \multiput(82.79,20.58)(0.13,0.21){2}{\line(0,1){0.21}} \multiput(82.52,20.16)(0.14,0.21){2}{\line(0,1){0.21}} \multiput(82.24,19.74)(0.14,0.21){2}{\line(0,1){0.21}} \multiput(81.96,19.33)(0.14,0.21){2}{\line(0,1){0.21}} \multiput(81.67,18.92)(0.14,0.21){2}{\line(0,1){0.21}} \multiput(81.38,18.51)(0.15,0.2){2}{\line(0,1){0.2}} \multiput(81.08,18.11)(0.15,0.2){2}{\line(0,1){0.2}} \multiput(80.78,17.71)(0.1,0.13){3}{\line(0,1){0.13}} \multiput(80.47,17.31)(0.1,0.13){3}{\line(0,1){0.13}} \multiput(80.16,16.91)(0.1,0.13){3}{\line(0,1){0.13}} \multiput(79.85,16.53)(0.11,0.13){3}{\line(0,1){0.13}} \multiput(79.53,16.14)(0.11,0.13){3}{\line(0,1){0.13}} \multiput(79.2,15.76)(0.11,0.13){3}{\line(0,1){0.13}} \multiput(78.87,15.38)(0.11,0.13){3}{\line(0,1){0.13}} \multiput(78.54,15)(0.11,0.13){3}{\line(0,1){0.13}} \multiput(78.21,14.63)(0.11,0.12){3}{\line(0,1){0.12}} \multiput(77.87,14.26)(0.11,0.12){3}{\line(0,1){0.12}} \multiput(77.52,13.9)(0.11,0.12){3}{\line(0,1){0.12}} \multiput(77.17,13.54)(0.12,0.12){3}{\line(0,1){0.12}} 
\multiput(76.82,13.18)(0.12,0.12){3}{\line(0,1){0.12}} \multiput(76.46,12.83)(0.12,0.12){3}{\line(1,0){0.12}} \multiput(76.1,12.48)(0.12,0.12){3}{\line(1,0){0.12}} \multiput(75.74,12.13)(0.12,0.11){3}{\line(1,0){0.12}} \multiput(75.37,11.79)(0.12,0.11){3}{\line(1,0){0.12}} \multiput(75,11.46)(0.12,0.11){3}{\line(1,0){0.12}} \multiput(74.62,11.13)(0.13,0.11){3}{\line(1,0){0.13}} \multiput(74.24,10.8)(0.13,0.11){3}{\line(1,0){0.13}} \multiput(73.86,10.47)(0.13,0.11){3}{\line(1,0){0.13}} \multiput(73.47,10.15)(0.13,0.11){3}{\line(1,0){0.13}} \multiput(73.09,9.84)(0.13,0.11){3}{\line(1,0){0.13}} \multiput(72.69,9.53)(0.13,0.1){3}{\line(1,0){0.13}} \multiput(72.29,9.22)(0.13,0.1){3}{\line(1,0){0.13}} \multiput(71.89,8.92)(0.13,0.1){3}{\line(1,0){0.13}} \multiput(71.49,8.62)(0.2,0.15){2}{\line(1,0){0.2}} \multiput(71.08,8.33)(0.2,0.15){2}{\line(1,0){0.2}} \multiput(70.67,8.04)(0.21,0.14){2}{\line(1,0){0.21}} \multiput(70.26,7.76)(0.21,0.14){2}{\line(1,0){0.21}} \multiput(69.84,7.48)(0.21,0.14){2}{\line(1,0){0.21}} \multiput(69.42,7.21)(0.21,0.14){2}{\line(1,0){0.21}} \multiput(69,6.94)(0.21,0.13){2}{\line(1,0){0.21}} \multiput(68.58,6.67)(0.21,0.13){2}{\line(1,0){0.21}} \multiput(68.15,6.41)(0.21,0.13){2}{\line(1,0){0.21}} \multiput(67.72,6.15)(0.22,0.13){2}{\line(1,0){0.22}} \multiput(67.28,5.9)(0.22,0.13){2}{\line(1,0){0.22}} \multiput(66.85,5.66)(0.22,0.12){2}{\line(1,0){0.22}} \multiput(66.41,5.42)(0.22,0.12){2}{\line(1,0){0.22}} \multiput(65.96,5.18)(0.22,0.12){2}{\line(1,0){0.22}} \multiput(65.52,4.95)(0.22,0.12){2}{\line(1,0){0.22}} \multiput(65.07,4.72)(0.22,0.11){2}{\line(1,0){0.22}} \multiput(64.62,4.5)(0.22,0.11){2}{\line(1,0){0.22}} \multiput(64.17,4.29)(0.23,0.11){2}{\line(1,0){0.23}} \multiput(63.71,4.08)(0.23,0.11){2}{\line(1,0){0.23}} \multiput(63.26,3.87)(0.23,0.1){2}{\line(1,0){0.23}} \multiput(62.8,3.67)(0.23,0.1){2}{\line(1,0){0.23}} \multiput(62.34,3.47)(0.23,0.1){2}{\line(1,0){0.23}} \multiput(61.87,3.28)(0.23,0.1){2}{\line(1,0){0.23}} 
\multiput(61.41,3.1)(0.23,0.09){2}{\line(1,0){0.23}} \multiput(60.94,2.92)(0.23,0.09){2}{\line(1,0){0.23}} \multiput(60.47,2.74)(0.47,0.17){1}{\line(1,0){0.47}} \multiput(60,2.57)(0.47,0.17){1}{\line(1,0){0.47}} \multiput(59.52,2.41)(0.47,0.16){1}{\line(1,0){0.47}} \multiput(59.05,2.25)(0.48,0.16){1}{\line(1,0){0.48}} \multiput(58.57,2.1)(0.48,0.15){1}{\line(1,0){0.48}} \multiput(58.09,1.95)(0.48,0.15){1}{\line(1,0){0.48}} \multiput(57.61,1.8)(0.48,0.14){1}{\line(1,0){0.48}} \multiput(57.13,1.67)(0.48,0.14){1}{\line(1,0){0.48}} \multiput(56.65,1.53)(0.48,0.13){1}{\line(1,0){0.48}} \multiput(56.16,1.41)(0.48,0.13){1}{\line(1,0){0.48}} \multiput(55.68,1.28)(0.49,0.12){1}{\line(1,0){0.49}} \multiput(55.19,1.17)(0.49,0.12){1}{\line(1,0){0.49}} \multiput(54.7,1.06)(0.49,0.11){1}{\line(1,0){0.49}} \multiput(54.21,0.95)(0.49,0.11){1}{\line(1,0){0.49}} \multiput(53.72,0.85)(0.49,0.1){1}{\line(1,0){0.49}} \multiput(53.23,0.76)(0.49,0.09){1}{\line(1,0){0.49}} \multiput(52.73,0.67)(0.49,0.09){1}{\line(1,0){0.49}} \multiput(52.24,0.59)(0.49,0.08){1}{\line(1,0){0.49}} \multiput(51.74,0.51)(0.5,0.08){1}{\line(1,0){0.5}} \multiput(51.25,0.44)(0.5,0.07){1}{\line(1,0){0.5}} \multiput(50.75,0.37)(0.5,0.07){1}{\line(1,0){0.5}} \multiput(50.25,0.31)(0.5,0.06){1}{\line(1,0){0.5}} \multiput(49.75,0.25)(0.5,0.06){1}{\line(1,0){0.5}} \multiput(49.25,0.2)(0.5,0.05){1}{\line(1,0){0.5}} \multiput(48.76,0.16)(0.5,0.04){1}{\line(1,0){0.5}} \multiput(48.26,0.12)(0.5,0.04){1}{\line(1,0){0.5}} \multiput(47.76,0.08)(0.5,0.03){1}{\line(1,0){0.5}} \multiput(47.25,0.06)(0.5,0.03){1}{\line(1,0){0.5}} \multiput(46.75,0.03)(0.5,0.02){1}{\line(1,0){0.5}} \multiput(46.25,0.02)(0.5,0.02){1}{\line(1,0){0.5}} \multiput(45.75,0.01)(0.5,0.01){1}{\line(1,0){0.5}} \multiput(45.25,0)(0.5,0.01){1}{\line(1,0){0.5}} \put(44.75,0){\line(1,0){0.5}} \multiput(44.25,0.01)(0.5,-0.01){1}{\line(1,0){0.5}} \multiput(43.75,0.02)(0.5,-0.01){1}{\line(1,0){0.5}} \multiput(43.25,0.03)(0.5,-0.02){1}{\line(1,0){0.5}} 
\multiput(42.75,0.06)(0.5,-0.02){1}{\line(1,0){0.5}} \multiput(42.24,0.08)(0.5,-0.03){1}{\line(1,0){0.5}} \multiput(41.74,0.12)(0.5,-0.03){1}{\line(1,0){0.5}} \multiput(41.24,0.16)(0.5,-0.04){1}{\line(1,0){0.5}} \multiput(40.75,0.2)(0.5,-0.04){1}{\line(1,0){0.5}} \multiput(40.25,0.25)(0.5,-0.05){1}{\line(1,0){0.5}} \multiput(39.75,0.31)(0.5,-0.06){1}{\line(1,0){0.5}} \multiput(39.25,0.37)(0.5,-0.06){1}{\line(1,0){0.5}} \multiput(38.75,0.44)(0.5,-0.07){1}{\line(1,0){0.5}} \multiput(38.26,0.51)(0.5,-0.07){1}{\line(1,0){0.5}} \multiput(37.76,0.59)(0.5,-0.08){1}{\line(1,0){0.5}} \multiput(37.27,0.67)(0.49,-0.08){1}{\line(1,0){0.49}} \multiput(36.77,0.76)(0.49,-0.09){1}{\line(1,0){0.49}} \multiput(36.28,0.85)(0.49,-0.09){1}{\line(1,0){0.49}} \multiput(35.79,0.95)(0.49,-0.1){1}{\line(1,0){0.49}} \multiput(35.3,1.06)(0.49,-0.11){1}{\line(1,0){0.49}} \multiput(34.81,1.17)(0.49,-0.11){1}{\line(1,0){0.49}} \multiput(34.32,1.28)(0.49,-0.12){1}{\line(1,0){0.49}} \multiput(33.84,1.41)(0.49,-0.12){1}{\line(1,0){0.49}} \multiput(33.35,1.53)(0.48,-0.13){1}{\line(1,0){0.48}} \multiput(32.87,1.67)(0.48,-0.13){1}{\line(1,0){0.48}} \multiput(32.39,1.8)(0.48,-0.14){1}{\line(1,0){0.48}} \multiput(31.91,1.95)(0.48,-0.14){1}{\line(1,0){0.48}} \multiput(31.43,2.1)(0.48,-0.15){1}{\line(1,0){0.48}} \multiput(30.95,2.25)(0.48,-0.15){1}{\line(1,0){0.48}} \multiput(30.48,2.41)(0.48,-0.16){1}{\line(1,0){0.48}} \multiput(30,2.57)(0.47,-0.16){1}{\line(1,0){0.47}} \multiput(29.53,2.74)(0.47,-0.17){1}{\line(1,0){0.47}} \multiput(29.06,2.92)(0.47,-0.17){1}{\line(1,0){0.47}} \multiput(28.59,3.1)(0.23,-0.09){2}{\line(1,0){0.23}} \multiput(28.13,3.28)(0.23,-0.09){2}{\line(1,0){0.23}} \multiput(27.66,3.47)(0.23,-0.1){2}{\line(1,0){0.23}} \multiput(27.2,3.67)(0.23,-0.1){2}{\line(1,0){0.23}} \multiput(26.74,3.87)(0.23,-0.1){2}{\line(1,0){0.23}} \multiput(26.29,4.08)(0.23,-0.1){2}{\line(1,0){0.23}} \multiput(25.83,4.29)(0.23,-0.11){2}{\line(1,0){0.23}} \multiput(25.38,4.5)(0.23,-0.11){2}{\line(1,0){0.23}} 
\multiput(24.93,4.72)(0.22,-0.11){2}{\line(1,0){0.22}} \multiput(24.48,4.95)(0.22,-0.11){2}{\line(1,0){0.22}} \multiput(24.04,5.18)(0.22,-0.12){2}{\line(1,0){0.22}} \multiput(23.59,5.42)(0.22,-0.12){2}{\line(1,0){0.22}} \multiput(23.15,5.66)(0.22,-0.12){2}{\line(1,0){0.22}} \multiput(22.72,5.9)(0.22,-0.12){2}{\line(1,0){0.22}} \multiput(22.28,6.15)(0.22,-0.13){2}{\line(1,0){0.22}} \multiput(21.85,6.41)(0.22,-0.13){2}{\line(1,0){0.22}} \multiput(21.42,6.67)(0.21,-0.13){2}{\line(1,0){0.21}} \multiput(21,6.94)(0.21,-0.13){2}{\line(1,0){0.21}} \multiput(20.58,7.21)(0.21,-0.13){2}{\line(1,0){0.21}} \multiput(20.16,7.48)(0.21,-0.14){2}{\line(1,0){0.21}} \multiput(19.74,7.76)(0.21,-0.14){2}{\line(1,0){0.21}} \multiput(19.33,8.04)(0.21,-0.14){2}{\line(1,0){0.21}} \multiput(18.92,8.33)(0.21,-0.14){2}{\line(1,0){0.21}} \multiput(18.51,8.62)(0.2,-0.15){2}{\line(1,0){0.2}} \multiput(18.11,8.92)(0.2,-0.15){2}{\line(1,0){0.2}} \multiput(17.71,9.22)(0.13,-0.1){3}{\line(1,0){0.13}} \multiput(17.31,9.53)(0.13,-0.1){3}{\line(1,0){0.13}} \multiput(16.91,9.84)(0.13,-0.1){3}{\line(1,0){0.13}} \multiput(16.53,10.15)(0.13,-0.11){3}{\line(1,0){0.13}} \multiput(16.14,10.47)(0.13,-0.11){3}{\line(1,0){0.13}} \multiput(15.76,10.8)(0.13,-0.11){3}{\line(1,0){0.13}} \multiput(15.38,11.13)(0.13,-0.11){3}{\line(1,0){0.13}} \multiput(15,11.46)(0.13,-0.11){3}{\line(1,0){0.13}} \multiput(14.63,11.79)(0.12,-0.11){3}{\line(1,0){0.12}} \multiput(14.26,12.13)(0.12,-0.11){3}{\line(1,0){0.12}} \multiput(13.9,12.48)(0.12,-0.11){3}{\line(1,0){0.12}} \multiput(13.54,12.83)(0.12,-0.12){3}{\line(1,0){0.12}} \multiput(13.18,13.18)(0.12,-0.12){3}{\line(1,0){0.12}} \multiput(12.83,13.54)(0.12,-0.12){3}{\line(0,-1){0.12}} \multiput(12.48,13.9)(0.12,-0.12){3}{\line(0,-1){0.12}} \multiput(12.13,14.26)(0.11,-0.12){3}{\line(0,-1){0.12}} \multiput(11.79,14.63)(0.11,-0.12){3}{\line(0,-1){0.12}} \multiput(11.46,15)(0.11,-0.12){3}{\line(0,-1){0.12}} \multiput(11.13,15.38)(0.11,-0.13){3}{\line(0,-1){0.13}} 
\multiput(10.8,15.76)(0.11,-0.13){3}{\line(0,-1){0.13}} \multiput(10.47,16.14)(0.11,-0.13){3}{\line(0,-1){0.13}} \multiput(10.15,16.53)(0.11,-0.13){3}{\line(0,-1){0.13}} \multiput(9.84,16.91)(0.11,-0.13){3}{\line(0,-1){0.13}} \multiput(9.53,17.31)(0.1,-0.13){3}{\line(0,-1){0.13}} \multiput(9.22,17.71)(0.1,-0.13){3}{\line(0,-1){0.13}} \multiput(8.92,18.11)(0.1,-0.13){3}{\line(0,-1){0.13}} \multiput(8.62,18.51)(0.15,-0.2){2}{\line(0,-1){0.2}} \multiput(8.33,18.92)(0.15,-0.2){2}{\line(0,-1){0.2}} \multiput(8.04,19.33)(0.14,-0.21){2}{\line(0,-1){0.21}} \multiput(7.76,19.74)(0.14,-0.21){2}{\line(0,-1){0.21}} \multiput(7.48,20.16)(0.14,-0.21){2}{\line(0,-1){0.21}} \multiput(7.21,20.58)(0.14,-0.21){2}{\line(0,-1){0.21}} \multiput(6.94,21)(0.13,-0.21){2}{\line(0,-1){0.21}} \multiput(6.67,21.42)(0.13,-0.21){2}{\line(0,-1){0.21}} \multiput(6.41,21.85)(0.13,-0.21){2}{\line(0,-1){0.21}} \multiput(6.15,22.28)(0.13,-0.22){2}{\line(0,-1){0.22}} \multiput(5.9,22.72)(0.13,-0.22){2}{\line(0,-1){0.22}} \multiput(5.66,23.15)(0.12,-0.22){2}{\line(0,-1){0.22}} \multiput(5.42,23.59)(0.12,-0.22){2}{\line(0,-1){0.22}} \multiput(5.18,24.04)(0.12,-0.22){2}{\line(0,-1){0.22}} \multiput(4.95,24.48)(0.12,-0.22){2}{\line(0,-1){0.22}} \multiput(4.72,24.93)(0.11,-0.22){2}{\line(0,-1){0.22}} \multiput(4.5,25.38)(0.11,-0.22){2}{\line(0,-1){0.22}} \multiput(4.29,25.83)(0.11,-0.23){2}{\line(0,-1){0.23}} \multiput(4.08,26.29)(0.11,-0.23){2}{\line(0,-1){0.23}} \multiput(3.87,26.74)(0.1,-0.23){2}{\line(0,-1){0.23}} \multiput(3.67,27.2)(0.1,-0.23){2}{\line(0,-1){0.23}} \multiput(3.47,27.66)(0.1,-0.23){2}{\line(0,-1){0.23}} \multiput(3.28,28.13)(0.1,-0.23){2}{\line(0,-1){0.23}} \multiput(3.1,28.59)(0.09,-0.23){2}{\line(0,-1){0.23}} \multiput(2.92,29.06)(0.09,-0.23){2}{\line(0,-1){0.23}} \multiput(2.74,29.53)(0.17,-0.47){1}{\line(0,-1){0.47}} \multiput(2.57,30)(0.17,-0.47){1}{\line(0,-1){0.47}} \multiput(2.41,30.48)(0.16,-0.47){1}{\line(0,-1){0.47}} \multiput(2.25,30.95)(0.16,-0.48){1}{\line(0,-1){0.48}} 
\multiput(2.1,31.43)(0.15,-0.48){1}{\line(0,-1){0.48}} \multiput(1.95,31.91)(0.15,-0.48){1}{\line(0,-1){0.48}} \multiput(1.8,32.39)(0.14,-0.48){1}{\line(0,-1){0.48}} \multiput(1.67,32.87)(0.14,-0.48){1}{\line(0,-1){0.48}} \multiput(1.53,33.35)(0.13,-0.48){1}{\line(0,-1){0.48}} \multiput(1.41,33.84)(0.13,-0.48){1}{\line(0,-1){0.48}} \multiput(1.28,34.32)(0.12,-0.49){1}{\line(0,-1){0.49}} \multiput(1.17,34.81)(0.12,-0.49){1}{\line(0,-1){0.49}} \multiput(1.06,35.3)(0.11,-0.49){1}{\line(0,-1){0.49}} \multiput(0.95,35.79)(0.11,-0.49){1}{\line(0,-1){0.49}} \multiput(0.85,36.28)(0.1,-0.49){1}{\line(0,-1){0.49}} \multiput(0.76,36.77)(0.09,-0.49){1}{\line(0,-1){0.49}} \multiput(0.67,37.27)(0.09,-0.49){1}{\line(0,-1){0.49}} \multiput(0.59,37.76)(0.08,-0.49){1}{\line(0,-1){0.49}} \multiput(0.51,38.26)(0.08,-0.5){1}{\line(0,-1){0.5}} \multiput(0.44,38.75)(0.07,-0.5){1}{\line(0,-1){0.5}} \multiput(0.37,39.25)(0.07,-0.5){1}{\line(0,-1){0.5}} \multiput(0.31,39.75)(0.06,-0.5){1}{\line(0,-1){0.5}} \multiput(0.25,40.25)(0.06,-0.5){1}{\line(0,-1){0.5}} \multiput(0.2,40.75)(0.05,-0.5){1}{\line(0,-1){0.5}} \multiput(0.16,41.24)(0.04,-0.5){1}{\line(0,-1){0.5}} \multiput(0.12,41.74)(0.04,-0.5){1}{\line(0,-1){0.5}} \multiput(0.08,42.24)(0.03,-0.5){1}{\line(0,-1){0.5}} \multiput(0.06,42.75)(0.03,-0.5){1}{\line(0,-1){0.5}} \multiput(0.03,43.25)(0.02,-0.5){1}{\line(0,-1){0.5}} \multiput(0.02,43.75)(0.02,-0.5){1}{\line(0,-1){0.5}} \multiput(0.01,44.25)(0.01,-0.5){1}{\line(0,-1){0.5}} \multiput(0,44.75)(0.01,-0.5){1}{\line(0,-1){0.5}} \put(0,44.75){\line(0,1){0.5}} \multiput(0,45.25)(0.01,0.5){1}{\line(0,1){0.5}} \multiput(0.01,45.75)(0.01,0.5){1}{\line(0,1){0.5}} \multiput(0.02,46.25)(0.02,0.5){1}{\line(0,1){0.5}} \multiput(0.03,46.75)(0.02,0.5){1}{\line(0,1){0.5}} \multiput(0.06,47.25)(0.03,0.5){1}{\line(0,1){0.5}} \multiput(0.08,47.76)(0.03,0.5){1}{\line(0,1){0.5}} \multiput(0.12,48.26)(0.04,0.5){1}{\line(0,1){0.5}} \multiput(0.16,48.76)(0.04,0.5){1}{\line(0,1){0.5}} 
\multiput(0.2,49.25)(0.05,0.5){1}{\line(0,1){0.5}} \multiput(0.25,49.75)(0.06,0.5){1}{\line(0,1){0.5}} \multiput(0.31,50.25)(0.06,0.5){1}{\line(0,1){0.5}} \multiput(0.37,50.75)(0.07,0.5){1}{\line(0,1){0.5}} \multiput(0.44,51.25)(0.07,0.5){1}{\line(0,1){0.5}} \multiput(0.51,51.74)(0.08,0.5){1}{\line(0,1){0.5}} \multiput(0.59,52.24)(0.08,0.49){1}{\line(0,1){0.49}} \multiput(0.67,52.73)(0.09,0.49){1}{\line(0,1){0.49}} \multiput(0.76,53.23)(0.09,0.49){1}{\line(0,1){0.49}} \multiput(0.85,53.72)(0.1,0.49){1}{\line(0,1){0.49}} \multiput(0.95,54.21)(0.11,0.49){1}{\line(0,1){0.49}} \multiput(1.06,54.7)(0.11,0.49){1}{\line(0,1){0.49}} \multiput(1.17,55.19)(0.12,0.49){1}{\line(0,1){0.49}} \multiput(1.28,55.68)(0.12,0.49){1}{\line(0,1){0.49}} \multiput(1.41,56.16)(0.13,0.48){1}{\line(0,1){0.48}} \multiput(1.53,56.65)(0.13,0.48){1}{\line(0,1){0.48}} \multiput(1.67,57.13)(0.14,0.48){1}{\line(0,1){0.48}} \multiput(1.8,57.61)(0.14,0.48){1}{\line(0,1){0.48}} \multiput(1.95,58.09)(0.15,0.48){1}{\line(0,1){0.48}} \multiput(2.1,58.57)(0.15,0.48){1}{\line(0,1){0.48}} \multiput(2.25,59.05)(0.16,0.48){1}{\line(0,1){0.48}} \multiput(2.41,59.52)(0.16,0.47){1}{\line(0,1){0.47}} \multiput(2.57,60)(0.17,0.47){1}{\line(0,1){0.47}} \multiput(2.74,60.47)(0.17,0.47){1}{\line(0,1){0.47}} \multiput(2.92,60.94)(0.09,0.23){2}{\line(0,1){0.23}} \multiput(3.1,61.41)(0.09,0.23){2}{\line(0,1){0.23}} \multiput(3.28,61.87)(0.1,0.23){2}{\line(0,1){0.23}} \multiput(3.47,62.34)(0.1,0.23){2}{\line(0,1){0.23}} \multiput(3.67,62.8)(0.1,0.23){2}{\line(0,1){0.23}} \multiput(3.87,63.26)(0.1,0.23){2}{\line(0,1){0.23}} \multiput(4.08,63.71)(0.11,0.23){2}{\line(0,1){0.23}} \multiput(4.29,64.17)(0.11,0.23){2}{\line(0,1){0.23}} \multiput(4.5,64.62)(0.11,0.22){2}{\line(0,1){0.22}} \multiput(4.72,65.07)(0.11,0.22){2}{\line(0,1){0.22}} \multiput(4.95,65.52)(0.12,0.22){2}{\line(0,1){0.22}} \multiput(5.18,65.96)(0.12,0.22){2}{\line(0,1){0.22}} \multiput(5.42,66.41)(0.12,0.22){2}{\line(0,1){0.22}} 
\multiput(5.66,66.85)(0.12,0.22){2}{\line(0,1){0.22}} \multiput(5.9,67.28)(0.13,0.22){2}{\line(0,1){0.22}} \multiput(6.15,67.72)(0.13,0.22){2}{\line(0,1){0.22}} \multiput(6.41,68.15)(0.13,0.21){2}{\line(0,1){0.21}} \multiput(6.67,68.58)(0.13,0.21){2}{\line(0,1){0.21}} \multiput(6.94,69)(0.13,0.21){2}{\line(0,1){0.21}} \multiput(7.21,69.42)(0.14,0.21){2}{\line(0,1){0.21}} \multiput(7.48,69.84)(0.14,0.21){2}{\line(0,1){0.21}} \multiput(7.76,70.26)(0.14,0.21){2}{\line(0,1){0.21}} \multiput(8.04,70.67)(0.14,0.21){2}{\line(0,1){0.21}} \multiput(8.33,71.08)(0.15,0.2){2}{\line(0,1){0.2}} \multiput(8.62,71.49)(0.15,0.2){2}{\line(0,1){0.2}} \multiput(8.92,71.89)(0.1,0.13){3}{\line(0,1){0.13}} \multiput(9.22,72.29)(0.1,0.13){3}{\line(0,1){0.13}} \multiput(9.53,72.69)(0.1,0.13){3}{\line(0,1){0.13}} \multiput(9.84,73.09)(0.11,0.13){3}{\line(0,1){0.13}} \multiput(10.15,73.47)(0.11,0.13){3}{\line(0,1){0.13}} \multiput(10.47,73.86)(0.11,0.13){3}{\line(0,1){0.13}} \multiput(10.8,74.24)(0.11,0.13){3}{\line(0,1){0.13}} \multiput(11.13,74.62)(0.11,0.13){3}{\line(0,1){0.13}} \multiput(11.46,75)(0.11,0.12){3}{\line(0,1){0.12}} \multiput(11.79,75.37)(0.11,0.12){3}{\line(0,1){0.12}} \multiput(12.13,75.74)(0.11,0.12){3}{\line(0,1){0.12}} \multiput(12.48,76.1)(0.12,0.12){3}{\line(0,1){0.12}} \multiput(12.83,76.46)(0.12,0.12){3}{\line(0,1){0.12}} \multiput(13.18,76.82)(0.12,0.12){3}{\line(1,0){0.12}} \multiput(13.54,77.17)(0.12,0.12){3}{\line(1,0){0.12}} \multiput(13.9,77.52)(0.12,0.11){3}{\line(1,0){0.12}} \multiput(14.26,77.87)(0.12,0.11){3}{\line(1,0){0.12}} \multiput(14.63,78.21)(0.12,0.11){3}{\line(1,0){0.12}} \multiput(15,78.54)(0.13,0.11){3}{\line(1,0){0.13}} \multiput(15.38,78.87)(0.13,0.11){3}{\line(1,0){0.13}} \multiput(15.76,79.2)(0.13,0.11){3}{\line(1,0){0.13}} \multiput(16.14,79.53)(0.13,0.11){3}{\line(1,0){0.13}} \multiput(16.53,79.85)(0.13,0.11){3}{\line(1,0){0.13}} \multiput(16.91,80.16)(0.13,0.1){3}{\line(1,0){0.13}} \multiput(17.31,80.47)(0.13,0.1){3}{\line(1,0){0.13}} 
\multiput(17.71,80.78)(0.13,0.1){3}{\line(1,0){0.13}} \multiput(18.11,81.08)(0.2,0.15){2}{\line(1,0){0.2}} \multiput(18.51,81.38)(0.2,0.15){2}{\line(1,0){0.2}} \multiput(18.92,81.67)(0.21,0.14){2}{\line(1,0){0.21}} \multiput(19.33,81.96)(0.21,0.14){2}{\line(1,0){0.21}} \multiput(19.74,82.24)(0.21,0.14){2}{\line(1,0){0.21}} \multiput(20.16,82.52)(0.21,0.14){2}{\line(1,0){0.21}} \multiput(20.58,82.79)(0.21,0.13){2}{\line(1,0){0.21}} \multiput(21,83.06)(0.21,0.13){2}{\line(1,0){0.21}} \multiput(21.42,83.33)(0.21,0.13){2}{\line(1,0){0.21}} \multiput(21.85,83.59)(0.22,0.13){2}{\line(1,0){0.22}} \multiput(22.28,83.85)(0.22,0.13){2}{\line(1,0){0.22}} \multiput(22.72,84.1)(0.22,0.12){2}{\line(1,0){0.22}} \multiput(23.15,84.34)(0.22,0.12){2}{\line(1,0){0.22}} \multiput(23.59,84.58)(0.22,0.12){2}{\line(1,0){0.22}} \multiput(24.04,84.82)(0.22,0.12){2}{\line(1,0){0.22}} \multiput(24.48,85.05)(0.22,0.11){2}{\line(1,0){0.22}} \multiput(24.93,85.28)(0.22,0.11){2}{\line(1,0){0.22}} \multiput(25.38,85.5)(0.23,0.11){2}{\line(1,0){0.23}} \multiput(25.83,85.71)(0.23,0.11){2}{\line(1,0){0.23}} \multiput(26.29,85.92)(0.23,0.1){2}{\line(1,0){0.23}} \multiput(26.74,86.13)(0.23,0.1){2}{\line(1,0){0.23}} \multiput(27.2,86.33)(0.23,0.1){2}{\line(1,0){0.23}} \multiput(27.66,86.53)(0.23,0.1){2}{\line(1,0){0.23}} \multiput(28.13,86.72)(0.23,0.09){2}{\line(1,0){0.23}} \multiput(28.59,86.9)(0.23,0.09){2}{\line(1,0){0.23}} \multiput(29.06,87.08)(0.47,0.17){1}{\line(1,0){0.47}} \multiput(29.53,87.26)(0.47,0.17){1}{\line(1,0){0.47}} \multiput(30,87.43)(0.47,0.16){1}{\line(1,0){0.47}} \multiput(30.48,87.59)(0.48,0.16){1}{\line(1,0){0.48}} \multiput(30.95,87.75)(0.48,0.15){1}{\line(1,0){0.48}} \multiput(31.43,87.9)(0.48,0.15){1}{\line(1,0){0.48}} \multiput(31.91,88.05)(0.48,0.14){1}{\line(1,0){0.48}} \multiput(32.39,88.2)(0.48,0.14){1}{\line(1,0){0.48}} \multiput(32.87,88.33)(0.48,0.13){1}{\line(1,0){0.48}} \multiput(33.35,88.47)(0.48,0.13){1}{\line(1,0){0.48}} 
\multiput(33.84,88.59)(0.49,0.12){1}{\line(1,0){0.49}} \multiput(34.32,88.72)(0.49,0.12){1}{\line(1,0){0.49}} \multiput(34.81,88.83)(0.49,0.11){1}{\line(1,0){0.49}} \multiput(35.3,88.94)(0.49,0.11){1}{\line(1,0){0.49}} \multiput(35.79,89.05)(0.49,0.1){1}{\line(1,0){0.49}} \multiput(36.28,89.15)(0.49,0.09){1}{\line(1,0){0.49}} \multiput(36.77,89.24)(0.49,0.09){1}{\line(1,0){0.49}} \multiput(37.27,89.33)(0.49,0.08){1}{\line(1,0){0.49}} \multiput(37.76,89.41)(0.5,0.08){1}{\line(1,0){0.5}} \multiput(38.26,89.49)(0.5,0.07){1}{\line(1,0){0.5}} \multiput(38.75,89.56)(0.5,0.07){1}{\line(1,0){0.5}} \multiput(39.25,89.63)(0.5,0.06){1}{\line(1,0){0.5}} \multiput(39.75,89.69)(0.5,0.06){1}{\line(1,0){0.5}} \multiput(40.25,89.75)(0.5,0.05){1}{\line(1,0){0.5}} \multiput(40.75,89.8)(0.5,0.04){1}{\line(1,0){0.5}} \multiput(41.24,89.84)(0.5,0.04){1}{\line(1,0){0.5}} \multiput(41.74,89.88)(0.5,0.03){1}{\line(1,0){0.5}} \multiput(42.24,89.92)(0.5,0.03){1}{\line(1,0){0.5}} \multiput(42.75,89.94)(0.5,0.02){1}{\line(1,0){0.5}} \multiput(43.25,89.97)(0.5,0.02){1}{\line(1,0){0.5}} \multiput(43.75,89.98)(0.5,0.01){1}{\line(1,0){0.5}} \multiput(44.25,89.99)(0.5,0.01){1}{\line(1,0){0.5}} \put(44.75,90){\line(1,0){0.5}} \multiput(45.25,90)(0.5,-0.01){1}{\line(1,0){0.5}} \multiput(45.75,89.99)(0.5,-0.01){1}{\line(1,0){0.5}} \multiput(46.25,89.98)(0.5,-0.02){1}{\line(1,0){0.5}} \multiput(46.75,89.97)(0.5,-0.02){1}{\line(1,0){0.5}} \multiput(47.25,89.94)(0.5,-0.03){1}{\line(1,0){0.5}} \multiput(47.76,89.92)(0.5,-0.03){1}{\line(1,0){0.5}} \multiput(48.26,89.88)(0.5,-0.04){1}{\line(1,0){0.5}} \multiput(48.76,89.84)(0.5,-0.04){1}{\line(1,0){0.5}} \multiput(49.25,89.8)(0.5,-0.05){1}{\line(1,0){0.5}} \multiput(49.75,89.75)(0.5,-0.06){1}{\line(1,0){0.5}} \multiput(50.25,89.69)(0.5,-0.06){1}{\line(1,0){0.5}} \multiput(50.75,89.63)(0.5,-0.07){1}{\line(1,0){0.5}} \multiput(51.25,89.56)(0.5,-0.07){1}{\line(1,0){0.5}} \multiput(51.74,89.49)(0.5,-0.08){1}{\line(1,0){0.5}} 
\multiput(52.24,89.41)(0.49,-0.08){1}{\line(1,0){0.49}} \multiput(52.73,89.33)(0.49,-0.09){1}{\line(1,0){0.49}} \multiput(53.23,89.24)(0.49,-0.09){1}{\line(1,0){0.49}} \multiput(53.72,89.15)(0.49,-0.1){1}{\line(1,0){0.49}} \multiput(54.21,89.05)(0.49,-0.11){1}{\line(1,0){0.49}} \multiput(54.7,88.94)(0.49,-0.11){1}{\line(1,0){0.49}} \multiput(55.19,88.83)(0.49,-0.12){1}{\line(1,0){0.49}} \multiput(55.68,88.72)(0.49,-0.12){1}{\line(1,0){0.49}} \multiput(56.16,88.59)(0.48,-0.13){1}{\line(1,0){0.48}} \multiput(56.65,88.47)(0.48,-0.13){1}{\line(1,0){0.48}} \multiput(57.13,88.33)(0.48,-0.14){1}{\line(1,0){0.48}} \multiput(57.61,88.2)(0.48,-0.14){1}{\line(1,0){0.48}} \multiput(58.09,88.05)(0.48,-0.15){1}{\line(1,0){0.48}} \multiput(58.57,87.9)(0.48,-0.15){1}{\line(1,0){0.48}} \multiput(59.05,87.75)(0.48,-0.16){1}{\line(1,0){0.48}} \multiput(59.52,87.59)(0.47,-0.16){1}{\line(1,0){0.47}} \multiput(60,87.43)(0.47,-0.17){1}{\line(1,0){0.47}} \multiput(60.47,87.26)(0.47,-0.17){1}{\line(1,0){0.47}} \multiput(60.94,87.08)(0.23,-0.09){2}{\line(1,0){0.23}} \multiput(61.41,86.9)(0.23,-0.09){2}{\line(1,0){0.23}} \multiput(61.87,86.72)(0.23,-0.1){2}{\line(1,0){0.23}} \multiput(62.34,86.53)(0.23,-0.1){2}{\line(1,0){0.23}} \multiput(62.8,86.33)(0.23,-0.1){2}{\line(1,0){0.23}} \multiput(63.26,86.13)(0.23,-0.1){2}{\line(1,0){0.23}} \multiput(63.71,85.92)(0.23,-0.11){2}{\line(1,0){0.23}} \multiput(64.17,85.71)(0.23,-0.11){2}{\line(1,0){0.23}} \multiput(64.62,85.5)(0.22,-0.11){2}{\line(1,0){0.22}} \multiput(65.07,85.28)(0.22,-0.11){2}{\line(1,0){0.22}} \multiput(65.52,85.05)(0.22,-0.12){2}{\line(1,0){0.22}} \multiput(65.96,84.82)(0.22,-0.12){2}{\line(1,0){0.22}} \multiput(66.41,84.58)(0.22,-0.12){2}{\line(1,0){0.22}} \multiput(66.85,84.34)(0.22,-0.12){2}{\line(1,0){0.22}} \multiput(67.28,84.1)(0.22,-0.13){2}{\line(1,0){0.22}} \multiput(67.72,83.85)(0.22,-0.13){2}{\line(1,0){0.22}} \multiput(68.15,83.59)(0.21,-0.13){2}{\line(1,0){0.21}} 
\multiput(68.58,83.33)(0.21,-0.13){2}{\line(1,0){0.21}} \multiput(69,83.06)(0.21,-0.13){2}{\line(1,0){0.21}} \multiput(69.42,82.79)(0.21,-0.14){2}{\line(1,0){0.21}} \multiput(69.84,82.52)(0.21,-0.14){2}{\line(1,0){0.21}} \multiput(70.26,82.24)(0.21,-0.14){2}{\line(1,0){0.21}} \multiput(70.67,81.96)(0.21,-0.14){2}{\line(1,0){0.21}} \multiput(71.08,81.67)(0.2,-0.15){2}{\line(1,0){0.2}} \multiput(71.49,81.38)(0.2,-0.15){2}{\line(1,0){0.2}} \multiput(71.89,81.08)(0.13,-0.1){3}{\line(1,0){0.13}} \multiput(72.29,80.78)(0.13,-0.1){3}{\line(1,0){0.13}} \multiput(72.69,80.47)(0.13,-0.1){3}{\line(1,0){0.13}} \multiput(73.09,80.16)(0.13,-0.11){3}{\line(1,0){0.13}} \multiput(73.47,79.85)(0.13,-0.11){3}{\line(1,0){0.13}} \multiput(73.86,79.53)(0.13,-0.11){3}{\line(1,0){0.13}} \multiput(74.24,79.2)(0.13,-0.11){3}{\line(1,0){0.13}} \multiput(74.62,78.87)(0.13,-0.11){3}{\line(1,0){0.13}} \multiput(75,78.54)(0.12,-0.11){3}{\line(1,0){0.12}} \multiput(75.37,78.21)(0.12,-0.11){3}{\line(1,0){0.12}} \multiput(75.74,77.87)(0.12,-0.11){3}{\line(1,0){0.12}} \multiput(76.1,77.52)(0.12,-0.12){3}{\line(1,0){0.12}} \multiput(76.46,77.17)(0.12,-0.12){3}{\line(1,0){0.12}} \multiput(76.82,76.82)(0.12,-0.12){3}{\line(0,-1){0.12}} \multiput(77.17,76.46)(0.12,-0.12){3}{\line(0,-1){0.12}} \multiput(77.52,76.1)(0.11,-0.12){3}{\line(0,-1){0.12}} \multiput(77.87,75.74)(0.11,-0.12){3}{\line(0,-1){0.12}} \multiput(78.21,75.37)(0.11,-0.12){3}{\line(0,-1){0.12}} \multiput(78.54,75)(0.11,-0.13){3}{\line(0,-1){0.13}} \multiput(78.87,74.62)(0.11,-0.13){3}{\line(0,-1){0.13}} \multiput(79.2,74.24)(0.11,-0.13){3}{\line(0,-1){0.13}} \multiput(79.53,73.86)(0.11,-0.13){3}{\line(0,-1){0.13}} \multiput(79.85,73.47)(0.11,-0.13){3}{\line(0,-1){0.13}} \multiput(80.16,73.09)(0.1,-0.13){3}{\line(0,-1){0.13}} \multiput(80.47,72.69)(0.1,-0.13){3}{\line(0,-1){0.13}} \multiput(80.78,72.29)(0.1,-0.13){3}{\line(0,-1){0.13}} \multiput(81.08,71.89)(0.15,-0.2){2}{\line(0,-1){0.2}} 
\multiput(81.38,71.49)(0.15,-0.2){2}{\line(0,-1){0.2}} \multiput(81.67,71.08)(0.14,-0.21){2}{\line(0,-1){0.21}} \multiput(81.96,70.67)(0.14,-0.21){2}{\line(0,-1){0.21}} \multiput(82.24,70.26)(0.14,-0.21){2}{\line(0,-1){0.21}} \multiput(82.52,69.84)(0.14,-0.21){2}{\line(0,-1){0.21}} \multiput(82.79,69.42)(0.13,-0.21){2}{\line(0,-1){0.21}} \multiput(83.06,69)(0.13,-0.21){2}{\line(0,-1){0.21}} \multiput(83.33,68.58)(0.13,-0.21){2}{\line(0,-1){0.21}} \multiput(83.59,68.15)(0.13,-0.22){2}{\line(0,-1){0.22}} \multiput(83.85,67.72)(0.13,-0.22){2}{\line(0,-1){0.22}} \multiput(84.1,67.28)(0.12,-0.22){2}{\line(0,-1){0.22}} \multiput(84.34,66.85)(0.12,-0.22){2}{\line(0,-1){0.22}} \multiput(84.58,66.41)(0.12,-0.22){2}{\line(0,-1){0.22}} \multiput(84.82,65.96)(0.12,-0.22){2}{\line(0,-1){0.22}} \multiput(85.05,65.52)(0.11,-0.22){2}{\line(0,-1){0.22}} \multiput(85.28,65.07)(0.11,-0.22){2}{\line(0,-1){0.22}} \multiput(85.5,64.62)(0.11,-0.23){2}{\line(0,-1){0.23}} \multiput(85.71,64.17)(0.11,-0.23){2}{\line(0,-1){0.23}} \multiput(85.92,63.71)(0.1,-0.23){2}{\line(0,-1){0.23}} \multiput(86.13,63.26)(0.1,-0.23){2}{\line(0,-1){0.23}} \multiput(86.33,62.8)(0.1,-0.23){2}{\line(0,-1){0.23}} \multiput(86.53,62.34)(0.1,-0.23){2}{\line(0,-1){0.23}} \multiput(86.72,61.87)(0.09,-0.23){2}{\line(0,-1){0.23}} \multiput(86.9,61.41)(0.09,-0.23){2}{\line(0,-1){0.23}} \multiput(87.08,60.94)(0.17,-0.47){1}{\line(0,-1){0.47}} \multiput(87.26,60.47)(0.17,-0.47){1}{\line(0,-1){0.47}} \multiput(87.43,60)(0.16,-0.47){1}{\line(0,-1){0.47}} \multiput(87.59,59.52)(0.16,-0.48){1}{\line(0,-1){0.48}} \multiput(87.75,59.05)(0.15,-0.48){1}{\line(0,-1){0.48}} \multiput(87.9,58.57)(0.15,-0.48){1}{\line(0,-1){0.48}} \multiput(88.05,58.09)(0.14,-0.48){1}{\line(0,-1){0.48}} \multiput(88.2,57.61)(0.14,-0.48){1}{\line(0,-1){0.48}} \multiput(88.33,57.13)(0.13,-0.48){1}{\line(0,-1){0.48}} \multiput(88.47,56.65)(0.13,-0.48){1}{\line(0,-1){0.48}} \multiput(88.59,56.16)(0.12,-0.49){1}{\line(0,-1){0.49}} 
\multiput(88.72,55.68)(0.12,-0.49){1}{\line(0,-1){0.49}} \multiput(88.83,55.19)(0.11,-0.49){1}{\line(0,-1){0.49}} \multiput(88.94,54.7)(0.11,-0.49){1}{\line(0,-1){0.49}} \multiput(89.05,54.21)(0.1,-0.49){1}{\line(0,-1){0.49}} \multiput(89.15,53.72)(0.09,-0.49){1}{\line(0,-1){0.49}} \multiput(89.24,53.23)(0.09,-0.49){1}{\line(0,-1){0.49}} \multiput(89.33,52.73)(0.08,-0.49){1}{\line(0,-1){0.49}} \multiput(89.41,52.24)(0.08,-0.5){1}{\line(0,-1){0.5}} \multiput(89.49,51.74)(0.07,-0.5){1}{\line(0,-1){0.5}} \multiput(89.56,51.25)(0.07,-0.5){1}{\line(0,-1){0.5}} \multiput(89.63,50.75)(0.06,-0.5){1}{\line(0,-1){0.5}} \multiput(89.69,50.25)(0.06,-0.5){1}{\line(0,-1){0.5}} \multiput(89.75,49.75)(0.05,-0.5){1}{\line(0,-1){0.5}} \multiput(89.8,49.25)(0.04,-0.5){1}{\line(0,-1){0.5}} \multiput(89.84,48.76)(0.04,-0.5){1}{\line(0,-1){0.5}} \multiput(89.88,48.26)(0.03,-0.5){1}{\line(0,-1){0.5}} \multiput(89.92,47.76)(0.03,-0.5){1}{\line(0,-1){0.5}} \multiput(89.94,47.25)(0.02,-0.5){1}{\line(0,-1){0.5}} \multiput(89.97,46.75)(0.02,-0.5){1}{\line(0,-1){0.5}} \multiput(89.98,46.25)(0.01,-0.5){1}{\line(0,-1){0.5}} \multiput(89.99,45.75)(0.01,-0.5){1}{\line(0,-1){0.5}} \put(45,0){\circle*{4}} \put(25,85){\circle*{5}} \put(65,85){\circle*{5}} \put(16,93){\makebox(0,0)[cc]{$m_1=:M$}} \put(74,93){\makebox(0,0)[cc]{$m_2=:M$}} \put(27,-6){\makebox(0,0)[cc]{$m_3=:m$}} \put(45,85){\line(1,0){20}} \linethickness{0.3mm} \put(65,45){\line(0,1){40}} \linethickness{0.3mm} \put(55,81){\makebox(0,0)[cc]{$x$}} \put(69,63){\makebox(0,0)[cc]{$y$}} \linethickness{0.3mm} \put(45,-5){\line(0,1){100}} \linethickness{0.3mm} \put(-5,45){\line(1,0){100}} \end{picture} \caption{The initial positions of $m_1, m_2$, and $m_3$ on the geodesic $z=0$.}\label{cir} \end{figure} \begin{proof} Let us start with some initial conditions we will refine on the way. During the refinement process, we will also choose suitable masses. 
Consider \begin{align*} x_1(0)&=-x(0),& y_1(0)&=y(0),& z_1(0)&=0,\\ x_2(0)&=x(0),& y_2(0)&=y(0),& z_2(0)&=0,\\ x_3(0)&=0,& y_3(0)&=-1,& z_3(0)&=0, \end{align*} as well as zero initial velocities, where $0<x(t),y(t)<1$ are functions with $x(t)^2+y(t)^2=1$. Since all $z$ coordinates are zero, only the equations of coordinates $x$ and $y$ play a role in the motion. The symmetry of these initial conditions implies that $m_3$ remains fixed for all time (in fact the equations corresponding to $\ddot{x}_3$ and $\ddot{y}_3$ reduce to identities), that the angular momentum is zero, and that it is enough to see what happens for $m_2$, because $m_1$ behaves symmetrically with respect to the $y$ axis. Thus, substituting the above initial conditions into the equations of motion, we obtain \begin{equation} \ddot{x}(0)=-{y(0)\over x^2(0)}\bigg({M\over 4y^2(0)}-m\bigg)\ \ \ {\rm and}\ \ \ \ddot{y}(0)={1\over x(0)}\bigg({M\over 4y^2(0)}-m\bigg).\label{incond} \end{equation} These equations show that several situations occur, depending on the choice of masses and initial positions. Here are two significant possibilities. 1. For $M\ge4m$, it follows that $\ddot x(0)<0$ and $\ddot y(0)>0$ for any choices of initial positions with $0<x(0),y(0)<1$. 2. For $M<4m$, there are initial positions for which: \hskip0.6cm (a) $\ddot x(0)<0$ and $\ddot y(0)>0$, \hskip0.6cm (b) $\ddot x(0)>0$ and $\ddot y(0)<0$, \hskip0.6cm (c) $\ddot x(0)=\ddot y(0)=0$. In case 2(c), the solutions are fixed points of the equations of motion, a situation achieved, for instance, when $M=2m$ and $x(0)=y(0)=\sqrt{2}/2$. The cases of interest for us, however, are 1 and 2(b). In the former, $m_2$ begins to move from rest towards a collision with $m_1$ at $(0,1)$, but whether this collision takes place also depends on velocities, which affect the equations of motion. In the latter case, $m_2$ moves away from the same collision, and we need to see again how the velocities alter this initial tendency. 
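The sign analysis above is easy to sanity-check numerically. The sketch below is our own illustration, not part of the original argument; the helper names (`accel0`, `pos`) are made up. It evaluates the initial accelerations \eqref{incond} on the constraint circle $x^2(0)+y^2(0)=1$ and confirms cases 1, 2(b), and 2(c):

```python
import math

def accel0(M, m, x0, y0):
    """Initial accelerations of m_2 from eq. (incond):
    xdd = -(y0/x0^2)*(M/(4 y0^2) - m),  ydd = (1/x0)*(M/(4 y0^2) - m)."""
    f = M/(4*y0**2) - m
    return -(y0/x0**2)*f, (1/x0)*f

def pos(y0):
    # x0 determined by the constraint x0^2 + y0^2 = 1
    return math.sqrt(1 - y0**2)

m = 1.0

# Case 1: M >= 4m gives xdd < 0, ydd > 0 for every 0 < y0 < 1.
for y0 in (0.1, 0.5, 0.9):
    xdd, ydd = accel0(4*m, m, pos(y0), y0)
    assert xdd < 0 and ydd > 0

# Case 2(b): M < 4m with y0^2 > M/(4m) reverses both signs.
xdd, ydd = accel0(2*m, m, pos(0.9), 0.9)
assert xdd > 0 and ydd < 0

# Case 2(c): M = 2m, x0 = y0 = sqrt(2)/2 is a fixed point.
xdd, ydd = accel0(2*m, m, math.sqrt(2)/2, math.sqrt(2)/2)
assert abs(xdd) < 1e-12 and abs(ydd) < 1e-12
```

The check for case 1 reflects the inequality $M/(4y^2)-m\ge m(1/y^2-1)>0$, which holds for all $0<y<1$ when $M\ge 4m$.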
So let us now write the equations of motion for $m_2$ for arbitrary masses $M$ and $m$. The computations lead us to the system \begin{equation} \begin{cases} \ddot x=-{M\over 4x^2y}+{my\over x^2}-(\dot{x}^2+\dot{y}^2)x\cr \ddot y={M\over 4xy^2}-{m\over x}-(\dot{x}^2+\dot{y}^2)y\cr \end{cases}\label{initcond} \end{equation} and the energy integral $$ \dot{x}^2+\dot{y}^2={h\over M}-{2my\over x}+{M(2y^2-1)\over 2xy}. $$ Substituting this expression of $\dot{x}^2+\dot{y}^2$ into equations (\ref{initcond}), we obtain \begin{equation} \begin{cases} \ddot x={4(M-2m)x^4-2(M-2m)x^2-M+4m\over 4x^2y}-{h\over M}x\cr \ddot y={M+2(M-2m)y^2-4(M-2m)y^4\over 4xy^2}-{h\over M}y.\cr \end{cases}\label{twoeq} \end{equation} We will further focus on the first class of orbits announced in this theorem. (i) To prove the existence of solutions with collision-antipodal singularities, let us further examine the case $M=8m$, which brings system \eqref{twoeq} to the form \begin{equation} \begin{cases} \ddot x={6mx^2\over y}-{3m\over y}-{m\over x^2y}-{h\over 8m}x\cr \ddot y={2m\over xy^2}+{3m\over x}-{6my^2\over x}-{h\over 8m}y,\cr \end{cases}\label{coll} \end{equation} with the energy integral \begin{equation} \dot{x}^2+\dot{y}^2+{4mx\over y}-{2my\over x}={h\over 8m}.\label{enecoll} \end{equation} Then, as $x\to 0$ and $y\to 1$, both $\ddot x$ and $\ddot y$ tend to $-\infty$, so they are ultimately negative. This fact corresponds to case (c) of Lemma \ref{singlemma}. But a simple computation shows that $\ddot y/\ddot x$ tends to zero as $x\to 0$ and $y\to 1$. Since $y/x>0$, it follows that if $(x(0),y(0))$ is chosen close enough to $(0,1)$, then $\ddot y(0)/\ddot x(0)< y(0)/x(0)$, so according to the conclusion of Lemma \ref{singlemma}(c) the collision-antipodal configuration is reached. As the forces and the potential are infinite at this point, it follows from the energy relation \eqref{enecoll} that the velocities are also infinite.
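The limiting behavior of system \eqref{coll} can be verified numerically. The sketch below is our own illustration (the name `accel_8m` and the parameter values $m=1$, $h=1$ are assumptions for the check); it confirms along $x^2+y^2=1$ that both accelerations become negative near $(0,1)$ and that the ratio $\ddot y/\ddot x$ shrinks toward zero:

```python
import math

def accel_8m(x, y, m=1.0, h=1.0):
    # system (coll), the M = 8m specialization of (twoeq)
    xdd = 6*m*x**2/y - 3*m/y - m/(x**2*y) - (h/(8*m))*x
    ydd = 2*m/(x*y**2) + 3*m/x - 6*m*y**2/x - (h/(8*m))*y
    return xdd, ydd

prev = None
for x in (1e-2, 1e-3, 1e-4):
    y = math.sqrt(1 - x**2)        # constraint x^2 + y^2 = 1
    xdd, ydd = accel_8m(x, y)
    assert xdd < 0 and ydd < 0     # both ultimately negative
    r = ydd/xdd
    if prev is not None:
        assert abs(r) < abs(prev)  # ydd/xdd decreases toward 0
    prev = r
assert abs(prev) < 1e-3
```

The dominant terms are $\ddot x\sim -m/(x^2y)$ and $\ddot y\sim -m/x$, so the ratio behaves like $x$, matching the computation above.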
Consequently the motion cannot be analytically extended beyond the collision-antipodal configuration, which thus proves to be a singularity. (ii) To show the existence of solutions repelled from a collision-antipodal singularity of the equations of motion in positive time, let us take $M=2m$. Then equations \eqref{twoeq} have the form \begin{equation} \begin{cases} \ddot x={m\over 2x^2y}-{h\over 2m}x\cr \ddot y={m\over 2xy^2}-{h\over 2m}y,\cr \end{cases}\label{repel} \end{equation} with the integral of energy \begin{equation} \dot x^2+\dot y^2+{m\over xy}={h\over 2m},\label{en} \end{equation} which implies that $h>0$. Obviously, as $x\to 0$ and $y\to 1$, the forces and the kinetic energy become infinite, so the collision-antipodal configuration would be a singularity if it were reached. But as we will further see, this cannot happen for this choice of masses. Indeed, as we saw in case 2(c) above, the initial position $x(0)=y(0)=\sqrt{2}/2$ corresponds to a fixed point of the equations of motion for zero initial velocities. Therefore we must seek the desired solution for initial conditions with $0<x(0)<\sqrt{2}/2$ and the corresponding choice of $y(0)>0$. Let us pick any such initial positions, as close to the collision-antipodal singularity as we want, and zero initial velocities. As $x\to 0$, however, equations \eqref{repel} show that both $\ddot x$ and $\ddot y$ become positive and unbounded. But according to case (d) of Lemma \ref{singlemma}, such an outcome is impossible, so the motion cannot come arbitrarily close to the corresponding collision-antipodal singularity, which repels any solution with $M=2m$ and initial conditions chosen as we previously described.
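The repelling behavior in case (ii) admits the same kind of numerical check. In the sketch below (an illustration with assumed names, $m=1$, initial rest, and $x(0)=0.3<\sqrt{2}/2$), the energy constant follows from \eqref{en}, and both accelerations of \eqref{repel} are positive near $x=0$:

```python
import math

m = 1.0
x0 = 0.3
y0 = math.sqrt(1 - x0**2)
h = 2*m**2/(x0*y0)          # energy (en) with zero initial velocities
assert h > 0

def accel_2m(x, y):
    # system (repel), M = 2m
    return (m/(2*x**2*y) - (h/(2*m))*x,
            m/(2*x*y**2) - (h/(2*m))*y)

# Near the collision-antipodal configuration (x -> 0, y -> 1) both
# accelerations are positive, pushing m_2 away from the singularity.
for x in (1e-2, 1e-3, 1e-4):
    y = math.sqrt(1 - x**2)
    xdd, ydd = accel_2m(x, y)
    assert xdd > 0 and ydd > 0
```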
(iii) To prove the existence of solutions that have no singularity at a collision-antipodal configuration, let us further examine the case $M=4m$, which brings system \eqref{twoeq} to the form \begin{equation} \begin{cases} \ddot{x}={m(2x^2-1)\over y}-{h\over 4m}x\cr \ddot{y}={mx(2y^2+1)\over y^2}-{h\over 4m}y.\cr \end{cases} \label{yeq} \end{equation} For this choice of masses, the energy integral becomes \begin{equation} \dot{x}^2+\dot{y}^2+{2mx\over y}={h\over 4m}.\label{ene} \end{equation} We can compute the value of $h$ from the initial conditions. Thus, for initial positions $x(0), y(0)$ and initial velocities $\dot x(0)=\dot y(0)=0$, the energy constant is $h=8m^2x(0)/y(0)>0$. As $x\to 0$ and $y\to 1$, equations (\ref{yeq}) imply that $\ddot x(t)\to -m<0$ and $\ddot y(t)\to -h/4m<0$, which means that the forces are finite at the collision-antipodal configuration. We are thus in case (c) of Lemma \ref{singlemma}, so to determine the direction of motion for $m_2$ when it comes close to $(0,1)$, we need to take into account the ratio ${\ddot y/ \ddot x}$, which tends to $h/4m^2$ as $x\to 0$. Since $h=8m^2x(0)/y(0)$, $\lim_{x\to 0}({\ddot y/ \ddot x})=2x(0)/y(0)$. Then $2x(0)/y(0)<y(0)/x(0)$ for any $x(0)$ and $y(0)$ with $0<x(0)<1/\sqrt{3}$ and the corresponding choice of $y(0)>0$ given by the constraint $x^2(0)+y^2(0)=1$. But the inequality $2x(0)/y(0)<y(0)/x(0)$ is equivalent to the condition $\ddot{y}(t_0)/\ddot{x}(t_0)<y(t_0)/x(t_0)$ in Lemma \ref{singlemma}(c), according to which the force pulls $m_2$ toward $(0,1)$. Therefore the velocity and the force acting on $m_2$ keep this body on the same path until the collision-antipodal configuration occurs. It is also clear from equation \eqref{ene} that the speed is nonzero and finite at collision. Since the distance between the initial position and $(0,1)$ is bounded, $m_2$ collides with $m_1$ in finite time.
Therefore the choice of masses with $M=4m$, initial positions $x(0),y(0)$ with $0<x(0)<1/\sqrt{3}$ and the corresponding value of $y(0)$, and initial velocities $\dot{x}(0)=\dot{y}(0)=0$, leads to a solution that remains analytic at the collision-antipodal configuration, so the motion naturally extends beyond this point. \end{proof} \section{ Relative equilibria in ${\bf S}^2$} In this section we will prove a few results related to fixed points and elliptic relative equilibria in ${\bf S}^2$. Since, by Euler's theorem (see Appendix), every element of the group $SO(3)$ can be written, in an orthonormal basis, as a rotation about the $z$ axis, we can define elliptic relative equilibria as follows. \begin{definition} An elliptic relative equilibrium in ${\bf S}^2$ is a solution of the form ${\bf q}_i=(x_i,y_i,z_i)$, $i=1,\dots, n$, of equations (\ref{coordS}) with $x_i=r_i\cos(\omega t+\alpha_i), y_i=r_i\sin(\omega t+\alpha_i), z_i={\rm constant},$ where $\omega, \alpha_i,$ and $r_i$, with $0\le r_i=(1-z_i^2)^{1/2}\le 1,\ i=1,\dots,n,$ are constants.\label{reS} \end{definition} Notice that although the equations of motion don't have an integral of the center of mass, a ``weak'' property of this kind occurs for elliptic relative equilibria. Indeed, it is easy to see that if all the bodies are at all times on one side of a plane containing the rotation axis, then the integrals of the angular momentum are violated. This happens because under such circumstances the vector representing the total angular momentum cannot be zero or parallel to the $z$ axis. \subsection{Fixed points} The simplest solutions of the equations of motion are fixed points. They can be seen as trivial relative equilibria that correspond to $\omega=0$. In terms of the equations of motion, we can define them as follows. 
\begin{definition} A solution of system (\ref{HamS}) is called a fixed point if $$\nabla_{{\bf q}_i}U_1({\bf q})(t)={\bf p}_i(t)={\bf 0}\ \ {\rm for} \ {\rm all}\ \ t\in{\bf R}\ \ {\rm and}\ \ i=1,\dots,n.$$ \end{definition} Let us start with finding the simplest fixed points, namely those that occur when all the masses are equal. \begin{theorem} Consider the $n$-body problem in ${\bf S}^2$ with $n$ odd. If the masses are all equal, the regular $n$-gon lying on any geodesic is a fixed point of the equations of motion. For $n=4$, the regular tetrahedron is a fixed point too. \label{fix} \end{theorem} \begin{proof} Assume that $m_1=m_2=\dots =m_n$, and consider an $n$-gon with an odd number of sides inscribed in a geodesic of ${\bf S}^2$ with a body, initially at rest, at each vertex. In general, two forces act on the body of mass $m_i$: the force $\nabla_{{\bf q}_i}U_1({\bf q})$, which is due to the interaction with the other bodies, and the force $-m_i(\dot{\bf q}_i\cdot\dot{\bf q}_i){\bf q}_i$, which is due to the constraints. The latter force is zero at $t=0$ because the bodies are initially at rest. Since ${\bf q}_i\cdot\nabla_{{\bf q}_i}U_1({\bf q})=0$, it follows that $\nabla_{{\bf q}_i}U_1({\bf q})$ is orthogonal to ${\bf q}_i$, and thus tangent to ${\bf S}^2$. Then the symmetry of the $n$-gon implies that, at the initial moment $t=0$, $\nabla_{{\bf q}_i}U_1({\bf q})$ is the sum of pairs of forces, each pair consisting of opposite forces that cancel each other. This means that $\nabla_{{\bf q}_i}U_1({\bf q})={\bf 0}$. Therefore, from the equations of motion and the fact that the bodies are initially at rest, it follows that $$ \ddot{\bf q}_i(0)=-(\dot{\bf q}_i(0)\cdot\dot{\bf q}_i(0)){\bf q}_i(0)={\bf 0}, \ \ i=1,\dots,n. $$ But then no force acts on the body of mass $m_i$ at time $t=0$, consequently the body doesn't move. So $\dot{\bf q}_i(t)={\bf 0}$ for all $t\in{\bf R}$. 
Then $\ddot{\bf q}_i(t)={\bf 0}$ for all $t\in{\bf R}$, therefore $\nabla_{{\bf q}_i}U_1({\bf q})(t)={\bf 0}$ for all $t\in{\bf R}$, so the $n$-gon is a fixed point of equations (\ref{coordS}). Notice that if $n$ is even, the $n$-gon has $n/2$ pairs of antipodal vertices. Since antipodal bodies introduce singularities into the equations of motion, only the $n$-gons with an odd number of vertices are fixed points of equations (\ref{coordS}). The proof that the regular tetrahedron is a fixed point can be done simply by verifying that 4 bodies of equal masses with initial coordinates given by ${\bf q}_1=(0,0,1), {\bf q}_2=(0,2\sqrt{2}/3, -1/3), {\bf q}_3=(-2/\sqrt{6},-\sqrt{2}/3,-1/3), {\bf q}_4= (2/\sqrt{6},-\sqrt{2}/3,-1/3)$, satisfy system (\ref{coordS}), or by noticing that the forces acting on each body cancel each other because of the symmetry involved. \end{proof} \begin{remark} If equal masses are placed at the vertices of the other four regular polyhedra: octahedron (6 bodies), cube (8 bodies), dodecahedron (20 bodies), and icosahedron (12 bodies), they do not form fixed points because antipodal singularities occur in each case. \end{remark} \begin{remark} In the proof of Theorem \ref{singularity}, we discovered that if one body has mass $m$ and the other two have mass $M=2m$, then the isosceles triangle with the vertices at $(0,-1,0)$, $(-\sqrt{2}/2,\sqrt{2}/2,0)$, and $(\sqrt{2}/2,\sqrt{2}/2,0)$ is a fixed point. Therefore one might expect that fixed points can be found for any given masses. But, as formula \eqref{incond} shows, this is not the case. Indeed, if one body has mass $m$ and the other two have masses $M\ge4m$, there is no configuration (which must be isosceles due to symmetry) that corresponds to a fixed point since $\ddot x$ and $\ddot y$ are never zero.
This observation proves that in the 3-body problem, there are choices of masses for which the equations of motion lack fixed points.\label{rem} \end{remark} The following statement is an obvious consequence of the proof given for Theorem \ref{fix}. \begin{cor} Consider an odd number of equal bodies, initially at the vertices of a regular $n$-gon inscribed in a great circle of ${\bf S}^2$, and assume that the solution generated from this initial position maintains the same relative configuration for all times. Then, for all $t\in{\bf R}$, this solution satisfies the conditions $\nabla_{{\bf q}_i}U_1({\bf q}(t))={\bf 0},\ i=1,\dots,n$.\label{cor1} \end{cor} It is interesting to see that if the bodies are within a hemisphere (meaning half a sphere and its geodesic boundary), fixed points do not occur if at least one body is not on the boundary. Let us formally state and prove this result. \begin{theorem} Consider an initial nonsingular configuration of the $n$-body problem in ${\bf S}^2$ for which all bodies lie within a hemisphere, meant to include its geodesic boundary, with at least one body not on this geodesic. Then this configuration is not a fixed point.\label{nofixS} \end{theorem} \begin{proof} Without loss of generality we can consider the initial configuration of the bodies $m_1,\dots, m_n$ in the hemisphere $z\ge 0$, whose boundary is the geodesic $z=0$. Then at least one body has the smallest $z$ coordinate, and let $m_1$ be one of these bodies. Also, at least one body has its $z$ coordinate positive, and let $m_2$ be one of them. Since all initial velocities are zero, only the mutual forces between bodies act on $m_1$. Then, according to the equations of motion (\ref{eqcoordS}), $m_1\ddot{z}_1(0)={\partial\over\partial z_1}U_1({\bf q}(0))$. 
But as no body has its $z$ coordinate smaller than $z_1$, the terms contained in the expression of ${\partial\over\partial z_1}U_1({\bf q}(0))$ that involve interactions between $m_1$ and $m_i$ are all larger than or equal to zero for $i=3,4,\dots,n$, while the term involving $m_2$ is strictly positive. Therefore ${\partial\over\partial z_1}U_1({\bf q}(0))>0$, so $m_1$ moves up the hemisphere. Consequently the initial configuration is not a fixed point. \end{proof} \subsection{Polygonal solutions} We will further show that fixed points lying on geodesics of spheres can generate relative equilibria. \begin{theorem} Consider a fixed point given by the masses $m_1, m_2,\dots, m_n$ that lie on a great circle of ${\bf S}^2$. Then for every nonzero angular velocity, this configuration generates a relative equilibrium along the great circle. \label{fixrel} \end{theorem} \begin{proof} Without loss of generality, we assume that the great circle is the equator $z=0$ and that for some given masses $m_1, m_2,\dots, m_n$ there exist $\alpha_1,\alpha_2,\dots,\alpha_n$ such that the configuration ${\bf q}=({\bf q}_1,\dots, {\bf q}_n)$ given by ${\bf q}_i=(x_i,y_i, 0), i=1,\dots,n$, with \begin{equation} x_i=\cos(\omega t +\alpha_i), y_i=\sin(\omega t +\alpha_i), \ i=1,\dots, n, \label{rot} \end{equation} is a fixed point for $\omega =0$. This configuration can also be interpreted as being ${\bf q}(0)$, i.e.~the solution ${\bf q}$ at $t=0$ for any $\omega\ne 0$. So we can conclude that $\nabla_{{\bf q}_i}U_1({\bf q}(0))={\bf 0},\ i=1,\dots,n$.
But then, for $t=0$, the equations of motion \eqref{eqcoordS} reduce to \begin{equation} \begin{cases} \ddot{x}_i=-(\dot{x}^2_i+\dot{y}^2_i)x_i\cr \ddot{y}_i=-(\dot{x}^2_i+\dot{y}^2_i)y_i, \label{tralala} \end{cases} \end{equation} $i=1,\dots,n.$ Notice, however, that $\dot{x}_i=-\omega\sin(\omega t+\alpha_i), \ddot{x}_i=-\omega^2\cos(\omega t+\alpha_i), \dot{y}_i=\omega\cos(\omega t+\alpha_i)$, and $\ddot{y}_i=-\omega^2\sin(\omega t+\alpha_i),$ therefore $\dot{x}_i^2+ \dot{y}_i^2=\omega^2$. Using these computations, it is easy to see that $\bf q$ given by \eqref{rot} is a solution of \eqref{tralala} for every $t$, so the forces of constraint alone can sustain this motion, at $t=0$ as well as later. Since $\nabla_{{\bf q}_i}U_1({\bf q}(0))={\bf 0},\ i=1,\dots,n$, it follows that the gravitational forces are in equilibrium at the initial moment, so they exert no net action on the bodies. Consequently, the rotation imposed by $\omega\ne 0$ makes the system move like a rigid body, so the gravitational forces remain in equilibrium, and hence $\nabla_{{\bf q}_i}U_1({\bf q}(t))={\bf 0},\ i=1,\dots,n$, for all $t$. Therefore $\bf q$ given by \eqref{rot} satisfies equations \eqref{eqcoordS}. Then, by Definition \ref{reS}, $\bf q$ is an elliptic relative equilibrium. \end{proof} The following result shows that relative equilibria generated by fixed points obtained from regular $n$-gons on a great circle of ${\bf S}^2$ can occur only when the bodies rotate along the great circle. \begin{theorem} Consider an odd number of equal bodies, initially at the vertices of a regular $n$-gon inscribed in a great circle of ${\bf S}^2$. Then the only elliptic relative equilibria that can be generated from this configuration are the ones that rotate in the plane of the original great circle.\label{rengon} \end{theorem} \begin{proof} Without loss of generality, we can prove this result for the equator $z=0$.
Consider therefore an elliptic relative equilibrium solution of the form \begin{equation} x_i=r_i\cos(\omega t+\alpha_i), \ y_i=r_i\sin(\omega t+\alpha_i), \ z_i=\pm (1-r_i^2)^{1/2},\label{check} \end{equation} $ i=1,\dots,n,$ with $+$ taken for $z_i>0$ and $-$ for $z_i<0$. The only condition we impose on this solution is that $r_i$ and $\alpha_i,\ i=1,\dots,n$, are chosen such that the configuration is a regular $n$-gon inscribed in a moving great circle of ${\bf S}^2$ at all times. Therefore the plane of the $n$-gon may make any angle with the $z$-axis. This solution has the derivatives $$\dot{x}_i=-r_i\omega\sin(\omega t+\alpha_i), \ \dot{y}_i=r_i\omega\cos(\omega t+\alpha_i), \ \dot{z}_i=0,\ i=1,\dots,n,$$ $$\ddot{x}_i=-r_i\omega^2\cos(\omega t+\alpha_i), \ \ddot{y}_i=-r_i\omega^2\sin(\omega t+\alpha_i), \ \ddot{z}_i=0,\ i=1,\dots,n.$$ Then $$\dot{x}_i^2+\dot{y}_i^2+\dot{z}_i^2=r_i^2\omega^2, \ i=1,\dots,n.$$ Since, by Corollary \ref{cor1}, any $n$-gon solution with $n$ odd satisfies the conditions $$\nabla_{{\bf q}_i}U_1({\bf q})={\bf 0},\ i=1,\dots,n,$$ system (\ref{coordS}) reduces to $$ \begin{cases} \ddot{x}_i=-(\dot{x}_i^2+\dot{y}_i^2+\dot{z}_i^2)x_i,\cr \ddot{y}_i=-(\dot{x}_i^2+\dot{y}_i^2+\dot{z}_i^2)y_i,\cr \ddot{z}_i=-(\dot{x}_i^2+\dot{y}_i^2+\dot{z}_i^2)z_i,\ i=1,\dots,n.\cr \end{cases} $$ Then the substitution of (\ref{check}) into the above equations leads to: $$ \begin{cases} r_i(1-r_i^2)\omega^2\cos(\omega t+\alpha_i)=0,\cr r_i(1-r_i^2)\omega^2\sin(\omega t+\alpha_i)=0,\ i=1,\dots,n.\cr \end{cases} $$ But assuming $\omega\ne 0$, this system is nontrivially satisfied if and only if $r_i=1$, a condition equivalent to $z_i=0,\ i=1,\dots,n.$ Therefore the bodies must rotate along the equator $z=0$. \end{proof} Theorem \ref{rengon} raises the question whether elliptic relative equilibria given by regular polygons can rotate on other curves than geodesics. The answer is given by the following result.
\begin{theorem} Consider the $n$-body problem with equal masses in ${\bf S}^2$. Then, for any $n$ odd, $m>0$ and $z\in(-1,1)$, there are a positive and a negative $\omega$ that produce elliptic relative equilibria in which the bodies are at the vertices of an $n$-gon rotating in the plane $z=$ constant. If $n$ is even, this property is still true if we exclude the case $z=0$. \label{ngonS} \end{theorem} \begin{proof} There are two cases to discuss: (i) $n$ odd and (ii) $n$ even. (i) To simplify the presentation, we further denote the bodies by $m_i, i=-s, -s+1,\dots, -1, 0, 1,\dots, s-1,s$, where $s$ is a positive integer, and assume that they all have mass $m$. Without loss of generality we can further substitute into equations (\ref{coordS}) a solution of the form (\ref{check}) with $i$ as above, $\alpha_{-s}=-{4s\pi\over2s+1}, \dots,\alpha_{-1}=-{2\pi\over2s+1},\alpha_0=0$, $\alpha_1={2\pi\over2s+1},\dots, \alpha_s={4s\pi\over2s+1}$, $r:=r_i, z:=z_i$, and consider only the equations for $i=0$. The study of this case suffices due to the symmetry involved, which yields the same conclusions for any value of $i$. The equation corresponding to the $z_0$ coordinate takes the form $$ \sum_{j=-s, j\ne 0}^s{m(z-k_{0j}z)\over(1-k_{0j}^2)^{3/2}}-r^2\omega^2z=0, $$ where $k_{0j}=x_0x_j+y_0y_j+z_0z_j=\cos\alpha_j-z^2\cos\alpha_j+z^2$. Using the fact that $r^2+z^2=1$, $\cos\alpha_j=\cos\alpha_{-j}$, and $k_{0j}=k_{0(-j)}$, this equation becomes \begin{equation} \sum_{j=1}^s{2(1-\cos\alpha_j)\over(1-k_{0j}^2)^{3/2}}={\omega^2\over m}. \label{z0} \end{equation} Now we need to check whether the equations corresponding to $x_0$ and $y_0$ lead to the same equation. In fact, checking for $x_0$, and ignoring $y_0$, suffices due to the same symmetry reasons invoked earlier and the duality of the trigonometric functions $\sin$ and $\cos$.
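Equation \eqref{z0} and the expression of $k_{0j}$ can be cross-checked numerically. The sketch below (the sample values $s=2$, that is $n=5$, $z=0.4$, $m=1$ are arbitrary, and the helper names are ours) compares $k_{0j}$ with the inner products of the actual positions at $t=0$, and the left-hand side of \eqref{z0} with the value of $\omega^2/m$ recovered directly from the force sum in the $z_0$ equation:

```python
import math

# Arbitrary sample values: n = 2s + 1 = 5 equal bodies of mass m = 1.
s, z, m = 2, 0.4, 1.0
r = math.sqrt(1.0 - z * z)

def pos(j):
    # body m_j at angle alpha_j on the circle of height z (t = 0)
    a = 2.0 * math.pi * j / (2 * s + 1)
    return (r * math.cos(a), r * math.sin(a), z)

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

# k_0j from the formula versus the inner product of the positions
k_err = 0.0
for j in range(-s, s + 1):
    if j == 0:
        continue
    a = 2.0 * math.pi * j / (2 * s + 1)
    k_formula = math.cos(a) - z * z * math.cos(a) + z * z
    k_err = max(k_err, abs(k_formula - dot(pos(0), pos(j))))

# omega^2 / m recovered from the full force sum in the z_0 equation
fz = sum(m * (z - dot(pos(0), pos(j)) * z)
         / (1.0 - dot(pos(0), pos(j)) ** 2) ** 1.5
         for j in range(-s, s + 1) if j != 0)
w2m_force = fz / (r * r * z * m)

# left-hand side of equation (z0), summed over j = 1, ..., s
lhs = sum(2.0 * (1.0 - math.cos(2.0 * math.pi * j / (2 * s + 1)))
          / (1.0 - (math.cos(2.0 * math.pi * j / (2 * s + 1))
                    * (1.0 - z * z) + z * z) ** 2) ** 1.5
          for j in range(1, s + 1))
```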
The substitution of the above functions into the first equation of (\ref{coordS}) leads us to $$ (r^2-1)\omega^2\cos\omega t=\sum_{j=-s, j\ne 0}^s{m[\cos(\omega t+\alpha_j) -k_{0j}\cos\omega t]\over(1-k_{0j}^2)^{3/2}}. $$ A straightforward computation, which uses the fact that $r^2+z^2=1$, $\sin\alpha_j=-\sin\alpha_{-j}$, $\cos\alpha_j=\cos\alpha_{-j}$, and $k_{0j}=k_{0(-j)}$, yields the same equation (\ref{z0}). Writing the denominator of equation (\ref{z0}) explicitly, we are led to \begin{equation} \sum_{j=1}^s{2\over(1-\cos\alpha_j)^{1/2}(1-z^2)^{3/2}[2-(1-\cos\alpha_j)(1-z^2)]^{3/2}} ={\omega^2\over m}. \end{equation} The left hand side is always positive, so for any $m>0$ and $z\in(-1,1)$ fixed, there are a positive and a negative $\omega$ that satisfy the equation. Therefore the $n$-gon with an odd number of sides is an elliptic relative equilibrium. (ii) To simplify the presentation when $n$ is even, we denote the bodies by $m_i,\ i=-s+1,\dots, -1, 0, 1,\dots, s-1,s$, where $s$ is a positive integer, and assume that they all have mass $m$. Without loss of generality, we can substitute into equations (\ref{coordS}) a solution of the form (\ref{check}) with $i$ as above, $\alpha_{-s+1}={(-s+1)\pi\over s},\dots,\alpha_{-1}=-{\pi\over s}$, $\alpha_0=0$, $\alpha_1={\pi\over s},\dots,\alpha_{s-1}={(s-1)\pi\over s}$, $\alpha_s=\pi$, $r:=r_i$, $z:=z_i$, and consider as in the previous case only the equations for $i=0$.
Then using the fact that $k_{0j}=k_{0(-j)}$, $\cos\alpha_j=\cos\alpha_{-j}$, and $\cos\pi=-1$, a straightforward computation brings the equation corresponding to $z_0$ to the form \begin{equation} \sum_{j=1}^{s-1}{2(1-\cos\alpha_j)\over(1-k_{0j}^2)^{3/2}}+{2\over(1-k_{0s}^2)^{3/2}}= {\omega^2\over m}.\label{form} \end{equation} Using additionally the relations $\sin\alpha_j=-\sin\alpha_{-j}$ and $\sin\pi=0$, we obtain for the equation corresponding to $x_0$ the same form (\ref{form}), which---for $k_{0j}$ and $k_{0s}$ written explicitly---becomes \begin{multline*} \sum_{j=1}^{s-1}{2\over(1-\cos\alpha_j)^{1/2}(1-z^2)^{3/2}[2-(1-\cos\alpha_j)(1-z^2)]^{3/2}}\\ +{1\over 4z^2|z|(1-z^2)^{3/2}}={\omega^2\over m}. \end{multline*} Since the left hand side of this equation is positive and finite, given any $m>0$ and $z\in(-1,0)\cup(0,1)$, there are a positive and a negative $\omega$ that satisfy it. So except for the case $z=0$, which introduces antipodal singularities, the rotating $n$-gon with an even number of sides is an elliptic relative equilibrium. \end{proof} \subsection{Lagrangian solutions} The case $n=3$ is of particular interest because, in the Euclidean case, the equilateral triangle is an elliptic relative equilibrium for any values of the masses, not only when the masses are equal. But before we check whether this fact holds in ${\bf S}^2$, let us consider the case of three equal masses in more detail. \begin{cor} Consider the 3-body problem with equal masses, $m:=m_1=m_2=m_3$, in ${\bf S}^2$. Then for any $m>0$ and $z\in(-1,1)$, there are a positive and a negative $\omega$ that produce elliptic relative equilibria in which the bodies are at the vertices of an equilateral triangle that rotates in the plane $z=$ constant. Moreover, for every $\omega^2/m$ there are two values of $z$ that lead to relative equilibria if $\omega^2/m\in(8/\sqrt{3},\infty)\cup\{3\}$, three values if $\omega^2/m=8/\sqrt{3}$, and four values if $\omega^2/m\in(3,8/\sqrt{3})$.
\label{equilateralS} \end{cor} \begin{proof} The first part of the statement is a consequence of Theorem \ref{ngonS} for $n=3$. Alternatively, we can substitute into system (\ref{coordS}) a solution of the form (\ref{check}) with $i=1,2,3$, $r:=r_1=r_2=r_3$, $z=\pm(1-r^2)^{1/2}$, $\alpha_1=0, \alpha_2=2\pi/3, \alpha_3=4\pi/3$, and obtain the equation \begin{equation} {8\over\sqrt{3}(1+2z^2-3z^4)^{3/2}}={\omega^2\over m}.\label{eq4} \end{equation} The left hand side is positive for $z\in(-1,1)$ and tends to infinity when $z\to\pm1$ (see Figure \ref{3gon}). So for any $z$ in this interval and $m>0$, there are a positive and a negative $\omega$ for which the above equation is satisfied. Figure \ref{3gon} and a straightforward computation also clarify the second part of the statement. \end{proof} \begin{figure} \centering \includegraphics[width=1.7in]{equilat} \caption{\small The graph of the function $f(z)={8\over\sqrt{3}(1+2z^2-3z^4)^{3/2}}$ for $z\in(-1,1)$.}\label{3gon} \end{figure} \begin{remark} A result similar to Corollary \ref{equilateralS} can be proved for two equal masses that rotate on a non-geodesic circle, when the bodies are situated at opposite ends of a rotating diameter. Then, for $z\in(-1,0)\cup(0,1)$, the analogue of (\ref{eq4}) is the equation $${1\over 4z^2|z|(1-z^2)^{3/2}}={\omega^2\over m}.$$ The case $z=0$ yields no solution because it involves an antipodal singularity. \end{remark} We have now reached the point where we can decide whether the equilateral triangle can be an elliptic relative equilibrium in ${\bf S}^2$ if the masses are not equal. The following result shows that, unlike in the Euclidean case, the answer is negative when the bodies move on the sphere in the same Euclidean plane.
\begin{proposition} In the 3-body problem in ${\bf S}^2$, if the bodies $m_1, m_2, m_3$ are initially at the vertices of an equilateral triangle in the plane $z=$ constant for some $z\in(-1,1)$, then there are initial velocities that lead to an elliptic relative equilibrium in which the triangle rotates in its own plane if and only if $m_1=m_2=m_3$.\label{equil} \end{proposition} \begin{proof} The implication which shows that if $m_1=m_2=m_3$, the rotating equilateral triangle is a relative equilibrium, follows from Corollary \ref{equilateralS}. To prove the other implication, we substitute into equations (\ref{coordS}) a solution of the form (\ref{check}) with $i=1,2,3,\ r:=r_1=r_2=r_3,\ z:=z_1=z_2=z_3=\pm (1-r^2)^{1/2}$, and $\alpha_1=0, \alpha_2=2\pi/3, \alpha_3=4\pi/3$. The computations then lead to the system \begin{equation} \begin{cases} m_1+m_2=\gamma\omega^2\cr m_2+m_3=\gamma\omega^2\cr m_3+m_1=\gamma\omega^2,\cr \end{cases} \end{equation} where $\gamma=\sqrt{3}(1+2z^2-3z^4)^{3/2}/4$. But for any $z=$ constant in the interval $(-1,1)$, the above system has a solution only for $m_1=m_2=m_3=\gamma\omega^2/2$. Therefore the masses must be equal. \end{proof} The next result leads to the conclusion that Lagrangian solutions in ${\bf S}^2$ can take place only in Euclidean planes of ${\bf R}^3$. This property is known to be true in the Euclidean case for all elliptic relative equilibria, \cite{Win}, but Wintner's proof doesn't work in our case because it uses the integral of the center of mass. Most importantly, our result also implies that Lagrangian orbits with non-equal masses cannot exist in ${\bf S}^2$. \begin{theorem} For all Lagrangian solutions in ${\bf S}^2$, the masses $m_1, m_2$ and $m_3$ have to rotate on the same circle, whose plane must be orthogonal to the rotation axis, and therefore $m_1=m_2=m_3$.\label{lagranS} \end{theorem} \begin{proof} Consider a Lagrangian solution in ${\bf S}^2$ with bodies of masses $m_1, m_2$, and $m_3$.
This means that the solution, which is an elliptic relative equilibrium, must have the form \begin{align*} x_1&=r_1\cos\omega t,& y_1&=r_1\sin\omega t,& z_1&=(1-r_1^2)^{1/2},\\ x_2&=r_2\cos(\omega t+a),& y_2&=r_2\sin(\omega t+a),& z_2&=(1-r_2^2)^{1/2},\\ x_3&=r_3\cos(\omega t+b),& y_3&=r_3\sin(\omega t+b),& z_3&=(1-r_3^2)^{1/2}, \end{align*} with $b>a>0$. In other words, we assume that this equilateral triangle forms a constant angle with the rotation axis, $z$, such that each body describes its own circle on ${\bf S}^2$. But for such a solution to exist, it is necessary that the total angular momentum is either zero or is given by a vector parallel to the $z$ axis. Otherwise this vector rotates around the $z$ axis, in violation of the angular-momentum integrals. This means that at least the first two components of the vector $\sum_{i=1}^3m_i{\bf q}_i\times\dot{\bf q}_i$ must be zero. A straightforward computation shows that this constraint leads to the condition $$ m_1r_1z_1\sin\omega t+m_2r_2z_2\sin(\omega t+a)+m_3r_3z_3\sin(\omega t+b)=0, $$ assuming that $\omega\ne 0$. For $t=0$, this equation becomes \begin{equation} m_2r_2z_2\sin a=-m_3r_3z_3\sin b. \label{withz} \end{equation} Using now the fact that $$ \alpha:=x_1x_2+y_1y_2+z_1z_2=x_1x_3+y_1y_3+z_1z_3=x_3x_2+y_3y_2+z_3z_2 $$ is constant because the triangle is equilateral, the equation of motion corresponding to $\ddot{y}_1$ takes the form $$ Kr_1(r_1^2-1)\omega^2\sin\omega t= m_2r_2\sin(\omega t+a)+m_3r_3\sin(\omega t+b), $$ where $K$ is a nonzero constant. For $t=0$, this equation becomes \begin{equation} m_2r_2\sin a=-m_3r_3\sin b. \label{withoutz} \end{equation} Dividing \eqref{withz} by \eqref{withoutz}, we obtain that $z_2=z_3$. Similarly, we can show that $z_1=z_2=z_3$, therefore the motion must take place in the same Euclidean plane on a circle orthogonal to the rotation axis. Proposition \ref{equil} then implies that $m_1=m_2=m_3$.
\end{proof} \subsection{Eulerian solutions} It is now natural to ask whether Eulerian solutions, i.e.~elliptic relative equilibria in which all the bodies lie on the same rotating geodesic, exist, since---as Theorem \ref{rengon} shows---they cannot be generated from regular $n$-gons. The answer in the case $n=3$ of equal masses is given by the following result. \begin{theorem} Consider the 3-body problem in ${\bf S}^2$ with equal masses, $m:=m_1=m_2=m_3$. Fix the body of mass $m_1$ at $(0,0,1)$ and the bodies of masses $m_2$ and $m_3$ at the opposite ends of a diameter on the circle $z=$ constant. Then, for any $m>0$ and $z\in(-0.5,0)\cup(0,1)$, there are a positive and a negative $\omega$ that produce elliptic relative equilibria.\label{regeo3} \end{theorem} \begin{proof} Substituting into the equations of motion (\ref{coordS}) a solution of the form $$x_1=0,\ y_1=0,\ z_1=1,$$ $$x_2=r\cos\omega t,\ y_2=r\sin\omega t, z_2=z,$$ $$x_3=r\cos(\omega t+\pi),\ y_3=r\sin(\omega t+\pi), z_3=z,$$ with $r\ge0$ and $z$ constants satisfying $r^2+z^2=1$, leads either to identities or to the algebraic equation \begin{equation} {4z+|z|^{-1}\over 4z^2(1-z^2)^{3/2}}={\omega^2\over m}.\label{ratio1} \end{equation} The function on the left hand side is negative for $z\in(-1,-0.5)$, $0$ at $z=-0.5$, positive for $z\in(-0.5,0)\cup(0,1)$, and undefined at $z=0$. Therefore, for every $m>0$ and $z\in(-0.5,0)\cup(0,1)$, there are a positive and a negative $\omega$ that lead to a geodesic relative equilibrium. For $z=-0.5$, we recover the equilateral fixed point. The sign of $\omega$ determines the sense of rotation.
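The sign analysis of the left-hand side of \eqref{ratio1} can be verified numerically. The sketch below (sample points and grid are ad hoc choices) checks the claimed sign pattern and also locates the minimum of the left-hand side on $(0,1)$, which a direct computation places at $z=\sqrt{3/8}$ with value $64\sqrt{15}/45$:

```python
import math

def f(z):
    # left-hand side of equation (ratio1)
    return (4.0 * z + 1.0 / abs(z)) / (4.0 * z * z * (1.0 - z * z) ** 1.5)

# sign pattern claimed in the proof
negative = all(f(z) < 0.0 for z in (-0.95, -0.8, -0.6, -0.51))
zero_at = f(-0.5)                    # vanishes exactly at z = -1/2
positive = all(f(z) > 0.0 for z in (-0.49, -0.25, -0.01, 0.2, 0.6, 0.95))

# minimum on (0, 1): attained at z = sqrt(3/8), value 64*sqrt(15)/45
grid = [i / 100000.0 for i in range(1000, 99001)]
z_min = min(grid, key=f)
f_min = f(z_min)
```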
\end{proof} \begin{figure} \centering \includegraphics[width=1.7in]{fig1} \includegraphics[width=1.7in]{fig2} \caption{\small The graph of the function $f(z)={4z+|z|^{-1}\over 4z^2(1-z^2)^{3/2}}$ in the intervals $(-1,0)$ and $(0,1)$, respectively.}\label{regeoS2} \end{figure} \begin{remark} For every $\omega^2/m\in(64\sqrt{15}/45,\infty)$, there are three values of $z$ that satisfy relation (\ref{ratio1}): one in the interval $(-0.5,0)$ and two in the interval $(0,1)$; for $\omega^2/m\in(0,64\sqrt{15}/45)$, there is only one such value, which lies in $(-0.5,0)$ (see Figure \ref{regeoS2}). \end{remark} \begin{remark} If in Theorem \ref{regeo3} we take the masses $m_1=:m$ and $m_2=m_3=:M$, the analogue of equation \eqref{ratio1} is \begin{equation*} {4mz+M|z|^{-1}\over 4z^2(1-z^2)^{3/2}}=\omega^2. \end{equation*} Then solutions exist for any $z\in(-\sqrt{M/m}/2,0)\cup(0,1)$. This means that there are no fixed points for $M\ge 4m$ (a fact that agrees with what we learned from Remark \ref{rem} and the proof of Theorem \ref{singularity}), so relative equilibria exist for such masses for all $z\in(-1,0)\cup(0,1)$. \end{remark} \section{Relative equilibria in ${\bf H}^2$} In this section we will prove a few results about fixed points, as well as elliptic and hyperbolic relative equilibria in ${\bf H}^2$. We also show that parabolic relative equilibria do not exist. Since, by the Principal Axis theorem for the Lorentz group, every Lorentzian rotation (see Appendix) can be written, in some basis, either as an elliptic rotation about the $z$ axis, or as a hyperbolic rotation about the $x$ axis, or as a parabolic rotation about the line $x=0$, $y=z$, we can define three kinds of relative equilibria: the elliptic relative equilibria, the hyperbolic relative equilibria, and the parabolic relative equilibria. This terminology matches the standard terminology of hyperbolic geometry \cite{Henle}. \medskip The elliptic relative equilibria are defined as follows.
\begin{definition} An elliptic relative equilibrium in ${\bf H}^2$ is a solution ${\bf q}_i=(x_i,y_i,z_i)$, $i=1,\dots,n,$ of equations (\ref{coordH}) with $x_i=\rho_i\cos(\omega t+\alpha_i), y_i=\rho_i\sin(\omega t+\alpha_i)$, and $z_i=(\rho_i^2+1)^{1/2}$, where $\omega, \alpha_i,$ and $\rho_i,\ i=1,\dots,n,$ are constants. \label{reH1} \end{definition} Remark that, as in ${\bf S}^2$, a ``weak'' property of the center of mass occurs in ${\bf H}^2$ for elliptic relative equilibria. Indeed, if all the bodies are at all times on one side of a plane containing the rotation axis, then the integrals of the angular momentum are violated because the vector representing the total angular momentum cannot be zero or parallel to the $z$ axis. \medskip Let us now define the hyperbolic relative equilibria. \begin{definition} A hyperbolic relative equilibrium in ${\bf H}^2$ is a solution of equations (\ref{coordH}) of the form ${\bf q}_i=(x_i,y_i,z_i),\ i=1,\dots,n,$ defined for all $t\in{\bf R}$, with \begin{equation} x_i={\rm constant},\ \ y_i=\rho_i\sinh(\omega t+\alpha_i),\ \ {\rm and} \ \ z_i=\rho_i\cosh(\omega t+\alpha_i),\label{hyper} \end{equation} where $\omega, \alpha_i,$ and $\rho_i=(1+x_i^2)^{1/2}\ge1,\ i=1,\dots,n,$ are constants. \label{reH} \end{definition} Finally, the parabolic relative equilibria are defined as follows. \begin{definition} A parabolic relative equilibrium in ${\bf H}^2$ is a solution of equations (\ref{coordH}) of the form ${\bf q}_i=(x_i,y_i,z_i),\ i=1,\dots,n,$ defined for all $t\in{\bf R}$, with \begin{equation} \begin{split} x_i&=a_i-b_it+c_it\\ y_i&=a_it+b_i(1-t^2/2)+c_it^2/2\\ z_i&=a_it-b_it^2/2+c_i(1+t^2/2),\label{parabol} \end{split} \end{equation} where $a_i,b_i$ and $c_i,\ i=1,\dots,n,$ are constants, and $a_i^2+b_i^2-c_i^2=-1$. \label{reH2} \end{definition} \subsection{Fixed Points in $\bf H^2$} The simplest solutions of the equations of motion are the fixed points.
They can be seen as trivial elliptic relative equilibria that correspond to $\omega=0$. In terms of the equations of motion, we can define them as follows. \begin{definition} A solution of system (\ref{HamH}) is called a fixed point if $$\overline\nabla_{{\bf q}_i}U_{-1}({\bf q})(t)={\bf p}_i(t)={\bf 0}\ \ {\rm for} \ {\rm all}\ \ t\in{\bf R}\ \ {\rm and}\ \ i=1,\dots,n.$$ \end{definition} Unlike in ${\bf S}^2$, there are no fixed points in ${\bf H}^2$. Let us formally state and prove this fact. \begin{theorem} In the $n$-body problem with $n\ge 2$ in ${\bf H}^2$ there are no configurations that correspond to fixed points of the equations of motion.\label{nofixH} \end{theorem} \begin{proof} Consider any collisionless configuration of $n$ bodies initially at rest in ${\bf H}^2$. This means that the components of the forces acting on the bodies due to the constraints, which involve the factors $\dot{x}_i^2+\dot{y}_i^2-\dot{z}_i^2, \ i=1,\dots,n$, are zero at $t=0$. At least one body, $m_i$, has the largest $z$ coordinate. Notice that the interaction between $m_i$ and any other body takes place along geodesics, which are concave-up hyperbolas on the ($z>0$)-sheet of the hyperboloid modeling ${\bf H}^2$. Then the body $m_j, j\ne i$, exerts an attraction on $m_i$ down the geodesic hyperbola that connects these bodies, so the $z$ coordinate of this force acting on $m_i$ is negative, independently of whether $z_j(0)<z_i(0)$ or $z_j(0)=z_i(0)$. Since this is true for every $j=1,\dots,n,\ j\ne i$, it follows that $\ddot{z}_i(0)<0$. Therefore $m_i$ moves down the hyperboloid, so the original configuration is not a fixed point. \end{proof} \subsection{Elliptic Relative Equilibria in $\bf H^2$} We now consider elliptic relative equilibria, and prove an analogue of Theorem \ref{ngonS}. \begin{theorem} Consider the $n$-body problem with equal masses in ${\bf H}^2$.
Then, for any $m>0$ and $z>1$, there are a positive and a negative $\omega$ that produce elliptic relative equilibria in which the bodies are at the vertices of an $n$-gon rotating in the plane $z=$ constant. \label{ngonH} \end{theorem} \begin{proof} The proof works in the same way as for Theorem \ref{ngonS}, by considering the cases $n$ odd and even separately. The only differences are that we replace $r$ with $\rho$, the relation $r^2+z^2=1$ with $z^2=\rho^2+1$, and the denominator $(1-k_{0j}^2)^{3/2}$ with $(c_{0j}^2-1)^{3/2}$, wherever it appears, where $c_{0j}=-k_{0j}$ replaces $k_{0j}$. Unlike in ${\bf S}^2$, the resulting equation in the case $n$ even can be satisfied for all admissible values of $z$. \end{proof} As in ${\bf S}^2$, the equilateral triangle presents particular interest, so let us say a bit more about it than in the general case of the regular $n$-gon. \begin{cor} Consider the 3-body problem with equal masses, $m:=m_1=m_2=m_3$, in ${\bf H}^2$. Then for any $m>0$ and $z>1$, there are a positive and a negative $\omega$ that produce elliptic relative equilibria in which the bodies are at the vertices of an equilateral triangle that rotates in the plane $z=$ constant. Moreover, for every $\omega^2/m>0$ there is a unique $z>1$ as above. \label{equilateralH} \end{cor} \begin{proof} Substituting in system (\ref{coordH}) a solution of the form \begin{equation} x_i=\rho\cos(\omega t+\alpha_i),\ \ y_i=\rho\sin(\omega t+\alpha_i),\ \ z_i=z, \label{subst} \end{equation} with $z=\sqrt{\rho^2+1}$, $\alpha_1=0, \alpha_2=2\pi/3, \alpha_3=4\pi/3$, we are led to the equation \begin{equation} {8\over\sqrt{3}(3z^4-2z^2-1)^{3/2}}={\omega^2\over m}.\label{eq5} \end{equation} The left hand side is positive for $z>1$, tends to infinity when $z\to1$, and tends to zero when $z\to\infty$. So for any $z$ in this interval and $m>0$, there are a positive and a negative $\omega$ for which the above equation is satisfied.
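The uniqueness claim in the second part of the statement can be illustrated numerically. The sketch below (the grid and the sample level $\omega^2/m=1$ are ad hoc choices) checks that the left-hand side of \eqref{eq5} decreases strictly on $(1,\infty)$ and solves the equation for the sample level by bisection:

```python
import math

def g(z):
    # left-hand side of equation (eq5)
    return 8.0 / (math.sqrt(3.0) * (3.0 * z ** 4 - 2.0 * z * z - 1.0) ** 1.5)

# g decreases strictly on a fine grid in (1, 6), from +infinity toward 0,
# so each level omega^2/m > 0 is attained exactly once
zs = [1.0 + i / 1000.0 for i in range(1, 5000)]
decreasing = all(g(a) > g(b) for a, b in zip(zs, zs[1:]))

# bisection for the sample level omega^2/m = 1
w2m = 1.0
lo, hi = 1.0 + 1e-9, 100.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if g(mid) > w2m:
        lo = mid
    else:
        hi = mid
z_star = 0.5 * (lo + hi)
residual = g(z_star) - w2m
```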
\end{proof} As we already proved in the previous section, an equilateral triangle rotating in its own plane forms an elliptic relative equilibrium in ${\bf S}^2$ only if the three masses lying at its vertices are equal. The same result is true in ${\bf H}^2$, as we show next. \begin{proposition} In the 3-body problem in ${\bf H}^2$, if the bodies $m_1, m_2, m_3$ are initially at the vertices of an equilateral triangle in the plane $z=$ constant for some $z>1$, then there are initial velocities that lead to an elliptic relative equilibrium in which the triangle rotates in its own plane if and only if $m_1=m_2=m_3$.\label{equilH} \end{proposition} \begin{proof} The implication showing that, if $m_1=m_2=m_3$, the rotating equilateral triangle is an elliptic relative equilibrium follows from Theorem \ref{ngonH}. To prove the other implication, we substitute into equations (\ref{coordH}) a solution of the form (\ref{subst}) with $i=1,2,3,\ \rho:=\rho_1=\rho_2=\rho_3,\ z:=z_1=z_2=z_3= (\rho^2+1)^{1/2}$, and $\alpha_1=0, \alpha_2=2\pi/3, \alpha_3=4\pi/3$. The computations then lead to the system \begin{equation} \begin{cases} m_1+m_2=\zeta\omega^2\cr m_2+m_3=\zeta\omega^2\cr m_3+m_1=\zeta\omega^2,\cr \end{cases} \end{equation} where $\zeta=\sqrt{3}(3z^4-2z^2-1)^{3/2}/4$. But for any $z=$ constant with $z>1$, the above system has a solution only for $m_1=m_2=m_3=\zeta\omega^2/2$. Therefore the masses must be equal. \end{proof} The following result perfectly resembles Theorem \ref{lagranS}. The proof works the same way, by just replacing the circular trigonometric functions with hyperbolic ones and changing the signs to reflect the equations of motion in ${\bf H}^2$.
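The uniqueness claim in Corollary \ref{equilateralH} can also be checked numerically: since $3z^4-2z^2-1$ is increasing for $z>1$, the left-hand side of equation (\ref{eq5}) is strictly decreasing there, and a bisection recovers the unique $z$ for any prescribed $\omega^2/m$. A minimal sketch (the sample values of $\omega^2/m$ and the tolerances are arbitrary choices, not taken from the paper):

```python
import math

def f(z):
    # Left-hand side of equation (eq5): 8 / (sqrt(3) * (3 z^4 - 2 z^2 - 1)^(3/2))
    return 8.0 / (math.sqrt(3) * (3*z**4 - 2*z**2 - 1)**1.5)

def solve_z(target, lo=1.0 + 1e-9, hi=1e6, tol=1e-12):
    """Bisection for f(z) = target on z > 1.  Since 3 z^4 - 2 z^2 - 1 is
    increasing for z > 1, f is strictly decreasing there: it blows up as
    z -> 1+ and vanishes as z -> infinity, so the root is unique."""
    while hi - lo > tol * max(1.0, hi):
        mid = 0.5 * (lo + hi)
        if f(mid) > target:
            lo = mid  # f(mid) still too large: the root lies to the right
        else:
            hi = mid
    return 0.5 * (lo + hi)

for target in (0.1, 1.0, 10.0):  # sample values of omega^2 / m
    z = solve_z(target)
    assert z > 1 and abs(f(z) - target) < 1e-6 * target
```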
\begin{theorem} For all Lagrangian solutions in ${\bf H}^2$, the masses $m_1, m_2$ and $m_3$ have to rotate on the same circle, whose plane must be orthogonal to the rotation axis, and therefore $m_1=m_2=m_3$.\label{lagranH} \end{theorem} We will further prove an analogue of Theorem \ref{regeo3}. \begin{theorem} Consider the 3-body problem in ${\bf H}^2$ with equal masses, $m:=m_1=m_2=m_3$. Fix the body of mass $m_1$ at $(0,0,1)$ and the bodies of masses $m_2$ and $m_3$ at the opposite ends of a diameter on the circle $z=$ constant. Then, for any $m>0$ and $z>1$, there are a positive and a negative $\omega$, which produce elliptic relative equilibria that rotate around the $z$ axis.\label{regeo3H} \end{theorem} \begin{proof} Substituting into the equations of motion (\ref{coordH}) a solution of the form \begin{align*} x_1&=0,& y_1&=0,& z_1&=1,\\ x_2&=\rho\cos\omega t,& y_2&=\rho\sin\omega t,& z_2&=z,\\ x_3&=\rho\cos(\omega t+\pi),& y_3&=\rho\sin(\omega t+\pi),& z_3&=z, \end{align*} where $\rho\ge0$ and $z\ge1$ are constants satisfying $z^2=\rho^2+1$, leads either to identities or to the algebraic equation \begin{equation} {4z^2+1\over 4z^3(z^2-1)^{3/2}}={\omega^2\over m}.\label{ratio2} \end{equation} The function on the left hand side is positive for $z>1$. Therefore, for every $m>0$ and $z>1$, there are a positive and a negative $\omega$ that lead to a geodesic elliptic relative equilibrium. The sign of $\omega$ determines the sense of rotation. \end{proof} \begin{remark} For every $\omega^2/m>0$, there is exactly one $z>1$ that satisfies equation (\ref{ratio2}) (see Figure \ref{regeoH2}). \end{remark} \begin{figure} \centering \includegraphics[width=1.7in]{plot3} \caption{\small The graph of the function $f(z)={4z^2+1\over 4z^3(z^2-1)^{3/2}}$ for $z>1$.}\label{regeoH2} \end{figure} \subsection{Hyperbolic Relative Equilibria in $\bf H^2$} We now prove some results concerning hyperbolic relative equilibria. 
We first show that, in the $n$-body problem, hyperbolic relative equilibria do not exist along any given fixed geodesic of ${\bf H}^2$. In other words, the bodies cannot chase each other along a geodesic and maintain the same initial distances for all times. \begin{theorem} Along any fixed geodesic, the $n$-body problem in ${\bf H}^2$ has no hyperbolic relative equilibria.\label{noreH} \end{theorem} \begin{proof} Without loss of generality, we can prove this result for the geodesic $x=0$. We will show that equations (\ref{coordH}) do not have solutions of the form (\ref{hyper}) with $x_i=0$ and (consequently) $\rho_i=1,\ i=1,\dots,n$. Substituting \begin{equation} x_i=0,\ \ y_i=\sinh(\omega t+\alpha_i),\ \ {\rm and} \ \ z_i=\cosh(\omega t+\alpha_i)\label{hypre} \end{equation} into system (\ref{coordH}), the equation corresponding to the $y_i$ coordinate becomes \begin{equation} \sum_{j=1,j\ne i}^n{m_j[\sinh(\omega t+\alpha_j)-\cosh(\alpha_i-\alpha_j)\sinh(\omega t+\alpha_i)]\over|\sinh(\alpha_i-\alpha_j)|^3}=0. \label{tri} \end{equation} Assume now that $\alpha_i>\alpha_j$ for all $j\ne i$. Let $\alpha_{M(i)}$ be the maximum of all $\alpha_j$ with $j\ne i$. Then for $t\in(-\alpha_{M(i)}/\omega,-\alpha_i/\omega)$, we have that $\sinh(\omega t+\alpha_j)<0$ for all $j\ne i$ and $\sinh(\omega t+\alpha_i)>0$. Therefore the left hand side of equation (\ref{tri}) is negative in this interval, so the identity cannot hold for all $t\in{\bf R}$. It follows that a necessary condition to satisfy equation (\ref{tri}) is that $\alpha_{M(i)}\ge\alpha_i$.
But this inequality must be verified for all $i=1,\dots,n$, a fact that can be written as: $$\alpha_1\ge\alpha_2 \ \ {\rm or}\ \ \alpha_1\ge\alpha_3\ \ {\rm or}\ \ \dots \ \ {\rm or} \ \ \alpha_1\ge\alpha_n,$$ $$\alpha_2\ge\alpha_1 \ \ {\rm or}\ \ \alpha_2\ge\alpha_3\ \ {\rm or}\ \ \dots \ \ {\rm or} \ \ \alpha_2\ge\alpha_n,$$ $$ \dots $$ $$\alpha_n\ge\alpha_1 \ \ {\rm or}\ \ \alpha_n\ge\alpha_2\ \ {\rm or}\ \ \dots \ \ {\rm or} \ \ \alpha_n\ge\alpha_{n-1}.$$ The constants $\alpha_1,\dots,\alpha_n$ must satisfy one inequality from each of the above lines. But every possible choice implies the existence of at least one $i$ and one $j$ with $i\ne j$ and $\alpha_i=\alpha_j$. For those $i$ and $j$, $\sinh(\alpha_i-\alpha_j)=0$, so equation (\ref{tri}) is undefined; therefore equations (\ref{coordH}) cannot have solutions of the form (\ref{hypre}). Consequently hyperbolic relative equilibria do not exist along the geodesic $x=0$. \end{proof} Theorem \ref{noreH} raises the question of whether hyperbolic relative equilibria exist at all. For three equal masses, the answer is given by the following result, which shows that, in ${\bf H}^2$, three bodies can move along hyperbolas lying in parallel planes of ${\bf R}^3$, maintaining the initial distances among themselves and remaining on the same geodesic (which rotates hyperbolically). The existence of such solutions is surprising. They resemble fighter planes flying in formation rather than celestial bodies moving under the action of gravity alone. \begin{theorem} In the 3-body problem of equal masses, $m:=m_1=m_2=m_3$, in ${\bf H}^2$, for any given $m>0$ and $x\ne0$, there exist a positive and a negative $\omega$ that lead to hyperbolic relative equilibria.
\label{hyp} \end{theorem} \begin{proof} We will show that ${\bf q}_i(t)=(x_i(t),y_i(t),z_i(t)), \ i=1,2,3$, is a hyperbolic relative equilibrium of system (\ref{coordH}) for \begin{align*} x_1&=0,& y_1&=\sinh\omega t,& z_1&=\cosh\omega t,\\ x_2&=x,& y_2&=\rho\sinh\omega t,& z_2&=\rho\cosh\omega t,\\ x_3&=-x,& y_3&=\rho\sinh\omega t,& z_3&=\rho\cosh\omega t, \end{align*} where $\rho=(1+x^2)^{1/2}$.\label{hypre3H} Notice first that $$x_1x_2+y_1y_2-z_1z_2=x_1x_3+y_1y_3-z_1z_3=-\rho,$$ $$x_2x_3+y_2y_3-z_2z_3=-2x^2-1,$$ $$ \dot{x}_1^2+\dot{y}_1^2-\dot{z}_1^2=\omega^2,\ \ \dot{x}_2^2+\dot{y}_2^2-\dot{z}_2^2=\dot{x}_3^2+\dot{y}_3^2-\dot{z}_3^2= \rho^2\omega^2. $$ Substituting the above coordinates and expressions into equations (\ref{coordH}), we are led either to identities or to the equation \begin{equation} {4x^2+5\over 4x^2|x|(x^2+1)^{3/2}}={\omega^2\over m},\label{eq7} \end{equation} from which the statement of the theorem follows. \end{proof} \begin{remark} The left hand side of equation (\ref{eq7}) is undefined for $x=0$, but it tends to infinity when $x\to0$ and to 0 when $x\to\pm\infty$. This means that for each $\omega^2/m>0$ there are exactly one positive and one negative $x$ (equal in absolute value) satisfying the equation. \end{remark} \begin{remark} Theorem \ref{hyp} is also true if, say, $m:=m_1$ and $M:=m_2=m_3$. Then the analogue of equation (\ref{eq7}) is $$ {m\over x^2|x|(x^2+1)^{1/2}}+{M\over4x^2|x|(x^2+1)^{3/2}}=\omega^2, $$ and it is obvious that for any $m,M>0$ and $x\ne 0$, there are a positive and a negative $\omega$ satisfying the above equation. \end{remark} \begin{remark} Theorem \ref{hyp} also works for two bodies of equal masses, $m:=m_1=m_2$, of coordinates $$ x_1=-x_2=x, y_1=y_2=\rho\sinh\omega t, z_1=z_2=\rho\cosh\omega t, $$ where $x$ is a positive constant and $\rho=(x^2+1)^{1/2}$.
Then the analogue of equation (\ref{eq7}) is $$ {1\over 4x^2|x|(x^2+1)^{3/2}}={\omega^2\over m}, $$ which obviously supports a statement similar to the one in Theorem \ref{hyp}. \end{remark} \subsection{Parabolic Relative Equilibria in $\bf H^2$} We now show that there are no parabolic relative equilibria. More precisely, we prove the following result. \begin{theorem}\label{thpar} The $n$-body problem in ${\bf H}^2$ has no parabolic relative equilibria. \end{theorem} \begin{proof} Let $x_i,y_i$, and $z_i$ be as in the definition of parabolic relative equilibria (\ref{parabol}). Then $\dot x_i=-b_i+c_i$, $\dot y_i=a_i+(c_i-b_i)t$, and $\dot z_i=a_i+(c_i-b_i)t$. Thus, the first component of the angular momentum is $\sum_i m_i a_i(b_i-c_i) -\sum_i m_i(b_i-c_i)^2t$. It follows that $b_i=c_i$ because the first component of the angular momentum must be constant. But $a_i^2+b_i^2-c_i^2=-1$, hence $a_i^2=-1$, which is impossible, since $a_i$ is a real number. \end{proof} \section{Saari's conjecture} In 1970, Don Saari conjectured that solutions of the classical $n$-body problem with constant moment of inertia are relative equilibria, \cite{Saa}, \cite{Saa1}. The moment of inertia is defined in classical Newtonian celestial mechanics as ${1\over 2}\sum_{i=1}^nm_i{\bf q}_i\cdot{\bf q}_i$, a function that gives a crude measure of the bodies' distribution in space. But this definition makes little sense in ${\bf S}^2$ and ${\bf H}^2$ because ${\bf q}_i\odot{\bf q}_i=\pm1$ for every $i=1,\dots,n$. To avoid this problem, we adopt the standard point of view used in physics, and define the moment of inertia in ${\bf S}^2$ or ${\bf H}^2$ about the direction of the angular momentum. But while fixing an axis in ${\bf S}^2$ does not restrict generality, the symmetry of ${\bf H}^2$ makes us distinguish between two cases.
Indeed, in ${\bf S}^2$ we can assume that the rotation takes place around the $z$ axis, and thus define the moment of inertia as \begin{equation} {\bf I}:=\sum_{i=1}^nm_i(x_i^2+y_i^2).\label{momz} \end{equation} In ${\bf H}^2$, all possibilities can be reduced via suitable isometric transformations (see Appendix) to: (i) the symmetry about the $z$ axis, when the moment of inertia takes the same form (\ref{momz}), and (ii) the symmetry about the $x$ axis, which corresponds to hyperbolic rotations, when---in agreement with the definition of the Lorentz product (see Appendix)---we define the moment of inertia as \begin{equation} {\bf J}:=\sum_{i=1}^nm_i(y_i^2-z_i^2).\label{momx} \end{equation} The case of parabolic rotations will not be considered because there are no parabolic relative equilibria. \smallskip These definitions allow us to formulate the following conjecture: \medskip \noindent {\bf Saari's Conjecture in ${\bf S}^2$ and ${\bf H}^2$}. {\it For the gravitational $n$-body problem in ${\bf S}^2$ and ${\bf H}^2$, every solution that has a constant moment of inertia about the direction of the angular momentum is either an elliptic relative equilibrium in ${\bf S}^2$ or ${\bf H}^2$, or a hyperbolic relative equilibrium in ${\bf H}^2$.} \medskip By generalizing an idea we used in the Euclidean case, \cite{Dia}, \cite{Dia2}, we can now settle this conjecture when the bodies satisfy an additional constraint. More precisely, we will prove the following result.
\begin{theorem} For the gravitational $n$-body problem in ${\bf S}^2$ and ${\bf H}^2$, every solution with constant moment of inertia about the direction of the angular momentum for which the bodies remain aligned along a geodesic that rotates elliptically in ${\bf S}^2$ or ${\bf H}^2$, or hyperbolically in ${\bf H}^2$, is either an elliptic relative equilibrium in ${\bf S}^2$ or ${\bf H}^2$, or a hyperbolic relative equilibrium in ${\bf H}^2$.\label{Saari} \end{theorem} \begin{proof} Let us first prove the case in which ${\bf I}$ is constant in ${\bf S}^2$ and ${\bf H}^2$, i.e.~when the geodesic rotates elliptically. According to the above definition of $\bf I$, we can assume without loss of generality that the geodesic passes through the point $(0,0,1)$ and rotates about the $z$-axis with angular velocity $\omega(t)\ne 0$. The angular momentum of each body is ${\bf L}_i=m_i{\bf q}_i\otimes \dot{\bf q}_i$, so its derivative with respect to $t$ takes the form $$\dot{\bf L}_i=m_i\dot{\bf q}_i\otimes\dot{\bf q}_i+m_i{\bf q}_i\otimes\ddot{\bf q}_i= m_i{\bf q}_i\otimes{\widetilde\nabla_{{\bf q}_i}} U_\kappa({\bf q})-m_i\dot{\bf q}_i^2{\bf q}_i\otimes{\bf q}_i =m_i{\bf q}_i\otimes{\widetilde\nabla_{{\bf q}_i}} U_\kappa({\bf q}),$$ with $\kappa=1$ in ${\bf S}^2$ and $\kappa=-1$ in ${\bf H}^2$. Since ${\bf q}_i\odot{\widetilde\nabla_{{\bf q}_i}} U_\kappa({\bf q})=0$, it follows that ${\widetilde\nabla_{{\bf q}_i}} U_\kappa({\bf q})$ is either zero or orthogonal to ${\bf q}_i$. (Recall that orthogonality here is meant in terms of the standard inner product because, both in ${\bf S}^2$ and ${\bf H}^2$, ${\bf q}_i\odot{\widetilde\nabla_{{\bf q}_i}} U_\kappa({\bf q})={\bf q}_i\cdot{\nabla_{{\bf q}_i}} U_\kappa({\bf q})$.) If ${\widetilde\nabla_{{\bf q}_i}} U_\kappa({\bf q})={\bf 0}$, then $\dot{\bf L}_i={\bf 0}$, so $\dot{L}_i^z=0$. Assume now that ${\widetilde\nabla_{{\bf q}_i}} U_\kappa({\bf q})$ is orthogonal to ${\bf q}_i$. 
Since all the particles are on a geodesic, their corresponding position vectors are in the same plane; therefore any linear combination of them is in this plane, so ${\widetilde\nabla_{{\bf q}_i}} U_\kappa({\bf q})$ is in the same plane. Thus $\widetilde\nabla_{{\bf q}_i} U_\kappa({\bf q})$ and ${\bf q}_i$ are in a plane orthogonal to the $xy$ plane. It follows that $\dot{\bf L}_i$ is parallel to the $xy$ plane and orthogonal to the $z$ axis. Thus the $z$ component, $\dot{L}_i^z$, of $\dot{\bf L}_i$ is $0$, the same conclusion we obtained in the case ${\widetilde\nabla_{{\bf q}_i}} U_\kappa({\bf q})={\bf 0}$. Consequently, $L_i^z=c_i$, where $c_i$ is a constant. Let us also remark that since the angular momentum and angular velocity vectors are parallel to the $z$ axis, $L_i^z={\bf I}_i\omega(t)$, where ${\bf I}_i=m_i(x_i^2+y_i^2)$ is the moment of inertia of the body $m_i$ about the $z$-axis. Since the total moment of inertia, ${\bf I}$, is constant, and $\omega(t)$ is the same for all bodies because they belong to the same rotating geodesic, it follows that $\sum_{i=1}^n{\bf I}_i\omega(t)={\bf I}\omega(t)=c,$ where $c$ is a constant. Consequently, $\omega$ is constant. Moreover, since $L_i^z=c_i$, it follows that ${\bf I}_i\omega(t)=c_i$. Then every ${\bf I}_i$ is constant, and so is every $z_i,\ i=1,\dots,n$. Hence each body of mass $m_i$ has a constant $z_i$-coordinate, and all bodies rotate with the same constant angular velocity around the $z$-axis, properties that agree with our definition of an elliptic relative equilibrium. We now prove the case ${\bf J}=$ constant, i.e.~when the geodesic rotates hyperbolically in ${\bf H}^2$. According to the definition of $\bf J$, we can assume that the bodies are on a moving geodesic whose plane contains the $x$ axis for all time and whose vertex slides along the geodesic hyperbola $x=0$.
(This moving geodesic hyperbola can also be visualized as the intersection between the sheet $z>0$ of the hyperboloid and the plane containing the $x$ axis and rotating about it. For an instant, this plane also contains the $z$ axis.) The angular momentum of each body is ${\bf L}_i=m_i{\bf q}_i\boxtimes\dot{\bf q}_i$, so we can show as before that its derivative takes the form $\dot{\bf L}_i=m_i{\bf q}_i\boxtimes{\overline\nabla_{{\bf q}_i}} U_{-1}({\bf q})$. Again, ${\overline\nabla_{{\bf q}_i}} U_{-1}({\bf q})$ is either zero or orthogonal to ${\bf q}_i$. In the former case we can draw the same conclusion as earlier, that $\dot{\bf L}_i={\bf 0}$, so in particular $\dot{L}_i^x=0$. In the latter case, ${\bf q}_i$ and ${\overline\nabla_{{\bf q}_i}} U_{-1}({\bf q})$ are in the plane of the moving hyperbola, so their cross product, ${\bf q}_i\boxtimes{\overline\nabla_{{\bf q}_i}} U_{-1}({\bf q})$ (which differs from the standard cross product only by its opposite $z$ component), is orthogonal to the $x$ axis, and therefore $\dot{L}_i^x=0$. Thus $\dot{L}_i^x=0$ in either case. From here the proof proceeds as before by replacing $\bf I$ with $\bf J$ and the $z$ axis with the $x$ axis, and noticing that $L_i^x={\bf J}_i\omega(t)$, to show that every $m_i$ has a constant $x_i$ coordinate. In other words, each body is moving along a (in general non-geodesic) hyperbola given by the intersection of the hyperboloid with a plane orthogonal to the $x$ axis. These facts, in combination with the sliding of the moving geodesic hyperbola along the fixed geodesic hyperbola $x=0$, are in agreement with our definition of a hyperbolic relative equilibrium. \end{proof} \section{Appendix} \subsection{The Weierstrass model} Since the Weierstrass model of the hyperbolic (Bolyai-Lobachevsky) plane is little known, we will present here its basic properties.
This model is appealing for at least two reasons: (i) it allows an obvious comparison with the sphere, both from the geometric and the analytic point of view; (ii) it emphasizes the differences between the Bolyai-Lobachevsky and the Euclidean plane as clearly as the well-known differences between the Euclidean plane and the sphere. For us, this model was the key to obtaining the results we proved for the $n$-body problem for $\kappa<0$. The Weierstrass model is constructed on one of the sheets of the hyperboloid $x^2+y^2-z^2=-1$ in the 3-dimensional Minkowski space $\mathcal{M}^3:=({\bf R}^3, \boxdot)$, in which ${\bf a}\boxdot{\bf b}=a_xb_x+a_yb_y-a_zb_z$, with ${\bf a}=(a_x,a_y,a_z)$ and ${\bf b}=(b_x,b_y,b_z)$, represents the Lorentz inner product. We choose the $z>0$ sheet of the hyperboloid, which we identify with the Bolyai-Lobachevsky plane ${\bf H}^2$. A linear transformation $T\colon\mathcal{M}^3\to\mathcal{M}^3$ is orthogonal if $T({\bf a})\boxdot T({\bf a})={\bf a}\boxdot{\bf a}$ for any ${\bf a}\in\mathcal{M}^3$. The set of these transformations, together with the Lorentz inner product, forms the orthogonal group $O(\mathcal{M}^3)$, given by matrices of determinant $\pm1$. Therefore the group $SO(\mathcal{M}^3)$ of orthogonal transformations of determinant 1 is a subgroup of $O(\mathcal{M}^3)$. Another subgroup of $O(\mathcal{M}^3)$ is $G(\mathcal{M}^3)$, which is formed by the transformations $T$ that leave ${\bf H}^2$ invariant. Furthermore, $G(\mathcal{M}^3)$ contains the closed Lorentz subgroup, ${\rm Lor}(\mathcal{M}^3):= G(\mathcal{M}^3) \cap SO(\mathcal{M}^3)$. An important result is the Principal Axis Theorem for ${\rm Lor}(\mathcal{M}^3)$, \cite{Dillen}, \cite{Nomizu}. Let us define the Lorentzian rotations about an axis as the 1-parameter subgroups of ${\rm Lor}(\mathcal{M}^3)$ that leave the axis pointwise fixed.
Then the Principal Axis Theorem states that every Lorentzian transformation has one of the forms: $$ A=P\begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} P^{-1}, $$ $$A=P\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cosh s & \sinh s \\ 0 & \sinh s & \cosh s \end{bmatrix}P^{-1}, $$ or $$A=P\begin{bmatrix} 1 & -t & t \\ t & 1-t^2/2 & t^2/2 \\ t & -t^2/2 & 1+t^2/2 \end{bmatrix}P^{-1}, $$ where $\theta\in[0,2\pi)$, $s, t\in{\bf R}$, and $P\in {\rm Lor}(\mathcal{M}^3)$. These transformations are called elliptic, hyperbolic, and parabolic, respectively. The elliptic transformations are rotations about a {\it timelike} axis---the $z$ axis in our case; hyperbolic rotations are rotations about a {\it spacelike} axis---the $x$ axis in our context; and parabolic transformations are rotations about a {\it lightlike} (or null) axis, represented here by the line $x=0$, $y=z$. This result resembles Euler's Principal Axis Theorem, which states that any element of $SO(3)$ can be written, in some orthonormal basis, as a rotation about the $z$ axis. The geodesics of ${\bf H}^2$ are the hyperbolas obtained by intersecting the hyperboloid with planes passing through the origin of the coordinate system. For any two distinct points ${\bf a}$ and ${\bf b}$ of ${\bf H}^2$, there is a unique geodesic that connects them, and the distance between these points is given by $d({\bf a},{\bf b})=\cosh^{-1}(-{\bf a}\boxdot{\bf b})$. In the framework of Weierstrass's model, the parallels' postulate of hyperbolic geometry can be translated as follows. Take a geodesic $\gamma$, i.e.~a hyperbola obtained by intersecting a plane through the origin, $O$, of the coordinate system with the upper sheet, $z>0$, of the hyperboloid. This hyperbola has two asymptotes in its plane: the straight lines $a$ and $b$, intersecting at $O$. Take a point, $P$, on the upper sheet of the hyperboloid but not on the chosen hyperbola. 
The plane $aP$ produces the geodesic hyperbola $\alpha$, whereas $bP$ produces $\beta$. These two hyperbolas intersect at $P$. Then $\alpha$ and $\gamma$ are parallel geodesics meeting at infinity along $a$, while $\beta$ and $\gamma$ are parallel geodesics meeting at infinity along $b$. All the hyperbolas between $\alpha$ and $\beta$ (also obtained from planes through $O$) are non-secant with $\gamma$. Like the Euclidean plane, the abstract Bolyai-Lobachevsky plane has no privileged points or geodesics. But the Weierstrass model has some convenient points and geodesics, such as the point $(0,0,1)$ and the geodesics passing through it. The elements of ${\rm Lor}(\mathcal{M}^3)$ allow us to move the geodesics of ${\bf H}^2$ to convenient positions, a property we frequently use in this paper to simplify our arguments. Other properties of the Weierstrass model can be found in \cite{Fab} and \cite{Rey}. The Lorentz group is treated in some detail in \cite{Bak}, but the Principal Axis Theorems for the Lorentz group contained in \cite{Bak} and \cite{Rey} fail to include parabolic rotations, and are therefore incomplete. \subsection{History of the model} The first researcher who mentioned Karl Weierstrass in connection with the hyperboloidal model of the Bolyai-Lobachevsky plane was Wilhelm Killing. In a paper published in 1880, \cite{Kil1}, he used what he called Weierstrass's coordinates to describe the ``exterior hyperbolic plane'' as an ``ideal region'' of the Bolyai-Lobachevsky plane. In 1885, he added that Weierstrass had introduced these coordinates, in combination with ``numerous applications,'' during a seminar held in 1872, \cite{Kil2}, pp.~258-259. We found no evidence of any written account of the hyperboloidal model for the Bolyai-Lobachevsky plane prior to the one Killing gave in a paragraph of \cite{Kil2}, p.~260. His remarks might have inspired Richard Faber to name this model after Weierstrass and to dedicate a chapter to it in \cite{Fab}, pp.~247-278.
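The three normal forms given by the Principal Axis Theorem quoted earlier in this Appendix can be verified to preserve the Lorentz product: a matrix $A$ is Lorentzian exactly when $A^TJA=J$, where $J={\rm diag}(1,1,-1)$ is the matrix of $\boxdot$. A quick numerical sketch of this check (the parameter values chosen for $\theta$, $s$, $t$ are arbitrary samples):

```python
import math

J = [[1, 0, 0], [0, 1, 0], [0, 0, -1]]  # matrix of the Lorentz product

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def preserves_lorentz_form(A, eps=1e-9):
    # A is Lorentzian iff A^T J A = J
    M = matmul(transpose(A), matmul(J, A))
    return all(abs(M[i][j] - J[i][j]) < eps for i in range(3) for j in range(3))

th, s, t = 0.7, 1.3, 2.1  # arbitrary parameter samples
elliptic = [[math.cos(th), -math.sin(th), 0],
            [math.sin(th),  math.cos(th), 0],
            [0, 0, 1]]
hyperbolic = [[1, 0, 0],
              [0, math.cosh(s), math.sinh(s)],
              [0, math.sinh(s), math.cosh(s)]]
parabolic = [[1, -t, t],
             [t, 1 - t*t/2, t*t/2],
             [t, -t*t/2, 1 + t*t/2]]

assert all(map(preserves_lorentz_form, (elliptic, hyperbolic, parabolic)))
```

One can also check directly that the parabolic matrix fixes the lightlike vector $(0,1,1)$, in agreement with the statement about the null axis $x=0$, $y=z$.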
\bigskip \noindent{\bf Acknowledgments.} We would like to thank Sergey Bolotin, Alexey Borisov, Eduardo Pi\~{n}a, Don Saari, Alexey Shchepetilov, and Jeff Xia for discussions and suggestions that helped us improve this paper. We also acknowledge the grant support Florin Diacu and Manuele Santoprete received from NSERC (Canada) and Ernesto P\'erez-Chavela from CONACYT (Mexico).
{"config": "arxiv", "file": "0807.1747/Sphere13.tex"}
TITLE: what is "function countably simple"? QUESTION [0 upvotes]: I have this part of a proof: "We first consider the case $0<p<1$. Let $f=\sum_n a_n \chi_{E_n}$ be a simple function on X (take $f$ to be countably simple when $q=\infty$)" What does "take $f$ to be countably simple when $q=\infty$" mean? Thank you REPLY [0 votes]: (formally a comment, but perhaps this is sufficient for an answer) This probably means that there are countably infinitely many values of $n$ involved. The number of distinct values of $a_n$ might not be infinite (for example, we could have $E_n = (\frac{1}{n+1}, \, \frac{1}{n})$ for each $n,$ with $a_n$ equal to $1$ or $2$ according to whether $n$ is even or odd), and whether infinitely many distinct sets $E_n$ could be involved will depend on your specific definition of "simple function".
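To make the answer's example concrete, here is a small sketch that evaluates $f=\sum_n a_n \chi_{E_n}$ for $E_n = (\frac{1}{n+1}, \frac{1}{n})$ with the alternating values $a_n \in \{1,2\}$; the sample points below are illustrative choices, not from the original question:

```python
from fractions import Fraction

def f(x):
    """The answer's countably simple function f = sum_n a_n * chi_{E_n} with
    E_n = (1/(n+1), 1/n) and a_n = 1 for n even, 2 for n odd: countably many
    sets E_n are involved, yet only the two values 1 and 2 occur."""
    if not 0 < x < 1:
        return 0
    n = 1
    while True:
        if Fraction(1, n + 1) < x < Fraction(1, n):
            return 1 if n % 2 == 0 else 2
        if x <= Fraction(1, n + 1):
            n += 1  # x lies further down toward 0; try the next interval
        else:
            return 0  # x is an endpoint 1/n, which lies in no open E_n

assert f(Fraction(3, 4)) == 2   # 3/4 in E_1 = (1/2, 1), n = 1 odd
assert f(Fraction(2, 5)) == 1   # 2/5 in E_2 = (1/3, 1/2), n = 2 even
assert f(Fraction(1, 2)) == 0   # endpoints belong to no E_n
```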
{"set_name": "stack_exchange", "score": 0, "question_id": 3495935}
TITLE: Which affine schemes are projective? QUESTION [1 upvotes]: Let $k$ be a field. Are there any useful necessary and sufficient conditions on $k$-algebras $A$ such that $\mathrm{Spec}(A)$ is a projective scheme over $k$? I know that there are very few of these, but I don't think that $k$ or $k\times k$ are the only ones. REPLY [7 votes]: If $X=\textrm{Spec }A$ is a projective $k$-variety (not necessarily connected), then $A=\Gamma(X,\mathscr O_X)$ is a finite-dimensional $k$-algebra, as is the case for every proper $k$-scheme. This means that $A$ is an Artin $k$-algebra, hence $$\dim X=\dim_{\textrm{Krull}}A=0.$$ So, as already mentioned in the comments, only finite $k$-schemes can be both affine and projective. If $A$ is also local and has residue field $k$, then $\textrm{Spec }A$ is called a fat point over $k$.
{"set_name": "stack_exchange", "score": 1, "question_id": 1855943}
:: Integrability Formulas -- Part {III} :: by Bo Li and Na Ma environ vocabularies RELAT_1, FUNCT_1, ARYTM_1, SIN_COS, VALUED_1, NAT_1, INTEGRA1, FDIFF_1, SQUARE_1, ARYTM_3, ORDINAL2, PREPOWER, REAL_1, PARTFUN1, TAYLOR_1, CARD_1, ORDINAL4, RCOMP_1, INTEGRA5, XXREAL_0, SIN_COS4, SUBSET_1, XBOOLE_0, TARSKI, NUMBERS, XXREAL_2, SEQ_4, MEASURE5; notations TARSKI, XBOOLE_0, SIN_COS, SUBSET_1, ORDINAL1, NUMBERS, VALUED_1, XXREAL_0, XCMPLX_0, XREAL_0, REAL_1, FUNCT_1, RELSET_1, PARTFUN1, PARTFUN2, RCOMP_1, RFUNCT_1, MEASURE5, FCONT_1, SQUARE_1, INTEGRA5, PREPOWER, TAYLOR_1, FDIFF_1, SEQ_2, FDIFF_9, SIN_COS4, SEQ_4; constructors SIN_COS, TAYLOR_1, REAL_1, FDIFF_1, FCONT_1, SQUARE_1, PREPOWER, INTEGRA5, SEQ_4, PARTFUN2, RFUNCT_1, FDIFF_9, SIN_COS4, RELSET_1, INTEGRA1, COMSEQ_2, BINOP_2; registrations NUMBERS, MEMBERED, VALUED_0, INT_1, RELAT_1, RCOMP_1, RELSET_1, MEASURE5, XREAL_0, SQUARE_1, PREPOWER; requirements NUMERALS, SUBSET, ARITHM; definitions TARSKI, XBOOLE_0; equalities SIN_COS, VALUED_1, SIN_COS4, FDIFF_9, XCMPLX_0; expansions TARSKI; theorems PARTFUN1, RFUNCT_1, FUNCT_1, FDIFF_1, TARSKI, XBOOLE_0, INTEGRA5, SIN_COS, VALUED_1, XBOOLE_1, FDIFF_7, FDIFF_8, FDIFF_9, FDIFF_10, SIN_COS9, RELAT_1, FDIFF_2; begin :: Differentiation Formulas reserve a,x for Real; reserve n for Nat; reserve A for non empty closed_interval Subset of REAL; reserve f,f1 for PartFunc of REAL,REAL; reserve Z for open Subset of REAL; Lm1: 0 in Z implies (id Z)"{0} = {0} proof assume A1: 0 in Z; thus (id Z)"{0} c= {0} proof let x be object; assume A2: x in (id Z)"{0}; then x in dom id Z by FUNCT_1:def 7; then A3: x in Z; (id Z).x in {0} by A2,FUNCT_1:def 7; hence thesis by A3,FUNCT_1:18; end; let x be object; assume x in {0}; then A4: x = 0 by TARSKI:def 1; then (id Z).x = 0 by A1,FUNCT_1:18; then A5: (id Z).x in {0} by TARSKI:def 1; x in dom id Z by A1,A4; hence thesis by A5,FUNCT_1:def 7; end; theorem Th1: Z c= dom (sec*((id Z)^)) implies (-sec*((id Z)^)) is_differentiable_on Z & for x st x in 
Z holds ((-sec*((id Z)^))`|Z).x = sin.(1/x)/(x^2*(cos.(1/x))^2) proof assume A1:Z c= dom (sec*((id Z)^)); then A2:Z c= dom (-sec*((id Z)^)) by VALUED_1:def 5; A3:Z c= dom ((id Z)^) by A1,FUNCT_1:101; A4:not 0 in Z proof assume A5: 0 in Z; dom ((id Z)^) = dom id Z \ (id Z)"{0} by RFUNCT_1:def 2 .= dom id Z \ {0} by Lm1,A5; then not 0 in {0} by A5,A3,XBOOLE_0:def 5; hence thesis by TARSKI:def 1; end; then A6:(sec*((id Z)^)) is_differentiable_on Z by A1,FDIFF_9:8; then A7:(-1)(#)(sec*((id Z)^)) is_differentiable_on Z by A2,FDIFF_1:20; for x st x in Z holds ((-sec*((id Z)^))`|Z).x = sin.(1/x)/(x^2*(cos.(1/x))^2) proof let x; assume A8:x in Z; ((-sec*((id Z)^))`|Z).x=((-1)(#)((sec*((id Z)^))`|Z)).x by A6,FDIFF_2:19 .=(-1)*(((sec*((id Z)^))`|Z).x) by VALUED_1:6 .=(-1)*(-sin.(1/x)/(x^2*(cos.(1/x))^2)) by A1,A4,A8,FDIFF_9:8 .=sin.(1/x)/(x^2*(cos.(1/x))^2); hence thesis; end; hence thesis by A7; end; ::f.x=-cosec.(exp_R.x) theorem Th2: Z c= dom (cosec*exp_R) implies -cosec*exp_R is_differentiable_on Z & for x st x in Z holds ((-cosec*exp_R)`|Z).x = exp_R.x*cos.(exp_R.x)/(sin.(exp_R.x))^2 proof assume A1:Z c= dom (cosec*exp_R); then A2:Z c= dom (-cosec*exp_R) by VALUED_1:8; A3:cosec*exp_R is_differentiable_on Z by A1,FDIFF_9:13; then A4:(-1)(#)(cosec*exp_R) is_differentiable_on Z by A2,FDIFF_1:20; for x st x in Z holds ((-cosec*exp_R)`|Z).x = exp_R.x*cos.(exp_R.x)/(sin.(exp_R.x))^2 proof let x; assume A5:x in Z; ((-cosec*exp_R)`|Z).x=((-1)(#)((cosec*exp_R)`|Z)).x by A3,FDIFF_2:19 .=(-1)*(((cosec*exp_R)`|Z).x) by VALUED_1:6 .=(-1)*(-exp_R.x*cos.(exp_R.x)/(sin.(exp_R.x))^2) by A1,A5,FDIFF_9:13 .=exp_R.x*cos.(exp_R.x)/(sin.(exp_R.x))^2; hence thesis; end; hence thesis by A4; end; :: f.x = -cosec.(ln.x) theorem Th3: Z c= dom (cosec*ln) implies -cosec*ln is_differentiable_on Z & for x st x in Z holds ((-cosec*ln)`|Z).x = cos.(ln.x)/(x*(sin.(ln.x))^2) proof assume A1:Z c= dom (cosec*ln); then A2:Z c= dom (-cosec*ln) by VALUED_1:8; A3:cosec*ln is_differentiable_on Z by 
A1,FDIFF_9:15; then A4:(-1)(#)(cosec*ln) is_differentiable_on Z by A2,FDIFF_1:20; for x st x in Z holds ((-cosec*ln)`|Z).x = cos.(ln.x)/(x*(sin.(ln.x))^2) proof let x; assume A5: x in Z; ((-cosec*ln)`|Z).x=((-1)(#)((cosec*ln)`|Z)).x by A3,FDIFF_2:19 .=(-1)*(((cosec*ln)`|Z).x) by VALUED_1:6 .=(-1)*(-cos.(ln.x)/(x*(sin.(ln.x))^2)) by A1,A5,FDIFF_9:15 .=cos.(ln.x)/(x*(sin.(ln.x))^2); hence thesis; end; hence thesis by A4; end; :: f.x = -exp_R.(cosec.x) theorem Th4: Z c= dom (exp_R*cosec) implies -exp_R*cosec is_differentiable_on Z & for x st x in Z holds ((-exp_R*cosec)`|Z).x = exp_R.(cosec.x)*cos.x/(sin.x)^2 proof assume A1:Z c= dom (exp_R*cosec); then A2:Z c= dom (-exp_R*cosec) by VALUED_1:8; A3:exp_R*cosec is_differentiable_on Z by A1,FDIFF_9:17; then A4:(-1)(#)(exp_R*cosec) is_differentiable_on Z by A2,FDIFF_1:20; for x st x in Z holds ((-exp_R*cosec)`|Z).x = exp_R.(cosec.x)*cos.x/(sin.x)^2 proof let x; assume A5:x in Z; ((-exp_R*cosec)`|Z).x=((-1)(#)((exp_R*cosec)`|Z)).x by A3,FDIFF_2:19 .=(-1)*(((exp_R*cosec)`|Z).x) by VALUED_1:6 .=(-1)*(-exp_R.(cosec.x)*cos.x/(sin.x)^2) by A1,A5,FDIFF_9:17 .=exp_R.(cosec.x)*cos.x/(sin.x)^2; hence thesis; end; hence thesis by A4; end; :: f.x = -ln.(cosec.x) theorem Th5: Z c= dom (ln*cosec) implies -ln*cosec is_differentiable_on Z & for x st x in Z holds ((-ln*cosec)`|Z).x = cot.x proof assume A1:Z c= dom (ln*cosec); then A2:Z c= dom (-ln*cosec) by VALUED_1:8; A3:ln*cosec is_differentiable_on Z by A1,FDIFF_9:19; then A4:(-1)(#)(ln*cosec) is_differentiable_on Z by A2,FDIFF_1:20; A5:for x st x in Z holds sin.x<>0 proof let x; assume x in Z; then x in dom cosec by A1,FUNCT_1:11; hence thesis by RFUNCT_1:3; end; for x st x in Z holds ((-ln*cosec)`|Z).x = cot.x proof let x; assume A6: x in Z; ((-ln*cosec)`|Z).x =((-1)(#)((ln*cosec)`|Z)).x by A3,FDIFF_2:19 .=(-1)*(((ln*cosec)`|Z).x) by VALUED_1:6 .=(-1)*(-cos.x/sin.x) by A1,A6,FDIFF_9:19 .=cot(x) .=cot.x by A5,A6,SIN_COS9:16; hence thesis; end; hence thesis by A4; end; ::f.x=-(cosec.x) 
#Z n theorem Th6: Z c= dom (( #Z n)*cosec) & 1<=n implies -( #Z n)*cosec is_differentiable_on Z & for x st x in Z holds ((-( #Z n)*cosec)`|Z).x = n*cos.x/(sin.x) #Z (n+1) proof assume A1:Z c= dom (( #Z n)*cosec) & 1<=n; then A2:Z c= dom (-( #Z n)*cosec) & 1<=n by VALUED_1:8; A3:( #Z n)*cosec is_differentiable_on Z by A1,FDIFF_9:21; then A4:(-1)(#)(( #Z n)*cosec) is_differentiable_on Z by A2,FDIFF_1:20; for x st x in Z holds ((-( #Z n)*cosec)`|Z).x = n*cos.x/(sin.x) #Z (n+1) proof let x; assume A5:x in Z; ((-( #Z n)*cosec)`|Z).x=((-1)(#)((( #Z n)*cosec)`|Z)).x by A3,FDIFF_2:19 .=(-1)*(((( #Z n)*cosec)`|Z).x) by VALUED_1:6 .=(-1)*(-n*cos.x/(sin.x) #Z (n+1)) by A1,A5,FDIFF_9:21 .=n*cos.x/(sin.x) #Z (n+1); hence thesis; end; hence thesis by A4; end; ::f.x= -1/x*sec.x theorem Th7: Z c= dom ((id Z)^(#)sec) implies (-(id Z)^(#)sec) is_differentiable_on Z & for x st x in Z holds ((-(id Z)^(#)sec)`|Z).x = 1/cos.x/x^2-sin.x/x/(cos.x)^2 proof assume A1:Z c= dom ((id Z)^(#)sec); then A2:Z c= dom (-(id Z)^(#)sec) by VALUED_1:8; Z c= dom ((id Z)^) /\ dom sec by A1,VALUED_1:def 4;then A3:Z c= dom ((id Z)^) by XBOOLE_1:18; A4:not 0 in Z proof assume A5: 0 in Z; dom ((id Z)^) = dom id Z \ (id Z)"{0} by RFUNCT_1:def 2 .= dom id Z \ {0} by Lm1,A5; then not 0 in {0} by A5,A3,XBOOLE_0:def 5; hence thesis by TARSKI:def 1; end; then A6:((id Z)^(#)sec) is_differentiable_on Z by A1,FDIFF_9:32; then A7:(-1)(#)((id Z)^(#)sec) is_differentiable_on Z by A2,FDIFF_1:20; for x st x in Z holds ((-(id Z)^(#)sec)`|Z).x = 1/cos.x/x^2-sin.x/x/(cos.x)^2 proof let x; assume A8: x in Z; ((-(id Z)^(#)sec)`|Z).x = ((-1)(#)(((id Z)^(#)sec)`|Z)).x by A6,FDIFF_2:19 .= (-1)*((((id Z)^(#)sec)`|Z).x) by VALUED_1:6 .= (-1)*(-1/cos.x/x^2+sin.x/x/(cos.x)^2) by A1,A4,A8,FDIFF_9:32 .= 1/cos.x/x^2-sin.x/x/(cos.x)^2; hence thesis; end; hence thesis by A7; end; ::f.x=-1/x*cosec.x theorem Th8: Z c= dom ((id Z)^(#)cosec) implies (-(id Z)^(#)cosec) is_differentiable_on Z & for x st x in Z holds ((-(id Z)^(#)cosec)`|Z).x = 
1/sin.x/x^2+cos.x/x/(sin.x)^2 proof assume A1:Z c= dom ((id Z)^(#)cosec); then A2:Z c= dom (-(id Z)^(#)cosec) by VALUED_1:8; Z c= dom ((id Z)^) /\ dom cosec by A1,VALUED_1:def 4;then A3:Z c= dom ((id Z)^) by XBOOLE_1:18; A4:not 0 in Z proof assume A5: 0 in Z; dom ((id Z)^) = dom id Z \ (id Z)"{0} by RFUNCT_1:def 2 .= dom id Z \ {0} by Lm1,A5; then not 0 in {0} by A5,A3,XBOOLE_0:def 5; hence thesis by TARSKI:def 1; end; then A6:((id Z)^(#)cosec) is_differentiable_on Z by A1,FDIFF_9:33; then A7:(-1)(#)((id Z)^(#)cosec) is_differentiable_on Z by A2,FDIFF_1:20; for x st x in Z holds ((-(id Z)^(#)cosec)`|Z).x = 1/sin.x/x^2+cos.x/x/(sin.x)^2 proof let x; assume A8:x in Z; ((-(id Z)^(#)cosec)`|Z).x = ((-1)(#)(((id Z)^(#)cosec)`|Z)).x by A6,FDIFF_2:19 .= (-1)*((((id Z)^(#)cosec)`|Z).x) by VALUED_1:6 .= (-1)*(-1/sin.x/x^2-cos.x/x/(sin.x)^2) by A1,A4,A8,FDIFF_9:33 .= 1/sin.x/x^2+cos.x/x/(sin.x)^2; hence thesis; end; hence thesis by A7; end; ::f.x = -cosec.(sin.x) theorem Th9: Z c= dom (cosec*sin) implies -cosec*sin is_differentiable_on Z & for x st x in Z holds ((-cosec*sin)`|Z).x = cos.x*cos.(sin.x)/(sin.(sin.x))^2 proof assume A1:Z c= dom (cosec*sin); then A2:Z c= dom (-cosec*sin) by VALUED_1:8; A3:cosec*sin is_differentiable_on Z by A1,FDIFF_9:36; then A4:(-1)(#)(cosec*sin) is_differentiable_on Z by A2,FDIFF_1:20; for x st x in Z holds ((-cosec*sin)`|Z).x = cos.x*cos.(sin.x)/(sin.(sin.x))^2 proof let x; assume A5:x in Z; ((-cosec*sin)`|Z).x=((-1)(#)((cosec*sin)`|Z)).x by A3,FDIFF_2:19 .=(-1)*(((cosec*sin)`|Z).x) by VALUED_1:6 .=(-1)*(-cos.x*cos.(sin.x)/(sin.(sin.x))^2) by A1,A5,FDIFF_9:36 .=cos.x*cos.(sin.x)/(sin.(sin.x))^2; hence thesis; end; hence thesis by A4; end; ::f.x=-sec.(cot.x) theorem Th10: Z c= dom (sec*cot) implies -sec*cot is_differentiable_on Z & for x st x in Z holds ((-sec*cot)`|Z).x = sin.(cot.x)/(sin.x)^2/(cos.(cot.x))^2 proof assume A1:Z c= dom (sec*cot); then A2:Z c= dom (-sec*cot) by VALUED_1:8; A3:sec*cot is_differentiable_on Z by A1,FDIFF_9:39; then 
A4:(-1)(#)(sec*cot) is_differentiable_on Z by A2,FDIFF_1:20; for x st x in Z holds ((-sec*cot)`|Z).x = sin.(cot.x)/(sin.x)^2/(cos.(cot.x))^2 proof let x; assume A5: x in Z; ((-sec*cot)`|Z).x=((-1)(#)((sec*cot)`|Z)).x by A3,FDIFF_2:19 .=(-1)*(((sec*cot)`|Z).x) by VALUED_1:6 .=(-1)*(-sin.(cot.x)/(sin.x)^2/(cos.(cot.x))^2) by A1,A5,FDIFF_9:39 .=sin.(cot.x)/(sin.x)^2/(cos.(cot.x))^2; hence thesis; end; hence thesis by A4; end; ::f.x=-cosec.(tan.x) theorem Th11: Z c= dom (cosec*tan) implies -cosec*tan is_differentiable_on Z & for x st x in Z holds ((-cosec*tan)`|Z).x = cos.(tan.x)/(cos.x)^2/(sin.(tan.x))^2 proof assume A1:Z c= dom (cosec*tan); then A2:Z c= dom (-cosec*tan) by VALUED_1:8; A3:cosec*tan is_differentiable_on Z by A1,FDIFF_9:40; dom (cosec*tan) c= dom tan by RELAT_1:25; then A4: Z c= dom tan by A1; A5:(-1)(#)(cosec*tan) is_differentiable_on Z by A2,A3,FDIFF_1:20; A6:for x st x in Z holds sin.(tan.x)<>0 proof let x; assume x in Z; then tan.x in dom cosec by A1,FUNCT_1:11; hence thesis by RFUNCT_1:3; end; for x st x in Z holds ((-cosec*tan)`|Z).x = cos.(tan.x)/(cos.x)^2/(sin.(tan.x))^2 proof let x; assume A7: x in Z; then A8: cos.x<>0 by A4,FDIFF_8:1; then A9: tan is_differentiable_in x by FDIFF_7:46; A10: sin.(tan.x)<>0 by A6,A7;then A11: cosec is_differentiable_in tan.x by FDIFF_9:2; A12: cosec*tan is_differentiable_in x by A3,A7,FDIFF_1:9; ((-cosec*tan)`|Z).x=diff(-cosec*tan,x) by A5,A7,FDIFF_1:def 7 .=(-1)*(diff(cosec*tan,x)) by A12,FDIFF_1:15 .=(-1)*(diff(cosec, tan.x)*diff(tan,x)) by A9,A11,FDIFF_2:13 .=(-1)*((-cos.(tan.x)/(sin.(tan.x))^2) * diff(tan,x)) by A10,FDIFF_9:2 .=(-1)*((1/(cos.x)^2)*(-cos.(tan.x)/(sin.(tan.x))^2)) by A8,FDIFF_7:46 .=cos.(tan.x)/(cos.x)^2/(sin.(tan.x))^2; hence thesis; end; hence thesis by A5; end; ::f.x=-cot.x*sec.x theorem Th12: Z c= dom (cot(#)sec) implies (-cot(#)sec) is_differentiable_on Z & for x st x in Z holds ((-cot(#)sec)`|Z).x = 1/(sin.x)^2/cos.x-cot.x*sin.x/(cos.x)^2 proof assume A1:Z c= dom (cot(#)sec); then A2:Z c= 
dom (-cot(#)sec) by VALUED_1:8; A3:cot(#)sec is_differentiable_on Z by A1,FDIFF_9:43; then A4:(-1)(#)(cot(#)sec) is_differentiable_on Z by A2,FDIFF_1:20; for x st x in Z holds ((-cot(#)sec)`|Z).x = 1/(sin.x)^2/cos.x-cot.x*sin.x/(cos.x)^2 proof let x; assume A5: x in Z; ((-cot(#)sec)`|Z).x = ((-1)(#)((cot(#)sec)`|Z)).x by A3,FDIFF_2:19 .=(-1)*(((cot(#)sec)`|Z).x) by VALUED_1:6 .=(-1)*(-1/(sin.x)^2/cos.x+cot.x*sin.x/(cos.x)^2) by A1,A5,FDIFF_9:43 .=1/(sin.x)^2/cos.x-cot.x*sin.x/(cos.x)^2; hence thesis; end; hence thesis by A4; end; ::f.x=-cot.x*cosec.x theorem Th13: Z c= dom (cot(#)cosec) implies (-cot(#)cosec) is_differentiable_on Z & for x st x in Z holds ((-cot(#)cosec)`|Z).x = 1/(sin.x)^2/sin.x+cot.x*cos.x/(sin.x)^2 proof assume A1:Z c= dom (cot(#)cosec); then A2:Z c= dom (-cot(#)cosec) by VALUED_1:8; A3:(cot(#)cosec) is_differentiable_on Z by A1,FDIFF_9:45; then A4:(-1)(#)(cot(#)cosec) is_differentiable_on Z by A2,FDIFF_1:20; for x st x in Z holds ((-cot(#)cosec)`|Z).x = 1/(sin.x)^2/sin.x+cot.x*cos.x/(sin.x)^2 proof let x; assume A5:x in Z; ((-cot(#)cosec)`|Z).x = ((-1)(#)((cot(#)cosec)`|Z)).x by A3,FDIFF_2:19 .=(-1)*(((cot(#)cosec)`|Z).x) by VALUED_1:6 .=(-1)*(-1/(sin.x)^2/sin.x-cot.x*cos.x/(sin.x)^2) by A1,A5,FDIFF_9:45 .=1/(sin.x)^2/sin.x+cot.x*cos.x/(sin.x)^2; hence thesis; end; hence thesis by A4; end; ::f.x=-cos.x * cot.x theorem Th14: Z c= dom (cos (#) cot) implies (-cos (#) cot) is_differentiable_on Z & for x st x in Z holds((-cos (#) cot)`|Z).x = cos.x+cos.x/(sin.x)^2 proof assume A1:Z c= dom (cos (#) cot); then A2:Z c= dom (-cos (#) cot) by VALUED_1:8; A3:(cos (#) cot) is_differentiable_on Z by A1,FDIFF_10:11; then A4:(-1)(#)(cos (#) cot) is_differentiable_on Z by A2,FDIFF_1:20; for x st x in Z holds ((-cos (#) cot)`|Z).x = cos.x+cos.x/(sin.x)^2 proof let x; assume A5: x in Z; ((-cos (#) cot)`|Z).x = ((-1)(#)((cos (#) cot)`|Z)).x by A3,FDIFF_2:19 .=(-1)*(((cos (#) cot)`|Z).x) by VALUED_1:6 .=(-1)*(-cos.x-cos.x/(sin.x)^2) by A1,A5,FDIFF_10:11 
.=cos.x+cos.x/(sin.x)^2; hence thesis; end; hence thesis by A4; end; begin :: Integrability Formulas ::f.x=sin.(1/x)/(x^2*(cos.(1/x))^2) theorem A c= Z & (for x st x in Z holds f.x=sin.(1/x)/(x^2*(cos.(1/x))^2)) & Z c= dom (sec*((id Z)^)) & Z = dom f & f|A is continuous implies integral(f,A)=(-sec*((id Z)^)).(upper_bound A)- (-sec*((id Z)^)).(lower_bound A) proof assume A1:A c= Z & (for x st x in Z holds f.x=sin.(1/x)/(x^2*(cos.(1/x))^2)) & Z c= dom (sec*((id Z)^)) & Z = dom f & f|A is continuous; then A2:f is_integrable_on A & f|A is bounded by INTEGRA5:10,11; A3:(-sec*((id Z)^)) is_differentiable_on Z by A1,Th1; A4:for x being Element of REAL st x in dom ((-sec*((id Z)^))`|Z) holds ((-sec*((id Z)^))`|Z).x=f.x proof let x be Element of REAL; assume x in dom ((-sec*((id Z)^))`|Z);then A5:x in Z by A3,FDIFF_1:def 7;then ((-sec*((id Z)^))`|Z).x = sin.(1/x)/(x^2*(cos.(1/x))^2) by A1,Th1 .=f.x by A1,A5; hence thesis; end; dom ((-sec*((id Z)^))`|Z)=dom f by A1,A3,FDIFF_1:def 7; then ((-sec*((id Z)^))`|Z)= f by A4,PARTFUN1:5; hence thesis by A1,A2,A3,INTEGRA5:13; end; ::f.x=cos.(1/x)/(x^2*(sin.(1/x))^2) theorem A c= Z & (for x st x in Z holds f.x=cos.(1/x)/(x^2*(sin.(1/x))^2)) & Z c= dom (cosec*((id Z)^)) & Z = dom f & f|A is continuous implies integral(f,A)=(cosec*((id Z)^)).(upper_bound A)- (cosec*((id Z)^)).(lower_bound A) proof assume A1:A c= Z & (for x st x in Z holds f.x=cos.(1/x)/(x^2*(sin.(1/x))^2)) & Z c= dom (cosec*((id Z)^)) & Z = dom f & f|A is continuous; then A2:f is_integrable_on A & f|A is bounded by INTEGRA5:10,11; A3:Z c= dom ((id Z)^) by A1,FUNCT_1:101; A4:not 0 in Z proof assume A5: 0 in Z; dom ((id Z)^) = dom id Z \ (id Z)"{0} by RFUNCT_1:def 2 .= dom id Z \ {0} by Lm1,A5; then not 0 in {0} by A5,A3,XBOOLE_0:def 5; hence thesis by TARSKI:def 1; end; then A6:(cosec*((id Z)^)) is_differentiable_on Z by A1,FDIFF_9:9; A7:for x being Element of REAL st x in dom ((cosec*((id Z)^))`|Z) holds ((cosec*((id Z)^))`|Z).x=f.x proof let x be Element of REAL; 
assume x in dom ((cosec*((id Z)^))`|Z);then A8:x in Z by A6,FDIFF_1:def 7;then ((cosec*((id Z)^))`|Z).x = cos.(1/x)/(x^2*(sin.(1/x))^2) by A1,A4,FDIFF_9:9 .=f.x by A1,A8; hence thesis; end; dom ((cosec*((id Z)^))`|Z)=dom f by A1,A6,FDIFF_1:def 7; then ((cosec*((id Z)^))`|Z)= f by A7,PARTFUN1:5; hence thesis by A1,A2,A6,INTEGRA5:13; end; ::f.x=exp_R.x*sin.(exp_R.x)/(cos.(exp_R.x))^2 theorem A c= Z & (for x st x in Z holds f.x=exp_R.x*sin.(exp_R.x)/(cos.(exp_R.x))^2) & Z c= dom (sec*exp_R) & Z = dom f & f|A is continuous implies integral(f,A)=(sec*exp_R).(upper_bound A)-(sec*exp_R).(lower_bound A) proof assume A1:A c= Z & (for x st x in Z holds f.x=exp_R.x*sin.(exp_R.x)/(cos.(exp_R.x))^2) & Z c= dom (sec*exp_R) & Z = dom f & f|A is continuous; then A2:f is_integrable_on A & f|A is bounded by INTEGRA5:10,11; A3:sec*exp_R is_differentiable_on Z by A1,FDIFF_9:12; A4:for x being Element of REAL st x in dom ((sec*exp_R)`|Z) holds ((sec*exp_R)`|Z).x=f.x proof let x be Element of REAL; assume x in dom ((sec*exp_R)`|Z);then A5:x in Z by A3,FDIFF_1:def 7;then ((sec*exp_R)`|Z).x=exp_R.x*sin.(exp_R.x)/(cos.(exp_R.x))^2 by A1,FDIFF_9:12 .=f.x by A1,A5; hence thesis; end; dom ((sec*exp_R)`|Z)=dom f by A1,A3,FDIFF_1:def 7; then ((sec*exp_R)`|Z)= f by A4,PARTFUN1:5; hence thesis by A1,A2,A3,INTEGRA5:13; end; ::f.x=exp_R.x*cos.(exp_R.x)/(sin.(exp_R.x))^2 theorem A c= Z & (for x st x in Z holds f.x=exp_R.x*cos.(exp_R.x)/(sin.(exp_R.x))^2) & Z c= dom (cosec*exp_R) & Z = dom f & f|A is continuous implies integral(f,A)=(-cosec*exp_R).(upper_bound A)-(-cosec*exp_R).(lower_bound A) proof assume A1:A c= Z & (for x st x in Z holds f.x=exp_R.x*cos.(exp_R.x)/(sin.(exp_R.x))^2) & Z c= dom (cosec*exp_R) & Z = dom f & f|A is continuous; then A2:f is_integrable_on A & f|A is bounded by INTEGRA5:10,11; A3:-cosec*exp_R is_differentiable_on Z by A1,Th2; A4:for x being Element of REAL st x in dom ((-cosec*exp_R)`|Z) holds ((-cosec*exp_R)`|Z).x=f.x proof let x be Element of REAL; assume x in dom 
((-cosec*exp_R)`|Z);then A5:x in Z by A3,FDIFF_1:def 7;then ((-cosec*exp_R)`|Z).x=exp_R.x*cos.(exp_R.x)/(sin.(exp_R.x))^2 by A1,Th2 .=f.x by A1,A5; hence thesis; end; dom ((-cosec*exp_R)`|Z)=dom f by A1,A3,FDIFF_1:def 7; then ((-cosec*exp_R)`|Z)= f by A4,PARTFUN1:5; hence thesis by A1,A2,A3,INTEGRA5:13; end; ::f.x=sin.(ln.x)/(x*(cos.(ln.x))^2) theorem A c= Z & (for x st x in Z holds f.x=sin.(ln.x)/(x*(cos.(ln.x))^2)) & Z c= dom (sec*ln) & Z = dom f & f|A is continuous implies integral(f,A)=(sec*ln).(upper_bound A)-(sec*ln).(lower_bound A) proof assume A1:A c= Z & (for x st x in Z holds f.x=sin.(ln.x)/(x*(cos.(ln.x))^2)) & Z c= dom (sec*ln) & Z = dom f & f|A is continuous; then A2:f is_integrable_on A & f|A is bounded by INTEGRA5:10,11; A3:sec*ln is_differentiable_on Z by A1,FDIFF_9:14; A4:for x being Element of REAL st x in dom ((sec*ln)`|Z) holds ((sec*ln)`|Z).x=f.x proof let x be Element of REAL; assume x in dom ((sec*ln)`|Z);then A5: x in Z by A3,FDIFF_1:def 7;then ((sec*ln)`|Z).x=sin.(ln.x)/(x*(cos.(ln.x))^2) by A1,FDIFF_9:14 .=f.x by A1,A5; hence thesis; end; dom((sec*ln)`|Z)=dom f by A1,A3,FDIFF_1:def 7; then ((sec*ln)`|Z)= f by A4,PARTFUN1:5; hence thesis by A1,A2,A3,INTEGRA5:13; end; ::f.x=cos.(ln.x)/(x*(sin.(ln.x))^2) theorem A c= Z & (for x st x in Z holds f.x=cos.(ln.x)/(x*(sin.(ln.x))^2)) & Z c= dom (cosec*ln) & Z = dom f & f|A is continuous implies integral(f,A)=(-cosec*ln).(upper_bound A)-(-cosec*ln).(lower_bound A) proof assume A1:A c= Z & (for x st x in Z holds f.x=cos.(ln.x)/(x*(sin.(ln.x))^2)) & Z c= dom (cosec*ln) & Z = dom f & f|A is continuous;then A2:f is_integrable_on A & f|A is bounded by INTEGRA5:10,11; A3:-cosec*ln is_differentiable_on Z by A1,Th3; A4:for x being Element of REAL st x in dom ((-cosec*ln)`|Z) holds ((-cosec*ln)`|Z).x=f.x proof let x be Element of REAL; assume x in dom ((-cosec*ln)`|Z);then A5:x in Z by A3,FDIFF_1:def 7;then ((-cosec*ln)`|Z).x=cos.(ln.x)/(x*(sin.(ln.x))^2) by A1,Th3 .=f.x by A1,A5; hence thesis; end; dom 
((-cosec*ln)`|Z)=dom f by A1,A3,FDIFF_1:def 7; then ((-cosec*ln)`|Z)= f by A4,PARTFUN1:5; hence thesis by A1,A2,A3,INTEGRA5:13; end; ::f.x=exp_R.(sec.x)*sin.x/(cos.x)^2 theorem A c= Z & f=(exp_R*sec)(#)(sin/cos^2) & Z = dom f & f|A is continuous implies integral(f,A)=(exp_R*sec).(upper_bound A)-(exp_R*sec).(lower_bound A) proof assume A1:A c= Z & f=(exp_R*sec)(#)(sin/cos^2) & Z = dom f & f|A is continuous; then A2:f is_integrable_on A & f|A is bounded by INTEGRA5:10,11; Z = dom (exp_R*sec) /\ dom (sin/cos^2) by A1,VALUED_1:def 4;then A3:Z c= dom (exp_R*sec) & Z c= dom (sin/cos^2) by XBOOLE_1:18; then A4:exp_R*sec is_differentiable_on Z by FDIFF_9:16; A5:for x st x in Z holds f.x=exp_R.(sec.x)*sin.x/(cos.x)^2 proof let x; assume A6:x in Z; ((exp_R*sec)(#)(sin/cos^2)).x =(exp_R*sec).x*(sin/cos^2).x by VALUED_1:5 .=exp_R.(sec.x)*(sin/cos^2).x by A6,A3,FUNCT_1:12 .=exp_R.(sec.x)*(sin.x/(cos^2).x) by A3,A6,RFUNCT_1:def 1 .=exp_R.(sec.x)*(sin.x/(cos.x)^2) by VALUED_1:11 .=exp_R.(sec.x)*sin.x/(cos.x)^2 ; hence thesis by A1; end; A7:for x being Element of REAL st x in dom ((exp_R*sec)`|Z) holds ((exp_R*sec)`|Z).x=f.x proof let x be Element of REAL; assume x in dom ((exp_R*sec)`|Z);then A8:x in Z by A4,FDIFF_1:def 7;then ((exp_R*sec)`|Z).x=exp_R.(sec.x)*sin.x/(cos.x)^2 by A3,FDIFF_9:16 .=f.x by A5,A8; hence thesis; end; dom ((exp_R*sec)`|Z)=dom f by A1,A4,FDIFF_1:def 7; then ((exp_R*sec)`|Z)= f by A7,PARTFUN1:5; hence thesis by A1,A2,A4,INTEGRA5:13; end; ::f.x=exp_R.(cosec.x)*cos.x/(sin.x)^2 theorem A c= Z & f=(exp_R*cosec)(#)(cos/sin^2) & Z = dom f & f|A is continuous implies integral(f,A)=(-exp_R*cosec).(upper_bound A)- (-exp_R*cosec).(lower_bound A) proof assume A1:A c= Z & f=(exp_R*cosec)(#)(cos/sin^2) & Z = dom f & f|A is continuous; then A2:f is_integrable_on A & f|A is bounded by INTEGRA5:10,11; Z = dom (exp_R*cosec) /\ dom (cos/sin^2) by A1,VALUED_1:def 4;then A3:Z c= dom (exp_R*cosec) & Z c= dom (cos/sin^2) by XBOOLE_1:18; then A4:-exp_R*cosec is_differentiable_on 
Z by Th4; A5:for x st x in Z holds f.x=exp_R.(cosec.x)*cos.x/(sin.x)^2 proof let x; assume A6:x in Z; ((exp_R*cosec)(#)(cos/sin^2)).x =(exp_R*cosec).x*(cos/sin^2).x by VALUED_1:5 .=exp_R.(cosec.x)*(cos/sin^2).x by A6,A3,FUNCT_1:12 .=exp_R.(cosec.x)*(cos.x/(sin^2).x) by A3,A6,RFUNCT_1:def 1 .=exp_R.(cosec.x)*(cos.x/(sin.x)^2) by VALUED_1:11 .=exp_R.(cosec.x)*cos.x/(sin.x)^2 ; hence thesis by A1; end; A7:for x being Element of REAL st x in dom ((-exp_R*cosec)`|Z) holds ((-exp_R*cosec)`|Z).x=f.x proof let x be Element of REAL; assume x in dom ((-exp_R*cosec)`|Z);then A8:x in Z by A4,FDIFF_1:def 7;then ((-exp_R*cosec)`|Z).x=exp_R.(cosec.x)*cos.x/(sin.x)^2 by A3,Th4 .=f.x by A5,A8; hence thesis; end; dom ((-exp_R*cosec)`|Z)=dom f by A1,A4,FDIFF_1:def 7; then ((-exp_R*cosec)`|Z)= f by A7,PARTFUN1:5; hence thesis by A1,A2,A4,INTEGRA5:13; end; ::f.x=tan.x theorem A c= Z & Z c= dom (ln*sec) & Z = dom tan & tan|A is continuous implies integral(tan,A)=(ln*sec).(upper_bound A)-(ln*sec).(lower_bound A) proof assume A1:A c= Z & Z c= dom (ln*sec) & Z = dom tan & tan|A is continuous; then A2:tan is_integrable_on A & tan|A is bounded by INTEGRA5:10,11; A3:ln*sec is_differentiable_on Z by A1,FDIFF_9:18; A4:for x st x in Z holds cos.x<>0 proof let x; assume x in Z; then x in dom sec by A1,FUNCT_1:11; hence thesis by RFUNCT_1:3; end; A5:for x being Element of REAL st x in dom ((ln*sec)`|Z) holds ((ln*sec)`|Z).x = tan.x proof let x be Element of REAL; assume x in dom ((ln*sec)`|Z);then A6: x in Z by A3,FDIFF_1:def 7;then A7: cos.x<>0 by A4; ((ln*sec)`|Z).x = tan x by A1,A6,FDIFF_9:18 .=tan.x by A7,SIN_COS9:15; hence thesis; end; dom ((ln*sec)`|Z)=dom tan by A1,A3,FDIFF_1:def 7; then ((ln*sec)`|Z)= tan by A5,PARTFUN1:5; hence thesis by A1,A2,A3,INTEGRA5:13; end; ::f.x=-cot.x theorem A c= Z & Z c= dom (ln*cosec) & Z = dom cot & (-cot)|A is continuous implies integral((-cot),A)=(ln*cosec).(upper_bound A)- (ln*cosec).(lower_bound A) proof assume A1:A c= Z & Z c= dom (ln*cosec) & Z = dom 
cot & (-cot)|A is continuous; then A2:Z = dom (-cot) by VALUED_1:8; then A3:(-cot) is_integrable_on A & (-cot)|A is bounded by A1,INTEGRA5:10,11; A4:ln*cosec is_differentiable_on Z by A1,FDIFF_9:19; A5:for x st x in Z holds sin.x<>0 proof let x; assume x in Z; then x in dom cosec by A1,FUNCT_1:11; hence thesis by RFUNCT_1:3; end; A6:for x being Element of REAL st x in dom ((ln*cosec)`|Z) holds ((ln*cosec)`|Z).x = (-cot).x proof let x be Element of REAL; assume x in dom ((ln*cosec)`|Z);then A7: x in Z by A4,FDIFF_1:def 7;then A8: sin.x<>0 by A5; ((ln*cosec)`|Z).x = -cot(x) by A1,A7,FDIFF_9:19 .=-cot.x by A8,SIN_COS9:16 .=(-cot).x by VALUED_1:8; hence thesis; end; dom ((ln*cosec)`|Z)=dom (-cot) by A2,A4,FDIFF_1:def 7; then ((ln*cosec)`|Z)= -cot by A6,PARTFUN1:5; hence thesis by A1,A3,A4,INTEGRA5:13; end; ::f.x=cot.x theorem A c= Z & Z c= dom (ln*cosec) & Z = dom cot & cot|A is continuous implies integral(cot,A)=(-ln*cosec).(upper_bound A)-(-ln*cosec).(lower_bound A) proof assume A1:A c= Z & Z c= dom (ln*cosec) & Z = dom cot & cot|A is continuous; then A2:cot is_integrable_on A & cot|A is bounded by INTEGRA5:10,11; A3:-ln*cosec is_differentiable_on Z by A1,Th5; A4:for x being Element of REAL st x in dom ((-ln*cosec)`|Z) holds ((-ln*cosec)`|Z).x = cot.x proof let x be Element of REAL; assume x in dom ((-ln*cosec)`|Z);then x in Z by A3,FDIFF_1:def 7; then ((-ln*cosec)`|Z).x=cot.x by A1,Th5; hence thesis; end; dom ((-ln*cosec)`|Z)=dom cot by A1,A3,FDIFF_1:def 7; then ((-ln*cosec)`|Z)= cot by A4,PARTFUN1:5; hence thesis by A1,A2,A3,INTEGRA5:13; end; ::f.x=n*sin.x/(cos.x) #Z (n+1) theorem A c= Z & (for x st x in Z holds f.x=n*sin.x/(cos.x) #Z (n+1)) & Z c= dom (( #Z n)*sec) & 1<=n & Z = dom f & f|A is continuous implies integral(f,A)=(( #Z n)*sec).(upper_bound A)- (( #Z n)*sec).(lower_bound A) proof assume A1:A c= Z & (for x st x in Z holds f.x=n*sin.x/(cos.x) #Z (n+1)) & Z c= dom (( #Z n)*sec) & 1<=n & Z = dom f & f|A is continuous; then A2:f is_integrable_on A & f|A is 
bounded by INTEGRA5:10,11; A3:( #Z n)*sec is_differentiable_on Z by A1,FDIFF_9:20; A4:for x being Element of REAL st x in dom ((( #Z n)*sec)`|Z) holds ((( #Z n)*sec)`|Z).x = f.x proof let x be Element of REAL; assume x in dom ((( #Z n)*sec)`|Z);then A5:x in Z by A3,FDIFF_1:def 7;then ((( #Z n)*sec)`|Z).x=n*sin.x/(cos.x) #Z (n+1) by A1,FDIFF_9:20 .=f.x by A1,A5; hence thesis; end; dom ((( #Z n)*sec)`|Z)=dom f by A1,A3,FDIFF_1:def 7; then ((( #Z n)*sec)`|Z)= f by A4,PARTFUN1:5; hence thesis by A1,A2,A3,INTEGRA5:13; end; ::f.x=n*cos.x/(sin.x) #Z (n+1) theorem A c= Z & (for x st x in Z holds f.x=n*cos.x/(sin.x) #Z (n+1)) & Z c= dom (( #Z n)*cosec) & 1<=n & Z = dom f & f|A is continuous implies integral(f,A)=(-( #Z n)*cosec).(upper_bound A)- (-( #Z n)*cosec).(lower_bound A) proof assume A1:A c= Z & (for x st x in Z holds f.x=n*cos.x/(sin.x) #Z (n+1)) & Z c= dom (( #Z n)*cosec) & 1<=n & Z = dom f & f|A is continuous; then A2:f is_integrable_on A & f|A is bounded by INTEGRA5:10,11; A3:-( #Z n)*cosec is_differentiable_on Z by A1,Th6; A4:for x being Element of REAL st x in dom ((-( #Z n)*cosec)`|Z) holds ((-( #Z n)*cosec)`|Z).x = f.x proof let x be Element of REAL; assume x in dom ((-( #Z n)*cosec)`|Z);then A5:x in Z by A3,FDIFF_1:def 7;then ((-( #Z n)*cosec)`|Z).x=n*cos.x/(sin.x) #Z (n+1) by A1,Th6 .=f.x by A1,A5; hence thesis; end; dom ((-( #Z n)*cosec)`|Z)=dom f by A1,A3,FDIFF_1:def 7; then ((-( #Z n)*cosec)`|Z)= f by A4,PARTFUN1:5; hence thesis by A1,A2,A3,INTEGRA5:13; end; ::f.x=exp_R.x/cos.x+exp_R.x*sin.x/(cos.x)^2 theorem A c= Z & (for x st x in Z holds f.x=exp_R.x/cos.x+exp_R.x*sin.x/(cos.x)^2) & Z c= dom (exp_R(#)sec) & Z = dom f & f|A is continuous implies integral(f,A)=(exp_R(#)sec).(upper_bound A)- (exp_R(#)sec).(lower_bound A) proof assume A1:A c= Z & (for x st x in Z holds f.x=exp_R.x/cos.x+exp_R.x*sin.x/(cos.x)^2) & Z c= dom (exp_R(#)sec) & Z = dom f & f|A is continuous; then A2:f is_integrable_on A & f|A is bounded by INTEGRA5:10,11; A3:exp_R(#)sec 
is_differentiable_on Z by A1,FDIFF_9:24; A4:for x being Element of REAL st x in dom ((exp_R(#)sec)`|Z) holds ((exp_R(#)sec)`|Z).x = f.x proof let x be Element of REAL; assume x in dom ((exp_R(#)sec)`|Z);then A5:x in Z by A3,FDIFF_1:def 7;then ((exp_R(#)sec)`|Z).x = exp_R.x/cos.x+exp_R.x*sin.x/(cos.x)^2 by A1,FDIFF_9:24 .=f.x by A1,A5; hence thesis; end; dom ((exp_R(#)sec)`|Z)=dom f by A1,A3,FDIFF_1:def 7; then ((exp_R(#)sec)`|Z)= f by A4,PARTFUN1:5; hence thesis by A1,A2,A3,INTEGRA5:13; end; ::f.x=exp_R.x/sin.x-exp_R.x*cos.x/(sin.x)^2 theorem A c= Z & (for x st x in Z holds f.x=exp_R.x/sin.x-exp_R.x*cos.x/(sin.x)^2) & Z c= dom (exp_R(#)cosec) & Z = dom f & f|A is continuous implies integral(f,A)=(exp_R(#)cosec).(upper_bound A)- (exp_R(#)cosec).(lower_bound A) proof assume A1:A c= Z & (for x st x in Z holds f.x=exp_R.x/sin.x-exp_R.x*cos.x/(sin.x)^2) & Z c= dom (exp_R(#)cosec) & Z = dom f & f|A is continuous; then A2:f is_integrable_on A & f|A is bounded by INTEGRA5:10,11; A3:exp_R(#)cosec is_differentiable_on Z by A1,FDIFF_9:25; A4:for x being Element of REAL st x in dom ((exp_R(#)cosec)`|Z) holds ((exp_R(#)cosec)`|Z).x = f.x proof let x be Element of REAL; assume x in dom ((exp_R(#)cosec)`|Z);then A5:x in Z by A3,FDIFF_1:def 7;then ((exp_R(#)cosec)`|Z).x = exp_R.x/sin.x-exp_R.x*cos.x/(sin.x)^2 by A1,FDIFF_9:25 .=f.x by A1,A5; hence thesis; end; dom ((exp_R(#)cosec)`|Z)=dom f by A1,A3,FDIFF_1:def 7; then ((exp_R(#)cosec)`|Z)= f by A4,PARTFUN1:5; hence thesis by A1,A2,A3,INTEGRA5:13; end; ::f.x=(sin.(a*x)-(cos.(a*x))^2)/(cos.(a*x))^2 theorem A c= Z & (for x st x in Z holds f.x=(sin.(a*x)-(cos.(a*x))^2)/(cos.(a*x))^2) & (Z c= dom ((1/a)(#)(sec*f1)-id Z) & for x st x in Z holds f1.x=a*x & a<>0) & Z = dom f & f|A is continuous implies integral(f,A)=((1/a)(#)(sec*f1)-id Z).(upper_bound A)- ((1/a)(#)(sec*f1)-id Z).(lower_bound A) proof assume A1:A c= Z & (for x st x in Z holds f.x=(sin.(a*x)-(cos.(a*x))^2)/(cos.(a*x))^2) & (Z c= dom ((1/a)(#)(sec*f1)-id Z) & for x st x in 
Z holds f1.x=a*x & a<>0) & Z = dom f & f|A is continuous; then A2:f is_integrable_on A & f|A is bounded by INTEGRA5:10,11; A3:(1/a)(#)(sec*f1)-id Z is_differentiable_on Z by A1,FDIFF_9:26; A4:for x being Element of REAL st x in dom (((1/a)(#)(sec*f1)-id Z)`|Z) holds (((1/a)(#)(sec*f1)-id Z)`|Z).x=f.x proof let x be Element of REAL; assume x in dom (((1/a)(#)(sec*f1)-id Z)`|Z);then A5:x in Z by A3,FDIFF_1:def 7;then (((1/a)(#)(sec*f1)-id Z)`|Z).x = (sin.(a*x)-(cos.(a*x))^2)/(cos.(a*x))^2 by A1,FDIFF_9:26 .= f.x by A1,A5; hence thesis; end; dom (((1/a)(#)(sec*f1)-id Z)`|Z)=dom f by A1,A3,FDIFF_1:def 7; then (((1/a)(#)(sec*f1)-id Z)`|Z)= f by A4,PARTFUN1:5; hence thesis by A1,A2,A3,INTEGRA5:13; end; ::f.x=(cos.(a*x)-(sin.(a*x))^2)/(sin.(a*x))^2 theorem A c= Z & (for x st x in Z holds f.x=(cos.(a*x)-(sin.(a*x))^2)/(sin.(a*x))^2) & (Z c= dom ((-1/a)(#)(cosec*f1)-id Z) & for x st x in Z holds f1.x=a*x & a<>0) & Z = dom f & f|A is continuous implies integral(f,A)=((-1/a)(#)(cosec*f1)-id Z).(upper_bound A) -((-1/a)(#)(cosec*f1)-id Z).(lower_bound A) proof assume A1:A c= Z & (for x st x in Z holds f.x=(cos.(a*x)-(sin.(a*x))^2)/(sin.(a*x))^2) & (Z c= dom ((-1/a)(#)(cosec*f1)-id Z) & for x st x in Z holds f1.x=a*x & a<>0) & Z = dom f & f|A is continuous; then A2:f is_integrable_on A & f|A is bounded by INTEGRA5:10,11; A3:(-1/a)(#)(cosec*f1)-id Z is_differentiable_on Z by A1,FDIFF_9:27; A4:for x being Element of REAL st x in dom (((-1/a)(#)(cosec*f1)-id Z)`|Z) holds (((-1/a)(#)(cosec*f1)-id Z)`|Z).x=f.x proof let x be Element of REAL; assume x in dom (((-1/a)(#)(cosec*f1)-id Z)`|Z);then A5:x in Z by A3,FDIFF_1:def 7;then (((-1/a)(#)(cosec*f1)-id Z)`|Z).x = (cos.(a*x)-(sin.(a*x))^2)/(sin.(a*x))^2 by A1,FDIFF_9:27 .= f.x by A1,A5; hence thesis; end; dom (((-1/a)(#)(cosec*f1)-id Z)`|Z)=dom f by A1,A3,FDIFF_1:def 7; then (((-1/a)(#)(cosec*f1)-id Z)`|Z)= f by A4,PARTFUN1:5; hence thesis by A1,A2,A3,INTEGRA5:13; end; ::f.x=1/cos.x/x+ln.x*sin.x/(cos.x)^2 theorem A c= Z & (for x st x 
in Z holds f.x=1/cos.x/x+ln.x*sin.x/(cos.x)^2) & Z c= dom (ln(#)sec) & Z = dom f & f|A is continuous implies integral(f,A)=(ln(#)sec).(upper_bound A)-(ln(#)sec).(lower_bound A) proof assume A1:A c= Z & (for x st x in Z holds f.x=1/cos.x/x+ln.x*sin.x/(cos.x)^2) & Z c= dom (ln(#)sec) & Z = dom f & f|A is continuous; then A2:f is_integrable_on A & f|A is bounded by INTEGRA5:10,11; A3:ln(#)sec is_differentiable_on Z by A1,FDIFF_9:30; A4:for x being Element of REAL st x in dom ((ln(#)sec)`|Z) holds ((ln(#)sec)`|Z).x=f.x proof let x be Element of REAL; assume x in dom ((ln(#)sec)`|Z);then A5:x in Z by A3,FDIFF_1:def 7;then ((ln(#)sec)`|Z).x = 1/cos.x/x+ln.x*sin.x/(cos.x)^2 by A1,FDIFF_9:30 .= f.x by A1,A5; hence thesis; end; dom ((ln(#)sec)`|Z)=dom f by A1,A3,FDIFF_1:def 7; then ((ln(#)sec)`|Z)= f by A4,PARTFUN1:5; hence thesis by A1,A2,A3,INTEGRA5:13; end; ::f.x=1/sin.x/x-ln.x*cos.x/(sin.x)^2 theorem A c= Z & (for x st x in Z holds f.x=1/sin.x/x-ln.x*cos.x/(sin.x)^2) & Z c= dom (ln(#)cosec) & Z = dom f & f|A is continuous implies integral(f,A)=(ln(#)cosec).(upper_bound A)-(ln(#)cosec).(lower_bound A) proof assume A1:A c= Z & (for x st x in Z holds f.x=1/sin.x/x-ln.x*cos.x/(sin.x)^2) & Z c= dom (ln(#)cosec) & Z = dom f & f|A is continuous; then A2:f is_integrable_on A & f|A is bounded by INTEGRA5:10,11; A3:ln(#)cosec is_differentiable_on Z by A1,FDIFF_9:31; A4:for x being Element of REAL st x in dom ((ln(#)cosec)`|Z) holds ((ln(#)cosec)`|Z).x=f.x proof let x be Element of REAL; assume x in dom ((ln(#)cosec)`|Z);then A5:x in Z by A3,FDIFF_1:def 7;then ((ln(#)cosec)`|Z).x = 1/sin.x/x-ln.x*cos.x/(sin.x)^2 by A1,FDIFF_9:31 .= f.x by A1,A5; hence thesis; end; dom ((ln(#)cosec)`|Z)=dom f by A1,A3,FDIFF_1:def 7; then ((ln(#)cosec)`|Z)= f by A4,PARTFUN1:5; hence thesis by A1,A2,A3,INTEGRA5:13; end; ::f.x=1/cos.x/x^2-sin.x/x/(cos.x)^2 theorem A c= Z & (for x st x in Z holds f.x=1/cos.x/x^2-sin.x/x/(cos.x)^2) & Z c= dom ((id Z)^(#)sec) & Z = dom f & f|A is continuous implies 
integral(f,A)=(-(id Z)^(#)sec).(upper_bound A)-(-(id Z)^(#)sec).(lower_bound A) proof assume A1:A c= Z & (for x st x in Z holds f.x=1/cos.x/x^2-sin.x/x/(cos.x)^2) & Z c= dom ((id Z)^(#)sec) & Z = dom f & f|A is continuous; then A2:f is_integrable_on A & f|A is bounded by INTEGRA5:10,11; A3:-(id Z)^(#)sec is_differentiable_on Z by A1,Th7; A4:for x being Element of REAL st x in dom ((-(id Z)^(#)sec)`|Z) holds ((-(id Z)^(#)sec)`|Z).x=f.x proof let x be Element of REAL; assume x in dom ((-(id Z)^(#)sec)`|Z);then A5:x in Z by A3,FDIFF_1:def 7;then ((-(id Z)^(#)sec)`|Z).x = 1/cos.x/x^2-sin.x/x/(cos.x)^2 by A1,Th7 .= f.x by A1,A5; hence thesis; end; dom ((-(id Z)^(#)sec)`|Z)=dom f by A1,A3,FDIFF_1:def 7; then ((-(id Z)^(#)sec)`|Z)= f by A4,PARTFUN1:5; hence thesis by A1,A2,A3,INTEGRA5:13; end; ::f.x=1/sin.x/x^2+cos.x/x/(sin.x)^2 theorem A c= Z & (for x st x in Z holds f.x=1/sin.x/x^2+cos.x/x/(sin.x)^2) & Z c= dom ((id Z)^(#)cosec) & Z = dom f & f|A is continuous implies integral(f,A)=(-(id Z)^(#)cosec).(upper_bound A)- (-(id Z)^(#)cosec).(lower_bound A) proof assume A1:A c= Z & (for x st x in Z holds f.x=1/sin.x/x^2+cos.x/x/(sin.x)^2) & Z c= dom ((id Z)^(#)cosec) & Z = dom f & f|A is continuous; then A2:f is_integrable_on A & f|A is bounded by INTEGRA5:10,11; A3:-(id Z)^(#)cosec is_differentiable_on Z by A1,Th8; A4:for x being Element of REAL st x in dom ((-(id Z)^(#)cosec)`|Z) holds ((-(id Z)^(#)cosec)`|Z).x=f.x proof let x be Element of REAL; assume x in dom ((-(id Z)^(#)cosec)`|Z);then A5:x in Z by A3,FDIFF_1:def 7;then ((-(id Z)^(#)cosec)`|Z).x = 1/sin.x/x^2+cos.x/x/(sin.x)^2 by A1,Th8 .= f.x by A1,A5; hence thesis; end; dom ((-(id Z)^(#)cosec)`|Z)=dom f by A1,A3,FDIFF_1:def 7; then ((-(id Z)^(#)cosec)`|Z)= f by A4,PARTFUN1:5; hence thesis by A1,A2,A3,INTEGRA5:13; end; ::f.x=cos.x*sin.(sin.x)/(cos.(sin.x))^2 theorem A c= Z & (for x st x in Z holds f.x=cos.x*sin.(sin.x)/(cos.(sin.x))^2) & Z c= dom (sec*sin) & Z = dom f & f|A is continuous implies 
integral(f,A)=(sec*sin).(upper_bound A)-(sec*sin).(lower_bound A) proof assume A1:A c= Z & (for x st x in Z holds f.x=cos.x*sin.(sin.x)/(cos.(sin.x))^2) & Z c= dom (sec*sin) & Z = dom f & f|A is continuous; then A2:f is_integrable_on A & f|A is bounded by INTEGRA5:10,11; A3:sec*sin is_differentiable_on Z by A1,FDIFF_9:34; A4:for x being Element of REAL st x in dom ((sec*sin)`|Z) holds ((sec*sin)`|Z).x=f.x proof let x be Element of REAL; assume x in dom ((sec*sin)`|Z);then A5:x in Z by A3,FDIFF_1:def 7;then ((sec*sin)`|Z).x = cos.x*sin.(sin.x)/(cos.(sin.x))^2 by A1,FDIFF_9:34 .= f.x by A1,A5; hence thesis; end; dom ((sec*sin)`|Z)=dom f by A1,A3,FDIFF_1:def 7; then ((sec*sin)`|Z)= f by A4,PARTFUN1:5; hence thesis by A1,A2,A3,INTEGRA5:13; end; ::f.x=sin.x*sin.(cos.x)/(cos.(cos.x))^2 theorem A c= Z & (for x st x in Z holds f.x=sin.x*sin.(cos.x)/(cos.(cos.x))^2) & Z c= dom (sec*cos) & Z = dom f & f|A is continuous implies integral(f,A)=(-sec*cos).(upper_bound A)-(-sec*cos).(lower_bound A) proof assume A1:A c= Z & (for x st x in Z holds f.x=sin.x*sin.(cos.x)/(cos.(cos.x))^2) & Z c= dom (sec*cos) & Z = dom f & f|A is continuous; then A2:f is_integrable_on A & f|A is bounded by INTEGRA5:10,11; A3:Z c= dom (-sec*cos) by A1,VALUED_1:8; A4:sec*cos is_differentiable_on Z by A1,FDIFF_9:35; then A5:(-1)(#)(sec*cos) is_differentiable_on Z by A3,FDIFF_1:20; A6:for x st x in Z holds ((-sec*cos)`|Z).x = sin.x*sin.(cos.x)/(cos.(cos.x))^2 proof let x; assume A7:x in Z; ((-sec*cos)`|Z).x=((-1)(#)((sec*cos)`|Z)).x by A4,FDIFF_2:19 .=(-1)*(((sec*cos)`|Z).x) by VALUED_1:6 .=(-1)*(-sin.x*sin.(cos.x)/(cos.(cos.x))^2) by A1,A7,FDIFF_9:35 .=sin.x*sin.(cos.x)/(cos.(cos.x))^2; hence thesis; end; A8:for x being Element of REAL st x in dom ((-sec*cos)`|Z) holds ((-sec*cos)`|Z).x=f.x proof let x be Element of REAL; assume x in dom ((-sec*cos)`|Z);then A9:x in Z by A5,FDIFF_1:def 7;then ((-sec*cos)`|Z).x = sin.x*sin.(cos.x)/(cos.(cos.x))^2 by A6 .= f.x by A1,A9; hence thesis; end; dom 
((-sec*cos)`|Z)=dom f by A1,A5,FDIFF_1:def 7; then ((-sec*cos)`|Z)= f by A8,PARTFUN1:5; hence thesis by A1,A2,A5,INTEGRA5:13; end; ::f.x=cos.x*cos.(sin.x)/(sin.(sin.x))^2 theorem A c= Z & (for x st x in Z holds f.x=cos.x*cos.(sin.x)/(sin.(sin.x))^2) & Z c= dom (cosec*sin) & Z = dom f & f|A is continuous implies integral(f,A)=(-cosec*sin).(upper_bound A)-(-cosec*sin).(lower_bound A) proof assume A1:A c= Z & (for x st x in Z holds f.x=cos.x*cos.(sin.x)/(sin.(sin.x))^2) & Z c= dom (cosec*sin) & Z = dom f & f|A is continuous; then A2:f is_integrable_on A & f|A is bounded by INTEGRA5:10,11; A3:-cosec*sin is_differentiable_on Z by A1,Th9; A4:for x being Element of REAL st x in dom ((-cosec*sin)`|Z) holds ((-cosec*sin)`|Z).x=f.x proof let x be Element of REAL; assume x in dom ((-cosec*sin)`|Z);then A5:x in Z by A3,FDIFF_1:def 7;then ((-cosec*sin)`|Z).x = cos.x*cos.(sin.x)/(sin.(sin.x))^2 by A1,Th9 .= f.x by A1,A5; hence thesis; end; dom ((-cosec*sin)`|Z)=dom f by A1,A3,FDIFF_1:def 7; then ((-cosec*sin)`|Z)= f by A4,PARTFUN1:5; hence thesis by A1,A2,A3,INTEGRA5:13; end; ::f.x=sin.x*cos.(cos.x)/(sin.(cos.x))^2 theorem A c= Z & (for x st x in Z holds f.x=sin.x*cos.(cos.x)/(sin.(cos.x))^2) & Z c= dom (cosec*cos) & Z = dom f & f|A is continuous implies integral(f,A)=(cosec*cos).(upper_bound A)-(cosec*cos).(lower_bound A) proof assume A1:A c= Z & (for x st x in Z holds f.x=sin.x*cos.(cos.x)/(sin.(cos.x))^2) & Z c= dom (cosec*cos) & Z = dom f & f|A is continuous; then A2:f is_integrable_on A & f|A is bounded by INTEGRA5:10,11; A3:cosec*cos is_differentiable_on Z by A1,FDIFF_9:37; A4:for x being Element of REAL st x in dom ((cosec*cos)`|Z) holds ((cosec*cos)`|Z).x=f.x proof let x be Element of REAL; assume x in dom ((cosec*cos)`|Z);then A5:x in Z by A3,FDIFF_1:def 7;then ((cosec*cos)`|Z).x = sin.x*cos.(cos.x)/(sin.(cos.x))^2 by A1,FDIFF_9:37 .= f.x by A1,A5; hence thesis; end; dom ((cosec*cos)`|Z)=dom f by A1,A3,FDIFF_1:def 7; then ((cosec*cos)`|Z)= f by A4,PARTFUN1:5; hence 
thesis by A1,A2,A3,INTEGRA5:13; end; ::f.x=sin.(tan.x)/(cos.x)^2/(cos.(tan.x))^2 theorem A c= Z & (for x st x in Z holds f.x=sin.(tan.x)/(cos.x)^2/(cos.(tan.x))^2) & Z c= dom (sec*tan) & Z = dom f & f|A is continuous implies integral(f,A)=(sec*tan).(upper_bound A)-(sec*tan).(lower_bound A) proof assume A1:A c= Z & (for x st x in Z holds f.x=sin.(tan.x)/(cos.x)^2/(cos.(tan.x))^2) & Z c= dom (sec*tan) & Z = dom f & f|A is continuous; then A2:f is_integrable_on A & f|A is bounded by INTEGRA5:10,11; A3:sec*tan is_differentiable_on Z by A1,FDIFF_9:38; A4:for x being Element of REAL st x in dom ((sec*tan)`|Z) holds ((sec*tan)`|Z).x=f.x proof let x be Element of REAL; assume x in dom ((sec*tan)`|Z);then A5:x in Z by A3,FDIFF_1:def 7;then ((sec*tan)`|Z).x = sin.(tan.x)/(cos.x)^2/(cos.(tan.x))^2 by A1,FDIFF_9:38 .= f.x by A1,A5; hence thesis; end; dom ((sec*tan)`|Z)=dom f by A1,A3,FDIFF_1:def 7; then ((sec*tan)`|Z)= f by A4,PARTFUN1:5; hence thesis by A1,A2,A3,INTEGRA5:13; end; ::f.x=sin.(cot.x)/(sin.x)^2/(cos.(cot.x))^2 theorem A c= Z & (for x st x in Z holds f.x=sin.(cot.x)/(sin.x)^2/(cos.(cot.x))^2) & Z c= dom (sec*cot) & Z = dom f & f|A is continuous implies integral(f,A)=(-sec*cot).(upper_bound A)-(-sec*cot).(lower_bound A) proof assume A1:A c= Z & (for x st x in Z holds f.x=sin.(cot.x)/(sin.x)^2/(cos.(cot.x))^2) & Z c= dom (sec*cot) & Z = dom f & f|A is continuous; then A2:f is_integrable_on A & f|A is bounded by INTEGRA5:10,11; A3:-sec*cot is_differentiable_on Z by A1,Th10; A4:for x being Element of REAL st x in dom ((-sec*cot)`|Z) holds ((-sec*cot)`|Z).x=f.x proof let x be Element of REAL; assume x in dom ((-sec*cot)`|Z);then A5:x in Z by A3,FDIFF_1:def 7;then ((-sec*cot)`|Z).x = sin.(cot.x)/(sin.x)^2/(cos.(cot.x))^2 by A1,Th10 .= f.x by A1,A5; hence thesis; end; dom ((-sec*cot)`|Z)=dom f by A1,A3,FDIFF_1:def 7; then ((-sec*cot)`|Z)= f by A4,PARTFUN1:5; hence thesis by A1,A2,A3,INTEGRA5:13; end; ::f.x= cos.(tan.x)/(cos.x)^2/(sin.(tan.x))^2 theorem A c= Z & (for x st 
x in Z holds f.x=cos.(tan.x)/(cos.x)^2/(sin.(tan.x))^2) & Z c= dom (cosec*tan) & Z = dom f & f|A is continuous implies integral(f,A)=(-cosec*tan).(upper_bound A)-(-cosec*tan).(lower_bound A) proof assume A1:A c= Z & (for x st x in Z holds f.x=cos.(tan.x)/(cos.x)^2/(sin.(tan.x))^2) & Z c= dom (cosec*tan) & Z = dom f & f|A is continuous; then A2:f is_integrable_on A & f|A is bounded by INTEGRA5:10,11; A3:-cosec*tan is_differentiable_on Z by A1,Th11; A4:for x being Element of REAL st x in dom ((-cosec*tan)`|Z) holds ((-cosec*tan)`|Z).x=f.x proof let x be Element of REAL; assume x in dom ((-cosec*tan)`|Z);then A5:x in Z by A3,FDIFF_1:def 7;then ((-cosec*tan)`|Z).x = cos.(tan.x)/(cos.x)^2/(sin.(tan.x))^2 by A1,Th11 .= f.x by A1,A5; hence thesis; end; dom ((-cosec*tan)`|Z)=dom f by A1,A3,FDIFF_1:def 7; then ((-cosec*tan)`|Z)= f by A4,PARTFUN1:5; hence thesis by A1,A2,A3,INTEGRA5:13; end; ::f.x=cos.(cot.x)/(sin.x)^2/(sin.(cot.x))^2 theorem A c= Z & (for x st x in Z holds f.x=cos.(cot.x)/(sin.x)^2/(sin.(cot.x))^2) & Z c= dom (cosec*cot) & Z = dom f & f|A is continuous implies integral(f,A)=(cosec*cot).(upper_bound A)-(cosec*cot).(lower_bound A) proof assume A1:A c= Z & (for x st x in Z holds f.x=cos.(cot.x)/(sin.x)^2/(sin.(cot.x))^2) & Z c= dom (cosec*cot) & Z = dom f & f|A is continuous; then A2:f is_integrable_on A & f|A is bounded by INTEGRA5:10,11; A3:cosec*cot is_differentiable_on Z by A1,FDIFF_9:41; A4:for x being Element of REAL st x in dom ((cosec*cot)`|Z) holds ((cosec*cot)`|Z).x=f.x proof let x be Element of REAL; assume x in dom ((cosec*cot)`|Z);then A5:x in Z by A3,FDIFF_1:def 7;then ((cosec*cot)`|Z).x = cos.(cot.x)/(sin.x)^2/(sin.(cot.x))^2 by A1,FDIFF_9:41 .= f.x by A1,A5; hence thesis; end; dom ((cosec*cot)`|Z)=dom f by A1,A3,FDIFF_1:def 7; then ((cosec*cot)`|Z)= f by A4,PARTFUN1:5; hence thesis by A1,A2,A3,INTEGRA5:13; end; ::f.x=1/(cos.x)^2/cos.x+tan.x*sin.x/(cos.x)^2 theorem A c= Z & (for x st x in Z holds f.x=1/(cos.x)^2/cos.x+tan.x*sin.x/(cos.x)^2) & Z 
c= dom (tan(#)sec) & Z = dom f & f|A is continuous implies integral(f,A)=(tan(#)sec).(upper_bound A)-(tan(#)sec).(lower_bound A) proof assume A1:A c= Z & (for x st x in Z holds f.x=1/(cos.x)^2/cos.x+tan.x*sin.x/(cos.x)^2) & Z c= dom (tan(#)sec) & Z = dom f & f|A is continuous; then A2:f is_integrable_on A & f|A is bounded by INTEGRA5:10,11; A3:tan(#)sec is_differentiable_on Z by A1,FDIFF_9:42; A4:for x being Element of REAL st x in dom ((tan(#)sec)`|Z) holds ((tan(#)sec)`|Z).x=f.x proof let x be Element of REAL; assume x in dom ((tan(#)sec)`|Z);then A5:x in Z by A3,FDIFF_1:def 7;then ((tan(#)sec)`|Z).x = 1/(cos.x)^2/cos.x+tan.x*sin.x/(cos.x)^2 by A1,FDIFF_9:42 .= f.x by A1,A5; hence thesis; end; dom ((tan(#)sec)`|Z)=dom f by A1,A3,FDIFF_1:def 7; then ((tan(#)sec)`|Z)= f by A4,PARTFUN1:5; hence thesis by A1,A2,A3,INTEGRA5:13; end; ::f.x=1/(sin.x)^2/cos.x-cot.x*sin.x/(cos.x)^2 theorem A c= Z & (for x st x in Z holds f.x=1/(sin.x)^2/cos.x-cot.x*sin.x/(cos.x)^2) & Z c= dom (cot(#)sec) & Z = dom f & f|A is continuous implies integral(f,A)=(-cot(#)sec).(upper_bound A)-(-cot(#)sec).(lower_bound A) proof assume A1:A c= Z & (for x st x in Z holds f.x=1/(sin.x)^2/cos.x-cot.x*sin.x/(cos.x)^2) & Z c= dom (cot(#)sec) & Z = dom f & f|A is continuous; then A2:f is_integrable_on A & f|A is bounded by INTEGRA5:10,11; A3:-cot(#)sec is_differentiable_on Z by A1,Th12; A4:for x being Element of REAL st x in dom ((-cot(#)sec)`|Z) holds ((-cot(#)sec)`|Z).x=f.x proof let x be Element of REAL; assume x in dom ((-cot(#)sec)`|Z);then A5:x in Z by A3,FDIFF_1:def 7;then ((-cot(#)sec)`|Z).x = 1/(sin.x)^2/cos.x-cot.x*sin.x/(cos.x)^2 by A1,Th12 .= f.x by A1,A5; hence thesis; end; dom ((-cot(#)sec)`|Z)=dom f by A1,A3,FDIFF_1:def 7; then ((-cot(#)sec)`|Z)= f by A4,PARTFUN1:5; hence thesis by A1,A2,A3,INTEGRA5:13; end; ::f.x=1/(cos.x)^2/sin.x-tan.x*cos.x/(sin.x)^2 theorem A c= Z & (for x st x in Z holds f.x=1/(cos.x)^2/sin.x-tan.x*cos.x/(sin.x)^2) & Z c= dom (tan(#)cosec) & Z = dom f & f|A is 
continuous implies integral(f,A)=(tan(#)cosec).(upper_bound A)- (tan(#)cosec).(lower_bound A) proof assume A1:A c= Z & (for x st x in Z holds f.x=1/(cos.x)^2/sin.x-tan.x*cos.x/(sin.x)^2) & Z c= dom (tan(#)cosec) & Z = dom f & f|A is continuous; then A2:f is_integrable_on A & f|A is bounded by INTEGRA5:10,11; A3:tan(#)cosec is_differentiable_on Z by A1,FDIFF_9:44; A4:for x being Element of REAL st x in dom ((tan(#)cosec)`|Z) holds ((tan(#)cosec)`|Z).x=f.x proof let x be Element of REAL; assume x in dom ((tan(#)cosec)`|Z);then A5:x in Z by A3,FDIFF_1:def 7;then ((tan(#)cosec)`|Z).x = 1/(cos.x)^2/sin.x-tan.x*cos.x/(sin.x)^2 by A1,FDIFF_9:44 .= f.x by A1,A5; hence thesis; end; dom ((tan(#)cosec)`|Z)=dom f by A1,A3,FDIFF_1:def 7; then ((tan(#)cosec)`|Z)= f by A4,PARTFUN1:5; hence thesis by A1,A2,A3,INTEGRA5:13; end; ::f.x=1/(sin.x)^2/sin.x+cot.x*cos.x/(sin.x)^2 theorem A c= Z & (for x st x in Z holds f.x=1/(sin.x)^2/sin.x+cot.x*cos.x/(sin.x)^2) & Z c= dom (cot(#)cosec) & Z = dom f & f|A is continuous implies integral(f,A)=(-cot(#)cosec).(upper_bound A)- (-cot(#)cosec).(lower_bound A) proof assume A1:A c= Z & (for x st x in Z holds f.x=1/(sin.x)^2/sin.x+cot.x*cos.x/(sin.x)^2) & Z c= dom (cot(#)cosec) & Z = dom f & f|A is continuous; then A2:f is_integrable_on A & f|A is bounded by INTEGRA5:10,11; A3:-cot(#)cosec is_differentiable_on Z by A1,Th13; A4:for x being Element of REAL st x in dom ((-cot(#)cosec)`|Z) holds ((-cot(#)cosec)`|Z).x=f.x proof let x be Element of REAL; assume x in dom ((-cot(#)cosec)`|Z);then A5:x in Z by A3,FDIFF_1:def 7;then ((-cot(#)cosec)`|Z).x = 1/(sin.x)^2/sin.x+cot.x*cos.x/(sin.x)^2 by A1,Th13 .= f.x by A1,A5; hence thesis; end; dom ((-cot(#)cosec)`|Z)=dom f by A1,A3,FDIFF_1:def 7; then ((-cot(#)cosec)`|Z)= f by A4,PARTFUN1:5; hence thesis by A1,A2,A3,INTEGRA5:13; end; ::f.x=1/(cos.(cot.x))^2*(1/(sin.x)^2) theorem A c= Z & (for x st x in Z holds f.x=1/(cos.(cot.x))^2*(1/(sin.x)^2)) & Z c= dom (tan*cot) & Z = dom f & f|A is continuous implies 
integral(f,A)=(-tan*cot).(upper_bound A)-(-tan*cot).(lower_bound A) proof assume A1:A c= Z & (for x st x in Z holds f.x=1/(cos.(cot.x))^2*(1/(sin.x)^2)) & Z c= dom (tan*cot) & Z = dom f & f|A is continuous; then A2:f is_integrable_on A & f|A is bounded by INTEGRA5:10,11; A3:Z c= dom (-tan*cot) by A1,VALUED_1:8; A4:tan*cot is_differentiable_on Z by A1,FDIFF_10:1; then A5:(-1)(#)(tan*cot) is_differentiable_on Z by A3,FDIFF_1:20; A6:for x st x in Z holds ((-tan*cot)`|Z).x = 1/(cos.(cot.x))^2*(1/(sin.x)^2) proof let x; assume A7:x in Z; ((-tan*cot)`|Z).x=((-1)(#)((tan*cot)`|Z)).x by A4,FDIFF_2:19 .=(-1)*(((tan*cot)`|Z).x) by VALUED_1:6 .=(-1)*(1/(cos.(cot.x))^2*(-1/(sin.x)^2)) by A1,A7,FDIFF_10:1 .=1/(cos.(cot.x))^2*(1/(sin.x)^2); hence thesis; end; A8:for x being Element of REAL st x in dom ((-tan*cot)`|Z) holds ((-tan*cot)`|Z).x=f.x proof let x be Element of REAL; assume x in dom ((-tan*cot)`|Z);then A9:x in Z by A5,FDIFF_1:def 7;then ((-tan*cot)`|Z).x = 1/(cos.(cot.x))^2*(1/(sin.x)^2) by A6 .= f.x by A1,A9; hence thesis; end; dom ((-tan*cot)`|Z)=dom f by A1,A5,FDIFF_1:def 7; then ((-tan*cot)`|Z)= f by A8,PARTFUN1:5; hence thesis by A1,A2,A5,INTEGRA5:13; end; ::f.x=1/(cos.(tan.x))^2 *(1/(cos.x)^2) theorem A c= Z & (for x st x in Z holds f.x=1/(cos.(tan.x))^2 *(1/(cos.x)^2)) & Z c= dom (tan*tan) & Z = dom f & f|A is continuous implies integral(f,A)=(tan*tan).(upper_bound A)-(tan*tan).(lower_bound A) proof assume A1:A c= Z & (for x st x in Z holds f.x=1/(cos.(tan.x))^2 *(1/(cos.x)^2)) & Z c= dom (tan*tan) & Z = dom f & f|A is continuous; then A2:f is_integrable_on A & f|A is bounded by INTEGRA5:10,11; A3:tan*tan is_differentiable_on Z by A1,FDIFF_10:2; A4:for x being Element of REAL st x in dom ((tan*tan)`|Z) holds ((tan*tan)`|Z).x=f.x proof let x be Element of REAL; assume x in dom ((tan*tan)`|Z);then A5:x in Z by A3,FDIFF_1:def 7;then ((tan*tan)`|Z).x = 1/(cos.(tan.x))^2 *(1/(cos.x)^2) by A1,FDIFF_10:2 .= f.x by A1,A5; hence thesis; end; dom ((tan*tan)`|Z)=dom f by 
A1,A3,FDIFF_1:def 7; then ((tan*tan)`|Z)= f by A4,PARTFUN1:5; hence thesis by A1,A2,A3,INTEGRA5:13; end; ::f.x=(1/(sin.(cot.x))^2) *(1/(sin.x)^2) theorem A c= Z & (for x st x in Z holds f.x=(1/(sin.(cot.x))^2) *(1/(sin.x)^2)) & Z c= dom (cot*cot) & Z = dom f & f|A is continuous implies integral(f,A)=(cot*cot).(upper_bound A)-(cot*cot).(lower_bound A) proof assume A1:A c= Z & (for x st x in Z holds f.x=(1/(sin.(cot.x))^2) *(1/(sin.x)^2)) & Z c= dom (cot*cot) & Z = dom f & f|A is continuous; then A2:f is_integrable_on A & f|A is bounded by INTEGRA5:10,11; A3:cot*cot is_differentiable_on Z by A1,FDIFF_10:3; A4:for x being Element of REAL st x in dom ((cot*cot)`|Z) holds ((cot*cot)`|Z).x=f.x proof let x be Element of REAL; assume x in dom ((cot*cot)`|Z);then A5:x in Z by A3,FDIFF_1:def 7;then ((cot*cot)`|Z).x = (1/(sin.(cot.x))^2) *(1/(sin.x)^2) by A1,FDIFF_10:3 .= f.x by A1,A5; hence thesis; end; dom ((cot*cot)`|Z)=dom f by A1,A3,FDIFF_1:def 7; then ((cot*cot)`|Z)= f by A4,PARTFUN1:5; hence thesis by A1,A2,A3,INTEGRA5:13; end; ::f.x=(1/(sin.(tan.x))^2)*(1/(cos.x)^2) theorem A c= Z & (for x st x in Z holds f.x=(1/(sin.(tan.x))^2)*(1/(cos.x)^2)) & Z c= dom (cot*tan) & Z = dom f & f|A is continuous implies integral(f,A)=(-cot*tan).(upper_bound A)-(-cot*tan).(lower_bound A) proof assume A1:A c= Z & (for x st x in Z holds f.x=(1/(sin.(tan.x))^2)*(1/(cos.x)^2)) & Z c= dom (cot*tan) & Z = dom f & f|A is continuous; then A2:f is_integrable_on A & f|A is bounded by INTEGRA5:10,11; A3:Z c= dom (-cot*tan) by A1,VALUED_1:8; A4:cot*tan is_differentiable_on Z by A1,FDIFF_10:4; then A5:(-1)(#)(cot*tan) is_differentiable_on Z by A3,FDIFF_1:20; A6:for x st x in Z holds ((-cot*tan)`|Z).x = (1/(sin.(tan.x))^2)*(1/(cos.x)^2) proof let x; assume A7:x in Z; ((-cot*tan)`|Z).x=((-1)(#)((cot*tan)`|Z)).x by A4,FDIFF_2:19 .=(-1)*(((cot*tan)`|Z).x) by VALUED_1:6 .=(-1)*((-1/(sin.(tan.x))^2)*(1/(cos.x)^2)) by A1,A7,FDIFF_10:4 .=(1/(sin.(tan.x))^2)*(1/(cos.x)^2); hence thesis; end; A8:for x being 
Element of REAL st x in dom ((-cot*tan)`|Z) holds ((-cot*tan)`|Z).x=f.x proof let x be Element of REAL; assume x in dom ((-cot*tan)`|Z);then A9:x in Z by A5,FDIFF_1:def 7;then ((-cot*tan)`|Z).x =(1/(sin.(tan.x))^2)*(1/(cos.x)^2) by A6 .= f.x by A1,A9; hence thesis; end; dom ((-cot*tan)`|Z)=dom f by A1,A5,FDIFF_1:def 7; then ((-cot*tan)`|Z)= f by A8,PARTFUN1:5; hence thesis by A1,A2,A5,INTEGRA5:13; end; ::f.x=1/(cos.x)^2+1/(sin.x)^2 theorem A c= Z & (for x st x in Z holds f.x=1/(cos.x)^2+1/(sin.x)^2) & Z c= dom (tan-cot) & Z = dom f & f|A is continuous implies integral(f,A)=(tan-cot).(upper_bound A)-(tan-cot).(lower_bound A) proof assume A1:A c= Z & (for x st x in Z holds f.x=1/(cos.x)^2+1/(sin.x)^2) & Z c= dom (tan-cot) & Z = dom f & f|A is continuous; then A2:f is_integrable_on A & f|A is bounded by INTEGRA5:10,11; A3:tan-cot is_differentiable_on Z by A1,FDIFF_10:5; A4:for x being Element of REAL st x in dom ((tan-cot)`|Z) holds ((tan-cot)`|Z).x=f.x proof let x be Element of REAL; assume x in dom ((tan-cot)`|Z);then A5:x in Z by A3,FDIFF_1:def 7;then ((tan-cot)`|Z).x = 1/(cos.x)^2+1/(sin.x)^2 by A1,FDIFF_10:5 .= f.x by A1,A5; hence thesis; end; dom ((tan-cot)`|Z)=dom f by A1,A3,FDIFF_1:def 7; then ((tan-cot)`|Z)= f by A4,PARTFUN1:5; hence thesis by A1,A2,A3,INTEGRA5:13; end; ::f.x=1/(cos.x)^2-1/(sin.x)^2 theorem A c= Z & (for x st x in Z holds f.x=1/(cos.x)^2-1/(sin.x)^2) & Z c= dom (tan+cot) & Z = dom f & f|A is continuous implies integral(f,A)=(tan+cot).(upper_bound A)-(tan+cot).(lower_bound A) proof assume A1:A c= Z & (for x st x in Z holds f.x=1/(cos.x)^2-1/(sin.x)^2) & Z c= dom (tan+cot) & Z = dom f & f|A is continuous; then A2:f is_integrable_on A & f|A is bounded by INTEGRA5:10,11; A3:tan+cot is_differentiable_on Z by A1,FDIFF_10:6; A4:for x being Element of REAL st x in dom ((tan+cot)`|Z) holds ((tan+cot)`|Z).x=f.x proof let x be Element of REAL; assume x in dom ((tan+cot)`|Z);then A5:x in Z by A3,FDIFF_1:def 7;then ((tan+cot)`|Z).x = 
1/(cos.x)^2-1/(sin.x)^2 by A1,FDIFF_10:6 .= f.x by A1,A5; hence thesis; end; dom ((tan+cot)`|Z)=dom f by A1,A3,FDIFF_1:def 7; then ((tan+cot)`|Z)= f by A4,PARTFUN1:5; hence thesis by A1,A2,A3,INTEGRA5:13; end; ::f.x=cos.(sin.x)*cos.x theorem A c= Z & (for x st x in Z holds f.x=cos.(sin.x)*cos.x) & Z = dom f & f|A is continuous implies integral(f,A)=(sin*sin).(upper_bound A)-(sin*sin).(lower_bound A) proof assume A1:A c= Z & (for x st x in Z holds f.x=cos.(sin.x)*cos.x) & Z = dom f & f|A is continuous; then A2:f is_integrable_on A & f|A is bounded by INTEGRA5:10,11; A3:sin*sin is_differentiable_on Z by FDIFF_10:7; A4:for x being Element of REAL st x in dom ((sin*sin)`|Z) holds ((sin*sin)`|Z).x=f.x proof let x be Element of REAL; assume x in dom ((sin*sin)`|Z);then A5: x in Z by A3,FDIFF_1:def 7;then ((sin*sin)`|Z).x = cos.(sin.x)*cos.x by FDIFF_10:7 .= f.x by A1,A5; hence thesis; end; dom ((sin*sin)`|Z)=dom f by A1,A3,FDIFF_1:def 7; then ((sin*sin)`|Z)= f by A4,PARTFUN1:5; hence thesis by A1,A2,A3,INTEGRA5:13; end; ::f.x=cos.(cos.x)*sin.x theorem A c= Z & (for x st x in Z holds f.x=cos.(cos.x)*sin.x) & Z = dom f & f|A is continuous implies integral(f,A)=(-sin*cos).(upper_bound A)-(-sin*cos).(lower_bound A) proof assume A1:A c= Z & (for x st x in Z holds f.x=cos.(cos.x)*sin.x) & Z = dom f & f|A is continuous; then A2:f is_integrable_on A & f|A is bounded by INTEGRA5:10,11; dom cos = REAL & rng cos c= dom cos & dom sin = dom cos by SIN_COS:24; then dom (sin*cos) = REAL by RELAT_1:27; then A3: dom (-sin*cos) = REAL by VALUED_1:8; A4:sin*cos is_differentiable_on Z by FDIFF_10:8; then A5:(-1)(#)(sin*cos) is_differentiable_on Z by A3,FDIFF_1:20; A6:for x st x in Z holds ((-sin*cos)`|Z).x = cos.(cos.x)*sin.x proof let x; assume A7:x in Z; ((-sin*cos)`|Z).x=((-1)(#)((sin*cos)`|Z)).x by A4,FDIFF_2:19 .=(-1)*(((sin*cos)`|Z).x) by VALUED_1:6 .=(-1)*((-cos.(cos.x)*sin.x)) by A7,FDIFF_10:8 .=cos.(cos.x)*sin.x; hence thesis; end; A8:for x being Element of REAL st x in dom 
((-sin*cos)`|Z) holds ((-sin*cos)`|Z).x=f.x proof let x be Element of REAL; assume x in dom ((-sin*cos)`|Z);then A9:x in Z by A5,FDIFF_1:def 7;then ((-sin*cos)`|Z).x = cos.(cos.x)*sin.x by A6 .= f.x by A1,A9; hence thesis; end; dom ((-sin*cos)`|Z)=dom f by A1,A5,FDIFF_1:def 7; then ((-sin*cos)`|Z)= f by A8,PARTFUN1:5; hence thesis by A1,A2,A5,INTEGRA5:13; end; ::f.x=sin.(sin.x)*cos.x theorem A c= Z & (for x st x in Z holds f.x=sin.(sin.x)*cos.x) & Z = dom f & f|A is continuous implies integral(f,A)=(-cos*sin).(upper_bound A)-(-cos*sin).(lower_bound A) proof assume A1:A c= Z & (for x st x in Z holds f.x=sin.(sin.x)*cos.x) & Z = dom f & f|A is continuous; then A2:f is_integrable_on A & f|A is bounded by INTEGRA5:10,11; A3: dom sin = REAL by SIN_COS:24; rng sin c= dom sin & dom sin = dom cos by SIN_COS:24; then dom (cos*sin) = REAL by A3,RELAT_1:27; then A4:dom (-cos*sin) = REAL by VALUED_1:8; A5:cos*sin is_differentiable_on Z by FDIFF_10:9; then A6:(-1)(#)(cos*sin) is_differentiable_on Z by A4,FDIFF_1:20; A7:for x st x in Z holds ((-cos*sin)`|Z).x = sin.(sin.x)*cos.x proof let x; assume A8:x in Z; ((-cos*sin)`|Z).x=((-1)(#)((cos*sin)`|Z)).x by A5,FDIFF_2:19 .=(-1)*(((cos*sin)`|Z).x) by VALUED_1:6 .=(-1)*((-sin.(sin.x)*cos.x)) by A8,FDIFF_10:9 .=sin.(sin.x)*cos.x; hence thesis; end; A9:for x being Element of REAL st x in dom ((-cos*sin)`|Z) holds ((-cos*sin)`|Z).x=f.x proof let x be Element of REAL; assume x in dom ((-cos*sin)`|Z);then A10: x in Z by A6,FDIFF_1:def 7;then ((-cos*sin)`|Z).x =sin.(sin.x)*cos.x by A7 .= f.x by A1,A10; hence thesis; end; dom ((-cos*sin)`|Z)=dom f by A1,A6,FDIFF_1:def 7; then ((-cos*sin)`|Z)= f by A9,PARTFUN1:5; hence thesis by A1,A2,A6,INTEGRA5:13; end; ::f.x=sin.(cos.x)*sin.x theorem A c= Z & (for x st x in Z holds f.x=sin.(cos.x)*sin.x) & Z = dom f & f|A is continuous implies integral(f,A)=(cos*cos).(upper_bound A)-(cos*cos).(lower_bound A) proof assume A1:A c= Z & (for x st x in Z holds f.x=sin.(cos.x)*sin.x) & Z = dom f & f|A is 
continuous;then A2:f is_integrable_on A & f|A is bounded by INTEGRA5:10,11; A3:cos*cos is_differentiable_on Z by FDIFF_10:10; A4:for x being Element of REAL st x in dom ((cos*cos)`|Z) holds ((cos*cos)`|Z).x=f.x proof let x be Element of REAL; assume x in dom ((cos*cos)`|Z);then A5: x in Z by A3,FDIFF_1:def 7;then ((cos*cos)`|Z).x = sin.(cos.x)*sin.x by FDIFF_10:10 .= f.x by A1,A5; hence thesis; end; dom ((cos*cos)`|Z)=dom f by A1,A3,FDIFF_1:def 7; then ((cos*cos)`|Z)= f by A4,PARTFUN1:5; hence thesis by A1,A2,A3,INTEGRA5:13; end; ::f.x=cos.x+cos.x/(sin.x)^2 theorem A c= Z & (for x st x in Z holds f.x=cos.x+cos.x/(sin.x)^2) & Z c= dom (cos (#) cot) & Z = dom f & f|A is continuous implies integral(f,A)=(-cos (#) cot).(upper_bound A)- (-cos (#) cot).(lower_bound A) proof assume A1:A c= Z & (for x st x in Z holds f.x=cos.x+cos.x/(sin.x)^2) & Z c= dom (cos (#) cot) & Z = dom f & f|A is continuous; then A2:f is_integrable_on A & f|A is bounded by INTEGRA5:10,11; A3:-cos (#) cot is_differentiable_on Z by A1,Th14; A4:for x being Element of REAL st x in dom ((-cos (#) cot)`|Z) holds ((-cos (#) cot)`|Z).x=f.x proof let x be Element of REAL; assume x in dom ((-cos (#) cot)`|Z);then A5: x in Z by A3,FDIFF_1:def 7;then ((-cos (#) cot)`|Z).x =cos.x+cos.x/(sin.x)^2 by A1,Th14 .= f.x by A1,A5; hence thesis; end; dom ((-cos (#) cot)`|Z)=dom f by A1,A3,FDIFF_1:def 7; then ((-cos (#) cot)`|Z)= f by A4,PARTFUN1:5; hence thesis by A1,A2,A3,INTEGRA5:13; end; ::f.x=sin.x + sin.x/(cos.x)^2 theorem A c= Z & (for x st x in Z holds f.x=sin.x + sin.x/(cos.x)^2) & Z c= dom (sin (#) tan) & Z = dom f & f|A is continuous implies integral(f,A)=(sin (#) tan).(upper_bound A)- (sin (#) tan).(lower_bound A) proof assume A1:A c= Z & (for x st x in Z holds f.x=sin.x + sin.x/(cos.x)^2) & Z c= dom (sin (#) tan) & Z = dom f & f|A is continuous; then A2:f is_integrable_on A & f|A is bounded by INTEGRA5:10,11; A3:sin (#) tan is_differentiable_on Z by A1,FDIFF_10:12; A4:for x being Element of REAL st x in dom 
((sin (#) tan)`|Z) holds ((sin (#) tan)`|Z).x=f.x proof let x be Element of REAL; assume x in dom ((sin (#) tan)`|Z);then A5: x in Z by A3,FDIFF_1:def 7;then ((sin (#) tan)`|Z).x =sin.x + sin.x/(cos.x)^2 by A1,FDIFF_10:12 .= f.x by A1,A5; hence thesis; end; dom ((sin (#) tan)`|Z)=dom f by A1,A3,FDIFF_1:def 7; then ((sin (#) tan)`|Z)= f by A4,PARTFUN1:5; hence thesis by A1,A2,A3,INTEGRA5:13; end;
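Each of the theorems above concludes by the quoted lemma INTEGRA5:13, which plays the role of a fundamental-theorem-of-calculus step: the integrand f is checked to be the derivative of the named antiderivative on Z. As an informal sanity check outside Mizar (a sketch only, not part of the formalization; the interval [0.1, 0.5] and the Simpson quadrature are arbitrary choices), two of these identities can be verified numerically:

```python
# Informal numerical cross-check (outside Mizar) of two antiderivative
# identities from the theorems above.  The interval and the step count
# are arbitrary choices for this sketch.
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

a, b = 0.1, 0.5

# f.x = sin.(tan.x)/(cos.x)^2/(cos.(tan.x))^2, antiderivative sec*tan
f1 = lambda x: math.sin(math.tan(x)) / math.cos(x) ** 2 / math.cos(math.tan(x)) ** 2
sec_tan = lambda x: 1 / math.cos(math.tan(x))
assert abs(simpson(f1, a, b) - (sec_tan(b) - sec_tan(a))) < 1e-6

# f.x = cos.(tan.x)/(cos.x)^2/(sin.(tan.x))^2, antiderivative -cosec*tan
f2 = lambda x: math.cos(math.tan(x)) / math.cos(x) ** 2 / math.sin(math.tan(x)) ** 2
neg_cosec_tan = lambda x: -1 / math.sin(math.tan(x))
assert abs(simpson(f2, a, b) - (neg_cosec_tan(b) - neg_cosec_tan(a))) < 1e-6
```

Both numerical integrals agree with the F.(upper_bound A) - F.(lower_bound A) form of the corresponding Mizar conclusions to well within the stated tolerance.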
\begin{document} \begin{abstract}We prove that if an ALE Ricci-flat manifold $(M,g)$ is linearly stable and integrable, it is dynamically stable under Ricci flow, i.e. any Ricci flow starting close to $g$ exists for all time and converges modulo diffeomorphism to an ALE Ricci-flat metric close to $g$. By adapting Tian's approach in the closed case, we show that integrability holds for ALE Calabi-Yau manifolds, which implies that they are dynamically stable. \end{abstract} \maketitle \section{Introduction} Consider a complete Riemannian manifold $(M^n,g)$ endowed with a Ricci-flat metric $g$. As such, it is a fixed point of the Ricci flow and therefore, it is tempting to study the stability of such a metric with respect to the Ricci flow. Whether the manifold is closed or noncompact makes an essential difference in the analysis. In both cases, if $(M^n,g)$ is Ricci-flat, the linearized operator is the so-called Lichnerowicz operator acting on symmetric $2$-tensors. Nonetheless, the $L^2$ approach differs drastically in the noncompact case. Indeed, even in the simplest situation, that is the flat case, the spectrum of the Lichnerowicz operator is no longer discrete and $0$ belongs to the essential spectrum. In this paper, we restrict to Ricci-flat metrics on noncompact manifolds that are asymptotically locally Euclidean (ALE for short), i.e. that are asymptotic to a flat cone over a space form $\mathbb{S}^{n-1}/\Gamma$ where $\Gamma$ is a finite group of $\SO(n)$ acting freely on $\mathbb{R}^n\setminus\{0\}$. If $(M^n,g_0)$ is an ALE Ricci-flat metric, we assume furthermore that it is linearly stable, i.e. the Lichnerowicz operator is nonpositive in the $L^2$ sense, and that the set of ALE Ricci-flat metrics close to $g_0$ is integrable, i.e. has a manifold structure of finite dimension: see Section \ref{def-ALE}.
The strategy we adopt goes back to Koch and Lamm \cite{Koc-Lam-Rou}, who studied the stability of the Euclidean space along the Ricci flow in the $L^{\infty}$ sense. The quasi-linear evolution equation to consider here is \begin{eqnarray} \partial_tg=-2\Ric_g+\Li_{V(g,g_0)}(g),\quad\text{$\|g(0)-g_0\|_{L^{\infty}(g_0)}$ small},\label{eq-ricci-flat} \end{eqnarray} where $(M^n,g_0)$ is a fixed background ALE Ricci-flat metric and $\Li_{V(g,g_0)}(g)$ is the so-called DeTurck term. Equation (\ref{eq-ricci-flat}) is called the Ricci-DeTurck flow: its advantage over the Ricci flow equation is that it is strictly parabolic instead of merely degenerate parabolic. Koch and Lamm managed to rewrite (\ref{eq-ricci-flat}) in a clever way to get optimal results regarding the regularity of the initial condition: see Section \ref{RF-Sec}. Our main theorem is then: \begin{theo}\label{main-theo-bis} Let $(M^n,g_0)$ be an ALE Ricci-flat space. Assume it is linearly stable and integrable. Then for every $\epsilon>0$, there exists a $\delta>0$ such that the following holds: for any metric $g\in \mathcal{B}_{L^2\cap L^{\infty}}(g_0,\delta)$, there is a complete Ricci-DeTurck flow $(M^n,g(t))_{t\geq 0}$ starting from $g$ converging to an ALE Ricci-flat metric $g_{\infty}\in \mathcal{B}_{L^2\cap L^{\infty}}(g_0,\epsilon)$. Moreover, the $L^{\infty}$ norm of $(g(t)-g_0)_{t\geq 0}$ decays sharply at infinity: \begin{eqnarray*} \|g(t)-g_0\|_{L^{\infty}(M\setminus B_{g_0}(x_0,\sqrt{t}))}\leq C(n,g_0,\epsilon)\frac{\sup_{t\geq 0}\|g(t)-g_0\|_{L^2(M)}}{t^{\frac{n}{4}}},\quad t>0. \end{eqnarray*} \end{theo} Schn\"urer, Schulze and Simon \cite{Sch-Sch-Sim} have proved the stability of the Euclidean space under $L^2\cap L^{\infty}$ perturbations as well. The decay obtained in Theorem \ref{main-theo-bis} sharpens their result: indeed, the proof shows that if $(M^n,g_0)$ is isometric to $(\mathbb{R}^n,\eucl)$ then the $L^{\infty}$ decay holds on the whole manifold.
\begin{rk} It is an open question whether the decay in time obtained in Theorem \ref{main-theo-bis} holds on the whole manifold with an exponent $\alpha$ less than or equal to $n/4$. \end{rk} From the physicist's point of view, the question of stability of ALE Ricci-flat metrics is of great importance when applied to hyperk\"ahler or Calabi-Yau ALE metrics: the Lichnerowicz operator is always a nonnegative operator because of the special algebraic structure of the curvature tensor shared by these metrics. It turns out that they are also integrable: see Theorem \ref{CY-ALE-Int} based on the fundamental results of Tian \cite{Tian-Smoothness} in the closed case. In particular, this gives us plenty of examples to which one can apply Theorem \ref{main-theo-bis}. Another source of motivation comes from the question of continuing the Ricci flow after it has reached a finite time singularity on a $4$-dimensional closed Riemannian manifold: the works of Bamler \cite{Bam-Zha-I} and Simon \cite{Sim-Ext} show that the singularities that can eventually show up are exactly ALE Ricci-flat metrics. However, there is no classification available of such metrics in dimension $4$ at the moment, except Kronheimer's classification for hyperk\"ahler metrics \cite{Kron-Class}.\\ Finally, we would like to discuss some related results, especially regarding the stability of closed Ricci-flat metrics. There have been basically two approaches. On one hand, Sesum \cite{Ses-Lin-Dyn-Sta} has proved the stability of integrable Ricci-flat metrics on closed manifolds: in this case, the convergence rate is exponential since the spectrum of the Lichnerowicz operator is discrete. On the other hand, Haslhofer-M\"uller \cite{Has-Mul-Lam} and the second author \cite{Kro-Sta-Ric-Sol} have proved Lojasiewicz inequalities for Perelman's entropies, which are monotone under the Ricci flow and whose critical points are exactly Ricci-flat metrics and Ricci solitons, respectively. The paper is organized as follows.
Section \ref{def-ALE} recalls the basic definitions of ALE spaces together with the notions of linear stability and integrability of a Ricci-flat metric. Section \ref{sec-moduli-space} gives a detailed description of the space of gauged ALE Ricci-flat metrics: see Theorem \ref{ell-reg-prop} and Theorem \ref{analyticset}. Section \ref{Ricci-Flat-ALE-Kahler} investigates the integrability of K\"ahler Ricci-flat metrics: this is the content of Theorem \ref{CY-ALE-Int}. Section \ref{RF-Sec} is devoted to the proof of the first part of Theorem \ref{main-theo-bis}. Section \ref{sec-equ-flo} discusses the structure of the Ricci-DeTurck flow. Section \ref{Short-time} establishes pointwise and integral short time estimates. The core of the proof of Theorem \ref{main-theo-bis} is contained in Section \ref{deco-met-sec}: a priori uniform in time $L^2$ estimates are proved with the help of a suitable notion of strict positivity for the Lichnerowicz operator developed for Schr\"odinger operators by Devyver \cite{Dev-Gau-Est}. The infinite time existence and the convergence aspects of Theorem \ref{main-theo-bis} are then proved in Section \ref{Exi-conv}. Finally, Section \ref{Nash-Moser-Sec} proves the last part of Theorem \ref{main-theo-bis}: the decay in time is verified with the help of a Nash-Moser iteration. \subsection*{Acknowledgements} The authors want to thank the MSRI for hospitality during the research program in differential geometry, where part of the work was carried out. \section{ALE spaces}\label{Sec-ALE} \subsection{Analysis on ALE spaces}\label{def-ALE} We start by recalling a couple of definitions. 
\begin{defn} A complete Riemannian manifold $(M^n,g_0)$ is said to be \textbf{asymptotically locally Euclidean} (ALE) with one end of order $\tau>0$ if there exists a compact set $K\subset M$, a radius $R>0$ and a diffeomorphism $\phi:M\setminus K\rightarrow (\mathbb{R}^n\setminus B_R)/\Gamma$, where $\Gamma$ is a finite group of $\SO(n)$ acting freely on $\mathbb{R}^n\setminus\{0\}$, such that \begin{eqnarray*} \arrowvert \nabla^{\eucl,k}(\phi_*g_0-g_{\eucl})\arrowvert_{\eucl}=\textit{O}(r^{-\tau-k}),\quad \forall k\geq 0, \end{eqnarray*} holds on $(\mathbb{R}^n\setminus B_R)/\Gamma$. \end{defn} The linearized operator we will focus on is the so-called Lichnerowicz operator whose definition is recalled below: \begin{defn} Let $(M,g)$ be a Riemannian manifold. Then the operator $L_g:C^{\infty}(S^2T^*M)\rightarrow C^{\infty}(S^2T^*M)$, defined by \begin{eqnarray*} L_g(h)&:=&\Delta_gh+2\Rm(g)\ast h-\Ric(g)\circ h-h\circ \Ric(g),\\ (\Rm(g)\ast h)_{ij}&:=&\Rm(g)_{iklj}h_{mn}g^{km}g^{ln}, \end{eqnarray*} is called the \textbf{Lichnerowicz} Laplacian acting on the space of symmetric $2$-tensors $S^2T^*M$. \end{defn} \noindent In this paper, we consider the following notion of stability: \begin{defn} Let $(M^n,g_0)$ be a complete ALE Ricci-flat manifold. $(M^n,g_0)$ is said to be \textbf{linearly stable} if the (essential) $L^2$ spectrum of the Lichnerowicz operator $L_{g_0}:=\Delta_{g_0}+2\Rm(g_0)\ast$ is contained in $(-\infty,0]$. \end{defn} Equivalently, this amounts to saying that $\sigma_{L^2}(-L_{g_0})\subset [0,+\infty)$. By a theorem due to G. Carron \cite{Car-Coh}, $\ker_{L^2}L_{g_0}$ has finite dimension. Denote by $\Pi_c$ the $L^2$ projection on the kernel $\ker_{L^2}L_{g_0}$ and by $\Pi_s$ the projection orthogonal to $\Pi_c$, so that $h=\Pi_ch+\Pi_sh$ for any $h\in L^2(S^2T^*M)$.
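\begin{rk} As a simple illustration of these notions, consider the flat model $(\mathbb{R}^n,\eucl)$: since $\Rm(\eucl)=0$ and $\Ric(\eucl)=0$, the Lichnerowicz operator reduces to the Laplacian acting componentwise on the coefficients of $h$, \begin{eqnarray*} (L_{\eucl}h)_{ij}=\Delta_{\eucl}h_{ij}, \end{eqnarray*} so that $\sigma_{L^2}(-L_{\eucl})=[0,+\infty)$, the spectrum being purely essential. In particular, $(\mathbb{R}^n,\eucl)$ is linearly stable, and $\ker_{L^2}L_{\eucl}=\{0\}$ since a harmonic function lying in $L^2(\mathbb{R}^n)$ vanishes identically. \end{rk}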
Let $(M,g_0)$ be an ALE Ricci-flat manifold and $\mathcal{U}_{g_0}$ the set of ALE Ricci-flat metrics with respect to the gauge $g_0$, that is: \begin{eqnarray} \mathcal{U}_{g_0}&:=&\{\mbox{$g \mid$ $g$ ALE metric on $M$ s.t.} \Ric(g)=0\mbox{ and }\Li_{V(g,g_0)}(g)=0\},\\ g_0(V(g(t),g_0),.)&:=&\div_{g(t)}g_0-\frac{1}{2}\nabla^{g(t)}\tr_{g(t)}g_0,\label{def-vect-gauge} \end{eqnarray} endowed with the $L^2\cap L^{\infty}$ topology coming from $g_0$. \begin{defn}\label{defn-integrable} $(M^n,g_0)$ is said to be \textbf{integrable} if $\mathcal{U}_{g_0}$ has a smooth structure in a neighborhood of $g_0$. In other words, $(M^n,g_0)$ is integrable if the map \begin{eqnarray*} \Psi_{g_0}: g\in \mathcal{U}_{g_0}\rightarrow \Pi_c(g-g_0)\in \ker_{L^2}(L_{g_0}), \end{eqnarray*} is a local diffeomorphism at $g_0$. \end{defn} If $(M,g_0)$ is ALE and Ricci-flat, it is a consequence of \cite[Theorem 1.1]{Ban-Kas-Nak} that it is already ALE of order $n-1$. Moreover, if $n=4$ or $(M,g_0)$ is K\"ahler, it is ALE of order $n$. This is due to the presence of Kato inequalities \cite[Corollary 4.10]{Ban-Kas-Nak} for the curvature tensor. We will show in Theorem \ref{ell-reg-prop} that by elliptic regularity, all $g\in\mathcal{U}_{g_0}$ are ALE of order $n-1$ with respect to the same coordinates as $g_0$. In order to do analysis of partial differential equations on ALE manifolds, one has to work with weighted function spaces, which we define in the following. Fix a point $x\in M$ and define a function $\rho:M\to\mathbb{R}$ by $\rho(y)=\sqrt{1+d(x,y)^2}$.
For $p\in[1,\infty)$ and $\delta\in \mathbb{R}$, we define the spaces $L^p_{\delta}(M)$ as the closure of $C^{\infty}_{0}(M)$ with respect to the norm \begin{align*} \left\|u\right\|_{L^p_{\delta}}=\left(\int_M |\rho^{-\delta}u|^p\rho^{-n}d\mu\right)^{1/p}, \end{align*} and the weighted Sobolev spaces $W^{k,p}_{\delta}(M)$ as the closure of $C^{\infty}_{0}(M)$ under \begin{align*} \left\|u\right\|_{W^{k,p}_{\delta}}=\sum_{l=0}^k\left\|\nabla^lu\right\|_{L^{p}_{\delta-l}}. \end{align*} The weighted H\"{o}lder spaces are defined as the closure of $C^{\infty}_{0}(M)$ under \begin{align*} \left\|u\right\|_{C^{k,\alpha}_{\delta}}=&\sum_{l=0}^k\sup_{x\in M} \rho^{-\delta+l}(x)|\nabla^lu(x)|\\&\quad+\sup_{\substack{x, y\in M\\ 0<d(x,y)<\mathrm{inj}(M)}}\min\left\{\rho^{-\delta+k+\alpha}(x),\rho^{-\delta+k+\alpha}(y)\right\}\frac{|\tau^y_x\nabla^ku(x)-\nabla^k u(y)|}{|x-y|^{\alpha}}, \end{align*} where $\alpha\in (0,1]$ and $\tau_x^y$ denotes the parallel transport from $x$ to $y$ along the shortest geodesic joining $x$ and $y$. All these spaces are Banach spaces, the spaces $H^k_{\delta}(M):=W^{k,2}_{\delta}(M)$ are Hilbert spaces, and their definition does not depend on the choice of the base point defining the weight function $\rho$. All these definitions extend to Riemannian vector bundles with a metric connection in an obvious manner. In the literature, there are different notational conventions for weighted spaces. We follow the convention of \cite{Bar-Mass}. The Laplacian $\Delta_g$ is a bounded map $\Delta_g:W^{k,p}_{\delta}(M)\to W^{k-2,p}_{\delta-2}(M)$ and there exists a discrete set $D\subset\mathbb{R}$ such that this operator is Fredholm for $\delta\in \mathbb{R}\setminus D$. This is shown in \cite{Bar-Mass} in the asymptotically flat case and the proof in the ALE case is the same up to minor changes. We call the values $\delta\in D$ exceptional and the values $\delta\in \mathbb{R}\setminus D$ nonexceptional.
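\begin{rk} To fix ideas about this convention, consider a tensor $u$ with $u=\textit{O}(r^{\delta'})$ on the end $(\mathbb{R}^n\setminus B_R)/\Gamma$. Since $d\mu$ grows like $r^{n-1}dr$ there, \begin{align*} \int_{\{r\geq R\}} |\rho^{-\delta}u|^p\rho^{-n}d\mu\leq C\int_R^{\infty}r^{(\delta'-\delta)p-1}dr<\infty \quad\text{whenever } \delta'<\delta, \end{align*} so membership in $L^p_{\delta}$ allows growth at most like $r^{\delta}$. For instance, a difference $g-g_0=\textit{O}(r^{-\tau})$ with $\tau>0$ lies in $L^p_{\delta}(M)$ for every $\delta>-\tau$, in particular for every nonexceptional $\delta\in(-\tau,0)$. \end{rk}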
These properties also hold for elliptic operators of arbitrary order acting on vector bundles, provided that the coefficients behave suitably at infinity \cite[Theorem 6.1]{Loc-McO}. We will use these facts frequently in this paper. \subsection{The space of gauged ALE Ricci-flat metrics}\label{sec-moduli-space} Fix an ALE Ricci-flat manifold $(M,g_0)$. Let $\mathcal{M}$ be the space of smooth metrics on the manifold $M$. For $g\in \mathcal{M}$, let $V=V(g,g_0)$ be the vector field defined intrinsically by (\ref{def-vect-gauge}) and locally given by $g^{ij}(\Gamma(g)^k_{ij}-\Gamma(g_0)_{ij}^k)$ where $\Gamma(g)_{ij}^k$ denotes the Christoffel symbols associated to the Riemannian metric $g$. We call a metric $g$ gauged if $V(g,g_0)=0$. Let \begin{align} \mathcal{F}=\left\{g\in\mathcal{M}\mid -2\Ric(g)+\Li_{V(g,g_0)}g=0 \right\}\label{def-F-set}, \end{align} be the set of stationary points of the Ricci-DeTurck flow. In local coordinates, the above equation (\ref{def-F-set}) can also be written as \begin{eqnarray*} 0&=&g^{ab}\nabla^{g_0,2}_{ab}g_{ij}-g^{kl}g_{ip}(g_0)^{pq}\Rm(g_0)_{jklq}-g^{kl}g_{jp}(g_0)^{pq}\Rm(g_0)_{iklq}\\ &&+g^{ab}g^{pq}\left(\frac{1}{2}\nabla^{g_0}_ig_{pa}\nabla^{g_0}_jg_{qb}+\nabla^{g_0}_ag_{jp}\nabla^{g_0}_qg_{ib}\right)\\ &&-g^{ab}g^{pq}\left(\nabla^{g_0}_ag_{jp}\nabla^{g_0}_bg_{iq}-\nabla^{g_0}_jg_{pa}\nabla^{g_0}_bg_{iq}-\nabla^{g_0}_ig_{pa}\nabla^{g_0}_bg_{jq}\right), \end{eqnarray*} see \cite[Lemma 2.1]{Shi-Def}.
By defining $h=g-g_0$, this equation can again be rewritten as \begin{equation}\begin{split}\label{stat-evo-equ} 0&=g^{ab}\nabla^{g_0,2}_{ab}h_{ij}+h_{ab}g^{ka}(g_0)^{lb}g_{ip}(g_0)^{pq}\Rm(g_0)_{jklq}+h_{ab}g^{ka}(g_0)^{lb}g_{jp}(g_0)^{pq}\Rm(g_0)_{iklq}\\ &\quad+g^{ab}g^{pq}\left(\frac{1}{2}\nabla^{g_0}_ih_{pa}\nabla^{g_0}_jh_{qb}+\nabla^{g_0}_ah_{jp}\nabla^{g_0}_qh_{ib}\right)\\&\quad -g^{ab}g^{pq}\left(\nabla^{g_0}_ah_{jp}\nabla^{g_0}_bh_{iq}-\nabla^{g_0}_jh_{pa}\nabla^{g_0}_bh_{iq}-\nabla^{g_0}_ih_{pa}\nabla^{g_0}_bh_{jq}\right), \end{split} \end{equation} where we used that $g_0$ is Ricci-flat. The linearization of this equation at $g_0$ is given by \begin{align*} \frac{d}{dt}|_{t=0}(-2\mathrm{Ric}_{g_0+th}+\Li_{V(g_0+th,g_0)}(g_0+th))=L_{g_0}h. \end{align*} A proof of this fact can be found for instance in \cite[Chapter 3]{Bam-Phd}. We recall the well-known fact that the $L^2$-kernel of the Lichnerowicz operator consists of transverse traceless tensors: \begin{lemma}\label{tttensors} Let $(M^n,g_0)$ be an ALE Ricci-flat manifold and $h\in\ker_{L^2}(L_{g_0})$. Then, $\tr_{g_0} h=0$ and $\div_{g_0} h=0$. \end{lemma} \begin{proof} Straightforward calculations show that $\tr_{g_0}\circ L_{g_0}=\Delta_{g_0}\circ \tr_{g_0}$ and $\div_{g_0}\circ L_{g_0}=\Delta_{g_0}\circ\div_{g_0}$. Therefore, $\tr_{g_0} h\in \ker_{L^2}(\Delta_{g_0})$ and $\div_{g_0} h\in \ker_{L^2} (\Delta_{g_0})$, which implies the statement of the lemma. \end{proof} \noindent The next proposition ensures that ALE steady Ricci solitons are Ricci-flat: \begin{prop}\label{flatsoliton} Let $(M^n,g,X)$ be a steady Ricci soliton, i.e. $\Ric(g)=\mathcal{L}_X(g)$ for some vector field $X$ on $M$. Then $\lim_{+\infty}|X|_g=0$ implies $X=0$. In particular, any steady soliton in the sense of (\ref{def-F-set}) that is ALE with $\lim_{+\infty}V(g,g_0)=0$ is Ricci-flat.
\end{prop} \begin{proof} By the contracted Bianchi identity, one has: \begin{eqnarray*} \frac{1}{2}\nabla^g\R_g=\div_g\Ric(g) =\div_g\mathcal{L}_X(g) &=&\frac{1}{2}\nabla^g\tr_g(\mathcal{L}_X(g))+\Delta_gX+\Ric(g)(X)\\ &=&\frac{1}{2}\nabla^g\R_g+\Delta_gX+\Ric(g)(X). \end{eqnarray*} Therefore, $\Delta_gX+\Ric(g)(X)=0$. In particular, \begin{eqnarray*} \Delta_g|X|_g^2+X\cdot|X|_g^2=2|\nabla^gX|_g^2+2\langle\nabla^g_XX,X\rangle_g-2\Ric(g)(X,X)=2|\nabla^gX|_g^2, \end{eqnarray*} which establishes that $|X|_g^2$ is a subsolution of $\Delta_X:=\Delta_g+X\cdot$. The maximum principle then implies the result in case $\lim_{+\infty}|X|_g=0$. \end{proof} \begin{theo}\label{ell-reg-prop} Let $(M^n,g_0)$ be an ALE Ricci-flat manifold with order $\tau>0$. Let $g\in\mathcal{F}$ be in a sufficiently small neighbourhood of $g_0$ with respect to the $L^2\cap L^{\infty}$-topology. Then $g$ is an ALE Ricci-flat metric of order $n-1$ with respect to the same coordinates as $g_0$. \end{theo} \begin{rk} \begin{itemize} \item[(i)] If $n=4$ or $g_0$ is K\"ahler, it seems likely that $g\in\mathcal{F}$ is ALE of order $n$ with respect to the same coordinates as $g_0$. However, we don't need this decay for further considerations. \item[(ii)] A priori, Theorem \ref{ell-reg-prop} does not assume any integral or pointwise decay on the difference tensor $g-g_0$ or on the curvature tensor of $g$. The assumptions on $g$ can even be weakened as follows: If $\left\|g-g_0\right\|_{L^p(g_0)}\leq K<\infty$ for some $p\in[2,\infty)$ and $\left\|g-g_0\right\|_{L^{\infty}(g_0)}<\epsilon=\epsilon(g_0,p,K)$, then the conclusions of Theorem \ref{ell-reg-prop} hold. \end{itemize} \end{rk} \begin{proof}[Proof of Theorem \ref{ell-reg-prop}] The first step consists in applying a Moser iteration to the norm of the difference of the two metrics: $\arrowvert h\arrowvert_{g_0}:=|g-g_0|_{g_0}$.
Indeed, recall that $h$ satisfies \eqref{stat-evo-equ} which can also be written as \begin{eqnarray*} &&g^{-1}\ast\nabla^{g_0,2}h+2\Rm(g_0)\ast h=\nabla^{g_0}h\ast\nabla^{g_0}h,\\ && g^{-1}\ast \nabla^{g_0,2}h_{ij}:=g^{kl}\nabla^{g_0,2}_{kl}h_{ij}. \end{eqnarray*} In particular, \begin{eqnarray*} g^{-1}\ast\nabla^{g_0,2}\arrowvert h\arrowvert_{g_0}^2&=&2g^{-1}(\nabla^{g_0}h,\nabla^{g_0}h)-4\langle\Rm(g_0)\ast h,h\rangle_{g_0}+\langle\nabla^{g_0}h\ast \nabla^{g_0}h,h\rangle_{g_0},\\ g^{-1}(\nabla^{g_0}h,\nabla^{g_0}h)&:=&g^{ij}\nabla^{g_0}_ih_{kl}\nabla^{g_0}_jh_{kl}. \end{eqnarray*} Therefore, as $\|h\|_{L^{\infty}(g_0)}\leq \epsilon$ where $\epsilon>0$ is a sufficiently small constant depending on $n$ and $g_0$, we get \begin{eqnarray*} g^{-1}\ast\nabla^{g_0,2}\arrowvert h\arrowvert_{g_0}^2\geq -c(n)\arrowvert\Rm(g_0)\arrowvert_{g_0}\arrowvert h\arrowvert^2_{g_0}. \end{eqnarray*} As $|\Rm(g_0)|\in L^{n/2}(M)$ and $h\in L^2(S^2T^*M)$, Lemma $4.6$ and Proposition $4.8$ of \cite{Ban-Kas-Nak} tell us that $|h|^2=\textit{O}(r^{-\tau})$ at infinity for any positive $\tau<n-2$, i.e. $h=\textit{O}(r^{-\tau})$ for any $\tau<n/2-1$. Here, $r$ denotes the distance function on $M$ centered at some arbitrary point $x\in M$. The next step is to show that $\nabla^{g_0} h=\textit{O}(r^{-\tau-1})$ for $\tau<n/2-1$. 
By elliptic regularity for weighted spaces and by interpolation inequalities, \begin{align*} \left\|h\right\|_{W^{2,p}_{-\tau}(g_0)}&\leq C(\left\|g^{-1}\ast\nabla^{g_0,2}h\right\|_{L^p_{-\tau-2}(g_0)}+\left\|h\right\|_{L^p_{-\tau}(g_0)})\\ &\leq C(\left\| |\nabla^{g_0} h|^2\right\|_{L^{p}_{-\tau-2}(g_0)}+\left\|h\right\|_{L^p_{-\tau}(g_0)})\\ &\leq C\left(\left(\left\|\nabla^{g_0,2}h\right\|_{L^p_{-\tau-2}(g_0)}+\left\|\nabla^{g_0} h\right\|_{L^p_{-\tau-1}(g_0)}\right)\left\|h\right\|_{L^{\infty}(g_0)}+\left\|h\right\|_{L^p_{-\tau}(g_0)}\right), \end{align*} which implies \begin{align*} \left\|h\right\|_{C^{1,\alpha}_{-\tau}(g_0)}\leq C\left\|h\right\|_{W^{2,p}_{-\tau}(g_0)}\leq C \left\| h\right\|_{L^p_{-\tau}(g_0)}\leq C \left\| h\right\|_{L^{\infty}_{-\tau-\epsilon}(g_0)}<\infty \end{align*} for all $p\in(n,\infty)$ and $\tau+\epsilon<\frac{n}{2}-1$, provided that the $L^{\infty}$-norm is small enough. Consequently, \begin{eqnarray*} \nabla^{g_0} h=\textit{O}(r^{-1-\tau}). \end{eqnarray*} In the following we will further improve the decay order and show that $h=\textit{O}(r^{-n+1})$. As a consequence of elliptic regularity for weighted H\"{o}lder spaces, we will furthermore get $\nabla^{g_0,k}h=\textit{O}(r^{-n+1-k})$ for any $k\in\mathbb{N}$. To prove these statements we adapt the strategy in \cite[p. 325-327]{Ban-Kas-Nak}. As $\Rm(g_0)=\textit{O}(r^{-n-1})$, $h=\textit{O}(r^{-\tau})$ and $\nabla^{g_0} h=\textit{O}(r^{-\tau-1})$ for some fixed $\tau$ slightly smaller than $\frac{n}{2}-1$, equation \eqref{stat-evo-equ} implies that $\Delta_{g_0} h=\textit{O}(r^{-2\tau-2})$. Thus, $\Delta_{g_0} h=\textit{O}(r^{-\mu})$ for some $\mu$ slightly smaller than $n$. Let $\varphi: M\setminus B_R(x)\to (\mathbb{R}^n\setminus B_R)/\Gamma $ be coordinates at infinity with respect to which $g_0$ is ALE of order $n-1$. Let furthermore $\Pi:\mathbb{R}^n\setminus B_R\to (\mathbb{R}^n\setminus B_R)/\Gamma$ be the projection map.
From now on, we consider the objects on $M$ as objects on $\mathbb{R}^n\setminus B_R$ by identifying them with the pullbacks under the map $\Pi\circ \varphi^{-1}$. To avoid any confusion, we denote the pullback of $\Delta_{g_0}$ by $\Delta$. $\Delta_0$ denotes the Euclidean Laplacian on $\mathbb{R}^n\setminus B_R$. Let $r_0=|z|$ be the euclidean norm as a function on $\mathbb{R}^n\setminus B_R$. Then we have, for any $\beta\in\mathbb{R}$, \begin{align*} \Delta r_0^{-\beta}=\Delta_0r_0^{-\beta}+(\Delta-\Delta_0)r_0^{-\beta}=\beta(\beta-n+2)r_0^{-\beta-2}+O(r_0^{-\beta-n-1}). \end{align*} Let $u=h_{ij}$ for any $i,j$. For any constant $A>0$, we have \begin{align*} \Delta (A\cdot r_0^{-\beta}\pm u)= (A\cdot\beta(\beta-n+2)+ C_1\cdot r_0^{-n+1}+C_2r_0^{-\mu+\beta+2})r_0^{-\beta-2}. \end{align*} If we choose $\beta>0$ so that $\beta+2<\mu<n$, we can choose $A$ so large that \begin{align*} \Delta (A\cdot r_0^{-\beta}\pm u)<0 \qquad A\cdot r_0^{-\beta}\pm u>0 \text{ if }r_0=R. \end{align*} The strong maximum principle then implies that $u=\textit{O}(r_0^{-\beta})$ and we also get $\Delta u=\textit{O}(r_0^{-\beta-2})$. By elliptic regularity, $u\in W^{2,p}_{-\beta}(\mathbb{R}^n\setminus B_R)$. By \eqref{stat-evo-equ}, $\Delta_0 u\in L^{p}_{-2\beta-2}(\mathbb{R}^n\setminus B_R)$ and for all nonexceptional $n-1<\gamma<2\beta$, there exists a function $v^{\gamma}_{ij}\in W^{2,p}_{-\gamma}(\mathbb{R}^n\setminus B_R)$ such that $\Delta_0v^{\gamma}_{ij}=\Delta_0h_{ij}$. By the expansion of harmonic functions on $\mathbb{R}^n$, we have \begin{align*} h_{ij}=v^{\gamma}_{ij}+A_{ij}r_0^{-n+2}+\textit{O}(r_0^{-n+1})=A_{ij}r_0^{-n+2}+\textit{O}(r_0^{-n+1}), \end{align*} where the decay of $v^{\gamma}_{ij}$ follows from Sobolev embedding. Proposition \ref{flatsoliton} now implies that the equations $\mathrm{Ric}(g)=0$ and $V(g,g_0)=0$ hold individually. 
Therefore, \begin{align*}0=g^{ij}(\Gamma(g)_{ij}^k-\Gamma(g_0)_{ij}^k)=(Az-\frac{1}{2}\tr(A)z)^k(2-n)|z|^{-n}+\textit{O}(r_0^{-n}), \end{align*} which implies that $A_{ij}=0$ and thus, $h=\textit{O}(r^{-n+1})$. As $h_{ij}=v^{\gamma}_{ij}+\textit{O}(r_0^{-n+1})$ with $v^{\gamma}_{ij}\in W^{2,p}_{-\gamma}(\mathbb{R}^n\setminus B_R)$ and a harmonic remainder term, Sobolev embedding and elliptic regularity imply that $h_{ij}\in C^{1,\alpha}_{1-n}(\mathbb{R}^n\setminus B_R)$, so that $\nabla^{g_0} h=\textit{O}(r^{-n})$. Elliptic regularity for weighted H\"{o}lder spaces implies that $\nabla^{g_0,k} h=\textit{O}(r^{-n+1-k})$ for all $k\in\mathbb{N}$. \end{proof} \begin{rk}\label{lin-decay} By almost the same proof as above, one shows that $h=\textit{O}_{\infty}(r^{-n-1})$ if $h\in\ker_{L^2}(L_{g_0})$. To prove $A_{ij}=0$ in this case, one uses the condition $0=\div_{g_0}h-\frac{1}{2}\nabla^{g_0}\tr_{g_0} h$, which is guaranteed by Lemma \ref{tttensors}. \end{rk} \begin{theo}\label{analyticset} Let $(M,g_0)$ be an ALE Ricci-flat manifold and $\mathcal{F}$ as above. Then there exists an $L^2\cap L^{\infty}$-neighbourhood $\mathcal{U}$ of $g_0$ in the space of metrics and a finite-dimensional real-analytic submanifold $\mathcal{Z}\subset\mathcal{U}$ with $T_{g_0}\mathcal{Z}=\ker_{L^2}(L_{g_0})$ such that $\mathcal{U}\cap\mathcal{F}$ is an analytic subset of $\mathcal{Z}$. In particular, if $g_0$ is integrable, we have $\mathcal{U}\cap\mathcal{F}=\mathcal{Z}$. \end{theo} \begin{proof} Let $\Phi:g\mapsto-2\Ric(g)+\Li_{V(g,g_0)}(g)$ and let $\mathcal{B}_{L^2\cap L^{\infty}}(g_0,\epsilon)$ be the $\epsilon$-ball with respect to the $L^2\cap L^{\infty}$-norm induced by $g_0$ and centered at $g_0$.
By Theorem \ref{ell-reg-prop}, we can choose $\epsilon>0$ so small that any $g\in\mathcal{F}\cap \mathcal{B}_{L^2\cap L^{\infty}}(g_0,\epsilon)$ satisfies the condition $g-g_0=\textit{O}_{\infty}(r^{-n+1})$, so that $\left\|g-g_0\right\|_{H^k_{\delta}}<\infty$ for any $k\in \mathbb{N}$ and $\delta >-n+1$. Suppose now in addition that $k>n/2+2$ and $\delta\leq-n/2$ and let $\mathcal{V}$ be an $H^k_{\delta}$-neighbourhood of $g_0$ with $\mathcal{V}\subset \mathcal{B}_{L^2\cap L^{\infty}}(g_0,\epsilon)$. Then the map $\Phi$, considered as a map $\Phi: H^k_{\delta}(S^2T^*M)\supset\mathcal{V}\to H^{k-2}_{\delta-2}(S^2T^*M)$, is a real-analytic map between Hilbert manifolds. If $\delta$ is nonexceptional, the differential $d\Phi_{g_0}=L_{g_0}: H^k_{\delta}(S^2T^*M)\to H^{k-2}_{\delta-2}(S^2T^*M)$ is Fredholm. By \cite[Lemma $13.6$]{Koiso-Cx-Str}, there exists (possibly after passing to a smaller neighbourhood) a finite-dimensional real-analytic submanifold $\mathcal{W}\subset \mathcal{V}$ with $g_0\in \mathcal{W}$ and $T_{g_0}\mathcal{W}=\ker_{H^k_{\delta}}(L_{g_0})$ such that $\mathcal{V}\cap\Phi^{-1}(0)\subset\mathcal{W}$ is a real-analytic subset. By the proof of Theorem \ref{ell-reg-prop}, we can choose an $L^2\cap L^{\infty}$-neighbourhood $\mathcal{U}_{g_0}\subset \mathcal{B}_{L^2\cap L^{\infty}}(g_0,\epsilon)$ of $g_0$ so small that $\mathcal{U}_{g_0}\cap\mathcal{F}\subset\mathcal{V}$ (provided that $\mathcal{V}$ is small enough). Then the set $\mathcal{Z}=\mathcal{U}_{g_0}\cap\mathcal{W}$ fulfills the desired properties because $T_{g_0}\mathcal{Z}=T_{g_0}\mathcal{W}=\ker_{H^k_{\delta}}(L_{g_0})=\ker_{L^2}(L_{g_0})$. Here, the last equation holds by Remark \ref{lin-decay}. \end{proof} \begin{prop}\label{slice} Let $(M^n,g_0)$ be an ALE Ricci-flat manifold and let $k>n/2+1$ and $\delta\leq-n/2$ be nonexceptional.
Then there exists an $H^k_{\delta}$-neighbourhood $\mathcal{U}^k_{\delta}$ of $g_0$ in the space of metrics such that the set \begin{align*} \mathcal{G}_{\delta}^k:=\left\{g\in\mathcal{U}_{\delta}^k\mid g^{ij}(\Gamma(g)_{ij}^k-\Gamma(g_0)_{ij}^k)=0\right\} \end{align*} is a smooth manifold. Moreover, for any $g\in \mathcal{U}_{\delta}^k$, there exists a unique diffeomorphism $\varphi$ which is $H^{k+1}_{\delta+1}$-close to the identity such that $\varphi^*g\in \mathcal{G}_{\delta}^k$. \end{prop} \begin{proof}Let $\mathcal{U}$ be an $H^{k}_{\delta}$-neighbourhood of $g_0$ in the space of metrics such that the map $V:H^k_{\delta}(S^2T^*M)\supset\mathcal{U}\to H^{k-1}_{\delta-1}(TM)$, given by $V(g)^k=V(g,g_0)^k=g^{ij}(\Gamma(g)_{ij}^k-\Gamma(g_0)_{ij}^k)$, is well-defined. Linearization at $g_0$ yields the map $F:H^k_{\delta}(S^2T^*M)\to H^{k-1}_{\delta-1}(TM)$, defined by $F(h)=(\mathrm{div}_{g_0}h)^{\sharp}-\frac{1}{2}\nabla^{g_0}\mathrm{tr}_{g_0}h$. To prove the theorem, it suffices to prove that $F$ is surjective and that the decomposition \begin{align*} H^k_{\delta}(S^2T^*M)=\ker F\oplus \Li(g_0)(H^{k+1}_{\delta+1}(TM)) \end{align*} holds (here, $\Li_X(g_0)$ denotes the Lie derivative of $g_0$ along $X$). In fact, a calculation shows that $F\circ \Li(g_0)= \Delta_{g_0}+\Ric(g_0)(\cdot)=\Delta_{g_0}$ since $g_0$ is Ricci-flat. Since the map $\Delta_{g_0}:H^{k+1}_{\delta+1}(\Lambda^1M)\to H^{k-1}_{\delta-1}(\Lambda^1M)$ is an isomorphism, it follows that $F$ is surjective and $\ker F\cap \Li(g_0)(H^{k+1}_{\delta+1}(TM))=\left\{0\right\}$. To show that \begin{align*} H^k_{\delta}(S^2T^*M)\subset \ker F\oplus \Li(g_0)(H^{k+1}_{\delta+1}(TM)), \end{align*} let $h\in H^k_{\delta}(S^2T^*M)$ and let $X\in H^{k+1}_{\delta+1}(TM)$ be the unique solution of $F(h)=\Delta_{g_0} X=F( \Li_X(g_0))$. Then, $h=(h-\Li_X(g_0))+\Li_X(g_0)$ is the desired decomposition. By surjectivity of $F$, $\mathcal{G}_{\delta}^k$ is a manifold.
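The identity $F\circ \Li(g_0)= \Delta_{g_0}+\Ric(g_0)(\cdot)$ invoked in the preceding argument can be checked directly; here is a sketch (our computation, with the curvature sign convention under which commuting covariant derivatives on a $1$-form produces a $+\Ric$ term, as in the contracted Bianchi computation of Proposition \ref{flatsoliton}):

```latex
% For h = \Li_X(g_0), i.e. h_{ij} = \nabla_i X_j + \nabla_j X_i:
(\div_{g_0}h)_j
  = \nabla^i\nabla_iX_j + \nabla^i\nabla_jX_i
  = \Delta_{g_0}X_j + \nabla_j(\div_{g_0}X) + \Ric(g_0)_{jk}X^k,
\qquad
\tr_{g_0}h = 2\,\div_{g_0}X,
% so that the gradient term cancels and
F(\Li_X(g_0))
  = \big(\div_{g_0}h\big)^{\sharp} - \tfrac{1}{2}\nabla^{g_0}\tr_{g_0}h
  = \Delta_{g_0}X + \Ric(g_0)(X).
```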
The second assertion follows because the map \begin{align*} \Phi: \mathcal{G}_{\delta}^k\times H^{k+1}_{\delta+1}(\mathrm{Diff}(M))\to \mathcal{M}_{\delta}^k:=\mathcal{M}\cap H^k_{\delta}(S^2T^*M),\qquad (g,\varphi)\mapsto \varphi^*g \end{align*} is a local diffeomorphism around $g_0$ due to the implicit function theorem and the above decomposition. \end{proof} \begin{rk}The construction in Proposition \ref{slice} is similar to the slice provided by Ebin's slice theorem \cite{Ebi-Slice} in the compact case. The set $\mathcal{F}$ is similar to the local premoduli space of Einstein metrics defined in \cite[Definition 2.8]{Koiso-Cx-Str}. In contrast to the compact case, the elements in $\mathcal{F}$ close to $g_0$ can all be homothetic. In fact, this holds for the Eguchi-Hanson metric, see \cite{Page-K3}. More generally, any four-dimensional ALE hyperk\"{a}hler manifold $(M,g)$ admits a three-dimensional subspace of homothetic metrics in $\mathcal{F}$: see \cite[p. 52--53]{Via-Lect-IAS}. \end{rk} \subsection{ALE Ricci-flat K\"{a}hler spaces}\label{Ricci-Flat-ALE-Kahler} \begin{lemma}[$\partial\bar{\partial}$-Lemma for ALE manifolds] Let $(M,g,J)$ be an ALE K\"{a}hler manifold, $\delta\leq-n/2$ nonexceptional, $k\geq 1$ and $\alpha\in H^k_{\delta}(\Lambda^{p,q}M)$. Suppose that \begin{itemize} \item $\alpha=\partial\beta$ for some $\beta\in H^{k+1}_{\delta+1}(\Lambda^{p-1,q}M)$ and $\bar{\partial}\alpha=0$ or \item $\alpha=\bar{\partial}\beta$ for some $\beta\in H^{k+1}_{\delta+1}(\Lambda^{p,q-1}M)$ and ${\partial}\alpha=0$. \end{itemize} Then there exists a form $\gamma\in H^{k+2}_{\delta+2}(\Lambda^{p-1,q-1}M)$ such that $\alpha=\partial\bar{\partial}\gamma$. Moreover, we can choose $\gamma$ to satisfy the estimate $\left\|\gamma\right\|_{H^{k+2}_{\delta+2}}\leq C\cdot \left\|\alpha\right\|_{H^{k}_{\delta}}$ for some $C>0$. \end{lemma} \begin{proof}This follows along the lines of Lemma 5.50 in \cite{Ballmann-Book}.
Let $d=\partial$ or $d=\bar{\partial}$ and $\Delta=\Delta_{\partial}=\Delta_{\bar{\partial}}$. Consider $\Delta$ as an operator $\Delta:H^k_{\delta}(\Lambda^{*}M)\to H^{k-2}_{\delta-2}(\Lambda^{*}M)$. Because of the assumption on $\delta$, it is Fredholm and we have the $L^2$-orthogonal decomposition \begin{align*} H^k_{\delta}(\Lambda^{*}M)=\ker_{L^2}(\Delta)\oplus \Delta(H^{k+2}_{\delta+2}(\Lambda^{*}M)). \end{align*} We define the Green's operator $G$ to be zero on $\ker_{L^2}(\Delta)$ and to be the inverse of $\Delta$ on $\ker_{L^2}(\Delta)^{\perp}$. This defines a continuous linear operator $G:H^{k}_{\delta}(\Lambda^{*}M)\to H^{k+2}_{\delta+2}(\Lambda^{*}M)$. By Hodge theory and because $d+d^* :H^{k+1}_{\delta+1}(\Lambda^{*}M)\to H^{k}_{\delta}(\Lambda^{*}M)$ is also Fredholm, \begin{align*} d( H^{k+1}_{\delta+1}(\Lambda^{*}M))\oplus d^{*}(H^{k+1}_{\delta+1}(\Lambda^{*}M))=\Delta(H^{k+2}_{\delta+2}(\Lambda^{*}M)), \end{align*} and it is straightforward to see that $G$ is self-adjoint and commutes with $d$ and $d^*$. As in Ballmann's book, one shows that $\gamma=-\partial^*G\bar{\partial}^*G\alpha$ does the job in both cases. The estimate on $\gamma$ follows from the construction. \end{proof} Let $(M,g,J)$ be a K\"{a}hler manifold. An infinitesimal complex deformation is an endomorphism $I:TM\to TM$ that anticommutes with $J$ and satisfies $\bar{\partial}I=0$ and $\bar{\partial}^*I=0$. By the relation $IJ+JI=0$, $I$ can be viewed as a section of $\Lambda^{0,1}M\otimes T^{1,0}M$. \begin{theo} Let $(M^n,g,J)$ be an ALE K\"{a}hler manifold with a holomorphic volume form, $k>n/2+1$, $\delta\leq-n/2$ nonexceptional and $I\in H^k_{\delta}(\Lambda^{0,1}M\otimes T^{1,0}M)$ such that $\bar{\partial}I=0$ and $\bar{\partial}^*I=0$. Then there exists a smooth family of complex structures $J(t)$ with $J(0)=J$ such that $J(t)-J\in H^k_{\delta}(T^*M\otimes TM)$ and $J'(0)=I$.
\end{theo} \begin{proof} The proof follows along the lines of Tian's proof by the power series approach \cite{Tian-Smoothness}: We write $J(t)=J(1-I(t))(1+I(t))^{-1}$, where $I(t)\in H^k_{\delta}(\Lambda^{0,1}M\otimes T^{1,0}M)$ and $I(t)$ has to solve the equation \begin{align*} \bar{\partial}I(t)+\frac{1}{2}[I(t),I(t)]=0, \end{align*} where $[.,.]$ denotes the Fr\"{o}licher-Nijenhuis bracket. If we write $I(t)$ as a formal power series $I(t)=\sum_{k\geq 1} I_kt^k$, the coefficients have to solve the equation \begin{align*} \bar{\partial} I_N+\frac{1}{2}\sum_{k=1}^{N-1}[I_k,I_{N-k}]=0, \end{align*} inductively for all $N\geq 2$. As $\Lambda^{n,0}M$ is trivial, there is a natural identification of the bundles $\Lambda^{0,1}M\otimes T^{1,0}M=\Lambda^{n-1,1}M$ by using the holomorphic volume form and we now think of the $I_k$ as being $(n-1,1)$-forms. Initially, we have chosen $I_1\in H^k_{\delta}(\Lambda^{0,1}M\otimes T^{1,0}M)$, given by $I=2I_1J$. By the multiplication property of weighted Sobolev spaces \cite[p. 538]{Cho-Bru-Boo}, $[I_1,I_1]\in H^{k-1}_{\delta-1}(\Lambda^{n-1,2}M)$. Using $\partial I_1=0$ and $\bar{\partial}^*I_1=0$, one can now show that $\bar{\partial}[I_1,I_1]=0$ and $[I_1,I_1]$ is $\partial$-exact. The $\partial\bar{\partial}$-lemma now implies the existence of a $\psi\in H^{k+1}_{\delta+1}(\Lambda^{n-2,1}M)$ such that $$\partial\bar{\partial}\psi=-\frac{1}{2}[I_1,I_1],$$ and so, $I_2=\partial \psi\in H^{k}_{\delta}(\Lambda^{n-1,1}M)$ does the job. Inductively, we get a solution of the equation \begin{align*} \partial\bar{\partial}\psi=\frac{1}{2}\sum_{k=1}^{N-1}[I_k,I_{N-k}], \end{align*} by the $\partial\bar{\partial}$-lemma since the right hand side is $\bar{\partial}$-closed and $\partial$-exact (which in turn is true because $\partial I_k=0$ for $1\leq k\leq N-1$). Now we can choose $I_N=\partial\psi\in H^{k}_{\delta}(\Lambda^{n-1,1}M)$. 
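The sequence $C(N)$ appearing in the convergence estimate below ($C(1)=1$, $C(N)=\sum_{i=1}^{N-1}C(i)C(N-i)$) consists of the Catalan numbers, and solving $f=s+f^2$ for its generating function $f(s)=\sum_{i\geq1}C(i)s^i$ yields the closed form $\frac{1}{2}-\sqrt{\frac{1}{4}-s}$ used there. A quick numerical sanity check of both facts (the helper `C` is ours, not part of the text):

```python
from functools import lru_cache
from math import comb, sqrt

@lru_cache(maxsize=None)
def C(N):
    # C(1) = 1, C(N) = sum_{i=1}^{N-1} C(i) C(N-i): the recursion from the convergence proof
    if N == 1:
        return 1
    return sum(C(i) * C(N - i) for i in range(1, N))

# C(N) is the (N-1)-st Catalan number
assert all(C(N) == comb(2 * (N - 1), N - 1) // N for N in range(1, 15))

# Partial sums of f(s) = sum_{i>=1} C(i) s^i approach 1/2 - sqrt(1/4 - s) for s < 1/4,
# the closed form bounding ||I(t)|| below.
s = 0.1
partial = sum(C(i) * s**i for i in range(1, 80))
assert abs(partial - (0.5 - sqrt(0.25 - s))) < 1e-12
```

In particular, the radius of convergence $s<1/4$ in the estimate below is exactly the radius of convergence of the Catalan generating function.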
Let us prove the convergence of the above series: Let $D_1$ be the constant in the estimate of the $\partial\bar{\partial}$-lemma and $D_2$ be the constant such that \begin{align*} \left\|[\phi,\psi]\right\|_{H^{k-1}_{\delta-1}}\leq D_2\left\|\phi\right\|_{H^k_{\delta}}\left\|\psi\right\|_{H^k_{\delta}}. \end{align*} Then one can easily show by induction that \begin{align*} \left\|I_N\right\|_{H^k_{\delta}}\leq C(N)\cdot[\frac{1}{2}D_1\cdot D_2]^{N-1}(\left\|I_1\right\|_{H^{k}_{\delta}})^N \end{align*} for $N\geq1$, where $C(N)$ is the sequence defined by $C(1)=1$ and $C(N)=\sum_{i=1}^{N-1}C(i)\cdot C(N-i)$ for $N>1$. By defining $D:=2/(D_1\cdot D_2)$ and $s=\frac{1}{2}D_1\cdot D_2\cdot\left\|I_1\right\|_{H^{k}_{\delta}}\cdot t$, we get \begin{align*} \left\|I(t)\right\|_{H^k_{\delta}}\leq \sum_{i=1}^{\infty}\left\|I_i\right\|_{H^k_{\delta}} t^i\leq D\cdot\sum_{i=1}^{\infty}C(i)\cdot s^i=D\cdot \left (\frac{1}{2}-\sqrt{\frac{1}{4}-s}\right) \end{align*} if $s<1/4$ which shows that the series converges. Thus $I(t)\in H^{k}_{\delta}(\Lambda^{n-1,1}M)$ and $J(t)-J=-2JI(t)(1+I(t))^{-1}\in H^{k}_{\delta}(\Lambda^{n-1,1}M)\cong H^{k}_{\delta}(\Lambda^{0,1}M\otimes T^{1,0}M)$. \end{proof} The proof of the above theorem provides an analytic immersion $\Theta:H^{k}_{\delta}(\Lambda^{0,1}M\otimes T^{1,0}M)\cap \ker_{L^2}(\Delta)\supset U\to H^{k}_{\delta}(T^*M\otimes TM)$ whose image is a smooth manifold of complex structures which we denote by $\mathcal{J}^k_{\delta}$ and whose tangent map at $J$ is just the injection. \begin{prop}\label{kahler-def}Let $(M,g_0,J_0)$ be an ALE Calabi-Yau manifold, $\delta<2-n$ nonexceptional and $\mathcal{J}^k_{\delta}$ be as above. 
Then there exists an $H^{k}_{\delta}$-neighbourhood $\mathcal{U}$ of $J_0$ and a smooth map $\Phi:\mathcal{J}^k_{\delta}\cap \mathcal{U}\to\mathcal{M}^k_{\delta}$ which associates to each $J\in \mathcal{J}^k_{\delta}\cap \mathcal{U}$ sufficiently close to $J_0$ a metric $g(J)$ which is $H^k_{\delta}$-close to $g_0$ and K\"{a}hler with respect to $J$. Moreover, we can choose the map $\Phi$ such that $$d\Phi_{J_0}(I)(X,Y)=\frac{1}{2}(g_0(IX,J_0Y)+g_0(J_0X,IY)).$$ \end{prop} \begin{proof} We adapt the strategy of Kodaira and Spencer \cite[Section 6]{Kod-Spe-III}. Let $J_t$ be a family in $\mathcal{J}^k_{\delta}$ and define $J_t$-hermitian forms $\omega_t:=\Pi^{1,1}_{t}\omega_0$, where $\Pi^{1,1}_{t}\omega_0(X,Y)=\frac{1}{2}(\omega_0(X,Y)+\omega_0(J_tX,J_tY))$. Let $\partial_t,\bar{\partial}_t$ be the associated Dolbeault operators and $\partial_t^*,\bar{\partial}_t^*$ their formal adjoints with respect to the metric $g_t(X,Y):=\omega_t(X,J_tY)$. We now define a fourth-order linear differential operator $E_t:H^k_{\delta}(\Lambda^{p,q}_tM)\to H^{k-4}_{\delta-4}(\Lambda^{p,q}_tM)$ by \begin{align*} E_t=\partial_t\bar{\partial}_t\bar{\partial}^*_t\partial_t^*+\bar{\partial}^*_t\partial_t^*\partial_t\bar{\partial}_t+\bar{\partial}^*_t\partial_t\partial_t^*\bar{\partial}_t+\partial_t^*\bar{\partial}_t\bar{\partial}^*_t\partial_t +\bar{\partial}^*_t\bar{\partial}_t+\partial_t^*\partial_t. \end{align*} It is straightforward to see that $E_t$ is formally self-adjoint and strongly elliptic. Moreover, $\alpha\in\ker_{H^k_{\delta}}(E_t)$ if and only if $\partial_t\alpha=0$, $\bar{\partial}_t\alpha=0$ and $\bar{\partial}^*_t\partial_t^*\alpha=0$, i.e.\ $d\alpha=0$ and $\bar{\partial}^*_t\partial_t^*\alpha=0$ hold simultaneously. If $\delta$ is nonexceptional, $E_t$ is Fredholm, which allows us to define for each $t$ its Green's operator $G_t:H^{k-4}_{\delta-4}(\Lambda^{p,q}_tM)\to H^k_{\delta}(\Lambda^{p,q}_tM)$.
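The characterization of $\ker_{H^k_{\delta}}(E_t)$ stated above comes from pairing $E_t\alpha$ with $\alpha$; formally, i.e. whenever the decay of $\alpha$ justifies the integrations by parts (our expansion of the six terms of $E_t$):

```latex
% Pairing each of the six terms of E_t with \alpha and integrating by parts:
(E_t\alpha,\alpha)_{L^2(g_t)}
 = \|\bar{\partial}^*_t\partial_t^*\alpha\|^2
 + \|\partial_t\bar{\partial}_t\alpha\|^2
 + \|\partial_t^*\bar{\partial}_t\alpha\|^2
 + \|\bar{\partial}^*_t\partial_t\alpha\|^2
 + \|\bar{\partial}_t\alpha\|^2
 + \|\partial_t\alpha\|^2,
% so E_t\alpha = 0 forces \partial_t\alpha = 0, \bar{\partial}_t\alpha = 0 and
% \bar{\partial}^*_t\partial_t^*\alpha = 0; the remaining three terms then
% vanish automatically.
```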
As in \cite[Proposition 7]{Kod-Spe-III}, one now shows that \begin{align*} \ker_{L^2}(d)\cap H^k_{\delta}(\Lambda^{p,q}_tM)=\partial_t\bar{\partial}_t(H^{k+2}_{\delta+2}(\Lambda^{p-1,q-1}_tM))\oplus \ker_{L^2}(E_t)\cap H^k_{\delta}(\Lambda^{p,q}_tM) \end{align*} is an $L^2(g_t)$-orthogonal decomposition. The dimension of $\ker_{L^2}(E_t)\cap H^k_{\delta}(\Lambda^{1,1}_tM)$ is constant for small $t$, which implies that $G_t$ depends smoothly on $t$. The proof of this fact is exactly as in \cite[Proposition 8]{Kod-Spe-III}. Now observe that $E_t\omega_t\in H^{k-4}_{\delta-4}(\Lambda^{1,1}_tM)$ if $\omega_t$ and $J_t$ are $H^k_{\delta}$-close to $\omega_0$ and $J_0$, respectively. This allows us to define \begin{align*} \tilde{\omega}_t&:=\omega_t-G_tE_t\omega_t+\partial_t\bar{\partial}_tu_t =(1-G_tE_t)\Pi^{1,1}_t\omega_0+\partial_t\bar{\partial}_tu_t, \end{align*} where $u_t\in H^{k+2}_{\delta+2}(M)$ is a smooth family of functions such that $u_0=0$ which will be defined later. Clearly, $$\bar{\omega}_t:=\omega_t-G_tE_t\omega_t\in \ker E_t.$$ As $\bar{\omega}_t$ is $H^k_{\delta}$-close to $\omega_0$, $\nabla^{g_t}\bar{\omega}_t\in H^{k-1}_{\delta-1}(g_t)$, since $\omega_0$ is $g_0$-parallel. Therefore, $\nabla^{g_t}\bar{\omega}_t=\textit{O}(r^{-\alpha-1})$ and $\nabla^{g_t,2}\bar{\omega}_t=\textit{O}(r^{-\alpha-2})$ for any $\alpha<-\delta$. Thus, if we choose the nonexceptional value $\delta$ so that $\delta<-n+2$, integration by parts implies that \begin{align*} \left\|\partial_t \bar{\omega}_t\right\|^2_{L^2(g_t)}+ \left\|\bar{\partial}_t\bar{\omega}_t \right\|^2_{L^2(g_t)} \leq (E_t\bar{\omega}_t,\bar{\omega}_t)_{L^2(g_t)}=0. \end{align*} Therefore, $\bar{\omega}_t$, and hence also $\tilde{\omega}_t$, is closed.
Differentiating at $t=0$ yields \begin{align*} \tilde{\omega}'_0=(1-G_0E_0)\omega'_0-G_0(E_0'\omega_0)+\partial_0\bar{\partial}_0u'_0=\omega'_0-G_0(E_0'\omega_0)+\partial_0\bar{\partial}_0u'_0. \end{align*} Because $d\tilde{\omega}_t=0$, we have $d\tilde{\omega}'_0=0$ and since $J_0'$ is an infinitesimal complex deformation, $E_0\omega'_0=0$ and $d\omega'_0=0$, which implies that $$G_0(E_0'\omega_0)\in \ker_{L^2}(E_0)^{\perp}\cap\ker_{L^2}(d)\cap H^{k}_{\delta}(\Lambda^{1,1}_0M)=\partial_0\bar{\partial}_0(H^{k+2}_{\delta+2}(M)).$$ Let now $v\in H^{k+2}_{\delta+2}(M)$ be such that $\partial_0\bar{\partial}_0v=G_0(E_0'\omega_0).$ Then, define $u_t\in H^{k+2}_{\delta+2}(M)$ by $$u_t:=tv.$$ By this choice, $\tilde{\omega}_0'=\omega'_0$ and the assertion for $d\Phi_{J_0}(J'_0)=\tilde{g}_0'$ follows immediately. Finally, $\tilde{g}_t(X,Y):=\tilde{\omega}_t(X,J_tY)$ is a Riemannian metric for $t$ small enough and it is K\"{a}hler with respect to $J_t$. \end{proof} \begin{rk} Let $J_t$ be a smooth family of complex structures in $\mathcal{J}^k_{\delta}\cap \mathcal{U}$ and $g_t=\Phi(J_t)$. Then the construction in the proof above shows that $I=J'_0$ and $h=g'_0$ are related by \begin{align*} h(JX,Y)=-\frac{1}{2}(g(X,IY)+g(IX,Y)). \end{align*} \end{rk} \noindent Before we state the next theorem, recall the notation $\mathcal{G}_{\delta}^k$ we used in Proposition \ref{slice}. \begin{theo}\label{CY-ALE-Int} Let $(M,g_0,J_0)$ be an ALE Calabi-Yau manifold and $\delta\in (1-n,2-n)$ nonexceptional. Then for any $h\in\ker_{L^2}(L_{g_0})$, there exists a smooth family $g(t)$ of Ricci-flat metrics in $\mathcal{G}^k_{\delta}$ with $g(0)=g_0$ and $g'_0=h$. Each metric $g(t)$ is ALE and K\"{a}hler with respect to some complex structure $J(t)$ which is $H^k_{\delta}$-close to $J_0$. In particular, $g_0$ is integrable. \end{theo} \begin{proof} We proceed similarly as in \cite[Chapter 12]{Besse}, except that we use weighted Sobolev spaces.
Given a complex structure $J$ close to $J_0$ and a $J$-$(1,1)$-form $\omega$ which is $H^{k}_{\delta}$-close to $\omega_0$, we seek a Ricci-flat metric in the cohomology class $[\omega]\in \mathcal{H}^{1,1}_J(M)$. As the first Chern class vanishes, there exists a function $f_{\omega}\in H^{k}_{\delta}(M)$ such that $i\partial\bar{\partial}f_{\omega}$ is the Ricci form of $\omega$. If $\bar{\omega}\in [\omega]$ and $\bar{\omega}-\omega\in H^{k}_{\delta}(\Lambda^{1,1}_{J}M)$, the $\partial\bar{\partial}$-lemma implies that there is a $u\in H^{k+2}_{\delta+2}(M)$ such that $\bar{\omega}=\omega+i\cdot \partial\bar{\partial}u$. Ricci-flatness of $\bar{\omega}$ is now equivalent to the condition \begin{align*} f_{\omega}=\log\frac{(\omega+i\partial\bar{\partial}u)^n}{\omega^n}=:Cal(\omega,u). \end{align*} Let $\mathcal{J}^k_{\delta}$ be as above and let $\Delta_J$ be the Dolbeault Laplacian of $J$ and the metric $g(J)$. Then all the $(L^2_{\delta})$-cohomologies $\mathcal{H}^{1,1}_{J,\delta}(M)=\ker_{L^2_{\delta}}(\Delta_J)\cap L^2_{\delta}(\Lambda^{1,1}M)$ are isomorphic for $J\in\mathcal{J}^k_{\delta}$ if $\mathcal{J}^k_{\delta}$ is small enough: We have $\mathcal{H}^2_{\delta}(M)=\mathcal{H}^{2,0}_{J,\delta}(M)\oplus \mathcal{H}^{1,1}_{J,\delta}(M)\oplus \mathcal{H}^{0,2}_{J,\delta}(M)$. The left hand side is independent of $J$ and the metric $g(J)$ is provided by Proposition \ref{kahler-def}. The spaces on the right hand side are kernels of $J$-dependent elliptic operators whose dimension depends upper-semicontinuously on $J$. However, the sum of the dimensions is constant and so the dimensions must be constant as well. Thus, there is a natural projection $pr_{J}:\ker_{L^2}(\Delta_{J_0})\to\ker_{L^2}(\Delta_{J})$ which is an isomorphism.
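For the implicit function theorem argument, note that the differential of $Cal$ in $u$ at $(\omega,0)$ is the Laplacian; this is the standard linearization of the Monge--Amp\`ere operator (our one-line sketch, with the sign convention $n\,i\partial\bar{\partial}u\wedge\omega^{n-1}=\Delta_{\omega}u\,\omega^n$):

```latex
\frac{d}{ds}\Big|_{s=0}Cal(\omega,su)
  = \frac{d}{ds}\Big|_{s=0}\log\frac{(\omega+is\,\partial\bar{\partial}u)^n}{\omega^n}
  = \frac{n\,i\partial\bar{\partial}u\wedge\omega^{n-1}}{\omega^n}
  = \Delta_{\omega}u.
```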
We now want to apply the implicit function theorem to the map \begin{align*} G: \mathcal{J}^k_{\delta}\times \mathcal{H}^{1,1}_{J_0,\delta}(M)\times H^{k}_{\delta}(M)&\to H^{k-2}_{\delta-2}(M)\\ (J,\kappa,u)&\mapsto Cal(\omega(J)+pr_J(\kappa),u)-f_{\omega(J)+pr_J(\kappa)}, \end{align*} where $\omega(J)(X,Y):=g(J)(JX,Y)$ and $g(J)$ is the metric constructed in Proposition \ref{kahler-def}. We have $G(J_0,0,0)=0$ and the differential restricted to the third component is just given by $\Delta:H^{k}_{\delta}(M)\to H^{k-2}_{\delta-2}(M)$, which is an isomorphism \cite[p.\ 328]{Besse}. Therefore, we find a map $\Psi$ such that $G(J,\kappa, \Psi(J,\kappa))=0$. Let now $h\in\ker_{L^2}(L_{g_0})$ and let $h=h_H+h_A$ be its decomposition into a $J_0$-hermitian and a $J_0$-antihermitian part. We want to show that $h$ is tangent to a family of Ricci-flat metrics. We have seen in Theorem \ref{ell-reg-prop} that $h\in H^k_{\delta}(S^2T^*M)$ for all $\delta>1-n$ and we can define $I\in H^k_{\delta}(T^*M\otimes TM)$ and $\kappa\in H^k_{\delta}(\Lambda^{1,1}_{J_0}M)$ by \begin{align}\label{herm+antiherm} g(X,IY)=-h_A(X,J_0Y),\qquad \kappa(X,Y)=h_H(J_0X,Y). \end{align} It is easily seen that $I$ is a symmetric endomorphism satisfying $IJ_0+J_0I=0$ and thus can be viewed as $I\in H^k_{\delta}(\Lambda^{0,1}M\otimes T^{1,0}M)$. Moreover, because $h_A$ is a $TT$-tensor, $\bar{\partial} I=0$ and $\bar{\partial}^*I=0$. In addition, $\kappa\in \mathcal{H}^{1,1}_{J_0}(M)$. The proof of these facts is as in \cite{Koiso-Cx-Str}. Let $J(t)=\Theta(t\cdot I)$ be a family of complex structures tangent to $I$ and $\tilde{\omega}(t)=\tilde{\Phi}(J(t))$ be the associated family of K\"{a}hler forms. We consider the family $\omega(t)=\tilde{\omega}(t)+pr_{J(t)}(t\cdot \kappa)+i\partial\bar{\partial} \Psi(J(t),t\cdot\kappa)$ and the associated family of Ricci-flat metrics $\tilde{g}(t)(X,Y)=\omega(t)(X,J(t)Y)$. It is straightforward to check that $\tilde{g}'(0)=h$.
By Proposition \ref{slice}, there exist diffeomorphisms $\varphi_t$ with $\varphi_0=id$ such that $g(t)=\varphi_t^*\tilde{g}(t)\in \mathcal{G}^k_{\delta}$. We obtain $g'(0)=h+\Li_{X}g_0$ for some $X\in H^{k+1}_{\delta+1}(TM)$. Since $h$ is a TT-tensor due to Lemma \ref{tttensors}, $h\in T_{g_0}\mathcal{G}_{\delta}^{k}$. On the other hand, $g'(0)\in T_{g_0}\mathcal{G}_{\delta}^{k}$ as well, which implies that $g'(0)=h$ due to the decomposition in Proposition \ref{slice}. By Theorem \ref{analyticset}, the set of stationary solutions of the Ricci-DeTurck flow $\mathcal{F}$ close to $g_0$ is an analytic set contained in a finite-dimensional manifold $\mathcal{Z}$ with $T_{g_0}\mathcal{Z}=\ker_{L^2}(L_{g_0})$. The above construction provides a smooth map $\Xi:\ker_{L^2}(L_{g_0})\supset \mathcal{U}\to\mathcal{F}\subset\mathcal{Z}$ whose tangent map is the identity. Therefore, there exists an $L^2\cap L^{\infty}$-neighbourhood $\mathcal{U}'$ of $g_0$ in the space of metrics such that $\mathcal{F}\cap\mathcal{U}'=\mathcal{Z}\cap\mathcal{U}'$. \end{proof} Let $h\in C^{\infty}(S^2T^*M)$ and let $h_H,h_A$ be its hermitian and anti-hermitian parts, respectively. The hermitian and anti-hermitian parts are preserved by $L_{g_0}$. Let $I=I(h_A)$ and $\kappa=\kappa(h_H)$ be defined as in \eqref{herm+antiherm}. Then we have the relations $I(L(h_A))=\Delta_C(I(h_A))$ and $\kappa(L(h_H) )=\Delta_H(\kappa(h_H))$, where $\Delta_C$ and $ \Delta_H$ are the complex Laplacian and the Hodge Laplacian acting on $C^{\infty}(\Lambda^{0,1}M\otimes T^{1,0}M)$ and $C^{\infty}(\Lambda^{1,1}_{J_0}M)$, respectively. For details see \cite{Koiso-Cx-Str} and \cite[Chap. $12$]{Besse}. As a consequence, we get \begin{theo}[Koiso] If $(M,g_0,J_0)$ is an ALE Ricci-flat K\"{a}hler manifold, it is linearly stable. \end{theo} \section{Ricci flow}\label{RF-Sec} Our main result of this section is the following \begin{theo}\label{main-theo} Let $(M^n,g_0)$ be an ALE Ricci-flat manifold.
Assume it is linearly stable and integrable. Then for every $\epsilon>0$, there exists a $\delta>0$ such that the following holds: For any metric $g\in \mathcal{B}_{L^2\cap L^{\infty}}(g_0,\delta)$, there is a complete Ricci-DeTurck flow $(M^n,g(t))_{t\geq 0}$ starting from $g$ and converging to an ALE Ricci-flat metric $g_{\infty}\in \mathcal{B}_{L^2\cap L^{\infty}}(g_0,\epsilon)$. \end{theo} \subsection{An expansion of the Ricci flow}\label{sec-equ-flo} Let us fix an ALE Ricci-flat manifold $(M^n,g_0)$ once and for all. Recall the definition of the Ricci flow \[ \left\{ \begin{array}{rl} &\partial_tg=-2\Ric(g(t)), \quad\mbox{on}\quad M\times (0,+\infty),\\ &\\ & g(0)=g_0+h, \end{array} \right. \] where $h$ is a symmetric $2$-tensor on $M$ (denoted by $h\in S^2T^*M$) such that $g(0)$ is a metric. The Ricci-DeTurck flow is given by \[ \left\{ \begin{array}{rl} &\partial_tg=-2\Ric(g(t))+\Li_{V(g(t),g_0)}(g(t)), \quad\mbox{on}\quad M\times (0,+\infty),\\ &\\ & g(0)=g_0+h, \end{array} \right. \] where $V(g(t),g_0)$ is a vector field defined locally by $V^k=g^{ij}(\Gamma(g)_{ij}^k-\Gamma(g_0)_{ij}^k)$ and globally by \begin{eqnarray}\label{de-turck-vect} g_0(V(g(t),g_0),.):=-\div_{g(t)}g_0+\frac{1}{2}\nabla^{g(t)}\tr_{g(t)}g_0. \end{eqnarray} Following \cite[Lemma 2.1]{Shi-Def}, the Ricci-DeTurck flow can be written in coordinates as \begin{eqnarray*} \partial_tg_{ij}&=&g^{ab}\nabla^{g_0,2}_{ab}g_{ij}-g^{kl}g_{ip}\Rm(g_0)_{jklp}-g^{kl}g_{jp}\Rm(g_0)_{iklp}\\ &&+g^{ab}g^{pq}\left(\frac{1}{2}\nabla^{g_0}_ig_{pa}\nabla^{g_0}_jg_{qb}+\nabla^{g_0}_ag_{jp}\nabla^{g_0}_qg_{ib}\right)\\ &&-g^{ab}g^{pq}\left(\nabla^{g_0}_ag_{jp}\nabla^{g_0}_bg_{iq}-\nabla^{g_0}_jg_{pa}\nabla^{g_0}_bg_{iq}-\nabla^{g_0}_ig_{pa}\nabla^{g_0}_bg_{jq}\right). \end{eqnarray*} For our purposes, we calculate a different expansion: Let $\bar{g}$ and $g$ be two Riemannian metrics on a given manifold and set $h:=g-\bar{g}$.
Then a careful computation shows that in local coordinates, \begin{align*} 2(\mathrm{Ric}(g)_{ij}-\mathrm{Ric}(\bar{g})_{ij})&=-(L_{\bar{g}} h)_{ij}+\bar{g}^{uv}(\nabla^{\bar{g},2}_{iu}h_{jv}+\nabla^{\bar{g},2}_{ju}h_{iv}-\nabla^{\bar{g},2}_{ij}h_{uv})\\ &\quad +(g^{uv}-\bar{g}^{uv})(\nabla^{\bar{g},2}_{ui}h_{jv}+\nabla^{\bar{g},2}_{uj}h_{iv}-\nabla^{\bar{g},2}_{uv}h_{ij}-\nabla^{\bar{g},2}_{ij}h_{uv})\\ &\quad+g^{uv}g^{pq}(\nabla^{\bar{g}}_uh_{pi}\nabla^{\bar{g}}_vh_{qj}-\nabla^{\bar{g}}_ph_{ui}\nabla^{\bar{g}}_vh_{qj}+\frac{1}{2}\nabla^{\bar{g}}_ih_{up}\nabla^{\bar{g}}_jh_{vq})\\ &\quad+g^{uv}(-\nabla^{\bar{g}}_uh_{vp}+\frac{1}{2}\nabla^{\bar{g}}_ph_{uv})g^{pq}(\nabla^{\bar{g}}_ih_{qj}+\nabla^{\bar{g}}_jh_{qi}-\nabla^{\bar{g}}_qh_{ij}). \end{align*} where $g^{uv},\bar{g}^{uv}$ are the inverse matrices of $g_{uv},\bar{g}_{uv}$, respectively. For a calculation, see for instance \cite[p. 15]{Bam-Phd}. Furthermore, if a background metric $g_0$ is fixed and if $V=V(g,g_0)$ is defined as above, then we have the expansion \begin{align*} V(g&,g_0)^k-{V}(\bar{g},g_0)^k=\frac{1}{2}\bar{g}^{ij}\bar{g}^{kl}(\nabla^{\bar{g}}_ih_{jl}+\nabla^{\bar{g}}_jh_{il}-\nabla^{\bar{g}}_lh_{ij}) -h_{pq}\bar{g}^{pi}\bar{g}^{qj}(\Gamma(\bar{g})_{ij}^k-\Gamma(g_0)_{ij}^k) \\ &\quad+\frac{1}{2}\bar{g}^{ij}(g^{kl}-\bar{g}^{kl})(\nabla^{\bar{g}}_ih_{jl}+\nabla^{\bar{g}}_jh_{il}-\nabla^{\bar{g}}_lh_{ij})+\frac{1}{2}\bar{g}^{kl}(g^{ij}-\bar{g}^{ij})(\nabla^{\bar{g}}_ih_{jl}+\nabla^{\bar{g}}_jh_{il}-\nabla^{\bar{g}}_lh_{ij}) \\&\quad+\frac{1}{2}(g^{ij}-\bar{g}^{ij})(g^{kl}-\bar{g}^{kl})(\nabla^{\bar{g}}_ih_{jl}+\nabla^{\bar{g}}_jh_{il}-\nabla^{\bar{g}}_lh_{ij})-h_{pq}(g^{pi}-\bar{g}^{pi})\bar{g}^{qj}(\Gamma(\bar{g})_{ij}^k-\Gamma(g_0)_{ij}^k). 
\end{align*} Thus for $V=V(g,g_0)$ and $\bar{V}=V(\bar{g},g_0)$, we have \begin{align*} \Li_Vg_{ij}-\Li_{\bar{V}}\bar{g}_{ij}&=\Li_{V}\bar{g}_{ij}+\Li_{V}h_{ij}-\Li_{\bar{V}}\bar{g}_{ij}\\ &=\nabla^{\bar{g}}_iV_j+\nabla^{\bar{g}}_jV_i+V^k\nabla^{\bar{g}}_kh_{ij}+\nabla^{\bar{g}}_iV^kh_{kj}+\nabla^{\bar{g}}_jV^kh_{ik}-\Li_{\bar{V}}\bar{g}_{ij}. \end{align*} Now if $\bar{g}$ is a Ricci-flat metric that additionally satisfies $\bar{V}=0$, we can write the Ricci-DeTurck flow as an evolution of the difference $h(t):=g(t)-\bar{g}$ for which we get \begin{equation}\begin{split}\label{rdt-expansion} \partial_t h=\partial_tg&=-2\mathrm{Ric}_g+2\mathrm{Ric}_{\bar{g}}+\Li_{V(g,g_0)}g-\Li_{V(\bar{g},g_0)}\bar{g}\\ &=L_{\bar{g}}h-\Li_{\langle h,\Gamma(\bar{g})-\Gamma(g_0)\rangle}\bar{g}+F*\nabla^{\bar{g}} h*\nabla^{\bar{g}} h+\nabla^{\bar{g}}(G*h*\nabla^{\bar{g}} h), \end{split} \end{equation} where $\langle h,\Gamma(\bar{g})-\Gamma(g_0)\rangle^k=h_{pq}\bar{g}^{pi}\bar{g}^{qj}(\Gamma(\bar{g})_{ij}^k-\Gamma(g_0)_{ij}^k)$ and $*$ denotes a linear combination of tensor products and contractions with respect to the metric $\bar{g}$. The tensors $F$ and $G$ depend on $g^{-1}$ and $\Gamma(g_0)$. \subsection{Short-time estimates and an extension criterion}\label{Short-time} In this subsection we recall the short-time estimates of $C^k$-norms and an extension criterion for the Ricci-DeTurck flow. In addition, we prove some new Shi-type estimates for $L^2$-type Sobolev norms. For the sake of simplicity, all covariant derivatives and norms in this subsection are taken with respect to $g_0$. \begin{lemma}[A priori short-time $C^k$-estimates]\label{Ck-estimate}Let $(M,g_0)$ be a complete Ricci-flat manifold of bounded curvature. 
Then there exist constants $\epsilon>0$ and $\tau\geq 1$ such that if $g(0)$ is a metric satisfying \[\left\|g(0)-g_0\right\|_{L^{\infty}}<\epsilon, \] there exists a Ricci-DeTurck flow $(g(t))_{t\in[0,\tau]}$ with initial metric $g(0)$ which satisfies the estimates \[\left\|\nabla^k( g(t)-g_0)\right\|_{L^{\infty}}<C(k,\tau)t^{-k/2}\left\|g(0)-g_0\right\|_{L^{\infty}},\quad \forall k\in \mathbb{N}_0,\quad t\in (0,\tau]. \] Moreover, $(g(t))_{t\in[0,\tau)}$ is the unique Ricci-DeTurck flow starting at $g(0)$ which satisfies \[\left\| g(t)-g_0\right\|_{L^{\infty}}<C(0,\tau)\left\|g(0)-g_0\right\|_{L^{\infty}}. \] In particular, this implies the following: if $(g(t))_{t\in[0,\infty)}$ is a Ricci-DeTurck flow which stays in $\mathcal{B}_{L^{\infty}}(g_0,\epsilon)$ for all time, then there exist constants such that \[\left\|\nabla^k( g(t)-g_0)\right\|_{L^{\infty}}<C(k)\epsilon, \qquad \forall k\in \mathbb{N},\quad t\in [1,\infty). \] \end{lemma} \begin{proof} The same statement is given in \cite[Proposition 2.8]{Bam} for the case of negative Einstein metrics. The proof is standard and translates easily to the present situation. For more details, see e.g. \cite[Section 3.7]{Bam-Phd}. \end{proof} \begin{lemma}[A priori short-time $L^2$-estimate]\label{lemma-short-time-L2} Let $(M,g_0)$ be an ALE Ricci-flat manifold. Then there exists an $\epsilon=\epsilon(n,g_0)>0$ with the following property: Suppose that $(g(t))_{t\in[0,T_{max})}$ is a Ricci-DeTurck flow such that $h(t)=g(t)-g_0$ satisfies $\|h(t)\|_{L^{\infty}}<\epsilon$ for all $t\in [0,T_{max})$ and $\|h(0)\|_{L^2}<\infty$. Then, $\|h(t)\|_{L^2}<\infty$ for all $t\in (0,T_{max})$ and there exists a constant $C=C(n,g_0)$ such that \begin{align*} \|h(t)\|_{L^2}\leq e^{Ct}\cdot\|h(0)\|_{L^2}\qquad \forall t\in (0,T_{max}).
\end{align*} \end{lemma} \begin{proof} By \eqref{rdt-expansion}, we can rewrite the Ricci-DeTurck flow with gauge $g_0$ in the schematic form \begin{equation}\begin{split}\label{rdt-expansion2} \partial_t h=\Delta h+Rm*h+F*\nabla h*\nabla h+\nabla(G*h*\nabla h). \end{split} \end{equation} For each $R>0$, let $\eta_R:[0,\infty)\to[0,1]$ be a function such that $\eta_R(r)=1$ for $r\leq R$, $\eta_R(r)=0$ for $r\geq 2R$ and $|\nabla\eta_R|\leq 2/R$. For $x\in M$, let $\phi_{R,x}(y)=\eta_R( d(x,y))$. Then $\phi_{R,x}\equiv 1$ on $B_R(x)$, $\phi_{R,x}\equiv 0$ on $M\setminus B_{2R}(x)$ and $|\nabla\phi_{R,x}|\leq 2/R$. By \eqref{rdt-expansion2}, we obtain \begin{align*} \partial_t \int_M |h|^2\phi^2 d\mu&\leq 2\int_M\langle \Delta h,\phi^2h\rangle d\mu+ C\|Rm\|_{L^{\infty}}\int_M |h|^2\phi^2d\mu\\&+C\|h\|_{L^{\infty}}\int_M |\nabla h|^2\phi^2 d\mu+\int_M \langle \nabla(G*h*\nabla h), h\rangle \phi^2 d\mu\\ &\leq -2\int_M |\nabla h|^2\phi^2 d\mu + C\int_M |\nabla h||h||\nabla\phi|\phi d\mu\\ & +C(g_0)\int_M |h|^2\phi^2d\mu+C\|h\|_{L^{\infty}}\int_M |\nabla h|^2\phi^2 d\mu\\ &\leq (-2+C\epsilon+C\delta )\int_M |\nabla h|^2\phi^2 d\mu+C(g_0)\int_M |h|^2\phi^2d\mu+\frac{C}{\delta}\int_M |h|^2|\nabla \phi|^2d\mu\\ &\leq (C(g_0)+\frac{2C}{\delta R^2})\int_{B_{2R}(x)} |h|^2d\mu \end{align*} for an appropriate choice of $\delta$. Define \begin{align*} A(t,R)=\sup_{x\in M}\int_M |h(t)|^2\phi_{R,x}^2d\mu. \end{align*} As $(M,g_0)$ is ALE, there exists a constant $N=N(n)$ such that each ball on $M$ of radius $2R$ can be covered by $N$ balls of radius $R$. Thus, by integration in time, \begin{align*} \int_M |h(t)|^2\phi_{R,x}^2 d\mu&\leq \int_M |h(0)|^2\phi_{R,x}^2 d\mu+(C(g_0)+\frac{2C}{\delta R^2})\int_0^t\int_{B_{2R}(x)} |h(s)|^2d\mu ds \\ &\leq \int_M |h(0)|^2\phi_{R,x}^2 d\mu+N(C(g_0)+\frac{2C}{\delta R^2})\int_0^t A(s,R)ds.
\end{align*} Consequently, \begin{align*} A(t,R)\leq A(0,R)+ N(C(g_0)+\frac{2C}{\delta R^2})\int_0^t A(s,R)ds \end{align*} and by the Gronwall inequality, \begin{align*} A(t,R)\leq A(0,R)\cdot \exp\left(N(C(g_0)+\frac{2C}{\delta R^2})t\right). \end{align*} The assertion follows from letting $R\to\infty$. \end{proof} \begin{lemma}[A priori short-time $H^k$-estimates]\label{Hk-estimate} Let $(M,g_0)$ be an ALE Ricci-flat manifold. Then there exists an $\epsilon=\epsilon(n,g_0)>0$ with the following property: Suppose that $(g(t))_{t\in[0,T_{max})}$ is a Ricci-DeTurck flow such that $h(t)=g(t)-g_0$ satisfies \begin{eqnarray*} \|h(t)\|_{L^{\infty}}<\epsilon,\quad \forall t\in [0,T_{max}). \end{eqnarray*} Then for each $T\in (0,T_{max})$ and $k\in\mathbb{N}$ there exist constants $C_k=C_k(n,g_0,T)$ such that if $\|h(t)\|_{L^2}\leq K$ for all $t\in [0,T]$, we get \begin{align*} \|\nabla^kh(t)\|_{L^2}\leq C_k\cdot t^{-k/2}\cdot K\qquad \forall t\in (0,T]. \end{align*} In particular, if $(g(t))_{t\in[0,T_{max})}$ is a Ricci-DeTurck flow satisfying $\|h(t)\|_{L^{\infty}}<\epsilon$ and $\|h(t)\|_{L^2}<K$ as long as $t\in [0,T_{max})$, then there exist constants $C_k=C_k(n,g_0)$ such that \begin{align*} \|\nabla^k h(t)\|_{L^2}\leq C_k\cdot K\qquad \forall k\in\mathbb{N}\qquad\forall t\in [1,T_{max}). \end{align*} \end{lemma} \begin{proof} The proof follows from a delicate argument involving a sequence of cutoff functions. By differentiating \eqref{rdt-expansion2}, we get \begin{align*} \partial_t \nabla^kh&=\nabla^k\Delta h+\nabla^k(Rm*h)+\nabla^k(F*\nabla h*\nabla h)+\nabla^{k+1}(G*h*\nabla h)\\ &=\Delta\nabla^k h+\sum_{l=0}^k\nabla^lRm*\nabla^{k-l}h+\sum_{\substack{0\leq l_1,l_2,l_3\leq k\\l_1+l_2+l_3=k}}\nabla^{l_1}F*\nabla^{l_2+1}h*\nabla^{l_3+1}h\\ &\quad+\nabla\left(\sum_{\substack{0\leq l_1,l_2,l_3\leq k\\l_1+l_2+l_3=k}}\nabla^{l_1}G*\nabla^{l_2} h*\nabla^{l_3+1}h\right). \end{align*} Let $\phi$ be a cutoff function as in the proof of Lemma \ref{lemma-short-time-L2}.
Then, \begin{align*} \partial_t\int_M |\nabla^kh|^2\phi^2 d\mu&\leq 2 \int_M\langle \Delta \nabla^k h,\nabla^k h\rangle \phi^2d\mu+C\sum_{l=0}^k \|\nabla^lRm\|_{L^{\infty}}\int_M |\nabla^{k-l}h||\nabla^lh|\phi^2d\mu\\ &\quad+\sum_{\substack{0\leq l_1,l_2,l_3\leq k\\l_1+l_2+l_3=k}}\int_M\langle\nabla^{l_1}F*\nabla^{l_2+1}h*\nabla^{l_3+1}h,\nabla^kh\rangle\phi^2d\mu\\ &\quad+\int_M\langle\nabla(\sum_{\substack{0\leq l_1,l_2,l_3\leq k\\l_1+l_2+l_3=k}}\nabla^{l_1}G*\nabla^{l_2} h*\nabla^{l_3+1}h),\nabla^kh\rangle\phi^2d\mu. \end{align*} Let us consider each of these terms separately. We get \begin{align*} 2 \int_M\langle \Delta \nabla^k h,\nabla^k h\rangle \phi^2d\mu&\leq-2\int_M |\nabla^{k+1}h|^2\phi^2d\mu+2\int_M |\nabla^{k+1}h||\nabla^kh||\nabla\phi|\phi d\mu\\ &\leq(-2+\delta)\int_M |\nabla^{k+1}h|^2\phi^2d\mu+\frac{1}{\delta}\int_M |\nabla^k h|^2|\nabla\phi|^2d\mu \end{align*} and \begin{align*} C\sum_{l=0}^k \|\nabla^lRm\|_{L^{\infty}}\int_M |\nabla^{k-l}h||\nabla^lh|\phi^2d\mu \leq C\sum_{l=0}^k \int_M |\nabla^lh|^2\phi^2d\mu. \end{align*} In the estimates of the higher order terms, we use the property $\|\nabla^k h\|_{L^{\infty}}\leq C_k t^{-k/2}\|h\|_{L^{\infty}}$, which also implies $\|\nabla^k F\|_{L^{\infty}}\leq C\cdot t^{-k/2}$ and $\|\nabla^k G\|_{L^{\infty}}\leq C\cdot t^{-k/2}$ for $t\in (0,T]$ and $k\in\mathbb{N}$.
\begin{align*} \sum_{\substack{0\leq l_1,l_2,l_3\leq k\\l_1+l_2+l_3=k}}&\int_M\langle\nabla^{l_1}F*\nabla^{l_2+1}h*\nabla^{l_3+1}h,\nabla^kh\rangle\phi^2d\mu \\&\leq C \sum_{0\leq l\leq i\leq k}\int_M|\nabla^{k-i}F| |\nabla^{l+1}h||\nabla^{i-l+1}h||\nabla^kh|\phi^2d\mu\\ &\leq C\cdot t\sum_{0\leq l\leq i\leq k}\int_M|\nabla^{k-i}F|^2 |\nabla^{l+1}h|^2|\nabla^{i-l+1}h|^2\phi^2d\mu +C t^{-1}\int_M |\nabla^k h|^2\phi^2d\mu\\ &\leq C\|h\|_{L^{\infty}}\sum_{l=0}^kt^{-k+l}\int_M |\nabla^{l+1}h|^2\phi^2d\mu +C t^{-1}\int_M |\nabla^k h|^2\phi^2d\mu \\ &\leq C\cdot \epsilon \int_M |\nabla^{k+1} h|^2\phi^2d\mu+C\sum_{l=1}^kt^{-k+l-1}\int_M |\nabla^l h|^2\phi^2d\mu. \end{align*} Similarly, \begin{align*} \int_M&\langle\nabla(\sum_{\substack{0\leq l_1,l_2,l_3\leq k\\l_1+l_2+l_3=k}}\nabla^{l_1}G*\nabla^{l_2} h*\nabla^{l_3+1}h),\nabla^kh\rangle\phi^2d\mu\\ &\leq\sum_{\substack{0\leq l_1,l_2,l_3\leq k\\l_1+l_2+l_3=k}}\int_M\langle\nabla^{l_1}G*\nabla^{l_2} h*\nabla^{l_3+1}h,\nabla^kh\rangle\phi^2d\mu\\&\quad+ \sum_{\substack{0\leq l_1,l_2,l_3\leq k\\l_1+l_2+l_3=k}}\int_M|\nabla^{l_1}G||\nabla^{l_2} h||\nabla^{l_3+1}h||\nabla^kh||\nabla\phi|\phi d\mu\\ &\leq \delta\int_M |\nabla^{k+1}h|^2\phi^2d\mu+\frac{C}{\delta}\int_M|\nabla^kh|^2|\nabla\phi|^2d\mu\\ &\quad+C(\left\|h\right\|_{L^{\infty}}+\left\|\nabla h\right\|_{L^{\infty}}^2)\int_M |\nabla^kh|^2\phi^2d\mu +\frac{C}{\delta}\sum_{0\leq l\leq i\leq k-1} \int_M |\nabla^{k-i}G||\nabla^{l+1}h|^2|\nabla^{i-l}h|^2\phi^2d\mu\\ &\leq \delta\int_M |\nabla^{k+1}h|^2\phi^2d\mu+C\int_M|\nabla^kh|^2|\nabla\phi|^2d\mu+\frac{C}{\delta}\sum_{l=1}^k t^{-k+l-1}\int_M |\nabla^lh|^2\phi^2d\mu. \end{align*} Assuming that $\epsilon,\delta>0$ are small enough, we therefore get \begin{align*} \partial_t\int_M |\nabla^kh|^2\phi^2 d\mu&\leq -\int_M |\nabla^{k+1}h|^2\phi^2d\mu+C_k\int_M |\nabla^k h|^2|\nabla\phi|^2d\mu\\ &\quad+\tilde{C}_k\int_M |h|^2\phi^2d\mu + C_k\sum_{l=1}^k t^{-k+l-1}\int_M |\nabla^lh|^2\phi^2d\mu. 
\end{align*} In the following, let $x\in M$ and let $\phi_l:M\to [0,1]$, $0\leq l\leq k$, be a sequence of cutoff functions with the following properties: \begin{align*} \phi_l&\equiv 1\qquad \text{ on } B(x,(k+1-l)R),\\ \phi_l&\equiv 0\qquad \text{ on } M\setminus B(x,(k+2-l)R),\\ |\nabla \phi_l|&\leq 2/R. \end{align*} Obviously, we get $\phi_l\leq \phi_{l-1}$ for $1\leq l\leq k$ and, if $R\geq 2$, $|\nabla \phi_l|\leq \phi_{l-1}$ for $1\leq l\leq k$. We now define a function $F_k:[0,T]\to\mathbb{R}$ as \begin{align*} F_k(t)=\sum_{l=0}^kA_l\cdot t^l\int_M |\nabla^lh|^2\phi_l^2d\mu, \end{align*} where $A_l$ are some positive constants we will choose later. Then we can compute \begin{align*} \partial_tF_k&=\sum_{l=1}^kl\cdot A_lt^{l-1}\int_M|\nabla^lh|^2\phi_l^2d\mu+\sum_{l=0}^kA_lt^l\partial_t\int_M|\nabla^lh|^2\phi_l^2d\mu\\ &\leq \sum_{l=1}^kl\cdot A_lt^{l-1}\int_M|\nabla^lh|^2\phi_l^2d\mu-\sum_{l=0}^k A_lt^{l}\int_M|\nabla^{l+1}h|^2\phi_l^2d\mu+\sum_{l=0}^k\tilde{C}_lA_lt^l\int_M |h|^2\phi_l^2d\mu\\ &\quad+\sum_{l=0}^k C_l\cdot A_lt^{l}\int_M|\nabla^lh|^2|\nabla\phi_l|^2d\mu +\sum_{l=0}^kC_l A_l\sum_{i=1}^lt^{i-1}\int_M |\nabla^ih|^2\phi_l^2d\mu\\ &\leq\sum_{l=1}^k[lA_l-A_{l-1}+C_lA_lt+ \sum_{i=l}^kC_iA_i]\cdot t^{l-1}\int_{M}|\nabla^lh|^2\phi_{l-1}^2d\mu\\ &\quad + C_0\cdot A_0\int_M |h|^2|\nabla\phi_0|^2d\mu + \sum_{l=0}^k\tilde{C}_lA_lt^l\int_M |h|^2\phi_0^2d\mu. \end{align*} Note that we used the properties $\phi_l\leq\phi_{l-1}$ and $|\nabla\phi_l|\leq \phi_{l-1}$ in the above estimate. Now if we choose $A_k,A_{k-1},\ldots, A_0$ inductively such that \begin{align*} A_{l-1}\geq lA_l+CA_lt+ \sum_{i=l}^kC_iA_i \end{align*} for all $t\in[0,T]$, then \begin{align*} \partial_t F_k\leq C(g_0,k,T)\int_M |h|^2\phi_0^2d\mu\leq C(g_0,k,T)\int_M |h|^2d\mu, \end{align*} so that we get $F_k(t)\leq C(g_0,k,T)\cdot \sup_{t\in[0,T]} \int_M |h|^2d\mu$ for all $t\in [0,T]$. The result now follows from letting $R\to\infty$.
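For instance (this explicit choice is not made above, but it is one admissible option), since the right-hand side of this condition is nondecreasing in $t$, the constants can be defined by the backwards recursion
\begin{align*}
A_k:=1,\qquad A_{l-1}:=lA_l+CA_lT+\sum_{i=l}^kC_iA_i,\qquad l=k,k-1,\ldots,1,
\end{align*}
which then satisfies the required inequality for every $t\in[0,T]$.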
\end{proof} We conclude this section with a very general result due to \cite{Sch-Sch-Sim} giving a criterion for ensuring infinite-time existence. \begin{theo}[Criterion for infinite-time existence]\label{delta-max-sol} Let $(M^n,g_0)$ be a complete Riemannian manifold such that $\|\Rm(g_0)\|_{L^{\infty}}=:k_0<+\infty$. Then there exists a positive constant $\tilde{\delta}=\tilde{\delta}(n,k_0)$ such that the following holds. Let $0<\beta<\delta\leq\tilde{\delta}$. Then every metric $g(0)$ $\beta$-close to $g_0$ has a $\delta$-maximal solution $(g(t))_{t\in[0,T(g(0)))}$ with $T(g(0))$ positive and $\|g(t)-g_0\|_{L^{\infty}}<\delta$ for all $t\in[0,T(g(0)))$. The solution is $\delta$-maximal in the following sense. Either $T(g(0))=+\infty$ and $\|g(t)-g_0\|_{L^{\infty}}<\delta$ for any nonnegative time $t$, or we can extend $(g(t))_t$ to a solution on $M^n\times[0,T(g(0))+\tau)$ for some positive $\tau=\tau(n,k_0)$, and $\|g(T(g(0)))-g_0\|_{L^{\infty}}=\delta.$ \end{theo} \subsection{A local decomposition of the space of metrics}\label{deco-met-sec} In order to prove convergence of a Ricci-DeTurck flow $g(t)$ to a Ricci-flat metric $g_{\infty}$, we have to construct a family $g_0(t)$ of Ricci-flat reference metrics. For the proof of the main theorem, it is necessary to construct $g_0(t)$ in such a way that $\partial_t g_0= \textit{o}((g-g_0)^2)$. This section is devoted to this construction. For this purpose, let $\mathcal{F}$ again be given by \begin{align*} \mathcal{F}=\left\{g\in \mathcal{M}\mid -2\mathrm{Ric}_g+\Li_{V(g,g_0)}g=0 \right\}. \end{align*} If $g_0$ satisfies the integrability condition, then there exists an $L^2\cap L^{\infty}$-neighbourhood $\mathcal{U}$ of $g_0$ in the space of metrics such that $\widetilde{\mathcal{F}}:=\mathcal{U}\cap{\mathcal{F}}$ is a manifold and, for all $g\in\widetilde{\mathcal{F}}$, the equations $\mathrm{Ric}_g=0$ and $\Li_{V(g,g_0)}g=0$ hold individually by Proposition \ref{flatsoliton}.
Linearization of these two conditions shows that the tangent space $T_g\widetilde{\mathcal{F}}$ is given by the kernel of the map \begin{align*} L_{g,g_0}h=L_gh-\Li_{\langle h,\Gamma(g)-\Gamma(g_0)\rangle}g, \end{align*} where $\langle h,\Gamma(g)-\Gamma(g_0)\rangle^k=g^{pi}g^{qj}h_{pq}(\Gamma(g)_{ij}^k-\Gamma(g_0)_{ij}^k)$ and $L_g$ is the Lichnerowicz Laplacian of $g$. The next lemma ensures that the kernels $\ker L^*_{g,g_0}$ all have the same dimension when $g$ is an ALE Ricci-flat metric sufficiently close to $g_0$: \begin{lemma}\label{Fred-Prop-Lap-Mod} Let $(M^n,g_0)$ be a linearly stable ALE Ricci-flat manifold which is integrable. Furthermore, let $\widetilde{\mathcal{F}}$ be as above. Then there exists an $L^2\cap L^{\infty}$-neighbourhood $\mathcal{U}$ of $g_0$ in the space of metrics such that $\dim\ker_{L^2} L^*_{g,g_0}=\dim\ker_{L^2} L_{g_0}$ for all $g\in \widetilde{\mathcal{F}}$. \end{lemma} \begin{proof} First, we claim that elements in the kernel of $L_{g,g_0}$ have decay rate $-(n-1)$. Indeed, by the proof of Theorem $2.7$, if $h\in\ker_{L^2}L_{g,g_0}$, then $h=\textit{O}(r^{-\beta})$ with $\beta<n/2-1$. Now, we use the special algebraic structure of the operator $L_{g,g_0}$ by considering the divergence and the trace with respect to $g$ of $L_{g,g_0}h$: \begin{eqnarray*} 0&=&\div_g(L_{g,g_0}h)\\ &=&\Delta_g(\div_gh)-\nabla^g(\div_g(\langle h,\Gamma(g)-\Gamma(g_0)\rangle))-\Delta_g(\langle h,\Gamma(g)-\Gamma(g_0)\rangle),\\ 0&=&\Delta_g\tr_gh-2\div_g(\langle h,\Gamma(g)-\Gamma(g_0)\rangle), \end{eqnarray*} which implies the following relation: \begin{eqnarray*} \Delta_g\left(\div_gh-\frac{\nabla^g\tr_gh}{2}-\langle h,\Gamma(g)-\Gamma(g_0)\rangle\right)=0.
\end{eqnarray*} Since the vector field $\div_gh-\frac{\nabla^g\tr_gh}{2}-\langle h,\Gamma(g)-\Gamma(g_0)\rangle$ goes to $0$ at infinity, the maximum principle ensures that $$\div_gh-\frac{\nabla^g\tr_gh}{2}-\langle h,\Gamma(g)-\Gamma(g_0)\rangle=0.$$ Now, a reasoning analogous to the end of the proof of Theorem $\ref{ell-reg-prop}$ shows that $h=\textit{O}(r^{-(n-1)}).$ In particular, the previous claim implies that $$\ker_{L^2}(L_{g,g_0})=\ker_{L^2_{\delta}}(L_{g,g_0})=\ker_{H^k_{\delta}}(L_{g,g_0}),$$ where $\delta\in (-n+1,-n/2]$ is a nonexceptional weight and $k$ can be any natural number. Now, $L_{g_0}=L_{g_0,g_0}$ is Fredholm as a map from $H^k_{\delta}(S^2T^*M)$ to $H^{k-2}_{\delta-2}(S^2T^*M)$ with Fredholm index $0$. The same holds for $L_g$ with $g\in\mathcal{F}$ in a sufficiently small neighborhood of $g_0$. Observe that $L_{g,g_0}-L_g$ is a bounded operator as a map from $H^k_{\delta}(S^2T^*M)$ to $H^{k-2}_{\delta-2}(S^2T^*M)$ with arbitrarily small operator norm. Therefore, by the openness of the set of Fredholm operators with respect to the operator norm, $L_{g,g_0}$ has the same index as $L_{g_0,g_0}$, which is $0$. Therefore we get \begin{eqnarray*} 0&=&\dim(\ker_{L^2}(L_{g_0,g_0}))-\dim(\ker_{L^2}(L^*_{g_0,g_0}))\\ &=&\ind_{H^k_{\delta}}(L_{g_0,g_0})\\ &=&\ind_{H^k_{\delta}}(L_{g,g_0})\\ &=&\dim(\ker_{H^k_{\delta}}(L_{g,g_0}))-\dim(\ker_{H^k_{\delta}}(L^*_{g,g_0})). \end{eqnarray*} \end{proof} Now we claim that if $\mathcal{U}$ is small enough, every metric $g\in\mathcal{U}$ can be decomposed uniquely as $g=\bar{g}+h$, where $\bar{g}\in \widetilde{\mathcal{F}}$ and $h\in \overline{L_{\bar{g},g_0}(C^{\infty}_{0}(S^2T^{*}M))}$ (where the closure is taken with respect to $L^2\cap L^{\infty}$).
Indeed, this follows from the implicit function theorem applied to the map \begin{eqnarray*} \Phi:\widetilde{\mathcal{F}}\times \overline{L_{g_0}(C^{\infty}_{0}(S^2T^*M))}&\to& \mathcal{M},\\ (\bar{g},h)&\mapsto& \bar{g}+h-\sum_{i=1}^m(h,e_i(\bar{g}))_{L^2}e_i(\bar{g}), \end{eqnarray*} where $\left\{e_1(\bar{g}),\ldots, e_m(\bar{g})\right\}$ is an $L^2(\bar{g})$-orthonormal basis of $\ker_{L^2}(L_{\bar{g},g_0}^*)$ which can be chosen to depend smoothly on $\bar{g}$ by Lemma \ref{Fred-Prop-Lap-Mod}. Now let $(g(t))_{t\in[0,T)}$ be a Ricci-DeTurck flow in $\mathcal{U}$ and let $(g_0(t))_{t\in[0,T)}\in \widetilde{\mathcal{F}}$ be the family of Ricci-flat metrics such that $$g(t)-g_0(t)\in\overline{{L_{g_0(t),g_0}(C^{\infty}_{0}(S^2T^{*}M))}}.$$ Writing $h(t)=g(t)-g_0$ and $h_0(t)=g_0(t)-g_0$, we see that $h(t)-h_0(t)=g(t)-g_0(t)$ admits the expansion \begin{eqnarray*} \partial_t h-{L}_{g_0(t),g_0}(h-h_0)&=&R[h-h_0]\\ &=&F*\nabla(h-h_0)*\nabla(h-h_0)+\nabla(G*(h-h_0)*\nabla(h-h_0)), \end{eqnarray*} where the connection is now taken with respect to $g_0(t)$. Before stating the next lemma, we need to recall the Hardy inequality for Riemannian manifolds with nonnegative Ricci curvature and positive asymptotic volume ratio due to Minerbe \cite[Theorem 2.23]{Min-Wei-Sob-Ric-Fla}: \begin{theo}[Minerbe]\label{Hardy-Ineq} Let $(M^n,g)$ be a Riemannian manifold with nonnegative Ricci curvature and Euclidean volume growth, i.e. $$\AVR(g):=\lim_{r\rightarrow+\infty}\frac{\vol_gB_g(x,r)}{r^n}>0,$$ for some (and hence all) $x\in M$. Then, \begin{eqnarray*} \int_Mr_x^{-2}|\phi|^2d\mu_g\leq C(n,\AVR(g))\int_M|\nabla^g\phi|^2d\mu_g,\quad \forall \phi\in C_0^{\infty}(M), \end{eqnarray*} where $r_x(y)=d(x,y)$. \end{theo} The next lemma controls the time derivative of $h_0$ in the $C^k$-topology in terms of the $L^2$-norm of the gradient of $h-h_0$. \end{lemma-intro} \begin{lemma}\label{est_g_0} Let $\mathcal{U}$ be an $L^2\cap L^{\infty}$-neighbourhood of $g_0$ such that the above decomposition holds.
Let $(g(t))_{t\in[0,T)}$ be a Ricci-DeTurck flow in $\mathcal{U}$ and let $g_0(t)$, $h(t)$, $h_0(t)$ be defined as above for $t\in[0,T)$. Then the following estimate holds for $t\in(0,T)$: \begin{align*} \left\|\partial_t h_0\right\|_{C^k(g_0(t))}\leq C(k)\left\|\nabla^{g_0(t)}(h-h_0)\right\|_{L^2(g_0(t))}^2. \end{align*} \end{lemma} \begin{proof} Let $\left\{e_1(t),\ldots, e_m(t)\right\}$ be a family of $L^2(d\mu_{g_0(t)})$-orthonormal bases of $\ker_{L^2}\left({L}_{g_0(t),g_0}^*\right)$. Note that $\partial_te_i(t)$ depends linearly on $\partial_th_0(t)$. We can write $$h(t)-h_0(t)=k(t)- \sum_{i=1}^m(k(t),e_i(t))_{L^2}\cdot e_i(t)$$ for some $k(t)\in \overline{L_{g_0(t_0),g_0}(C^{\infty}_{0}(S^2T^{*}M))}=:N$. Differentiating at time $t_0$ yields \begin{align*} h'&=h_0'+k'-\sum_{i=1}^m(k',e_i)_{L^2}\cdot e_i -\sum_{i=1}^m(k,e_i')_{L^2}\cdot e_i -\sum_{i=1}^m(k*h_0',e_i)_{L^2}\cdot e_i\\ &=h_0'+k'_N -\sum_{i=1}^m(h-h_0,e_i')_{L^2}\cdot e_i -\sum_{i=1}^m((h-h_0)*h_0',e_i)_{L^2}\cdot e_i\\ &=:h_0'+k'_N +A(h-h_0,h_0'), \end{align*} where $A$ depends linearly on both entries. Let us split this expression into $A=A_{\widetilde{\mathcal{F}}}+A_{N}$ according to the decomposition $T_{g_0(t_0)}\widetilde{\mathcal{F}}\oplus N$. If we also split $h'=h'_{\widetilde{\mathcal{F}}}+h'_{N}$, we get \begin{align*} \begin{pmatrix} h'_{\widetilde{\mathcal{F}}} \\ h'_{N}\end{pmatrix}= \begin{pmatrix} id_{T_{g_0(t_0)}\widetilde{\mathcal{F}}}+A_{\widetilde{\mathcal{F}}}(h-h_0,.) & 0 \\ A_{N}(h-h_0,.) & id_{N} \end{pmatrix} \cdot \begin{pmatrix} h_0' \\ k' \end{pmatrix}. \end{align*} By inverting, we conclude that \begin{align*} h'_0=(id_{T_{g_0(t_0)}\widetilde{\mathcal{F}}}+A_{\widetilde{\mathcal{F}}}(h-h_0,.))^{-1}h'_{\widetilde{\mathcal{F}}}. \end{align*} Note that the orthogonal projection $\Pi:T_{g_0(t_0)}\widetilde{\mathcal{F}}\to \ker_{L^2}\left({L}_{g_0(t_0),g_0}^*\right)$ is an isomorphism.
Because $\partial_th={L}_{g_0(t),g_0}(h-h_0)+R[h-h_0]$ and by elliptic regularity, \begin{align*} &\left\|h_0'(t)\right\|_{C^k(g_0(t))}\leq C\left\|(h'(t))_{\widetilde{\mathcal{F}}}\right\|_{L^2(g_0(t))}\\ &\leq C\left\|(R[h-h_0])_{\ker_{L^2}\left({L}_{g_0(t_0),g_0}^*\right)}\right\|_{L^2(g_0(t))}\\ &\leq C\sum_{i=1}^m| (F*\nabla^{g_0(t)}(h-h_0)*\nabla^{g_0(t)}(h-h_0)+\nabla^{g_0(t)}(G*(h-h_0)*\nabla^{g_0(t)}(h-h_0)),e_i)_{L^2(g_0(t))}|\\ &\leq C\left\|\nabla^{g_0(t)}(h-h_0)\right\|_{L^2(g_0(t))}^2+\sum_{i=1}^m|(G*r\cdot(h-h_0)*\nabla^{g_0(t)}(h-h_0),r^{-1}\nabla^{g_0(t)} e_i)_{L^2(g_0(t))}|\\ &\leq C\left\|\nabla^{g_0(t)}(h-h_0)\right\|_{L^2(g_0(t))}^2. \end{align*} In the last step, we used the Hardy inequality (Theorem \ref{Hardy-Ineq}) and the fact that $r^{-1}\nabla^{g_0(t)} e_i$ is bounded by elliptic regularity. \end{proof} Before proving Theorem \ref{main-theo}, we start by recalling a result by Devyver \cite[Definition 6]{Dev-Gau-Est} adapted to our context: \begin{theo}[Strong positivity of $L_{g_0}$]\label{Dev-Str-Pos} Let $(M^n,g_0)$ be an ALE Ricci-flat space that is linearly stable. Then the restriction of $-L_{g_0}$ to the orthogonal complement of $\ker_{L^2}(L_{g_0})$ is strongly positive, i.e.\ there exists some $\alpha_{g_0}\in(0,1]$ such that \begin{eqnarray*} \alpha_{g_0}(-\Delta_{g_0}h,h)_{L^2(g_0)}\leq (-L_{g_0}h,h)_{L^2(g_0)}\quad \forall h\in L_{g_0}(C^{\infty}_{0}(S^2T^{*}M)). \end{eqnarray*} \begin{proof}[Sketch of proof] The proof is as in \cite{Dev-Gau-Est} with some minor modifications which we point out here. Write $-L_{g_0}=-\Delta_{g_0}+R_+-R_-$, where $R_+$ and $R_-$ correspond to the positive (resp.\ non-positive) eigenvalues of $Rm(g_0)*$. Let $H=-\Delta_{g_0}+R_+$ and let $A:L^2(S^2T^*M)\to L^2(S^2T^*M)$ be defined by $A=H^{-1/2}R_-H^{-1/2}$. The operator $A$ is compact \cite[Corollary 1.3]{Car-Coh}. Because $-L_{g_0}$ is nonnegative, all eigenvalues of $A$ lie in $[0,1]$.
It can be shown that $H^{1/2}$ maps $\ker_{L^2}(L_{g_0})$ isomorphically to $\ker_{L^2}(1-A)$. As $A$ is a compact operator, we get the condition \begin{align*} (Ah,h)_{L^2(g_0)}\leq (1-\epsilon)\left\|h\right\|_{L^2(g_0)}^2,\quad \forall h\in H^{1/2}(C^{\infty}_{0}(S^2T^*M)\cap \ker_{L^2}(L_{g_0})^{\perp}), \end{align*} for some $\epsilon>0$, which in turn is equivalent to \begin{align*} (R_-h,h)_{L^2(g_0)}\leq (1-\epsilon)(Hh,h)_{L^2(g_0)},\quad \forall h\in C^{\infty}_{0}(S^2T^*M)\cap \ker_{L^2}(L_{g_0})^{\perp}. \end{align*} \end{proof} \end{theo} \begin{theo}\label{theo-unif-bound-strict-pos} Let $(M^n,g_0)$ be a linearly stable ALE Ricci-flat manifold which is integrable. Furthermore, let $\widetilde{\mathcal{F}}$ be as above. Then there exists a constant $\alpha_{g_0}>0$ such that \begin{eqnarray*} (-{L}_{g,g_0}h,h)_{L^2(g)}\geq \alpha_{g_0} \left\|\nabla^{g} h\right\|^2_{L^2(g)}, \end{eqnarray*} for all $g\in \widetilde{\mathcal{F}}$ and $h\in L_{g,g_0}(C^{\infty}_{0}(S^2T^{*}M))$, provided that $\widetilde{\mathcal{F}}$ is chosen small enough. \end{theo} \begin{proof} By Theorem \ref{Dev-Str-Pos}, there exists a constant $\alpha_0>0$ such that \begin{eqnarray*} (-L_{g_0}h,h)_{L^2({g_0})}\geq \alpha_0 \left\|\nabla^{g_0} h\right\|^2_{L^2({g_0})}, \end{eqnarray*} for any compactly supported $h\in \ker_{L^2}(L_{g_0})^{\perp}$.
Now, by Taylor expansion with $k:=g-g_0$, \begin{align*} &(-{L}_{g,g_0}h,h)_{L^2({g})}= (-L_{g_0}h,h)_{L^2({g_0})}-\int_0^1\frac{d}{dt}({L}_{g_0+tk,g_0}h,h)_{L^2(g_0+tk)}dt\\ &\geq \alpha_0 \left\|\nabla^{g_0} h\right\|_{L^2({g_0})}^2+\int_0^1 [(\nabla^{g_0,2}h*k+\nabla^{g_0} h*\nabla^{g_0} k+h*\nabla^{g_0,2} k)*h+Rm*h*h*k] d\mu_{g_0+tk}\\ &= \alpha_0 \left\|\nabla^{g_0} h\right\|_{L^2({g_0})}^2+\int_0^1 [\nabla^{g_0} h*h*\nabla^{g_0} k+\nabla^{g_0} h*\nabla^{g_0} h* k+Rm*h*h*k] d\mu_{g_0+tk}\\ &\geq \alpha_0 \left\|\nabla^{g_0} h\right\|_{L^2({g_0})}^2- C \left\|\nabla^{g_0} h\right\|_{L^2({g_0})}^2\left\| k\right\|_{L^{\infty}(g_0)}- C \left\|\nabla^{g_0} h\right\|_{L^2({g_0})}\left\|h*\nabla^{g_0} k\right\|_{L^2({g_0})}\\ &\geq \alpha_0 \left\|\nabla^{g_0} h\right\|_{L^2({g_0})}^2- C \left\|\nabla^{g_0} h\right\|_{L^2({g_0})}^2\left\| k\right\|_{L^{\infty}(g_0)}^{\sigma}, \end{align*} for some $\sigma\in (0,1)$. To justify the last inequality, we use elliptic regularity and Sobolev embedding as in the proof of Theorem \ref{ell-reg-prop} to obtain \begin{align*} \left\|r\nabla^{g_0} k\right\|_{L^{\infty}({g_0})}\leq\left\|k\right\|_{C^{1,\alpha}_{0}({g_0})}\leq C\left\|k\right\|_{W^{2,p}_{0}({g_0})}\leq C\left\|k\right\|_{L^{p}({g_0})}\leq C\left\|k\right\|_{L^{2}({g_0})}^{1-\sigma}\left\|k\right\|_{L^{\infty}({g_0})}^{\sigma}, \end{align*} with $\sigma=1-\frac{2}{p}$ and $p>n$. This can be combined with the Hardy inequality (Theorem \ref{Hardy-Ineq}) to obtain \begin{align*} \left\|h*\nabla^{g_0} k\right\|_{L^2({g_0})}\leq & \left\|r^{-1}h\right\|_{L^2({g_0})} \left\|r\nabla^{g_0} k\right\|_{L^{\infty}({g_0})} \leq C\left\|\nabla^{g_0} h\right\|_{L^2({g_0})}\left\|k\right\|_{L^{\infty}(g_0)}^{\sigma}, \end{align*} which yields the estimate of the theorem for $h\in \ker_{L^2}(L_{g_0})^{\perp}$, provided that $\left\| k\right\|_{L^{\infty}(g_0)}$ is small enough.
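For completeness, we note that the interpolation inequality $\left\|k\right\|_{L^{p}({g_0})}\leq C\left\|k\right\|_{L^{2}({g_0})}^{1-\sigma}\left\|k\right\|_{L^{\infty}({g_0})}^{\sigma}$ used above is elementary: for $p>2$,
\begin{align*}
\int_M |k|^p\,d\mu_{g_0}\leq \left\|k\right\|_{L^{\infty}({g_0})}^{p-2}\int_M|k|^2\,d\mu_{g_0},\qquad\text{so that}\qquad \left\|k\right\|_{L^p({g_0})}\leq \left\|k\right\|_{L^{\infty}({g_0})}^{1-2/p}\left\|k\right\|_{L^2({g_0})}^{2/p},
\end{align*}
and the exponents match since $\sigma=1-\frac{2}{p}$ and $1-\sigma=\frac{2}{p}$.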
To pass to $h\in \ker_{L^2}({L}_{g,g_0}^*)^{\perp}$, we note that an isomorphism between $\ker_{L^2}(L_{g_0})^{\perp}$ and $\ker_{L^2}({L}_{g,g_0}^*)^{\perp}$ is given by $\Phi_g:h\mapsto h-\sum_i(h,e_i(g))_{L^2(g)}\cdot e_i(g)$, where the tensors $e_i(g)$ form an orthonormal basis of $\ker_{L^2}({L}_{g,g_0}^*)$. First, \begin{equation}\begin{split}\label{nabla-est} \left\|\nabla^g \Phi(h)\right\|^2_{L^2(g)}&=\left\|\nabla^g h\right\|^2_{L^2(g)} -2\sum_i(h,e_i)_{L^2(g)}(\Delta_g e_i,h)_{L^2(g)}+\sum_{i,j}(h,e_i)_{L^2(g)}(h,e_j)_{L^2(g)}(\Delta_g e_i,e_j)_{L^2(g)}\\ &\leq C\left\|\nabla^g h\right\|^2_{L^2(g)} \leq C\left\|\nabla^{g_0} h\right\|^2_{L^2(g_0)}. \end{split} \end{equation} The first inequality here can be proven as follows: let $\left\{f_i\right\}_i$ be an orthonormal basis of $\ker_{L^2}(L_{g_0})$; then by the triangle inequality, \begin{align*} |(h,e_i)_{L^2(g)}|&\leq|(h,e_i)_{L^2(g_0)}|+|(h,e_i)_{L^2(g_0)}-(h,e_i)_{L^2(g)}|. \end{align*} For the first of these terms, since $(h,f_i)_{L^2(g_0)}=0$, we have \begin{align*} |(h,e_i)_{L^2(g_0)}|&= |(h,e_i-f_i)_{L^2(g_0)}|\\&\leq C\left\|h\right\|_{L^{2n/(n-2)}(g)}\cdot \left\|e_i-f_i\right\|_{L^{n/2}(g)}\leq \delta\left\| \nabla^g h\right\|_{L^2(g)}, \end{align*} where we used the Sobolev inequality and the fact that a basis of $\ker_{L^2}({L}_{g,g_0}^*)$ can be chosen to depend smoothly on $g$. To handle the second term, we use Taylor expansion and obtain \begin{align*} |(h,e_i)_{L^2(g_0)}-(h,e_i)_{L^2(g)}|&\leq C \int_M |k|\cdot |h|d\mu_g \leq \left\|h\right\|_{L^{2n/(n-2)}(g_0)}\cdot \left\|k\right\|_{L^{n/2}(g_0)}\\&\leq C\left\| \nabla^{g_0} h\right\|_{L^2(g_0)}\cdot \left\|k\right\|_{L^{\infty}(g_0)}^{\sigma}\leq \delta\left\| \nabla^{g_0} h\right\|_{L^2(g_0)}. \end{align*} This justifies the first inequality from above. The second inequality in \eqref{nabla-est} follows from a Taylor expansion argument.
To finish the proof, it remains to show that the inequality $(-{L}_{g,g_0} \Phi(h),\Phi(h))_{L^2(g)}\geq C\cdot (-{L}_{g,g_0} h,h)_{L^2(g)}$ holds for some constant $C>0$. We compute \begin{align*} (-{L}_{g,g_0}\Phi(h),\Phi(h))_{L^2(g)} &=(-{L}_{g,g_0}h,h)_{L^2(g)}+\sum_i (h,e_i)_{L^2(g)}(-{L}_{g,g_0}e_i,h)_{L^2(g)}\\ &\geq(-{L}_{g,g_0}h,h)_{L^2(g)}-\sum_i \delta' \left\| \nabla^g h\right\|_{L^2(g)}\left\|{L}_{g,g_0}e_i\right\|_{L^{n/2}(g)}\left\|h\right\|_{L^{2n/(n-2)}(g)} \\ &\geq (-{L}_{g,g_0}h,h)_{L^2(g)}- \delta \left\| \nabla^g h\right\|_{L^2(g)}^2\\ &\geq (1-C\delta)(-{L}_{g,g_0}h,h)_{L^2(g)}, \end{align*} where we also used the Sobolev inequality and elliptic regularity. \end{proof} \subsection{Existence for all time and convergence}\label{Exi-conv} \begin{prop}\label{L2-bound} Let $(M,g_0)$ be a linearly stable ALE Ricci-flat manifold which satisfies the integrability condition. Then there exists an $\epsilon>0$ with the following property: If $(g(t))_{t\in[0,T]}$ is a Ricci-DeTurck flow such that $\left\|g(t)-g_0\right\|_{L^{\infty}}<\epsilon$ for all $t\in [0,T]$, then there exists a constant $C>0$ such that the evolution inequality \[ \frac{d}{dt}\left\|h-h_0\right\|_{L^2(g_0(t))}^2+C\left\|\nabla^{g_0(t)}(h-h_0)\right\|_{L^2(g_0(t))}^2\leq0, \] holds. \end{prop} \begin{proof} We know that \begin{eqnarray*} \partial_t h-{L}_{g_0(t),g_0}(h-h_0)=R[h-h_0], \end{eqnarray*} where \begin{eqnarray*} R[h-h_0]=F*\nabla^{g_0(t)}(h-h_0)*\nabla^{g_0(t)}(h-h_0)+\nabla^{g_0(t)}(G*(h-h_0)*\nabla^{g_0(t)}(h-h_0)).
\end{eqnarray*} Thanks to Theorem \ref{theo-unif-bound-strict-pos} and Lemma \ref{est_g_0}, \begin{align*} \partial_t&\|h-h_0\|_{L^2(g_0(t))}^2=2({L}_{g_0(t),g_0}(h-h_0),h-h_0)_{L^2(g_0(t))}+(R[h-h_0],h-h_0)_{L^2(g_0(t))} \\& +(h-h_0,\partial_t h_0(t))_{L^2(g_0(t))}+\int_M (h-h_0)*(h-h_0)*\partial_t h_0(t)d\mu_{g_0(t)}\\ &\leq -2\alpha_{g_0} \left\|\nabla^{g_0(t)}(h-h_0)\right\|_{L^2(g_0(t))}^2+C\left\|(h-h_0)\right\|_{L^{\infty}(g_0(t))}\left\|\nabla^{g_0(t)}(h-h_0)\right\|_{L^2(g_0(t))}^2\\ & \quad+ \left\|\partial_th_0\right\|_{L^{2}(g_0(t))}\left\|h-h_0\right\|_{L^2(g_0(t))}\\ &\leq (-2\alpha_{g_0}+C\cdot \epsilon )\left\|\nabla^{g_0(t)}(h-h_0)\right\|_{L^2(g_0(t))}^2, \end{align*} which proves the desired estimate. \end{proof} \begin{proof}[Proof of Theorem \ref{main-theo}] Let $\epsilon>0$ be so small that Proposition \ref{L2-bound} holds, provided that $\left\|h\right\|_{L^{\infty}(g_0)}<\epsilon$. By Lemma \ref{Ck-estimate}, we can find $\delta_1>0$ so small that the above $\epsilon$-bound holds as long as $h\in\mathcal{B}_{L^2\cap L^{\infty}}(0,\delta_1)$. Now let $\delta_2=\delta_2(\delta_1)>0$ be so small that $h(1)\in \mathcal{B}_{L^2\cap L^{\infty}}(0,\delta_1)$ whenever $h(0)\in \mathcal{B}_{L^2\cap L^{\infty}}(0,\delta_2)$. Suppose that $T_{max}\geq 1$ is the first time where $h(t)$ leaves $\mathcal{B}_{L^2\cap L^{\infty}}(0,\epsilon)$. By Lemma \ref{est_g_0}, Proposition \ref{L2-bound} and elliptic regularity, \begin{align*} \left\|h_0(T_{max})\right\|_{L^{2}(g_0)}+&\left\|h_0(T_{max})\right\|_{L^{\infty}(g_0)}\leq C\int_1^{T_{max}}\left\|\partial_th_0(t)\right\|_{L^{2}(g_0(t))}dt\\& \leq C\int_1^{T_{max}}\left\|\nabla^{g_0(t)} (h(t)-h_0(t))\right\|_{L^{2}(g_0(t))}^2dt\\ &\leq C\left\|h(1)-h_0(1)\right\|_{L^{2}(g_0)}^2\leq C\left\|h(1)\right\|_{L^{2}(g_0)}^2\leq C\cdot (\delta_1)^2.
\end{align*} Furthermore by Proposition \ref{L2-bound}, \begin{align*} \left\|h(T_{max})-h_0(T_{max})\right\|_{L^{2}(g_0)}\leq \left\|h(1)-h_0(1)\right\|_{L^{2}(g_0)}\leq C\cdot \delta_1. \end{align*} Again by Proposition \ref{L2-bound}, Lemma \ref{Ck-estimate} and interpolation, \begin{align*} &\left\|h(T_{max})-h_0(T_{max})\right\|_{L^{\infty}(g_0)}\\&\qquad\leq C\left\|\nabla^{g_0}(h(T_{max})-h_0(T_{max}))\right\|_{L^{\infty}(g_0)}^{1-\alpha}\cdot \left\|h(T_{max})-h_0(T_{max})\right\|_{L^{2}(g_0)}^{\alpha}\\ &\qquad\leq C\cdot \epsilon^{1-\alpha}\left\|h(1)-h_0(1)\right\|_{L^{2}(g_0)}^{\alpha}\leq C\cdot \epsilon^{1-\alpha}(\delta_1)^{\alpha}, \end{align*} with $\alpha=\frac{2}{n+2}$. By the triangle inequality, \begin{align*} \left\|h(T_{max})\right\|_{L^{2}(g_0)}+\left\|h(T_{max})\right\|_{L^{\infty}(g_0)}&\leq C\cdot (\delta_1)^2+C\cdot \delta_1+ C\cdot \epsilon^{1-\alpha}(\delta_1)^{\alpha}\leq \epsilon/2, \end{align*} provided that $\delta_1>0$ was chosen small enough. We have now proven that such a $T_{max}$ cannot exist and that $h(t)\in \mathcal{U}$ for all $t>0$. Moreover, because \begin{align*} \int_1^{\infty}\left\|\partial_th_0(t)\right\|_{L^{2}(g_0(t))}dt &\leq C\int_1^{\infty}\left\|\nabla^{g_0(t)} (h(t)-h_0(t))\right\|_{L^{2}(g_0(t))}^2dt\\&\leq C\left\|h(1)-h_0(1)\right\|_{L^{2}(g_0)}^2<\infty, \end{align*} and since \begin{align*} \left|\partial_t \left\|\nabla^{g_0(t)}(h(t)-h_0(t))\right\|^2_{L^2(g_0(t))}\right|\leq C \left\|h(t)-h_0(t)\right\|^2_{H^{3}(g_0)}\leq C, \end{align*} by Lemma \ref{Hk-estimate}, we get $$\limsup_{t\rightarrow+\infty}\left\|\partial_th_0(t)\right\|_{L^{2}(g_0(t))}\leq \limsup_{t\rightarrow+\infty}\left\|\nabla^{g_0(t)} (h(t)-h_0(t))\right\|_{L^{2}(g_0(t))}^2= 0,$$ and $h_0(t)$ converges in $L^2$ to some limit $h_0(\infty)$. Due to elliptic regularity, $(h_0(t))_{t\geq 0}$ converges to $h_0(\infty)$ as $t$ goes to $+\infty$ with respect to all Sobolev norms.
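For the reader's convenience, let us recall why the interpolation inequality used above holds on $(M,g_0)$, with constants depending only on $n$ and $g_0$: if $f$ is a $C^1$ tensor field with $\left\|\nabla^{g_0}f\right\|_{L^{\infty}(g_0)}<\infty$ and $x\in M$ is a point where $|f|(x)\geq \frac{1}{2}\left\|f\right\|_{L^{\infty}(g_0)}$, then $|f|\geq \frac{1}{4}\left\|f\right\|_{L^{\infty}(g_0)}$ on the geodesic ball centered at $x$ of radius $\frac{\left\|f\right\|_{L^{\infty}(g_0)}}{4\left\|\nabla^{g_0}f\right\|_{L^{\infty}(g_0)}}$, so that the Euclidean lower volume bound available on ALE manifolds yields
\begin{align*}
\left\|f\right\|_{L^{2}(g_0)}^{2}\geq c(n,g_0)\left\|f\right\|_{L^{\infty}(g_0)}^{2}\left(\frac{\left\|f\right\|_{L^{\infty}(g_0)}}{\left\|\nabla^{g_0}f\right\|_{L^{\infty}(g_0)}}\right)^{n},
\end{align*}
that is, $\left\|f\right\|_{L^{\infty}(g_0)}\leq C(n,g_0)\left\|\nabla^{g_0}f\right\|_{L^{\infty}(g_0)}^{\frac{n}{n+2}}\left\|f\right\|_{L^{2}(g_0)}^{\frac{2}{n+2}}$.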
We are going to show now that $(g(t))_{t\geq 0}$ converges to $g_0+h_0(\infty)=: g_{\infty}$ as $t$ goes to $+\infty$ with respect to all $W^{k,p}$-norms with $p>2$. For this purpose, it suffices to show that $(h(t)-h_0(t))_{t\geq 0}$ converges to $0$ as $t$ goes to $+\infty$ with respect to all these norms. At first, by the Euclidean Sobolev inequality, \begin{align*} \left\|h-h_0\right\|_{L^{\frac{2n}{n-2}}(g_0)}\leq C\left\|\nabla^{g_0} (h-h_0)\right\|_{L^2(g_0)}\to0, \quad\mbox{as $t\rightarrow+\infty$,} \end{align*} which implies that $\lim_{t\rightarrow+\infty}(h-h_0)=0$ in $L^p$ for all $p\in(2,\infty)$ by interpolation and due to smallness in $L^2\cap L^{\infty}$. Moreover, for $j\in\mathbb{N}$ arbitrary, by interpolation inequalities, \begin{align*} \left\|\nabla^{g_0,j}(h-h_0)\right\|_{L^p(g_0)}\leq C\left\|\nabla^{g_0,m}(h-h_0)\right\|_{L^{\infty}(g_0)}^{\alpha}\left\|h-h_0\right\|_{L^p(g_0)}^{1-\alpha}\leq C\left\|h-h_0\right\|_{L^p(g_0)}^{1-\alpha}\to0, \end{align*} as $t\rightarrow+\infty$ with $\alpha=\frac{jp}{mp+n}$ and $m\in\mathbb{N}$ so large that $\alpha<1$. Due to Sobolev embedding, convergence also holds for $p=\infty$. \end{proof} \section{Nash-Moser Iteration at infinity}\label{Nash-Moser-Sec} In this section, we prove decay of the $L^{\infty}$-norm of the difference between an immortal solution to the Ricci-DeTurck flow gauged by an ALE Ricci-flat metric $g_0$ and the metric $g_0$ itself. More precisely, one has the following theorem: \begin{theo}\label{theo-Nas-Moser-Iteration} Let $(M^n,g_0)$ be a complete Riemannian manifold endowed with a Ricci-flat metric with quadratic curvature decay at infinity and positive asymptotic volume ratio, i.e. \begin{eqnarray*} \Ric(g_0)= 0,\quad |\Rm(g_0)|_{g_0}(x)\leq \frac{C}{1+d_{g_0}^2(x_0,x)},\quad x\in M,\quad\AVR(g_0)>0, \end{eqnarray*} for some positive constant $C$ and some point $x_0\in M$.
Let $(M^n,g(t))_{t\geq 0}$ be an immortal solution to the Ricci-DeTurck flow with respect to the background metric $g_0$ such that $$\sup_{t\geq 0}\|g(t)-g_0\|_{L^{\infty}(M)}\leq \epsilon,$$ for some positive universal $\epsilon$. Then, for any positive time $t$, and radius $r<\sqrt{t}$, \begin{eqnarray*} &&\sup_{P(x_0,t,r/2)}|g(t)-g_0|^2\leq \frac{C(n,g_0,\epsilon)}{r^{n+2}}\int_{P(x_0,t,r)}\arrowvert g(t)-g_0\arrowvert_{g_0}^2(y,s)d\mu_{g_0}(y)ds,\\ &&P(x_0,t,r):=(M\setminus B_{g_0}(x_0,r))\times (t-r^2,t]. \end{eqnarray*} In particular, if $\sup_{t\geq 0}\|g(t)-g_0\|_{L^2(M)}\leq C<+\infty$, then \begin{eqnarray*} \|g(t)-g_0\|_{L^{\infty}(M\setminus B_{g_0}(x_0,\sqrt{t}))}\leq C(n,g_0,\epsilon)\frac{\sup_{t\geq 0}\|g(t)-g_0\|_{L^2(M)}}{t^{\frac{n}{4}}},\quad t>0. \end{eqnarray*} \end{theo} \begin{rk} Notice that $P(x_0,t,r)$ is the parabolic neighborhood of a point at infinity on the manifold $M$. \end{rk} Before starting the proof of this theorem, we remark that combining Theorem \ref{main-theo} and Theorem \ref{theo-Nas-Moser-Iteration} leads to Theorem \ref{main-theo-bis}. \begin{proof}[Proof of Theorem \ref{theo-Nas-Moser-Iteration}] Recall that $(g(t))_{ t\geq 0}$ (or similarly $h(t):=g(t)-g_0$) satisfies the following partial differential equation: \begin{eqnarray*} \partial_th&=&g^{-1}\ast\nabla^{g_0,2}h+\Rm(g_0)\ast h+g^{-1}\ast g^{-1}\ast\nabla^{g_0}h\ast\nabla^{g_0}h,\\ g^{-1}\ast\nabla^{g_0,2}h&:=&g^{ij}\nabla^{g_0,2}_{ij}h. \end{eqnarray*} In particular, \begin{eqnarray*} &&\partial_t|h|_{g_0}^2\leq g^{-1}\ast\nabla^{g_0,2}|h|_{g_0}^2-2g^{-1}(\nabla^{g_0}h,\nabla^{g_0}h)+\RP(g_0)|h|_{g_0}^2+c(n)|h|_{g_0}|\nabla^{g_0}h|_{g_0}^2,\\ &&g^{-1}(\nabla^{g_0}h,\nabla^{g_0}h):=g^{ij}\nabla^{g_0}_ih\nabla^{g_0}_jh,\quad \RP(g_0):=c(n,\epsilon)|\Rm(g_0)|_{g_0}.
\end{eqnarray*} Since $\|h(t)\|_{L^{\infty}(M)}\leq \epsilon$ for any positive time $t$: \begin{eqnarray*} &&\partial_t|h|_{g_0}^2\leq g^{-1}\ast\nabla^{g_0,2}|h|_{g_0}^2-|\nabla^{g_0}h|_{g_0}^2+\RP(g_0)|h|_{g_0}^2, \end{eqnarray*} if $\epsilon\leq \epsilon(n)$. Define $u:=|h|_{g_0}^2$ and multiply the previous differential inequality by $pu^{p-1}$ for some real $p\geq 2$ to get: \begin{eqnarray*} \partial_tu^p&=&pu^{p-1}\partial_tu\\ &\leq& pu^{p-1}g^{-1}\ast\nabla^{g_0,2}u-pu^{p-1}|\nabla^{g_0}h|_{g_0}^2+p\RP(g_0)u^p\\ &\leq&g^{-1}\ast\nabla^{g_0,2}u^p-pg^{-1}(\nabla^{g_0}u^{p-1},\nabla^{g_0}u)-pu^{p-1}|\nabla^{g_0}h|_{g_0}^2+p\RP(g_0)u^p. \end{eqnarray*} Take any smooth space-time cutoff function $\psi$, multiply the previous differential inequality by $\psi^2u^p$, and integrate by parts as follows: \begin{eqnarray} &&\int_{t- r^2}^{t'}\int_M\psi^2u^p\left[-g^{-1}\ast\nabla^{g_0,2}u^p+pg^{-1}(\nabla^{g_0}u^{p-1},\nabla^{g_0}u)+pu^{p-1}|\nabla^{g_0}h|_{g_0}^2\right]d\mu_{g_0}ds\label{inequ-nas-mos-1}\\ &&\leq \int_{t-r^2}^{t'}\int_M-\psi^2u^p\partial_su^p+p\RP(g_0)\psi^2u^{2p}d\mu_{g_0}ds,\label{inequ-nas-mos-2} \end{eqnarray} for any $t'\in(t-r^2,t].$ Now, by integrating by parts once and using the pointwise Young inequality: \begin{eqnarray*} -\int_M\psi^2u^pg^{-1}\ast\nabla^{g_0,2}u^pd\mu_{g_0}&=&\int_M\nabla^{g_0}_i(g^{ij}\psi^2u^p)\nabla^{g_0}_ju^pd\mu_{g_0}\\ &=&\int_Mg^{-1}(\nabla^{g_0}(\psi u^p),\nabla^{g_0}(\psi u^p))-g^{-1}(\nabla^{g_0}\psi,\nabla^{g_0}(\psi u^p))u^pd\mu_{g_0}\\ &&+\int_Mg^{-1}(u^p\nabla^{g_0}\psi,\psi\nabla^{g_0}u^p)d\mu_{g_0}\\ &&+\int_M\div_{g_0}(g^{-1})(\psi^2u^p\nabla^{g_0}u^p)d\mu_{g_0}\\ &\geq&\frac{1}{2}\int_M|\nabla^{g_0}(\psi u^p)|_{g_0}^2d\mu_{g_0}\\ &&-c\int_M|\nabla^{g_0}\psi|^2_{g_0}u^{2p}+pu^{2p-1}\psi|\nabla^{g_0}h|_{g_0}\psi|\nabla^{g_0}u|_{g_0}d\mu_{g_0}\\ &\geq&\frac{1}{2}\int_M|\nabla^{g_0}(\psi u^p)|_{g_0}^2d\mu_{g_0}-c\int_M|\nabla^{g_0}\psi|^2_{g_0}u^{2p}d\mu_{g_0}\\
&&-\frac{p}{2}\int_M\left(\psi^2u^{2p-1}|\nabla^{g_0}h|_{g_0}^2+c u^{2p-1}\psi^2|\nabla^{g_0}u|^2_{g_0}\right)d\mu_{g_0}, \end{eqnarray*} for some universal positive constant $c$, and where we used the smallness of $\|h(t)\|_{L^{\infty}(M)}$ for all time $t$. Going back to (\ref{inequ-nas-mos-1}) and (\ref{inequ-nas-mos-2}), one gets by absorbing terms appropriately: \begin{eqnarray*} \frac{1}{2}\int_{t-r^2}^{t'}\int_M|\nabla^{g_0}(\psi u^p)|_{g_0}^2d\mu_{g_0}ds&\leq& \int_{t-r^2}^{t'}\int_M-\psi^2u^p\partial_su^p+p\RP(g_0)\psi^2u^{2p}d\mu_{g_0}ds \\ &&+c\int_{t-r^2}^{t'}\int_M|\nabla^{g_0}\psi|^2_{g_0}u^{2p}d\mu_{g_0}ds. \end{eqnarray*} Finally, by integrating by parts with respect to time: \begin{eqnarray*} &&\int_M(u^{2p}\psi^2)(t')d\mu_{g_0}+\int_{t-r^2}^{t'}\int_M|\nabla^{g_0}(\psi u^p)|_{g_0}^2d\mu_{g_0}ds\\&&\qquad\leq 2\int_{t-r^2}^{t'}\int_Mp\RP(g_0)\psi^2u^{2p}d\mu_{g_0}ds +c\int_{t-r^2}^{t'}\int_M\left((\partial_s\psi^2)+|\nabla^{g_0}\psi|^2_{g_0}\right)u^{2p}d\mu_{g_0}ds. \end{eqnarray*} We need to control the integral involving the potential $\RP(g_0)$. By the assumption on quadratic curvature decay at infinity: \begin{eqnarray}\label{C^0-control-potential} \|\RP(g_0)\|_{L^{\infty}(M\setminus B_{g_0}(x_0,r))}\leq\frac{C}{1+r^2},\quad r>0. \end{eqnarray} Let $\tau, \sigma\in (0,+\infty)$ be such that $\tau+\sigma\leq r$. For the moment, define \begin{eqnarray*} P(x_0,t,r,s):=(M\setminus B_{g_0}(x_0,r-s))\times(t-s^2,t], \quad 0<s^2\leq r^2<t.
\end{eqnarray*} Notice that $s_1<s_2$ implies $P(x_0,t,r,s_1)\subset P(x_0,t,r,s_2).$ Now, choose two smooth functions $\phi:\mathbb{R}_+\rightarrow[0,1]$ and $\eta:\mathbb{R}_+\rightarrow[0,1]$ such that \begin{eqnarray*} && \supp(\phi)\subset [r-(\tau+\sigma),+\infty),\quad\phi\equiv 1\quad\mbox{in $[r-\tau,+\infty)$},\\ &&\quad \phi\equiv 0\quad\mbox{in $[0,r-(\tau+\sigma)]$},\quad 0\leq \phi'\leq c/\sigma,\\ && \supp(\eta)\subset [t-(\tau+\sigma)^2,+\infty),\quad\eta\equiv 1\quad\mbox{in $[t-\tau^2,+\infty)$},\\ && \eta\equiv 0\quad\mbox{in $(t-r^2,t-(\tau+\sigma)^2]$},\quad 0\leq \eta'\leq c/\sigma^2. \end{eqnarray*} Define $\psi(y,s):=\phi(d_{g_0}(x_0,y))\eta(s)$, for $(y,s)\in M\times(0,+\infty)$. Then, \begin{eqnarray*} &&\arrowvert\nabla^{g_0}\psi\arrowvert_{g_0}\leq\frac{c}{\sigma},\quad \arrowvert\partial_s\psi\arrowvert\leq \frac{c}{\sigma^2}, \end{eqnarray*} for some uniform positive constant $c$. In particular, thanks to estimate (\ref{C^0-control-potential}) and the cut-off function $\psi$ previously defined, one has: \begin{eqnarray*} &&\int_M(u^{2p}\psi^2)(t')d\mu_{g_0}+\int_{t-r^2}^{t'}\int_M|\nabla^{g_0}(\psi u^p)|_{g_0}^2d\mu_{g_0}ds\leq\\ &&c\left(\frac{p}{[1+r-(\tau+\sigma)]^2}+\frac{1}{\sigma^2}\right)\left(\int_{P(x_0,t,r,\tau+\sigma)}u^{2p}d\mu_{g_0}\right),\quad t'\in(t-r^2,t], \end{eqnarray*} which implies in particular that, \begin{eqnarray}\label{sup-integral} \sup_{t'\in(t-r^2,t]}\int_M(u^{2p}\psi^2)(t')d\mu_{g_0}\leq c\left(\frac{p}{[1+r-(\tau+\sigma)]^2}+\frac{1}{\sigma^2}\right)\left(\int_{P(x_0,t,r,\tau+\sigma)}u^{2p}d\mu_{g_0}\right). \end{eqnarray} Now, by the H\"older inequality, for $s\geq 0$, \begin{eqnarray*} \int_M (\psi u^p)^{2+\frac{4}{n}}(s)d\mu_{g_0}\leq\left(\int_M (\psi u^p)^{\frac{2n}{n-2}}(s)d\mu_{g_0}\right)^{\frac{n-2}{n}}\left(\int_M (\psi u^p)^2(s)d\mu_{g_0}\right)^{\frac{2}{n}}.
\end{eqnarray*} Therefore, to sum it up, if $\alpha_n:=1+2/n$, by using (\ref{sup-integral}) in the third line, \begin{eqnarray*} \int_{P(x_0,t,r,\tau)}\left(u^{2p}\right)^{\alpha_n}d\mu_{g_0}ds&\leq&\int_{P(x_0,t,r,0)}\left(\psi u^p\right)^{2\alpha_n}d\mu_{g_0}ds\\ &\leq&\int_{t-r^2}^t\left(\int_M (\psi u^p)^{\frac{2n}{n-2}}d\mu_{g_0}\right)^{\frac{n-2}{n}}\left(\int_M (\psi u^p)^2d\mu_{g_0}\right)^{\frac{2}{n}}ds\\ &\leq&c(n,g_0)\sup_{s\in(t-r^2,t]}\left(\int_M (\psi u^p)^2d\mu_{g_0}\right)^{\frac{2}{n}}\int_{M\times(t-r^2,t]}\arrowvert\nabla^{g_0}(\psi u^p)\arrowvert_{g_0}^2d\mu_{g_0}\\ &\leq&c(n,g_0)\left(\frac{p}{[1+r-(\tau+\sigma)]^2}+\frac{1}{\sigma^2}\right)^{\alpha_n}\left(\int_{P(x_0,t,r,\tau+\sigma)}u^{2p}d\mu_{g_0}\right)^{\alpha_n}. \end{eqnarray*} Define the following sequences: \begin{eqnarray*} p_i:=\alpha_n^i,\quad \sigma_i:=2^{-1-i}(r/4),\quad \tau_{-1}:=3r/4,\quad\tau_i:=3r/4-\sum_{j=0}^i\sigma_j,\quad i\geq 0. \end{eqnarray*} Then, $\lim_{i\rightarrow +\infty}\tau_i= r/2$ and, for any $i\geq 0$, \begin{eqnarray*} \| u^2\|_{L^{p_{i+1}}(P(x_0,t,r,\tau_i))}\leq\left(c(n,g_0)\left(\frac{p_i}{1+[r-\tau_{i-1}]^2}+\frac{1}{\sigma_i^2}\right)\right)^{\frac{1}{p_i}}\| u^2\|_{L^{p_i}(P(x_0,t,r,\tau_{i-1}))}, \end{eqnarray*} i.e. \begin{eqnarray*} \| u\|^2_{L^{\infty}(P(x_0,t,r/2))}\leq\Pi_{i=0}^{\infty}\left(c(n)\left(\frac{p_i}{1+[r-\tau_{i-1}]^2}+\frac{1}{\sigma_i^2}\right)\right)^{\frac{1}{p_i}}\| u\|^2_{L^{2}(P(x_0,t,r,3r/4))}, \end{eqnarray*} since $P(x_0,t,r,r/2)=P(x_0,t,r/2)$. It remains to estimate the previous infinite product: since each factor satisfies $\frac{p_i}{1+[r-\tau_{i-1}]^2}+\frac{1}{\sigma_i^2}\leq c\,\frac{4^{i}}{r^2}$ for some universal positive constant $c$ (recall that $r-\tau_{i-1}\geq r/4$ and $\sigma_i^{-2}=4^{i+3}r^{-2}$), and since $\sum_{i\geq 0}p_i^{-1}=\frac{\alpha_n}{\alpha_n-1}=\frac{n+2}{2}$ while $\sum_{i\geq 0}ip_i^{-1}<+\infty$, one gets \begin{eqnarray*} \Pi_{i=0}^{\infty}\left(\frac{p_i}{1+[r-\tau_{i-1}]^2}+\frac{1}{\sigma_i^2}\right)^{\frac{1}{p_i}} &\leq&c(n)\frac{1}{r^{2+n}}, \end{eqnarray*} i.e. \begin{eqnarray}\label{L^2-bound-mean-value} \sup_{P(x_0,t,r/2)}u^2&\leq&c(n,g_0) \frac{1}{r^{2+n}} \int_{P(x_0,t,r,3r/4)}u^2d\mu_{g_0}ds.
\end{eqnarray} To get a bound depending on the $L^1$ norm of $u$, one can proceed as in \cite{Li-Sch-Mea-Val} (the so-called Li--Schoen trick) by iterating (\ref{L^2-bound-mean-value}) appropriately. \end{proof} \bibliographystyle{alpha} \bibliography{bib-ale-stability} \end{document}
{"config": "arxiv", "file": "1707.09919/ALE-stability.tex"}
\section{Structure of the quotient graph of groups}\label{main} In this section we analyze the quotient graphs of groups for lattices in semisimple groups of rank 1. It turns out that an infinite quotient graph can be explained in terms of the cusps of the lattice. This is a notion surprisingly similar to the one introduced in hyperbolic geometry. As in this classical example, the notion interweaves geometric and group theoretic aspects. Just as in the case of a lattice $\Gamma$ in $G:=SL_2(\mathbb{R})$, we expect the notion of a cusp to capture both geometric properties of the quotient of the homogeneous space of $G$ modulo $\Gamma$ and group theoretic properties of the $\Gamma$--stabilizers of these cusps. The geometric aspect involves points at infinity of the homogeneous space and its $\Gamma$--quotient. Since over a local field the homogeneous space is a tree, it is natural to assign this role to certain ends of the tree and to all ends of the quotient graph respectively. For lattices in $SL_2(\mathbb{R})$ the group theoretic aspect involved is that $\Gamma$--stabilizers of points at infinity whose horoball neighborhoods map to neighborhoods of cusps are (maximal) unipotent groups. (Horoballs as introduced in Section~\ref{tree-section} will play a prominent role in this paper as well.) The first idea concerning the group theoretic aspect would therefore be maximal unipotent subgroups of $\Gamma$. Unfortunately, from a geometric point of view, unipotents in semisimple groups over local fields do not necessarily behave as one expects extrapolating from the $SL_2(\mathbb{R})$ case. Unipotent elements can be grouped into three classes defined algebraically (see Section~\ref{unipotent-section}) which can be distinguished by their action on the Bruhat--Tits building, as illustrated by Proposition~\ref{good/bad/ugly_geom}. This proposition makes it clear that we should restrict ourselves to considering good unipotent elements only.
(An element/subgroup of $\G k.$ is good iff it is contained in the unipotent radical of a $k$--parabolic subgroup. The first part of Theorem~\ref{all good} explains why we did not encounter that problem for groups over the reals.) We thus arrive at the following definition: \begin{definition}[cusps]\label{cusps} Let $\Gamma$ be a lattice in $\G k.$. \begin{itemize} \item An end $\epsilon$ of the Bruhat--Tits tree $\XG$ of $\G k.$ is called $\Gamma$--cuspidal iff there is a nontrivial good unipotent element in $\Gamma$ fixing $\epsilon$. \item A geometric cusp of $\Gamma$ is an end of the quotient of the barycentric subdivision of $\XG$ modulo $\Gamma$. \item A cusp subgroup of $\Gamma$ is a maximal nontrivial good unipotent subgroup. \item A cusp of $\Gamma$ is a $\Gamma$--conjugacy class of a cusp subgroup of $\Gamma$. \end{itemize} \end{definition} There is an obvious bijection between $\Gamma$--cuspidal ends and cusp subgroups of $\Gamma$: Since we assume that $\G.$ is semisimple of rank $1$, a nontrivial good unipotent is contained in a unique proper $k$--parabolic, hence fixes a unique end. Therefore every cusp subgroup of $\Gamma$ fixes a unique end as well, which is $\Gamma$--cuspidal by definition. Conversely, any $\Gamma$--cuspidal end (i.e., proper $k$--parabolic) $\P_.$ defines the cusp subgroup $\Gamma\cap\U k_.$, where $\U_.$ is the unipotent radical of $\P_.$. Part (2) of Theorem~\ref{X mod Gamma} will show that this bijection can be pushed down to the quotient level. We next note that our cuspidal ends are what Lubotzky calls the cusps of the lattice in \cite{l.rank1}. Therefore we may use his results whenever convenient. \begin{bemerkung}\label{lub-cusps} Let $\Gamma$ be a lattice in $\G k.$. The $\Gamma$--cuspidal ends of $\XG$ are precisely the cusps of $\Gamma$ in the sense of Lubotzky (\cite[Definition 6.4]{l.rank1} --- see below).
\end{bemerkung} \proof Recall that Lubotzky calls an end $\epsilon$ of $\XG$ a cusp of $\Gamma$ if and only if there exists a vertex $x$ such that, writing $x$ as $g.x_j;\ j=0,1$ (where $x_0$, $x_1$ are representatives) as we may, we find a nontrivial good unipotent in the group $\Gamma\cap gN_1g^{-1}$ fixing $\epsilon$. (The definition of $N_1$ can be found just before Citation~\ref{R-3.15}; it depends on $j$, a natural number $n$, and a lattice $L$ in the Lie algebra of $\G.$. The dependence on $j$ is discussed at the beginning of Section~\ref{l-cit-section}. The existence of an appropriate $L$ is proved in the same section. The parameter $n$ must be chosen large enough to guarantee a geometric property $(\bigstar)$ found at the beginning of Section~\ref{details}.) Call such an end a Lubotzky--cusp for the sake of this proof. Let $\epsilon$ be a Lubotzky--cusp of $\Gamma$. By Lubotzky's definition there is even a nontrivial good unipotent in $\Gamma_\epsilon\cap gN_1g^{-1}$, so $\epsilon$ is evidently $\Gamma$--cuspidal. For the converse, let the end $\epsilon$ be cuspidal, i.e., assume there is a nontrivial good unipotent element $u$ in $\Gamma$ fixing $\epsilon$. Choose a sequence $(g_i)_{i\in\NN}$ in $\G k.$ such that $g_i^{-1}ug_i$ converges to $e$ if $i$ tends to infinity (use Theorem \ref{RPM}). In addition let $r\in\mathbb{N}$ be large enough to guarantee $N:=\Fix {B_x(r)}_{\G k.}.\subseteq N_1$. If we choose $m\in\mathbb{N}$ large enough to ensure $g_m^{-1}ug_m\in N$, we get $e\neq u\in\Gamma_\epsilon\cap g_mNg_m^{-1}\subseteq \Gamma_\epsilon\cap g_mN_1g_m^{-1}$. We may then choose the vertex $x$ to be $g_m.x_j$ (where $j$ depends on $N_1$).\qed A further remark is in order here: The group $N_1$ is only defined for absolutely simple groups over fields of positive characteristic. For fields of characteristic $0$ all lattices in $\G k.$ are uniform thanks to a result of Tamagawa;\label{Tamagawa} see page 84 in \cite{trees}.
But then a lattice $\Gamma$ cannot contain any nontrivial good unipotent elements thanks to Corollary \ref{cocp=>no goods}. So there are no $\Gamma$--cuspidal ends, hence no Lubotzky--cusps, unless the field has positive characteristic. If the group is not absolutely simple, Lubotzky's definition has to be adapted by working backwards through the reduction steps of Section \ref{reduction} to cover the general case. The following theorem gives the best discrete analogue of the classical description of the structure of the quotient of the upper half plane by a lattice in $SL_2(\mathbb{R})$ one can hope for. To guarantee that $\G k.$ acts without inversion, we work with the barycentric subdivision $\XG'$ of $\XG$ in the following theorem. The function $\q\cdot+1$ will give the order of ramification at a vertex, or at any vertex in the $\Gamma$--orbit of a vertex in the quotient graph, as appropriate. \begin{theorem}[structure of $\Gamma\bbackslash \XG'$] \label{X mod Gamma} Let $\G.$ be a connected semisimple $k$--group of $k$--rank 1. \begin{itemize} \item[(1)] For any lattice $\Gamma$ in $\G k.$ the quotient graph $\Gamma\backslash X_G'$ is the union of a finite connected graph $E$ with finitely many simplicial rays $r_i;\ 1\le i\le c$ attached to $E$ at their respective origin. If $y$ and $y'$ are two neighboring vertices on one of these rays which are sufficiently far from $E$ and with $y$ nearer to $E$ than $y'$, then $\Gamma_y$ is a subgroup of $\Gamma_{y'}$ of index $\q{y'}$. \item[(2)] The map from the set of cusps of $\Gamma$ to the set of geometric cusps of $\Gamma$ induced by the map sending each maximal good unipotent subgroup to the unique end it fixes is bijective. \item[(3)] The $\Gamma$--cuspidal ends of $X_G'$ are precisely the ends whose $\Gamma$--stabilizer is infinite and locally finite. Their stabilizers are maximal infinite locally finite subgroups of $\Gamma$. Every infinite and locally finite subgroup of $\Gamma$ fixes a unique end.
\item[(4)] Suppose that all bad unipotent elements of $\G k.$ contained in $\Gamma$ are anisotropic. Then each cusp subgroup of $\Gamma$ is a cocompact lattice in the group of elliptic elements fixing the $\Gamma$--cuspidal end $\epsilon$ fixed by the cusp subgroup. In other words, the cusp subgroup is of finite index in $\Gamma_\epsilon$. \end{itemize} \end{theorem} Lubotzky (using his notion of cusps) proves parts (1) and (3) and states part (2). We will therefore confine ourselves here to report the proof of (4). A complete proof of the remaining parts can be found in Section~\ref{details}. The proof of (4) is an immediate corollary of a technical result derived in the central steps~4.2--4.5 of Raghunathan's proof of Citation \ref{R-4.1}. What makes it worth reporting is that the condition we impose in (4) almost always holds; see Remark~\ref{anisotropListe}. We first make sure that it suffices to treat the absolutely simple case: The reduction steps are explained in Section~\ref{reduction}. The facts listed in Proposition~\ref{g/b/u.reduction} make sure that the lattice obtained after going through the reduction steps still satisfies our precondition. On the other hand, the last fact mentioned in Section~\ref{U injects} makes sure that the conclusion will hold for the original lattice once it is proved for its ``reduced'' version. So we may indeed assume that $\G.$ is absolutely simple. If the field $k$ has characteristic $0$, we know (cf. the remark following Remark~\ref{lub-cusps}) that there are then no nontrivial good unipotents in the lattice $\Gamma$ and therefore no cusp groups. It follows that the claim of (4) is trivially valid for fields of characteristic $0$. So we may also assume that the field has positive characteristic. This will enable us to use the results of \cite{R}. Now, let $V\leqslant \Gamma$ be a cusp subgroup. Let $\P_.$ be the unique end of $\XG$ it fixes, $\U_.$ its unipotent radical, $P:=\P k_.$ and $U:=\U k_.$.
We have to show that $V$ is a cocompact lattice in the group of elliptic elements in $\P k_.=P$. It is evidently discrete. By definition $V$ is contained in $U$. Since $U$ is cocompact in the group of elliptic elements of $P$ (see Section~\ref{U injects}), it suffices to show that $V$ is cocompact in $U$. The unipotent group $\Lambda$ defined in the first paragraph on page~142 of \cite{R} satisfies $$ V=U\cap\Gamma\leqslant \Lambda\leqslant P\cap \Gamma\;. $$ Raghunathan proves that $\Lambda$ is cocompact in the group $L$ of rational points of its Zariski--closure, and that $L$ contains $U$. Our hypotheses on unipotents in $\G k.$ enable us to prove that actually $V=\Lambda$ holds, and our claim will follow. All elements of $\Lambda$ are unipotent, and none of them is anisotropic, since they are all contained in $P=\P k_.$. By our hypotheses then, $\Lambda$ cannot contain any bad unipotents, therefore $\Lambda\subseteq U\cap\Gamma=V$. \qed Under the additional assumption we made to derive part~(4) of Theorem~\ref{X mod Gamma}, we can improve upon Lubotzky's converse to his recipe for constructing nonuniform lattices as follows. \begin{satz}\label{zalesskii} Let $\Gamma$ be a lattice in a connected semisimple $k$--group of $k$--rank 1. Assume that all cusp stabilizers of $\Gamma$ are residually finite and that all bad unipotent elements of $\Gamma$ are anisotropic. Then there is a sublattice $\Gamma^*$ of $\Gamma$ whose cusp stabilizers are cusp subgroups of $\Gamma^*$ and which is the free product of the representatives $\Delta^*_1,\ldots,\Delta^*_{c^*}$ of its (algebraic) cusps and a free group of rank $\rank_\mathbb{Z}(H_1(\Gamma^*\backslash\XG'))$ generated by hyperbolic elements. \end{satz} \proof The method of proof applied to derive Lubotzky's converse, Theorem~7.1 in \cite{l.rank1}, can be reused. We supplement it with a geometric interpretation.
Part (1) of the Structure Theorem implies that a lattice in the group of rational points of a semisimple group of rank $1$ over a local field is the fundamental group of a finite graph of groups, obtained by ``contraction of cusps'': along the simplicial rays $r_i$ we eventually have an increasing chain of vertex groups. We may therefore replace an appropriate tail of each of these rays of groups by a single point whose attached vertex group equals the direct limit (the fundamental group) of the tail. The fundamental group is unchanged by this modification. But the new graph of groups is finite with finite edge groups and residually finite vertex groups. This is obvious except for the vertex groups obtained by contraction of a tail of a geometric cusp. The latter vertex groups stabilize the cuspidal end covering the tail. They are therefore residually finite by assumption. (Indeed those vertex groups \emph{are} the stabilizers of that cuspidal end, since cusp stabilizers consist of elliptic elements, compare Corollary~\ref{SpitzenStab-}.) By a result of Bass--Serre theory (\cite[Proposition 12]{trees}), the group $\Gamma$ is residually finite and the topology of subgroups with finite index in $\Gamma$ induces the topology of subgroups with finite index on each of the vertex groups. By our second assumption and part~(4) of the Structure Theorem, the cusp subgroups of $\Gamma$ have finite index in the cusp stabilizers. We may therefore choose a normal subgroup $\Gamma^*$ of finite index in $\Gamma$ such that the intersection with $\Gamma^*$ of the vertex groups obtained by contraction consists of good unipotent elements and the intersection of $\Gamma^*$ with the other vertex groups is trivial. The group $\Gamma^*$ determines a covering of the modified graph of groups of $\Gamma$.
Each of the vertex groups for $\Gamma^*$ above the contracted tails is conjugate to the intersection of $\Gamma^*$ with the fundamental group of the tail, hence consists of good unipotent elements. The other vertex groups for $\Gamma^*$ are trivial. Hence $\Gamma^*$ is the free product of its nontrivial vertex groups extended by the free group on the set of edges outside a maximal subtree. Since the graph of groups of $\Gamma^*$ is finite, its sets of vertex and edge groups are finite, and our claim is proved modulo the geometric interpretation. To arrive at it, we interpret the universal covering tree $\ol{X}_{\G.}$ of the modified graph of groups of $\Gamma$ in terms of the original tree $\XG$. According to Lemma~\ref{geo-cusp.contr}, if we choose the tails to be contracted small enough, $\ol{X}_{\G.}$ can be realized by contracting horoballs around cusps which are independent with respect to $\Gamma$. These horoballs are then independent with respect to the subgroup $\Gamma^*$ as well. The groups $\Gamma^*$ and $\Gamma$ are commensurable, hence have the same set of cuspidal ends (this is obvious when using their characterisation in part~(3) of the Structure Theorem). The stabilizers of the independent horoballs (which are the vertices of infinite ramification index in $\ol{X}_{\G.}$) thus coincide with the stabilizers of the cuspidal end they contain. This shows that the nontrivial vertex groups of the graph of groups for $\Gamma^*$ considered above represent the conjugacy classes of the stabilizers of the $\Gamma^*$--cuspidal ends. We are left to confirm that each of the edges of the quotient graph of groups of $\Gamma^*$ outside a maximal subtree corresponds to a hyperbolic transformation and that there are exactly $\rank_\mathbb{Z}(H_1(\Gamma^*\backslash\XG'))$ many of them. But this is obvious from the interpretation of the action of $\Gamma^*$ on $\ol{X}_{\G.}$ in terms of the original action.
\qed This stronger converse enters into the statement and proof of Theorem~4.1 in \cite{profinite.normal}. The dependence, however, does not seem to be critical. \begin{bemerkung} Our second assumption almost always holds, for we are going to show that in most groups of rank~1 each bad unipotent element is anisotropic; compare Observation~\ref{anisotropListe}. On the other hand, it is not known whether cusp stabilizers of a lattice must be residually finite, equivalently, whether lattices in groups of rank~1 are necessarily residually finite. For the subclass of lattices whose cusp subgroups have finite index in the corresponding cusp stabilizer, the question becomes whether a lattice in the group of rational points of the unipotent radical of a semisimple group of rank~1 over a local field of positive characteristic is residually finite. If the unipotent radical is abelian, it is the additive group of a vector space over the field and hence residually finite. I conjecture that each discrete unipotent subgroup of an algebraic group over a local field of positive characteristic is residually finite. (Discreteness is needed, since the unipotent radical of (any) minimal $k$--parabolic in the group $SU_3$ over an infinite field with respect to the standard hermitian form is not residually finite.) \end{bemerkung}
\begin{document} \fontfamily{ptm} \selectfont \maketitle \begin{abstract} We are interested in the long-time behavior of a diploid population with sexual reproduction, characterized by its genotype composition at one bi-allelic locus. The population is modeled by a $3$-dimensional birth-and-death process with competition, cooperation and Mendelian reproduction. This stochastic process is indexed by a scaling parameter $K$ that goes to infinity, following a large population assumption. When the birth and natural death parameters are of order $K$, the sequence of stochastic processes indexed by $K$ converges toward a slow-fast dynamics. We indeed prove the convergence toward $0$ of a fast variable giving the deviation of the population from Hardy-Weinberg equilibrium, while the sequence of slow variables giving the respective numbers of occurrences of each allele converges toward a $2$-dimensional diffusion process that reaches $(0,0)$ almost surely in finite time. We obtain that the population size and the proportion of a given allele converge toward a generalized Wright-Fisher diffusion with varying population size and diploid selection. Using a non-trivial change of variables, we next study the absorption of this diffusion and its long-time behavior conditioned on non-extinction. In particular we prove that this diffusion starting from any non-trivial state and conditioned on not hitting $(0,0)$ admits a unique quasi-stationary distribution. We finally give numerical approximations of this quasi-stationary behavior in three biologically relevant cases: neutrality, overdominance, and separate niches. 
\end{abstract} \Keywords{Diploid populations; Generalized Wright-Fisher diffusion processes; Stochastic slow-fast dynamical systems; Quasi-stationary distributions; Allele coexistence.} \section{Introduction}\label{sectionintro} We study the diffusion limit and quasi-stationary behavior of a population of diploid individuals modeled by a non-linear $3$-type birth-and-death process with competition, cooperation and Mendelian reproduction. Individuals are characterized by their genotype at one locus for which there exist $2$ alleles, $A$ and $a$. We study the genetic evolution of the population, i.e. the dynamics of the respective numbers of individuals with genotype $AA$, $Aa$, and $aa$. Following an infinite population size approximation (see also \cite{FournierMeleard2004} and \cite{Champagnat2006} for instance) we assume that the initial number of individuals is of order $K$ where $K$ is a scale parameter that will go to infinity. The population is then modeled by a $3$-type birth-and-death process denoted by $\nu^K=(\nu^K_t, t\geq0)$ and we consider the sequence of stochastic processes $Z^K=\nu^K/K$. At each time $t$ and for all $K$, we define the deviation $Y^K_t$ of the population $Z^K_t$ from a so-called Hardy-Weinberg equilibrium. We are interested in the convergence of the sequence of stochastic processes $Z^K$ when the individual birth and natural death rates are assumed to be both equivalent to $\gamma K$, with $\gamma>0$ (see Section \ref{sectionconvergencediffusion} and \cite{ChampagnatFerriereMeleard2006} for a biological interpretation). In Section \ref{sectionconvergencediffusion} we first establish some conditions on the competition and cooperation parameters so that the sequence of population sizes satisfies a moment propagation property. 
Next, we prove the convergence of the sequence of stochastic processes $Z^K$ toward a slow-fast dynamics (see \cite{MeleardTran2012} or \cite{Balletal2006} for other examples of such dynamics and \cite{Kurtz1992} and \cite{BerglundGentz2005} for treatments of slow-fast scales in diffusion processes). More precisely, we prove that for all $t>0$, the sequence of random variables $(Y^K_t)_{K\in\mathbb{N}^*}$ goes to $0$ when $K$ goes to infinity, while the sequence of processes $(N^K_t,X^K_t)_{t\geq0}$ giving respectively the population size and the proportion of allele $A$ converges in law toward a "slow" $2$-dimensional diffusion process $(N_t,X_t)_{t\geq0}$. This limiting diffusion $(N,X)$ can be seen as a generalized Wright-Fisher diffusion with varying population size and diploid selection. In Section \ref{sectionQSD}, we first find an appropriate change of variables $S=(f_1(N,X),f_2(N,X))$ such that $S$ is a Kolmogorov diffusion process evolving in a subset $\mathcal{D}$ of $\mathbb{R}^2$. We prove that the stochastic process $S$ is absorbed in the set $\mathbf{A}\cup\mathbf{B}\cup\mathbf{0}$ almost surely in finite time, where $\mathbf{A}$, $\mathbf{B}$ and $\mathbf{0}$ correspond respectively to the sets where $X=1$ (fixation of allele $A$), $X=0$ (fixation of allele $a$), and $N=0$ (extinction of the population). Next, following \cite{CattiauxCollet...2009} and \cite{CattiauxMeleard2009}, we study the quasi-stationary behavior of the diffusion process $(S_t)_{t\geq0}$ conditioned on the non-extinction of the population, i.e. on not reaching $\mathbf{0}$. First, the diffusion process $(S_t)_{t\geq0}$ conditioned on not reaching $\mathbf{A}\cup\mathbf{B}\cup\mathbf{0}$ admits a Yaglom limit. Second, if $S_0\notin\mathbf{A}\cup\mathbf{B}\cup\mathbf{0}$ then the law of $S_t$ conditioned on $\{S_t\notin\mathbf{0}\}$ converges when $t$ goes to infinity toward a distribution which is independent of $S_0$. 
Finally in Section \ref{sectionnumerical}, we present numerical applications and study the long-time coexistence of the two alleles, in three biologically relevant cases: a pure competition neutral case, a case in which each genotype has its own ecological niche, and an overdominance case. In particular, we show that a long-term coexistence of alleles is possible even in some full competition cases, which is not true for haploid clonal reproduction (\cite{CattiauxMeleard2009}). Note that for the sake of simplicity, most proofs of this article are given in the main text for the neutral case where demographic parameters do not depend on the types of individuals, and the calculations for the non-neutral case are given in Appendix \ref{appendixQnonneutre}. \section{Model and deterministic limit}\label{sectionmodel} \subsection{Model} We consider a population of diploid hermaphroditic individuals characterized by their genotype at one bi-allelic locus, whose alleles are denoted by $A$ and $a$. Individuals can then have one of the three possible genotypes $AA$, $Aa$, and $aa$ (also called types $1$, $2$, and $3$). The population at any time $t$ is represented by a $3$-dimensional birth-and-death process giving the respective numbers of individuals with each genotype. As in \cite{FournierMeleard2004}, \cite{ChampagnatMeleard2011} or \cite{ColletMeleardMetz2012}, we consider an infinite population size approximation. To this end we introduce a scaling parameter $K\in\mathbb{N}^*$ that will go to infinity, and we denote by $\nu^K=((\nu^{1,K}_t,\nu^{2,K}_t,\nu^{3,K}_t),t\geq0)$ the population indexed by $K$. 
The initial numbers of individuals of each type $\nu_0^{1,K}$, $\nu_0^{2,K}$ and $\nu_0^{3,K}$ will be of order $K$ and we then consider the sequence of rescaled stochastic processes \be\left(Z^K_t\right)_{t\geq0}=\left(Z^{1,K}_t,Z^{2,K}_t,Z^{3,K}_t\right) _{t\geq0}=\left(\frac{\nu^K_t}{K}\right)_{t\geq0}\ee that gives at each time $t$ the respective numbers of individuals with genotypes $AA$, $Aa$, and $aa$, weighted by $1/K$. The rescaled population size at time $t$ is denoted by \ben\label{taille} N^K_t=Z^{1,K}_t+Z^{2,K}_t+Z^{3,K}_t\in\frac{\mathbb{Z}_+}{K},\een and the proportion of allele $A$ at time $t$ is denoted by \ben\label{proportion} X^K_t=\frac{2Z^{1,K}_t+Z^{2,K}_t}{2(Z^{1,K}_t+Z^{2,K}_t+Z^{3,K}_t)}.\een As in \cite{Coron2012} or \cite{ColletMeleardMetz2012}, the jump rates of $Z^K$ model Mendelian panmictic reproduction. More precisely, if we set $e_1=(1,0,0)$, $e_2=(0,1,0)$ and $e_3=(0,0,1)$, then for all $i\in\{1,2,3\}$, the rates $\lambda_i^K(z)$ at which the stochastic process $Z^K$ jumps from $z=(z_1,z_2,z_3)\in\left(\frac{\mathbb{Z}_+}{K}\right)^3$ to $z+e_i/K$, as long as $z_1+z_2+z_3=n\neq0$, are given by:\ban\label{birthratesQSD} \lambda_1^K(z)&=\frac{Kb_1^K}{n}\left(z_1+\frac{z_2}{2}\right)^2,\\ \lambda_2^K(z)&=\frac{Kb_2^K}{n}2\left(z_1+\frac{z_2}{2}\right)\left(z_3+\frac{z_2}{2}\right),\\ \lambda_3^K(z)&=\frac{Kb_3^K}{n}\left(z_3+\frac{z_2}{2}\right)^2.\ean These birth rates are naturally set to $0$ if $n=0$ and the demographic parameters $b_i^K\in\mathbb{R}_+$ are called birth demographic parameters. Now individuals can die naturally and either compete or cooperate with other individuals, depending on the genotype of each individual. 
More precisely, for all $i\in\{1,2,3\}$, the rates $\mu_i^K(z)$ at which the stochastic process $Z^K$ jumps from $z=(z_1,z_2,z_3)\in(\mathbb{Z}_+)^3/K$ to $z-e_i/K$ are given by:\ban\label{deathratesQSD} \mu_1^K(z)&=Kz_1(d_1^K+K(c_{11}^Kz_1+c_{21}^Kz_2+c_{31}^Kz_3))^+,\\ \mu_2^K(z)&=Kz_2(d_2^K+K(c_{12}^Kz_1+c_{22}^Kz_2+c_{32}^Kz_3))^+,\\ \mu_3^K(z)&=Kz_3(d_3^K+K(c_{13}^Kz_1+c_{23}^Kz_2+c_{33}^Kz_3))^+,\ean where the interaction (competition or cooperation) demographic parameters $c_{ij}^K$ are arbitrary real numbers and $(x)^+=\max(x,0)$ for any $x\in\mathbb{R}$. If $c_{ij}^K>0$ (resp. $c_{ij}^K<0$), then individuals with type $i$ have a negative (resp. positive) influence on individuals of type $j$. The demographic parameter $d_i^K\in\mathbb{R}_+$ is called the intrinsic death rate of individuals of type $i$. From now on, we say that the stochastic process $Z^K$ is neutral for a given $K\in\mathbb{N}^*$ if its demographic parameters do not depend on the types of individuals, i.e. \ben\label{neutre} b_i^K=b^K, \quad d_i^K=d^K\quad \text{ and } \quad c_{ij}^K=c^K\quad\forall i,j\in\{1,2,3\}.\een Note that for any fixed $K\in\mathbb{N}^*$, the pure jump process $Z^K$ is well defined for all $t\in\mathbb{R}_+$. Indeed, $N^K$ is stochastically dominated by a rescaled pure birth process $\overline{N}^K$ that jumps from $n\in\mathbb{Z}_+/K$ to $n+1/K$ at rate $(\underset{i}{\max}\;b_i^K)Kn$ and, from Theorem $10$ in \cite{VillemonaisMeleard2012}, $\overline{N}^K$ does not explode almost surely. 
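For concreteness, the birth rates \eqref{birthratesQSD} and death rates \eqref{deathratesQSD} can be implemented directly, and one trajectory of the rescaled jump process can be simulated exactly with a Gillespie-type algorithm. The following Python sketch is illustrative only (the function names and all parameter values are ours, not from the text); it works with the rescaled state $z\in(\mathbb{Z}_+/K)^3$.

```python
import numpy as np

rng = np.random.default_rng(0)

def birth_rates(z, b, K):
    """Mendelian birth rates lambda_i^K(z) of Equation (birthratesQSD)."""
    z1, z2, z3 = z
    n = z1 + z2 + z3
    if n == 0:
        return np.zeros(3)           # birth rates vanish at extinction
    wA = z1 + z2 / 2.0               # weighted (rescaled) count of allele A
    wa = z3 + z2 / 2.0               # weighted (rescaled) count of allele a
    return np.array([K * b[0] / n * wA ** 2,
                     K * b[1] / n * 2 * wA * wa,
                     K * b[2] / n * wa ** 2])

def death_rates(z, d, c, K):
    """Death rates mu_i^K(z) of Equation (deathratesQSD); c[i][j] = c_ij^K."""
    z = np.asarray(z, dtype=float)
    # type j feels the interaction sum_i c_ij * z_i, i.e. column j of c
    pressure = np.asarray(c, dtype=float).T @ z
    return K * z * np.maximum(np.asarray(d, dtype=float) + K * pressure, 0.0)

def gillespie(z0, rates_fn, T, K):
    """Exact simulation of the rescaled jump process Z^K on [0, T];
    rates_fn(z) must return the six rates (lambda_1..3, mu_1..3) at z."""
    e = np.eye(3) / K
    jumps = np.vstack([e, -e])       # jump amplitudes +/- e_i / K
    t, z = 0.0, np.array(z0, dtype=float)
    path = [(t, z.copy())]
    while t < T:
        r = np.asarray(rates_fn(z), dtype=float)
        total = r.sum()
        if total == 0.0:             # absorbed, e.g. at (0,0,0)
            break
        t += rng.exponential(1.0 / total)   # waiting time to next event
        z = z + jumps[rng.choice(6, p=r / total)]
        path.append((t, z.copy()))
    return path
```

For instance, `gillespie(z0, lambda z: np.concatenate([birth_rates(z, b, K), death_rates(z, d, c, K)]), T, K)` produces one trajectory; in the neutral case \eqref{neutre} the three birth rates sum to $b^KKN^K_t$, which gives a quick sanity check.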
The stochastic process $Z^K$ is then a $\frac{(\mathbb{Z}_+)^3}{K}$-valued pure jump Markov process absorbed at $(0,0,0)$, defined for all $t\geq0$ by \be Z^K_t=Z^K_0+\sum_{i\in\{1,2,3\}}\left[\int_0^t\frac{e_i}{K}\mathbf{1}_{\{\theta\leq \lambda_i^K(Z^K_{s^-})\}}\eta_{1}^i(ds,d\theta)-\int_0^t\frac{e_i}{K}\mathbf{1}_{\{\theta\leq \mu_i^K(Z^K_{s^-})\}}\eta_{2}^i(ds,d\theta)\right]\ee where the measures $\eta_{j}^i$ for $i\in\{1,2,3\}$ and $j\in\{1,2\}$ are independent Poisson point measures on $(\mathbb{R}_+)^2$ with intensity $dsd\theta$. For any $K$, the law of $Z^K$ is therefore a probability measure on the trajectory space $\mathbb{D}(\mathbb{R}_+,(\mathbb{Z}_+)^3/K)$ which is the space of left-limited and right-continuous functions from $\mathbb{R}_+$ to $(\mathbb{Z}_+)^3/K$, endowed with the Skorohod topology. Finally, the extended generator $L^K$ of $Z^K$ satisfies for every bounded measurable function $f$ from $(\mathbb{Z}_+)^3/K$ to $\mathbb{R}$ and for every $z\in\frac{(\mathbb{Z}_+)^3}{K}$: \ban \label{generateur}L^Kf(z)& =\underset{i\in\{1,2,3\}}{\sum}\left[\lambda_i^K(z)\left(f\left(z+\frac{e_i}{K}\right)-f(z)\right)+\mu_i^K(z)\left(f\left(z-\frac{e_i}{K}\right)-f(z)\right)\right].\ean To end with the model description, let us introduce for all $K\in\mathbb{N}^*$ the stochastic processes $Y^K$ such that for every $t\geq0$, as long as $N^K_t>0$,\ben\label{defY}Y^K_t=\frac{4Z^{1,K}_tZ^{3,K}_t-(Z^{2,K}_t)^2}{4N^K_t}.\een If $N^K_t=0$, we set $Y^K_t=0$, which is consistent since $\vert Y^K_t\vert\leq N^K_t$ for all $t\geq0$. This stochastic process will play a central role in this article; note first that: $$Y^K_t=Z^{1,K}_t-\frac{(2Z^{1,K}_t+Z^{2,K}_t)^2}{4N^K_t}=\left(p^{AA,K}_t-(p^{A,K}_t)^2\right)N^K_t$$ if $p^{A,K}_t$ (resp. $p^{AA,K}_t$) is the proportion of allele $A$ (resp. genotype $AA$) in the population at time $t$. 
Similarly, $$Y^K_t=\left(p^{aa,K}_t-(p^{a,K}_t)^2\right)N^K_t=\left(2p^{A,K}_tp^{a,K}_t-p^{Aa,K}_t\right)N^K_t.$$ Then if $Y^K_t=0$, the proportion of each genotype in the population $Z^K_t$ is equal to the proportion of pairs of alleles forming this genotype. By an abuse of language, if $Y^K_t=0$ we say that the population $Z^K_t$ is at Hardy-Weinberg equilibrium (\cite{CrowKimura1970}, p. $34$). In the rest of the article, we will see that the quantities of interest in this model are the population size $N^K$, the deviation from Hardy-Weinberg equilibrium $Y^K$ and the proportion $X^K$ of allele $A$. More precisely, the following lemma gives the change of variable: \begin{lem}\label{cgtvar} Let us set for all $z=(z_1,z_2,z_3)\in(\mathbb{R}_+)^3\setminus\{(0,0,0)\}$, \ba \phi_1(z)&=z_1+z_2+z_3,\quad\phi_2(z)=\frac{2z_1+z_2}{2(z_1+z_2+z_3)}\quad\text{and}\quad\phi_3(z)=\frac{4z_1z_3-(z_2)^2}{4(z_1+z_2+z_3)}.\ea Then the function \ba \phi:(\mathbb{R}_+)^3\setminus\{(0,0,0)\}&\rightarrow E\\z&\mapsto \phi(z)=(\phi_1(z),\phi_2(z),\phi_3(z))\ea where $E=\{(n,x,y)|n\in\mathbb{R}_+^*,x\in[0,1],-n\min(x^2,(1-x)^2)\leq y\leq nx(1-x)\}$ is a bijection. \end{lem} Note that $\phi(Z^K)=(N^K,X^K,Y^K),$ where $N^K$, $X^K$, and $Y^K$ have been respectively defined in Equations \eqref{taille}, \eqref{proportion} and \eqref{defY}. \begin{proof} We easily obtain that $(n,x,y)=\phi(z_1,z_2,z_3)$ if and only if \ben\label{inverse} z_1=nx^2+y,\quad z_2=2nx(1-x)-2y\quad\text{and}\quad z_3=n(1-x)^2+y,\een which gives the injectivity. Next, for any $(n,x,y)$ such that $n\in\mathbb{R}_+^*$, $x\in[0,1]$ and $-n\min(x^2,(1-x)^2)\leq y\leq nx(1-x)$, $z=(z_1,z_2,z_3)$ defined by Equation \eqref{inverse} is in $(\mathbb{R}_+)^3\setminus\{(0,0,0)\}$ which gives the surjectivity. \end{proof} \subsection{Convergence toward a deterministic system}\label{ssectiondeterm} This section aims at understanding the behavior of the population when the birth and natural death parameters do not depend on $K$. 
The results obtained at this scaling will indeed give an intuition of the behavior of the population when $b_i^K$ and $d_i^K$ are of order $K$, which is studied in Section \ref{sectionconvergencediffusion}. In particular, we prove in this section a long-time convergence of the population toward Hardy-Weinberg equilibrium. We consider a particular case of the scaling considered in Section $3$ of \cite{ColletMeleardMetz2012}. More precisely, we set: \ba b_i^K&=\beta\in\mathbb{R}_+^*\\ d_i^K&=\delta\in\mathbb{R}_+\\ Kc_{ij}^K&=\alpha\in\mathbb{R}_+\\ Z^K_0&\underset{K \to \infty}{\longrightarrow} \mathcal{Z}_0\quad\text{in law,} \ea where $\mathcal{Z}_0$ is a deterministic vector of $(\mathbb{R}_+)^3$. Note that the process $Z^K$ is neutral for all $K\in\mathbb{N}^*$. For all $i\in\{1,2,3\}$ and $z=(z_1,z_2,z_3)\in(\mathbb{R}_+)^3$ we now denote by $\lambda^{\infty}_i(z)$ (resp. $\mu^{\infty}_i(z)$) the limit of the rescaled birth (resp. death) rate (see Equations \eqref{birthratesQSD} and \eqref{deathratesQSD}): $$\lambda_i^{\infty}(z)=\underset{K\rightarrow\infty}{\lim}\frac{\lambda_i^K(z)}{K}\quad\text{and}\quad\mu_i^{\infty}(z)=\underset{K\rightarrow\infty}{\lim}\frac{\mu_i^K(z)}{K}.$$ For instance, if we set $(n,x,y)=\phi(z)$ where $\phi$ has been defined in Lemma \ref{cgtvar}, we get $$\lambda_1^{\infty}(z)=\beta nx^2\quad\text{and}\quad \mu_1^{\infty}(z)=(\delta+\alpha n)z_1.$$ Here, Proposition $3.2$ in \cite{ColletMeleardMetz2012} (see also Theorem $5.3$ of \cite{FournierMeleard2004}) gives that for all $T>0$, the sequence of stochastic processes $(Z^K_t,t\in[0,T])_{K\in\mathbb{Z}_+^*}$ converges in law in $\mathbb{D}([0,T],(\mathbb{R}_+)^3)$ toward a deterministic limit $\mathcal{Z}=(\mathcal{Z}^1,\mathcal{Z}^2,\mathcal{Z}^3)$, which is the unique continuous solution of the differential system: \ben \label{systemedeterministe} \left\{\begin{tabular}{l} $\frac{d\mathcal{Z}^1_t}{dt}=\lambda_1^{\infty}(\mathcal{Z}_t)-\mu_1^{\infty}(\mathcal{Z}_t)$\\ 
$\frac{d\mathcal{Z}^2_t}{dt}=\lambda_2^{\infty}(\mathcal{Z}_t)-\mu_2^{\infty}(\mathcal{Z}_t)$\\ $\frac{d\mathcal{Z}^3_t}{dt}=\lambda_3^{\infty}(\mathcal{Z}_t)-\mu_3^{\infty}(\mathcal{Z}_t).$ \end{tabular}\right. \een An explicit solution of this system is not immediately apparent, but using the change of variables $\phi$ introduced in Lemma \ref{cgtvar}, we obtain that $\phi(\mathcal{Z})=(\mathcal{N},\mathcal{X},\mathcal{Y})$ satisfies the following. \begin{prop}\label{propsolutiondeterministe} \begin{description} \item[$(i)$] If $\beta=\delta$ then \ben\label{formuleNdeterm1} \mathcal{N}_t=\frac{\mathcal{N}_0}{\alpha\mathcal{N}_0t+1},\quad\forall t\geq0.\een Else \ben\label{formuleNdeterm} \mathcal{N}_t=\frac{(\beta-\delta)\mathcal{N}_0e^{(\beta-\delta)t}}{(\beta-\delta)+\alpha\mathcal{N}_0(e^{(\beta-\delta)t}-1)}\quad\forall t\geq0.\een \item[$(ii)$] For all $t\geq0$, $\mathcal{X}_t=\mathcal{X}_0$. \item[$(iii)$] If $\alpha=0$ then $\mathcal{Y}_t=\mathcal{Y}_0e^{-\delta t}$ for all $t\geq0.$ If $\alpha\neq0$ and $\beta=\delta$ then $\mathcal{Y}_t=\frac{\mathcal{Y}_0e^{-\delta t}}{1+\alpha\mathcal{N}_0t}$ for all $t\geq0$. If $\alpha\neq0$, $\beta\neq\delta$ and $\mathcal{N}_0=\frac{\beta-\delta}{\alpha}$, then $\mathcal{Y}_t=\mathcal{Y}_0e^{-\beta t}$ for all $t\geq0.$ Finally if $\alpha\neq0$, $\beta\neq\delta$ and $\mathcal{N}_0\neq\frac{\beta-\delta}{\alpha}$ then, setting $C=\frac{\mathcal{Y}_0}{1-\frac{\alpha\mathcal{N}_0}{\beta-\delta}}$, $\mathcal{Y}_t=Ce^{-\delta t}\left(1-\frac{\alpha\mathcal{N}_0e^{(\beta-\delta)t}}{(\beta-\delta)+\alpha\mathcal{N}_0(e^{(\beta-\delta)t}-1)}\right)$ for all $t\geq0$. \end{description} \end{prop} \begin{proof} $\mathcal{N}$ solves the logistic equation $d\mathcal{N}_t/dt=(\beta-\delta-\alpha \mathcal{N}_t)\mathcal{N}_t$, whose unique solution is given for $\beta\neq\delta$ and $\alpha\neq0$ in \cite{Verhulst1844}. 
We then have Equation \eqref{formuleNdeterm} that remains true if $\alpha=0$ and $\beta\neq\delta$. If $\beta=\delta$ we easily find that the unique solution of the equation $d\mathcal{N}_t/dt=-\alpha \mathcal{N}_t^2$ is given by Equation \eqref{formuleNdeterm1}. Therefore, $\mathcal{N}_t>0$ for all $t\geq0$. Then, using the System of equations \eqref{systemedeterministe}, we find that $d\mathcal{X}_t/dt=0$ for all $t\geq0$ which gives $(ii)$. Finally for $(iii)$, $\mathcal{Y}$ is the solution of the differential equation $d\mathcal{Y}_t/dt=-(\delta+\alpha\mathcal{N}_t)\mathcal{Y}_t$. If $\alpha=0$ the solution is clear. If $\alpha\neq0$ and $\beta=\delta$ we find the result by looking for a solution of the form $C(t)e^{-\delta t}$ and from Equation \eqref{formuleNdeterm1}. If $\alpha\neq0$, $\beta\neq\delta$ and $\mathcal{N}_0=\frac{\beta-\delta}{\alpha}$, then from Equation \eqref{formuleNdeterm}, $\mathcal{N}_t=\mathcal{N}_0$ for all $t$ and the solution follows. Finally if $\alpha\neq0$, $\beta\neq\delta$ and $\mathcal{N}_0\neq\frac{\beta-\delta}{\alpha}$, looking for a solution of the form $\mathcal{Y}_t=Ce^{-\delta t}+\frac{B(t)e^{(\beta-\delta)t}}{(\beta-\delta)+\alpha\mathcal{N}_0(e^{(\beta-\delta)t}-1)}$ with $B(t)=De^{-\delta t}$ we find the result. \end{proof} \bigskip Note that, in this scaling, the population does not go extinct in finite time and the proportion of allele $A$ remains constant. Besides, $\mathcal{Y}_t$ goes to $0$ when $t$ goes to infinity, which gives a long-time convergence of the population toward Hardy-Weinberg equilibrium. We therefore observe biodiversity conservation but cannot study the Darwinian evolution of the population, since neither of the two alleles will eventually disappear. These points are due to the fact that the population is neutral and to the large population size assumption (\cite{CrowKimura1970}, p. $34$). 
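As a quick numerical cross-check of Proposition \ref{propsolutiondeterministe} (a sketch with illustrative parameter values; the function names are ours), one can compare the closed-form expression for $\mathcal{N}$, and the solution of $d\mathcal{Y}_t/dt=-(\delta+\alpha\mathcal{N}_t)\mathcal{Y}_t$ in the critical case $\beta=\delta$ obtained by variation of constants, against a crude Euler integration of the corresponding ODEs:

```python
import numpy as np

def N_closed(t, N0, beta, delta, alpha):
    """Closed-form solution of the logistic ODE dN/dt = (beta - delta - alpha N) N."""
    if beta == delta:
        return N0 / (alpha * N0 * t + 1.0)
    r = beta - delta
    return r * N0 * np.exp(r * t) / (r + alpha * N0 * (np.exp(r * t) - 1.0))

def Y_closed_crit(t, Y0, N0, delta, alpha):
    """Solution of dY/dt = -(delta + alpha N_t) Y when beta = delta, obtained
    by variation of constants: Y_t = Y0 * exp(-delta t) / (1 + alpha N0 t)."""
    return Y0 * np.exp(-delta * t) / (1.0 + alpha * N0 * t)

def euler(f, x0, t, steps=200_000):
    """Crude explicit Euler scheme for dx/ds = f(s, x), x(0) = x0."""
    dt, x, s = t / steps, x0, 0.0
    for _ in range(steps):
        x += f(s, x) * dt
        s += dt
    return x

# compare closed forms with direct integration (illustrative parameters)
beta, delta, alpha, N0 = 2.0, 1.0, 0.5, 0.1
N_num = euler(lambda s, N: (beta - delta - alpha * N) * N, N0, 3.0)
# critical case beta = delta = 1, alpha = 1, N0 = 1, so N_s = 1 / (1 + s)
Y_num = euler(lambda s, Y: -(1.0 + N_closed(s, 1.0, 1.0, 1.0, 1.0)) * Y, 0.2, 2.0)
```

Both comparisons agree to the accuracy of the Euler scheme, which also gives a check of the critical-case formula for $\mathcal{Y}$.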
\section{Convergence toward a slow-fast stochastic dynamics}\label{sectionconvergencediffusion} In this section, we investigate a new scaling under which the population size and proportion of allele $a$ evolve stochastically with time (in particular the population can get extinct and one of the two alleles can eventually get fixed), while the population still converges rapidly toward Hardy-Weinberg equilibrium. The results we obtain then provide a rigorous justification of the assumption of Hardy-Weinberg equilibrium which is often made when studying large populations. However we will explain that this result does not mean that the genetic composition of a diploid population can always be reduced to a set of alleles. We assume that birth and natural death parameters are of order $K$, while $Z^K_0$ converges in law toward a random vector $Z_0$. More precisely, we set for $\gamma>0$: \ba b_i^K&= \gamma K+\beta_i\in[0,\infty[\\ d_i^K&= \gamma K+\delta_i\in[0,\infty[\\ c_{ij}^K&=\frac{\alpha_{ij}}{K}\in\mathbb{R}\\ Z^K_0&\underset{K\rightarrow\infty}{\rightarrow} Z_0 \quad\text{ in law,} \ea where $Z_0$ is a $(\mathbb{R}_+)^3$-valued random variable. This means that the birth and natural death events are now happening faster, which will introduce some stochasticity in the limiting process. The results presented in Proposition \ref{propsolutiondeterministe} suggest that under these conditions, $Y^K$ will be a "fast" variable that converges directly toward the long-time equilibrium of $\mathcal{Y}$ (equal to $0$), while $X^K$ and $N^K$ will be "slow" variables, converging toward a non-deterministic process. First, we need a moment propagation property. It does not hold for all values of the interaction parameters $\alpha_{ij}$, in particular when $\alpha_{ii}\leq0$ for some $i\in\{1,2,3\}$, i.e. when individuals with the same genotype cooperate or do not compete. 
For any $z=(z_1,z_2,z_3)\in(\mathbb{R}_+)^3$, let $g(z)=\underset{i,j\in\{1,2,3\}}{\sum}\alpha_{ij}z_iz_j.$ We establish the following: \begin{prop}\label{propgpositif} If $g(z)>0$ for all $z\in\left(\mathbb{R}_+\right)^3\setminus\{(0,0,0)\}$ and if, for any $k\in\mathbb{Z}_+$, there exists a constant $C_0$ such that for all $K\in\mathbb{N}^*$, $\mathbb{E}((N^K_0)^k)\leq C_0$, then \begin{description} \item[(i)] There exists a constant $C$ such that $\underset{K}{\sup}\;\underset{t\geq0}{\sup}\;\mathbb{E}((N^K_t)^k)\leq C.$ \item[(ii)] For all $T<+\infty$, there exists a constant $C_T$ such that $\underset{K}{\sup}\;\mathbb{E}\left(\underset{t\leq T}{\sup}\;(N^K_t)^k\right)\leq C_T.$ \end{description} \end{prop} \begin{proof} Assume that $g(z)>0$ for all $z\in\left(\mathbb{R}_+\right)^3\setminus\{(0,0,0)\}$. Note that $g$ can be written as $$g(z)=\phi_1(z)^2\underset{i,j\in\{1,2,3\}}{\sum}\alpha_{ij}p_ip_j:=\phi_1(z)^2f(p_1,p_2,p_3)$$ where $\phi_1$ has been defined in Lemma \ref{cgtvar} and $p_i=z_i/\phi_1(z)$ for all $i\in\{1,2,3\}$. The function $f$ is then non-negative and continuous on $\{(p_1,p_2,p_3)\in[0,1]^3|p_1+p_2+p_3=1\}$ which is a compact set. Then $f$ reaches its minimum $m\geq0$. Now if there exists $(p_1,p_2,p_3)$ such that $f(p_1,p_2,p_3)=0$ then for any $n>0$, $g(np_1,np_2,np_3)=0$ and $(np_1,np_2,np_3)\in\left(\mathbb{R}_+\right)^3\setminus\{(0,0,0)\}$ which is impossible. Then $m>0$, $g(z)\geq m\phi_1(z)^2$, and $\mu_1^K(z)+\mu_2^K(z)+\mu_3^K(z)\geq(\gamma K+\underset{i}{\inf}\delta_i+m\phi_1(z))K\phi_1(z)$ for all $z\in(\mathbb{R}_+)^3$. Then, for all $K$, $N^K$ is stochastically dominated by the logistic birth-and-death process $\overline{N}^K$ jumping from $n\in\mathbb{Z}_+/K$ to $n+1/K$ at rate $(\gamma K+\underset{i\in\{1,2,3\}}{\sup}\;\beta_i)Kn$ and from $n$ to $n-1/K$ at rate $(\gamma K+\underset{i\in\{1,2,3\}}{\inf}\delta_i+m n)Kn$. 
Finally, the sequence of stochastic processes $\overline{N}^K$ satisfies \textbf{(i)} and \textbf{(ii)}, which gives the result (see respectively Lemma $1$ of \cite{Champagnat2006} and the proof of Theorem $5.3$ of \cite{FournierMeleard2004}). \end{proof} \bigskip From now on, we assume the following hypotheses: \ben\label{hyp1}\tag{H1} g(z)>0 \quad\text{for all}\quad z\in\left(\mathbb{R}_+\right)^3\setminus\{(0,0,0)\},\een and a third-order moment condition: \ben\label{hyp2}\tag{H2}\text{there exists } C<\infty \quad\text{such that}\quad\underset{K}{\sup}\;\mathbb{E}((N^K_0)^3)\leq C.\een In Section \ref{sectionQSD} we will consider only the symmetrical case where $\alpha_{ij}=\alpha_{ji}$ for all $i,j$ and give some explicit sufficient conditions on the parameters $\alpha_{ij}$ so that \eqref{hyp1} holds. \bigskip The following proposition gives that $(Y^K_t,t\geq0)$ is a fast variable that converges toward the deterministic value $0$ when $K$ goes to infinity. \begin{prop} \label{propY} Under \eqref{hyp1} and \eqref{hyp2}, for all $s,t>0$, $\underset{t\leq u\leq t+s}{\sup}\mathbb{E}((Y^K_u)^2)\rightarrow 0$ when $K$ goes to infinity. \end{prop} \begin{proof} Let us fix $z=(z_1,z_2,z_3)\in(\mathbb{R}_+)^3$ and set $(n,x,y)=\phi(z)$ where $\phi$ is defined in Lemma \ref{cgtvar}. 
The extended generator $L^K$ of the jump process $Z^K$ applied to a measurable real-valued function $f$ (see Equation \eqref{generateur}) is decomposed as follows in $z$: \ban\label{generateurdecompose} L^Kf(z)&=\gamma K^2y\left[f\left(z-\frac{e_1}{K}\right)-2f\left(z-\frac{e_2}{K}\right)+f\left(z-\frac{e_3}{K}\right)\right]\\&+\gamma K^2nx^2\left[f\left(z+\frac{e_1}{K}\right)+f\left(z-\frac{e_1}{K}\right)-2f\left(z\right)\right]\\&+2\gamma K^2nx(1-x)\left[f\left(z+\frac{e_2}{K}\right)+f\left(z-\frac{e_2}{K}\right)-2f\left(z\right)\right]\\&+\gamma K^2n\left(1-x\right)^2\left[f\left(z+\frac{e_3}{K}\right)+f\left(z-\frac{e_3}{K}\right)-2f(z)\right]\\&+\beta_1Knx^2\left[f\left(z+\frac{e_1}{K}\right)-f(z)\right]+2\beta_2Knx(1-x)\left[f\left(z+\frac{e_2}{K}\right)-f(z)\right]\\&+\beta_3Kn(1-x)^2\left[f\left(z+\frac{e_3}{K}\right)-f(z)\right]\\& +K\underset{i\in\{1,2,3\}}{\sum}\left(\delta_i+\underset{j\in\{1,2,3\}}{\sum}\alpha_{ji}z_j\right)z_i\left[f\left(z-\frac{e_i}{K}\right)-f(z)\right]\\&+K\underset{i\in\{1,2,3\}}{\sum}\left(\gamma K+\delta_i+\underset{j\in\{1,2,3\}}{\sum}\alpha_{ji}z_j\right)^-z_i\left[f\left(z-\frac{e_i}{K}\right)-f(z)\right]\ean where $(x)^-=\max(-x,0)$. Now if $f=\left(\phi_3\right)^2$, then there exist functions $g_1^K$, $g_2^K$, and $g_3^K$ and a constant $C_1$ such that for all $z\in(\mathbb{R}_+)^3$, \ba f\left(z+\frac{e_1}{K}\right)-f(z)&=-\frac{2f(z)}{Kn}+\frac{2z_3y}{Kn}+g_1^K(z)\\f\left(z+\frac{e_2}{K}\right)-f(z)&=-\frac{2f(z)}{Kn}-\frac{z_2y}{Kn}+g_2^K(z)\\f\left(z+\frac{e_3}{K}\right)-f(z)&=-\frac{2f(z)}{Kn}+\frac{2z_1y}{Kn}+g_3^K(z)\ea with $\vert g_i^K(z)\vert\leq \frac{C_1}{K^2}$ for all $i\in\{1,2,3\}$. 
Finally, note that since $\gamma>0$, there exists a positive constant $C_2$ such that for all $i\in\{1,2,3\}$ and all $z\in(\mathbb{R}_+)^3$, \ba\mathbf{1}_{\left\{\left(\gamma K+\delta_i+\underset{j\in\{1,2,3\}}{\sum}\alpha_{ji}z_j\right)^-\neq 0\right\}}&=\mathbf{1}_{\left\{\underset{j\in\{1,2,3\}}{\sum}\alpha_{ji}z_j\leq -\gamma K-\delta_i\right\}}\\&\leq\mathbf{1}_{\left\{\exists j\in\{1,2,3\}\,:\,\alpha_{ji}<0\,,\, \alpha_{ji}z_j\leq -\frac{\gamma}{3}K-\frac{\delta_i}{3}\right\}}\leq\mathbf{1}_{\left\{\phi_1(z)\geq C_2K\right\}}.\ea Therefore, there exists a positive constant $C_3$ such that \ba L^K\left(\phi_3\right)^2(z) \leq-2\gamma K\left(\phi_3\right)^2(z)+C_3\left[(\phi_1(z))^2(\phi_1(z)+K\mathbf{1}_{\{\phi_1(z)\geq C_2K\}})+1\right].\ea Now from Proposition \ref{propgpositif} and Markov inequality, under \eqref{hyp1} and \eqref{hyp2}, there exists a constant $C$ such that $\underset{K}{\sup}\;\underset{t\geq0}{\sup}\;\mathbb{E}\left(C_3\left[(N^K_t)^2(N^K_t+K\mathbf{1}_{\{N^K_t\geq C_2K\}})+1\right]\right)\leq C.$ Therefore from the Kolmogorov forward equation, since $0\leq (Y^K_t)^2\leq(N^K_t)^2$ for all $t$ and from Proposition \ref{propgpositif}, \ba \frac{d\mathbb{E}\left((Y^K_t)^2\right)}{dt}&\leq -2\gamma K\mathbb{E}\left((Y^K_t)^2\right)+C.\ea This gives for all $t\geq0$, $\frac{d}{dt}\left(e^{2\gamma Kt}\mathbb{E}\left((Y^K_t)^2\right)\right)\leq Ce^{2\gamma Kt}.$ Then by integration, \ba\mathbb{E}\left((Y^K_t)^2\right)&\leq \mathbb{E}\left(\left(Y^K_0\right)^2\right)e^{-2\gamma Kt}+\frac{C}{2\gamma K}-\frac{C}{2\gamma K}e^{-2\gamma Kt}\\&\leq \mathbb{E}\left((N^K_0)^2\right)e^{-2\gamma Kt}+\frac{C}{2\gamma K}-\frac{C}{2\gamma K}e^{-2\gamma Kt},\quad\text{which gives the result.}\ea \end{proof} \bigskip In particular, under \eqref{hyp1} and \eqref{hyp2} and for all $t>0$, $Y^K_t$ converges in $L^2$ to $0$. We say that $Y^K$ is a fast variable compared to the vector $(N^K,X^K)$ whose behavior is now studied. 
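Hypothesis \eqref{hyp1} can be screened numerically for a given interaction matrix $(\alpha_{ij})$: as in the proof of Proposition \ref{propgpositif}, positivity of $g$ away from the origin is equivalent to positivity of $f(p)=\sum_{i,j}\alpha_{ij}p_ip_j$ on the $2$-simplex. The sketch below (a grid search, not a proof; names and parameters are ours) approximates the minimum $m$ of $f$:

```python
import itertools
import numpy as np

def min_on_simplex(alpha, m=200):
    """Grid approximation of min f(p) = sum_ij alpha_ij p_i p_j over
    {p in R^3 : p_i >= 0, p_1 + p_2 + p_3 = 1}. A strictly positive
    value suggests that hypothesis (H1) holds for this matrix."""
    alpha = np.asarray(alpha, dtype=float)
    best = np.inf
    for i, j in itertools.product(range(m + 1), repeat=2):
        if i + j <= m:
            p = np.array([i, j, m - i - j]) / m   # grid point of the simplex
            best = min(best, float(p @ alpha @ p))
    return best
```

With $\alpha_{ij}\equiv\alpha>0$ (pure competition) the minimum is $\alpha$; with $\alpha_{ii}=1$ and $\alpha_{ij}=0$ for $i\neq j$ it is $1/3$, attained at the barycenter; and making an $\alpha_{ii}$ negative yields a negative value, consistent with the remark preceding Proposition \ref{propgpositif}.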
Let us introduce the following notation for all $z=(z_1,z_2,z_3)\in(\mathbb{R}_+)^3$: $$\psi_1(z)=2z_1+z_2 \quad\text{and}\quad \psi_2(z)=2z_3+z_2.$$ Note that $\psi_1(Z^K_t)=2N^K_tX^K_t$ (resp. $\psi_2(Z^K_t)=2N^K_t(1-X^K_t)$) is the rescaled number of allele $A$ (resp. $a$) in the rescaled population $Z^K$ at time $t$. For any $K\in\mathbb{N}^*$, $(\psi_1(Z^K),\psi_2(Z^K))$ is a pure jump Markov process with trajectories in $\mathbb{D}(\mathbb{R}_+,(\mathbb{Z}_+)^2/K)$ and for all $i\in\{1,2\}$, the process $\psi_i(Z^K)$ admits the following semi-martingale decomposition: for all $t\geq0$, $$\psi_i(Z^K_t)=\psi_i(Z^K_0)+M^{i,K}_t+\int_0^tL^K\psi_i(Z^K_s)ds$$ where $M^K=(M^{1,K},M^{2,K})$ is, under \eqref{hyp1} and \eqref{hyp2}, a square integrable $\mathbb{R}^2$-valued càdlàg martingale (from Proposition \ref{propgpositif}) and is such that for all $i,j\in\{1,2\}$, the predictable quadratic variation is given for all $t\geq0$ by: $$\langle M^{i,K},M^{j,K}\rangle_t=\int_0^tL^K(\psi_i\psi_j)(Z^K_s) -\psi_i(Z^K_s)L^K\psi_j(Z^K_s)-\psi_j(Z^K_s)L^K\psi_i(Z^K_s)ds.$$ Using this decomposition, we prove the following theorem. \begin{thm}\label{TheoremConvergenceNX} Under \eqref{hyp1} and \eqref{hyp2}, if the sequence $\{(\psi_1(Z^K_0),\psi_2(Z^K_0))\}_{K\in\mathbb{N^*}}$ of random variables converges in law toward a random variable $(N^A_0,N^a_0)$ when $K$ goes to infinity, then for all $T>0$, the sequence of stochastic processes $(\psi_1(Z^K),\psi_2(Z^K))$ converges in law in $\mathbb{D}([0,T],(\mathbb{R}_+)^2)$ when $K$ goes to infinity, toward the diffusion process $(N^A,N^a)$ starting from $(N^A_0,N^a_0)$ and satisfying the following diffusion equation, where $B=(B^1,B^2)$ is a $2$-dimensional Brownian motion: \ban\label{diffusionA1A2} 
dN^A_t&=\frac{N^A_t}{N^A_t+N^a_t}\left[\left[\beta_1-\delta_1-\frac{\alpha_{11}(N^A_t)^2+\alpha_{21}2N^A_tN^a_t+\alpha_{31}(N^a_t)^2}{2(N^A_t+N^a_t)}\right]N^A_t\right.\\&\phantom{\frac{N^A_t}{N^A_t+N^a_t}[[}+\left.\left[\beta_2-\delta_2-\frac{\alpha_{12}(N^A_t)^2+\alpha_{22}2N^A_tN^a_t+\alpha_{32}(N^a_t)^2}{2(N^A_t+N^a_t)}\right]N^a_t\right]dt\\&+\sqrt{\frac{4\gamma}{N^A_t+N^a_t}}N^A_tdB^1_t+\sqrt{2\gamma\frac{N^A_tN^a_t}{N^A_t+N^a_t}}dB^2_t\\ dN^a_t&=\frac{N^a_t}{N^A_t+N^a_t}\left[\left[\beta_3-\delta_3-\frac{\alpha_{33}(N^a_t)^2+\alpha_{23}2N^A_tN^a_t+\alpha_{13}(N^A_t)^2}{2(N^A_t+N^a_t)}\right]N^a_t\right.\\&\phantom{\frac{N^a_t}{N^A_t+N^a_t}[[}+\left.\left[\beta_2-\delta_2-\frac{\alpha_{32}(N^a_t)^2+\alpha_{22}2N^A_tN^a_t+\alpha_{12}(N^A_t)^2}{2(N^A_t+N^a_t)}\right]N^A_t\right]dt\\&+\sqrt{\frac{4\gamma}{N^A_t+N^a_t}}N^a_tdB^1_t-\sqrt{2\gamma\frac{N^A_tN^a_t}{N^A_t+N^a_t}}dB^2_t \ean\end{thm} Note that the diffusion coefficients of the diffusion process $((N^A_t,N^a_t),t\geq0)$ do not explode when $N^A_t+N^a_t$ goes to $0$ since $\frac{N^A_t}{\sqrt{N^A_t+N^a_t}}\leq\sqrt{N^A_t+N^a_t}$, $\frac{N^a_t}{\sqrt{N^A_t+N^a_t}}\leq\sqrt{N^A_t+N^a_t}$ and $\frac{N^A_tN^a_t}{N^A_t+N^a_t}\leq N^A_t+N^a_t$. From this theorem we deduce the convergence of the sequence of processes $$(N^K,X^K)=\left(\frac{\psi_1(Z^K)+\psi_2(Z^K)}{2},\frac{\psi_1(Z^K)}{\psi_1(Z^K)+\psi_2(Z^K)}\right)$$ stopped when $N^K\leq\epsilon$ for any $\epsilon>0$: \begin{cor}\label{corNX} For any $\epsilon>0$ and $T>0$, let us define $T_{\epsilon}^K=\inf\{t\in[0,T]:N^K_t\leq\epsilon\}$. 
If the sequence of random variables $(N^K_0,X^K_0)\in[\epsilon,+\infty[\times[0,1]$ converges in law toward a random variable $(N_0,X_0)\in]\epsilon,+\infty[\times[0,1]$ when $K$ goes to infinity, then the sequence of stopped stochastic processes $\{(N^K,X^K)_{.\wedge T^K_{\epsilon}}\}_{K\geq1}$ converges in law in $\mathbb{D}([0,T],[\epsilon,\infty[\times[0,1])$ when $K$ goes to infinity, toward the stopped diffusion process $(N,X)_{.\wedge T_{\epsilon}}$ ($T_{\epsilon}=\inf\{t\in[0,T]:N_t=\epsilon\}$), starting from $(N_0,X_0)$ and satisfying the following diffusion equation: \ban\label{diffusionNXnoneutre} dN_t\!&=\sqrt{2\gamma N_t}dB^1_t\\&+N_t\left[X_t^2\left(\beta_1-\delta_1-\left(\alpha_{11}N_tX_t^2 +\alpha_{21}2N_tX_t(1-X_t)+\alpha_{31}N_t(1-X_t)^2\right)\right)\right.\\&\phantom{+}+2X_t(1-X_t)\left(\beta_2-\delta_2-\!\left(\alpha_{12}N_tX_t^2+\alpha_{22}2N_tX_t(1-X_t)+\alpha_{32}N_t(1-X_t)^2\right)\right)\\&\phantom{+}\left.+(1-X_t)^2\left(\beta_3-\delta_3-\!\left(\alpha_{13}N_tX_t^2+\alpha_{23}2N_tX_t(1-X_t)+\alpha_{33}N_t(1-X_t)^2\right)\right)\right]dt\\ dX_t&=\sqrt{\frac{\gamma X_t(1-X_t)}{N_t}}dB^2_t\\&+(1-X_t)X_t^2[(\beta_1-\delta_1)-(\beta_2-\delta_2)\\&\phantom{+(1}-\!N_t((\alpha_{11}-\alpha_{12})X_t^2 +(\alpha_{21}-\alpha_{22})2X_t(1-X_t)+(\alpha_{31}-\alpha_{32})(1-X_t)^2)]dt\\&+X_t(1-X_t)^2[(\beta_2-\delta_2)-(\beta_3-\delta_3) \\&\phantom{+(1}-\!N_t((\alpha_{12}-\alpha_{13})X_t^2+(\alpha_{22}-\alpha_{23})2X_t(1-X_t)+(\alpha_{32}-\alpha_{33})(1-X_t)^2)]dt. \ean \end{cor} The population size and the proportion of allele $A$ are therefore directed by two independent Brownian motions. 
The diffusion equation \eqref{diffusionNXnoneutre} can be simplified in the neutral case: \begin{cor}\label{corWF} In the neutral case where $\beta_i=\beta$, $\delta_i=\delta$ and $\alpha_{ij}=\alpha$ for all $i$, $j$, the limiting diffusion $(N,X)$ introduced in Equation \eqref{diffusionNXnoneutre} satisfies: \ban\label{diffusionNXWF} dN_t&=\sqrt{2\gamma N_t}dB^1_t+N_t(\beta-\delta-\alpha N_t)dt\\ dX_t&=\sqrt{\frac{\gamma X_t(1-X_t)}{N_t}}dB^2_t. \ean $X$ is then a bounded martingale and this diffusion can be seen as a generalized Wright-Fisher diffusion (see for instance \cite{EthierKurtz} p. $411$) with a population size evolving stochastically with time. \end{cor} We denote by $\mathcal{C}^k_b(E,\mathbb{R})$ the set of functions from $E$ to $\mathbb{R}$ possessing bounded continuous derivatives of order up to $k$, and by $\mathcal{C}^k_c(E,\mathbb{R})$ the set of functions of $\mathcal{C}^k_b(E,\mathbb{R})$ with compact support. \begin{proof}[Proof of Theorem \ref{TheoremConvergenceNX}] Using the Rebolledo and Aldous criteria (\cite{JoffeMetivier1986}), we prove the tightness of the sequence of processes $(\psi_1(Z^K),\psi_2(Z^K))$ and its convergence toward the unique continuous solution of a martingale problem. The proof is divided into several steps. STEP 1. Let us denote by $L$ the generator of the diffusion process defined in Equation \eqref{diffusionA1A2}. We first prove the uniqueness of a solution $((N^A_t,N^a_t),t\in[0,T])$ to the martingale problem: for any function $f\in\mathcal{C}^2_b((\mathbb{R}_+)^2,\mathbb{R})$, \ben \label{Mart1}M^f_t=f(N^A_t,N^a_t)-f(N^A_0,N^a_0)-\int_0^tLf(N^A_s,N^a_s)ds\een is a continuous martingale.
From \cite{StroockVaradhan}, for any $\epsilon>0$, there exists a unique (in law) solution $((N^{A,\epsilon}_t,N^{a,\epsilon}_t),t\in[0,T])$ such that for all $f\in\mathcal{C}^2_b(\mathbb{R}^2)$, the process $(M^{f,\epsilon}_t,t\in[0,T])$ defined for all $t\geq0$ by \be M^{f,\epsilon}_t=f(N^{A,\epsilon}_t,N^{a,\epsilon}_t)-f(N^{A,\epsilon}_0,N^{a,\epsilon}_0)-\int_0^tLf(N^{A,\epsilon}_s,N^{a,\epsilon}_s)\mathbf{1}_{\{\epsilon<N^{A,\epsilon}_s+N^{a,\epsilon}_s<1/\epsilon\}}ds\ee is a continuous martingale. The uniqueness of a solution of \eqref{Mart1} therefore follows from Theorem $6.2$ of \cite{EthierKurtz} about localization of martingale problems. STEP 2. As in the proof of Proposition \ref{propY}, we easily obtain that there exist two positive constants $C_1$ and $C_2$ such that for all $z\in(\mathbb{R}_+)^3$, the generator $L^K$ of $Z^K$, decomposed in Equation \eqref{generateurdecompose}, satisfies: $$\vert L^K\psi_1(z)\vert+\vert L^K\psi_2(z)\vert\leq C_2 \left[\phi_1(z)^2+1+K\phi_1(z)\mathbf{1}_{\phi_1(z)\geq C_1K}\right],$$ and similarly $$\vert L^K\psi_1^2(z)-2\psi_1(z)L^K\psi_1(z)+L^K\psi_2^2(z) -2\psi_2(z)L^K\psi_2(z)\vert\leq C_2(\phi_1(z)^2+1).$$ Therefore from Proposition \ref{propgpositif}, under \eqref{hyp1} and \eqref{hyp2}, for every sequence of stopping times $\tau_K\leq T$ and for all $\epsilon,\eta>0$: \ban\label{AldousA2} \underset{K\geq K_0}{\sup}\,\underset{\sigma\leq\delta}{\sup}\,\mathbb{P}&\left(\left|\int_{\tau_K}^{\tau_K+\sigma}L^K\psi_1(Z^K_s)ds\right| +\left|\int_{\tau_K}^{\tau_K+\sigma}L^K\psi_2(Z^K_s)ds\right|>\eta\right)\\&\leq\underset{K\geq K_0}{\sup}\;\mathbb{P}\left(\delta\;\underset{0\leq s\leq T+\delta}{\sup}\;C_2((N^K_s)^2+1)>\eta\right)\\&+\underset{K\geq K_0}{\sup}\;\mathbb{P}\left(\underset{0\leq s\leq T+\delta}{\sup}\; N^K_s\geq C_1 K\right)\\&\leq\epsilon\quad\text{if $K_0$ is large enough and $\delta$ is small enough.}\ean Similarly, \ban\label{AldousM2} \underset{K\geq
K_0}{\sup}\;\underset{\sigma\leq\delta}{\sup}\;\mathbb{P}&\left(\left|\int_{\tau_K}^{\tau_K+\sigma}(L^K\psi_1^2(Z^K_s)-2\psi_1(Z^K_s)L^K\psi_1(Z^K_s) \right.\right.\\&\left.\left.\phantom{\int aaaaaaaaaaaaaaaa}+L^K\psi_2^2(Z^K_s)-2\psi_2(Z^K_s)L^K\psi_2(Z^K_s))ds\right|>\eta\right)\\&\leq\epsilon\quad\text{if $K_0$ is large enough and $\delta$ is small enough.}\ean The sequence of processes $(\psi_1(Z^K),\psi_2(Z^K))$ is then tight from the Rebolledo and Aldous criteria (Theorem $2.3.2$ of \cite{JoffeMetivier1986}). STEP 3. Now let us consider a subsequence of $(\psi_1(Z^K),\psi_2(Z^K))$ that converges in law in $\mathbb{D}([0,T],\mathbb{R}^2)$ toward a process $(N^A,N^a)$. Since for all $K>0$, $\underset{t\in[0,T]}{\sup}\;\Vert (\psi_1(Z^K_t),\psi_2(Z^K_t))-(\psi_1(Z^K_{t^-}),\psi_2(Z^K_{t^-}))\Vert\leq 2/K$ by construction, almost all trajectories of the limiting process $(N^A,N^a)$ belong to $C([0,T],\mathbb{R}^2)$. STEP 4. Finally we prove that the sequence $\{(\psi_1(Z^K),\psi_2(Z^K))\}_{K\in\mathbb{N}^*}$ of stochastic processes converges toward the unique continuous solution of the martingale problem given by Equation \eqref{Mart1}. Indeed for every function $f\in\mathcal{C}^3_c(\mathbb{R}^2)$, from Equation \eqref{generateurdecompose} there exists a constant $C_4$ such that \ban\label{eqLKL}&\left|L^Kf(\psi_1(z),\psi_2(z))-Lf(\psi_1(z),\psi_2(z))\right|\\&\leq C_4\left[\frac{\phi_1(z)^2}{K}+\vert\phi_3(z)\vert(1+\phi_1(z))+\!\gamma K\phi_1(z)\mathbf{1}_{\phi_1(z)\geq C_2K}+\!(\phi_1(z)^2+1)\mathbf{1}_{\phi_1(z)\geq C_2K}\right].\ean Note here that the fast-scale property shown in Proposition \ref{propY}, combined with Proposition \ref{propgpositif}, will ensure that $\underset{t\leq u\leq t+s}{\sup}\mathbb{E}(\vert\phi_3(Z^K_u)\vert\phi_1(Z^K_u))$ converges to $0$ when $K$ goes to infinity.
Then for all $0\leq t_1<t_2<...<t_k\leq t<t+s$, for all bounded continuous functions $h_1,...,h_k$ on $(\mathbb{R}_+)^2$ and every $f\in\mathcal{C}^3_c(\mathbb{R}^2)$: \ba\mathbb{E}&\left[\left(f(\psi_1(Z^K_{t+s}),\psi_2(Z^K_{t+s}))-f(\psi_1(Z^K_{t}),\psi_2(Z^K_{t}))-\int_t^{t+s}Lf(\psi_1(Z^K_u),\psi_2(Z^K_u))du\right)\right.\\&\quad\quad\quad\quad\quad\times\left.\prod_{i=1}^kh_i(\psi_1(Z^K_{t_i}),\psi_2(Z^K_{t_i}))\right]=\\\mathbb{E}&\left[\int_t^{t+s}\!\!\!\!\left(L^Kf(\psi_1(Z^K_u),\psi_2(Z^K_u))\!-\!Lf(\psi_1(Z^K_u),\psi_2(Z^K_u))\right)du\right.\\&\quad\quad\quad\quad\quad\left.\times\prod_{i=1}^kh_i(\psi_1(Z^K_{t_i}),\psi_2(Z^K_{t_i}))\right]\\&\leq \underset{i}{\sup}\|h_i\|_{\infty}\;\mathbb{E}\left[\int_t^{t+s}\left|L^Kf(\psi_1(Z^K_u),\psi_2(Z^K_u))-Lf(\psi_1(Z^K_u),\psi_2(Z^K_u))\right|du\right]\\&\leq\underset{i}{\sup}\|h_i\|_{\infty}\;s\underset{t\leq u\leq t+s}{\sup}\mathbb{E}\left[\left|L^Kf(\psi_1(Z^K_u),\psi_2(Z^K_u))-Lf(\psi_1(Z^K_u),\psi_2(Z^K_u))\right|\right]\\&\underset{K\rightarrow\infty}{\rightarrow}\!\!0,\ea under \eqref{hyp1} and \eqref{hyp2}, from Equation \eqref{eqLKL} and Propositions \ref{propgpositif} and \ref{propY}. The extension of this result to any $f\in\mathcal{C}^2_b((\mathbb{R}_+)^2,\mathbb{R})$ is easy to obtain by uniformly approximating $f$ by a sequence of functions $f_n\in\mathcal{C}^3_c((\mathbb{R}_+)^2,\mathbb{R})$. Then from Theorem $8.10$ (p. $234$) of \cite{EthierKurtz}, $(\psi_1(Z^K),\psi_2(Z^K))$ converges in law in $\mathbb{D}([0,T],\mathbb{R}^2)$ toward the unique (in law) solution of the martingale problem given in Equation \eqref{Mart1}, which is equal to the diffusion process $(N^A,N^a)$ of Equation \eqref{diffusionA1A2}.
\end{proof} \bigskip The proof of Corollary \ref{corNX} relies on the following analytic lemma: \begin{lem}\label{lemcoro}For any $x=(x^1_t,x^2_t)_{0\leq t\leq T}\in\mathbb{D}([0,T],(\mathbb{R}_+)^2)$ and any $\epsilon>0$, let us define $$\zeta_{\epsilon}(x)=\inf\{t\in[0,T]:x^1_t+x^2_t\leq2\epsilon\}.$$ Let $x=(x^1_t,x^2_t)_{0\leq t\leq T}\in\mathcal{C}([0,T],(\mathbb{R}_+)^2)$ be such that $x^1_0+x^2_0>2\epsilon$ and $\epsilon'\mapsto\zeta_{\epsilon'}(x)$ is continuous at $\epsilon$. Consider a sequence of functions $(x_n)_{n\in\mathbb{Z}_+}$ such that for any $n\in\mathbb{Z}_+$, $x_n=(x^{1,n}_t,x^{2,n}_t)_{0\leq t\leq T}\in\mathbb{D}([0,T],(\mathbb{R}_+)^2)$ and $x_n$ converges to $x$ for the Skorohod topology. Then the sequence $((x^{1,n}_{t\wedge\zeta_{\epsilon}(x_n)},x^{2,n}_{t\wedge\zeta_{\epsilon}(x_n)}),t\in[0,T])$ converges to $((x^1_{t\wedge\zeta_{\epsilon}(x)},x^2_{t\wedge\zeta_{\epsilon}(x)}),t\in[0,T])$ when $n$ goes to infinity. \end{lem} \begin{proof} We first prove that $\zeta_{\epsilon}(x_n)$ converges to $\zeta_{\epsilon}(x)$ when $n$ goes to infinity. For any $\delta>0$, since $\epsilon'\mapsto\zeta_{\epsilon'}(x)$ is continuous at $\epsilon$, there exists $n'\in\mathbb{Z}_+^*$ such that $\zeta_{\epsilon-1/n'}(x)-\delta<\zeta_{\epsilon}(x)<\zeta_{\epsilon+1/n'}(x)+\delta$. Now let us assume that $\zeta_{\epsilon}(x_n)$ does not converge to $\zeta_{\epsilon}(x)$ when $n$ goes to infinity. Then there exists $\delta>0$ such that for all $n$ there exists $k_n>n$ such that $|\zeta_{\epsilon}(x_{k_n})-\zeta_{\epsilon}(x)|>\delta$. Then there exists $m$ such that $$\underset{n\rightarrow+\infty}{\overline{\lim}}\;\underset{0\leq t\leq \zeta_{\epsilon-1/m}(x)}{\sup}\; |x^1_n(t)+x^2_n(t)-(x^1(t)+x^2(t))|\geq 1/m$$ which is impossible since $x_n$ converges to the continuous function $x$ for the Skorohod topology, so that this convergence is in fact uniform on $[0,T]$. Now we prove that $(x_n)_{.\wedge\zeta_{\epsilon}(x_n)}$ converges to $x_{.\wedge\zeta_{\epsilon}(x)}$ when $n$ goes to infinity.
Let us denote by $r(v,w)$ the Euclidean distance between two points $v$ and $w$ of $\mathbb{R}^2$. Since $x_n$ converges to $x$ in $\mathbb{D}([0,T],(\mathbb{R}_+)^2)$, there exists a sequence of strictly increasing functions $\lambda_n$ mapping $[0,\infty)$ onto $[0,\infty)$ such that \ben\label{xnconverge}\gamma(\lambda_n)\underset{n\rightarrow+\infty}{\longrightarrow}0 \quad\text{and}\quad\underset{n\rightarrow+\infty}{\lim}\;\underset{0\leq t\leq T}{\sup}\;r(x_n(t),x(\lambda_n(t)))=0\een where $\gamma(\lambda)=\underset{0\leq t<s}{\sup}\left|\log\frac{\lambda(s)-\lambda(t)}{s-t}\right|$ (\cite{EthierKurtz}, p. $117$). Now for all $t\geq0$, \ba r(x_n(t\wedge\zeta_{\epsilon}(x_n)),x(\lambda_n(t)\wedge\zeta_{\epsilon}(x)))&\leq r(x_n(t\wedge\zeta_{\epsilon}(x_n)),x(\lambda_n(t\wedge\zeta_{\epsilon}(x_n))))\\&+r(x(\lambda_n(t\wedge\zeta_{\epsilon}(x_n))),x(\lambda_n(t)\wedge\zeta_{\epsilon}(x))), \quad\text{and}\ea \ba r(x(\lambda_n(t\!\wedge\!\zeta_{\epsilon}(x_n))),x(\lambda_n(t)\!\wedge\!\zeta_{\epsilon}(x)))\!&=\!r(x(\lambda_n(\zeta_{\epsilon}(x_n))),x(\zeta_{\epsilon}(x)))\mathbf{1}_{\{t>\zeta_{\epsilon}(x_n),\lambda_n(t)>\zeta_{\epsilon}(x)\}}\\&+\!r(x(\zeta_{\epsilon}(x)), x(\lambda_n(t)))\mathbf{1}_{\{t\leq\zeta_{\epsilon}(x_n),\lambda_n(t)>\zeta_{\epsilon}(x)\}}\\&+\!r(x(\lambda_n(\zeta_{\epsilon}(x_n))),x(\lambda_n(t)))\mathbf{1}_{\{t>\zeta_{\epsilon}(x_n),\lambda_n(t)\leq\zeta_{\epsilon}(x)\}}.\ea Therefore, using that $x$ is continuous, that $\zeta_{\epsilon}(x_n)\rightarrow\zeta_{\epsilon}(x)$ and that $\underset{0\leq t\leq T}{\sup}|\lambda_n(t)-t|\rightarrow0$ when $n$ goes to infinity, and from Equation \eqref{xnconverge}, we obtain that $\underset{n\rightarrow+\infty}{\lim}\underset{0\leq t\leq T}{\sup}\!r(x_n(t\wedge\zeta_{\epsilon}(x_n)),x(\lambda_n(t)\wedge\zeta_{\epsilon}(x)))=0$ which gives the result.
\end{proof} \begin{proof}[Proof of Corollary \ref{corNX}] Note that the function $\zeta_{\epsilon}$ defined in Lemma \ref{lemcoro} satisfies $T^K_{\epsilon}=\zeta_{\epsilon}(\psi_1(Z^K),\psi_2(Z^K))=\inf\{t\in[0,T]:N^K_t\leq\epsilon\}$, and $T_{\epsilon}=\zeta_{\epsilon}(N^A,N^a)=\inf\{t\in[0,T]:N_t\leq\epsilon\}$. From Theorem $3.3$ of \cite{Pinsky2008}, we know that the function $\epsilon'\mapsto\zeta_{\epsilon'}(N^A,N^a)$ is almost surely continuous at $\epsilon$. Therefore from Lemma \ref{lemcoro}, the function $f$ such that for all $x\in\mathbb{D}([0,T],(\mathbb{R}_+)^2)$, $f(x)=(x_{t\wedge\zeta_{\epsilon}(x)},t\in[0,T])$ is continuous at almost all trajectories of the diffusion process $(N^A,N^a)$. Therefore from Corollary $1.9$ p.$103$ of \cite{EthierKurtz} and Theorem \ref{TheoremConvergenceNX}, if the sequence of random variables $(\psi_1(Z^K_0),\psi_2(Z^K_0))\in (\mathbb{R}_+)^2$ converges in law toward a random variable $(N^A_0,N^a_0)$ when $K$ goes to infinity, then for all $T>0$, the sequence of stochastic processes $(\psi_1(Z^K_{.\wedge T^K_{\epsilon}}),\psi_2(Z^K_{.\wedge T^K_{\epsilon}}))$ converges in law in $\mathbb{D}([0,T],(\mathbb{R}_+)^2)$ toward $(N^A_{.\wedge T_{\epsilon}},N^a_{.\wedge T_{\epsilon}})$. Since the function $(n^A,n^a)\mapsto\left(\frac{n^A+n^a}{2},\frac{n^A}{n^A+n^a}\right)$ is Lipschitz continuous on $\{(n^A,n^a)\in(\mathbb{R}_+)^2:n^A+n^a\geq2\epsilon\}$, we get the result. \end{proof} \bigskip \begin{rem}\label{remdiplohaplo} The diffusion process $(N_t,X_t)_{t\geq0}$ of Corollary \ref{corWF} can be compared to the haploid neutral population (which corresponds to a stochastic Lotka-Volterra process) studied in detail in \cite{CattiauxMeleard2009} and defined by: \ban\label{diffhaplo} dH^1_t&=\sqrt{2\gamma H^1_t}dB^{1,h}_t+(\beta-\delta-\alpha(H^1_t+H^2_t))H^1_tdt\\ dH^2_t&=\sqrt{2\gamma H^2_t}dB^{2,h}_t+(\beta-\delta-\alpha(H^1_t+H^2_t))H^2_tdt \ean where $B^{1,h}$ and $B^{2,h}$ are independent Brownian motions.
Here, $H^1$ is the number of alleles $A$ while $H^2$ is the number of alleles $a$. $N^h=H^1+H^2$ is then the total number of individuals while $X^h=H^1/(H^1+H^2)$ is the proportion of alleles $A$ in the haploid population. We easily see that the total population size satisfies the same diffusion equation in the haploid and diploid populations. We therefore compare the stochastic processes $(N,X)$ and $(N^h,X^h)$. Now by Itô's formula, the stochastic process $(N^h,X^h)$ satisfies a diffusion equation that can be written using a new $2$-dimensional Brownian motion $(\tilde{B}^1,\tilde{B}^2)$ as: \ban\label{diffusionNXhaploid} dN^h_t&=(\beta-\delta-\alpha N^h_t)N^h_tdt+\sqrt{2\gamma N^h_t}d\tilde{B}^1_t\\ dX^h_t&=\sqrt{\frac{2\gamma X^h_t(1-X^h_t)}{N^h_t}}d\tilde{B}^2_t \ean The only difference between the haploid and the diploid neutral models therefore resides in the fluctuations of the proportion of allele $A$, which are divided by $\sqrt{2}$ in the diploid population (see Equations \eqref{diffusionNXWF} and \eqref{diffusionNXhaploid}). However, note from Equations \eqref{diffusionA1A2} and \eqref{diffhaplo} that this seemingly insignificant difference implies that the respective numbers of alleles $A$ and $a$ are directed by correlated Brownian motions in a diploid population, which is not the case in a haploid population. \end{rem} \section{New change of variable and quasi-stationarity}\label{sectionQSD} In this section we study the long-time behavior of the diffusion process $(N^A,N^a)$ introduced in Theorem \ref{TheoremConvergenceNX}. For any process $U$, we denote by $\mathbb{P}^U_x$ the distribution law of $U$ starting from a point $x$, and by $\mathbb{E}^{U}_x$ the associated expectation. First, the process $N=(N^A+N^a)/2$ defined in Corollary \ref{corNX} reaches $0$ almost surely in finite time: \begin{prop}\label{extinction} Let $T_0=\inf\{t\geq0:N_t=0\}$.
Under \eqref{hyp1}, $\mathbb{P}^N_x(T_0<+\infty)=1$ for all $x\in \mathbb{R}_+$, and there exists $\lambda>0$ such that $\underset{x}{\sup}\;\mathbb{E}_x(e^{\lambda T_0})<+\infty$. \end{prop} \begin{proof} Under \eqref{hyp1}, as in the proof of Proposition \ref{propgpositif}, there exists a positive constant $m$ such that $N$ is stochastically dominated by a diffusion process $\overline{N}$ satisfying $d\overline{N}_t=\sqrt{2\gamma \overline{N}_t} dB^1_t+\overline{N}_t(\underset{i}{\sup}\,\beta_i-\underset{i}{\inf}\,\delta_i-m\overline{N}_t)dt$. Theorem $5.2$ of \cite{CattiauxCollet...2009} gives the result for $\overline{N}$ and therefore for $N$. \end{proof} \bigskip The long-time behavior of the diffusion $(N^a,N^A)$ is therefore trivial and we now study the long-time behavior of this diffusion process conditioned on non-extinction, i.e. conditioned on not reaching the absorbing state $(0,0)$. In particular, we are interested in studying the possibility of a long-time coexistence of the two alleles $A$ and $a$ in the population conditioned on non-extinction. \subsection{New change of variables} To study the quasi-stationary behavior of the diffusion $(N,X)$ conditioned on non-extinction, we need to change variables in order to obtain a $2$-dimensional Kolmogorov diffusion (i.e. a diffusion process with a diffusion coefficient equal to $1$ and a gradient-type drift coefficient) whose quasi-stationary behavior can be most easily derived. Such ideas have been developed in \cite{CattiauxCollet...2009} and \cite{CattiauxMeleard2009}. Let us define, as long as $N_t>0$: \ban\label{changementvariables} S^1_t&=\sqrt{\frac{2N_t}{\gamma}}\cos\left(\frac{\arccos(2X_t-1)}{\sqrt{2}}\right)\\ S^2_t&=\sqrt{\frac{2N_t}{\gamma}}\sin\left(\frac{\arccos(2X_t-1)}{\sqrt{2}}\right). \ean If $N_t=0$, we obviously set $S_t=(S^1_t,S^2_t)=(0,0)$. To begin with, simple calculations give the following Proposition, illustrated in Figure \ref{figureS}. 
\begin{prop} \label{propespaceD} For all $t\geq0$, $S^2_t\geq0$ and $S^2_t\geq uS^1_t$ with $u=\tan\left(\frac{\pi}{\sqrt{2}}\right)<0$. \end{prop} \medskip \begin{proof} For all $t\geq0$, $2X_t-1\in[-1,1]$, which gives that $\frac{\arccos(2X_t-1)}{\sqrt{2}}\in[0,\pi/\sqrt{2}]$ and $\sin\left(\frac{\arccos(2X_t-1)}{\sqrt{2}}\right)\geq0$ since $\pi/\sqrt{2}<\pi$. Then $S^2_t\geq0$ for all $t\geq0$. Now if $\frac{\arccos(2X_t-1)}{\sqrt{2}}\in[0,\pi/2]$, then $S^1_t\geq0$ and $S^2_t\geq0$, so $S^2_t\geq uS^1_t$ since $u<0$. Finally, if $\frac{\arccos(2X_t-1)}{\sqrt{2}}\in]\pi/2,\pi/\sqrt{2}]$, then $S^1_t<0$, $S^2_t\geq0$, and $\frac{S^2_t}{S^1_t}=\tan\left(\frac{\arccos(2X_t-1)}{\sqrt{2}}\right)\in]-\infty,u]$. Then $S^2_t\geq uS^1_t$. \end{proof} \begin{rem} Let us define for all $(s_1,s_2)\in\mathbb{R}^2$, the sets $\mathbf{A}=\{s_2=0, s_1>0\}$, $\mathbf{a}=\{s_2=us_1, s_2>0\}$ and $\mathbf{0}=\{s_1=s_2=0\}$. The sets $\{S_t\in\mathbf{A}\}$, $\{S_t\in\mathbf{a}\}$, and $\{S_t\in\mathbf{0}\}$ are respectively equal to the sets $\{X_t=1\}$ (fixation of allele $A$), $\{X_t=0\}$ (fixation of allele $a$) and $\{N_t=0\}$ (extinction of the population). \end{rem} We denote by $\mathcal{D}=(\mathbb{R}\times\mathbb{R}_+)\cap\{(s_1,s_2):s_2\geq us_1\}$ the set of values taken by $S_t$ for $t\geq0$, by $\partial \mathcal{D}=\mathbf{A}\cup\mathbf{a}\cup\mathbf{0}$ its boundary in $\mathbb{R}^2$, and by $T_{\mathbf{D}}$ the hitting time of $\mathbf{D}$ for any $\mathbf{D}\subset \mathcal{D}$. $\mathbf{0}$, $\mathbf{A}\cup\mathbf{0}$ and $\mathbf{a}\cup\mathbf{0}$ are therefore absorbing sets and from Proposition \ref{extinction}, starting from any point $s\in\mathcal{D}$, $S$ reaches each of these sets almost surely in finite time.
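The change of variables \eqref{changementvariables} and the inverse transformation of Proposition \ref{cgtvariableS} below can be checked numerically. The following sketch (not part of the proofs) uses an arbitrary illustrative value of $\gamma$ and test points away from the ray $s_1=0$:

```python
import math

GAMMA = 1.0  # illustrative value of the parameter gamma

def psi(n, x):
    """Change of variables (n, x) -> (s1, s2) of Equation (changementvariables)."""
    theta = math.acos(2*x - 1) / math.sqrt(2)
    r = math.sqrt(2*n / GAMMA)
    return r*math.cos(theta), r*math.sin(theta)

def psi_inv(s1, s2):
    """Inverse transformation of Proposition cgtvariableS (for s1 != 0)."""
    angle = math.atan(s2/s1) + (0.0 if s1 > 0 else math.pi)
    x = (1 + math.cos(math.sqrt(2)*angle)) / 2
    n = (s1**2 + s2**2) * GAMMA / 2
    return n, x

u = math.tan(math.pi / math.sqrt(2))  # u < 0: slope of the boundary ray a

for n0 in (0.5, 2.0):
    for x0 in (0.05, 0.5, 0.95):
        s1, s2 = psi(n0, x0)
        assert s2 >= 0 and s2 >= u*s1 - 1e-12           # (s1, s2) lies in D (Prop. propespaceD)
        n1, x1 = psi_inv(s1, s2)
        assert abs(n1-n0) < 1e-9 and abs(x1-x0) < 1e-9  # psi_inv recovers (n, x)
```

The point $x_0=0.05$ gives $s_1<0$, so both branches of the inverse transformation are exercised.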
\begin{figure}\begin{center} \scalebox{0.7}{ \begin{pspicture}(-5,-1.2)(7,7) \psline{->}(0,0)(6,0)\psline{->}(0,0)(0,5.5)\psline(0,0)(-4,5.25) \put(6.2,-0.2){$S^1$}\put(0,5.6){$S^2$}\put(0.2,4){$\mathbf{a}_0$} \psline(0,0)(5,3.8)\put(5.1,3.9){$\mathbf{A}_0$}\psline(0,0)(3,5)\put(3.1,5.1){$\mathbf{M}$} \uput{0}[0]{-53}(-3.2,3.2){$\mathbf{a}=\{s_2=us_1\}$}\put(3,-0.4){$\mathbf{A}=\{s_2=0\}$}\put(-1.3,-0.4){$\mathbf{0}=\{s_1=s_2=0\}$} \psdot[dotsize=0.1](0,0) \psdot[dotstyle=x, dotsize=0.2](4.5,2)\psline{<->}(0,0)(4.5,2)\psarc(0,0){4.1}{0}{24}\put(4.1,0.9){$\frac{\arccos(2x-1)}{\sqrt{2}}$}\put(4.6,2.1){$(s_1,s_2)=\psi(n,x)$}\put(1.9,0.4){$\sqrt{\frac{2n}{\gamma}}$} \pstGeonode[PointName=none,PointSymbol=none](0,0){O} \pstGeonode[PointName=none,PointSymbol=none](-4,5.25){B} \pstGeonode[PointName=none,PointSymbol=none](5,3.8){A} \pstRightAngle{B}{O}{A} \end{pspicture} } \end{center} \caption[Set of values taken by the diffusion $S$]{\label{figureS}Set $\mathcal{D}$ of the values taken by $S_t$, for $t\geq0$.} \end{figure} Finally, \begin{prop}\label{cgtvariableS} The transformation \ba\psi:\!\mathbb{R}_+^*\!\times\![0,1]\!&\rightarrow\mathcal{D}\setminus\textbf{0}\\ (n,x)&\mapsto\!(s_1,s_2)=\!\left(\sqrt{\frac{2n}{\gamma}}\cos\!\left(\frac{\arccos(2x-1)}{\sqrt{2}}\right)\!,\sqrt{\frac{2n}{\gamma}}\sin\!\left(\frac{\arccos(2x-1)}{\sqrt{2}}\right)\!\right)\ea introduced in Equation \eqref{changementvariables} is a bijection. \end{prop} \begin{proof} For any $(s_1,s_2)\in\mathcal{D}\setminus\textbf{0}$, we easily get the following inverse transformation: \be\begin{array}{ll}x=\left\{\begin{array}{l} \frac{1+\cos\left(\sqrt{2}\arctan\left(\frac{s_2}{s_1}\right)\right)}{2}\quad\text{if $s_1\geq0$,}\\ \frac{1+\cos\left(\sqrt{2}\left(\arctan\left(\frac{s_2}{s_1}\right)+\pi\right)\right)}{2}\quad\text{if $s_1\leq0$,} \end{array}\right.&\quad\text{and}\quad n=\frac{\left((s_1)^2+(s_2)^2\right)\gamma}{2} ,\end{array}\ee for which we obviously have $n\in\mathbb{R}^*_+$ and $x\in[0,1]$.
\end{proof} \bigskip Now from Itô's formula, $S$ satisfies the following diffusion equation:\ban\label{diffusionSnonneutregenerale} dS^1_t&=dW^1_t-q_1(S_t)dt\\ dS^2_t&=dW^2_t-q_2(S_t)dt, \ean where, in the neutral case (Equation \eqref{neutre}), $q(s)$ is defined for all $s=(s_1,s_2)\in\mathcal{D}$ such that $s_1\geq0$ by \ban \label{qneutre} q(s)=\left(\begin{array}{c} -\frac{s_2}{(s_1)^2+(s_2)^2}\frac{1}{\sqrt{2}\tan\left(\sqrt{2}\arctan\left(\frac{s_2}{s_1}\right)\right)}\\-s_1\left[\left(\beta-\delta-\frac{\alpha\gamma}{2}((s_1)^2+(s_2)^2)\right)\frac{1}{2}-\frac{1}{(s_1)^2+(s_2)^2}\right] \\ \\ \frac{s_1}{(s_1)^2+(s_2)^2}\frac{1}{\sqrt{2}\tan\left(\sqrt{2}\arctan\left(\frac{s_2}{s_1}\right)\right)}\\-s_2\left[\left(\beta-\delta-\frac{\alpha\gamma}{2}((s_1)^2+(s_2)^2)\right)\frac{1}{2}-\frac{1}{(s_1)^2+(s_2)^2}\right] \end{array} \right) \ean and when $s_1\leq0$ by \ba q(s)=\left(\begin{array}{c} -\frac{s_2}{(s_1)^2+(s_2)^2}\frac{1}{\sqrt{2}\tan\left(\sqrt{2}\left(\arctan\left(\frac{s_2}{s_1}\right)+\pi\right)\right)}\\-s_1\left[\left(\beta-\delta-\frac{\alpha\gamma}{2}((s_1)^2+(s_2)^2)\right)\frac{1}{2}-\frac{1}{(s_1)^2+(s_2)^2}\right] \\ \\ \frac{s_1}{(s_1)^2+(s_2)^2}\frac{1}{\sqrt{2}\tan\left(\sqrt{2}\left(\arctan\left(\frac{s_2}{s_1}\right)+\pi\right)\right)}\\-s_2\left[\left(\beta-\delta-\frac{\alpha\gamma}{2}((s_1)^2+(s_2)^2)\right)\frac{1}{2}-\frac{1}{(s_1)^2+(s_2)^2}\right] \end{array} \right). \ea The formula for $q$ in the general case and if $s_1\geq0$ is given in Appendix \ref{appq}. We now give conditions on the demographic parameters so that $S$ satisfies $dS_t=dW_t-\nabla Q(S_t)dt$, i.e. $q=(q_1,q_2)=\nabla Q$ for a real-valued function $Q$ of two variables. This requires at least that $\frac{\partial q_2}{\partial{s_1}}(s)=\frac{\partial q_1}{\partial{s_2}}(s)$ for all $s\in\mathcal{D}$. 
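In the neutral case, the gradient property stated in Proposition \ref{PropQnonneutre} below can be checked by finite differences on the branch $s_1>0$, comparing Equation \eqref{qneutre} with Equation \eqref{equationQ}; the parameter values in the following sketch (not part of the proofs) are purely illustrative:

```python
import math

# Purely illustrative neutral parameters beta, delta, alpha, gamma.
BETA, DELTA, ALPHA, GAMMA = 1.0, 0.1, 0.05, 1.0

def Q(s1, s2):
    """Neutral potential Q of Equation (equationQ), branch s1 > 0."""
    r2 = s1**2 + s2**2
    theta = math.atan(s2/s1)
    return (math.log(r2)/2 + math.log(math.sin(math.sqrt(2)*theta))/2
            - (BETA - DELTA - ALPHA*GAMMA*r2/4) * r2/4)

def q(s1, s2):
    """Neutral drift q of Equation (qneutre), branch s1 > 0."""
    r2 = s1**2 + s2**2
    t = math.sqrt(2) * math.tan(math.sqrt(2)*math.atan(s2/s1))
    radial = (BETA - DELTA - ALPHA*GAMMA*r2/2)/2 - 1/r2
    return -s2/(r2*t) - s1*radial, s1/(r2*t) - s2*radial

# Central finite differences: q should coincide with grad Q.
h = 1e-6
for s1, s2 in [(1.0, 0.5), (0.8, 1.2), (2.0, 0.3)]:
    g1 = (Q(s1+h, s2) - Q(s1-h, s2)) / (2*h)
    g2 = (Q(s1, s2+h) - Q(s1, s2-h)) / (2*h)
    q1, q2 = q(s1, s2)
    assert abs(g1-q1) < 1e-4 and abs(g2-q2) < 1e-4
```

The test points avoid the ray $\arctan(s_2/s_1)=\pi/(2\sqrt{2})$, where $\tan(\sqrt{2}\arctan(s_2/s_1))$ changes sign through a pole while $Q$ itself stays smooth.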
We state the following \begin{prop}\label{PropQnonneutre} \begin{description} \item[$(i)$] $\frac{\partial q_2(s)}{\partial s_1}=\frac{\partial q_1(s)}{\partial s_2}$ for all $s=(s_1,s_2)\in\mathcal{D}$ if and only if $\alpha$ is symmetric, i.e. $\alpha_{12}=\alpha_{21}$, $\alpha_{31}=\alpha_{13}$, $\alpha_{23}=\alpha_{32}$. \item[$(ii)$] In this case we have \ban dS_t=dW_t-\nabla Q(S_t)dt,\label{diffkolmo}\ean with, in the neutral case and for all $s=(s_1,s_2)\in\mathcal{D}$, \ben \label{equationQ} Q(s)=\left\{\!\!\begin{array}{l} \frac{\ln((s_1)^2+(s_2)^2)}{2}+\frac{1}{2}\ln\left(\sin\left(\sqrt{2}\arctan\left(\frac{s_2}{s_1}\right)\right)\right)\\\phantom{\ln((s_1)^2+(s_2)^2)}-(\beta-\delta-\frac{\alpha\gamma}{4}((s_1)^2+(s_2)^2))\frac{(s_1)^2+(s_2)^2}{4}\;\text{if $s_1\geq0$}\\ \\ \frac{\ln((s_1)^2+(s_2)^2)}{2}+\frac{1}{2}\ln\left(\sin\left(\sqrt{2}\left(\arctan\left(\frac{s_2}{s_1}\right)+\pi\right)\right)\right)\\\phantom{\ln((s_1)^2+(s_2)^2)}-(\beta-\delta-\frac{\alpha\gamma}{4}((s_1)^2+(s_2)^2))\frac{(s_1)^2+(s_2)^2}{4}\;\text{if $s_1\leq0$}. \end{array}\right.\een The corresponding function $Q$ in the non-neutral case is given in Appendix \ref{appQ}.
\end{description} \end{prop} \begin{proof} For $(i)$, we can decompose the functions $q_1$ and $q_2$ as: \ban \label{formuleq1}q_1(s)&=\frac{\gamma}{2n}s_1+\frac{\gamma}{\sqrt{2}n}s_2\frac{2x-1}{4\sqrt{x(1-x)}}\\&-\frac{s_1}{2}[x^2U+2x(1-x)V+(1-x)^2W]\\&-\frac{s_2}{\sqrt{2}}\sqrt{x(1-x)}[x(U-V)+(1-x)(V-W)] \ean and \ban \label{formuleq2}q_2(s)&=\frac{\gamma}{2n}s_2-\frac{\gamma}{\sqrt{2}n}s_1\frac{2x-1}{4\sqrt{x(1-x)}}\\&-\frac{s_2}{2}[x^2U+2x(1-x)V+(1-x)^2W]\\&+\frac{s_1}{\sqrt{2}}\sqrt{x(1-x)}[x(U-V)+(1-x)(V-W)] \ean where \ban \label{formuleXNappendix}&\begin{array}{ll}x=\left\{\begin{array}{l} \frac{1+\cos\left(\sqrt{2}\arctan\left(\frac{s_2}{s_1}\right)\right)}{2}\quad\text{if $s_1\geq0$,}\\ \frac{1+\cos\left(\sqrt{2}\left(\arctan\left(\frac{s_2}{s_1}\right)+\pi\right)\right)}{2}\quad\text{if $s_1\leq0$,} \end{array}\right.& n=\frac{\left((s_1)^2+(s_2)^2\right)\gamma}{2}, \end{array}\\ & U=\beta_1-\delta_1-n\left(\alpha_{11}x^2+\alpha_{21}2x(1-x)+\alpha_{31}(1-x)^2\right)\\ & V=\beta_2-\delta_2-n\left(\alpha_{12}x^2+\alpha_{22}2x(1-x)+\alpha_{32}(1-x)^2\right), \text{ and }\\& W=\beta_3-\delta_3-n\left(\alpha_{13}x^2+\alpha_{23}2x(1-x)+\alpha_{33}(1-x)^2\right).\ean From Equation \eqref{formuleXNappendix}, we easily obtain that: $$\frac{\partial n(s_1,s_2)}{\partial s_1}=\frac{s_1}{s_2}\frac{\partial n(s_1,s_2)}{\partial s_2}\quad\text{and}\quad\frac{\partial x(s_1,s_2)}{\partial s_1}=-\frac{s_2}{s_1}\frac{\partial x(s_1,s_2)}{\partial s_2}.$$ Finally, after some calculations and using that $$\frac{\partial n}{\partial s_1}=\gamma s_1\quad\text{and}\quad\frac{\partial x}{\partial s_1}\times\left(\frac{(s_1)^2+(s_2)^2}{s_2}\right)=\sqrt{2x(1-x)},$$ we obtain that $\frac{\partial q_1(s)}{\partial s_2}=\frac{\partial q_2(s)}{\partial s_1}$ if and only if for all $x\in[0,1]$, \ba x^2[\alpha_{21}\!-\alpha_{31}\!-\alpha_{12}\!+\alpha_{13}+\alpha_{32}-\alpha_{23}] +x[\alpha_{31}\!-\alpha_{13}\!+2\alpha_{23}\!-2\alpha_{32}] +[\alpha_{32}\!-\alpha_{23}]=0\ea which happens if and
only if $\alpha$ is symmetric. For $(ii)$, the result comes from straightforward calculations that are given in the general case in Appendix \ref{appQ}. \end{proof} \bigskip Assuming now that $\alpha$ is symmetric, we can establish some sufficient conditions on the parameters $\alpha_{ij}$ so that the function $g$ introduced in Proposition \ref{propgpositif} is positive, i.e. Hypothesis \eqref{hyp1} is satisfied. \begin{prop} \label{propconditionsalpha} Let us now assume that $\alpha_{ij}=\alpha_{ji}$ for all $i,j\in\{1,2,3\}$. If $\alpha_{ii}>0$ for all $i\in\{1,2,3\}$ and one of the following conditions is satisfied: \begin{description} \item[$(i)$] $\alpha_{ij}>0$ for all $i,j$. \item[$(ii)$] There exists $i\in\{1,2,3\}$ such that $\alpha_{ik}>0$ for all $k$, and $\alpha_{jl}^2<\alpha_{jj}\alpha_{ll}$ if $i$, $j$ and $l$ are all distinct. \item[$(iii)$] There exists $i\in\{1,2,3\}$ such that $\alpha_{ii}\alpha_{jl}>\alpha_{ij}\alpha_{il}$, $\alpha_{ij}^2<\alpha_{ii}\alpha_{jj}$, and $\alpha_{il}^2<\alpha_{ii}\alpha_{ll}$ where $i$, $j$ and $l$ are all distinct. \item[$(iv)$] There exists $i\in\{1,2,3\}$ such that $\alpha_{ij}^2<\alpha_{ii}\alpha_{jj}$, $\alpha_{il}^2<\alpha_{ii}\alpha_{ll}$, and $(\alpha_{ii}\alpha_{jl}-\alpha_{ij}\alpha_{il})^2<(\alpha_{ii}\alpha_{ll}-\alpha_{il}^2)(\alpha_{ii}\alpha_{jj}-\alpha_{ij}^2)$ where $i$, $j$ and $l$ are all distinct. \end{description} then Hypothesis \eqref{hyp1} is satisfied. 
\end{prop} \begin{proof} Since $\alpha$ is symmetric, we have for all $z=(z_1,z_2,z_3)\in(\mathbb{R}_+)^3$: $$g(Z)=\alpha_{11}(z_1)^2+2\alpha_{12}z_1z_2+\alpha_{22}(z_2)^2+2\alpha_{23}z_2z_3+2\alpha_{13}z_1z_3+\alpha_{33}(z_3)^2.$$ Considering $g$ as a polynomial function of $z_1$, we easily obtain that $g$ is positive if $(1):$ the discriminant $\Delta_1(z_2,z_3)=(2\alpha_{12}z_2+2\alpha_{13}z_3)^2-4\alpha_{11}(\alpha_{22}(z_2)^2+\alpha_{33}(z_3)^2+2\alpha_{23}z_2z_3)$ is negative or if $(2):$ $(2\alpha_{12}z_2+2\alpha_{13}z_3)>\sqrt{\Delta_1(z_2,z_3)}$. If $\alpha_{12}>0$, $\alpha_{13}>0$, and $\alpha_{23}>0$ or $\alpha_{23}^2<\alpha_{22}\alpha_{33}$ (case $(i)$ or $(ii)$), then $(2)$ is true for all $z\in\left(\mathbb{R}_+\right)^3$. If $\alpha_{11}\alpha_{22}>\alpha_{12}^2$, $\alpha_{11}\alpha_{33}>\alpha_{13}^2$, and $\alpha_{11}\alpha_{23} >\alpha_{12}\alpha_{13}$ or $(\alpha_{11}\alpha_{23}-\alpha_{12}\alpha_{13})^2<(\alpha_{11}\alpha_{33}-\alpha_{13}^2)(\alpha_{11}\alpha_{22}-\alpha_{12}^2)$ (case $(iii)$ or $(iv)$), then $(1)$ is true for all $z\in\left(\mathbb{R}_+\right)^3$, which gives the result, allowing in the end for permutations of indices $1$, $2$, and $3$. \end{proof} \bigskip Note that these conditions mean that for Hypothesis \eqref{hyp1} to be true, we need that cooperation is not too strong or is compensated in some way by competition. \subsection{Absorption of the diffusion process $S$} In this section, we establish more precise results concerning the absorption of the process $S$ in the absorbing sets $\mathbf{0}$, $\mathbf{A}\cup\mathbf{0}$, $\mathbf{a}\cup\mathbf{0}$ and $\mathbf{A}\cup\mathbf{a}\cup\mathbf{0}$. \begin{thm}\label{theoremtemps} \begin{description} \item[$(i)$] For all $s\in \mathcal{D}\setminus\mathbf{0}$, $\mathbb{P}^S_s(T_{\mathbf{A}}\wedge T_{\mathbf{a}}<T_{\mathbf{0}})=1$. \item[$(ii)$] Let $\mathbf{M}=\{(s_1,s_2)\in\mathcal{D}: s_2=tan(\frac{\pi}{2\sqrt{2}}\}$ (see Figure \ref{figureS}). 
For all $s\in\mathbf{M}$, $\mathbb{P}^S_s(T_{\mathbf{a}}<T_{\mathbf{0}})>0$ and $\mathbb{P}^S_s(T_{\mathbf{A}}<T_{\mathbf{0}})>0$. \item[$(iii)$] For all $s\in \mathcal{D}\setminus\partial \mathcal{D}$, $\mathbb{P}^S_s(T_{\mathbf{A}}<T_{\mathbf{0}})>0$, and $\mathbb{P}^S_s(T_{\mathbf{a}}<T_{\mathbf{0}})>0$. \end{description} \end{thm} \begin{proof} We first consider the neutral case. To prove $(i)$, we start by extending the Girsanov approach presented in \cite{CattiauxMeleard2009} (proof of Proposition $2.3$) to two different subsets of $\mathcal{D}$. Let us indeed define (see Figure \ref{figureS}) \ba \mathcal{D}_1&=\{(s_1,s_2)\in\mathcal{D}, s_1\geq0\}=(\mathbb{R}_+)^2,\\ \mathcal{D}_2&=\{(s_1,s_2)\in\mathcal{D}, -us_2-s_1\geq0\},\\ \mathbf{A}_0&=\{(s_1,s_2)\in\mathcal{D}, s_1=-us_2\},\quad\text{ and}\\ \mathbf{a}_0&=\{(s_1,s_2)\in\mathcal{D}, s_1=0\}.\ea Note that $\partial\mathcal{D}_1=\mathbf{A}\cup\mathbf{a}_0\cup\mathbf{0}$ and that $\partial\mathcal{D}_2=\mathbf{a}\cup\mathbf{A}_0\cup\mathbf{0}$. Let us first assume that $S$ starts in $\mathcal{D}_1$. We consider the diffusion process $H$ which is the solution of the following stochastic differential equation: \ba dH^1_t&=dB^1_t-\frac{1}{2H^1_t}dt\\ dH^2_t&=dB^2_t.\ea Then $H^1$ and $H^2$ are independent diffusion processes defined up to their respective hitting times of the origin, $T^{H^1}_0$ and $T^{H^2}_0$.
Let us define for all $x\in\mathbb{R}_+^*$, $Q_1(x)=\frac{\ln(x)}{2}.$ Then from the extension of Girsanov's theorem, for $i\in\{1,2\}$, for all $t>0$, for every bounded Borel function $f$ on $C([0,t],\mathbb{R}_+)$ and for all $x\in\mathbb{R}_+^*$, \ben\label{GirsanovHW}\mathbb{E}^{H^i}_x\left(f(w)\mathbf{1}_{t<T_0(w)}\right)=\mathbb{E}^{W}_x \left(f(w)\mathbf{1}_{t<T_0(w)}e^{G_i(t)}\right),\een where $W$ is a $1$-dimensional Brownian motion and \ba G_1(t)&=Q_1(x)-Q_1(w_t)-\frac{1}{2}\int_0^t\left((Q_1'(w_u))^2-Q_1''(w_u)\right)du,\\G_2(t)&=0.\ea Therefore the law of the couple of stopping times $(T^{H^1}_0,T^{H^2}_0)$ is equivalent to the Lebesgue measure on $]0,\infty[\times]0,\infty[$. We now consider the diffusion processes $S$ and $H$ starting from $s\in\mathcal{D}_1$ and stopped when they reach $\partial\mathcal{D}_1=\mathbf{A}\cup\mathbf{a}_0\cup \mathbf{0}$, and define for all $(x_1,x_2)\in(\mathbb{R}_+)^2$, $Q_2(x_1,x_2)=Q_1(x_1)-Q(x_1,x_2)$ (where $Q$ is given in the neutral case in Equation \eqref{equationQ} and in the general case in Equation \eqref{equationQnonneutre}).
Then from the extended Girsanov theory again, we have for every bounded Borel function $f$ on $C([0,t],(\mathbb{R}_+)^2)$ and for all $s\in\mathcal{D}_1\setminus\mathbf{0}$, $$\mathbb{E}^{S}_s\left(f(w)\mathbf{1}_{t<T_{\partial \mathcal{D}_1}(w)}\right)=\mathbb{E}^{H}_s \left(f(w)\mathbf{1}_{t<T_{\partial \mathcal{D}_1}(w)}e^{R_t}\right)$$ where $R_t=Q_2(s)-Q_2(w_t)-\frac{1}{2}\int_0^t\left(\left|\nabla Q_2(w_u)\right|^2-\Delta Q_2(w_u)\right)du.$ Now $R_{t\wedge T_{\mathbf{A}}\wedge T_{\mathbf{a}_0}}$ is well defined, which gives for all $s\in\mathcal{D}_1$: \be\mathbb{E}^S_s\left[f(w)\mathbf{1}_{t<T_{\mathbf{0}}(w)}\right] =\mathbb{E}^{H}_s\left[f(w)\mathbf{1}_{t<T_{\mathbf{0}}(w)}\exp(R_{t\wedge T_{\mathbf{A}}\wedge T_{\mathbf{a}_0}})\right]\ee Then from Equation \eqref{GirsanovHW}, for any $s\in\mathcal{D}_1$,\ben\label{eqAB0}\mathbb{P}^S_s(T_{\mathbf{A}}\wedge T_{\mathbf{a}_0}<T_{\mathbf{0}})=1.\een Now, in the neutral case, the proportion $1-X$ of allele $a$ is a bounded martingale, from Corollary \ref{corNX}, which gives that starting from any $s\in\mathbf{A}_0\subset\mathcal{D}_1$, \ba\mathbb{E}^S_{s}\left(1-X_{T_{\mathbf{A}}\wedge T_{\mathbf{a}_0}}\right)&=\mathbb{P}^S_s(T_{\mathbf{A}}> T_{\mathbf{a}_0})\times\frac{1-\cos(\frac{\pi}{\sqrt{2}})}{2} =\mathbb{E}^S_s(1-X_0)=\frac{1+\cos(\frac{\pi}{\sqrt{2}})}{2}.\ea Finally, the same work can be done on $\mathcal{D}_2$ by symmetry, which gives that for all $s\in\mathcal{D}_2$, \ben\label{eqBA0}\mathbb{P}^S_s(T_{\mathbf{a}}\wedge T_{\mathbf{A}_0}<T_{\mathbf{0}})=1,\een and for all $s\in\mathbf{a}_0\subset\mathcal{D}_2$, \be\mathbb{P}^S_s(T_{\mathbf{a}}> T_{\mathbf{A}_0})=\frac{1+\cos(\frac{\pi}{\sqrt{2}})}{1-\cos(\frac{\pi}{\sqrt{2}})}.\ee Then the number of back and forths of $S$ between $\mathbf{A}_0$ and $\mathbf{a}_0$ follows a geometric law with parameter $\frac{1+\cos(\frac{\pi}{\sqrt{2}})}{1-\cos(\frac{\pi}{\sqrt{2}})}$ and is therefore almost surely finite.
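As a purely numerical sanity check (an illustration added here, not part of the proof), one can verify that the parameter of this geometric law is indeed a probability in $(0,1)$, so that the expected number of back and forths between $\mathbf{A}_0$ and $\mathbf{a}_0$ is finite:

```python
import math

# Martingale bookkeeping from the proof: starting on A_0,
#   P(T_A > T_{a_0}) * (1 - cos(pi/sqrt(2)))/2 = (1 + cos(pi/sqrt(2)))/2.
c = math.cos(math.pi / math.sqrt(2))  # ~ -0.606, so both sides below are positive
p = (1 + c) / (1 - c)                 # parameter of the geometric law, ~ 0.246

assert 0 < p < 1                      # a genuine probability
expected_back_and_forths = p / (1 - p)  # ~ 0.33, finite as claimed
```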
What is more, from Equations \eqref{eqAB0} and \eqref{eqBA0}, each time the diffusion $S$ reaches $\mathbf{A}_0$ (resp. $\mathbf{a}_0$), it goes to $\mathbf{a}_0$ or $\mathbf{A}$ (resp. $\mathbf{A}_0$ or $\mathbf{a}$) before $\mathbf{0}$ almost surely, which gives the result for the neutral case. Now note from Equation \eqref{changementvariables} that $S_t\in\mathbf{M}$ if and only if $X_t=1/2$. Therefore $(ii)$ is obvious in the neutral case since by symmetry, for all $s\in\mathbf{M}$, $\mathbb{P}^S_s(T_{\mathbf{a}}<T_{\mathbf{A}})=1/2$ which gives that $\mathbb{P}^S_s(T_{\mathbf{a}}<T_{\mathbf{0}})=\mathbb{P}^S_s(T_{\mathbf{A}}<T_{\mathbf{0}})=1/2>0$ from $(i)$. Finally for $(iii)$, using Girsanov theory as in the proof of $(i)$, for all $s\in\mathcal{D}\setminus\partial\mathcal{D}$, $\mathbb{P}^S_s(T_{\mathbf{M}}<T_{\mathbf{0}})=\mathbb{P}^S_s(T_{\mathbf{M}}<\infty)>0$. Now by the Markov property, for all $s\in\mathcal{D}\setminus\partial\mathcal{D}$, we get \ba\mathbb{P}^S_s(T_{\mathbf{a}}<T_{\mathbf{0}})&\geq\mathbb{P}^S_s(T_{\mathbf{a}}<T_{\mathbf{0}}, T_{\mathbf{M}}<\infty)=\frac{\mathbb{P}^S_s(T_{\mathbf{M}}<\infty)}{2}\quad\text{from $(i)$}.\ea Similarly, for all $s\in\mathcal{D}\setminus\partial\mathcal{D}$, $\mathbb{P}^S_s(T_{\mathbf{A}}<T_{\mathbf{0}})>0$. In the non-neutral case, by Girsanov theory again, the law of the process $(S^1,S^2)$ starting from $(s_1,s_2)\in\mathcal{D}$ is equivalent on $C([0,t],\mathcal{D})$ to the law of a process $(\tilde{S}^1,\tilde{S}^2)$ starting from $(s_1,s_2)$ and that is neutral, which gives all the results. \end{proof} \subsection{Quasi-stationary behavior of $S$} In \cite{CattiauxMeleard2009}, the study of quasi-stationary distributions has been developed for diffusion processes of the form \eqref{diffkolmo}. In particular, existence and uniqueness are given under some conditions on the diffusion coefficient $Q$. Let us prove that these conditions are satisfied in our case.
\begin{prop}\label{PropQnorme} \begin{description} \item[$(i)$] There exists a constant $C$ such that for all $s=(s_1,s_2)\in \mathcal{D}$, $$\vert\nabla Q(s)\vert^2-\Delta Q(s)\geq C.$$ \item[$(ii)$] $\inf\{\vert\nabla Q(s)\vert^2-\Delta Q(s), |s|\geq R, s\in\mathcal{D}\}\rightarrow +\infty$ when $R\rightarrow\infty$. \end{description} \end{prop} \begin{proof} Let us define $F(s)=\vert\nabla Q(s)\vert^2-\Delta Q(s)$ for all $s\in \mathcal{D}$. In the neutral case, we find: \ba F(s)&=\frac{1}{2((s_1)^2+(s_2)^2)\tan^2\left(\sqrt{2}\arctan\frac{s_2}{s_1}\right)}\\&+((s_1)^2+(s_2)^2)\left[\frac{\left(\beta-\delta-\frac{\alpha\gamma}{2}((s_1)^2+(s_2)^2)\right)^2}{4}+\frac{1}{((s_1)^2+(s_2)^2)^2}\right]\\& -((s_1)^2+(s_2)^2)\frac{\alpha\gamma}{2}\\& +\frac{1}{(s_1)^2+(s_2)^2}\frac{1+\tan^2\left(\sqrt{2}\arctan\frac{s_2}{s_1}\right)}{\tan^2\left(\sqrt{2}\arctan\frac{s_2}{s_1}\right)}\\& \geq C \quad\quad\text{clearly}.\ea We also have $$F(s)\geq ((s_1)^2+(s_2)^2)\left[\frac{\left(\beta-\delta-\frac{\alpha\gamma}{2}((s_1)^2+(s_2)^2)\right)^2}{4}+\frac{1}{((s_1)^2+(s_2)^2)^2}-\frac{\alpha\gamma}{2}\right] $$ which gives $(ii)$. The proof of the two points in the non-neutral case is given in Appendix \ref{appprop}. \end{proof} As in \cite{CattiauxMeleard2009}, the quasi-stationary behavior of $S$ is first studied with respect to the absorbing set $\partial \mathcal{D}$ and then with respect to the absorbing set $\mathbf{0}$ that corresponds to the extinction of the population. \begin{thm}\label{theoQSD1} \begin{description} \item[$(i)$] There exists a unique distribution $\nu$ on $\mathcal{D}\setminus\partial \mathcal{D}$ such that for all $E\subset\mathcal{D}\setminus\partial \mathcal{D}$ and all $t\geq0$, $$\mathbb{P}^S_{\nu}(S_t\in E|T_{\partial \mathcal{D}}>t)=\nu(E).$$ What is more, this distribution is a Yaglom limit for $S$, i.e.
for all $s\in\mathcal{D}\setminus\partial \mathcal{D}$, $$\underset{t\rightarrow\infty}{\lim}\mathbb{P}^S_s(S_t\in E|T_{\partial \mathcal{D}}>t)=\nu(E).$$ \item[$(ii)$]There exists a unique probability measure $\nu_0$ on $\mathcal{D}\setminus\mathbf{0}$ such that for all $s\in\mathcal{D}\setminus\partial \mathcal{D}$ and for all $E\subset\mathcal{D}\setminus\mathbf{0}$, \ben\label{existqsd}\underset{t\rightarrow\infty}{\lim}\mathbb{P}^S_s(S_t\in E\vert T_{\mathbf{0}}>t)=\nu_0(E).\een \end{description} \end{thm} \begin{proof} The set of assumptions $(H)$ of \cite{CattiauxMeleard2009} (p. $816-818$) is satisfied from Propositions \ref{extinction} and \ref{PropQnorme}, which gives $(i)$ from \cite{CattiauxMeleard2009} (Proposition $B.12$). $(ii)$ is obtained as in \cite{CattiauxMeleard2009} by using Theorem \ref{theoremtemps} and decomposing: \ben\label{decompofinal} \mathbb{P}_s(S_t\in E\vert T_{\mathbf{0}}>t)=\frac{\mathbb{P}_s(S_t\in E)}{\mathbb{P}_s(T_{\partial \mathcal{D}}>t)}\frac{\mathbb{P}_s(T_{\partial \mathcal{D}}>t)}{\mathbb{P}_s(T_{\mathbf{0}}>t)}.\een\end{proof} \bigskip Note that the quasi-stationary behavior of the diffusion process $((N_t,X_t),t\geq0)$ conditioned on non-extinction is obtained easily since $$\mathbb{P}^{N,X}_{(n,x)}((N_t,X_t)\in F\vert N_t>0)=\mathbb{P}^S_s(S_t\in E\vert T_\mathbf{0}>t),$$ if $s=\psi(n,x)$ and $E=\psi(F)$ where $\psi$ is defined in Proposition \ref{cgtvariableS}. Let us recall that we are interested in studying the possibility of a long-time coexistence of the two alleles $A$ and $a$ in the population conditioned on non-extinction. This means that we would like to approximate the quasi-stationary distribution $\nu_X$ such that \ben\label{distribfigures}\nu_X(.):=\underset{t\rightarrow\infty}{\lim}\mathbb{P}^{N,X}_{(n,x)}(X_t\in .\vert N_t>0)\een and we are interested in knowing whether $\nu_X(]0,1[)=0$ or not.
Indeed, if $\nu_X(]0,1[)\neq0$ we can observe a long-time coexistence of the two alleles in the population conditioned on non-extinction whereas if $\nu_X(]0,1[)=0$, no such coexistence is possible. Note that $\nu_X(]0,1[)=\nu_0(\mathcal{D}\setminus\partial\mathcal{D})$. For a haploid population with clonal reproduction, \cite{CattiauxMeleard2009} proved that in a pure competition case, i.e. when every individual competes with every other one, no coexistence of alleles is possible. However, in our diploid population, this result no longer needs to hold. Indeed, from Equation \eqref{decompofinal}, $$\mathbb{P}_s(S_t\in\mathcal{D}\setminus\partial\mathcal{D}\vert T_{\mathbf{0}}>t)=\frac{\mathbb{P}_s(T_{\partial\mathcal{D}}>t)}{\mathbb{P}_s(T_{\mathbf{0}}>t)},$$ therefore the possibility of coexistence of the two alleles relies on the fact that the time spent by the population in $\mathcal{D}\setminus\partial\mathcal{D}$ is not negligible compared to the time spent in $\mathcal{D}\setminus\mathbf{0}$. In a diploid population, if the heterozygotes are favored compared to homozygous individuals (this situation is called overdominance), they can make the coexistence period last longer than the remaining lifetime of the population once one of the alleles has disappeared. Similarly, as in \cite{CattiauxMeleard2009}, cooperation can favor the long-time coexistence of alleles in the population conditioned on non-extinction. These biological and mathematical intuitions are now supported by numerical results. \section{Numerical results}\label{sectionnumerical} Numerical simulations of $\nu_X$ are obtained following the Fleming-Viot algorithm introduced in \cite{Burdzyetal1996}, which has been extensively studied in the articles \cite{Villemonais2011} and \cite{Villemonais2012}. This approach consists in approximating the conditioned distribution $$\mathbb{P}^{N,X}_{(n,x)}((N_t,X_t)\in .\vert T_0>t)$$ by the empirical distribution of an interacting particle system.
More precisely, we consider a large number $k$ of particles, which all start from a given $(n,x)\in\mathbb{R}_+^*\times]0,1[$ and evolve independently of each other according to the law of the diffusion process $(N,X)$ defined by the diffusion equation \eqref{diffusionNXnoneutre}, until one of them hits $\{N=0\}$. At that time, the absorbed particle jumps to the position of one of the remaining $k-1$ particles, chosen uniformly at random among them. Then the particles evolve independently according to the law of the diffusion process $(N,X)$ until one of them reaches $\{N=0\}$, and so on. Theorem $1$ of \cite{Villemonais2012} gives the convergence, as $k$ goes to infinity, of the empirical distribution of the $k$ particles at time $t$ toward the conditioned distribution $\mathbb{P}^{N,X}_{(n,x)}((N_t,X_t)\in .\vert T_0>t)$. Here we present three biologically relevant examples. For each case, we set $k=2000$ and plot the empirical distribution at a large enough time $T$ of the $2000$ proportions of allele $A$ given by the respective positions of the $2000$ particles, starting from $(n,x)=(10,1/2)$. First, we consider a neutral competitive case, in which each individual is in competition with every other one, independently of their genotypes. Here, the quasi-stationary distribution $\nu_X$ of the proportion $X$ is a sum of two Dirac masses at $0$ and $1$ (Figure \ref{figureQSDneutre}), i.e. alleles $A$ and $a$ do not coexist in a long time limit. \begin{figure}[ht]\center \scalebox{0.25}{\includegraphics[trim=0cm 0cm 0cm 0cm,clip]{QSDneutre}} \caption[Quasi-stationary distribution in a neutral competitive case]{Approximation of the quasi-stationary distribution $\nu_X$ of the proportion $X$ of allele $A$ (Equation \eqref{distribfigures}), in a neutral competitive case.
In this figure, $\beta_i=1$, $\delta_i=0$, and $\alpha_{ij}=0.1$ for all $i$, $j$, and $T=40$.}\label{figureQSDneutre} \end{figure} Second (Figure \ref{figurecompetover}), we show an overdominance case: every individual competes equally with every other one but heterozygous individuals are favored compared to homozygotes, as their reproduction rate is higher. In this case, the quasi-stationary distribution $\nu_X$ charges only points of $]0,1[$, i.e. alleles $A$ and $a$ seem to coexist with probability $1$. This behavior is specific to Mendelian reproduction: in \cite{CattiauxMeleard2009}, the authors proved that no coexistence of alleles is possible in a haploid population with clonal reproduction, if every individual is in competition with every other one. \begin{figure}[ht]\center \scalebox{0.25}{\includegraphics[trim=0cm 0cm 0cm 0cm,clip]{QSDover}} \caption[Quasi-stationary distribution in an overdominance case]{Approximation of the quasi-stationary distribution $\nu_X$ of the proportion $X$ of allele $A$ (Equation \eqref{distribfigures}), in an overdominance case. In this figure, $\beta_i=1$ for all $i\neq2$, $\beta_2=5$, $\delta_i=0$ for all $i$, $\alpha_{ij}=0.1$ for all $(i,j)$, and $T=100$.}\label{figurecompetover} \end{figure} Third (Figure \ref{figureQSDniches}), we show a case in which individuals only compete with individuals with the same genotype; this can happen if different genotypes feed differently and have different predators. In this case, we can observe either a coexistence of the two alleles $A$ and $a$ or an elimination of one of the alleles, since the distribution $\nu_X$ charges both $\{0\}\cup\{1\}$ and $]0,1[$.
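For illustration only, here is a minimal one-dimensional sketch of the Fleming-Viot particle scheme described above, applied to a logistic Feller diffusion for the population size $N$ alone (the actual simulations use the two-dimensional diffusion $(N,X)$ of Equation \eqref{diffusionNXnoneutre}; the Euler time step, particle number and parameters below are made up for this sketch):

```python
import math
import random

def fleming_viot(k=100, t_max=3.0, dt=1e-3, n0=10.0,
                 beta=1.0, delta=0.0, alpha=0.1, gamma=1.0, seed=0):
    """Approximate the law of N_t conditioned on N_t > 0 for the logistic
    Feller diffusion dN = N(beta - delta - alpha*N) dt + sqrt(gamma*N) dB
    by the empirical distribution of k interacting particles."""
    rng = random.Random(seed)
    particles = [n0] * k
    for _ in range(int(t_max / dt)):
        for i in range(k):
            n = particles[i]
            # one Euler step of the diffusion
            n += n * (beta - delta - alpha * n) * dt \
                 + math.sqrt(max(gamma * n, 0.0) * dt) * rng.gauss(0.0, 1.0)
            if n <= 0.0:
                # absorbed at {N = 0}: jump to a uniformly chosen other particle
                j = rng.randrange(k - 1)
                n = particles[j + 1 if j >= i else j]
            particles[i] = n
    return particles

parts = fleming_viot()
mean_n = sum(parts) / len(parts)  # empirical mean of N_t given survival,
                                  # near the carrying capacity (beta - delta)/alpha
```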
\begin{figure}[ht]\center \scalebox{0.26}{\includegraphics[trim=0cm 0cm 0cm 0cm,clip]{QSDniches}} \caption[Quasi-stationary distribution in a separate niches case]{Approximation of the quasi-stationary distribution $\nu_X$ of the proportion $X$ of allele $A$ (Equation \eqref{distribfigures}), in a case where individuals with different genotypes do not compete or cooperate with each other. In this figure, $\beta_i=1$, $\delta_i=0$, $\alpha_{ii}=0.1$ for all $i$, $\alpha_{ij}=0$ for all $i\neq j$, and $T=2500$.}\label{figureQSDniches} \end{figure} \appendix \section{Calculations in the general case}\label{appendixQnonneutre} \subsection{Form of the function $Q$}\label{appQ} If $\alpha$ is symmetric, we use Equations \eqref{formuleq1}, \eqref{formuleq2} and \eqref{formuleXNappendix} and look for a function $Q$ such that $\frac{\partial Q(s)}{\partial s_1}=q_1(s)$ and $\frac{\partial Q(s)}{\partial s_2}=q_2(s)$. After calculating the partial derivatives of functions of the form: $$(s_1,s_2)\mapsto\left\{\begin{array}{l} ((s_1)^2+(s_2)^2)^k\cos^l\left(\sqrt{2}\arctan\left(\frac{s_2}{s_1}\right)\right)\quad\text{if $s_1\geq0$}\\ ((s_1)^2+(s_2)^2)^k\cos^l\left(\sqrt{2}\left(\arctan\left(\frac{s_2}{s_1}\right)+\pi\right)\right)\quad\text{if $s_1\leq0$} \end{array}\right.$$ for $k\in\{1,2\}$ and $l\in\{1,2,3,4\}$, we find that \ben \label{equationQnonneutre}Q(s)=\left\{\begin{array}{l} \frac{\ln((s_1)^2+(s_2)^2)}{2}+\frac{1}{2}\ln\left(\sin\left(\sqrt{2}\arctan\left(\frac{s_2}{s_1}\right)\right)\right)\\-\frac{(s_1)^2+(s_2)^2}{4}\left[\frac{\beta_1-\delta_1+2(\beta_2-\delta_2)+\beta_3-\delta_3}{4}\right.\\ \phantom{-\frac{(s_1)^2+(s_2)^2}{4}}\left.-\frac{(s_1)^2+(s_2)^2}{4}\gamma\frac{\alpha_{11}+4\alpha_{12}+2\alpha_{13} +4\alpha_{23}+4\alpha_{22}+\alpha_{33}}{16}\right]\\ -((s_1)^2+(s_2)^2)\left[h(s)\frac{\beta_1-\delta_1-(\beta_3-\delta_3)}{8}+h(s)^2\frac{\beta_1-\delta_1-2(\beta_2-\delta_2)+\beta_3-\delta_3}{16}\right]\\ +\frac{((s_1)^2+(s_2)^2)^2}{16}\gamma
h(s)\left[\frac{\alpha_{11}+2\alpha_{12}-2\alpha_{23}-\alpha_{33}}{4}+h(s)\frac{3\alpha_{11}-2\alpha_{13}-4\alpha_{22}+3\alpha_{33}}{8}\right.\\\phantom{\frac{((s_1)^2}{16}\gamma[}\left.+h(s)^2\frac{\alpha_{11}-2\alpha_{12}+2\alpha_{23}-\alpha_{33}}{4}+h(s)^3\frac{\alpha_{11}-4\alpha_{12} +2\alpha_{13}-4\alpha_{23}+4\alpha_{22}+\alpha_{33}}{16}\right]\\ \quad\text{if $s_1\geq0$}\\ \\ \frac{\ln((s_1)^2+(s_2)^2)}{2}+\frac{1}{2}\ln\left(\sin\left(\sqrt{2}\left(\arctan\left(\frac{s_2}{s_1}\right)+\pi\right)\right)\right)\\-\frac{(s_1)^2+(s_2)^2}{4}\left[\frac{\beta_1-\delta_1+2(\beta_2-\delta_2)+\beta_3-\delta_3}{4}\right.\\ \phantom{-\frac{(s_1)^2+(s_2)^2}{4}}\left.-\frac{(s_1)^2+(s_2)^2}{4}\gamma\frac{\alpha_{11}+4\alpha_{12}+2\alpha_{13} +4\alpha_{23}+4\alpha_{22}+\alpha_{33}}{16}\right]\\ -((s_1)^2+(s_2)^2)\left[h(s)\frac{\beta_1-\delta_1-(\beta_3-\delta_3)}{8}+h(s)^2\frac{\beta_1-\delta_1-2(\beta_2-\delta_2)+\beta_3-\delta_3}{16}\right]\\ +\frac{((s_1)^2+(s_2)^2)^2}{16}\gamma h(s)\left[\frac{\alpha_{11}+2\alpha_{12}-2\alpha_{23}-\alpha_{33}}{4}+h(s)\frac{3\alpha_{11}-2\alpha_{13}-4\alpha_{22}+3\alpha_{33}}{8}\right.\\\phantom{\frac{((s_1)^2}{16}\gamma[}\left.+h(s)^2\frac{\alpha_{11}-2\alpha_{12}+2\alpha_{23}-\alpha_{33}}{4}+h(s)^3\frac{\alpha_{11}-4\alpha_{12} +2\alpha_{13}-4\alpha_{23}+4\alpha_{22}+\alpha_{33}}{16}\right]\\ \quad\text{if $s_1\leq0$} \end{array}\right.\een where $$h(s)=\left\{\begin{array}{l}\cos\left(\sqrt{2}\arctan\left(\frac{s_2}{s_1}\right)\right) \quad \text{when $s_1\geq0$}\\ \cos\left(\sqrt{2}\left(\arctan\left(\frac{s_2}{s_1}\right)+\pi\right)\right) \quad \text{when $s_1\leq0$.}\end{array}\right.$$ \subsection{Form of the function $q$}\label{appq} Therefore if $s_1\geq0$: \ban\label{formuleq1app} 
q_1(s)&=\frac{s_1}{(s_1)^2+(s_2)^2}-\frac{s_2}{(s_1)^2+(s_2)^2}\frac{1}{\sqrt{2}\tan\left(\sqrt{2}\arctan\left(\frac{s_2}{s_1}\right)\right)}\\&-s_1\left[\frac{\beta_1-\delta_1+2(\beta_2-\delta_2)+\beta_3-\delta_3}{8}\right.\\&\left.\quad-\frac{(s_1)^2+(s_2)^2}{4}\gamma\frac{\alpha_{11}+4\alpha_{12}+2\alpha_{13}+4\alpha_{23}+4\alpha_{22}+\alpha_{33}}{16}\right]\\&-2s_1\left[h(s)\frac{\beta_1-\delta_1-(\beta_3-\delta_3)}{8}+h(s)^2\frac{\beta_1-\delta_1-2(\beta_2-\delta_2)+\beta_3-\delta_3}{16}\right]\\&+s_1\frac{((s_1)^2+(s_2)^2)}{4}\gamma h(s)\left[\frac{\alpha_{11}+2\alpha_{12}-2\alpha_{23}-\alpha_{33}}{4}\right.\\&\quad+h(s)\frac{3\alpha_{11}-2\alpha_{13}-4\alpha_{22}+3\alpha_{33}}{8}\\&\quad+\left. h(s)^2\frac{\alpha_{11}-2\alpha_{12}+2\alpha_{23}-\alpha_{33}}{4}+h(s)^3\frac{\alpha_{11}-4\alpha_{12}+2\alpha_{13}-4\alpha_{23}+4\alpha_{22}+\alpha_{33}}{16}\right]\\&-\sqrt{2}s_2\sin\left(\sqrt{2}\arctan\left(\frac{s_2}{s_1}\right)\right)\left[\frac{\beta_1-\delta_1-(\beta_3-\delta_3)}{8}\right.\\&\quad\quad\left.+h(s)\frac{\beta_1-\delta_1-2(\beta_2-\delta_2)+\beta_3-\delta_3}{8}\right]\\&+\frac{(s_1)^2+(s_2)^2}{16}\gamma\sqrt{2}s_2\sin\left(\sqrt{2}\arctan\left(\frac{s_2}{s_1}\right)\right)\left[\frac{\alpha_{11}+2\alpha_{12}-2\alpha_{23}-\alpha_{33}}{4}\right.\\&\quad+h(s)\frac{3\alpha_{11}-2\alpha_{13}-4\alpha_{22}+3\alpha_{33}}{4}\\&\quad+h(s)^2\frac{3(\alpha_{11}-2\alpha_{12}+2\alpha_{23}-\alpha_{33})}{4}\\&\left.\quad+h(s)^3\frac{\alpha_{11}-4\alpha_{12}+2\alpha_{13}-4\alpha_{23}+4\alpha_{22}+\alpha_{33}}{4}\right].\ean We have similar formulas for $q_2$ and when $s_1\leq0$. \subsection{Proof of Proposition \ref{PropQnorme}}\label{appprop} Now $F(s)=\vert\nabla Q(s)\vert^2-\Delta Q(s)=(q_1(s))^2+(q_2(s))^2-\frac{\partial q_1}{\partial s_1}(s)-\frac{\partial q_2}{\partial s_2}(s).$ Besides, note that under \eqref{hyp1}, $\frac{\alpha_{11}+4\alpha_{12}+2\alpha_{13}+4\alpha_{23}+4\alpha_{22}+\alpha_{33}}{16}>0$.
Therefore using Equations \eqref{formuleq1} and \eqref{formuleq2} we easily obtain that there exists a positive constant $C_1$ such that $(q_1(s))^2+(q_2(s))^2\geq C_1((s_1)^2+(s_2)^2)^3$. Finally, from Equation \eqref{formuleq1app}, we obtain after some calculations that there exists a positive constant $C_2$ such that $\frac{\partial q_1}{\partial s_1}(s)+\frac{\partial q_2}{\partial s_2}(s)\leq C_2((s_1)^2+(s_2)^2)^2$. Therefore Proposition \ref{PropQnorme} is true if $s_1\geq0$. If $s_1\leq0$, the result is true as well by symmetry. \bigskip \textbf{Acknowledgements:} I warmly thank my PhD advisor Sylvie M\'el\'eard for suggesting this research subject to me, and for her continual guidance during my work. I would also like to thank Denis Villemonais for his code and his help with the simulation results. This article benefited from the support of the ANR MANEGE (ANR-09-BLAN-0215) and from the Chair ``Mod\'elisation Math\'ematique et Biodiversit\'e" of Veolia Environnement - \'Ecole Polytechnique - Museum National d'Histoire Naturelle - Fondation X. \bibliographystyle{plainnat} \bibliography{C:/Users/Camille/Dropbox/Travail/Biblio/mabiblio} \end{document}
TITLE: Temperature decreases in adiabatic expansion and gas laws QUESTION [0 upvotes]: So I understand that the temperature decreases when a gas expands adiabatically. This is because there is no gain of heat from the surroundings, so the kinetic energy of molecules decreases in doing work on the surroundings, resulting in decreased temperature and pressure. Pressure decreases because the same number of molecules are vibrating with less kinetic energy in a larger volume. Now my question is, why should the temperature of the gas not increase, as the volume is increasing as well? Volume is directly proportional to temperature, because when the volume increases, the gas molecules easily vibrate more vigorously, because intermolecular forces easily capture the slow-moving gas molecules. But you may say that the gas taken is an ideal one, and no intermolecular forces operate. Why then do we say that Charles's law holds good only for an ideal gas? In Charles's law too, the volume is directly proportional to the temperature. But if there are no intermolecular forces, how does this happen? REPLY [2 votes]: So I understand that the temperature decreases when a gas expands adiabatically. This is because there is no gain of heat from the surroundings, so the kinetic energy of molecules decreases in doing work on the surroundings, resulting in decreased temperature and pressure. If by expanding adiabatically, you mean the gas does boundary work on the surroundings as in a piston-cylinder arrangement (as opposed to a free expansion in a vacuum), then yes there will be a corresponding decrease in internal energy. Per the first law, $\Delta U=Q-W$, $Q=0$, therefore $\Delta U=-W$. But the internal energy of an ideal gas depends only on temperature, so this decrease in internal energy means a decrease in temperature. Pressure decreases because the same number of molecules are vibrating with less kinetic energy in a larger volume.
The decrease in translational kinetic energy is due to a decrease in the average translational velocities of the molecules, not due to decreased "vibration". In any case, better to say the pressure decreases because the number of collisions per unit time on the walls is less due to the increase in volume and decrease in velocity, and the change of momentum (force) is less due to the decrease in velocity. Now my question is, why should the temperature of the gas not increase, as the volume is increasing as well? Volume is directly proportional to temperature,... Assuming an ideal gas, volume is directly proportional to temperature only when the pressure is constant. $$pV=nRT$$ $$V=\frac{nRT}{p}$$ And the pressure is not constant in an adiabatic expansion. Why then do we say that Charles's law holds good only for an ideal gas? In Charles's law too, the volume is directly proportional to the temperature. Charles's law only applies when the pressure remains constant. And as you've already stated, the pressure decreases in an adiabatic expansion. Can it be possible that all the variables change simultaneously? Or is it necessary that one of the variables should remain constant? For an adiabatic process volume, temperature, and pressure all vary simultaneously, but according to specific relationships. For an ideal gas pressure and volume vary simultaneously according to $$pV^{k}=constant$$ where $k=c_{p}/c_{v}$. Substituting for $p$ or $V$ using the ideal gas equation, temperature and volume vary simultaneously according to $$nRTV^{k-1}=constant$$ and temperature and pressure vary simultaneously according to $$(nRT)^{k}p^{1-k}=constant$$ For other reversible processes, one gas property may be held constant. Pressure is constant for an isobaric process, temperature is constant for an isothermal process, and volume is constant for an isochoric process. Hope this helps.
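A quick numerical illustration of these relations (diatomic ideal gas, $k = 1.4$, with an arbitrary made-up initial state): doubling the volume adiabatically lowers both the pressure and the temperature, while the ideal-gas combination $pV/T = nR$ stays constant.

```python
k = 1.4                              # c_p / c_v for a diatomic ideal gas
p1, V1, T1 = 100_000.0, 1.0, 300.0   # arbitrary initial state (Pa, m^3, K)

V2 = 2.0 * V1                        # reversible adiabatic expansion to twice the volume
p2 = p1 * (V1 / V2) ** k             # from p V^k = constant
T2 = T1 * (V1 / V2) ** (k - 1)       # from T V^(k-1) = constant

assert p2 < p1 and T2 < T1           # both drop, even though the volume grew
# Consistency with the ideal gas law: pV/T = nR is unchanged
assert abs(p1 * V1 / T1 - p2 * V2 / T2) < 1e-6
```

Here $T_2 \approx 227\ \mathrm{K}$: the gas cools by about $73\ \mathrm{K}$ even though the volume doubled, precisely because the pressure did not stay constant.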
TITLE: Do quaternions follow the rule for simplifying logarithms of products? QUESTION [2 upvotes]: The logarithm of a quaternion $q$ that has a real part $a$ and imaginary parts $v$ is defined as $$ \ln q = \ln \left\lvert q \right\rvert + \hat{v} \arccos \frac{a}{\left\lvert q \right\rvert} $$ The exponential of a quaternion $q$ that has a real part $a$ and imaginary parts $v$ is defined as $$ \exp q = e^a \left(\cos \left\lvert v\right\rvert+ \hat{v} \sin \left\lvert v \right\rvert\right) $$ Is it true for all purely imaginary quaternions that: $$ \ln (\exp u * \exp v) = u + v $$ $$ c u = \ln \left((\exp u)^c\right) $$ I have a math program I wrote which seems to be inaccurate and want to know if the bug is in my program or in my idea of quaternions. REPLY [3 votes]: No, this is not true, and as usual the problem is lack of commutativity. The correct statement in general is surprisingly complicated and is given by the Baker-Campbell-Hausdorff formula. (You already have to be a bit careful with this rule for complex numbers, but for quaternions it just totally goes out the window.)
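If you want to pin down where a program goes wrong, a small self-contained numeric check (using a minimal quaternion implementation written just for this answer) shows that $\ln(\exp u \, \exp v) \neq u + v$ for non-commuting purely imaginary $u, v$, while equality does hold when $u$ and $v$ are parallel and hence commute:

```python
import math

def qmul(p, q):
    """Hamilton product of quaternions given as (a, x, y, z) tuples."""
    a1, x1, y1, z1 = p
    a2, x2, y2, z2 = q
    return (a1*a2 - x1*x2 - y1*y2 - z1*z2,
            a1*x2 + x1*a2 + y1*z2 - z1*y2,
            a1*y2 - x1*z2 + y1*a2 + z1*x2,
            a1*z2 + x1*y2 - y1*x2 + z1*a2)

def qexp(q):
    """exp q = e^a (cos|v| + v_hat sin|v|)."""
    a, x, y, z = q
    n = math.sqrt(x*x + y*y + z*z)
    ea = math.exp(a)
    if n == 0.0:
        return (ea, 0.0, 0.0, 0.0)
    s = ea * math.sin(n) / n
    return (ea * math.cos(n), s*x, s*y, s*z)

def qlog(q):
    """ln q = ln|q| + v_hat arccos(a/|q|)  (principal branch)."""
    a, x, y, z = q
    nq = math.sqrt(a*a + x*x + y*y + z*z)
    nv = math.sqrt(x*x + y*y + z*z)
    if nv == 0.0:
        return (math.log(nq), 0.0, 0.0, 0.0)   # assumes a > 0
    t = math.acos(a / nq) / nv
    return (math.log(nq), t*x, t*y, t*z)

u = (0.0, 0.7, 0.0, 0.0)   # 0.7 i
v = (0.0, 0.0, 0.5, 0.0)   # 0.5 j -- does not commute with u

lhs = qlog(qmul(qexp(u), qexp(v)))
err = max(abs(l - (a + b)) for l, a, b in zip(lhs, u, v))
# err ~ 0.35: the naive rule fails for non-commuting u, v

w = (0.0, 0.5, 0.0, 0.0)   # parallel to u, hence commutes with it
err_parallel = max(abs(l - (a + b))
                   for l, a, b in zip(qlog(qmul(qexp(u), qexp(w))), u, w))
# err_parallel ~ 1e-16: the rule holds for commuting quaternions,
# as long as the angles stay within the principal branch
```

The residual mismatch in the non-commuting case is dominated by the first Baker-Campbell-Hausdorff correction $\tfrac{1}{2}[u,v]$, which here equals $0.35\,k$.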
\begin{document} \maketitle \begin{abstract} Non-commutative geometry, conceived by Alain Connes, is a new branch of mathematics whose aim is the study of geometrical spaces using tools from operator algebras and functional analysis. Specifically, metrics for non-commutative manifolds are now encoded via spectral triples, a set of data involving a Hilbert space, an algebra of operators acting on it and an unbounded self-adjoint operator, possibly endowed with additional structures. Our main objective is to prove a version of Gel'fand-Na\u\i mark duality adapted to the context of Alain Connes' spectral triples. In this preliminary exposition, we present: \begin{itemize} \item a description of the relevant categories of geometrical spaces, namely compact Hausdorff smooth finite-dimensional orientable Riemannian manifolds, or more generally Hermitian bundles of Clifford modules over them; \item some tentative definitions of categories of algebraic structures, namely commutative Riemannian spectral triples; \item a construction of functors that associate a naive morphism of spectral triples to every smooth (totally geodesic) map. \end{itemize} The full construction of spectrum functors (reconstruction theorem for morphisms) and a proof of duality between the previous ``geometrical'' and ``algebraic'' categories are postponed to subsequent works, but we provide here some hints in this direction. We also conjecture how the previous ``algebraic'' categories might provide a suitable environment for the description of morphisms in non-commutative geometry. \smallskip \noindent \emph{Keywords:} Non-commutative Geometry, Spectral Triple, Gel'fand-Na\u\i mark Duality, Categories of Bundles and Modules. \smallskip \noindent \emph{MSC-2010:} 46L87, 46M15, 46L08, 46M20, 16D90.
\end{abstract} \section{Introduction} In A.Connes' non-commutative geometry~\cite{C,FGV,Lan}, every compact Hausdorff spinorial Riemannian finite-dimensional orientable manifold $(M,g_M)$ with a spinorial Hermitian bundle $S(M)$ and volume form $\mu_{g_M}$ is associated to a commutative regular spectral triple $(\As,\H,D_M)$ where: $\As:=C(M;\CC)$ is the unital commutative C*-algebra of complex valued continuous functions on $M$ with respect to the maximum modulus norm; $\H:=L^2(S(M))$ is the Hilbert space obtained by completion of the \hbox{$C(M)$-module} $\Gamma(S(M))$ of continuous sections of the spinor bundle with respect to the norm induced by the inner product \hbox{$\ip{\sigma}{\rho}:=\int \ip{\sigma_x}{\rho_x}_{S(M)_x}\, \text{d}\mu_{g_M}$}, for all $\sigma,\rho\in \Gamma(S(M))$; and $D_M$ is the Dirac operator, i.e.~the closure of the densely defined essentially self-adjoint operator obtained by contracting the spinorial Levi-Civita connection with the Clifford multiplication. A reconstruction theorem proved by A.Connes~\cite{C11,C12} (see also~\cite{Re1,RV} for previous only partially successful attempts) ensures that a commutative spectral triple (that is irreducible, real, graded, strongly regular, $m$-dimensional, finite, absolutely continuous, orientable, with totally antisymmetric Hochschild cycle in the last $m$ entries, and satisfying Poincar\'e duality) is naturally isomorphic to the above mentioned canonical spectral triple of a spinorial Riemannian manifold with a given Hermitian spinor bundle equipped with charge conjugation. The reconstruction theorem has been recently extended to cover the case of Riemannian spectral triples~\cite{LRV} and to more general situations of almost commutative (real) spectral triples~\cite{Ca,Ca2}.
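To make the role of the Dirac operator concrete in the simplest possible case (a toy illustration added here, not taken from the cited references): on the circle $S^1$, $D=-i\,\mathrm{d}/\mathrm{d}\theta$ is diagonal in the Fourier basis, the commutator $[D,f]$ with a multiplication operator $f$ acts as multiplication by $-if'$, and $\Vert[D,f]\Vert$ is the Lipschitz seminorm of $f$; this is exactly the quantity entering Connes' distance formula, which is how the triple encodes the metric. In a finite Fourier truncation, with $f(\theta)=e^{i\theta}$ acting as a shift of the Fourier index:

```python
import numpy as np

m = 10
k = np.arange(-m, m + 1)                # Fourier modes e^{ik theta}, k = -m..m
D = np.diag(k).astype(complex)          # Dirac operator -i d/dtheta, diagonal here

# Multiplication by f(theta) = e^{i theta} shifts the Fourier index k -> k + 1.
S = np.eye(2 * m + 1, k=-1, dtype=complex)

comm = D @ S - S @ D                    # [D, f], i.e. multiplication by -i f' = e^{i theta}
assert np.allclose(comm, S)             # exact, even after truncation

# Operator norm of [D, f] equals sup|f'| = 1, the Lipschitz seminorm of f
assert abs(np.linalg.norm(comm, 2) - 1.0) < 1e-12
```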
It is still an open problem to reformulate these reconstruction theorems for (almost) commutative spectral triples in a fully categorical context in the same spirit as such celebrated cornerstones of non-commutative topology as Gel'fand-Na\u\i mark duality (between categories of continuous maps of compact Hausdorff topological spaces and categories of unital $*$-homomorphisms of commutative unital C*-algebras), Serre-Swan equivalence (between vector bundles and finite projective modules), or Takahashi duality (between Hilbert bundles over compact Hausdorff spaces and Hilbert C*-modules over commutative unital C*-algebras). As a first step towards such duality results, several suggestions for the construction of categories of spectral triples have been put forward (see for example~\cite{CM6,B1,B3,B4,B5,B6} and the references therein). Of particular relevance is the category of spectral triples recently constructed by B.Mesland~\cite{Me,Me2}, where morphisms are Kasparov KK-bimodules equipped with smooth structure and connection. In this very preliminary and tentative account our purpose, in the spirit of Cartesian geometry, is to suggest a description of some possible dualities between categories of geometrical spaces (usually compact Hausdorff smooth finite-dimensional orientable Riemannian manifolds or more generally Hermitian bundles of Clifford modules over them), here collectively denoted by $\Tf$, and categories of algebraic functional analytic structures (usually some variants of Connes' spectral triples) here denoted by $\Af$. The dualities are realized via two contravariant functors, the section functor \hbox{$\Gamma:\Tf\to\Af$} and the spectrum functor $\Sigma:\Af\to\Tf$ as in the following diagram: \begin{equation*} \xymatrix{ \Tf \rtwocell^\Gamma_\Sigma{'} & \Af.
} \end{equation*} In the commutative C*-algebra context, we will describe how to embed categories of smooth (totally geodesic) maps of compact Riemannian manifolds into more general categories of Hermitian bundles and we will also see how a section functor can be used to trade such categories of bundles for categories of Hilbert C*-bimodules. In the non-commutative C*-algebra case, we will mainly deal with topological situations, discussing only the rather special categories of ``factorizable'' Hilbert \hbox{C*-bi}\-modules over tensor products of unital C*-algebras over commutative subalgebras. A more complete study aiming at the construction of functors from Riemannian manifolds to B.Mesland's category of spectral triples and to the possible definition of involutive categories of spectral triples is left for future work. \section{Categories of Manifolds, Bundles and Propagators} The objects of our categories will be, for now, compact Hausdorff smooth Riemannian orientable finite-dimensional manifolds that are not necessarily connected.\footnote{For background on manifolds, bundles and differential geometry, the reader is referred for example to R.Abraham, J.Marsden, T.Ratiu~\cite{AMR} and L.Nicolaescu~\cite{N}.} We have several interesting categories that can be naturally constructed: \begin{itemize} \item[a)] The category $\Mf^\infty$ of smooth maps between such manifolds and its subcategories $\Mf^\infty_e$ of smooth embeddings\footnote{Here and in all the subsequent items we could also consider categories of (injective) immersions in place of embeddings.} and $\Mf^\infty_s$ of smooth submersions. \item[b)] The category R-$\Mf^\infty_e$ of smooth maps that are Riemannian embeddings and R-$\Mf^\infty_s$ of smooth Riemannian submersions. \item[c)] The category R-$\Mf^\infty_{ge}$ of totally geodesic smooth Riemannian embeddings and R-$\Mf^\infty_{gs}$ of totally geodesic smooth Riemannian submersions.
\item[d)] The category R-$\Mf^\infty_{gec}$ of totally geodesic smooth Riemannian embeddings of connected components and R-$\Mf^\infty_{gsc}$ of totally geodesic smooth Riemannian coverings. \end{itemize} There are natural inclusion functors between such categories as in the following diagrams: \begin{gather*} \xymatrix{ \text{R-}\Mf^\infty_{gec} \ar@{^(->}[r] & \text{R-}\Mf^\infty_{ge} \ar@{^(->}[r] & \text{R-}\Mf^\infty_{e} \ar@{^(->}[r] & \Mf^\infty_{e} \ar@{^(->}[r] & \Mf^\infty} \\ \xymatrix{\text{R-}\Mf^\infty_{gsc} \ar@{^(->}[r] & \text{R-}\Mf^\infty_{gs} \ar@{^(->}[r] & \text{R-}\Mf^\infty_{s} \ar@{^(->}[r] & \Mf^\infty_{s} \ar@{^(->}[r] & \Mf^\infty } \end{gather*} The previous categories are not equipped with involutions, since the inverse relations are generally not functions; furthermore, the categories of embeddings and submersions appear in a kind of dual role. A more satisfactory involutive environment can be obtained considering (in the terminology often used in algebraic geometry) cycles, i.e.~relations $R$ between such manifolds that are themselves compact (respectively (totally geodesic) Riemannian) orientable sub-manifolds of the product manifold $M\xleftarrow{\pi_M}M\times N\xrightarrow{\pi_N}N$ and equipping them with ``bundle-propagators'' between the tangent bundles $T(M)$ and $T(N)$, i.e.~smooth Hermitian sub-bundles of $\pi_M^\bullet(T(M))\oplus\pi_N^\bullet(T(N))|_R$ that are fiberwise linear (partial isometric, or equivalently partial co-isometric) relations between the corresponding fibers of the pull-backs on $R$ of the tangent bundles of $M$ and $N$.\footnote{ Since the equalizer of smooth maps between smooth manifolds usually is not a smooth manifold, strictly speaking, the composition of smooth ((totally geodesic) Riemannian) cycles between Hausdorff compact Riemannian orientable manifolds fails to be another such manifold.
In order to solve this problem it is appropriate to embed the previous categories of manifolds into the corresponding categories of Hausdorff compact Riemannian orientable finite-dimensional \emph{diffeological spaces}~\cite{I-Z,La} and from now on, whenever necessary, we will assume that such an embedding has been done.} Furthermore, in order to assure the closure under composition of this category of bundle-propagators, we will actually work with smooth ((totally geodesic) Riemannian) \emph{relational spans} $M\xleftarrow{\rho_M}R \xrightarrow{\rho_N}N$ of such compact Hausdorff Riemannian manifolds (or diffeological spaces). More generally, we can further ``decouple'' the Hermitian bundles from the underlying Riemannian structure of the manifolds allowing ``(amplified) propagators'' between arbitrary Hermitian bundles of Clifford modules over the given manifolds that are equipped with a compatible connection. In more detail, given two smooth (diffeological) Hermitian bundles $(E^1,\pi^1,X^1)$ and $(E^2,\pi^2,X^2)$ over compact Hausdorff smooth orientable finite-dimensional Riemannian manifolds (diffeological spaces) $X^1$ and $X^2$, here is a description of the morphisms in some of the several relevant categories $\Ef$ of bundles: \begin{itemize} \item[$\Ef^1$] The usual categories of bundle morphisms: $(f,F)$ where $f:X^1\to X^2$ is a morphism of manifolds (diffeological spaces) in any of the previous categories $\Mf$ and $F: E^1\to E^2$ is a smooth map such that $\pi^2\circ F=f\circ\pi^1$ and that is fiberwise linear, and respectively isometric (when $f$ is in $\Mf_e$) or co-isometric (when $f$ is in $\Mf_s$). \item[$\Ef^2$] The category of Takahashi bundle morphisms~\cite{Ta2}: $(f,F)$ where the map $f:X^1\to X^2$ is as above and $F: f^\bullet(E^2)\to E^1$ is a morphism of bundles over $X^1$ in the previous sense.
\item[$\Ef^3$] The category of \emph{propagators of bundles}: $(E,\gamma,R)$ where $R$ is a smooth ((totally geodesic) Riemannian) relational span $X^1\xleftarrow{\rho_1}R \xrightarrow{\rho_2} X^2$ and $E$ is the total space of an Hermitian sub-bundle, over $R$, of the Whitney sum $\rho_1^\bullet(E^1)\oplus \rho_2^\bullet(E^2)$, that is a fiberwise partial isometry i.e.~the fiber $E_{r}:=\gamma^{-1}(r)\subset \rho_1^\bullet(E^1)_r\oplus \rho_2^\bullet(E^2)_r$ is the graph of a partial isometry between $\rho^\bullet_1(E^1)_r$ and $\rho^\bullet_2(E^2)_r$, for all $r\in R$.\footnote{This category, as well as the category $\Ef^4$, is involutive and its morphisms can be considered as a bivariant version of Takahashi bundle morphisms.} \item[$\Ef^4$] The category of amplified propagators of bundles: $(E,\gamma,R)$, where $R$ is a relational span as above and $E$ is an Hermitian sub-bundle of the Whitney sum $\rho_1^\bullet(E^1\otimes W^1)\oplus \rho_2^\bullet(E^2\otimes W^2)$, for two given Hermitian bundles $W^1$ over $X^1$ and $W^2$ over $X^2$ in such a way that, for every $r\in R$, $E_r$ is the graph of a partial isometry.\footnote{ Here, and in the category $\Ef^3$, the Hermitian structure on $E_r$, $r\in R$ is uniquely determined by the isometry requirement for the projections and it is a rescaling of the metric induced by the orthogonal Whitney sum. \\ More generally one can simply consider spans of fiberwise isometries $\rho_1^\bullet(E^1)\xleftarrow{\pi_1}E \xrightarrow{\pi_2} \rho_2^\bullet(E^2)$ of Hermitian bundles over $R$. } \end{itemize} We have natural inclusions relating the previous categories as follows: \begin{equation*} \xymatrix{ \Ef^1 \ar@{^(->}[r] & \Ef^3 & & \Ef^2 \ar@{^(->}[r] & \Ef^3 & & \Ef^3 \ar@{^(->}[r] & \Ef^4. 
} \end{equation*} Whenever we have bundles of Clifford modules that are equipped with Clifford connections, we can require our morphisms to be stable under the action of the tensor product of the Clifford bundles and totally geodesic for the connection. Exploiting the language of 2-categories, we can produce an even more efficient way to encode such categorical structures:\footnote{For details on higher categories, the reader is referred for example to T.Leinster~\cite{L}.} objects are compact Hausdorff smooth orientable finite-dimensional manifolds (diffeological spaces) $X^1,X^2$; 1-arrows are Hermitian bundles $E^1,E^2$ (possibly equipped with a Clifford action and a compatible connection) over relational spans between $X^1$ and $X^2$; 2-arrows are (amplified) propagators between such 1-arrow bundles, that can be required to be stable under the Clifford action and totally geodesic for the connection. Note that, since 2-arrows are themselves bundles over relational spans, the construction of arrows can be iterated, obtaining arbitrary higher categories $\Tf$ of bivariant bundles over relational spans. We can now sketch the construction of embedding functors from the several categories $\Mf$ of manifolds to $\Ef$ of bundles (and so into the higher categories $\Tf$ of bivariant bundles). \begin{theorem} We have covariant Grassmann functors $\Lambda^\CC: \Mf\to\Ef$ from the previous categories of manifolds into the category $\Ef$ of (amplified) propagators of bundles. \end{theorem} \begin{proof} On the objects, the functor $\Lambda^\CC$ associates to every smooth orientable Riemannian manifold $M$ its complexified Grassmann algebra Hermitian bundle $\Lambda^\CC(M)$ with its natural right and left Clifford actions of the complexified Clifford algebra bundle $\CCl(M)$ and with the induced Levi-Civita Ehresmann connection.
On the arrows, the functor $\Lambda^\CC$ associates to every smooth map $f:X^1\to X^2$ the complexified Bogoljubov second quantization $\Lambda^\CC(Df):\Lambda^\CC(X^1)\to\Lambda^\CC(X^2)$ of the differential map \hbox{$Df:T(X^1)\to T(X^2)$} of $f$. If the map $f$ is a (totally geodesic) Riemannian isometry or co-isometry, $\Lambda^\CC(Df)$ is fiberwise an isometry or co-isometry and hence its graph determines a propagator bundle. The complexified Clifford functor $\CCl$ associates to every object $M$ its complexified Clifford bundle $\CCl(M)$ and to every Riemannian (co)isometry $f$ an amplified propagator of the Clifford bundles that induces a right/left Clifford action on the propagator bundle determined by $\Lambda^\CC(Df)$ between the Grassmann bundles. For totally geodesic maps, the covariant derivative on the Whitney sum of the Grassmann bundles decomposes, inducing a covariant derivative on the propagator bundle. \end{proof} If we examine in more detail how totally geodesic maps between compact Riemannian manifolds are described in terms of propagators, we see that the isometric differential map \hbox{$Df:T(M)\to T(N)$} induces an orthogonal splitting $T(N)|_{f(M)}=Df(T(M))\oplus Df(T(M))^\perp$ of the restriction to $f(M)$ of the tangent bundle of $N$. Passing to the complexified Grassmann bundles, and similarly for the Clifford bundles, we obtain the following tensorial decompositions \begin{gather*} \Lambda^\CC(T(N)|_{f(M)})\simeq\Lambda^\CC(Df(T(M)))\otimes\Lambda^\CC(Df(T(M))^\perp), \\ \CCl(T(N)|_{f(M)})\simeq\CCl(Df(T(M)))\otimes\CCl (Df(T(M))^\perp).
\end{gather*} For totally geodesic maps, the restriction of the Levi-Civita connection on $T(N)|_{f(M)}$ decomposes as a direct sum of the connections on the subbundles $Df(T(M))$ and $Df(T(M))^\perp$ and, denoting by $\nabla^N$, $\nabla^M$ and $\nabla^\perp$ the connection induced respectively on $\Lambda^\CC(T(N)|_{f(M)})$, $\Lambda^\CC(Df(T(M)))$ and $\Lambda^\CC(Df(T(M))^\perp)$, we have $\nabla^N=(\nabla^M\otimes I) \oplus (I\otimes \nabla^\perp)$ and contracting with the Clifford actions we obtain the following relation $D_N|_{f(M)}=(D_M\otimes I)\oplus (I\otimes D_\perp)$ between the Hodge-De Rham Dirac operators for $M$ and $N$, where $D_\perp$ denotes a ``transversal'' operator obtained contracting the Clifford action with the orthogonal part of the connection $\nabla^\perp$. The interesting part, in view of the future study of links with the notion of B.Mesland morphisms of spectral triples, is the fact that the Grassmann bundle $\Lambda^\CC(N)$ decomposes as a tensor product of a ``copy'' of the Grassmann bundle of $M$ with a ``transversal'' factor that, passing to the module of sections, will provide a Mesland morphism between the Hodge-De Rham spectral triples of $M$ and $N$. \section{Naive Categories of Spectral Geometries} In this section we try to examine some very tentative candidates for categories $\Af$ of non-commutative spectral geometries that might be used as targets for functors that are defined on the categories $\Ef$ of bundles described in the previous section. Our general ideology will be to start at the topological level from Takahashi duality~\cite{Ta2} (that generalizes the well-known Gel'fand-Na\u\i mark duality between compact Hausdorff spaces and unital commutative C*-algebras) and proceed from there progressively adding the additional structures (Clifford actions, connections) that are required for the description of more rigid geometrical settings. 
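\smallskip As a reminder of the prototype of all such dualities (recalled here only as a well-known illustration, not specific to the present setting), Gel'fand-Na\u\i mark duality is implemented by the contravariant functor $C$ sending a compact Hausdorff space $X$ to the commutative unital C*-algebra $C(X)$ of continuous complex-valued functions on $X$ and a continuous map $f:X\to Y$ to the unital $*$-homomorphism
\begin{equation*}
C(f):C(Y)\to C(X), \qquad C(f):g\mapsto g\circ f;
\end{equation*}
Takahashi duality extends this picture from functions to continuous sections of Hilbert bundles.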
Since Takahashi duality is between Hilbert bundles over compact Hausdorff spaces and Hilbert C*-modules over commutative unital C*-algebras, it is natural for us to start working on Hilbert C*-(bi)modules rather than on Hilbert spaces. This explains our need to partially reformulate a naive notion of A.Connes spectral triples in the case of Hilbert C*-modules. For our purpose here, a (naive) \emph{spectral triple} $(\As,\H,D)$ is given by a (possibly non-commutative) unital C*-algebra $\As$ faithfully represented on the Hilbert space $\H$ and a (possibly unbounded) self-adjoint operator $D$ with compact resolvent and such that the commutator $[D,x]$ extends to a bounded operator on $\H$, for all $x$ in a dense unital \hbox{C*-subalgebra} of $\As$ leaving invariant the domain of $D$. We will reserve the terms \emph{Atiyah-Singer spectral triples} and \emph{Hodge-De Rham spectral triples} for all those spectral triples, with commutative C*-algebras $\As$, for which respectively either A.Connes' or S.Lord-A.Rennie-J.Varilly's reconstruction theorems~\cite{C11,LRV} are viable. We say that $(\As,\Ms,D)$ is a naive left \emph{spectral module triple} if $\Ms$ is a unital left Hilbert C*-module, over the unital C*-algebra $\As$, that is equipped with a (possibly unbounded) regular operator $D$ such that, for all $x$ in a dense unital C*-subalgebra of $\As$ leaving invariant the domain of $D$, the commutator $[D,x]$ extends to an adjointable operator on $\Ms$. The first category of spectral geometries that we consider is strictly adapted to the commutative algebra situation and will be in duality with the categories of (amplified) propagators already described.\footnote{ For the case of finitely generated projective Hilbert C*-modules. 
} \begin{proposition} There is an involutive category $\Af^1$ of propagators of unital Hilbert \hbox{C*-modules} over commutative unital C*-algebras whose morphisms from the module $\Ms_\As$ to the module $\Ns_\Bs$ are given by Hilbert C*-modules $\Es_\Rs$ that are graphs $\Es_\Rs\subset(\Rs\otimes_\As\Ms)\oplus(\Rs\otimes_\Bs\Ns)$ of isometric morphisms of Hilbert C*-modules on $\Rs$, where $\Rs$ is a unital C*-algebra bimodule over $\As\otimes_\CC\Bs$.\footnote{ More generally we can consider spans of isometries of Hilbert C*-modules $\Rs\otimes_\As\Ms\xleftarrow{\Lambda_1}\Es_\Rs \xrightarrow{\Lambda_2} \Rs\otimes_\Bs\Ns$ over $\Rs$. } \end{proposition} The details of the proposition can be obtained considering that the section functor $\Gamma$ from Hilbert bundles to Hilbert C*-modules preserves direct sums and transforms the pull-back of bundles into change of the base algebra of modules via tensor product. In such a commutative setting, if necessary, further requirements can be added to assure that these propagators of bimodules correspond to (totally geodesic) Riemannian maps. Note that, as always, propagators consist of two distinct processes: first a \emph{transport} via pull-back of bundles and Hilbert C*-modules onto a common space, here realized via the change of rings with tensorization over $\As$ and $\Bs$, and then a \emph{correspondence}, here realized via the selection of suitable submodules in the direct sum. For the special case of spectral module triples $(\As,\Ms_1,D_1)$ and $(\As,\Ms_2,D_2)$ on the same algebra $\As$, we can further specialize the propagator morphisms of Hilbert C*-modules, obtaining the following interesting definition of a category of spectral correspondences.
\begin{proposition} There is a \emph{naive totally geodesic category of spectral correspondences module triples} $\Sf$ whose objects are naive spectral module triples over the same unital C*-algebra and whose morphisms, say from $(\As,\Ms_1,D_1)$ to $(\As,\Ms_2,D_2)$, consist of spectral module triples $(\As,\Phi,D_\Phi)$ where $\Phi\subset \Ms_1\oplus \Ms_2$ is a left $\As$-submodule that is stable under the action of the regular operator $D_1\oplus D_2$ and $D_\Phi:=(D_1\oplus D_2)|_\Phi$. \end{proposition} The category $\Sf$ is essentially a bivariant version of the naive category of spectral triples~\cite{B1,B3,B6} and (at least in the commutative C*-algebra case) can be used to model the ``correspondence'' part in the definition of a propagator. The ``transport'' process, which in the commutative case is just a relatively unproblematic pull-back, in the case of non-commutative C*-algebras must be substituted by the more sophisticated notion of A.Connes' transfer of spectral triples between different algebras via tensorization with appropriate bimodules (a process that has been further developed by B.Mesland). In any case, the category $\Af^1$ itself is just an involutive version of the familiar category of Hilbert \hbox{C*-modules} over commutative unital C*-algebras used in Takahashi duality, where 1-arrows between C*-algebras reduce to unital \hbox{$*$-homo}morphisms. It is a general ideological principle that in non-commutative geometry categories of homomorphisms of algebras are replaced by categories of bimodules: to every unital homomorphism $\phi:\As\to\Bs$ of unital \hbox{C*-alge}\-bras one associates a pair of \emph{correspondences}: the Hilbert C*-bimodules $\Bs_\As$ and $_\As\Bs$ (where the action of $\As$ on the right/left is via the homomorphism $\phi$) with $\Bs$-valued inner products. Composition of unital $*$-homomorphisms becomes the internal tensor product of such bimodules.
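\smallskip As an elementary and well-known illustration of this last point: for unital $*$-homomorphisms $\phi:\As\to\Bs$ and $\psi:\Bs\to\Cs$, the internal tensor product of the associated correspondences recovers the usual composition,
\begin{equation*}
\Bs_\phi\otimes_\Bs\Cs_\psi\simeq\Cs_{\psi\circ\phi}, \qquad b\otimes c\mapsto \psi(b)c,
\end{equation*}
where $\Bs_\phi$ denotes $\Bs$ with inner product $\ip{b_1}{b_2}:=b_1^*b_2$ and left $\As$-action via $\phi$ (the subscript notation is used only in this example).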
As a consequence of this general passage from Abelian categories of bimodules to ``tensorial'' categories of bimodules, instead of pursuing the description of the details of dualities targeting the category $\Af^1$, it is important to look for a similar ``tensorial'' reformulation of the previous category. A bivariant version of the naive spectral triple is also needed and it is natural to start with a notion of Hilbert C*-bimodule. Although we are not yet ready to select a definition of Hilbert C*-bimodules over general non-commutative \hbox{C*-algebras}, we can provide some elementary examples of situations that are sufficient to cover at least some significant cases of Hilbert C*-bimodules over commutative C*-algebras. This will be enough to create an environment suitable for the formulation of dualities with subcategories of the previous categories of bundles that is more in line with generalizations to the non-commutative setting. For this purpose, we define a \emph{unital C*-algebra bimodule, factorizable over commutative \hbox{C*-algebras}}, to be a unital bimodule ${}_\As\Rs_\Bs$ over the unital C*-algebras $\As$ and $\Bs$, such that $\Rs$ is a unital C*-algebra that is a tensor product, over commutative unital \hbox{C*-algebras}, of other unital C*-algebra bimodules, i.e.~a unital C*-algebra of the form $\As\otimes_{C(Y)}\Fs\otimes_{C(X)}\Bs$, where $\As_{C(Y)},{}_{C(Y)}\Fs_{C(X)},{}_{C(X)}\Bs$ are three unital C*-algebra bimodules and $X,Y$ are compact Hausdorff spaces.\footnote{Note that, since the right/left actions of $\As$ and $\Bs$ on $\Rs=\As\otimes_{C(Y)}\Fs\otimes_{C(X)}\Bs$ commute, the C*-algebra $\Rs$ can be naturally considered as a bimodule over the unital \hbox{C*-algebras} $\As$ and $\Bs$, both on the right and on the left.} A \emph{Hilbert C*-bimodule over a C*-algebra bimodule factorizable over commutative C*-algebras} is a unital bimodule ${}_\Rs\Ms_\Rs$ on a unital C*-algebra bimodule factorizable over commutative \hbox{C*-algebras}
${}_{\As}\Rs_\Bs=\As\otimes_{C(Y)}\Fs\otimes_{C(X)}\Bs$, that is of the form ${}_\Rs\Ms_\Rs=\As\otimes_{C(Y)}\widehat{\Ms}\otimes_{C(X)}\Bs$ where $\widehat{\Ms}$ is a bimodule over $\Fs$ that is also equipped with both right $\ip{\cdot}{\cdot}_\Fs$ and left ${}_\Fs\ip{\cdot}{\cdot}$ $\Fs$-valued inner products\footnote{Here both inner products are assumed to be Hermitian positive non-degenerate with the left product being left $\Fs$-linear: ${}_\Fs\ip{fx}{y}=f\cdot{}_\Fs\ip{x}{y}$ and right $\Fs$-adjointable: ${}_\Fs\ip{xf}{y}={}_\Fs\ip{x}{yf^*}$; and the right product being right $\Fs$-linear: $\ip{x}{yf}_\Fs=\ip{x}{y}_\Fs \cdot f$ and left $\Fs$-adjointable: $\ip{fx}{y}_\Fs=\ip{x}{f^*y}_\Fs$, $x,y\in\widehat{\Ms}$, $f\in \Fs$. } that satisfy the compatibility condition \hbox{${}_\Fs\ip{x}{y}x=x\ip{y}{x}_\Fs$}, for all $x,y\in\Ms$.\footnote{The compatibility condition assures that the left and right norms induced by the inner products coincide and for bimodule morphisms that are left and right adjointable the left and right adjoints coincide.} \begin{theorem} There is an involutive category $\Af^2$ of Hilbert C*-bimodules over unital bimodule C*-algebras factorizable over commutative C*-algebras. \end{theorem} \begin{proof} Objects are unital C*-algebras $\As,\Bs,\Cs,\dots$; morphisms from $\Bs$ to $\As$ are given by Hilbert \hbox{C*-bimodules} $\Ms$ over unital C*-algebra bimodules factorizable over commutative C*-algebras such as $\Rs:=\As\otimes_{C(Y)}\Fs\otimes_{C(X)} \Bs$, $\Ss:=\Bs\otimes_{C(Z)} \Gs\otimes_{C(W)}\Cs$. The involution is given by the passage to the contragredient bimodules $\Ms^*$ over $\Bs\otimes_{C(X)}\Fs\otimes_{C(Y)}\As$.
The composition of $\Ms_\Rs=\As\otimes_{C(Y)}\widehat{\Ms}\otimes_{C(X)}\Bs$ with $\Ns_\Ss=\Bs\otimes_{C(Z)}\widehat{\Ns}\otimes_{C(W)}\Cs$ is given by the internal tensor product of bimodules $\Ms\otimes_\Bs \Ns=\As\otimes_{C(Y)}(\widehat{\Ms}\otimes_{C(X)}\Bs\otimes_{C(Z)}\widehat{\Ns})\otimes_{C(W)}\Cs$ as a bimodule over $\Rs\otimes_\Bs\Ss\simeq \As\otimes_{C(Y)} (\Fs\otimes_{C(X)}\Bs\otimes_{C(Z)}\Gs)\otimes_{C(W)}\Cs$ with compatible $(\Fs\otimes_{C(X)}\Bs\otimes_{C(Z)}\Gs)$-valued inner products on $\widehat{\Ms}\otimes_{C(X)}\Bs\otimes_{C(Z)}\widehat{\Ns}$ defined by the universal factorization property via \begin{gather*} {}_\bullet\ip{x_1\otimes_{C(X)} b_1\otimes_{C(Z)} y_1}{x_2\otimes_{C(X)} b_2\otimes_{C(Z)} y_2}:= {}_\Fs\ip{x_1}{x_2}\otimes_{C(X)}(b_1b_2^*)\otimes_{C(Z)}{}_\Gs\ip{y_1}{y_2} \\ \ip{x_1\otimes_{C(X)} b_1\otimes_{C(Z)} y_1}{x_2\otimes_{C(X)} b_2\otimes_{C(Z)} y_2}_\bullet:= \ip{x_1}{x_2}_\Fs\otimes_{C(X)}(b_1^*b_2)\otimes_{C(Z)}\ip{y_1}{y_2}_\Gs \end{gather*} \end{proof} The previous category can be made into a 2-category $\Af^2$ if we define \hbox{2-arrows} as pairs $(\phi,\Phi)$ such that $\Phi:\Ms_\Rs\to\Ns_\Ss$ is an additive map and \hbox{$\phi:\Fs\to\Gs$} is a unital $*$-homomorphism that satisfies $\Phi(r_1xr_2)=\phi(r_1)\Phi(x)\phi(r_2)$, where with some abuse of notation we also denote $1_\As\otimes\phi \otimes 1_\Bs:\Rs\to\Ss$ by $\phi$. Furthermore (at least in the commutative C*-algebras case), one can consider as 2-arrows with source $\Ms_\Rs$ and target $\Ns_\Ss$ new Hilbert C*-bimodules over factorizable \hbox{C*-algebras} bimodules from $\Rs$ to $\Ss$ and in this way the category now constructed actually becomes an $\infty$-category, defining recursively level-$(n+1)$ morphisms as morphisms between the spectral module triples that are morphisms at level $n$.
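\smallskip An elementary (and standard) instance of the compatibility condition for the two inner products used above: every unital C*-algebra $\Fs$ is a Hilbert C*-bimodule over itself with respect to
\begin{equation*}
{}_\Fs\ip{x}{y}:=xy^*, \qquad \ip{x}{y}_\Fs:=x^*y, \qquad x,y\in\Fs,
\end{equation*}
since ${}_\Fs\ip{x}{y}x=xy^*x=x\ip{y}{x}_\Fs$ and both inner products induce the C*-norm of $\Fs$.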
\section{Section Functor} \begin{theorem} There is a section functor $\Gamma:\Ef\to\Af$ that to every propagator $(E,\gamma,R)$ of Hermitian bundles from $(E^1,\pi^1,X^1)$ to $(E^2,\pi^2,X^2)$ associates the Hilbert C*-bimodule $\Gamma(R,E)$ over the \hbox{C*-al}\-gebra bimodule factorizable over commutative C*-algebras $C(R)\simeq C(X^1)\otimes_{C(X^1)}C(R)\otimes_{C(X^2)} C(X^2)$. \end{theorem} \begin{proof} The set $\Gamma(R,E)$ of continuous sections of the Hilbert bundle $(E,\gamma,R)$ is already a Hilbert \hbox{C*-bimodule} over the commutative unital C*-algebra $C(R)\simeq C(X^2)\otimes_{C(X^2)}C(R)\otimes_{C(X^1)}C(X^1)$ that is a C*-algebra bimodule, factorizable over the commutative C*-algebras $C(X^1)$ and $C(X^2)$. \end{proof} More generally, one can consider propagators where $(E,\gamma,R)$ is a bundle of Hilbert C*-bimodules over a bundle $(A,\gamma',R)$ of commutative C*-algebras (this means that there is a fiber-preserving action of the total space $A$ on the total space $E$ making each fiber $E_r$ into a C*-bimodule over the C*-algebra $A_r$, for all $r\in R$) and in this way one recovers, via the section functor, a \hbox{C*-bimodule} over the commutative C*-algebra bimodule factorizable over commutative \hbox{C*-algebras} given by \hbox{$C(X^1)\otimes_{C(X^1)} \Gamma(R,A)\otimes_{C(X^2)} C(X^2)$}. \medskip Let us examine in some more detail how (totally geodesic) maps between compact Riemannian manifolds are described using spectral module triples (this will provide insight into the role of tensorization by B.Mesland bimodules). As already described at the end of the previous section (in the specific case of totally geodesic Riemannian embeddings), every totally geodesic Riemannian map $f:M\to N$ induces a propagator between the complexified Grassmann bundles that is stable under Clifford action and the induced direct sum of the Levi-Civita connections.
Entirely analogous results can be formulated for general totally geodesic propagators between Hermitian bundles of Clifford modules with a compatible connection. Modulo pull-back of bundles and change of rings of modules (which in this commutative situation is unproblematic), an application of the section functor $\Gamma$ will immediately produce a propagator of Hilbert C*-modules over the same C*-algebra $C(f)\simeq C(M)$ and in the totally geodesic case a naive morphism of spectral module triples in $\Sf$. Alternatively, one notes that a propagator between bundles or modules (say, over the same space) induces at the second quantized level an inclusion into a tensor product factorization. To illustrate, in a very special situation: the tangent bundle decomposition $T(N)|_{f(M)}=Df(T(M))\oplus Df(T(M))^\perp$ corresponds to a factorization $\Lambda^\CC(T(N)|_{f(M)})\simeq \Lambda^\CC(Df(T(M)))\otimes\Lambda^\CC(Df(T(M))^\perp)$ of Grassmann bundles and so to a tensorial factorization of the bimodules of sections. In this way we see a possible role for $\Gamma(\Lambda^\CC(Df(T(M))^\perp))$ as a Mesland bimodule for the Hodge-De Rham spectral triples of $M$ and $N$. We plan to elaborate much further on these points in forthcoming work. \section{Outlook} The work here presented is at a very preliminary stage and most of the elementary categorical structures here considered are essentially a playground (still mainly at the topological level) to test the validity of some conjectures. Specifically we would like to see a clear picture of how geometrical morphisms of Riemannian manifolds can be encoded via the section functor in terms of B.Mesland's bimodules between commutative Hodge-De Rham spectral triples. In order to provide a duality, a spectrum functor from categories of commutative Riemannian spectral triples to Riemannian manifolds must be constructed.
At the level of objects this is already done, via the already mentioned reconstruction theorems by A.Connes and A.Rennie, S.Lord, J.Varilly, and our next goal is to prove a similar reconstruction theorem for suitable (totally geodesic) morphisms between these Hodge-De Rham spectral triples. Our hope is that, if morphisms can be described as a bivariant version of spectral triples, a direct application of (part of) the reconstruction theorems for objects might be possible also in the case of morphisms. Another important direction of investigation is related to our belief that ``involutive tensorial'' categories are the right environment for the study of non-commutative geometry and that involutive categories of bimodules should help to formulate a version of B.Mesland's category of ``bivariant'' spectral triples with involutions. The categories of Hilbert \hbox{C*-bimodules} over \hbox{C*-algebra} bimodules factorizable over commutative C*-algebras that we defined here are not yet sufficient to cover even some of the most elementary morphisms of non-commutative spaces (the bimodule $\Bs_\As$ induced by a unital $*$-homomorphism $\phi:\As\to\Bs$, for example).\footnote{ A more satisfactory treatment of morphisms of non-commutative spaces (even at the topological level) is well beyond the scope of this elementary paper and will likely require the use of higher C*-categories. } \newpage \emph{Notes and Acknowledgments:} The first author thanks his long time collaborator R.Conti at the ``Sapienza'' University in Rome for the discussion of many topics related to this research. He also thanks Starbucks Coffee on the $1^{\text{st}}$ floor of Emporium Tower in Sukhumvit, where he spent most of the time dedicated to this research project. \smallskip \textit{We stress that the two authors do not share any of their ideological, religious, political affiliations}.
\begin{document} \renewcommand{\baselinestretch}{1.07} \title[Affine surfaces with trivial Makar-Limanov invariant] {Affine surfaces \\ with trivial Makar-Limanov invariant} \author{Daniel Daigle} \address{Department of Mathematics and Statistics\\ University of Ottawa\\ Ottawa, Canada\ \ K1N 6N5} \email{ddaigle@uottawa.ca} \thanks{Research supported by a grant from NSERC Canada.} \keywords{Locally nilpotent derivations, group actions, Danielewski surfaces, affine surfaces, Makar-Limanov invariant, absolute constants} {\renewcommand{\thefootnote}{} \footnotetext{2000 \textit{Mathematics Subject Classification.} Primary: 14R10. Secondary: 14R05, 14R20.}} \begin{abstract} We study the class of $2$-dimensional affine $\bk$-domains $R$ satisfying $\ML(R)=\bk$, where $\bk$ is an arbitrary field of characteristic zero. In particular, we obtain the following result: {\it Let $R$ be a localization of a polynomial ring in finitely many variables over a field of characteristic zero. If $\ML(R) = K$ for some field $K \subset R$ such that $\trdeg_KR=2$, then $R$ is $K$-isomorphic to $K[X,Y,Z]/(XY-P(Z))$ for some nonconstant $P(Z) \in K[Z]$.} \end{abstract} \maketitle \vfuzz=2pt \section{Introduction} Let us recall the definition of the Makar-Limanov invariant: \begin{definition}\label{fefefefeer} If $R$ is a ring of characteristic zero, a derivation $D: R\to R$ is said to be \textit{locally nilpotent} if for each $r\in R$ there exists $n \in \Nat$ (depending on $r$) such that $D^n(r)=0$. We use the following notations: \begin{align*} \lnd(R) &= \textrm{set of locally nilpotent derivations $D:R\to R$} \\ \klnd(R) &= \setspec{ \ker D }{ D \in \lnd(R) \text{ and } D \neq 0} \\ \ML(R) &= \bigcap_{ D \in \text{\sc lnd}(R) } \ker(D). \end{align*} \end{definition} We are interested in the class of $2$-dimensional affine $\bk$-domains $R$ satisfying $\ML(R)=\bk$, where $\bk$ is a field of characteristic zero. 
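The simplest (well-known) example of this situation is the polynomial ring itself: for $R=\bk[X,Y]$, the partial derivatives $\partial/\partial X$ and $\partial/\partial Y$ are locally nilpotent with kernels $\bk[Y]$ and $\bk[X]$ respectively, so
\begin{equation*}
\ML( \bk[X,Y] ) \subseteq \bk[X] \cap \bk[Y] = \bk ,
\end{equation*}
and the reverse inclusion holds because the kernel of every nonzero locally nilpotent derivation of $\bk[X,Y]$ is factorially closed and hence contains $\bk$; thus $\ML( \bk[X,Y] ) = \bk$.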
The corresponding class of affine algebraic surfaces was studied by several authors (\cite{BandML:AffSurfAK}, \cite{Bertin:Pinceaux}, \cite{DaiRuss:HomolPlanes1}, \cite{Dub:CompletionsNormAffSurf}, \cite{Dub:DanFies}, \cite{Gur-Miy:ML}, \cite{Masuda-Miy:QHomPlanes}, in particular), but almost always under the assumption that $\bk$ is algebraically closed, or even $\bk=\Comp$. In this paper we obtain some partial results valid when $\bk$ is an arbitrary field of characteristic zero. We are particularly interested in the following subclass: \begin{definition} Given a field $\bk$ of characteristic zero, let $\Dan(\bk)$ be the class of $\bk$-algebras isomorphic to $ \bk[X,Y,Z] / (XY - \phi(Z)) $ for some nonconstant polynomial in one variable $\phi(Z) \in \bk[Z] \setminus \bk$, where $X,Y,Z$ are indeterminates over $\bk$. \end{definition} The class $\Dan( \bk )$ was studied in \cite{Dai:slice1}, \cite{Dai:LSC} and \cite{LML:GpsAutoms}, in particular. It is well-known that if $R \in \Dan( \bk )$ then $R$ is a $2$-dimensional normal affine domain satisfying $\ML(R)=\bk$. It is also known that the converse is not true, which raises the following: \begin{questionn} \it Suppose that $R$ is a $2$-dimensional affine $\bk$-domain with $\ML(R)=\bk$. Under what additional assumptions can we infer that $R \in \Dan(\bk)$? \end{questionn} Section~3 completely answers this question in the case where $R$ is a smooth $\bk$-algebra. This is achieved by reducing to the case $\bk=\Comp$, which was solved by Bandman and Makar-Limanov. This reduction is non-trivial, and makes essential use of the main result of Section~2. Also note Corollary~\ref{NewCorUFD}, which gives a pleasant answer to the above question in the factorial case. Then we derive several consequences from Section~3, for instance consider the following special case of Theorem~\ref{dwddwdwdwddwd}: \begin{quote} \it Let $R$ be a localization of a polynomial ring in finitely many variables over a field of characteristic zero. 
If $\ML(R) = K$ for some field $K \subset R$ such that $\trdeg_KR=2$, then $R \in \Dan( K )$. \end{quote} In turn, this has consequences in the study of $G_a$-actions on $\Comp^n$. \begin{conventions} All rings and algebras are commutative, associative and unital. If $A$ is a ring, we write $A^*$ for the units of $A$; if $A$ is a domain, $\Frac A$ is its field of fractions. If $A \subseteq B$ are rings, ``\,$B = A^{[n]}$\,'' means that $B$ is $A$-isomorphic to the polynomial algebra in $n$ variables over $A$. If $L/K$ is a field extension, ``\,$L = K^{(n)}$\,'' means that $L$ is a purely transcendental extension of $K$ and $\trdeg_KL=n$ (transcendence degree). \end{conventions} \smallskip In \cite{Dai:LSC}, one defines a Danielewski surface to be a pair $(R,\bk)$ such that $R \in \Dan(\bk)$. In the present paper we avoid using the term ``Danielewski surface'' in that sense, because it is incompatible with accepted usage. The reader should keep this in mind when consulting \cite{Dai:LSC} (our main reference for Section~2). \section{Base extension} Let $\bk$ be a field of characteristic zero. It is clear that if $R \in \Dan( \bk )$ then $K \otimes_\bk R \in \Dan( K )$ for every field extension $K/\bk$. However, if $K \otimes_\bk R \in \Dan( K )$ for some $K$, it does not follow that $R \in \Dan( \bk )$ (see Example~\ref{hghgoioioiooiop}, below). \begin{remark}\label{rururuurytytyru} {\it If $R \in \Dan( \bk )$ then $\Spec R$ has infinitely many $\bk$-rational points.} (Indeed, if $R = \bk[X,Y,Z] / (XY - \phi(Z))$ then there is a bijection between the set of $\bk$-rational points of $\Spec R$ and the zero-set in $\bk^3$ of the polynomial $XY-\phi(Z)$.) \end{remark} \begin{example}\label{hghgoioioiooiop} Let $A = \Reals[X,Y,Z] / (f)$, where $f = X^2 + Y^2 + Z^2$. Viewing $f$ as an element of $\Comp[X,Y,Z]$ we have $f = (X+iY)(X-iY) + Z^2$ (where $i^2=-1$), so $\Comp \otimes_\Reals A \isom \Comp[U,V,W] / (UV+W^2) \in \Dan( \Comp )$. 
As $\Spec A$ has only one $\Reals$-rational point, $A \notin \Dan( \Reals )$ by Remark~\ref{rururuurytytyru}. Thus $$ \text{$A \notin \Dan( \Reals )$ and $\Comp \otimes_\Reals A \in \Dan( \Comp )$.} $$ Note \footnote{A different proof that $\ML(A)=A$ is given in \cite[9.21]{Freud:Book}.} that Theorem~\ref{DanML} (below) implies that $\ML(A) = A$. Moreover, if we define $A' = \Reals[U,V,W] / (UV+W^2) \in \Dan( \Reals )$ then $A \not\isom A'$ but $\Comp \otimes_\Reals A \isom \Comp \otimes_\Reals A'$. \end{example} \begin{theorem}\label{DanML} For an algebra $R$ over a field $\bk$ of characteristic zero, the following conditions are equivalent: \begin{enumerate} \item[(a)] $R \in \Dan( \bk )$ \item[(b)] $\ML(R) \neq R$ and there exists a field extension $K/\bk$ such that $K \otimes_\bk R \in \Dan( K )$. \end{enumerate} \end{theorem} We shall prove this after some preparation. \begin{somefacts}\label{fofpofpofpofpof} Refer to \cite{VDE:book} or \cite{Freud:Book} for background on locally nilpotent derivations. Statement \eqref{nnnntttjptjnptjtpjn} is due to Rentschler \cite{Rent} and \eqref{cvvcccpcpccpcpvcpvv} to Nouaz\'e and Gabriel~\cite{GabNou} and Wright~\cite{Wright:JacConj}. \begin{enumerate} \renewcommand{\theenumi}{\alph{enumi}} \item\label{vcvcvctgfhyegf} If $A \in \klnd(B)$ where $B$ is a domain of characteristic zero then $A$ is \textit{factorially closed\/} in $B$ (i.e., if $x,y \in B\setminus\{0\}$ and $xy \in A$ then $x, y \in A$). It follows that $\ML(B)$ is factorially closed in $B$. Any factorially closed subring $A$ of $B$ is in particular \textit{algebraically closed\/} in $B$ (i.e., if $x \in B$ is a root of a nonzero polynomial with coefficients in $A$ then $x\in A$) and satisfies $A^*=B^*$ (in particular, any field contained in $B$ is contained in $A$). \item\label{shhohsohsshhshoohs} Let $B$ be a noetherian domain of characteristic zero. 
If $0 \neq D \in \lnd(B)$ then $D = \alpha D_0$ for some $\alpha \in \ker(D)$ and $D_0 \in \lnd(B)$ where $D_0$ is \textit{irreducible\/} (i.e., the only principal ideal of $B$ which contains $D_0(B)$ is $B$). \item\label{nnnntttjptjnptjtpjn} Let $B = \kk2$ where $\bk$ is a field of characteristic zero. If $D \in \lnd(B)$ is irreducible then there exist $X,Y$ such that $B=\bk[X,Y]$ and $D = \partial / \partial Y$. \item\label{cvvcccpcpccpcpvcpvv} Let $B$ be a $\Rat$-algebra. If $D \in \lnd(B)$ and $s\in B$ satisfy $Ds\in B^*$ then $B =A[s]=A^{[1]}$ where $A = \ker D$. \end{enumerate} \end{somefacts} \begin{lemma}\label{bvbvbvbryyryyrt} Let $\bk$ be a field of characteristic zero and $R$ a $\bk$-algebra satisfying: $$ \textit{there exists a field extension $\ck/\bk$ such that $\ck \otimes_\bk R \in \Dan( \ck )$.} $$ Then $R$ is a two-dimensional normal affine domain over $\bk$ and $R^* = \bk^*$. \end{lemma} \begin{proof} This is rather simple but it will be convenient to refer to this proof later. Choose a field extension $\ck/\bk$ such that $\ck \otimes_\bk R \in \Dan( \ck )$ and let $\bar R = \ck \otimes_\bk R$. As $R$ is a flat $\bk$-module, the canonical homomorphism $\bk \otimes_\bk R \to \ck \otimes_\bk R$ is injective, so we may regard $R$ as a subring of $\bar R$. In particular, $R$ is an integral domain and we have the diagram: \begin{equation*} \raisebox{6mm}{\xymatrix{\ck\ \ar @{^(->}[r] & \bar R\ \ar @{^(->}[r] & S^{-1}\bar R \ar @{^(->}[r] & \Frac \bar R \\ \bk\ \ar @{^(->}[r] \ar @{^(->}[u] & R\ \ar @{^(->}[r] \ar @{^(->}[u] & \Frac R \ar @{^(->}[u] }} \end{equation*} where $S = R \setminus \{0\}$. Let $\Beul$ be a basis of $\ck$ over $\bk$ such that $1 \in \Beul$. Note that $\Beul$ is also a basis of the free $R$-module $\bar R$ and of the vector space $S^{-1} \bar R$ over $\Frac R$. It follows: \begin{equation}\label{kdfjwehfui} \ck \cap R = \bk \quad \textit{and} \quad \bar R \cap \Frac R = R . 
\end{equation} As $\bar R \in \Dan( \ck )$, \cite[2.3]{Dai:LSC} implies that $\bar R^* = \ck^*$ and that $\bar R$ is a normal domain; so \eqref{kdfjwehfui} implies that $R^* = \bk^*$ and that $R$ is a normal domain. Also: \begin{equation}\label{xzsqeasfqwcvsoioi} \textit{If $E$ is a subset of $R$ such that $\ck[ E ] = \bar R$, then $\bk[E] = R$.} \end{equation} Indeed, $\Beul$ is a basis of the $R$-module $\bar R$ and a spanning set of the $\bk[ E ]$-module $\bar R$; as $\bk[E] \subseteq R$, it follows that $\bk[E] = R$. Note that $R$ is affine over $\bk$, by \eqref{xzsqeasfqwcvsoioi} and the fact that $\bar R$ is affine over $\ck$. Let $n = \dim R$; then, by the Noether Normalization Lemma, there exists a subalgebra $R_0 = \kk n$ of $R$ over which $R$ is integral. Then $\bar R = \ck \otimes_\bk R$ is integral over $\ck \otimes_\bk R_0 = \ck^{[n]}$, so $n = \dim \bar R = 2$. \end{proof} We borrow the following notation from \cite[2.1]{Dai:LSC}. \begin{definition} \label{danielewski} Given a $\bk$-algebra $R$, let $\Gamma_\bk(R)$ denote the (possibly empty) set of ordered triples $(x_1,x_2,y)\in R\times R\times R$ satisfying: \begin{quote}\it The $\bk$-homomorphism $\bk[X_1,X_2,Y] \to R$ defined by $$ \text{$X_1\mapsto x_1$, $X_2\mapsto x_2$ and $Y\mapsto y$} $$ is surjective and has kernel equal to $(X_1X_2 - \phi(Y))\bk[X_1,X_2,Y]$ for some nonconstant polynomial in one variable $\phi(Y)\in\bk[Y]$. \end{quote} Note that $R \in \Dan( \bk )$ if and only if $\Gamma_\bk(R)\neq\emptyset$. \end{definition} \begin{proof}[Proof of Theorem~\ref{DanML}] That $R \in \Dan( \bk )$ implies $\ML(R) = \bk$ is well-known (for instance it follows from part~(d) of \cite[2.3]{Dai:LSC}), so it suffices to prove that (b) implies (a). Suppose that $R$ satisfies (b). Note that if $K/\bk$ is a field extension satisfying $K \otimes_\bk R \in \Dan( K )$ then for any field extension $L/K$ we have $L \otimes_\bk R \in \Dan( L )$.
In particular, there exists a field extension $\ck / \bk$ such that $\ck \otimes_\bk R \in \Dan( \ck )$ and such that $\ck$ is an algebraically closed field. We fix such a field $\ck$. The fact that $\ck$ is algebraically closed implies that \begin{equation}\label{FixedField} \text{the fixed field $\ck^G$ is equal to $\bk$} \end{equation} where $G = \Gal( \ck / \bk )$. We use the notation ($\bar R$, $\Beul$, etc) introduced in the proof of Lemma~\ref{bvbvbvbryyryyrt}. As $\ML(R) \neq R$, there exists $0 \neq D \in \lnd(R)$. Let $\bar D \in \lnd( \bar R)$ be the unique extension of $D$, let $A = \ker D$ and $\bar A = \ker \bar D$. It follows from \cite{Dai:LSC} that $\bar A = \ck^{[1]}$ (\cite[2.3]{Dai:LSC} shows that some element of $\klnd( \bar R )$ is a $\ck^{[1]}$ and, by \cite[2.7.2]{Dai:LSC}, $\Aut_\ck( \bar R )$ acts transitively on $\klnd( \bar R )$). Applying the exact functor $\ck \otimes_\bk\,\underline{\ \ }$ to the exact sequence $0 \to A \to R \xrightarrow{D} R$ of $\bk$-linear maps shows that $\ck \otimes_\bk A = \bar A = \ck^{[1]}$, so $A = \kk1$. Choose $f \in R$ such that $A = \bk[f]$, then $\bar A = \ck[f]$. Consider the nonzero ideals $I = A \cap D(R)$ and $\bar I = \bar A \cap \bar D(\bar R)$ of $A$ and $\bar A$ respectively. Let $\psi \in A$ and $s\in R$ be such that $I = \psi A$ and $D(s) = \psi$. We claim that \begin{equation}\label{vcvcvtrtrtrtey} \bar I = \psi \bar A . \end{equation} Indeed, an arbitrary element of $\bar I$ is of the form $\bar D( \sigma )$ where $\sigma \in \bar R$ and $\bar D^2( \sigma )=0$. Write $\sigma = \sum_{\lambda \in \Beul} s_\lambda \, \lambda$ with $s_\lambda \in R$, then $0 = \bar D^2( \sigma ) = \sum_{\lambda \in \Beul} D^2( s_\lambda ) \, \lambda$, so for all $\lambda \in \Beul$ we have $D^2( s_\lambda ) = 0$, hence $D( s_\lambda ) \in I = \psi A$, and consequently $\bar D( \sigma ) \in \psi \bar A$, which proves~\eqref{vcvcvtrtrtrtey}. 
By \ref{fofpofpofpofpof}(\ref{shhohsohsshhshoohs}), $\bar D = \alpha \Delta$ for some $\alpha \in \bar A \setminus \{0\}$ and some irreducible $\Delta \in \lnd(\bar R)$. Consider the nonzero ideal $I_0 = \bar A \cap \Delta( \bar R )$ of $\bar A$. We claim that \begin{equation}\label{vwvwvwvwvvwfe} I_0 = \Delta(s) \bar A . \end{equation} To see this, consider an arbitrary element $\Delta( \sigma )$ of $I_0$ (where $\sigma \in \bar R$, $\Delta^2( \sigma ) = 0$). Then $ \alpha \Delta( \sigma ) = \bar D( \sigma ) \in \bar I = \psi \bar A = \bar D(s) \bar A = \alpha \Delta(s) \bar A $, so $\Delta( \sigma ) \in \Delta(s) \bar A$ and \eqref{vwvwvwvwvvwfe} is proved. Consider the case where $\Delta(s) \in \bar R^*$. Then $\bar R = \bar A[s] = \ck[ f, s ]$ by \ref{fofpofpofpofpof}(\ref{cvvcccpcpccpcpvcpvv}), so \eqref{xzsqeasfqwcvsoioi} implies that $R = \bk[f,s] = \kk2$, so in particular $R \in \Dan(\bk)$ and we are done. From now on, assume that $\Delta(s) \not\in\bar R^*$. By \cite[2.8]{Dai:LSC}, $\bar A = \ck[ \Delta(y) ]$ for some $y \in \bar R$. Note that $\Delta(y) \in I_0$, so \eqref{vwvwvwvwvvwfe} gives $\Delta(s) \mid \Delta(y)$ in $\bar A$. As $\Delta(y)$ is an irreducible element of $\bar A$ (because $\ck[ \Delta(y) ] = \bar A = \ck^{[1]}$) and $\Delta(s) \not\in\bar A^*$, we have $\ck[ \Delta(s) ] = \bar A = \ck[f]$ and consequently $\Delta(s) = \mu(f- \lambda)$ for some $\mu\in\ck^*$, $\lambda \in \ck$. We may as well replace $\Delta$ by $\mu^{-1}\Delta$, so \begin{equation}\label{hggfdewqqqq} \Delta(s) = f - \lambda, \quad \textit{for some $\lambda \in \ck$}. \end{equation} We claim: \begin{equation}\label{vcvcvtrttryey} \setspec{ c \in \ck } { \text{$\bar R / (f-c)\bar R$ is not an integral domain} } = \{ \lambda \} . \end{equation} Indeed, \cite[2.8]{Dai:LSC} implies that there exists $x_2 \in \bar R$ such that $( f - \lambda, x_2, s) \in \Gamma_{\ck}( \bar R )$.
This means (cf.\ \ref{danielewski}) that the $\ck$-homomorphism $\pi : \ck[X_1, X_2, Y] \to \bar R$ defined by $X_1 \mapsto f-\lambda$, $X_2 \mapsto x_2$, $Y \mapsto s$, is surjective and has kernel $(X_1X_2 - P(Y))$ for some nonconstant $P(Y) \in \ck[Y]$ (where $X_1, X_2, Y$ are indeterminates). By \eqref{vwvwvwvwvvwfe} and $\Delta(s) \not\in\bar R^*$, we see that there does not exist $\sigma \in \bar R$ such that $\Delta(\sigma)=1$; as $\Delta$ is irreducible, it follows from \ref{fofpofpofpofpof}(c) that $\bar R \neq \ck^{[2]}$ and hence that $\deg_YP(Y) > 1$. Thus, for $c \in \ck$, $$ \bar R / (f - c) \bar R \ \isom \ \ck[X_1, X_2, Y] / ( X_1 - (c-\lambda),\ X_1X_2 - P(Y) ) $$ is a domain if and only if $c \neq \lambda$. This proves~\eqref{vcvcvtrttryey}. Let $\theta \in \Gal( \ck / \bk )$. Then $\theta$ extends to some $\Theta \in \Aut_R( \bar R )$ and $\Theta$ determines a ring isomorphism $$ \bar R / (f - \lambda) \bar R \isom \bar R / \Theta(f - \lambda) \bar R = \bar R / (f - \theta(\lambda)) \bar R. $$ So $\bar R / (f - \theta(\lambda)) \bar R$ is not a domain and it follows from~\eqref{vcvcvtrttryey} that $\theta(\lambda)=\lambda$. As this holds for every $\theta \in \Gal( \ck / \bk )$, \eqref{FixedField} implies that $\lambda \in \bk$. To summarize, if we define $x_1 = f-\lambda$ then $$ \textit{ $x_1, s \in R$ and there exists $x_2 \in \bar R$ such that $(x_1, x_2, s) \in \Gamma_\ck( \bar R )$.} $$ We now show that $x_2$ can be chosen in $R$. Consider the ideals $J = \bk[s] \cap x_1 R$ of $\bk[s]$ and $\bar J = \ck[s] \cap x_1 \bar R$ of $\ck[s]$, and choose $\phi(Y) \in \bk[Y]$ such that $J = \phi(s) \bk[s]$. Let $\Phi(s)$ be any element of $\bar J$ (where $\Phi(Y) \in \ck[Y]$). Then $\Phi(s) = x_1 G$ for some $G \in \bar R$. 
As $\Beul$ is a basis of the $R$-module $\bar R$ and also of the $\bk[Y]$-module $\ck[Y]$, we may write $G = \sum_{ \lambda \in \Beul } G_\lambda \lambda$ (where $G_\lambda \in R$) and $\Phi = \sum_{ \lambda \in \Beul } \Phi_\lambda \lambda$ (where $\Phi_\lambda \in \bk[Y]$). Then $\sum_{ \lambda \in \Beul } (x_1 G_\lambda) \lambda = \Phi(s) = \sum_{ \lambda \in \Beul } \Phi_\lambda(s) \lambda$, so for every $\lambda \in \Beul$ we have $\Phi_\lambda(s) = x_1 G_\lambda$, i.e., $\Phi_\lambda(s) \in J = \phi(s) \bk[s]$. We obtain that $\Phi(s) \in \phi(s) \ck[s]$, so: $$ \bar J = \phi(s) \ck[s] . $$ On the other hand, \cite[2.4]{Dai:LSC} asserts that $\bar J = x_1 x_2 \ck[s]$, so $x_1 x_2 = \mu \phi(s)$ for some $\mu \in \ck^*$. It is clear that if $(x_1, x_2, s)$ belongs to $\Gamma_\ck( \bar R )$ then so does $(x_1, \mu^{-1} x_2, s)$; so there exists $x_2 \in \bar R$ such that $(x_1, x_2, s) \in \Gamma_\ck( \bar R )$ and $x_1 x_2 = \phi(s)$. As $x_2 = \phi(s)/x_1 \in \Frac R$, \eqref{kdfjwehfui} implies that $x_2 \in R$. Thus $$ \textit{$(x_1, x_2, s) \in \Gamma_\ck( \bar R )$, where $x_1, x_2, s \in R$.} $$ In particular we have $\bar R = \ck[ x_1, x_2, s ]$, so \eqref{xzsqeasfqwcvsoioi} gives $R = \bk[ x_1, x_2, s ]$. As $x_1 x_2 = \phi(s)$ where $\phi(Y) \in \bk[Y]$ is nonconstant, it follows that $(x_1, x_2, s) \in \Gamma_\bk( R )$ and hence that $R \in \Dan( \bk )$. \end{proof} \section{On a result of Bandman and Makar-Limanov} \label{Sec:BandML} In this paper we adopt the following: \begin{definition} Let $R$ be an affine algebra over a field $\bk$ and let $q = \dim R$. We say that $R$ is a \textit{complete intersection over $\bk$} if $R \isom \bk[X_1, \dots, X_{p+q} ] / ( f_1, \dots, f_p )$ for some $p\ge0$ and some $ f_1, \dots, f_p \in \bk[X_1, \dots, X_{p+q} ]$. 
\end{definition} We refer to \cite[28.D]{Matsumura} for the definition of a \textit{smooth $\bk$-algebra} and to \cite[26.C]{Matsumura} for the definition of the $R$-module $\Omega_{R/\bk}$ (the module of differentials of $R$ over $\bk$), where $R$ is a $\bk$-algebra. \begin{theorem}\label{cxcxcxxexexeexexr} Let $\bk$ be a field of characteristic zero and $R$ a smooth affine $\bk$-domain of dimension $2$ such that $\ML(R) = \bk$. Then the following are equivalent: \begin{enumerate} \item[(a)] $R \in \Dan( \bk )$ \item[(b)] $R$ is generated by $3$ elements as a $\bk$-algebra \item[(c)] $R$ is a complete intersection over $\bk$ \item[(d)] $\bigwedge^2 \Omega_{R/\bk} \isom R$. \end{enumerate} \end{theorem} We shall prove this by reducing to the case $\bk=\Comp$, which was proved by Bandman and Makar-Limanov in \cite{BandML:AffSurfAK}. That reduction makes essential use of Theorem~\ref{DanML}. \begin{remark} Let $\bk$ be a field of characteristic zero. According to the definition of ``Danielewski surface over $\bk$'' given in \cite{Dub:EmbeddOfDans}, one has the following situation: $$ \setlength{\unitlength}{1mm} \begin{picture}(65,28)(-5,-5) \put(25,10){\oval(60,20)} \put(20,10){\circle{20}} \put(30,5){\oval(15,21)} \put(13,13){\makebox(0,0)[br]{\scriptsize $\danml(\bk)$}} \put(38,11){\makebox(0,0)[bl]{\scriptsize $\Dan(\bk)$}} \put(51,19){\makebox(0,0)[bl]{\scriptsize $\sml(\bk)$}} \end{picture} $$ where $\danml(\bk)$ is the class of Danielewski surfaces $S$ over $\bk$ satisfying $\ML(S)=\bk$, $\sml(\bk)$ is the larger class of smooth affine surfaces $S$ over $\bk$ satisfying $\ML(S)=\bk$, and $\Dan(\bk)$ is the class of surfaces corresponding to the already defined class $\Dan(\bk)$ of $\bk$-algebras. Among other things, paper~\cite{Dub:EmbeddOfDans} classifies the elements of $\danml(\bk)$ and characterizes those which belong to $\Dan(\bk)$. In contrast, Theorem~\ref{cxcxcxxexexeexexr} characterizes the elements of $\sml(\bk)$ which belong to $\Dan(\bk)$. 
\end{remark} \begin{remark}\label{RemCanSheaf} Let $R$ be a $q$-dimensional smooth affine domain over a field $\bk$ of characteristic zero. Then $X = \Spec R$ is in particular an irreducible regular scheme of finite type over the perfect field $\bk$; so, by \cite[ex.~8.1(c), p.\ 187]{Hartshorne}, the sheaf of differentials $\Omega_{X/\bk}$ is locally free of rank $q$; so the canonical sheaf $\omega_X = \bigwedge^q \Omega_{X/\bk}$ is locally free of rank $1$, i.e., is an invertible sheaf on $X$. As $\omega_X$ and the structure sheaf $\Oeul_X$ are respectively the sheaves associated to the $R$-modules $\bigwedge^q \Omega_{R/\bk}$ and $R$, the condition $\bigwedge^q \Omega_{R/\bk} \isom R$ is equivalent to $\omega_X \isom \Oeul_X$ (one says that $X$ has trivial canonical sheaf). This is also equivalent to the canonical divisor of $X$ being linearly equivalent to zero (because $\Pic(X) \isom \Cl(X)$ by \cite[6.16 p.\ 145]{Hartshorne}). \end{remark} \begin{remark}\label{fjefkwhehjwgjsklfjajhh} Let $A'$ and $B$ be algebras over a ring $A$ and let $B' = A' \otimes_A B$. Then $\Omega_{B'/A'} \isom B'\otimes_B \Omega_{B/A}$ (cf.\ \cite[p.\ 186]{Matsumura}) and, for any $B$-module $M$, $\bigwedge^n ( B'\otimes_B M ) \isom B'\otimes_B \bigwedge^n M$ for every $n$ (\cite{BourbakiAlgI_III}, Chap.~3, \S\,7, No~5, Prop.~8). Consequently, $\bigwedge^n \Omega_{B'/A'} \isom B'\otimes_B \bigwedge^n \Omega_{B/A}$. \end{remark} \begin{lemma}\label{fjejkhfjheudgygygwy} {\it Let $R$ be an algebra over a field $\bk$. If $R$ is a complete intersection over $\bk$ and a smooth $\bk$-algebra, then $\bigwedge^q \Omega_{R/\bk} \isom R$ where $q = \dim R$.} \end{lemma} This is the well-known fact that a smooth complete intersection has trivial canonical sheaf, but we don't know a suitable reference so we sketch a proof. 
\begin{proof}[Proof of \ref{fjejkhfjheudgygygwy}] Let $R = \bk[X_1, \dots, X_{p+q} ] / (f_1, \dots, f_p)$ and let $\phi_{ij} \in R$ be the image of $\frac{ \partial f_j }{ \partial X_i }$. Because $R$ is smooth over $\bk$, \cite[29.E]{Matsumura} implies that the matrix $(\phi_{ij})$ satisfies: \begin{equation}\label{fgfigigfgfgfig} \text{the $p \times p$ determinants of $(\phi_{ij})$ generate the unit ideal of $R$.} \end{equation} By \cite[8.4A, p.~173]{Hartshorne}, there is an exact sequence \mbox{$ R^p \xrightarrow{\ \phi\ } R^{p+q} \to \Omega_{R/\bk} \to 0 $} of $R$-linear maps where $\phi$ is the map corresponding to the matrix $(\phi_{ij})$. Now if $R$ is a ring and $R^p \xrightarrow{\ \phi\ } R^{p+q} \to M \to 0$ is an exact sequence of $R$-linear maps such that $\phi$ satisfies \eqref{fgfigigfgfgfig}, then $\bigwedge^q M \isom R$. \end{proof} \begin{lemma}\label{jfewjfkjhlshf;ij;} Let $R$ be an integral domain containing a field $\bk$ of characteristic zero. If $R$ is normal and $\ML(R) = \bk$, then for any field extension $K$ of $\bk$ we have: \begin{enumerate} \item[(a)] $K \otimes_\bk R$ is an integral domain \item[(b)] $\ML( K \otimes_\bk R ) = K$. \end{enumerate} \end{lemma} \begin{proof} As $\bk=\ML(R)$ is algebraically closed in $R$ (\ref{fofpofpofpofpof}(\ref{vcvcvctgfhyegf})) and $R$ is normal, it follows that $\bk$ is algebraically closed in $L = \Frac R$. By \cite[Cor.~2, p.~198]{ZarSamI}, $K \otimes_\bk L$ is an integral domain. As $K$ is flat over $\bk$ and $R \to L$ is injective, $K \otimes_\bk R \to K \otimes_\bk L$ is injective and (a) is proved. Let $\xi \in \ML( K \otimes_\bk R )$. Consider a basis $\Beul$ of $K$ over $\bk$; note that $\Beul$ is also a basis of the free $R$-module $R' = K \otimes_\bk R$ and write $\xi = \sum_{\lambda \in \Beul} x_\lambda \lambda$ (where $x_\lambda \in R$). 
If $D \in \lnd(R)$ then $D$ extends to an element $D' \in \lnd( R' )$ and the equation $ 0 = D' ( \xi ) = \sum_{\lambda \in \Beul} D(x_\lambda) \lambda $ shows that $D( x_\lambda ) = 0$ for all $\lambda \in \Beul$. As this holds for every $D \in \lnd(R)$, we have $x_\lambda \in \ML( R ) = \bk$ for all $\lambda$, so $\xi \in K$. \end{proof} \begin{proof}[Proof of Theorem~\ref{cxcxcxxexexeexexr}] Implications $\text{(a)} \Rightarrow \text{(b)} \Rightarrow \text{(c)}$ are trivial and $\text{(c)} \Rightarrow \text{(d)}$ is Lemma~\ref{fjejkhfjheudgygygwy}, so only $\text{(d)} \Rightarrow \text{(a)}$ requires a proof. Assume for a moment that $\bk=\Comp$ and suppose that $R$ satisfies~(d). Then Lemmas~4 and 5 of \cite{BandML:AffSurfAK} imply that $R\in\Dan(\Comp)$, so the Theorem is valid in the case $\bk=\Comp$. Let $\bk$ be a field of characteristic zero, consider a smooth affine $\bk$-domain $R$ of dimension $2$ such that $\ML(R) = \bk$, and suppose that $R$ satisfies~(d). We have $R \isom \bk[X_1, \dots, X_n ] / ( f_1, \dots, f_m )$ for some $m,n\ge0$ and some $f_1, \dots, f_m \in \bk[X_1, \dots, X_n ]$. Also consider $D_1, D_2 \in \lnd(R)$ such that $\ker D_1 \cap \ker D_2 = \bk$. Each $D_i$ can be lifted to a (not necessarily locally nilpotent) $\bk$-derivation $\delta_i$ of $\bk[X_1, \dots, X_n ]$. Let $\bk_0$ be a subfield of $\bk$ which is finitely generated over $\Rat$ and which contains all coefficients of the polynomials $f_i$ and $\delta_i(X_j)$. Define $R_0 = \bk_0[X_1, \dots, X_n ] / ( f_1, \dots, f_m )$ and note that $\bk \otimes_{\bk_0} R_0 \isom R$. As $\bk_0 \to \bk$ is injective and $R_0$ is flat over $\bk_0$, $\bk_0 \otimes_{\bk_0} R_0 \to \bk \otimes_{\bk_0} R_0$ is injective and we may regard $R_0$ as a subring of $R$. In particular, $R_0$ is a domain (a $2$-dimensional affine $\bk_0$-domain).
Also note that $D_i(R_0) \subseteq R_0$ for $i=1,2$; if $d_i : R_0 \to R_0$ is the restriction of $D_i$ then $d_1, d_2 \in \lnd( R_0 )$ and $\ker d_1 \cap \ker d_2 = \bk \cap R_0 = \bk_0$ (see \eqref{kdfjwehfui} for the last equality), showing that $\ML( R_0 ) = \bk_0$. As $\bk_0$ is a field and $\bk \to R$ is obtained from $\bk_0 \to R_0$ by base extension, the fact that $\bk \to R$ is smooth implies that $\bk_0 \to R_0$ is smooth (cf.\ \cite[28.O]{Matsumura}). Consider the $R$-module $M = \bigwedge^2 \Omega_{R/\bk}$ and the $R_0$-module $M_0 = \bigwedge^2 \Omega_{R_0/\bk_0}$. Consider an isomorphism of $R$-modules $\theta : R \to M$ and let $\omega = \theta(1)$. We have $R \otimes_{ R_0 } M_0 \isom M$ by \ref{fjefkwhehjwgjsklfjajhh}, so there is a natural homomorphism $M_0 \to R \otimes_{ R_0 } M_0 \isom M$, $x \mapsto 1 \otimes x$; by adjoining a finite subset of $\bk$ to $\bk_0$, we may arrange that there exists $\omega_0 \in M_0$ such that $1 \otimes \omega_0 = \omega$. Consider the $R_0$-linear map $f : R_0 \to M_0$, $f(a)=a\omega_0$. Note that $R = \bk \otimes_{\bk_0} R_0$ is faithfully flat as an $R_0$-module and that applying the functor $R \otimes_{R_0}\underline{\ \ }$ to $f$ yields the isomorphism $\theta$; so $f$ is an isomorphism, so $\bigwedge^2 \Omega_{R_0/\bk_0} \isom R_0$. As $R \in \Deul(\bk)$ would follow from $R_0 \in \Deul(\bk_0)$, the problem reduces to proving the case $\bk = \bk_0$ of the theorem. Now $\bk_0$ is isomorphic to a subfield of $\Comp$, so it suffices to prove the theorem in the case $\bk \subseteq \Comp$. Assume that $\bk \subseteq \Comp$. As $R$ is smooth over $\bk$, the local ring $R_\pgoth$ is regular for every $\pgoth \in \Spec R$ (by \cite[28.E,F,K]{Matsumura}) so in particular $R$ is a normal domain. Then it follows from \ref{jfewjfkjhlshf;ij;} that $R' = \Comp \otimes_\bk R$ is an integral domain and that $\ML( R' ) = \Comp$. By \cite[28.G]{Matsumura}, $R'$ is smooth over $\Comp$. 
It is clear that $\dim R' = 2$ (for instance see the proof of \ref{bvbvbvbryyryyrt}) and \ref{fjefkwhehjwgjsklfjajhh} gives $ \bigwedge^2 \Omega_{ R' / \Comp } \isom R' \otimes_{ R } \bigwedge^2 \Omega_{ R / \bk } \isom R' \otimes_{ R } R \isom R'$. As the Theorem is valid over $\Comp$, it follows that $R' \in \Deul( \Comp )$. As $\ML(R) = \bk \neq R$, Theorem~\ref{DanML} implies that $R \in \Deul( \bk )$. \end{proof} \begin{corollary}\label{NewCorUFD} Let $R$ be a $2$-dimensional affine domain over a field $\bk$ of characteristic zero. If $R$ is a UFD and a smooth $\bk$-algebra satisfying $\ML(R) = \bk$, then $R \in \Dan( \bk )$. \end{corollary} \begin{proof} Since $R$ is a UFD, the scheme $X = \Spec R$ has a trivial divisor class group \cite[6.2 p.\ 131]{Hartshorne}. By Remark~\ref{RemCanSheaf}, it follows that $\bigwedge^2 \Omega_{R/\bk} \isom R$ and the desired conclusion follows from Theorem~\ref{cxcxcxxexexeexexr}. \end{proof} \section{Localizations of nice rings} \label{dkfj;wejij2wlrhwuiehl;wjeioqje;} Throughout this section we fix a field $\bk$ of characteristic zero and we consider the class $\Neul(\bk)$ of $\bk$-algebras $B$ satisfying the following conditions: \begin{quote}\it $B$ is a geometrically integral affine $\bk$-domain which is smooth over $\bk$ and satisfies at least one of the following conditions: \begin{itemize} \item $B$ is a UFD; or \item $B$ is a complete intersection over $\bk$. \end{itemize} \end{quote} Note that $\kk n \in \Neul(\bk)$ for every $n$. \begin{theorem}\label{dwddwdwdwddwd} Suppose that $R$ is a localization of a ring belonging to the class $\Neul(\bk)$. If $\ML(R) = K$ for some field $K \subset R$ such that $\trdeg_KR=2$, then $R \in \Dan( K )$. \end{theorem} \begin{lemma}\label{nbnbnbniuiuioypw} Let $B \in \Neul( \bk )$, let $E$ be a finitely generated $\bk$-subalgebra of $B$ and let $S = E \setminus\{0\}$. Then $S^{-1}B$ is a smooth algebra over the field $S^{-1}E$.
\end{lemma} \begin{proof} Let $\ck$ be an algebraic closure of $\bk$ and define $\bar E = \ck \otimes_\bk E$ and $\bar B = \ck \otimes_\bk B$. Note that $\bar B$ is a domain because $B$ is geometrically integral, and $\bar E \to \bar B$ is injective because $\ck$ is flat over $\bk$. Let $K = \Frac E$ and $L = \Frac \bar E$. As $\bar B$ is smooth over $\ck$, applying \cite[10.7, p.~272]{Hartshorne} to $\Spec\bar B \to \Spec \bar E$ implies that $L \to L \otimes_{\bar E} \bar B$ is smooth. It is not difficult to see that $L \to L \otimes_{\bar E} \bar B$ is obtained from $K \to K \otimes_{E} B$ by base extension. As $K$ is a field and $L \to L \otimes_{\bar E} \bar B$ is smooth, it follows from \cite[28.O]{Matsumura} that $K \to K \otimes_{E} B$ is smooth. \end{proof} \begin{lemma}\label{fwkefkjnfkja;kl} Let $B \in \Neul( \bk )$, let $S$ be a multiplicative subset of $B$ and suppose that $K$ is a field such that $\bk \cup S \subseteq K \subseteq S^{-1}B$. Then $S^{-1}B$ is a smooth $K$-algebra and some transcendence basis of $K/\bk$ is a subset of $B$. \end{lemma} \begin{proof} Note that $K/\bk$ is a finitely generated field extension and write $K = \bk( \alpha_1, \dots, \alpha_m )$. For each $i$ we have $\alpha_i = b_i/s_i$ for some $b_i \in B$ and $s_i \in S$; as $S \subseteq K$, we have $b_i = s_i \alpha_i \in K$. Define $E = \bk[b_1, \dots, b_m, s_1, \dots, s_m] \subseteq K$ and $S_1 = E \setminus \{0\}$, then $S_1^{-1} E = K$ and hence $S_1^{-1} B = S^{-1} B$. By Lemma~\ref{nbnbnbniuiuioypw}, $S^{-1} B$ is a smooth $K$-algebra. Moreover, $\{ b_1, \dots, b_m, s_1, \dots, s_m \}$ contains a transcendence basis of $K/\bk$. \end{proof} \begin{proof}[Proof of Theorem~\ref{dwddwdwdwddwd}] We have $R = S^{-1}B$ for some $B \in \Neul( \bk )$ and some multiplicative subset $S$ of $B$. As $\bk^* \cup S \subseteq R^* \subseteq \ML(R) = K$, $R$ is smooth over $K$ by Lemma~\ref{fwkefkjnfkja;kl}. 
By definition of $\Neul( \bk )$, $B$ is a UFD or a complete intersection over $\bk$. If $B$ is a UFD then so is $R$; in this case we obtain $R \in \Dan( K )$ by Corollary~\ref{NewCorUFD}, so we are done. From now on, assume that $B$ is a complete intersection over $\bk$. Let $q=\dim B$ and write $B = \bk[X_1, \dots, X_{p+q}] / (G_1, \dots, G_p)$. Using Lemma~\ref{fwkefkjnfkja;kl} again, choose a transcendence basis $\{ f_1, \dots, f_{q-2} \}$ of $K$ over $\bk$ such that $f_1, \dots, f_{q-2} \in B$; let $S_0 = \bk[ f_1, \dots, f_{q-2} ] \setminus \{ 0 \}$ and $K_0 = \bk( f_1, \dots, f_{q-2} )$. We claim: \begin{equation}\label{fjlwehfhwlehj;wkej;qilui} \text{$S_0^{-1}B$ is a complete intersection over $K_0$.} \end{equation} Let us prove this. For $1 \le i \le q-2$, choose $F_i \in \bk[X_1, \dots, X_{p+q}]$ such that $\pi( F_i ) = f_i$ where $\pi : \bk[X_1, \dots, X_{p+q}] \to B$ is the canonical epimorphism. Also, let $T_1, \dots, T_{q-2}$ be extra indeterminates. The $\bk$-homomorphism $\bk[T_1, \dots, T_{q-2}, X_1, \dots, X_{p+q} ] \to B$ which maps $T_i$ to $f_i$ and $X_i$ to $\pi(X_i)$ has kernel $(G_1, \dots, G_p, F_1-T_1, \dots, F_{q-2} - T_{q-2} )$, so there is an isomorphism of $\bk$-algebras $$ B \isom \bk[T_1, \dots, T_{q-2}, X_1, \dots, X_{p+q} ] / (G_1, \dots, G_p, F_1-T_1, \dots, F_{q-2} - T_{q-2} ) . $$ Localization gives an isomorphism of $\bk$-algebras \begin{equation}\label{kfjwjehfiwuelj;;weklj} S_0^{-1}B \isom \bk(T_1, \dots, T_{q-2}) [ X_1, \dots, X_{p+q} ] / (G_1, \dots, G_p, F_1-T_1, \dots, F_{q-2} - T_{q-2} ) \end{equation} which maps $K_0$ onto $\bk(T_1, \dots, T_{q-2})$. As the right hand side of \eqref{kfjwjehfiwuelj;;weklj} is a complete intersection over $\bk(T_1, \dots, T_{q-2})$, assertion \eqref{fjlwehfhwlehj;wkej;qilui} is proved.
Then we obtain \begin{equation}\label{bvbbvbvytytytyru} \textstyle \bigwedge^2 \Omega_{S_0^{-1}B/K_0} \isom S_0^{-1}B \end{equation} by Lemma~\ref{fjejkhfjheudgygygwy}, because $S_0^{-1}B$ is a smooth $K_0$-algebra by Lemma~\ref{nbnbnbniuiuioypw}. Each element of $K$ belongs to $\Frac( S_0^{-1}B )$ and is algebraic over $K_0$, hence integral over $S_0^{-1}B$; as $S_0^{-1}B$ is normal, $K \subseteq S_0^{-1}B$ and hence $S_0^{-1}B = R$. We may therefore rewrite \eqref{bvbbvbvytytytyru} as: \begin{equation}\label{fkefjjkhhjghgffsdeq} \textstyle \bigwedge^2 \Omega_{R/K_0} \isom R . \end{equation} Applying \cite[26.H]{Matsumura} to $K_0 \subseteq K \subseteq R$ gives the exact sequence of $R$-modules $$ \Omega_{K/K_0} \otimes_K R \to \Omega_{R/K_0} \to \Omega_{R/K} \to 0, $$ where $\Omega_{K/K_0} = 0$ by \cite[27.B]{Matsumura}. So $\Omega_{R/K} \isom \Omega_{R/K_0}$ and hence \eqref{fkefjjkhhjghgffsdeq} gives $\bigwedge^2 \Omega_{R/K} \isom R$. So $R \in \Dan( K )$ by Theorem~\ref{cxcxcxxexexeexexr}. \end{proof} Let $\bk$ be a field of characteristic zero, let $B \in \Neul( \bk )$ and consider locally nilpotent derivations $D: B \to B$. See \ref{fefefefeer} for the definition of $\klnd(B)$. It is known that if $A \in \klnd(B)$ then $\trdeg_A(B)=1$, and if $A_1, A_2$ are distinct elements of $\klnd(B)$ then $\trdeg_{ A_1 \cap A_2 }(B) \ge 2$. We are interested in the situation where $\trdeg_{ A_1 \cap A_2 }(B) = 2$, i.e., when $A_1, A_2$ are distinct and have an intersection which is as large as possible. \begin{corollary}\label{ofoofofoofofoo} Let $B \in \Neul( \bk )$, where $\bk$ is a field of characteristic zero. If $A_1, A_2 \in \klnd(B)$ are such that $\trdeg_{ A_1 \cap A_2 }(B) = 2$, then the following hold. \begin{enumerate} \item[(a)] Let $R = A_1 \cap A_2$ and $K = \Frac R$. Then $K \otimes_R B \in \Dan( K )$. \item[(b)] If $B$ is a UFD then there exists a finite sequence of local slice constructions which transforms $A_1$ into $A_2$.
\end{enumerate} \end{corollary} \begin{remark*} This generalizes results~1.10 and 1.13 of \cite{Dai:PolsAnnilTwoLNDS}. Local slice construction was originally defined in \cite{Freud:LocalSlice} in the case $B=\kk3$, and was later generalized in \cite{Dai:LSC}. \end{remark*} \begin{proof}[Proof of Corollary \ref{ofoofofoofofoo}] Let $S = R \setminus \{0 \}$, $\Aeul_i = S^{-1}A_i$ ($i=1,2$) and $\Beul = S^{-1}B = K \otimes_R B$. If $D_i \in \lnd(B)$ has kernel $A_i$, then $S^{-1}D_i \in \lnd( \Beul )$ has kernel $\Aeul_i$; thus $\Aeul_1, \Aeul_2 \in \klnd( \Beul )$. Using that $A_1, A_2$ are factorially closed in $B$, we obtain $\Aeul_1 \cap \Aeul_2 \subseteq K$, so $\ML( \Beul ) \subseteq K$. The reverse inclusion is trivial ($K^* \subseteq \Beul^* \subseteq \ML( \Beul )$), so $\ML( \Beul ) = K$. Then $\Beul \in \Dan( K )$ by Theorem~\ref{dwddwdwdwddwd}, so assertion~(a) is proved. In \cite[3.3]{Dai:LSC}, one defines a graph $\Klnd(B)$ whose vertex-set is $\klnd(B)$; then, given $A,A' \in \klnd(B)$, one says that $A'$ can be obtained from $A$ ``by a local slice construction'' if there exists an edge in $\Klnd(B)$ joining vertices $A$ and $A'$. So assertion~(b) of the Corollary is equivalent to the existence of a path in $\Klnd(B)$ going from $A_1$ to $A_2$. Paragraph \cite[3.2.2]{Dai:LSC} also defines a subgraph $\Klnd_R(B)$ of the graph $\Klnd(B)$, and clearly $A_1, A_2$ are two vertices of $\Klnd_R(B)$; so, to prove (b), it suffices to show that $\Klnd_R(B)$ is a connected graph. We have $R \in \Reul^{\mbox{\scriptsize in}}(B)$ (cf.\ \cite[5.2]{Dai:LSC}) and consequently (cf.\ \cite[5.3]{Dai:LSC}, using that $B$ is a UFD) we have an isomorphism of graphs $\Klnd_R(B) \isom \Klnd_K(\Beul)$. As $\Beul \in \Dan( K )$ by part~(a), we may apply \cite[4.8]{Dai:LSC} and conclude that $\Klnd_K(\Beul)$ is connected. Assertion~(b) is proved. \end{proof} The following is a trivial consequence of Corollary~\ref{ofoofofoofofoo}. 
\begin{corollary}\label{dwdwdwdwrwdwrdwrwdrwdwr} Let $B \in \Neul( \bk )$, where $\bk$ is a field of characteristic zero. Suppose that $B$ has transcendence degree two over $\ML(B)$. \begin{enumerate} \item Let $R = \ML(B)$ and $K = \Frac R$. Then $K \otimes_R B \in \Dan(K)$. \item If $B$ is a UFD then, for any $A_1, A_2 \in \klnd(B)$, there exists a finite sequence of local slice constructions which transforms $A_1$ into $A_2$. \end{enumerate} \end{corollary} \bibliographystyle{amsplain} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{ \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2}
{"config": "arxiv", "file": "0705.0334.tex"}
TITLE: Matrix property question QUESTION [0 upvotes]: Suppose $A$ is a $10 \times 10$ matrix, with the following property: given any five rows and five columns, the sum of the entries of the $5 \times 5$ matrix formed by these rows and columns is even. Prove that all the entries of $A$ are even. REPLY [2 votes]: For a given element, choose any $5$ rows and $5$ columns in addition to the row and column of the element, giving $6$ rows and $6$ columns in all. There are $5^2=25$ ways of choosing $5$ of these $6$ rows and $5$ of these $6$ columns such that the given element is included (choose $4$ of the other $5$ rows and $4$ of the other $5$ columns). Each of the other elements is included in an even number of these selections ($16$ or $20$), while the given element is included in all $25$, an odd number. Since each selection has an even sum, the total sum over all these selections is even, so the given element must be even.
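A quick brute-force check of this counting argument (not part of the original answer) can be run in Python:

```python
from itertools import combinations

# Fix 6 rows and 6 columns: the given element sits at (0, 0), plus 5
# auxiliary rows 1..5 and 5 auxiliary columns 1..5.
rows = cols = range(6)

# All 5-of-6 row choices and 5-of-6 column choices that contain index 0,
# i.e. that include the given element.  There are 5 * 5 = 25 such selections.
row_sels = [s for s in combinations(rows, 5) if 0 in s]
col_sels = [s for s in combinations(cols, 5) if 0 in s]
assert len(row_sels) * len(col_sels) == 25

# Count how often each of the 36 positions appears across the 25 selections.
counts = {(r, c): 0 for r in rows for c in cols}
for rs in row_sels:
    for cs in col_sels:
        for r in rs:
            for c in cs:
                counts[(r, c)] += 1

# Only the given element is covered an odd number of times.
assert counts[(0, 0)] == 25
assert all(counts[p] % 2 == 0 for p in counts if p != (0, 0))
```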
{"set_name": "stack_exchange", "score": 0, "question_id": 1725067}
TITLE: Expansion of ideal gas QUESTION [2 upvotes]: Consider an ideal gas in a chamber (A), separated from another chamber (B) by a diaphragm, in the following two situations: (1) Instantaneously burst the diaphragm (2) Plug in an isentropic nozzle so that the gas escapes gradually Are the two cases identical? I believe there should be some work done in the second case, because the opposing pressure in chamber B increases gradually, so the work done by the gas would increase as it reaches steady state. Is this reasoning correct, or should both cases have zero work done? REPLY [2 votes]: Assuming the walls of the container are perfect insulators, the final steady states must be identical, as they are determined only by the gas's volume, internal energy and particle number, all of which are the same in both cases. (The internal energy is the same because no energy is transferred to the gas from outside.) I think the confusion arises because one part of the gas does work on another part of the gas during the expansion in the second case, but the net work done by the gas as a whole is still zero.
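For a concrete sanity check (illustrative numbers only, chosen for this example), the final state follows from $U$, $V$ and $N$ alone via the ideal gas law:

```python
# Final state of a free (Joule) expansion of an ideal gas into vacuum.
# Illustrative numbers (n, T, volumes) are made up for the example.
R = 8.314            # J/(mol K), gas constant
n = 1.0              # mol
T_initial = 300.0    # K
V_A, V_B = 1.0e-3, 1.0e-3   # m^3, chambers A and B

# Insulated walls and zero net work by the gas as a whole => U is unchanged.
# For an ideal gas U depends only on T, so T is unchanged too.
T_final = T_initial

# The final pressure follows from PV = nRT on the doubled volume.
P_initial = n * R * T_initial / V_A
P_final = n * R * T_final / (V_A + V_B)
assert abs(P_final - P_initial / 2) < 1e-6
```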
{"set_name": "stack_exchange", "score": 2, "question_id": 99807}
TITLE: How to integrate this: $\int \frac{\cos 5x+\cos 4x}{1-2\cos 3x}\,dx$ QUESTION [1 upvotes]: How to integrate this: $\int \frac{\cos 5x+\cos 4x}{1-2\cos 3x}\,dx$ My approach: We know that $\cos A+\cos B = 2\cos(\frac{A+B}{2})\cos(\frac{A-B}{2})$ But it is not working here, please suggest, thanks. REPLY [1 votes]: I think the following is easier. $$\cos5x+\cos4x=\cos5x+\cos{x}+\cos4x+\cos2x-\cos{x}-\cos{2x}=$$ $$=2\cos3x\cos2x+2\cos3x\cos{x}-\cos{x}-\cos{2x}=(2\cos3x-1)(\cos2x+\cos{x})$$ and the rest is smooth.
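The factorization can be spot-checked numerically (a quick sanity check, not part of the original answer); the reduced integrand $-(\cos 2x+\cos x)$ then integrates to $-\tfrac{1}{2}\sin 2x-\sin x + C$:

```python
import math

def lhs(x):  # the original integrand
    return (math.cos(5*x) + math.cos(4*x)) / (1 - 2*math.cos(3*x))

def reduced(x):  # after the factorization, valid wherever 1 - 2cos(3x) != 0
    return -(math.cos(2*x) + math.cos(x))

# Spot-check cos(5x) + cos(4x) = (2cos(3x) - 1)(cos(2x) + cos(x)) and the
# resulting simplification of the integrand at points where the
# denominator does not vanish.
for x in [0.1, 0.7, 1.3, 2.9]:
    assert abs(math.cos(5*x) + math.cos(4*x)
               - (2*math.cos(3*x) - 1)*(math.cos(2*x) + math.cos(x))) < 1e-12
    assert abs(lhs(x) - reduced(x)) < 1e-9
```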
{"set_name": "stack_exchange", "score": 1, "question_id": 2331283}
TITLE: Why do we need both dot product and cross product? QUESTION [1 upvotes]: I was looking for an intuitive definition for dot product and cross product. I have found two similar questions on SO, but I am not satisfied with the answers. Finally I found a possible answer here. It says the dot product actually gives us a way to depict mathematically how parallel two lines are, and on the other side the cross product tells us how perpendicular two lines are to each other. So my question is: why do we want both? Why can't we just have the dot product? REPLY [1 votes]: The dot and cross products are both recovered as components of the tensor product, which takes two vectors and gives you a rank-2 tensor $v^\mu w^\nu$. The dot product is the trace of this tensor, obtained by a contraction $v^\mu w_\mu$ -- this is useful because it is a scalar, which is invariant under nice transformations (rotations and skews, specifically). The cross product is the antisymmetric part of this, evaluated as $v^\mu w^\nu-v^\nu w^\mu$ (for two vectors -- for more vectors you need to use a Levi-Civita symbol to sum with signs over the permutations of $\mu,\nu\ldots$).
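This decomposition is easy to verify numerically: the trace of the outer product $v^\mu w^\nu$ is the dot product, and the independent components of its antisymmetric part are the cross product. A quick NumPy check (not part of the original answer):

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
w = np.array([-2.0, 0.5, 4.0])

T = np.outer(v, w)          # rank-2 tensor v^mu w^nu

# Dot product = trace (the rotation-invariant scalar part).
assert np.isclose(np.trace(T), np.dot(v, w))

# Cross product = independent components of the antisymmetric part.
A = T - T.T                 # v^mu w^nu - v^nu w^mu
cross_from_T = np.array([A[1, 2], A[2, 0], A[0, 1]])
assert np.allclose(cross_from_T, np.cross(v, w))
```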
{"set_name": "stack_exchange", "score": 1, "question_id": 186045}
TITLE: Suggestions to a thermodynamics seminar presentation QUESTION [0 upvotes]: I have to present a seminar on thermodynamics in my freshman college year, answering a question in this field. Could you suggest any interesting question that could be used in my work? Unfortunately, everything I thought of involved statistical mechanics and quantum mechanics, which are probably not allowed... The seminar presentation should take about ten minutes, so the question cannot be too hard. REPLY [1 votes]: You can try presenting the different types of real-life heat engines (diesel, rotary, ...) and the underlying cycles (Stirling, Otto, Atkinson, ...), and the advantages and disadvantages of each type. I think it would be interesting to explore why some engines are more popular than others, and for which applications they're used (for instance, car vs motorcycle vs airplane engines). Or, you can try explaining why the ideal engine using the Carnot cycle doesn't exist in real life.
{"set_name": "stack_exchange", "score": 0, "question_id": 598309}
\section{Case Study: rigid body motion} \label{sec:rbm} The remainder of the article shows how to model euclidean rigid body motion using the Clifford algebra structures described above. It begins by showing how to use the Clifford algebra $\pdclal{3}{0}{1}$ to represent euclidean motions and their derivatives. Dynamics is introduced with newtonian particles, which are collected to construct rigid bodies. The inertia tensor of a rigid body is derived as a positive definite quadratic form on the space of bivectors. Equations of motion in the force-free case are derived. In the following, we represent velocity states by $\velo$, momentum states by $\momo$, and forces by $\foro$. \versions{}{Due to space limitations, results are compressed. For a fuller discussion, see \fullversion.} \subsection{Kinematics} \label{sec:kinematics2d} \begin{definition} A \emph{euclidean motion} is a $C^1$ path $g: [0,1] \rightarrow \spin{3}{0}{1}$ with $g(0) = \one$. \end{definition} \begin{theorem}\label{thm:bivector} For a euclidean motion $\vec{g}$, $\widetilde{\vec{g}}\dot{\vec{g}}$ is a bivector. \end{theorem} \begin{proof} $\widetilde{\vec{g}}\dot{\vec{g}}$ is in the even subalgebra. For a bivector $X$, $\tilde{X} = -X$; for scalars and pseudoscalars, $\tilde{X} = X$. Hence it suffices to show $\widetilde{\widetilde{\vec{g}}\dot{\vec{g}}}= -\widetilde{\vec{g}}\dot{\vec{g}}$. \begin{eqnarray*} \widetilde{\vec{g}} \vec{g}&=& 1 \\ \dot{(\widetilde{\vec{g}}\vec{g} )} &=& 0 \\ \dot{\widetilde{\vec{g}}}\vec{g}+\widetilde{\vec{g}}\dot{\vec{g}} &=& 0 \\ \widetilde{\dot{\vec{g}}}\vec{g}+\widetilde{\vec{g}}\dot{\vec{g}} &=& 0 \\ \widetilde{\widetilde{\vec{g}}\dot{\vec{g}}}&=& -\widetilde{\vec{g}}\dot{\vec{g}} \end{eqnarray*} \qed \end{proof} Define $\vec{\velo} := \dot{\vec{g}}(0)$; by the theorem, $\vec{\velo}$ is a bivector. We call $\vec{\velo}$ a \emph{euclidean velocity state}. 
For a point $\vec{P}$, the motion $\vec{g}$ induces a path $\vec{P}(t)$, the \emph{orbit} of the point $\vec{P}$, given by $\vec{P}(t) ={\vec{g}}(t)\vec{P} \widetilde{\vec{g}}(t)$. Taking derivatives of both sides and evaluating at $t=0$ yields: \begin{eqnarray*} \label{eqn:liebr} \dot{\vec{P}}(t) &=& \dot{{\vec{g}}}(t)\vec{P}\widetilde{\vec{g}}(t)+{\vec{g}}(t)\vec{P}\dot{\widetilde{\vec{g}}}(t)\\ \dot{\vec{P}}(t) &=&{\dot{\vec{g}}}(t)\vec{P} \widetilde{\vec{g}}(t)+{\vec{g}}(t)\vec{P} \widetilde{\dot{\vec{g}}}(t)\\ \dot{\vec{P}}(0) &=& \vec{\velo}\vec{P} - \vec{P}\vec{\velo}\\ &=& 2(\vec{\velo} \times \vec{P}) \end{eqnarray*} The last step follows from the definition of the commutator product of bivectors. In this formula we can think of $\vec{P}$ as a normalized euclidean point which is being acted upon by the euclidean motion $\vec{g}$. From \Sec{sec:enumpro3} we know that $\vec{\velo} \times \vec{P}$ is an ideal point, that is, a free vector. We sometimes use the alternative form $\vec{\velo} \times \vec{P} = (\vec{\velo} \vee \vec{P})\eye$ (Exercise). The vector field vanishes wherever $\vec{\velo} \vee \vec{P} = 0$. This occurs only if $\velo$ is a line and $\vec{P}$ lies on it. The picture is consistent with the knowledge, gained above, that in this case $e^{t\vec{\velo}}$ generates a rotation (or translation) with axis $\vec{\velo}$. Otherwise the motion is an instantaneous screw motion around the axis of $\velo$ and no points remain fixed. \versions{\Fig{fig:eucvf} shows how the vector field looks in the case $n=2$. It's easy to see that $e^{t\vec{V}}$ in this case yields a rotation around the point $\vec{V}$. \begin{figure}[t] \sidecaption[t] \includegraphics[width=.52\columnwidth]{vectorField-euc2d-01.png} \caption{For $n=2$ the euclidean velocity state is a point, and it acts on points. The vector field $2 \vec{V} \times \vec{P}$ in the neighborhood of $\vec{V}$.} \label{fig:eucvf} \end{figure} }{\vspace{-.15in}}
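This velocity field has a familiar classical counterpart: a twist $(\omega, v)$ assigning each point $P$ the velocity $\omega \times P + v$. The following Python sketch (our own illustration, using ordinary vector algebra rather than the algebra $\pdclal{3}{0}{1}$; all helper names are invented) checks the fixed-point behavior described above:

```python
import numpy as np

# Classical counterpart of the velocity state V acting on points:
# a twist (omega, v) assigns each point P the velocity omega x P + v.
def point_velocity(omega, v, P):
    return np.cross(omega, P) + v

# A pure rotation about the z-axis through the origin:
omega = np.array([0.0, 0.0, 1.0])
v = np.zeros(3)

# Points on the axis are fixed; points off the axis move.
assert np.allclose(point_velocity(omega, v, np.array([0.0, 0.0, 2.0])), 0)
assert not np.allclose(point_velocity(omega, v, np.array([1.0, 0.0, 0.0])), 0)

# A screw motion with nonzero pitch fixes no point at all: the velocity's
# z-component is always the translation part 0.5.
v_screw = np.array([0.0, 0.0, 0.5])
for P in np.random.default_rng(0).normal(size=(100, 3)):
    assert not np.allclose(point_velocity(omega, v_screw, P), 0)
```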
\mysubsubsection{Null plane interpretation} In the formulation $\dot{\vec{P}} = 2(\vec{\velo} \vee \vec{P})\eye$, we recognize the result as the polar point (with respect to the euclidean metric) of the null plane of $\vec{P}$ (with respect to $\velo$). See \Fig{fig:nullplanes}. Thus, the vector field can be considered as the \emph{composition} of two simple polarities: first the null polarity on $\velo$, then the metric polarity on the euclidean quadric. This leads to the somewhat surprising result that, regardless of the metric used, the underlying null polarity remains the same. \versions{One could say that, for a given point, its null plane provides a projective \emph{ground} for kinematics, shared by all metrics; the individual metrics determine a different \emph{perpendicular} direction to the plane, giving the direction in which the point moves. This decomposition only makes itself felt in the 3D case. In 2D, the null polarity is degenerate: $\vec{V} \vee \vec{P}$ is the joining line of $\vec{V}$ and $\vec{P}$ (a similar degeneracy occurs in 3D when $\velo$ is simple). }{} \begin{figure}[t] \label{fig:nullplanes} \begin{center} \includegraphics[width=.9\columnwidth]{GunnFigure04-NullLines-02.pdf} \caption{Two examples of a point $\vec{P}$, its null plane $(\vec{P} \vee \pip)$, and the null plane's polar point.} \end{center} \end{figure} \subsection{Dynamics} \label{sec:dynamics2d} With the introduction of forces, our discussion moves from the kinematic level to the dynamic one. We begin with a treatment of statics. We then introduce newtonian particles, build rigid bodies out of collections of such particles, and state and solve the equations of motion for these rigid bodies.
\versions{See Appendix 3 for a detailed account of how 2D statics are handled in this model.}{} \mysubsubsection{3D Statics} Traditional statics represents a single 3D force $F$ as a pair of 3-vectors $(V,M)$, where $ V = (v_x, v_y, v_z)$ is the direction vector, and $M = (m_x,m_y,m_z)$ is the moment with respect to the origin (see \cite{featherstone07}, Ch. 2). The resultant of a system of forces $F_i$ is defined to be \versions{\[ \sum_i F_i = (\sum_i{V_i} , \sum_i{M_i}) =: (V,M)\] The forces are in equilibrium $\iff V=M=0$. $V=0$ and $M \neq 0 \iff $ the resultant force is a \emph{force couple}. Otherwise the vectors $V$ and $M$ are orthogonal $\iff$ the system represents a single force.}{the sum of the corresponding direction vectors $V_i$ and moment vectors $M_i$. The forces are in equilibrium if both terms of the resultant are zero.} If $\vec{P}$ is a normalized point on the line carrying the force, define $H(F) := \vec{P} \vee \vec{i}(V)$. We call $H(F)$ the \emph{homogeneous form} of the force, and verify that: \begin{eqnarray*} H(F) &=&m_x \EE{01} + m_y \EE{02} + m_z \EE{03} + v_z \EE{12} +v_y \EE{31} + v_x \EE{23} \end{eqnarray*} If $F$ is the resultant of a force system $\{F_i\}$, then $H(F) = \sum_i{H(F_i)}$. Hence, a system of forces $\{F_i\}$ is the null force $\iff \sum_i{H(F_i)} = 0$. Furthermore, $H(F)$ is an ideal line $\iff$ the system of forces reduces to a force-couple, and $H(F)$ is a simple euclidean bivector $\iff$ $F$ represents a single force. Notice that the intensity of a bivector is significant, since it is proportional to the strength of the corresponding force. For this reason we sometimes say forces are represented by \emph{weighted} bivectors. \subsubsection{Newtonian particles} \label{sec:newtonpart} The basic object of Newtonian mechanics is a particle $P$ with mass $m$ located at the point represented by a trivector $\vec{R}$.
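The $(V,M)$ bookkeeping above is easy to check numerically. The following Python sketch (our own illustration in ordinary vector algebra; the helper names are invented) assembles homogeneous forces as 6-vectors ordered $(m_x,m_y,m_z,v_x,v_y,v_z)$ as in the formula for $H(F)$, produces a force couple as the resultant of opposite forces on parallel lines, and verifies the Plücker (simplicity) condition $V \cdot M = 0$ for a single force:

```python
import numpy as np

# A 3D force as (direction V, moment M about the origin); for a force with
# direction V acting along the line through point P, the moment is M = P x V.
def homogeneous_force(P, V):
    return np.concatenate([np.cross(P, V), V])   # (m_x, m_y, m_z, v_x, v_y, v_z)

def resultant(forces):
    return np.sum(forces, axis=0)

F1 = homogeneous_force(np.array([1.0, 0.0, 0.0]), np.array([0.0, 2.0, 0.0]))
F2 = homogeneous_force(np.array([0.0, 1.0, 0.0]), np.array([0.0, -2.0, 0.0]))
Fsum = resultant([F1, F2])

# Opposite forces along parallel lines: the resultant has V = 0, M != 0,
# i.e. it is a force couple (an "ideal line" in the text's terminology).
M, V = Fsum[:3], Fsum[3:]
assert np.allclose(V, 0) and not np.allclose(M, 0)

# A single force satisfies the simplicity (Pluecker) condition V . M = 0.
M1, V1 = F1[:3], F1[3:]
assert np.isclose(np.dot(V1, M1), 0)
```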
Stated in the language of ordinary euclidean vectors, Newton's law asserts that the force $F$ acting on $P$ is: $F=m\ddot{\underline{\vec{R}}}$. \begin{definition} The \emph{spear} of the particle is $\vec{\Lambda} := \vec{R} \vee \dot{\vec{R}}$. \end{definition} \begin{definition} The \emph{momentum state} of the particle is $\vec{\momo} := m\vec{\Lambda}$. \end{definition} \begin{definition} The \emph{velocity state} of the particle is $\vec{\pvelo} := \vec{\Lambda}\eye$. \end{definition} \begin{definition} The \emph{kinetic energy} $E$ of the particle is \begin{equation} \label{eqn:kinen} E := \dfrac{m}{2} \| \dot{\vec{R}} \|_\infty^2 = -\frac{m}{2} \vec{\Lambda} \cdot \vec{\Lambda} = -\frac{1}{2}\stripeye{\vec{\pvelo} \wedge \vec{\momo} } \end{equation} \end{definition} \mysubsubsection{Remarks} Since we can assume $\vec{R}$ is normalized, $\dot{\vec{R}}$ is an ideal point. $\vec{\momo}$ is a weighted bivector whose weight is proportional to the mass and the velocity of the particle. $\vec{\pvelo}$ is ideal, corresponding to the fact that the particle's motion is \emph{translatory}. Up to the factor $m$, $\pvelo$ is the polar line of $\momo$ with respect to the euclidean metric. It is straightforward to verify that the linear and angular momentum of the particle appear as $\vec{\momo}_o$ and $\vec{\momo}_\infty$, resp., and that the definition of kinetic energy agrees with the traditional one (Exercise). The second and third equalities in \Eq{eqn:kinen} are also left as an exercise. We consider only force-free systems. \versions{}{The extension to include external forces is straightforward but lies outside the scope of this introduction.} \begin{theorem} If $F=0$ then $\vec{\Lambda}$, $\vec{\momo}$, $\vec{\pvelo}$, and $E$ are conserved quantities. \end{theorem} \begin{proof} $F=0$ implies $\ddot{\vec{R}} = 0$. 
Then: \begin{itemize} \item $\dot{\vec{\Lambda}} = ( \dot{\vec{R}} \vee \dot{\vec{R}} + \vec{R} \vee \ddot{\vec{R}}) =0~~$ \item $\dot{\vec{\momo}} = m\dot{\vec{\Lambda}} =0$ \item $\dot{\vec{\pvelo}} = (\dot{\vec{\Lambda}})\eye =0$ \item $\dot{E} = \dfrac{1}2(\stripeye{\dot{\vec{\pvelo}} \wedge \vec{\momo}} +\stripeye{\vec{\pvelo} \wedge \dot{\vec{\momo}}}) = 0$. \end{itemize} \end{proof} \mysubsubsection{Inertia tensor of a particle} Assume the particle is \quot{governed by} a euclidean motion $\vec{g}$ with associated euclidean velocity state $\vec{\velo} := \dot{\vec{g}}(0)$. Then $\vec{\momo}$, $\vec{\pvelo}$, and $E$ depend on $\vec{\velo}$ as follows: \begin{eqnarray} \label{eqn:inertiaparticle1} \dot{\vec{R}} &=& 2(\vec{\velo} \times \vec{R} ) \\ \label{eqn:inertiaparticle2} \vec{\momo} &=& 2m(\vec{R} \vee (\vec{\velo} \times \vec{R} ))\\ \label{eqn:inertiaparticle3} \vec{\pvelo} &=&2(\vec{R} \vee (\vec{\velo} \times \vec{R} ))\eye \\ \label{eqn:inertiaparticle4} &=&2(\vec{R} \times (\vec{\velo} \times \vec{R} )) \\ \label{eqn:inertiaparticle5} E &=& -2m\stripeye{\vec{\pvelo} \wedge \vec{\momo}}\\ \label{eqn:inertiaparticle6} &=& -2m\stripeye{(\vec{R} \times (\vec{\velo} \times \vec{R} )) \wedge (\vec{R} \vee (\vec{\velo} \times \vec{R} ))}\\ \label{eqn:inertiaparticle7} &=& -\stripeye{\vec{\velo} \wedge \vec{\momo}} \end{eqnarray} \versions{The step from \Eq{eqn:inertiaparticle3} to \Eq{eqn:inertiaparticle4} follows from \Eq{eqn:ex4-5}. The step from \Eq{eqn:inertiaparticle5} to \Eq{eqn:inertiaparticle7} is equivalent to the assertion that $2(\vec{\velo}\wedge\vec{\momo})= \vec{\pvelo} \wedge \vec{\momo}$. From $\vec{\momo} = m\vec{R} \vee \dot{\vec{R}}$ it is enough to show $2(\vec{\velo}\vee \vec{R}) = \vec{\pvelo} \vee \vec{R}$. Since both sides of the equation are planes passing through $\vec{R}$, it only remains to show that the planes have the same normal vectors. 
This is equivalent to \begin{theorem}\label{thm:simplerest} $2(\vec{\velo} \times \vec{R}) = \vec{\pvelo} \times \vec{R}$. \end{theorem} \begin{proof} \begin{align} \vec{\pvelo} \times \vec{R} &= 2(\vec{R} \times (\vec{\velo} \times \vec{R}) \times \vec{R}) \\ &= 2(\vec{R} (\vec{\velo} \times \vec{R}) \vec{R}) \\ &= (\vec{R}(\vec{\velo} \vec{R} - \vec{R} \vec{\velo})\vec{R}) \\ &= (\vec{R}\vec{\velo}\vec{R}^2 - \vec{R}^2\vec{\velo}\vec{R}) \\ &= (-\vec{R}\vec{\velo} + \vec{\velo}\vec{R}) \\ &=2(\vec{\velo} \times \vec{R}) \end{align} Here we have used the fact that $\vec{R}\cdot (\vec{\velo} \times \vec{R})=0$, the definition of $\times$, the fact that for euclidean points $\vec{R}^2=-1$, and finally the definition of $\times$ a second time. \qed \end{proof} The theorem immediately yields two corollaries: \begin{corollary}\label{cor:restmom} The difference $\vec{\Xi} := 2 \vec{\velo} - \vec{\pvelo}$ is a simple bivector incident with $\vec{R}$. \end{corollary} \begin{proof} The theorem implies $\vec{\Xi} \times \vec{R} = 0$. Since this is the normal direction of the plane $\vec{\Xi} \vee \vec{R}$, and $\vec{\Xi}$ is euclidean, this implies $\vec{\Xi} \vee \vec{R} = 0$. The null plane of a point, however, can only vanish if the bivector is simple and is incident with the point. \qed \end{proof} \begin{corollary} $\vec{\Xi}$ is the conjugate line of $\vec{\pvelo}$ with respect to the null polarity $\vec{\velo}$. \end{corollary} \begin{proof} This follows from the observation that the conjugate of a line $\fio$ with respect to a non-simple bivector $\vec{\velo}$ is a line lying in the pencil spanned by $\fio$ and $\vec{\velo}$. This condition is clearly satisfied by both $\vec{\Xi}$ and $\vec{\pvelo}$. Since there are at most two such lines in the pencil, the proof is complete. \qed \end{proof} We return to the theme of Newtonian particles below in \Sec{sec:rbm2}.
}{The step from \Eq{eqn:inertiaparticle4} to \Eq{eqn:inertiaparticle5} is described in more detail in \fullversion.} Define a real-valued bilinear operator $\inert$ on pairs of bivectors: \begin{eqnarray} \label{eqn:symm1} \inert(\vec{\velo}, \vec{\pip}) &:=& -\frac{m}2\stripeye{((\vec{R} \vee 2(\vec{\velo} \times \vec{R} ))\eye) \wedge (\vec{R} \vee 2(\vec{\pip} \times \vec{R} ))} \\ \label{eqn:symm2} &=& \frac{m}2(\vec{R} \vee 2(\vec{\velo} \times \vec{R} )) \cdot (\vec{R} \vee 2(\vec{\pip} \times \vec{R} )) \end{eqnarray} where the step from \Eq{eqn:symm1} to \Eq{eqn:symm2} can be deduced from \Sec{sec:enumpro3}. \Eq{eqn:symm2} shows that $\inert$ is symmetric since $\cdot$ on bivectors is symmetric: $\vec{\Lambda} \cdot \vec{\Delta} = \vec{\Delta} \cdot \vec{\Lambda}$. We call $\inert$ the \emph{inertia tensor} of the particle, since $E =\inert(\vec{\velo},\vec{\velo}) = -\stripeye{\vec{\velo} \wedge \vec{\momo}} $. We'll construct the inertia tensor of a rigid body out of the inertia tensors of its particles below. We overload the operator and write $\vec{\momo} = \inert(\vec{\velo})$ to indicate the polar relationship between $\vec{\momo}$ and $\vec{\velo}$. \subsubsection{Rigid body motion} \label{sec:rbm2} Begin with a finite set of mass points $P_i$; for each derive the velocity state $\vec{\pvelo}_i$, the momentum state $\vec{\momo}_i$, and the inertia tensor $\inert_i$.\footnote{We restrict ourselves to the case of a finite set of mass points, since extending this treatment to a continuous mass distribution presents no significant technical problems; summations have to be replaced by integrals. } Such a collection of mass points is called a \emph{rigid body} when the euclidean distance between each pair of points is constant. 
Extend the momenta and energy to the collection of particles by summation: \begin{subequations} \label{eqn:lwe} \begin{eqnarray} \vec{\momo}&:=& \sum{\vec{\momo}_i} = \sum\inert_i(\vec{\velo}) \label{eqn:lwe2} \\ E&:=& \sum{E_i} = \sum{\inert_i(\vec{\velo}, \vec{\velo})} \end{eqnarray} \end{subequations} Since for each single particle these quantities are conserved when $F=0$, this is also the case for the aggregate $\vec{\momo}$ and $E$ defined here. We introduce the inertia tensor $\inert$ for the body: \begin{definition} $\inert:=\sum{\inert_i}$. \end{definition} Then $\vec{\momo} = \inert(\vec{\velo})$ and $E = \inert(\vec{\velo},\vec{\velo})$; neither formula requires a summation over the particles: the shape of the rigid body has been encoded into $\inert$. \versions{We sometimes use the identity \begin{equation} \label{eqn:dotwedge} \inert(\vec{\velo}_1,\vec{\velo}_2) = -\stripeye{\vec{\velo}_1 \wedge \inert(\vec{\velo}_2)}~~~~\text{(Exercise)} \end{equation} which is a consequence of the fact that the individual inertia tensors of the particles exhibit this property. }{}One can proceed traditionally and diagonalize the inertia tensor by finding the center of mass and moments of inertia (see \cite{arnold78}). Due to space constraints we omit the details. Instead, we sketch how to integrate the inertia tensor more tightly into the Clifford algebra framework. \mysubsubsection{Clifford algebra for inertia tensor} We define a Clifford algebra $\mathbf{C}_\inert$ based on $\pdgrassgr{4}{2}$ by attaching the positive definite quadratic form $\inert$ as the inner product.\footnote{It remains to be seen if this approach represents an improvement over the linear algebra approach, which could also be maintained in this setting.} We denote the pseudoscalar of this alternative Clifford algebra by $\eye_\inert$, and the inner product of bivectors by $\langle, \rangle_\inert$. We use the same symbols to denote bivectors in $W^*$ as 1-vectors in $\mathbf{C}_\inert$.
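The assembly of $\inert$ by summation has a direct classical analogue that can be tested numerically. The sketch below (our own illustration, using $(\omega, v)$ twists instead of bivectors; masses and points are random test data) recovers the $6\times 6$ matrix of the quadratic form from the kinetic energy by polarization and checks that it is symmetric and positive definite:

```python
import numpy as np

# Classical analogue of the body's inertia tensor: a symmetric quadratic
# form on twists (omega, v) assembled by summing over the mass points.
rng = np.random.default_rng(1)
masses = rng.uniform(1.0, 2.0, size=5)
points = rng.normal(size=(5, 3))

def kinetic_energy(twist):
    omega, v = twist[:3], twist[3:]
    return 0.5 * sum(m * np.dot(np.cross(omega, r) + v, np.cross(omega, r) + v)
                     for m, r in zip(masses, points))

# Recover the 6x6 matrix of the form by polarization:
# B(x, y) = (Q(x + y) - Q(x - y)) / 4.
E6 = np.eye(6)
A = np.array([[(kinetic_energy(E6[i] + E6[j]) - kinetic_energy(E6[i] - E6[j])) / 4
               for j in range(6)] for i in range(6)])

assert np.allclose(A, A.T)                                  # the form is symmetric
twist = rng.normal(size=6)
assert np.isclose(twist @ A @ twist, kinetic_energy(twist))  # E = I(V, V)
# With non-collinear mass points the form is positive definite:
assert np.all(np.linalg.eigvalsh(A) > 0)
```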
Bivectors in $W$ are represented by 5-vectors in $\mathbf{C}_\inert$. Multiplication by $\eye_\inert$ swaps 1-vectors and 5-vectors in $\mathbf{C}_\inert$; we use $\mathbf{J}$ (lifted to $\mathbf{C}_\inert$) to convert 5-vectors back to 1-vectors as needed. The following theorem, which we present without proof, shows how to obtain $\vec{\momo}$ directly from $\eye_\inert$ in this context: \begin{theorem} \label{thm:inert} Given a rigid body with inertia tensor $\inert$ and velocity state $\vec{\velo}$, the momentum state $\vec{\momo} = \inert(\vec{\velo}) = \mathbf{J}(\vec{\velo} \eye_\inert)$. \end{theorem} Conversely, given a momentum state $\momo$, we can manipulate the formula in the theorem to deduce: \begin{equation*} \vec{\velo} =\inert^{-1}(\vec{\momo}) = \mathbf{J}(\vec{\momo})\eye_\inert^{-1} \end{equation*} In the sequel we denote the polarity on the inertia tensor by $\inert(\vec{\velo})$ and $\inert^{-1}(\vec{\momo})$, leaving open whether the Clifford algebra approach indicated here is followed. \versions{ \mysubsubsection{Newtonian particles, revisited} Now that we have derived the inertia tensor for a euclidean rigid body, it is instructive to return to the formulation of euclidean particles above (\Sec{sec:newtonpart}). We can see that in this formulation, particles exhibit properties usually associated to rigid bodies. \begin{itemize} \item $E = -\frac12\stripeye{\vec{\pvelo} \wedge \vec{\momo}}$: The kinetic energy is the result of a dual pairing between the particle's velocity state and its momentum state, considered as bivectors. \item $\vec{\pvelo} = \frac1m \vec{\momo} \eye$: The dual pairing is given by the polarity on the euclidean metric quadric, scaled by $\frac1m$. This pairing is degenerate and only goes in one direction: from the momentum state to produce the velocity state.
\item $E = -\stripeye{\vec{\velo} \wedge \vec{\momo}}$: the same energy is obtained by using twice the global velocity state in place of the particle's velocity state. This follows from \Thm{thm:simplerest}. \end{itemize} \exerc \begin{enumerate} \item Verify that the linear and angular momentum of a particle appear as $\vec{\momo}_o$ and $\vec{\momo}_\infty$, resp. \item Verify the equalities in \Eq{eqn:kinen}. \end{enumerate} }{\vspace{-.2in}} \subsubsection{The Euler equations for rigid body motion} In the absence of external forces, the motion of a rigid body is completely determined by its momentary velocity state or momentum state at a given moment. How can one compute this motion? First we need a few facts about coordinate systems. \versions{\textbf{Coordinate systems.} Up til now, we have been considering the behavior of the system at a single, arbitrary moment of time. But if we want to follow a motion over time, then there will be two natural coordinate systems. One, the \emph{body} coordinate system, is fixed to the body and moves with it as the body moves through space. The other, usually called the \emph{space} coordinate system, is the coordinate system of an unmoving observer. Once the motion starts, these two coordinate systems diverge.}{} The following discussion assumes we observe a system as it evolves in time. All quantities are then potentially time dependent; instead of writing $\vec{g}(t)$, we continue to write $\vec{g}$ and trust the reader to bear in mind the time-dependence. We use the subscripts $X_s$ and $X_c$\footnote{From \emph{c}orpus, Latin for body.} to distinguish whether the quantity $X$ belongs to the space or the body coordinate system. The conservation laws of the previous section are generally valid only in the space coordinate system, for example, $\dot{\vec{\momo}}_s = 0$. On the other hand, the inertia tensor will be constant only with respect to the body coordinate system, so, $\vec{\momo}_c = \inert( \vec{\velo}_c) $. 
When we consider a euclidean motion $\vec{g}$ as being applied to the body, then the relation between body and space coordinate systems for \emph{any} element $\vec{X} \in \pdclal{3}{0}{1}$, with respect to a motion ${\vec{g}}$, is given by the sandwich operator: \[ \vec{X}_s = \vec{g} \vec{X}_c\widetilde{\vec{g}} \] \begin{definition} The \emph{velocity in the body} $\vec{\velo}_c :=\tilde{\vec{g}} \dot {\vec{g}}$, and the \emph{velocity in space} $\vec{\velo}_s :=\vec{g} \vec{\velo}_c \tilde{\vec{g}}$. \end{definition} \begin{definition} The \emph{momentum in the body} $\vec{\momo}_c :=\inert( \vec{\velo}_c)$, and the \emph{momentum in space} $\vec{\momo}_s := \vec{g} \vec{\momo}_c \tilde{\vec{g}}$. \end{definition} We derive a general result for a time-dependent element (of arbitrary grade) in these two coordinate systems: \begin{theorem} \label{thm:liebracket} For time-varying $\vec{X} \in \pdclal{3}{0}{1}$ subject to the motion $\vec{g}$ with velocity in the body $\vec{\velo}_c$, \[ \dot{\vec{X}}_s = \vec{g}(\dot{\vec{X}}_c+2(\vec{\velo}_c \times \vec{X}_c )) \tilde{\vec{g}} \] \end{theorem} \begin{proof} \begin{eqnarray*} \dot{\vec{X}_s} &=&\dot{\vec{g}}\vec{X}_c\tilde{\vec{g}}+\vec{g}\dot{\vec{X}}_c \tilde{\vec{g}}+ \vec{g}\vec{X}_c \dot{\tilde{\vec{g}}}\\ &=&\vec{g}(\tilde{\vec{g}}\dot{\vec{g}}\vec{X}_c+\dot{\vec{X}}_c+ \vec{X}_c\dot{\tilde{\vec{g}}}{\vec{g}}) \tilde{\vec{g}}\\ &=& \vec{g}({\vec{\velo}}_c\vec{X}_c+\dot{\vec{X}_c}+\vec{X}_c \widetilde{\vec{\velo}}_c) \tilde{\vec{g}}\\ &=& \vec{g}(\dot{\vec{X}}_c+ \vec{\velo}_c \vec{X}_c - \vec{X}_c\vec{\velo}_c) \tilde{\vec{g}} \\ &=& \vec{g}(\dot{\vec{X}}_c+2(\vec{\velo}_c \times \vec{X}_c )) \tilde{\vec{g}} \end{eqnarray*} The next-to-last equality follows from the fact that for bivectors, $\widetilde{\vec{\velo}} = - \vec{\velo}$; the last equality is the definition of the commutator product. \qed \end{proof} We'll be interested in the case where $\vec{X}_c$ is a bivector.
In this case, $\vec{X}_c$ and $\vec{\velo}_c$ can be considered as Lie algebra elements, and $2(\vec{\velo}_c \times \vec{X}_c)$ is called the \emph{Lie bracket}. It expresses the change in one ($\vec{X}$) due to an instantaneous euclidean motion represented by the other ($\vec{\velo}$). \subsubsection{Solving for the motion} Since $\vec{\velo}_c =\widetilde{\vec{g}} \dot{\vec{g}}$, $\dot{\vec{g}} = \vec{g} \vec{\velo}_c$, a first-order ODE. If we had a way of solving for $\vec{\velo}_c$, we could solve for $\vec{g}$. If we had a way of solving for $\vec{\momo}_c$, we could apply \Theorem{thm:inert} to solve for $\vec{\velo}_c$. So, how to solve for $\vec{\momo}_c$? We now restrict ourselves to force-free motion. Then $\dot{\vec{\momo}}_s = 0$: the momentum of the rigid body in space is constant. By \Theorem{thm:liebracket}, \begin{equation}\label{eqn:dmzero} 0 = \dot{\vec{\momo}}_s = {\vec{g}}(\dot{\vec{\momo}}_c+2 (\vec{\velo}_c \times \vec{\momo}_c)) \widetilde{\vec{g}} \end{equation} The only way the RHS can be identically zero is for the expression within the parentheses to be identically zero, implying: \[ \dot{\vec{\momo}}_c = 2 \vec{\momo}_c \times \vec{\velo}_c \] Using the inverse inertia tensor to express the velocity in terms of the momentum yields a differential equation purely in terms of the momentum: \begin{eqnarray*} \dot{\vec{\momo}}_c &=& 2 \vec{\momo}_c \times \inert^{-1}(\vec{\momo_c}) \end{eqnarray*} \versions{One can also express this ODE in terms of the velocity state alone: \begin{eqnarray*} \dot{\vec{\velo}}_c = \inert^{-1}(\dot{\vec{\momo}}_c) &=& 2\inert^{-1}(\vec{\momo}_c \times \inert^{-1}(\vec{\momo_c}) )\\ &=& 2\inert^{-1}(\inert(\vec{\velo}_c) \times \vec{\velo}_c ) \end{eqnarray*} \vspace{-.2in}}{} When the inner product is written out in components, one arrives at the well-known Euler equations for the motion (\cite{arnold78}, p. 143).
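In classical notation the momentum equation reduces, for a pure rotation, to the familiar Euler equations $\dot{m} = m \times I^{-1}m$ for the body angular momentum $m = I\omega$. The following Python sketch (our own illustration, with made-up principal moments) integrates them with a standard RK4 step and checks that the momentum norm and the kinetic energy are conserved, as the theory predicts:

```python
import numpy as np

# Classical Euler equations for force-free rotation: with body angular
# momentum m = I w, the momentum evolves as m' = m x w = m x I^{-1} m.
I = np.diag([1.0, 2.0, 3.0])        # principal moments of inertia (made up)
Iinv = np.linalg.inv(I)

def mdot(m):
    return np.cross(m, Iinv @ m)

# Integrate with classical RK4.
m0 = np.array([0.1, 1.0, 0.1])      # near the unstable middle axis
m = m0.copy()
h, steps = 1e-3, 20000
for _ in range(steps):
    k1 = mdot(m)
    k2 = mdot(m + h/2 * k1)
    k3 = mdot(m + h/2 * k2)
    k4 = mdot(m + h * k3)
    m = m + h/6 * (k1 + 2*k2 + 2*k3 + k4)

# |m| and the kinetic energy m . I^{-1} m / 2 are invariants of the flow.
assert np.isclose(np.linalg.norm(m), np.linalg.norm(m0), rtol=1e-6)
assert np.isclose(m @ Iinv @ m, m0 @ Iinv @ m0, rtol=1e-6)
```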
The complete set of equations for the motion $g$ is given by the pair of first-order ODE's: \begin{eqnarray}\label{eqn:eqnmot} \dot{\vec{g}} &=& \vec{g}\vec{\velo}_c\\ \dot{\vec{\momo}}_c &=& 2 \vec{\momo}_c \times \vec{\velo}_c \end{eqnarray} where $\vec{\velo}_c = \inert^{-1}(\vec{\momo}_c )$. When written out in full, this gives a set of 14 first-order ODE's. The solution space is 12-dimensional; the extra dimensions correspond to the normalization $\vec{g}\widetilde{\vec{g}}=1$. At this point the solution continues as in the traditional approach, using standard ODE solvers. Our experience is that evaluating the Equations (\ref{eqn:eqnmot}) is no more expensive than traditional methods. \versions{}{For an extension to account for external forces, see \fullversion.} \versions{ \subsubsection{External forces} \label{sec:extforce} The external force $F$ acting on the rigid body is the sum of the external forces $F_i$ acting on the individual mass points. In traditional notation, $F_i = m_i\ddot{\mathbf{r}}_i$. What is the homogeneous form for $F_i$? In analogy to the velocity state of the particle, we define the acceleration \emph{spear} of the particle $\vec{\Upsilon}_i := \vec{R}_i \vee \ddot{\vec{R}}_i$, and the force state of the particle $\vec{\foro}_i : = m_i\vec{\Upsilon}_i$.
Then the total force $\vec{\foro}$ is the sum: \begin{eqnarray*} \vec{\foro} &=&\sum{\vec{\foro}_i} \\ \end{eqnarray*} The following theorem will be needed below in the discussion of work (\Sec{sec:work}): \begin{theorem} \label{thm:force} $\vec{\foro} = \dot{\vec{\momo}}$ \end{theorem} \begin{proof} Take the derivative of \Eq{eqn:lwe2}: \begin{eqnarray*} \dot{\vec{\momo}} &=& \sum{m_i \frac{d}{dt}(\vec{R}_i \vee \dot{\vec{R}}_i)} \\ &=& \sum{m_i (\dot{\vec{R}}_i \vee \dot{\vec{R}}_i + \vec{R}_i \vee \ddot{\vec{R}}_i)} \\ &=& \sum{m_i (\vec{R}_i \vee \ddot{\vec{R}}_i)}\\ &=& \vec{\foro} \end{eqnarray*}\qed \end{proof} Theorem \ref{thm:force} applied to \Eq{eqn:dmzero} yields the Euler equations for motion with external forces: \begin{eqnarray*} \vec{\foro}_s &=& \dot{\vec{\momo}}_s = \vec{g}(\dot{\vec{\momo}}_c+2 ( \vec{\velo}_c \times \vec{\momo}_c))\widetilde{\vec{g}} \\ \widetilde{\vec{g}} \vec{\foro}_s\vec{g}&=&\dot{\vec{\momo}}_c+2 ( \vec{\velo}_c \times \vec{\momo}_c) \\ \vec{\foro}_c&=&\dot{\vec{\momo}}_c+2 ( \vec{\velo}_c \times \vec{\momo}_c) \\ \dot{\vec{\momo}}_c&=&\vec{\foro}_c +2 (\vec{\momo}_c \times \vec{\velo}_c) \end{eqnarray*} Note that the forces have to be converted from the world to the body coordinate system. \subsubsection{Work} \label{sec:work} As a final example of the projective approach, we discuss the concept of \emph{work}. Recall that $\vec{\momo} = \inert( \vec{\velo})$, so $\dot{\vec{\momo}} = \inert(\dot{\vec{\velo}})$, and the definition of kinetic energy for a rigid body: $E = \frac{m}2 \inert(\vec{\velo}, \vec{\velo})$.
\begin{theorem} $\dot{E} =-\stripeye{\vec{\velo} \wedge \vec{\foro}}$ \end{theorem} \begin{proof} \begin{eqnarray*} \dot{E} &=& \frac{m}2\frac{d}{dt} \inert(\vec{\velo}, \vec{\velo}) \\ & = & \frac{m}2 ( \inert(\dot{\vec{\velo}}, \vec{\velo}) +\inert(\vec{\velo}, \dot{\vec{\velo}})) \\ &=& m\inert( \vec{\velo}, \dot{\vec{\velo}})\\ &=& -\stripeye{\vec{\velo} \wedge m\inert(\dot{\vec{\velo}})}= -\stripeye{\vec{\velo} \wedge m\dot{\vec{\momo}} }\\ &=& -\stripeye{ \vec{\velo} \wedge \vec{\foro}} \end{eqnarray*} where we apply the Leibniz rule, the symmetry of $\inert$, \Eq{eqn:dotwedge} and finally Thm.\ \ref{thm:force}. \qed \end{proof} In words: the rate of change of the kinetic energy is equal to the signed magnitude of the outer product of force and velocity. This is noteworthy in that it does not involve the metric directly. $\dot{E}$ is sometimes called the \emph{power}. The \emph{work} done by the force between time $t_0$ and $t$ is defined to be the integral: \begin{eqnarray*} w(t) = E(t) - E(t_0) &=& \int_{t_0}^{t}\dot{E}\, ds \\ &=& \int_{t_0}^{t}-\stripeye{\vec{\velo} \wedge \vec{\foro}}\,ds \\ \end{eqnarray*} The integrand depends on the incidence properties of the force and the velocity, as a line/point pair. If the two elements are incident, then $\vec{\foro} \wedge \vec{\velo} = 0$ and there is no work done; the further away $\vec{\velo}$ lies from $\vec{\foro}$ (in $\complexspace$!), the greater the power and hence the work done. One can be more precise: \begin{theorem} \label{thm:forcedist} Suppose $\vec{\foro}$ is a single force, $\vec{\velo}$ is a rotator, $d$ is the euclidean distance between the lines $\vec{\foro}$ and $\vec{\velo}$, and $\alpha$ is the angle between the two direction vectors. Then $ \stripeye{\vec{\velo} \wedge \vec{\foro}}= -d \sin{\alpha} \| \vec{\velo} \| \| \vec{\foro} \|.$ \end{theorem} \begin{proof} Left as an exercise.
\qed \end{proof} In words: the rate of change of the kinetic energy is proportional to the intensity of the force, the intensity of the velocity, and the euclidean distance between the force line and the velocity line. \textbf{Example.} Imagine an ice skater moving along the surface of a frozen lake with negligible friction; the only force is gravity. Assuming gravity is in the negative $z$-direction acting on the skater located at the origin, then $\vec{\foro} = g\e{12}$ (corresponding to the intersection line of the planes $x=0$ and $y=0$, with intensity given by the gravitational constant $g$). Consider three possible motions of the skater: \begin{itemize} \item The motion of the skater is a translation in the $x$-direction given by an ideal line of the form $\vec{\velo}=d\e{01}$, $d<0$. Then $\vec{\foro} \wedge \vec{\velo} = 0$, so no work is required for the skater to skate! \item The skater spins around a fixed point. Then the velocity state relative to the natural diagonalized form of the inertia tensor has null ideal part $\vec{\velo}_\infty = 0$ and the corresponding momentum state $\vec{\momo} = \inert(\vec{\velo})$ has null euclidean part $\vec{\momo}_o = 0$: it's a momentum couple: a momentum carried by an ideal line! \item As the skater spins, she stretches her arms out, then pulls her arms close to her body. This latter movement decreases the entries in the inertia tensor $\inert$, increasing the entries in $\inert^{-1}$; since $\vec{\velo} = \inert^{-1}(\vec{\momo})$, her velocity increases proportionally in intensity: she spins faster. \end{itemize} One can see from this example the advantages of the projective approach: it unifies calculations, and handles exceptional cases, such as the translations and couples in the above example, at no extra cost. }{ }
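The arms-in spin-up in the last item is, in ordinary terms, conservation of angular momentum: with no work done about the spin axis, $I\omega$ is constant, so shrinking the inertia raises the spin rate. The sketch below is only a scalar toy model with hypothetical inertia values, not an implementation of the projective formalism above.

```python
# Toy check of the spinning-skater example: with no work done about the
# spin axis, the angular momentum L = I * omega is conserved, so decreasing
# the moment of inertia I (arms pulled in) increases the spin rate omega.

def spin_rate_after(I_before, omega_before, I_after):
    """Angular velocity after the inertia changes, from I * omega = const."""
    L = I_before * omega_before        # conserved angular momentum
    return L / I_after

omega0 = 2.0             # rad/s with arms stretched out (hypothetical value)
I_out, I_in = 4.0, 1.0   # kg m^2 before/after pulling the arms in (hypothetical)

print(spin_rate_after(I_out, omega0, I_in))   # 8.0: quartering I quadruples omega
```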
\begin{document} \title[Composition Algebras and Algebraic Groups]{Composition Algebras and Outer Automorphisms of Algebraic Groups} \author[S. Alsaody]{Seidon Alsaody} \address{Uppsala University\\Dept.\ of Mathematics\\P.O.\ Box 480\\751 06 Uppsala\\Sweden} \email{seidon.alsaody@math.uu.se} \begin{abstract} In this note, we establish an equivalence of categories between the category of all eight-dimensional composition algebras with any given quadratic form $n$ over a field $k$ of characteristic not two, and a category arising from an action of the projective similarity group of $n$ on certain pairs of automorphisms of the group scheme $\PGO^+(n)$ defined over $k$. This extends results recently obtained in the same direction for symmetric composition algebras. We also derive known results on composition algebras from our equivalence. \end{abstract} \subjclass[2010]{17A75, 20G15, 11E88.} \keywords{Composition algebras, algebraic groups, outer automorphisms, triality.} \maketitle \section{Introduction} The problem of classifying finite-dimensional composition algebras up to isomorphism is a long open one which has attracted much attention, as in \cite{EM}, \cite{EP}, \cite{P}, \cite{Ca} and \cite{A4}. Some classes of these algebras, such as the symmetric ones, are quite well understood. In general, however, the problem is far from being solved. Finite-dimensional composition algebras necessarily have dimension 1, 2, 4 or 8, and it is the eight-dimensional case which is the most widely open and the one on which we shall focus here. Part of the difficulty arises from the appearance of the triality phenomenon in the isomorphism criteria. In the recent paper \cite{CKT}, the authors established a correspondence between eight-dimensional symmetric composition algebras with quadratic form $n$ over a field $k$ and outer automorphisms of order three of the affine group scheme $\PGO^+(n)$, called trialitarian automorphisms. 
They further showed that this induces a bijection between isomorphism classes of symmetric composition algebras on the one hand, and conjugacy classes of trialitarian automorphisms on the other. This was further developed in \cite{CEKT}. Our aim is to extend this approach. Namely, building on the results of \cite{CKT} and \cite{CEKT}, we relate not necessarily symmetric composition algebras of dimension eight with quadratic form $n$ to certain pairs of outer automorphisms of $\PGO^+(n)$, and prove that isomorphisms of such algebras correspond bijectively to simultaneous conjugation by inner automorphisms of $\PGO(n)$. We are then also able to derive some previously known isomorphism conditions from this description, which sheds new light on the classification problem of composition algebras. \section{Preliminaries} Throughout the paper $k$ denotes an arbitrary field of characteristic not two. \subsection{Composition Algebras} Let $V$ be a finite-dimensional vector space. A \emph{quadratic form} on $V$ is a map $n:V\to k$ satisfying $n(\alpha x)=\alpha^2n(x)$ for each $\alpha\in k$ and $x\in V$, and such that the map $b_n:V\times V\to k$ defined by \[b_n(x,y)=n(x+y)-n(x)-n(y)\] is bilinear. The pair $(V,n)$ is then called a \emph{quadratic space}. For each subset $U\subseteq V$, we write $U^\bot$ for the orthogonal complement of $U$ with respect to $b_n$. The form $n$ is said to be \emph{non-degenerate} if $V^\bot=\{0\}$. \begin{Def} An \emph{algebra over $k$} or \emph{$k$-algebra} is a $k$-vector space with a bilinear multiplication. A \emph{composition algebra} over $k$ is a $k$-algebra endowed with a non-degenerate multiplicative quadratic form $n$. \end{Def} We denote the operators of left and right multiplication by an element $a$ in an algebra $A$ by $L_a^A$ and $R_a^A$, respectively. By definition, the datum of a composition algebra is a triple $(C,\cdot,n)$ where $C$ denotes the vector space, $\cdot$ the multiplication, and $n$ the quadratic form. 
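As a concrete sanity check of the definition (outside the formal development), Hamilton's quaternions over $\mathbb{R}$ form a four-dimensional composition algebra: the sum-of-squares norm form is multiplicative. The sketch below verifies $n(xy)=n(x)n(y)$ numerically for one pair of quaternions; it is purely illustrative, since the paper works over an arbitrary field of characteristic not two.

```python
# Check that the quaternion norm n(x) = sum x_i^2 satisfies n(xy) = n(x) n(y),
# i.e. that the real quaternions form a 4-dimensional composition algebra.

def qmul(p, q):
    """Hamilton product of quaternions p = (a,b,c,d), q = (e,f,g,h)."""
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def n(x):
    """The norm form: a non-degenerate multiplicative quadratic form."""
    return sum(t*t for t in x)

p, q = (1.0, 2.0, -1.0, 0.5), (0.0, 3.0, 1.0, -2.0)
print(abs(n(qmul(p, q)) - n(p) * n(q)) < 1e-9)   # True: the norm is multiplicative
```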
Below we shall subsume the multiplication, and often the quadratic form as well, in the notation, thus writing $C$ or $(C,n)$ for $(C,\cdot,n)$. \begin{Rk}\label{R: generalized} In \cite{CKT} the term \emph{normalized composition} is used for the structure of what is here called a composition algebra, and the term \emph{composition} is used for a more general structure, which we will call \emph{generalized composition algebras}. In this sense, a generalized composition algebra over $k$ is a $k$-algebra $A$ endowed with a non-degenerate quadratic form $n$ for which there exists $\lambda\in k^*$, called the \emph{multiplier} of $A$, with $n(xy)=\lambda n(x)n(y)$ for all $x,y\in A$. (Thus a generalized composition algebra with multiplier $\lambda$ is a composition algebra precisely when $\lambda=1$.) Our focus and terminology agrees with \cite{CEKT}, \cite{KMRT} and large parts of the literature. On one occasion we shall use generalized composition algebras as an intermediate step in a proof. \end{Rk} A finite-dimensional composition algebra is said to be a \emph{Hurwitz algebra} if it is unital. The following result is due to \cite{Ka}. \begin{Lma}\label{L: Kaplansky} Let $(C,n)$ be a finite-dimensional composition algebra. Then there exist a Hurwitz algebra $H$ and $f,g\in \ort(n)$ such that $(C,n)=(H_{f,g},n)$. \end{Lma} For any algebra $A$ and any pair $(f,g)$ of linear operators on $A$, the \emph{isotope} $A_{f,g}$ is the algebra with underlying space $A$ and multiplication \[x\cdot y=f(x)g(y),\] where juxtaposition denotes the multiplication of $A$. A definition of the orthogonal group $\ort(n)$ is given in Section 1.4 below. Indeed, if $(C,n)$ is a finite-dimensional composition algebra, then the non-degene\-racy of $n$ implies the existence of some $c\in C$ with $n(c)\neq 0$. Setting $e=n(c)^{-1}c^2$ and $H=C_{(R_e^C)^{-1},(L_e^C)^{-1}}$ we see that $(H,n)$ is a Hurwitz algebra with unity $e$, and $C=H_{f,g}$ with $f=R_e^C$ and $g=L_e^C$.
These belong to $\ort(n)$ as $n(e)=1$ and $n$ is multiplicative. Another class of composition algebras of particular interest is that of symmetric composition algebras. \begin{Def} Let $(C,n)$ be a composition algebra. The bilinear form $b_n$ is called \emph{associative} if, for all $x,y,z\in C$, \[b_n(xy,z)=b_n(x,yz).\] The algebra is called \emph{symmetric} if the bilinear form $b_n$ is associative. \end{Def} We will use the same terminology for generalized composition algebras. Any Hurwitz algebra $H$ is endowed with a canonical involution $i$, defined by fixing the unity and acting as $x\mapsto -x$ on its orthogonal complement. This involution is an isometry and the \emph{para-Hurwitz algebra} $H_{i,i}$ is symmetric. Thus the following lemma is an immediate consequence of Lemma \ref{L: Kaplansky}. \begin{Lma}\label{L: symmetric} Let $(C,n)$ be a finite-dimensional composition algebra. Then there exist a symmetric composition algebra $S$ and $f,g\in \ort(n)$ such that $(C,n)=(S_{f,g},n)$. \end{Lma} When dealing with isotopes of algebras and their isomorphisms, the following easy result is useful. Its proof is a straightforward verification which we leave out. \begin{Lma}\label{L: juggling} Let $A$ and $B$ be arbitrary $k$-algebras and let $f$ and $g$ be invertible linear operators on $A$. If $h:A\to B$ is an isomorphism, then $h f h^{-1}$ and $h g h^{-1}$ are invertible linear operators on $B$, and \[h:A_{f,g}\to B_{h f h^{-1},h g h^{-1}}\] is an isomorphism. \end{Lma} A natural question to ask is when a finite-dimensional quadratic space $(V,n)$ admits the structure of a composition algebra. It is known (see e.g.\ \cite[(33.18)]{KMRT}) that this is the case if and only if for some $r\in\{0,1,2,3\}$, $\dim V=d=2^r$ and $n$ is an $r$-fold Pfister form. We will write $n\in\Pf_r(k)$ to denote that $n$ is an $r$-fold Pfister form over $k$.
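The para-Hurwitz symmetry noted above can also be checked numerically in a toy case: with the real quaternions standing in for a general Hurwitz algebra $H$, and $i$ the canonical involution (quaternion conjugation), the isotope product $x\cdot y=i(x)i(y)$ should make the polar form $b_n$ associative. A minimal, purely illustrative sketch:

```python
# Numerical check (real quaternions as a stand-in for a Hurwitz algebra H):
# the para-Hurwitz product x * y = conj(x) conj(y) makes the polar form b_n
# associative, i.e. b(x*y, z) == b(x, y*z), so H_{i,i} is symmetric.

def qmul(p, q):                        # Hamilton product
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def conj(x):                           # canonical involution: fixes 1, negates i, j, k
    return (x[0], -x[1], -x[2], -x[3])

def para(x, y):                        # multiplication of the isotope H_{i,i}
    return qmul(conj(x), conj(y))

def b(x, y):                           # polar form of n(x) = sum of squares
    return 2.0 * sum(s*t for s, t in zip(x, y))

x = (1.0, 0.5, -2.0, 0.0)
y = (-1.0, 1.0, 0.0, 2.0)
z = (0.0, 1.0, 1.0, 1.0)
print(abs(b(para(x, y), z) - b(x, para(y, z))) < 1e-9)   # True: b_n is associative
```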
We also write $\Comp_d$ for the category of all $d$-dimensional composition algebras over $k$, where the morphisms are all non-zero algebra homomorphisms. These are known to be isometries, hence injective by the non-degeneracy of the quadratic forms, whence they are isomorphisms for dimension reasons. For each $n\in\Pf_r(k)$ we write $\Comp(n)$ for the full subcategory of all composition algebras with quadratic form $n$. As in \cite{CKT} and \cite{CEKT}, we shall consider $\Comp(n)$ for a fixed quadratic form $n\in\Pf_3(k)$, which is arbitrarily chosen. It may not \emph{a priori} be clear if this is useful in understanding the category $\Comp_8$, in the sense that composition algebras with different quadratic forms may still be isomorphic. The following known result shows that this is not a problem. \begin{Prp} Let $n,n'\in\Pf_r(k)$ for some $r\in\{0,1,2,3\}$. \begin{enumerate} \item If $n$ and $n'$ are not isometric, $C\in\Comp(n)$ and $C'\in\Comp(n')$, then $C$ is not isomorphic to $C'$. \item If $n$ and $n'$ are isometric, then every $C'\in\Comp(n')$ is isomorphic to some $C\in\Comp(n)$. \end{enumerate} \end{Prp} \begin{proof} The first item is due to \cite{Pet}. For the second, Lemma \ref{L: Kaplansky} provides a Hurwitz algebra $H'\in\Comp(n')$ and $f',g'\in \ort(n')$ such that $C'=H'_{f',g'}$. The same lemma shows that if $\Comp(n)$ is not empty, then it contains a Hurwitz algebra $H$. Since $n$ and $n'$ are isometric, we conclude with \cite[(33.19)]{KMRT} that there exists an isomorphism $h:H'\to H$, which by the above is an isometry. Then the maps $f=h f' h^{-1}$ and $g=h g' h^{-1}$ belong to $\ort(n)$, whence $H_{f,g}\in\Comp(n)$, and from $h:H'\to H$ being an isomorphism it follows that \[h: H'_{f',g'}\to H_{f,g}\] is an isomorphism. \end{proof} For each isometry class $N\subseteq\Pf_3(k)$, we write $\Comp(N)$ for the full subcategory of $\Comp_8$ of all composition algebras whose quadratic form belongs to $N$. 
Then the above implies the coproduct decomposition \[\Comp_8=\coprod_{N\subseteq\Pf_3(k)} \Comp(N),\] and moreover, for each such class $N$ and each $n\in N$, the full subcategory $\Comp(n)$ is dense in $\Comp(N)$. Thus the classification problem of $\Comp_8$ may be treated by fixing one quadratic form at a time, and it then suffices to consider one quadratic form from each isometry class. Finally, given any algebra $A$ over $k$ and $\lambda \in k$, the \emph{scalar multiple} $\lambda A$ is defined to be equal to $A$ as a vector space, with multiplication \[x\cdot y=\lambda xy.\] Using isotopes, this can be formulated by saying that $\lambda A=A_{\lambda\Id,\Id}=A_{\Id,\lambda\Id}$. \begin{Lma}\label{L: scalars} Let $A$ be a $k$-algebra and $\lambda\in k^*$. \begin{enumerate} \item The map $\lambda\Id: \lambda A\to A$ is an isomorphism of algebras. \item If $A\in \Comp(n)$, then $\lambda A\in \Comp(n)$ if and only if $\lambda=\pm1$. \end{enumerate} \end{Lma} The proof is a straightforward verification. \subsection{Affine Group Schemes} We shall use the functorial approach to algebraic groups, which is developed in \cite{DG}, \cite{W} and \cite{KMRT}. We denote by $k\Alg$ the category of all unital commutative associative algebras over $k$, with algebra homomorphisms as morphisms.\footnote{Despite this notation, we shall not assume associativity, commutativity or unitality when we use the word \emph{algebra} in general.} An \emph{affine scheme} is a functor $\mathbf F: k\Alg\to\Set$ which is \emph{representable}, i.e.\ isomorphic to $\Hom(A_0,-)$ for some $A_0\in k\Alg$. An \emph{affine group scheme} over $k$ is a functor $\mathbf G: k\Alg\to\Grp$ which is representable in the sense that its composition with the forgetful functor into $\Set$ is. A \emph{(normal) subgroup functor} of a functor $\mathbf G:k\Alg\to\Grp$ is a functor $\mathbf H: k\Alg\to\Grp$ such that $\mathbf H(A)$ is a (normal) subgroup of $\mathbf G(A)$ for each $A\in k\Alg$. 
Given a functor $\G:k\Alg\to\Grp$, we write $\Aut(\G)(k)$ for the group of automorphisms of $\G$ defined over $k$. Here, we understand an \emph{automorphism of $\G$ defined over $k$}, or briefly an \emph{automorphism of $\G$}, to be a natural transformation $\bm\eta$ from $\G$ to itself such that for each $A\in k\Alg$, the map $\bm\eta_A:\G(A)\to \G(A)$ is a group automorphism. Each automorphism of $\G$ thus in particular induces an automorphism of the group $G:=\G(k)$. Inner automorphisms of $G$ can conversely be lifted to automorphisms of $\G$. Indeed, if $g\in G$ and $A\in k\Alg$, then the image under $\G$ of the inclusion $\iota_A: k\to A, \alpha\mapsto\alpha1$, is a group homomorphism, and the map \[(\bm{\kappa}_g)_A: \G(A)\to \G(A), h\mapsto \G(\iota_A)(g)h\G(\iota_A)(g)^{-1}\] is a group automorphism of $\G(A)$. The following fact is clear. \begin{Lma}\label{L: lift} Let $\G: k\Alg \to \Grp$ be a functor, and let $g\in G$. Then the map $\bm{\kappa}_g: \G \to \G$ given, for each $A\in k\Alg$, by $(\bm{\kappa}_g)_A$, is an automorphism of $\G$. If $\mathbf H$ is a normal subgroup functor of $\G$, then $\bm{\kappa}_g$ restricts to an automorphism of $\mathbf H$, and for any $r\in \N$, the group $G$ acts on $\left(\Aut(\mathbf H)(k)\right)^r$ by \[g\cdot (\bm\alpha_1,\ldots,\bm\alpha_r)=(\bm\kappa_g \bm\alpha_1\bm\kappa_g^{-1},\ldots,\bm\kappa_g \bm\alpha_r\bm\kappa_g^{-1}).\] \end{Lma} Automorphisms of $\G$ of the form $\bm\kappa_g$ for some $g\in G$ are called \emph{inner}, and automorphisms which are not inner are called \emph{outer}. We write $\Inn(\G)(k)$ for the subgroup of $\Aut(\G)(k)$ consisting of all inner automorphisms. If $\mathbf H$ is a normal subgroup functor of $\G$, we call an automorphism of $\mathbf H$ \emph{weakly inner} (with respect to $\G$) if it is equal to $\bm\kappa_g$ for some $g\in G$, and \emph{strongly outer} otherwise.
\subsection{Groups and Group Schemes of Quadratic Forms} To keep our presentation reasonably self-contained, we will in this section give an introduction to those groups and group schemes that will be needed in the present paper. Our summary is based on \cite{KMRT} and \cite{CKT}, the former of which contains an extensive account of the theory of groups and group schemes related to quadratic (and other) forms. Let $(V,n)$ be a non-degenerate quadratic space. A \emph{similarity} of $(V,n)$ is a linear map $f:V\to V$ such that there exists $\mu(f)\in k^*$ with \[n(f(x))=\mu(f)n(x)\] for all $x\in V$. Similarities form a group denoted by $\go(n)$. For each $f\in \go(n)$, the scalar $\mu(f)$ is called the \emph{multiplier} of $f$, and the assignment $f\mapsto \mu(f)$ defines a group homomorphism $\mu: \go(n)\to k^*$. The kernel of $\mu$ is the \emph{orthogonal group} $\ort(n)$, consisting of all isometries of $V$ with respect to $n$. On the other hand, there is a monomorphism in the other direction, i.e.\ from $k^*$ to $\go(n)$, given by $\lambda\mapsto \lambda\Id$. Abusing notation, we write $k^*$ for its image, which is a central, hence normal, subgroup, and define $\pgo(n)=\go(n)/k^*$. In the category of groups, we therefore have the two exact sequences \[1\to \ort(n) \to \go(n) \xrightarrow{\mu} k^* \to 1\] and \[1\to k^* \to \go(n) \to \pgo(n) \to 1.\] On the level of group schemes, we will mainly be concerned with $\PGO(n)$. To define it, we use the notion of the adjoint involution of the strictly non-degenerate quadratic form $n$. This is an anti-automorphism $\ad_n$ of $\End_kV$ defined by \[b_n(f(x),y)=b_n(x,\ad_n(f)(y)).\] Thus $(\End_kV,\ad_n)$ is an algebra with involution, and we then define $\PGO(n)$ as the automorphism group scheme of this algebra, i.e.\ for each $A\in k\Alg$, \[\PGO(n)(A)=\left\{\psi\in\aut_A(A\otimes\End_kV)|\psi(\ad_n)_A=(\ad_n)_A\psi\right\},\] where $(\ad_n)_A$ is defined on $A\otimes\End_kV$ by extending $\ad_n$.
From \cite[\S 23]{KMRT}, we know that $\PGO(n)\simeq \GO(n)/\G_m$, where $\GO(n)$ is the group scheme of similarities of $n$, with $\GO(n)(k)=\go(n)$, and $\G_m$ is the multiplicative group scheme, with $\G_m(k)=k^*$. Note however that this is a quotient of group schemes, and does not imply that $\PGO(n)(A)\simeq\GO(n)(A)/\G_m(A)$ for each $A$ in $k\Alg$. It is, however, true for $A=k$. Indeed, the Skolem--Noether theorem implies that each $k$-algebra automorphism of $\End_kV$ is inner, and it is then straightforward to check that an inner automorphism $g\mapsto fgf^{-1}$ of $\End_kV$ commutes with $\ad_n$ if and only if $f\in\go(n)$. This defines a surjective group homomorphism from $\go(n)$ to $\PGO(n)(k)$, and as $\End_kV$ is central, the kernel of this homomorphism is $k^*\Id$. Thus we get an isomorphism of groups \[\begin{array}{ll}\pgo(n)\to \PGO(n)(k)& [f]\mapsto \left(g\mapsto fgf^{-1}\right). \end{array}\] \begin{Rk}\label{R: identification} Following custom, we will henceforth identify $\PGO(n)(k)$ with $\pgo(n)$ in view of the above isomorphism. Thus for each $h\in\go(n)$ we identify the inner automorphism $f\mapsto hfh^{-1}$ with the coset $[h]$ of $h$ in $\pgo(n)$. \end{Rk} We finally write $\PGO^+(n)$ for the connected component of the identity of $\PGO(n)$. Under the identification above, its group of rational points corresponds to $\pgo^+(n)=\go^+(n)/k^*$. To define $\go^+(n)$, let $C(V,n)$ be the Clifford algebra of the form $n$, with even part denoted by $C_0(V,n)$. The canonical involution of $C(V,n)$, which induces the identity map on $V$, is denoted by $\sigma$. Then it is known (see e.g.\ \cite[\S 13]{KMRT}) that each $f\in\go(n)$ induces an automorphism $C(f)$ of $C_0(V,n)$, and that the restriction of $C(f)$ to the centre of $C_0(V,n)$ either is the identity or has order two.
The similarity $f\in \go(n)$ is called \emph{proper} if $C(f)$ induces the identity on the centre, and the set $\go^+(n)$ of all proper similarities forms a normal subgroup of index two of $\go(n)$. We also write $\ort^+(n)=\go^+(n)\cap\ort(n)$ for the group of proper isometries of $n$. \subsection{Triality for Unital and Symmetric Composition Algebras} The Principle of Triality, originating from E.\ Cartan \cite{C}, is the following statement. \begin{Prp} Let $n\in\Pf_3(k)$ and let $H\in\Comp(n)$ be a Hurwitz algebra. For each $h\in \ort^+(n)$ there exist $h_1,h_2\in\ort^+(n)$ such that for all $x,y\in H$, \[h(xy)=h_1(x)h_2(y),\] and the pair $(h_1,h_2)$ is unique up to multiplication by $\lambda$ in the first argument and $\lambda^{-1}$ in the second for some $\lambda\in k^*$. \end{Prp} A proof and an elaborate discussion can be found in \cite{SV}. In \cite{CKT} and \cite{CEKT}, triality was discussed using symmetric, rather than unital, composition algebras. We recall some of their main results. \begin{Prp}\label{P: CKT} Let $S$ be a finite-dimensional symmetric generalized composition algebra over $k$, with quadratic form $n\in\Pf_3(k)$. \begin{enumerate}[(i)] \item For each $h\in \go^+(n)$ there exist $h_1,h_2\in \go^+(n)$ such that for any $x,y\in S$, \[h(xy)=h_1(x)h_2(y).\] The pair $([h_1],[h_2])\in\pgo^+(n)^2$ is determined uniquely by $h$. \item For each $r\in\{1,2\}$, the map $\rho_r^S:[h]\mapsto[h_r]$ is an outer automorphism of $\pgo^+(n)$ of order three, and induces an automorphism $\bm\rho_r^S$ of $\PGO^+(n)$ defined over $k$. Moreover, $\bm\rho_1^S\bm\rho_2^S=\ID$. \item The assignment $S\mapsto \bm\rho_1^S$ defines a bijection between symmetric generalized composition algebras of dimension eight, up to scalar multiples, and outer automorphisms of $\PGO^+(n)$ of order three defined over $k$.
\item For any symmetric generalized composition algebra $T$ with quadratic form $n$ and any $h\in\go(n)$, $\kappa_{[h]}\rho_1^S\kappa_{[h]}^{-1}=\rho_1^T$ if and only if there exists $\lambda\in k^*$ such that $\lambda h$ is an isomorphism from $S$ to $T$. \end{enumerate} \end{Prp} From \cite[\S 35]{KMRT} we have the following description of $\Aut\left(\PGO^+(n)\right)(k)$. \begin{Prp}\label{P: KMRT} Let $n\in\Pf_3(k)$ and let $S\in \Comp(n)$ be symmetric. Then \[\Aut\left(\PGO^+(n)\right)(k)=\left\{\bm\kappa_{[g]}\left(\bm\rho_1^S\right)^r|g\in\go(n)\wedge r\in\{0,\pm1\} \right\}.\] Moreover, the group $\Aut\left(\PGO^+(n)\right)(k)/\Inn\left(\PGO^+(n)\right)(k)$ is generated by the cosets of $\bm\rho_1^S$ and $\bm\kappa_{[g]}$ for any $g\in\go(n)\setminus\go^+(n)$, and is isomorphic to $S_3$. \end{Prp} The next lemma shows that automorphisms of $\PGO^+(n)$ are determined by their action on rational points. \begin{Lma}\label{L: independent} Let $n\in\Pf_3(k)$ and $\bm\alpha\in\Aut\left(\PGO^+(n)\right)(k)$. If $\bm\alpha$ induces the identity on $\pgo^+(n)$, then $\bm\alpha$ is the identity in $\Aut\left(\PGO^+(n)\right)(k)$. \end{Lma} \begin{proof} Let $S\in\Comp(n)$ be symmetric. By Proposition \ref{P: KMRT} and Remark \ref{R: identification}, we have $\bm\alpha=\bm\kappa_{[h]}\left(\bm\rho_1^S\right)^r$ for some $h\in\go(n)$ and $r\in\{0,\pm1\}$. If $\bm\alpha$ induces the identity on $\pgo^+(n)$, we have $\kappa_{[h]}\left(\rho_1^S\right)^r=\Id$. Since $\rho_1^S$ is of order three, this implies that \[\kappa_{[h^2]}=\left(\rho_1^S\right)^r,\] and since the left hand side is an inner automorphism of $\pgo^+(n)$ and $\left(\rho_1^S\right)^{\pm1}$ is not, by Proposition \ref{P: CKT}, we have $r=0$ and thus $\bm\alpha=\bm\kappa_{[h]}$. Since the centralizer of $\pgo^+(n)$ in $\pgo(n)$ is trivial we moreover have $[h]=1$, whence $\bm\kappa_{[h]}$ is the identity. Altogether $\bm\alpha$ is the identity, as desired.
\end{proof} \subsection{Group Action Groupoids} By a \emph{groupoid} we understand any category where all morphisms are isomorphisms. Let $G$ be a group acting on a set $X$.\footnote{Unless otherwise stated, all group actions are assumed to be from the left.} This gives rise to a groupoid $_GX$ as follows. The object set of $_GX$ is $X$, and for each $x,y\in X$, the set of morphisms from $x$ to $y$ is \[_GX(x,y)=\{(x,y,g)\in X\times X\times G|g\cdot x=y\}.\] When the objects $x$ and $y$ are clear from context, we will denote the morphism $(x,y,g)$ simply by $g$. Group action groupoids are used to construct \emph{descriptions} of certain groupoids in the sense of \cite{D}, whereby a description of a groupoid $\mathfrak C$ is an equivalence of categories from a group action groupoid to $\mathfrak C$. Given a description of a groupoid, classifying it up to isomorphism is then transferred to solving the normal form problem for the group action, i.e.\ constructing a cross-section for its orbits. Our approach in this paper is similar, as it consists of constructing an equivalence of categories from a groupoid of algebras to a group action groupoid. \section{Triality for Eight-Dimensional Composition Algebras} Throughout this section, we fix a quadratic form $n\in\Pf_3(k)$. \begin{Lma}\label{L: Triality} Let $C\in\Comp(n)$. Then for each $h\in \go^+(n)$ there exists a pair $(h_1,h_2)\in \go^+(n)^2$ such that for each $x,y\in C$, \[h(xy)=h_1(x)h_2(y),\] and the pair $([h_1],[h_2])\in\pgo^+(n)^2$ is unique. \end{Lma} The maps $h_1$ and $h_2$ are called \emph{triality components of $h$ with respect to $C$}. \begin{proof} If $C$ is symmetric, then the statement follows from Proposition \ref{P: CKT}. In general there exists by Lemma \ref{L: symmetric} a symmetric $S\in\Comp(n)$ and $f,g\in \ort(n)$ such that $C=S_{f,g}$. Denote the multiplication in $S$ by juxtaposition and that in $C$ by $\cdot$. 
If $h_1'$ and $h_2'$ are triality components of $h$ with respect to $S$, then \begin{equation}\label{triality} h(x\cdot y)=h(f(x)g(y))=h_1'f(x)h_2'g(y)=f^{-1}h_1'f(x)\cdot g^{-1}h_2'g(y), \end{equation} whence $h_1=f^{-1}h_1' f$ and $h_2=g^{-1}h_2' g$ are triality components of $h$ with respect to $C$. The uniqueness of $([h_1],[h_2])$ follows from the uniqueness of $([h_1'],[h_2'])$. \end{proof} Note that \eqref{triality} does not use the symmetry of $S$, and arguing as in the above proof, we obtain the following. \begin{Lma}\label{L: relation} Let $B\in\Comp(n)$ and $f,g\in \ort(n)$. Then $C=B_{f,g}$ satisfies \[(\rho_1^B,\rho_2^B)=(\kappa_{[f]}\rho_1^C,\kappa_{[g]}\rho_2^C).\] \end{Lma} Lemma \ref{L: Triality} defines, for any $C\in\Comp(n)$, two maps \[\rho_1^C,\rho_2^C:\pgo^+(n)\to\pgo^+(n)\] by $\rho_r^C([h])=[h_r]$ for each $r\in\{1,2\}$. These in fact define automorphisms of affine group schemes, as the next proposition shows. \begin{Prp}\label{P: rho} For each $C\in\Comp(n)$ and each $r\in\{1,2\}$, the map $\rho_r^C$ is a strongly outer automorphism of $\pgo^+(n)$, and induces an automorphism $\bm\rho_r^C$ of $\PGO^+(n)$ defined over $k$. \end{Prp} \begin{proof} By Lemma \ref{L: symmetric}, there exist a symmetric composition algebra $S$ and $f,g\in \ort(n)$ such that $C=S_{f,g}$, and by Lemma \ref{L: relation}, \begin{equation}\label{rho} (\rho_1^C,\rho_2^C)=(\kappa_{[f]}^{-1}\rho_1^S,\kappa_{[g]}^{-1}\rho_2^S). \end{equation} Proposition \ref{P: CKT}(ii) then implies that $\rho_1^C$ is an automorphism of $\pgo^+(n)$. From the same result we further know that $\rho_1^S$ is an outer automorphism of $\pgo^+(n)$, and it is strongly outer as its square is not inner. If $\rho_1^C=\kappa_{[h]}$ for some $h\in \go(n)$, then \[\rho_1^S=\kappa_{[fh]},\] contradicting the fact that $\rho_1^S$ is strongly outer.
From Proposition \ref{P: CKT} together with Lemma \ref{L: lift} we deduce that $\kappa_{[f]}^{-1}\rho_1^S$ induces an automorphism $\bm\rho_1^C=\bm\kappa_{[f]}^{-1}\bm\rho_1^S$ of $\PGO^+(n)$ defined over $k$. Since $\bm\rho_1^C$ acts as $\rho_1^C$ on rational points, Lemma \ref{L: independent} implies that it is independent of the choice of $S$, $f$ and $g$. The case of $\rho_2^C$ is treated analogously, and the proof is complete. \end{proof} We will prove in the next section that the triality components of a composition algebra of dimension eight essentially carry all the information about the composition algebra. At this point, we will prove that the triality components detect the property of being symmetric, in the following sense. \begin{Lma}\label{L: simultaneous} Assume that $C,D\in\Comp(n)$ and $h\in\go(n)$ satisfy \[\rho_r^C=\kappa_{[h]}\rho_r^D\kappa_{[h]}^{-1}\] for each $r\in\{1,2\}$. If $D$ is symmetric, then so is $C$. \end{Lma} \begin{proof} There exist by Lemma \ref{L: symmetric} a symmetric $S\in\Comp(n)$ and $f,g\in \ort(n)$ such that $C=S_{f,g}$. By Lemma 5.2 of \cite{CKT}, $C$ is symmetric if (and only if) $f,g\in\ort^+(n)$ and satisfy \begin{equation}\label{desired} \begin{array}{lll} \prod_{m=0}^2 \left(\rho_1^S\right)^m([f])=1 & \text{and} & \left(\rho_1^S\right)^2\left([f]^{-1}\right)=[g]. \end{array} \end{equation} To prove that $f\in\ort^+(n)$, recall from Lemma \ref{L: relation} that $\kappa_{[h]}\rho_1^D\kappa_{[h]}^{-1}=\rho_1^C=\kappa_{[f]}^{-1}\rho_1^S$, and therefore \[\bm\kappa_{[h]}\bm\rho_1^D \bm\kappa_{[h]}^{-1}=\bm\kappa_{[f]}^{-1}\bm\rho_1^S\] in the group $\Aut\left(\PGO^+(n)\right)(k)$. In \[\Aut\left(\PGO^+(n)\right)(k)/\Inn\left(\PGO^+(n)\right)(k)\simeq S_3,\] the coset of the left hand side has order three, while if $f\notin\ort^+(n)$, the coset of the right hand side is a product of an element of order two with an element of order three, which by the structure of $S_3$ has order two. 
Thus $f\in\ort^+(n)$, and a similar argument gives $g\in\ort^+(n)$. For the first equality in \eqref{desired}, it follows from $\rho_1^C=\kappa_{[f]}^{-1}\rho_1^S$ that \[\left(\kappa_{[f]}^{-1}\rho_1^S\right)^3=\left(\kappa_{[h]}\rho_1^D\kappa_{[h]}^{-1}\right)^3.\] Expanding and using the fact that $\rho_1^S$ and $\rho_1^D$ are homomorphisms, we get \[\kappa_{[f']}\left(\rho_1^S\right)^3=\kappa_{[h]}\left(\rho_1^D\right)^3\kappa_{[h]}^{-1}\] with $[f']$ being the inverse of $\prod_{m=0}^2 \left(\rho_1^S\right)^m([f])$. Since $D$ and $S$ are symmetric, $\rho_r^D$ and $\rho_r^S$ have order three for each $r$. Thus $\kappa_{[f']}=\Id$, and since the centralizer of $\pgo^+(n)$ in $\pgo(n)$ is trivial, this proves that $[f']$ is trivial, whence the first equality follows. For the second we have $\rho_2^C=\kappa_{[g]}^{-1}\rho_2^S$, and $\rho_2^T=\left(\rho_1^T\right)^2$ whenever $T\in\Comp(n)$ is symmetric. Thus \[\kappa_{[g]}^{-1}\left(\rho_1^S\right)^2=\kappa_{[g]}^{-1} \rho_2^S=\rho_2^C=\kappa_{[h]}\rho_2^D\kappa_{[h]}^{-1}=\kappa_{[h]} \left(\rho_1^D\right)^2\kappa_{[h]}^{-1}=\left(\rho_1^C\right)^2.\] The rightmost expression equals $\left(\kappa_{[f]}^{-1}\rho_1^S\right)^2=\kappa_{[f'']}^{-1}\left(\rho_1^S\right)^2$ with $[f'']=\rho_1^S\left([f]\right)[f]$. Thus \[\kappa_{[g]}^{-1}=\kappa_{[f'']}^{-1}\] and triviality of the centralizer of $\pgo^+(n)$ gives $[g]=[f'']$, which by the first equality in \eqref{desired} equals $\left(\rho_1^S\right)^2\left([f]^{-1}\right)$. This proves the second equality, and the proof is complete. \end{proof} \section{Pairs of Outer Automorphisms} As in the previous section we fix an arbitrary $n\in\Pf_3(k)$. In this section we prove that $\Comp(n)$ is equivalent to a full subcategory of a group action groupoid.
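The groupoid mechanics of Section 2.5 admit a finite toy analogue that is easy to check by machine: a finite group acting on pairs of its own elements by simultaneous conjugation, mimicking the action of $\pgo(n)$ on pairs of automorphisms used below. The sketch (plain Python, purely illustrative) verifies that composing morphisms of the group action groupoid corresponds to multiplying the acting group elements:

```python
from itertools import permutations

# Toy analogue of the group action groupoid _G X of Section 2.5:
# G = S_3 (permutations as tuples) acts on X = G x G by simultaneous
# conjugation, mimicking [h] . (a1, a2) = (k a1 k^-1, k a2 k^-1).

G = list(permutations(range(3)))          # the six elements of S_3

def mul(p, q):                            # composition of permutations
    return tuple(p[q[i]] for i in range(3))

def inv(p):                               # inverse permutation
    r = [0, 0, 0]
    for i, v in enumerate(p):
        r[v] = i
    return tuple(r)

def act(g, pair):                         # simultaneous conjugation
    a1, a2 = pair
    return (mul(mul(g, a1), inv(g)), mul(mul(g, a2), inv(g)))

# Groupoid law: acting by h after g equals acting by h*g, so the morphism
# (x, z, h*g) is the composite of (x, y, g) and (y, z, h).
x = (G[1], G[2])
for g in G:
    for h in G:
        assert act(h, act(g, x)) == act(mul(h, g), x)
print("groupoid composition law verified")
```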
\subsection{Constructing the Functor} By Lemma \ref{L: lift} we know that $\pgo(n)$ acts on \[\Aut\left(\PGO^+(n)\right)(k)\times\Aut\left(\PGO^+(n)\right)(k)\] by \[[h]\cdot(\bm\alpha_1,\bm\alpha_2)=\left(\bm\kappa_{[h]}\bm\alpha_1\bm\kappa_{[h]}^{-1},\bm\kappa_{[h]}\bm\alpha_2\bm\kappa_{[h]}^{-1}\right).\] In the fashion described in Section 2.5, this group action gives rise to the group action groupoid \[\AUT(n)=\phantom{}_{\pgo(n)}\left(\Aut_k(\PGO^+(n))(k)\times\Aut_k(\PGO^+(n))(k)\right).\] As a step toward showing that $\Comp(n)$ is equivalent to a full subcategory of $\AUT(n)$, we begin by proving that isomorphisms in $\Comp(n)$ correspond to isomorphisms in $\AUT(n)$. \begin{Prp} Let $C,D\in\Comp(n)$. Then there is a bijection \[\Comp(n)(C,D)\to\AUT(n)\left((\bm\rho_1^C,\bm\rho_2^C),(\bm\rho_1^D,\bm\rho_2^D)\right)\] given by $h\mapsto [h]$. \end{Prp} Our proof of the well-definedness of the map will closely generalize that of Proposition \ref{P: CKT}(iv), which is given in \cite{CKT}. Our proof of surjectivity instead transfers the problem to the corresponding problem for symmetric algebras, to which that proposition applies. \begin{proof} In view of Lemma \ref{L: independent}, proving that the map is well-defined amounts to showing that for each isomorphism $h:C\to D$ we have \begin{equation}\label{pair} \left(\kappa_{[h]}\rho_1^C\kappa_{[h]}^{-1},\kappa_{[h]}\rho_2^C\kappa_{[h]}^{-1}\right)=\left(\rho_1^D, \rho_2^D\right). \end{equation} Assume therefore that $h: C\to D$ is an isomorphism. Then $h\in\ort(n)$, whence $[h]\in\pgo(n)$. Let $j\in \go^+(n)$ and let $(j_1,j_2)$ be a pair of triality components of $j$ with respect to $D$. 
Denoting multiplication in $C$ by juxtaposition and in $D$ by $\cdot$ we have, for all $x,y\in C$, \[h^{-1}j h(xy)=h^{-1}j(h(x)\cdot h(y))=h^{-1} (j_1h(x)\cdot j_2h(y))=h^{-1}j_1h(x)h^{-1}j_2h(y).\] Thus \[\left(h^{-1}j_1h,h^{-1}j_2h\right)\] is a pair of triality components for $h^{-1}j h$ with respect to $C$, and the uniqueness statement of Lemma \ref{L: Triality} implies that \[\rho_r^C([h^{-1}j h])=[h]^{-1}\rho_r^D([j])[h],\] which, since $j$ was chosen arbitrarily, implies \eqref{pair}. Thus the map is well-defined. It is injective since if $h,j: C\to D$ are isomorphisms with $[h]=[j]$, then for some $\lambda\in k^*$ we have $j=\lambda h$. Thus for all $x,y\in C$ \[\lambda h(xy)=j(xy)=j(x)\cdot j(y)=\lambda h(x)\cdot\lambda h(y)=\lambda^2h(xy),\] whence $\lambda=1$ and thus $j=h$. It remains to show surjectivity. Let $h\in \go(n)$ be such that \[\bm\rho_r^D=\bm\kappa_{[h]}\bm\rho_r^C \bm\kappa_{[h]}^{-1},\] whence \[\rho_r^D=\kappa_{[h]}\rho_r^C\kappa_{[h]}^{-1},\] for each $r\in\{1,2\}$. By Lemma \ref{L: symmetric} there exist symmetric $S,T\in\Comp(n)$ and $f,g,f',g'\in\ort(n)$ such that $C=S_{f,g}$ and $D=T_{f',g'}$. Then by Lemma \ref{L: relation}, \[\kappa_{[h]}\kappa_{[f]}^{-1}\rho_1^S\kappa_{[h]}^{-1}=\kappa_{[h]}\rho_1^C\kappa_{[h]}^{-1}=\rho_1^D=\kappa_{[f']}^{-1}\rho_1^T,\] which is equivalent to \[\kappa_{[h]}\rho_1^S\kappa_{[h]}^{-1}=\kappa_{[h fh^{-1}{f'}^{-1}]}\rho_1^T.\] Similarly one obtains \[\kappa_{[h]}\rho_2^S\kappa_{[h]}^{-1}=\kappa_{[h gh^{-1}{g'}^{-1}]}\rho_2^T.\] Now $f''=f'h f^{-1}h^{-1}$ and $g''=g'h g^{-1}h^{-1}$ belong to $\ort(n)$, whence $B=T_{f'',g''}$ is in $\Comp(n)$, and from the above two lines and Lemma \ref{L: relation} we deduce \[\kappa_{[h]}\rho_r^S\kappa_{[h]}^{-1}=\rho_r^B\] for each $r$. Since $S$ is symmetric, Lemma \ref{L: simultaneous} implies that $B$ is symmetric, and then with Proposition \ref{P: CKT}(iv) we conclude that for some $\lambda\in k^*$, $\lambda h$ is an isomorphism from $S$ to $B$. 
Since $C=S_{f,g}$ and \[D=T_{f',g'}=B_{h f h^{-1},h g h^{-1}}=B_{\lambda h f (\lambda h)^{-1},\lambda h g (\lambda h)^{-1}},\] we conclude from Lemma \ref{L: juggling} that $\lambda h$ is an isomorphism from $C$ to $D$. Since $[h]=[\lambda h]$, and $h\in\go(n)$ was chosen arbitrarily, this proves surjectivity. \end{proof} The time is now ripe to prove our main result. \begin{Thm}\label{T: main} The map \[\mathfrak F: \Comp(n)\to\AUT(n)\] defined on objects by \[C\mapsto \left(\bm\rho_1^C,\bm\rho_2^C\right)\] and on morphisms by $h\mapsto [h]$, is a full and faithful functor, which is injective on objects up to sign. \end{Thm} By \emph{injective up to sign}, we mean that if $\mathfrak F(C)=\mathfrak F(D)$, then $D$ is a scalar multiple of $C$ with scalar $\pm1$. \begin{proof} The map $\mathfrak F$ is well-defined on objects by Proposition \ref{P: rho}. It is well-defined on morphisms by the above proposition, and clearly maps identities to identities and respects composition of morphisms. Thus $\mathfrak F$ is a functor, which by the proposition above is full and faithful. It remains to be shown that $\mathfrak F$ is injective up to sign. Assume that $C,D\in\Comp(n)$ satisfy $\bm\rho_r^C=\bm\rho_r^D$, and thence $\rho_r^C=\rho_r^D$, for each $r$. Then $C=S_{f,g}$ and $D=T_{f',g'}$ with $S,T$ symmetric and $f,g,f',g'\in\ort(n)$, and by Lemma \ref{L: relation}, \[\left(\kappa_{[f'f^{-1}]}\rho_1^S,\kappa_{[g'g^{-1}]}\rho_2^S\right)=\left(\rho_1^T,\rho_2^T\right).\] But the left hand side is equal to $(\rho_1^B,\rho_2^B)$ with $B=S_{f{f'}^{-1},g{g'}^{-1}}$. Thus $B$ is symmetric by Lemma \ref{L: simultaneous}, and then by Theorem 5.8 in \cite{CKT}, we have $B=\lambda T$ for some $\lambda\in k^*$. Then $\lambda=\pm 1$ by Lemma \ref{L: scalars}. Therefore, \[C=S_{f,g}=\left(S_{f{f'}^{-1},g{g'}^{-1}}\right)_{f',g'}=B_{f',g'}=\lambda T_{f',g'}=\lambda D,\] proving injectivity up to sign. This completes the proof. 
\end{proof} \subsection{Trialitarian Pairs} The aim of this section is to describe the image of the functor $\mathfrak F$. We begin with the following observation. If $(\bm\alpha_1,\bm\alpha_2)$ is an object in $\AUT(n)$, then there exists a symmetric $S\in\Comp(n)$ such that $\bm\alpha_1=\bm\kappa_{[f]}\bm\rho_1^S$ for some $f\in\go(n)$, and then $\bm\alpha_2=\bm\kappa_{[g]}\left(\bm\rho_2^S\right)^r$ for some $g\in\go(n)$ and $r\in\{0,\pm1\}$. If $r=1$ and there exist $f',g'\in \ort(n)$ with $[f']=[f]$ and $[g']=[g]$, then by Lemma \ref{L: relation}, \[(\bm\alpha_1,\bm\alpha_2)=\mathfrak F\left(S_{{f'}^{-1},{g'}^{-1}}\right).\] We will, roughly speaking, show that the requirement on $r$ is essential, but that, up to isomorphism, that on $f$ and $g$ is not. To formalize and prove the latter statement, we need the following result. \begin{Lma}\label{L: normalization} Let $F,G\in \go(n)$ and let $S\in\Comp(n)$ be a para-Hurwitz algebra. Then there exist $f,g\in \ort(n)$ and a symmetric $T\in \Comp(n)$ such that \begin{equation}\label{generalized} \left(\kappa_{[F]}\rho_1^S,\kappa_{[G]}\rho_2^S\right)\simeq\left(\kappa_{[f]}\rho_1^T,\kappa_{[g]}\rho_2^T\right) \end{equation} in $\AUT(n)$. \end{Lma} In the proof we need to pass to symmetric generalized composition algebras. To treat these we shall refer to a couple of results proved in \cite{CKT}, where, as previously remarked, these algebras are called \emph{symmetric compositions}, while symmetric composition algebras are named \emph{normalized symmetric compositions}. \begin{proof} By definition of a para-Hurwitz algebra we have $S=H_{i,i}$ where $H$ is a Hurwitz algebra with canonical involution $i$. Denoting the unity of $H$ by $e$, and setting $a=F^{-1}(e)$ and $b=G^{-1}(e)$ we have \[\mu\left(F^{-1}\right)=n\left(F^{-1}(e)\right)=n(a),\] and since $\mu$ is a group homomorphism, \[\mu\left(FiR_a^H\right)=n(a)^{-1}\mu(i)n(a)=1,\] whereby $FiR_a^H\in\ort(n)$, and similarly one gets $GiL_b^H\in \ort(n)$. 
The algebra \[H'=H_{R_a^H,L_b^H}\] is in fact a unital generalized composition algebra with norm $n$, unity $e'=(ab)^{-1}$ and multiplier $\lambda=n(ab)$. One can check that the map \[\begin{array}{ll} i':H'\to H',& x\mapsto b_n(x,e')e'-x \end{array} \] is an anti-automorphism on $H'$ as well as an isometry, and further that $S'=H'_{i',i'}$ is a symmetric generalized composition algebra. (The proof of the latter statement is straightforward and consists of manipulations analogous to those used to prove that a para-Hurwitz algebra is a symmetric composition algebra.) Altogether, \[S'=S_{iR_a^Hi',iL_b^Hi'}\] and since $S$ and $S'$ are symmetric and $iR_a^Hi',iL_b^Hi'\in\go(n)$, Lemma 5.2 from \cite{CKT} applies to give \[\begin{array}{lll}\rho_1^{S'}=\kappa_{\left[iR_a^Hi'\right]}^{-1}\rho_1^S& \text{and}& \rho_2^{S'}=\kappa_{\left[iL_b^Hi'\right]}^{-1}\rho_2^S.\end{array}\] On the other hand, Proposition 3.6 from \cite{CKT} states that each symmetric generalized composition algebra is isomorphic to a (unique) symmetric composition algebra. Thus there exists a symmetric composition algebra $T$ and an isomorphism $h: T\to S'$, with $h\in\go(n)$. Therefore, by Proposition \ref{P: CKT}, \[\begin{array}{lll}\rho_1^{S'}=\kappa_{[h]}\rho_1^{T}\kappa_{[h]}^{-1}& \text{and}& \rho_2^{S'}=\kappa_{[h]}\rho_2^{T}\kappa_{[h]}^{-1}.\end{array}\] Equating the two above expressions of $\rho_1^{S'}$ we get \[\kappa_{[F]}\rho_1^S=\kappa_{\left[FiR_a^Hi'\right]}\kappa_{[h]}\rho_1^T\kappa_{[h]}^{-1}=\kappa_{[h]}\kappa_{\left[h^{-1} FiR_a^Hi'h\right]}\rho_1^T\kappa_{[h]}^{-1},\] and $h^{-1}FiR_a^Hi'h\in\ort(n)$ since, as we have already concluded, $FiR_a^H\in \ort(n)$ and $i'\in\ort(n)$. Likewise, \[\kappa_{[G]}\rho_2^S=\kappa_{[h]}\kappa_{\left[h^{-1}GiL_b^Hi'h\right]}\rho_2^T\kappa_{[h]}^{-1},\] with $h^{-1}GiL_b^Hi'h\in\ort(n)$. This proves the existence of $f,g\in\ort(n)$ such that \eqref{generalized} holds, and the proof is complete. 
\end{proof} Next we will construct a full subcategory of $\AUT(n)$ in which the image of $\mathfrak F$ is dense, and to which, by Theorem \ref{T: main}, $\Comp(n)$ is therefore equivalent. To construct this subcategory, let $\Inn^*\left(\PGO^+(n)\right)(k)$ denote the subgroup of $\Aut\left(\PGO^+(n)\right)(k)$ consisting of all weakly inner automorphisms (with respect to $\PGO(n)$). From Proposition \ref{P: KMRT} we deduce that the set \[\Delta(n)=\Aut\left(\PGO^+(n)\right)(k)/\Inn^*\left(\PGO^+(n)\right)(k)\] of all left cosets of this subgroup consists of three elements. Denoting the quotient projection by $\pi$ we observe that any outer automorphism $\bm\alpha\in\Aut\left(\PGO^+(n)\right)(k)$ of order three satisfies \[\Delta(n)=\pi(\{\ID,\bm\alpha,\bm\alpha^2\}).\] Generalizing this, we call $(\bm\alpha_1,\bm\alpha_2)\in\AUT(n)$ a \emph{trialitarian pair} if \[\pi(\{\ID,\bm\alpha_1,\bm\alpha_2\})=\Delta(n).\] As an automorphism $\bm\alpha$ of $\PGO^+(n)$ is weakly inner if and only if $\pi(\bm\alpha)=\pi(\ID)$, this is equivalent to requiring $\bm\alpha_1$ and $\bm\alpha_2$ to be strongly outer and have different images under $\pi$. We denote the set of trialitarian pairs by $\TRI(n)$. \begin{Rk} Set $\Omega(n)=\Aut\left(\PGO^+(n)\right)(k)\setminus\Inn^*\left(\PGO^+(n)\right)(k)$, the set of all strongly outer automorphisms of $\PGO^+(n)$. Consider the diagram \[\xymatrix@1{& \Omega(n)\ar[d]^{\pi} \phantom{\: ,}\\ \Omega(n) \ar[r]_{\pi}& \Delta(n) \: ,}\] the pullback of which (in the category of sets) is $\Omega(n)\times_{\Delta(n)}\Omega(n)$ with the corresponding projection maps. Then we in fact have \[\TRI(n)=\left(\Omega(n)\times\Omega(n)\right)\setminus\left(\Omega(n)\times_{\Delta(n)}\Omega(n)\right).\] \end{Rk} Denoting the full subcategory of $\AUT(n)$ with object set $\TRI(n)$ by $\TRI(n)$ as well, we can describe the image of $\mathfrak F$ up to isomorphism as follows. \begin{Prp} The image of $\Comp(n)$ under $\mathfrak F$ is dense in $\TRI(n)$. 
\end{Prp} \begin{proof} Let $C\in\Comp(n)$. Then $C=S_{f,g}$ for some symmetric $S\in\Comp(n)$ and $f,g\in\ort(n)$. Thus by Lemma \ref{L: relation}, we have \[\left(\bm\rho_1^C,\bm\rho_2^C\right)=\left(\bm\kappa_{[f]}^{-1}\bm\rho_1^S,\bm\kappa_{[g]}^{-1}\bm\rho_2^S\right),\] which belongs to $\TRI(n)$ since $\bm\rho_1^S$ is an outer automorphism of order three and $\bm\rho_2^S=\left(\bm\rho_1^S\right)^2$. Thus $\mathfrak{F}\left(\Comp(n)\right)\subseteq\TRI(n)$. To prove denseness, assume that $(\bm\alpha_1,\bm\alpha_2)\in\TRI(n)$. Then for any para-Hurwitz algebra $S\in \Comp(n)$ there exist $F,G\in\go(n)$ with \[(\bm\alpha_1,\bm\alpha_2)=\left(\bm\kappa_{[F]}^{-1}\bm\rho_1^S,\bm\kappa_{[G]}^{-1}\bm\rho_2^S\right),\] and then Lemma \ref{L: normalization} provides a symmetric $T\in\Comp(n)$ and $f,g\in\ort(n)$ such that \[(\bm\alpha_1,\bm\alpha_2)=\left(\bm\kappa_{[f]}^{-1}\bm\rho_1^T,\bm\kappa_{[g]}^{-1}\bm\rho_2^T\right).\] But then $C=T_{f^{-1},g^{-1}}\in \Comp(n)$, and, by Lemma \ref{L: relation}, $(\bm\alpha_1,\bm\alpha_2)\simeq\mathfrak F(C)$. This completes the proof. \end{proof} We have thus achieved our advertised goal, as we have proved the following. \begin{Cor} The functor $\mathfrak F:\Comp(n)\to\AUT(n)$ induces an equivalence of categories $\Comp(n)\to\TRI(n)$. \end{Cor} \section{Previous Results Revisited} In this section we will revisit known results about composition algebras, and express them in terms of the triality pairs of these algebras. \subsection{Symmetric Composition Algebras} In order to precisely express how our approach generalizes that of \cite{CKT} and \cite{CEKT}, we begin by expressing the structural results obtained there in the current framework. To this end, consider the set of all trialitarian automorphisms of $\PGO^+(n)$, i.e.\ outer automorphisms of $\PGO^+(n)$ of order three. The group $\pgo(n)$ acts on this set by conjugation, viz. 
\[[h]\cdot\bm\alpha=\bm\kappa_{[h]}\bm\alpha \bm\kappa_{[h]}^{-1},\] and we denote the group action groupoid arising from this action by $\TRI^*(n)$. The following lemma is easy to check. \begin{Lma} The map $\mathfrak{G}: \TRI^*(n)\to\AUT(n)$, defined on objects by $\bm\alpha\mapsto(\bm\alpha,\bm\alpha^2)$, and on morphisms by $[h]\mapsto[h]$, is a full and faithful functor, which is injective on objects. Moreover, the image of $\mathfrak{G}$ is a full subcategory of $\TRI(n)$. \end{Lma} Thus $\mathfrak{G}$ induces an isomorphism of categories \[\mathfrak{G}':\TRI^*(n)\to\mathfrak G\left(\TRI^*(n)\right)\subseteq \TRI(n).\] The structural results from \cite{CKT} and \cite{CEKT} can now be reformulated as follows. \begin{Prp} The functor $\mathfrak F:\Comp(n)\to\AUT(n)$ induces an equivalence of categories $\mathfrak{F}':\Comp^S(n)\to\mathfrak{G}(\TRI^*(n))$. \end{Prp} Here, $\Comp^S(n)$ is the full subcategory of $\Comp(n)$ whose objects are all symmetric composition algebras in $\Comp(n)$. \begin{proof} The induced functor is well-defined since if $S\in\Comp^S(n)$, then $\bm\rho_1^S$ is a trialitarian automorphism with square $\bm\rho_2^S$. It is full by fullness of $\Comp^S(n)$ and $\mathfrak{F}$, and faithful by faithfulness of the latter. If $\bm\alpha\in \TRI^*(n)$, then from \cite{CKT} we know that $\bm\alpha=\bm\rho_1^S$ for some symmetric generalized composition algebra $S$, and that $\bm\alpha\simeq \bm\rho_1^T$ in $\TRI^*(n)$ for the unique symmetric composition algebra $T\simeq S$. \end{proof} In other words, composing $\mathfrak F'$ with the inverse of $\mathfrak{G}'$, one obtains an equivalence between the category of symmetric composition algebras with quadratic form $n$ and the groupoid arising from the action of $\pgo(n)$ on the set of all trialitarian automorphisms of $\PGO^+(n)$ by conjugation by weakly inner automorphisms. 
\subsection{The Double Sign} The double sign was defined for finite-dimensional real division algebras in \cite{DD}, and the topic has been implicitly treated for composition algebras over arbitrary fields of characteristic not two in e.g.\ \cite{EP}. Recall that for a composition algebra $C$ and an element $a\in C$ with $n(a)\neq 0$, the left and right multiplication operators $L_a^C$ and $R_a^C$ are similarities. \begin{Lma} Let $C$ be a finite-dimensional composition algebra over $k$, and let $a,b\in C$ be anisotropic. Then $L_a^C$ is a proper similarity if and only if $L_b^C$ is, and $R_a^C$ is proper if and only if $R_b^C$ is. \end{Lma} An element of a quadratic space is called anisotropic if it is not in the kernel of the quadratic form. The proof is essentially due to \cite{EP}. \begin{proof} There exists a Hurwitz algebra $H$ and $f,g\in\go(n)$ such that $C=H_{f,g}$. Then $L_c^C=L_{f(c)}^Hg$ and $R_c^C=R_{g(c)}^Hf$ for all $c\in C$. Now left and right multiplications by anisotropic elements in Hurwitz algebras are proper similarities. Thus $L_c^C$ is proper if and only if $g$ is, and $R_c^C$ is proper if and only if $f$ is. This completes the proof. \end{proof} We define the \emph{sign} $\sgn (h)$ of a similarity $h$ as $+1$ if $h$ is proper, and $-1$ otherwise. The above lemma guarantees that the following notion is well-defined. \begin{Def} The \emph{double sign} of a finite-dimensional composition algebra $C$ is the pair $(\sgn(L_c^C),\sgn(R_c^C))$ for any $c\in C$ with $n(c)\neq 0$. \end{Def} As noted in \cite{EP}, the double sign is the pair $(\det(L_c^C),\det(R_c^C))$ for any $c\in C$ with $n(c)=1$. Recall that such elements exist in any composition algebra. It is easily seen that isomorphic composition algebras have the same double sign. Thus \[\Comp(n)=\coprod_{(r,s)\in\{\pm1\}^2} \Comp^{rs}(n),\] where $\Comp^{rs}(n)$ is the full subcategory of $\Comp(n)$ consisting of all algebras with double sign $(r,s)$. 
We can now prove that the double sign can be inferred from the triality pair of the algebra. \begin{Prp} Let $n\in\Pf_3(k)$ and $C\in\Comp(n)$. Then the double sign of $C$ is $((-1)^{o_2},(-1)^{o_1})$, where $o_r$ is the order of the coset of $\bm\rho_r^C$ in the quotient group $\Aut(\PGO^+(n))(k)/\Inn(\PGO^+(n))(k)$. \end{Prp} \begin{proof} We have $C=H_{f,g}$ for some Hurwitz algebra $H\in\Comp(n)$ and $f,g\in\ort(n)$. Since the canonical involution $i$ on $H$ is not proper, we have \[(f,g)=\left(i^{p_1}f',i^{p_2}g'\right)\] for some $p_1,p_2\in\{0,1\}$ and $f',g'\in \ort^+(n)$. Then by Lemma \ref{L: relation}, for any $r\in\{1,2\}$, the coset of $\bm\rho_r^C$ equals the coset of $\bm\kappa_{[i]}^{p_r}\bm\rho_r^H$. If $p_r=1$, then the coset of $\bm\rho_r^C$ coincides with the coset of $\bm\rho_r^S$ for the symmetric composition algebra $S=H_{i,i}$, which has order three. If $p_r=0$, then the coset of $\bm\rho_r^C$ coincides with the coset of $\bm\kappa_{[i]}\bm\rho_r^S$, which has order two, since it is a product of an element of order two with one of order three. In both cases $o_r=p_r+2$. Now the double sign of $H$ is $(+1,+1)$, which implies that the double sign of $C$ is $((-1)^{p_2},(-1)^{p_1})$. This completes the proof. \end{proof} \subsection{Isomorphism Conditions} The category $\Comp(n)$ contains a unique isomorphism class $\mathfrak H(n)$ of Hurwitz algebras. In view of Lemmata \ref{L: Kaplansky} and \ref{L: juggling}, one may fix a Hurwitz algebra $H\in\mathfrak H(n)$ and study the full subcategory of $\Comp(n)$ consisting of all orthogonal isotopes of $H$, which is dense and hence equivalent to $\Comp(n)$. This approach is pursued in \cite{Ca}, \cite{CDD} and \cite{A2} (for real division composition algebras) and in \cite{Da} (for general isotopes of Hurwitz algebras over arbitrary fields). We will recall the isomorphism conditions given in these papers, and give a new proof of these using our approach above. 
To begin with, the following result is proved in \cite{Da}. \begin{Lma} Let $(H,n)$ be a Hurwitz algebra and $f,g,f',g'\in\gl(H)$. If $h: H_{f,g}\to H_{f',g'}$ is an isomorphism, then $h\in\go^+(n)$. \end{Lma} Moreover, an isomorphism condition is given in \cite{Da} for isotopes of Hurwitz algebras. For orthogonal isotopes of eight-dimensional Hurwitz algebras, it implies the following. \begin{Prp} Let $n\in\Pf_3(k)$ and let $H\in\Comp(n)$ be a Hurwitz algebra, $f,g,f',g'\in\ort(n)$, and $h\in\ort^+(n)$. Then $h: H_{f,g}\to H_{f',g'}$ is an isomorphism if and only if \begin{equation}\label{erik} \left([f'],[g']\right)=\left([h_1][f][h]^{-1},[h_2][g][h]^{-1}\right), \end{equation} where $(h_1,h_2)$ is a pair of triality components of $h$ with respect to $H$. \end{Prp} A similar result is proven in \cite{Ca} for the case $k=\mathbb R$ and $n=n_E$, the standard Euclidean norm. We will now show that this result in fact follows by applying Theorem \ref{T: main} above. \begin{proof} Set $C=H_{f,g}$ and $D=H_{f',g'}$. By Theorem \ref{T: main}, $h:C\to D$ is an isomorphism if and only if \[\left(\rho_1^D,\rho_2^D\right)=\left(\kappa_{[h]}\rho_1^C\kappa_{[h]}^{-1},\kappa_{[h]}\rho_2^C\kappa_{[h]}^{-1}\right).\] By Lemma \ref{L: relation}, the statement $\rho_1^D=\kappa_{[h]}\rho_1^C\kappa_{[h]}^{-1}$ is equivalent to \begin{equation}\label{equivalent} \kappa_{[f']}^{-1}\rho_1^H=\kappa_{[h]}\kappa_{[f]}^{-1}\rho_1^H\kappa_{[h]}^{-1}. 
\end{equation} Now \[\rho_1^H\kappa_{[h]}^{-1}=\kappa_{\rho_1^H([h])}^{-1}\rho_1^H\] since $\rho_1^H$ is a group homomorphism and $[h]\in\pgo^+(n)$, and therefore \eqref{equivalent} is equivalent to \[\kappa_{[f']}^{-1}=\kappa_{[h][f]^{-1}\rho_1^H([h])^{-1}},\] which, since the centralizer of $\pgo^+(n)$ in $\pgo(n)$ is trivial, is in turn equivalent to \[[f']=\rho_1^H([h])[f][h]^{-1}.\] By an analogous argument, the statement $\rho_2^D=\kappa_{[h]}\rho_2^C\kappa_{[h]}^{-1}$ is equivalent to \[[g']=\rho_2^H([h])[g][h]^{-1},\] and thus altogether $h:C\to D$ is an isomorphism if and only if \eqref{erik} holds, as desired. \end{proof} \bibliographystyle{amsplain} \bibliography{references} \end{document}
{"config": "arxiv", "file": "1504.01278/SubmissionAlsaody.tex"}
TITLE: Problem of random scheduling of queues of tasks QUESTION [5 upvotes]: Consider $L$ queues in a discrete time system. At each time $n=0,1,2,\ldots$, one task would arrive at one of the queues with equal probability $\frac{1}{L}$. Immediately after that, a task scheduler would uniformly randomly pick up a queue, and schedule one task in that queue (if there is any). Tasks in queues are first come first served. Assume that working time of each task is 0, i.e., the task is gone immediately when it's scheduled. My question is, what is the probability that the task scheduler picks up an empty queue over time? Would someone point me to any references if this is a known problem? Thanks. REPLY [1 votes]: Heuristically, this probability should behave as $O(\sqrt{L/n})$, I guess. Observe that each queue, when not empty, is a random walk with zero drift, that actually moves only once every $O(L)$ instances of time. So, up to time $n$ it would jump roughly $n/L$ times, and therefore visit the origin about $\sqrt{n/L}$ times, and so there will be around the same (in order) number of instances when the walk "attempts to jump to $(-1)$", that is, the empty queue is selected. Overall, there would be around $L\times\sqrt{n/L}=\sqrt{Ln}$ such moments up to time $n$, and so the probability should behave as indicated. Of course, this does not take the interactions into account; but my intuition says that wouldn't change the order, only constants. I'm pretty sure that one can analyse the case $L=2$ very accurately, but I'm not so sure about larger $L$'s.
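The $\sqrt{L/n}$ heuristic is easy to probe numerically. The following self-contained simulation (the function name and structure are mine, not from the thread) counts how often the scheduler draws an empty queue:

```python
import random

def empty_pick_fraction(L, n, seed=0):
    """Simulate n steps: each step one task arrives at a uniformly random
    queue, then the scheduler picks a uniformly random queue and serves one
    task if it is nonempty.  Returns the fraction of steps on which the
    scheduler picked an empty queue."""
    rng = random.Random(seed)
    queues = [0] * L
    empty_picks = 0
    for _ in range(n):
        queues[rng.randrange(L)] += 1   # one task arrives
        j = rng.randrange(L)            # scheduler's uniform choice
        if queues[j] > 0:
            queues[j] -= 1              # task is served instantly
        else:
            empty_picks += 1            # scheduler drew an empty queue
    return empty_picks / n
```

Since the cumulative number of empty picks up to time $n$ is predicted to grow like $\sqrt{Ln}$, the returned fraction should shrink roughly like $\sqrt{L/n}$ as $n$ grows. Note the degenerate case $L=1$: the single queue always holds at least one task at scheduling time, so the fraction is exactly $0$.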
{"set_name": "stack_exchange", "score": 5, "question_id": 223267}
TITLE: Cohomology of the space obtained by identifying the boundary of the Möbius band $M$ with $\mathbb{R}P^1\subset \mathbb{R}P^2$, using Mayer–Vietoris. QUESTION [2 upvotes]: I am struggling with a particular part of this question. I think that I can do a), b)i), b)iii) (provided I have b)ii) and I use UCT). My problem is with the computation of b)ii): I can compute $H^0(D)$ and $H^1(D)$ but I can't seem to compute $H^2(D)$ because in the long exact sequence (Mayer–Vietoris for cohomology), $H^2(D)$ is sandwiched between $H^1(M\cap \mathbb{R}P^2) = H^1(S^1)= \mathbb{Z}$ and $H^2(\mathbb{R}P^2)\oplus H^2(S^1) = \mathbb{Z}_2$, and all I managed to figure out is that the corresponding map from $H^2(D)$ to $H^2(\mathbb{R}P^2)\oplus H^2(S^1)$ is surjective. I give more details below: I used the Mayer–Vietoris long exact sequence with $X$ being the union of the interiors of $A$ and $B$: $$H^n(X)\rightarrow H^n(A)\oplus H^n(B) \rightarrow H^n(A\cap B) \rightarrow H^{n+1}(X)...$$ Here $X=D$ and I chose $A$ to be $M$ and $B$ to be the union of $\mathbb{R}P^2$ with a small chunk of $M$ that deformation retracts to $\mathbb{R}P^2$. Here I give the section to compute $H^2(D)$: $$H^1(M\cap \mathbb{R}P^2)=H^1(S^1) \xrightarrow{\partial} H^{2}(D)\xrightarrow{\phi} H^2(M)\oplus H^2(\mathbb{R}P^2)=\mathbb{Z}_2 \rightarrow H^2(M\cap \mathbb{R}P^2)=0$$ This tells me, by exactness, that $\phi$ is surjective. In order to compute $H^2(D)$, I would need $ker(\phi) = Im(\partial)$. But I can't seem to figure out how to compute this part. I would appreciate a solution for this calculation. I have seen solutions for the calculation for the homology and I do understand those but I am still stuck here. REPLY [1 votes]: Let us try Mayer–Vietoris in homology and look at $H_1$ (I think this is essentially the same place you are stuck). 
You have $H_2(D) \xrightarrow{\partial_{2*}} H_1(S^1) \xrightarrow{s} H_1(\mathbb{R}P^2) \oplus H_1(M) \to H_1(D) \xrightarrow{\partial_{1*}} H_0(S^1) \to H_0(\mathbb{R}P^2) \oplus H_0 (M) \to H_0(D) \to 0$ I think you can argue that $\partial_{1*}$ is actually $0$, since the map $H_0(S^1) \to H_0(\mathbb{R}P^2) \oplus H_0(M)$ is injective. Great, so we actually have a sequence $H_1(S^1) = \mathbb{Z} \to \mathbb{Z}/2 \oplus \mathbb{Z} \to H_1(D) \to 0$. For similar reasons the map $\partial_{2*}$ is $0$ too, because $H_1(S^1) \to H_1(\mathbb{R}P^2) \oplus H_1(M)$ is injective: the boundary circle of the Möbius band is homotopic to twice its core circle, so the component into $H_1(M) = \mathbb{Z}$ is multiplication by $2$. So I have a short exact sequence $0 \to \mathbb{Z} \xrightarrow{s} \mathbb{Z}/2 \oplus \mathbb{Z} \to H_1(D) \to 0$ with $s(1) = (\bar{1}, 2)$, since the boundary circle maps once around $\mathbb{R}P^1$, the generator of $H_1(\mathbb{R}P^2) = \mathbb{Z}/2$. This sequence does not split: a retraction $t: H_1(\mathbb{R}P^2) \oplus H_1(M) \to H_1(S^1)$ with $t \circ s = \mathrm{id}$ would have to satisfy $2t(0,1) = 1$ in $\mathbb{Z}$ (any homomorphism $\mathbb{Z}/2 \to \mathbb{Z}$ is zero), which is impossible. Instead, present the cokernel by generators $x, y$ and relations $2x = 0$, $x + 2y = 0$: then $x = -2y$, and $2x = 0$ forces $4y = 0$, so the quotient is cyclic of order $4$ generated by $y$, i.e. $H_1(D) = \mathbb{Z}/4$, generated by the core circle of $M$. In fact this calculation made me realize I had an error in my calculation with cellular homology that I am just about to fix. Calculating $H_2$ using MV should be trivial (since $H_2(S^1) = H_2(\mathbb{R}P^2) = H_2(M) = 0)$, and so we get the same homology results. Now you can apply UCT and continue.
{"set_name": "stack_exchange", "score": 2, "question_id": 3698176}
TITLE: What is the probability that there is an error in both blocks? QUESTION [1 upvotes]: A computer program consists of two blocks written independently by two different programmers. The first block has an error with probability $0.2$, the second block has an error with probability $0.3$. If the program returns an error, what is the probability that there is an error in both blocks? REPLY [2 votes]: Hint: $\Pr(X\mid Y) = \dfrac{\Pr(X\cap Y)}{\Pr(Y)}$ and in particular letting $X=A\cap B$ and $Y= A\cup B$ and noting that $A\cap B$ is a subset of $A\cup B$ you have $$\Pr(A\cap B\mid A\cup B) = \dfrac{\Pr(A\cap B)}{\Pr(A\cup B)}$$ REPLY [1 votes]: Consider a Venn diagram of the events $A$ (error in the first block) and $B$ (error in the second block): by independence the overlap has probability $\Pr(A\cap B)=0.2\cdot 0.3=0.06$. So your answer is $\frac{0.06}{0.2 + 0.3 - 0.06}=\frac{3}{22}$, which is simply the intersection over the union, as shown by JMoravitz.
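For completeness, the arithmetic can be checked with exact fractions (this snippet is an illustration added here, not part of either answer):

```python
from fractions import Fraction

# A = "error in block 1", B = "error in block 2"; the blocks were
# written independently, so P(A and B) = P(A) * P(B).
p_a, p_b = Fraction(2, 10), Fraction(3, 10)
p_both = p_a * p_b              # P(A and B) = 0.06
p_union = p_a + p_b - p_both    # inclusion-exclusion: P(A or B) = 0.44
answer = p_both / p_union       # P(both errors | program errors)
print(answer)                   # prints 3/22
```

So the conditional probability that both blocks contain an error, given that the program returns an error, is $3/22\approx 0.136$.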
{"set_name": "stack_exchange", "score": 1, "question_id": 3884693}
TITLE: vector space and general solution to the differential equation QUESTION [0 upvotes]: The set of solutions of $(E): y' + a(x)y = 0$ ($a\,:\mathbb{R}\,\rightarrow\,\mathbb{R}$ a continuous function) is a one-dimensional vector space. If $f(x) = e^{-\int_0^x a(t)\,\mathrm{d}t}$ is a solution of $(E)$, why is the general solution of $(E)$ of the form $Cf(x)$ ($C\,\in\,\mathbb{R}$)? REPLY [1 votes]: I note two important theorems which describe the form of the general solution of a homogeneous linear nth-order differential equation. Let your equation be $$a_n(x)y^{(n)}+a_{n-1}(x)y^{(n-1)}+...+a_1(x)y'+a_0(x)y=0$$ where the functions $a_i(x)$, $0\leq i \leq n$, are continuous on an interval $I$ and $a_n(x)\neq0 $ on the interval. Then: Theorem 1: There exists a set of $n$ linearly independent solutions, called a fundamental set of solutions, of the above equation on the interval $I$: $$y_1,y_2,...y_n$$ Theorem 2: If $y_1,y_2,...y_n$ is a fundamental set of solutions of the above linear homogeneous nth-order equation on the interval $I$, then the general solution of the equation on $I$ is $$y=c_1y_1(x)+c_2y_2(x)+...+c_ny_n(x)$$ wherein $c_i, 1\leq i\leq n$ are arbitrary constants. Now, I think you can conclude why the general solution of your first-order linear homogeneous equation is $$y=Cf(x)$$
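To make the claim concrete, here is a small numerical check (the choice $a(x)=x$ and all names below are illustrative, not from the thread): every scalar multiple $y=Cf$ of the particular solution satisfies $y'+a(x)y=0$ up to discretisation error, which is exactly the statement that the solution space is the one-dimensional line spanned by $f$.

```python
import math

def f(x):
    # particular solution f(x) = exp(-integral of a(t) dt from 0 to x);
    # for the illustrative choice a(t) = t this is exp(-x**2 / 2)
    return math.exp(-x ** 2 / 2)

def residual(C, x, h=1e-6):
    # evaluates y' + a(x) y for y = C f, approximating y' by a central
    # difference; it should vanish (up to O(h**2)) for every constant C
    y = lambda u: C * f(u)
    dy = (y(x + h) - y(x - h)) / (2 * h)
    return dy + x * y(x)
```

Replacing the central difference by an exact derivative one sees the identity directly: $y' + xy = C(-x\,e^{-x^2/2}) + x\,Ce^{-x^2/2} = 0$ for every $C$.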
{"set_name": "stack_exchange", "score": 0, "question_id": 197550}
TITLE: Goldstone boson couple to conserved current QUESTION [2 upvotes]: The Goldstone boson in a spontaneous symmetry breaking problem couples naturally to the conserved current associated with the broken symmetry. How can I see a rigorous (mathematical) derivation of that?
{"set_name": "stack_exchange", "score": 2, "question_id": 109029}
\begin{document} \begin{titlepage} \hbox to \hsize{\hfil math-ph/9807017} \hbox to \hsize{\hfil IFT-P.047/98} \hbox to \hsize{\hfil July, 1998} \vfill \Large \bf \begin{center} Riccati-type equations, \\ generalised WZNW equations, \\ and multidimensional Toda systems \end{center} \normalsize \rm \vskip 0.2in \begin{center} L. A. Ferreira$^\ast$, J. F. Gomes$^\ast$, A. V. Razumov$^\dagger$\\[0.05in] M. V. Saveliev$^\ast$\footnote{On leave of absence from the Institute for High Energy Physics, 142284 Protvino, Moscow Region, Russia, saveliev@mx.ihep.su}, A. H. Zimerman$^\ast$\\[0.2in] {\footnotesize $^\ast$Instituto de F\'\i sica Te\'orica - IFT/UNESP} \\ {\footnotesize Rua Pamplona 145, 01405-900, S\~ao Paulo - SP, Brazil}\\ {\footnotesize laf@ift.unesp.br, jfg@ift.unesp.br, saveliev@ift.unesp.br, ahz@ift.unesp.br} \\[0.2in] {\footnotesize $^\dagger$Institute for High Energy Physics} \\ {\footnotesize 142284 Protvino, Moscow Region, Russia} \\ {\footnotesize razumov@mx.ihep.su} \end{center} \vskip 0.2in \begin{abstract} We associate to an arbitrary $\mathbb Z$-gra\-da\-tion of the Lie algebra of a Lie group a system of Riccati-type first order differential equations. The particular cases under consideration are the ordinary Riccati and the matrix Riccati equations. The multidimensional extension of these equations is given. The generalisation of the associated Redheffer--Reid differential systems appears in a natural way. The connection between the Toda systems and the Riccati-type equations in lower and higher dimensions is established. Within this context the integrability problem for those equations is studied. As an illustration, some examples of the integrable multidimensional Riccati-type equations related to the maximally nonabelian Toda systems are given. 
\end{abstract} \vfill \end{titlepage} \section{Introduction} To the present time there is a great number of papers in mathematics and physics devoted to various aspects of the matrix differential Riccati equation proposed in the twenties by Radon in the context of the Lagrange variational problem. In particular, this equation has been discussed in connection with the oscillation of the solutions to systems of linear differential equations, Lie group and differential geometry aspects of the theory of analytic functions of several complex variables in classical domains, probability theory, computation schemes. For a systematic account of the development in the theory of the matrix differential Riccati equation up to the seventies see, for example, the survey \cite{ZI73}. More recently there appeared papers where this equation was considered as a B\"acklund-type transformation for some integrable systems of differential geometry, in particular, for the Lam\'e and the Bourlet equations, and a relevant superposition principle for the equation has been studied on the basis of the theory of Lie algebras, see, for example, \cite{TT80} and references therein. The matrix Riccati equation also arises as an equation of motion on Grassmann manifolds and on homogeneous spaces attached to the Hartree--Fock--Bogoliubov problem, see, for example, \cite{DWO84BG92} and references therein; and in some other subjects of applied mathematics and physics such as optimal control theory, plasma, etc., see, for example, \cite{HM82Sch73Sh80DFM87}. Continued--fraction solutions to the matrix differential Riccati equation were constructed in \cite{CHR8690}, based on a sequence of substitutions with the coefficients satisfying a matrix generalisation of the Volterra-type equations which in turn provide a B\"acklund transformation for the corresponding matrix version of the Toda lattice. 
In the papers \cite{B90} the matrix differential Riccati equation occurs in the steepest descent solution to the total least squares problem as a flow on Grassmannians via the Brockett double bracket commutator equation; in the special case of projective space this is the Toda lattice flow in Moser's variables. In the present paper we investigate the equations associated with an arbitrary $\mathbb Z$-gradation of the Lie algebra $\mathfrak g$ of a Lie group $G$. For the case $G = {\rm GL}(2, \mathbb C)$ and the principal gradation of $\mathfrak{gl}(2, \mathbb C)$ this is the ordinary Riccati equation; for the case $G = {\rm GL}(n, \mathbb C)$ and some special $\mathbb Z$-gradation of $\mathfrak{gl}(n, \mathbb C)$ we get the matrix Riccati equation. The underlying group-algebraic structure allows us to give a unifying approach to the investigation of the integrability problem for the equations under consideration, which we call the Riccati-type equations. We also give a multidimensional generalisation of the Riccati-type equations and discuss their integrability. It has proved very useful for the study of ordinary matrix Riccati equations to associate with them the so-called Redheffer--Reid differential system \cite{Red56Rei59}. In our approach the corresponding generalisation of such systems appears in a natural way. The associated Redheffer--Reid system can be considered as the constraints providing some reduction of the Wess--Zumino--Novikov--Witten (WZNW) equations. On the other hand, it is well known that the Toda-type systems can also be obtained by an appropriate reduction of the WZNW equations, see, for example, \cite{FRRTW92}. This implies a deep connection between the Toda-type systems and the Riccati-type equations. In particular, under the relevant constraints the Riccati-type equations play the role of a B\"acklund map for the Toda systems, and, in a sense, are a generalisation of the Volterra equations.
Some years ago there appeared a remarkable generalisation \cite{GM93} of the WZNW equations. The associated Redheffer--Reid system in the multidimensional case can be considered again as the constraints imposed on the solutions of those equations. We show that, in the same way as in the two-dimensional case, the appropriate reduction of the multidimensional WZNW equations leads to the multidimensional Toda systems \cite{RS97}, in particular to the equations \cite{CV91Dub93} describing topological and antitopological fusion.\footnote{It is rather clear that the multidimensional systems suggested in \cite{RS97} become two-dimensional equations only under a relevant reduction. Moreover, the arbitrary mappings determining the general solution to these equations are not necessarily factorised into products of mappings each depending on one coordinate only. One can easily convince oneself of this from the examples considered there in detail.} The multidimensional Toda systems are integrable for the relevant integration data, with the general solution being determined by the corresponding arbitrary mappings in accordance with the integration scheme developed in \cite{RS97}. Therefore the integrability problem for the multidimensional Riccati-type equations can be studied, in particular, on the basis of that fact. As an illustration of the general construction we discuss in detail some examples related to the maximally nonabelian Toda systems \cite{RS97a}. Analogously to the Toda systems, one can construct higher grading generalisations in the sense of \cite{GS95,RS97b} for the multidimensional Riccati-type equations. \section{One-dimensional Riccati-type equations} Let $G$ be a connected Lie group and $\mathfrak g$ be its Lie algebra. Without any loss of generality we assume that $G$ is a matrix Lie group, otherwise we replace $G$ by its image under some faithful representation of $G$.
For any fixed mapping $\lambda: \mathbb R \to \mathfrak g$ consider the equation \begin{equation} \psi^{-1} \d \psi x = \lambda \label{2} \end{equation} for the mapping $\psi: \mathbb R \to G$. Certainly one can use the complex plane $\mathbb C$ instead of the real line $\mathbb R$. Suppose that the Lie algebra $\mathfrak g$ is endowed with a $\mathbb Z$-gradation, \[ {\mathfrak g} = \bigoplus_{m \in {\mathbb Z}} \mathfrak g_m. \] Define the following nilpotent subalgebras of $\mathfrak g$: \[ \mathfrak g_{<0} = \bigoplus_{m < 0} \mathfrak g_m, \qquad \mathfrak g_{>0} = \bigoplus_{m > 0} \mathfrak g_m, \] and represent the mapping $\lambda$ in the form \[ \lambda = \lambda_{<0} + \lambda_0 + \lambda_{>0}, \] where the mappings $\lambda_{<0}$, $\lambda_0$ and $\lambda_{>0}$ take values in $\mathfrak g_{<0}$, $\mathfrak g_0$ and $\mathfrak g_{>0}$ respectively. Denote by $G_{<0}$, $G_0$ and $G_{>0}$ the connected Lie subgroups of $G$ corresponding to the subalgebras $\mathfrak g_{<0}$, $\mathfrak g_0$ and $\mathfrak g_{>0}$ respectively. Under appropriate assumptions, for an element $a \in G$ belonging to some dense subset of $G$ one has the generalised Gauss decomposition \begin{equation} a = a_{<0} \, a_0 \, a_{>0}, \label{7} \end{equation} where $a_{<0} \in G_{<0}$, $a_0 \in G_0$ and $a_{>0} \in G_{>0}$. For the mapping $\psi$ we can write \begin{equation} \psi = \psi_{<0} \, \psi_0 \, \psi_{>0}, \label{3} \end{equation} where the mapping $\psi_{<0}$ takes values in $G_{<0}$, the mapping $\psi_0$ takes values in $G_0$ and the mapping $\psi_{>0}$ takes values in $G_{>0}$. Using the Gauss decomposition (\ref{3}) of the mapping $\psi$ we rewrite equation (\ref{2}) as \begin{equation} \psi^{-1}_{>0} \, \left( \psi_{\le 0}^{-1} \, \d {\psi_{\le 0}} x \right) \, \psi_{>0} + \psi^{-1}_{>0} \, \d {\psi_{>0}} x = \lambda, \label{5} \end{equation} where $\psi_{\le 0} = \psi_{<0} \psi_0$.
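For $G = {\rm GL}(n, \mathbb C)$ with a two-block gradation of the kind used in the examples below, the generalised Gauss decomposition (\ref{7}) can be computed explicitly from Schur complements. The following is a minimal numerical sketch of this fact (the block sizes and the random test matrix are illustrative assumptions):

```python
import numpy as np

def gauss_decompose(a, n1):
    """Gauss decomposition a = a_neg @ a_0 @ a_pos for a two-block
    Z-gradation of gl(n): a_neg is unit lower block triangular, a_0 is
    block diagonal, a_pos is unit upper block triangular.
    Assumes the leading block a11 is invertible."""
    n2 = a.shape[0] - n1
    a11, a12 = a[:n1, :n1], a[:n1, n1:]
    a21, a22 = a[n1:, :n1], a[n1:, n1:]
    inv11 = np.linalg.inv(a11)
    a_neg = np.block([[np.eye(n1), np.zeros((n1, n2))],
                      [a21 @ inv11, np.eye(n2)]])
    a_0 = np.block([[a11, np.zeros((n1, n2))],
                    [np.zeros((n2, n1)), a22 - a21 @ inv11 @ a12]])
    a_pos = np.block([[np.eye(n1), inv11 @ a12],
                      [np.zeros((n2, n1)), np.eye(n2)]])
    return a_neg, a_0, a_pos

rng = np.random.default_rng(1)
a = rng.standard_normal((5, 5))
a_neg, a_0, a_pos = gauss_decompose(a, 2)
assert np.allclose(a_neg @ a_0 @ a_pos, a)
```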
From (\ref{5}) it follows that \[ \psi_{\le 0}^{-1} \, \d{\psi_{\le 0}} x + \d{\psi_{>0}} x \, \psi^{-1}_{>0} = \psi_{>0} \, \lambda \, \psi^{-1}_{>0}, \] and hence \begin{equation} \psi_{\le 0}^{-1} \, \d{\psi_{\le 0}} x = (\psi_{>0} \, \lambda \, \psi^{-1}_{>0})_{\le 0}, \label{6} \end{equation} where the subscript $\le 0$ denotes the corresponding component with respect to the decomposition \[ \mathfrak g = \mathfrak g_{\le 0} \oplus \mathfrak g_{>0} = (\mathfrak g_{<0} \oplus \mathfrak g_0) \oplus \mathfrak g_{>0}. \] Substituting (\ref{6}) into (\ref{5}) one gets \[ \psi^{-1}_{>0} \, \d{\psi_{>0}} x = \lambda - \psi^{-1}_{>0} \, (\psi_{>0} \, \lambda \, \psi^{-1}_{>0})_{\le 0} \, \psi_{>0} \] which can be rewritten as \begin{equation} \d{\psi_{>0}} x \, \psi^{-1}_{>0} = (\psi_{>0} \, \lambda \, \psi^{-1}_{>0})_{>0}. \label{44} \end{equation} For reasons which will become clear below, we call this equation for the mapping $\psi_{>0}$ a {\it Riccati-type equation}. The formal integration of equation (\ref{44}) can be performed in the following way. Consider (\ref{2}) as a linear differential equation for the mapping $\psi$: \begin{equation} \d \psi x = \psi \, \lambda. \label{4} \end{equation} Find the solution of this equation with the initial condition $\psi(0) = a$, where $a$ is a constant element of the Lie group $G$. Using now the Gauss decomposition (\ref{3}) of the mapping $\psi$ we find the solution of equation (\ref{44}) with the initial condition $\psi_{>0}(0) = a_{>0}$, where $a_{>0}$ is the positive grade component of $a$ arising from the Gauss decomposition (\ref{7}). It is clear that in order to obtain the general solution of equation (\ref{44}) it suffices to consider elements $a$ belonging to the Lie subgroup $G_{>0}$. Then the solution of (\ref{44}) is expressed in terms of the solution of (\ref{4}).
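As a numerical illustration of this integration scheme (a sketch with arbitrarily chosen coefficient functions; in the $2 \times 2$ parametrisation discussed below, equation (\ref{44}) reduces to the ordinary Riccati equation $u' = B - Au + uD - uCu$):

```python
import numpy as np

# illustrative coefficient functions in the gl(2) parametrisation
A = lambda x: np.sin(x)
B = lambda x: 1.0
C = lambda x: np.cos(x)
D = lambda x: 0.3

def lam(x):
    return np.array([[A(x), B(x)], [C(x), D(x)]])

def rk4(f, y, x, x1, n):
    """Fourth-order Runge-Kutta for dy/dx = f(x, y)."""
    h = (x1 - x) / n
    for _ in range(n):
        k1 = f(x, y); k2 = f(x + h/2, y + h/2*k1)
        k3 = f(x + h/2, y + h/2*k2); k4 = f(x + h, y + h*k3)
        y = y + h/6*(k1 + 2*k2 + 2*k3 + k4); x += h
    return y

# linear equation (4): dpsi/dx = psi lambda, psi(0) = I
psi = rk4(lambda x, p: p @ lam(x), np.eye(2), 0.0, 1.0, 2000)
u_gauss = psi[0, 1] / psi[0, 0]   # upper-triangular Gauss component of psi

# the scalar Riccati equation u' = B - A u + u D - u C u, u(0) = 0
u_direct = rk4(lambda x, u: B(x) - A(x)*u + u*D(x) - u*C(x)*u,
               0.0, 0.0, 1.0, 2000)
assert abs(u_gauss - u_direct) < 1e-8
```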
Note that the solution of equation (\ref{4}) with the initial condition $\psi(0) = a$ can be obtained from the solution with the initial condition $\psi(0) = e$, where $e$ is the unit element of $G$, by left multiplication by $a$. Thus we have shown that one can associate a Riccati-type equation to any $\mathbb Z$-gradation of the Lie algebra of a Lie group. The integration of this equation is reduced to the integration of a matrix system of first order linear differential equations. Let now $\chi$ be some mapping from $\mathbb R$ to $G$. It is clear that if the mapping $\psi$ satisfies equation (\ref{4}), then the mapping $\psi' = \psi \chi^{-1}$ satisfies the equation \[ \d {\psi'} x = \psi' \, \lambda', \] where \begin{equation} \lambda' = \chi \, \lambda \, \chi^{-1} - \d \chi x \, \chi^{-1}. \label{9} \end{equation} If $\chi$ is a mapping from $\mathbb R$ to $G_0$, then the corresponding component \[ \psi'_{>0} = \chi \, \psi_{>0} \, \chi^{-1} \] of the mapping $\psi'$ satisfies the Riccati-type equation (\ref{44}) with $\lambda$ replaced by $\lambda'$. Here \[ \lambda'_0 = \chi \, \lambda_0 \, \chi^{-1} - \d \chi x \, \chi^{-1}, \] and it is clear that we can choose the mapping $\chi$ so that $\lambda'_0$ vanishes. Another interesting possibility arises when $\chi$ is a mapping from $\mathbb R$ to $G_{>0}$. Let us choose a mapping $\chi$ such that $\lambda'_{>0} = 0$. From (\ref{9}) it follows that this case is realised if and only if \[ \d \chi x \, \chi^{-1} = (\chi \, \lambda \chi^{-1})_{>0}, \] i.e., $\chi$ should satisfy the Riccati-type equation (\ref{44}). Thus, given a particular solution of the Riccati-type equation, one can construct its general solution from the general solution of the equation with $\lambda_{>0} = 0$. As will be shown below, for this case the Riccati-type equation can be solved in a quite simple way.
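A classical scalar instance of this reduction (an illustrative example, not taken from the text): for $u' = 1 - u^2$, the constant $u_p = 1$ is a particular solution, the shift $w = u - u_p$ yields the Bernoulli equation $w' = -2w - w^2$, which is linearised by $v = w^{-1}$, and the solution with $u(0) = 0$ is $u(x) = \tanh x$. A quick numerical check:

```python
import math

# scalar Riccati equation u' = 1 - u**2 with u(0) = 0;
# u_p = 1 is a particular solution, and the closed form is u(x) = tanh(x)
def rk4(f, u, x, x1, n):
    """Fourth-order Runge-Kutta for du/dx = f(x, u)."""
    h = (x1 - x) / n
    for _ in range(n):
        k1 = f(x, u)
        k2 = f(x + h/2, u + h/2*k1)
        k3 = f(x + h/2, u + h/2*k2)
        k4 = f(x + h, u + h*k3)
        u += h/6*(k1 + 2*k2 + 2*k3 + k4)
        x += h
    return u

u1 = rk4(lambda x, u: 1.0 - u*u, 0.0, 0.0, 1.0, 1000)
assert abs(u1 - math.tanh(1.0)) < 1e-10
```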
\section{Simplest example} \label{Simplest} Consider first the case of the Lie group GL$(n, \mathbb C)$, $n \geq 2$, and represent $n$ as the sum of two positive integers $n_1$ and $n_2$. For the Lie algebra $\mathfrak{gl}(n, \mathbb C)$ there is a $\mathbb Z$-gradation where arbitrary elements $x_{<0}$, $x_0$ and $x_{>0}$ of the subalgebras $\mathfrak g_{<0}$, $\mathfrak g_0$ and $\mathfrak g_{>0}$ have the form \[ x_{<0} = \left( \begin{array}{cc} 0 & 0 \\ (x_{<0})_{21} & 0 \end{array} \right), \qquad x_0 = \left( \begin{array}{cc} (x_0)_{11} & 0 \\ 0 & (x_0)_{22} \end{array} \right), \qquad x_{>0} = \left( \begin{array}{cc} 0 & (x_{>0})_{12} \\ 0 & 0 \end{array} \right). \] Here $(x_{<0})_{21}$ is an $n_2 \times n_1$ matrix, $(x_{>0})_{12}$ is an $n_1 \times n_2$ matrix, $(x_0)_{11}$ and $(x_0)_{22}$ are $n_1 \times n_1$ and $n_2 \times n_2$ matrices respectively. The corresponding subgroups $G_{<0}$, $G_0$ and $G_{>0}$ are formed by the matrices \[ a_{<0} = \left( \begin{array}{cc} I_{n_1} & 0 \\ (a_{<0})_{21} & I_{n_2} \end{array} \right), \qquad a_0 = \left( \begin{array}{cc} (a_0)_{11} & 0 \\ 0 & (a_0)_{22} \end{array} \right), \qquad a_{>0} = \left( \begin{array}{cc} I_{n_1} & (a_{>0})_{12} \\ 0 & I_{n_2} \end{array} \right). \] Here $(a_{<0})_{21}$ is an arbitrary $n_2 \times n_1$ matrix, $(a_{>0})_{12}$ is an arbitrary $n_1 \times n_2$ matrix, $(a_0)_{11}$ and $(a_0)_{22}$ are arbitrary nondegenerate $n_1 \times n_1$ and $n_2 \times n_2$ matrices respectively. The Gauss decomposition (\ref{7}) of an element \[ a = \left( \begin{array}{cc} a_{11} & a_{12} \\ a_{21} & a_{22} \end{array} \right) \] is given by the relations \begin{eqnarray} (a_0)_{11} = a_{11}, &\qquad& (a_{>0})_{12} = a_{11}^{-1} a_{12}, \label{10} \\ (a_{<0})_{21} = a_{21} a_{11}^{-1}, &\qquad& (a_0)_{22} = a_{22} - a_{21} a_{11}^{-1} a_{12}.
\label{11} \end{eqnarray} Parametrising the mapping $\lambda$ as \[ \lambda = \left( \begin{array}{cc} A & B \\ C & D \end{array} \right) \] and $\psi_{>0}$ as \begin{equation} \psi_{>0} = \left( \begin{array}{cc} I_{n_1} & U \\ 0 & I_{n_2} \end{array} \right), \label{41} \end{equation} one easily sees that, in the case under consideration, equation (\ref{44}) takes the form \begin{equation} \d U x = B - AU + UD - UCU. \label{8} \end{equation} In the case $n=2$, $n_1 = n_2 = 1$, we have the usual Riccati equation. For $n = 2m$, $n_1 = n_2 = m$, we come to the so-called matrix Riccati equation. This justifies our choice for the name of equation (\ref{44}) in the general case. \subsection{Case $B = 0$} If $C = 0$ then equation (\ref{8}) is linear. In the case $B = 0$, under the conditions $n_1 = n_2$ and $\det U(x) \ne 0$ for any $x$, the substitution $V = U^{-1}$ leads to the linear equation \[ \d V x = V A - D V + C. \] Nevertheless, it is instructive to consider the procedure of obtaining the general solution to equation (\ref{8}) for $B = 0$. Recall that, given a particular solution of the Riccati-type equation, we can reduce the consideration to the case where $\lambda_{>0} = 0$. For the equation in question this is equivalent to the requirement $B = 0$. First, find the mapping $\chi: \mathbb R \to G_0$ such that transformation (\ref{9}) gives $\lambda'_0 = 0$. Parametrising $\chi$ as \[ \chi = \left( \begin{array}{cc} Q & 0 \\ 0 & R \end{array} \right), \] one comes to the following equations for $Q$ and $R$: \[ \d Q x = Q \, A, \qquad \d R x = R \, D. \] Therefore we can choose \begin{equation} Q(x) = P \exp \left( \int_0^x A(x') \, {\rm d}x' \right), \quad R(x) = P \exp \left( \int_0^x D(x') \, {\rm d}x' \right), \label{12} \end{equation} where the symbol $P \exp(\cdot)$ denotes the path-ordered exponential (multiplicative integral).
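The path-ordered exponential can be approximated by an ordered product of one-step matrix exponentials. A numerical sketch with an arbitrarily chosen non-commuting coefficient mapping (an illustrative assumption), checking the defining property $Q' = QA$ against a Runge-Kutta reference solution:

```python
import numpy as np

def A(x):   # illustrative non-commuting 2x2 coefficient mapping
    return np.array([[0.0, np.sin(x)], [np.cos(x), 0.1 * x]])

def expm(M, terms=12):
    """Matrix exponential via truncated power series; adequate for the
    small-norm one-step matrices used here."""
    out = np.eye(M.shape[0]); term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def pexp(A, x0, x1, n):
    """P exp(int A): ordered product of midpoint one-step exponentials,
    with earlier factors on the left, so that dQ/dx = Q A(x), Q(x0) = I."""
    h = (x1 - x0) / n
    Q = np.eye(2)
    for k in range(n):
        Q = Q @ expm(h * A(x0 + (k + 0.5) * h))
    return Q

def rk4(f, y, x, x1, n):   # reference integrator for dy/dx = f(x, y)
    h = (x1 - x) / n
    for _ in range(n):
        k1 = f(x, y); k2 = f(x + h/2, y + h/2*k1)
        k3 = f(x + h/2, y + h/2*k2); k4 = f(x + h, y + h*k3)
        y = y + h/6*(k1 + 2*k2 + 2*k3 + k4); x += h
    return y

Q_prod = pexp(A, 0.0, 1.0, 4000)
Q_ref = rk4(lambda x, Q: Q @ A(x), np.eye(2), 0.0, 1.0, 4000)
assert np.allclose(Q_prod, Q_ref, atol=1e-6)
```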
Now solve the equation \[ \d {\psi'} x = \psi' \lambda', \] where \[ \lambda' = \left( \begin{array}{cc} 0 & 0 \\ C' & 0 \end{array} \right) = \left( \begin{array}{cc} 0 & 0 \\ R C Q^{-1} & 0 \end{array} \right). \] The solution of this equation with the initial condition $\psi'(0) = I_n$ is \[ \psi'(x) = \left( \begin{array}{cc} I_{n_1} & 0 \\ S(x) & I_{n_2} \end{array} \right) \] with \begin{equation} S(x) = \int_0^x R(x') \, C(x') \, Q^{-1}(x') \, {\rm d}x'. \label{13} \end{equation} Hence, the solution of equation (\ref{4}) with the initial condition $\psi(0) = I_n$ is given by \[ \psi = \left( \begin{array}{cc} Q & 0 \\ S \, Q & R \end{array} \right). \] To obtain the general solution of the equation under consideration we need the solution of equation (\ref{4}) with the initial condition \begin{equation} \psi(0) = \left( \begin{array}{cc} I_{n_1} & m \\ 0 & I_{n_2} \end{array} \right), \label{16} \end{equation} where $m$ is an arbitrary $n_1 \times n_2$ matrix. Such a solution is represented as \[ \psi = \left( \begin{array}{cc} (I_{n_1} + m S) Q & m R \\ S Q & R \end{array} \right). \] Now, using (\ref{10}) we conclude that the general solution to equation (\ref{8}) in the case $B = 0$ is \[ U = Q^{-1} (I_{n_1} + m S)^{-1} m R, \] where $Q$, $R$ and $S$ are given by relations (\ref{12}) and (\ref{13}). Thus we see that in the case when $\lambda$ is a block upper or lower triangular matrix the Riccati-type equation (\ref{8}) can be explicitly integrated. Actually, if $\lambda$ is a constant mapping we can reduce it by a similarity transformation to the block upper or lower triangular form and solve the corresponding Riccati-type equation. The solution of the initial equation is then obtained by some algebraic calculations.
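The closed formula for $U$ can be checked numerically. In the following sketch the block coefficients $A$, $C$, $D$ and the constant matrix $m$ are arbitrary illustrative choices; $U$ extracted from the Gauss decomposition of the solution of (\ref{4}) is compared with $U = Q^{-1}(I_{n_1} + mS)^{-1} m R$:

```python
import numpy as np

# illustrative block data, n1 = n2 = 2, with B = 0
Amat = lambda x: np.array([[0.1, x], [0.0, -0.2]])
Dmat = lambda x: np.array([[0.0, 0.3], [np.sin(x), 0.0]])
Cmat = lambda x: np.array([[0.5, 0.0], [0.1 * x, 0.4]])
m = np.array([[0.3, -0.2], [0.1, 0.25]])
I2 = np.eye(2)

def lam(x):   # lambda with vanishing upper-right block
    return np.block([[Amat(x), np.zeros((2, 2))], [Cmat(x), Dmat(x)]])

def rk4(f, y, x, x1, n):
    h = (x1 - x) / n
    for _ in range(n):
        k1 = f(x, y); k2 = f(x + h/2, y + h/2*k1)
        k3 = f(x + h/2, y + h/2*k2); k4 = f(x + h, y + h*k3)
        y = y + h/6*(k1 + 2*k2 + 2*k3 + k4); x += h
    return y

# U from the Gauss decomposition of the solution of (4), psi(0) = (I m; 0 I)
psi0 = np.block([[I2, m], [np.zeros((2, 2)), I2]])
psi = rk4(lambda x, p: p @ lam(x), psi0, 0.0, 1.0, 2000)
U_gauss = np.linalg.inv(psi[:2, :2]) @ psi[:2, 2:]

# U from the closed formula, integrating Q' = QA, R' = RD, S' = R C Q^{-1}
def f(x, Y):
    Q, R, S = Y
    return np.stack([Q @ Amat(x), R @ Dmat(x), R @ Cmat(x) @ np.linalg.inv(Q)])

Q, R, S = rk4(f, np.stack([I2, I2, np.zeros((2, 2))]), 0.0, 1.0, 2000)
U_formula = np.linalg.inv(Q) @ np.linalg.inv(I2 + m @ S) @ m @ R
assert np.allclose(U_gauss, U_formula, atol=1e-8)
```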
\subsection{The case $A = 0$ and $D = 0$} Representing the mapping $\psi$ in the form \[ \psi = \left( \begin{array}{cc} \psi_{11} & \psi_{12} \\ \psi_{21} & \psi_{22} \end{array} \right) \] one easily sees that equation (\ref{4}) is equivalent to the system \begin{eqnarray} &\displaystyle \d {\psi_{11}} x = \psi_{12} C, \qquad \d {\psi_{12}} x = \psi_{11} B, \label{14} \\ &\displaystyle \d {\psi_{21}} x = \psi_{22} C, \qquad \d {\psi_{22}} x = \psi_{21} B. \label{15} \end{eqnarray} \subsubsection{The case $C = B$} Consider the case $C = B$, which is certainly possible only if $n_1 = n_2$. In this case we can rewrite equations (\ref{14}) and (\ref{15}) as \begin{eqnarray*} &\displaystyle \d {(\psi_{11} + \psi_{12})} x = (\psi_{11} + \psi_{12} )B, \qquad \d {(\psi_{11} - \psi_{12})} x = -(\psi_{11} - \psi_{12}) B, \\ &\displaystyle \d {(\psi_{22} + \psi_{21})} x = (\psi_{22} + \psi_{21}) B, \qquad \d {(\psi_{22} - \psi_{21})} x = - (\psi_{22} - \psi_{21}) B. \end{eqnarray*} Hence, the solution of equation (\ref{4}) with the initial condition $\psi(0) = I_n$ is given by \[ \psi = \frac{1}{2} \left( \begin{array}{cc} F + H & F - H \\ F - H & F + H \end{array} \right), \] where \[ F(x) = P \exp \left( \int_0^x B(x') \, {\rm d} x' \right), \quad H(x) = P \exp \left(- \int_0^x B(x') \, {\rm d} x' \right). \] The solution of equation (\ref{4}) with the initial condition of the form (\ref{16}) is \[ \psi = \frac{1}{2} \left( \begin{array}{cc} F + H + m(F - H)& F - H + m(F + H)\\ F - H & F + H \end{array} \right); \] therefore, the general solution to the Riccati-type equation under consideration can be written as \[ U = (F + H + m(F - H))^{-1} (F - H + m(F + H)). \] \subsubsection{The case of constant $B$ and $C$} As we noted above, the general solution to the Riccati-type equation (\ref{8}) for the case of a constant mapping $\lambda$ can be obtained by a reduction of $\lambda$ to the block upper or lower triangular form.
Nevertheless, it is interesting to consider the particular case of constant $\lambda$ in which the general solution takes the simplest form. Suppose that $n_1 = n_2$ and that $B$ and $C$ are constant nondegenerate matrices. In this case the solution of equation (\ref{4}) with the initial condition $\psi(0) = I_n$ is \[ \psi(x) = \left( \begin{array}{cc} \cosh (\sqrt{BC} x) & \sinh(\sqrt{BC} x)\sqrt{BC} C^{-1} \\ \sinh (\sqrt{CB} x) \sqrt{CB} B^{-1} & \cosh (\sqrt{CB} x) \end{array} \right), \] and for the general solution one has \begin{eqnarray*} U(x) = \Bigl( \cosh (\sqrt{BC} x) &+& m \sinh(\sqrt{CB} x) \sqrt{CB} B^{-1} \Bigr)^{-1} \\ &\times& \Bigl( \sinh (\sqrt{BC} x) \sqrt{BC} C^{-1} + m \cosh(\sqrt{CB} x) \Bigr). \end{eqnarray*} It should be noted here that the expression for $U(x)$ does not actually contain square roots of matrices, as can be easily seen from the corresponding power series expansions. \section{A further example} The next example is based on another $\mathbb Z$-gradation of the Lie algebra $\mathfrak{gl}(n, \mathbb C)$. Here one represents $n$ as the sum of three positive integers $n_1$, $n_2$ and $n_3$ and considers an element $x$ of $\mathfrak{gl}(n, \mathbb C)$ as a $3 \times 3$ block matrix $(x_{rs})$ with $x_{rs}$ being an $n_r \times n_s$ matrix. The subspace $\mathfrak g_m$ is formed by the block matrices $x = (x_{rs})$ where only the blocks $x_{rs}$ with $s - r = m$ are different from zero. Arbitrary elements $x_{<0}$, $x_0$ and $x_{>0}$ of the subalgebras $\mathfrak g_{<0}$, $\mathfrak g_0$ and $\mathfrak g_{>0}$ have the form \begin{eqnarray*} & x_{<0} = \left( \begin{array}{ccc} 0 & 0 & 0 \\ (x_{<0})_{21} & 0 & 0 \\ (x_{<0})_{31} & (x_{<0})_{32} & 0 \end{array} \right), \qquad x_{>0} = \left( \begin{array}{ccc} 0 & (x_{>0})_{12} & (x_{>0})_{13} \\ 0 & 0 & (x_{>0})_{23} \\ 0 & 0 & 0 \end{array} \right), \\ & x_0 = \left( \begin{array}{ccc} (x_0)_{11} & 0 & 0 \\ 0 & (x_0)_{22} & 0 \\ 0 & 0 & (x_0)_{33} \end{array} \right).
\end{eqnarray*} The subgroups $G_{<0}$, $G_0$ and $G_{>0}$ are formed by the nondegenerate matrices \begin{eqnarray*} & a_{<0} = \left( \begin{array}{ccc} I_{n_1} & 0 & 0 \\ (a_{<0})_{21} & I_{n_2} & 0 \\ (a_{<0})_{31} & (a_{<0})_{32} & I_{n_3} \end{array} \right), \qquad a_{>0} = \left( \begin{array}{ccc} I_{n_1} & (a_{>0})_{12} & (a_{>0})_{13} \\ 0 & I_{n_2} & (a_{>0})_{23} \\ 0 & 0 & I_{n_3} \end{array} \right), \\ & a_0 = \left( \begin{array}{ccc} (a_0)_{11} & 0 & 0 \\ 0 & (a_0)_{22} & 0 \\ 0 & 0 & (a_0)_{33} \end{array} \right). \end{eqnarray*} The Gauss decomposition of an element $a \in {\rm GL}(n, \mathbb C)$ is determined by the relations \begin{eqnarray*} && (a_{<0})_{21} = a_{21} a_{11}^{-1}, \quad (a_{<0})_{31} = a_{31} a_{11}^{-1}, \\ && (a_{<0})_{32} = (a_{32} - a_{31} a_{11}^{-1} a_{12}) (a_{22} - a_{21} a_{11}^{-1} a_{12})^{-1}, \\ && (a_0)_{11} = a_{11}, \quad (a_0)_{22} = a_{22} - a_{21} a_{11}^{-1} a_{12}, \\ && (a_0)_{33} = a_{33} - a_{31} a_{11}^{-1} a_{13} \\ && \hskip 4em {} - (a_{32} - a_{31} a_{11}^{-1} a_{12})(a_{22} - a_{21} a_{11}^{-1} a_{12})^{-1} (a_{23} - a_{21} a_{11}^{-1} a_{13}), \\ && (a_{>0})_{12} = a_{11}^{-1} a_{12}, \quad (a_{>0})_{13} = a_{11}^{-1} a_{13}, \\ && (a_{>0})_{23} = (a_{22} - a_{21} a_{11}^{-1} a_{12})^{-1} (a_{23} - a_{21} a_{11}^{-1} a_{13}). \end{eqnarray*} We parametrise the mapping $\lambda$ as \[ \lambda = \left( \begin{array}{ccc} A_{11} & B_{12} & B_{13} \\ C_{21} & A_{22} & B_{23} \\ C_{31} & C_{32} & A_{33} \end{array} \right) \] and the mapping $\psi_{>0}$ as \[ \psi_{>0} = \left( \begin{array}{ccc} I_{n_1} & U_{12} & U_{13} \\ 0 & I_{n_2} & U_{23} \\ 0 & 0 & I_{n_3} \end{array} \right). 
\] After some algebra one sees that the Riccati-type equations for the case under consideration are \begin{eqnarray*} && \d{U_{12}} x = B_{12} - A_{11}U_{12} + U_{12}A_{22} + U_{13}C_{32} - U_{12}C_{21}U_{12} - U_{13}C_{31}U_{12}, \\ && \d{U_{23}} x = B_{23} - A_{22}U_{23} + U_{23}A_{33} - C_{21}U_{13} \\ && \hskip 6em {} + C_{21}U_{12}U_{23} - U_{23}C_{31}U_{13} - U_{23}C_{32}U_{23} + U_{23}C_{31}U_{12}U_{23},\\ && \d{U_{13}} x = B_{13} - A_{11}U_{13} + U_{13}A_{33} + U_{12}B_{23} - U_{12}C_{21}U_{13} - U_{13}C_{31}U_{13}. \end{eqnarray*} Consider the case where $B_{rs} = 0$. Here, by transformation (\ref{9}), we can reduce our equations to the case where additionally $A_{rs} = 0$. In the latter case the solution of equation (\ref{4}) with the initial condition $\psi(0) = I_n$ has the form \[ \psi = \left( \begin{array}{ccc} I_{n_1} & 0 & 0 \\ S_{21} & I_{n_2} & 0 \\ S_{31} & S_{32} & I_{n_3} \end{array} \right), \] where \begin{eqnarray*} && S_{21}(x) = \int_0^x C_{21}(x') \, {\rm d} x', \\ && S_{31}(x) = \int_0^x \left( C_{31}(x') + \left( \int_0^{x'} C_{32}(x'') {\rm d} x'' \right) C_{21}(x') \right) {\rm d} x', \\ && S_{32}(x) = \int_0^x C_{32}(x') \, {\rm d} x'.
\end{eqnarray*} Using the explicit expressions for the Gauss decomposition given in this section, we find that the solution to the Riccati-type equation under consideration with the initial condition \[ \psi_{>0}(0) = \left( \begin{array}{ccc} I_{n_1} & m_{12} & m_{13} \\ 0 & I_{n_2} & m_{23} \\ 0 & 0 & I_{n_3} \end{array} \right) \] is determined by the relations \begin{eqnarray*} && U_{12} = (I_{n_1} + m_{12} S_{21} + m_{13} S_{31})^{-1} (m_{12} + m_{13} S_{32}), \\ && U_{13} = (I_{n_1} + m_{12} S_{21} + m_{13} S_{31})^{-1} m_{13}, \\ && U_{23} = (I_{n_2} + m_{23} S_{32} \\ && \hskip 2em {} - (S_{21} + m_{23} S_{31}) (I_{n_1} + m_{12} S_{21} + m_{13} S_{31})^{-1} (m_{12} + m_{13} S_{32}))^{-1} \\ && \hskip 4em {} \times (m_{23} - (S_{21} + m_{23} S_{31}) (I_{n_1} + m_{12} S_{21} + m_{13} S_{31})^{-1} m_{13}). \end{eqnarray*} The above consideration can be directly generalised to the case of the $\mathbb Z$-gradation of $\mathfrak{gl}(n, \mathbb C)$ which leads to the natural representation of $n \times n$ matrices as $p \times p$ block matrices. The corresponding equations become more and more complicated. Nevertheless, at least for the case of constant mappings $\lambda$ and for the case of block upper or lower triangular mappings $\lambda$, they can be explicitly integrated. Actually these gradations exhaust in a sense all possible $\mathbb Z$-gradations of the Lie algebra $\mathfrak{gl}(n, \mathbb C)$ \cite{RS97a,R98}. \section{Multidimensional Riccati-type equations} Let now $\lambda_i$, $i = 1, \ldots, d$, be some $\mathfrak g$-valued functions on $\mathbb R^d$ whose standard coordinates are denoted by $x^i$. Consider the following system of equations for a mapping $\psi$ from $\mathbb R^d$ to the Lie group $G$: \begin{equation} \partial_i \psi = \psi \, \lambda_i, \label{36} \end{equation} where $\partial_i = \partial/\partial x^i$.
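For constant pairwise commuting mappings $\lambda_i$ the system is solved by an ordinary matrix exponential, in the right-multiplication convention of equation (\ref{4}). A small numerical sketch for $d = 2$ (the matrices are illustrative assumptions), checking the linear system by central differences:

```python
import numpy as np

def expm(M, terms=40):
    """Matrix exponential via truncated power series; adequate for the
    moderate matrix norms used in this sketch."""
    out = np.eye(M.shape[0]); term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# two constant commuting lambda's (lambda_2 is a polynomial in lambda_1)
L1 = np.array([[0.2, 1.0, 0.0], [0.0, 0.2, 1.0], [0.3, 0.0, -0.1]])
L2 = 0.4 * L1 + 0.1 * (L1 @ L1)
assert np.allclose(L1 @ L2, L2 @ L1)   # constant case of zero curvature

def psi(x1, x2):
    return expm(x1 * L1 + x2 * L2)

# central-difference check of the linear system at a sample point
h, x1, x2 = 1e-5, 0.3, 0.7
d1 = (psi(x1 + h, x2) - psi(x1 - h, x2)) / (2 * h)
d2 = (psi(x1, x2 + h) - psi(x1, x2 - h)) / (2 * h)
assert np.allclose(d1, psi(x1, x2) @ L1, atol=1e-7)
assert np.allclose(d2, psi(x1, x2) @ L2, atol=1e-7)
```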
The integrability conditions for system (\ref{36}) are \begin{equation} \partial_i \lambda_j - \partial_j \lambda_i + [\lambda_i, \lambda_j] = 0. \label{45} \end{equation} Similarly to the one-dimensional case we obtain the following equations for the component $\psi_{>0}$ entering the Gauss decomposition of type (\ref{3}): \begin{equation} \partial_i \psi_{>0} \, \psi_{>0}^{-1} = (\psi_{>0} \, \lambda_i \, \psi_{>0}^{-1})_{>0}. \label{42} \end{equation} We call these equations {\it multidimensional Riccati-type equations\/}. The integration of equations (\ref{42}) is again reduced to the integration of the linear system (\ref{36}). The transformation (\ref{9}), where $\chi$ is a mapping from $\mathbb R^d$ to $G_0$, cannot be used now to get the Riccati-type equations with $(\lambda_i)_0 = 0$. Indeed, to this end we should solve the equations \begin{equation} \chi^{-1} \, \partial_i \chi = (\lambda_i)_0. \label{46} \end{equation} The integrability conditions for these equations do not in general follow from (\ref{45}). However, for the case $(\lambda_i)_{>0} = 0$ the integrability conditions for equations (\ref{46}) are a consequence of relations (\ref{45}), and we can, with the help of transformation (\ref{9}), reduce these equations to the case where $(\lambda_i)_0 = 0$. Note that in the multidimensional case it is again possible to use transformation (\ref{9}), where $\chi$ is some solution of the Riccati-type equations, to reduce the equations to the case where $(\lambda_i)_{>0} = 0$. When $\lambda_i$ are constant mappings, conditions (\ref{45}) imply that the matrices $\lambda_i$ commute. Here, by a similarity transformation, we can reduce $\lambda_i$ to a triangular form. In such a case, and not only for constant $\lambda_i$, the multidimensional Riccati-type equations can be integrated by a procedure similar to the one used in the one-dimensional case. As a concrete example consider the Lie group GL$(n, \mathbb C)$ with the gradation of its Lie algebra described in section \ref{Simplest}.
Parametrising the mappings $\lambda_i$ as \[ \lambda_i = \left( \begin{array}{cc} A_i & B_i \\ C_i & D_i \end{array} \right) \] and using for the mapping $\psi_{>0}$ the parametrisation (\ref{41}) we come to the following multidimensional Riccati-type equations: \begin{equation} \partial_i U = B_i - A_i U + U D_i - U C_i U. \label{43} \end{equation} When $A_i = 0$, $B_i = 0$ and $D_i = 0$, conditions (\ref{45}) become \[ \partial_i C_j - \partial_j C_i = 0; \] hence, there exists a mapping $S$ such that $C_i = \partial_i S$. Then, the general solution of equations (\ref{43}) has the form \[ U = (I_{n_1} + m S)^{-1} m, \] where $m$ is an arbitrary $n_1 \times n_2$ matrix. \section{Generalised WZNW equations and multidimensional Toda equations} Consider the space $\mathbb R^{2d}$ as a differential manifold and denote the standard coordinates on $\mathbb R^{2d}$ by $z^{-i}$, $z^{+i}$, $i = 1, \ldots, d$. Let $\psi$ be a mapping from $\mathbb R^{2d}$ to the Lie group $G$, which satisfies the equations \begin{equation} \partial_{+j} (\psi^{-1} \, \partial_{-i} \psi) = 0, \label{20} \end{equation} which can be equivalently rewritten as \[ \partial_{-i}(\partial_{+j} \psi \, \psi^{-1}) = 0. \] Here and in what follows we use the notations $\partial_{-i} = \partial/\partial z^{-i}$ and $\partial_{+j} = \partial/\partial z^{+j}$. In accordance with \cite{GM93} we call equations (\ref{20}) the generalised WZNW equations. It is well known that the two-dimensional Toda equations can be considered as reductions of the WZNW equations; for a review we refer the reader to some remarkable papers \cite{FRRTW92}, and for the affine case to \cite{F91}. Let us show that in the multidimensional situation the appropriate reductions of the generalised WZNW equations give the multidimensional Toda equations recently proposed and investigated in \cite{RS97}.
It is clear that the $\mathfrak g$-valued mappings \begin{equation} \iota_{-i} = \psi^{-1} \, \partial_{-i} \psi, \qquad \iota_{+j} = - \partial_{+j} \psi \, \psi^{-1} \label{37} \end{equation} satisfy the relations \begin{equation} \partial_{+j} \iota_{-i} = 0, \qquad \partial_{-i} \iota_{+j} = 0. \label{17} \end{equation} Moreover, the mappings $\iota_{-i}$ and $\iota_{+i}$ satisfy, by construction, the following zero curvature conditions: \begin{equation} \partial_{-i} \iota_{-j} - \partial_{-j} \iota_{-i} + [\iota_{-i}, \iota_{-j}] = 0, \qquad \partial_{+i} \iota_{+j} - \partial_{+j} \iota_{+i} + [\iota_{+i}, \iota_{+j}] = 0. \label{18} \end{equation} The reduction in question is realised by imposing on the mapping $\psi$ the constraints \begin{equation} (\psi^{-1} \, \partial_{-i} \psi)_{<0} = c_{-i}, \qquad (\partial_{+i} \psi \, \psi^{-1})_{>0} = - c_{+i}, \label{19} \end{equation} where $c_{-i}$ and $c_{+i}$ are some fixed mappings taking values in the subspaces $\mathfrak g_{-1}$ and $\mathfrak g_{+1}$ respectively. In other words, one imposes the restrictions \[ (\iota_{-i})_{<0} = c_{-i}, \qquad (\iota_{+i})_{>0} = c_{+i}. \] {}From (\ref{17}) and (\ref{18}) it follows that we should consider only the mappings $c_{-i}$ and $c_{+i}$ which satisfy the conditions \begin{eqnarray} & \partial_{+j} c_{-i} = 0, \qquad \partial_{-i} c_{+j} = 0, \label{23} \\ & [c_{-i}, c_{-j}] = 0, \qquad [c_{+i}, c_{+j}] = 0. \label{24} \end{eqnarray} Using the Gauss decomposition (\ref{3}) we have \begin{eqnarray*} \psi^{-1} \, \partial_{-i} \psi = \psi_{>0}^{-1} \, \psi_0^{-1} (\psi_{<0}^{-1} \, && \partial_{-i} \psi_{<0}) \psi_0 \, \psi_{>0} \\ &&{} + \psi_{>0}^{-1} (\psi_0^{-1} \, \partial_{-i} \psi_0) \psi_{>0} + \psi_{>0}^{-1} \, \partial_{-i} \psi_{>0}. \end{eqnarray*} Taking into account the first equality of (\ref{19}), one sees that \begin{equation} \psi_0^{-1} (\psi_{<0}^{-1} \partial_{-i} \psi_{<0}) \psi_0 = c_{-i}. 
\label{21} \end{equation} Similarly one obtains the equality \begin{eqnarray*} \partial_{+i} \psi \, \psi^{-1} &=& \partial_{+i} \psi_{<0} \, \psi_{<0}^{-1} \\ &+& \psi_{<0} (\partial_{+i} \psi_0 \, \psi_0^{-1}) \psi_{<0}^{-1} + \psi_{<0} \, \psi_0 ( \partial_{+i} \psi_{>0} \, \psi_{>0}^{-1}) \psi_0^{-1} \, \psi_{<0}^{-1} \end{eqnarray*} which implies \begin{equation} \psi_0 (\partial_{+i} \psi_{>0} \, \psi_{>0}^{-1}) \psi_0^{-1} = -c_{+i}. \label{22} \end{equation} Let us now use the observation that the generalised WZNW equations can be considered as the zero curvature condition for the connection on the trivial principal fibre bundle $\mathbb R^{2d} \times G$ determined by the $\mathfrak g$-valued 1-form $\rho$ on $\mathbb R^{2d}$ with the components \[ \rho_{-i} = \psi^{-1} \, \partial_{-i} \psi, \qquad \rho_{+i} = 0. \] After the gauge transformation of the form $\rho$ generated by the mapping $\psi_{>0}^{-1}$ we come to the connection form $\omega$ with the components \[ \omega_{-i} = \psi_0^{-1} (\psi_{<0}^{-1} \, \partial_{-i} \psi_{<0}) \psi_0 + \psi_0^{-1} \, \partial_{-i} \psi_0, \qquad \omega_{+i} = \psi_{>0} \partial_{+i} \psi_{>0}^{-1}. \] Since the zero curvature condition is invariant with respect to gauge transformations, we conclude that the generalised WZNW equations are equivalent to the zero curvature condition for the form $\omega$. Using (\ref{21}), (\ref{22}) and denoting $\psi_0$ by $\gamma$ we see that \begin{equation} \omega_{-i} = c_{-i} + \gamma^{-1} \, \partial_{-i} \gamma, \qquad \omega_{+i} = \gamma^{-1} c_{+i} \gamma. \label{30} \end{equation} These are exactly the components of the form whose zero curvature condition leads to the multidimensional Toda equations \cite{RS97}\footnote{In \cite{RS97} the case of constant $c_{-i}$ and $c_{+i}$ was considered.
The generalisation to the case of arbitrary $c_{-i}$ and $c_{+i}$ satisfying (\ref{23}) and (\ref{24}) is straightforward.} which have the following explicit form \begin{eqnarray} & \partial_{-i} (\gamma c_{-j} \gamma^{-1}) = \partial_{-j} (\gamma c_{-i} \gamma^{-1}), \label{26} \\ & \partial_{+j} (\gamma^{-1} \partial_{-i} \gamma) = [c_{-i}, \gamma^{-1} c_{+j} \gamma], \label{27} \\ & \partial_{+i} (\gamma^{-1} c_{+j} \gamma) = \partial_{+j} (\gamma^{-1} c_{+i} \gamma). \label{28} \end{eqnarray} Thus, if a mapping $\psi$ satisfies the generalised WZNW equations (\ref{20}) and constraints (\ref{19}), then its component $\psi_0$, entering the Gauss decomposition (\ref{3}), satisfies the multidimensional Toda equations (\ref{26})--(\ref{28}). On the other hand, assume that $\gamma$ is a solution of the multidimensional Toda equations (\ref{26})--(\ref{28}); then putting $\psi_0 = \gamma$ and choosing some $\psi_{<0}$ and $\psi_{>0}$ which satisfy (\ref{21}) and (\ref{22}), respectively, one can construct the solution \[ \psi = \psi_{<0} \, \psi_0 \, \psi_{>0} \] of the generalised WZNW equations subject to constraints (\ref{19}). The explicit construction of the mappings $\psi_{<0}$ and $\psi_{>0}$ from a given solution of the Toda equations in the two-dimensional case was considered in \cite{GORS92}. Below we give the generalisation of this construction to the multidimensional case. First, recall the procedure of obtaining the general solution to the multidimensional Toda equations \cite{RS97}. Let $\gamma_-$ and $\gamma_+$ be some mappings from $\mathbb R^{2d}$ to $G_0$ satisfying the conditions \[ \partial_{+i} \gamma_- = 0, \qquad \partial_{-i} \gamma_+ = 0. \] Consider the equations \begin{equation} \mu_-^{-1} \partial_{-i} \mu_- = \gamma_- c_{-i} \gamma_-^{-1}, \qquad \mu_+^{-1} \partial_{+i} \mu_+ = \gamma_+ c_{+i} \gamma_+^{-1}, \label{25} \end{equation} where $\mu_-$ and $\mu_+$ obey the conditions \[ \partial_{+i} \mu_- = 0, \qquad \partial_{-i} \mu_+ = 0.
\] The integrability conditions for equations (\ref{25}) are \[ \partial_{-i}(\gamma_- c_{-j} \gamma_-^{-1}) - \partial_{-j}(\gamma_- c_{-i} \gamma_-^{-1}) = 0, \quad \partial_{+i}(\gamma_+ c_{+j} \gamma_+^{-1}) - \partial_{+j}(\gamma_+ c_{+i} \gamma_+^{-1}) = 0. \] Hence, the mappings $\gamma_-$ and $\gamma_+$ cannot be arbitrary. Suppose that the above integrability conditions are satisfied and solve equations (\ref{25}). Consider the Gauss decomposition \begin{equation} \mu_+^{-1} \mu_- = \nu_- \eta \nu_+^{-1}, \label{34} \end{equation} where the mapping $\nu_-$ takes values in $G_{<0}$, the mapping $\eta$ takes values in $G_0$ and the mapping $\nu_+$ takes values in $G_{>0}$. It can be shown \cite{RS97} that the mapping \begin{equation} \gamma = \gamma_+^{-1} \eta \gamma_- \label{31} \end{equation} satisfies the multidimensional Toda equations (\ref{26})--(\ref{28}). Since the manifold $\mathbb R^{2d}$ is simply connected and the connection form $\omega$ satisfies the zero curvature condition, there exists a mapping $\varphi: \mathbb R^{2d} \to G$ such that \[ \omega_{-i} = \varphi^{-1} \partial_{-i} \varphi, \qquad \omega_{+i} = \varphi^{-1} \partial_{+i} \varphi. \] As shown in \cite{RS97}, the general form of the mapping $\varphi$ corresponding to a solution of the multidimensional Toda equations constructed by the procedure described above is \begin{equation} \varphi = a \mu_+ \nu_- \eta \gamma_- = a \mu_- \nu_+ \gamma_-, \label{29} \end{equation} where $a$ is an arbitrary constant element of the Lie group $G$. Using (\ref{29}) we have \[ \omega_{-i} = \varphi^{-1} \, \partial_{-i} \varphi = (\eta \gamma_-)^{-1} \left(\nu_-^{-1} \, \partial_{-i} \nu_-\right) \eta \gamma_- + (\eta \gamma_-)^{-1} \partial_{-i} (\eta \gamma_-).
\] Comparing this relation with the first equality in (\ref{30}) and taking into account (\ref{31}) we conclude that \[ (\gamma_+^{-1} \nu_- \gamma_+)^{-1} \partial_{-i} (\gamma_+^{-1} \nu_- \gamma_+) = \gamma c_{-i} \gamma^{-1}. \] Thus we see that the general solution of equations (\ref{21}) with $\psi_0 = \gamma$ can be written as \begin{equation} \psi_{<0} = \xi_-^{-1} \, \gamma_+^{-1} \, \nu_- \gamma_+, \label{32} \end{equation} where $\xi_-$ is an arbitrary mapping which takes values in $G_{<0}$ and satisfies the conditions \[ \partial_{-i} \xi_- = 0. \] In a similar way we obtain the relation \[ \partial_{+i} (\gamma_-^{-1} \nu_+^{-1} \gamma_-) \, (\gamma_-^{-1} \nu_+ \gamma_-) = -\gamma^{-1} c_{+i} \gamma \] which implies that the general solution of equations (\ref{22}) with $\psi_0 = \gamma$ is given by \begin{equation} \psi_{>0} = \gamma_-^{-1} \nu_+^{-1} \gamma_- \xi_+, \label{33} \end{equation} where $\xi_+$ is an arbitrary mapping which takes values in $G_{>0}$ and satisfies the conditions \[ \partial_{+i} \xi_+ = 0. \] Using relations (\ref{32}) and (\ref{33}) we come to the following representation for the solution of the generalised WZNW equations corresponding to the solution of the multidimensional Toda equations $\psi_0 = \gamma$: \[ \psi = \psi_{<0} \, \psi_0 \, \psi_{>0} = \xi_-^{-1} \gamma_+^{-1} \nu_- \eta \nu_+^{-1} \gamma_- \xi_+. \] Due to relation (\ref{34}) this representation is equivalent to \[ \psi = \xi_-^{-1} \gamma_+^{-1} \mu_+^{-1} \mu_- \gamma_- \xi_+. \] In the next section we use this representation to construct some integrable classes of the multidimensional Riccati-type equations. \section{Multidimensional Toda systems and Riccati-type equations} Let $\lambda_{-i}$ and $\lambda_{+i}$, $i = 1, \ldots, d$, be some fixed mappings from the manifold $\mathbb R^{2d}$ to the Lie algebra $\mathfrak g$ which satisfy the conditions \begin{equation} \partial_{+j} \lambda_{-i} = 0, \qquad \partial_{-j} \lambda_{+i} = 0.
\label{51} \end{equation} Consider the system of equations \begin{equation} \partial_{-i} \psi = \psi \, \lambda_{-i}, \qquad \partial_{+i} \psi = - \lambda_{+i} \, \psi, \label{35} \end{equation} where $\psi$ is a mapping from $\mathbb R^{2d}$ to the Lie group $G$. The integrability conditions for this system are given by \begin{equation} \partial_{-i} \lambda_{-j} - \partial_{-j} \lambda_{-i} + [\lambda_{-i}, \lambda_{-j}] = 0, \quad \partial_{+i} \lambda_{+j} - \partial_{+j} \lambda_{+i} + [\lambda_{+i}, \lambda_{+j}] = 0. \label{52} \end{equation} It is clear that the mapping $\psi$ satisfies the generalised WZNW equations. Hence we can treat system (\ref{35}), with the mappings $\lambda_{-i}$ and $\lambda_{+i}$ satisfying (\ref{51}) and (\ref{52}), as a reduction of the generalised WZNW equations similar to the reduction considered in the previous section. The difference is that in the previous section we fixed only the components $(\iota_{-i})_{<0}$ and $(\iota_{+i})_{>0}$ of the mappings $\iota_{-i}$ and $\iota_{+i}$, and did so in a quite special way, whereas here we fix the mappings $\iota_{-i}$ and $\iota_{+i}$ completely. It is easy to show that if the mapping $\psi$ satisfies equations (\ref{35}) then the mappings $\psi_{<0}^{-1}$ and $\psi_{>0}$ satisfy the multidimensional Riccati-type equations \begin{eqnarray} &\partial_{+i} \psi_{<0}^{-1} \, \psi_{<0}= (\psi_{<0}^{-1} \, \lambda_{+i} \, \psi_{<0})_{<0}, \label{39} \\ &\partial_{-i} \psi_{>0} \, \psi_{>0}^{-1} = (\psi_{>0} \, \lambda_{-i} \, \psi_{>0}^{-1})_{>0}. \label{40} \end{eqnarray} Equations (\ref{35}) are a multidimensional generalisation of the so-called associated Redheffer--Reid system \cite{Red56Rei59}. The investigation of that system is very useful for studying one-dimensional Riccati and matrix Riccati equations; see for example \cite{ZI73}. We believe that our generalisation also plays a significant role for the multidimensional Riccati-type equations.
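The link between the linear system (\ref{35}) and the Riccati-type equations (\ref{39}), (\ref{40}) can be illustrated numerically in the simplest setting: one variable, a constant coefficient, and the two-block gradation. The sketch below (in Python; the block sizes, the finite-difference check, and all names are our illustrative choices, not from the paper) verifies the classical Redheffer--Reid fact that the nontrivial block of the lower-unipotent Gauss factor of a solution of a linear matrix equation satisfies a matrix Riccati equation.

```python
import numpy as np

# Illustrative check (our simplification): in the one-variable,
# constant-coefficient, two-block case the system reduces to dY/dt = A Y,
# and the block W = Y_21 Y_11^{-1} -- the nontrivial block of the
# lower-unipotent Gauss factor of Y -- satisfies the matrix Riccati equation
#   dW/dt = A_21 + A_22 W - W A_11 - W A_12 W.

rng = np.random.default_rng(0)
n1, n2 = 2, 2
A = rng.standard_normal((n1 + n2, n1 + n2))
A11, A12 = A[:n1, :n1], A[:n1, n1:]
A21, A22 = A[n1:, :n1], A[n1:, n1:]

def expm(M, terms=40):
    """Taylor-series matrix exponential; adequate for the small norms here."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def W(t):
    Y = expm(t * A)
    return Y[n1:, :n1] @ np.linalg.inv(Y[:n1, :n1])

t, h = 0.3, 1e-5
dW = (W(t + h) - W(t - h)) / (2 * h)      # numerical derivative of W
Wt = W(t)
riccati_rhs = A21 + A22 @ Wt - Wt @ A11 - Wt @ A12 @ Wt
print(np.max(np.abs(dW - riccati_rhs)))    # small residual: both sides agree
```

The quadratic term $W A_{12} W$ is what makes the equation Riccati-type; it arises from differentiating the factor $Y_{11}^{-1}$.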
As a first application of such systems let us give a construction of some integrable class of the multidimensional Riccati-type equations. Suppose now that the mappings $\lambda_{-i}$ and $\lambda_{+i}$ are such that \begin{equation} (\lambda_{-i})_{<0} = c_{-i}, \qquad (\lambda_{+i})_{>0} = c_{+i} \label{53} \end{equation} with the mappings $c_{-i}$ and $c_{+i}$ taking values in $\mathfrak g_{-1}$ and $\mathfrak g_{+1}$, respectively, and subject to conditions (\ref{23}) and (\ref{24}). In this case the mapping $\gamma = \psi_0$ satisfies the multidimensional Toda equations (\ref{26})--(\ref{28}). On the other hand, if we have a solution $\gamma$ of equations (\ref{26})--(\ref{28}), then using the results of the previous section we can find the general solution to equations (\ref{21}) and (\ref{22}), and construct the mapping $\psi$ which satisfies the generalised WZNW equations and constraints (\ref{19}). This mapping, via equalities (\ref{35}), generates mappings $\lambda_{-i}$ and $\lambda_{+i}$ which automatically satisfy constraints (\ref{53}). In fact, if we have the general solution to the multidimensional Toda equations (\ref{26})--(\ref{28}), then we obtain in this way the general form of the mappings $\lambda_{-i}$ and $\lambda_{+i}$ which satisfy the integrability conditions (\ref{52}) and constraints (\ref{53}). Moreover, we obtain here the general solution to the multidimensional Riccati equations (\ref{39}) and (\ref{40}).
The explicit form of the mappings $\lambda_{-i}$ and $\lambda_{+i}$ obtained with the help of the above described procedure is \begin{eqnarray*} & \lambda_{-i} = \xi_+^{-1} \, c_{-i} \, \xi_+ + \xi_+^{-1} (\gamma_-^{-1} \, \partial_{-i} \gamma_-) \xi_+ + \xi_+^{-1} \, \partial_{-i} \xi_+, \\ & \lambda_{+i} = \xi_-^{-1} \, \partial_{+i} \xi_- + \xi_-^{-1} (\gamma_+^{-1} \, \partial_{+i} \gamma_+) \xi_- + \xi_-^{-1} \, c_{+i} \, \xi_-, \end{eqnarray*} and the corresponding solutions of equations (\ref{39}) and (\ref{40}) are given by (\ref{32}) and (\ref{33}). Consider the Lie group GL$(n, \mathbb C)$ and the $\mathbb Z$-gradation of the Lie algebra $\mathfrak{gl}(n, \mathbb C)$ discussed in section \ref{Simplest}. Parametrise the mappings $\gamma_\mp$ as \[ \gamma_\mp = \left( \begin{array}{cc} \beta_{\mp 1} & 0 \\ 0 & \beta_{\mp 2} \end{array} \right). \] The general form of the mappings $c_{\mp i}$ is \[ c_{-i} = \left( \begin{array}{cc} 0 & 0 \\ X_{-i} & 0 \end{array} \right), \qquad c_{+i} = \left( \begin{array}{cc} 0 & X_{+i} \\ 0 & 0 \end{array} \right), \] where the mappings $X_{-i}$ and $X_{+i}$ are arbitrary. The integrability conditions of equations (\ref{25}) now take the form \begin{eqnarray} &\partial_{-i}(\beta_{-2} X_{-j} \beta_{-1}^{-1}) - \partial_{-j}(\beta_{-2} X_{-i} \beta_{-1}^{-1}) = 0, \label{47} \\ &\partial_{+i}(\beta_{+1} X_{+j} \beta_{+2}^{-1}) - \partial_{+j}(\beta_{+1} X_{+i} \beta_{+2}^{-1}) = 0.
\label{48} \end{eqnarray} For the mappings \[ \lambda_{\mp i} = \left( \begin{array}{cc} A_{\mp i} & B_{\mp i} \\ C_{\mp i} & D_{\mp i} \end{array} \right) \] we obtain \begin{eqnarray*} && A_{-i} = \beta_{-1}^{-1} \partial_{-i} \beta_{-1} - (\xi_+)_{12} X_{-i}, \\ && B_{-i} = \beta_{-1}^{-1} \partial_{-i} \beta_{-1} (\xi_+)_{12} - (\xi_+)_{12} \beta_{-2}^{-1} \partial_{-i} \beta_{-2} - (\xi_+)_{12} X_{-i} (\xi_+)_{12} + \partial_{-i} (\xi_+)_{12}, \\ && C_{-i} = X_{-i}, \quad D_{-i} = \beta_{-2}^{-1} \partial_{-i} \beta_{-2} + X_{-i} (\xi_+)_{12}, \\ && A_{+i} = \beta_{+1}^{-1} \partial_{+i} \beta_{+1} + X_{+i} (\xi_-)_{21}, \quad B_{+i} = X_{+i}, \\ && C_{+i} = \beta_{+2}^{-1} \partial_{+i} \beta_{+2} (\xi_-)_{21} - (\xi_-)_{21} \beta_{+1}^{-1} \partial_{+i} \beta_{+1} - (\xi_-)_{21} X_{+i} (\xi_-)_{21} + \partial_{+i} (\xi_-)_{21}, \\ && D_{+i} = \beta_{+2}^{-1} \partial_{+i} \beta_{+2} - (\xi_-)_{21} X_{+i}, \end{eqnarray*} where $(\xi_+)_{12}$ and $(\xi_-)_{21}$ are the nontrivial blocks of the mappings $\xi_+$ and $\xi_-$. In order to solve equations (\ref{39}) and (\ref{40}) one first considers equations (\ref{25}). Next, one uses the Gauss decomposition (\ref{34}) to find the mappings $\nu_+^{-1}$ and $\nu_-$. In the case under consideration \begin{eqnarray*} && \nu_+^{-1} = \left( \begin{array}{cc} I_{n_1} & - (I_{n_1} - (\mu_+)_{12} (\mu_-)_{21})^{-1} (\mu_+)_{12} \\ 0 & I_{n_2} \end{array} \right), \\ && \nu_- = \left( \begin{array}{cc} I_{n_1} & 0 \\ (\mu_-)_{21} (I_{n_1} - (\mu_+)_{12} (\mu_-)_{21})^{-1} & I_{n_2} \end{array} \right).
\end{eqnarray*} Finally, using (\ref{33}) and (\ref{32}) one arrives at the following expressions for the nontrivial blocks $(\psi_{>0})_{12} = U_-$ and $(\psi_{<0})_{21} = U_+$ of the mappings $\psi_{>0}$ and $\psi_{<0}$: \begin{eqnarray*} && U_- = (\xi_+)_{12} - \beta_{-1}^{-1} (I_{n_1} - (\mu_+)_{12} (\mu_-)_{21})^{-1} (\mu_+)_{12} \beta_{-2}, \\ && U_+ = (\xi_-)_{21} + \beta_{+2}^{-1} (\mu_-)_{21} (I_{n_1} - (\mu_+)_{12} (\mu_-)_{21})^{-1} \beta_{+1}. \end{eqnarray*} It is clear that the dependence of $U_-$ and $U_+$ on $z^{+i}$ and $z^{-i}$, respectively, is parametric, and the general solution of the equations can be written as \begin{eqnarray} && U_- = (\xi_+)_{12} - \beta_{-1}^{-1} (I_{n_1} - m_- (\mu_-)_{21})^{-1} m_- \beta_{-2}, \label{49} \\ && U_+ = (\xi_-)_{21} + \beta_{+2}^{-1} m_+ (I_{n_1} - (\mu_+)_{12} m_+)^{-1} \beta_{+1}, \label{50} \end{eqnarray} where $m_-$ and $m_+$ are arbitrary constant matrices of dimensions $n_1 \times n_2$ and $n_2 \times n_1$, respectively. We have said nothing yet about solving the integrability conditions (\ref{47}) and (\ref{48}). In the general case the solution to these equations is not known. However, they can be solved in some particular cases. For example, let $n = d + 1$, $n_1 = d$ and $n_2 = 1$, and let the mappings $X_{\mp i}$ be defined by the relations \[ (X_{-i})_{1j} = \delta_{ij}, \qquad (X_{+i})_{j1} = \delta_{ij}. \] In this case the general solution \cite{RS97} of the integrability conditions (\ref{47}) and (\ref{48}) is \begin{eqnarray*} (\beta_{-1}^{-1})_{ij} = F_- \partial_{-i} H_{-j}, \qquad \beta_{-2}^{-1} = F_-, \\ (\beta_{+1})_{ij} = F_+ \partial_{+j} H_{+i}, \qquad \beta_{+2} = F_+, \end{eqnarray*} where $F_\mp$ and $H_{\mp i}$ are arbitrary functions depending on the coordinates $z^{\mp i}$.
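The block formulas for $\nu_+^{-1}$ and $\nu_-$ quoted above can be checked numerically. In the sketch below (in Python; taking $\mu_+$ and $\mu_-$ block-unipotent with nontrivial blocks $P = (\mu_+)_{12}$ and $Q = (\mu_-)_{21}$ is our illustrative assumption), the Gauss decomposition $\mu_+^{-1}\mu_- = \nu_- \eta \nu_+^{-1}$ is verified for random blocks.

```python
import numpy as np

# Numerical check of the Gauss decomposition mu_+^{-1} mu_- = nu_- eta nu_+^{-1}
# with the block formulas for nu_+^{-1} and nu_- quoted in the text.

rng = np.random.default_rng(1)
n1, n2 = 3, 2
P = 0.3 * rng.standard_normal((n1, n2))    # plays the role of (mu_+)_{12}
Q = 0.3 * rng.standard_normal((n2, n1))    # plays the role of (mu_-)_{21}
I1, I2 = np.eye(n1), np.eye(n2)

mu_plus  = np.block([[I1, P], [np.zeros((n2, n1)), I2]])
mu_minus = np.block([[I1, np.zeros((n1, n2))], [Q, I2]])

K = np.linalg.inv(I1 - P @ Q)    # (I_{n1} - (mu_+)_{12}(mu_-)_{21})^{-1}
nu_plus_inv = np.block([[I1, -K @ P], [np.zeros((n2, n1)), I2]])
nu_minus    = np.block([[I1, np.zeros((n1, n2))], [Q @ K, I2]])

M = np.linalg.inv(mu_plus) @ mu_minus
eta = np.linalg.inv(nu_minus) @ M @ np.linalg.inv(nu_plus_inv)

# eta should come out block diagonal, with eta_11 = I - P Q
off = max(np.abs(eta[:n1, n1:]).max(), np.abs(eta[n1:, :n1]).max())
print(off)    # numerically zero off-diagonal blocks
```

The small parameter $0.3$ only keeps $I_{n_1} - PQ$ safely invertible; one also finds $\eta_{22} = (I_{n_2} - QP)^{-1}$ in this setting.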
For the blocks $(\mu_-)_{21}$ and $(\mu_+)_{12}$ one has \[ (\mu_-)_{21} = H_-, \qquad (\mu_+)_{12} = H_+, \] where $H_-$ and $H_+$ are $1 \times d$ and $d \times 1$ matrices formed by the functions $H_{-i}$ and $H_{+i}$, respectively. Now, using obvious notation, we can write expressions (\ref{49}) and (\ref{50}) as \[ U_{-i} = \xi_{+i} + \partial_{-i} \log (1 - H_- m_-), \qquad U_{+i} = \xi_{-i} - \partial_{+i} \log (1 - m_+ H_+). \] \begin{center} {\large\bf Acknowledgements} \end{center} The authors are indebted to A.~M.~Bloch and A.~K.~Common, who acquainted us with their studies related to the matrix ordinary differential Riccati equation. One of the authors (M.~V.~S.) is grateful to J.--L.~Gervais for useful discussions; he also wishes to acknowledge the warm hospitality of the Instituto de F\'\i sica Te\'orica, Universidade Estadual Paulista, S\~ao Paulo, Brazil, and the financial support from FAPESP during his stay there in March--July 1998. The research program of A.~V.~R. and M.~V.~S. is supported in part by the Russian Foundation for Basic Research under grant \# 98--01--00015 and by INTAS grant \# 96-690; and that of L.~A.~F., J.~F.~G. and A.~H.~Z. is partially supported by CNPq-Brazil. \small
{"config": "arxiv", "file": "math-ph9807017.tex"}
TITLE: Probability find color of iphone QUESTION [2 upvotes]: I have the following question: There are $n$ customers forming a line inside an Apple store; half of them have a rose iPhone and half have a gray iPhone, in random order. You are challenged to predict the iPhone color of the next person exiting the store, where they exit one by one. At any point, if you correctly predict that the exiting customer's iPhone is rose, you win. You can choose to skip any number of guesses, observing the exiting iPhone colors, until you decide to guess that the next iPhone color is rose. You win only if you predict the color to be rose and the person exiting the store carries a rose iPhone. Is there a method with a better chance of winning than $\frac{1}{2}$, and if there is no such method, why not? Thank you REPLY [1 votes]: Let the exiting customers be labeled by numbers $1$ through $N$, where $N$ is even. It is given that exactly half of them have rose colored phones and the other half have grey colored phones. Consider the proposition: \begin{align} R_i\equiv\textrm{The exiting customer $i$ has a rose colored phone} \end{align} The negation of this proposition means that customer $i$ has a grey colored phone. Suppose $(n-1)$ customers have exited, $1\leq n\leq N$, and you guess that the $n$-th customer holds a rose colored phone. What is the probability that your guess is right? That depends on the colors of the phones carried by the $(n-1)$ customers who have exited, which cannot be known in advance. Let $m$ among the $(n-1)$ customers who have exited have rose colored phones. Since there are $N/2$ rose colored phones in all, there are $C_m^{N/2}$ ways of selecting $m$ rose colored phones. The rest of the customers who exited have grey colored phones, and there are $C_{n-1-m}^{N/2}$ ways of selecting $(n-1-m)$ grey colored phones. There are therefore $C_m^{N/2}\times C_{n-1-m}^{N/2}$ ways of having $m$ rose colored phones among the $(n-1)$ customers who exited.
Here we adopt the convention that $C_x^y=0$ if $x>y$. Therefore the probability that exactly $m$ among the $(n-1)$ customers who exited have rose colored phones is: \begin{align} P(m|n-1)=\frac{C_m^{N/2}\times C_{n-1-m}^{N/2}}{\sum_{k=0}^{n-1}\left( C_k^{N/2}\times C_{n-1-k}^{N/2}\right)},\quad 0\leq m\leq \textbf{min}(n-1,N/2) \end{align} and zero otherwise. Given that $(n-1)$ customers exited with $m$ rose colored phones, the probability that your guess about the $n$-th customer will be correct is: \begin{align} P(R_n|(m,n-1))=\frac{N/2-m}{N-(n-1)},\quad 0\leq m\leq \textbf{min}(n-1,N/2) \end{align} and zero otherwise. Therefore the unconditional probability that your guess for the $n$-th customer will be correct is, by the law of total probability: \begin{align} P(R_n)&=\sum_{m=0}^{\textbf{min}(n-1,N/2)}P(m|n-1)P(R_n|(m,n-1))\\ &=\sum_{m=0}^{\textbf{min}(n-1,N/2)}\left( \frac{C_m^{N/2}\times C_{n-1-m}^{N/2}}{\sum_{k=0}^{n-1}\left( C_k^{N/2}\times C_{n-1-k}^{N/2}\right)}\times \frac{N/2-m}{N-(n-1)}\right) \end{align} I have tried a few values of even $N$ and $P(R_n)=0.5$ always, no matter which customer you guess (i.e. for any $n$ such that $1\leq n\leq N$). Intuitively, this follows from exchangeability: in a uniformly random exit order, each position is equally likely to hold any of the $N$ phones, so every position carries a rose phone with probability exactly $1/2$, and no skipping strategy can do better. There must also be some algebraic simplification of the expression above that reduces it to $1/2$, although I couldn't figure it out.
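The numerical observation can be confirmed exactly with a short computation. The sketch below (in Python, using exact rational arithmetic; the function name is ours) evaluates the formula for $P(R_n)$ and checks that it equals $1/2$ for every guessing position.

```python
from fractions import Fraction
from math import comb

# Exact evaluation of the formula for P(R_n) derived above.

def P_rose(N, n):
    """P(R_n) for N customers (N even) when guessing position n, 1 <= n <= N."""
    half = N // 2
    # hypergeometric weights C(N/2, m) * C(N/2, n-1-m); comb() vanishes
    # automatically when m or n-1-m exceeds N/2
    weights = [comb(half, m) * comb(half, n - 1 - m) for m in range(n)]
    total = sum(weights)
    p = Fraction(0)
    for m, w in enumerate(weights):
        if w:
            p += Fraction(w, total) * Fraction(half - m, N - (n - 1))
    return p

for N in (2, 4, 6, 10):
    assert all(P_rose(N, n) == Fraction(1, 2) for n in range(1, N + 1))
print("P(R_n) = 1/2 for every position n")
```

Using `Fraction` rather than floats makes the check exact, so the result $1/2$ is not a rounding artifact.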
{"set_name": "stack_exchange", "score": 2, "question_id": 2463803}
TITLE: Do Chern Insulators (QAHE) have topological order (long-range quantum entanglement)? QUESTION [2 upvotes]: I know the IQHE is an example having "invertible" topological order in Professor Wen's definition. And topological insulators are SRE (short-range entangled) because they require an underlying symmetry protection. By contrast, the Chern insulator (QAHE) requires broken TRS, like the IQHE, rather than TRS protection. Apart from the external magnetic field, it is almost the same as the IQHE. Is it also an example having long-range quantum entanglement? More precisely, what topological order does it have? Also "invertible" topological order? REPLY [3 votes]: Chern Insulator = QAHE = IQHE. The Chern insulator has "invertible" topological order and long-range entanglement as defined in https://arxiv.org/abs/1004.3835. The Chern insulator does not need any symmetry, although one usually assumes a Chern insulator has a U(1) symmetry. Q: what topological order does the IQHE have? A: An "invertible" topological order characterized by a gravitational Chern-Simons term.
{"set_name": "stack_exchange", "score": 2, "question_id": 463466}
\begin{document} \pagestyle{headings} \title[Codimensions of Newton Strata for $SL_3(F)$]{Codimensions of Newton Strata for $SL_3(F)$ in the Iwahori Case} \author{E. T. Beazley} \address{University of Chicago, Department of Mathematics, 5734 S. University Ave., Chicago, IL 60637} \email{townsend@math.uchicago.edu} \begin{abstract} We study the Newton stratification on $SL_3(F)$, where $F$ is a Laurent power series field. We provide a formula for the codimensions of the Newton strata inside each component of the affine Bruhat decomposition on $SL_3(F)$. These calculations are related to the study of certain affine Deligne-Lusztig varieties. In particular, we describe a method for determining which of these varieties is non-empty in the case of $SL_3(F)$. \end{abstract} \maketitle \begin{section}{Introduction}\label{S:intro} \renewcommand{\thefootnote}{} \footnote{\textbf{Key Words}: Newton polygon, Newton stratification, isocrystal, affine Bruhat decomposition, affine Deligne-Lusztig variety, Frobenius-linear characteristic polynomial} \footnote{\textbf{Mathematics Subject Classification (2000)}: Primary 20G25, Secondary 14L05} The study of abelian varieties in positive characteristic dates back to Andr\'{e} Weil in the 1940s \cite{Weil}, and was further developed in the 1960s by Barsotti \cite{Bar}. In \cite{Gro}, Grothendieck describes the theory of $F$-crystals and Barsotti-Tate groups, or $p$-divisible groups, which are fundamental to the study of algebraic geometry in characteristic $p >0$. Isogeny classes of $F$-crystals are indexed by combinatorial objects called Newton polygons, and these polygons thus naturally provide a stratification on the space of $F$-crystals. In the late 1960s, Grothendieck proved his famous specialization theorem, which asserts that Newton polygons ``go down'' under specialization, according to the conventions of this paper. 
In particular, the set of points for which the associated Newton polygons lie below a given Newton polygon is Zariski closed. Grothendieck conjectured the converse to his theorem in 1970, which says that given a $p$-divisible group $G_0$ having Newton polygon $\gamma$, then for any $\beta$ lying above $\gamma$, there exists a deformation of $G_0$ whose generic fiber has Newton polygon equal to $\beta$. This conjecture was proved by Oort a quarter of a century later (see \cite{dJO}, \cite{Onp&formalgps}, \cite{Onpinmoduli}). In the late 1970s, Katz extended these ideas of Grothendieck to Newton polygons associated to families of $F$-crystals \cite{Kat}. By the early 1980s, work on the moduli problem for abelian varieties of fixed dimension in positive characteristic was well underway, having been outlined by Mumford and applied by Norman and Oort \cite{NO}. Analogs of Grothendieck's specialization theorem and its converse were formulated and proved in the context of (moduli spaces of) abelian varieties (see \cite{Man}, \cite{Ta}, \cite{Kob}, \cite{Omoduli&NP}, \cite{Onp&formalgps}, \cite{Onpinmoduli}, \cite{Onp&pdiv}). Oort has also formulated several conjectures and results about irreducibility and the dimensions and number of the components, which generalize his work with Li on the supersingular case (see \cite{Omoduliposchar}, \cite{Onpinmoduli}, \cite{LiO}). In \cite{dJO}, de Jong and Oort prove a purity result that gives an estimate for the codimensions of the Newton polygon strata in moduli spaces of $p$-divisible groups. Viehmann computes the dimensions and the number of connected and irreducible components of moduli spaces of $p$-divisible groups in \cite{VieGlobal} and \cite{VieModuli}. More information is known about the structure of the Newton strata and the poset of slopes of Newton polygons in the special case of the Siegel moduli space (see \cite{Onpinmoduli}, \cite{WedCong}). 
Recent work by Harashita extends some of Oort's work determining when certain Newton strata are non-empty \cite{Har}. There has also been recent interest in the foliation structure on the Newton polygon strata (see \cite{Ofoliations}, \cite{AG}). For a survey of both classical and newer results on the properties of the Newton stratification for moduli spaces of abelian varieties in positive characteristic, see \cite{Omoduliposchar}, \cite{vdGO}, and the recent survey article by Rapoport \cite{RapNewton}. In the mid-1990s, Rapoport and Richartz generalized Grothendieck's specialization theorem and the notion of the Newton stratification to $F$-isocrystals with $G$-structure, where $G$ is a reductive group over a discretely valued field \cite{RR}. These generalized Newton strata are indexed by $\sigma$-conjugacy classes, where $\sigma$ is the Frobenius automorphism. There is a natural bijection between the set of $\sigma$-conjugacy classes and a suitably generalized notion of the set of Newton polygons, which was described by Kottwitz in \cite{KotIsoI} and \cite{KotIsoII}. The poset of Newton polygons in the context of reductive group theory has interesting combinatorial and Lie-theoretic interpretations, which were described by Chai in \cite{Ch}. In particular, Chai generalizes the work of Li and Oort to the case of $F$-isocrystals with $G$-structure, where $G$ is any quasisplit reductive group over a non-Archimedean local field $F$, proving that the poset of Newton slope sequences is catenary; \textit{i.e.}, any two maximal chains have the same length. Chai also provides a root-theoretic formula for the expected dimension of the Newton strata. Rapoport lists several other conjectures in the context of these generalized Newton strata in \cite{RapNewton}. Chai was primarily interested in applications to the reduction modulo $p$ of a Shimura variety, although his results are more general. 
Interest in the special case of Shimura varieties has roots in implications toward the local Langlands conjecture, as demonstrated by Harris and Taylor \cite{HT}. Similar topological and geometric questions have been answered about the Newton stratification on Shimura varieties. Wedhorn has demonstrated that these strata are locally closed, and he provides a codimension formula in \cite{WedDim}. In addition, he showed that the ordinary locus is open and dense \cite{WedOrdinariness}. B\"{u}ltel and Wedhorn have studied the relationship among the Newton polygon stratification, the Ekedahl-Oort stratification, and the final stratification in \cite{BW}. Recent work by Yu generalizes considerations of Oort's Siegel case to type $C$ families of Shimura varieties \cite{Yu}. Again, for a comprehensive overview of known and conjectured results on Shimura varieties, see Rapoport's survey \cite{RapShimura}. Haines has also written an article on Shimura varieties that provides an introduction to the field through the theory of local models \cite{Hai}. The notion of the Newton stratification has since arisen in many other contexts. Goren and Oort discuss the relationship between the Newton polygon strata and the Ekedahl-Oort strata for Hilbert modular varieties in \cite{GO}. Vasiu studies latticed $F$-isocrystals over the field of Witt vectors in \cite{Vas}, in which he proves a purity property on the Newton polygon stratification. Blache and F\'{e}rard explicitly describe the Newton stratification for polynomials over finite fields in their recent work \cite{BF}. In \cite{KotNewtStrata}, Kottwitz describes the Newton stratification in the adjoint quotient of a reductive group. Goresky, Kottwitz, and MacPherson discuss the root valuation strata in Lie algebras over Laurent power series fields \cite{GKM}, and while this stratification is not a Newton stratification per se, the techniques employed are reminiscent of those that arise in the situation of a Newton stratification. 
In all of the aforementioned contexts, there are several common topological, geometric, and combinatorial themes. The goal of this paper is to address these common themes in the specific context of the Newton stratification on the algebraic group $G=SL_3(F)$ in the so-called Iwahori case. Here, $F$ is the field of Laurent power series over an algebraic closure of a finite field. We begin by reviewing the theory of isocrystals over $F$ and the associated Newton stratification on $SL_3(F)$. In order to develop a topology and notions of irreducibility and codimension, in Section \ref{S:admis} we define admissible subsets of Iwahori (double) cosets of $SL_3(F)$, which are sets that satisfy a property analogous to Vasiu's Crystalline boundedness principle \cite{Vas}. We then compute explicit equations that characterize the isocrystals having Newton polygons lying below a given polygon. These equations turn out to be polynomial in finitely many coefficients of the entries of a given $g \in G$, which guarantees admissibility. Our main theorem is a sort of purity result. Namely, we show that the codimension between adjacent strata jumps by one. We provide two versions of a formula for the codimensions of the Newton strata inside each component of the affine Bruhat decomposition on $G = I\widetilde{W}I$, where $I$ is the Iwahori subgroup and $\widetilde{W}$ the affine Weyl group. Following Chai, we present both root-theoretic and combinatorial versions of this codimension formula, and we provide the combinatorial version here. This theorem appears as Theorem \ref{T:main} in Section \ref{S:thm}. \vspace{5pt} \noindent \textbf{Theorem} \textit{Let $G=SL_3(F)$ and fix $x \in \widetilde{W}$. 
For a Newton slope sequence $\lambda \in \mathcal{N}(G)_x$, the subset $(IxI)_{\leq \lambda}$ of $IxI$ is admissible, and} \begin{equation*} \codim\left( (IxI)_{\leq \lambda} \subseteq IxI \right) =\length_{\mathcal{N}(G)_x}[\lambda, \nu_x].\end{equation*} \textit{Moreover, the closure of a given Newton stratum $(IxI)_{\lambda}$ in $IxI$ is precisely $(IxI)_{\leq \lambda}$. Therefore, for two Newton polygons $\lambda_1< \lambda_2$ which are adjacent in the poset $\mathcal{N}(G)_x$,} \begin{equation*} \codim\left( (IxI)_{\leq \lambda_1} \subset (IxI)_{\leq \lambda_2} \right) = 1. \end{equation*} For a definition of the poset $\mathcal{N}(G)_x$, see Section \ref{S:gpthy}, although we also discuss $\mathcal{N}(G)_x$ later in this introduction. We also introduce the Newton strata $(IxI)_{\lambda}$ and $(IxI)_{\leq \lambda}$ in Section \ref{S:gpthy}. We denote by $\nu_x$ the generic slope sequence in $\mathcal{N}(G)_x$, which both appears later in the introduction and is formally defined in Section \ref{S:generic}. The length of the segment $[\lambda,\nu_x]$ is defined in Section \ref{S:length}. The calculations performed in Sections \ref{S:valcalc} and \ref{S:codimcalc} provide a very concrete description of the closed Newton strata $(IxI)_{\leq \lambda}$. These descriptions vary greatly with $x$, but there are some things that can be said regarding their geometric structure. First, the sets $(IxI)_{\leq \lambda}$ are often, but not always, irreducible. In the cases in which the closures of the Newton strata are irreducible, they are fiber bundles over an irreducible affine scheme, having irreducible fibers (see Section \ref{S:thmproof}). For certain $x$ the structure of $(IxI)_{\leq \lambda}$ is a bit more complicated. When $(IxI)_{\leq \lambda}$ is reducible, each irreducible component is the closure of a fiber bundle over an irreducible affine scheme whose fibers are themselves irreducible affine schemes. 
We see in Section \ref{S:thmproof}, however, that the space $(IxI)_{\leq \lambda}$ is equicodimensional; \textit{i.e.}, all of the irreducible components have the same codimension inside $IxI$. Since the geometric structure of the Newton strata varies so greatly with $x$, we do not make formal statements regarding irreducibility or equicodimensionality, although we provide informal discussion when we are able. Independent of whether or not the space $(IxI)_{\leq \lambda}$ is irreducible, we shall see that it is precisely the closure of the Newton stratum $(IxI)_{\lambda}$, which is a locally closed subset of $IxI$. This fact demonstrates, in particular, that these Newton strata satisfy the strong stratification property; \textit{i.e.} the closure of a given stratum is the union of locally closed subsets defined by all Newton polygons which lie below that of the given stratum and are comparable in the poset $\mathcal{N}(G)_x$. Theorem \ref{T:main} then says that for two Newton polygons that are adjacent in the poset $\mathcal{N}(G)_x$, the codimension of the smaller stratum in the closure of the larger is always equal to 1. The Newton strata associated to Shimura varieties are related to the study of certain affine Deligne-Lusztig varieties. Rapoport's dimension formula for affine Deligne-Lusztig varieties inside the affine Grassmannian was proved in two steps by G\"{o}rtz, Haines, Kottwitz, and Reuman \cite{GHKR} and Viehmann \cite{VieDim}. Viehmann has also described the set of connected components of these affine Deligne-Lusztig varieties \cite{VieConncpt}. Our study of the Newton strata in the Iwahori case relates to certain affine Deligne-Lusztig varieties inside the affine flag manifold, about which less is known. To every $\sigma$-conjugacy class in $G$ and every element of the affine Weyl group, we can associate the affine Deligne-Lusztig variety \begin{equation*} X_x(b) := \{ g \in G(F)/I : g^{-1}b\sigma(g) \in IxI\}. 
\end{equation*} For a fixed $x \in \widetilde{W}$, we explicitly describe in Section \ref{S:slopes} the poset $\mathcal{N}(G)_x$ of Newton polygons that arise for elements in the double coset $IxI$, when $G = SL_3(F)$. As a direct consequence, we can answer the question of which of the associated affine Deligne-Lusztig varieties are non-empty. This result was proved by Reuman for the case $b=1$ \cite{Reu}, but our results may be applied to any $b\in G$. We believe that it might be possible to apply arguments similar to those we employ in this paper to obtain a description for the poset $\mathcal{N}(G)_x$, at least in the case of $G=SL_n(F)$ (see Question \ref{T:nonemptyQ}, Section \ref{S:adlv}). In addition to giving information about non-emptiness of certain affine Deligne-Lusztig varieties, our work gives rise to a conjectural relationship between the dimension of these varieties and the codimensions of the associated Newton strata. \begin{conjecture}\label{T:dimconj} Let $x\in \widetilde{W}$ be an element of the affine Weyl group and $b \in G$. The relationship between the dimension of the affine Deligne-Lusztig variety $X_x(b)$ and the codimension of the associated set of Newton strata is given by \begin{equation}\label{E:dimconj} \dim X_x(b) + \codim((IxI)_{\leq \ov{\nu}(b)} \subseteq IxI) = \ell(x) - \langle 2\rho, \ov{\nu}(b)\rangle. \end{equation} Here, $\ell(x)$ is the length of $x$, $\rho$ is the half-sum of the positive roots, and $\ov{\nu}(b)$ is the Newton slope sequence associated to $b$. \end{conjecture} \noindent An earlier version of this conjecture, due to Kottwitz, assumed that the codimensions of the Newton strata in the Iwahori case were given by the same root-theoretic expression appearing in \cite{Ch}, and combined this with the right-hand side of \eqref{E:dimconj} to provide a conjecture for the dimension of $X_x(b)$. 
We demonstrate in Corollary \ref{T:maincor} that for certain affine Weyl group elements, we must correct this initial guess for $\codim((IxI)_{\leq \ov{\nu}(b)} \subseteq IxI)$ by $-1$. In fact, Theorem \ref{T:main} suggests that the codimensions of the Newton strata are most conveniently expressed combinatorially in terms of lengths in the poset of Newton slopes, rather than root-theoretically. Conjecture \ref{T:dimconj} accounts for this fact and thus includes the codimension as a separate term. In the case $G=SL_3(F)$ and $b=1$, Conjecture \ref{T:dimconj} is true, as \cite{Reu} and our work demonstrate. We refer the reader to \cite{GHKR} for conjectural formulas for the dimensions of the varieties $X_x(b)$ for more general $G$. Although neither these dimensions nor the codimensions of the Newton strata are known except in a few cases, the right-hand side of the equality in Conjecture \ref{T:dimconj} is simple and can be easily computed in general. The study of the poset of Newton slope sequences for $SL_3(F)$ has led to several additional combinatorial questions. In the Iwahori case, the problem of determining the Newton slope sequence associated to the open stratum remains unsolved. Even in the case of $SL_3(F)$, in which we explicitly compute this generic slope sequence $\nu_x$ for every $x \in \widetilde{W}$, we cannot yet do better than providing a list. We expect there to be a closed root-theoretic formula that would provide an analog of Mazur's inequality in the Iwahori case (see Question \ref{T:nuxform}, Section \ref{S:generic}). When $G=SL_n(F)$ and $x$ corresponds to an alcove lying in the dominant Weyl chamber, we can formulate a precise conjecture determining the generic slope.
\begin{conjecture}\label{T:domconj} For $G = SL_n(F)$, let $x=\pi^{\mu}w \in \widetilde{W}$ be such that $\mu$ is dominant; \textit{i.e.}, $\langle \alpha_i,\mu\rangle \geq 0$ for all simple roots $\alpha_i$ in $\text{Lie}(G)$, or equivalently that $\mu_1\geq \cdots \geq \mu_n$. Then $\nu_x = -\mu_{\text{dom}}$. \end{conjecture} \noindent We point out that the negative sign appears in front of $\mu_{\text{dom}}$ as a result of making several less traditional conventions in our definitions (see Section \ref{S:gpthy}) which make our main calculations easier. In general, the generic slope sequence $\nu_x$ does not necessarily coincide with the unique dominant element in the Weyl orbit of $-\mu$. Rather, there is frequently a correction term needed, which comes in the form of a sum of simple coroots, at least for $x$ lying inside the so-called ``shrunken'' Weyl chambers (see \cite{Reu}). Experimentation with groups of higher rank suggests that this observation can be made precise for general $G$, although a closed formula is not yet obvious. Already in the case where $G=SL_3(F)$, the poset of Newton slope sequences has both interesting and surprising properties. We prove, for example, that the poset $\mathcal{N}(G)_x$ for $SL_3(F)$ is a ranked lattice, and we expect that this might be the case for general $G$ (see Question \ref{T:rklattice}, Section \ref{S:length}). One might guess that an analog of Manin's converse to the specialization theorem holds in the Iwahori case as well; \textit{i.e.}, that $\mathcal{N}(G)_x$ consists of all possible Newton slope sequences lying below the generic one. \begin{figure}[h] \centering \includegraphics{poset} \caption{The posets $\{\lambda \in \mathcal{N}(G) \mid \lambda \leq (1,0,-1)\}$ and $\mathcal{N}(G)_x$ for $x=\pi^{(-2,0,2)}s_{121}$}\label{fig:poset} \end{figure} We shall demonstrate that this suspicion is actually false.
Consider the example in which $x \in \widetilde{W}$ has finite Weyl part equal to the longest element in the Weyl group $s_{121}$, and translation part equal to $(-2,0,2)$. We show that the generic slope sequence is given by $(1,0,-1)$, in which case the lattice consisting of all possible slope sequences in $\mathcal{N}(G)$ lying below $(1,0,-1)$ is given by the poset on the left in Figure \ref{fig:poset}. The actual description of the poset $\mathcal{N}(G)_x$ in this example consists, however, of only the two elements $(1,0,-1)$ and $(0,0,0)$; see the picture on the right in Figure \ref{fig:poset}. Therefore, the length of the segment $[(0,0,0), (1,0,-1)]$ inside $\mathcal{N}(G)_x$ equals one, even though the length inside $\mathcal{N}(G)$ is two. Moreover, the codimension of the stratum associated to $(0,0,0)$ also equals one, since the slope sequences $(0,0,0)$ and $(1,0,-1)$ are adjacent in the poset $\mathcal{N}(G)_x$. The author believes that it is possible to characterize the affine Weyl group elements that produce these strange examples using the language of parabolic and Levi subgroups, although we are not yet prepared to formulate a precise conjecture. \vspace{5pt} \noindent \textbf{Acknowledgments.} The author would like to thank Robert Kottwitz for suggesting this problem, for numerous helpful comments on earlier versions of this paper, and for his unparalleled dedication as an advisor. Eva Viehmann also provided several useful suggestions for improving the introduction. The author also thanks the anonymous referee for facilitating important structural and technical improvements. \subsection{Isocrystals over the discretely valued field $F$}\label{S:cyclic} Let $k$ be a finite field with $q$ elements, and let $\overline{k}$ be an algebraic closure of $k$. Denote by $\pi$ the uniformizing element of the discrete valuation ring $\mathcal{O}:= \ov{k}[[\pi]]$, having fraction field $F:=\overline{k}((\pi))$ and maximal ideal $P:= \pi \mathcal{O}$. 
Normalize the valuation homomorphism $\val: F^{\times} \rightarrow \Z$ so that $\val(\pi) = 1$. We can extend the usual Frobenius automorphism $x \mapsto x^q$ on $\ov{k}$ to a map $\sigma: F \rightarrow F$ given by $\sum a_i\pi^i \mapsto \sum a_i^q\pi^i$. Recall that an isocrystal $(V,\Phi)$ is a finite-dimensional vector space $V$ over $F$ together with a $\sigma$-linear bijection $\Phi: V\rightarrow V$; \textit{i.e.}, $\Phi(av)=\sigma(a)\Phi(v)$ for $a \in F$ and $v \in V$. We now define a ring $R = F[\sigma]$, where any element $y\in R$ is of the form $y=\sum\limits_i a_i\sigma^i$, for $a_i \in F$. Note that $R$ is not a polynomial ring in the usual sense, since for $a \in F$, we have that $\sigma a = \sigma(a) \sigma$. Given an isocrystal $(V,\Phi)$ over $F$, defining $\sigma^iv:=\Phi^i(v)$ makes $V$ into an $R$-module. We then have the following well-known proposition describing isocrystals as cyclic modules over the ring $R$. \begin{prop}\label{T:Cyclic} Let $(V,\Phi)$ be an isocrystal and $R = F[\sigma]$, as above. Then $V$ is a cyclic $R$-module; \textit{i.e.}, $Rv = V$ for some $v$ in $V$. \end{prop} In the context of Proposition \ref{T:Cyclic}, we call the generator $v$ a \textit{cyclic vector}. The ring $R$ is non-commutative, but there exist both right and left division algorithms, whence we may conclude that $R$ is a principal ideal domain. Upon choosing a cyclic vector, we may thus write $V \cong R/Rf$ for some $f = \sigma^n + a_1\sigma^{n-1} + \cdots + a_{n-1}\sigma + a_n \in R$, where $n=\mbox{dim}_F(V)$. We shall call $f$ the \emph{characteristic polynomial} associated to the isocrystal $(V,\Phi)$. Note, however, that $f$ depends on the choice of a cyclic vector. Consequently, we shall be interested in the Newton polygon associated to $f$, an isocrystal invariant that is independent of the choice of cyclic vector.
The Newton polygon of $f$, or equivalently the Newton polygon of $(V, \Phi)$, is defined to be the convex hull of the set of points $\{(0,0),\ (i, -\val(a_i))\mid i = 1, 2, \dots, n\}$, where the $a_i$ are the coefficients of $f$. More specifically, the Newton polygon of $f$ is the tightest-fitting convex polygon joining the points $(0,0)$ and $(n,-\val(a_n))$ that passes either through or above all of the points in the set $\{(0,0),(i, -\val(a_i))\}$. The reader should observe that our definition of the Newton polygon differs from the usual one, in which the polygon is formed from the set of points $\{(0,0), (i,\val(a_i))\}$. We adopt the less conventional construction in order that our definitions for the Newton stratification in Section \ref{S:strata} agree with those in other related contexts. The associated Newton slope sequence is the $n$-tuple $\lambda = (\lambda_1, \dots, \lambda_n) \in \Q^n$, where the $\lambda_i$ are the slopes of the edges of the Newton polygon, repeated with multiplicity and ordered such that $\lambda_1 \geq \cdots \geq \lambda_n$. Occasionally we will wish to move freely between a slope sequence $\lambda$ and the Newton polygon having $\lambda$ as its slope sequence, which we shall denote by $N_{\lambda}$. \subsection{The characteristic polynomial for $GL_3(F)$}\label{S:charpoly} If we fix a basis for the $n$-dimensional vector space $V$, the isocrystal $(V, \Phi)$ is isomorphic to one of the form $(F^n, A\sigma)$ for some $A \in GL_n(F)$. In this context, $\sigma(v)$ means that we apply $\sigma$ to each component of the vector $v$. In this section, we specialize to 3-dimensional isocrystals. In order that we may work with matrices in our calculations, we fix a basis. Let $e_1, e_2, e_3$ denote the standard basis vectors for $F^3$. \begin{prop}\label{T:Dneq0} Let $\Phi =A\sigma$, where $A=\begin{pmatrix} a& b & c\\ d& e & f\\g& h& i\end{pmatrix} \in GL_3(F)$.
Then $e_1$ is a cyclic vector for $(F^3, \Phi)$ if and only if \begin{equation*}D := \sigma(d)\begin{vmatrix}d & e\\g&h\end{vmatrix}+ \sigma(g)\begin{vmatrix}d&f\\g&i\end{vmatrix}\neq 0.\end{equation*} \end{prop} \begin{proof} Compute that $e_1 \wedge \Phi(e_1) \wedge \Phi^2(e_1) = D(e_1 \wedge e_2 \wedge e_3)$. \end{proof} We shall now assume that the hypotheses of Proposition \ref{T:Dneq0} are met so that $e_1$ is a cyclic vector for $(F^3, \Phi)$. The characteristic polynomial of $(F^3, \Phi)$ is of the form $f:= \sigma^3 + \alpha\sigma^2 + \beta\sigma + \gamma$, for some $\alpha, \beta, \gamma \in F$. Using linear algebra, we calculate that $\alpha$ and $\beta$ are determined by $\Phi$ as follows: \begin{equation}\label{E:mateqn} \begin{pmatrix}\alpha\\ \beta\end{pmatrix} = -\begin{pmatrix} \Phi^2(e_1)e_2 & \Phi(e_1)e_2\\ \Phi^2(e_1)e_3 & \Phi(e_1)e_3 \end{pmatrix}^{-1} \begin{pmatrix} \Phi^3(e_1)e_2 \\ \Phi^3(e_1)e_3 \end{pmatrix}. \end{equation} Here we denote by $\Phi^k(e_i)e_j$ the coefficient of $e_j$ in the vector $\Phi^k(e_i)$. We can then solve the matrix equation \eqref{E:mateqn} explicitly for $\alpha$ and $\beta$ to obtain the following:\begin{equation*} \displaystyle \alpha = -\sigma^2(a) - \frac{1}{D}\left( (\sigma^2(d)\sigma(e)+\sigma^2(g)\sigma(f))\begin{vmatrix} d&e\\g&h\end{vmatrix} + (\sigma^2(d)\sigma(h)+\sigma^2(g)\sigma(i))\begin{vmatrix}d&f\\g&i\end{vmatrix}\right), \end{equation*} \begin{equation*} \displaystyle \beta = -(\sigma(a)\alpha + \sigma^2(a)\sigma(a)+\sigma^2(d)\sigma(b)+\sigma^2(g)\sigma(c)) + \frac{\sigma(D)}{D}\begin{vmatrix}e&f\\h&i\end{vmatrix}. \end{equation*} Here we observe directly the need for the hypothesis $D \neq 0$. So that formulas like the previous ones for $\alpha$ and $\beta$ appear less complicated, we define $\overline{x} := \sigma(x)$.
In this notation, our formulae for $\alpha$ and $\beta$ then become \begin{equation}\label{E:alpha} \displaystyle \alpha = -\overline{\overline{a}} - \frac{1}{D}\left( (\ov{\ov{d}}\ov{e}+\ov{\ov{g}}\ov{f})\begin{vmatrix} d&e\\g&h\end{vmatrix} + (\ov{\ov{d}}\ov{h}+\ov{\ov{g}}\ov{i})\begin{vmatrix}d&f\\g&i\end{vmatrix}\right), \end{equation} \begin{equation*}\label{E:beta1} \displaystyle \beta = -(\ov{a}\alpha + \ov{\ov{a}}\ov{a}+\ov{\ov{d}}\ov{b}+\ov{\ov{g}}\ov{c}) + \frac{\ov{D}}{D}\begin{vmatrix}e&f\\h&i\end{vmatrix}. \end{equation*} To make our calculations in Section \ref{S:valcalc} less cumbersome, let us also introduce the following notation: \begin{equation*}B_1:= \frac{\ov{a}}{D} \left( (\ov{\ov{d}}\ov{e}+\ov{\ov{g}}\ov{f})\begin{vmatrix} d&e\\g&h\end{vmatrix}\right) \hspace{20pt} B_2:= \frac{\ov{a}}{D}\left((\ov{\ov{d}}\ov{h}+\ov{\ov{g}}\ov{i})\begin{vmatrix}d&f\\g&i\end{vmatrix}\right) \end{equation*} so that \begin{equation}\label{E:beta} \beta = B_1 +B_2-\ov{\ov{d}}\ov{b}-\ov{\ov{g}}\ov{c} + \frac{\ov{D}}{D}\begin{vmatrix}e&f\\h&i\end{vmatrix}. \end{equation} Finally, recall from the proof of Proposition \ref{T:Dneq0} that $\Phi(e_1 \wedge \Phi(e_1) \wedge \Phi^2(e_1)) =\linebreak \ov{D}\Phi(e_1\wedge e_2 \wedge e_3) = \ov{D}\det A(e_1\wedge e_2\wedge e_3)$, where $\Phi=A\sigma$. On the other hand, using that $(\Phi^3+\alpha \Phi^2 + \beta \Phi + \gamma)(e_1) = 0$, we compute that $\Phi(e_1 \wedge \Phi(e_1) \wedge \Phi^2(e_1)) = -\gamma D(e_1 \wedge e_2 \wedge e_3)$. Equating these two expressions yields \begin{equation*}\gamma = -\frac{\ov{D}}{D}\det A. \end{equation*} One should note that the method used to calculate $\gamma$ generalizes from $GL_3(F)$ to $GL_n(F)$. We shall use these explicit formulae for the coefficients of the characteristic polynomial to make calculations in Section \ref{S:valcalc}. \subsection{The Newton stratification}\label{S:strata} Let $G = SL_3(F)$. 
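Before turning to the stratification, we note that the formulae of Section \ref{S:charpoly} admit a quick sanity check in the special case where $\sigma$ fixes every entry of $A$ (as happens for entries in $k((\pi))$, which $\sigma$ fixes): all bars then drop out, $\ov{D}/D = 1$, and $f$ must reduce to the ordinary characteristic polynomial of $A$. The following sketch performs this check, with rational numbers standing in for the fixed field; the function name and the use of Python are ours, not part of the paper's setup.

```python
from fractions import Fraction

def charpoly_coeffs(A):
    """Coefficients (alpha, beta, gamma) of f = sigma^3 + alpha*sigma^2
    + beta*sigma + gamma, computed from the formulae in the text,
    specialized to the case where sigma fixes every entry of A (so all
    bars drop out and sigma(D)/D = 1).  Requires D != 0, i.e. e_1 cyclic."""
    (a, b, c), (d, e, f), (g, h, i) = [[Fraction(t) for t in row] for row in A]
    m1 = d * h - e * g                      # the minor |d e; g h|
    m2 = d * i - f * g                      # the minor |d f; g i|
    D = d * m1 + g * m2
    assert D != 0, "e_1 is not a cyclic vector"
    alpha = -a - ((d * e + g * f) * m1 + (d * h + g * i) * m2) / D
    beta = -(a * alpha + a * a + d * b + g * c) + (e * i - f * h)
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * m1
    return alpha, beta, -det                # gamma = -(sigma(D)/D) det A = -det A
```

In this commutative special case one checks directly that $\alpha = -\operatorname{tr}(A)$, that $\beta$ is the sum of the principal $2\times 2$ minors, and that $\gamma = -\det A$, exactly as expected of the characteristic polynomial.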
If $A \in G$, we have that $\val(\det A) = 0$ and thus $\gamma \in \mathcal{O}^{\times}$. As discussed in Section \ref{S:cyclic}, the Newton polygon of $f$ is formed from the set $\{ (0,0), (1, -\val(\alpha)), (2, -\val(\beta)), (3,0)\}$. We define $\overline{\nu}(A)$ to be the slope sequence of the Newton polygon associated to the isocrystal $(F^3, A\sigma)$. Again, recall that our definition for $\ov{\nu}$ differs from the conventional one. For example, if $\val(x) \leq \val(y) \leq \val(z)$, we have $\ov{\nu}(\text{diag}(x,y,z)) = (-\val(x), -\val(y), -\val(z))$. The map $$\overline{\nu}: G \longrightarrow \mathcal{N}(G)$$ induces a bijection $B(G) \longleftrightarrow \mathcal{N}(G)$, see \cite{KotIsoI}. Here, $B(G)$ is the set of $\sigma$-conjugacy classes of $G$; \textit{i.e.}, $B(G) = G(F)/\sim$ where $x \sim y \iff x = gy\sigma(g)^{-1}$ for some $g \in G(F)$. We denote by $\mathcal{N}(G)$ the set of possible slope sequences arising from Newton polygons for isocrystals of the form $(F^3, A\sigma)$ with $A \in G$. The map $\ov{\nu}$ induces a natural stratification on $G$ indexed by the elements of $\mathcal{N}(G)$. We define the Newton strata referred to in the title of this paper to be \begin{equation*} G_{\lambda} := \{ g \in G \mid \ov{\nu}(g) = \lambda\}.\end{equation*} The group $G$ then breaks up into a disjoint union of these strata as follows: \begin{equation*} G = \coprod_{\lambda \in \mathcal{N}(G)} G_{\lambda}. \end{equation*} The set $\mathcal{N}(G)$ is actually a partially ordered set. We define $\lambda' \leq \lambda$ if $N_{\lambda'}$ and $N_{\lambda}$ have the same endpoints and all edges of $N_{\lambda'}$ lie on or below the corresponding edges of $N_{\lambda}$. Given a particular $\lambda \in \mathcal{N}(G)$, we are interested in studying all strata $G_{\lambda '}$ such that $\lambda' \leq \lambda$. 
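The passage from the valuations $(\val(\alpha), \val(\beta), \val(\gamma))$ to the slope sequence $\ov{\nu}(A)$ is a small convex-hull computation, which the following sketch carries out in exact rational arithmetic under the sign convention of Section \ref{S:cyclic}; the helper name `newton_slopes` is ours.

```python
from fractions import Fraction

def newton_slopes(vals):
    """Slope sequence of the Newton polygon built on the points (0,0)
    and (i, -vals[i-1]) for i = 1..n, under the sign convention of the
    text: the tightest-fitting concave chain from (0,0) to (n, -vals[-1])
    passing through or above every point.  Slopes are returned with
    multiplicity, ordered lambda_1 >= ... >= lambda_n."""
    n = len(vals)
    pts = [(0, Fraction(0))] + [(i, -Fraction(v)) for i, v in enumerate(vals, 1)]
    hull = [pts[0]]
    while hull[-1][0] < n:   # greedily take the steepest remaining slope
        x0, y0 = hull[-1]
        hull.append(max(pts[x0 + 1:], key=lambda p: (p[1] - y0) / (p[0] - x0)))
    slopes = []
    for (x0, y0), (x1, y1) in zip(hull, hull[1:]):
        slopes += [(y1 - y0) / (x1 - x0)] * (x1 - x0)
    return slopes
```

For example, valuations $(-1,-1,0)$ yield the slope sequence $(1,0,-1)$, and $(0,-1,0)$ yields $(\tfrac{1}{2},\tfrac{1}{2},-1)$; note that for $A \in SL_3(F)$ the final valuation is always $0$ since $\gamma \in \mathcal{O}^{\times}$.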
The closed subset of $G$ determined by a slope sequence $\lambda$ is \begin{equation*} G_{\leq \lambda}:= \coprod_{\lambda' \leq \lambda} G_{\lambda '}.\end{equation*} \subsection{Lie-theoretic interpretation}\label{S:gpthy} Let $B \subset G=SL_3(F)$ denote the Borel subgroup consisting of the upper triangular matrices, and $T$ the maximal torus consisting of all diagonal matrices. Let $W$ denote the Weyl group of $T$ in $G$, which is isomorphic to the symmetric group $S_3$ in this case. Let $\mathfrak{a} := X_*(T)\otimes_{\Z}\R$, and denote its $\Q$-subspace by $\mathfrak{a}_{\Q}:=X_*(T)\otimes_{\Z}\Q$. Denote by $\alpha_i$ the simple roots in $\text{Lie}(G)$, and let $C := \{\lambda \in \mathfrak{a}\ \mid \langle \alpha_i, \lambda \rangle > 0, \ \forall i\}$ denote the dominant Weyl chamber. Analogously, denote by $C^0:=\{\lambda \in \mathfrak{a}\ \mid \langle \alpha_i, \lambda \rangle < 0, \ \forall i\}$ the antidominant Weyl chamber. Our convention will be to call the unique alcove in $C^0$ whose closure contains the origin the base alcove $\mathbf{a}_1$. (This convention makes our calculations in Section \ref{S:valcalc} less cumbersome, and our less traditional definition of the Newton polygon in Section \ref{S:cyclic} aligns more naturally with this convention.) Let $I$ be the associated Iwahori subgroup of $G(F)$. According to our conventions, $I$ is the standard Iwahori subgroup $$I = \begin{pmatrix} \mathcal{O}^{\times} & \mathcal{O} & \mathcal{O} \\ P & \mathcal{O}^{\times} & \mathcal{O}\\ P & P & \mathcal{O}^{\times} \end{pmatrix}.$$ One can also consider $\mathbf{a}_1$ to be the basepoint of the affine flag manifold $G/I$. Denote by $\widetilde{W} = X_*(T)\rtimes W$ the affine Weyl group. We shall express an element $x\in \widetilde{W}$ as $x = \pi^{\mu}w$, for $\mu \in X_*(T)$ and $w \in W$. For $G = GL_3(F)$, we may identify $X_*(T)$ with $\Z^3$. 
We then write $\pi^{\mu} = \text{diag}(\pi^{\mu_1},\pi^{\mu_2},\pi^{\mu_3})$ for $\mu = (\mu_1, \mu_2, \mu_3) \in \Z^3$. In this group-theoretic context, we can interpret a Newton slope sequence $\lambda = (\lambda_1,\lambda_2, \lambda_3)$ as an element $\lambda \in \mathfrak{a}_{\Q,\text{dom}}$, where $\mathfrak{a}_{\Q,\text{dom}}$ denotes the dominant elements in $\mathfrak{a}_{\Q}$. Specifically, $\mathfrak{a}_{\Q,\text{dom}} = \{(\nu_1, \nu_2, \nu_3) \in \Q^3 \mid \nu_1\geq \nu_2 \geq \nu_3\}$. For $G = SL_3(F)$, our description of the group of cocharacters is $X_*(T) \cong \{\mu \in \Z^3 \mid \sum \mu_i = 0\}$. In this case, $\mathfrak{a}_{\Q,\text{dom}} = \{(\nu_1, \nu_2, \nu_3) \in \Q^3 \mid \nu_1\geq \nu_2 \geq \nu_3\ \text{and}\ \nu_1+\nu_2+\nu_3=0\}$. Under these identifications, the partial order on $\mathcal{N}(G)$ then becomes $\lambda' \leq \lambda \iff \lambda - \lambda'$ is a non-negative linear combination of positive coroots. Recall the affine Bruhat decomposition for $G=SL_3(F)$: \begin{equation*} G = \coprod_{x \in \widetilde{W}} IxI.\end{equation*} We study the sets $G_{\leq \lambda}$ when intersected with these double cosets $IxI$ in order that we may define a notion of codimension. For a fixed $x \in \widetilde{W}$, we thus introduce the following analog of the Newton strata discussed in Section \ref{S:strata}: \begin{equation*} (IxI)_{\lambda}:= G_{\lambda} \cap IxI \end{equation*} \begin{equation*} (IxI)_{\leq \lambda}:= \coprod\limits_{\lambda' \leq \lambda} (IxI)_{\lambda'}. \end{equation*} The subset $(IxI)_{\leq \lambda}$ consists of all $g \in IxI$ such that the Newton polygon associated to $(F^3, g\sigma)$ has the same endpoints as $N_{\lambda}$ and lies on or below $N_{\lambda}$. The stratum $(IxI)_{\lambda}$ is non-empty for only finitely many $\lambda \in \mathcal{N}(G)$. 
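The coroot characterization of the partial order is easy to test mechanically: for $SL_3$, writing $\lambda - \lambda' = c_1\alpha_1^{\vee} + c_2\alpha_2^{\vee}$ with $\alpha_1^{\vee} = (1,-1,0)$ and $\alpha_2^{\vee} = (0,1,-1)$, the coefficients are the partial sums $c_1 = (\lambda-\lambda')_1$ and $c_2 = (\lambda-\lambda')_1 + (\lambda-\lambda')_2$. A minimal sketch (the helper name is ours):

```python
from fractions import Fraction
from itertools import accumulate

def leq(lam_p, lam):
    """lam' <= lam: the polygons share endpoints and N_{lam'} lies on or
    below N_{lam}; equivalently, lam - lam' is a non-negative combination
    of positive coroots.  For SL_3 the coroot coefficients are the
    partial sums c_1 = d_1 and c_2 = d_1 + d_2 of d = lam - lam'."""
    d = [Fraction(a) - Fraction(b) for a, b in zip(lam, lam_p)]
    partial = list(accumulate(d))
    return partial[-1] == 0 and all(c >= 0 for c in partial[:-1])
```

For instance, $(0,0,0) \leq (1,0,-1)$, while the two half-integral sequences $(\tfrac12,\tfrac12,-1)$ and $(1,-\tfrac12,-\tfrac12)$ both lie below $(1,0,-1)$ but are incomparable to each other.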
It will be useful to introduce notation for the finitely many Newton slope sequences that actually arise for elements inside a particular Iwahori double coset: \begin{equation*} \mathcal{N}(G)_x := \{ \lambda \in \mathcal{N}(G) \mid (IxI)_{\lambda} \neq \emptyset \}. \end{equation*} The subset $\mathcal{N}(G)_x$ inherits the partial ordering $\leq$ on $\mathcal{N}(G)$. \subsection{Admissibility of $(IxI)_{\lambda}$}\label{S:admis} The double cosets $IxI$ are not finite-dimensional; however, we can develop an adequate notion of codimension by working with finite-dimensional quotients of $IxI$ such as $IxI/I^N$, where $I^N:= \{ g \in I \mid g \equiv \text{id}\mod(P^N)\}$. We obtain another finite-dimensional quotient of $IxI$ by considering its image under the map on $3 \times 3$ matrices induced by $F \rightarrow F/P^N$. By abuse of notation, we denote this image by $IxI/P^N$. Following \cite{GKM}, denote by $p_N$ and $\rho_N$ the surjections $p_N:IxI \twoheadrightarrow IxI/I^N$ and $\rho_N:IxI \twoheadrightarrow IxI/P^N$. Observe that the quotients $IxI/I^N$ and $IxI/P^N$ are finite-dimensional affine schemes. We say that a subset $Y$ of $IxI$ is \emph{admissible} if there exists an integer $N$ such that $Y = p_N^{-1}p_NY$. Note that $Y$ is admissible if and only if there exists an integer $M$ such that $Y = \rho_M^{-1}\rho_MY$. Since these two notions of admissibility are equivalent, we will use whichever is most convenient for us in the given context. If a subset $Y$ of $IxI$ is admissible, we can treat $Y$ as though it is finite-dimensional. In particular, we define the \emph{codimension} of $Y$ in $IxI$ to be the codimension of $p_NY$ in $IxI/I^N$ for any $N$ such that $Y = p_N^{-1}p_NY$. Similarly, we say that $Y$ is \emph{irreducible} (resp. open, closed, locally closed) in $IxI$ if $p_NY$ is irreducible (resp. open, closed, locally closed) for some $N$ such that $Y = p_N^{-1}p_NY$.
Note that we may replace $p_N$ by $\rho_N$ and $I^N$ by $P^N$ to obtain equivalent formulations of these topological notions using the image of $IxI$ under the map $F \rightarrow F/P^N$. We will see that $(IxI)_{\lambda}$ is an admissible subset of $IxI$ for any $\lambda \in \mathcal{N}(G)_x$. We remark that Vasiu has demonstrated the admissibility of latticed $F$-isocrystals, which are isocrystals over the field of fractions of the Witt vectors satisfying an additional property \cite{Vas}. In addition, we will see that the sets $(IxI)_{\lambda}$ are locally closed in $IxI$ and that $(IxI)_{\leq \lambda}$ are precisely the closures of the $(IxI)_{\lambda}$ inside $IxI$. It is not necessarily true in general that $(IxI)_{\leq \lambda}$ is an irreducible subset of $IxI$. We show, however, that all of the irreducible components have the same codimension inside $IxI$. We make more detailed remarks of this nature in Section \ref{S:thmproof}. \subsection{The generic Newton slope sequence}\label{S:generic} By observing that $IxI/I^N$ is irreducible for any positive integer $N$, we see that the double coset $IxI$ is irreducible for fixed $x \in \widetilde{W}$. Furthermore, $IxI$ is the finite union of subsets of the form $(IxI)_{\lambda}$, any two of which are disjoint. If $\lambda \in \mathcal{N}(G)_x$ is maximal, then $(IxI)_{\lambda}$ is actually an open subset of $IxI$. Since $IxI$ is irreducible, there must exist a unique maximal element in $\mathcal{N}(G)_x$. \begin{defn} Given $x \in \widetilde{W}$, we define the generic Newton slope sequence $\nu_x \in \mathfrak{a}_{\Q, \text{dom}}$ to be the unique maximal element in $\mathcal{N}(G)_x$; \textit{i.e.}, $\nu_x$ is defined such that for all $\lambda \in \mathcal{N}(G)_x$, we have $\lambda \leq \nu_x$. \end{defn} For a given $x=\pi^{\mu}w$, note that $\nu_x$ may not coincide with the unique dominant element in the $W$-orbit of $-\mu$, which we denote by $-\mu_{\text{dom}}$. 
In general, $\nu_x \leq -\mu_{\text{dom}}$, with strict inequality occurring for some $x$ such that $\mathbf{a}_x$ lies outside the dominant Weyl chamber. In Section \ref{S:slopes} we provide explicit descriptions for the maximal and minimal elements in $\mathcal{N}(G)_x$ for all $x\in \widetilde{W}$. It would be nice to have a closed formula providing both the maximal element $\nu_x$ and the minimal element in $\mathcal{N}(G)_x$, even in the case of $G=SL_3(F)$. \begin{question}\label{T:nuxform} Is there a closed, root-theoretic formula for the maximal and minimal elements in $\mathcal{N}(G)_x$ for $G = SL_3(F)$? For all $x \in \widetilde{W}$ and any $G$?\end{question} \subsection{Length of a segment $[\mu, \lambda]$}\label{S:length} The codimensions of the Newton strata inside $IxI$ are more conveniently expressed in terms of the length of a segment in the poset $\mathcal{N}(G)_x$, which we now introduce. For $G=SL_3(F)$, the poset $\mathcal{N}(G)$ consists of a single connected component, which is a lattice. Given $x \in \widetilde{W}$ and two slope sequences $\mu, \lambda \in \mathcal{N}(G)_x$ such that $\mu \leq \lambda$, we may consider the segment $[\mu, \lambda]$ defined as follows: \begin{equation*} [\mu,\lambda] := \{ \nu \in \mathcal{N}(G)_x \mid\ \mu \leq \nu \leq \lambda \}. \end{equation*} We define the length of the segment $[\mu, \lambda]$ inside $\mathcal{N}(G)_x$, denoted $\length_{\mathcal{N}(G)_x}[\mu,\lambda]$, to be the supremum of all natural numbers $n$ such that there exists a chain $\mu = \nu_0 < \nu_1 < \cdots < \nu_n = \lambda$ in the poset $\mathcal{N}(G)_x$. Our definition of length is the same as Chai's notion of length on subsets of Newton points expected to appear in the reduction modulo $p$ of a Shimura variety, see \cite{Ch}. Similar to the situation in \cite{Ch}, it turns out that the poset $\mathcal{N}(G)_x$ is ranked or catenary; \textit{i.e.}, any two maximal chains have the same length.
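Computing the length of a segment amounts to finding a longest chain. The following sketch does this for the example from the introduction, where the interval below $(1,0,-1)$ in $\mathcal{N}(G)$ has length two while the segment inside $\mathcal{N}(G)_x$ for $x = \pi^{(-2,0,2)}s_{121}$ has length one; the four-element enumeration of the former interval, as well as the helper names and the use of Python, are ours.

```python
from fractions import Fraction

def leq(a, b):
    """a <= b: all partial sums of b - a are non-negative, total sum 0."""
    s, partial = Fraction(0), []
    for x, y in zip(b, a):
        s += Fraction(x) - Fraction(y)
        partial.append(s)
    return partial[-1] == 0 and all(c >= 0 for c in partial)

def seg_length(poset, mu, lam):
    """Length of [mu, lam] inside poset: the supremum of n over chains
    mu = nu_0 < nu_1 < ... < nu_n = lam."""
    seg = [nu for nu in poset if leq(mu, nu) and leq(nu, lam)]
    def longest(nu):                 # longest chain in seg ending at nu
        below = [x for x in seg if x != nu and leq(x, nu)]
        return 1 + max(map(longest, below)) if below else 0
    return longest(lam)
```

With the interval below $(1,0,-1)$ taken to be $\{(0,0,0),\ (\tfrac12,\tfrac12,-1),\ (1,-\tfrac12,-\tfrac12),\ (1,0,-1)\}$, the segment $[(0,0,0),(1,0,-1)]$ has length $2$; restricting to the two-element poset $\{(0,0,0),(1,0,-1)\}$ drops the length to $1$, as in Figure \ref{fig:poset}.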
We shall also see in Section \ref{S:slopes} that $\mathcal{N}(G)_x$ is a lattice. We might reasonably expect that $\mathcal{N}(G)_x$ is always a ranked lattice. \begin{question}\label{T:rklattice} Is the poset $\mathcal{N}(G)_x$ ranked for all $G$? Is $\mathcal{N}(G)_x$ a lattice for all $G$? \end{question} \subsection{Problem statement}\label{S:thm} We are now prepared to formally state the main theorem, which provides a formula for the codimension of the subset $(IxI)_{\leq \lambda}$ inside $IxI$. \pagebreak \begin{theorem}\label{T:main} Let $G=SL_3(F)$ and fix $x \in \widetilde{W}$. For a Newton slope sequence $\lambda \in \mathcal{N}(G)_x$, the subset $(IxI)_{\leq \lambda}$ of $IxI$ is admissible, and \begin{equation*}\label{E:codim} \codim\left( (IxI)_{\leq \lambda} \subseteq IxI \right) =\length_{\mathcal{N}(G)_x}[\lambda, \nu_x].\end{equation*} Moreover, the closure of a given Newton stratum $(IxI)_{\lambda}$ in $IxI$ is precisely $(IxI)_{\leq \lambda}$. Therefore, for two Newton polygons $\lambda_1< \lambda_2$ which are adjacent in the poset $\mathcal{N}(G)_x$, \begin{equation*} \codim\left( (IxI)_{\leq \lambda_1} \subset (IxI)_{\leq \lambda_2} \right) = 1. \end{equation*} \end{theorem} If we interpret Theorem \ref{T:main} root-theoretically, we can produce an alternative formula for the codimensions of the Newton strata inside $IxI$. Order the simple roots $\alpha_1, \alpha_2 \in X^*(T)$ in the usual way so that $\alpha_i = e_i-e_{i+1}$. Let $\omega_1=(1,0,0)$ and $\omega_2=(1,1,0)$. For $s\in W$ denote by $s(C^0)$ the Weyl chamber corresponding to $s$. As a corollary to Theorem \ref{T:main} we have the following explicit formulae. \begin{cor}\label{T:maincor} Let $G = SL_3(F)$, and fix $x \in \widetilde{W}$ and $\lambda \in \mathcal{N}(G)_x$.
\begin{enumerate} \item If $x = \pi^{(\mu_1,\mu_2,\mu_3)}s_1s_2s_1$ where $\mu_1+2 <\mu_2 +1 < \mu_3$, or if $x=\pi^{(-2n,n,n)}s_1s_2,\linebreak \pi^{-(n,n,-2n)}s_2s_1,\ \pi^{(-2n+1,n-1,n)}s_2,$ or $\pi^{-(n,n-1,-2n+1)}s_1$ for some $n \in \N$, then \begin{equation}\label{E:rootform1} \codim\left( (IxI)_{\leq \lambda} \subseteq IxI\right)= \left(\sum\limits_{i=1}^2 \lceil \langle \omega_i, \nu_x-\lambda\rangle\rceil\right)-1. \end{equation} \item For all $x \in \widetilde{W}$ not of the form $x',\ \varphi(x'),$ or $\varphi^2(x')$ for $x'$ one of the values listed above, where $\varphi(x')$ rotates the alcove $\mathbf{a}_{x'}$ 120 degrees counterclockwise about the center of the base alcove, we have \begin{equation}\label{E:rootform2} \codim\left( (IxI)_{\leq \lambda} \subseteq IxI\right)= \sum\limits_{i=1}^2 \lceil \langle \omega_i,\nu_x- \lambda\rangle\rceil. \end{equation} \end{enumerate} Here, $\lceil \ell \rceil$ denotes the ceiling function, which rounds up to the nearest integer. Note that the formula in \eqref{E:rootform1} only makes sense for $\lambda \neq \nu_x$. \end{cor} Equation \eqref{E:rootform2} is the naive analog of Chai's root-theoretic formula for the length of posets of Newton slope sequences associated to $G(F)$ for $F$ a $p$-adic field, which we now recall for comparison: \begin{theorem}[Chai]\label{T:chai} Let $F$ be a non-Archimedean local field, and let $C^{\nu}_{F,R^{\vee}}$ denote the poset of Newton slope sequences occurring for $G(F)$ that lie below $\nu$, where $G$ is connected, reductive, quasisplit over $F$. Denote by $\omega_{F,i}$ the fundamental $F$-weights, and let $\lambda$ be a slope sequence lying below $\nu$.
Then \begin{equation*}\label{E:chailength} \length_{C^{\nu}_{F,R^{\vee}}}[\lambda,\nu]=\sum^n_{i=1}\lceil \langle \omega_{F,i}, \nu-\lambda\rangle \rceil. \end{equation*} \end{theorem} \noindent The statement of this theorem in \cite{Ch} is for $F$ a $p$-adic field, since Chai is primarily interested in applications to Shimura varieties, although he remarks that the theorem is true even when $F$ has positive characteristic. A similar expression also arises as the formula for the codimension of the Newton strata in the adjoint quotient of a reductive group, $\mathbb{A}(F)_{\leq \lambda}$ in $\mathbb{A}(F)_{\leq \nu_x}$, appearing in \cite{KotNewtStrata}. As indicated by Equation \eqref{E:rootform1}, for certain values of $x$, we require a correction term of $-1$ to the initial guess for the codimensions of the Newton strata, which incorrectly assumes that the length of the segment $[\lambda,\nu]$ inside the poset associated to a particular affine Weyl group element, $\mathcal{N}(G)_x$, coincides with the length in the larger poset $\mathcal{N}(G)$. The affine Weyl group elements for which this correction term appears correspond either to ones whose poset $\mathcal{N}(G)_x$ is missing expected elements, as in our example from Figure \ref{fig:poset}, or to alcoves having half-integral generic Newton slopes, in which case rounding up to the nearest integer yields an overestimate for the codimension. There are advantages to both the combinatorial and root-theoretic presentations for the codimension formula. When expressed in terms of the length of the segment $[\lambda, \nu_x]$, the formula is independent of the affine Weyl group element $x$ in consideration. On the other hand, the explicit root-theoretic version has a natural graphical interpretation, since it depicts, in some sense, the distance between the Newton polygons $N_{\lambda}$ and $N_{\nu_x}$.
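For $SL_3$ the pairings in \eqref{E:rootform1}, \eqref{E:rootform2}, and Theorem \ref{T:chai} reduce to partial sums, $\langle \omega_1, v\rangle = v_1$ and $\langle \omega_2, v\rangle = v_1 + v_2$, so the right-hand sides are immediate to evaluate. A sketch (the `exceptional` flag, encoding case (1) of Corollary \ref{T:maincor}, is our device):

```python
from fractions import Fraction
from math import ceil

def codim_formula(nu_x, lam, exceptional=False):
    """sum_i ceil(<omega_i, nu_x - lam>) for SL_3, where <omega_1, v> = v_1
    and <omega_2, v> = v_1 + v_2; subtract 1 in the exceptional cases of
    part (1) of the corollary."""
    d = [Fraction(a) - Fraction(b) for a, b in zip(nu_x, lam)]
    total = ceil(d[0]) + ceil(d[0] + d[1])
    return total - 1 if exceptional else total
```

For instance, $x = \pi^{(-2,0,2)}s_{121}$ satisfies the hypothesis of case (1), and for $\lambda = (0,0,0)$ and $\nu_x = (1,0,-1)$ the formula gives codimension $2 - 1 = 1$, matching the example discussed in the introduction.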
\subsection{Affine Deligne-Lusztig varieties for $A_2$}\label{S:adlv} Theorem \ref{T:main} is related to the study of certain affine Deligne-Lusztig varieties. Let $G= SL_3(F)$ and $b \in G(F)$. Recall the definition of the affine Deligne-Lusztig variety $X_x(b)$ inside the affine flag manifold: \begin{equation*} X_x(b) := \{ g \in G(F)/I : g^{-1}b\sigma(g) \in IxI\}. \end{equation*} Little is known about the varieties $X_x(b)$, including, in most cases, whether or not they are empty as sets. In \cite{Reu}, Reuman provides a simple criterion for determining non-emptiness of the affine Deligne-Lusztig varieties inside the affine flag manifold for $G=SL_3(F)$ and $b=1$, and alternative methods are discussed in \cite{GHKR}. It is worth noting that the methods used in this paper will provide another means by which we can determine for which $x$ the variety $X_x(1)$ is non-empty. More specifically, 0 is the minimal element in $\mathcal{N}(G)_x$ if and only if $X_x(1) \neq \emptyset$. Denote by $\lambda$ the element $\ov{\nu}(b) \in \mathcal{N}(G)$. Recall from Section \ref{S:gpthy} that $(IxI)_{\lambda} = \linebreak IxI \cap \{ gb\sigma(g)^{-1} \mid g\in G\}$, so that $X_x(b) \neq \emptyset$ if and only if $(IxI)_{\lambda} \neq \emptyset$. One application of the calculations in Section \ref{S:slopes}, in which we explicitly describe the poset $\mathcal{N}(G)_x = \{\lambda \in \mathcal{N}(G) \mid (IxI)_{\lambda} \neq \emptyset \}$, is to determine for which $b \in G$ we have $X_x(b) \neq \emptyset$. Our approach differs from Reuman's method and answers the non-emptiness question for any $b \in G$, rather than only $b=1$, in the case of $A_2$. Although our treatment of $SL_3(F)$ in Section \ref{S:valcalc} suggests that the number of cases becomes unmanageable as the rank of $G$ increases, the author believes that it might be possible to employ arguments similar in flavor to provide a complete answer to the question of non-emptiness in the case of $A_n$. 
The reader will observe in Section \ref{S:slopes} that the examples constructed to prove non-emptiness all lie in $k((\pi))$, rather than $F=\ov{k}((\pi))$. In this case, the characteristic polynomial is much simpler since the Frobenius $\sigma$ fixes all of the matrix entries. Producing matrices over $F^{\sigma}$, together with defining the various cases in a much more combinatorial manner, might provide a strategy for answering questions about $\mathcal{N}(G)_x$. \begin{question}\label{T:nonemptyQ} Can arguments similar to those appearing in Sections \ref{S:valcalc} and \ref{S:slopes} yield complete descriptions of $\mathcal{N}(G)_x$ for all $x$ and $G = SL_n(F)$? \end{question} \end{section} \begin{section}{Reduction Steps}\label{S:reduction} \subsection{Geometry of the Newton strata}\label{S:geom} The group $W$ is generated by $s_1$ and $s_2$, the simple reflections through the walls of the chamber $C$. In coordinates, if we write $x=\pi^{\mu}w$ for $\mu=(\mu_1, \mu_2, \mu_3)$, then $s_1: \mathbf{a}_x \mapsto \mathbf{a}_{x'}$, where $x' = \pi^{(\mu_2, \mu_1, \mu_3)}s_1w$, and $s_2: \mathbf{a}_x \mapsto \mathbf{a}_{x''}$, where $x'' = \pi^{(\mu_1, \mu_3, \mu_2)}s_2w$. We shall use this description of $W$, together with some basic geometry of the root system for $A_2$, to make several key reduction steps. \begin{prop}\label{T:thetacodim} Let $\theta \in \text{Aut}_F(G)$ be such that $\theta(I) = I$. The automorphism $\theta$ then induces bijections on $\widetilde{W}$ and $\mathcal{N}(G)$, and a bijection $\mathcal{N}(G)_x \xrightarrow{\sim} \mathcal{N}(G)_{\theta(x)}$. Moreover, \begin{equation*}\label{E:thetacodim} \codim((IxI)_{\leq \lambda} \subseteq IxI) = \codim((I\theta(x)I)_{\leq \theta(\lambda)} \subseteq I\theta(x)I). \end{equation*} \end{prop} \begin{proof} Recall that there is a bijective correspondence between $\widetilde{W}$ and double cosets $IxI$.
The map $\theta: IxI \rightarrow \theta(IxI)=I\theta(x)I$ therefore induces a bijection on double cosets $\theta:I\ba G/I \xrightarrow{\sim} I\ba G/I$ and hence on $\theta: \widetilde{W} \xrightarrow{\sim} \widetilde{W}$. In addition, since $\theta \in \text{Aut}_F(G)$, if two elements $g_1$ and $g_2$ are $\sigma$-conjugate in $G$, then $\theta(g_1)$ and $\theta(g_2)$ are also $\sigma$-conjugate. We therefore also obtain a bijection on the level of $\sigma$-conjugacy classes $\theta: B(G) \xrightarrow{\sim} B(G)$, which gives rise to a bijection on the two sets of Newton slope sequences \linebreak $\theta:\mathcal{N}(G)\xrightarrow{\sim} \mathcal{N}(G)$ and $\theta: \mathcal{N}(G)_x \xrightarrow{\sim} \mathcal{N}(G)_{\theta(x)}$. Consequently, we obtain $\theta : (IxI)_{\lambda} \xrightarrow{\sim} (I \theta(x)I)_{\theta(\lambda)},$ which is an isomorphism of schemes. \end{proof} \begin{remark} Let $\theta \in \text{Aut}_F(G)$ be such that $\theta (I)=I$, and assume in addition that $\theta(T) = T$. Then $\theta(N_G(T)) = N_G(T)$, and so the bijection $\theta: \widetilde{W} \rightarrow \widetilde{W}$ will also be a group homomorphism. \end{remark} \begin{lemma}\label{T:reduction} Let $x = \pi^{\mu}w \in \widetilde{W}$, where $\mu = (\mu_1, \mu_2, \mu_3)$. It suffices to calculate the codimensions of the Newton strata in $IxI$ for the following cases: \begin{enumerate} \item[(A)] $\mathbf{a}_x \subset C^0$, where $\mu_2 \geq 0$, \item[(B)] $\mathbf{a}_x \subset s_1(C^0)$, where $\mu_1 \geq 0$ and $\mu \neq (\mu_1, \mu_2, \mu_1)$. \end{enumerate} \end{lemma} \begin{proof} Once we compute the codimensions of the Newton strata inside $IxI$ for all $x$ such that $\mathbf{a}_x$ lies in one of two fixed and adjacent Weyl chambers, then we can obtain the codimensions for the remaining $x$ by applying Proposition \ref{T:thetacodim} to the automorphism of $I$ which changes the coordinates that determine the origin for the base alcove. 
Similarly, once we compute the codimensions of the Newton strata inside $IxI$ where $\mu$ has two non-negative coordinates, we obtain the remaining ones by applying Proposition \ref{T:thetacodim} to the automorphism which exchanges the two simple roots. First we consider the symmetries of the base alcove, which change the coordinates that determine which vertex of $\mathbf{a}_1$ is the origin. Let us take a representative for the rotation by 120 degrees about the center of $\mathbf{a}_1$ to be \begin{equation*}\tau := \displaystyle \begin{pmatrix}0 & 0 & \pi^{-1} \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} \in GL_3(F). \end{equation*} Define $\varphi(g):= \tau g \tau^{-1}\in \text{Aut}_F(G)$, and note that $\varphi$ fixes $I$. Since $\sigma(\tau) = \tau$, the induced map $\varphi: \mathcal{N}(G) \rightarrow \mathcal{N}(G)$ is the identity. In addition, one can check that $\varphi(x) =\pi^ys_1s_2w(s_1s_2)^{-1},$ where $y = (-1,0,0)+s_1s_2(\mu_1,\mu_2,\mu_3) + s_1s_2w(0,0,1)$. Proposition \ref{T:thetacodim} says that we can extend the calculations for $x \in \widetilde{W}$ such that $\mathbf{a}_x$ lies in one of two adjacent Weyl chambers to $I\varphi(x)I$ and $I\varphi^2(x)I$. Another element of $\text{Aut}_F(G)$ comes from the automorphism of the Dynkin diagram associated to $\text{Lie}(G)$, which interchanges the two simple roots. Denote by $\eta$ the longest element in the Weyl group \begin{equation*}\eta:= s_1s_2s_1 = \begin{pmatrix} 0&0&1\\0&1&0\\1&0&0\end{pmatrix},\end{equation*} and define $\psi(g):= \eta (g^t)^{-1} \eta^{-1}$ for $g \in G$. Then $\psi(I)=I$, and $\psi$ induces a map on $\mathcal{N}(G)$ given by $\psi (\mu_1, \mu_2,\mu_3) = (-\mu_3, -\mu_2, -\mu_1)$. In addition, $\psi$ induces a bijection $\psi(x) = \pi^{(-\mu_3, -\mu_2, -\mu_1)}w'$, where the reduced expression for $w'$ is obtained from $w$ by interchanging the subscripts 1 and 2.
Applying Proposition \ref{T:thetacodim} enables us to impose the non-negativity restrictions on $\mu_1$ and $\mu_2$ in the two Weyl chambers. \end{proof} \subsection{Newton strata for single cosets}\label{S:coset} As described in Section \ref{S:gpthy}, the affine Bruhat decomposition provides a natural decomposition of $G=SL_3(F)$ into Newton strata of the form $(IxI)_{\lambda}$. In practice, however, it is easier to work with single cosets of the Iwahori subgroup. For the purpose of computing the codimensions of the Newton strata, Lemma \ref{T:coset} below justifies passing to single cosets of the form $xI$. We thus introduce two natural variants of the definitions of the Newton strata given in Section \ref{S:gpthy}. For a fixed $x \in \widetilde{W}$ and $\lambda \in \mathcal{N}(G)$, define \begin{equation*} (xI)_{\lambda} := xI \cap (IxI)_{\lambda}, \end{equation*} \begin{equation*} (xI)_{\leq \lambda} := \coprod_{\lambda ' \leq \lambda} (xI)_{\lambda '}. \end{equation*} Here again, $(xI)_{\lambda'}$ is non-empty only for the finitely many $\lambda' \in \mathcal{N}(G)_x$, so that $(xI)_{\leq \lambda}$ is a union of finitely many Newton strata $(IxI)_{\lambda'}$ intersected with the infinite-dimensional space $xI$. By applying our notion of admissibility to the single coset stratum $(xI)_{\lambda}$, we can define the codimension of $(xI)_{\leq \lambda}$ in $xI$. In addition, this codimension agrees with the desired codimension of $(IxI)_{\leq \lambda}$ in $IxI$. \begin{lemma}\label{T:coset} Fix $x \in \widetilde{W}$, and let $\lambda \in \mathcal{N}(G)_x$. Then, \begin{equation*}\label{E:coset}\codim\left( (IxI)_{\leq \lambda} \subseteq IxI\right) =\codim\left( (xI)_{\leq \lambda} \subseteq xI\right). \end{equation*} \end{lemma} \begin{proof} First consider the case in which $x=\pi^{\mu}w \in \widetilde{W}$ satisfies $\mathbf{a}_x \subset C^0$.
In this case, $I \cap xIx^{-1}$ is of the form \begin{equation*}I\cap xIx^{-1}= \begin{pmatrix} \mathcal{O}^{\times} & \mathcal{O} & \mathcal{O} \\ P^r & \mathcal{O}^{\times} & \mathcal{O} \\ P^s & P^t & \mathcal{O}^{\times} \end{pmatrix},\end{equation*} where $r,s,t$ are positive integers that depend on $\mu$ and satisfy $s \geq r + t$. Consider \begin{equation*}H:= \begin{pmatrix} 1 & 0 & 0 \\ \ov{k}[\pi]_1^{r-1} & 1 & 0 \\ \ov{k}[\pi]_1^{s-1} & \ov{k}[\pi]_1^{t-1} & 1 \end{pmatrix}\subseteq I,\end{equation*} where $\ov{k}[\pi]_1^n$ is the vector space over $\ov{k}$ generated by $\pi^i$ for $i = 1, \dots, n$. If $n=0$, we define $\ov{k}[\pi]_1^n := 0$. The relationship $s \geq r+t$ implies that $H$ is a subgroup of $I$. Observe that the Iwahori subgroup decomposes into a product $I=H\cdot (I\cap xIx^{-1})$, in which $H \cap (I \cap xIx^{-1}) = 1$. One can verify that the map $H \times xI \rightarrow IxI$ given by $(h, xi) \mapsto hxi\sigma(h)^{-1}$ is an isomorphism of schemes. Under this isomorphism, $H \times (xI)_{\lambda}$ is mapped to $(IxI)_{\lambda}$ and $H \times (xI)_{\leq \lambda}$ is mapped to $(IxI)_{\leq \lambda}$, and so the result holds in the case where $\mathbf{a}_x \subset C^0$. The other cases are handled similarly. \end{proof} \end{section} \begin{section}{Conditions determining the Newton strata}\label{S:valcalc} The proof of Theorem \ref{T:main} proceeds in two steps. The focus of this section will be to calculate the explicit form of the subscheme $(xI)_{\leq \lambda}$ inside $xI$ for all $x \in \widetilde{W}$ in the two cases determined by Lemma \ref{T:reduction}. We will see that the conditions that define $(xI)_{\leq \lambda}$ are polynomial in finitely many of the coefficients of the entries of a given $g \in xI$. Having explicit formulae determining all Newton strata in $xI$ allows us to compute $\mathcal{N}(G)_x$ for all $x$, from which we may obtain a concrete formula for $\length_{\mathcal{N}(G)_x}[\lambda, \nu_x]$. 
We provide descriptions of $\mathcal{N}(G)_x$ in the next section, and the proof of Theorem \ref{T:main} then appears in Section \ref{S:codimcalc}. For fixed $x \in \widetilde{W}$ and $\lambda \in \mathcal{N}(G)_x$, we use the characteristic polynomial from Section \ref{S:charpoly} to find explicit conditions on the entries of a particular $g \in xI$ that yield $\ov{\nu}(g) \leq \lambda$. Since $\lambda \in \mathfrak{a}_{\Q}$, we will encounter conditions on the valuations of the matrix entries that involve rational numbers. We thus adopt the convention that $P^{\ell} := P^{\lceil \ell \rceil}$, for $\ell \in \Q$, so that, for example, $P^{1/2} = P$. In addition, we will occasionally abuse notation and write $\pi^{\ell}:= \pi^{\lceil \ell \rceil}$, for $\ell \in \Q$. \subsection{Two technical lemmas}\label{S:twolemmas} We open with two technical but useful lemmas. The first lemma reformulates the definition of the partial ordering on $\mathcal{N}(G)$ in terms of conditions on the valuations of the coefficients of the characteristic polynomial. \begin{lemma}\label{T:charpolypo} Fix $\lambda \in \mathcal{N}(G)$, and suppose that $\nu$ is the Newton slope sequence associated to the isocrystal $(F^3,g\sigma)$ for $g\in G$, having characteristic polynomial of the form $f = \sigma^3 + \alpha\sigma^2 + \beta\sigma + \gamma$. Then, \begin{equation*}\label{E:charpolypo}\nu \leq \lambda \iff \alpha \in P^{-\lambda_1}\ \text{and}\ \beta \in P^{\lambda_3}.\end{equation*} \end{lemma} \begin{proof} Write $\nu = (\nu_1,\nu_2,\nu_3)$. Since $\val(\gamma) = 0$ in $SL_3(F)$, the endpoints for $N_{\nu}$ and $N_{\lambda}$ coincide. We thus have that $\nu \leq \lambda$ precisely when $\lambda_1 \geq \nu_1$ and $\lambda_3 \leq \nu_3$. By the general theory of Newton polygons, these two inequalities hold precisely when $\val(\alpha) \geq -\lambda_1$ and $\val(\beta) \geq \lambda_3$, which is the assertion of the lemma. \end{proof} At present the second lemma is unmotivated, but the result will be useful in certain natural subcases within the proofs of almost every subsequent proposition. \begin{lemma}\label{T:irrelterms} Let $\lambda, \mu \in X_*(T) \otimes_{\Z} \Q$, and assume that $\lambda$ is dominant.
If, in addition, $\mu_1+\mu_3 \leq \lambda_3$, then $P^{\mu_2 - \lambda_1} \subseteq P^{\lambda_3}$. \end{lemma} \begin{proof} It suffices to show that $\mu_2 - \lambda_1 \geq \lambda_3$. Now, $\mu_1 + \mu_3 \leq \lambda_3 \iff -\mu_1 - \mu_3 \geq -\lambda_3$. But $\mu_1+\mu_2+\mu_3 = 0$ in $SL_3(F)$, so that $\mu_2 \geq -\lambda_3 \geq -\lambda_2$, where we have also used that $\lambda$ is dominant. Since $\lambda_1+\lambda_2+\lambda_3 = 0$ as well, we have $-\lambda_2 = \lambda_1+\lambda_3$. Hence, $\mu_2 \geq \lambda_1+\lambda_3$, and so $\mu_2 - \lambda_1 \geq \lambda_3$, as desired. \end{proof} \subsection{Conditions on valuations determining the Newton strata: Case A}\label{S:valpolysA} It suffices to compute $(xI)_{\leq \lambda}$ for $x \in \widetilde{W}$ such that the alcoves $\mathbf{a}_x$ satisfy either condition A or B as specified in Lemma \ref{T:reduction}. We begin by systematically analyzing case A. Recall that in this case, we consider $x \in \widetilde{W}$ such that the translation component is antidominant and has two non-negative coordinates. In addition, if $x = \pi^{\mu}w \in \widetilde{W}$, then there are six possible values for $w \in W$: \begin{align*} (\text{I})\ &w = s_1s_2 & (\text{IV})\ &w = s_1\\ (\text{II})\ &w = s_2s_1 & (\text{V})\ &w = s_2\\ (\text{III})\ &w = s_1s_2s_1 & (\text{VI})\ &w = 1. \end{align*} In this subsection, we compute $(xI)_{\leq \lambda}$ for $x=\pi^{\mu}w$ such that $\mathbf{a}_x \subset C^0$ and $\mu_2\geq 0$, where $w\in W$ falls into one of the above six cases. The reader should note that in order to rigorously verify the arguments for case B, which are only indicated in an abbreviated form in Section \ref{S:valpolysB}, he should also perform the following calculations not only for $xI$, but also for $xI'$, where $I'$ is the non-standard Iwahori subgroup defined in Equation \eqref{E:I'}. We justify this claim in Section \ref{S:valpolysB}, although the calculations are more easily performed simultaneously with those for case A.
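Before entering the case-by-case analysis, we record a concrete instance of Lemma \ref{T:irrelterms}, whose hypotheses will be verified repeatedly in the propositions below; the values here are chosen purely for illustration. Take $\lambda = (1,0,-1)$, which is dominant, and $\mu = (-2,1,1)$. Then
\begin{equation*}
\mu_1 + \mu_3 = -1 \leq \lambda_3 = -1, \qquad \text{so that} \qquad P^{\mu_2-\lambda_1} = P^{0} = \mathcal{O} \subseteq P^{-1} = P^{\lambda_3},
\end{equation*}
as the lemma predicts. It is in exactly this manner that summands of $\beta$ lying in $P^{\mu_2-\lambda_1}$ will be discarded as automatically belonging to $P^{\lambda_3}$.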
\begin{prop}[\textbf{Case IA}]\label{T:123vals} Let $x=\pi^{\mu}s_1s_2$ satisfy $\mathbf{a}_x \subset C^0$ and $\mu_2 \geq 0$. Then $\mu_1< \mu_2 \leq \mu_3$. In addition, $\mu_1 < 0$ and $\mu_3 > 0$. Now fix $\lambda=(\lambda_1, \lambda_2, \lambda_3) \in \mathcal{N}(G)_x$. We then have \begin{equation}\label{E:123gen} \lambda_1 \leq -\mu_1 - 1 \quad \text{and} \quad \lambda_3 \geq -\mu_3 + \frac{1}{2}.\end{equation} Further, the only possibility in which $\lambda_3 = -\mu_3 + \frac{1}{2}$ is for $\mu_2=\mu_3$. Otherwise, $\lambda_3 \geq -\mu_3 + 1$. In describing $(xI)_{\leq \lambda}$, we have the following two subcases: \begin{enumerate} \item[($i$)] If $-\mu_3 + \frac{1}{2}\leq \lambda_3\leq -\mu_2+1$, then \begin{equation}\label{E:123i} (xI)_{\leq \lambda} = \left\{ \begin{pmatrix} a& b& c\\d&e&f\\g&h&i \end{pmatrix} \in xI \biggm| a \in P^{-\lambda_1} \ \text{and}\ \begin{vmatrix} a & b\\ d& e\end{vmatrix} \in P^{\lambda_3}\right\}. \end{equation} \item[($ii$)] If $-\mu_2+1 < \lambda_3$, then \begin{equation}\label{E:123ii} (xI)_{\leq \lambda} = \left\{ \begin{pmatrix} a& b& c\\d&e&f\\g&h&i \end{pmatrix} \in xI \biggm| a \in P^{-\lambda_1}\ \text{and}\ \ov{d}b+\ov{g}c \in P^{\lambda_3}\right\}. \end{equation} Note that subcase ($ii$) only arises when $\mu_2 >1$, since $\lambda_3$ is always non-positive. \end{enumerate} \end{prop} \begin{proof} We first claim that if $A \in xI$, then $e_1$ is a cyclic vector for $(F^3,A\sigma)$. Since $w = s_1s_2$, we have that \begin{equation*}\label{E:123coset} A:= \begin{pmatrix} a& b& c\\d&e&f\\g&h&i \end{pmatrix} \in \begin{pmatrix}P^{\mu_1+1} & P^{\mu_1+1} & P^{\mu_1}_{\times}\\ P^{\mu_2}_{\times} & P^{\mu_2} & P^{\mu_2} \\ P^{\mu_3+1} & P^{\mu_3}_{\times} & P^{\mu_3} \end{pmatrix} = xI. \end{equation*} Here, by $y \in P^k_{\times}$ we mean that $\val(y) = k$. Recall from Section \ref{S:charpoly} that $D= \displaystyle \ov{d}\begin{vmatrix} d& e\\g&h \end{vmatrix} + \ov{g}\begin{vmatrix} d&f\\g&i\end{vmatrix}$. 
We may directly compute that $\val(D) = 2\mu_2+\mu_3$, since in this case we have $\mu_2 < \mu_3 + 1$. Hence, by Proposition \ref{T:Dneq0}, $e_1$ is a cyclic vector for $(F^3, A\sigma)$, and therefore the characteristic polynomial is given by the equations provided in Section \ref{S:charpoly}. Denote by $\nu_A$ the Newton slope sequence associated to $(F^3, A\sigma)$. Recall from Lemma \ref{T:charpolypo} that $\nu_A \leq \lambda \iff \alpha \in P^{-\lambda_1}$ and $\beta \in P^{\lambda_3}$. It thus suffices to compute the conditions under which $\alpha \in P^{-\lambda_1}$ and $\beta \in P^{\lambda_3}$. We begin by examining the conditions under which $\alpha \in P^{-\lambda_1}$. Observe that $$\frac{1}{D}\left( (\ov{\ov{d}}\ov{e}+\ov{\ov{g}}\ov{f})\begin{vmatrix} d&e\\g&h\end{vmatrix} + (\ov{\ov{d}}\ov{h}+\ov{\ov{g}}\ov{i})\begin{vmatrix}d&f\\g&i\end{vmatrix}\right) \in \mathcal{O},$$ since $\mu_2 \geq 0$. Thus we see by Equation \eqref{E:alpha} that $\alpha \in P^{-\lambda_1} \iff \ov{\ov{a}} \in P^{-\lambda_1} \iff a \in P^{-\lambda_1}$, since $\sigma(P)=P$. In addition, since in this case $a \in P^{\mu_1+1}$, we see that $\lambda_1\leq -\mu_1-1$. Now we consider the condition $\beta \in P^{\lambda_3}$. Compute that \begin{align*} B_1 &\in P^{\mu_1+\mu_2+1} & B_2 &\in P^{\mu_1+\mu_3+1}\\ - \ov{\ov{d}}\ov{b}&\in P^{\mu_1+\mu_2+1} & -\ov{\ov{g}}\ov{c} &\in P^{\mu_1+\mu_3+1}\\ \frac{\ov{D}}{D}\begin{vmatrix}e&f\\h&i\end{vmatrix} &\in P^{\mu_2+\mu_3} \subset \mathcal{O} & \phantom{} \end{align*} In particular, $\beta \in P^{\mu_1+\mu_2+1}$ and so if $\mu_2<\mu_3$, then $\lambda_3 \geq -\mu_3 + 1.$ In the special case in which $\mu_2 = \mu_3$, we instead have $\lambda_3 \geq \frac{\mu_1+1}{2} = -\mu_3 + \frac{1}{2},$ and we have verified Equation \eqref{E:123gen}. We will use our estimates for $\lambda_1$ and $\lambda_3$ to explicitly describe $\mathcal{N}(G)_x$ in Section \ref{S:slopes}. 
Similar estimates will appear in all subsequent propositions without further comment. Comparing the valuations of the summands of $\beta$, we see that we should consider two subcases, which are identical to those provided in the statement of the proposition: \begin{enumerate} \item[($i$)] $\mu_1+\mu_2+\frac{1}{2} \leq \lambda_3\leq \mu_1+\mu_3+1$, \item[($ii$)] $\mu_1+\mu_3+1 < \lambda_3$. \end{enumerate} We now consider each subcase individually. \vskip 10 pt Subcase ($i$): $\mu_1+\mu_2+\frac{1}{2} \leq \lambda_3\leq \mu_1+\mu_3+1$ \vskip 10 pt First assume that $\lambda_3 = -\mu_3 + \frac{1}{2}$, which only arises if $\mu_2=\mu_3$. In this special case, $\beta \in P^{\lambda_3}$ automatically. Therefore, if $\mu_2=\mu_3$, we have \begin{equation}\label{E:123i*} (xI)_{\leq \lambda} = \left\{ \begin{pmatrix} a& b& c\\d&e&f\\g&h&i \end{pmatrix} \in xI \biggm| a \in P^{-\lambda_1} \right\}. \end{equation} The expression in \eqref{E:123i*} is equivalent to Equation \eqref{E:123i}, since in this special case we automatically have $\begin{vmatrix}a&b\\d&e\end{vmatrix} \in P^{\mu_1+\mu_2+1} = P^{\lambda_3}$, so that the determinant condition in \eqref{E:123i} is vacuous. Now assume that $\mu_2 < \mu_3$ so that $\lambda_3 \geq -\mu_3 +1$. Then $\beta \in P^{\lambda_3} \iff B_1 - \ov{\ov{d}}\ov{b} \in P^{\lambda_3}$.
We may write \begin{align} B_1 - \ov{\ov{d}}\ov{b} &= \frac{1}{D}\left(\ov{\ov{d}}\ov{a}\ov{e}\begin{vmatrix}d&e\\g&h\end{vmatrix} + \ov{\ov{g}}\ov{a}\ov{f}\begin{vmatrix} d&e\\g&h\end{vmatrix}- \ov{\ov{d}}\ov{b}D\right)\notag \\ \phantom{B_1 - \ov{\ov{d}}\ov{b}} &= \frac{1}{D}\left(\ov{\ov{d}}\ov{a}\ov{e}\begin{vmatrix}d&e\\g&h\end{vmatrix} + \ov{\ov{g}}\ov{a}\ov{f}\begin{vmatrix} d&e\\g&h\end{vmatrix}- \ov{\ov{d}}\ov{b}\ov{d}\begin{vmatrix}d&e\\g&h\end{vmatrix} - \ov{\ov{d}}\ov{b}\ov{g}\begin{vmatrix}d&f\\g&i\end{vmatrix}\right) \notag \\\phantom{B_1 - \ov{\ov{d}}\ov{b}} &= \frac{1}{D}\left(\ov{\ov{d}}\ov{\begin{vmatrix}a&b\\d&e\end{vmatrix}}\begin{vmatrix}d&e\\g&h\end{vmatrix} + \ov{\ov{g}}\ov{a}\ov{f}\begin{vmatrix} d&e\\g&h\end{vmatrix}- \ov{\ov{d}}\ov{b}\ov{g}\begin{vmatrix}d&f\\g&i\end{vmatrix}\right) .\label{E:betaarrange}\end{align} First, observe that $\displaystyle \frac{\ov{\ov{g}}\ov{a}\ov{f}}{D}\begin{vmatrix}d&e\\g&h\end{vmatrix} \in P^{\mu_1+\mu_3+2}\subset P^{\lambda_3}$ in subcase ($i$). Similarly, $\displaystyle -\frac{\ov{\ov{d}}\ov{b}\ov{g}}{D}\begin{vmatrix}d&f\\g&i\end{vmatrix} \in P^{\lambda_3}$. Thus, the second and third terms in our final expression for $B_1 - \ov{\ov{d}}\ov{b}$ are automatically in $P^{\lambda_3}$. Consequently, $\beta \in P^{\lambda_3} \iff \displaystyle \frac{\ov{\ov{d}}}{D}\ov{\begin{vmatrix}a&b\\d&e\end{vmatrix}}\begin{vmatrix}d&e\\g&h\end{vmatrix} \in P^{\lambda_3} \iff \ov{\begin{vmatrix}a&b\\d&e\end{vmatrix}} \in P^{\lambda_3}$, since $\displaystyle \frac{\ov{\ov{d}}}{D}\begin{vmatrix}d&e\\g&h\end{vmatrix} \in \mathcal{O}^{\times}$. Lemma \ref{T:charpolypo} and the fact that $\sigma(P)=P$ now imply the result for subcase ($i$). \vskip 10 pt Subcase ($ii$): $\mu_1+\mu_3+1 < \lambda_3$ \vskip 10 pt A priori, all four terms $B_1+B_2 - \ov{\ov{d}}\ov{b} - \ov{\ov{g}}\ov{c}$ affect $\val(\beta)$ if $\lambda_3 > \mu_1+\mu_3+1$. Note, however, that both $B_1$ and $B_2$ contain a factor of $a$. 
We used that $a \in P^{\mu_1+1}$ to obtain our original estimates for the valuations of $B_1$ and $B_2$. In this subcase, we actually have a better estimate for $\val(a)$. Namely, $a \in P^{-\lambda_1}$ by our analysis of $\alpha$. In particular, we then see that $B_1 \in P^{\mu_2-\lambda_1}.$ Since $\mu_1+\mu_3+1 < \lambda_3$, the hypotheses of Lemma \ref{T:irrelterms} are satisfied, and $B_1$ is automatically in $P^{\lambda_3}$. Similarly, we can compute that $B_2 \in P^{\mu_3-\lambda_1} \subseteq P^{\mu_2-\lambda_1}$ since $\mu_2 \leq \mu_3$ so that $B_2$ is also automatically in $P^{\lambda_3}$. For the range of $\lambda_3$ specified in subcase ($ii$), we thus see that $\beta \in P^{\lambda_3} \iff \ov{\ov{d}}\ov{b} + \ov{\ov{g}}\ov{c}\in P^{\lambda_3} \iff \ov{d}b + \ov{g}c\in P^{\lambda_3}$, and Lemma \ref{T:charpolypo} implies Equation \eqref{E:123ii}. \end{proof} \begin{prop}[\textbf{Case IIA}]\label{T:132vals} Let $x=\pi^{\mu}s_2s_1$ satisfy $\mathbf{a}_x \subset C^0$ and $\mu_2 \geq 0$. Then \linebreak $\mu_1< \mu_2 < \mu_3$. In addition, $\mu_1 < 0$ and $\mu_3 > 0$. Now fix $\lambda=(\lambda_1, \lambda_2, \lambda_3) \in \mathcal{N}(G)_x$. We then have \begin{equation}\label{E:132gen} \lambda_1 \leq -\mu_1-1 \quad \text{and} \quad \lambda_3 \geq -\mu_3+1.\end{equation} In describing $(xI)_{\leq \lambda}$, we have the following two subcases: \begin{enumerate} \item[($i$)] If $-\mu_3+1\leq \lambda_3\leq -\mu_2$, then \begin{equation} (xI)_{\leq \lambda} = \left\{ \begin{pmatrix} a& b& c\\d&e&f\\g&h&i \end{pmatrix} \in xI \biggm| a \in P^{-\lambda_1} \ \text{and}\ \begin{vmatrix} a & b\\ d& e\end{vmatrix} \in P^{\lambda_3}\right\}. \end{equation} \item[($ii$)] If $-\mu_2 < \lambda_3$, then \begin{equation} (xI)_{\leq \lambda} = \left\{ \begin{pmatrix} a& b& c\\d&e&f\\g&h&i \end{pmatrix} \in xI \biggm| a \in P^{-\lambda_1}\ \text{and}\ \ov{d}b+\ov{g}c \in P^{\lambda_3}\right\}. 
\end{equation} Note that subcase ($ii$) only arises when $\mu_2 >0$, since $\lambda_3$ is always non-positive. \end{enumerate} \end{prop} \begin{proof} Unlike in Proposition \ref{T:123vals}, $e_1$ is not automatically a cyclic vector in this case. By computing $e_2 \wedge \Phi(e_2) \wedge \Phi^2(e_2)$ as in the proof of Proposition \ref{T:Dneq0}, we see that $e_2$ is a cyclic vector for $(F^3, A\sigma)$ if and only if $D'= \displaystyle \ov{b}\begin{vmatrix} a& b\\g&h \end{vmatrix} - \ov{h}\begin{vmatrix} b&c\\h&i\end{vmatrix} \neq 0$. We have that $\val(D') = 2\mu_1+\mu_3$ in this case, and thus $e_2$ is a cyclic vector for $(F^3, A\sigma)$. So that we can continue to use the expressions for $\alpha$ and $\beta$ from Section \ref{S:charpoly}, we replace $\Phi = A\sigma$ by $B\Phi B^{-1}$, where \begin{equation*} B:= \begin{pmatrix} 0&1&0\\1&0&0\\0&0&1\end{pmatrix}\end{equation*} is a change of basis. Since $\sigma$ fixes $B$, we then see that $e_1$ is a cyclic vector for $(F^3, BAB^{-1}\sigma)$. Thus, if we denote by $A' := BAB^{-1}$, we have that \begin{equation*}A':= \begin{pmatrix} a&b&c\\d&e&f\\g&h&i \end{pmatrix} \in \begin{pmatrix}P^{\mu_2+1} & P^{\mu_2+1} & P^{\mu_2}_{\times}\\ P^{\mu_1}_{\times} & P^{\mu_1+1} & P^{\mu_1} \\ P^{\mu_3} & P^{\mu_3}_{\times} & P^{\mu_3} \end{pmatrix} = B(xI)B^{-1}.\end{equation*} Since $\sigma(B)=B$, conjugation and $\sigma$-conjugation by $B$ coincide and thus induce the identity on $\mathcal{N}(G)_x$. For the matrix $A'$ we have $\val(D) = 2\mu_1 + \mu_3$ and can thus use Equations \eqref{E:alpha} and \eqref{E:beta} to compute the conditions under which $A' \in (BxIB^{-1})_{\leq \lambda}$. We then change back to our coordinates for $A$ instead of $A'$ to describe $(xI)_{\leq \lambda}$. Lemma \ref{T:charpolypo} says that we need only compute the conditions under which $\alpha \in P^{-\lambda_1}$ and $\beta \in P^{\lambda_3}$. 
First, observe that $$-\ov{\ov{a}} - \frac{1}{D}\left( \ov{\ov{g}}\ov{f}\begin{vmatrix} d&e\\g&h\end{vmatrix} + (\ov{\ov{d}}\ov{h}+\ov{\ov{g}}\ov{i})\begin{vmatrix}d&f\\g&i\end{vmatrix}\right) \in \mathcal{O}.$$ Hence we see that $\alpha \in P^{-\lambda_1} \iff \displaystyle \frac{\ov{\ov{d}}\ov{e}}{D}\begin{vmatrix} d & e\\g&h \end{vmatrix}\in P^{-\lambda_1} \iff \ov{e} \in P^{-\lambda_1}$, since $\displaystyle \frac{\ov{\ov{d}}}{D}\begin{vmatrix}d&e\\g&h \end{vmatrix} \in \mathcal{O}^{\times}$. But then $ \ov{e} \in P^{-\lambda_1} \iff e \in P^{-\lambda_1}$, since $\sigma(P)=P$. Similarly, by recalling Equation \eqref{E:beta} for $\beta$, we compute that \begin{align*} B_1 &\in P^{\mu_1+\mu_2+2} & B_2 &\in \mathcal{O}\\ - \ov{\ov{d}}\ov{b}&\in P^{\mu_1+\mu_2+1} & -\ov{\ov{g}}\ov{c} &\in \mathcal{O}\\ \frac{\ov{D}}{D}\begin{vmatrix}e&f\\h&i\end{vmatrix} &\in P^{\mu_1+\mu_3} & \phantom{} \end{align*} Since $\alpha \in P^{\mu_1+1}$ and $\beta \in P^{\mu_1+\mu_2+1}$, Lemma \ref{T:charpolypo} implies Equation \eqref{E:132gen}. Comparing the valuations of the summands of $\beta$, we once more see that we should consider two subcases: \begin{enumerate} \item[($i$)] $\mu_1+\mu_2+1 \leq \lambda_3\leq \mu_1+\mu_3$, \item[($ii$)] $\mu_1+\mu_3 < \lambda_3$. \end{enumerate} We now consider each subcase individually. \vskip 10 pt Subcase ($i$): $\mu_1+\mu_2+1 \leq \lambda_3\leq \mu_1+\mu_3$ \vskip 10 pt In this subcase, $\beta \in P^{\lambda_3} \iff B_1 - \ov{\ov{d}}\ov{b} \in P^{\lambda_3}$. Recalling Equation \eqref{E:betaarrange}, we have that \begin{equation*} B_1 - \ov{\ov{d}}\ov{b} = \frac{1}{D}\left(\ov{\ov{d}}\ov{\begin{vmatrix}a&b\\d&e\end{vmatrix}}\begin{vmatrix}d&e\\g&h\end{vmatrix} + \ov{\ov{g}}\ov{a}\ov{f}\begin{vmatrix} d&e\\g&h\end{vmatrix}- \ov{\ov{d}}\ov{b}\ov{g}\begin{vmatrix}d&f\\g&i\end{vmatrix}\right). 
\end{equation*} Again, the last two terms are automatically in $\mathcal{O}$, so $\beta \in P^{\lambda_3} \iff \displaystyle \frac{\ov{\ov{d}}}{D}\ov{\begin{vmatrix}a&b\\d&e\end{vmatrix}}\begin{vmatrix}d&e\\g&h\end{vmatrix} \in P^{\lambda_3} \iff \ov{\begin{vmatrix}a&b\\d&e\end{vmatrix}} \in P^{\lambda_3}$, since $\displaystyle \frac{\ov{\ov{d}}}{D}\begin{vmatrix}d&e\\g&h\end{vmatrix} \in \mathcal{O}^{\times}$. But again, $\ov{\begin{vmatrix}a&b\\d&e\end{vmatrix}} \in P^{\lambda_3} \iff \begin{vmatrix}a&b\\d&e\end{vmatrix} \in P^{\lambda_3}$. \vskip 10 pt Subcase ($ii$): $\mu_1+\mu_3 < \lambda_3$ \vskip 10 pt Note that $\beta \in P^{\lambda_3} \iff B_1 - \ov{\ov{d}}\ov{b} + \ds \frac{\ov{D}}{D}ei - \frac{\ov{D}}{D}fh \in P^{\lambda_3}$. However, we can employ a better estimate for $\val(e)$, since $e \in P^{-\lambda_1}$. Using this observation, we compute that $B_1 \in P^{\mu_2-\lambda_1+1}\subset P^{\mu_2 -\lambda_1}$. Again, Lemma \ref{T:irrelterms} applies to demonstrate that $B_1 \in P^{\lambda_3}$. In addition, note that $\ds \frac{\ov{D}}{D} \in \mathcal{O}^{\times}$ so that $\ds \frac{\ov{D}}{D}ei \in P^{\mu_3-\lambda_1}\subset P^{\mu_2 - \lambda_1}$ by our hypotheses on $\mu$. Therefore, $\displaystyle \frac{\ov{D}}{D}ei \in P^{\lambda_3}$ by Lemma \ref{T:irrelterms} as well. We have thus shown that $\beta \in P^{\lambda_3} \iff \ov{\ov{d}}\ov{b} +\displaystyle \frac{\ov{D}}{D}fh \in P^{\lambda_3}$. Write \begin{align*} \ov{\ov{d}}\ov{b} + \frac{\ov{D}}{D}fh &= \frac{D}{D}\ov{\ov{d}}\ov{b} + \frac{\ov{D}}{D}fh \notag\\ \phantom{\ov{\ov{d}}\ov{b} + \frac{\ov{D}}{D}fh} &= \frac{1}{D}\left( \ov{\ov{d}}\ov{b}\ov{d} \begin{vmatrix} d&e \\ g&h \end{vmatrix} + \ov{\ov{d}}\ov{b}\ov{g}\begin{vmatrix}d&f\\g&i \end{vmatrix} +fh\ov{\ov{d}} \ov{\begin{vmatrix} d&e \\ g&h \end{vmatrix}} + fh\ov{\ov{g}}\ov{\begin{vmatrix}d&f\\g&i \end{vmatrix}}\right). \end{align*} First we argue that the second and last terms in this expression lie in $\mathcal{O}$. 
Computing valuations, we see that $\ds \frac{\ov{\ov{d}}\ov{b}\ov{g}}{D}\begin{vmatrix}d&f\\g&i\end{vmatrix} + \frac{fh\ov{\ov{g}}}{D}\ov{\begin{vmatrix}d&f\\g&i \end{vmatrix}} \in P^{\mu_2+\mu_3+1} +P^{2\mu_3} \subset \mathcal{O}$, since $\mu_3 > \mu_2 \geq 0$ by hypothesis. Thus, $\beta \in P^{\lambda_3} \iff \ds \frac{1}{D}\left( \ov{\ov{d}}\ov{b}\ov{d} \begin{vmatrix} d&e \\ g&h \end{vmatrix} +fh\ov{\ov{d}} \ov{\begin{vmatrix} d&e \\ g&h \end{vmatrix}} \right) \in P^{\lambda_3}$. Now we again use our estimate on $\val(e)$ to show that $\ds \frac{-1}{D} \left( \ov{\ov{d}}\ov{b}\ov{d}ge + fh\ov{\ov{d}}\ov{ge} \right)$ is automatically in $P^{\lambda_3}$. Using that $e \in P^{-\lambda_1}$, we see that $\ds \frac{\ov{\ov{d}}\ov{b}\ov{d}ge}{D} \in P^{\mu_2-\lambda_1+1}\subset P^{\lambda_3}$ by Lemma \ref{T:irrelterms}. Similarly, $\ds \frac{fh\ov{\ov{d}}\ov{ge}}{D} \in P^{\mu_3 - \lambda_1} \subset P^{\mu_2-\lambda_1} \subseteq P^{\lambda_3}$. We have thus demonstrated that $\ds \beta \in P^{\lambda_3} \iff \frac{\ov{\ov{d}}\ov{d}h}{D}(\ov{b}d + \ov{h}f)\in P^{\lambda_3}$ in subcase ($ii$). Again since $\ds \frac{\ov{\ov{d}}\ov{d}h}{D} \in \mathcal{O}^{\times}$, we conclude that $\beta \in P^{\lambda_3} \iff \ov{b}d + \ov{h}f \in P^{\lambda_3}$. Altogether, we have proved the following: \begin{enumerate} \item[($i$)] If $-\mu_3+1\leq \lambda_3\leq -\mu_2$, then \begin{equation*} (BxIB^{-1})_{\leq \lambda} = \left\{ \begin{pmatrix} a& b& c\\d&e&f\\g&h&i \end{pmatrix} \in BxIB^{-1} \biggm| e \in P^{-\lambda_1} \ \text{and}\ \begin{vmatrix} a & b\\ d& e\end{vmatrix} \in P^{\lambda_3}\right\}. \end{equation*} \item[($ii$)] If $-\mu_2 < \lambda_3$, then \begin{equation*} (BxIB^{-1})_{\leq \lambda} = \left\{ \begin{pmatrix} a& b& c\\d&e&f\\g&h&i \end{pmatrix} \in BxIB^{-1} \biggm| e \in P^{-\lambda_1}\ \text{and}\ \ov{b}d+\ov{h}f \in P^{\lambda_3}\right\}. 
\end{equation*} \end{enumerate} Conjugating by $B^{-1}$ to change back to the original coordinates, we obtain the desired expressions for $(xI)_{\leq \lambda}$. \end{proof} To discuss the remaining cases, we make two additional reduction steps. In order to use the results from Section \ref{S:charpoly}, our analysis will break into certain natural subcases. During the course of several of these subcases, the following lemma will be useful. \begin{lemma}\label{T:split} Any short exact sequence of isocrystals over $F$ splits. \end{lemma} For a proof of this lemma, see \cite{Dem}, and note that changing fields from $B(\ov{k})$ to $\ov{k}((\pi))$ does not alter the arguments. In applying Lemma \ref{T:split}, the following result will also be necessary. \begin{lemma}\label{T:GL_2} Let $G = GL_2(F)$ and $(F^2, g\sigma)$ be an isocrystal, where $g:=\begin{pmatrix} a&b\\c&d\end{pmatrix} \in G$. Let $\lambda =(\lambda_1,\lambda_2) \in \mathcal{N}(G)_x$ for $x \in \widetilde{W}$, the corresponding affine Weyl group, in which case $\lambda_1 + \lambda_2=\val(\det x)$. Then $e_1$ is a cyclic vector for $(F^2,g\sigma)$ if and only if $c \neq 0$, and in this situation we have \begin{equation*}(xI)_{\leq \lambda} = \left\{ \begin{pmatrix} a&b\\c&d\end{pmatrix} \in xI \biggm| \ov{a} + \frac{\ov{c}}{c}d \in P^{-\lambda_1} \right\}\end{equation*} \end{lemma} \begin{proof}Let $g\sigma = \Phi$. To verify the condition under which $e_1$ is a cyclic vector, compute $e_1 \wedge \Phi(e_1)$. If $e_1$ is a cyclic vector, then the characteristic polynomial for $(F^2,\Phi)$ is of the form $f = \sigma^2 + \alpha_1\sigma + \frac{\ov{c}}{c}\det g$. Using linear algebra as in Section \ref{S:charpoly}, we compute that $\alpha_1 = -(\ov{a} + \frac{\ov{c}}{c}d)$, as required. \end{proof} Our final reduction step in case A shows that, without loss of generality, we can assume that certain matrix entries are zero. 
\begin{lemma}\label{T:13zeros} Let $x = \pi^{\mu}s_1s_2s_1$, where $\mu = (\mu_1,\mu_2,\mu_3)$ satisfies $\mu_1 < \mu_2 < \mu_3$. Define $$ J_1:= \begin{pmatrix} 1& 0& 0\\P^{\mu_2-\mu_1}&1&0\\P^{\mu_3-\mu_1}&P^{\mu_3-\mu_2}&1 \end{pmatrix}\quad \text{and}\quad K_1:= \left( xI \cap \left\{ \begin{pmatrix} a& b& c\\d&e&0\\g&0&0 \end{pmatrix} \right\} \right) .$$ Then the map \begin{align*}\kappa: J_1 \times K_1 &\rightarrow xI \\ (j,k) & \mapsto j^{-1}k\sigma(j) \end{align*} is an isomorphism of schemes. \end{lemma} \begin{proof} Consider $J_1, K_1,$ and $xI$ as schemes over $k = \mathbb{F}_q$. We illustrate the argument by providing a bijection on sets of $\ov{k}$-points. Let \begin{equation*} A := \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix} \in xI(\ov{k}) \quad \text{and} \quad j := \begin{pmatrix}1 & 0 & 0 \\d' & 1 & 0\\g' & h' & 1 \end{pmatrix} \in J_1(\ov{k}). \end{equation*} The reader may verify that, if the entries of $j$ are defined as follows: \begin{equation*} d':= -\frac{f}{c}, \quad h' := \frac{bi -ch}{ce - bf}, \quad g':= -\left(\frac{i+fh'}{c}\right), \end{equation*} then the product $jA\sigma(j)^{-1}$ lies in $K_1(\ov{k})$. Observe that $\val(c) = \mu_1$ and $\val(ce-bf) = \mu_1+\mu_2$, so that the entries of $j$ are completely determined by the entries of $A$. We therefore see that, given any $A \in xI(\ov{k})$, there exists a unique $j \in J_1(\ov{k})$ such that $jA\sigma(j)^{-1} \in K_1(\ov{k})$. Consequently, we obtain mutually inverse injective morphisms $\iota: (xI)(\ov{k}) \rightarrow (J_1 \times K_1)(\ov{k})$ given by $A\mapsto (j,k)$, where $k:=jA\sigma(j)^{-1}$, and $\kappa: (J_1 \times K_1)(\ov{k}) \rightarrow xI(\ov{k})$ by $(j,k) \mapsto j^{-1}k\sigma(j)$. The argument on $S$-points, where $S$ is an arbitrary ring, proceeds in like manner. 
\end{proof} This isomorphism of schemes allows us to instead compute the codimension of \linebreak $J_1 \times \left( K_1 \cap (xI)_{\leq \lambda}\right)$ inside $J_1 \times K_1$, which will significantly simplify our calculations. \begin{cor}\label{T:K_1cor} Let $x = \pi^{\mu}s_1s_2s_1$, where $\mu = (\mu_1,\mu_2,\mu_3)$ satisfies $\mu_1 < \mu_2 < \mu_3$. Denote $(K_1)_{\leq \lambda}:= K_1 \cap (xI)_{\leq \lambda}$. Then \begin{equation*}\label{E:K_1cor} \codim((K_1)_{\leq \lambda} \subseteq K_1) = \codim((xI)_{\leq \lambda}\subseteq xI).\end{equation*} \end{cor} \begin{remark}\label{T:zerormk} The reader should note that the argument in the proof of Lemma \ref{T:13zeros} generalizes to $x=\pi^{\mu}w$ in $SL_n(F)$, where $w$ is the longest element of the Weyl group. Furthermore, similar arguments can be employed for any value of $w\in W$ to reduce to the case where certain matrix entries are zero. We could have used analogous lemmas to make slight simplifications to the proofs of the previous two propositions. However, such reductions do not reduce the complexity of the arguments as significantly as they do in the remaining cases, and so the author finds it illuminating to carry out the computations for $w=s_1s_2$ and $w=s_2s_1$ without the aid of any additional lemmas. \end{remark} The relative simplicity of the arguments in Propositions \ref{T:123vals} and \ref{T:132vals} depends upon the existence of a convenient basis that guarantees that $e_1$ is a cyclic vector. Since, by contrast, no such obvious choice of basis exists in the remaining cases, we employ this further reduction step in each case to drastically simplify the remaining arguments. For the purpose of computing codimensions, we thus focus on providing a concrete description of $(K_1)_{\leq \lambda}$ inside $K_1$. 
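The entry computation underlying Lemma \ref{T:13zeros} is a purely algebraic identity in the matrix entries, and it can be checked symbolically. The sketch below is a verification aid using SymPy (not part of the formal argument): it treats the entries of $A$ as commuting indeterminates, and since right multiplication by the lower-triangular unipotent matrix $\sigma(j)^{-1}$ does not disturb the $(2,3)$, $(3,2)$, and $(3,3)$ entries once all three vanish in $jA$, it suffices to check $jA$ itself.

```python
# Symbolic check of the normal form in Lemma "13zeros": with
#   d' = -f/c,  h' = (b*i - c*h)/(c*e - b*f),  g' = -(i + f*h')/c,
# the product j*A has vanishing (2,3), (3,2), and (3,3) entries.
import sympy as sp

a, b, c, d, e, f, g, h, i = sp.symbols('a b c d e f g h i')
A = sp.Matrix([[a, b, c], [d, e, f], [g, h, i]])

dp = -f / c                          # d'
hp = (b*i - c*h) / (c*e - b*f)       # h'
gp = -(i + f*hp) / c                 # g'
j = sp.Matrix([[1, 0, 0], [dp, 1, 0], [gp, hp, 1]])

# Simplify each entry of j*A and test the three positions that must vanish
# for j*A*sigma(j)^{-1} to lie in K_1.
P = (j * A).applyfunc(sp.simplify)
assert P[1, 2] == 0   # (2,3) entry
assert P[2, 1] == 0   # (3,2) entry
assert P[2, 2] == 0   # (3,3) entry
print("normal-form entries vanish as claimed")
```

The nonvanishing of the denominators $c$ and $ce-bf$ is exactly the valuation statement $\val(c)=\mu_1$, $\val(ce-bf)=\mu_1+\mu_2$ recorded in the proof.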
It is interesting to note, however, that the descriptions for $(K_1)_{\leq \lambda}$ actually agree with those for $(xI)_{\leq \lambda}$ in that these schemes are defined by precisely the same polynomial equations in the matrix entries. This fact is somewhat tedious to verify and thus will not be proved. \begin{prop}[\textbf{Case IIIA}]\label{T:13vals}Let $x=\pi^{\mu}s_1s_2s_1$ satisfy $\mathbf{a}_x \subset C^0$ and $\mu_2 \geq 0$. Then $\mu_1 < \mu_2< \mu_3$. In addition, $\mu_1 < 0$ and $\mu_3 >0$. Now fix $\lambda=(\lambda_1,\lambda_2,\lambda_3) \in \mathcal{N}(G)_x$. We then have \begin{equation}\label{E:13gen} \lambda_1 \leq -\mu_1-1 \quad \text{and} \quad \lambda_3 \geq -\mu_3+1.\end{equation} In describing $K_1 \cap (xI)_{\leq \lambda}$, we have the following two subcases: \begin{enumerate} \item[($i$)] If $\mu_2 + 1 < \mu_3$ and $-\mu_3+1 \leq \lambda_3\leq -\mu_2$, or if $\mu_2+1=\mu_3$ and $d =0$, then we have \begin{equation}\label{E:13i} (K_1)_{\leq \lambda} = \left\{ \begin{pmatrix} a& b& c\\d&e&0\\g&0&0 \end{pmatrix} \in K_1 \biggm| a \in P^{-\lambda_1} \ \text{and}\ \begin{vmatrix} a & b\\ d& e\end{vmatrix} \in P^{\lambda_3}\right\}. \end{equation} \item[($ii$)] If $\mu_2+1<\mu_3$ and $-\mu_2 < \lambda_3$, or if $\mu_2+1=\mu_3$ and $d \neq 0$, then we have \begin{equation}\label{E:13ii} (K_1)_{\leq \lambda} = \left\{ \begin{pmatrix} a& b& c\\d&e&0\\g&0&0 \end{pmatrix} \in K_1 \biggm| a \in P^{-\lambda_1}\ \text{and}\ \ov{d}b+\ov{g}c \in P^{\lambda_3}\right\}. \end{equation} Note that subcase ($ii$) only arises when $\mu_2 >0$, since $\lambda_3$ is always non-positive. In the event that $\mu_2+1=\mu_3$ and $d\neq 0$, the value $\lambda_3=-\mu_3+1$ is fixed. In addition, if $\mu_2+1<\mu_3$ and $-\mu_2 < \lambda_3$, then $d \neq 0$. 
\end{enumerate} \end{prop} \begin{proof} Consider an element \begin{equation*}\label{E:13coset} A:= \begin{pmatrix} a& b& c\\d&e&0\\g&0&0 \end{pmatrix} \in \begin{pmatrix}P^{\mu_1+1} & P^{\mu_1+1} & P^{\mu_1}_{\times}\\ P^{\mu_2+1} & P^{\mu_2}_{\times} & 0 \\ P^{\mu_3}_{\times} &0 & 0 \end{pmatrix} = K_1. \end{equation*} Here we compute that $D = -\ov{d}ge$, and so if $d \neq 0$, then $D\neq 0$. We shall handle the two cases $d = 0$ and $d \neq 0$ separately. \vskip 10 pt First assume that $d\neq 0$. Then $e_1$ is a cyclic vector for $(F^3,A\sigma)$, and Equations \eqref{E:alpha} and \eqref{E:beta} reduce to \begin{equation*} \alpha = -\ov{\ov{a}} - \frac{\ov{\ov{d}}\ov{e}}{\ov{d}}, \quad \beta = \frac{\ov{\ov{d}}\ov{a}\ov{e}}{\ov{d}} - \ov{\ov{d}}\ov{b} - \ov{\ov{g}}\ov{c}. \end{equation*} Note that $\ov{\ov{d}}\ov{e}/\ov{d} \in \mathcal{O}$, since $\mu_2 \geq 0$. Thus, $\alpha \in P^{-\lambda_1} \iff \ov{\ov{a}} \in P^{-\lambda_1} \iff a \in P^{-\lambda_1}$. Computing the valuations of the summands of $\beta$, we see that \begin{equation*} \frac{\ov{\ov{d}}\ov{a}\ov{e}}{\ov{d}} \in P^{\mu_1+\mu_2+1}, \quad \ov{\ov{d}}\ov{b} \in P^{\mu_1+\mu_2+2}, \quad \ov{\ov{g}}\ov{c} \in P^{\mu_1+\mu_3}_{\times}. \end{equation*} Observe that since $\alpha \in P^{\mu_1+1}$ and $\beta \in P^{\mu_1+\mu_2+1}$, Lemma \ref{T:charpolypo} implies Equation \eqref{E:13gen}. Once more, our analysis of $\beta$ naturally determines two subcases. As before, we consider these subcases in turn, treating first the situation in which $\mu_2+1<\mu_3$.
\vskip 5 pt Subcase ($i$): $\mu_1+\mu_2+1 \leq \lambda_3\leq \mu_1 + \mu_3$, and $\mu_2+1<\mu_3$ \vskip 5 pt In this subcase, we observe that $\ds \beta \in P^{\lambda_3} \iff \frac{\ov{\ov{d}}\ov{a}\ov{e}}{\ov{d}} - \ov{\ov{d}}\ov{b} \in P^{\lambda_3} \iff \ov{\ov{d}}\ov{a}\ov{e} - \ov{\ov{d}}\ov{d}\ov{b} \in P^{\lambda_3 + \val(d)} \iff \ov{\begin{vmatrix} a&b\\d&e \end{vmatrix}} \in P^{\lambda_3} \iff \begin{vmatrix} a&b\\d&e \end{vmatrix} \in P^{\lambda_3}$, as desired. \vskip 5 pt Subcase ($ii$): $\mu_1+\mu_3 < \lambda_3$, and $\mu_2+1<\mu_3$ \vskip 5 pt We again use our estimate on $\val(a)$ to compute that $\ov{\ov{d}}\ov{ae}/\ov{d} \in P^{\mu_2 - \lambda_1} \subseteq P^{\lambda_3}$ by Lemma \ref{T:irrelterms}. Thus, $\beta \in P^{\lambda_3} \iff \ov{\ov{d}}\ov{b} + \ov{\ov{g}}\ov{c} \in P^{\lambda_3} \iff \ov{d}b + \ov{g}c \in P^{\lambda_3}$. By Lemma \ref{T:charpolypo}, the proposition is true in the case where $\mu_2+1<\mu_3$ and $d \neq 0$. \vskip 5 pt Now assume that $\mu_2+1=\mu_3$. Computing the valuations of the summands of $\beta$ in this case, we see that \begin{equation*} \frac{\ov{\ov{d}}\ov{a}\ov{e}}{\ov{d}} \in P^{\mu_1+\mu_2+1}, \quad \ov{\ov{d}}\ov{b} \in P^{\mu_1+\mu_2+2}, \quad \ov{\ov{g}}\ov{c} \in P^{\mu_1+\mu_2+1}_{\times}. \end{equation*} Therefore, $\beta \in P^{\mu_1+\mu_2+1} = P^{-\mu_2}$ in this special case. By Lemma \ref{T:charpolypo}, we thus see that $\nu_x = -(\mu_1+1,\mu_2,\mu_2)$. Now, if $\lambda_3 > \mu_1+\mu_2+1$, then we must also have $\lambda_1 < -\mu_1-1$. However, we see that $\mu_2-\lambda_1 > \mu_1+\mu_2+1$, and so $\ov{\ov{d}}\ov{ae}/\ov{d} \in P^{\mu_2-\lambda_1}\subsetneq P^{\mu_1+\mu_2+1}$, since $a \in P^{-\lambda_1}$. But $\ov{\ov{g}}\ov{c} \in P^{\mu_1+\mu_2+1}_{\times} \iff \ov{g}c \in P^{\mu_1+\mu_2+1}_{\times}$, and so we must have $\beta \in P^{\mu_1+\mu_2+1}_{\times}$ and $\lambda_3=-\mu_3+1$ is fixed. 
Since we also have that $\ov{\ov{d}}\ov{b} \in P^{\mu_1+\mu_2+2} \iff \ov{d}b \in P^{\mu_1+\mu_2+2} \subsetneq P^{\mu_1+\mu_2+1}$, Lemma \ref{T:charpolypo} yields Equation \eqref{E:13ii} in the case in which $\mu_2+1=\mu_3$ and $d \neq 0$. \vskip 5 pt Now assume that $d=0$. In this case, we observe that $\left\{ \begin{pmatrix} y \\0\\z \end{pmatrix} \biggm| y,z \in F \right\} \cong F^2$ is a subspace fixed by $A\sigma$. That is, $\left( F^2, \begin{pmatrix}a&c\\g&0\end{pmatrix}\sigma \right)$ is a sub-isocrystal of $(F^3, A\sigma)$, giving rise to the following short exact sequence: \begin{equation*} 0 \longrightarrow \left( F^2, \begin{pmatrix}a&c\\g&0\end{pmatrix}\sigma \right) \longrightarrow (F^3, A\sigma) \longrightarrow (F, e\sigma) \rightarrow 0\ . \end{equation*} By Lemma \ref{T:split}, this short exact sequence splits, so that $(F^3, A\sigma)$ decomposes as the direct sum of the two sub-isocrystals. The Newton slope sequence for the direct sum is obtained by ordering the Newton slopes of the two sub-isocrystals \cite{Kat}, so it suffices to understand the Newton strata for these two isocrystals. Note that $\val(e)=\mu_2$ so that the only Newton slope sequence occurring for $(F,e\sigma)$ is $-\mu_2$. Therefore, the Newton polygon for $(F^3, A\sigma)$ satisfies either $\lambda_3 = -\mu_2$ or $\lambda_2 = -\mu_2$, depending on whether $-\lambda_1+\mu_2\geq -\mu_2$. In either case, note that $\lambda_3 \leq -\mu_2$, so that if $\mu_2+1<\mu_3$ we are necessarily in subcase ($i$). Applying Lemma \ref{T:GL_2} to $(F^2, \begin{pmatrix} a&c\\g&0\end{pmatrix}\sigma)$, we see that $e_1$ is a cyclic vector, since $\val(g) = \mu_3$. We now define some notation to distinguish our restriction of $SL_3(F)$ to the copy of $GL_2(F)$ corresponding to this sub-isocrystal. Write $x' := \pi^{(\mu_1,\mu_3)}w'$, where $w'=\begin{pmatrix} 0&1\\1&0\end{pmatrix}$ is the restriction of $w$ to the affine Weyl group for $GL_2(F)$. Let $\eta \in \mathcal{N}(G)_{x'}$.
In this notation, Lemma \ref{T:GL_2} says that \begin{equation*} (x'I)_{\leq \eta} = \left\{ y \in x'I \biggm| \ov{a} \in P^{-\eta_1} \right\}.\end{equation*} Altogether, Lemmas \ref{T:split} and \ref{T:GL_2}, together with the fact that $\ov{a} \in P^{-\lambda_1} \iff a \in P^{-\lambda_1}$, say that if $d=0$, then \begin{equation}\label{E:13d=0} (K_1)_{\leq \lambda} = \left\{ \begin{pmatrix} a& b& c\\d&e&0\\g&0&0 \end{pmatrix} \in K_1 \biggm| a \in P^{-\lambda_1}\right\}. \end{equation} We thus see that the first slope $\lambda_1$ determines only $\val(a)$, and in this case, $a \in P^{\mu_1+1}$. In addition, either $\lambda_3 = -\mu_2 \geq -\mu_3 +1$, or $\lambda_3=-\lambda_1-\lambda_2=-\lambda_1+\mu_2 \geq \mu_1+\mu_2+1 = -\mu_3+1$. Therefore, $\lambda_3 \geq -\mu_3 +1$, whether $\lambda_2 = -\mu_2$ or $\lambda_3 = -\mu_2$. Lemma \ref{T:charpolypo} then indicates that Equation \eqref{E:13gen} still applies. It remains to show that Equations \eqref{E:13i} and \eqref{E:13d=0} are equivalent under the hypothesis $d=0$. Let us consider the case in which $\lambda_3 = -\mu_2$. Then we see that $\lambda_2 \geq \lambda_3 \iff -\lambda_1-\lambda_3 \geq \lambda_3 \iff -\lambda_1 + \mu_2 \geq \lambda_3$. Therefore, $ae \in P^{-\lambda_1 + \mu_2} \subseteq P^{\lambda_3}$. In the situation in which $\lambda_2 = -\mu_2$, we have $-\lambda_1-\lambda_2 = \lambda_3 \iff -\lambda_1+\mu_2 = \lambda_3$, so that we again automatically have $ae \in P^{\lambda_3}$. Therefore, Equations \eqref{E:13i} and \eqref{E:13d=0} are equivalent in the case where $d=0$, as desired. \end{proof} For the remaining three cases, we employ arguments similar to those used in the proof of Proposition \ref{T:13vals}. We begin by stating the analog of Lemma \ref{T:13zeros} for case IVA. The proof proceeds in a manner similar to that of Lemma \ref{T:13zeros} and is thus omitted.
\begin{lemma}\label{T:12zeros} Let $x = \pi^{\mu}s_1$, where $\mu = (\mu_1,\mu_2,\mu_3)$ satisfies $\mu_1 < \mu_2 \leq \mu_3$. Define $$ J_2:= \begin{pmatrix} 1& 0& 0\\P^{\mu_2-\mu_1}&1&0\\P^{\mu_3-\mu_1+1}&0&1 \end{pmatrix}\quad \text{and}\quad K_2:= \begin{pmatrix} P^{\mu_1+1}&P^{\mu_1}_{\times} & P^{\mu_1} \\ P^{\mu_2}_{\times}&0 & P^{\mu_2}\\ P^{\mu_3+1}&0 & P^{\mu_3}_{\times} \end{pmatrix} .$$ Then the map $\kappa: J_2 \times K_2 \rightarrow xI$ given by $(j,k) \mapsto j^{-1}k\sigma(j)$ is an isomorphism of schemes. \end{lemma} \begin{cor}\label{T:K_2cor} Let $x = \pi^{\mu}s_1$, where $\mu = (\mu_1,\mu_2,\mu_3)$ satisfies $\mu_1 < \mu_2 \leq \mu_3$. Denote $(K_2)_{\leq \lambda}:= K_2 \cap (xI)_{\leq \lambda}$. Then \begin{equation*}\label{E:K_2cor} \codim((K_2)_{\leq \lambda} \subseteq K_2) = \codim((xI)_{\leq \lambda}\subseteq xI).\end{equation*} \end{cor} \begin{prop}[\textbf{Case IVA}]\label{T:12vals}Let $x=\pi^{\mu}s_1$ satisfy $\mathbf{a}_x \subset C^0$ and $\mu_2 \geq 0$. Then \linebreak $\mu_1 < \mu_2\leq \mu_3$. In addition, $\mu_1 < 0$ and $\mu_3 >0$. Now fix $\lambda=(\lambda_1,\lambda_2,\lambda_3) \in \mathcal{N}(G)_x$. We then have \begin{equation}\label{E:12gen} \lambda_1 \leq -\mu_1-\frac{1}{2} \quad \text{and} \quad \lambda_3 = -\mu_3.\end{equation} Further, the only possibility in which $\lambda_1 = -\mu_1-\frac{1}{2}$ is for $\mu=(-1,0,1)$. Otherwise, $\lambda_1 \leq -\mu_1 - 1$. Finally, we have the following description of $(K_2)_{\leq \lambda} := K_2 \cap (xI)_{\leq \lambda}$: \begin{equation}\label{E:12} (K_2)_{\leq \lambda} = \left\{ \begin{pmatrix} a& b& c\\d&0&f\\g&0&i \end{pmatrix} \in K_2 \biggm| a \in P^{-\lambda_1}\right\}. 
\end{equation} \end{prop} \begin{proof} Consider an element \begin{equation*}\label{E:12vals} A = \begin{pmatrix}a&b&c\\d&0&f\\g&0&i\end{pmatrix} \in \begin{pmatrix} P^{\mu_1+1}&P^{\mu_1}_{\times} & P^{\mu_1} \\ P^{\mu_2}_{\times}&0 & P^{\mu_2}\\ P^{\mu_3+1}&0 & P^{\mu_3}_{\times} \end{pmatrix}=K_2.\end{equation*} Here we see that $D = \ov{g}\begin{vmatrix}d&f\\g&i \end{vmatrix}$, so that $D\neq 0$ if $g \neq 0$. Assume first that $g \neq 0$ so that $e_1$ is a cyclic vector for $(F^3,A\sigma)$. Using that $e=h=0$, Equations \eqref{E:alpha} and \eqref{E:beta} reduce to \begin{equation*} \alpha = -\ov{\ov{a}} - \frac{\ov{\ov{g}}\ov{i}}{\ov{g}}, \quad \beta = \frac{\ov{\ov{g}}\ov{ai}}{\ov{g}} - \ov{\ov{d}}\ov{b} - \ov{\ov{g}}\ov{c}. \end{equation*} First, observe that $\ov{\ov{g}}\ov{i}/ \ov{g} \in \mathcal{O},$ since $\mu_3 \geq 0$. In the special case $\mu = (-1,0,1)$, we have $\alpha \in \mathcal{O}$, so that $\ov{\ov{a}} \in P^{-\lambda_1}$ automatically. For all other $\mu$ satisfying the hypotheses of this proposition, we see that $\alpha \in P^{-\lambda_1} \iff \ov{\ov{a}} \in P^{-\lambda_1} \iff a \in P^{-\lambda_1}$. In addition, if $\mu \neq (-1,0,1)$ note that $\lambda_1 \leq -\mu_1-1$, since $a \in P^{\mu_1+1}$. Now, comparing valuations of the summands of $\beta$, we have \begin{equation*} \frac{\ov{\ov{g}}\ov{ai}}{\ov{g}} \in P^{\mu_1+\mu_3+1}, \quad \ov{\ov{d}}\ov{b} \in P^{\mu_1+\mu_2}_{\times}, \quad \ov{\ov{g}}\ov{c} \in P^{\mu_1+\mu_3+1}. \end{equation*} Since $\mu_2 \leq \mu_3$, we observe that $\val(\beta) = \mu_1+\mu_2 = -\mu_3$. In the special case $\mu=(-1,0,1)$, we see that the only possibility is that $\lambda = (\frac{1}{2},\frac{1}{2},-1).$ Equation \eqref{E:12gen} consequently holds. We have now shown that $\lambda$ determines only $\val(a)$, and Lemma \ref{T:charpolypo} implies the result if $g\neq 0$. 
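To spell out the arithmetic behind the special case $\mu = (-1,0,1)$ treated above, the constraints combine as follows (a routine check):

```latex
\begin{align*}
\lambda_3 &= -\mu_3 = -1, & \lambda_1 + \lambda_2 &= -\lambda_3 = 1, \\
\lambda_1 &\leq -\mu_1 - \tfrac{1}{2} = \tfrac{1}{2}, &
\lambda_1 &\geq \tfrac{\lambda_1 + \lambda_2}{2} = \tfrac{1}{2},
\end{align*}
```

where the last inequality uses $\lambda_1 \geq \lambda_2$. Together these force $\lambda_1 = \lambda_2 = \tfrac{1}{2}$, that is, $\lambda = (\tfrac{1}{2}, \tfrac{1}{2}, -1)$, as claimed.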
\vskip 5 pt If $g=0$, we have the following split exact sequence: \begin{equation*} 0 \longrightarrow \left( F^2, \begin{pmatrix}a&b\\d&0\end{pmatrix}\sigma \right) \longrightarrow (F^3, A\sigma) \longrightarrow (F, i\sigma) \rightarrow 0\ .\end{equation*} Observe that $\val(i) = \mu_3$, so we need only understand the conditions on the two-dimensional sub-isocrystal. Since $d \neq 0$, we again use Lemma \ref{T:GL_2} to see that $\lambda_1$ determines only $\val(a)$, as desired. Finally, since $a \in P^{\mu_1+1}$, Lemma \ref{T:charpolypo} indicates that the inequalities in \eqref{E:12gen} still hold. \end{proof} \begin{lemma}\label{T:23zeros} Let $x = \pi^{\mu}s_2$, where $\mu = (\mu_1,\mu_2,\mu_3)$ satisfies $\mu_1 \leq \mu_2 < \mu_3$. Define $$ J_3:= \begin{pmatrix} 1& 0& 0\\ 0&1&0\\P^{\mu_3-\mu_1+1}&P^{\mu_3-\mu_2}&1 \end{pmatrix}\quad \text{and}\quad K_3:= \begin{pmatrix} P^{\mu_1}_{\times}&P^{\mu_1} & P^{\mu_1} \\ P^{\mu_2+1}&P^{\mu_2+1} & P^{\mu_2}_{\times}\\ 0&P^{\mu_3}_{\times} & 0 \end{pmatrix} .$$ Then the map $\kappa: J_3 \times K_3 \rightarrow xI$ given by $(j,k) \mapsto j^{-1}k\sigma(j)$ is an isomorphism of schemes. \end{lemma} \begin{cor}\label{T:K_3cor} Let $x = \pi^{\mu}s_2$, where $\mu = (\mu_1,\mu_2,\mu_3)$ satisfies $\mu_1 \leq \mu_2 < \mu_3$. Denote $(K_3)_{\leq \lambda}:= K_3 \cap (xI)_{\leq \lambda}$. Then \begin{equation*}\label{E:K_3cor} \codim((K_3)_{\leq \lambda} \subseteq K_3) = \codim((xI)_{\leq \lambda}\subseteq xI).\end{equation*} \end{cor} \begin{prop}[\textbf{Case VA}]\label{T:23vals}Let $x=\pi^{\mu}s_2$ satisfy $\mathbf{a}_x \subset C^0$ and $\mu_2 \geq 0$. Then $\mu_1 < \mu_2< \mu_3$. In addition, $\mu_1 < 0$ and $\mu_3 >0$. Now fix $\lambda=(\lambda_1,\lambda_2,\lambda_3) \in \mathcal{N}(G)_x$. We then have \begin{equation}\label{E:23gen} \lambda_1 = -\mu_1 \quad \text{and} \quad \lambda_3 \geq -\mu_3 + \frac{1}{2}.\end{equation} Further, the only possibility in which $\lambda_3 = -\mu_3 +\frac{1}{2}$ is for $\mu_2+1=\mu_3$. 
Otherwise, $\lambda_3 \geq -\mu_3 +1$. In describing $(K_3)_{\leq \lambda} := K_3 \cap (xI)_{\leq \lambda}$, we have the following two subcases: \begin{enumerate} \item[($i$)] If $\mu_2+1 = \mu_3$, then $\lambda = (-\mu_1, \frac{\mu_1}{2}, \frac{\mu_1}{2})$ and $(K_3)_{\leq \lambda} = K_3.$ \item[($ii$)] If $\mu_2+1 < \mu_3$, then \begin{equation}\label{E:23} (K_3)_{\leq \lambda} = \left\{ \begin{pmatrix} a& b& c\\d&e&f\\0&h&0 \end{pmatrix} \in K_3 \biggm| \begin{vmatrix} a & b\\ d& e\end{vmatrix} \in P^{\lambda_3}\right\}. \end{equation} \end{enumerate} \end{prop} \begin{proof} Let $A \in K_3$. Then we may write \begin{equation*}\label{E:23vals} A = \begin{pmatrix}a&b&c\\d&e&f\\0&h&0\end{pmatrix} \in \begin{pmatrix} P^{\mu_1}_{\times}&P^{\mu_1} & P^{\mu_1} \\ P^{\mu_2+1}&P^{\mu_2+1} & P^{\mu_2}_{\times}\\ 0&P^{\mu_3}_{\times} & 0 \end{pmatrix},\end{equation*} and in this context, $D = \ov{d}dh$. Therefore, $e_1$ is a cyclic vector if $d \neq 0$. First assume that $d\neq 0$ so that Equations \eqref{E:alpha} and \eqref{E:beta} hold. Using that $g = i = 0$, these equations reduce to \begin{equation*} \alpha = -\ov{\ov{a}} - \frac{\ov{\ov{d}}\ov{e}}{\ov{d}}, \quad \beta = \frac{\ov{\ov{d}}\ov{ae}}{\ov{d}} - \ov{\ov{d}}\ov{b} -\frac{\ov{D}}{D}fh. \end{equation*} First, observe that $\ov{\ov{d}}\ov{e}/ \ov{d} \in \mathcal{O},$ since $\mu_2 \geq 0$. Thus, $\alpha \in P^{-\lambda_1} \iff \ov{\ov{a}} \in P^{-\lambda_1} \iff a \in P^{-\lambda_1}$. However, $a \in P^{\mu_1}_{\times}$, so the only possibility is that $\lambda_1=-\mu_1.$ Similarly, we can compute that \begin{equation*} \frac{\ov{\ov{d}}\ov{ae}}{\ov{d}} \in P^{\mu_1+\mu_2+1}, \quad -\ov{\ov{d}}\ov{b}\in P^{\mu_1+\mu_2+1}, \quad -\frac{\ov{D}}{D}fh \in P^{\mu_2+\mu_3}_{\times}\subset \mathcal{O}. \end{equation*} We now consider the two subcases defined in the statement of the proposition. 
\vskip 10 pt Subcase ($i$): $\mu_2+1 = \mu_3$ \vskip 10 pt If $\mu_2+1 = \mu_3$, note that $\mu_1+\mu_2 + 1 > \frac{\mu_1}{2}$. Consequently, $\lambda_3 = \frac{\mu_1}{2}$ is fixed. But because we also know that $\lambda_1 = -\mu_1$, we thus see that $\lambda = (-\mu_1, \frac{\mu_1}{2},\frac{\mu_1}{2})$ is the only possible value for $\lambda$ in this subcase. That is, $\mathcal{N}(G)_x$ consists of a single element, and so $(K_3)_{\leq \lambda} = K_3$. \vskip 10 pt Subcase ($ii$): $\mu_2+1 < \mu_3$ \vskip 10 pt We first observe that $\beta \in P^{\mu_1+\mu_2+1}$ so that if $\mu_2+1 < \mu_3$, then $\lambda_3 \geq -\mu_3+1.$ In particular, we have verified Equation \eqref{E:23gen}. Comparing the valuations of the summands of $\beta$, it appears that we should consider two further subcases: \begin{enumerate} \item[($a$)] $\mu_1+\mu_2+1 \leq \lambda_3\leq \mu_1+\mu_3$, \item[($b$)] $\mu_1+\mu_3 < \lambda_3$. \end{enumerate} First, we argue that subcase ($b$) does not actually arise. By Lemma \ref{T:irrelterms}, we know that $\mu_2-\lambda_1\geq \lambda_3$ for this range of $\lambda_3$. In addition, recall that by our analysis of $\val(\alpha)$, we always have that $-\lambda_1 = \mu_1$. Thus, $\mu_2-\lambda_1 = \mu_1 + \mu_2 < \mu_1 + \mu_3 < \lambda_3$, which is a contradiction. If $\mu_1+\mu_2+1 \leq \lambda_3\leq \mu_1+\mu_3$, we see that $\ds \beta \in P^{\lambda_3} \iff \frac{\ov{\ov{d}}\ov{a}\ov{e}}{\ov{d}} - \ov{\ov{d}}\ov{b} \in P^{\lambda_3} \iff \ov{\ov{d}}\ov{a}\ov{e} - \ov{\ov{d}}\ov{d}\ov{b} \in P^{\lambda_3 + \val(d)} \iff \ov{\begin{vmatrix} a&b\\d&e \end{vmatrix}} \in P^{\lambda_3} \iff \begin{vmatrix} a&b\\d&e \end{vmatrix} \in P^{\lambda_3}$. Equation \eqref{E:23} then follows by Lemma \ref{T:charpolypo}. 
\vskip 5 pt If, on the other hand, $d=0$, we have the following split exact sequence: \begin{equation*} 0 \longrightarrow (F, a\sigma) \longrightarrow (F^3, A\sigma) \longrightarrow \left( F^2, \begin{pmatrix}e&f\\h&0\end{pmatrix}\sigma \right) \rightarrow 0\ .\end{equation*} Observe that $\val(a) = \mu_1$, so we need only understand the conditions on the two-dimensional sub-isocrystal. Since $h \neq 0$, we again use Lemma \ref{T:GL_2} to see that $\lambda_3$ determines only $\val(e)$. In particular, the condition $e \in P^{\lambda_3-\mu_1}$ is equivalent to $ae \in P^{\lambda_3}$, since $\val(a) = \mu_1$. Therefore, when $d=0$, Equation \eqref{E:23} holds. Finally, since $ae \in P^{-\mu_3+1}$, Lemma \ref{T:charpolypo} indicates that the inequalities in \eqref{E:23gen} are still true. \end{proof} The same methods used in the previous propositions in this subsection can be employed in the case where $x$ is a pure translation to show that $\val(\alpha) = \mu_1$ and $\val(\beta) = -\mu_3$; \textit{i.e.}, the set $\mathcal{N}(G)_x$ consists of a single Newton slope sequence. Alternatively, we can cite the following more general result of \cite{GHKRadlvs}. \begin{theorem}\label{T:translation} Let $\mu \in X_*(T)$. Then any element of $I\pi^{\mu}I$ is $\sigma$-conjugate under $I$ to $\pi^{\mu}$. \end{theorem} Using either argument, we obtain the following proposition. \begin{prop}[\textbf{Case VIA}]\label{T:1vals}If $x=\pi^{\mu}$, then $ \mathcal{N}(G)_x = \{ -\mu\}.$ \end{prop} The hypotheses on $\mu$ in the statements of the propositions in this section omit $\mu=0$. Although only the base alcove $\mathbf{a}_1$ satisfies $\mu=0$ and $\mathbf{a}_1 \subset C^0$, we discuss the case $\mu=0$ for the sake of completeness. Recall the automorphism $\varphi$ from Lemma \ref{T:reduction}, which represents rotation by 120 degrees about the center of the base alcove and induces the identity on $\mathcal{N}(G)$.
If $x=\pi^{(-1,0,1)}s_1s_2$, for example, then $\varphi(x) = \pi^0s_1s_2$ so that our analysis in Proposition \ref{T:123vals} yields that $\mathcal{N}(G)_x = \{0\}$. Similarly, if $\mu=0$, then one can verify that $\mathcal{N}(G)_x = \{(0,0,0)\}$ for any $w \in W$, completing our analysis of case A. \subsection{Conditions on valuations determining the Newton strata: Case B}\label{S:valpolysB} We now argue that computing the explicit form of the variety $(xI)_{\leq \lambda}$ in case B proceeds in exactly the same manner as in case A. The idea is to use a change of basis as in the proof of Proposition \ref{T:132vals} to reduce the computations required for case B to ones essentially the same as the calculations performed in Section \ref{S:valpolysA}. More specifically, let $x = \pi^{\mu}w$ satisfy $\mathbf{a}_x \subset s_1(C^0)$, where $\mu_1 \geq 0$ and $\mu \neq (\mu_1,\mu_2,\mu_1)$. Consider \begin{equation}\label{E:x'I'}s_1^{-1}\pi^{\mu}wIs_1 = \pi^{{\mu}_{\text{dom}^*}}(s_1^{-1}ws_1)(s_1^{-1}Is_1)=:x'I'.\end{equation} By $\mu_{\text{dom}^*}$ we mean the unique antidominant element in the Weyl orbit of $\mu$. Here, $I'=s_1^{-1}Is_1$ is the non-standard Iwahori subgroup \begin{equation}\label{E:I'} I' = \begin{pmatrix} \mathcal{O}^{\times} &P &\mathcal{O}\\ \mathcal{O} & \mathcal{O}^{\times} &\mathcal{O} \\ P &P & \mathcal{O}^{\times} \end{pmatrix}. \end{equation} The varieties $(x'I)_{\leq \lambda'}$ for $\lambda' \in \mathcal{N}(G)_{x'}$ are precisely the ones described in the previous section. As we demonstrate below, by replacing $I$ by $I'$ in the propositions from Section \ref{S:valpolysA}, we obtain a complete description of $(x'I')_{\leq \lambda'}$ for all possible values of $x'$. To illustrate this phenomenon, we briefly present the calculation for $w=s_2s_1$ in case B, which mirrors case IA, since $s_1^{-1}(s_2s_1)s_1=s_1s_2$. The other five arguments proceed in a similar fashion. 
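As a quick check on Equation \eqref{E:I'}, recall that conjugation by the permutation matrix $s_1$ interchanges the first two rows and the first two columns (up to units, which do not affect the valuation conditions), so that

```latex
\begin{equation*}
I' = s_1^{-1}
\begin{pmatrix}
\mathcal{O}^{\times} & \mathcal{O} & \mathcal{O} \\
P & \mathcal{O}^{\times} & \mathcal{O} \\
P & P & \mathcal{O}^{\times}
\end{pmatrix}
s_1
=
\begin{pmatrix}
\mathcal{O}^{\times} & P & \mathcal{O} \\
\mathcal{O} & \mathcal{O}^{\times} & \mathcal{O} \\
P & P & \mathcal{O}^{\times}
\end{pmatrix}.
\end{equation*}
```

Likewise, $s_1^{-1}\pi^{(\mu_1,\mu_2,\mu_3)}s_1 = \pi^{(\mu_2,\mu_1,\mu_3)}$, which, when $\mu_2 < \mu_1 < \mu_3$, is the antidominant rearrangement $\pi^{\mu_{\text{dom}^*}}$ appearing in Equation \eqref{E:x'I'}.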
\begin{prop}[\textbf{Case IIB}]\label{T:132Bvals} Let $x=\pi^{\mu}s_2s_1 \in \widetilde{W}$ satisfy $\mathbf{a}_x \subset s_1(C^0)$, where $\mu_1 \geq 0$ and $\mu \neq (\mu_1,\mu_2,\mu_1)$. Then $\mu_2 < \mu_1 < \mu_3$. In addition, $\mu_2 < 0$ and $\mu_3 >0$. Consider $x' = \pi^{(\mu_2,\mu_1,\mu_3)}s_1s_2$, and fix $\lambda'=(\lambda_1,\lambda_2,\lambda_3) \in \mathcal{N}(G)_{x'}$. We then have \begin{equation*}\label{E:132Bgen}\lambda_1 \leq -\mu_2 - 1 \quad \text{and} \quad \lambda_3 \geq -\mu_3 + 1.\end{equation*} Recall the non-standard Iwahori subgroup $I'$ defined in Equation \eqref{E:I'}. To describe $(x'I')_{\leq \lambda'}$, we have the following two subcases: \begin{enumerate} \item[($i$)] If $-\mu_3+1\leq \lambda_3\leq -\mu_1$, then \begin{equation*} (x'I')_{\leq \lambda'} = \left\{ \begin{pmatrix} a& b& c\\d&e&f\\g&h&i \end{pmatrix} \in x'I' \biggm| a \in P^{-\lambda_1} \ \text{and}\ \begin{vmatrix} a & b\\ d& e\end{vmatrix} \in P^{\lambda_3}\right\}. \end{equation*} \item[($ii$)] If $-\mu_1 < \lambda_3$, then \begin{equation*} (x'I')_{\leq \lambda'} = \left\{ \begin{pmatrix} a& b& c\\d&e&f\\g&h&i \end{pmatrix} \in x'I' \biggm| a \in P^{-\lambda_1}\ \text{and}\ \ov{d}b+\ov{g}c \in P^{\lambda_3}\right\}. \end{equation*} Note that subcase ($ii$) only arises when $\mu_1 >0$, since $\lambda_3$ is always non-positive. \end{enumerate} \end{prop} \begin{proof} If $A \in x'I'$, then we have \begin{equation*}\label{E:132Bcoset} A:= \begin{pmatrix} a& b& c\\d&e&f\\g&h&i \end{pmatrix} \in \begin{pmatrix}P^{\mu_2+1} & P^{\mu_2+1} & P^{\mu_2}_{\times}\\ P^{\mu_1}_{\times} & P^{\mu_1+1} & P^{\mu_1} \\ P^{\mu_3} & P^{\mu_3}_{\times} & P^{\mu_3} \end{pmatrix} = x'I', \end{equation*} so that $e_1$ is a cyclic vector for $(F^3,A\sigma)$. Now, observe that $$\frac{1}{D}\left( (\ov{\ov{d}}\ov{e}+\ov{\ov{g}}\ov{f})\begin{vmatrix} d&e\\g&h\end{vmatrix} + (\ov{\ov{d}}\ov{h}+\ov{\ov{g}}\ov{i})\begin{vmatrix}d&f\\g&i\end{vmatrix}\right) \in \mathcal{O},$$ since $\mu_1 \geq 0$. 
Therefore, $\alpha \in P^{-\lambda_1} \iff \ov{\ov{a}} \in P^{-\lambda_1} \iff a \in P^{-\lambda_1}$. In addition, since in this case $a \in P^{\mu_2+1}$, we see that $\lambda_1\leq -\mu_2-1$. Now we consider the condition $\beta \in P^{\lambda_3}$. Compute that \begin{align*} B_1 &\in P^{\mu_1+\mu_2+2} & B_2 &\in P^{\mu_2+\mu_3+1}\\ - \ov{\ov{d}}\ov{b}&\in P^{\mu_1+\mu_2+1} & -\ov{\ov{g}}\ov{c} &\in P^{\mu_2+\mu_3}\\ \frac{\ov{D}}{D}\begin{vmatrix}e&f\\h&i\end{vmatrix} &\in P^{\mu_1+\mu_3} \subset \mathcal{O}. & \phantom{} \end{align*} In particular, $\beta \in P^{\mu_1+\mu_2+1}$ so that $\lambda_3 \geq -\mu_3 + 1.$ Comparing the valuations of the summands of $\beta$, we see that we again have two subcases: \begin{enumerate} \item[($i$)] $\mu_1+\mu_2+1 \leq \lambda_3\leq \mu_2+\mu_3$, \item[($ii$)] $\mu_2+\mu_3 < \lambda_3$. \end{enumerate} The analysis of these two subcases proceeds in the same manner as in the proof of Proposition \ref{T:123vals}, which the reader may easily verify. \end{proof} Observe that the varieties $(xI)_{\leq \lambda}$ and $(x'I')_{\leq \lambda'}$ differ only by a change of basis, so that the description for case IIB provided in Proposition \ref{T:132Bvals} suffices. Furthermore, the polynomials that describe $(x'I')_{\leq \lambda'}$ in case IIB are exactly the same as the ones appearing in case IA. In fact, after performing the change of basis, the only difference between the analysis of these two cases is that the ranges for $\lambda_3$ that determine subcases ($i$) and ($ii$) are slightly different. We generalize these observations for any $x$ satisfying the hypotheses of case B in the following remark. \begin{remark}\label{T:valpolysB} Let $x=\pi^{\mu}w$ satisfy $\mathbf{a}_x \subset s_1(C^0)$ with $\mu_1 \geq 0$ and $\mu \neq (\mu_1,\mu_2,\mu_1)$. 
Conjugation by $s_1^{-1}$ transforms $x$ to $x'$, $I$ to $I'$, and $\lambda \in \mathcal{N}(G)_x$ to $\lambda' \in \mathcal{N}(G)_{x'}$, where $x'$ and $I'$ are defined by Equation \eqref{E:x'I'}. Note, however, that the posets $\mathcal{N}(G)_x$ and $\mathcal{N}(G)_{x'}$ may be different and not even in bijection with one another. In fact, it is not clear from our methods what the map between these two posets is. \begin{question}\label{T:posetmap} Can we explicitly describe the map $\mathcal{N}(G)_x \rightarrow \mathcal{N}(G)_{x'}$ induced by the map $xI \rightarrow x'I'$ given by conjugation by an element of the finite Weyl group? \end{question} \noindent This question is related to Question \ref{T:nuxform} about finding a closed formula for $\nu_x$. Together with Conjecture \ref{T:domconj}, Question \ref{T:posetmap} would provide a means by which we could answer Question \ref{T:nonemptyQ}, which asks for a complete description of $\mathcal{N}(G)_x$, at least in the case of $SL_n(F)$. Independent of understanding the map between posets, however, we know that \begin{equation*} \codim((xI)_{\leq \lambda} \subseteq xI) = \codim((x'I')_{\leq \lambda'} \subseteq x'I'). \end{equation*} Therefore, for the purpose of proving Theorem \ref{T:main}, it suffices to compute $(x'I')_{\leq \lambda'}$. As Proposition \ref{T:132Bvals} illustrates, the only possible difference between $(x'I')_{\leq \lambda'}$ and the corresponding variety $(x'I)_{\leq \lambda}$ from Section \ref{S:valpolysA} is the range for $\lambda_3$ that defines the subcases. We therefore leave the remainder of the verification to the reader, both here and in the proof of Theorem \ref{T:main}. \end{remark} \end{section} \begin{section}{The Poset of Newton Slope Sequences $\mathcal{N}(G)_x$}\label{S:slopes} As a direct consequence of our calculations in Section \ref{S:valpolysA}, we can list the Newton slope sequences that arise for a particular $IxI$.
This calculation is necessary in the proof of Theorem \ref{T:main} in that it provides a concrete number for the length of the segment $[\lambda, \nu_x]$ in the poset $\mathcal{N}(G)_x$. The reader will recall from Section \ref{S:adlv} that one additional application of such a calculation is to determine for which $b \in G$ the affine Deligne-Lusztig variety $X_x(b) \neq \emptyset$. We begin by explicitly describing the poset $\mathcal{N}(G)_x$ for $x$ such that $\mathbf{a}_x$ lies in the antidominant Weyl chamber. We then explain the algorithm for computing $\mathcal{N}(G)_x$ for any $x \in \widetilde{W}$, although we omit the details. These results are recorded in Tables \ref{Ta:s_12} through \ref{Ta:s_2} at the end of the paper. An obvious consequence of understanding $\mathcal{N}(G)_x$ is that we obtain a list of the generic Newton slope sequences $\nu_x$. We conclude this section by discussing some patterns for these generic Newton slope sequences. \subsection{The set of Newton slopes in the antidominant Weyl chamber}\label{S:slopesC0} In Section \ref{S:valpolysA}, for each $x \in \widetilde{W}$ satisfying $\mathbf{a}_x \subset C^0$ and $\mu_2 \geq 0$, we explicitly computed either $(xI)_{\leq \lambda}$ or $K_i \cap (xI)_{\leq \lambda}$ for $i \in \{1, 2, 3\}$. We now use those calculations to describe the set of Newton slopes $\mathcal{N}(G)_x$ for any $x\in \widetilde{W}$ such that $\mathbf{a}_x \subset C^0$. \pagebreak \begin{prop} We study the cases $w = s_1s_2$ and $w=s_2s_1$ together. \begin{enumerate} \item Let $x=\pi^{\mu}s_1s_2$ and $\mathbf{a}_x \subset C^0$. Then $\mu_1< \mu_2 \leq \mu_3$, and \begin{equation}\label{E:N(G)123}\mathcal{N}(G)_x = \begin{cases} \{ \lambda \in \mathcal{N}(G) \mid \lambda \leq (-\mu_1-1, \frac{\mu_1+1}{2},\frac{\mu_1+1}{2}) \}, & \text{if $\mu_2=\mu_3$;}\\ \{ \lambda \in \mathcal{N}(G) \mid \lambda \leq -\mu - (1,0,-1)\}, & \text{otherwise}. 
\end{cases} \end{equation} \item Let $x=\pi^{\mu}s_2s_1$ and $\mathbf{a}_x \subset C^0$. Then $\mu_1\leq \mu_2 < \mu_3$, and \begin{equation}\label{E:N(G)132}\mathcal{N}(G)_x = \begin{cases} \{ \lambda \in \mathcal{N}(G) \mid \lambda \leq (\frac{\mu_3-1}{2}, \frac{\mu_3-1}{2},-\mu_3+1) \}, & \text{if $\mu_1=\mu_2$;}\\ \{ \lambda \in \mathcal{N}(G) \mid \lambda \leq -\mu - (1,0,-1)\}, & \text{otherwise}. \end{cases} \end{equation} \end{enumerate} \end{prop} \begin{proof} We first examine $\mu$ satisfying the additional hypothesis $\mu_2\geq 0$, since this was a crucial hypothesis in Section \ref{S:valpolysA}. We then apply the reflection $\psi$ from Lemma \ref{T:reduction}, which interchanges the two simple roots. Recall that $\psi(x) = \pi^{(-\mu_3,-\mu_2, -\mu_1)}w'$, where the reduced expression for $w'$ is that for $w$ with all of the subscripts reversed. In particular, $\psi(s_1s_2) = s_2s_1$, which motivates our studying these two cases together. Let $x=\pi^{\mu}s_1s_2$, where $\mu_1< \mu_2 \leq \mu_3$ and $\mu_2 \geq 0$. Recall from Proposition \ref{T:123vals} that $\lambda_1 \leq -\mu_1-1$ and $\lambda_3 \geq -\mu_3+1$, unless $\mu_2=\mu_3$. Therefore, by Lemma \ref{T:charpolypo}, if $\mu_2\neq \mu_3$, we have $\lambda \leq -\mu - (1,0,-1)$. In the special case $\mu_2=\mu_3$, Equation \eqref{E:123gen} indicates that $\lambda \leq (-\mu_1-1, \frac{\mu_1+1}{2},\frac{\mu_1+1}{2})$. Now consider $x=\pi^{\mu}s_2s_1$, where $\mu_1< \mu_2 < \mu_3$ and $\mu_2 \geq 0$. Recall that Equation \eqref{E:132gen} implies that $\lambda \leq -\mu - (1,0,-1)$ holds without exception. To demonstrate the reverse containment, it suffices to show that for any $\lambda$ in the designated ranges, we have $(xI)_{\lambda} \neq \emptyset$. Let $x=\pi^{\mu}s_1s_2$, where $\mu_1< \mu_2 \leq \mu_3$ and $\mu_2 \geq 0$, and assume that $\lambda_3 \geq -\mu_3+1$.
Routine calculations demonstrate that \begin{equation*} \begin{pmatrix} \pi^{-\lambda_1} & \pi^{\lambda_3 - \mu_2} & \pi^{\mu_1} \\ \pi^{\mu_2} & 0 & 0 \\ 0 & \pi^{\mu_3} & 0 \end{pmatrix} \in (xI)_{\lambda}, \end{equation*} where the reader will recall that $\pi^{\ell}:= \pi^{\lceil \ell \rceil}$, for $\ell \in \Q$. In the special case $\lambda =\linebreak (-\mu_1-1,\frac{\mu_1+1}{2},\frac{\mu_1+1}{2})$, we have that \begin{equation*} \begin{pmatrix} \pi^{\mu_1+1} & 0 & \pi^{\mu_1} \\ \pi^{\mu_2} & 0 & 0 \\ 0 & \pi^{\mu_3} & 0 \end{pmatrix} \in (xI)_{\lambda}. \end{equation*} Now let $x=\pi^{\mu}s_2s_1$, where $\mu_1< \mu_2 < \mu_3$ and $\mu_2 \geq 0$. To prove the opposite containment in Equation \eqref{E:N(G)132}, one can check that \begin{equation*} \begin{pmatrix} \pi^{-\lambda_1} & \pi^{\mu_1} & 0 \\ \pi^{\lambda_3-\mu_1} & 0 & \pi^{\mu_2} \\ \pi^{\mu_3}&0 & 0 \end{pmatrix} \in (xI)_{\lambda}. \end{equation*} Finally, observe that $\psi(\pi^{(\mu_1,\mu_2,\mu_3)}s_1s_2) = \pi^{-(\mu_3,\mu_2,\mu_1)}s_2s_1$ and vice versa. By applying $\psi$ to the values of $x$ we have discussed, we see that Equations \eqref{E:N(G)123} and \eqref{E:N(G)132} are satisfied. \end{proof} Recall from Section \ref{S:valpolysA} that in Cases IIIA, IVA, and VA, we explicitly computed $(K_i)_{\leq \lambda} = \linebreak K_i \cap (xI)_{\leq \lambda}$, where $i =1,2,3$, respectively. Note, however, that the obvious map $\ov{\nu}:(K_i)_{\leq \lambda} \rightarrow \mathcal{N}(G)$ has the same image as the map $\ov{\nu}: (xI)_{\leq \lambda} \rightarrow \mathcal{N}(G)$. To compute $\mathcal{N}(G)_x$ in these three cases, it therefore suffices to use the results from Propositions \ref{T:13vals}, \ref{T:12vals}, and \ref{T:23vals}, in which we only consider elements in the corresponding subschemes $(K_i)_{\leq \lambda}$. \begin{prop}\label{T:N(G)13} Let $x=\pi^{\mu}s_1s_2s_1$ and $\mathbf{a}_x \subset C^0$. 
Then $\mu_1 < \mu_2< \mu_3$, and \begin{equation*}\mathcal{N}(G)_x = \begin{cases} \{ \lambda \in \mathcal{N}(G) \mid (\frac{\mu_3-1}{2}, \frac{\mu_3-1}{2}, -\mu_3+1) \leq \lambda \leq -\mu-(1,0,-1)\}, & \text{if $\mu_2+1 = \mu_3$},\\ \{ \lambda \in \mathcal{N}(G) \mid (-\mu_1-1, \frac{\mu_1+1}{2}, \frac{\mu_1+1}{2}) \leq \lambda \leq -\mu-(1,0,-1)\}, & \text{if $\mu_1+1 = \mu_2$},\\ \{ \lambda \in \mathcal{N}(G) \mid \lambda \leq -\mu - (2,0,-2)\}\cup \{-\mu-(1,0,-1)\}, & \text{otherwise}. \end{cases} \end{equation*} \end{prop} \begin{proof} First assume that $\mu_2 \geq 0$. Recall the inequalities from Equation \eqref{E:13gen} that demonstrate that $\mathcal{N}(G)_x \subseteq \{ \lambda \in \mathcal{N}(G) \mid \lambda \leq -\mu - (1,0,-1)\}$. In the special case $\mu_2+1=\mu_3$, recall from Proposition \ref{T:13vals} that $\lambda_3 = -\mu_3+1$ is fixed. Let $x = \pi^{\mu}s_1s_2s_1$ satisfy $\mathbf{a}_x \subset C^0$ and $\mu_2 \geq 0$. Note in this case that $\min\{\val(ae)\}=\mu_1+\mu_2+1$ and $\min\{\val(bd)\}=\mu_1+\mu_2+2$. This observation implies that if $-\mu_3+1\leq \lambda_3 \leq -\mu_2$, and both $\lambda<-\mu-(1,0,-1)$ and $\lambda \nleq - \mu-(2,0,-2)$, then $(K_1)_{\lambda} = \emptyset$. For the opposite containments in the case $\mu_2 \geq 0$, we again provide elements in $(K_1)_{\lambda}$ for all possible $\lambda$. Here, our examples must be handled according to several cases. First, note that \begin{equation*}\begin{pmatrix} \pi^{\mu_1+1} & 0 & \pi^{\mu_1} \\ 0 & \pi^{\mu_2} & 0 \\ \pi^{\mu_3} & 0 & 0\end{pmatrix} \in (K_1)_{\lambda}, \end{equation*} for $\lambda_3 = -\mu_3+1$. If $\lambda \leq -\mu - (2,0,-2)$, we may assume that $\mu_2+2\leq \mu_3$. 
Consider the two subcases in the statement of Proposition \ref{T:13vals} and compute that \begin{align*} (i) \quad \begin{pmatrix} \pi^{-\lambda_1} & \pi^{-\lambda_1-1}+\pi^{\lambda_3-\mu_2-1} & \pi^{\mu_1} \\ \pi^{\mu_2+1} & \pi^{\mu_2} & 0 \\ \pi^{\mu_3} & 0 & 0\end{pmatrix} &\in (K_1)_{\lambda},\\ (ii) \quad \begin{pmatrix} \pi^{-\lambda_1} & -\pi^{\mu_1+1} & \pi^{\mu_1} \\ \pi^{\mu_3-1}+\pi^{\lambda_3-\mu_1-1} & \pi^{\mu_2} & 0 \\ \pi^{\mu_3} & 0 & 0\end{pmatrix} &\in (K_1)_{\lambda}. \end{align*} Finally, since $\psi(s_1s_2s_1) = s_2s_1s_2= s_1s_2s_1$, the result for all $\mu$ follows. \end{proof} \begin{prop} We study the remaining cases $w=s_1$ and $w=s_2$ together. \begin{enumerate} \item Let $x=\pi^{\mu}s_1$ and $\mathbf{a}_x \subset C^0$. Then $\mu_1 < \mu_2\leq \mu_3$, and \begin{equation*}\mathcal{N}(G)_x = \begin{cases} \{ (\frac{\mu_3}{2}, \frac{\mu_3}{2},-\mu_3) \}, & \text{if $\mu_1+1=\mu_2$;}\\ \{ \lambda \in \mathcal{N}(G) \mid (\frac{\mu_3}{2}, \frac{\mu_3}{2},-\mu_3) \leq \lambda \leq -\mu - (1,-1,0)\}, & \text{otherwise}. \end{cases} \end{equation*} \item Let $x=\pi^{\mu}s_2$ and $\mathbf{a}_x \subset C^0$. Then $\mu_1 \leq \mu_2< \mu_3$, and \begin{equation*}\mathcal{N}(G)_x = \begin{cases} \{ (-\mu_1, \frac{\mu_1}{2}, \frac{\mu_1}{2}) \}, & \text{if $\mu_2+1=\mu_3$;}\\ \{ \lambda \in \mathcal{N}(G) \mid (-\mu_1, \frac{\mu_1}{2}, \frac{\mu_1}{2}) \leq \lambda \leq -\mu - (0,1,-1)\}, & \text{otherwise}. \end{cases} \end{equation*} \end{enumerate} \end{prop} \begin{proof} Let $x=\pi^{\mu}s_1$, where $\mu_1 < \mu_2\leq \mu_3$, and further assume that $\mu_2 \geq 0$. First consider $\mu \neq (-1,0,1)$, and recall from Proposition \ref{T:12vals} that $\lambda_3= -\mu_3$ is fixed and $\lambda_1 \leq -\mu_1-1$ holds. Combining these observations yields $(\frac{\mu_3}{2}, \frac{\mu_3}{2},-\mu_3)\leq \lambda \leq -\mu-(1,-1,0)$. 
In the special case $\mu = (-1,0,1)$, recall from Equation \eqref{E:12gen} that the only possibility is that $\lambda =(\frac{1}{2}, \frac{1}{2},-1)$. Conversely, if $\mu \neq (-1,0,1)$ consider \begin{equation*} \begin{pmatrix} \pi^{-\lambda_1}& \pi^{\mu_1} & 0\\ \pi^{\mu_2} & 0&0 \\ \pi^{\mu_3+1}&0& \pi^{\mu_3} \end{pmatrix} \in (K_2)_{\lambda}.\end{equation*} In the case in which $\mu=(-1,0,1)$, consider \begin{equation*} \begin{pmatrix} 0 & \pi^{-1} & 0\\ 1 & 0&0 \\ \pi^{\mu_3+1}&0& \pi \end{pmatrix} \in (K_2)_{\lambda}.\end{equation*} Now let $x=\pi^{\mu}s_2$, where $\mu_1 \leq \mu_2< \mu_3$ and $\mu_2 \geq 0$. Proposition \ref{T:23vals} says that if $\mu_2+1=\mu_3$, then $\mathcal{N}(G)_x$ consists of the single slope sequence $(-\mu_1, \frac{\mu_1}{2}, \frac{\mu_1}{2})$. If $\mu_2+1<\mu_3$, then we showed in Proposition \ref{T:23vals} that $\lambda_1= -\mu_1$ is fixed and $\lambda_3 \geq -\mu_3+1$. Combining these observations yields $ (-\mu_1, \frac{\mu_1}{2}, \frac{\mu_1}{2}) \leq \lambda \leq -\mu - (0,1,-1)$, for $\mu_2+1 < \mu_3$. To demonstrate the reverse containment, we must provide two classes of examples. In the case in which $\mu_2+1<\mu_3$, consider the following element: \begin{equation*} \begin{pmatrix} \pi^{\mu_1}& \pi^{\lambda_3 - \mu_2-1} & 0\\ \pi^{\mu_2+1} & 0&\pi^{\mu_2} \\ 0& \pi^{\mu_3}&0 \end{pmatrix} \in (K_3)_{\lambda}.\end{equation*} If, on the other hand, $\mu_2+1=\mu_3$, we have that \begin{equation*} \begin{pmatrix} \pi^{\mu_1}& 0 & 0\\ \pi^{\mu_2+1} & 0&\pi^{\mu_2} \\ 0& \pi^{\mu_3}&0 \end{pmatrix} \in (K_3)_{\lambda}.\end{equation*} Finally, observe that $\psi(s_1) = s_2$ and vice versa to complete our description of $\mathcal{N}(G)_x$ for all $\mu$. 
\end{proof} \subsection{Generic Newton slopes in the antidominant Weyl chamber}\label{S:genslopesC0} Except in some boundary cases in which the associated alcove is adjacent to a wall of the antidominant Weyl chamber, our calculations in the previous section demonstrate that we have the following generic Newton slope sequences: \begin{equation}\label{E:roughgens} \nu_x = \begin{cases} -\mu - (1,0,-1), &\text{if $w = s_1s_2,\ s_2s_1,$ or $s_1s_2s_1$;}\\ -\mu - (1,-1,0), &\text{if $w = s_1$;}\\ -\mu - (0,1,-1), &\text{if $w = s_2$;}\\ -\mu, &\text{if $w = 1$}, \end{cases} \end{equation} for $x$ such that $\mathbf{a}_x \subset C^0$. The alcoves in the antidominant Weyl chamber for which $\nu_x$ satisfies one of the above equalities correspond to alcoves lying in the \emph{shrunken} Weyl chamber, as defined by Reuman in \cite{Reu}. In fact, in $SL_3(F)$ in general, the values of $x \in \widetilde{W}$ for which $\nu_x$ is half-integral always correspond to alcoves that lie outside these shrunken Weyl chambers, but not conversely, as we have seen. Except when $x$ is a pure translation, we have seen that $\nu_x \neq -\mu_{\text{dom}}$ if $\mathbf{a}_x \subset C^0$. For every other value of $w \in W$, there is a correction term. Note in addition that each correction term is a positive coroot. These observations illustrate one difference between calculating the codimensions of the Newton strata using the Cartan decomposition $G(F) = KTK$, where $K= G(\mathcal{O})$, and the affine Bruhat decomposition $I\widetilde{W}I$. The generic Newton slope sequence associated to a particular double coset $K\pi^{\mu}K$ is always given by $-\mu_{\text{dom}}$, using the conventions of this paper, since Mazur's inequality bounds the Newton slopes of every element of $K\pi^{\mu}K$ above by $-\mu_{\text{dom}}$, and this bound is attained by the element $\pi^{\mu} \in K\pi^{\mu}K$. When using the affine Bruhat decomposition, our calculations demonstrate that this initial guess for the generic Newton slope sequence is not always correct.
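As an illustration, we work out one instance of Equation \eqref{E:roughgens}. The specific value of $\mu$ below is our own example, chosen so that $\mu_1 < \mu_2 \leq \mu_3$ and $\mu_2 \neq \mu_3$, and we assume that the corresponding alcove lies in the shrunken part of $C^0$. Take $w = s_1s_2$ and $\mu = (-3,1,2)$. Then
\begin{equation*}
\nu_x = -\mu - (1,0,-1) = (3,-1,-2) - (1,0,-1) = (2,-1,-1),
\end{equation*}
and the correction term $(1,0,-1) = \alpha_1^{\vee} + \alpha_2^{\vee}$ is indeed a positive coroot, so that $\nu_x$ is strictly smaller than $-\mu_{\text{dom}} = (3,-1,-2)$ in the dominance order.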
\subsection{Poset of Newton slopes $\mathcal{N}(G)_x$ for $SL_3(F)$}\label{S:N(G)_x} Similar calculations can be performed to determine the generic Newton slope sequences and the posets $\mathcal{N}(G)_x$ for $\mathbf{a}_x$ lying in the remaining Weyl chambers. The results depend very much on the Weyl chamber under consideration. To compute the posets $\mathcal{N}(G)_x$ for all $x\in \widetilde{W}$, we proceed in several steps as outlined in the reduction arguments from Section \ref{S:reduction}. First, compute $(xI)_{\leq \lambda}$ for each $x \in \widetilde{W}$ such that $\mathbf{a}_x \subset s_1(C^0)$ and $\mu_1 \geq 0$ as indicated in Section \ref{S:valpolysB}. One immediate corollary of this calculation is a list of the generic Newton slope sequences associated to these alcoves. Now, as in Section \ref{S:slopesC0}, we can explicitly find matrices that lie in $(xI)_{\lambda}$ for all $\lambda \leq \nu_x$. Once we have the description for $\mathcal{N}(G)_x$, where $x$ satisfies $\mathbf{a}_x \subset s_1(C^0)$ and $\mu_1 \geq 0$, apply the reflection $\psi$ from Lemma \ref{T:reduction} that interchanges the two simple roots. The result will be a description of $\mathcal{N}(G)_x$ for $x$ such that $\mathbf{a}_x \subset s_2(C^0)$ and $\mu_3 \leq 0$. Applying to these alcoves the rotation by 120 degrees about the center of the base alcove, given by $\varphi$ in Lemma \ref{T:reduction}, then completes our description of $\mathcal{N}(G)_x$ for $x$ such that $\mathbf{a}_x \subset s_1(C^0)$. Finally, using that $\varphi$ induces the identity on $\mathcal{N}(G)$, we can extend the results for the two adjacent Weyl chambers $C^0$ and $s_1(C^0)$ to the remaining four Weyl chambers. We omit the details of these calculations, but we include the resulting descriptions for both $\nu_x$ and $\mathcal{N}(G)_x$ in Tables \ref{Ta:s_12} through \ref{Ta:s_2} at the conclusion of the paper. Let $x = \pi^{\mu}w$, where $\mu=(\mu_1,\mu_2,\mu_3)$ and $w \in W$.
Recall from Theorem \ref{T:translation} that if $x = \pi^{\mu}$, then $\nu_x = -\mu_{\text{dom}}$ and $\mathcal{N}(G)_x = \{\nu_x\}$. Since this poset is relatively uninteresting, we do not create a separate table for $w=1$. For the other five cases, we organize the results according to the finite Weyl part $w$. We first list the generic Newton slope sequences $\nu_x$. The posets $\mathcal{N}(G)_x$ then consist of all elements $\lambda \in \mathcal{N}(G)$ that satisfy the indicated properties, where we write $\nu_x = (\nu_1, \nu_2,\nu_3)$. In addition, when expressing elements of $\widetilde{W}$ as products of the generators, we write $s_{121}$ for $s_1s_2s_1$, etc. Inside the shrunken Weyl chambers, the correction term for a given alcove, if any, is one of the three coroots appearing in Equation \eqref{E:roughgens}. As we have seen, in the antidominant Weyl chamber, every value of $x$ except a pure translation gives rise to a correction term. In all other Weyl chambers, this is not the case. There are progressively fewer correction terms as the Weyl chambers get farther from the antidominant chamber. In fact, no correction terms are necessary for the generic Newton slope sequences associated to alcoves in the dominant Weyl chamber $s_{121}(C^0)$. In the dominant Weyl chamber, the initial estimate $\nu_x = -\mu_{\text{dom}}$ is always correct. We conjecture that this phenomenon is a general pattern (see Conjecture \ref{T:domconj}). Other patterns among the descriptions for $\nu_x$ and $\mathcal{N}(G)_x$ for $SL_3(F)$ exist, but we cannot yet formulate a precise conjecture of this sort. \end{section} \begin{section}{Proofs of Theorem \ref{T:main} and Corollary \ref{T:maincor}}\label{S:codimcalc} \subsection{Proof of Theorem \ref{T:main} and Corollary \ref{T:maincor}, case A}\label{S:thmproof} Fix $x=\pi^{\mu}w \in \widetilde{W}$, where $\mu=(\mu_1,\mu_2,\mu_3)$.
We choose an integer $N$ such that $(xI)_{\leq \lambda}=\rho_N^{-1}\rho_N(xI)_{\leq \lambda}$ for any $\lambda \in \mathcal{N}(G)_x$, where we recall from Section \ref{S:admis} that $\rho_N:xI \twoheadrightarrow xI/P^N$ is the map that truncates the power series entries of $xI$ at level $P^N$. We explicitly describe the geometric structure of the closed subscheme $\rho_N(xI)_{\leq \lambda}$ in $xI/P^N$, and our description will yield a dimension formula. \vskip 5 pt \textbf{Case A}: $\mathbf{a}_x \subset C^0$ and $\mu_2 \geq 0$. \vskip 5 pt We illustrate the argument by considering the specific example of case IA. In this case, we argue that $\rho_N(xI)_{\leq \lambda}$ is actually a fiber bundle over an irreducible affine scheme, having irreducible fibers. Let $x = \pi^{\mu}s_1s_2$, where $\mathbf{a}_x \subset C^0$ and $\mu_2 \geq 0$. Let us begin by analyzing subcase ($i$). If we further assume that $\mu_2 \neq \mu_3$, then $-\mu_3+1 \leq \lambda_3 \leq -\mu_2+1$, and recall from \eqref{E:123i} that we have the following description of $(xI)_{\leq \lambda}$: \begin{equation}\label{E:123calc} (xI)_{\leq \lambda} = \left\{ \begin{pmatrix} a& b& c\\d&e&f\\g&h&i \end{pmatrix} \in xI \biggm| a \in P^{-\lambda_1} \ \text{and}\ \begin{vmatrix} a & b\\ d& e\end{vmatrix} \in P^{\lambda_3}\right\}. \end{equation} Furthermore, by Equation \eqref{E:123gen}, we see that $\nu_x = (-\mu_1-1,-\mu_2,-\mu_3+1)$ in this case. In general we denote $\nu_x=(\nu_1,\nu_2,\nu_3)$. Let us roughly outline the structure of the argument. 
The codimension of $(xI)_{\leq \lambda}$ in $xI$ will be given by the codimension in \begin{equation*}X:= P^{\mu_1+1} \times P^{\mu_1+1} \times P^{\mu_2}_{\times} \times P^{\mu_2}\end{equation*} of \begin{equation*}X_{\leq \lambda}:= \{ (a,b,d,e) \in X \mid a \in P^{-\lambda_1},\ ae-bd \in P^{\lambda_3}\}.\end{equation*} Consider the projection map \begin{align*} p:X & \longrightarrow Y:= P^{\mu_1+1}\times P^{\mu_2}_{\times} \times P^{\mu_2} \\ (a,b,d,e) & \mapsto (a,d,e) \phantom{ P^{\mu_1+1}\times P^{\mu_2}_{\times} \times P^{\mu_2}}, \end{align*} and consider the restriction of $p$ to $X_{\leq \lambda}$ \begin{equation*}p_{\lambda}: X_{\leq \lambda}\rightarrow Y_{\leq \lambda} := P^{-\lambda_1}\times P^{\mu_2}_{\times} \times P^{\mu_2}.\end{equation*} Observe that $Y_{\leq \lambda}$ has codimension $-\lambda_1-(\mu_1+1)$ in $Y$. Further, the fiber of $p_{\lambda}$ over the point $(a,d,e) \in Y_{\leq \lambda}$ is a coset of the form $aed^{-1} + P^{\lambda_3-\mu_2}$, so that the fibers of $p_{\lambda}:X_{\leq \lambda} \rightarrow Y_{\leq \lambda}$ are cosets of $P^{\lambda_3-\mu_2}$ in the fiber $P^{\mu_1+1}$ of the map $p:X \rightarrow Y$. Therefore, $(xI)_{\leq \lambda}$ is irreducible, having codimension given by \begin{align*}\codim((xI)_{\leq \lambda} \subseteq xI) &= \codim(X_{\leq \lambda} \subseteq X)\notag \\ \phantom{\codim((xI)_{\lambda} \subseteq xI)}&= \lceil -\lambda_1-(\mu_1+1)\rceil + \lceil(\lambda_3 - \mu_2)-(\mu_1+1)\rceil \notag\\ \phantom{\codim(X_{\leq \lambda} \subseteq X)} &= \lceil \nu_1-\lambda_1 \rceil + \lceil -\nu_3+\lambda_3\rceil \notag \\ \phantom{\codim(X_{\leq \lambda} \subseteq X)}&= \sum_{i=1}^2 \lceil \langle \omega_i,\nu_x- \lambda\rangle\rceil \\ \phantom{\codim(X_{\leq \lambda} \subseteq X)}&= \length_{\mathcal{N}(G)_x}[\lambda,\nu_x].\end{align*} To obtain the last equality, we recall from Equation \eqref{E:N(G)123} that there is indeed a chain of the specified length in $\mathcal{N}(G)_x$. 
Further, since $(xI)_{\lambda}$ is open in $(xI)_{\leq \lambda}$, and since $(xI)_{\leq \lambda}$ is irreducible in $xI$, we see that the closure of the Newton stratum $(xI)_{\lambda}$ in $xI$ is precisely $(xI)_{\leq \lambda}$. Therefore, if we have two Newton polygons $\lambda' < \lambda''$ that are adjacent in the poset $\mathcal{N}(G)_x$, then we can compute that the codimension of the smaller Newton stratum in the closure of the larger is given by \begin{align*}\codim((xI)_{\lambda'} \subseteq (xI)_{\leq \lambda''}) &= \codim((xI)_{\leq \lambda'} \subseteq (xI)_{\leq \lambda''})\notag \\ \phantom{\codim((xI)_{\lambda'} \subseteq (xI)_{\leq \lambda''})}&= \codim((xI)_{\leq \lambda'} \subseteq xI) - \codim((xI)_{\leq \lambda''} \subseteq xI) \notag\\ \phantom{\codim((xI)_{\lambda'} \subseteq (xI)_{\leq \lambda''})} &= \length_{\mathcal{N}(G)_x}[\lambda',\nu_x] - \length_{\mathcal{N}(G)_x}[\lambda'',\nu_x] \notag\\ \phantom{\codim((xI)_{\lambda'} \subseteq (xI)_{\leq \lambda''})} &= 1.\end{align*} To make all of these arguments rigorous, we of course need to use the admissibility of $(xI)_{\leq \lambda}$. By choosing $N \geq \mu_3+1$ and replacing $P^{j}$ everywhere by $P^j/P^N$, the result will follow for case IA, subcase ($i$) with $\mu_2 \neq \mu_3$. Subcase ($ii$) and the case $\mu_2 = \mu_3$ for case IA are handled similarly, as are all other cases. We mention that in the special case $\mu_2=\mu_3$, we have $\nu_3=-\mu_3+\frac{1}{2}$, and the value $\lceil \lambda_3-\nu_3\rceil$ yields an overestimate for the codimension. In particular, for $\lambda_3=\nu_3 + \frac{1}{2}$, we have $\lceil \lambda_3-\nu_3\rceil = 1$, even though $\codim((xI)_{\leq \lambda}\subseteq xI) = \lceil \nu_1-\lambda_1 \rceil$ in this case. Therefore, in case IA, subcase ($i$) with $\mu_2=\mu_3$, we have $\codim ((xI)_{\leq \lambda} \subseteq xI)= \left( \sum^2_{i=1}\lceil \langle \omega_i, \nu_x-\lambda \rangle\rceil \right)-1$.
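The agreement between the two expressions for the codimension in case IA, subcase ($i$), can be checked mechanically. The following Python sketch is our own addition, with hypothetical sample data; it uses the $SL_3$ pairings $\langle \omega_1, v\rangle = v_1$ and $\langle \omega_2, v\rangle = v_1+v_2$, so that the equality with $\lceil \nu_1-\lambda_1\rceil + \lceil \lambda_3-\nu_3\rceil$ follows because the entries of $\nu_x-\lambda$ sum to zero.

```python
from fractions import Fraction
from math import ceil

def codim_weights(nu, lam):
    """Codimension via the fundamental weights: for SL_3,
    <omega_1, v> = v_1 and <omega_2, v> = v_1 + v_2."""
    d = [Fraction(a) - Fraction(b) for a, b in zip(nu, lam)]
    return ceil(d[0]) + ceil(d[0] + d[1])

def codim_direct(nu, lam):
    """Codimension as computed in the text for case IA, subcase (i):
    ceil(nu_1 - lambda_1) + ceil(lambda_3 - nu_3)."""
    return ceil(Fraction(nu[0]) - Fraction(lam[0])) + \
           ceil(Fraction(lam[2]) - Fraction(nu[2]))

# Hypothetical sample data: nu_x = (2, -1, -1) and some smaller Newton
# slope sequences (entries sum to zero, compared in dominance order).
nu = (2, -1, -1)
for lam in [(1, 0, -1), (Fraction(1, 2), Fraction(1, 2), -1), (0, 0, 0)]:
    assert codim_weights(nu, lam) == codim_direct(nu, lam)
    print(lam, codim_weights(nu, lam))
```

The two functions agree on every $\lambda$ whose entries sum to zero, since then $\langle \omega_2, \nu_x-\lambda\rangle = -(\nu_3-\lambda_3)$.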
We omit the details for the remaining arguments since they all proceed in a similar fashion, but we highlight some subtle differences among them. \begin{lemma}\label{T:mult} Consider the multiplication map given by \begin{align*} m: \mathcal{O}/P^n \times \mathcal{O}/ P^n & \rightarrow \mathcal{O}/ P^n \notag \\ (x,y) & \mapsto xy.\end{align*} Suppose $z \in P^k_{\times}$ with $0\leq k<n$. Then the fiber of $m$ over $z$ is of the form \begin{equation*}m^{-1}(z+P^n) = \coprod\limits_{0\leq j \leq k} S_j, \end{equation*} where each $S_j$ is a fiber bundle over $P^j_{\times}/P^n$, having fibers isomorphic to $P^{n-j}/P^n$. If, on the other hand, $z \in P^n$, then the fiber of $m$ over $z$ is of the form \begin{equation*}m^{-1}(z+P^n) = \bigcup\limits_{0\leq j \leq n}\left( P^j/P^n \times P^{n-j}/P^n \right). \end{equation*} In either case, the fiber of $m$ over a point $z \in \mathcal{O}/P^n$ is a union of irreducible spaces of codimension $n$ in $\mathcal{O}/P^n \times \mathcal{O}/P^n$. \end{lemma} \begin{proof} Clear. \end{proof} We now examine case IIIA, which proceeds in a slightly different manner than the other cases. Let $x=\pi^{\mu}s_1s_2s_1$ with $\mathbf{a}_x \subset C^0$ and $\mu_2 \geq 0$. We begin with subcase ($i$), in which we recall that if $-\mu_3+1 \leq \lambda_3 \leq -\mu_2$, then Equation \eqref{E:123calc} still describes $(xI)_{\leq \lambda}$. Here, the main difference is that we do not know the valuation of either $b$ or $d$. Let \begin{equation*}X:= P^{\mu_1+1} \times P^{\mu_1+1} \times P^{\mu_2+1} \times P^{\mu_2}_{\times},\end{equation*} and let $X_{\leq \lambda}$ be defined as in the previous argument. 
Consider the projection map \begin{align*} p:X & \longrightarrow Z:= P^{\mu_1+1}\times P^{\mu_2}_{\times} \\ (a,b,d,e) & \mapsto (a,e),\end{align*} and consider the restriction of $p$ to $X_{\leq \lambda}$ \begin{equation*}p_{\lambda}: X_{\leq \lambda}\rightarrow Z_{\leq \lambda} := P^{-\lambda_1}\times P^{\mu_2}_{\times},\end{equation*} Observe as before that $Z_{\leq \lambda}$ has codimension $-\lambda_1-(\mu_1+1)$ in $Z$. In this case, however, the fiber of $p_{\lambda}$ over the point $(a,e) \in Z_{\leq \lambda}$ is isomorphic to the fiber of the multiplication map $m$ from Lemma \ref{T:mult} over the point $ae + P^{\lambda_3}$, after appropriate scaling. In particular, for any $(a,e) \in Z_{\leq \lambda}$ the fiber of $p_{\lambda}$ is reducible, and each irreducible component has codimension $\lambda_3+\mu_3-2$ in the fiber $P^{\mu_1+1}\times P^{\mu_2+1}$ of $p$ over $(a,e)\in Z$. Now we define \begin{equation*}X_{\lambda}:= \{ (a,b,d,e) \in X \mid a \in P^{-\lambda_1}_{\times},\ ae-bd \in P^{\lambda_3}_{\times}\}\subset X_{\leq \lambda},\end{equation*} where the reader will observe that while $X_{\lambda}$ consists of elements having Newton polygon $\lambda$, it is only a subspace of the Newton stratum corresponding to $\lambda$. Nevertheless, to compute the closure of the Newton stratum, it suffices to work with the subspace $X_{\lambda}$, which justifies the notation. 
If we define locally closed subsets of $X_{\lambda}$ and $X_{\leq \lambda}$ as follows: \begin{equation*}X^j_{\lambda}:= \{ (a,b,d,e) \in X \mid a \in P^{-\lambda_1}_{\times},\ ae-bd \in P^{\lambda_3}_{\times},\ b \in P^j_{\times}\},\ \text{and}\end{equation*} \begin{equation*}X^j_{\leq \lambda}:= \{ (a,b,d,e) \in X \mid a \in P^{-\lambda_1},\ ae-bd \in P^{\lambda_3},\ b\in P^j_{\times}\},\end{equation*} then we can write $X_{\leq \lambda}$ as a disjoint union of locally closed subsets of $X$: \begin{equation}\label{E:union}X_{\leq \lambda} = \coprod\limits_{j=\mu_1+1}^{-\lambda_1-1} X^j_{\leq \lambda} \quad \text{or} \quad X_{\leq \lambda} = \coprod\limits_{j=\mu_1+1}^{\lambda_3-\mu_2-1} X^j_{\leq \lambda}.\end{equation} Here, the two cases correspond to the two possibilities in Lemma \ref{T:mult}, where we must consider whether $\val(ae) < \lambda_3$ or $\val(ae)\geq \lambda_3$. Therefore, we see that there are either $\lceil -\lambda_1-\mu_1-1\rceil$ or $\lceil \lambda_3+\mu_3-1\rceil$ components, depending on which of these integers is smaller. Furthermore, each of the components $X^j_{\leq \lambda}$ is irreducible in $X$ by our discussion above, and is therefore precisely the closure of the corresponding $X^j_{\lambda}$. Since the union in \eqref{E:union} is finite in either case, we see that the closure of $X_{\lambda}$ is $X_{\leq \lambda}$, and therefore the closure of $(xI)_{\lambda}$ is $(xI)_{\leq \lambda}$. The codimension calculations proceed as before, as does the means by which we can make this argument rigorous. The reader should note that case IIIA, subcase ($ii$) is handled in the same manner as subcase ($i$). In particular, in case IIIA, the scheme $(xI)_{\leq \lambda}$ is reducible for all $\lambda \leq -\mu-(2,0,-2)$. We point out one further difference in case IIIA. Consider the point $(a_{\mu_1+1},e_{\mu_2}) \in Z_{\leq \lambda}$, where $a_i$ and $e_i$ denote the coefficients of $\pi^i$ in $a$ and $e$, respectively.
Since in case IIIA we have $\min\{\val(ae)\}=\mu_1+\mu_2+1$ and $\min\{\val(bd)\}=\mu_1+\mu_2+2$, we see that the fiber over this point is empty. In fact, as we saw in the proof of Proposition \ref{T:N(G)13}, this phenomenon occurs whenever $\lambda<\nu_x=-\mu-(1,0,-1)$ and $\lambda \nleq - \mu-(2,0,-2)$. \subsection{Proof of Theorem \ref{T:main} and Corollary \ref{T:maincor}, case B}\label{S:corproof} Recall from Remark \ref{T:valpolysB} that the codimensions in case B agree with the codimensions of the Newton strata in $x'I'$, where $x'=\pi^{{\mu}_{\text{dom}^*}}(s_1^{-1}ws_1)$ and $I'=s_1^{-1}Is_1$. Note that the theorem once more holds trivially in case VI, since for $w=1$, we have $w':=s_1^{-1}ws_1=1$, and so $\mathcal{N}(G)_x = \{-\mu_{\text{dom}}\}$. A routine calculation computes $x'I'$ for each of the five remaining values of $w' \in W$: \begin{align*} (\text{I})\ \ &x'I' = \begin{pmatrix} P^{\mu_2}&P^{\mu_2}_{\times}&P^{\mu_2}\\P^{\mu_1+1}&P^{\mu_1+1}&P^{\mu_1}_{\times} \\P^{\mu_3}_{\times}&P^{\mu_3+1}&P^{\mu_3}\end{pmatrix} & (\text{IV})\ \ &x'I' = \begin{pmatrix} P^{\mu_2}&P^{\mu_2}_{\times}&P^{\mu_2}\\P^{\mu_1}_{\times}&P^{\mu_1+1}&P^{\mu_1} \\P^{\mu_3+1}&P^{\mu_3+1}&P^{\mu_3}_{\times}\end{pmatrix}\\ (\text{II})\ \ &x'I' = \begin{pmatrix} P^{\mu_2+1}&P^{\mu_2+1}&P^{\mu_2}_{\times}\\P^{\mu_1}_{\times}&P^{\mu_1+1}&P^{\mu_1} \\P^{\mu_3}&P^{\mu_3}_{\times}&P^{\mu_3}\end{pmatrix} & (\text{V})\ \ &x'I' = \begin{pmatrix} P^{\mu_2+1}&P^{\mu_2+1}&P^{\mu_2}_{\times}\\P^{\mu_1}&P^{\mu_1}_{\times}&P^{\mu_1} \\P^{\mu_3}_{\times}&P^{\mu_3+1}&P^{\mu_3}\end{pmatrix}.\\ (\text{III})\ \ &x'I' = \begin{pmatrix} P^{\mu_2}_{\times}&P^{\mu_2+1}&P^{\mu_2}\\P^{\mu_1+1}&P^{\mu_1+1}&P^{\mu_1}_{\times} \\P^{\mu_3}&P^{\mu_3}_{\times}&P^{\mu_3}\end{pmatrix} & \phantom{} \end{align*} Arguments similar to those used in case A apply to show that $(x'I')_{\leq \lambda'}$ is admissible and has the structure of a fiber bundle over an irreducible base space, having non-empty fibers over 
every point. Note that case VB is handled in the same way as case IIIA, in which the fibers are reducible and look like fibers of the requisite analog of the multiplication map from Lemma \ref{T:mult}. Using the arguments outlined above, the reader can check that for any $x \in \widetilde{W}$ satisfying the conditions of cases A or B, we have \begin{equation*} \codim\left( (xI)_{\leq \lambda} \subseteq xI\right) = \length_{\mathcal{N}(G)_x}[\lambda,\nu_x].\end{equation*} Lemmas \ref{T:reduction} and \ref{T:coset} then imply that for all $x \in \widetilde{W}$, we have \begin{equation*} \codim\left( (IxI)_{\leq \lambda} \subseteq IxI\right) = \length_{\mathcal{N}(G)_x}[\lambda,\nu_x]. \end{equation*} We have also shown that for any $x \in \widetilde{W}$ satisfying the conditions of cases A or B, the closure of $(xI)_{\lambda}$ in $xI$ is precisely $(xI)_{\leq \lambda}$, in which case the codimension between adjacent strata always equals 1. Finally, observe that the proofs of Lemmas \ref{T:reduction} and \ref{T:coset} enable us to extend these observations about the closures of the Newton strata $(xI)_{\lambda}$ to $(IxI)_{\lambda}$ and finally to all $x\in \widetilde{W}$. Therefore, Theorem \ref{T:main} holds for any $x \in \widetilde{W}$. The reader can verify the remaining root-theoretic versions of the codimension formula provided in Corollary \ref{T:maincor} during the course of the proof of Theorem \ref{T:main}. We extend the analysis for cases A and B to the rest of $C^0$ and $s_1(C^0)$ by applying the reflection $\psi$ that interchanges the two simple roots, discussed in Lemma \ref{T:reduction}. We may then extend our calculations to the remaining Weyl chambers by applying the rotations $\varphi$ and $\varphi^2$ to the Weyl chambers $C^0$ and $s_1(C^0)$, where $\varphi$ is the rotation by 120 degrees about the center of the base alcove defined in Lemma \ref{T:reduction}.
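We close this section with a numerical sanity check of the fiber counts in Lemma \ref{T:mult}, carried out in the toy model $\mathcal{O} = \mathbb{Z}_p$, where $\mathcal{O}/P^n = \mathbb{Z}/p^n$. In this model, each stratum $S_j$ has $(p^{n-j}-p^{n-j-1})\cdot p^{j} = p^{n-1}(p-1)$ points, so the fiber of $m$ over a point $z$ with $\val(z)=k<n$ has $(k+1)p^{n-1}(p-1)$ points. The brute-force count below is our own sketch and confirms this for small $p$ and $n$.

```python
def fiber_size(p, n, z):
    """Number of pairs (x, y) in (Z/p^n)^2 with x*y = z (mod p^n),
    i.e. the fiber of the multiplication map m over z in the toy model."""
    q = p ** n
    return sum(1 for x in range(q) for y in range(q) if (x * y) % q == z % q)

def predicted(p, n, k):
    """Predicted count: (k+1) strata S_0, ..., S_k, each contributing
    p^(n-1) * (p-1) points."""
    return (k + 1) * p ** (n - 1) * (p - 1)

for p, n in [(2, 3), (3, 2)]:
    for k in range(n):
        z = p ** k          # an element of valuation exactly k
        assert fiber_size(p, n, z) == predicted(p, n, k)
print("Lemma fiber counts confirmed for (p, n) = (2, 3), (3, 2)")
```

The independence of $|S_j|$ from $j$ reflects the fact that each $S_j$ has the same codimension $n$, as asserted in the lemma.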
\end{section} \newpage \begin{table}[h] \begin{center}\renewcommand{\arraystretch}{1.25} \begin{tabular}{| c | l | l |} \hline Weyl chamber & $\nu_x$ & $\mathcal{N}(G)_x$ \\ \hline\hline $C^0$ & $-(\mu+(1, -\frac{1}{2},-\frac{1}{2}))_{\text{dom}},\ $ if $\mu_2 = \mu_3$ & $\{\lambda \leq \nu_x\}$ \\ \cline{2-3} & $-(\mu+(1,0,-1))_{\text{dom}},\ $ otherwise & $\{ \lambda \leq \nu_x\}$ \\ \hline\hline $s_1(C^0)$ & $-(\mu + (\frac{1}{2},0, -\frac{1}{2}))_{\text{dom}},\ $ if $\mu_1+1=\mu_3$ & $\{\lambda \leq \nu_x\}$ \\ \cline{2-3} & $-(\mu +(1,0,-1))_{\text{dom}},\ $ otherwise & $\{ \lambda \leq \nu_x\}$ \\ \hline\hline $s_2(C^0)$ & $-(\mu+(1,-1,0))_{\text{dom}}$ & $\{ \lambda \leq \nu_x\}$\\ \hline\hline $s_{12}(C^0)$ & $-\mu_{\text{dom}}$ & $\{ \lambda \leq \nu_x\}$\\ \hline\hline $s_{21}(C^0)$ & $-(\mu+(\frac{1}{2},-\frac{1}{2},0))_{\text{dom}},\ $ if $\mu_1+1=\mu_2$ &$\{ \lambda \leq \nu_x\}$ \\ \cline{2-3} & $-(\mu+(1,-1,0))_{\text{dom}},\ $ otherwise & $\{ \lambda \leq \nu_x\}$ \\ \hline\hline $s_{121}(C^0)$ & $-\mu_{\text{dom}}$ & $\{ \lambda \leq \nu_x\}$ \\ \hline \end{tabular} \caption{$w = s_{12}$}\label{Ta:s_12} \end{center} \end{table} \begin{table}[h] \begin{center}\renewcommand{\arraystretch}{1.25} \begin{tabular}{| c | l | l |} \hline Weyl chamber & $\nu_x$ & $\mathcal{N}(G)_x$ \\ \hline\hline $C^0$ & $-(\mu+(\frac{1}{2},\frac{1}{2},-1))_{\text{dom}},\ $ if $\mu_1 = \mu_2$ & $\{\lambda \leq \nu_x\}$ \\ \cline{2-3} & $-(\mu+(1,0,-1))_{\text{dom}},\ $ otherwise & $\{ \lambda \leq \nu_x\}$ \\ \hline\hline $s_1(C^0)$ & $-(\mu+(0,1,-1))_{\text{dom}}$ & $\{ \lambda \leq \nu_x\}$\\ \hline\hline $s_2(C^0)$ & $-(\mu + (\frac{1}{2},0, -\frac{1}{2}))_{\text{dom}},\ $ if $\mu_1+1=\mu_3$ & $\{\lambda \leq \nu_x\}$ \\ \cline{2-3} & $-(\mu +(1,0,-1))_{\text{dom}},\ $ otherwise & $\{ \lambda \leq \nu_x\}$ \\ \hline\hline $s_{12}(C^0)$ & $-(\mu+(0,\frac{1}{2},-\frac{1}{2}))_{\text{dom}},\ $ if $\mu_2+1=\mu_3$ &$\{ \lambda \leq \nu_x\}$ \\ \cline{2-3} & 
$-(\mu+(0,1,-1))_{\text{dom}},\ $ otherwise & $\{ \lambda \leq \nu_x\}$ \\ \hline\hline $s_{21}(C^0)$ & $-\mu_{\text{dom}}$ & $\{ \lambda \leq \nu_x\}$\\ \hline\hline $s_{121}(C^0)$ & $-\mu_{\text{dom}}$ & $\{ \lambda \leq \nu_x\}$ \\ \hline \end{tabular} \caption{$w = s_{21}$}\label{Ta:s_21} \end{center} \end{table} \clearpage \begin{table}[h] \begin{center}\renewcommand{\arraystretch}{1.25} \begin{tabular}{| c | l | l |} \hline Weyl chamber & $\nu_x$ & $\mathcal{N}(G)_x$ \\ \hline\hline $C^0$ & $-(\mu+(1,0,-1))_{\text{dom}}$ & $\{(\nu_1, -\frac{\nu_1}{2}, -\frac{\nu_1}{2}) \leq \lambda \leq \nu_x\},\ $ if $\mu_1+1=\mu_2$ \\ \cline{3-3} & & $\{(-\frac{\nu_3}{2}, -\frac{\nu_3}{2}, \nu_3) \leq \lambda \leq \nu_x\},\ $ if $\mu_2+1=\mu_3$ \\ \cline{3-3} & & $\{\nu_x\} \cup \{\lambda \leq \nu_x-(1,0,-1)\},\ $ otherwise \\ \hline\hline $s_1(C^0)$ & $-(\mu+(\frac{1}{2}, 0, -\frac{1}{2}))_{\text{dom}},\ $ if $\mu_1+1=\mu_3$ & $\{\nu_x\}$ \\ \cline{2-3} & $-(\mu+(1,0,-1))_{\text{dom}},\ $ otherwise & $\{(\nu_1,-\frac{\nu_1}{2}, -\frac{\nu_1}{2}) \leq \lambda \leq \nu_x\}$ \\ \hline\hline $s_2(C^0)$ & $-(\mu+(\frac{1}{2}, 0, -\frac{1}{2}))_{\text{dom}},\ $ if $\mu_1+1=\mu_3$ &$\{\nu_x\}$ \\ \cline{2-3} & $-(\mu+(1,0,-1))_{\text{dom}},\ $ otherwise & $\{(-\frac{\nu_3}{2}, -\frac{\nu_3}{2},\nu_3) \leq \lambda \leq \nu_x\} $ \\ \hline\hline $s_{12}(C^0)$ & $-\mu_{\text{dom}}$ & $\{(\nu_1, -\frac{\nu_1}{2}, -\frac{\nu_1}{2}) \leq \lambda \leq \nu_x\}$ \\ \hline\hline $s_{21}(C^0)$ &$-\mu_{\text{dom}}$ & $\{(-\frac{\nu_3}{2},-\frac{\nu_3}{2},\nu_3)\leq \lambda \leq \nu_x\}$ \\ \hline\hline $s_{121}(C^0)$ & $-\mu_{\text{dom}}$ & $\{\lambda \leq \nu_x\}$ \\ \hline \end{tabular} \caption{$w = s_{121}$}\label{Ta:s_121} \end{center} \end{table} \begin{table}[h] \begin{center}\renewcommand{\arraystretch}{1.25} \begin{tabular}{| c | l | l |} \hline Weyl chamber & $\nu_x$ & $\mathcal{N}(G)_x$ \\ \hline\hline $C^0$ & $-(\mu+(\frac{1}{2}, -\frac{1}{2}, 0))_{\text{dom}},\ $ if $\mu_1+1 = 
\mu_2$ & $\{\nu_x\}$ \\ \cline{2-3} & $-(\mu+(1,-1,0))_{\text{dom}},\ $ otherwise & $\{(-\frac{\nu_3}{2}, -\frac{\nu_3}{2}, \nu_3) \leq \lambda \leq \nu_x\}$ \\ \hline\hline $s_1(C^0)$ & $-\mu_{\text{dom}}$ & $\{(-\frac{\nu_3}{2}, -\frac{\nu_3}{2}, \nu_3) \leq \lambda \leq \nu_x\}$ \\ \hline\hline $s_2(C^0)$ & $-(\mu+(1,-1,0))_{\text{dom}}$ & $\{(\nu_1, -\frac{\nu_1}{2}, -\frac{\nu_1}{2}) \leq \lambda \leq \nu_x\},\ $ if $\mu_1=\mu_3$ \\ \cline{3-3} & & $\{\lambda \leq \nu_x\},\ $ otherwise \\ \hline\hline $s_{12}(C^0)$ & $-\mu_{\text{dom}}$ & $\{(\nu_1, -\frac{\nu_1}{2}, -\frac{\nu_1}{2}) \leq \lambda \leq \nu_x\},\ $ if $\mu_2=\mu_3$ \\ \cline{3-3} & & $\{\lambda \leq \nu_x\},\ $ otherwise \\ \hline\hline $s_{21}(C^0)$ & $-(\mu+(\frac{1}{2},-\frac{1}{2},0))_{\text{dom}},\ $ if $\mu_1+1=\mu_2$ & $\{\nu_x\}$ \\ \cline{2-3} & $-(\mu+(1,-1,0))_{\text{dom}},\ $ otherwise & $\{(\nu_1, -\frac{\nu_1}{2}, -\frac{\nu_1}{2}) \leq \lambda \leq \nu_x\}$ \\ \hline\hline $s_{121}(C^0)$ & $-\mu_{\text{dom}}$ & $\{(\nu_1, -\frac{\nu_1}{2}, -\frac{\nu_1}{2}) \leq \lambda \leq \nu_x\}$ \\ \hline \end{tabular} \caption{$w = s_1$}\label{Ta:s_1} \end{center} \end{table} \clearpage \begin{table}[h] \begin{center}\renewcommand{\arraystretch}{1.25} \begin{tabular}{| c | l | l |} \hline Weyl chamber & $\nu_x$ & $\mathcal{N}(G)_x$ \\ \hline\hline $C^0$ & $-(\mu+(0,\frac{1}{2}, -\frac{1}{2}))_{\text{dom}},\ $ if $\mu_2+1 = \mu_3$ & $\{\nu_x\}$ \\ \cline{2-3} & $-(\mu+(0,1,-1))_{\text{dom}},\ $ otherwise & $\{(\nu_1, -\frac{\nu_1}{2}, -\frac{\nu_1}{2}) \leq \lambda \leq \nu_x\}$ \\ \hline\hline $s_1(C^0)$ & $-(\mu+(0,1,-1))_{\text{dom}}$ & $\{(-\frac{\nu_3}{2}, -\frac{\nu_3}{2}, \nu_3) \leq \lambda \leq \nu_x\},\ $ if $\mu_1=\mu_3$ \\ \cline{3-3} & & $\{\lambda \leq \nu_x\},\ $ otherwise \\ \hline\hline $s_2(C^0)$ & $-\mu_{\text{dom}}$ & $\{(\nu_1, -\frac{\nu_1}{2}, -\frac{\nu_1}{2}) \leq \lambda \leq \nu_x\}$\\ \hline\hline $s_{12}(C^0)$ & $-(\mu+(0,\frac{1}{2},-\frac{1}{2}))_{\text{dom}},\ 
$ if $\mu_2+1=\mu_3$ & $\{\nu_x\}$ \\ \cline{2-3} & $-(\mu+(0,1,-1))_{\text{dom}},\ $ otherwise & $\{(-\frac{\nu_3}{2},-\frac{\nu_3}{2},\nu_3) \leq \lambda \leq \nu_x\}$ \\ \hline\hline $s_{21}(C^0)$ & $-\mu_{\text{dom}}$ & $\{(-\frac{\nu_3}{2},-\frac{\nu_3}{2},\nu_3) \leq \lambda \leq \nu_x\},\ $ if $\mu_1=\mu_2$ \\ \cline{3-3} & & $\{ \lambda \leq \nu_x\},\ $ otherwise \\ \hline\hline $s_{121}(C^0)$ & $-\mu_{\text{dom}}$ & $\{(-\frac{\nu_3}{2}, -\frac{\nu_3}{2}, \nu_3) \leq \lambda \leq \nu_x\}$ \\ \hline \end{tabular} \caption{$w = s_2$}\label{Ta:s_2} \end{center} \end{table} \bibliographystyle{amsplain} \bibliography{references} \end{document}
\begin{document} \maketitle \begin{abstract} We give manifestly positive Andrews-Gordon type series for the level 3 standard modules of the affine Lie algebra of type $A^{(1)}_2$. We also give corresponding bipartition identities which have representation theoretic interpretations via the vertex operators. Our proof is based on the Borodin product formula and the Corteel-Welsh recursion for the cylindric partitions, a $q$-version of Sister Celine's technique and a generalization of Andrews' partition ideals by finite automata due to Takigiku and the author. \end{abstract} \section{Introduction} \subsection{The Rogers-Ramanujan partition identities} A partition $\lambda=(\lambda_1,\dots,\lambda_{\ell})$ of a nonnegative integer $n$ is a weakly decreasing sequence of positive integers (called parts) whose sum $|\lambda|:=\lambda_1+\dots+\lambda_{\ell}$ (called the size) is $n$. Let $i=1$ or $2$. Then the celebrated Rogers-Ramanujan partition identities may be stated as follows. \begin{quotation}\label{eq:RR:PT} The number of partitions of $n$ such that parts are at least $i$ and such that consecutive parts differ by at least $2$ is equal to the number of partitions of $n$ into parts congruent to $\pm i$ \mbox{modulo $5$.} \end{quotation} As $q$-series identities the Rogers-Ramanujan identities are stated as \begin{align} \sum_{n\ge 0} \frac{q^{n^2}}{(q;q)_n} = \frac{1}{(q,q^4;q^5)_\infty}, \quad \sum_{n\ge 0} \frac{q^{n^2+n}}{(q;q)_n} = \frac{1}{(q^2,q^3;q^5)_\infty}, \label{eq:RR:q} \end{align} where for $n\in\mathbb{Z}_{\geq0} \sqcup \{\infty\}$, \begin{align*} (a;q)_n := \prod_{0\leq j<n} (1-aq^j), \quad (a_1,\dots,a_k;q)_{n} := (a_1;q)_n \cdots (a_k;q)_n. \end{align*} \subsection{The main result}\label{mainse} A bipartition of a nonnegative integer $n$ is a pair of partitions $\boldsymbol{\lambda}=(\lambda^{(1)},\lambda^{(2)})$ such that $|\lambda^{(1)}|+|\lambda^{(2)}|=n$.
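As a sanity check, the $q$-series identities \eqref{eq:RR:q} can be verified to any finite order by truncated power series arithmetic. The following sketch (plain Python; all helper names are ad hoc, not from the paper) confirms both identities up to $q^{30}$:

```python
# Verify the Rogers-Ramanujan identities (eq:RR:q) as truncated power
# series in q, up to order N.  Plain-Python list-based polynomial arithmetic.
N = 30

def mul(a, b):
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < N:
                    c[i + j] += ai * bj
    return c

def inv(a):
    # power-series inverse, assuming a[0] == 1
    c = [0] * N
    c[0] = 1
    for n in range(1, N):
        c[n] = -sum(a[k] * c[n - k] for k in range(1, n + 1))
    return c

def poch5(exponents):
    # product over e in exponents, j >= 0 of (1 - q^(e + 5j)), truncated
    p = [0] * N
    p[0] = 1
    for e in exponents:
        k = e
        while k < N:
            factor = [0] * N
            factor[0], factor[k] = 1, -1
            p = mul(p, factor)
            k += 5
    return p

def rr_sum(i):
    # sum_{n >= 0} q^(n^2 + (i-1)n) / (q;q)_n, truncated
    total = [0] * N
    qpoch = [0] * N            # (q;q)_n, built up incrementally
    qpoch[0] = 1
    n = 0
    while n * n + (i - 1) * n < N:
        shift = n * n + (i - 1) * n
        for k, c in enumerate(inv(qpoch)):
            if shift + k < N:
                total[shift + k] += c
        n += 1
        factor = [0] * N
        factor[0] = 1
        if n < N:
            factor[n] = -1
        qpoch = mul(qpoch, factor)
    return total

lhs1, rhs1 = rr_sum(1), inv(poch5([1, 4]))
lhs2, rhs2 = rr_sum(2), inv(poch5([2, 3]))
print(lhs1 == rhs1 and lhs2 == rhs2)   # -> True
```

The same routine is reused below (with modulus $6$ in place of $5$) for the level 3 products.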
A 2-colored partition $\lambda=(\lambda_1,\dots,\lambda_{\ell})$ is a sequence of positive integers and colored positive integers (called parts) which is weakly decreasing with respect to the order \begin{align} \cdots>3>\cint{3}>2>\cint{2}>1>\cint{1}. \label{orde} \end{align} We define the size $|\lambda|$ of $\lambda$ by $|\lambda|=\CONT(\lambda_1)+\dots+\CONT(\lambda_{\ell})$, where $\CONT(k)=\CONT(\cint{k})=k$ for a positive integer $k$. Clearly, there is a natural identification between a bipartition of $n$ and a 2-colored partition of $n$. We put $\COLOR(k)=+$ and $\COLOR(\cint{k})=-$ for a positive integer $k$. For a 2-colored partition $\lambda=(\lambda_1,\dots,\lambda_{\ell})$, we consider the following conditions. \begin{enumerate} \item[(D1)] If consecutive parts $\lambda_a$ and $\lambda_{a+1}$, where $1\leq a<\ell$, satisfy $\CONT(\lambda_a)-\CONT(\lambda_{a+1})\leq 1$, then $\CONT(\lambda_a)+\CONT(\lambda_{a+1})\in 3\mathbb{Z}$ and $\COLOR(\lambda_a)\ne\COLOR(\lambda_{a+1})$. \item[(D2)] If consecutive parts $\lambda_a$ and $\lambda_{a+1}$, where $1\leq a<\ell$, satisfy $\CONT(\lambda_a)-\CONT(\lambda_{a+1})=2$ and $\CONT(\lambda_a)+\CONT(\lambda_{a+1})\not\in 3\mathbb{Z}$, then $(\COLOR(\lambda_a),\COLOR(\lambda_{a+1}))\ne(-,+)$. \item[(D3)] $\lambda$ does not contain $(3k,\cint{3k},\cint{3k-2})$, $(3k+2,3k,\cint{3k})$ and $(\cint{3k+2},3k+1,3k-1,\cint{3k-2})$ for $k\geq 1$. \item[(D4)] $\lambda$ does not contain $1$, $\cint{1}$, and $\cint{2}$ as parts (i.e., $\lambda_a\ne 1,\cint{1},\cint{2}$ for $1\leq a\leq\ell$). \end{enumerate} In (D3), we say that $\lambda=(\lambda_1,\dots,\lambda_{\ell})$ contains $\mu=(\mu_1,\dots,\mu_{\ell'})$ if there exists $0\leq i\leq \ell-\ell'$ such that $\lambda_{j+i}=\mu_j$ for $1\leq j\leq \ell'$. \begin{Thm}\label{RRbiiden} Let $\BIR$ (resp. $\BIRP$) be the set of 2-colored partitions which satisfy the conditions (D1)--(D3) (resp. (D1)--(D4)) above.
Then, we have \begin{align*} \sum_{\lambda\in\BIR}q^{|\lambda|} &= \frac{(q^2,q^4;q^6)_{\infty}}{(q,q,q^3,q^3,q^5,q^5;q^6)_{\infty}},\\ \sum_{\lambda\in\BIRP}q^{|\lambda|} &= \frac{1}{(q^2,q^3,q^3,q^4;q^6)_{\infty}}. \end{align*} \end{Thm} \begin{Ex} \begin{align*} \BIR &= \{\emptypar, (1),(\cint{1}),(2),(\cint{2}),(3),(\cint{3}),(2,\cint{1}),(\cint{2},1), (4),(\cint{4}),(3,1),(3,\cint{1}),(\cint{3},\cint{1}),\dots\}\\ \BIRP &= \{\emptypar, (2),(3),(\cint{3}),(4),(\cint{4}),(5),(\cint{5}),(6),(\cint{6}),(4,2),(\cint{4},2),(3,\cint{3}),(7),(\cint{7}),(5,2),(\cint{5},2),\dots\} \end{align*} \end{Ex} We propose the aforementioned bipartition identities (in terms of 2-colored partitions) as $A_2$ Rogers-Ramanujan bipartition identities of level 3. In the rest of this section, we give some justifications for our proposal. \subsection{Lie theoretic interpretations} In the rest of the paper, we denote by $\GEE(A)$ the Kac-Moody Lie algebra associated with a generalized Cartan matrix (GCM, for short) $A$. For an affine GCM $A$ and a dominant integral weight $\Lambda\in\MP$, we denote by $\chi_{A}(V(\Lambda))$ the principal character of the standard module (i.e., the integrable highest weight module) $V(\Lambda)$ with a highest weight vector $v_{\Lambda}$ of the affine Lie algebra $\GEE(A)$. In ~\cite{LM}, Lepowsky and Milne observed a similarity between the characters of the level 3 standard modules of the affine Lie algebra of type $A^{(1)}_{1}$ \begin{align} \chi_{A^{(1)}_1}(V(2\Lambda_0+\Lambda_1)) = \frac{1}{(q,q^4;q^5)_{\infty}},\quad \chi_{A^{(1)}_1}(V(3\Lambda_0)) = \frac{1}{(q^2,q^3;q^5)_{\infty}} \label{a11level3} \end{align} and the infinite products of the Rogers-Ramanujan identities \eqref{eq:RR:q}. This was one of the motivations for inventing the vertex operators~\cite{LW1} as well as ~\cite{FK,Seg} (see also ~\cite{Lep}).
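The first identity of Theorem \ref{RRbiiden} can be checked by brute force for small sizes: enumerate the 2-colored partitions of $n$ satisfying (D1)--(D3) and compare with the coefficient of $q^n$ in the product side. A sketch in plain Python (the encoding of a part as a (value, color) pair and all helper names are ad hoc):

```python
# Brute-force check of the first identity in Theorem RRbiiden (set BIR).
N = 14

def order_key(p):                 # (k,+) > (k,-) > (k-1,+) > ...  cf. (orde)
    return (p[0], 1 if p[1] == '+' else 0)

def ok_pair(a, b):                # conditions (D1), (D2) on adjacent parts
    d, s = a[0] - b[0], a[0] + b[0]
    if d <= 1 and not (s % 3 == 0 and a[1] != b[1]):
        return False
    if d == 2 and s % 3 != 0 and (a[1], b[1]) == ('-', '+'):
        return False
    return True

def ok_d3(lam):                   # condition (D3): three forbidden factors
    bad = []
    for k in range(1, N // 3 + 2):
        bad += [((3*k, '+'), (3*k, '-'), (3*k - 2, '-')),
                ((3*k + 2, '+'), (3*k, '+'), (3*k, '-')),
                ((3*k + 2, '-'), (3*k + 1, '+'), (3*k - 1, '+'), (3*k - 2, '-'))]
    return not any(lam[i:i + len(b)] == b
                   for b in bad for i in range(len(lam) - len(b) + 1))

def count(n, bound=None, lam=()):  # 2-colored partitions of n in BIR
    if n == 0:
        return 1 if ok_d3(lam) else 0
    total = 0
    for k in range(1, n + 1):
        for c in '+-':
            p = (k, c)
            if bound is not None and order_key(p) > order_key(bound):
                continue
            if lam and not ok_pair(lam[-1], p):
                continue
            total += count(n - k, p, lam + (p,))
    return total

def mul_factor(c, k):             # c *= (1 - q^k), truncated at q^N
    out = c[:]
    for i in range(k, N):
        out[i] -= c[i - k]
    return out

def div_factor(c, k):             # c /= (1 - q^k), truncated at q^N
    out = c[:]
    for i in range(k, N):
        out[i] += out[i - k]
    return out

# product side: (q^2,q^4;q^6)_inf / (q,q,q^3,q^3,q^5,q^5;q^6)_inf
prod = [0] * N
prod[0] = 1
for e in (2, 4):
    for j in range(e, N, 6):
        prod = mul_factor(prod, j)
for e in (1, 1, 3, 3, 5, 5):
    for j in range(e, N, 6):
        prod = div_factor(prod, j)

print([count(n) for n in range(5)])               # -> [1, 2, 2, 4, 5]
print(all(count(n) == prod[n] for n in range(N)))
```

The printed counts for $n\leq 4$ agree with the elements of $\BIR$ listed in the Example.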
Subsequently, in ~\cite{LW3}, Lepowsky and Wilson promoted the observation \eqref{a11level3} to a vertex operator theoretic proof of \eqref{eq:RR:q} (see also ~\cite{LW2}) which affords a Lie theoretic interpretation of the infinite sums in \eqref{eq:RR:q}. The result was generalized to higher levels in ~\cite{LW4}, assuming the Andrews-Gordon-Bressoud identities (a generalization of the Rogers-Ramanujan identities, see ~\cite[\S3.2]{Sil}), for which Meurman and Primc gave a vertex operator theoretic proof in ~\cite{MP}. Recall the principal realization of the affine Lie algebra $\GEE(A^{(1)}_{1})$ (see ~\cite[\S7,\S8]{Kac}). Using the notation in ~\cite[\S2]{MP}, it affords a basis \begin{align*} \{\AOB{n},\AOX{n'},c,d\mid n\in\mathbb{Z}\setminus 2\mathbb{Z}, n'\in\mathbb{Z}\}, \end{align*} of $\GEE(A^{(1)}_{1})$. Note that $\{\AOB{n},c\mid n\in\mathbb{Z}\setminus 2\mathbb{Z}\}$ forms a basis of the principal Heisenberg subalgebra of $\GEE(A^{(1)}_{1})$. The following is essentially the Lepowsky-Wilson interpretation of the Rogers-Ramanujan partition identities in terms of the representation theory of $\GEE(A^{(1)}_1)$ (see also \cite[Theorem 10.4]{LW3} and \cite[Appendix]{MP}). \begin{Thm}[{\cite{MP}}] For $i=1,2$, let $\RR$ be the set of partitions such that parts are at least $i$ and such that consecutive parts differ by at least $2$. Then, the set \begin{align*} \{\AOB{-\mu_1}\cdots\AOB{-\mu_{\ell'}}\AOX{-\lambda_1}\cdots\AOX{-\lambda_\ell}v_{(i+1)\Lambda_0+(2-i)\Lambda_1}\} \end{align*} forms a basis of $V((i+1)\Lambda_0+(2-i)\Lambda_1)$, where $(\mu_1,\dots,\mu_{\ell'})$ varies in $\REG{2}$ and $(\lambda_1,\dots,\lambda_{\ell})$ varies in $\RR$. \end{Thm} Here, for $p\geq 2$ we denote by $\REG{p}$ the set of $p$-class regular partitions. Recall that a partition is called $p$-class regular if no parts are divisible by $p$. We show a similar interpretation for $\BIR$ and $\BIRP$.
Using the notation in \S\ref{vertset}, \begin{align*} \{ B(n),x_{\alpha_1}(n'),x_{-\alpha_1}(n'),c,d \mid n\in\mathbb{Z}\setminus3\mathbb{Z},n'\in\mathbb{Z} \} \end{align*} forms a basis of $\GEE(A^{(1)}_2)$. \begin{Thm}\label{biideninter} For $i=1$ (resp. $i=2$), the set \begin{align*} \{\AOB{-\mu_1}\cdots\AOB{-\mu_{\ell'}} x_{\COLOR(\lambda_1)\alpha_1}(-\!\CONTT(\lambda_1))\cdots x_{\COLOR(\lambda_{\ell})\alpha_1}(-\!\CONTT(\lambda_{\ell})) v_{(2i-1)\Lambda_0+(2-i)\Lambda_1+(2-i)\Lambda_2}\} \end{align*} forms a basis of $V((2i-1)\Lambda_0+(2-i)\Lambda_1+(2-i)\Lambda_2)$, where $(\mu_1,\dots,\mu_{\ell'})$ varies in $\REG{3}$ and $(\lambda_1,\dots,\lambda_{\ell})$ varies in $\BIR$ (resp. $\BIRP$). \end{Thm} Note that Theorem \ref{biideninter} implies Theorem \ref{RRbiiden} thanks to \begin{align} \chi_{A^{(1)}_{2}}(V(\Lambda_0+\Lambda_1+\Lambda_2)) &= \frac{(q^2,q^4;q^6)_{\infty}}{(q,q,q^3,q^3,q^5,q^5;q^6)_{\infty}},\\ \chi_{A^{(1)}_{2}}(V(3\Lambda_0)) &= \frac{1}{(q^2,q^3,q^3,q^4;q^6)_{\infty}}. \label{charcalc} \end{align} In \S\ref{maincomp} and \S\ref{cal}, we show that the set in Theorem \ref{biideninter} spans $V((2i-1)\Lambda_0+(2-i)\Lambda_1+(2-i)\Lambda_2)$ (see Corollary \ref{biidenintercor}). Thus, Theorem \ref{biideninter} and Theorem \ref{RRbiiden} are equivalent. \subsection{$A_2$ Rogers-Ramanujan identities}\label{agthm} A standard $q$-series technique to prove the Andrews-Gordon-Bressoud identities is the Bailey Lemma (see ~\cite[\S3]{An2} and ~\cite[\S3]{Sil}). In ~\cite{ASW}, Andrews-Schilling-Warnaar found an $A_2$ analog of it and obtained a family of Rogers-Ramanujan type identities for characters of the $W_3$ algebra. The result can be regarded as an $A^{(1)}_2$ analog of the Andrews-Gordon-Bressoud identities, whose infinite products are some of the principal characters of the standard modules of $\GEE(A^{(1)}_2)$ (see ~\cite[Theorem 5.1, Theorem 5.3, Theorem 5.4]{ASW}) after a multiplication of $(q;q)_{\infty}$. 
In our case of level 3, the Andrews-Schilling-Warnaar identities are stated as follows. \begin{Thm}[{\cite[Theorem 5.4 specialized to $k=2$ and $i=2,1$]{ASW}}] \begin{align*} \sum_{s,t\geq 0}\frac{q^{s^2-st+t^2}}{(q;q)^2_{s+t}}{s+t \brack s}_{q^3} &= \frac{1}{(q;q)_{\infty}}\frac{(q^2,q^4;q^6)_{\infty}}{(q,q,q^3,q^3,q^5,q^5;q^6)_{\infty}},\\ \sum_{s,t\geq 0}\frac{q^{s^2-st+t^2+s+t}}{(q;q)_{s+t+1}(q;q)_{s+t}}{s+t \brack s}_{q^3} &= \frac{1}{(q;q)_{\infty}}\frac{1}{(q^2,q^3,q^3,q^4;q^6)_{\infty}}. \end{align*} \end{Thm} We show manifestly positive, Andrews-Gordon type series (in the sense of ~\cite{TT2}) for $\BIR$ and $\BIRP$ as follows. Here, the length $\ell(\lambda)$ of a 2-colored partition $\lambda$ is defined to be the number of parts. For the size $|\lambda|$ of $\lambda$, see \S\ref{mainse}. \begin{Thm}\label{RRidentAG} \begin{align*} \sum_{\lambda\in\BIR}x^{\ell(\lambda)}q^{|\lambda|} &= \sum_{a,b,c,d\geq 0}\frac{q^{a^2+b^2+3c^2+3d^2+2ab+3ac+3ad+3bc+3bd+6cd}x^{a+b+2c+2d}}{(q;q)_a(q;q)_b(q^3;q^3)_c(q^3;q^3)_d},\\ \sum_{\lambda\in\BIRP}x^{\ell(\lambda)}q^{|\lambda|} &= \sum_{a,b,c,d\geq 0}\frac{q^{a(a+1)+b(b+2)+3c(c+1)+3d(d+1)+2ab+3ac+3ad+3bc+3bd+6cd}x^{a+b+2c+2d}}{(q;q)_a(q;q)_b(q^3;q^3)_c(q^3;q^3)_d}. \end{align*} \end{Thm} The result is similar to the fact that for $i=1,2$ we have \begin{align*} \sum_{\lambda\in\RR}x^{\ell(\lambda)}q^{|\lambda|}=\sum_{n\geq 0}\frac{q^{n(n+i-1)}x^n}{(q;q)_n}. \end{align*} Recently, Kanade-Russell showed (see ~\cite[(1.8)]{KR3}) \begin{align*} \sum_{s,t\geq 0}\frac{q^{s^2-st+t^2+t}}{(q;q)_{s+t+1}(q;q)_{s+t}}{s+t \brack s}_{q^3} = \frac{1}{(q;q)_{\infty}}\frac{1}{(q,q^2;q^3)_{\infty}}, \end{align*} where $(q,q^2;q^3)_{\infty}^{-1}=\chi_{A^{(1)}_2}(V(2\Lambda_0+\Lambda_1))$.
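The first identity of Theorem \ref{RRidentAG}, specialized to $x=1$, can likewise be checked to finite order: the quadruple sum should match the product side of Theorem \ref{RRbiiden}. A truncated-series sketch in plain Python (helper names ad hoc):

```python
# Check the first identity of Theorem RRidentAG at x = 1 up to q^N:
# quadruple sum over (a,b,c,d) vs. the product of Theorem RRbiiden.
N = 20

def mul(u, v):                    # truncated series product
    w = [0] * N
    for i, ui in enumerate(u):
        if ui:
            for j in range(N - i):
                w[i + j] += ui * v[j]
    return w

def mul_factor(c, k):             # c *= (1 - q^k)
    out = c[:]
    for i in range(k, N):
        out[i] -= c[i - k]
    return out

def div_factor(c, k):             # c /= (1 - q^k)
    out = c[:]
    for i in range(k, N):
        out[i] += out[i - k]
    return out

def poch_inv(n, step):            # 1/(q^step; q^step)_n, truncated
    c = [0] * N
    c[0] = 1
    for j in range(1, n + 1):
        if step * j < N:
            c = div_factor(c, step * j)
    return c

total = [0] * N
R = int(N ** 0.5) + 1             # Q >= max(a^2, b^2, 3c^2, 3d^2)
for a in range(R):
    for b in range(R):
        for c in range(R):
            for d in range(R):
                Q = (a*a + b*b + 3*c*c + 3*d*d + 2*a*b
                     + 3*a*c + 3*a*d + 3*b*c + 3*b*d + 6*c*d)
                if Q >= N:
                    continue
                term = mul(mul(poch_inv(a, 1), poch_inv(b, 1)),
                           mul(poch_inv(c, 3), poch_inv(d, 3)))
                for k in range(N - Q):
                    total[Q + k] += term[k]

prod = [0] * N
prod[0] = 1
for e in (2, 4):
    for j in range(e, N, 6):
        prod = mul_factor(prod, j)
for e in (1, 1, 3, 3, 5, 5):
    for j in range(e, N, 6):
        prod = div_factor(prod, j)

print(total == prod)              # -> True
```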
Although the identity \begin{align*} \sum_{a,b,c,d\geq 0}\frac{q^{a^2+b(b+1)+3c^2+c+3d^2+2d+2ab+3ac+3ad+3bc+3bd+6cd}}{(q;q)_a(q;q)_b(q^3;q^3)_c(q^3;q^3)_d} = \frac{1}{(q,q^2;q^3)_{\infty}} \end{align*} can be proven similarly, we do not consider the level 3 module $V(2\Lambda_0+\Lambda_1)$ in this paper. One reason is that $V(2\Lambda_0+\Lambda_1)$ is not a submodule of $V(\Lambda_0)^{\otimes 3}$ as in Remark \ref{notsub}. After the groundbreaking paper ~\cite{ASW}, a vast literature has been devoted to the study of $A_2$ Rogers-Ramanujan identities (see ~\cite{CDA,CW,FFW,FW,KR3,War2,War}), especially to the search for manifestly positive infinite sums such as ~\cite[Theorem 5.2]{ASW}, ~\cite[Theorem 1.1]{CW} for level 4 and ~\cite[Theorem 1.6]{CDA} for level 5. After the success of vertex operator theoretic proofs of the Rogers-Ramanujan identities such as ~\cite{LW3,MP}, it has been expected that, for each affine GCM $X^{(r)}_N$ and a dominant integral weight $\Lambda\in\MP$, there should exist a Rogers-Ramanujan type identity whose infinite product is given by $\chi_A(V(\Lambda))$. More precisely, it is natural to expect that the sum side is related to the $n(X^{(r)}_N)$-colored partitions, where \begin{align*} n(X^{(r)}_N):=\frac{\textrm{the number of roots of type $X_N$}}{\textrm{the $r$-twisted Coxeter number of $X_N$}}. \end{align*} For the twisted Coxeter number, see ~\cite[\S8, Table I,II]{Fil}. In view of the fact that $n(A^{(1)}_{r})=r$, it is expected that $A_2$ Rogers-Ramanujan identities are related to the 2-colored partitions (and thus to the bipartitions). It would be interesting to generalize the results in this paper to higher level standard $\GEE(A^{(1)}_2)$-modules. \hspace{0mm} \noindent{\bf Organization of the paper.} The paper is organized as follows. In \S\ref{vertset}, we recall the principal realization of $\GEE(A^{(1)}_2)$ and the vertex operator realization of the basic module following ~\cite{Fil}.
In \S\ref{maincomp} and \S\ref{cal}, we show that the defining conditions for $\BIR$ and $\BIRP$ are naturally deduced by calculations (similar to ~\cite{Cap0,MP,Nan}) of the vertex operators on the triple tensor product of the basic module. In \S\ref{auto}, we show that $q$-difference equations for $\BIR$ and $\BIRP$ are automatically derived by the technique developed in ~\cite{TT} as a generalization of Andrews' linked partition ideals~\cite{An0}, ~\cite[Chapter 8]{An1} using finite automata. In \S\ref{proof} and \S\ref{proofp}, we show Theorem \ref{RRidentAG} using a $q$-version of Sister Celine's technique~\cite{Rie}, ~\cite[Chapter 4]{Koe}. This and analogous results for the cylindric partitions (see \S\ref{gs111} and \S\ref{gs300}) give a proof of Theorem \ref{RRbiiden} (see \S\ref{finalsec}) in combination with standard results, such as the Corteel-Welsh recursion~\cite{CW} and the Borodin product formula~\cite{Bor}, which are reviewed in \S\ref{cylin}. \hspace{0mm} \noindent{\bf Acknowledgments.} The author was supported by JSPS Kakenhi Grant 20K03506, the Inamori Foundation, JST CREST Grant Number JPMJCR2113, Japan and Leading Initiative for Excellent Young Researchers, MEXT, Japan. \section{The vertex operators}\label{vertset} In this section we put $m=3$ and $\omega=\exp(2\pi\sqrt{-1}/m)$. As usual, the affine Cartan matrix \begin{align*} A^{(1)}_2= \begin{pmatrix} 2 & -1 & -1 \\ -1 & 2 & -1 \\ -1 & -1 & 2 \end{pmatrix} \end{align*} is indexed by the set $I=\{0,1,2\}$. Note that $A_2=(A^{(1)}_2)|_{I_0\times I_0}$, where $I_0=\{1,2\}$.
\subsection{The principal realization} In the $A_2$-lattice $L=\mathbb{Z}^2$ with the inner product $\langle \boldsymbol{x},\boldsymbol{y}\rangle_L={}^t\boldsymbol{x}A_2\boldsymbol{y}$ for $\boldsymbol{x},\boldsymbol{y}\in L$, the set of the roots \begin{align*} \Phi=\{\boldsymbol{x}\in\mathbb{Z}^2\mid\langle\boldsymbol{x},\boldsymbol{x}\rangle_L=2\} =\{\pm\alpha_1,\pm\alpha_2,\pm(\alpha_1+\alpha_2)\} \end{align*} is divided into two orbits \begin{align*} \Phi = \{\alpha_1, \alpha_2, -(\alpha_1+\alpha_2)\}\bigsqcup \{-\alpha_1, -\alpha_2, \alpha_1+\alpha_2\} \end{align*} under the Coxeter element $\nu=\sigma_1\sigma_2$, where the involution $\sigma_i:L\to L$ is defined by $\sigma_i(\boldsymbol{x})=\boldsymbol{x}-\langle \boldsymbol{x},\alpha_i\rangle_L\alpha_i$ for $i\in I_0$ and $\boldsymbol{x}\in L$. Note that $\alpha_2=\nu(\alpha_1)$ and $-(\alpha_1+\alpha_2)=\nu(\alpha_2)$. We regard the special linear Lie algebra \begin{align*} \mathfrak{sl}_3=\{M\in\MAT_3(\mathbb{C})\mid\TR M=0\} \end{align*} with the Cartan-Killing form $\langle M_1,M_2\rangle=\TR(M_1M_2)$ for $M_1,M_2\in\mathfrak{sl}_3$ as the Kac-Moody Lie algebra $\GEE(A_2)$ with the Chevalley generators $e_1=E_{12},e_2=E_{23},f_1=E_{21},f_2=E_{32},h_1=E_{11}-E_{22}$ and $h_2=E_{22}-E_{33}$, where $E_{ij}$ is the $3\times 3$ matrix unit for $1\leq i,j\leq 3$. Let $E_{(3)}=E_{12}+E_{23}+E_{31}$ be the principal cyclic element due to Kostant~\cite{Kos}. Take the principal automorphism \begin{align*} \tau:\mathfrak{sl}_3\to\mathfrak{sl}_3,\quad E_{ij}\mapsto \omega^{j-i}E_{ij} \end{align*} and the principal Cartan subalgebra \begin{align*} \HPRIN = \{M\in \mathfrak{sl}_3\mid [M,E_{(3)}]=O\}=\mathbb{C}E_{(3)}\oplus\mathbb{C}E^2_{(3)}. \end{align*} Note that $\langle,\rangle$ is $\tau$-invariant, i.e., $\langle\tau M_1,\tau M_2\rangle=\langle M_1,M_2\rangle$ for $M_1,M_2\in\mathfrak{sl}_3$.
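The orbit decomposition of $\Phi$ under the Coxeter element $\nu$ above is easy to confirm mechanically. The following sketch (plain Python, ad hoc encoding of $L=\mathbb{Z}^2$ with $\alpha_1=(1,0)$, $\alpha_2=(0,1)$) computes the $\nu$-orbit of $\alpha_1$:

```python
# Compute the nu-orbit of alpha_1 in the A_2 root lattice.
A2 = [[2, -1], [-1, 2]]

def pair(x, y):                  # <x,y>_L = x^T A2 y
    return sum(x[i] * A2[i][j] * y[j] for i in range(2) for j in range(2))

def sigma(i, x):                 # simple reflection sigma_{i+1}
    a = [0, 0]
    a[i] = 1
    c = pair(x, a)
    return [x[0] - c * a[0], x[1] - c * a[1]]

def nu(x):                       # Coxeter element nu = sigma_1 sigma_2
    return sigma(0, sigma(1, x))

roots = [[i, j] for i in range(-2, 3) for j in range(-2, 3)
         if pair([i, j], [i, j]) == 2]
assert len(roots) == 6           # the six roots of A_2

orbit = [[1, 0]]                 # alpha_1 = (1, 0)
while nu(orbit[-1]) not in orbit:
    orbit.append(nu(orbit[-1]))
print(orbit)                     # -> [[1, 0], [0, 1], [-1, -1]]
```

The output reproduces $\{\alpha_1,\alpha_2,-(\alpha_1+\alpha_2)\}$; negating gives the other orbit.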
Also note that $\tau|_{\HPRIN}:\HPRIN\to\HPRIN$ is identified with $\ID_{\mathbb{C}}\otimes \nu:\mathbb{C}\otimes_{\mathbb{Z}} L\to \mathbb{C}\otimes_{\mathbb{Z}} L$ under the isometry $\mathbb{C}\otimes_{\mathbb{Z}} L\isom \HPRIN$ given by \begin{align*} 1\otimes \alpha_1\mapsto \frac{1}{3}((1-\omega)E_{(3)}+(2+\omega)E^2_{(3)}),\quad 1\otimes \alpha_2\mapsto \frac{1+2\omega}{3}(E_{(3)}-E^2_{(3)}). \end{align*} The twisted affinization is given by \begin{align*} \GE &= (\SPAN\{E_{11}-E_{22},E_{22}-E_{33}\}\otimes\mathbb{C}[t^3,t^{-3}]) \oplus(\SPAN\{E_{12},E_{23},E_{31}\}\otimes t\mathbb{C}[t^3,t^{-3}])\\ &\quad\quad \oplus(\SPAN\{E_{13},E_{21},E_{32}\}\otimes t^2\mathbb{C}[t^3,t^{-3}]) \oplus\mathbb{C}c\oplus\mathbb{C}d, \end{align*} with the Lie algebra structure \begin{align*} [M\otimes t^n,M'\otimes t^{n'}]=[M,M']\otimes t^{n+n'}+\frac{n \langle M,M'\rangle}{3}\delta_{n,-n'}c, \quad [d,M\otimes t^n]=nM\otimes t^n, \end{align*} where $M,M'\in\mathfrak{sl}_3$, $n,n'\in\mathbb{Z}$ and $c$ is central. The principal realization is the Lie algebra isomorphism $\GEE(A^{(1)}_2)\isom \GE$ given by \begin{align*} \begin{array}{lll} e_0\mapsto E_{31}\otimes t, & e_1\mapsto E_{12}\otimes t, & e_2\mapsto E_{23}\otimes t,\\ f_0\mapsto E_{13}\otimes t^{-1}, & f_1\mapsto E_{21}\otimes t^{-1}, & f_2\mapsto E_{32}\otimes t^{-1},\\ h_0\mapsto (E_{33}-E_{11})\otimes 1+\frac{c}{3}, & h_1\mapsto (E_{11}-E_{22})\otimes 1+\frac{c}{3}, & h_2\mapsto (E_{22}-E_{33})\otimes 1+\frac{c}{3}, \end{array} \end{align*} in addition to $d\mapsto d$. The principal degree for $\GEE(A^{(1)}_2)$ (and thus for $\GE$) is defined by $\DEG e_i=1=-\DEG f_i$ and $\DEG h_i=0=\DEG d$ for $i\in I$. See also ~\cite[\S7,\S8]{Kac}. \subsection{The vertex operator realization}\label{vor} For $M\in\mathfrak{sl}_3$ and $n\in\mathbb{Z}$, we put \begin{align*} M(n)=\pi_{(n)}(M)\otimes t^n\in\GE \end{align*} where $\pi_{(n)}:\mathfrak{sl}_3\to\{M\in\mathfrak{sl}_3\mid\tau(M)=\omega^nM\}$ is the projection.
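The map $\mathbb{C}\otimes_{\mathbb{Z}}L\isom\HPRIN$ displayed above can be checked numerically to be an isometry intertwining $\nu$ and $\tau$; a sketch using NumPy (helper names ad hoc):

```python
# Check that the images of 1 (x) alpha_1, 1 (x) alpha_2 have Gram matrix
# A_2 under <M,M'> = tr(MM'), and that tau sends the image of alpha_1 to
# the image of alpha_2 = nu(alpha_1).
import numpy as np

w = np.exp(2j * np.pi / 3)                        # omega = exp(2 pi i / 3)

E = np.zeros((3, 3))                              # E_(3) = E_12 + E_23 + E_31
E[0, 1] = E[1, 2] = E[2, 0] = 1
E2 = E @ E                                        # E_(3)^2 = E_13 + E_21 + E_32

v1 = ((1 - w) * E + (2 + w) * E2) / 3             # image of 1 (x) alpha_1
v2 = (1 + 2 * w) * (E - E2) / 3                   # image of 1 (x) alpha_2

gram = np.array([[np.trace(a @ b) for b in (v1, v2)] for a in (v1, v2)])
A2 = np.array([[2, -1], [-1, 2]])
print(np.allclose(gram, A2))                      # -> True

# tau(E_ij) = omega^(j-i) E_ij; intertwining means tau(v1) = v2
tau = lambda M: np.array([[M[i, j] * w ** (j - i) for j in range(3)]
                          for i in range(3)])
print(np.allclose(tau(v1), v2))                   # -> True
```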
Let $\HPRIN_{(n)}=\pi_{(n)}(\HPRIN)$. Then, $\HPRIN_{(n)}=\{0\}$ when $n\in 3\mathbb{Z}$ and \begin{align*} B(n) &= \begin{cases} (E_{12}+E_{23}+E_{31})\otimes t^n & \textrm{if $n\equiv 1\pmod{3}$},\\ (E_{13}+E_{21}+E_{32})\otimes t^n & \textrm{if $n\equiv 2\pmod{3}$}, \end{cases} \end{align*} is a basis of $\HPRIN_{(n)}$, where $B=E_{(3)}+E^2_{(3)}$. \begin{Rem}[{\cite[\S3.1]{Fil}}]\label{indv} Let $\GAAA=[\GAA,\GAA]= \GAAA_{+}\oplus\GAAA_{-}\oplus\mathbb{C}c$ be the principal Heisenberg subalgebra of $\GE$, where \begin{align*} \GAA = \GAAA_{+}\oplus\GAAA_{-}\oplus\mathbb{C}c\oplus\mathbb{C}d,\quad \GAAA_{\pm}=\bigoplus_{\pm n>0}\HPRIN_{(n)}\otimes t^n. \end{align*} The induced $\GAA$-module \begin{align*} V=\IND_{\GAAA_{+}\oplus\mathbb{C}c\oplus\mathbb{C}d}^{\GAA}\mathbb{C} \cong U(\GAAA_-)=\mathbb{C}[B(n)\mid n\in\mathbb{Z}_{<0}\setminus3\mathbb{Z}] \end{align*} is irreducible, where $\GAAA_{+}\oplus\mathbb{C}d$ acts as 0 and $c$ acts as 1. \end{Rem} In this paper, for a vector space $U$, we denote by $U\{\zeta\}$ (resp. $U\{\zeta_1,\zeta_2\}$) the set of formal power series $\sum_{n\in\mathbb{Z}}u_n\zeta^n$ (resp. $\sum_{n,n'\in\mathbb{Z}}u_{n,n'}\zeta_1^n\zeta_2^{n'}$), where $u_n\in U$ (resp. $u_{n,n'}\in U$)~\cite[\S3.2]{Fil}. For $M\in\mathfrak{sl}_3$, we put \begin{align*} M(\zeta)=\sum_{n\in\mathbb{Z}}M(n)\zeta^n\in\GE\{\zeta\}. \end{align*} As in ~\cite[\S6,\S9]{Fil}, there is a 6-tuple $(x_{\alpha})_{\alpha\in\Phi}$ of elements of $\mathfrak{sl}_3$ such that the set \begin{align*} \{ B(n),x_{\alpha_1}(n'),x_{-\alpha_1}(n'),c,d \mid n\in\mathbb{Z}\setminus3\mathbb{Z},n'\in\mathbb{Z} \} \end{align*} forms a basis of $\GE$. Moreover, we may assume the following two results. \begin{enumerate} \item[(V1)] The commutation relations of the basis elements are given in terms of the generating functions in $\GE\{\zeta_1,\zeta_2\}$ and $\GE\{\zeta\}$ as follows (see ~\cite[Proposition 5.1]{Fil}).
\begin{align*} [x_{\alpha}(\zeta_1),x_{\beta}(\zeta_2)] &= \frac{1}{m}\sum_{s\in C_{-1}} \varepsilon(\nu^s\alpha,\beta)x_{\nu^s\alpha+\beta}(\zeta_2) \delta(\omega^{-s}\zeta_1/\zeta_2)\\ &\quad + \frac{1}{m^2}\sum_{s\in C_{-2}} \varepsilon(\nu^s\alpha,\beta) \left(D\delta(\omega^{-s}\zeta_1/\zeta_2)-m\beta(\zeta_2)\delta(\omega^{-s}\zeta_1/\zeta_2)\right),\\ [\gamma(\zeta_1),x_{\beta}(\zeta_2)] &= \frac{1}{m}\sum_{s\in I}\langle\nu^s\gamma,\beta\rangle x_{\beta}(\zeta_2)\delta(\omega^{-s}\zeta_1/\zeta_2),\\ x_{\nu^p\alpha}(\zeta) &= x_{\alpha}(\omega^p\zeta), \end{align*} where $\delta(\zeta)=\sum_{n\in\mathbb{Z}}\zeta^n$, $D\delta(\zeta)=\sum_{n\in\mathbb{Z}}n\zeta^n$, $\alpha,\beta\in\Phi$, $\gamma\in\HPRIN$, $p\in I$ and \begin{align*} \varepsilon(\alpha,\beta)=\prod_{s\in I_0}(1-\omega^{-s})^{\langle \nu^s\alpha,\beta\rangle},\quad C_k=\{s\in I\mid \langle \nu^s\alpha,\beta\rangle=k\}. \end{align*} \item[(V2)] The assignment $x_{\alpha}(n')\mapsto X(\alpha,n')$ for $\alpha\in\Phi$ and $n'\in\mathbb{Z}$ in addition to the $\GAA$-module structure (see Remark \ref{indv}) identifies $V$ with the basic $\GE$-module $V(\Lambda_0)$ under the aforementioned isomorphism $\GEE(A^{(1)}_2)\cong \GE$, where \begin{align*} X(\alpha,\zeta) &= \sum_{n'\in\mathbb{Z}}X(\alpha,n')\zeta^{n'} = \frac{1}{m}E^-(-\alpha,\zeta)E^+(-\alpha,\zeta), \\ E^{\pm}(\alpha,\zeta) &= \sum_{\pm n\geq 0}E^{\pm}(\alpha,n)\zeta^n =\exp\left(m\sum_{\pm j>0}\frac{\alpha(j)}{j}\zeta^j\right). \end{align*} \end{enumerate} \begin{Rem}[{\cite[\S2]{MP}}]\label{sortres} For $\ell\geq 0$, we define a subspace \begin{align*} \Theta_{<\ell} = \sum_{\substack{0\leq\ell'<\ell \\ \beta_1,\dots,\beta_{\ell'}\in\{\pm\alpha_1\} \\ m_1,\dots,m_{\ell'}\in\mathbb{Z}}} U(\GAAA_-) x_{\beta_1}(m_1)\dots x_{\beta_{\ell'}}(m_{\ell'}) U(\GAAA_+) \end{align*} of the universal enveloping algebra $U(\GE)$.
By (V1) we have $\bigcup_{\ell\geq 0}\Theta_{<\ell}=U(\GE)$ and \begin{align*} x_{\beta_1}(m_1)\cdots x_{\beta_{\ell}}(m_{\ell})-x_{\beta_{p(1)}}(m_{p(1)})\cdots x_{\beta_{p(\ell)}}(m_{p(\ell)})\in \Theta_{<\ell} \end{align*} for any permutation $p\in\mathfrak{S}_{\ell}$ and $\beta_1,\dots,\beta_{\ell}\in\{\pm\alpha_1\}$, $m_1,\dots,m_{\ell}\in\mathbb{Z}$. \end{Rem} \begin{Lem}[{\cite[Proposition 3.5, Proposition 3.6]{LW3}}] For $\alpha,\beta\in\Phi$, we have \begin{align*} X(\alpha,\zeta_1)E^-(\beta,\zeta_2) &= E^-(\beta,\zeta_2)X(\alpha,\zeta_1)\myphi_{\alpha,\beta}(\zeta_1/\zeta_2),\\ E^+(\alpha,\zeta_1)X(\beta,\zeta_2) &= X(\beta,\zeta_2)E^+(\alpha,\zeta_1)\myphi_{\alpha,\beta}(\zeta_1/\zeta_2), \end{align*} where \begin{align*} \myphi_{\alpha,\beta}(x)=\prod_{p\in I}(1-\omega^{-p}x)^{-\langle\nu^p\alpha,\beta\rangle}. \end{align*} \end{Lem} \begin{Ex}\label{exppoly} The following explicit values will be used in \S\ref{cal}. \begin{align*} \myphi_{\alpha_1,\alpha_1}(x) &= \myphi_{-\alpha_1,-\alpha_1}(x)=\frac{1-x^3}{(1-x)^3}=1+\sum_{k\geq 1}3kx^k,\\ \myphi_{-\alpha_1,\alpha_1}(x) &= \myphi_{\alpha_1,-\alpha_1}(x)=\frac{(1-x)^3}{1-x^3}=1+\sum_{k\geq 1}3(x^{3k-1}-x^{3k-2}). \end{align*} \end{Ex} \begin{Cor}[{\cite[Corollary 2.2.12]{Nan}}]\label{idounan}\label{idou} For $\alpha,\beta\in\Phi$, let $\myphi_{\alpha,\beta}(x)=\sum_{k\geq 0}c_kx^k$. For $n,n'\in\mathbb{Z}$, we have \begin{align*} X(\alpha,n)E^-(\beta,n') &= \sum_{k\geq 0}c_kE^{-}(\beta,n'+k)X(\alpha,n-k),\\ E^+(\alpha,n)X(\beta,n') &= \sum_{k\geq 0}c_kX(\beta,n'+k)E^+(\alpha,n-k).
\end{align*} More generally, for $\alpha,\beta\in\Phi$, $n,n',n_1,\dots,n_{\ell},n'_1,\dots,n'_{\ell}\in\mathbb{Z}$ and $\alpha_1,\dots,\alpha_{\ell},\beta_1,\dots,\beta_{\ell}\in\Phi$, we have \begin{align*} {} &{} X(\alpha_1,n_1)\cdots X(\alpha_{\ell},n_{\ell})E^-(\beta,n') \\ &= \sum_{j_1,\cdots,j_{\ell}\geq 0}c_{j_1}\cdots c_{j_{\ell}}E^{-}(\beta,n'+j_1+\dots+j_{\ell})X(\alpha_1,n_1-j_1)\cdots X(\alpha_{\ell},n_{\ell}-j_{\ell}),\\ {} &{} E^+(\alpha,n)X(\beta_1,n'_1)\cdots X(\beta_{\ell},n'_{\ell}) \\ &= \sum_{j_1,\cdots,j_{\ell}\geq 0}c_{j_1}\cdots c_{j_{\ell}}X(\beta_1,n'_1+j_1)\cdots X(\beta_{\ell},n'_{\ell}+j_{\ell}) E^+(\alpha,n-j_1-\cdots-j_{\ell}). \end{align*} \end{Cor} \subsection{The triple tensor product of the basic module} For $\alpha,\beta\in\Phi$, we define a polynomial \begin{align*} P_{\alpha,\beta}(x)=\prod_{\substack{p\in I \\ \langle\nu^p\alpha,\beta\rangle<0}}(1-\omega^{-p}x)^{-\langle\nu^p\alpha,\beta\rangle}. \end{align*} \begin{Ex}\label{exppoly2} The following explicit values will be used in \S\ref{cal}. \begin{align*} P_{\alpha_1,\alpha_1}(x) &= P_{-\alpha_1,-\alpha_1}(x)=1+x+x^2,\\ P_{\alpha_2,-\alpha_1}(x) &= P_{\alpha_1,\alpha_1+\alpha_2}(x)=(1-\omega x)^2,\\ P_{-(\alpha_1+\alpha_2),-\alpha_1}(x) &= P_{\alpha_1,-\alpha_2}(x)=(1-\omega^2 x)^2. \end{align*} \end{Ex} Let $W$ be a $\GE$-module which has a nonpositive eigenspace decomposition with respect to $d$ (i.e., $W=\bigoplus_{n\in\mathbb{Z}_{\leq 0}}W_n$, where $W_n=\{w\in W\mid dw=nw\}$). By (V1), \begin{align*} \lim_{\zeta_1,\zeta_2\to\zeta}P_{\alpha,\beta}(\zeta_1/\zeta_2)x_{\alpha}(\zeta_1)x_{\beta}(\zeta_2) \end{align*} makes sense as an element of $(\END W)\{\zeta\}$ (see ~\cite[\S4,\S5]{MP}) and we denote it by $x_{\alpha,\beta}(\zeta)$. In the rest of this paper, we take $W=V^{\otimes 3}$. \begin{Prop}\label{mpprop} We have the following relations in $(\END W)\{\zeta\}$. \begin{enumerate} \item $x_{-\alpha_1,-\alpha_1}(\zeta)= 2E^{-}(\alpha_1,\zeta)x_{\alpha_1}(\zeta)E^{+}(\alpha_1,\zeta)$.
\item $x_{\alpha_1,\alpha_1}(\zeta)= 2E^{-}(-\alpha_1,\zeta)x_{-\alpha_1}(\zeta)E^{+}(-\alpha_1,\zeta)$. \item $x_{\alpha_2,-\alpha_1}(\zeta)= E^{-}(\alpha_1,\zeta)x_{\alpha_1,\alpha_1+\alpha_2}(\zeta)E^{+}(\alpha_1,\zeta)$. \item $x_{-(\alpha_1+\alpha_2),-\alpha_1}(\zeta)= E^{-}(\alpha_1,\zeta)x_{\alpha_1,-\alpha_2}(\zeta)E^{+}(\alpha_1,\zeta)$. \end{enumerate} \end{Prop} \begin{proof} This is established in the same way as the proof of ~\cite[Theorem 6.6]{MP}. \end{proof} \begin{Prop}\label{high} For $a_1,a_2,a_3\in\mathbb{C}$, we define a vector in $W$ by \begin{align*} u_{a_1,a_2,a_3} = a_1(B(-1)\otimes 1\otimes 1) + a_2(1\otimes B(-1)\otimes 1) + a_3 (1\otimes 1\otimes B(-1)). \end{align*} If $a_1+a_2+a_3=0$ and $(a_1,a_2,a_3)\ne (0,0,0)$, then $u_{a_1,a_2,a_3}$ is a highest weight vector with highest weight $3\Lambda_0-\alpha_0(=\Lambda_0+\Lambda_1+\Lambda_2-\delta)$. \end{Prop} \begin{proof} By direct calculation using $B(-1)=f_0+f_1+f_2$ in $\GE$. \end{proof} \begin{Rem}\label{notsub} The level 3 module $V(2\Lambda_i+\Lambda_j+p\delta)$ for $i\ne j\in I$ and $p\in\mathbb{Z}$ is not a submodule of $W$ because the equation \begin{align*} 3\Lambda_0-(k_0\alpha_0+k_1\alpha_1+k_2\alpha_2)\equiv p_0\Lambda_0+p_1\Lambda_1+p_2\Lambda_2\pmod{\mathbb{Z}\delta} \end{align*} is equivalent to the equation \begin{align*} A^{(1)}_2\cdot{}^t(k_0,k_1,k_2)={}^t(3-p_0,-p_1,-p_2) \end{align*} and the assumption that the latter equation has an integer solution $(k_0,k_1,k_2,p_0,p_1,p_2)\in\mathbb{Z}^6$ implies $p_1\equiv p_2\pmod{3\mathbb{Z}}$. \end{Rem} \section{Spanning vectors}\label{maincomp} Let $\CZ=\{\cint{n}\mid n\in\mathbb{Z}\}$ be the set of colored integers and put $\AZ=\mathbb{Z}\sqcup\CZ$. \subsection{Reducibilities} Recall \eqref{orde} in \S\ref{mainse}. There, for simplicity, we defined the order $>$ only on positive integers and colored positive integers. The order $>$ below is a generalization of the order $>$ in \S\ref{mainse}.
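Returning to Example \ref{exppoly}, the two series expansions stated there can be confirmed mechanically by truncated power series arithmetic; a sketch in plain Python (helper names ad hoc):

```python
# Confirm the expansions of Example exppoly:
#   (1-x^3)/(1-x)^3 = 1 + sum_{k>=1} 3k x^k
#   (1-x)^3/(1-x^3) = 1 + sum_{k>=1} 3(x^(3k-1) - x^(3k-2))
N = 30

def mul(u, v):
    w = [0] * N
    for i, ui in enumerate(u):
        if ui:
            for j in range(N - i):
                w[i + j] += ui * v[j]
    return w

def inv(u):                       # power-series inverse, u[0] == 1
    v = [0] * N
    v[0] = 1
    for n in range(1, N):
        v[n] = -sum(u[k] * v[n - k] for k in range(1, n + 1))
    return v

def one_minus(k):                 # the polynomial 1 - x^k
    return [1 if i == 0 else (-1 if i == k else 0) for i in range(N)]

cube = mul(mul(one_minus(1), one_minus(1)), one_minus(1))   # (1-x)^3
phi_pp = mul(one_minus(3), inv(cube))                       # (1-x^3)/(1-x)^3
phi_mp = mul(cube, inv(one_minus(3)))                       # (1-x)^3/(1-x^3)

assert phi_pp == [1] + [3 * k for k in range(1, N)]
expected = [1] + [0] * (N - 1)
for k in range(1, N):
    if 3 * k - 1 < N:
        expected[3 * k - 1] += 3
    if 3 * k - 2 < N:
        expected[3 * k - 2] -= 3
assert phi_mp == expected
print("Example expansions confirmed")
```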
\begin{Def}\label{twoorders} On the set $\AZ$, we consider total orders $>$ and $\ANO$ such that \begin{align*} \dots>2>\cint{2}>1>\cint{1}>0>\cint{0}>-1>\cint{-1}>-2>\cint{-2}>\cdots, \\ \cdots\ANO\cint{2}\ANO 2\ANO\cint{1}\ANO 1\ANO\cint{0}\ANO 0\ANO\cint{-1}\ANO -1\ANO\cint{-2}\ANO -2\ANO\cdots. \end{align*} As usual, the notation $x\ANOO y$ for $x,y\in\AZ$ means $x=y$ or $y\ANO x$. \end{Def} Similarly, we (re)define $\COLOR(n)\in\{\pm\}$ and $\CONT(n)\in\mathbb{Z}$ for $n\in\AZ$ by \begin{align*} \COLOR(n) = \begin{cases} + & \textrm{if $n\in\mathbb{Z}$},\\ - & \textrm{if $n\in\CZ$}, \end{cases}\quad \CONT(n) = \begin{cases} n & \textrm{if $n\in\mathbb{Z}$},\\ m & \textrm{if $n=\cint{m}$ for some $m\in\mathbb{Z}$}. \end{cases} \end{align*} Let $\ZSEQ$ be the set of finite length sequences $\boldsymbol{n}=(n_1,\dots,n_{\ell})$ of $\AZ$. Note that a 2-colored partition can be seen as an element of $\ZSEQ$. We (re)define the length $\LENGTH(\boldsymbol{n})$ and the size $\SUM{\boldsymbol{n}}$ of $\boldsymbol{n}$ as follows. \begin{align*} \LENGTH(\boldsymbol{n})=\ell,\quad \SUM{\boldsymbol{n}}=\CONT(n_1)+\dots+\CONT(n_{\ell}). \end{align*} The result $(n'_1,\dots,n'_{\ell})$ of sorting $\boldsymbol{n}$ so that $n'_1\ANOO\cdots\ANOO n'_{\ell}$ is denoted by $\SORT(\boldsymbol{n})$. \begin{Ex} For $\boldsymbol{n}=(\cint{-5},-5,\cint{-5},-5,\cint{-6})$, we have $\LENGTH(\boldsymbol{n})=5$, $\SUM{\boldsymbol{n}}=-26$ and $\SORT(\boldsymbol{n})=(\cint{-6},-5,-5,\cint{-5},\cint{-5})$. \end{Ex} In the rest of the paper, we abbreviate $x_{\pm\alpha_1}(n)$ for $n\in\mathbb{Z}$ to $\YMP{n}$. For $\VZ\in W$ and $\boldsymbol{n}=(n_1,\dots,n_{\ell})\in\ZSEQ$, we put \begin{align*} \BUI(\boldsymbol{n})=\YY_{\COLOR(n_1)}(n_1)\cdots\YY_{\COLOR(n_{\ell})}(n_{\ell})\VZ.
\end{align*} \begin{Def} Let $\NSEQ$ be the subset of $\ZSEQ$ consisting of $\boldsymbol{n}=(n_1,\dots,n_{\ell})\in\ZSEQ$ such that $\boldsymbol{n}=\SORT(\boldsymbol{n})$ (i.e., $\boldsymbol{n}$ is weakly increasing by the order $\ANO$) and $\CONT(n_i)<0$ for $1\leq i\leq\ell$ (i.e., $\boldsymbol{n}$ consists of negative integers or colored negative integers). \end{Def} The following is easily verified by Remark \ref{sortres}. See also ~\cite[(2.11)]{MP}. \begin{Lem} \label{sortres3} For $\ell\geq 0$ and a highest weight vector $\VZ\in W$, we have \begin{align*} \Theta_{<\ell}\cdot\VZ=\sum_{i=0}^{\ell-1}\sum_{\substack{\boldsymbol{m}\in\NSEQ \\ \LENGTH(\boldsymbol{m})=i}} U(\GAAA_-)\BUI(\boldsymbol{m}). \end{align*} \end{Lem} \begin{Def}\label{lexdef} For $\boldsymbol{m}=(m_1,\dots,m_{\ell}),\boldsymbol{n}=(n_1,\dots,n_{\ell})\in\ZSEQ$, we write $\boldsymbol{m}\LEX\boldsymbol{n}$ if there exists $1\leq i\leq \ell$ such that $m_i\ANO n_i$ and $m_j=n_j$ for $1\leq j<i$. \end{Def} Note that our definition of $\boldsymbol{m}\LEX\boldsymbol{n}$ for $\boldsymbol{m},\boldsymbol{n}\in\ZSEQ$ implies $\LENGTH(\boldsymbol{m})=\LENGTH(\boldsymbol{n})$. \begin{Ex} $(\cint{-6},\cint{-5},-5,\cint{-10})\LEX (-7,-5,\cint{-5},\cint{-5})\LEX (-7,-5,-5,-5)$. \end{Ex} The following are obvious. Here, $(\boldsymbol{h},\boldsymbol{m},\boldsymbol{t})$ stands for the concatenation of $\boldsymbol{h},\boldsymbol{m}$ and $\boldsymbol{t}$. For example, $(\boldsymbol{h},\boldsymbol{m},\boldsymbol{t})=(\cint{-6},\cint{-5},\cint{-6},-6,\cint{-5},\cint{-5})$ when $\boldsymbol{h}=(\cint{-6},\cint{-5})$, $\boldsymbol{m}=(\cint{-6})$ and $\boldsymbol{t}=(-6,\cint{-5},\cint{-5})$. \begin{Lem}\label{lexcomp2} For $\boldsymbol{n}\in\ZSEQ$, we have $\boldsymbol{n}\LEXX\SORT(\boldsymbol{n})$. 
\end{Lem} \begin{Lem}\label{lexcomp} If $\boldsymbol{m}\LEX\boldsymbol{n}$, then $\SORT((\boldsymbol{h},\boldsymbol{m},\boldsymbol{t}))\LEX\SORT((\boldsymbol{h},\boldsymbol{n},\boldsymbol{t}))$ for $\boldsymbol{h},\boldsymbol{t}\in\ZSEQ$. \end{Lem} For $s\in\mathbb{Z}$, we denote by $U(\GAAA_-)_{s}$ the subspace of principal degree $s$ elements in $U(\GAAA_-)$, namely $U(\GAAA_-)_{s}=\{x\in U(\GAAA_-)\mid dx-xd=sx\}$. \begin{Def} Let $\VZ\in W$ be a highest weight vector. We say that an element $\boldsymbol{n}$ in $\ZSEQ$ is $\VZ$-reducible if $\BUI(\boldsymbol{n})\in W_{>\boldsymbol{n}}$, where \begin{align*} W_{>\boldsymbol{n}} = \sum_{\substack{\boldsymbol{m}\in\NSEQ \\ \SUM{\boldsymbol{n}}<\SUM{\boldsymbol{m}}}} U(\GAAA_-)_{\SUM{\boldsymbol{n}}-\SUM{\boldsymbol{m}}}\BUI(\boldsymbol{m}) + \sum_{\substack{\boldsymbol{m}\in\NSEQ \\ \LENGTH(\boldsymbol{n})>\LENGTH(\boldsymbol{m}) \\ \SUM{\boldsymbol{n}}=\SUM{\boldsymbol{m}}}} \mathbb{C}\BUI(\boldsymbol{m}) + \sum_{\substack{\boldsymbol{m}\in\NSEQ \\ \boldsymbol{n}\LEX\boldsymbol{m} \\ \SUM{\boldsymbol{n}}=\SUM{\boldsymbol{m}}}} \mathbb{C}\BUI(\boldsymbol{m}). \end{align*} Otherwise, we say that $\boldsymbol{n}$ is $\VZ$-irreducible. \end{Def} If $\boldsymbol{n}\in\ZSEQ\setminus\NSEQ$, then $\boldsymbol{n}$ is $\VZ$-reducible. Let $\RED_{\VZ}$ be the subset of $\NSEQ$ consisting of all the $\VZ$-irreducible elements. It is clear that we have \begin{align} U(\GE)\VZ=\sum_{\boldsymbol{n}\in\RED_{\VZ}} \mathbb{C}[B(n)\mid n\in\mathbb{Z}\setminus 3\mathbb{Z}]\BUI(\boldsymbol{n}). \label{spaneq} \end{align} \subsection{Forbidden patterns} Similarly to \S\ref{mainse}, we say that $\boldsymbol{n}=(n_1,\dots,n_{\ell})\in\ZSEQ$ contains $\boldsymbol{m}=(m_1,\dots,m_{\ell'})\in\ZSEQ$ if there exists $0\leq i\leq \ell-\ell'$ such that $n_{j+i}=m_j$ for $1\leq j\leq \ell'$. \begin{Def} Let $\boldsymbol{m}\in\ZSEQ$ be nonempty, i.e., $\LENGTH(\boldsymbol{m})>0$.
We say that $\boldsymbol{m}$ is a forbidden pattern if $\boldsymbol{n}\in\ZSEQ$ is $\VZ$-reducible for any highest weight vector $\VZ$ whenever $\boldsymbol{n}$ contains $\boldsymbol{m}$. \end{Def} \begin{Thm}\label{forthm} For $k\in\mathbb{Z}$, the following elements in $\ZSEQ$ are forbidden patterns. \begin{enumerate} \item[(1)] $(-k,-k), (\cint{-k},\cint{-k}), (-k-1,-k), (\cint{-k-1},\cint{-k})$. \item[(2)] $(\cint{-1-3k},-3k),(-1-3k,\cint{-3k})$. \item[(3)] $(-1-3k,\cint{-1-3k}),(\cint{-2-3k},-3k)$. \item[(4)] $(-2-3k,\cint{-2-3k}),(\cint{-3-3k},-1-3k)$. \item[(5)] $(\cint{-3-3k},-2-3k),(-3-3k,\cint{-2-3k})$. \item[(6)] $(-3-3k,\cint{-3-3k},\cint{-1-3k}),(-5-3k,-3-3k,\cint{-3-3k})$. \item[(7)] $(\cint{-5-3k},-4-3k,-2-3k,\cint{-1-3k})$. \end{enumerate} \end{Thm} \begin{proof} We prove that the elements in (1),(2),(3),(4),(5),(6),(7) are forbidden patterns in \S\ref{forone}, \S\ref{fortwoA}, \S\ref{fortwoB}, \S\ref{fortwoC}, \S\ref{fortwoD}, \S\ref{forthree}, \S\ref{forfour}, respectively. \end{proof} In the rest of this section, we put $w=1\otimes 1\otimes 1\in W$. \begin{Prop}\label{iniprop} The elements $(-1),(\cint{-1})$ and $(\cint{-2})$ in $\NSEQ$ are $w$-reducible. \end{Prop} \begin{proof} By direct calculation, we see $\BUI((-1))=-\BUI((\cint{-1}))\in \mathbb{C}B(-1)w$ and $\BUI((-2))-\BUI((\cint{-2}))\in \mathbb{C}B(-2)w$. \end{proof} We note that Proposition \ref{iniprop} also follows from ~\eqref{charcalc}. \begin{Cor}\label{initcor} The element $(\lambda_1,\dots,\lambda_{\ell})$ in $\NSEQ$ is $w$-reducible if $\ell\geq 1$ and $\lambda_{\ell}=-1,\cint{-1},\cint{-2}$. \end{Cor} Recall the definitions of $\BIR$ and $\BIRP$ in \S\ref{mainse}. For a 2-colored partition $\lambda=(\lambda_1,\dots,\lambda_{\ell})$, we easily see that the condition that $\lambda$ satisfies (D1)--(D3) in \S\ref{mainse} is equivalent to the condition that $(-\lambda_1,\dots,-\lambda_{\ell})\in\NSEQ$ does not contain any of the elements (1)--(7) in Theorem \ref{forthm}.
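Both the containment relation and the pattern families (1)--(7) are purely combinatorial, so they can be checked mechanically. The following Python sketch is our own illustration (not code from this paper); the encoding of a plain integer $m$ as the pair $(m,0)$ and of a colored integer $\cint{m}$ as $(m,1)$ is an assumption of the sketch.

```python
# Illustrative check (ours, not the paper's code) of the forbidden
# patterns (1)-(7).  A plain integer m is encoded as (m, 0) and a
# colored integer m~ as (m, 1); this encoding is our assumption.

def patterns(k):
    """The forbidden patterns (1)-(7) for a fixed integer k."""
    P, C = 0, 1  # plain / colored
    return [
        [(-k, P), (-k, P)], [(-k, C), (-k, C)],
        [(-k - 1, P), (-k, P)], [(-k - 1, C), (-k, C)],                  # (1)
        [(-1 - 3*k, C), (-3*k, P)], [(-1 - 3*k, P), (-3*k, C)],          # (2)
        [(-1 - 3*k, P), (-1 - 3*k, C)], [(-2 - 3*k, C), (-3*k, P)],      # (3)
        [(-2 - 3*k, P), (-2 - 3*k, C)], [(-3 - 3*k, C), (-1 - 3*k, P)],  # (4)
        [(-3 - 3*k, C), (-2 - 3*k, P)], [(-3 - 3*k, P), (-2 - 3*k, C)],  # (5)
        [(-3 - 3*k, P), (-3 - 3*k, C), (-1 - 3*k, C)],
        [(-5 - 3*k, P), (-3 - 3*k, P), (-3 - 3*k, C)],                   # (6)
        [(-5 - 3*k, C), (-4 - 3*k, P), (-2 - 3*k, P), (-1 - 3*k, C)],    # (7)
    ]

def contains(n, m):
    """n contains m as a contiguous factor (the containment above)."""
    return any(list(n[i:i + len(m)]) == list(m)
               for i in range(len(n) - len(m) + 1))

def is_forbidden(n, kmax=30):
    """True if n contains some pattern; k is scanned over a finite range,
    which suffices for sequences with entries in that range."""
    return any(contains(n, p) for k in range(kmax) for p in patterns(k))
```

For instance, the sorted sequence $(\cint{-6},-5,-5,\cint{-5},\cint{-5})$ from the earlier example is detected as forbidden via the pattern $(\cint{-k},\cint{-k})$ of (1) with $k=5$.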
Concerning Corollary \ref{initcor} and (D4), we can restate \eqref{spaneq} as follows. \begin{Cor}\label{biidenintercor} For $i=1$ (resp. $i=2$), put $v_1=u_{a_1,a_2,a_3}$ with $a_1+a_2+a_3=0$ and $(a_1,a_2,a_3)\ne (0,0,0)$ (resp. $v_2=w$). The set \begin{align*} \{\AOB{-\mu_1}\cdots\AOB{-\mu_{\ell'}}\BUI_i(-\lambda_1,\dots,-\lambda_{\ell})\} \end{align*} spans a module which is isomorphic to $V((2i-1)\Lambda_0+(2-i)\Lambda_1+(2-i)\Lambda_2-(2-i)\delta)$, where $(\mu_1,\dots,\mu_{\ell'})$ varies in $\REG{3}$ and $(\lambda_1,\dots,\lambda_{\ell})$ varies in $\BIR$ (resp. $\BIRP$). \end{Cor} \section{Vertex operator calculations}\label{cal} In this section, we put $\omega=\exp(2\pi\sqrt{-1}/3)$ (as in \S\ref{vertset}) and $k$ (resp. $\VZ$) stands for an integer (resp. a highest weight vector in $W$). We remark that the arguments in this section are similar to those in ~\cite{Nan}. \begin{Def} We denote by $R_i(n)$ the coefficient of $\zeta^n$ in $R_i(\zeta)$ for $i\in\{1,2,\pm\}$, where \begin{align*} R_{+}(\zeta) &= x_{\alpha_1,\alpha_1}(\zeta), \quad\quad R_{-}(\zeta) = x_{-\alpha_1,-\alpha_1}(\zeta), \\ R_1(\zeta) &= E^{-}(-\alpha_1,\zeta)x_{\alpha_2,-\alpha_1}(\zeta)-x_{\alpha_1,\alpha_1+\alpha_2}(\zeta)E^{+}(\alpha_1,\zeta),\\ R_2(\zeta) &= E^{-}(-\alpha_1,\zeta)x_{-(\alpha_1+\alpha_2),-\alpha_1}(\zeta)-x_{\alpha_1,-\alpha_2}(\zeta)E^{+}(\alpha_1,\zeta). \end{align*} \end{Def} In the following, we abbreviate $E^\pm(\pm\alpha_1,n)$ to $E^\pm(n)$ for $n\in\mathbb{Z}$ (see (V2) in \S\ref{vor}). Note that $E^\pm(n)=0$ when $\mp n>0$. For an integer $\ell\geq 0$ and summable expressions (see ~\cite[\S4]{MP}) $G$ and $H$, the notation $G\equiv_{\ell} H$ stands for $G\VZ-H\VZ\in\Theta_{<\ell}\cdot\VZ$. \subsection{Forbiddenness of $(-k,-k)$, $(\cint{-k},\cint{-k})$, $(-k-1,-k)$ and $(\cint{-k-1},\cint{-k})$}\label{forone} Note that $R_{\pm}(-2k)$ and $R_{\pm}(-2k-1)$ are expanded as follows.
{\footnotesize \begin{align*} R_{\pm}(-2k)/3 &\equiv_2 \YMP{-k}\YMP{-k}+2\YMP{-k-1}\YMP{-k+1}+2\YMP{-k-2}\YMP{-k+2}+\cdots, \\ R_{\pm}(-2k-1)/3 &\equiv_2 2(\YMP{-k-1}\YMP{-k}+\YMP{-k-2}\YMP{-k+1}+\YMP{-k-3}\YMP{-k+2}+\cdots). \end{align*} \normalsize} Here, $P_{\alpha_1,\alpha_1}(1)=3=P_{-\alpha_1,-\alpha_1}(1)$ (see Example \ref{exppoly2}). We show that $(-k,-k)$ is a forbidden pattern. Suppose that $\boldsymbol{n}\in\ZSEQ$ contains $(-k,-k)$. In other words, $\boldsymbol{n}$ is of the form $\boldsymbol{n}=(\boldsymbol{h},-k,-k,\boldsymbol{t})$ for some $\boldsymbol{h}=(h_1,\dots,h_a),\boldsymbol{t}=(t_1,\dots,t_b)\in\ZSEQ$. By the above expansion, we have \begin{align*} \BUI(\boldsymbol{n})+2\sum_{i\geq 1}\BUI((\boldsymbol{h},-k-i,-k+i,\boldsymbol{t})) -\frac{1}{3}\YY_{\COLOR(h_1)}(h_1)\cdots\YY_{\COLOR(h_{a})}(h_{a})R_+(-2k)\BUI(\boldsymbol{t})\in\Theta_{<2}\cdot\VZ. \end{align*} By Proposition \ref{mpprop} (2), we have \begin{align*} {} &{} \YY_{\COLOR(h_1)}(h_1)\cdots\YY_{\COLOR(h_{a})}(h_{a})R_+(-2k)\BUI(\boldsymbol{t})\\ &= \sum_{i,j\geq 0}\YY_{\COLOR(h_1)}(h_1)\cdots\YY_{\COLOR(h_{a})}(h_{a})2E^-(-i)\YM{-2k+i-j}E^+(j)\BUI(\boldsymbol{t}). \end{align*} Thanks to Corollary \ref{idou}, Lemma \ref{sortres3} and Lemma \ref{lexcomp}, we see that \begin{align*} {} &{} \YY_{\COLOR(h_1)}(h_1)\cdots\YY_{\COLOR(h_{a})}(h_{a})2E^-(-i)\YM{-2k+i-j}E^+(j)\BUI(\boldsymbol{t})\\ &\in \sum_{\substack{\boldsymbol{m}\in\NSEQ \\ \SUM{\boldsymbol{n}}<\SUM{\boldsymbol{m}}}} U(\GAAA_{-})_{\SUM{\boldsymbol{n}}-\SUM{\boldsymbol{m}}}\BUI(\boldsymbol{m}) + \sum_{\substack{\boldsymbol{m}\in\NSEQ \\ \LENGTH(\boldsymbol{n})>\LENGTH(\boldsymbol{m}) \\ \SUM{\boldsymbol{n}}=\SUM{\boldsymbol{m}}}} \mathbb{C}\BUI(\boldsymbol{m}) \end{align*} for $i,j\geq 0$. Thus, we have $\BUI(\boldsymbol{n})\in W_{>\boldsymbol{n}}$ by Lemma \ref{sortres3}. The other cases are similar.
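The bookkeeping with $\SORT$ and $\LEX$ used in the argument above is elementary and can be prototyped directly. In the following Python sketch (our illustration; the pair encoding $(m,0)$ for $m$ and $(m,1)$ for $\cint{m}$ is an assumption), the increasing order of Definition \ref{twoorders} becomes the natural order on pairs, so Python's built-in tuple comparison can be reused.

```python
# Prototype (ours, for illustration) of SORT and of the comparison >_LEX.
# A plain integer m is the pair (m, 0) and a colored integer m~ is (m, 1);
# with this encoding the increasing order
#     ... < -1 < -1~ < 0 < 0~ < 1 < 1~ < ...
# coincides with the natural lexicographic order on the pairs.

def sort_seq(n):
    """SORT(n): the weakly increasing rearrangement of n."""
    return sorted(n)

def lex(m, n):
    """m >_LEX n: at the first index where m and n differ, the entry of
    m is the larger one; both sequences must have the same length."""
    assert len(m) == len(n)
    for a, b in zip(m, n):
        if a != b:
            return a > b
    return False
```

With this encoding, `sort_seq` reproduces the example after Definition \ref{twoorders}, and random spot-checks of Lemma \ref{lexcomp2} become one-line assertions.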
\subsection{Forbiddenness of $(\cint{-1-3k},-3k)$ and $(-1-3k,\cint{-3k})$}\label{fortwoA} The expansions of $R_{1}(-6k-1)$ and $R_{2}(-6k-1)$ are as follows. {\footnotesize \begin{align*} {} &{} R_1(-6k-1)/(-3(1+2\omega)) \\ &\equiv_2 \YM{-1-3k}\YP{-3k} -(1+\omega)\YP{-1-3k}\YM{-3k} + \omega\YM{-2-3k}\YP{1-3k}+\cdots\\ &+ \frac{1}{3}E^-(-1)((2+\omega)\YP{-3k}\YM{-3k} + (-1+\omega)\YM{-1-3k}\YP{1-3k} - (1+2\omega)\YP{-1-3k}\YM{1-3k}+\cdots)\\ &+ \cdots\\ &- \frac{1}{3}((-1+\omega)\YP{-1-3k}\YM{-1-3k} -(1+2\omega) \YM{-2-3k}\YP{-3k}+(2+\omega)\YP{-2-3k}\YM{-3k}+\cdots)E^+(1)\\ &- \cdots. \end{align*} \normalsize} {\footnotesize \begin{align*} {} &{} (R_1(-6k-1)+R_2(-6k-1))/(-9) \\ &\equiv_2 \YP{-1-3k}\YM{-3k} -\YM{-2-3k}\YP{1-3k} -\YP{-2-3k}\YM{1-3k}+\cdots\\ &+ \frac{1}{3}E^-(-1)(-\YP{-3k}\YM{-3k} -\YM{-1-3k}\YP{1-3k} + 2\YP{-1-3k}\YM{1-3k}+\cdots)\\ &+ \cdots\\ &- \frac{1}{3}(-\YP{-1-3k}\YM{-1-3k} + 2\YM{-2-3k}\YP{-3k} -\YP{-2-3k}\YM{-3k}+\cdots)E^+(1)\\ &- \cdots. \end{align*} \normalsize} We show that $(\cint{-1-3k},-3k)$ is a forbidden pattern. Suppose that $\boldsymbol{n}\in\ZSEQ$ contains $(\cint{-1-3k},-3k)$. In other words, $\boldsymbol{n}$ is of the form $\boldsymbol{n}=(\boldsymbol{h},\cint{-1-3k},-3k,\boldsymbol{t})$ for some $\boldsymbol{h}=(h_1,\dots,h_a),\boldsymbol{t}=(t_1,\dots,t_b)\in\ZSEQ$. By Proposition \ref{mpprop}, we have \begin{align*} \YY_{\COLOR(h_1)}(h_1)\cdots\YY_{\COLOR(h_{a})}(h_{a})(R_1(-6k-1)/(-3(1+2\omega)))\BUI(\boldsymbol{t})=0. \end{align*} Thus, the element \begin{align*} {} &{} \BUI(\boldsymbol{n})-(1+\omega)\BUI((\boldsymbol{h},-1-3k,\cint{-3k},\boldsymbol{t}))+\dots \\ &\quad +\sum_{i\geq 1}\YY_{\COLOR(h_1)}(h_1)\cdots\YY_{\COLOR(h_{a})}(h_{a})E^{-}(-i)p^{(-)}_i\BUI(\boldsymbol{t})\\ &\quad -\sum_{i\geq 1}\YY_{\COLOR(h_1)}(h_1)\cdots\YY_{\COLOR(h_{a})}(h_{a})p^{(+)}_iE^{+}(i)\BUI(\boldsymbol{t}), \end{align*} belongs to $\Theta_{<a+2+b}\cdot\VZ$, where $p^{(\pm)}_i$ are suitable expressions.
For example, $p^{(\pm)}_1$ is displayed in the expansion of $R_1(-6k-1)/(-3(1+2\omega))$ above. Let $i\geq 1$. Thanks to Corollary \ref{idou}, Lemma \ref{sortres3} and Lemma \ref{lexcomp}, we see that $\YY_{\COLOR(h_1)}(h_1)\cdots\YY_{\COLOR(h_{a})}(h_{a})E^{-}(-i)p^{(-)}_i\BUI(\boldsymbol{t})$ belongs to $W_{>\boldsymbol{n}}$. We also see that $\YY_{\COLOR(h_1)}(h_1)\cdots\YY_{\COLOR(h_{a})}(h_{a})p^{(+)}_iE^{+}(i)\BUI(\boldsymbol{t})$ belongs to $W_{>\boldsymbol{n}}$ because any monomial $\boldsymbol{m}$ appearing in $p^{(+)}_i$ satisfies $\boldsymbol{n}\LEX\boldsymbol{m}$ (see Definition \ref{lexdef}). Thus, we have $\BUI(\boldsymbol{n})\in W_{>\boldsymbol{n}}$ by Lemma \ref{sortres3}. The other case is similar. \subsection{Forbiddenness of $(-1-3k,\cint{-1-3k})$ and $(\cint{-2-3k},-3k)$}\label{fortwoB} The expansions of $R_{1}(-6k-2)$ and $R_{2}(-6k-2)$ are as follows. The rest of the argument is the same as in \S\ref{fortwoA}. {\footnotesize \begin{align*} {} &{} R_1(-6k-2)/(-3(2+\omega)) \\ &\equiv_2 \YP{-1-3k}\YM{-1-3k} + \omega \YM{-2-3k}\YP{-3k} -(1+\omega)\YP{-2-3k}\YM{-3k} +\cdots\\ &+ \frac{1}{3}E^-(-1)((1+2\omega)\YM{-1-3k}\YP{-3k} + (1-\omega)\YP{-1-3k}\YM{-3k} -(2+\omega)\YM{-2-3k}\YP{1-3k}+\cdots)\\ &+ \cdots\\ &- \frac{1}{3}((1-\omega)\YM{-2-3k}\YP{-1-3k} -(2+\omega)\YP{-2-3k}\YM{-1-3k} + (1+2\omega)\YM{-3-3k}\YP{-3k}+\cdots)E^+(1)\\ &- \cdots. \end{align*} \normalsize} {\footnotesize \begin{align*} {} &{} (R_1(-6k-2)-(1+\omega)R_2(-6k-2))/(-9\omega) \\ &\equiv_2 \YM{-2-3k}\YP{-3k} -\YP{-2-3k}\YM{-3k} -\YM{-3-3k}\YP{1-3k}+\cdots \\ &+ \frac{1}{3}E^-(-1)(2\YM{-1-3k}\YP{-3k} -\YP{-1-3k}\YM{-3k} -\YM{-2-3k}\YP{1-3k}+\cdots)\\ &+ \cdots\\ &- \frac{1}{3}(-\YM{-2-3k}\YP{-1-3k} -\YP{-2-3k}\YM{-1-3k} +2\YM{-3-3k}\YP{-3k}+\cdots)E^+(1)\\ &- \cdots. \end{align*} \normalsize} \subsection{Forbiddenness of $(-2-3k,\cint{-2-3k})$ and $(\cint{-3-3k},-1-3k)$}\label{fortwoC} The expansions of $R_{1}(-6k-4)$ and $R_{2}(-6k-4)$ are as follows.
The rest of the argument is the same as in \S\ref{fortwoA}. {\footnotesize \begin{align*} {} &{} R_1(-6k-4)/(3(2+\omega)) \\ &\equiv_2 \YP{-2-3k}\YM{-2-3k} + \omega \YM{-3-3k}\YP{-1-3k} -(1+\omega) \YP{-3-3k}\YM{-1-3k}+\cdots\\ &+ \frac{1}{3}E^-(-1)((-1+\omega)\YM{-2-3k}\YP{-1-3k} + (2+\omega)\YP{-2-3k}\YM{-1-3k} -(1+2\omega)\YM{-3-3k}\YP{-3k}+\cdots)\\ &+ \cdots\\ &- \frac{1}{3}(-(1+2\omega)\YM{-3-3k}\YP{-2-3k} + (-1+\omega)\YP{-3-3k}\YM{-2-3k} + (2+\omega)\YM{-4-3k}\YP{-1-3k}+\cdots)E^+(1)\\ &- \cdots. \end{align*} \normalsize} {\footnotesize \begin{align*} {} &{} (R_1(-6k-4)-(1+\omega)R_2(-6k-4))/(9\omega) \\ &\equiv_2 \YM{-3-3k}\YP{-1-3k} -\YP{-3-3k}\YM{-1-3k} -\YM{-4-3k}\YP{-3k}+\cdots\\ &+ \frac{1}{3}E^-(-1)(\YM{-2-3k}\YP{-1-3k} + \YP{-2-3k}\YM{-1-3k} -2\YM{-3-3k}\YP{-3k}+\cdots)\\ &+ \cdots\\ &- \frac{1}{3}(-2\YM{-3-3k}\YP{-2-3k} + \YP{-3-3k}\YM{-2-3k} + \YM{-4-3k}\YP{-1-3k}+\cdots)E^+(1)\\ &- \cdots. \end{align*} \normalsize} \subsection{Forbiddenness of $(\cint{-3-3k},-2-3k)$ and $(-3-3k,\cint{-2-3k})$}\label{fortwoD} The expansions of $R_{1}(-6k-5)$ and $R_{2}(-6k-5)$ are as follows. The rest of the argument is the same as in \S\ref{fortwoA}. {\footnotesize \begin{align*} {} &{} R_1(-6k-5)/(3(1+2\omega)) \\ &\equiv_2 \YM{-3-3k}\YP{-2-3k} -(1+\omega)\YP{-3-3k}\YM{-2-3k} + \omega \YM{-4-3k}\YP{-1-3k}+\cdots\\ &+ \frac{1}{3}E^-(-1)((1-\omega)\YP{-2-3k}\YM{-2-3k} + (1+2\omega)\YM{-3-3k}\YP{-1-3k} -(2+\omega)\YP{-3-3k}\YM{-1-3k}+\cdots)\\ &+ \cdots\\ &- \frac{1}{3}(-(2+\omega)\YP{-3-3k}\YM{-3-3k} + (1-\omega)\YM{-4-3k}\YP{-2-3k} + (1+2\omega)\YP{-4-3k}\YM{-2-3k}+\cdots)E^+(1)\\ &- \cdots. 
\end{align*} \normalsize} {\footnotesize \begin{align*} {} &{} (R_1(-6k-5)+R_2(-6k-5))/9 \\ &\equiv_2 \YP{-3-3k}\YM{-2-3k} -\YM{-4-3k}\YP{-1-3k} -\YP{-4-3k}\YM{-1-3k}+\cdots\\ &+ \frac{1}{3}E^-(-1)(\YP{-2-3k}\YM{-2-3k} -2\YM{-3-3k}\YP{-1-3k} + \YP{-3-3k}\YM{-1-3k}+\cdots)\\ &+ \cdots\\ &- \frac{1}{3}(\YP{-3-3k}\YM{-3-3k} + \YM{-4-3k}\YP{-2-3k} -2\YP{-4-3k}\YM{-2-3k}+\cdots)E^+(1)\\ &- \cdots. \end{align*} \normalsize} \subsection{Forbiddenness of $(-3-3k,\cint{-3-3k},\cint{-1-3k})$ and $(-5-3k,-3-3k,\cint{-3-3k})$}\label{forthree} Using Corollary \ref{idounan}, we see that \begin{align*} Z_1(k)=\YP{-3-3k}R_{-}(-4-6k)/3-(R_1(-6k-5)+R_2(-6k-5))\YM{-2-3k}/9 \end{align*} is expanded as follows. {\tiny \begin{align*} {} &{} Z_1(k)\\ &\equiv_3 \YP{-3-3k}\YM{-3-3k}\YM{-1-3k} + \YM{-4-3k}\YM{-2-3k}\YP{-1-3k} -\YM{-4-3k}\YP{-2-3k}\YM{-1-3k}+\cdots\\ &+ \frac{1}{3}E^-(-1)(-\YP{-2-3k}\YM{-2-3k}\YM{-2-3k} + 2\YM{-3-3k}\YM{-2-3k}\YP{-1-3k} -\YP{-3-3k}\YM{-2-3k}\YM{-1-3k}+\cdots)\\ &+ \cdots\\ &- \frac{1}{3}(-\YP{-3-3k}\YM{-3-3k}\YM{-2-3k} -\YM{-4-3k}\YP{-2-3k}\YM{-2-3k} + 2\YP{-4-3k}\YM{-2-3k}\YM{-2-3k}+\cdots)E^+(1)\\ &- \cdots. \end{align*} \normalsize} The element $(-3-3k,\cint{-3-3k},\cint{-1-3k})$ is a forbidden pattern because any monomial $\boldsymbol{m}$ appearing in the coefficient of $E^+(i)$ satisfies $(-3-3k,\cint{-3-3k},\cint{-1-3k})\LEX\boldsymbol{m}$. Similarly, using Corollary \ref{idounan}, we see that \begin{align*} Z_2(k)=R_{+}(-8-6k)\YM{-3-3k}/3+\YP{-4-3k}(R_1(-7-6k)+R_2(-7-6k))/9 \end{align*} is expanded as follows. 
{\tiny \begin{align*} {} &{} Z_2(k)\\ &\equiv_3 \YP{-5-3k}\YP{-3-3k}\YM{-3-3k} + \YM{-5-3k}\YP{-4-3k}\YP{-2-3k} -\YP{-5-3k}\YM{-4-3k}\YP{-2-3k}+\cdots\\ &+ \frac{1}{3}E^-(-1)(\YP{-4-3k}\YP{-3-3k}\YM{-3-3k} + \YP{-4-3k}\YM{-4-3k}\YP{-2-3k} -2\YP{-4-3k}\YP{-4-3k}\YM{-2-3k}+\cdots)\\ &+ \cdots\\ &- \frac{1}{3}(\YP{-4-3k}\YP{-4-3k}\YM{-4-3k} -2\YM{-5-3k}\YP{-4-3k}\YP{-3-3k} + \YP{-5-3k}\YP{-4-3k}\YM{-3-3k}+\cdots)E^+(1)\\ &- \frac{1}{3}(-2\YM{-5-3k}\YP{-4-3k}\YP{-4-3k} + \YP{-5-3k}\YP{-4-3k}\YM{-4-3k} + \YM{-6-3k}\YP{-4-3k}\YP{-3-3k}+\cdots)E^+(2)\\ &- \cdots. \end{align*} \normalsize} By applying \begin{align*} R_+(-8-6k)/3\equiv_2 \YP{-4-3k}\YP{-4-3k}+2\YP{-5-3k}\YP{-3-3k}+\cdots, \end{align*} to the coefficient of $E^+(1)$ in $Z_2(k)$, we see that $Z_2(k)\equiv_3 Z'_2(k)$, where {\tiny \begin{align*} {} &{} Z'_2(k)\\ &\equiv_3 \YP{-5-3k}\YP{-3-3k}\YM{-3-3k} + \YM{-5-3k}\YP{-4-3k}\YP{-2-3k} -\YP{-5-3k}\YM{-4-3k}\YP{-2-3k}+\cdots\\ &+ \frac{1}{3}E^-(-1)(\YP{-4-3k}\YP{-3-3k}\YM{-3-3k} + \YP{-4-3k}\YM{-4-3k}\YP{-2-3k} -2\YP{-4-3k}\YP{-4-3k}\YM{-2-3k}+\cdots)\\ &+ \cdots\\ &- \frac{1}{3}(-2\YM{-5-3k}\YP{-4-3k}\YP{-3-3k}-2\YP{-5-3k}\YM{-4-3k}\YP{-3-3k} + \YP{-5-3k}\YP{-4-3k}\YM{-3-3k}+\cdots)E^+(1)\\ &- \frac{1}{3}(-2\YM{-5-3k}\YP{-4-3k}\YP{-4-3k} + \YP{-5-3k}\YP{-4-3k}\YM{-4-3k} + \YM{-6-3k}\YP{-4-3k}\YP{-3-3k}+\cdots)E^+(2)\\ &- \cdots. 
\end{align*} \normalsize} By applying the same relation to the coefficient of $E^+(2)$ and by applying \begin{align*} R_+(-7-6k)/6\equiv_2 \YP{-4-3k}\YP{-3-3k}+\YP{-5-3k}\YP{-2-3k}+\cdots, \end{align*} to the coefficient of $E^+(1)$ in $Z'_2(k)$, we see that $Z'_2(k)\equiv_3 Z''_2(k)$, where {\tiny \begin{align*} {} &{} Z''_2(k)\\ &\equiv_3 \YP{-5-3k}\YP{-3-3k}\YM{-3-3k} + \YM{-5-3k}\YP{-4-3k}\YP{-2-3k} -\YP{-5-3k}\YM{-4-3k}\YP{-2-3k}+\cdots\\ &+ \frac{1}{3}E^-(-1)(\YP{-4-3k}\YP{-3-3k}\YM{-3-3k} + \YP{-4-3k}\YM{-4-3k}\YP{-2-3k} -2\YP{-4-3k}\YP{-4-3k}\YM{-2-3k}+\cdots)\\ &+ \cdots\\ &- \frac{1}{3}(-2\YP{-5-3k}\YM{-4-3k}\YP{-3-3k}+\YP{-5-3k}\YP{-4-3k}\YM{-3-3k} + 2\YP{-5-3k}\YM{-5-3k}\YP{-2-3k}+\cdots)E^+(1)\\ &- \frac{1}{3}(\YP{-5-3k}\YP{-4-3k}\YM{-4-3k} + 4\YP{-5-3k}\YM{-5-3k}\YP{-3-3k} + \YM{-6-3k}\YP{-4-3k}\YP{-3-3k}+\cdots)E^+(2)\\ &- \cdots. \end{align*} \normalsize} The element $(-5-3k,-3-3k,\cint{-3-3k})$ is a forbidden pattern because any monomial $\boldsymbol{m}$ appearing in the coefficient of $E^+(i)$ in $Z''_2(k)$ satisfies $(-5-3k,-3-3k,\cint{-3-3k})\LEX\boldsymbol{m}$. \subsection{Forbiddenness of $(\cint{-5-3k},-4-3k,-2-3k,\cint{-1-3k})$}\label{forfour} By applying \begin{align*} R_-(-5-6k)/6 \equiv_2 \YM{-3-3k}\YM{-2-3k}+\YM{-4-3k}\YM{-1-3k}+\cdots, \end{align*} to the coefficient of $E^+(1)$ in $Z_1(k)$, we see that $Z_1(k)\equiv_3 Z'_1(k)$, where {\tiny \begin{align*} {} &{} Z'_1(k)\\ &\equiv_3 \YP{-3-3k}\YM{-3-3k}\YM{-1-3k} + \YM{-4-3k}\YM{-2-3k}\YP{-1-3k} -\YM{-4-3k}\YP{-2-3k}\YM{-1-3k}+\cdots\\ &+ \frac{1}{3}E^-(-1)(-\YP{-2-3k}\YM{-2-3k}\YM{-2-3k} + 2\YM{-3-3k}\YM{-2-3k}\YP{-1-3k} -\YP{-3-3k}\YM{-2-3k}\YM{-1-3k}+\cdots)\\ &+ \cdots\\ &- \frac{1}{3}(-\YM{-4-3k}\YP{-2-3k}\YM{-2-3k} +2\YP{-4-3k}\YM{-2-3k}\YM{-2-3k} + 4\YM{-4-3k}\YP{-3-3k}\YM{-1-3k}+\cdots)E^+(1)\\ &- \cdots. \end{align*} \normalsize} Again, using Corollary \ref{idounan}, we see that \begin{align*} Z_3(k)=Z'_2(k)\YM{-1-3k}-\YP{-5-3k}Z'_1(k) \end{align*} is expanded as follows. 
{\tiny \begin{align*} {} &{} Z_3(k)\\ &\equiv_4 \YM{-5-3k}\YP{-4-3k}\YP{-2-3k}\YM{-1-3k} -\YP{-5-3k}\YM{-4-3k}\YM{-2-3k}\YP{-1-3k} +\cdots\\ &+ \frac{1}{3}E^-(-1)(\YP{-4-3k}\YP{-3-3k}\YM{-3-3k}\YM{-1-3k} + \YP{-4-3k}\YM{-4-3k}\YP{-2-3k}\YM{-1-3k} +\cdots)\\ &+ \cdots\\ &- \frac{1}{3}(\YP{-5-3k}\YM{-4-3k}\YP{-2-3k}\YM{-2-3k} -2\YP{-5-3k}\YP{-4-3k}\YM{-2-3k}\YM{-2-3k} +\cdots)E^+(1)\\ &- \frac{1}{3}(\YP{-5-3k}\YM{-4-3k}\YP{-3-3k}\YM{-2-3k} + \YP{-5-3k}\YP{-4-3k}\YM{-3-3k}\YM{-2-3k} +\cdots)E^+(2)\\ &- \cdots. \end{align*} \normalsize} The element $(\cint{-5-3k},-4-3k,-2-3k,\cint{-1-3k})$ is a forbidden pattern because any monomial $\boldsymbol{m}$ appearing in the coefficient of $E^+(i)$ satisfies $(\cint{-5-3k},-4-3k,-2-3k,\cint{-1-3k})\LEX\boldsymbol{m}$. \section{An automatic derivation of $q$-difference equations}\label{auto} \subsection{A brief survey of ~\cite{TT}} Let $\Sigma$ be a nonempty finite set, called an alphabet in the context of formal language theory. We denote by $\Sigma^\ast$ the set $\bigsqcup_{n\geq 0}\Sigma^n$ of finite words over $\Sigma$. A word $(w_1,\dots,w_n)\in\Sigma^n$ of length $n$ is written as $w_1\cdots w_n$. Under word concatenation, with the empty word $\EMPTYWORD$ as the unit, we regard the set $\Sigma^{\ast}$ as the free monoid generated by $\Sigma$. For $a\in\Sigma$ and $X,Y\subseteq\Sigma^{\ast}$, we put $aX=\{aw\mid w\in X\}$ and $XY=\{wv\mid w\in X,v\in Y\}$. \begin{Def} A deterministic finite automaton (DFA, for short) over $\Sigma$ is a 5-tuple $(Q,\Sigma,\delta,s,F)$, where $Q$ is a finite set (called the set of states), $\delta:Q\times \Sigma\to Q$ is a function (called the transition function), $s$ is an element of $Q$ (called the start state) and $F$ is a subset of $Q$ (called the set of accept states).
\end{Def} For a DFA $M=(Q,\Sigma,\delta,s,F)$, the language recognized by $M$ is defined as \begin{align*} L(M)=\{w\in\Sigma^{\ast}\mid \EXTE{\delta}(s,w)\in F\}, \end{align*} where $\EXTE{\delta}(q,w)$ is defined inductively by $\EXTE{\delta}(q,\EMPTYWORD)=q$ and $\EXTE{\delta}(q,w_1v)=\EXTE{\delta}(\delta(q,w_1),v)$ for $q\in Q, w_1\in\Sigma$ and $w,v\in\Sigma^{\ast}$. \begin{Def}[{\cite[Definition 2.5]{TT}}] Let $\SEQ(\Sigma)=\Sigma^{\mathbb{Z}_{\geq 1}}$ be the set of infinite sequences $\boldsymbol{i}=(i_1,i_2,\dots)$ of $\Sigma$. For $A\subseteq \Sigma^{\ast}$, we put \begin{align*} A^{\wedge} = \{\boldsymbol{i}\in\SEQ(\Sigma)\mid i_1\dots i_n\in A\textrm{ for all }n\geq 1\}. \end{align*} \end{Def} In other words, $A^{\wedge}$ is the set of infinite sequences all of whose (nonempty) finite truncations belong to $A$. For a DFA $M=(Q,\Sigma,\delta,s,F)$ and $v\in Q$, we put $M_v=(Q,\Sigma,\delta,v,F)$, namely the DFA obtained from $M$ by changing the start state to $v$ (see \cite[Definition 3.11]{TT}). Then, the following obvious identity (see ~\cite[Lemma 3.13]{TT}) \begin{align} (L(M_v)^{c})^{\wedge}=\bigsqcup_{\substack{a\in\Sigma \\ \delta(v,a)\not\in F}}a\cdot(L(M_{\delta(v,a)})^{c})^{\wedge} \label{usefulidentity} \end{align} plays a key role in our automatic derivations of $q$-difference equations. \subsection{An application} Recall that a 2-colored partition is defined in \S\ref{mainse} to be a weakly decreasing sequence of positive integers and colored positive integers with respect to the order ~\eqref{orde} (see also Definition \ref{twoorders}). We denote the set of 2-colored partitions by $\TPAR$.
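At the level of finite truncations, the identity \eqref{usefulidentity} is a first-letter decomposition: a word all of whose nonempty prefixes are driven to states outside $F$ factors as a letter $a$ with $\delta(v,a)\not\in F$ followed by such a word read from $\delta(v,a)$. The Python sketch below checks this exhaustively for short words; the 3-state automaton is a toy example of our own choosing (not a DFA from this paper), with an absorbing accept state.

```python
# A toy illustration (ours) of the extended transition function and of
# the "all prefixes avoid F" condition behind the identity for
# (L(M_v)^c)^.  The 3-state DFA below is an arbitrary example with an
# absorbing accept state.
from itertools import product

Sigma = 'ab'
F = {2}                                  # accept states
delta = {(0, 'a'): 0, (0, 'b'): 1,
         (1, 'a'): 0, (1, 'b'): 2,
         (2, 'a'): 2, (2, 'b'): 2}      # state 2 is absorbing

def dhat(q, w):
    """Extended transition function: dhat(q, eps) = q and
    dhat(q, a w) = dhat(delta(q, a), w)."""
    for a in w:
        q = delta[(q, a)]
    return q

def avoids(v, w):
    """All nonempty prefixes of w stay outside F when read from v;
    this is the finite analogue of membership in (L(M_v)^c)^."""
    return all(dhat(v, w[:n + 1]) not in F for n in range(len(w)))

# Exhaustive spot-check of the first-letter decomposition:
# avoids(v, a + w) iff delta(v, a) is outside F and avoids(delta(v, a), w).
for v in range(3):
    for n in range(6):
        for w in map(''.join, product(Sigma, repeat=n)):
            for a in Sigma:
                lhs = avoids(v, a + w)
                rhs = delta[(v, a)] not in F and avoids(delta[(v, a)], w)
                assert lhs == rhs
```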
Let $I=\{\mya,\myb,\dots,\mym\}$ and consider the map $\pi:I\to \TPAR$ defined by \begin{align*} \mya\mapsto\emptypar,\quad \myb\mapsto(1),\quad \myc\mapsto(\cint{1}),\quad \myd\mapsto(2),\quad \mye\mapsto(\cint{2}),\quad \myf\mapsto(3),\quad \myg\mapsto(\cint{3}),\\ \myh\mapsto(2,\cint{1}),\quad \myi\mapsto(\cint{2},1),\quad \myj\mapsto(3,1),\quad \myk\mapsto(3,\cint{1}),\quad \myl\mapsto(\cint{3},\cint{1}),\quad \mym\mapsto(3,\cint{3}). \end{align*} Let $\SEQ(I,\pi)$ be the set of $\boldsymbol{i}=(i_1,i_2,\dots)$ in $\SEQ(I)$ such that $\{k\geq 1\mid i_k\ne\mya\}$ is a finite set. For a nonnegative integer $t$, we define $\SHIFT_t(u)=u+t$ and $\SHIFT_t(\cint{u})=\cint{u+t}$ for a positive integer $u$. We also define $\SHIFT_t(\lambda)=(\SHIFT_t(\lambda_1),\dots,\SHIFT_t(\lambda_{\ell}))$ for a 2-colored partition $\lambda=(\lambda_1,\dots,\lambda_{\ell})$. Then, the map \begin{align*} \PAI:\SEQ(I,\pi)\to \TPAR, \end{align*} which sends $\boldsymbol{i}=(i_1,i_2,\dots)\in \SEQ(I,\pi)$ to the smallest (with respect to the length) 2-colored partition that contains $\SHIFT_{3k-3}(\pi(i_k))$ for any $k\geq 1$ such that $i_k\ne\mya$, is well-defined (see ~\cite[Definition 2.5.(2)]{TT}). It is clearly an injection. \begin{Ex} $\PAI(\boldsymbol{i})=(\cint{11},10,\cint{8},3,\cint{3})$ for $\boldsymbol{i}=(\mym,\mya,\mye,\myi,\mya,\mya,\dots)$. \end{Ex} The following translation result is verified by elementary means.
\begin{Prop}\label{iikae} For $\boldsymbol{i}\in\SEQ(I,\pi)$, the condition $\PAI(\boldsymbol{i})\in\BIR$ is equivalent to the condition that $\boldsymbol{i}$ does not contain any of the words in $J$, where \begin{align*} J &= \{\myf\myb,\myg\myb,\myj\myb,\myk\myb,\myl\myb,\mym\myb,\myl\myc,\mym\myc,\myj\myc,\myk\myc,\myf\myc,\myg\myc, \mym\myd,\myf\mye,\myj\mye,\myk\mye,\mym\mye,\myh\myi,\myf\myi,\myg\myi,\myj\myi,\myk\myi,\myl\myi,\mym\myi,\\ &\quad\quad \myf\myh,\myg\myh,\myj\myh,\myk\myh,\myl\myh,\mym\myh,\myf\myj,\myg\myj,\myj\myj,\myk\myj,\myl\myj,\mym\myj, \myf\myk,\myg\myk,\myj\myk,\myk\myk,\myl\myk,\mym\myk,\myf\myl,\myg\myl,\myj\myl,\myk\myl,\myl\myl,\mym\myl \}. \end{align*} Moreover, any $\lambda\in\BIR$ is obtained in this way (i.e., $\lambda=\PAI(\boldsymbol{i})$ for some $\boldsymbol{i}\in\SEQ(I,\pi)$). \end{Prop} Let $M=(Q,I,\delta,s,F)$ be the minimal DFA (with respect to the number of states) over $I$ such that $L(M)=I^{\ast}JI^{\ast}$. It is computable by the standard algorithms in formal language theory (see ~\cite[Appendix A]{TT} for a review). By running them (see Example \ref{gap} below), we see automatically that $Q=\{\JOO{0},\dots,\JOO{5}\}$, $s=\JOO{0}$, $F=\{\JOO{1}\}$ and that $\delta(q,j)$ for $q\in Q, j\in I$ is given by the table below.
\begin{center} \begin{tabular}{r|rrrrrrrrrrrrr} $q\backslash j$ & $\mya$ & $\myb$ & $\myc$ & $\myd$ & $\mye$ & $\myf$ & $\myg$ & $\myh$ & $\myi$ & $\myj$ & $\myk$ & $\myl$ & $\mym$ \\ \hline \JO{0} & \JO{0} & \JO{0} & \JO{0} & \JO{0} & \JO{0} & \JO{2} & \JO{4} & \JO{5} & \JO{0} & \JO{2} & \JO{2} & \JO{4} & \JO{3} \\ \JO{1} & \JO{1} & \JO{1} & \JO{1} & \JO{1} & \JO{1} & \JO{1} & \JO{1} & \JO{1} & \JO{1} & \JO{1} & \JO{1} & \JO{1} & \JO{1} \\ \JO{2} & \JO{0} & \JO{1} & \JO{1} & \JO{0} & \JO{1} & \JO{2} & \JO{4} & \JO{1} & \JO{1} & \JO{1} & \JO{1} & \JO{1} & \JO{3} \\ \JO{3} & \JO{0} & \JO{1} & \JO{1} & \JO{1} & \JO{1} & \JO{2} & \JO{4} & \JO{1} & \JO{1} & \JO{1} & \JO{1} & \JO{1} & \JO{3} \\ \JO{4} & \JO{0} & \JO{1} & \JO{1} & \JO{0} & \JO{0} & \JO{2} & \JO{4} & \JO{1} & \JO{1} & \JO{1} & \JO{1} & \JO{1} & \JO{3} \\ \JO{5} & \JO{0} & \JO{0} & \JO{0} & \JO{0} & \JO{0} & \JO{2} & \JO{4} & \JO{5} & \JO{1} & \JO{2} & \JO{2} & \JO{4} & \JO{3} \end{tabular} \end{center} \begin{Ex}\label{gap} An execution in the computer algebra system GAP (using the package \verb|automata|) for obtaining the minimal DFA is as follows. \noindent\verb|LoadPackage("automata");| \noindent\verb|A:=RationalExpression("fbUgbUjbUkbUlbUmbUlcUmcUjcUkcUfcUgcUmdUfeUjeUkeUmeUhiUfiUgiUjiUkiUliU| \noindent\verb|miUfhUghUjhUkhUlhUmhUfjUgjUjjUkjUljUmjUfkUgkUjkUkkUlkUmkUflUglUjlUklUllUml","abcdefghijklm");| \noindent\verb|Is:=RationalExpression("(aUbUcUdUeUfUgUhUiUjUkUlUm)*","abcdefghijklm");| \noindent\verb|r:=ProductRatExp(Is,ProductRatExp(A,Is));| \noindent\verb|M:=RatExpToAut(r);| \noindent\verb|Display(M);| \end{Ex} As in ~\cite[Theorem 3.17]{TT}, for each $q\in Q\setminus F(=\{\JOO{0},\JOO{2},\JOO{3},\JOO{4},\JOO{5}\})$, we define \begin{align*} \LC{q}=\PAI(\SEQ(I,\pi)\cap (L(M_q)^{c})^{\wedge}). \end{align*} By \eqref{usefulidentity}, we have \begin{align*} f_{\LC{q}}(x,q)=\sum_{\substack{a\in I \\ \delta(q,a)\not\in F}} x^{\ell(\pi(a))}q^{|\pi(a)|}f_{\LC{\delta(q,a)}}(xq^3,q).
\end{align*} \begin{Rem} By Proposition \ref{iikae}, we have $\LC{\JOO{0}}=\BIR$. Moreover, we coincidentally have $\LC{\JOO{2}}=\BIRP$ thanks to the following two facts. \begin{enumerate} \item For $\boldsymbol{i}\in\SEQ(I,\pi)$ such that $\PAI(\boldsymbol{i})\in\BIR$, the condition $\PAI(\boldsymbol{i})\in\BIRP$ is equivalent to the condition $i_1\ne \myb,\myc,\mye,\myh,\myi,\myj,\myk,\myl$, where $\boldsymbol{i}=(i_1,i_2,\dots)$. \item For $t\in I$, the condition $\delta(\JOO{2},t)\not\in F$ is equivalent to the condition $t\ne \myb,\myc,\mye,\myh,\myi,\myj,\myk,\myl$. Moreover, $\delta(\JOO{2},\mya)=\JOO{0}$. \end{enumerate} \end{Rem} Putting $F_i(x)=f_{\LC{\JOO{i}}}(x,q)$, we have the following system of $q$-difference equations. \begin{align*} \begin{pmatrix} F_0(x) \\ F_2(x) \\ F_3(x) \\ F_4(x) \\ F_5(x) \end{pmatrix} = \begin{pmatrix} 1+2xq+2xq^2+x^2q^3 & xq^3+2x^2q^4 & x^2q^6 & xq^3+x^2q^4 & x^2q^3 \\ 1+xq^2 & xq^3 & x^2q^6 & xq^3 & 0 \\ 1 & xq^3 & x^2q^6 & xq^3 & 0 \\ 1+2xq^2 & xq^3 & x^2q^6 & xq^3 & 0 \\ 1+2xq+2xq^2 & xq^3+2x^2q^4 & x^2q^6 & xq^3+x^2q^4 & x^2q^3 \\ \end{pmatrix} \begin{pmatrix} F_0(xq^3) \\ F_2(xq^3) \\ F_3(xq^3) \\ F_4(xq^3) \\ F_5(xq^3) \end{pmatrix}. \end{align*} By the Murray-Miller algorithm (see ~\cite[Appendix B]{TT} for a review), we get \tiny \begin{align*} {} &\frac{x^4(1-x)^2(2q^5+3x+q^2x)}{xq^8+3xq^6+2q^8}F_0(xq^3)\\ &- \frac{x^5(2q^5+6q^3+q^2-3)+x^4q^2(4q^6+2q^5+2q^4+6q^3+6q^2+2)+x^3q^4(q^{6}+4q^5-q^4+4q^3+q^2+4q+3)+2x^2q^{9}(1+q)}{xq^{14}+3xq^{12}+2q^{14}}F_0(x) \\ &+ \frac{x^3(q^5+3q^3+2q^2+6)+x^2q^2(2q^6+2q^5+2q^4+8q^3+6q^2+6q+4)+xq^6(q^{5}+3q^3+4q^2+4q+4)+2q^{11}}{xq^{11}+3xq^9+2q^{11}}F_0(xq^{-3}) \\ &- F_0(xq^{-6})=0.
\end{align*} \normalsize By applying the Murray-Miller algorithm after swapping the first and second rows and the first and second columns, we obtain \tiny \begin{align*} {} &\frac{x^4(1-x)(q^3-x)(q^7+x+q^2x)}{xq^{17}+xq^{15}+q^{19}}F_{2}(x) \\ &- \frac{x^5(2q^5+2q^3+q^2+1)+x^4q^3(2q^{7}+q^6+q^5+3q^4+2q^3+q^2+2q-1)+x^3q^7(q^{6}+q^{5}+2q^{4}+2q^2+q+1)+x^2q^{14}(1+q)}{xq^{20}+xq^{18}+q^{22}}F_{2}(xq^{-3}) \\ &+ \frac{x^3(q^5+q^3+2q^2+2)+x^2q^4(q^6+q^5+2q^4+3q^3+3q^2+2q+3)+xq^9(q^{5}+2q^{3}+2q^{2}+2q+1)+q^{16}}{xq^{14}+xq^{12}+q^{16}}F_{2}(xq^{-6}) \\ &- F_{2}(xq^{-9})=0. \end{align*} \normalsize The results in this section are summarized as follows. \begin{Prop}\label{PropqdiffR} The generating functions $f_{\BIR}(x,q)$ and $f_{\BIRP}(x,q)$ satisfy the following $q$-difference equations, respectively. \tiny \begin{align*} {} &{} (2+3xq^{4}+xq^{6})f_{\BIR}(x,q)\\ &= (2+4xq+4xq^{2}+4xq^{3}+3xq^{4}+xq^{6}+4x^{2}q^{3} +6x^{2}q^{4}+6x^{2}q^{5}+8x^{2}q^{6}+2x^{2}q^{7} +2x^{2}q^{8}+2x^{2}q^{9}+6x^{3}q^{7}+2x^{3}q^{9}+3x^{3}q^{10}+x^{3}q^{12})f_{\BIR}(xq^{3},q)\\ &- x^{2}q^{7}(2+2q+3xq+4xq^{2}+xq^{3}+4xq^{4}-xq^{5}+4xq^{6} +xq^{7}+2x^{2}q^{5}+6x^{2}q^{7}+6x^{2}q^{8}+2x^{2}q^{9}+2x^{2}q^{10}+4x^{2}q^{11}+3x^{3}q^{9} +x^{3}q^{11}\\ &+6x^{3}q^{12}+2x^{3}q^{14})f_{\BIR}(xq^{6},q)+ x^{4}q^{21}(1-xq^6)^2(2+3xq+xq^{3})f_{\BIR}(xq^{9},q),\\ {} &{} (1+xq^{5}+xq^{7})f_{\BIRP}(x,q) \\ &=(1+xq^{2}+2xq^{3}+2xq^{4}+2xq^{5}+xq^{7}+3x^{2}q^{6}+2x^{2}q^{7}+3x^{2}q^{8}+3x^{2}q^{9} +2x^{2}q^{10}+x^{2}q^{11}+x^{2}q^{12}+2x^{3}q^{11}+2x^{3}q^{13}+x^{3}q^{14}+x^{3}q^{16})f_{\BIRP}(xq^{3},q)\\ &-x^{2}q^{10}(1+q+xq^{2}+xq^{3}+2xq^{4}+2xq^{6}+xq^{7}+xq^{8}-x^{2}q^{7}+2x^{2}q^{8}+x^{2}q^{9} +2x^{2}q^{10}+3x^{2}q^{11}+x^{2}q^{12}+x^{2}q^{13}+2x^{2}q^{14}+x^{3}q^{13}+x^{3}q^{15}\\ &+2x^{3}q^{16}+2x^{3}q^{18})f_{\BIRP}(xq^{6},q)+x^{4}q^{27}(1-xq^6)(1-xq^9)(1+xq^{2}+xq^{4})f_{\BIRP}(xq^{9},q). \end{align*} \normalsize \end{Prop} \section{The cylindric partitions}\label{cylin} Let $r\geq 1$.
A profile $\boldsymbol{c}=(c_i)_{i\in\mathbb{Z}/r\mathbb{Z}}$ is a family of nonnegative integers indexed by $\mathbb{Z}/r\mathbb{Z}$. In this paper, when a profile is written in one-line format $(c_0,\dots,c_{r-1})$ (as in Example \ref{cwrecex}), the nonnegative integer $c_i$ is the one indexed by $i+r\mathbb{Z}\in\mathbb{Z}/r\mathbb{Z}$ in the profile for $0\leq i<r$. \begin{Def}[{\cite{GK}}] A cylindric partition $\boldsymbol{\lambda}=(\lambda^{(i)})_{i\in\mathbb{Z}/r\mathbb{Z}}$ of a profile $\boldsymbol{c}=(c_i)_{i\in\mathbb{Z}/r\mathbb{Z}}$ is a family of partitions indexed by $\mathbb{Z}/r\mathbb{Z}$ such that \begin{align*} \lambda^{(i)}_j\geq \lambda^{(i+1)}_{j+c_{i+1}} \end{align*} for $i\in\mathbb{Z}/r\mathbb{Z}$ and $j\geq 1$. As usual, we put $\mu_k=0$ for $\mu\in\PAR$ and $k>\ell(\mu)$. \end{Def} Let $\CP{\boldsymbol{c}}$ be the set of cylindric partitions of profile $\boldsymbol{c}=(c_i)_{i\in\mathbb{Z}/r\mathbb{Z}}$ and define \begin{align*} F_{\boldsymbol{c}}(x,q)=\sum_{\boldsymbol{\lambda}\in\CP{\boldsymbol{c}}}x^{\max(\boldsymbol{\lambda})}q^{|\boldsymbol{\lambda}|},\quad G_{\boldsymbol{c}}(x,q)=(xq;q)_{\infty}F_{\boldsymbol{c}}(x,q), \end{align*} where for a cylindric partition $\boldsymbol{\lambda}=(\lambda^{(i)})_{i\in\mathbb{Z}/r\mathbb{Z}}$ \begin{align*} \max(\boldsymbol{\lambda})=\max\{\lambda^{(i)}_1,\dots,\lambda^{(i)}_{\ell(\lambda^{(i)})}\mid i\in\mathbb{Z}/r\mathbb{Z}\},\quad |\boldsymbol{\lambda}|=\sum_{i\in\mathbb{Z}/r\mathbb{Z}}|\lambda^{(i)}|. \end{align*} \begin{Thm}[{\cite[Proposition 5.1]{Bor}}]\label{borp} For a profile $\boldsymbol{c}=(c_i)_{i\in\mathbb{Z}/r\mathbb{Z}}$, we have \begin{align*} F_{\boldsymbol{c}}(1,q)=\frac{1}{(q;q)_{\infty}}\chi_{A^{(1)}_{r-1}}(\sum_{i\in\mathbb{Z}/r\mathbb{Z}}c_i\Lambda_i). 
\end{align*} \end{Thm} \begin{Thm}[{\cite[Proposition 3.1$+$(3.5)]{CW}}]\label{cwrec} For a profile $\boldsymbol{c}=(c_i)_{i\in\mathbb{Z}/r\mathbb{Z}}$, we have \begin{align*} G_{\boldsymbol{c}}(x,q)=\sum_{\emptyset\ne J\subseteq I_{\boldsymbol{c}}}(-1)^{|J|-1}(xq;q)_{|J|-1}G_{\boldsymbol{c}(J)}(xq^{|J|}), \end{align*} where $I_{\boldsymbol{c}}=\{i\in\mathbb{Z}/r\mathbb{Z}\mid c_i>0\}$ and $\boldsymbol{c}(J)=(c_i(J))_{i\in\mathbb{Z}/r\mathbb{Z}}$ is defined by \begin{align*} c_i(J)= \begin{cases} c_i-1 & \textrm{(if $i\in J$ and $(i-1)\not\in J$)},\\ c_i+1 & \textrm{(if $i\not\in J$ and $(i-1)\in J$)},\\ c_i & \textrm{otherwise}. \end{cases} \end{align*} \end{Thm} \begin{Ex}\label{cwrecex} By the Corteel-Welsh recursion formula (Theorem \ref{cwrec}), we have \begin{align*} G_{(3,0,0)}(x) &= G_{(2,1,0)}(xq), \\ G_{(2,1,0)}(x) &= 2G_{(2,0,1)}(xq)-(1-xq)G_{(1,1,1)}(xq^2), \\ G_{(2,0,1)}(x) &= G_{(3,0,0)}(xq)+G_{(1,1,1)}(xq)-(1-xq)G_{(2,1,0)}(xq^2), \\ G_{(1,1,1)}(x) &= 3G_{(2,1,0)}(xq)-3(1-xq)G_{(2,0,1)}(xq^2)+(1-xq)(1-xq^2)G_{(1,1,1)}(xq^3). \end{align*} \end{Ex} By Example \ref{cwrecex}, we easily see \begin{align*} G_{(2,0,1)}(x) &= G_{(1,1,1)}(xq) + xq G_{(3,0,0)}(xq), \\ G_{(2,1,0)}(x) &= (1+xq)G_{(1,1,1)}(xq^2) + 2xq^2 G_{(3,0,0)}(xq^2), \\ G_{(3,0,0)}(x) &= (1+xq^2)G_{(1,1,1)}(xq^3) + 2xq^3 G_{(3,0,0)}(xq^3), \\ G_{(1,1,1)}(x) &= 3xq^3(1+xq)G_{(3,0,0)}(xq^3) + (1+2xq+2xq^2+x^2q^3)G_{(1,1,1)}(xq^3). \end{align*} By applying the Murray-Miller algorithm to \begin{align} \begin{pmatrix} G_{(3,0,0)}(x) \\ G_{(1,1,1)}(x) \end{pmatrix} = \begin{pmatrix} 2xq^3 & 1+xq^2 \\ 3xq^3(1+xq) & 1+2xq+2xq^2+x^2q^3 \end{pmatrix} \begin{pmatrix} G_{(3,0,0)}(xq^3) \\ G_{(1,1,1)}(xq^3) \end{pmatrix}, \label{Grel} \end{align} we obtain \tiny \begin{align*} \frac{xq^3(1+xq^{-1})(1-xq)(1-xq^2)}{xq^2+1}G_{(3,0,0)}(xq^3) +\frac{x^3q^3+x^2q^4+2x^2q^3+2x^2q^2+2x^2q+2xq^3+2xq^2+2xq+x+q}{xq^3+q}G_{(3,0,0)}(x) -G_{(3,0,0)}(xq^{-3})=0. 
\end{align*} \normalsize Similarly, we obtain \tiny \begin{align*} \frac{x(1+xq^{-2})(1-xq)(1-xq^2)}{xq^3+q^2}G_{(1,1,1)}(xq^3) +\frac{x^3q+2x^2q^3+2x^2q^2+2x^2q+x^2+xq^4+2xq^3+2xq^2+2xq+q^3}{xq^4+q^3}G_{(1,1,1)}(x)-G_{(1,1,1)}(xq^{-3})=0. \end{align*} \normalsize The results in this section are summarized as follows. \begin{Prop}\label{PropqdiffG} The generating functions $G_{(3,0,0)}(x,q)$ and $G_{(1,1,1)}(x,q)$ satisfy the following $q$-difference equations respectively. \begin{align*} {} &(1+xq^5)G_{(3,0,0)}(x,q)\\ &=(1+xq^{2}+2xq^{3}+2xq^{4}+2xq^{5}+2x^{2}q^{6}+2x^{2}q^{7}+2x^{2}q^{8}+x^{2}q^{9}+x^{3}q^{11})G_{(3,0,0)}(xq^{3},q)\\ &+ xq^6(1+xq^2)(1-xq^4)(1-xq^5)G_{(3,0,0)}(xq^{6},q),\\ {} &(1+xq^4)G_{(1,1,1)}(x,q)\\ &= (1+2xq^{1}+2xq^{2}+2xq^{3}+xq^{4}+x^{2}q^{3}+2x^{2}q^{4}+2x^{2}q^{5}+2x^{2}q^{6}+x^{3}q^{7})G_{(1,1,1)}(xq^{3},q)\\ &+ xq^{3}(1+xq)(1-xq^4)(1-xq^5)G_{(1,1,1)}(xq^6,q). \end{align*} \end{Prop} \section{Andrews-Gordon type series} \subsection{Certificate recurrences} Let $r\geq 1$ and we denote the set of maps from $\mathbb{Z}\times \mathbb{Z}^r$ to $\mathbb{Q}(q)$ by $\MAP(\mathbb{Z}\times \mathbb{Z}^r,\mathbb{Q}(q))$, with the variables $n,k_1,\dots,k_r$ in this order. Let $f\in \MAP(\mathbb{Z}\times \mathbb{Z}^r,\mathbb{Q}(q))$. The shift operators $\ENU$, $\KEI{1},\dots,\KEI{r}$ are defined by \begin{align*} \ENU f(n,k_1,\dots,k_r) &= f(n-1,k_1,\dots,k_r),\\ \KEI{i} f(n,k_1,\dots,k_r) &= f(n,k_1,\dots,k_{i-1},k_i-1,k_{i+1},\dots,k_r) \end{align*} for $1\leq i\leq r$. We say that $f$ is summable if \begin{align*} \{(k_1,\dots,k_r)\in\mathbb{Z}^r\mid f(n,k_1,\dots,k_r)\ne 0\} \end{align*} is a finite set for any $n\in\mathbb{Z}$. In this case, \begin{align} f_n=\sum_{k_1,\dots,k_r\in\mathbb{Z}}f(n,k_1,\dots,k_r) \label{efuenu} \end{align} is well-defined. 
Let $\NONC$ be the $\mathbb{Q}(q)$-subalgebra of $\END_{\textrm{$\mathbb{Q}(q)$-lin}}(\MAP(\mathbb{Z}\times \mathbb{Z}^r,\mathbb{Q}(q)))$ generated by the shift operators $\ENU$, $\KEI{1},\dots,\KEI{r}$ and the scalar multiplications $q^n$, $q^{k_1},\dots,q^{k_r}$. Note that in $\NONC$ we have \begin{align*} \ENU\KEI{i}=\KEI{i}\ENU,\quad q^{k_i}\ENU=\ENU q^{k_i},\quad q^{n}\KEI{i}=\KEI{i} q^{n},\\ \ENU q^{n} = q^{n-1}\ENU,\quad \KEI{j} q^{k_i} = q^{k_i-\delta_{ij}}\KEI{j},\quad q^{n}q^{k_i}=q^{k_i}q^{n}, \end{align*} for $1\leq i,j\leq r$. We denote by $\mathbb{Q}(q)[q^n]$ the $\mathbb{Q}(q)$-subalgebra generated by $q^n$ in $\NONC$, which is clearly a polynomial $\mathbb{Q}(q)$-algebra generated by $q^n$. The following is standard in deriving a recurrence relation (see ~\cite[\S3]{Rie} and ~\cite[Proposition 4.1]{Aig}). For completeness, we reproduce a proof. \begin{Prop} Assume that $f\in \MAP(\mathbb{Z}\times \mathbb{Z}^r,\mathbb{Q}(q))$ is summable and that there exist $J\geq 0$, $p_0,\dots,p_J\in\mathbb{Q}(q)[q^n]$ and $C_1,\dots,C_r\in\NONC$ such that \begin{align*} \left(\sum_{j=0}^{J}p_j\ENU^j + \sum_{i=1}^{r}(1-\KEI{i})C_i\right)f=0. \end{align*} Then, we have a $q$-holonomic recurrence (see ~\eqref{efuenu}) \begin{align*} p_0f_n+p_1f_{n-1}+\dots+p_Jf_{n-J}=0. \end{align*} \end{Prop} \begin{proof} Let $G_i=C_if$ for $1\leq i\leq r$ and note that each $G_i$ is summable. For $n\in\mathbb{Z}$, take $M>0$ so that $(k_1,\dots,k_r)\not\in\{-M,-M+1,\dots,M-1,M\}^{r}$ implies \begin{align*} 0=f(n,k_1,\dots,k_r)=\cdots=f(n-J,k_1,\dots,k_r)=G_1(n,k_1,\dots,k_r)=\dots=G_r(n,k_1,\dots,k_r). \end{align*} By applying the summation $\sum_{k_1=-M}^{M+1}\dots\sum_{k_r=-M}^{M+1}$ to \begin{align*} {} &\sum_{j=0}^{J}p_jf(n-j,k_1,\dots,k_r)\\ &= -\sum_{i=1}^{r}(G_i(n,k_1,\dots,k_{i-1},k_i,k_{i+1},\dots,k_r)-G_i(n,k_1,\dots,k_{i-1},k_i-1,k_{i+1},\dots,k_r)), \end{align*} we get the result. 
\end{proof} \subsection{Andrews-Gordon type series for $G_{(1,1,1)}(x,q)$}\label{gs111} \begin{Prop}\label{g111} We have \begin{align*} G_{(1,1,1)}(x,q)=\sum_{a,b,c,d\geq 0}\frac{q^{a^2+b^2+3c^2+3d^2+2ab+3ac+3ad+3bc+3bd+6cd}}{(q;q)_a(q;q)_b(q^3;q^3)_c(q^3;q^3)_d}x^{a+b+c+2d}. \end{align*} \end{Prop} \begin{proof} It is enough to show that the Andrews-Gordon type series in Proposition \ref{g111} satisfies the same $q$-difference equation as $G_{(1,1,1)}(x,q)$ in Proposition \ref{PropqdiffG} because the coefficients of $x^0$ (resp. $x^n$ for $n<0$) in both series are equal to 1 (resp. 0). Let \begin{align*} F(n,b,c,d)= \frac{q^{(n-b-c-2d)^2+b^2+3c^2+3d^2+2(n-b-c-2d)b+3(n-b-c-2d)(c+d)+3bc+3bd+6cd}}{(q;q)_{n-b-c-2d}(q;q)_b(q^3;q^3)_c(q^3;q^3)_d} \end{align*} and put \begin{align*} p_0 &= -2q^{6n+8}-q^{3n+12}+2q^{3n+8}+q^{12}, \quad r_0 = 0,\quad s_0 = (2q^{3n+8}+q^{12})(q^b-1),\\ q_0 &= (4q^{4n+9}+2q^{4n+8}+q^{n+12}+2q^{n+10})(q^{2n}-q^{b+2c+d})+(2q^{3n+8}+q^{12})(q^{n+2c+d}-q^b),\\ p_1 &= -(2q^{6n}+3q^{3n+4}+4q^{3n+3}+4q^{3n+2}+2q^{3n+1}+2q^{6}+2q^{5}+2q^{4})q^{3n+5},\\ q_1 &= (4q^{6n+6}+2q^{6n+5}+q^{3n+9}+2q^{3n+7})(q^{3n}C+q^{2+b}+(B-Cq^b)q^{n+2+2c+d})\\ &\quad+ (4q^{7n+8}+2q^{7n+7}-q^{4n+11}+2q^{4n+9}-q^{n+12})q^{b+2c+d}\\ &\quad+ q^{n+7}(2q^{3n}+q^{4})((1+C)q^{3n}+Dq^{1+b})q^{2c+d}+ (2q^{3n+3}+2q^{3n+1}+2q^{3n}+q^{5}+2q^{4})q^{3n+6},\\ r_1 &= (2q^{9n+5}+q^{6n+9}),\quad r_2 = (2q^{9n+3}+q^{6n+4})-(2q^{7n+4}+q^{4n+8})q^{b+2c+d},\\ s_1 &= (4q^{6n+8}+2q^{6n+7}+q^{3n+11}+2q^{3n+9})(1-q^b) + (2q^{3n}+q^4)q^{8+n+b+2c+d}, \\ p_2 &= 2q^{9n+4}+2q^{9n+3}-3q^{6n+8}+2q^{6n+5}+q^{6n+4}-q^{3n+9}, \\ q_2 &= ((4q^{10n+4}+2q^{10n+3}+q^{7n+7}+2q^{7n+5})(B+q^b)+2q^{10n+3}+q^{7n+7}-q^{4n+8+b}-2q^{7n+7+b})Cq^{2c+d}\\ &\quad +(-2q^{9n+4}+q^{6n+8}-2q^{6n+5})+2(q^{7n+7}+q^{7n+4}+q^{4n+8})q^{b+2c+d} \\ &\quad +(2q^{6n+7}+q^{3n+8})(q^{3n-4}C+Bq^{n+2c+d}+q^{n+2c+d}+q^{1+b}),\\ s_2 &= (2q^{6n+7}+q^{3n+8})(q(1-q^b)-2q^{n+b+2c+d}),\quad r_3 = (4q^{3n}+2q)q^{7n+b+2c+d},\quad s_3 = 0,\\ p_3 
&= -(2q^{9n+2}+q^{6n+3}), \quad q_3 = (2q^{9n}+q^{6n+1})(q^2+(1+q^b+B)Cq^{n+2c+d}). \end{align*} Let $\TTE=\varepsilon_0+\varepsilon_1N+\dots+\varepsilon_3N^3$ for $\varepsilon\in\{p,q,r,s\}$. One can check \begin{align*} (\TTP+(1-B)\TTQ+(1-C)\TTR+(1-D)\TTS)F(n,b,c,d)=0, \end{align*} where $NF(n,b,c,d)=F(n-1,b,c,d)$, $BF(n,b,c,d)=F(n,b-1,c,d)$, $CF(n,b,c,d)=F(n,b,c-1,d)$ and $DF(n,b,c,d)=F(n,b,c,d-1)$. Thus, we have \begin{align*} p_0f_n+p_1f_{n-1}+p_2f_{n-2}+p_3f_{n-3}=0. \end{align*} On the other hand, the $q$-difference equation for $G_{(1,1,1)}(x,q)$ in Proposition \ref{PropqdiffG} is equivalent to the claim that \begin{align*} p'_0g_n+p'_1g_{n-1}+p'_2g_{n-2}+p'_3g_{n-3}+p'_4g_{n-4}=0 \end{align*} holds for all $n\in\mathbb{Z}$. Here, \begin{align*} p'_0 &= -1+q^{3n},\\ p'_1 &= -q^{4}+2q^{3n-2}+2q^{3n-1}+2q^{3n}+q^{3n+1}+q^{6n-3},\\ p'_2 &= q^{3n-3}+2q^{3n-2}+2q^{3n-1}+2q^{3n}+q^{6n-8}-q^{6n-5}-q^{6n-4},\\ p'_3 &= q^{3n-2}-q^{6n-10}-q^{6n-9}+q^{6n-6},\\ p'_4 &= q^{6n-11}, \end{align*} and $g_{n}(q)$ is defined by $G_{(1,1,1)}(x,q)=\sum_{n\in\mathbb{Z}} g_{n}(q)x^n$. Note that $g_{n}=0$ for $n<0$. One can check that the following is equal to $(p'_0,p'_1,p'_2,p'_3,p'_4)$. \begin{align*} \frac{2q^{10n}+q^{7n+1}}{-2q^{3n+10}-q^{14}}(p_0,p_1,p_2,p_3,0) + \frac{q^{7n}}{-q^{16}}(0,\ENU p_0,\ENU p_1,\ENU p_2,\ENU p_3). \end{align*} \end{proof} \subsection{Andrews-Gordon type series for $G_{(3,0,0)}(x,q)$}\label{gs300} \begin{Prop}\label{g300} We have \begin{align*} G_{(3,0,0)}(x,q)=\sum_{a,b,c,d\geq 0}\frac{q^{a(a+1)+b(b+2)+3c(c+1)+3d(d+1)+2ab+3ac+3ad+3bc+3bd+6cd}}{(q;q)_a(q;q)_b(q^3;q^3)_c(q^3;q^3)_d}x^{a+b+c+2d}. \end{align*} \end{Prop} \begin{proof} The argument is the same as in \S\ref{gs111}. 
Let \begin{align*} F(n,a,c,d)= \frac{q^{a(a+1)+(n-a-c-2d)(n-a-c-2d+2)+3c(c+1)+3d(d+1)+3ac+3ad+(n-a-c-2d)(2a+3c+3d)+6cd}}{(q;q)_a(q;q)_{n-a-c-2d}(q^3;q^3)_c(q^3;q^3)_d} \end{align*} and put \begin{align*} r_1 &= (q^{3n+1}+q^{3n}+q^{4})q^{6n+5}, \quad p_0 = (1-q^{3n})q^{-6n}r_1,\quad s_0 = q^{-6n}r_1(q^{a}-1),\quad r_0 = 0,\\ q_0 &= (q^{3n+2}+3q^{3n+1}+2q^{3n}+q^4+q^2+q)q^{n+5}(q^{2n}-q^{a+2c+d}) + q^{-6n}r_1(q^{n+2c+d}-q^a),\\ p_1 &= -(r_1+2q^{6n+8}+4q^{6n+7}+4q^{6n+6}+2q^{6n+5}+q^{3n+10}+2q^{3n+9}+2q^{3n+8}+q^{3n+7}), \\ p_3 &= -(q^{3n+1}+q^{3n}+q)q^{6n+2}, \quad t = (q^{3n+2}+3q^{3n+1}+2q^{3n}+q^{4}+q^2+q)q^{3n+6},\\ q_1 &= tq^{-1}(Aq^{n+2c+d}+q^{1+a}-Cq^{n+2+a+2c+d}+Cq^{3n})+ r_1(Cq^2+1+Dq^{-3n+1+a})q^{-2n+2c+d} \\ &\quad + (2r_1-p_3q^5)q^{-3n} + (q^nt + q^{-5n+7}p_3)q^{a+2c+d}, \\ s_1 &= t(1-q^a)+r_1q^{-5n+1+a+2c+d},\quad r_3 = (q^{3n+2}+2q^{3n+1}+q^{3n}+q^2+q)q^{7n+2+a+2c+d},\\ p_2 &= (q^{6n+2}+2q^{6n+1}+q^{6n}-q^{3n+6}-q^{3n+5}-q^{3n+4}+q^{3n+2}+2q^{3n+1}-q^{6})q^{3n+3}, \\ q_2 &= tACq^{4n-2+2c+d}-p_3(Aq^{-2n+4+2c+d}+Cq^{2}+q^{-2n+4+2c+d}+q^{-3n+6+a})+ r_1Cq^{n-1+2c+d}\\ &\quad +(q^{4n-1}t+p_3q^{-2n+6})Cq^{a+2c+d}+ p_3q^{2}+q^{6n+4}(-1+q^3)+ (r_1-q^5 p_3)q^{-2n+a+2c+d}, \\ r_2 &= -p_3q^2 - r_1q^{-2n+a+2c+d}, \quad q_3 = -p_3(1+C(A+1+q^{1+a})q^{n+2c+d}),\quad s_3 = 0, \\ s_2 &= -p_3q^{-3n+6}(1-q^a) -(q^{3n+2}+2q^{3n+1}+q^{3n}+q^{2}+q)q^{4n+6+a+2c+d}. \end{align*} Let $\TTE=\varepsilon_0+\varepsilon_1N+\dots+\varepsilon_3N^3$ for $\varepsilon\in\{p,q,r,s\}$. One can check \begin{align*} (\TTP+(1-A)\TTQ+(1-C)\TTR+(1-D)\TTS)F(n,a,c,d)=0, \end{align*} where $NF(n,a,c,d)=F(n-1,a,c,d)$, $AF(n,a,c,d)=F(n,a-1,c,d)$, $CF(n,a,c,d)=F(n,a,c-1,d)$ and $DF(n,a,c,d)=F(n,a,c,d-1)$. The rest is similar. 
\end{proof} \subsection{A proof of Theorem \ref{RRidentAG} for $\BIR$}\label{proof} As in \S\ref{gs111}, we show that the Andrews-Gordon type series in Theorem \ref{RRidentAG} for $\BIR$ satisfies the same $q$-difference equation as $f_{\BIR}(x,q)$ in Proposition \ref{PropqdiffR}. Let \begin{align*} F(n,b,c,d)= \frac{q^{(n-b-2c-2d)^2+b^2+3c^2+3d^2+2(n-b-2c-2d)b+3(n-b-2c-2d)(c+d)+3bc+3bd+6cd}}{(q;q)_{n-b-2c-2d}(q;q)_b(q^3;q^3)_c(q^3;q^3)_d} \end{align*} and put \tiny \begin{align*} p_0 &= -12q^{6n+21}-q^{3n+29}-6q^{3n+27}-9q^{3n+25}+12q^{3n+21}+q^{29}+6q^{27}+9q^{25},\quad r_0 = s_0 = 6(q^{2}+3)(q^3-1)q^{3n+21}(1-q^b),\\ t &= 36q^{3n+1}+12q^{3n}+q^{8}+9q^{6}+27q^{4}+27q^2,\quad q_0 = tq^{n+21}(q^{2n}-q^{b+c+d}) + 6(q^{2}+3)(q^3-1)q^{3n+21}(q^{b}-q^{n+c+d}), \\ p_1 &= -2(3q^{3n+5}+9q^{3n+3}+9q^{3n+2}+12q^{3n+1}+3q^{3n}+q^{10}+q^{9}+4q^{8}+6q^{7}+6q^{6}+12q^{5}+9q^{4}+9q^{3})q^{3n+19},\\ q_1 &= (6q(q^{2}+3)(1-q^3)(C+D)+36q^{3n+1}+12q^{3n}+q^{8}+15q^{6}+45q^{4}-6q^{3}+27q^{2}-18q)q^{4n+20+b+c+d} \\ &\quad+ q^{3n+20}((12q^{3n}+q^{8}+6q^{6}+9q^4)(1+q^{-1}+q^{n+c+d})+t(q^b+q^{n+c+d}B)),\\ r_1 &= s_1 = tq^{3n+20}(1-q^b)+6(q^{2}+3)(1-q^3)q^{4n+21}q^{b+c+d},\\ p_2 &= (12q^{6n+1}+12q^{6n}+q^{3n+9}-8q^{3n+8}-6q^{3n+7}+12q^{3n+6}-30q^{3n+5}+12q^{3n+4}\\ &\quad\quad -6q^{3n+3}+12q^{3n+2}+9q^{3n+1}-q^{13}-6q^{11}-2q^{10}-9q^{9}-12q^{8}-18q^{6})q^{3n+16},\\ q_2 &= (-12q^{3n+1}+12q^{3n}-q^{9}-2q^{8}-6q^{6}+12q^{5}-6q^{4}+12q^{3}-18q^2+9q)q^{6n+16}\\ &\quad+ (6q^{3n+5}+18q^{3n+3}-6q^{3n+2}-6q^{3n}+q^{8}+6q^{6}+9q^{4})(B+1)q^{4n+17+c+d}+ 6(q^{2}+3)(1-q^3)q^{6n+18+b}\\ &\quad+ (36q^{3n+3}+12q^{3n+2}+12q^{3n}+q^{10}+10q^{8}+33q^{6}+36q^{4})q^{4n+17+b+c+d},\\ r_2 &= q^{3n+18}(6(q^{2}+3)(q^3-1)q^{3n}D(q^b-1)+ (6q^{3n+5}+18q^{3n+3}-6q^{3n+2}-6q^{3n}+q^{8}+6q^{6}+9q^{4})-(12q^{3n}+q^{5}+6q^{3}+9q)q^{n+2+b+c+d}),\\ s_2 &= (12q^{3n}+q^{8}+6q^{6}+9q^4)q^{3n+18}+ 6(q^{2}+3)(q^3-1)q^{6n+18}q^b- 3(12q^{3n}+q^{5}+6q^{3}+9q)q^{4n+20+b+c+d},\\ p_3 &= 
(6q^{3n+5}+24q^{3n+4}-6q^{3n+3}+18q^{3n+2}+6q^{3n}+4q^{11}+2q^{10}+14q^{9}+8q^{8}+12q^{7}+6q^{6}+18q^{5})q^{6n+12},\\ q_3 &= ((12q^{3n}-2q^{8}-6q^{6}+3q^{5}+12q^{3}+9q)(B+1+q^b)+6(q^{2}+3)(q^3-1)q^{b+3}CD)q^{7n+13+c+d}\\ &\quad+ (-6q^{3n+5}-12q^{3n+4}+18q^{3n+3}-6q^{3n+2}-6q^{3n}-q^{10}-q^{9}-5q^{8}-6q^{7}-3q^{6}-9q^{5}+9q^{4})q^{6n+12}-tq^{6n+14+b},\\ r_3 &= tq^{6n+14}D(q^b-1)+ 6(q^2+3)(q^3-1)Dq^{7n+16+b+c+d}+ (12q^{3n}-2q^{8}-6q^{6}+3q^{5}+12q^{3}+9q)q^{6n+15},\\ s_3 &= -(24q^{3n+1}+12q^{3n}+2q^{9}+q^{8}+6q^{7}+6q^{6}+15q^{4}+18q^{2})q^{6n+14}+tq^{6n+14+b}+ 6(q^3-1)(q^2+3)q^{7n+16+b+c+d},\\ p_4 &= (-12q^{6n}+24q^{3n+9}+8q^{3n+8}+24q^{3n+6}-9q^{3n+5}-18q^{3n+3}-9q^{3n+1}+2q^{14}+12q^{12}+q^{11}+18q^{10}+6q^{9}+9q^7)q^{6n+6}\\ q_4 &= -(12q^{3n}+q^{5}+6q^{3}+9q)(B+1+q^b)q^{7n+12+c+d}+ (12q^{3n}-2q^{8}-6q^{6}+3q^{5}+12q^{3}+9q)q^{9n+6},\\ r_4 &= (-6q^{3n+5}-18q^{3n+3}+6q^{3n+2}+6q^{3n}-q^{8}-6q^{6}-9q^{4})q^{6n+9}D-(12q^{3n}+q^{5}+6q^{3}+9q)q^{6n+12}(q^3-3Dq^{n+b+c+d}),\\ s_4 &= ((-12q^{3n+6}-6q^{3n+5}-18q^{3n+3}+6q^{3n+2}+6q^{3n}-q^{11}-6q^{9}-q^{8}-9q^{7}-6q^{6}-9q^{4})+3(12q^{3n}+q^{5}+6q^{3}+9q)q^{n+3+b+c+d})q^{6n+9},\\ p_5 &= 2(12q^{3n}-q^{8}-3q^{6}+2q^{5}+9q^{3}+9q)q^{9n+3},\quad q_5 = -(12q^{3n}+q^{5}+6q^{3}+9q)q^{9n+3},\\ s_5 &= (-12q^{3n}+2q^{8}+6q^{6}-3q^{5}-12q^{3}-9q)q^{9n+3},\quad r_5 = s_5D,\quad p_6 = -s_6 = -(12q^{3n}+q^{5}+6q^{3}+9q)q^{9n}, \quad r_6 = s_6 D, \quad q_6=0. \end{align*} \normalsize Let $\TTE=\varepsilon_0+\varepsilon_1N+\dots+\varepsilon_6N^6$ for $\varepsilon\in\{p,q,r,s\}$. One can check \begin{align*} (\TTP+(1-B)\TTQ+(1-C)\TTR+(1-D)\TTS)F(n,b,c,d)=0, \end{align*} where $NF(n,b,c,d)=F(n-1,b,c,d)$, $BF(n,b,c,d)=F(n,b-1,c,d)$, $CF(n,b,c,d)=F(n,b,c-1,d)$ and $DF(n,b,c,d)=F(n,b,c,d-1)$. The rest is similar. \subsection{A proof of Theorem \ref{RRidentAG} for $\BIRP$}\label{proofp} The argument is the same as in \S\ref{proof}. 
Let \begin{align*} F(n,a,c,d)= \frac{q^{a(a+1)+(n-a-2c-2d)(n-a-2c-2d+2)+3c(c+1)+3d(d+1)+3ac+3ad+(n-a-2c-2d)(2a+3c+3d)+6cd}}{(q;q)_a(q;q)_{n-a-2c-2d}(q^3;q^3)_c(q^3;q^3)_d} \end{align*} and put \tiny \begin{align*} p_0 &= (-q^{6n+2}-q^{6n+1}-q^{6n}-q^{3n+8}-2q^{3n+6}-q^{3n+4}+q^{3n+2}+q^{3n+1}+q^{3n}+q^8+2q^6+q^4)q^{12}, \\ q_0 &=(q^{7}+q^{6}+2q^{5}-2q^{2}-q-1)q^{2n+12}(Cq^{n}-Cq^{a+2c+2d}+Dq^{n}-Dq^{a+2c+2d}-q^n-q^{2n+c+d}+q^{n+a}+q^{a+2c+2d})\\ &\quad +(q^{3n+4}+2q^{3n+3}+4q^{3n+2}+3q^{3n+1}+2q^{3n}+q^{8}+q^{7}+3q^{6}+3q^{5}+3q^{4}+3q^{3}+q^{2}+q)q^{n+12}(q^{n+c+d}-q^{a})q^{c+d},\\ r_0 &= (q^{7}+q^{6}+2q^{5}-2q^{2}-q-1)q^{3n+12}(2-q^a-q^{-n+a+2c+2d}+Dq(q^{-n+a+2c+2d}-1)), \\ s_0 &= q^{3n+12}(q^3-1)(q^2+1)((-q^3+q^2+q+2)-q^a(q^2+q+1)+q^{-n+a+2c+2d}(q^3-1)),\\ p_1 &= -(q^{3n+6}+q^{3n+5}+3q^{3n+4}+3q^{3n+3}+5q^{3n+2}+3q^{3n+1}+2q^{3n}+q^{10}+q^9+3q^8+3q^7+5q^6+4q^5+4q^4+2q^3+q^2)q^{3n+12}, \\ q_1 &= (-q^{3n+5}-2q^{3n+4}-4q^{3n+3}-2q^{3n+2}-q^{3n+1}+q^{3n}-q^9-3q^7-q^6-3q^5-2q^4-q^3-q^2)q^{3n+12}\\ &\quad + (q^{3n+2}+q^{3n+1}+q^{3n}+q^8+2q^6+q^4)q^{4n+12+c+d}+ (q^{3n+5}+q^{3n+4}+2q^{3n+3}-q^{3n}+q^6+2q^4+q^2)q^{2n+14+2c+2d}\\ &\quad + (q^{3n+4}+2q^{3n+3}+4q^{3n+2}+3q^{3n+1}+2q^{3n}+q^8+q^7+3q^6+3q^5+3q^4+3q^3+q^2+q)q^{3n+13+a}\\ &\quad + (-q^{3n+7}-q^{3n+6}-q^{3n+5}+2q^{3n+4}+4q^{3n+3}+5q^{3n+2}+3q^{3n+1}+q^{3n}+q^9+q^8+3q^7+3q^6+3q^5+3q^4+q^3+q^2)q^{2n+13+a+2c+2d}, \\ r_1 &= -(q-1)(q^2+1)(q^2+q+1)^2q^{4n+14}Dq^{c+d}\\ &\quad + (2q^{3n+4}+4q^{3n+3}+7q^{3n+2}+5q^{3n+1}+3q^{3n}+q^8+2q^7+4q^6+6q^5+5q^4+6q^3+2q^2+2q)q^{3n+13}\\ &\quad -(q^{3n+4}+2q^{3n+3}+4q^{3n+2}+3q^{3n+1}+2q^{3n}+q^8+q^7+3q^6+3q^5+3q^4+3q^3+q^2+q)q^{3n+13+a}\\ &\quad -(q^{3n+4}+2q^{3n+3}+3q^{3n+2}+2q^{3n+1}+q^{3n}+q^7+q^6+3q^5+2q^4+3q^3+q^2+q)q^{2n+14+a+2c+2d},\\ s_1 &= (2q^{3n+4}+4q^{3n+3}+7q^{3n+2}+5q^{3n+1}+3q^{3n}+q^8+2q^7+4q^6+6q^5+5q^4+6q^3+2q^2+2q)q^{3n+13}\\ &\quad -(q-1)(q^2+1)(q^2+q+1)^2q^{4n+14+c+d}\\ &\quad 
-(q^{3n+4}+2q^{3n+3}+4q^{3n+2}+3q^{3n+1}+2q^{3n}+q^8+q^7+3q^6+3q^5+3q^4+3q^3+q^2+q)q^{3n+13+a}\\ &\quad -(q^{3n+4}+2q^{3n+3}+3q^{3n+2}+2q^{3n+1}+q^{3n}+q^7+q^6+3q^5+2q^4+3q^3+q^2+q)q^{2n+14+a+2c+2d}, \\ p_2 &= (q^{6n+3}+2q^{6n+2}+2q^{6n+1}+q^{6n}-q^{3n+10}-2q^{3n+8}-2q^{3n+6}+q^{3n+5}\\ &\quad\quad -q^{3n+4}+2q^{3n+3}+2q^{3n+1}-q^{13}-2q^{11}-2q^{10}-q^{9}-4q^{8}-2q^{6})q^{3n+10}, \\ q_2 &= (q-1)(q^2+1)(q^2+q+1)^2CDq^{5n+14+a+2c+2d}+ (-q^{3n+2}-q^{3n+1}-q^{3n}-2q^8-3q^6+2q^5-q^4+2q^3-q^2-1)q^{6n+11}\\ &\quad + (q^{3n+5}+q^{3n+4}+2q^{3n+3}-q^{3n}+q^6+2q^4+q^2)q^{4n+12+c+d}\\ &\quad + (q^{3n+2}+q^{3n+1}+q^{3n}-q^7+q^4+2q^3+q^2+q)q^{5n+13+2c+2d}- (q-1)(q^2+1)(q^2+q+1)^2q^{6n+12+a}, \\ r_2 &= (q-1)(q^2+1)(q^2+q+1)^2q^{6n+12}D(q^a-1+q^{-n+2+a+2c+2d})+ (q^{3n+5}+q^{3n+4}+2q^{3n+3}-q^{3n}+q^6+2q^4+q^2)q^{3n+14}, \\ s_2 &= (q^{3n+2}+q^{3n+1}+q^{3n}+q^8+2q^6+q^4)q^{3n+12} + (q-1)(q^2+1)(q^2+q+1)^2q^{6n+12+a}(1+q^{-n+2+2c+2d}), \\ p_3 &= (q^{3n+6}+2q^{3n+5}+4q^{3n+4}+2q^{3n+3}+2q^{3n+2}+q^{3n}+2q^{11}+q^{10}+3q^{9}+2q^{8}+3q^{7}+2q^{6}+3q^{5}+q^3-q^2)q^{6n+8}, \\ q_3 &= (-q^{3n+6}+2q^{3n+3}-q^{3n}-q^9-2q^8+q^6-q^5+2q^4+q^2)q^{6n+8}\\ &\quad + (q^{3n+2}+q^{3n+1}+q^{3n}-q^7+q^4+2q^3+q^2+q)q^{7n+9+c+d}- s_6q^{-4n+14+2c+2d} \\ &\quad - (q^{3n+4}+2q^{3n+3}+4q^{3n+2}+3q^{3n+1}+2q^{3n}+q^8+q^7+3q^6+3q^5+3q^4+3q^3+q^2+q)q^{6n+10+a}, \\ r_3 &= -(2q^{3n+4}+4q^{3n+3}+7q^{3n+2}+5q^{3n+1}+3q^{3n}+q^8+2q^7+4q^6+6q^5+5q^4+6q^3+2q^2+2q)q^{6n+10}D\\ &\quad + (q^{3n+4}+2q^{3n+3}+4q^{3n+2}+3q^{3n+1}+2q^{3n}+q^8+q^7+3q^6+3q^5+3q^4+3q^3+q^2+q)Dq^{6n+10+a}\\ &\quad + (q^{3n+4}+2q^{3n+3}+3q^{3n+2}+2q^{3n+1}+q^{3n}+q^7+q^6+3q^5+2q^4+3q^3+q^2+q)Dq^{5n+13+a+2c+2d}\\ &\quad + (q^{3n+2}+q^{3n+1}+q^{3n}-q^7+q^4+2q^3+q^2+q)q^{6n+12}, \\ s_3 &= -(q^{3n+4}+3q^{3n+3}+6q^{3n+2}+5q^{3n+1}+3q^{3n}+q^9+q^8+2q^7+3q^6+4q^5+4q^4+5q^3+2q^2+2q)q^{6n+10}\\ &\quad + (q^{3n+4}+2q^{3n+3}+4q^{3n+2}+3q^{3n+1}+2q^{3n}+q^8+q^7+3q^6+3q^5+3q^4+3q^3+q^2+q)q^{6n+10+a}\\ &\quad + 
(q^{3n+4}+2q^{3n+3}+3q^{3n+2}+2q^{3n+1}+q^{3n}+q^7+q^6+3q^5+2q^4+3q^3+q^2+q)q^{5n+13+a+2c+2d}, \\ p_4 &= (-q^{6n+2}-q^{6n+1}-q^{6n}+2q^{3n+11}+3q^{3n+10}+3q^{3n+9}+2q^{3n+8}+q^{3n+7}-q^{3n+5}\\ &\quad\quad -q^{3n+4}-2q^{3n+3}-q^{3n+2}-q^{3n+1}+2q^{14}+4q^{12}+q^{11}+2q^{10}+2q^{9}+q^{7})q^{6n+3},\\ s_6 &= (q^{3n+2}+q^{3n+1}+q^{3n}+q^{5}+2q^{3}+q)q^{9n},\quad q_4 = -(s_5 +s_6q^{-2n+8+c+d}), \\ r_4 &= -(q^{3n+5}+q^{3n+4}+2q^{3n+3}-q^{3n}+q^{6}+2q^{4}+q^{2})q^{6n+8}D-s_6q^{-3n+12}, \\ s_4 &= -(q^{3n+6}+2q^{3n+5}+2q^{3n+4}+2q^{3n+3}-q^{3n}+q^{9}+2q^{7}+q^{6}+q^{5}+2q^{4}+q^{2})q^{6n+8}, \\ p_5 &= (q^{3n+5}+q^{3n+4}+q^{3n+3}+q^{3n+2}+q^{3n+1}+q^{3n}-q^{10}+q^{7}+2q^{6}+2q^{5}+q^{4}+2q^{3}+q)q^{9n}, \\ q_5 &= p_6 = -s_6, \quad s_5 = -q^{9n+3}(q^{3n+2}+q^{3n+1}+q^{3n}-q^{7}+q^{4}+2q^{3}+q^{2}+q), \quad r_5 = s_5D, \quad q_6 = 0,\quad r_6 = s_6 D. \end{align*} \normalsize Let $\TTE=\varepsilon_0+\varepsilon_1N+\dots+\varepsilon_6N^6$ for $\varepsilon\in\{p,q,r,s\}$. One can check \begin{align*} (\TTP+(1-A)\TTQ+(1-C)\TTR+(1-D)\TTS)F(n,a,c,d)=0, \end{align*} where $NF(n,a,c,d)=F(n-1,a,c,d)$, $AF(n,a,c,d)=F(n,a-1,c,d)$, $CF(n,a,c,d)=F(n,a,c-1,d)$ and $DF(n,a,c,d)=F(n,a,c,d-1)$. The rest is similar. \subsection{A proof of Theorem \ref{RRbiiden}}\label{finalsec} By Proposition \ref{g111}, Proposition \ref{g300} and Theorem \ref{RRidentAG}, we have $f_{\BIR}(q)=G_{(3,0,0)}(1,q)$ and $f_{\BIRP}(q)=G_{(1,1,1)}(1,q)$. Thanks to ~\eqref{charcalc} and Theorem \ref{borp}, we get the results.
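Both $q$-difference equations in Proposition \ref{PropqdiffG} can be tested against the Andrews-Gordon type series of Propositions \ref{g111} and \ref{g300} to any finite order in $q$. The following SymPy sketch (an informal aside, not part of the original argument; the truncation order $N=9$ is an arbitrary choice) performs such a finite-order check.

```python
import sympy as sp
from functools import lru_cache

q, x = sp.symbols('q x')
N = 9  # verify both q-difference equations modulo q^N (finite-order check only)

@lru_cache(maxsize=None)
def inv_poch(n, step=1):
    """Taylor expansion of 1/(q^step; q^step)_n, truncated below q^N."""
    prod = sp.Integer(1)
    for i in range(1, n + 1):
        prod *= 1 - q**(step * i)
    return sp.series(1 / prod, q, 0, N).removeO()

def AG(expo, xx):
    """Andrews-Gordon sum with q-exponent expo(a,b,c,d) and x-weight a+b+c+2d;
    only summands contributing below q^N are kept (expo >= a^2, b^2, 3c^2, 3d^2)."""
    B = int(sp.sqrt(N)) + 1
    total = sp.Integer(0)
    for a in range(B):
        for b in range(B):
            for c in range(B):
                for d in range(B):
                    E = expo(a, b, c, d)
                    if E < N:
                        total += (q**E * xx**(a + b + c + 2*d) * inv_poch(a)
                                  * inv_poch(b) * inv_poch(c, 3) * inv_poch(d, 3))
    return sp.expand(total)

def trunc(e):
    """Drop every monomial of q-degree >= N; only the lower degrees are exact."""
    p = sp.Poly(sp.expand(e), q)
    return sp.expand(sum(cf * q**m for (m,), cf in p.terms() if m < N))

e111 = lambda a, b, c, d: (a*a + b*b + 3*c*c + 3*d*d + 2*a*b
                           + 3*a*c + 3*a*d + 3*b*c + 3*b*d + 6*c*d)
e300 = lambda a, b, c, d: e111(a, b, c, d) + a + 2*b + 3*c + 3*d

d111 = trunc((1 + x*q**4) * AG(e111, x)
             - (1 + 2*x*q + 2*x*q**2 + 2*x*q**3 + x*q**4 + x**2*q**3 + 2*x**2*q**4
                + 2*x**2*q**5 + 2*x**2*q**6 + x**3*q**7) * AG(e111, x*q**3)
             - x*q**3*(1 + x*q)*(1 - x*q**4)*(1 - x*q**5) * AG(e111, x*q**6))
d300 = trunc((1 + x*q**5) * AG(e300, x)
             - (1 + x*q**2 + 2*x*q**3 + 2*x*q**4 + 2*x*q**5 + 2*x**2*q**6 + 2*x**2*q**7
                + 2*x**2*q**8 + x**2*q**9 + x**3*q**11) * AG(e300, x*q**3)
             - x*q**6*(1 + x*q**2)*(1 - x*q**4)*(1 - x*q**5) * AG(e300, x*q**6))
```

Every monomial of $q$-degree below $N$ is computed exactly, so the vanishing of the truncated differences `d111` and `d300` confirms both identities modulo $q^{9}$.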
TITLE: Ergodicity of Geodesic Flow QUESTION [3 upvotes]: I know the Birkhoff ergodic theorem; I know what a Riemannian manifold is and what a geodesic is. I also read the definition of the geodesic flow on the tangent bundle of such a manifold. But I do not yet know the meaning of the statement "the geodesic flow is ergodic", of course with some added conditions, such as constant negative curvature. Could someone please give me a reference for an introduction to this topic, and how does this ergodicity relate to Birkhoff's ergodic theorem? REPLY [1 votes]: You can regard every flow as an action of the additive group $\mathbb R$; here the geodesic flow acts on the unit tangent bundle. So "the geodesic flow is ergodic" means that the aforementioned action is ergodic.
TITLE: Difference of Lebesgue-measurable sets QUESTION [0 upvotes]: Let $A, B\subset\mathbb{R}$ be two Lebesgue-measurable sets of positive Lebesgue measure. Prove that $A-B$ contains an interval. I thought that, the measure being positive, the Hausdorff dimension of the two sets is 1. But I don't know how to go further. The hint says: "use the convolution of the characteristic functions of the two sets" but I can't really figure out how to use it. REPLY [0 votes]: It is enough to consider the case when $A$ and $B$ are bounded. Let $f=\chi_A * \chi_B$. It is a general fact that the convolution of two functions in $L^{1} \cap L^{\infty}$ is continuous. [This is proved by approximating these functions in the $L^{1}$ norm by continuous functions with compact support.] Now $\int f(x)dx=\int \chi_A(x)dx \int \chi_B(x)dx=m(A)m(B) >0$. Hence $f(x_0) >0$ for some $x_0$, and there exists $\epsilon >0$ such that $f(x) >0$ for all $x \in (x_0-\epsilon ,x_0+ \epsilon)$. So for any $x$ in this interval there exists $y$ such that $\chi_A(x-y)\chi_B(y) >0$, which means $y \in B \cap (x-A)$. It follows that $x =(x-y)+y \in A+B$. We have proved that $A+B$ contains an interval. For $A-B$ just replace $B$ by $-B$.
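Here is a quick numerical illustration of the mechanism (my own sketch, with the toy choice $A=B=[0,1]$): the convolution $\chi_A * \chi_B$ is continuous and strictly positive on an open interval, which is exactly what the proof exploits.

```python
import numpy as np

dx = 1e-3
grid = np.arange(-1.0, 3.0, dx)
chi = ((grid >= 0) & (grid <= 1)).astype(float)  # indicator of A = B = [0, 1]
# Riemann-sum approximation of (chi_A * chi_B)(x); output index k <-> x = -2 + k*dx
conv = np.convolve(chi, chi) * dx
x_to_k = lambda t: int(round((t + 2.0) / dx))
# (chi * chi) is the triangle function: zero outside (0, 2), peak 1 at x = 1
peak = conv[x_to_k(1.0)]
```

Positivity of the continuous function `conv` near its peak is what forces $A+B$ (here $[0,2]$) to contain an interval.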
TITLE: Is $\mathbb{Z} \oplus \mathbb{Z} \oplus \mathbb{Z}_{4} \oplus \mathbb{Z}_{8}$ a free abelian group? QUESTION [0 upvotes]: Is $\mathbb{Z} \oplus \mathbb{Z} \oplus \mathbb{Z}_{4} \oplus \mathbb{Z}_{8}$ a free abelian group? Clearly this group has a torsion component and a torsion-free component. The torsion-free component is a free abelian group with basis $\{(1,0),(0,1)\}$, but what about the torsion component? If it is a free abelian group, what would its basis be? REPLY [0 votes]: (1) $\mathbb{Z}_4$ is not a free abelian group: for any subset $S$ of this group not containing $0$, take a map from $S$ to $\mathbb{Z}_3$ which sends some element $s\in S$ to a generator of $\mathbb{Z}_3$; it cannot be extended to a homomorphism, since every homomorphism $\mathbb{Z}_4\to\mathbb{Z}_3$ is trivial. (2) Every subgroup of a free abelian group is free abelian.
TITLE: What is known about the space of measure-preserving transformations? QUESTION [4 upvotes]: I started reading about measure-preserving transformations, the ergodic theorems and mixing, but I was also wondering what is known about the space of measure-preserving transformations. The books that I have on ergodic theory (P. Walters, K. Petersen) don't really address that, and I've only seen one reference to the Halmos book, where he supposedly defines a metric on this space. But is this, for example, a Banach space? I would assume it's not, but I'd like to hear more on it and where I could get some information on this. REPLY [3 votes]: Let me mention some related results. Given a compact metric space $X$, the set $Y$ of all Borel probability measures on $X$ is metrizable and in fact the induced topology on $Y$ makes it compact. The first property is a more or less easy consequence of the separability of the space $C(X)$ of continuous functions on $X$ with the supremum norm, which allows us to define a distance on $Y$ by $$ d(\mu,\nu)=\sum_{n=1}^\infty\frac1{2^n}\left|\int_X\phi_n\,d\mu-\int_X\phi_n\,d\nu\right|, $$ where $(\phi_n)$ is any fixed sequence of continuous functions whose closure (that is, the closure of the set $\{\phi_n\}$) is the closed unit ball in $C(X)$. Note that the induced notion of convergence is simply weak convergence. The compactness of $Y$ has in particular the following consequence: any continuous map $T$ on a compact metric space has at least one $T$-invariant probability measure. So, under these hypotheses, the subset of all $T$-invariant measures in $Y$ is a nonempty compact convex set (that has either one element or infinitely many elements). Incidentally, the extreme points of the convex set are precisely the ergodic measures. The general problem of describing the set of invariant measures in other contexts really depends on the hypotheses, and to the best of my knowledge there exists no general theory.
TITLE: Integration of PDF QUESTION [0 upvotes]: Let $X$ and $Y$ be random variables. I need to find the conditional probability density function of $Y$, when: a) $f_{xy}(x,y)=\lambda^2exp(-\lambda y)$, $0\leq x\leq y< \infty$ b) $f_{xy}(x,y)=x exp(-x(y+1))$, $x,y \geq 0$ My question is about the interval of integration. For part b) the interval should be $[0,\infty)$: ($f_x(x)=\int_{0}^{\infty}x exp(-x(y+1))dy \Longrightarrow f(y|x)=\frac{f(x,y)}{f_x(x)}$), right? How about part a) ? Thanks for help. REPLY [1 votes]: The domain in a) is $0\leq x \leq y < \infty$, meaning that $x \leq y < \infty$ so $y \in [x, \infty)$. Thus, the limits of integration are $x$ and $\infty$: $$ f_X(x)=\int_x^\infty f_{X, Y}(x, y) dy $$
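For part a) the computation can be confirmed symbolically (a quick SymPy sketch of the steps above, with $\lambda$ treated as a positive symbol): the marginal comes out as $f_X(x)=\lambda e^{-\lambda x}$ and the conditional density as $f(y\mid x)=\lambda e^{-\lambda(y-x)}$ for $y\geq x$.

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam', positive=True)
f_xy = lam**2 * sp.exp(-lam*y)            # joint density on 0 <= x <= y < oo
f_x = sp.integrate(f_xy, (y, x, sp.oo))   # marginal: integrate over y in [x, oo)
f_cond = sp.simplify(f_xy / f_x)          # conditional density f(y|x), valid for y >= x
```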
TITLE: Does $(x,y,z) = (2,1,1) +s(-1,-1,-1) + t(2,-2,-2)$ represent a line or plane? QUESTION [1 upvotes]: Does the equation $$(x,y,z) = (2,1,1) +s(-1,-1,-1) + t(2,-2,-2)$$ represent a line or plane? I claimed it is a plane, as the two direction vectors are not multiples and thus for any values of $s$ and $t$, we can get infinite points on the plane. Is this true? REPLY [2 votes]: The idea is close to being correct. The correct way of looking at the two direction vectors is to ask whether they are linearly independent. In case of two vectors this turns out to be the same as not being multiples of each other. So yes, this is a plane. (For three dimensional subspaces of, say, a four dimensional space this would be a bit more complicated). (Edit: And the part with 'infinite number of points' is also not really helping, a line also contains an infinite number of points. This is why I wrote 'close' to being correct...)
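The linear-independence check can be automated (a small NumPy sketch): rank 2 means the two directions span a plane, while rank 1 would mean they are multiples of each other and the set is a line.

```python
import numpy as np

# rows are the two direction vectors of the parametrization
dirs = np.array([[-1, -1, -1],
                 [ 2, -2, -2]])
rank = np.linalg.matrix_rank(dirs)  # 2 -> plane, 1 -> line
```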
TITLE: derivative of max functions in defined interval QUESTION [0 upvotes]: Exercise: Determine the maximum and minimum of the function $f$ over the interval $A$ $$f(x)=\max \left(1-2 x-x^{2}, 2+x-x^{2}, 1+3 x-x^{2}\right), \quad A=[-1,2]$$ My approach: Applying the formula $\max\{f,g\}=\frac{f+g}{2}+\frac{|f-g|}{2}$ twice we get $$f=\frac{|3 x+1|-4 x^{2}+5x+|7x-|3x+1|-1|+5}{4}$$ and then I can continue by cases, but it is very tedious. Is there any more advanced technique? REPLY [2 votes]: Hint: determine the maximum on $A$ of each of the functions inside the $\max()$ and then compare these maxima to find the greatest one. Remember that a maximum can also occur at the endpoints of the interval. For the minimum you are right that it is a bit more complicated: the same trick does not apply, but you can write the $\max$ function piecewise, using on each subinterval the component which is greatest there, and then minimize the resulting piecewise-defined function.
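As a sanity check (my own numerical sketch, not part of the hint), a brute-force evaluation on a fine grid of $A=[-1,2]$ recovers the extrema: the maximum $13/4$ at $x=3/2$ (vertex of $1+3x-x^2$) and the minimum $14/9$ at the breakpoint $x=-1/3$.

```python
import numpy as np

# all three parabolas share the -x^2 term, so f(x) = -x^2 + max(1-2x, 2+x, 1+3x)
xs = np.linspace(-1.0, 2.0, 300001)
f = np.max(np.stack([1 - 2*xs - xs**2,
                     2 + xs - xs**2,
                     1 + 3*xs - xs**2]), axis=0)
fmax, fmin = f.max(), f.min()  # approx 13/4 and 14/9
```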
\begin{document} \begin{abstract} We establish spectral enclosures and spectral approximation results for the anisotropic lossy Drude-Lorentz system with purely imaginary poles, in a possibly unbounded Lipschitz domain of $\R^3$. Under the assumption that the coefficients $\theta_e$, $\theta_m$ of the material are asymptotically constant at infinity, we prove that: 1) the essential spectrum can be decomposed as the union of the spectrum of a bounded operator pencil of the form $-\Div p(\omega) \nabla$ and of a second-order $\curl \curl_0 - V_{e,\infty}(\omega)$ pencil with constant coefficients; 2) spectral pollution due to domain truncation can lie only in the essential numerical range of a $\curl \curl_0 - f(\omega)$ pencil.\\ As an application, we consider a conducting metamaterial at the interface with the vacuum; we prove that the complex eigenvalues with non-trivial real part lie outside the set of spectral pollution. We believe this is the first result of enclosure of spectral pollution for the Drude-Lorentz model without compactness assumptions on the resolvent of the underlying Maxwell operator.\\[0.2cm] \emph{Classification:} 35P99, 35Q61, 47A56 \\[0.2cm] \emph{Keywords:} Drude-Lorentz model, dissipative Maxwell's equations, spectral enclosures, spectral pollution. \end{abstract} \maketitle \section{Introduction} \subsection{The Drude-Lorentz model} This paper concerns the spectra and spectral approximation of a time-harmonic Drude-Lorentz model \cite{Tip} which commonly occurs in the description of a class of metamaterials. This class includes doubly negative metamaterials, which behave as if the electric permittivity and the magnetic permeability are simultaneously negative. In the early 2000s, it was conjectured that these materials might allow the creation of a \emph{perfect lens} or an \emph{invisibility cloak}, see e.g. \cite{Nico}, \cite{Pendry}. 
Shortly afterwards, experimental evidence of metamaterial cloaking at microwave frequencies \cite{SMJCPSS} and of optical superlensing \cite{Lee_2005} was obtained. In the mathematics literature, `cloaking by anomalous localized resonances' has been intensively studied, see \emph{e.g.,} \cite{MR3035988}. Mathematically, some of the counter-intuitive spectral properties of the time-dependent Maxwell system for an interface between a metamaterial and a vacuum are investigated for the non-dissipative case, in a special geometry, in \cite{MR3764925, CHJII}. For the dissipative case, in the whole space and in a setting allowing dimension-reduction, we refer to the recent article \cite{BDPW}. Here we consider a more general Drude-Lorentz system \begin{equation}\label{PDEs} \curl \hat{H} = i \omega\bigg( 1 - \frac{(\theta_e)^2}{\omega^2 + i \gamma_e \omega} \bigg) \hat{E}, \;\;\; -\curl\hat{E} = i \omega\bigg(1 - \frac{(\theta_m)^2}{\omega^2 + i \gamma_m \omega} \bigg) \hat{H}, \end{equation} in a Lipschitz domain $\Omega\subseteq {\mathbb R}^3$, which may be bounded or unbounded. The variable $\omega$ is the spectral parameter. We describe the essential spectrum and its decomposition into parts connected with scattering and parts due to local dissipative effects. We obtain tight a-priori enclosures for the set in which these different components of the spectrum may lie. Adapting new non-selfadjoint techniques from \cite{MR3694623} and \cite{BFMT} to the setting of meromorphic operator-valued functions, we examine how the spectrum behaves under perturbation of the domain $\Omega$. We obtain unexpectedly small enclosures for the sets where spectral pollution \cite[Def. 2.2]{MR3694623} may appear if an unbounded $\Omega$ is approximated by a large, bounded $\Omega$. We now describe the problem in more detail. 
Starting from Maxwell's equations \[ \partial_t D = \curl H, \quad \partial_t B = - \curl E, \quad \Div D = 0, \quad \Div B = 0,\\ \] relations between $(D, B)$ and $(E, H)$ must be imposed to capture the properties of the medium under consideration, see \cite{MR3023383} for an interesting discussion on the diverse constitutive relations and applications to linear bianisotropic media. The Drude-Lorentz model assumes these relations to be given by convolutions \[ D(x,t) = E(x,t) + \hspace{-1mm}\int_{t_0}^t \hspace{-2mm}\chi_e(x, t-s) E(x, s) ds, \;\; B(x,t) = H(x,t) + \hspace{-1mm}\int_{t_0}^t \hspace{-2mm}\chi_m(x, t-s) H(x, s) ds. \] The functions $\chi_e(\cdot,t)$ and $\chi_m(\cdot,t)$ are assumed to be zero for $t<0$ and are usually described in terms of their Fourier transforms in time; for instance, \[ \hat{\chi}_e(\omega) = - \frac{(\theta_e)^2}{\omega^2 + i \gamma_e \omega} - \sum_{n=1}^\infty \frac{(\Omega_n^e)^2}{\omega^2 + i \gamma_n^e \omega - (\la_n^e)^2}, \] in which $\la_n^e > 0$, $\Omega_n^e \geq 0$, $\gamma_e$, $\gamma_n^e > 0$, $n \in \N \cup \{0\}$, are constants, and $\theta_e$ is some non-negative function. From the equation $\curl \hat{H} = i \omega \hat{D} = i \omega(1 + \hat{\chi}_e) \hat{E}$ one then obtains \[ \curl \hat{H} = i \omega\bigg( 1 - \frac{(\theta_e)^2}{\omega^2 + i \gamma_e \omega} - \sum_{n=1}^\infty \frac{(\Omega_n^e)^2}{\omega^2 + i \gamma_n^e \omega - (\la_n^e)^2}\bigg) \hat{E}, \] together with a corresponding equation for $\curl\hat{E}$. In this paper, as in \cite{MR3764925}, we treat the simplest case, namely the lossy Drude system \cite[\S6]{MR3421776} defined in \eqref{PDEs}. 
The Fourier transform $\hat{E}$ of the electric field $E$ is supposed to lie in the space $H_0(\curl, \Omega)$, while $\hat{H}$ is assumed to lie in $H(\curl, \Omega)$. The first operator formulation of \eqref{PDEs} is then \[ \cL(\omega)\left(\begin{array}{c} \hat{E} \\ \hat{H} \end{array}\right) = {\bf 0}, \] in which $\omega\mapsto \cL(\omega)$ is the $2\times 2$ rational block-matrix pencil given by \begin{equation}\label{eq:intro1} \cL(\omega) = \begin{pmatrix} - \omega + \frac{\theta_e^2}{\omega + i \gamma_e} & i \curl \\ -i\curl_0 & - \omega + \frac{\theta_m^2}{\omega + i \gamma_m} \end{pmatrix}, \quad \dom(\cL) = H_0(\curl, \Omega) \oplus H(\curl, \Omega); \end{equation} see subsection \ref{notation} below for definitions of the Sobolev spaces, $\curl$, $\curl_0$, etc. It is not difficult to show (see \cite{MR3543766}) that the Drude-Lorentz pencil $\cL(\omega)$ is the first Schur complement of the `companion' block operator matrix \begin{equation}\label{def:cA} \cA = \begin{pmatrix} A & B \\ B|_{\dom(A)} & -iD \end{pmatrix} \end{equation} in $L^2(\Omega)^6$, with domain $\dom(\cA) = H_0(\curl, \Omega) \oplus H(\curl, \Omega) \oplus L^2(\Omega)^3 \oplus L^2(\Omega)^3$ and \begin{equation}\label{ABDdef} A = \begin{pmatrix} 0 & i \curl \\ -i \curl_0 & 0 \end{pmatrix}, \quad B = \begin{pmatrix} \theta_e & 0 \\ 0 & \theta_m \end{pmatrix}, \quad D = \begin{pmatrix} \gamma_e & 0 \\ 0 & \gamma_m \end{pmatrix}; \end{equation} in other words, \begin{equation} \label{intro:cL} \cL(\omega) = A - \omega - B (-iD - \omega)^{-1} B. \end{equation} In particular the spectrum of $\cA$ coincides with the spectrum of $\cL$ outside the two poles $-i \gamma_e, -i \gamma_m$. We will exploit this connection and the results in \cite{BFMT} to further decompose the spectrum of $\cL$ into the spectra of two operator pencils. 
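As a quick symbolic sanity check of \eqref{intro:cL} (an illustrative aside, not part of the original text), one can replace the operators $\curl$ and $\curl_0$ by multiplication by a scalar constant $c$ and verify the Schur-complement identity with SymPy:

```python
import sympy as sp

w, te, tm, ge, gm, c = sp.symbols('omega theta_e theta_m gamma_e gamma_m c')
I2 = sp.eye(2)
# toy model: curl and curl_0 replaced by multiplication by the scalar c
A = sp.Matrix([[0, sp.I*c], [-sp.I*c, 0]])
B = sp.diag(te, tm)
D = sp.diag(ge, gm)
S = A - w*I2 - B*(-sp.I*D - w*I2).inv()*B   # A - w - B(-iD - w)^{-1}B
L = sp.Matrix([[-w + te**2/(w + sp.I*ge), sp.I*c],
               [-sp.I*c, -w + tm**2/(w + sp.I*gm)]])
gap2x2 = sp.simplify(S - L)   # expected: the zero matrix
```

The scalar stand-in $c$ is only a hypothetical simplification; the diagonal entries, which carry the rational dependence on $\omega$, are unaffected by it.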
This method allows us to generalise the known spectral analysis of the Drude-Lorentz model in the following ways:\\ (1) Under our assumptions, $0 \in \sigma_e(A)$, where $A$ is defined as in \eqref{ABDdef}, since $\nabla \dot{H}^1_0(\Omega)\oplus\nabla \dot{H}^1(\Omega)$ is an infinite-dimensional subspace of the kernel of $A$. This is not allowed by many results in the literature, e.g. \cite[Proposition 2.2]{MR3543766}, where it is required that $A$ have compact resolvent. \\ (2) We allow the domain $\Omega$ to be unbounded. Consequently, contributions to $\sigma_e(\cA)$ are expected from infinity. \\ (3) We allow the coefficients $\theta_e$, $\theta_m$ to be \textit{both} non-constant, even though we assume that they are asymptotically constant. \\[0.1cm] On the other hand, to avoid very singular situations we restrict ourselves to the case where $\gamma_e$ and $\gamma_m$ (namely, the positions of the poles) are fixed.\\ A large part of the spectral analysis is achieved not by inspecting the operator pencil $\cL$ directly, but via its first Schur complement $\cS_1$, defined on $\dom(\cS_1) := \{u \in H_0(\curl, \Omega): \Theta_m(\omega)^{-1} \curl_0 u \in H(\curl, \Omega)\}$ by \begin{equation} \label{S1} \cS_1(\omega) = \curl \Theta_m(\omega)^{-1} \curl_0 - \frac{\Theta_e(\omega)}{(\omega + i \gamma_e)(\omega + i \gamma_m)}, \end{equation} for $\omega \in \C \setminus (\{-i\gamma_e,-i\gamma_m\}\cup\overline{W(\Theta_m)})$; here the notation $W(\cdot)$ denotes the numerical range of an operator or a pencil, and \begin{equation}\label{Thetadef} \Theta_e(\omega) := \omega^2 + \omega i \gamma_e - \theta_e^2, \quad \Theta_m(\omega) := \omega^2 + \omega i \gamma_m - \theta_m^2, \end{equation} are $\omega$-quadratic multiplication pencils. An important technical point is realising that $\cS_1(\omega)$, in general, cannot be defined either as an $m$-accretive operator or as a self-adjoint operator \emph{independently of $\omega \in \C \setminus \{-i \gamma_e, - i \gamma_m\}$}.
We overcome this obstacle by introducing a set \begin{equation} \label{def:Sigma} \Sigma = \{\omega\in\C \, | \, \re(\omega)\im(\omega+i\gamma_m/2)\neq 0\}, \end{equation} decomposing $\Sigma$ as a disjoint union $\Sigma = \Sigma_1 \dot{\cup}\Sigma_2$, and defining $\cS_1(\omega)$ in two different ways, depending on whether $\omega \in \Sigma_1$ or $\omega \in \Sigma_2$. In fact, $i\cS_1$ is $m$-accretive for $\omega \in \Sigma_1 \subset \Sigma$, while it is $m$-dissipative for $\omega \in \Sigma_2 = \Sigma \setminus \overline{\Sigma_1}$.\\ Note also that the relation between the spectrum of the operator pencil $\cL$ and that of $\cS_1$ is completely non-trivial. This is a long-standing problem in the study of spectra of metamaterials where the dependence on the spectral parameter is non-linear; see e.g.\ \cite{MR4191388}, where similar hurdles were encountered in the study of the essential spectrum of a negative metamaterial in a bounded domain. From our perspective, these difficulties are natural consequences of the lack of a \emph{diagonal dominance} pattern (in the sense of \cite[Def. 2.2.1]{TreB}) for the block operator matrices involved. In \cite[p.1187]{MR4191388} the operator matrix is upper-dominant. In our case, $\cL(\omega)$ is off-diagonally dominant, since the off-diagonal entries are differential operators of order 1 while the diagonal entries are of order 0. Unfortunately the off-diagonal entries are not boundedly invertible, so standard theorems relating the spectrum of an operator matrix to that of its Schur complements, such as \cite[Thm. 2.3.3]{TreB}, do not apply. We overcome these difficulties by defining the Schur complement $\cS_1$ locally and by improving the abstract result \cite[Prop. 2.10.1(c)]{TreB}, which would allow only bounded and self-adjoint entries on the main diagonal. We note in passing that the question raised in \cite[p.1187]{MR4191388} can be partially solved by applying \cite[Prop. 2.10.1(b)]{TreB}.
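For orientation, the set $\Sigma$ admits an elementary geometric description: since $\re(\omega)\,\im(\omega + i\gamma_m/2) = 0$ if and only if $\re \omega = 0$ or $\im \omega = -\gamma_m/2$, its complement \[ \C \setminus \Sigma = i\R \cup \left( -\tfrac{i\gamma_m}{2} + \R \right) \] is the union of the imaginary axis and the horizontal line through $-i\gamma_m/2$; the sets $\Sigma_1$ and $\Sigma_2$, defined precisely in Theorem \ref{thm: s_1 form_refined} below, are the corresponding pairs of opposite open `quadrants'.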
Our first main result is a decomposition of the essential spectrum, see Proposition \ref{sigmaLS} and Theorem \ref{sigma-ess}, which is summarised in the following theorem. \begin{theorem}\label{thmintro} Suppose that $\theta_e$, $\theta_m$ are asymptotically constant. Let $P_\nabla$ be the orthogonal projection from $ L^2(\Omega)^3\!=\!\nabla \dot H^1_0(\Omega) \oplus H(\Div 0,\Omega)$ onto $\nabla \dot{H}^1_0(\Omega)$. Let \begin{equation} \label{eq: G} G(\omega) = -P_{\nabla} \left( \frac{\Theta_e(\omega)}{(\omega + i \gamma_e)(\omega + i \gamma_m)} \right) P_{\nabla}. \end{equation} Let $\cS_\infty$ be the pencil $\cS_1$ restricted to divergence-free vector fields and with coefficients $\theta_e, \theta_m$ constantly equal to their values at infinity. Finally, let $\Sigma$ be as in \eqref{def:Sigma}. Then, with $\sigma_{ek}$ denoting the essential spectrum as in \cite[Chp. IX, p.414]{EE}, \[ \sigma_{ek}(\cL) \cap \Sigma = \sigma_{ek}(\cS_1) \cap \Sigma = (\sigma_{ek}(\cS_\infty) \cup \sigma_{ek}(G)) \cap \Sigma, \quad k=1,2,3,4, \] where $\sigma_{ek}(\cS_\infty)$ is described in Prop. \ref{prop: spectrum infty} and \[ \sigma_{ek}(G) \subset \begin{cases}- i [0, \gamma_e), \quad &\textup{if $- \frac{\gamma_e^2}{4} + \norma{\theta^2_e}_{\infty} \leq 0$,} \\ - i [0, \gamma_e) \cup \left( (-d_e, d_e) \times \{-i \frac{\gamma_e}{2} \} \right), \quad &\textup{if $- \frac{\gamma_e^2}{4} + \norma{\theta^2_e}_{\infty} > 0$,} \end{cases} \] with $d_e = \sqrt{- \frac{\gamma_e^2}{4} + \norma{\theta^2_e}_{\infty}}$. \end{theorem} The explicit computation of $\sigma_{ek}(G)$ generally depends upon the regularity of the function $\theta_e$.
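To give a feeling for the dichotomy in the enclosure for $\sigma_{ek}(G)$, consider the purely illustrative value $\gamma_e = 2$: if $\norma{\theta_e^2}_\infty = 1$ then $-\gamma_e^2/4 + \norma{\theta_e^2}_\infty = 0$ and the enclosure reduces to the purely imaginary segment $-i[0,2)$, while if $\norma{\theta_e^2}_\infty = 2$ then $-\gamma_e^2/4 + \norma{\theta_e^2}_\infty = 1$, so $d_e = 1$ and a horizontal segment $(-1,1)\times\{-i\}$ through $-i\gamma_e/2 = -i$ may additionally appear.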
If $\theta_e$ is continuous, we obtain the equality \[\sigma_{ek}(G) = \ran\biggl(- \frac{i \gamma_e}{2} - \sqrt{- \frac{\gamma_e^2}{4} + \theta_e^2}\biggr) \cup \,\ran\biggl(- \frac{i \gamma_e}{2} + \sqrt{- \frac{\gamma_e^2}{4} + \theta_e^2}\biggr).\] Otherwise, the essential spectrum depends on the geometry of the set of discontinuities of $\theta_e$. If $\theta_e$ is a step function, some computations can be found in Section \ref{sec:example} below, which are based on the analytic results for transmission problems of Ola \cite{MR1362712} and Pankrashkin \cite{MR4041099}; such problems were initially investigated in the seminal paper \cite{MR782799}. In the example of Section \ref{sec:example}, $\sigma_{ek}(G)$ consists of at most six distinct points. For more complicated examples where the discontinuity interfaces are allowed to have non-convex corners, bands of essential spectrum can be generated, see \cite{MR1680769}. These problems have been tackled recently using the $\mathtt{T}$-coercivity method, see \cite{MR3200087}. Our second fundamental result concerns spectral approximation of $\cL$ by the truncation method. This involves replacing $\Omega$ with a bounded domain $\Omega_n \subset \Omega$, and $\cL$ with $\cL_n$, which will be associated with Problem \eqref{eq:intro1} with the same \emph{electric boundary conditions}; as $n \to \infty$, $\Omega_n$ increases monotonically and exhausts the whole of $\Omega$. The question is whether $\sigma(\cL_n)$ will be `close to' $\sigma(\cL)$ as $n \to \infty$. This is already an interesting problem for self-adjoint operators having band-gap spectrum \cite{MR2640293}: indeed, the gaps in the essential spectrum may be covered by eigenvalues of the approximating operators as $n \to \infty$, or equivalently, the spectral gaps may contain points of the \emph{spectral pollution} set, given by \[ \sigma_{\rm poll}((\cL_n)_n) = \{ \omega \in \rho(\cL) \,:\, \exists\, \omega_n \in \sigma(\cL_n),\,\, \omega_n \to \omega, \, n \to \infty \}.
\] For the Maxwell pencil $\cL$ defined in \eqref{intro:cL}, the presence of spectral pollution for the approximating sequence $(\cL_n)_n$ is almost inevitable, since $\cL_n$ is a non-self-adjoint, rational pencil of operators for every $n$, having non-trivial essential spectrum even in bounded domains. It is then of pivotal importance to determine where spectral pollution may appear and, on the other hand, which spectral points $\omega \in \sigma(\cL)$ can be approximated exactly via domain truncation. Theorem \ref{thm: final} shows that spectral pollution for the sequence $\cL_n$, $n \in \N$, can only occur in the {\it essential numerical range} of the constant coefficient pencil $\cS_\infty$ given on a suitable domain by \begin{equation}\label{s1inf} \cS_{\infty}(\omega) = \Theta_{m, \infty}(\omega)^{-1}\curl\curl_0 - \frac{\Theta_{e, \infty}(\omega)}{(i\gamma_e + \omega)(i \gamma_m + \omega)}, \end{equation} in which $\Theta_{e,\infty}(\omega) = \omega^2 + i \gamma_e \omega - (\theta_e^{0})^2$, $\Theta_{m,\infty}(\omega) = \omega^2 + i \gamma_m \omega - (\theta_m^{0})^2$, and $\theta_e^0$, $\theta_m^0$ are the values of $\theta_e$ and $\theta_m$ at infinity. In particular, the set of spectral pollution is always a union of one-dimensional curves in $\C$, substantially improving abstract enclosures for the spectral pollution set in terms of the essential numerical range of $\cL$, which in this case would establish only that spectral pollution is contained in the infinite horizontal strip $\R \times [-\gamma_e, 0]$. The structure of this article is as follows. In Section \ref{sec:numr} we first establish a basic numerical range enclosure for the whole of $\sigma(\cL)$, see Prop. \ref{prop:numrenc}; we then define the operator pencil $\cS_1$ in a rigorous way in Thm \ref{thm: s_1 form_refined} and we prove that the spectral properties of $\cL$ are retained by $\cS_1$ inside $\Sigma$, see Prop. \ref{sigmaLS}.
This result is then exploited to prove a refined numerical range enclosure, see Thm. \ref{thm:refnumran}. In Section \ref{sec:ess_spec} we prove Thm. \ref{sigma-ess}, which establishes that $\sigma_e(\cS_1) \cap \Sigma$ can be decomposed into the union of the essential spectra of two operator pencils: $\cS_\infty$, capturing the behaviour at infinity due to divergence-free vector fields, and $G$, capturing the contribution of gradients. Section \ref{sec:red_op} contains qualitative results regarding the essential spectra of the reduced operators $\cS_\infty$ and $G$. In Section \ref{sec:lim_ess_spec} we then prove Thm. \ref{thm: final}, establishing that spectral pollution for the domain truncation method is contained in $W_e(\cS_\infty)$, together with an approximation property for the isolated eigenvalues of $\cL$. Finally, Section \ref{sec:example} contains explicit computations for the case of locally constant functions $\theta_e(x) = \alpha_e \chi_K(x)$, $\theta_m(x) = \alpha_m \chi_K(x)$, which are identically zero at infinity. \subsection{Notation} \label{notation} \hspace{1mm}\\ \noindent $\bullet$ Let $\Omega \subset \R^3$ be an open set. $L^2(\Omega)^3 = L^2(\Omega, \C^3)$ is the standard Hilbert space of complex-valued vector fields having finite $L^2$-norm. The $L^2$-norm will be denoted by $\norma{\cdot}$.\\ $\bullet$ The homogeneous Sobolev or Beppo Levi space $\dot{H}^1(\Omega)$ is defined as the completion of $C^\infty_c(\overline{\Omega})$ with respect to the seminorm $\norma{u}_{\dot{H}^1(\Omega)} = \norma{\nabla u}$. \\ $\bullet$ $\nabla \dot{H}^1(\Omega) = \{ \nabla \varphi \in L^2(\Omega)^3 : \varphi \in \dot{H}^1(\Omega) \}$ will be regarded as a subspace of $L^2(\Omega)^3$.
\\ $\bullet$ $H(\curl, \Omega) = \{ u \in L^2(\Omega)^3 : \curl u \in L^2(\Omega)^3 \}$ is endowed with the norm given by $\norma{u}^2_{H(\curl, \Omega)} = \norma{u}^2 + \norma{\curl u}^2$.\\ $\bullet$ $H_0(\curl, \Omega)$ is the closure of $C^\infty_c(\Omega)^3$ with respect to $\norma{\cdot}_{H(\curl, \Omega)}$. If $\p \Omega$ is sufficiently regular, it can also be described as \[ H_0(\curl, \Omega) = \{ u \in L^2(\Omega)^3 : \curl u \in L^2(\Omega)^3, \, \nu \times u = 0 \,\, \textup{on $\p \Omega$} \}. \] $\bullet$ The differential expression $\curl$ is associated with two closed realisations in $L^2(\Omega)^3$: $\curl$ is the maximal one, with domain $\dom(\curl) = H(\curl, \Omega)$; $\curl_0$ is the minimal one, with domain $\dom(\curl_0) = H_0(\curl, \Omega)$. Note that $(\curl_0)^* = \curl$. \\ $\bullet$ $H(\Div, \Omega) = \{ u \in L^2(\Omega)^3 : \Div u \in L^2(\Omega) \}$, $\norma{u}^2_{H(\Div, \Omega)} = \norma{u}^2 + \norma{\Div u}^2$. \\ $\bullet$ $H(\Div 0, \Omega)$ is the subspace of $L^2(\Omega)^3$ of vector fields with null (distributional) divergence.\\ $\bullet$ Given a linear operator $T : \cH \supset \dom(T)\to \cH$, \begin{align*} &\sigma(T) = \{ \omega \in \C \,:\, T - \omega \:\: \textup{is not boundedly invertible} \}, \\ &\sigma_{\rm app}(T) = \{ \omega \in \C \,:\, \exists (u_n)_n \subset \dom(T),\, \norma{u_n} = 1,\, \norma{(T - \omega)u_n} \to 0 \}, \\ &\sigma_e(T) := \{ \omega \in \C \,:\, \exists (u_n)_n \subset \dom(T),\, \norma{u_n} = 1,\, u_n \rightharpoonup 0,\, \norma{(T - \omega)u_n} \to 0 \}. \end{align*} For non-selfadjoint operators in complex Banach spaces, there are five non-equivalent definitions of the essential spectrum, see \cite[Chp. IX, p.414]{EE}, denoted by $\sigma_{ek}(T)$, $k = 1,\dots, 5$. Note that $\sigma_e(T) := \sigma_{e2}(T)$.\\ $\bullet$ Let $D \subset \C$ be a domain.
Given $\omega \mapsto \cL(\omega)$, $\omega \in D$, a holomorphic family of closed linear operators with the same $\omega$-independent domain $\dom(\cL) = \dom(\cL(\omega))$, $\omega \in D$, we define $\sigma(\cL) = \{ \omega \in D : 0 \in \sigma(\cL(\omega)) \}$, and similarly we define the point, continuous, residual and essential spectra by replacing $\sigma$ with $\sigma_x$, $x = p, c, r, e$, in the previous formula. \section{Numerical range, Schur complements and spectral enclosures} \label{sec:numr} Let $\cH = L^2(\Omega)^3\oplus L^2(\Omega)^3$. The operators $A$, $B$ and $D$ appearing in the definition (\ref{def:cA}) of $\cA$ have domains $\dom(A) = H_0(\curl, \Omega) \oplus H(\curl, \Omega)$, $\dom(B) = \dom(D) = \cH$; the fact that $\dom(B)=\dom(D)=\cH$ relies on our assumption that the functions $\theta_e$ and $\theta_m$ lie in $L^\infty(\Omega, \R)$. Since the operators $B$ and $D$ are bounded, $\cA$ is a diagonally dominant, closed $\cJ$-self-adjoint operator matrix, where $\cJ = \diag(i,-i, i, -i) J$, and $J$ is the standard componentwise complex conjugation. In particular, $\sigma(\cA) = \sigma_{app}(\cA)$. Due to \cite[Thm 2.3.3 (ii)]{TreB}, $\sigma(\cA) \setminus \sigma(-iD) = \sigma(\cL)$, with equality for the point, continuous, and essential spectra as well. \begin{prop} \label{prop: num range 1} Let $M:= \max\{\gamma_e,\gamma_m\}$. Then the numerical range $W(\cA)$ of the block operator matrix $\cA$ is contained in the strip $\R \times [-M,0]$. \end{prop} \begin{proof} If $\omega \in W(\cA)$, by definition there exists $(u,v) \in \dom(\cA)$, $\norma{u}^2 + \norma{v}^2 = 1$, such that \[ (Au,u) + 2 \re (Bv,u) - i (Dv,v) = \omega. \] Hence, $0 \geq \im \omega = - \re (Dv, v) = - \gamma_e \norma{v_1}^2 - \gamma_m \norma{v_2}^2 \geq - \max\{\gamma_e, \gamma_m\}$.
\end{proof} \begin{rem} The previous enclosure holds independently of the domain $\Omega$, and it extends to non-constant but bounded $\gamma_e$ and $\gamma_m$ upon replacing them with $\norma{\gamma_e}_{\infty}$ and $\norma{\gamma_m}_{\infty}$. \end{rem} On the other hand, $\C \setminus \{-i \gamma_e, -i \gamma_m\} \ni \omega \mapsto \cL(\omega)$ defines a pencil of block operator matrices in $\cH$, given explicitly by \begin{equation} \label{def: cL} \cL(\omega) = \begin{pmatrix} -\omega + \frac{\theta_e^2}{(\omega + i \gamma_e)} & i \curl \\ -i\curl_0 & -\omega + \frac{\theta_m^2}{(\omega + i \gamma_m)} \end{pmatrix} \end{equation} where $\dom(\cL(\omega)) = H_0(\curl, \Omega) \oplus H(\curl, \Omega)$. \begin{prop} \label{prop:numrenc} The numerical range $W(\cL)$ is contained in the non-convex subset of $\mathbb C$ described by the inequality \begin{equation}\label{enc} 0 \leq -\im\omega \leq \min\left(M, \frac{\gamma_e \| \theta_e \|_\infty^2 + \gamma_m \|\theta_m\|_\infty^2}{(\re\omega)^2} \right), \end{equation} in which $M=\max\{\gamma_e,\gamma_m\}$. \end{prop} \begin{proof} Let $(u,v) \in H_0(\curl, \Omega) \oplus H(\curl, \Omega)$, $\norma{u}^2 + \norma{v}^2 = 1$, and consider the equation \[ -\omega + \left(\frac{\theta_e^2}{(\omega + i \gamma_e)}u, u\right) + 2 \re (i\curl v, u) + \bigg(\frac{\theta_m^2}{(\omega + i \gamma_m)} v, v\bigg) = 0. \] Upon taking the imaginary part we see that \begin{equation}\label{imp} - \im \omega - \bigg( \frac{(\im \omega + \gamma_e)}{|\omega + i \gamma_e|^2} \theta_e^2 u, u \bigg) - \bigg( \frac{(\im \omega + \gamma_m)}{|\omega + i \gamma_m|^2} \theta_m^2 v, v \bigg) = 0. \end{equation} This implies immediately that $\im \omega < 0$. However, it cannot happen that both $\im \omega + \gamma_e < 0$ and $\im \omega + \gamma_m < 0$; hence $- \max\{ \gamma_e, \gamma_m \} < \im \omega < 0$, i.e.
$-M<\im\omega <0.$ To obtain (\ref{enc}), we observe that (\ref{imp}) may be rewritten as \[ \begin{split} (-\im\omega)\left\{1 + \frac{(\theta_e^2 u,u)}{|\omega+i\gamma_e|^2} + \frac{(\theta_m^2 v,v)}{|\omega+i\gamma_m|^2} \right\} &= \frac{\gamma_e(\theta_e^2 u,u)}{(\re\omega)^2 + (\gamma_e+\im\omega)^2} + \frac{\gamma_m(\theta_m^2 v,v)}{(\re\omega)^2 + (\gamma_m+\im\omega)^2} \\ &\leq \frac{\gamma_e \| \theta_e \|_\infty^2 + \gamma_m \|\theta_m\|_\infty^2}{(\re\omega)^2}. \end{split} \] The factor in braces on the left-hand side exceeds $1$, so the result follows. \end{proof} In order to make further progress we use an additional Schur complement argument on the pencil $\cL(\omega)$, which can be considered as a block operator matrix in $\cH = L^2(\Omega)^3 \oplus L^2(\Omega)^3$. In terms of the bounded quadratic pencils $\Theta_e$ and $\Theta_m$, see \eqref{Thetadef}, $\cL(\omega)$ has the form \[ \cL(\omega) = \begin{pmatrix} - \frac{\Theta_e(\omega)}{(i \gamma_e + \omega)} & i \curl \\ -i \curl_0 & - \frac{\Theta_m(\omega)}{(i \gamma_m + \omega)} \end{pmatrix}. \] If $\omega$ is an eigenvalue of $\cL$ with eigenfunction $\left(\begin{array}{c} E \\ H\end{array}\right)$, then \[ \begin{cases} - \frac{\Theta_e(\omega)}{(i \gamma_e + \omega)}E + i \curl H = 0, \\ -i \curl_0 E - \frac{\Theta_m(\omega)}{(i \gamma_m + \omega)}H = 0. \end{cases} \] Assume that $\Theta_m(\omega)$ is boundedly invertible. Formally, we could apply $\Theta_m(\omega)^{-1}$ from the left in the second equation and then apply $\curl$; by using the first equation we obtain \[ \curl \Theta_m(\omega)^{-1} \curl_0 E - \frac{\Theta_e(\omega)}{(i \gamma_e + \omega)(i \gamma_m + \omega)} E = 0 \] for all $\omega \in \rho(\Theta_m)$. However, without further restrictions on $\omega$ the operator $\curl \Theta_m(\omega)^{-1} \curl_0$ may not even be accretive.
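To see where the quadratic pencil $\Theta_m$ degenerates, note that for each fixed $x \in \Omega$ the scalar equation $\omega^2 + i\gamma_m \omega - \theta_m(x)^2 = 0$ has the roots \[ \omega_\pm(x) = -\frac{i\gamma_m}{2} \pm \sqrt{\theta_m(x)^2 - \frac{\gamma_m^2}{4}}, \] which lie on the imaginary axis if $\theta_m(x)^2 \leq \gamma_m^2/4$, and on the horizontal line $\im \omega = -\gamma_m/2$ otherwise. In either case $\omega_\pm(x) \in \C \setminus \Sigma$, which motivates the set $\Sigma$ introduced in \eqref{def:Sigma} and the following theorem.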
\begin{theorem}\label{thm: s_1 form_refined} Let $\Sigma_1$ be the set \[ \Sigma_1 := \{ \omega\in{\mathbb C} \, | \, \re(\omega)\im(\omega+i\gamma_m/2)>0\}. \] Then for each $\omega\in\Sigma_1$, the sesquilinear form \[ \fs_1(\omega)[x,y] = (\Theta_m(\omega)^{-1} \curl_0 x, \curl_0 y) - \bigg( \frac{\Theta_e(\omega)}{(\omega + i \gamma_m)(\omega + i\gamma_e)} x, y\bigg) \] is such that $e^{i\pi/2} \fs_1(\omega)$ is quasi-accretive. Similarly, defining \[ \Sigma_2 := \{ \omega\in{\mathbb C} \, | \, \re(\omega)\im(\omega+i\gamma_m/2)<0\}, \] if $\omega \in \Sigma_2$ then $e^{-i \pi/2}\fs_1(\omega)$ is quasi-accretive. \end{theorem} \begin{proof} Let $\fa(\omega)$ be the sesquilinear form \[ \fa(\omega)(x,y) = \bigg( \frac{\Theta_e(\omega)}{(\omega + i \gamma_m)(\omega + i\gamma_e)} x, y\bigg). \] Let $\ft(\omega)(x, y) = (\Theta_m(\omega)^{-1} \curl_0 x, \curl_0 y)$, $x,y \in H_0(\curl, \Omega)$. Since $\fa(\omega)$ is bounded for fixed $\omega \in \Sigma_1$, we see that $e^{i \phi}\fa$ is closed and quasi-accretive with domain $L^2(\Omega)^3$ for all $\phi \in[0, 2\pi)$. Assume for the moment that we have already proved that $e^{i \pi/2}\ft(\omega)$ is quasi-accretive for $\omega$ lying in the set $\Sigma_1$. By \cite[Thm VI.1.27, VI.1.31]{MR1335452} the sum $e^{i \pi/2}\fs_1(\omega) = e^{i \pi/2}\fa(\omega) + e^{i\pi/2}\ft(\omega)$ is then closed and quasi-accretive on $\dom(\ft(\omega))$. Hence, it is sufficient to show that if $\omega \in \Sigma_1$ then $e^{i \pi/2}\ft(\omega)$ is quasi-accretive. Equivalently, we must show that $W(\ft(\omega)) \subset {\mathbb H}_- = \{z \in \C : \im z \leq 0 \}$ for all $\omega \in \Sigma_1$. We note that \[ (\Theta_m(\omega)^{-1} u, u) = \frac{ \bar{\omega}^2 - i \gamma_m \bar{\omega} - (\theta_m^2 u, u)}{| \omega^2 + i \gamma_m \omega - (\theta_m^2 u, u)|^2} \] for every $u \in L^2(\Omega)^3$, $\norma{u} = 1$.
Now \[ \bar{\omega}^2 - i \gamma_m \bar{\omega} - (\theta_m^2 u, u) = \bigg(\bar{\omega} - \frac{i\gamma_m}{2}\bigg)^2 + \frac{\gamma^2_m}{4} - (\theta_m^2 u, u). \] Suppose first that $\omega\in\Sigma_1$ and $\re(\omega)>0$. Then $\omega = -i \gamma_m/2 + r e^{i\phi}$, for some $r > 0$, $\phi \in (0, \pi/2)$. (The case where $\omega \in \Sigma_1$ with $\re(\omega)<0$ follows from the previous case and a reflection argument with respect to the point $-i \gamma_m/2$.) It follows that \[ \bigg(\bar{\omega} - \frac{i\gamma_m}{2}\bigg)^2 + \frac{\gamma^2_m}{4} - (\theta_m^2 u, u) = r^2 e^{-2i\phi} + \frac{\gamma^2_m}{4} - (\theta_m^2 u, u) \] and upon taking the imaginary part, $\im(r^2 e^{-2i\phi} + \frac{\gamma^2_m}{4} - (\theta_m^2 u, u)) = r^2 \sin(-2\phi) <0$. Thus, \begin{multline} \label{eq: final ineq} \im \ft(\omega)[x] = \im (\Theta_m(\omega)^{-1} \curl_0 x, \curl_0 x) \leq \\ r^2 \sin(-2\phi) \bigg( \frac{1}{| \omega^2 + i \gamma_m \omega - \theta_m^2|^2} \curl_0 x, \curl_0 x \bigg) \leq 0. \end{multline} So $W(\ft(\omega)) \subset {\mathbb{H}}_-$ for all $\omega \in \Sigma_1$ and $e^{i \pi/2}\ft(\omega)$ is accretive for all $\omega \in \Sigma_1$. The final claim follows by noting that when $\omega \in \Sigma_2$, $\im (\ft(\omega)[x]) \geq 0$, by reversing all the inequalities in \eqref{eq: final ineq}. \end{proof} \begin{rem} Theorem \ref{thm: s_1 form_refined} shows that, up to multiplication by $e^{\pm i\pi/2}$, $\fs_1(\omega)$ admits a well-defined $m$-accretive operator representation via the first representation theorem for all $\omega \in \C \setminus (i\R \cup (-i \gamma_m/2 + \R))$. We note \emph{en passant} that the singular sets $\{-i \gamma_e, -i \gamma_m \}$ and $W(\Theta_m)$ are both contained in $\C \setminus \Sigma = (i \R \cup (-i \gamma_m/2 + \R))$. \end{rem} \begin{corollary} \label{cor: S_1} Let $\Sigma_1$ and $\Sigma_2$ be as in Theorem \ref{thm: s_1 form_refined}.
For every $\omega \in \Sigma_1$, there exists an operator $\cS_1(\omega)$ such that $i \cS_1(\omega)$ is $m$-accretive and \[ i(\cS_1(\omega) x, y) = i\fs_1(\omega)[x,y] \] for all $x \in \dom(\cS_1)$, $y \in \dom(\fs_1)$, and $\dom(\cS_1)$ is a core of $\dom(\fs_1)$. Similarly, for $\omega\in\Sigma_2$ there exists an operator $\cS_1(\omega)$ such that $-i\cS_1(\omega)$ is $m$-accretive and \[ -i(\cS_1(\omega) x, y) = -i\fs_1(\omega)[x,y]. \] \end{corollary} \begin{rem}\label{S1rmk} The operator $\cS_1$ defined in Corollary \ref{cor: S_1} is given, for $\omega\in \Sigma := \Sigma_1 \cup \Sigma_2$ defined in \eqref{def:Sigma}, by \begin{equation}\label{s1def} \cS_1(\omega) = \curl ( \Theta_m(\omega)^{-1} \curl_0 \cdot) - \frac{\Theta_e(\omega)}{(i \gamma_m + \omega)(i \gamma_e + \omega)}. \end{equation} From (\ref{Thetadef}), we see that $\Theta_m(it)<0$ for all $t\geq 0$, and for all $t \in \R$ with $|t|$ sufficiently large. Since also $\Theta_e(it)<0$ for all $t\geq 0$, $\cS_1(\omega)$ is self-adjoint and negative for $\omega\in i\R_+$. \end{rem} \begin{prop} \label{prop: symmetry} The following properties hold. \begin{enumerate}[label=(\roman*)] \item $\sigma(\cA) = - \ov{\sigma(\cA)}$. \item $\sigma(\cL) \setminus \{-i\gamma_e, -i \gamma_m\} = - \ov{\sigma(\cL)} \setminus \{-i\gamma_e, -i \gamma_m\}$. \item $\sigma(\cS_1) \cap \Sigma = - \ov{\sigma(\cS_1) \cap \Sigma}$. \end{enumerate} \end{prop} \begin{proof} $(i)$ follows from the equality $\cA = Q \cA_{c} Q^{-1}$ for $Q = \diag(-i,-i,1,1)$, and $\cA_c = - \ov{\cA_c}$. $(ii)$ then follows from $(i)$ due to the equality $\sigma(\cL)\setminus \sigma(-iD) = \sigma(\cA) \setminus \sigma(-iD)$ and the fact that $\sigma(-iD)$ is invariant under the symmetry $\zeta \mapsto - \ov{\zeta}$. $(iii)$ now follows from $(ii)$ in a similar fashion.
\end{proof} \noindent\textbf{Notation.} Define multiplication operators in $L^2(\Omega)^3$ by \begin{equation}\label{Vemdef} \begin{array}{l} V_m(\omega) = \frac{\Theta_m(\omega)}{(\omega + i \gamma_m)}, \;\;\;\; \omega\neq -i\gamma_m; \\ V_e(\omega) = -\frac{\Theta_e(\omega)}{(\omega + i \gamma_e)(\omega + i \gamma_m)}, \;\;\;\; \omega \not\in \{-i\gamma_m,-i\gamma_e\}. \end{array} \end{equation} \begin{lemma}\label{lemma:bddcls} Let $\omega \in \Sigma$. Then $\curl_0 \cS_1(\omega)^{-1}$ is closed and bounded in $L^2(\Omega)^3$; $\cS_1(\omega)^{-1} \curl $ and $\curl_0 \cS_1(\omega)^{-1} \curl$ are closable with bounded closure as operators in $L^2(\Omega)^3$. \end{lemma} \begin{proof} Since $\dom(\cS_1(\omega)) \subset H_0(\curl, \Omega) = \dom(\curl_0)$, we have immediately that $\curl_0 \cS_1(\omega)^{-1}$ is closed and bounded, and that $\cS_1(\omega)^{-1} \curl$ is closable with bounded closure given by $\overline{\cS_1(\omega)^{-1} \curl} = (\curl_0 \cS_1(\omega)^{-*})^*$. It therefore remains to prove that $\curl_0 \cS_1(\omega)^{-1} \curl$ is closable and bounded. We will prove that $\curl_0 \cS_1(\omega)^{-1} \curl$ can be extended to a bounded operator in $L^2(\Omega)^3$, thereby implying that it is closable.\\ Let $B < 0$ be bounded and self-adjoint with the property that $\im(V_e(\omega) + i B) < 0$, where $V_e(\omega)$ is the multiplication operator defined in \eqref{Vemdef}. Then $\im(\cS_1(\omega) + i B) < 0$ for $\omega \in \Sigma_1$ as a consequence of Thm. \ref{thm: s_1 form_refined}. The Lax-Milgram theorem then implies that $\cS_1(\omega) + iB$ is boundedly invertible in $L^2(\Omega)^3$.
However, a further inspection shows that if $u$ is the weak solution of $(\cS_1(\omega) + iB) u = \curl g$, $g \in L^2(\Omega)^3$, i.e., \[ \langle (\Theta_m(\omega))^{-1} \curl_0 u, \curl_0 v \rangle + \langle (V_e(\omega)+iB) u, v \rangle = \langle g, \curl_0 v \rangle, \quad v \in (C^\infty_c(\Omega))^3, \] then for every $0 < \delta < 1$ we have \[ c_1(\omega) \norma{\curl_0 u}^2 + c_2(\omega) \norma{u}^2 \leq \frac{\norma{g}^2}{4 \delta} + \delta \norma{\curl_0 u}^2, \] in which \[ \begin{split} c_1(\omega) &= \essinf_{x \in \Omega}|\im \Theta_m(\omega, x)^{-1}| \\ &= |\re \omega| |2 \im \omega + \gamma_m | \, \essinf_{x \in \Omega}\left(\frac{1}{|\omega^2 + i \gamma_m \omega - \theta_m^2(x)|^2} \right),\\[0.3cm] c_2(\omega) &= \inf_{u \in L^2(\Omega)^3} \frac{| \im (\langle (V_e(\omega)+iB) u, u \rangle) |}{\norma{u}^2} > 0. \end{split} \] From this we deduce that $u \in H_0(\curl, \Omega)$, hence $(\cS_1(\omega) + i B)^{-1}$ maps $\curl L^2(\Omega)^3$ to $H_0(\curl, \Omega)$, or equivalently $\curl_0 (\cS_1(\omega) + i B)^{-1} \curl$ has bounded closure in $L^2(\Omega)^3$.\\ Now let $\omega \in \rho(\cS_1) \cap \Sigma_1$. Then $\cS_1(\omega)^{-1}$ is a bounded operator in $L^2(\Omega)^3$ and we have the resolvent identity \[ \cS_1(\omega)^{-1} = (\cS_1(\omega) + i B )^{-1} + i \cS_1(\omega)^{-1} B ( \cS_1(\omega) + i B)^{-1} \] and then \[ \cS_1(\omega)^{-1} \curl = (\cS_1(\omega) + i B )^{-1} \curl + i \cS_1(\omega)^{-1} B ( \cS_1(\omega) + i B)^{-1} \curl \] is bounded since so is the right-hand side, due to the previous discussion. Now, if $u \in L^2(\Omega)^3$ is the weak solution of \[ \cS_1(\omega) u = \curl (\Theta_m(\omega))^{-1} \curl_0 u + V_e(\omega) u = \curl g \] for some $g \in L^2(\Omega)^3$, then, since $V_e(\omega) u \in L^2(\Omega)^3$, we have $\curl \Theta_m(\omega)^{-1} \curl_0 u \in L^2(\Omega)^3$, hence $u \in \dom(\curl \Theta_m(\omega)^{-1} \curl_0) \subset H_0(\curl, \Omega)$.
In particular, $\cS_1(\omega)^{-1} \curl$ is bounded as an operator from $L^2(\Omega)^3$ to $H_0(\curl, \Omega)$, concluding the proof. \end{proof} \begin{prop}\label{sigmaLS} $\sigma(\cL)\cap \Sigma = \sigma(\cS_{1}) \cap \Sigma$ and $\sigma_x(\cL) \cap \Sigma = \sigma_x(\cS_{1}) \cap \Sigma$, where $x \in \{p, c, r, e\}$, denoting point, continuous, residual and essential spectra, respectively. \end{prop} \begin{proof} The proof is along the lines of \cite[Prop. 2.10.1(c)]{TreB}. Without loss of generality, we consider only $\omega \in \Sigma_1$. Note that $\cL(\omega)$ is the sum of a self-adjoint operator and a $\cJ$-self-adjoint bounded operator, where $\cJ = \diag (i, -i) J$, $J$ being the standard complex conjugation; hence, it is easy to check that $\cL(\omega)$ is $\cJ$-self-adjoint for all $\omega \in \Sigma_1$, and that its domain $\dom(\cL(\omega)) = H_0(\curl, \Omega) \oplus H(\curl, \Omega)$ does not depend on $\omega \in \Sigma_1$. Due to Theorem \ref{thm: s_1 form_refined}, $i \cS_1(\omega)$, $\omega \in \Sigma_1$, is a well-defined $m$-accretive operator associated with the sesquilinear form $i \fs_1(\omega)$, via the first representation theorem. In particular, $\dom(\cS_1(\omega)) \subset H_0(\curl, \Omega)$ does not depend on $\omega \in \Sigma_1$ and is a core for $H_0(\curl, \Omega)$. \\ We will first prove that $\sigma(\cS_1) \cap \Sigma_1 \subset \sigma(\cL) \cap \Sigma_1$ and that $\sigma_p(\cS_1) \cap \Sigma_1 = \sigma_p(\cL) \cap \Sigma_1$. If $f \in L^2(\Omega)^3$ and $\omega \in \rho(\cL) \cap \Sigma_1$, then the solution $u$ to the equation $\cL(\omega)(u,v)^t = (f,0)^t$, $v := - i (\omega + i \gamma_m)\Theta_m(\omega)^{-1} \curl_0 u$, is in one-to-one correspondence with the solution of $\cS_1(\omega) u = f$, therefore implying that $\omega \in \rho(\cS_1)$. This also proves that $\sigma_p(\cS_1) \cap \Sigma_1 = \sigma_p(\cL) \cap \Sigma_1$ by arguing in a similar way for $f = 0$.
\\ We now prove that $\sigma(\cS_1) \cap \Sigma_1 \supset \sigma(\cL) \cap \Sigma_1$. Assume that $\omega \in \rho(\cS_1) \cap \Sigma_1$. Then, at least on $L^2(\Omega)^3 \oplus V_m(\omega)H(\curl, \Omega)$, after recalling \eqref{Vemdef}, we can write the equality \begin{equation} \label{eq: res_eq} \cL(\omega)^{-1} = \begin{pmatrix} \cS_1(\omega)^{-1} & - \cS_1(\omega)^{-1} i \curl V_m(\omega)^{-1} \\ V_m(\omega)^{-1} i \curl_0 \cS_1(\omega)^{-1} & V_m(\omega)^{-1} (I + \curl_0 \cS_1(\omega)^{-1} \curl (V_m(\omega))^{-1}) \end{pmatrix} \end{equation} where we recall that $\cS_1(\omega) = T(\omega) + V_e(\omega)$ with $T(\omega) = \curl \Theta_m(\omega)^{-1} \curl_0$. First note that $L^2(\Omega)^3 \oplus V_m(\omega)H(\curl, \Omega)$ is dense in $L^2(\Omega)^3 \oplus L^2(\Omega)^3$ whenever $\omega \in \Sigma_1$. Hence, it suffices to prove that the right-hand side in \eqref{eq: res_eq} has bounded closure as an operator in $L^2(\Omega)^3 \oplus L^2(\Omega)^3$. This is an immediate consequence of Lemma \ref{lemma:bddcls}, thereby concluding the proof of the inclusion.\\ It remains to prove that $\sigma_e(\cL) \cap \Sigma = \sigma_e(\cS_1) \cap \Sigma$. Due to the previous part of the proof, it is enough to show that $\sigma_p(\cL) \cap \sigma_e(\cL) \cap \Sigma = \sigma_p(\cS_1) \cap \sigma_e(\cS_1)\cap \Sigma$. First, we note that $\cL(\omega)$ is $\cJ$-self-adjoint with respect to $\cJ = \diag(i, -i) J$ and $\cS_1(\omega)$ is $J$-self-adjoint, $J$ being the standard complex conjugation; therefore, \cite[Theorem IX.1.6]{EE} implies $\sigma_{e1}(\cL(\omega)) = \cdots = \sigma_{e4}(\cL(\omega))$ (and similarly for $\cS_1(\omega)$). We will first show that $\sigma_{e2}(\cS_1) \cap \Sigma \subset \sigma_{e2}(\cL) \cap \Sigma$. Let $\omega \in \sigma_{e2}(\cS_1) \cap \Sigma$ and let $u_n$ be a Weyl singular sequence in $\dom(\cS_1)$ such that $\cS_1(\omega) u_n \to 0$.
Then, by setting $v_n = - i(\omega + i \gamma_m) (\Theta_m(\omega))^{-1} \curl_0 u_n$ and $h_n = (u_n, v_n)/ (\norma{u_n}^2 + \norma{v_n}^2)^{1/2}$ we have $\cL(\omega) h_n = (\norma{u_n}^2 + \norma{v_n}^2)^{-1/2}((\omega + i \gamma_m)\cS_1(\omega) u_n, 0)^t \to 0$. Let $t \in \R$ be such that $\cS_1(\omega) + i t$ is boundedly invertible (this is possible since for $|t|$ sufficiently large $ \im (\fs_1(\omega) + it)[u] < 0$, hence by the Lax-Milgram theorem we conclude that $\cS_1(\omega) + i t$ is boundedly invertible). Now from $\cS_1(\omega) u_n \to 0$ we deduce \[ v_n = - i(\omega + i \gamma_m) (\Theta_m(\omega))^{-1} \curl_0 (\cS_1(\omega) + it)^{-1}(\cS_1(\omega) + it) u_n \rightharpoonup 0 \] since $\curl_0 (\cS_1(\omega) + it)^{-1}$ is a bounded operator, $\cS_1(\omega) u_n \to 0$ and $u_n \rightharpoonup 0$. Hence $h_n$ is a Weyl sequence for $\cL(\omega)$, and $\omega \in \sigma_{e2}(\cL)$. \\ We will now show that $\sigma_{e4}(\cS_1) \cap \Sigma \supset \sigma_{e4}(\cL) \cap \Sigma$. We can assume without loss of generality that we are inside $\Sigma_1$. Now assume that $\omega \in \Sigma_1$ but $\omega \notin \sigma_{e4}(\cS_1)$. Then there exists a compact operator $K$ such that $0 \in \rho(\cS_1(\omega) + K)$. According to \eqref{eq: res_eq}, if $\cK = \diag(K,0)$, then {\small \begin{multline*} (\cL(\omega) + \cK)^{-1} = \\ \begin{pmatrix} (\cS_1(\omega) + K)^{-1} & - (\cS_1(\omega) + K)^{-1} i \curl V_m(\omega)^{-1} \\ V_m(\omega)^{-1} i \curl_0 (\cS_1(\omega) + K)^{-1} & V_m(\omega)^{-1} (I + \curl_0 (\cS_1(\omega) + K)^{-1} \curl (V_m(\omega))^{-1}) \end{pmatrix}. \end{multline*} } In order to conclude, we then just need to show that $\curl_0 (\cS_1(\omega) + K)^{-1} \curl$ is bounded.
But this can be proved as in the proof of Lemma \ref{lemma:bddcls} by first showing that $\curl_0 (\cS_1(\omega) + K + iB)^{-1} \curl$ is bounded for some bounded operator $B < 0$, and then by using the resolvent identity {\small \begin{multline*} (\cS_1(\omega) + K)^{-1} \curl = \\ (\cS_1(\omega) + K + i B )^{-1} \curl + i(\cS_1(\omega) + K)^{-1} B ( \cS_1(\omega) + K + i B)^{-1} \curl. \end{multline*} } Altogether, $(\cL(\omega) + \cK)^{-1}$ is bounded as an operator in $L^2(\Omega)^3 \oplus L^2(\Omega)^3$, hence $\omega \notin \sigma_{e4}(\cL) \cap \Sigma_1$. \end{proof} According to Proposition \ref{sigmaLS}, $\sigma(\cL)\cap \Sigma$ can be enclosed in $W(\cS_1) \cap \Sigma$. \begin{theorem} \label{thm:refnumran} $\sigma(\cL) \cap \Sigma \subset W(\cS_1) \cap \Sigma$. Moreover, the following explicit enclosure holds: \[ \sigma(\cL) \setminus \overline{W(\Theta_m)}\subset \Gamma \setminus \overline{W(\Theta_m)}, \] where \[ \begin{split} \Gamma &:= \{\omega \in \C : \re \omega = 0, \im \omega \in (- \gamma_e, 0) \setminus \{-\gamma_m\} \}\\ &\hspace{2.5cm} \cup \{\omega \in \C : \re \omega \neq 0, \im \omega \geq - (\gamma_e + \gamma_m)/2, \textup{\eqref{enc} holds}\}.\\ \end{split} \] \end{theorem} \begin{proof} Note that due to Remark \ref{S1rmk}, the operator $\cS_1(it)$, $t \in \R \setminus (-\gamma_m, 0)$, is symmetric and since for fixed $t$ it is the sum of a semibounded self-adjoint operator and a bounded self-adjoint operator, $\cS_1(it)$ is self-adjoint and semibounded. The proof of Proposition \ref{sigmaLS} therefore extends to $\omega \in i\R \setminus (-i \gamma_m, 0)$, giving $\sigma(\cL) \setminus \overline{W(\Theta_m)} = \sigma(\cS_1) \setminus \overline{W(\Theta_m)}$ (note that $W(\Theta_m) \subset (- i \gamma_m, 0) \cup (-i \gamma_m/2 - s, -i \gamma_m/2 + s)$ for some $s > 0$ depending on $\theta_m$).\\ Let then $\omega \in \sigma_{\rm app}(\cS_1) \setminus \overline{W(\Theta_m)}$.
There exists $u_n \in \dom(\cS_1)$, $\norma{u_n} = 1$, $n \in \N$, such that $\cS_1(\omega) u_n \to 0$. In particular, \begin{equation} \label{NRS1} \begin{cases} &\im \langle \Theta_m(\omega)^{-1} \curl_0 u_n, \curl_0 u_n \rangle = \im \left\langle \frac{\Theta_e(\omega)}{(\omega + i\gamma_e)(\omega + i \gamma_m)} u_n, u_n \right\rangle + \eps_n \\ &\re \langle \Theta_m(\omega)^{-1} \curl_0 u_n, \curl_0 u_n \rangle = \re \left\langle \frac{\Theta_e(\omega)}{(\omega + i\gamma_e)(\omega + i \gamma_m)} u_n, u_n \right\rangle + \eps_n \end{cases} \end{equation} with $\eps_n \to 0$ as $n \to \infty$. Let $\omega = x + i y$. Then \eqref{NRS1} can be written explicitly as \begin{equation} \label{NRS1_2} \begin{cases} &\begin{aligned} &\left\langle \frac{x (2 y + \gamma_m)}{|\Theta_m(\omega)|^2} \curl_0 u_n, \curl_0 u_n \right\rangle = \frac{x \gamma_m}{x^2 + (y + \gamma_m)^2} \\ &\hspace{2cm}- \frac{x(2y + \gamma_e + \gamma_m)}{(x^2 + (y + \gamma_m)^2)(x^2 + (y + \gamma_e)^2)} \langle \theta_e^2 u_n, u_n \rangle + \eps_n \end{aligned}\\[1cm] &\begin{aligned} &\left\langle \frac{x^2 - y^2 - \gamma_m y - \theta_m^2}{|\Theta_m(\omega)|^2} \curl_0 u_n, \curl_0 u_n \right\rangle = \frac{x^2 + y^2 + \gamma_m y}{x^2 + (y + \gamma_m)^2} \\ &\hspace{2cm}- \frac{(x^2 - y^2 - y(\gamma_e + \gamma_m) - \gamma_e \gamma_m)}{(x^2 + (y + \gamma_m)^2)(x^2 + (y + \gamma_e)^2)}\langle \theta_e^2 u_n, u_n \rangle + \eps_n \end{aligned} \end{cases} \end{equation} If $x = 0$, $y < - \gamma_e$ then the second equation reads \begin{multline*} \left\langle \frac{- y^2 - \gamma_m y - \theta_m^2}{|\Theta_m(\omega)|^2} \curl_0 u_n, \curl_0 u_n \right\rangle = \frac{y^2 + \gamma_m y}{(y + \gamma_m)^2} \\ + \frac{(y + \gamma_e)(y + \gamma_m)}{(y + \gamma_m)^2(y + \gamma_e)^2}\langle \theta_e^2 u_n, u_n \rangle + \eps_n \end{multline*} and since $y < - \gamma_e < - \gamma_m$, the left-hand side is negative while the right-hand side is strictly positive for sufficiently large $n$.
Similarly, if $y > 0$, the left-hand side is negative while the right-hand side is strictly positive for sufficiently large $n$, a contradiction. Therefore, if $x = 0$, $y \in (- \gamma_e, 0)$. \\ Now, assume $x \neq 0$. We can then divide by $x$ in the first equation of \eqref{NRS1_2}. Then we see immediately that if $y \leq - (\gamma_e + \gamma_m)/2 < - \gamma_m/2$ then the left-hand side is negative while the right-hand side is strictly positive, a contradiction. Therefore, for $x \neq 0$, $y > - (\gamma_e + \gamma_m)/2$. \end{proof} \section{Decomposition of the essential spectrum} \label{sec:ess_spec} In this section we adapt the strategy of proof recently used for the analogous decomposition of the essential spectrum of the time-harmonic Maxwell system with non-trivial conductivity in the submitted article \cite{BFMT}. For the convenience of the reader we state and prove all the required results. Let $\Omega_R = \Omega \cap B(0,R)$, for $R>0$. For any $\delta>0$ we assume that the functions $\theta_e$ and $\theta_m$ admit a decomposition \begin{equation} \label{eq:coeffs-infty} \theta_e(x) = \theta^c_{e}(x) + \theta^\delta_{e}(x) + \theta_e^0, \quad \theta_m(x) = \theta^c_{m}(x) + \theta^\delta_{m}(x) + \theta_m^0 \end{equation} for all $x \in \Omega$, where $\theta^c_{e}$, $\theta^c_{m}$ have compact support in $\Omega_R$ (for some sufficiently large $R$ depending on $\delta$), and $\theta^\delta_{e}$, $\theta^\delta_{m}$ are bounded multiplication operators with norm less than $\delta$. In particular, \[ \lim_{R \to \infty} \sup_{|x| > R} |\theta_*(x) - \theta_*^0| = 0, \quad * = e,m. \] If $\Omega$ is bounded, we assume directly that $\theta_e = \theta_e^0$ and $\theta_m = \theta_m^0$ are constant.
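The splitting \eqref{eq:coeffs-infty} is easy to realise in practice. The following minimal Python sketch checks the three defining properties for a model scalar coefficient; the decaying profile $\exp(-|x|)$, the background value, and the sharp (rather than smooth) cut-off are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Model scalar coefficient (illustrative assumption): background value
# theta0 plus a perturbation decaying at infinity.
theta0 = 0.7
theta = lambda x: theta0 + np.exp(-np.abs(x))

def decompose(delta):
    """Split theta - theta0 into a compactly supported piece theta_c
    and a small tail theta_d with sup-norm at most delta."""
    R = -np.log(delta)                               # exp(-R) = delta
    chi = lambda x: (np.abs(x) <= R).astype(float)   # sharp cut-off at |x| = R
    theta_c = lambda x: (theta(x) - theta0) * chi(x)
    theta_d = lambda x: (theta(x) - theta0) * (1.0 - chi(x))
    return theta_c, theta_d, R

x = np.linspace(-50.0, 50.0, 100001)
for delta in (1e-1, 1e-3, 1e-6):
    theta_c, theta_d, R = decompose(delta)
    assert np.max(np.abs(theta_d(x))) <= delta             # norm of theta^delta
    assert not np.any(theta_c(x)[np.abs(x) > R])           # support in B(0, R)
    assert np.allclose(theta_c(x) + theta_d(x) + theta0, theta(x))  # sum = theta
```

The larger the required smallness $\delta$ of the tail, the larger the support radius $R$ of the compact part, exactly as in the statement.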
Corresponding to this decomposition of $\theta_e$ and $\theta_m$ we also introduce `limits at infinity' of the functions $\Theta_e$ and $\Theta_m$ in \eqref{Thetadef}, namely \begin{equation}\label{Thetainfdef} \Theta_{e,\infty}(\omega) = \omega^2 + \omega i\gamma_e -(\theta_e^0)^2, \;\;\; \Theta_{m,\infty}(\omega) = \omega^2 + \omega i\gamma_m -(\theta_m^0)^2, \end{equation} and of the functions $V_e$ and $V_m$ appearing in \eqref{Vemdef}, namely \begin{equation} \label{Veminfty} V_{m,\infty}(\omega) = \frac{\Theta_{m,\infty}(\omega)}{(\omega+i\gamma_m)}, \;\;\; V_{e,\infty}(\omega) = \frac{\Theta_{e,\infty}(\omega)}{(\omega+i\gamma_e)(\omega+i\gamma_m)}. \end{equation} We use the classical Helmholtz decomposition $L^2(\Omega)^3\!=\!\nabla \dot H^1_0(\Omega) \oplus H(\Div 0,\Omega)$, see e.g.\ \cite[Lemma 11]{MR3942228}, and we denote by $P_{\ker(\Div)}$ the associated orthogonal projection onto $H(\Div 0,\Omega)$. The following result is stated in \cite[Proposition 5.1]{BFMT} and in a less general setting in \cite[Lemma 23]{MR3942228}. \begin{prop} \label{thm: compactness} Let $m:\Omega\to \C^{3\times 3}$ be a function such that \begin{equation} \label{eq:limit-gen} \lim_{R\to\infty}\sup_{\|x\|>R} \|m(x)\|=0. \end{equation} Then $mP_{\ker(\Div)}$ is compact from $(H(\curl, \Omega), \norma{\cdot}_{H(\curl, \Omega)})$ to $( L^2(\Omega)^3, \norma{\cdot}_{ L^2(\Omega)^3})$. \end{prop} \begin{proof} Given $\delta>0$, there exist a bounded operator $m_\delta$ (which we identify with the corresponding multiplication operator in $L^2(\Omega)^3$) with $\|m_\delta\|<\delta$, and a function $m^\delta_c$ which is compactly supported in $\Omega_R := \Omega\cap B(0,R)$ for large $R>0$, such that $m = m^\delta_c + m_\delta$. We claim that $m^\delta_cP_{\ker(\Div)}$ is compact for every $\delta > 0$. Note that $\norma{m P_{\ker(\Div)} \!-\! 
m^\delta_c P_{\ker(\Div)}}_{\cB(H(\curl,\Omega), L^2(\Omega)^3)} \leq \delta$; letting $\delta \to 0$, it follows that $mP_{\ker(\Div)}$ is compact as the limit of the compact operators $m^\delta_c P_{\ker(\Div)}$. Define $\chi_R$ to be a $C^\infty$ cut-off function, $\chi_R = 1$ on $ \supp(m^\delta_c)\subset \Omega_R$ and $\chi_R = 0$ in $\R^3 \setminus \overline{\Omega_R}$. There exists $C_R>0$ such that, for $u\in H(\curl,\Omega)$, $$ \|(\chi_R P_{\ker(\Div)} u)|_{\Omega_R}\|_{H(\curl, \Omega_R) \cap H(\Div, \Omega_R)} \leq C_R\|u\|_{H(\curl, \Omega)}, $$ where we have used that $\Div (\chi_R P_{\ker(\Div)}u)=\nabla \chi_R\cdot P_{\ker(\Div)}u$ and the identity $\curl (\chi_R P_{\ker(\Div)}u)=\nabla \chi_R\times P_{\ker(\Div)}u+\chi_R \curl u$, which holds since $\curl P_{\ker(\Div)}u=\curl u$. Finally, $m^\delta_c P_{\ker(\Div)}$ is seen to be compact by rewriting it as follows $$m^\delta_c P_{\ker(\Div)} u = m^\delta_c \iota(\chi_R P_{\ker(\Div)} u)|_{\Omega_R};$$ $\iota$ is the compact embedding of ${H_0(\curl, \Omega_R)} \cap H(\Div, \Omega_R)$ in $L^2(\Omega_R)^3\!$, see~\cite{MR561375}. \end{proof} \begin{definition} \label{Smdef} For $\omega\in \Sigma = \Sigma_1 \cup \Sigma_2$, we define rational pencils of closed operators acting in the Hilbert space $H(\Div 0,\Omega)$ equipped with the $ L^2(\Omega)^3$-norm by \begin{align*} &\begin{array}{l} \cC_m(\omega) \!:=\! \curl (\Theta_m(\omega))^{-1} \curl_{0}, \;\;\;\;\;\; \cS_m(\omega) \!:=\! \cC_m(\omega) - V_{e,\infty}(\omega), \\[2mm] \dom(\cC_m(\omega)) = \dom(\cS_m(\omega)) \\ \hspace{2cm} \!:=\! \{u \in H_0(\curl,\Omega){\cap H(\Div 0,\Omega)} \; :\; (\Theta_m(\omega))^{-1}\curl u \!\in\! H(\curl,\Omega) \}, \end{array} \intertext{ and} &\begin{array}{l} \!\cC_{\infty}(\omega) \!:=\! (\Theta_{m,\infty}(\omega))^{-1} \curl \curl_{0},\;\;\;\;\;\; \!\cS_{\infty}(\omega) \!:=\! \cC_{\infty}(\omega) - V_{e,\infty}(\omega), \\[2mm] \!\dom(\cC_{\infty}(\omega)) = \!\dom(\cS_{\infty}(\omega)) \!:=\!
\{u \in H_0(\curl,\Omega) {\cap H(\Div 0,\Omega)} : \curl u \!\in\! H(\curl,\Omega) \} \end{array} \end{align*} where $V_{e,\infty}(\omega)$ is defined in \eqref{Veminfty}. \end{definition} \textbf{Notation.} For $\omega \in \C \setminus \{-i\gamma_e, -i \gamma_m\}$ define the function \begin{equation} \label{def:f} f(\omega) = \frac{\Theta_{m,\infty}(\omega) \Theta_{e,\infty}(\omega)}{(\omega + i \gamma_e)(\omega + i \gamma_m)}. \end{equation} If $\theta_m$ is not differentiable, it may happen that $\dom(\cS_m)$ is $\omega$-dependent and even that $\dom(\cS_m(\omega)) \cap \dom(\cS_{\infty}(\omega)) = \{ 0 \}$ for suitably chosen $\omega \in \Sigma$. In spite of this, the following result holds (the version for the non-self-adjoint time-harmonic Maxwell system was proved in \cite[Proposition 5.4]{BFMT}). \begin{prop} \label{thm: difference res} If $\theta_e$, $\theta_m$ satisfy \eqref{eq:coeffs-infty} and $\cS_m$, $\cS_{\infty}$ are as in Definition {\rm \ref{Smdef}}, then $\sigma_{ek}(\cS_m) \!=\! \sigma_{ek}(\cS_{\infty})$ for $k=1,2,3,4$, and hence \begin{align*} \sigma_{ek}(\cS_m) \cap \Sigma \!=\! \bigg\{ \omega \in \Sigma :\, f(\omega) = t, \:\, t \in \sigma_{ek}(\curl \curl_0|_{H(\Div 0, \Omega)}) \bigg\}, \end{align*} where $f$ is the function defined in \eqref{def:f}. \end{prop} \begin{proof} We follow the proof of \cite[Proposition 5.4]{BFMT}. Let $\omega\!\in\!\Sigma$ and to shorten the notation set $z_\omega\!:= V_{e,\infty}(\omega)\!$. Then $\omega\!\in\!\sigma_{ek}(\cS_m)$ if and only if $0\!\in\! \sigma_{ek}(\cC_m(\omega) - z_\omega)$ and $\omega\!\in\!\sigma_{ek}(\cS_\infty)$ if and only if $ 0 \!\in\!
\sigma_{ek}(\cC_\infty(\omega) - z_\omega)$ where $\cC_m$, $\cC_\infty$ are the operator functions defined in Definition \ref{Smdef}, for $\omega \in \Sigma$.\\ Since the quadratic form ${\mathfrak c}_m(\cdot)$ associated with $\cC_m(\cdot)$ and the form ${\mathfrak c}_\infty$ associated with $\cC_\infty(\cdot)$ have the same domain $\dom {\mathfrak c}_m(\cdot) \!=\! \dom {\mathfrak c}_\infty \!=\! H_0(\curl,\Omega)$, the second resolvent identity takes the~form \begin{equation} \begin{split} \label{eq:form-2nd-res.-id} &(\cC_m(\omega)\!-\!z_\omega)^{-1} \!- (\cC_\infty(\omega)\!-\!z_\omega)^{-1} \!=\! \\ &\big( \curl_0 (\cC_m(\omega)^*\!-\!\overline{z_\omega})^{-1} \big)^{\!*} (\Theta_{m,\infty}(\omega)^{-1} \!\!-\Theta_m(\omega)^{-1}) \curl_0 (\cC_\infty(\omega)\!-\!z_\omega)^{-1} \end{split} \end{equation} for $\omega\!\in\! \Sigma \cap (\rho(\cS_m) \cap \rho(\cS_\infty))$. In fact, for arbitrary $u$, $v \in L^2(\Omega)^3$ and $\omega\!\in\! \Sigma \cap (\rho(\cS_m) \cap \rho(\cS_\infty))$, we can write \begin{align*} &\!\big\langle \big( (\cC_m(\omega)\!-\!z_\omega)^{-1} \!- (\cC_\infty(\omega)\!-\!z_\omega)^{-1} \big) u,v \big\rangle \\ &\!=\! \big\langle u, (\cC_m(\omega)^*\!-\!\overline{z_\omega})^{-1} v \big\rangle - \big\langle (\cC_\infty(\omega)\!-\!z_\omega)^{-1} u, v \big\rangle\\ \!&=\! \big\langle (\cC_\infty(\omega)\!-\!z_\omega)(\cC_\infty(\omega)\!-\!z_\omega)^{-1} u, (\cC_m(\omega)^*\!-\!\overline{z_\omega})^{-1} v \big\rangle\\ & \hspace{3.5cm} - \big\langle (\cC_\infty(\omega)\!-\!z_\omega)^{-1} u, (\cC_m(\omega)^*\!-\!\overline{z_\omega})(\cC_m(\omega)^*\!-\!\overline{z_\omega})^{-1} v \big\rangle\\ \!&=\! ( {\mathfrak c}_\infty(\omega) - {\mathfrak c}_m(\omega) ) \big[ (\cC_\infty(\omega)\!-\!z_\omega)^{-1} u, (\cC_m(\omega)^*\!-\!\overline{z_\omega})^{-1} v \big]; \end{align*} together with ${\mathfrak c}_m(\omega) \!=\!
\langle\Theta_m(\omega)^{-1} \!\curl_0 \cdot, \curl_0 \cdot\rangle$ and analogously for ${\mathfrak c}_\infty(\omega)$, the identity \eqref{eq:form-2nd-res.-id}~follows. The first factor on the right-hand side of \eqref{eq:form-2nd-res.-id} is bounded since $\dom \cC_m(\cdot) \!\subset\! \dom \curl_0$. By assumption \eqref{eq:coeffs-infty}, for fixed $\omega \in \Sigma$, \textcolor{black}{condition \eqref{eq:limit-gen} of Proposition \ref{thm: compactness} is satisfied by} $(\Theta_m(\omega)^{-1}\!-\!\Theta_{m,\infty}(\omega)^{-1})$ and thus the operator $(\Theta_m(\omega)^{-1}\!-\!\Theta_{m,\infty}(\omega)^{-1}) P_{\ker\Div}$ is com\-pact from $H(\curl,\Omega)$ to $H(\Div 0,\Omega)\!\subset\! L^2(\Omega)^3$. The boundedness of $\curl_0 (\cC_\infty(\omega)\!-\!z_\omega)^{-1}$ from $H(\Div 0, \Omega)$ to $H(\curl,\Omega)$ follows from \[ \begin{split} \curl\curl_0 (\cC_\infty(\omega)\!-\!z_\omega)^{-1} &= \curl \curl_0 (\curl \curl_0 - f(\omega))^{-1} \Theta_{m,\infty}(\omega)\\ &= \Theta_{m,\infty}(\omega) \big( I + f(\omega)(\curl \curl_0 - f(\omega))^{-1} \big) \end{split} \] where $f$ is defined in \eqref{def:f}. Now, $f(it) < 0$ for $t > 0$, hence $$0 \leq I + f(it)(\curl \curl_0 - f(it))^{-1} \leq I,$$ and the boundedness for $\omega = it$, $t > 0$, follows. For a general $\omega \in \rho(\cS_{\infty})\cap \Sigma$, we have \[ (\curl \curl_0 - f(\omega))^{-1} = (\curl \curl_0 - f(it))^{-1} + (f(\omega) - f(it))(\curl \curl_0 - f(it))^{-1}(\curl \curl_0 - f(\omega))^{-1} \] hence, upon applying $\curl \curl_0$, $\curl \curl_0 (\cC_\infty(\omega)\!-\!z_\omega)^{-1}$ is seen to be bounded.\\ Altogether, the operator \begin{align*} &(\Theta_{m,\infty}(\omega)^{-1} \!\!-\Theta_m(\omega)^{-1}) \curl_0 (\cC_\infty(\omega)\!-\!z_\omega)^{-1} \\ &\hspace{3cm}= (\Theta_{m,\infty}(\omega)^{-1} \!\!-\Theta_m(\omega)^{-1}) P_{\ker(\Div)} \curl_0 (\cC_\infty(\omega)\!-\!z_\omega)^{-1} \end{align*} is compact.
Hence, by \eqref{eq:form-2nd-res.-id}, the resolvent difference of $\cS_m(\omega)$ and $\cS_\infty(\omega)$ is compact and, by \cite[Thm.\ IX.2.4]{EE}, $\sigma_{ek}(\cS_m(\omega))=\sigma_{ek}(\cS_\infty(\omega))$ follows for all $k=1,2,3,4$, $\omega \in \Sigma$, \textcolor{black}{hence $0 \in \sigma_{ek}(\cS_m(\omega))$ if and only if $0 \in \sigma_{ek}(\cS_\infty(\omega))$, for $\omega\in\Sigma$. This means that} $\sigma_{ek}(\cS_m) \cap \Sigma=\sigma_{ek}(\cS_\infty) \cap \Sigma$. \end{proof} \begin{rem} \label{rem: unif_bound_Cinfty} In the proof of Prop. \ref{thm: difference res}, it is shown that for $\omega = it$, $t > 0$, $\norma{\curl_0 (\cC_\infty(\omega)\!-\!z_\omega)^{-1}}_{\cB(H(\Div 0, \Omega),H(\curl,\Omega))} \leq C$, where the constant $C>0$ does not depend on $\Omega$. This will be important in Section \ref{sec:lim_ess_spec}, where families of domains are considered. \end{rem} We further state the following abstract result regarding the spectrum of triangular block operator matrices, a proof of which can be found in \cite[Theorem 8.1]{BFMT}. Following \cite[Chp.IX, p.414]{EE}, given a linear operator $T$ densely defined in $\cH$, we set $\sigma^*_{e2}(T) = \{ \omega \in \C : {\rm def} (T - \omega) = \infty \}$, with the convention that ${\rm def} (T - \omega) = \infty$ if $\ran (T - \omega)$ is not closed. \begin{theorem} \label{thm: ess spec} Let $\cA$ be defined by \[ \cA = \begin{pmatrix} A & 0 \\ C & D \end{pmatrix} \] where $A$, $D$ are densely defined, $C$, $D$ are closable, $\dom(A) \subset \dom(C)$ and $\rho(A) \neq \emptyset$.
Then \begin{equation} \label{se2} \big( \sigma_{e2}(A) \setminus \sigma_{e2}^*(\overline{D} ) \big) \cup \sigma_{e2}(\overline{D} ) \subset \sigma_{e2}(\overline{\cA}) \subset \sigma_{e2}(A) \cup \sigma_{e2}(\overline{D} ), \vspace{-1mm} \end{equation} and hence \[ \sigma_{e2}(\overline{\cA}) \cup \big( \sigma_{e2}(A) \cap \sigma_{e2}^*(\overline{D} ) \big) = \sigma_{e2}(A) \cup \sigma_{e2}(\overline{D} ); \] in particular, if $\sigma_{e2}^*(\overline{D}) = \sigma_{e2}(\overline{D})$ or if $ \sigma_{e2}(A) \cap \sigma_{e2}^*(\overline{D})=\emptyset$, then \[ \sigma_{e2}(\overline{\cA}) = \sigma_{e2}(A) \cup \sigma_{e2}(\overline{D} ). \] \end{theorem} We are now in a position to prove the following theorem, which yields a decomposition of $\sigma_{e}(\cL)$ as the union of the essential spectrum of the constant-coefficient pencil $\cS_{\infty}$ and the essential spectrum of the pencil of bounded multiplication operators $V_e(\cdot)$, compressed to gradient fields. For the convenience of the reader, the relations between the several different operators and their essential spectra are represented in Fig. \ref{fig:graph_ALS}. \begin{theorem} \label{sigma-ess} Suppose that $\theta_e$, $\theta_m$ satisfy the limiting assumption \eqref{eq:coeffs-infty}. Let $P_\nabla := \id - P_{\ker(\Div)}$ be the orthogonal projection from $ L^2(\Omega)^3\!=\!\nabla \dot H^1_0(\Omega) \oplus H(\Div 0,\Omega)$ onto $\nabla \dot{H}^1_0(\Omega)$. Let $G(\omega)$ denote the operator \begin{equation}\label{B1def} \mbox{$G(\omega) = -P_{\nabla} V_e(\omega) P_{\nabla}$, with $\dom (G(\omega))=\nabla \dot{H}^1_0(\Omega)$}, \end{equation} viewed as an operator from the space $\nabla \dot{H}^1_0(\Omega)$ to $\nabla \dot{H}^1_0(\Omega)$. Then \[ \sigma_{ek}(\cS_1) \cap \Sigma = (\sigma_{ek}(\cS_\infty) \cup \sigma_{ek}(G)) \cap \Sigma, \quad k=1,2,3,4, \] where $\sigma_{ek}(\cS_\infty)$ is described in Prop.
\ref{prop: spectrum infty}, and \[ \sigma_{ek}(G) \subset \begin{cases}- i [0, \gamma_e), \quad &\textup{if $- \frac{\gamma_e^2}{4} + \norma{\theta^2_e}_{\infty} \leq 0$,} \\ - i [0, \gamma_e) \cup \left( (-d_e, d_e) \times \{-i \frac{\gamma_e}{2} \} \right), \quad &\textup{if $- \frac{\gamma_e^2}{4} + \norma{\theta^2_e}_{\infty} > 0$,} \end{cases} \] with $d_e = \sqrt{- \frac{\gamma_e^2}{4} + \norma{\theta^2_e}_{\infty}}$. \end{theorem} \begin{proof} Let $\omega\!\in\!\Sigma$. The operator $M(\omega)\!:=\!(V_e(\omega)\!-\!V_{e,\infty}(\omega))P_{\ker(\Div)}$ in $ L^2(\Omega)^3$ is $\curl_0$-compact by Proposition~\ref{thm: compactness}, and hence $\cC_m(\omega)$-compact, since $\dom(\cC_m) \subset \dom(\curl_0)$, where $\cC_m(\omega) = \curl \Theta_m(\omega)^{-1} \curl_0$ is defined in Definition \ref{Smdef}. Since $\cS_1(\omega)=\cC_m(\omega) - V_e(\omega)$, with $V_e(\omega)$ a bounded multiplication operator, bounded sequences in the $\cS_1(\omega)$-graph norm have bounded $\cC_m(\omega)$-graph norms. Hence $M(\omega)$ is $\cS_1(\omega)$-compact, which yields $\sigma_e(\cS_1(\omega))=\sigma_e(\cS_1(\omega)+M(\omega))$. Since $\nabla \dot{H}^1_0(\Omega)\subset\ker(\curl_0)$ and hence $\curl_0 P_\nabla \!=\! P_\nabla \curl\!=\!0$, $\nabla \dot{H}^1_0(\Omega)$ is a reducing subspace for $\curl \Theta_m(\omega)^{-1} \curl_0$. Therefore the operator \begin{equation}\label{eq:omcT} \begin{aligned} &\!\cT(\omega)\!:=\! \cS_1(\omega)\!+\!M(\omega) \\ &\!=\! \curl \Theta_m(\omega)^{-1} \curl_0 - V_e(\omega)P_\nabla - V_e(\omega) P_{\ker(\Div)} + (V_e(\omega)\!-\!V_{e,\infty}(\omega))P_{\ker(\Div)} \\ &\!=\! \cC_m(\omega)\!-\!V_e(\omega) P_\nabla - V_{e,\infty}(\omega) P_{\ker(\Div)} \end{aligned} \end{equation} which is a bounded perturbation of $\cC_m(\omega)$, admits an operator matrix representation with respect to the decomposition $ L^2(\Omega)^3\!=\!
\nabla \dot{H}^1_0(\Omega) \oplus H(\Div0, \Omega)$ given \vspace{-1mm} by \begin{align} \cT(\omega) \!&=\! \begin{pmatrix} \hspace{5.7mm} P_\nabla \cT(\omega) |_{\nabla \dot H^1_0(\Omega)} \!&\! \hspace{6.5mm} P_\nabla \cT(\omega) |_{H(\Div0, \Omega)}\\ P_{\ker\Div} \cT(\omega) |_{\nabla \dot H^1_0(\Omega)} \!&\! P_{\ker\Div} \cT(\omega) |_{H(\Div0, \Omega)} \end{pmatrix} \nonumber \\ \!&=\! \begin{pmatrix} \hspace{5.5mm} -P_\nabla V_e(\omega) |_{\nabla \dot H^1_0(\Omega)} \!&\! 0 \\ -P_{\ker\Div} V_e(\omega) |_{\nabla \dot H^1_0(\Omega)} \!&\! P_{\ker\Div} (\cC_m(\omega)\!-\!V_{e,\infty}(\omega)) |_{H(\Div0, \Omega)}\\ \end{pmatrix} \nonumber \\ \!&=\! \begin{pmatrix} \hspace{5.7mm} G(\omega) \!&\! 0 \\ -P_{\ker\Div} V_e(\omega) |_{\nabla \dot H^1_0(\Omega)} \!&\! \cS_m(\omega) \end{pmatrix}. \label{eq: A^0} \\[-7mm] \nonumber \end{align} with domain $\dom(\cT(\omega))=\nabla \dot{H}^1_0(\Omega)\oplus \dom(\cS_m(\omega))$. Apart from $\cS_m(\omega)$, the other two matrix entries in $\cT(\omega)$ are bounded and everywhere defined, and $\sigma_{e2}(\cS_m(\omega)) = \sigma^*_{e2}(\cS_m(\omega))$, due to $J$-self-adjointness. Thus Theorem \ref{thm: ess spec} and Proposition~\ref{thm: difference res} yield \vspace{-1mm} that $$ \sigma_{e2}(\cT\!(\omega)) \!=\! \sigma_{e2}(\cS_m(\omega)) \cup \sigma_{e2}(G(\omega)) \!=\! \sigma_{e2}(\cS_\infty(\omega)) \cup \sigma_{e2}(G(\omega)) $$ and hence, since $\omega\in\Sigma$ was arbitrary, \begin{align*} \sigma_{e2}(\cS_1) \cap \Sigma&=\sigma_{e2}(\cS_1+M) \cap \Sigma=\sigma_{e2}(\cT) \cap \Sigma = (\sigma_{e2}(\cS_\infty) \cup \sigma_{e2}(G)) \cap \Sigma. \qedhere \\[-6mm] \end{align*} \end{proof} \begin{figure} \centering \includegraphics{graph_ALS.pdf} \caption{Relations between $\cA$, $\cL$ and $\cS_1$. $\sigma_e(\cS_1)$ then decomposes as $\sigma_e(\cS_\infty) \cup \sigma_e(G)$, according to Thm \ref{sigma-ess}.} \label{fig:graph_ALS} \end{figure} \section{Spectrum of the reduced operators} \label{sec:red_op} Recall that \[\cS_{1,\infty}(\omega)|_{\ker(\Div)} = \cS_{\infty}(\omega)\] for all $\omega \in \Sigma$, where $\cS_\infty$ is defined in Definition \ref{Smdef}. Since $\cS_\infty$ is a constant-coefficient operator, by classical symbol analysis we deduce the following. \begin{prop} \label{prop: spectrum infty} For $f$ as in \eqref{def:f} we have \[ \sigma_e(\cS_\infty) \cap \Sigma = \bigg\{ \omega \in \C : f(\omega)= t, \:\, t \in \sigma_e(\curl \curl_0|_{\ker(\Div)}) \bigg\} \cap \Sigma. \] \end{prop} \begin{proof} Without loss of generality, we may assume $\omega \in \Sigma_1$, the case $\omega \in \Sigma_2$ being similar. We first note that if $\omega \in \Sigma_1$, $\Theta_{m, \infty}(\omega)$ is (boundedly) invertible. Studying the spectrum of $\cS_\infty$ is then equivalent to considering directly the pencil $(\curl \curl_0 - f(\omega))|_{\ker(\Div)}$, which we will call again $\cS_\infty(\omega)$ with an abuse of notation. Now, $\omega \in \sigma_e(\cS_{\infty}) \cap \Sigma_1$ if and only if there exists a Weyl singular sequence $u_n\in H(\Div0,\Omega)$ such that $\curl \curl_0 u_n - f(\omega)u_n \to 0$, which holds if and only if $f(\omega) \in \R$ and $f(\omega) \in \sigma_e(\curl \curl_0|_{\ker(\Div)})$. The claim is proved. \end{proof} \begin{prop} \label{prop: We infty} \[ W_e(\cS_\infty) \cap \Sigma= \big\{ \omega \in \C : f(\omega) = t, \ t \in W_e(\curl \curl_0|_{\ker(\Div)}) \big\} \cap \Sigma. \] \end{prop} \begin{proof} If $(u_n)$ is a Weyl sequence in $H(\Div0, \Omega)$ with $\| u_n \| = 1$ for all $n$, then $(\curl_0 u_n, \curl_0 u_n) - t \to 0$ if and only if $(\curl_0 u_n, \curl_0 u_n) - f(\omega) \to 0$ for all $\omega\in\C$ such that $f(\omega) = t$. \end{proof} As a consequence of Proposition \ref{prop: spectrum infty} we can study the asymptotics of the spectrum of $\cS_\infty$.
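Since $f$ in \eqref{def:f} is a ratio of a quartic and a quadratic polynomial, the preimages $f^{-1}(t)$ appearing in Proposition \ref{prop: spectrum infty} are the four roots of $\Theta_{e,\infty}(\omega)\Theta_{m,\infty}(\omega) - t\,(\omega+i\gamma_e)(\omega+i\gamma_m)$. A short numeric sketch recovers them; the values of $\gamma_e$, $\gamma_m$, $\theta_e^0$, $\theta_m^0$ and of $t$ below are illustrative assumptions.

```python
import numpy as np

# Illustrative constants (assumptions, not taken from the text)
ge, gm = 2.0, 1.0          # gamma_e, gamma_m
ae, am = 0.7**2, 0.5**2    # (theta_e^0)^2, (theta_m^0)^2

f = lambda w: ((w**2 + 1j*ge*w - ae) * (w**2 + 1j*gm*w - am)
               / ((w + 1j*ge) * (w + 1j*gm)))

def preimages(t):
    """Roots of Theta_{e,inf}*Theta_{m,inf} - t*(w + i*ge)*(w + i*gm)."""
    num = np.polymul([1, 1j*ge, -ae], [1, 1j*gm, -am])   # quartic numerator
    den = np.polymul([1, 1j*ge], [1, 1j*gm])             # quadratic denominator
    return np.roots(np.polysub(num, t * den))

t = 3.0   # a sample real value t >= 0
roots = preimages(t)
assert len(roots) == 4
assert max(abs(f(w) - t) for w in roots) < 1e-8
# The identity f(-conj(w)) = conj(f(w)) makes the preimage set symmetric
# about the imaginary axis when t is real:
assert max(abs(f(-np.conj(w)) - t) for w in roots) < 1e-8
```

For real $t$ the four preimages thus come in pairs symmetric with respect to the imaginary axis, consistent with the shape of the spectral enclosures above.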
\begin{prop} \label{prop: asymptotics spectrum} Assume that \begin{equation}\label{eq: asymp} f(\omega_n) = \frac{\Theta_{e,\infty}(\omega_n)\Theta_{m, \infty}(\omega_n)}{(i\gamma_e + \omega_n)(i \gamma_m + \omega_n)} = t_n \to + \infty \end{equation} as $n \to \infty$, $t_n \in \sigma_e(\curl \curl_0|_{\ker(\Div)})$. Then the following are true: \begin{enumerate}[label = (\roman*)] \item if $\re \omega_n$ is bounded as $n \to \infty$ then $\dist(\omega_n, \{-i \gamma_e, -i \gamma_m\}) \to 0$. If $\omega_n \to -i \gamma_x$, $x=e,m$, then $\omega_n = - i \gamma_x - \frac{i c}{t_n} + o(1/t_n)$ as $n \to \infty$, where $c \in \R \setminus \{0\}$ is explicitly given by $c = (\theta^0_x)^2\big( - \gamma_x + \frac{(\theta^0_y)^2}{\gamma_y - \gamma_x} \big)$, with $y \neq x$, $x,y \in \{ e, m\}$. \item If $|\re \omega_n| \to +\infty$ then $t_n = |\re \omega_n|^2 + o(|\re \omega_n|^2)$ as $n \to \infty$ and $\im \omega_n \to 0$ with the asymptotics $\im \omega_n = - \frac{1}{2\re \omega_n^2} ((\theta^0_e)^2 \gamma_e + (\theta^0_m)^2 \gamma_m) + o(1/\re \omega_n^2)$. \end{enumerate} \end{prop} \begin{proof} (i) $(\im \omega_n)_n$ is a bounded sequence since the numerical range of $\cS_{1, \infty}$ is contained in a horizontal strip. Since also $(\re \omega_n)_n$ is a bounded sequence by assumption, we may assume that up to a subsequence $\omega_n \to \omega_{\infty} \in \C$. From \eqref{eq: asymp} we have that \[ (\omega_n^2 + i \gamma_m \omega_n - (\theta_m^0)^2)(\omega_n^2 + i \gamma_e \omega_n - (\theta_e^0)^2) = t_n (\omega_n + i \gamma_e)(\omega_n + i \gamma_m) \] so \[ \limsup_{n \to \infty} |t_n (\omega_n + i \gamma_e)(\omega_n + i \gamma_m)| \leq |(\omega_\infty^2 + i \gamma_m \omega_\infty - (\theta_m^0)^2)(\omega_\infty^2 + i \gamma_e \omega_\infty - (\theta_e^0)^2)| \] which implies that either $\omega_\infty = - i \gamma_e$ or $\omega_\infty = -i \gamma_m$.
Set then $\omega_n = -i \gamma_e + \eps_n$, where $\eps_n \to 0$, $\eps_n \in \C$ (the case $\omega_\infty = -i\gamma_m$ is analogous). Substituting this ansatz in \eqref{eq: asymp} and keeping only the zeroth order terms we get \[ - (\theta_e^0)^2(- \gamma_e^2 + \gamma_e \gamma_m - (\theta_m^0)^2) = z (i (\gamma_m - \gamma_e)), \quad z = \lim_{n \to \infty} t_n \eps_n. \] The limit defining $z$ exists up to a subsequence, since equation \eqref{eq: asymp} cannot be satisfied if $(t_n \eps_n)_n$ is not a bounded sequence. Hence we have that \[ \eps_n = \frac{z}{t_n} + o(1/t_n) = - \frac{i (\theta_e^0)^2}{t_n} \bigg( - \gamma_e + \frac{(\theta_m^0)^2}{\gamma_m - \gamma_e} \bigg) + o(1/t_n) \] as $n \to \infty$, concluding the proof of $(i)$. \\ (ii) To shorten the notation, let us set $x_n = \re \omega_n$, $y_n = \im \omega_n$. From equation \eqref{eq: asymp}, recalling that $(y_n)_n$ is bounded, after taking the real part we see that \[ x_n^4 - t_n x_n^2 = o(x_n^4) \quad \Rightarrow \quad t_n = x_n^2 + o(x_n^2), \quad n \to \infty. \] A further inspection of equation \eqref{eq: asymp} gives that the $o(x_n^2)$-term must be of the form $c_n + o(1)$, where $c_n$ possibly depends on $y_n$. Using the ansatz $t_n = x_n^2 +c_n$ in the real part of \eqref{eq: asymp} and neglecting the lower order terms we get \[ \begin{array}{c} x_n^4 - 6 x_n^2 y_n^2 - 3 x_n^2 y_n (\gamma_e + \gamma_m) + x_n^2 (- (\theta_e^0)^2 - (\theta_m^0)^2 - \gamma_e \gamma_m) \hspace{2cm}\\ = x_n^4+ x_n^2(c_n - y_n^2 - y_n(\gamma_e + \gamma_m) - \gamma_e \gamma_m)\end{array} \] from which we deduce $-5 y_n^2 - 2 y_n ( \gamma_e + \gamma_m) - (\theta_e^0)^2 - (\theta_m^0)^2 = c_n + o(1)$ as $n \to \infty$. In order to continue the analysis we now turn to the imaginary part of \eqref{eq: asymp}.
More explicitly, we have \begin{multline} \label{eq: asymp eq imag} 4 x_n^3 y_n - 4 x_n y_n^3 + (x_n^3 - 3 x_n y_n^2)(\gamma_e + \gamma_m) + 2x_n y_n (- (\theta^0_e)^2 - (\theta_m^0)^2 - \gamma_e \gamma_m) \\ - x_n \big(\gamma_m (\theta_e^0)^2 + \gamma_e (\theta_m^0)^2\big) = (x_n^2 + c_n) x_n( 2 y_n + (\gamma_e + \gamma_m)) \end{multline} The terms in $x_n^3$ cancel. Now, if $y_n$ does not tend to zero as $n \to \infty$, the highest order term in the previous equation is $x_n^3 y_n$, so we get the equation $2 x_n^3 y_n = o(x_n^3)$, which is a contradiction. Hence $y_n \to 0$. The candidate highest order terms are $x_n^3 y_n$ and $x_n$. By direct inspection one checks that if $x_n^3 y_n = o(x_n)$ or $x_n = o(x_n^3 y_n)$ as $n \to \infty$ the previous equation gives a contradiction. So it must be $y_n = \frac{z_n}{x_n^2} + o(1/x_n^2)$ as $n \to \infty$, with $(z_n)_n$ bounded. By using this ansatz in \eqref{eq: asymp eq imag} and keeping only the highest order terms (namely the ones in $x_n$), we obtain \[ - x_n (\gamma_m (\theta_e^0)^2 + \gamma_e (\theta_m^0)^2) + 4 z_n x_n = 2 z_n x_n + c_n(\gamma_e + \gamma_m) x_n, \] for $c_n = - (\theta_e^0)^2 - (\theta_m^0)^2 + o(1)$. Thus, $z_n = - \frac{1}{2} ((\theta_e^0)^2 \gamma_e + (\theta_m^0)^2 \gamma_m) + o(1)$ and $y_n = - \frac{1}{2 x_n^2} ((\theta_e^0)^2 \gamma_e + (\theta_m^0)^2 \gamma_m) + o(1/x_n^2)$. \end{proof} We now turn to the bounded pencil $G(\cdot)$ defined in \eqref{B1def}. Recall that $V_e(\omega)(x) = \frac{\Theta_e(\omega,x)}{(\omega + i \gamma_e)(\omega + i \gamma_m)}$, $\omega \in \Sigma$, $x \in \Omega$. \begin{prop} Assume that $\theta_e$ is a continuous function on $\overline{\Omega}$.
Then \[ \begin{split} \sigma_e(G) &= \{ \omega \in \C :\, \exists x_0 \in \Omega, \, \Theta_e(\omega,x_0) = 0 \}\\ &= \{ \omega \in \C :\, \exists x_0 \in \Omega,\, \omega = - i \gamma_e/2 \pm (\sqrt{- \gamma_e^2 + 4 \theta_e^2(x_0)})/2 \} \end{split} \] \end{prop} \begin{proof} Since $(\omega + i \gamma_e) (\omega + i \gamma_m)$ is constant in $x \in \Omega$, we can replace $V_e$ by $\Theta_e$ in the definition of $G$ without changing the spectrum. We first note that the set \[\{ \omega \in \C : \exists x_0, x_1 \in \Omega, \ \re(\Theta_e(\omega, x_0)) = 0, \ \im(\Theta_e(\omega, x_1)) = 0\}\] actually coincides with $\{ \omega \in \C : \exists x_0 \in \Omega,\ \Theta_e(\omega,x_0) = 0 \}$. In fact, it is easy to check that $\im(\Theta_e(\omega, x)) = 2 \re \omega \im \omega + \re \omega \gamma_e$ does not depend on $x \in \Omega$, and therefore $\im(\Theta_e(\omega, x_1)) = 0$ for some $x_1 \in \Omega$ if and only if $\im(\Theta_e(\omega, x)) = 0$ for all $x \in \Omega$.\\ It is clear that if either $\re(\Theta_e(\omega))$ or $\im(\Theta_e(\omega))$ is strictly positive (or strictly negative) in the whole of $\Omega$ then by the Lax-Milgram theorem the problem \[(\Theta_e(\omega) \nabla u, \nabla v) = \langle F , v \rangle, \:\:F \in H^{-1}(\Omega),\: u,v \in \dot{H}^1_0(\Omega)\] has a unique solution $u_F \in \dot{H}^1_0(\Omega)$. This proves the inclusion \[ \sigma_e(G) \subseteq \{ \omega \in \C : \exists x_0 \in \Omega,\ \Theta_e(\omega,x_0) = 0 \}. \] The reverse inclusion (which uses the continuity of $\theta_e$) follows by constructing quasi-modes as in the proof of \cite[Proposition 27]{MR3942228}. \end{proof} \section{Limiting essential spectrum and spectral pollution} \label{sec:lim_ess_spec} The aim of this section is to enclose the set of spectral pollution for the domain truncation method applied to the Drude-Lorentz pencil $\cL$. We start by recalling three definitions.
\begin{definition} \label{lim-sigma-app} For a family of operator-valued functions $(F_n(\cdot))$ defined on some set $K\subset \C$, the {\it limiting approximate point spectrum}, denoted $\sigma_{\rm app}((F_n)_n)$, is the set of $\omega \in K$ such that there exists a sequence $(u_n)_n$ with $u_n\in \dom(F_n(\omega))$ for each $n$, $\| u_n\| = 1$, and $\| F_n(\omega)u_n\| \to 0$ as $n\to \infty$. For a family of operators $(F_n)_n$, one takes $K=\C$ and the requirement is that $\|(F_n-\omega I)u_n\|\to 0$ as $n\to\infty$. \end{definition} \begin{definition} \label{lim-sigma-ess} For a family of operator-valued functions $(F_n(\cdot))$ defined on some set $K\subset \C$, the {\it limiting essential spectrum}, denoted $\sigma_e((F_n)_n)$, is the subset of $\sigma_{\rm app}((F_n)_n)$ consisting of $\omega \in K$ for which the sequence $(u_n)_n$ as in Definition \ref{lim-sigma-app} has the additional property $u_n \rightharpoonup 0$ as $n \to \infty$. \end{definition} \begin{definition}\label{region-of-boundedness} For a family of operator-valued functions $(F_n(\cdot))$ defined on some set $K\subset \C$, the {\it region of boundedness}, denoted $\Delta_b((F_n)_n)$, is the set of $\omega \in K$ such that $(F_n(\omega))^{-1}$ exists for all sufficiently large $n$ and $\limsup_{n\to\infty} \| (F_n(\omega))^{-1} \| < +\infty$. For a family of operators $(F_n)_n$, $\Delta_b((F_n)_n)$ is the set of $\omega \in \C$ such that $(F_n-\omega I)^{-1}$ exists for all sufficiently large $n$ and $\limsup_{n\to\infty} \| (F_n - \omega I)^{-1} \| < +\infty$. \end{definition} Given an unbounded Lipschitz domain $\Omega$, let $(\Omega_n)_n$ be a monotonically increasing sequence of bounded Lipschitz domains exhausting $\Omega$. Note that we do not make any assumption on the topology of $\Omega$; in particular, $\R^3 \setminus \overline{\Omega}$ may have infinitely many connected components.
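The phenomenon these definitions are designed to capture can be seen in a classical toy example of truncation, unrelated to the Maxwell operators above and included purely as an illustration: the finite sections of the bilateral shift on $\ell^2(\Z)$.

```python
import numpy as np

def finite_section(n):
    """n x n finite section (truncation) of the bilateral shift S e_k = e_{k+1}."""
    return np.diag(np.ones(n - 1), k=-1)

# Every truncation J_n is nilpotent, hence sigma(J_n) = {0} for all n,
# although sigma(S) is the unit circle and 0 lies in rho(S):
# the point 0 is spectral pollution produced by the truncation.
for n in (5, 10, 20):
    assert not np.any(np.linalg.matrix_power(finite_section(n), n))  # J_n^n = 0

def res_norm(n, z):
    """Spectral norm of the truncated resolvent (J_n - z)^{-1}."""
    return np.linalg.norm(np.linalg.inv(finite_section(n) - z * np.eye(n)), 2)

# Region of boundedness: for |z| > 1 = ||J_n|| the resolvents are uniformly
# bounded in n (Neumann series bound 1/(|z| - 1)), so z lies in Delta_b((J_n)_n);
# for 0 < |z| < 1 the norms blow up as n grows, so z does not.
assert all(res_norm(n, 2.0) <= 1.0 + 1e-9 for n in (5, 10, 20))
inside = [res_norm(n, 0.5) for n in (5, 10, 20)]
assert inside[0] < inside[1] < inside[2]
```

In the language of the definitions above, the polluting points $|z| < 1$ are detected precisely by the failure of the uniform resolvent bound, i.e.\ by $z \notin \Delta_b((J_n)_n)$; this is the role $\Delta_b$ plays for the pencils $(\cL_n)_n$ below.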
We will denote by $\cA_n$, $\cL_n$, $\cS_{1,n}$, \emph{etc.,} the operators or pencils obtained from $\cA$, $\cL$, $\cS_1$ by replacing $\Omega$ with $\Omega_n$ in their domain definitions. In order to avoid cumbersome notation we will not use the subscript $n$ to denote the restriction of multiplication operators to $\Omega_n$.\\ The domain truncation method consists of finding $\sigma(\cL_n)$, $n \in \N$, in the hope that for sufficiently large $n$ these will be good approximations to $\sigma(\cL)$. Ideally, one would like to prove that $(\cL_n)_n$ is a spectrally exact approximation of $\cL$, that is, for every $\omega \in \sigma(\cL)$ there exists $\omega_n \in \sigma(\cL_n)$, $n \in \N$, such that $\omega_n \to \omega$; and conversely, every limit point of a sequence $(\omega_n)_n$ with $\omega_n \in \sigma(\cL_n)$, $n \in \N$, lies in $\sigma(\cL)$. However, this is false in general. Indeed, spectral pollution may appear due to the non-self-adjointness of the operators involved. Therefore, our strategy will be to enclose the set of spectral pollution in a (possibly) small subset of $\C$ and to show that we can approximate exactly the discrete points of $\sigma(\cL)$ outside the set of spectral pollution. We begin with a result about the generalised resolvent convergence of the operators involved. \begin{theorem} \label{thm:gsr} The following statements hold. \begin{enumerate}[label=(\roman*)] \item $\cA_n \gsr \cA$, $n \to \infty$. \item $\cL_n(\omega) \gsr \cL(\omega)$ for all $\omega \in (\Delta_b((\cL_n)_n)\setminus \{-i \gamma_e, - i \gamma_m\}) \cap \rho(\cL)$, $n \to \infty$. \item $\Delta_b((\cL_n)_n)\cap \Sigma = \Delta_b((\cS_{1,n})_n)\cap \Sigma$. \item $\cS_{1,n}(\omega) \gsr \cS_{1}(\omega)$ for all $\omega \in (\Delta_b((\cL_n)_n) \cap \Sigma) \cap \rho(\cL)$, $n \to \infty$.
\end{enumerate} \end{theorem} \begin{proof} (i) According to Proposition \ref{prop: num range 1}, $W(\cA_n) \subset \R \times [-i\gamma_e, 0]$ for all $n \in \N$, and the same enclosure holds for the numerical range of $\cA$ as well. We then deduce that $\norma{(\cA_n - \la)^{-1}} \leq \dist(\la, W(\cA_n))^{-1} \leq \min \{|\im \la|, |\im \la + \gamma_e| \}^{-1}$ for all $\la \in (\R \times [-i\gamma_e, 0])^c$. In particular, $\Delta_b((\cA_n)_n) \cap \rho(\cA) \neq \emptyset$. The operator matrices $\cA_n$, $n \in \N$, and $\cA$ are diagonally dominant of order $0$, since the operators $B_n$, $n \in \N$, in the matrix representation \eqref{def:cA} are bounded. Let $P_n$ denote the projection from $L^2(\Omega)^3$ to $L^2(\Omega_n)^3$ by restriction, so that $P_n \sto I_{L^2(\Omega)^3}$ as $n \to \infty$. It is clear that $B_n = B P_n$ converges strongly to $B$, and similarly $D_n = D P_n \sto D$. Due to \cite[Thm. 3.1]{MR3694623}, to conclude that $\cA_n \gsr \cA$ it is enough to show that there exists a core $\Phi$ of $H_0(\curl, \Omega) \oplus H(\curl, \Omega)$ such that $\norma{A_n P_n u - A u} \to 0$ for all $u \in \Phi$.
This last property is satisfied by $C^\infty_c(\Omega)^3 \oplus C^\infty_c(\overline{\Omega})^3$ because $\curl_0 P_n \varphi = \curl_0 \varphi$ for all $\varphi \in C^\infty_c(\Omega)^3$ and sufficiently large $n$, and $\curl P_n \psi = P_n \curl \psi$ in $\Omega_n$ for all $\psi \in C^\infty_c(\overline{\Omega})^3$.\\ (ii) To prove $\cL_n(\omega) \gsr \cL(\omega)$ for $\omega \in (\Delta_b((\cL_n)_n) \cap \rho(\cL)) \setminus \{-i \gamma_m, -i \gamma_e \}$ it is sufficient to write \begin{multline*} (\cA_n - \omega)^{-1} = \\ \begin{pmatrix} \cL_n(\omega)^{-1} & - \cL_n(\omega)^{-1} B(-iD - \omega)^{-1} \\ (-iD - \omega)^{-1} B \cL_n(\omega)^{-1} & (-iD - \omega)^{-1} (I + B \cL_n(\omega)^{-1} B (-iD - \omega)^{-1}) \end{pmatrix} \end{multline*} hence, since $(\cA_n - \omega)^{-1} (P_n F, 0)^t \to (\cA - \omega)^{-1} (F, 0)^t $ for all $F \in L^2(\Omega)^3 \oplus L^2(\Omega)^3$, we deduce that $\cL_n(\omega)^{-1}P_n F \to \cL(\omega)^{-1}F$, $n \to \infty$. \\ (iii) Let $\omega \in \Delta_b((\cL_n)_n)\cap \Sigma$, so that there exists $n_0 \in \N$ with $\sup_{n \geq n_0} \norma{(\cL_n(\omega))^{-1}} < \infty$. The identity{\small \begin{multline} \label{eq:res_eq_LS} (\cL_n(\omega))^{-1} = \\ \begin{pmatrix} \cS_{1,n}(\omega)^{-1} & - \cS_{1,n}(\omega)^{-1} i \curl (V_m(\omega))^{-1} \\ (V_m(\omega))^{-1} i\curl_0 \cS_{1,n}(\omega)^{-1} & (V_m(\omega))^{-1} (I + \curl_0 \cS_{1,n}(\omega)^{-1} \curl (V_m(\omega))^{-1}) \end{pmatrix} \end{multline}} implies that $\norma{\cL_n(\omega)^{-1}} \geq \norma{\cS_{1,n}(\omega)^{-1}}$, $n \geq n_0$, hence $\sup_{n \geq n_0}\norma{(\cS_{1,n}(\omega))^{-1}} < \infty$; equivalently, $\omega \in \Delta_b((\cS_{1,n})_n)\cap \Sigma$. \\ Conversely, if $\omega \in \Delta_b((\cS_{1,n})_n)\cap \Sigma$ then, by definition of the region of boundedness, there exist $n_0 \in \N$ and $C > 0$ such that $\omega \in \rho(\cS_{1,n})$ for $n \geq n_0$ and $\sup_{n \geq n_0} \norma{\cS_{1,n}(\omega)^{-1}} \leq C$.
Hence, if $f \in L^2(\Omega)^3$, the equation $\cS_{1,n}(\omega) u_n = P_n f$ has a unique solution $u_n \in L^2(\Omega_n)^3$ with the uniform \emph{a priori} bound $\norma{u_n}_{L^2(\Omega_n)^3} \leq C \norma{f}_{L^2(\Omega)^3}$. This implies that \[ |\langle \Theta_m(\omega)^{-1} \curl_0 u_n, \curl_0 u_n \rangle | \leq |\langle P_n f, u_n \rangle | + |\langle V_e(\omega) u_n, u_n \rangle |, \] hence \[ c_1(\omega) \norma{\curl_0 u_n}^2 \leq \frac{\norma{f}^2}{4 \delta} + (\delta + \norma{V_e(\omega)}) \norma{u_n}^2 \leq [C^2 (\delta + \norma{V_e(\omega)}) + 1/(4 \delta)] \norma{f}^2, \] and all the constants appearing in the previous estimate are independent of $n \geq n_0$, so in particular $\sup_{n \geq n_0} \norma{\curl_0\cS_{1,n}(\omega)^{-1}} \leq C'(\omega)$, where we can set for example $C' = c_1(\omega)^{-1} [C^2 (1 + \norma{V_e(\omega)}) + 1/4]$ (corresponding to $\delta = 1$). Now we can repeat the previous estimate starting from elements $f = \curl g \in \curl L^2$, where we note that $\langle P_n \curl g, u_n \rangle = \langle g, \curl_0 u_n \rangle$ can be estimated in terms of $\curl_0 u_n$, which is uniformly bounded in terms of the datum by the previous discussion. Altogether we obtain that there exists a constant $C''$ depending on $C'$ and $\omega$ such that \[ \sup_{n \geq n_0} \norma{\curl\, \cS_{1,n}(\omega)^{-1}\, \curl_0} \leq C''. \] Now the claim of the theorem follows from \eqref{eq:res_eq_LS}, since the right-hand side therein is uniformly bounded in $n$, $n \geq n_0$. \\ (iv) Finally, $\cS_{1,n}(\omega)^{-1} P_n \sto \cS_1(\omega)^{-1}$ for all $\omega \in (\Delta_b((\cL_n)_n) \cap \Sigma) \cap \rho(\cL)$ follows by observing that for such $\omega$ part $(ii)$ implies that $\cL_n(\omega)^{-1}P_n \sto \cL(\omega)^{-1}$; therefore \eqref{eq:res_eq_LS} implies that $\cS_{1,n}(\omega)^{-1}P_n \sto \cS_1(\omega)^{-1}$ (by direct calculation on vectors $(f,0)^t$).
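Explicitly, \eqref{eq:res_eq_LS} gives, for every $f \in L^2(\Omega)^3$,
\[
\cL_n(\omega)^{-1}\begin{pmatrix} P_n f \\ 0 \end{pmatrix} = \begin{pmatrix} \cS_{1,n}(\omega)^{-1} P_n f \\ (V_m(\omega))^{-1} i\curl_0\, \cS_{1,n}(\omega)^{-1} P_n f \end{pmatrix},
\]
so the strong convergence of the left-hand side forces, in particular, the convergence of the first component.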
\end{proof} \begin{corollary} \label{cor: sigmae An} $\sigma_{\rm poll}((\cA_n)_n) \subset \sigma_e((\cA_n)_n)$. \end{corollary} \begin{proof} From \eqref{def:cA}, $\cA_n$ is seen to be $\cJ$-self-adjoint with respect to $\cJ = \diag(i , -i, i, -i) J$, with $J u = \bar{u}$ the componentwise complex conjugation. The result is then a consequence of Theorem 2.3 in \cite{MR3831156}. \end{proof} \begin{prop} \label{prop: pollution Ln} $\sigma_{\rm poll} ((\cL_n)_n) = \sigma_{\rm poll}((\cA_n)_n) \setminus \{-i \gamma_e, -i \gamma_m \}$. \end{prop} \begin{proof} \cite[Thm. 2.3.3(ii)]{TreB} implies that, for fixed $n$, $\sigma(\cL_n) = \sigma(\cA_n) \setminus \{-i \gamma_e, -i \gamma_m \}$ and similarly $\sigma(\cL) = \sigma(\cA) \setminus \{-i \gamma_e, -i \gamma_m \}$. Hence, $\sigma_{\rm poll} ((\cL_n)_n) \subset \sigma_{\rm poll}((\cA_n)_n) \setminus \{-i \gamma_e, -i \gamma_m \}$. Conversely, one may observe that if $\la_n \in \sigma(\cA_n)$, $\la_n \to \la \in \rho(\cA) \setminus \{-i \gamma_e, -i \gamma_m \}$, then for $n$ large enough $\la_n \notin \{-i \gamma_e, -i \gamma_m \}$, so $\la_n \in \sigma(\cL_n)$ and $\la_n \to \la \in \rho(\cL)$. Thus, $\la \in \sigma_{\rm poll}((\cL_n)_n)$. \end{proof} \begin{rem} The poles $\{-i \gamma_e, -i \gamma_m \}$ may or may not be in the essential spectrum of $\cA$. If $B$ is compactly supported and $A$ restricted to divergence-free vector fields has compact resolvent, then the poles belong to $\sigma_e(\cA)$, see \cite[Prop. 2.2]{MR3543766} for a proof in a similar setting. However, for the purposes of this paper we are not interested in the poles, which lie outside the domain of definition of $\cL$.
\end{rem} \begin{prop} $\sigma_e((\cA_n)_n) \setminus \{-i \gamma_e, -i \gamma_m \} = \sigma_e((\cL_n)_n) \setminus \{-i \gamma_e, -i \gamma_m \}$. \end{prop} \begin{proof} $\cA_n$ is diagonally dominant for every $n$ and the norms of the off-diagonal entries do not depend on $n$; furthermore, $\Delta_b((D_n)_n) = \C \setminus \{-i \gamma_e, -i \gamma_m \}$. The result therefore follows from \cite[Proposition 2.3.4(i),(iii)]{SThesis}. \end{proof} \begin{prop} \label{prop:sigma-app-Ln} The following identities hold. \begin{enumerate}[label=(\roman*)] \item $\sigma_e((\cL_n)_n) \cup \sigma_p(\cL) = \sigma_{\rm app}((\cL_n)_n)$; \item $(\sigma_e((\cS_{1,n})_n) \cup \sigma_p(\cS_1)) \cap \Sigma = \sigma_{\rm app}((\cS_{1,n})_n) \cap \Sigma$. \end{enumerate} \end{prop} \begin{proof} The proof is a generalisation of \cite[Prop. 2.15(ii)]{MR3831156} to families of operators.\\ Observe that the inclusion $\sigma_p(\cL) \subset \sigma_{\rm app}((\cL_n)_n)$ (and the analogous inclusion for $(\cS_{1,n})_n$) is a consequence of the gsr convergence of $\cL_n$ to $\cL$ established in Theorem \ref{thm:gsr}. Indeed, we claim that if $\cL_n(\omega_0) \gsr \cL(\omega_0)$ for some $\omega_0 \in \Delta_b((\cL_n)_n) \cap \rho(\cL)$, then for all $u \in \dom(\cL)$ there exists a sequence $u_n \in \dom(\cL_n)$ such that $\norma{u_n - u} \to 0$ and $\norma{\cL_n(\omega)u_n - \cL(\omega)u} \to 0$, $n \to \infty$, for every $\omega \in \C \setminus \{-i \gamma_e, -i \gamma_m\}$. Assuming the claim, if $\omega \in \sigma_p(\cL)$ with normalised eigenfunction $u$, then there exists an approximating sequence $(u_n)_n$ as above with $\norma{u_n} \to 1$, which after renormalisation shows that $\omega \in \sigma_{\rm app}((\cL_n)_n)$; therefore $\sigma_p(\cL) \subset \sigma_{\rm app}((\cL_n)_n)$.\\ To prove the claim, one first realises that if $\omega \in \Delta_b((\cL_n)_n) \cap \rho(\cL)$ then the sequence $u_n := \cL_n(\omega)^{-1} P_n \cL(\omega) u$ has the required properties.
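Indeed, for such $\omega$ both required convergences can be checked directly:
\[
u_n = \cL_n(\omega)^{-1} P_n \cL(\omega) u \to \cL(\omega)^{-1}\cL(\omega) u = u, \qquad \cL_n(\omega) u_n = P_n \cL(\omega) u \to \cL(\omega) u,
\]
as $n \to \infty$, where the first limit uses the strong convergence $\cL_n(\omega)^{-1}P_n \sto \cL(\omega)^{-1}$ and the second uses $P_n \sto I$.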
In the general case, one first notes that for $t < 0$ with $|t|$ sufficiently large, $\omega \in \Delta_b((\cL_n(\cdot) + i t)_n) \cap \rho(\cL(\cdot) + it)$, see the proof of Lemma \ref{lemma:bddcls}. Therefore, given $u \in \dom(\cL + it) = \dom(\cL)$, there exists a sequence $u_n \in \dom( \cL_n(\cdot) + i t) = \dom(\cL_n)$ such that $\norma{u_n - u} \to 0$ and $\norma{(\cL_n(\omega) + it) u_n - (\cL(\omega) + it) u} \to 0$, and therefore $\norma{ \cL_n(\omega) u_n - \cL(\omega) u} \to 0$, as claimed.\\ The inclusion ``$\subset$'' in $(i)$ and $(ii)$ is immediate from Definitions \ref{lim-sigma-app}, \ref{lim-sigma-ess}, and the previous observation.\\ We now prove that $\sigma_e((\cL_n)_n) \cup \sigma_p(\cL) \supset \sigma_{\rm app}((\cL_n)_n)$. Let $\omega \in \sigma_{\rm app}((\cL_n)_n)$, that is, there exists a sequence of elements $u_n \in \dom(\cL_n)$, $\norma{u_n} = 1$, $n \in \N$, such that $\norma{\cL_n(\omega)u_n} \to 0$. Since the unit ball in a Hilbert space is weakly compact, we may assume that, up to a subsequence, $u_n \rightharpoonup u$ in $L^2(\Omega)^3 \oplus L^2(\Omega)^3$. If $u = 0$, then $\omega \in \sigma_e((\cL_n)_n)$ and we are done. Assume then $u \neq 0$. Theorem \ref{thm:gsr}(ii) implies that there exists $\omega_0 \in \Delta_b((\cL_n)_n) \cap \rho(\cL)$ such that $\cL_n(\omega_0) \gsr \cL(\omega_0)$. Now, \begin{equation} \label{eq:cL_n(omega_0)} \cL_n(\omega_0)u_n = (\cL_n(\omega_0) - \cL_n(\omega))u_n + \cL_n(\omega) u_n, \end{equation} and we notice that $\cL_n(\omega_0) - \cL_n(\omega) =: \cB(\omega, \omega_0) \diag(P_n, P_n)$, where $\cB$ is a bounded $2 \times 2$ block operator matrix not depending on $n$. Applying $\cL_n(\omega_0)^{-1}$ to \eqref{eq:cL_n(omega_0)} gives \[ u_n = \cL_n(\omega_0)^{-1} \cB\,\, \diag(P_n, P_n) u_n + \eps_n, \quad \eps_n:= \cL_n(\omega_0)^{-1} \cL_n(\omega)u_n. \] Now, $\eps_n\to 0$, $n \to \infty$, because $\omega_0 \in \Delta_b((\cL_n)_n)$ and $\cL_n(\omega) u_n \to 0$, $n \to \infty$.
Moreover, the weak convergence $u_n \rightharpoonup u$, the strong convergence $\cL_n(\omega_0)^{-1}P_n \sto \cL(\omega_0)^{-1}$ and the uniqueness of the weak limit imply that \[ u = \cL(\omega_0)^{-1} \cB\, u, \] or equivalently, since $\cB = \cL(\omega_0) - \cL(\omega)$, that $\cL(\omega) u = 0$. Thus, $\omega \in \sigma_p(\cL)$.\\ The proof of $(\sigma_e((\cS_{1,n})_n) \cup \sigma_p(\cS_1)) \cap \Sigma \supset \sigma_{\rm app}((\cS_{1,n})_n) \cap \Sigma$ in $(ii)$ is analogous to the proof of the same inclusion in $(i)$ with $\cS_{1,n}(\omega)$ replacing $\cL_n(\omega)$; in place of $\cS_{1,n}(\omega_0)$ for some $\omega_0 \in \Delta_b((\cS_{1,n})_n)$ we choose $\cS_{1,n}(\omega) + i t$ for $t < 0$ with $|t|$ large enough. Notice that this is possible since $\omega \in \Delta_b((\cS_{1,n}(\cdot) + it)_n)$ for such $t$, see the proof of Lemma \ref{lemma:bddcls}. The proof is concluded. \end{proof} \begin{theorem} $\sigma_e((\cL_n)_n) \cap \Sigma = \sigma_e ((\cS_{1,n})_n) \cap \Sigma$. \end{theorem} \begin{proof} Proposition \ref{prop:sigma-app-Ln} implies \[ \sigma_e((\cL_n)_n) \cup \sigma_p(\cL) = \sigma_{\rm app}((\cL_n)_n), \quad (\sigma_e((\cS_{1,n})_n) \cup \sigma_p(\cS_1)) \cap \Sigma = \sigma_{\rm app}((\cS_{1,n})_n) \cap \Sigma. \] Now, the definitions of region of boundedness and of limiting approximate point spectrum, together with the equality $\sigma_{\rm app}((\cL_n)_n) = \sigma_{\rm app}((\cL^*_n)_n)^*$, imply that $\sigma_{\rm app}((\cL_n)_n) = \C \setminus \Delta_b((\cL_n)_n)$ and $\sigma_{\rm app}((\cS_{1,n})_n) = \C \setminus \Delta_b((\cS_{1,n})_n)$. By Thm. \ref{sigmaLS}, $\sigma_p(\cL) \cap \Sigma = \sigma_p(\cS_1) \cap \Sigma$; and as a consequence of Thm. \ref{thm:gsr}(iii), $\Delta_b((\cS_{1,n})_n) \cap \Sigma = \Delta_b((\cL_n)_n) \cap \Sigma$.
Thus, up to intersection with $\Sigma$ we have \[ \begin{split} \sigma_e((\cL_n)_n) \cup \sigma_p(\cL) &= \sigma_{\rm app}((\cL_n)_n) = \C \setminus \Delta_b((\cL_n)_n) = \C \setminus \Delta_b((\cS_{1,n})_n) \\ &= \sigma_{\rm app}((\cS_{1,n})_n) = \sigma_e((\cS_{1,n})_n) \cup \sigma_p(\cS_1).\qedhere \end{split} \] \end{proof} In the following proposition we use the notion of \emph{discrete compactness} for sequences of operators in varying Hilbert spaces. We refer to \cite[Def. 2.5]{MR3694623} and the references therein for the relevant definitions and properties. \begin{prop} \label{prop: decomp sigmaeS1n} The following equality holds: \[\sigma_e((\cS_{1,n})_n) \cap \Sigma = \left(\sigma_e((\cS_{\infty, n})_n) \cup \sigma_e((G_{n})_n)\right) \cap \Sigma,\] where $G_n$ is defined as in \eqref{B1def}. \end{prop} \begin{proof} This can be proved along the lines of \cite[Sections 7 and 8]{BFMT}. For the sake of completeness we recall here the main steps of the proof.\\ Observe that Proposition \ref{thm: compactness} implies that $M_{e,n}(\omega) = (\Theta_e(\omega) - \Theta_{e, \infty}(\omega))P^n_{\ker(\Div)}$ is a compact operator from $H(\curl, \Omega_n)$ to $L^2(\Omega_n)^3$ for every $n$. According to the decomposition of the coefficients \eqref{eq:coeffs-infty}, up to an operator vanishing uniformly in $n$, the sequence $M_{e,n}(\omega)$ is compactly supported in $\Omega_n$ for each $n$ and equals $(\Theta_e(\omega) - \Theta_{e, \infty}(\omega))\chi_{\Omega_R \cap \Omega_n}P^n_{\ker(\Div)}$, where $\Omega_R = \Omega \cap B(0,R)$ contains the compact support of $\Theta_e(\omega) - \Theta_{e, \infty}(\omega)$. This last sequence of operators is clearly discretely compact from $H(\curl, \Omega_n)$ to $L^2(\Omega_n)^3$ because of the compact embedding of $H(\curl, \Omega_R) \cap H(\Div0, \Omega_R)$ into $L^2(\Omega_R)$.
Since discretely compact perturbations do not modify the limiting essential spectrum, $\sigma_e((\cS_{1,n})_n) \cap \Sigma = \sigma_e((\cS_{1, n} + M_{e,n})_n) \cap \Sigma$. Now we note that \begin{multline*} \cS_{1, n}(\omega) + M_{e,n}(\omega) \\ = \curl \Theta_m(\omega)^{-1} \curl_0 - \frac{\Theta_{e, \infty}(\omega)}{(\omega + i \gamma_e) (\omega + i \gamma_m)}P_{\ker(\Div)} - \frac{\Theta_{e}(\omega)}{(\omega + i \gamma_e) (\omega + i \gamma_m)}P_{\nabla} \end{multline*} has a triangular block operator matrix representation with respect to the Helmholtz decomposition $\nabla H^1_0(\Omega_n) \oplus H(\Div0, \Omega_n)$. More specifically, if we define $\cS_{m,n}(\omega)$ as $\cS_m(\omega)$ with $\Omega_n$ replacing $\Omega$ in the domain definition, we have \[ \cS_{1,n}(\omega) + M_{e,n}(\omega) \simeq \cT_n(\omega) = \begin{pmatrix} -P^n_\nabla V_e(\omega)P^n_\nabla & 0 \\ -P^n_{\ker(\Div)}V_e(\omega)P^n_\nabla & \cS_{m,n}(\omega) \end{pmatrix}. \] Now since $\cT_n$ is a sequence of triangular block operator matrices with bounded off-diagonal entries and with $J$-self-adjoint diagonal entries, Theorem \ref{thm: ess spec} implies that \[ \sigma_e((\cT_n)_n) \cap \Sigma = \left(\sigma_e((-P^n_\nabla V_e(\omega)P^n_\nabla)_n) \cup \sigma_e((\cS_{m,n})_n)\right) \cap \Sigma. \] Now \cite[Proposition 7.3]{BFMT} implies that $\sigma_e((\cS_{m,n})_n) = \sigma_e((\cS_{\infty,n})_n)$. We sketch the proof here; it is modelled on the proof of Proposition \ref{thm: difference res}. The idea is to establish that for a suitably chosen $\omega \in \Sigma$ the difference $\cK_n(\omega) = \cS_{m,n}(\omega)^{-1} - \cS_{\infty,n}(\omega)^{-1}$ is discretely compact and that $\cK_n(\omega)^*P_n$ is strongly convergent. Then the equality of the limiting essential spectra follows from \cite[Thm. 2.12(ii)]{MR3831156}.
The strong convergence \[\cK_n(\omega)^*P_n = \cS_{m,n}(\omega)^{-*}P_n - \cS_{\infty,n}(\omega)^{-*}P_n \sto \cS_m(\omega)^{-*} - \cS_{\infty}(\omega)^{-*}\] for $\omega \in \Delta_b((\cK_n)_n) \cap \rho(\cS_m) \cap \rho(\cS_\infty)$ can be proved along the lines of Theorem \ref{thm:gsr}(iv). \\ It remains to prove that $(\cK_n(\omega))_n$ is discretely compact. Arguing as in the proof of Prop. \ref{thm: difference res}, it may be shown that \begin{equation} \label{eq:cK_n} \cK_n(\omega) = (\curl_0 (\cC_{m,n}(\omega) - \overline{z_{\omega}})^{-1})^* (\Theta_{m,\infty}(\omega)^{-1} - \Theta_{m}(\omega)^{-1}) \curl_0 (\cC_{\infty,n}(\omega) - z_{\omega})^{-1}. \end{equation} As a consequence of the proof of Theorem \ref{thm:gsr}(iv), \[ \begin{aligned} (\curl_0 (\cC_{m,n}(\omega) - \overline{z_{\omega}})^{-1})^*P_n &\sto (\curl_0 (\cC_{m}(\omega) - \overline{z_{\omega}})^{-1})^* \\ \curl_0 (\cC_{\infty,n}(\omega) - z_{\omega})^{-1}P_n &\sto \curl_0 (\cC_{\infty}(\omega) - z_{\omega})^{-1}, \end{aligned} \] for suitably chosen $\omega$. Moreover, due to Remark \ref{rem: unif_bound_Cinfty} there exists $C > 0$ such that, for $\omega \notin \overline{W(\cC_{\infty})}$, \[ \sup_{n \in \N}\norma{\curl_0 (\cC_{\infty,n}(\omega) - z_{\omega})^{-1}}_{\cB(H(\Div0,\Omega_n); H(\curl, \Omega_n))} \leq C, \] hence $\curl_0 (\cC_{\infty,n}(\omega) - z_{\omega})^{-1} u_n$ is uniformly bounded in $H(\curl, \Omega_n)$ for every sequence $u_n \in L^2(\Omega_n)^3$, $\norma{u_n} \leq 1$. Therefore, by \eqref{eq:cK_n}, the discrete compactness of $\cK_n(\omega)$ boils down to the discrete compactness of $(\Theta_{m,\infty}(\omega)^{-1} - \Theta_{m}(\omega)^{-1})P^n_{\ker(\Div)}$ from $H(\curl, \Omega_n)$ to $L^2(\Omega_n)^3$. The proof of this last property is identical to the proof of the discrete compactness of the sequence $(\Theta_{e,\infty}(\omega) - \Theta_{e}(\omega))P^n_{\ker(\Div)}$, which was established above.
Altogether we have \[ \sigma_e((\cT_n)_n) \cap \Sigma = \left(\sigma_e((-P^n_\nabla V_e(\omega)P^n_\nabla)_n) \cup \sigma_e((\cS_{\infty, n})_n)\right) \cap \Sigma, \] and the result follows by recalling that $G_n(\omega) = -P^n_\nabla V_e(\omega)P^n_\nabla$. \end{proof} We recall the following standard result, a proof of which can be found in \cite[Lemma 7.4]{BFMT}. \begin{lemma} \label{lemma: core} Let $n \in \N$. The closure of $C^{\infty}_c(\Omega_{n} )^3 \cap H(\Div 0, \Omega_{n})$ with respect to the $H(\curl, \Omega_{n})$-norm is $H_0(\curl, \Omega_{n}) \cap H(\Div 0, \Omega_{n})$. \end{lemma} \begin{theorem} \label{thm: final} The following enclosures hold: \begin{equation}\label{mme} \sigma_{\rm poll}((\cL_n)_n) \cap \Sigma\subset \sigma_e((\cL_n)_n) \cap \Sigma \subset \big( W_e(\cS_{\infty}) \cup \sigma_e(G) \big) \cap \Sigma, \end{equation} and therefore $(\sigma_{\rm poll}((\cL_n)_n) \cap \Sigma) \subset (W_e(\cS_{\infty}) \cap \Sigma)$. For every isolated $\omega \in (\sigma_p(\cL) \cap \Sigma)$ outside $W_e(\cS_\infty) \cup \sigma_e(G)$ there exists a sequence $\omega_n \in \sigma(\cL_n)$, $n\in\N$, such that $\omega_n \to \omega$ as $n \to \infty$. \end{theorem} \begin{proof} The enclosure of spectral pollution in the limiting essential spectrum follows from Prop. \ref{prop: pollution Ln} and Corollary \ref{cor: sigmae An}. For the enclosure (\ref{mme}) we argue as in \cite[Theorem 7.5]{BFMT}. If $\omega \!\in\! \sigma_e((\cS_{\infty,n})_{n\in\N})$, by definition there exist $w_n \!\in\! \dom \cS_{\infty,n}(\omega) \subset H_0(\curl, \Omega_n) \cap H(\Div0, \Omega_n)$, $\norma{w_n} = 1$, $n\in\N$, such that $w_n \rightharpoonup 0$ and $\cS_{\infty,n}(\omega)w_n \to 0$ as $n\to\infty$.
Taking the scalar product with $w_n$, we find that \begin{multline*} \langle \cS_{\infty,n}(\omega)w_n,w_n\rangle = \Theta_{m,\infty}(\omega)^{-1} \norma{ \curl_{0} w_n}^2 - \frac{\Theta_{e,\infty}(\omega)}{(\omega + i \gamma_e)(\omega + i \gamma_m)} \norma{w_n}^2 \to 0 \end{multline*} as $n\to\infty$. By Lemma \ref{lemma: core}, for each $n \in \N$ there exists $v_n \in C^\infty_c(\Omega_n)^3 \cap H(\Div 0, \Omega_n)$ with $\norma{v_n\!-\!w_n}^2 \!\leq\! 1/n$, $\norma{\curl(v_n \!-\! w_n)}^2 \!\leq\! 1/n$. Let $v_n^{0} \in H_0(\curl, \Omega) \cap H(\Div 0, \Omega)$ be the extension of $v_n$ to $\Omega$ by zero for $n\in\N$. \vspace{-1mm} Then \begin{align*} & \left|\Theta_{m,\infty}(\omega)^{-1} \norma{\curl v_n^{0}}^2 \!\!- \!\frac{\Theta_{e,\infty}(\omega)}{(\omega + i \gamma_e)(\omega + i \gamma_m)}\norma{v_n^{0}}^2\right| \\ &\!\leq\! \left| \Theta_{m,\infty}(\omega)^{-1}\norma{\curl w_n}^2 \!\!-\! \frac{\Theta_{e,\infty}(\omega)}{(\omega + i \gamma_e)(\omega + i \gamma_m)} \norma{w_n}^2 \right| \\ &\hspace{3cm}\!+\! \frac{1}{n} \, \bigg(|\Theta_{m,\infty}(\omega)^{-1}| \!+\! \left| \frac{\Theta_{e,\infty}(\omega)}{(\omega + i \gamma_e)(\omega + i \gamma_m)} \right| \bigg) \to 0 \end{align*} as $n\to\infty$. Since $\norma{v_n^{0}} \to 1$ as $n \to \infty$, upon renormalisation of the elements $v_n^{0}$, we obtain $\omega \in W_e(\cS_{\infty})$.\\ Next, we prove the inclusion $\sigma_e((P^n_\nabla G_n(\cdot)|_{\nabla \dot H^1_0(\Omega_n)})_{n\in\N}) \!\subset\! \sigma_e(G)$. If $\omega$ lies in $\sigma_e((P^n_\nabla G_n(\cdot) |_{\nabla \dot H^1_0(\Omega_n)})_{n\in\N})$, there exist $u_n \!\in\! \dot H^1_0(\Omega_n)$, $\norma{\nabla u_n} \!=\! 1$, $n\!\in\!\N$, such that $\nabla u_n \!\rightharpoonup\! 0$ \vspace{-1mm} and \[ \norma{P_{\nabla \dot H^1_0(\Omega_n)} \Theta_e(\omega)^{-1} \nabla u_n} \to 0, \quad n \to \infty. \] Let $u_n^{0} \in \dot H^1_0(\Omega)$ be the extension of $u_n \in \dot H^1_0(\Omega_n)$ to $\Omega$ by zero for $n\in\N$.
By standard properties of Sobolev spaces, $\nabla u_n^{0} = (\nabla u_n)^{0}$. Hence the sequence ${(u_n^{0})_{n\in\N}} \subset \dot H^1_0(\Omega)$ is such that $\norma{\nabla u_n^{0}} = 1$, $n\in\N$, $\nabla u_n^{0} \rightharpoonup 0$ and \[ \norma{P_{\nabla \dot H^1_0(\Omega_n)} \Theta_e(\omega)^{-1} \nabla u_n^{0}} \to 0, \quad n \to \infty. \] Now the claim follows if we observe that $P_{\nabla \dot H^1_0(\Omega)} f = P_{\nabla \dot H^1_0(\Omega_n)} f$ for all $f \in L^2(\Omega)^3$ with $\supp f \subset \Omega_n$.\\ Finally, we consider the approximation of isolated eigenvalues which lie outside $W_e(\cS_\infty) \cup \sigma_e(G)$ but inside $\Sigma$. Note first that $\sigma(\cA_n) \setminus \{-i \gamma_e, -i \gamma_m\} = \sigma(\cL_n) \setminus \{- i \gamma_e, -i \gamma_m \}$, $n \in \N$. Therefore \cite[Theorem 2.3]{MR3831156}, applied to the sequence $(\cA_n)_n$ approximating $\cA$, yields that for every isolated $\omega \in \sigma(\cA)$ outside $\sigma_e((\cA_n)_n) \cup \sigma_e((\cA^*_n)_n)^* = \sigma_e((\cA_n)_n)$ there exists a sequence $\omega_n \in \sigma(\cA_n)$, $n \in \N$, with $\omega_n \to \omega$. Since we have already proved that $\sigma_e((\cA_n)_n) \setminus \{-i \gamma_e, -i \gamma_m\} = \sigma_e((\cL_n)_n) \setminus \{-i \gamma_e, -i \gamma_m\}$ and that $\sigma_e((\cL_n)_n) \cap \Sigma \subset (W_e(\cS_\infty) \cup \sigma_e(G)) \cap \Sigma$, we deduce that every isolated point $\omega \in \sigma(\cA) \cap \Sigma = \sigma(\cL) \cap \Sigma$ outside $W_e(\cS_\infty) \cup \sigma_e(G)$ can be approximated by spectral points $\omega_n \in \sigma(\cA_n)\setminus \{-i \gamma_e, -i \gamma_m\} = \sigma(\cL_n)$, concluding the proof.
\end{proof} \section{Example}\label{sec:example} We consider the Drude-Lorentz model of a dispersive metamaterial in a cuboid $K =(0,1) \times (0,L_2) \times (0,L_3)$, embedded in an infinite waveguide $\Omega = (0, +\infty) \times (0, L_2) \times (0, L_3)$, for some $L_2, L_3 > 0$; the region $x_1>1$ in $\Omega$ is assumed to be a vacuum. We model the discontinuity between the vacuum and the metamaterial as a discontinuity in the Drude-Lorentz parameters $\theta_e^2$ and $\theta_m^2$; note that when $\theta_e = \theta_m = 0$ we have the standard time-harmonic Maxwell system in the vacuum with permeability and permittivity constant and equal to 1. Specifically, we set \[ \theta_e^2(x) = \alpha_e \chi_K(x), \quad \theta_m^2(x) = \alpha_m \chi_K(x), \quad \gamma_e = t > 0, \: \gamma_m = 1, \] where $\alpha_e, \alpha_m$ are positive constants. This leads to the coupled pair of operators \[ \cL_1(\omega) = \begin{pmatrix} - \omega & i \curl \\ -i \curl_0 & - \omega \end{pmatrix} \qquad \cL_2(\omega) = \begin{pmatrix} - \omega + \frac{\alpha_e}{\omega + it} & i \curl \\ - i \curl_0 & - \omega + \frac{\alpha_m}{\omega + i} \end{pmatrix} \] where $\omega \in \C \setminus \{-i, -i t\}$, $\cL_i(\omega)$ acts in $L^2(\Omega_i)^3 \oplus L^2(\Omega_i)^3$, $i=1,2$, and \[\Omega_1 = (1, +\infty) \times (0,L_2) \times (0,L_3), \quad \Omega_2 = K.\] According to our results it is convenient to consider the associated first Schur complements, from which we deduce that if $(E,H)$ is an eigenfunction with eigenvalue $\omega$ then \[ \begin{cases} \curl \curl_0 E - \omega^2 E = 0 \quad &\textup{in $\Omega_1$,}\\ \nu \times E = 0 \quad &\textup{on $\p \Omega_1 \cap \p \Omega$,} \end{cases} \] \[ \begin{cases} \curl \curl_0 E - f(\omega) E = 0 \quad &\textup{in $\Omega_2$,} \\ \nu \times E = 0 \quad &\textup{on $\p \Omega_2 \cap \p \Omega$,} \end{cases} \] in which $f(\omega)$ is defined by \[ f(\omega) = \frac{(\omega^2 + i \omega t - \alpha_e)(\omega^2 + i \omega - \alpha_m)}{(\omega
+ i) (\omega + it)} \] for all $\omega \notin \{-i, - it\}$. Note that the point $\omega = 0$ and the roots of $(\omega^2 + i \omega t - \alpha_e)(\omega^2 + i \omega - \alpha_m)$ will be in the essential spectrum, since any gradient field compactly supported in $\Omega_{i}$, $i=1,2$, will solve the corresponding system. Define, for $n=(n_2,n_3)\in\N^2$, \[ \la^1_n(\omega) = \sqrt{\frac{\pi^2 n_2^2}{L_2^2} + \frac{\pi^2 n_3^2}{L_3^2} - f(\omega)}, \quad\quad \la^2_n(\omega) = \sqrt{\frac{\pi^2 n_2^2}{L_2^2} + \frac{\pi^2 n_3^2}{L_3^2} - \omega^2}. \] The compatibility condition $\nu \times \curl E|_{x_1 = 1^-} = -\nu \times \curl E|_{x_1 = 1^+}$ at the interface between the metamaterial and the vacuum implies that every eigenvalue $\omega$ must satisfy, for some $n \in \N^2$, the equation \begin{equation}\label{compat1} \la^1_n(\omega) \coth(\la^1_n(\omega)) + \la^2_n(\omega) = 0. \end{equation} Consider now the truncated domains $\Omega_X = (0,X) \times (0, L_2) \times (0, L_3)$. The compatibility condition \eqref{compat1} now becomes \begin{equation}\label{compat2} \la^1_n(\omega) \coth(\la^1_n(\omega)) + \la^2_n(\omega) \coth (\la^2_n(\omega) (X-1)) = 0. \end{equation} Equations \eqref{compat1} and \eqref{compat2} can be solved with a standard computational engine. For the computations, we set $t=4$, $\alpha_e = 400$, $\alpha_m = 10$. \begin{figure}[tbh] \centering \includegraphics[width=0.6\textwidth]{plot_DL_full_accurate.pdf} \caption{Spectrum of the Drude-Lorentz model in the waveguide $\Omega = (0, +\infty) \times (0, 1) \times (0, \pi)$. The eigenvalues are in blue, the essential spectrum in red, the poles in black, and the spectral enclosure in green.} \label{fig:full_waveguide} \end{figure} \begin{figure}[tbh] \centering \includegraphics[width=0.6\textwidth]{plot_DL_finitedomain_newcol_acc_v2.pdf} \caption{\label{fig:truncated_waveguide}Spectrum of the Drude-Lorentz model in the truncated waveguide $\Omega_X = (0, X) \times (0, 1) \times (0, \pi)$, with $X=25$.
Accumulation of eigenvalues to the real axis is clearly visible.} \end{figure} For this example, Theorem \ref{thm: final} implies that spectral pollution can only happen in $W_e(\cS_\infty)$. Since $\omega(\omega + i \gamma_m)\cS_\infty(\omega) = \curl \curl_0 - \omega^2$ acting on divergence-free vector fields, $$W_e(\cS_\infty) = -({\rm conv}(\sigma_e(\curl\curl_0)))^{1/2} \cup ({\rm conv}(\sigma_e(\curl\curl_0)))^{1/2}.$$ Due to the divergence-free condition, $\curl \curl_0 = - \Delta$ as differential expressions. We can now perform a standard principal symbol analysis to obtain $\sigma_e(\curl\curl_0) = [(\pi/\max\{L_2, L_3\})^2, + \infty)$, and hence \[ \sigma_e(\cS_\infty) = W_e(\cS_\infty) = \left(-\infty,\frac{-\pi}{\max\{L_2, L_3\}}\right]\cup\left[\frac{\pi}{\max\{L_2, L_3\}},+\infty\right). \] Regarding $\sigma_e(G)$, let us set $(\omega^2 + \omega i \gamma_e)/ \alpha_e =: z$ and let $\omega \in \sigma_e(G)$ with associated Weyl sequence $\nabla \varphi_n \rightharpoonup 0$ in $L^2(\Omega)^3$, $\varphi_n \in \dot{H}^1_0(\Omega)$, $n \in \N$. We then have $-P_\nabla (z - \chi_K) \nabla \varphi_n \to 0$ in $L^2(\Omega)^3$ if and only if $- \nabla (-\Delta_{\dot{H}^1_0})^{-1} \Div (z - \chi_K) \nabla \varphi_n \to 0$ in $L^2(\Omega)^3$, where $-\Delta_{\dot{H}^1_0}$ is the Dirichlet Laplacian mapping $\dot{H}^1_0(\Omega)$ to its dual $\dot{H}^{-1}(\Omega)$. Now, note that $- \nabla (-\Delta_{\dot{H}^1_0})^{-1} \Div (z - \chi_K) \nabla \varphi_n \to 0$ in $L^2(\Omega)^3$ if and only if $- \Div (z - \chi_K) \nabla \varphi_n \to 0$ in $\dot{H}^{-1}(\Omega)$; the `only if' part follows immediately by applying $\nabla (-\Delta_{\dot{H}^1_0})^{-1}$, while the `if' part follows from the definition of $\dot{H}^{-1}(\Omega)$. Therefore $\sigma_e(G)$ is completely determined by $\sigma_e(- \Div (\cdot - \chi_K) \nabla)$, where for every $z \in \C$, $- \Div (z - \chi_K) \nabla$ is understood as an operator from $\dot{H}^1_0(\Omega)$ to $\dot{H}^{-1}(\Omega)$.
For smooth boundaries, the problem of finding the essential spectrum of such $\Div (p(\cdot) - \chi_K) \nabla$ pencils has been recently investigated in \cite{MR4041099}. It is not too difficult to prove that the values $z = 0$ and $z = 1$ correspond to points of the essential spectrum of $G$; they are the solutions of the two quadratic equations $\omega^2 + \omega i \gamma_e = 0$ and $\omega^2 + \omega i \gamma_e - \alpha_e = 0$, respectively. However, there are further points in the essential spectrum corresponding to $z = 1/2$. In total, therefore, $\sigma_e(G)$ consists of the six points \[ \sigma_e(G) = \left\{0,-i\gamma_e,-i\frac{\gamma_e}{2}\pm\sqrt{\alpha_e-\frac{\gamma_e^2}{4}},-i\frac{\gamma_e}{2}\pm\sqrt{\frac{\alpha_e}{2}-\frac{\gamma_e^2}{4}}\right\}; \] for the values used in the numerical experiments, namely $\alpha_e = 400$ and $\gamma_e = 4$, only $0$ and $-i\gamma_e$ are purely imaginary. The four points of $\sigma_e(G)$ lying off the imaginary axis are marked in red in Fig. \ref{fig:full_waveguide}. Moreover, we claim that the eigenvalues of $\cL$ (in blue in Figure \ref{fig:full_waveguide}) are isolated (and of finite geometric multiplicity), and therefore Theorem \ref{thm: final} implies that they are approximated without spectral pollution via domain truncation. For the claim, note that $\sigma_{e1}(\cL) = \sigma_{e2}(\cL)$ due to the $\cJ$-self-adjointness of $\cL$. Also, it was proved above that $\sigma_{e1}(\cL)$ is contained in the union of two real half-lines and six isolated points. Therefore, $\Delta_{e1}(\cL) := \C \setminus \sigma_{e1}(\cL)$ has only one connected component, which has non-trivial intersection with $\rho(\cL)$. According to the notation of \cite[Ch. IX]{EE}, $\Delta_{e1}(\cL) = \Delta_{e5}(\cL)$, where $\Delta_{e5}(\cL) = \C \setminus \sigma_{e5}(\cL)$; \cite[Theorem 1.5]{EE} now implies that any $\omega \in \sigma(\cL) \setminus \sigma_{e5}(\cL)$ is an isolated eigenvalue (of finite geometric multiplicity). The claim is proved.
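Returning to $\sigma_e(G)$: for the parameter values $\alpha_e = 400$ and $\gamma_e = 4$ used in the computations, the six points evaluate explicitly to
\[
\sigma_e(G) = \left\{0, \; -4i, \; -2i \pm \sqrt{396}, \; -2i \pm 14 \right\},
\]
since $\alpha_e - \gamma_e^2/4 = 396$ and $\alpha_e/2 - \gamma_e^2/4 = 196 = 14^2$.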
\vspace{3mm} \noindent {\small {\bf Acknowledgements.} The authors are thankful for the support of the UK Engineering and Physical Sciences Research Council through grant EP/T000902/1, \textit{`A new paradigm for spectral localisation of operator pencils and analytic operator-valued functions'}. } \bibliographystyle{abbrv} \bibliography{Maxbib} \end{document}