\begin{document} \markboth{V. Millot \& A. Pisante} {Symmetry of local minimizers for the three dimensional Ginzburg-Landau functional} \title{SYMMETRY OF LOCAL MINIMIZERS FOR THE THREE DIMENSIONAL GINZBURG-LANDAU FUNCTIONAL} \author{VINCENT MILLOT} \address{D\'epartement de Math\'ematiques\\ Universit\'e de Cergy-Pontoise\\ 2 avenue Adolphe Chauvin\\ 95302 Cergy-Pontoise cedex, France\\ \emph{\tt{vmillot@math.u-cergy.fr}}} \author{ADRIANO PISANTE} \address{Department of Mathematics\\ University of Rome `La Sapienza'\\ P.le Aldo Moro 5, 00185 Roma, Italy\\ \emph{\tt{pisante@mat.uniroma1.it}}} \maketitle \begin{abstract} {\bf Abstract.} We classify nonconstant entire local minimizers of the standard Ginzburg-Landau functional for maps in ${H}^{1}_{\rm{loc}}(\mathbb{R}^3;\mathbb{R}^3)$ satisfying a natural energy bound. Up to translations and rotations, such solutions of the Ginzburg-Landau system are given by an explicit solution equivariant under the action of the orthogonal group. \end{abstract} \keywords{Ginzburg-Landau equation, harmonic maps, local minimizers.} \ccode{Mathematics Subject Classification 2000: 35J50, 58E20, 58J70.} \section{Introduction} Symmetry results for nonlinear elliptic PDEs are difficult and usually rely on a clever use of the maximum principle, as in Serrin's celebrated moving planes method, or on rearrangement techniques such as the Schwarz symmetrization (see, {\it e.g.}, \cite{B2} for a survey). In the case of systems the situation is more involved, since there are no general tools for proving this kind of result. In this paper we investigate symmetry properties of maps $u:\mathbb{R}^3 \to \mathbb{R}^3$ which are entire (smooth) solutions of the system \begin{equation} \label{GL} \Delta u + u (1-|u|^2)=0 \end{equation} possibly subject to the condition at infinity \begin{equation} \label{modtoone} |u(x)| \to 1 \quad \hbox{as} \quad |x|\to +\infty \, . 
\end{equation} The system \req{GL} is naturally associated with the energy functional \begin{equation} \label{GLenergy} E(v, \Omega):= \int_{\Omega} \bigg( \frac12 |\nabla v|^2 +\frac14 (1-|v|^2)^2 \bigg)dx \end{equation} defined for $v\in H^1_{\rm loc}(\R^3;\R^3)$ and a bounded open set $\Omega\subset \R^3$. Indeed, if $u\in H^1_{\rm loc}(\R^3;\R^3)$ is a critical point of $E(\cdot,\Omega)$ for every $\Omega$, then $u$ is a weak solution of \req{GL} and thus a classical solution according to the standard regularity theory for elliptic equations. In addition, any weak solution $u$ of \req{GL} satisfies the natural bound $|u| \leq 1$ in the entire space, see \cite[Proposition 1.9]{F}. Here the ``boundary condition'' \req{modtoone} is added to rule out solutions with values in a lower dimensional Euclidean space, like the scalar valued solutions relevant for the De Giorgi conjecture (see, {\it e.g.}, \cite{AC}), or the explicit vortex solutions of \cite{HH} (see also \cite{BBH}) arising in the 2D Ginzburg-Landau model. More precisely, under the assumption \req{modtoone} the map $u$ has a well defined topological degree at infinity given by $${\rm deg}_\infty u := {\rm deg} \bigg(\frac{u}{|u|}, \partial B_R \bigg)$$ whenever $R$ is large enough, and we are interested in solutions satisfying ${\rm deg}_\infty u \neq 0$. A special symmetric solution $U$ to \req{GL}-\req{modtoone} with ${\rm deg}_\infty U=1$ has been constructed in \cite{AF} and \cite{G} in the form \begin{equation} \label{GLsolutions} U(x)=\frac{x}{|x|} f(|x|) \, , \end{equation} for a unique function $f$ vanishing at zero and increasing to one at infinity. Taking into account the obvious invariance properties of \req{GL} and \req{GLenergy}, infinitely many solutions can be obtained from \req{GLsolutions} by translations on the domain and orthogonal transformations on the image. In addition, these solutions satisfy $R^{-1}E(u,B_R) \to 4\pi$ as $R \to +\infty$. 
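The degree on a large sphere can also be evaluated numerically via the Kronecker integral $\deg = \frac{1}{4\pi}\int_{\mathbb{S}^2} w\cdot(\partial_\theta w\times\partial_\varphi w)\,d\theta\,d\varphi$. Below is a minimal sketch, not from the paper; the midpoint grid, the resolution $n$, and the finite-difference derivatives are choices of this illustration.

```python
import numpy as np

def degree(w, n=200):
    """Numerical degree of a map w : S^2 -> S^2 via the Kronecker integral
    deg = (1/4pi) int w . (dw/dtheta x dw/dphi) dtheta dphi."""
    th = (np.arange(n) + 0.5) * np.pi / n        # midpoint grid in theta, poles avoided
    ph = (np.arange(n) + 0.5) * 2 * np.pi / n    # midpoint grid in phi
    T, P = np.meshgrid(th, ph, indexing="ij")
    W = w(T, P)                                  # array of shape (3, n, n)
    Wt = np.gradient(W, th, axis=1)              # finite-difference derivatives
    Wp = np.gradient(W, ph, axis=2)
    jac = np.einsum("ijk,ijk->jk", W, np.cross(Wt, Wp, axis=0))
    return jac.sum() * (np.pi / n) * (2 * np.pi / n) / (4 * np.pi)

def identity(theta, phi):                        # the boundary value x/|x| on S^2
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

print(round(degree(identity)))                   # prints 1, the degree of U at infinity
```

For the identity map the integrand reduces to $\sin\theta$, so the quadrature recovers $4\pi/4\pi=1$; the antipodal map $-{\rm id}$ gives $-1$, and a constant map gives $0$.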
It is easy to check that $U$ as in \req{GLsolutions} is the unique solution $u$ of \req{GL}-\req{modtoone} such that $u^{-1}(\{0\})=\{ 0\}$, ${\rm deg}_\infty u=1$ and $u$ is $O(3)$-equivariant, {\it i.e.}, $\displaystyle{u(Tx)=Tu(x)}$ for all $x \in \mathbb{R}^3$ and for all $T \in O(3)$ (see Remark \ref{equivariance}). In addition $u=U$ satisfies $|u(x)|=1+\mathcal{O}(|x|^{-2})$ as $|x|\to+\infty$. \vskip5pt In \cite{B2}, H. Brezis has formulated the following problem: \begin{itemize} \item[] {\it Is every solution to \req{GL} satisfying \req{modtoone} (possibly with a ``good'' rate of convergence) and ${\rm deg}_\infty u= \pm 1 $ of the form \req{GLsolutions} (up to a translation on the domain and an orthogonal transformation on the image)?} \end{itemize} \noindent In this paper we investigate this problem focusing on local minimizers of the energy in the following sense. \begin{definition} Let $u\in H^1_{\rm loc}(\R^3;\R^3)$. We say that $u$ is a local minimizer of $E(\cdot)$ if \begin{equation} \label{minimality} E(u,\Omega) \leq E(v,\Omega) \end{equation} for any bounded open set $\Omega\subset\R^3$ and any $v\in H^1_{\rm loc}(\R^3;\R^3)$ satisfying $v-u\in H^1_0(\Omega; \mathbb{R}^3)$. \end{definition} Local minimizers are obviously smooth entire solutions of \eqref{GL}, but it is not clear whether nonconstant local minimizers exist, nor whether the solutions obtained from \req{GLsolutions} are locally minimizing. In the case of maps from the plane into itself, the analogous problems are important in the study of the asymptotic behaviour of minimizers of the 2D Ginzburg-Landau energy near their vortices, the explicit solutions of the form \req{GLsolutions} giving the asymptotic profile of the minimizers in the vortex cores. 
Both these questions were essentially solved affirmatively in \cite{M1,M2,Sa} (see also \cite{R} for the more difficult gauge-dependent problem, {\it i.e.}, in the presence of a magnetic field) but the complete classification of entire solutions to \req{GL}-\req{modtoone}, even in the 2D case, remains open. \vskip5pt The first result of this paper concerns the existence of nonconstant local minimizers. \begin{theorem} \label{existence} There exists a smooth nonconstant solution $u:\R^3\to\R^3$ of (\ref{GL})-(\ref{modtoone}) which is a local minimizer of $E(\cdot)$. In addition, $u(0)=0$, ${\rm deg}_\infty u=1$ and $R^{-1}E(u,B_R) \to 4\pi$ as $R \to +\infty$. \end{theorem} The construction of a nonconstant local minimizer relies on a careful analysis of the vorticity set for solutions $u_\lambda$ to \begin{equation} \label{GLball} (P_\lambda) \quad \begin{cases} \ds \Delta u+\lambda^2u(1-|u|^2)=0 & \quad \hbox{in} \, \, B_1\,, \\ u={\rm Id} & \quad \hbox{on} \, \, \partial B_1 \,, \end{cases}\quad\lambda >0\,, \end{equation} which are absolute minimizers of the Ginzburg-Landau functional $E_\lambda(u,B_1)$ on $H^1_{\rm{Id}}(B_1;\mathbb{R}^3)$ where $$E_\lambda(u,\Omega):=\int_{\Omega}e_\lambda(u)dx \quad\text{with}\quad e_{\lambda}(u):=\frac12 |\nabla u|^2+\frac{\lambda^2}{4}(1-|u|^2)^2\,.$$ Up to a translation, we will obtain a locally minimizing solution to \eqref{GL} as a limit of $u_{\lambda_n}(x/\lambda_n)$ for some sequence $\lambda_n \to +\infty$. \vskip5pt Like the smooth entire solutions of \req{GL}, critical points of the energy functional $E_{\lambda}(\cdot,\Omega)$ satisfy a fundamental monotonicity identity (see \cite{Sc}, \cite{LW2}). \begin{lemma}[Monotonicity Formula]\label{monform} Assume that $u:\Omega\to\R^3$ solves $\Delta u+\lambda^2 u(1-|u|^2)=0$ in some open set $\Omega\subset \R^3$ and $\lambda>0$. 
Then, \begin{multline} \label{monotonicity} \frac{1}{R}\,E_\lambda(u,B_{R}(x_0)) =\frac{1}{r}\,E_\lambda(u,B_{r}(x_0)) +\\ +\int_{B_{R}(x_0)\setminus B_{r}(x_0)}\frac{1}{|x-x_0|} \bigg| \frac{\partial u}{\partial |x-x_0|}\bigg|^2 dx + \frac{\lambda^2}{2}\int_{r}^{R}\frac{1}{t^2} \int_{B_t(x_0)}(1-|u|^2)^2dx\,dt \, , \end{multline} for any $x_0 \in \Omega$ and any $0<r \leq R\leq {\rm dist}(x_0,\partial \Omega)$. \end{lemma} An entire solution $u$ to \eqref{GL} for which the left hand side of \req{monotonicity} (with $\lambda=1$) is bounded, {\it i.e.}, \begin{equation}\label{lingro} \sup_{R>0}R^{-1}E(u,B_R)<+\infty\,, \end{equation} can be studied near infinity through a ``blow-down'' analysis. More precisely, for each $R>0$ we introduce the scaled map $u_R$ defined by \begin{equation}\label{defscmap} u_R(x):=u(Rx)\,, \end{equation} which is a smooth entire solution of \begin{equation}\label{GLresc} \Delta u_R+R^2 u_R(1-|u_R|^2)=0 \, . \end{equation} Whenever $E(u,B_R)$ grows at most linearly with $R$, $E_R(u_R,\Omega)$ is equibounded and thus $\{u_R\}_{R>0}$ is bounded in $H^1_{\rm{loc}}(\R^3;\R^3)$. Any weak limit $u_\infty:\R^3\to\R^3$ of $\{u_R\}_{R>0}$ as $R\to+\infty$ is called a tangent map to $u$ at infinity, and the potential term in the energy forces $u_\infty$ to take values in $\mathbb{S}^2$. Moreover (see \cite{LW2}), $u_\infty$ turns out to be harmonic and positively $0$-homogeneous, {\it i.e.}, $u_\infty(x)=\omega(x/|x|)$ for some harmonic map $\omega:\mathbb{S}^2\to \mathbb{S}^2$, and $u_\infty$ is a solution or a critical point (among $\mathbb{S}^2$-valued maps) of \[ \Delta v+v|\nabla v|^2=0 \, , \qquad E_\infty(v,\Omega)= \int_{\Omega} \frac12 |\nabla v|^2 dx \ , \] respectively. This is readily the case for the equivariant solution \req{GLsolutions}, where $U_R(x) \to x/|x|$ strongly in $H^1_{\rm loc}(\R^3;\R^3)$ as $R \to+ \infty$. 
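The radial profile of the equivariant solution, solving $f''+\frac{2}{r}f'-\frac{2}{r^2}f+f(1-f^2)=0$ with $f(0)=0$ and $f(+\infty)=1$, can be computed numerically by shooting on $a=f'(0)$: trajectories with $a$ too large cross $1$, while those with $a$ too small turn back below $1$. The following sketch is an illustration only; the step size, the bracket $[0,2]$, the truncation radius, and the stopping rules are assumptions of this illustration, not taken from the paper.

```python
def shoot(a, r_max=20.0, h=0.01):
    """Integrate f'' + (2/r) f' - (2/r^2) f + f (1 - f^2) = 0 by classical RK4,
    starting from the regular behaviour f ~ a r near the origin."""
    def F(r, f, g):                       # state y = (f, f'); returns y'
        return g, -2.0 * g / r + 2.0 * f / r**2 - f * (1.0 - f * f)
    r, f, g = 1e-6, a * 1e-6, a
    rs, fs = [r], [f]
    while r < r_max:
        k1f, k1g = F(r, f, g)
        k2f, k2g = F(r + h/2, f + h/2 * k1f, g + h/2 * k1g)
        k3f, k3g = F(r + h/2, f + h/2 * k2f, g + h/2 * k2g)
        k4f, k4g = F(r + h, f + h * k3f, g + h * k3g)
        f += h/6 * (k1f + 2*k2f + 2*k3f + k4f)
        g += h/6 * (k1g + 2*k2g + 2*k3g + k4g)
        r += h
        rs.append(r); fs.append(f)
        if f > 1.0:
            return rs, fs, "high"         # overshoot: a = f'(0) too large
        if g < 0.0:
            return rs, fs, "low"          # f turns back below 1: a too small
    return rs, fs, "ok"

lo, hi = 0.0, 2.0                         # bracket for the shooting parameter f'(0)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if shoot(mid)[2] == "high":
        hi = mid
    else:
        lo = mid
a = 0.5 * (lo + hi)                       # bisection approximation of f'(0)
rs, fs, _ = shoot(a)
```

On the computed profile one can observe numerically the monotonicity of $f$ and the decay $R^2(1-f(R)^2)\to 2$ established below in Lemma \ref{radode}.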
In the general case, uniqueness of the tangent map at infinity is not guaranteed and the possible lack of compactness of $\{u_R\}_{R>0}$ has been carefully analyzed in \cite{LW1,LW2} where the blow-up analysis of the defect measure arising in the limit of the measures $e_R(u_R)dx$ is performed. As a byproduct (see \cite[Corollary D]{LW2}), a quantization result for the normalized energy is obtained, namely $R^{-1}E(u,B_R) \to 4\pi k$ as $R \to +\infty$ for some $k \in\NN$, the case $k=1$ being valid both for the solution \req{GLsolutions} (see Proposition \ref{Radsol}) and the local minimizer constructed in Theorem \ref{existence}. The following result shows that the same property holds for any nonconstant local minimizer of $E(\cdot)$ satisfying \req{lingro}: such a minimizer realizes the lowest energy quantization level. \begin{theorem} \label{quantization} Let $u\in H^1_{\rm loc}(\R^3;\R^3)$ be a nonconstant local minimizer of $E(\cdot)$ satisfying \req{lingro}. Then $R^{-1}E(u,B_R)\to 4\pi$ as $R \to+ \infty$ and the scaled maps $\{ u_R \}_{R>0}$ are relatively compact in $H^1_{\rm{loc}}(\R^3;\R^3)$. \end{theorem} In proving this theorem, the first step is to apply the blow-down analysis from infinity given in \cite{LW2}. Then, taking minimality into account, we exclude concentration by a comparison argument involving a ``dipole removing technique''. This yields the compactness of the scaled maps. Finally, another comparison argument, based on minimality and on the results in \cite{BCL}, gives the desired value for the limit of the scaled energy. Here we believe that (as shown in \cite{Sa} for the 2D case) assumption \req{lingro} should always hold, as a consequence of local minimality. 
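As a quick check of the level $4\pi$ (a standard computation, recalled here for the reader's convenience): for the degree-one tangent map $v(x)=x/|x|$ one has $|\nabla v|^2=2/|x|^2$, so that
\[
\frac{1}{R}\,E_\infty(v,B_R)
  =\frac{1}{R}\int_{B_R}\frac{1}{2}\,\frac{2}{|x|^2}\,dx
  =\frac{1}{R}\int_0^R\frac{4\pi t^2}{t^2}\,dt
  =4\pi\,,
\]
in agreement with the limit $R^{-1}E(u,B_R)\to 4\pi$ in Theorem \ref{quantization}.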
\vskip5pt In order to prove full symmetry of a nonconstant local minimizer, a natural approach is to prove uniqueness and symmetry of the tangent map at infinity, and then try to propagate the symmetry from infinity to the entire space. As a first step in this direction, we have the following result inspired by the asymptotic analysis developed for harmonic maps at isolated singularities in the important work \cite{Si1} (see also \cite{Si2}, \cite{Je} for a possibly simplified treatment and a more comprehensive exposition on the subject, and \cite{GW} for the case of $\mathbb{S}^2$-valued harmonic maps in $\mathbb{R}^3$). \begin{theorem} \label{asymmetry} Let $u$ be an entire smooth solution of \req{GL} satisfying \req{lingro} and such that the scaled maps $\{u_R\}_{R>0}$ are relatively compact in $H^1_{\rm{loc}}(\R^3;\R^3)$. Then there exists a constant $C>0$ such that for all $x\in \mathbb{R}^3$, \begin{equation} \label{simonbound} |x|^2 (1-|u(x)|^2)+|x||\nabla u(x)|+|x|^3 |\nabla (1-|u(x)|^2)|+|x|^2 |\nabla^2 u(x)| \leq C \, , \end{equation} and there exists a unique harmonic map $\omega:\mathbb{S}^2 \to \mathbb{S}^2$ such that ${\rm deg} \, \omega={\rm deg}_\infty u$ and setting $u_\infty(x)=\omega(x/|x|)$, \begin{itemize} \item[(i)] $\| {u_R}_{|\mathbb{S}^2}-\omega \|_{C^2(\mathbb{S}^2;\mathbb{R}^3)}\to 0$ as $R\to +\infty\,$, \\ \item[(ii)] $e_R(u_R)(x)dx \overset{*}{\rightharpoonup} \frac12 |\nabla u_\infty|^2dx $ weakly* as measures as $R \to +\infty\,$. \end{itemize} If in addition ${\rm deg}_{\infty} u=\pm 1$ then $\omega(x)=Tx$ for some $T\in O(3)$. \end{theorem} This result strongly relies on the a priori bound \req{simonbound} for entire solutions to \req{GL} which, loosely speaking, do not exhibit any bubbling phenomena at infinity (more precisely, the scaled maps $\{u_R\}$ do not exhibit energy concentration as $R\to+ \infty$). 
Whenever \eqref{simonbound} holds, we can write for $|x|$ sufficiently large the polar decomposition of the solution $u$ as $u(x)=\rho(x) w(x)$ for some positive function $\rho$ and some $\mathbb{S}^2$-valued map $w$ which solve the system \begin{equation} \label{polarsystem} \begin{cases} {\rm div} ( \rho^2(x) \nabla w(x) )+w(x) \rho^2(x) |\nabla w(x)|^2=0\,, \\ \Delta \rho (x)+\rho(x) (1-\rho^2(x))=\rho(x) |\nabla w(x)|^2\,, \end{cases} \end{equation} for $|x|$ large. It is clear from \eqref{simonbound} that $\rho$ tends smoothly to $1$ at infinity. Hence the unit-norm map $w$ becomes asymptotically harmonic as $|x|\to +\infty$, and system \req{polarsystem} can be considered as a perturbation of the harmonic map system. In the present situation, uniqueness of the asymptotic limit can be obtained from an elementary but tricky estimate on the radial derivative of $ w$, and we avoid the use of the Simon-{\L}ojasiewicz inequality. \vskip5pt Once the asymptotic symmetry is obtained, we can adapt the division method used in \cite{M2} and \cite{R} to get full symmetry. The main result of the paper is the following. \begin{theorem} \label{SYMMETRY} Let $u$ be an entire solution of \req{GL}. The following conditions are equivalent: \begin{itemize} \item[${(i)}$] $u$ is a nonconstant local minimizer of $E(\cdot)$ satisfying \req{lingro}; \vskip5pt \item[$(ii)$] $E(u,B_R)=4\pi R +o(R)$ as $R \to+\infty$; \vskip5pt \item[$(iii)$] $u$ satisfies $|u(x)|=1+\mathcal{O}(|x|^{-2})$ as $|x|\to +\infty$ and ${\rm deg}_\infty u=\pm 1$; \vskip5pt \item[$(iv)$] up to a translation on the domain and an orthogonal transformation on the image, $u$ is $O(3)$-equivariant, i.e., $u=U$ as given by \req{GLsolutions}. 
\end{itemize} \end{theorem} As a consequence of this theorem, we see that under the assumption \req{lingro}, up to translations and orthogonal transformations, any nonconstant local minimizer of $E_\lambda(\cdot)$ in $H^1_{\rm{loc}}(\mathbb{R}^3;\mathbb{R}^3)$ is given by $u(x)=U(\lambda x)$ with $U$ as in \req{GLsolutions}. In the limiting case $\lambda=+\infty$, a similar result has been proved in \cite[Theorem 2.2]{AL} showing that any nonconstant local minimizer $u$ of the Dirichlet integral $E_\infty(\cdot)$ in $H^1_{\rm{loc}}(\mathbb{R}^3;\mathbb{S}^2)$ is given by $u(x)=x/|x|$ up to translations and orthogonal transformations. \vskip10pt The plan of the paper is the following. In Section 2 we review the properties of the equivariant solution \req{GLsolutions}. In Section 3 we study minimizing solutions to $(P_\lambda)$ and prove Theorem \ref{existence}. In Section 4 we prove the quantization property for an arbitrary local minimizer, {\it i.e.}, we prove Theorem \ref{quantization}. In Section 5 we deal with asymptotic symmetry and Theorem \ref{asymmetry}. Finally we obtain in Section 6 the full symmetry and the main result of the paper. \section{The equivariant solution}\label{radsol} In this section we collect some preliminary results about equivariant entire solutions. The existence statement and the qualitative study are essentially contained in \cite{AF,FG} and \cite{G}. In the following lemma we stress the asymptotic decay at infinity. \begin{lemma} \label{radode} There is a unique solution $f \in C^2([0,+\infty))$ of \begin{equation}\label{cauchypb} \begin{cases} \ds f^{\prime \prime}+\frac{2}{r} f^\prime -\frac{2}{r^2} f +f(1-f^2)=0 \,,\\[8pt] f(0)=0 \;\;\text{and}\;\; f(+\infty)=1\,. 
\end{cases} \end{equation} In addition, $0<f(r)<1$ for each $r>0$, $f^\prime(0)>0$, $f$ is strictly increasing, \begin{equation} \label{odedec} R^2 |f^{\prime\prime}(R)|+R f^\prime(R)+\left| 2-R^2(1-f(R)^2)\right|=o(1) \quad \hbox{as} \quad R \to+\infty \, , \end{equation} and \begin{equation} \label{radendec} \frac{1}{R}\int_0^R\bigg( \frac{r^2}{2}(f^\prime)^2+f^2 +r^2\frac{(1-f^2)^2}{4}\bigg) dr \to 1 \quad \hbox{as} \quad R \to +\infty \, . \end{equation} \end{lemma} \begin{proof} The existence of an increasing solution follows from \cite{G} and \cite{AF}. To obtain the estimates at infinity in \eqref{odedec}, we multiply the equation by $r^2 f^\prime (r)$ and an integration by parts yields \begin{equation}\label{multeq} \frac{R^2}{2}(f^\prime(R))^2+\int_0^R r(f^\prime(r))^2 dr+\int_0^R r^2(1-(f(r))^2)f(r)f^\prime(r)dr=(f(R))^2\leq 1 \, . \end{equation} Using the monotonicity of $f$, we deduce that $ \int_0^{+\infty} r(f^\prime(r))^2 dr<+\infty$. Hence we can find a sequence $R_n\to+\infty$ such that $R_n f^\prime(R_n)\to 0$ as $n \to +\infty$. On the other hand, the integral terms in \eqref{multeq} admit a limit as $R \to +\infty$. As a consequence, $rf^\prime(r)$ admits a limit at infinity and thus $Rf^\prime(R)\to 0$ as $R\to +\infty$. For any $k \in (0,1)$ fixed, multiplying the equation by $r^2$ and averaging over $(kR,R)$ leads to $$\frac{R^2f^\prime(R)-k^2R^2f^\prime(kR)}{(1-k)R}+\frac{1}{(1-k)R}\int_{kR}^R f(r)r^2(1-(f(r))^2)dr=\frac{2}{(1-k)R}\int_{kR}^R f(r)dr \,. $$ Since $f$ is increasing and tends to $1$ at infinity, we infer $$ k^2 \limsup_{R \to+ \infty} R^2 (1-(f(R))^2) \leq 2 \leq \liminf_{R\to +\infty} R^2 (1-(f(kR))^2) \, , $$ so that $R^2 (1-(f(R))^2) \to 2$ as $R \to +\infty$ by the arbitrariness of $k$. Taking the equation into account, \req{odedec} follows. 
To prove \req{radendec} we multiply the equation by $r^2(1-f^2)$ and we integrate by parts on $(0,R)$ to get $$R^2(1-(f(R))^2)f^\prime(R)+2\int_0^Rr^2f(f^\prime)^2dr+\int_0^R r^2f(1-f^2)^2dr=2\int_0^Rf(1-f^2)dr \, .$$ Since $f$ is increasing and tends to $1$ at infinity, we deduce using \req{odedec} that $$\frac{1}{R}\int_0^R r^2(1-f^2)^2dr+\frac{1}{R}\int_0^R 2r^2 (f^\prime)^2dr +R^2 (1-(f(R))^2)f^\prime(R) \to 0 \,,$$ and \req{radendec} follows easily. \end{proof} A consequence of the previous lemma is the following result. \begin{proposition} \label{Radsol} Let $x_0 \in \mathbb{R}^3$ and $T \in O(3)$. Consider the function $f:[0,+\infty) \to [0,1)$ given by Lemma~\ref{radode} and define $$w(x):= \frac{T(x-x_0)}{|x-x_0|} f(|x-x_0|)\,.$$ Then $w$ is a smooth solution of \eqref{GL}. In addition, $0<|w(x)|<1$ for each $x \neq x_0$, $w$ satisfies \req{simonbound} and \begin{equation} \label{radscaledenergy} \lim_{R \to +\infty} \frac{1}{R} \int_{B_R(x_0)} \left( \frac{1}{2}|\nabla w(x)|^2+\frac{(1-|w(x)|^2)^2}{4}\right) dx = 4\pi \, . \end{equation} \end{proposition} \begin{proof} As in \cite{AF} and \cite{G}, $w$ is smooth and is a classical solution of \eqref{GL}. It is routine to check that \req{simonbound} follows from \req{odedec}. Then a simple calculation yields $$\displaystyle{|\nabla w(x)|^2=(f^\prime(|x-x_0|))^2+\frac{2(f(|x-x_0|))^2}{|x-x_0|^2} }\,,$$ so that the energy density equals $\frac12(f^\prime)^2+\frac{f^2}{|x-x_0|^2}+\frac{(1-f^2)^2}{4}$ with $f$ evaluated at $|x-x_0|$, whence \req{radscaledenergy} follows from \req{radendec}. \end{proof} \begin{remark} \label{equivariance} The solution $U$ given by \req{GLsolutions} is the unique $O(3)$-equivariant solution $u$ of \req{GL}-\req{modtoone} such that $u^{-1}(\{0\})=\{0\}$ and ${\rm deg}_\infty u=1$. Indeed, for each fixed $x \neq 0$, setting $l_x$ to be the line passing through $0$ and $x$, we have $u(l_x) \subset l_x$ because $u$ is equivariant (actually invariant) under rotations fixing $l_x$. 
Hence we can write $u(x)=(x/|x|)\sigma(x) |u(x)|$ with $\sigma(x)=\pm1$ and $|u(x)|=g(|x|)$ for some smooth function $g:(0,+\infty) \to (0,+\infty)$. Since $u$ is smooth and ${\rm deg}_\infty u=1$, we conclude that $\sigma\equiv 1$. Taking \req{modtoone} into account we conclude that $g$ satisfies the Cauchy problem \req{cauchypb}. Finally, by the uniqueness result in \cite{AF,G}, we obtain $g\equiv f$ as claimed. \end{remark} \section{Existence of nonconstant local minimizers} A basic ingredient in the construction of a nonconstant local minimizer is the following small energy regularity result taken from \cite{LW2} (see also \cite{CS}). \begin{lemma} \label{epsregularity} There exist two positive constants $\eta_0>0$ and $C_0>0$ such that for any $\lambda\geq 1$ and any $u\in C^2(B_{2R}(x_0);\mathbb{R}^3)$ satisfying \[ \Delta u +\lambda^2 u(1-|u|^2)=0 \quad \hbox{in $B_{2R}(x_0)$}\,, \] with $\displaystyle{\frac{1}{2R}\,E_{\lambda}(u, B_{2R}(x_0)) \leq \eta_0 }\,$, one has \begin{equation} \label{LinfL1energy} R^2 \sup_{B_R(x_0)} e_{\lambda}(u) \leq C_0 \frac{1}{2R}\, E_{\lambda}(u,B_{2R}(x_0)) \, . \end{equation} \end{lemma} We will also make use of the following boundary version of Lemma \ref{epsregularity} (see \cite{C,CL}). \begin{lemma} \label{epsregularitybd} Let $g:\partial B_1\to \mathbb{S}^2$ be a smooth map. There exist two positive constants $\eta_1>0$ and $C_1>0$ such that for any $\lambda\geq 1$, $0<R<\eta_1/2$, $x_0\in\partial B_1$ and any $u\in C^2(\overline B_1\cap B_{2R}(x_0);\mathbb{R}^3)$ satisfying $u=g$ on $\partial B_1\cap B_{2R}(x_0)$ and \[ \Delta u +\lambda^2 u(1-|u|^2)=0 \quad \hbox{in $B_1\cap B_{2R}(x_0)$}\,, \] with $\displaystyle{\frac{1}{2R}\,E_{\lambda}(u, B_1\cap B_{2R}(x_0)) \leq \eta_1 }\,$, one has \begin{equation} \label{LinfL1energybis} R^2 \sup_{B_1\cap B_R(x_0)} e_{\lambda}(u) \leq C_1\frac{1}{2R}\,E_{\lambda}(u, B_1\cap B_{2R}(x_0)) \, . 
\end{equation} \end{lemma} Another result, which is a combination of results from \cite{LW1} and \cite{LW2}, will play a crucial role in the sequel. \begin{proposition} \label{linwang} Let $\Omega \subset \mathbb{R}^3$ be a smooth bounded open set and let $\lambda_n\to +\infty$ as $n\to+\infty$. For every $n\in\NN$ let $u_n$ be a critical point of $E_{\lambda_n}(\cdot,\Omega)$ such that ${\sup_{n} E_{\lambda_n}(u_n,\Omega) <+\infty}$. Then, up to a subsequence, $u_n \rightharpoonup u$ weakly in $H^1(\Omega;\mathbb{R}^3)$ for some weakly harmonic map $u:\Omega\to \mathbb{S}^2$ and $e_{\lambda_n}(u_n)(x)dx\overset{*}{\rightharpoonup} \frac12 |\nabla u|^2dx+\nu$ weakly* as measures on $\Omega$ where $\nu=4\pi \theta \mathcal{H}^1 \LL \Sigma$ for some $\mathcal{H}^1$-rectifiable set $\Sigma$ of locally finite $\mathcal{H}^1$-measure and some integer valued $\mathcal{H}^1$-measurable function $\theta: \Sigma \to \mathbb{N}$. \end{proposition} The key result of this section is the following proposition. \begin{proposition} \label{Vorticity} Let $\lambda\geq 1$ and $u_\lambda \in H^1(B_1;\mathbb{R}^3)$ be a global minimizer of $E_{\lambda}(\cdot,B_1)$ over $H^1_{\rm{Id}}(B_1;\mathbb{R}^3)$. For any $\delta\in(0,1)$, there exists a constant $C_\delta>0$ independent of $\lambda$ such that ${\rm diam}\big(\{ |u_\lambda|\leq \delta \}\big )\leq C_\delta\lambda^{-1}$ and ${\rm dist}_{H}\big(\{ |u_\lambda|\leq \delta \} ,\{0\}\big)=o(1)$ as $\lambda\to+\infty$ where ${\rm dist}_H$ denotes the Hausdorff distance. \end{proposition} \begin{proof} Let us consider an arbitrary sequence $\lambda_n\to +\infty$, and for every $n\in\NN$ let $u_n\in H^1(B_1;\R^3)$ be a global minimizer of $E_{\lambda_n}(\cdot,B_1)$ under the boundary condition ${u_n}_{|\partial B_1}=x$. It is well known that $u_n\in C^2(\overline B_1)$ and $|u_n|\leq 1$ for every $n\in\NN$. \vskip5pt \noindent {\em Step 1.} We claim that $u_n \to v(x):=x/|x|$ strongly in $H^1(B_1; \mathbb{R}^3)$. 
Since the map $v$ is admissible, one has \begin{equation}\label{bdenggl} \frac{1}{2}\int_{B_1}|\nabla u_n|^2\leq E_{\lambda_n}(u_n,B_1)\leq E_{\lambda_n}(v,B_1)=\frac{1}{2}\int_{B_1}|\nabla v|^2=4\pi\quad\text{for every $n\in\NN$.} \end{equation} As a consequence, $\{u_n\}$ is bounded in $H^1(B_1;\R^3)$ and up to a subsequence, $u_n\rightharpoonup u_\star$ weakly in $H^1(B_1; \mathbb{R}^3)$ for some $\mathbb{S}^2$-valued map $u_\star$ satisfying ${u_\star}_{|\partial B_1}=x$. By Theorem~7.1 in \cite{BCL}, the map $v$ is the unique minimizer of $\,u\in H^1(B_1;\mathbb{S}^2)\mapsto \int_{B_1}|\nabla u|^2$ under the boundary condition $u_{|\partial B_1}=x$. In particular, $\int_{B_1}|\nabla u_\star|^2 \geq \int_{B_1}|\nabla v|^2$ which, combined with \eqref{bdenggl}, yields $$\frac{1}{2}\int_{B_1}|\nabla u_n|^2\to \frac{1}{2}\int_{B_1}|\nabla u_\star|^2=\frac{1}{2}\int_{B_1}|\nabla v|^2\quad\text{as $n\to+\infty$}\,.$$ Therefore $u_\star\equiv v$ and $u_n\to v$ strongly in $H^1(B_1; \mathbb{R}^3)$. \vskip5pt \noindent {\em Step 2.} Let $\delta \in (0,1)$ be fixed. We now prove that the family of compact sets $\mathcal{V}_n:=\{|u_n| \leq \delta \}$ converges to $\{ 0 \}$ in the Hausdorff sense. It suffices to prove that, for any given $0<\rho<1$, $\mathcal{V}_n\subset B_\rho$ for every $n$ large enough. Since $v$ is smooth outside the origin, we can find $0<\sigma\leq\min( \rho/8,\eta_1/4)$ such that $$\frac{1}{\sigma}\int_{B_1\cap B_{4\sigma}(x)}|\nabla v|^2<\min(\eta_0,\eta_1)=:\ell \quad \text{for every $x\in \overline B_1\setminus B_\rho$}\,,$$ where $\eta_0$ and $\eta_1$ are given by Lemma \ref{epsregularity} and Lemma \ref{epsregularitybd} respectively. From the strong convergence of $u_n$ to $v$ in $H^1$, we infer that \begin{equation}\label{smalleng} \frac{1}{\sigma}\,E_{\lambda_n}(u_n,B_{4\sigma}(x))<\ell\quad \text{for every $x\in \overline B_1\setminus B_\rho$} \end{equation} whenever $n \geq N_1$ for some integer $N_1$ independent of $x$. 
Next consider a finite family of points $\{x_j\}_{j\in J}\subset \overline B_1\setminus B_\rho$ satisfying $B_{2\sigma}(x_j)\subset B_1$ if $x_j\in B_1$ and $$\overline B_1\setminus B_\rho \subset \bigg(\bigcup_{x_j\in B_1}B_{\sigma}(x_j)\bigg)\cup \bigg(\bigcup_{x_j\in \partial B_1}B_{2\sigma}(x_j)\bigg)\,.$$ In view of \eqref{smalleng}, for each $j\in J$ we can apply Lemma \ref{epsregularity} in $B_{2\sigma}(x_j)$ if $x_j\in B_1$ and Lemma \ref{epsregularitybd} in $B_1\cap B_{4\sigma}(x_j)$ if $x_j\in \partial B_1$ to deduce $$\sup_{\overline B_1\setminus B_\rho}\,e_{\lambda_n}(u_n)\leq C\sigma^{-2}\quad\text{for every $n\geq N_1$}\,, $$ for some constant $C$ independent of $n$. By the Ascoli-Arzel\`a theorem the sequence $\{u_n\}$ is compact in $C^0(\overline{B_1} \setminus B_\rho)$, and thus $|u_n| \to 1$ uniformly in $\overline{B_1}\setminus B_\rho$. In particular $|u_n|>\delta$ in $\overline {B_1}\setminus B_\rho$ whenever $n$ is large enough. \vskip5pt In the remainder of this proof we will establish the estimate $\text{diam}\,(\mathcal{V}_n)\leq C_\delta \lambda^{-1}_n$. We shall argue by contradiction. Setting $r_n:=\text{diam}\,(\mathcal{V}_n)$, we assume that for a subsequence $\kappa_n:=r_n\lambda_n\to +\infty$. Let $a_n,b_n\in \mathcal{V}_n$ be such that $|a_n-b_n|=r_n$ and let $c_n$ be the midpoint of the segment $[a_n,b_n]$. In view of Step 2, we have $c_n\to 0$. Next we define for $n$ large enough and $x\in B_{2}$, $$w_n(x):=u_n(r_nx+c_n)\,,$$ so that $w_n$ satisfies \begin{equation}\label{rescgl} \Delta w_n+\kappa_n^2w_n(1-|w_n|^2)=0\quad \text{in $B_2$\,.} \end{equation} Up to a rotation, we may assume without loss of generality that $(a_n-c_n)/r_n=(1/2,0,0)=:P_1$ and $(b_n-c_n)/r_n=(-1/2,0,0)=:P_2$ so that \begin{equation}\label{dblevort} |w_n(P_1)|=|w_n(P_2)|=\delta\quad \text{for every $n$ sufficiently large}\,. \end{equation} \vskip5pt \noindent {\em Step 3}. 
We claim that, up to a subsequence, $w_n\to \phi$ strongly in $H^1_{\rm loc}(B_2;\mathbb{R}^3)$ for some weakly stationary harmonic map $\phi:B_2\to \mathbb{S}^2$. First we infer from \eqref{bdenggl} and the Monotonicity Formula \eqref{monotonicity} applied to $w_n$ and $u_n$ that \begin{align} \frac{1}{R}\,E_{\kappa_n}(w_n,B_R(x_0))\leq\frac{1}{1-|r_nx_0+c_n|}\,E_{\lambda_n}(u_n,B_{1-|r_nx_0+c_n|}(r_nx_0+c_n)) \label{estimon}\leq \frac{4\pi}{1-|r_n x_0+c_n|} \,, \end{align} for every $x_0\in B_2$ and $0<R<\text{dist}(x_0,\partial B_2)$. Hence $\sup_n E_{\kappa_n}(w_n,B_2)<+\infty$. In view of Proposition \ref{linwang}, up to a further subsequence, $w_n\rightharpoonup \phi $ weakly in $H^1(B_2;\mathbb{R}^3)$ where $\phi:B_2\to \mathbb{S}^2$ is a weakly harmonic map, and \begin{equation}\label{convrad} e_{\kappa_n}(w_n)dx\mathop{\rightharpoonup}\limits^{*}\mu:= \frac{1}{2}|\nabla \phi|^2dx+\nu\quad\text{weakly* as measures on $B_2$}\,, \end{equation} for some Radon measure $\nu=4\pi\theta \mathcal{H}^1\LL \Sigma$ where $\Sigma$ is a $\mathcal{H}^1$-rectifiable set with locally finite $\mathcal{H}^1$-measure and $\theta$ is an integer valued function. As a direct consequence of the Monotonicity Formula \eqref{monotonicity} and \req{estimon}, we have \begin{equation}\label{estdefmeas} \frac{1}{R}\, \nu(B_{R}(x_0))\leq \frac{1}{R} \,\mu(B_{R}(x_0))\leq 4\pi \end{equation} for every $x_0\in B_2$ and $0<R<\text{dist}(x_0,\partial B_2)$. By Theorem~2.83 in \cite{AFP}, the $1$-dimensional density of $\nu$ at $x_0$, {\it i.e.}, $\Theta_1(\nu,x_0)=\lim_{R\to0} (2R)^{-1}\nu(B_R(x_0))$, exists and coincides with $4\pi\theta(x_0)$ for $\mathcal{H}^1$-a.e. $x_0\in\Sigma$. In view of \eqref{estdefmeas} we deduce that $\theta\leq 1/2$ $\mathcal{H}^1$-a.e. on $\Sigma$. Since $\theta$ is integer valued, we have $\theta=0$ $\mathcal{H}^1$-a.e. on $\Sigma$, i.e., $\nu\equiv 0$. 
Going back to \eqref{convrad}, we conclude that $w_n\to \phi$ strongly in $H^1_{\rm loc}(B_2;\mathbb{R}^3)$ and \begin{equation}\label{convpen} \kappa_n^2(1-|w_n|^2)^2\mathop{\longrightarrow}\limits_{n\to+\infty} 0\quad\text{in $L^1_{\rm loc}(B_2)$}\,. \end{equation} It now remains to prove the stationarity of $\phi$. Since $w_n$ is smooth and satisfies \eqref{rescgl}, we have $$\int_{B_2}e_{\kappa_n}(w_n)\,{\rm div}\,\zeta -\sum_{i,j=1}^3\frac{\partial \zeta_i}{\partial x_j}\,\frac{\partial w_n}{\partial x_i}\cdot \frac{\partial w_n}{\partial x_j}=0$$ for every $\zeta\in C^1_c(B_2;\R^3)$. Using the local strong convergence of $w_n$ and \eqref{convpen}, we can pass to the limit $n\to+\infty$ in the above equation to derive that $$\int_{B_2}|\nabla \phi|^2\,{\rm div}\,\zeta -2\sum_{i,j=1}^3\frac{\partial \zeta_i}{\partial x_j}\,\frac{\partial \phi}{\partial x_i}\cdot \frac{\partial \phi}{\partial x_j}=0 \quad \forall \zeta\in C^1_c(B_2;\R^3)\,,$$ i.e., $\phi$ is stationary in $B_2$. \vskip5pt \noindent {\em Step 4.} By the energy monotonicity formula for stationary harmonic maps (see \cite{Sc}) and \eqref{estimon}, we have \begin{equation}\label{monhm} \frac{1}{R_1}\int_{B_{R_1}(x_0)}|\nabla\phi|^2\leq \frac{1}{R_2}\int_{B_{R_2}(x_0)}|\nabla\phi|^2\leq 8\pi \end{equation} for every $x_0\in B_2$ and $0<R_1\leq R_2\leq {\rm dist}(x_0,\partial B_2)$. We claim that \begin{equation}\label{nonvanvor} \lim_{R\to0}\frac{1}{R}\,\int_{B_R(P_i)}|\nabla \phi|^2=\inf_{0<R<1}\frac{1}{R}\,\int_{B_R(P_i)}|\nabla \phi|^2>0\quad \text{for $i=1,2$}\,. \end{equation} Indeed, if the limit above vanished, we could argue as in Step 2, using Lemma \ref{epsregularity}, to deduce that $|w_n(P_i)|>\delta$ for $n$ large, which contradicts \eqref{dblevort}. 
By the quantization results in \cite{LR}, for $i=1,2$, $$\lim_{R\to0}\frac{1}{R}\,\int_{B_R(P_i)}|\nabla \phi|^2 = 8\pi k_i \quad\text{for some $k_i\in\NN$}\,.$$ Combining \eqref{monhm} with \eqref{nonvanvor}, we deduce that $k_1=k_2=1$ and thus \begin{equation}\label{pricev} \inf_{0<R<1}\frac{1}{R}\,\int_{B_R(P_i)}|\nabla \phi|^2=8\pi\quad \text{for $i=1,2$}\,. \end{equation} Setting $Q_R=(R-1/2,0,0)$ for $0<R<1$, we then have $$8\pi\geq \int_{B_1(Q_R)}|\nabla\phi|^2 \geq \int_{B_{R}(P_1)}|\nabla \phi|^2+\int_{B_{1-R}(P_2)}|\nabla \phi|^2\geq 8\pi R+8\pi(1-R)=8\pi\,. $$ Hence $|\nabla \phi|^2\equiv 0$ a.e. in $B_1(Q_R)\setminus \big(B_{R}(P_1)\cup B_{1-R}(P_2)\big)$ for every $0<R<1$. Since $$B_1\cap \bigcup_{0<R<1}\bigg( B_1(Q_R)\setminus \big(B_{R}(P_1)\cup B_{1-R}(P_2)\big)\bigg)=B_1\setminus [(-1,0,0),(1,0,0)]\,,$$ we derive that $\int_{B_1}|\nabla \phi|^2=0$ which obviously contradicts \eqref{pricev}. Therefore $r_n\lambda_n$ remains bounded and the proof is complete. \end{proof} \noindent {\bf Proof of Theorem \ref{existence}.} Consider a sequence $\lambda_n\to+\infty$ and let $u_n$ be a minimizer of $E_{\lambda_n}(\cdot,B_{1})$ on $H^1_{\rm{Id}}(B_{1};\mathbb{R}^3)$. By Proposition \ref{Vorticity}, $|u_n|\geq 1/2$ in $B_1\setminus B_{1/2}$ for $n$ large enough. In particular, $d_r:={\rm deg}(u_n, \partial B_r)$ is well defined for $1/2\leq r\leq1$ and $d_r=d_1=1$ thanks to the boundary condition. Hence we may find $a_n\in B_{1/2}$ such that $u_n (a_n)=0$ for every $n$ sufficiently large. Again by Proposition \ref{Vorticity}, $a_n\to 0$ and $\{|u_n|\leq 1/2\}\subset B_{r_n}(a_n)$ with $r_n:={\rm diam}(\{|u_n|\leq 1/2\})=\mathcal{O}(\lambda_n^{-1})$. Therefore ${\rm deg}(u_n, \partial B_r(a_n))=1$ for any $r\in [r_n, 1/2]$. 
Setting $R_n:=\lambda_n(1-|a_n|)$, we have $R_n \to +\infty$ as $n \to +\infty$, and we define for $x\in B_{R_n}$, $\bar u_n(x):=u_n\big(\lambda_n^{-1}x+a_n\big)$ so that $\bar u_n$ satisfies $$ \Delta \bar u_n +\bar u_n(1-|\bar u_n|^2)=0\quad \text{in $B_{R_n}$}\,,$$ $\bar u_n(0)=0$ and $|\bar u_n|\leq 1$ for every $n$. Moreover, arguing as in the previous proof, we obtain that \begin{equation}\label{quantconstr} \limsup_{n \to +\infty} R_n^{-1}E_1(\bar u_n,B_{R_n}) \leq 4\pi\,. \end{equation} Then we infer from standard elliptic theory that, up to a subsequence, $\bar u_n \to u$ in $C^2_{\rm loc}(\R^3)$ for some map $u:\R^3\to\R^3$ solving $\Delta u +u(1-|u|^2)=0$ in $\R^3$ and $u(0)=0$. By Proposition \ref{Vorticity} and the choice of $a_n$, we have $\{|\bar u_n|\leq 1/2\}\subset \overline B_{R_0}$ with $R_0:=\sup_n \lambda_n r_n<+\infty$. Hence $|u|\geq 1/2$ in $\R^3\setminus B_{R_0}$ by continuity and locally uniform convergence. As a consequence, $u$ is nonconstant, ${\rm deg}_{\infty} u$ is well defined and $${\rm deg}_{\infty} u={\rm deg}(u,\partial B_R)=\lim_{n\to+\infty} {\rm deg}(\bar u_n,\partial B_R)=\lim_{n \to+\infty} {\rm deg}(u_n, \partial B_{r_n}(a_n))=1$$ for any $R\geq R_0$. Arguing in the same way, we infer from Proposition \ref{Vorticity} that $|u(x)|\to 1$ as $|x|\to +\infty$. Next we deduce from \eqref{quantconstr}, the Monotonicity Formula \req{monotonicity} and the smooth convergence of $\bar{u}_n$ to $u$, that $\sup_{R>0}\,R^{-1} E_{1}(u,B_R)\leq4\pi$. By the quantization result \cite[Corollary D]{LW2}, we have $R^{-1} E_{1}(u,B_R)\to 4\pi k$ as $R\to+\infty$ with $k\in\{0,1\}$. Since $u$ is nonconstant, we conclude that $k=1$. Finally, the local minimality of $u$ easily follows from the minimality of $u_n$ and the strong convergence in $H^1_{\rm loc}(\R^3;\mathbb{R}^3)$ of $\bar u_n$ to $u$. \prbox \section{Energy quantization for local minimizers} This section is devoted to the proof of Theorem \ref{quantization}.
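Throughout this section we will freely use the following elementary scaling identity, which relates the rescaled energy density $e_\lambda(v):=\frac12|\nabla v|^2+\frac{\lambda^2}{4}(1-|v|^2)^2$ to the energy of $u$ on large balls. If $u_R(x):=u(Rx)$, then $\nabla u_R(x)=R\,\nabla u(Rx)$, and a change of variables gives
$$\int_{B_1}e_{R}(u_R)\,dx=\frac{1}{R}\int_{B_R}\bigg(\frac12|\nabla u|^2+\frac14(1-|u|^2)^2\bigg)dx=\frac{1}{R}\,E(u,B_R)\,.$$
In particular, under \req{lingro} the rescaled energies $E_R(u_R,B_1)=\int_{B_1}e_R(u_R)\,dx$ are uniformly bounded with respect to $R$.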
For any solution $u$ of \eqref{GL} satisfying \eqref{lingro}, the scaled maps $u_R(x):=u(Rx)$ are relatively weakly compact in $H^1_{\rm loc}(\R^3;\R^3)$. This fact will allow us to study such a map $u$ near infinity. First we recall that a tangent map to $u$ at infinity is a map $\phi:\R^3\to\R^3$ obtained as a weak limit of $u_n(x):=u(R_nx)$ in $H^1_{\rm loc}(\R^3;\R^3)$ for some sequence of radii $R_n\to+\infty$. We denote by $\mathcal{T}_\infty(u)$ the set of all possible tangent maps to $u$ at infinity. The only information given by the potential at infinity is that any $\phi \in \mathcal{T}_\infty(u)$ takes values into $\mathbb{S}^2$. This is an easy consequence of the following elementary lemma which will be used in the sequel. \begin{lemma} \label{nopotential} Let $u \in H^1_{\rm{loc}}(\mathbb{R}^3;\mathbb{R}^3)$ be a solution of \req{GL} satisfying \req{lingro}. Then \begin{equation} \label{potentialto0} \lim_{R \to +\infty} \frac{1}{R} \int_{B_R} \frac{(1-|u|^2)^2}{4}dx =0 \, . \end{equation} \end{lemma} \begin{proof} We apply \req{monotonicity} with $\lambda=1$, $r>0$ and $R=2r$ to obtain $$ \frac{1}{r} \int_{B_r} \frac{(1-|u|^2)^2}{4}dx \leq 4 \int_r^{2r} \frac{1}{t^2}\bigg( \int_{B_t} \frac{(1-|u|^2)^2}{4}\,dx\bigg) dt\leq \frac{1}{2r}E(u,B_{2r})-\frac{1}{r}E(u,B_r) \, . $$ Since the left hand side of \req{monotonicity} is bounded and increasing, the right hand side above tends to zero as $r$ tends to infinity and the conclusion follows. \end{proof} The following description of any tangent map has been obtained in \cite[Theorem~C]{LW2}. \begin{proposition}\label{descripblowdown} Let $u$ be a solution of \eqref{GL} satisfying \req{lingro}. Let $\phi \in \mathcal{T}_\infty(u)$ and let $R_n\to+\infty$ be an associated sequence of radii. Then $\phi(x)=\phi(x/|x|)$ for $x\not=0$ and $\phi_{|\mathbb{S}^2}$ is a smooth harmonic map with values into $\mathbb{S}^2$.
Moreover there exists a subsequence (not relabelled) such that \begin{equation}\label{defectcone} e_{R_n}(u_n)dx \mathop{\rightharpoonup}\limits^{*} \frac{1}{2}|\nabla\phi|^2dx+\nu \quad\text{as $n\to+\infty$}\,, \end{equation} weakly* as measures for some nonnegative Radon measure $\nu$. In addition, if $\nu\not\equiv 0$ there exist an integer $1\leq l<\infty$, $\{P_j\}_{j=1}^l\subset \mathbb{S}^2$ and $\{k_j\}_{j=1}^l\subset \NN^*$ such that \begin{itemize} \item[(i)] ${\rm spt}(\nu)=\cup_{j=1}^l\overline{OP_j}$ where $\overline{OP_j}$ denotes the ray emanating from the origin through $P_j$, and for $1\leq j\leq l$, $$\nu\LL\overline{OP_j}=4\pi k_j \mathcal{H}^1\LL\overline{OP_j}\,; $$ \item[(ii)] the following balancing condition holds: $$\frac{1}{2}\int_{\mathbb{S}^2}x|\nabla \phi|^2d\mathcal{H}^2 +4\pi\sum_{j=1}^l k_j P_j =0\,. $$ \end{itemize} \end{proposition} Under the assumption \req{lingro} we can apply Proposition \ref{descripblowdown} to any local minimizer of $E(\cdot)$. Now we claim that the local minimality of $u$ implies the strong convergence of the scaled maps $\{u_n\}$ to the associated tangent map. \begin{proposition}\label{propcomp} Let $u\in H^1_{\rm loc}(\R^3;\R^3)$ be a local minimizer of $E(\cdot)$ satisfying \req{lingro}. Let $\phi \in \mathcal{T}_\infty(u)$ and let $R_n\to+\infty$ be the associated sequence of radii given by Proposition~\ref{descripblowdown}. Then $u_n\to \phi$ strongly in $H^1_{\rm loc}(\R^3)$ as $n\to+\infty$ and \begin{equation}\label{convmeas} e_{R_n}(u_n)dx\mathop{\rightharpoonup}\limits^{*}\frac{1}{2}|\nabla\phi|^2dx \end{equation} weakly* as measures. \end{proposition} \noindent {\bf Proof.} In view of Proposition \ref{descripblowdown}, it suffices to prove that the defect measure $\nu$ in \eqref{defectcone} actually vanishes. We shall achieve this using a comparison argument. First we improve the convergence of $u_n$ away from ${\rm spt}(\nu)$.
\vskip5pt \noindent{\it Step 1.} First observe that $R_n^2(1-|u_n|^2)^2\to 0$ in $L^1_{\rm loc}(\R^3)$ by scaling and Lemma \ref{nopotential}. Next we claim that $u_n\to \phi$ in $C^1_{\rm loc}(\R^3\setminus({\rm spt}(\nu)\cup\{0\}))$. Fix a ball $B_{4\delta}(x_0)\subset\subset \R^3\setminus({\rm spt}(\nu)\cup\{0\})$ with arbitrary center and $\delta$ to be chosen. Since $\phi$ is smooth away from the origin, we can choose $\delta$ small such that $\int_{B_{4\delta}(x_0)}|\nabla\phi|^2<4\delta \eta_0$ where the constant $\eta_0$ is given by Lemma \ref{epsregularity}. In view of \eqref{defectcone}, we have $\int_{B_{4\delta}(x_0)}e_{R_n}(u_n) \to \frac{1}{2}\int_{B_{4\delta}(x_0)}|\nabla\phi|^2$. In particular $\int_{B_{4\delta}(x_0)}e_{R_n}(u_n)\leq 4\delta\eta_0$ for $n$ large enough. By Lemma \ref{epsregularity}, we infer that $|\nabla u_n|\leq C_{\delta,x_0}$ and $|u_n|\geq 1/2$ in $B_{2\delta}(x_0)$ for $n$ large and a constant $C_{\delta,x_0}$ independent of $n$. Since $u_n$ satisfies \eqref{GLresc} (with $R=R_n$), setting $\rho_n:=1-|u_n|^2$, we have $0\leq \rho_n\leq 1$ and $-\Delta\rho_n+R_n^2\rho_n\leq 2 C^2_{\delta,x_0}$ in $B_{2\delta}(x_0)$. By a slight modification of Lemma 2 in \cite{BBH}, we infer that $\rho_n\leq C'_{\delta,x_0}R_n^{-2}$ in $B_\delta(x_0)$ for some constant $C'_{\delta,x_0}$ independent of $n$. Going back to \eqref{GLresc} we deduce that $|\Delta u_n|\leq C'_{\delta,x_0}$ in $B_\delta(x_0)$. Using standard $W^{2,p}_{\rm{loc}}$-regularity and the Sobolev embedding in $C^{1,\alpha}$-spaces, we finally conclude that $u_n\to \phi$ in $C^1(B_{\delta/2}(x_0))$. \vskip5pt \noindent{\it Step 2.} We will argue by contradiction and will assume that $\nu\not\equiv 0$ so that $k_1\geq 1$. Without loss of generality we may also assume that $P_1=(1,0,0)$ and $\phi(P_1)=(0,0,1)=:N$. 
We will construct, for $n$ sufficiently large, comparison maps $w_n$ which, roughly speaking, agree with $u_n$ except in a small cylinder around the $x_1$ axis, where they are constantly equal to $N$, and which have smaller energy. We consider two small parameters $0<\delta\ll1$ and $0<\sigma\ll1$. In view of the explicit form of $\phi$ and $\nu$, we can find $x_\sigma \in \overline{OP_1}$ with $|x_\sigma|$ as large as needed such that $\overline Q_4(x_\sigma)\cap \overline{OP_j}=\emptyset$ for each $2\leq j\leq l$, \begin{equation}\label{controlphi} \phi(Q_4(x_\sigma))\subset B_\sigma(N)\quad\text{and}\quad \int_{Q_4(x_\sigma)}|\nabla \phi|^2<\sigma\,. \end{equation} Here we use the notation $Q_\rho(x_\sigma)=x_\sigma+\rho(-1/2,1/2)^3$ for $\rho>0$. Throughout the proof $T_\delta:=\R\times B^{(2)}_\delta(0)\subset\R^3$ will denote the infinite cylinder of size $\delta$ around the $x_1$ axis. In view of Step~1, for $n$ large enough \begin{equation}\label{controlun} |u_n-\phi|<\sigma\quad\text{in $Q_4(x_\sigma)\setminus T_{\delta/2}$}\,, \end{equation} and in particular $|u_n|$ does not vanish in $Q_4(x_\sigma)\setminus T_{\delta/2}$ and is actually as close to one as we want. Consider a cut-off function $\chi_1\in C^\infty_c(Q_4(x_\sigma);[0,1])$ satisfying $\chi_1\equiv 1$ in $Q_3(x_\sigma)$ and set $\psi_\delta(x):=\min\{\delta^{-1}\chi_1(x)(2|x'|-\delta)^+, 1\}$ using the notation $x=(x_1,x')$. Then we define for $x\in Q_4(x_\sigma)$, $$\bar u_n(x) :=\psi_\delta (x)\,\frac{u_n(x)}{|u_n(x)|} +(1-\psi_\delta(x))u_n(x)\,.$$ Note that $\bar u_n=u_n$ in a neighborhood of $\partial Q_4(x_\sigma)$, $\bar u_n=u_n$ in $Q_4(x_\sigma)\cap T_{\delta/2}$, and $(1-|\bar u_n|^2)^2\leq (1-|u_n|^2)^2$, because the double well potential is locally convex near its minima.
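The last inequality can also be checked directly (a one-line verification): wherever $\psi_\delta(x)>0$ we have $|u_n(x)|>0$, and since $\psi_\delta\in[0,1]$ and $|u_n|\leq 1$,
$$|\bar u_n|=\psi_\delta+(1-\psi_\delta)\,|u_n|\in\big[\,|u_n|\,,1\,\big]\qquad\Longrightarrow\qquad 0\leq 1-|\bar u_n|^2\leq 1-|u_n|^2\,.$$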
Then we easily infer from Step 1 that $\bar u_n \to \phi$ in $W^{1,\infty}(Q_4(x_\sigma)\setminus T_{\delta/2})$ and $$e_{R_n}(\bar u_n)dx\LL Q_4(x_\sigma)\mathop{\rightharpoonup}\limits^{*} \frac{1}{2}|\nabla\phi|^2dx\LL Q_4(x_\sigma)+\nu\LL Q_4(x_\sigma)$$ weakly* as measures. Now consider a second cut-off function $\chi_2\in C^\infty_c(Q_3(x_\sigma);[0,1])$ satisfying $\chi_2\equiv 1$ in $Q_2(x_\sigma)$ and set $\tilde \psi_\delta(x)=\min\{\delta^{-1}\chi_2(x)(|x'|-\delta)^+, 1\}$. Define for $x\in Q_4(x_\sigma)$, $$v_n(x):=\begin{cases} \ds \frac{\tilde \psi_\delta(x) N+(1-\tilde \psi_\delta(x))\bar u_n(x)}{|\tilde \psi_\delta(x) N+(1-\tilde \psi_\delta(x))\bar u_n(x)|} & \text{if $x\in Q_3(x_\sigma) \setminus T_\delta$}\,,\\[8pt] \bar u_n(x) & \text{if $x\in (Q_4(x_\sigma)\setminus Q_3(x_\sigma)) \cup (Q_4(x_\sigma)\cap T_\delta)$}\,, \end{cases}$$ and $$\phi_\delta(x):= \frac{\tilde \psi_\delta(x) N+(1-\tilde \psi_\delta(x))\phi(x)}{|\tilde \psi_\delta(x) N+(1-\tilde \psi_\delta(x))\phi(x)|}\,.$$ Note that $\phi_\delta$ and $v_n$ are well defined and smooth (Lipschitz) thanks to \eqref{controlphi} and \eqref{controlun}. Moreover $v_n=u_n$ both in a neighborhood of $\partial Q_4(x_\sigma)$ and in $Q_4(x_\sigma) \cap T_{\delta/2}$, and $v_n\equiv N$ in $Q_2(x_\sigma)\setminus T_{2\delta}$. From the construction of $\bar u_n$, we derive that $v_n \to\phi_\delta $ in $W^{1,\infty}(Q_4(x_\sigma)\setminus T_{\delta/2})$ and \begin{equation}\label{concvn} e_{R_n}(v_n)dx\LL Q_4(x_\sigma)\mathop{\rightharpoonup}\limits^{*} \frac{1}{2}|\nabla\phi_\delta|^2dx\LL Q_4(x_\sigma)+\nu\LL Q_4(x_\sigma) \end{equation} weakly* as measures. 
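Let us note here, schematically, how the energy of $\phi_\delta$ is controlled; this is the computation behind the smallness used in the next step. Since the projection $s\mapsto s/|s|$ is Lipschitz in a neighborhood of $\mathbb{S}^2$ and $|\nabla\tilde\psi_\delta|\leq C/\delta$, we have
$$|\nabla \phi_\delta|\leq C\big(|\nabla\phi|+|\nabla\tilde\psi_\delta|\,|\phi-N|\big)\,,$$
so that \eqref{controlphi} yields $\int_{Q_4(x_\sigma)}|\nabla\phi_\delta|^2\leq C\big(\sigma+\delta^{-2}\sigma^2\big)\leq C_\delta\,\sigma$.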
Since $\nu$ does not charge the boundary of $Q_\rho(x_\sigma)$ for every $\rho>0$, we have $$\int^{|x_\sigma|+1}_{|x_\sigma|+1/2}\bigg(\int_{\{x_1=r\}\cap T_{2\delta}}e_{R_n}(v_n)\bigg)dr\mathop{\longrightarrow}\limits_{n\to+\infty} \frac{1}{2}\int_{\{|x_\sigma|+1/2<x_1<|x_\sigma|+1\}\cap T_{2\delta}}|\nabla\phi_\delta|^2 +2\pi k_1\,.$$ On the other hand, one may derive from the explicit form of $\phi_\delta$ and \eqref{controlphi} that \begin{equation}\label{smallphidel} \int_{Q_4(x_\sigma)}|\nabla\phi_\delta|^2\leq C_\delta \sigma\,, \end{equation} where $C_\delta$ denotes a constant independent of $\sigma$. Hence we can find $r_n^+\in [|x_\sigma|+1/2,|x_\sigma|+1]$ such that $$\limsup_{n\to +\infty} \int_{\{x_1=r_n^+\}\cap T_{2\delta}}e_{R_n}(v_n)\leq 4\pi k_1 +C_\delta \sigma\,.$$ Arguing in the same way, we find $r_n^-\in[|x_\sigma|-1,|x_\sigma|-1/2]$ such that $$\limsup_{n\to +\infty} \int_{\{x_1=r_n^-\}\cap T_{2\delta}}e_{R_n}(v_n)\leq 4\pi k_1 +C_\delta \sigma\,.$$ Next we introduce the sets \begin{align*} C_n^+&:= T_{2\delta}\cap\big\{r_n^+-2\delta\leq x_1\leq r_n^+\,,\,|x'|\leq x_1-(r_n^+-2\delta)\big\}\,,\\ C_n^-&:= T_{2\delta}\cap\big\{r_n^-\leq x_1\leq r_n^-+2\delta\,,\,|x'|\leq (r_n^-+2\delta)-x_1\big\}\,,\\ D_n&:=T_{2\delta}\cap \{x\in T_{2\delta},\,x_1\in(r_n^-,r_n^+) \}\,. \end{align*} Define for $x\in Q_4(x_\sigma)$ and $n$ large enough, $$w_n(x)= \begin{cases} v_n(x) & \text{if $x\in Q_4(x_\sigma)\setminus D_n$}\\[8pt] \ds v_n\bigg(r_n^+,\frac{2\delta x'}{x_1-(r_n^+-2\delta)}\bigg) &\text{if $x\in C_n^+$\,,}\\[10pt] \ds v_n\bigg(r_n^-,\frac{2\delta x'}{(r_n^-+2\delta)-x_1}\bigg) &\text{if $x\in C_n^-$\,,}\\[8pt] N & \text{if $x\in D_n\setminus (C_n^+\cup C_n^-)$\,.} \end{cases} $$ One may check that $w_n\in H^1(Q_4(x_\sigma);\R^3)$ and $w_n=u_n$ in a neighborhood of $\partial Q_4(x_\sigma)$. 
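On the two cones, $w_n$ is a rescaled copy of the slice values of $v_n$; this is the scaling behind the estimates that follow. Schematically, on $C_n^+$ one has $w_n(x)=v_n\big(r_n^+,\lambda(x_1)x'\big)$ with $\lambda(x_1):=2\delta/\big(x_1-(r_n^+-2\delta)\big)\geq1$, and the substitution $y'=\lambda(t)x'$ gives, for each slice $\{x_1=t\}$,
$$\int_{C_n^+\cap\{x_1=t\}}\lambda^2(t)\,\big|\nabla' v_n\big(r_n^+,\lambda(t)x'\big)\big|^2dx'=\int_{\{x_1=r_n^+\}\cap T_{2\delta}}|\nabla' v_n|^2\,dy'\,,$$
while the $x_1$-derivative of $w_n$ is controlled by the same quantity and the potential term only improves, since $\lambda\geq1$. Integrating over the interval $r_n^+-2\delta\leq x_1\leq r_n^+$, of length $2\delta$, produces the factor $C\delta$ below.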
Moreover, straightforward computations yield $$\int_{C_n^+}e_{R_n}(w_n)\leq C\delta \int_{\{x_1=r_n^+\}\cap T_{2\delta}} e_{R_n}(v_n)\quad\text{and}\quad \int_{C_n^-}e_{R_n}(w_n)\leq C\delta \int_{\{x_1=r_n^-\}\cap T_{2\delta}} e_{R_n}(v_n)\,,$$ for some absolute constant $C$. Recalling \eqref{concvn}, \eqref{smallphidel}, the fact that $\nu$ does not charge the boundary of $Q_\rho(x_\sigma)$ for every $\rho>0$ and $Q_4(x_\sigma)=(Q_4(x_\sigma) \setminus D_n) \cup (C_n^+\cup C_n^-) \cup (D_n \setminus (C_n^+\cup C_n^-)) $, we finally obtain \begin{equation}\label{enegcompmap} \limsup_{n\to+\infty}\int_{Q_4(x_\sigma)}e_{R_n}(w_n) \leq 12\pi k_1+C\delta+C_\delta \sigma \,, \end{equation} for some constant $C$ independent of $\sigma$ and $\delta$, and some constant $C_\delta$ independent of $\sigma$. \vskip5pt \noindent{\it Step 3.} From the local minimality of $u$, we infer that $$\int_{Q_4(x_\sigma)}e_{R_n}(u_n)\leq \int_{Q_4(x_\sigma)}e_{R_n}(w_n) \,.$$ Using \eqref{defectcone} and \eqref{enegcompmap} we let $n\to+\infty$ in the above inequality to derive $$16 \pi k_1 \leq \nu(Q_4(x_\sigma))+\int_{Q_4(x_\sigma)}\frac12 |\nabla \phi|^2 dx=\lim_{n \to +\infty} \int_{Q_4(x_\sigma)} e_{R_n}(u_n) \leq 12\pi k_1+C\delta+C_\delta \sigma\,.$$ Passing successively to the limits $\sigma\to0$ and $\delta\to 0$, we conclude that $k_1=0$. This contradicts our assumption $k_1\geq 1$ and the proof is complete. \prbox \begin{corollary}\label{strtangmap} Let $u\in H^1_{\rm loc}(\R^3;\R^3)$ be a nonconstant local minimizer of $E(\cdot)$ satisfying \req{lingro}. Then any $\phi \in \mathcal{T}_\infty(u)$ is of the form $\phi(x)=Tx/|x|$ for some $T\in O(3)$. \end{corollary} \noindent {\bf Proof.} {\it Step 1.} First we claim that any $\phi \in \mathcal{T}_\infty(u)$ is energy minimizing in $B_1$, {\it i.e.}, \begin{equation}\label{minphi} \int_{B_1}|\nabla\phi|^2dx \leq \int_{B_1}|\nabla \varphi |^2dx \quad\text{for all $\varphi\in H^1(B_1;S^2)$ such that $\varphi_{|\partial B_1}=\phi$}\,.
\end{equation} Let $R_n\to+\infty$ be the sequence of radii given by Proposition \ref{descripblowdown}, and let $\{u_n\}$ be the associated sequence of scaled maps. It follows from Step 2 in the previous proof that $$ \int_{B_1}e_{R_n}(u_n)dx \to \frac{1}{2}\int_{B_1}|\nabla \phi|^2dx$$ as $n\to+\infty$. In particular, \begin{equation}\label{vanpot} R_n^{2}\int_{B_1}(1-|u_n|^2)^2dx\to 0\,. \end{equation} In view of the local minimality of $u$, it suffices to prove that for any $\varphi \in H^1_\phi(B_1;S^2)$, there exists a sequence $\varphi_n\in H^1_{u_n}(B_1;\R^3)$ such that \begin{equation}\label{approx} \int_{B_1}e_{R_n}(\varphi_n)dx \to \frac{1}{2}\int_{B_1}|\nabla\varphi|^2dx\,. \end{equation} We proceed as follows. From the previous proof we know that $u_n\to \phi$ uniformly in the annulus $K:=\overline B_1\setminus B_{1/2}$. In particular, $|u_n|\geq 1/2$ in $K$ for $n$ large and setting $v_n:=u_n/|u_n|$, $$\delta_n:=\|v_n-\phi\|_{L^\infty(K)}+\|1-|u_n|^2\|_{L^\infty(K)}\mathop{\longrightarrow}\limits_{n\to+\infty} 0\,. $$ Denote $\mathcal{D}:=\{(s_0,s_1)\in\mathbb{S}^2\times \mathbb{S}^2\,,\,|s_0-s_1|<1/4\}$ and consider a continuously differentiable mapping $\Pi:\mathcal{D}\times [0,1]\to \mathbb{S}^2$ satisfying $$\Pi(s_0,s_1,0)=s_0\,,\quad \Pi(s_0,s_1,1)=s_1\,,\quad \bigg|\frac{\partial\Pi}{\partial t}(s_0,s_1,t)\bigg|\leq C|s_0-s_1|\,,$$ {\it e.g.}, the map giving geodesic convex combinations between points $s_0$ and $s_1$ on $\mathbb{S}^2$. Given $\varphi \in H^1_\phi(B_1;S^2)$, we define for $n$ large enough, $$\varphi_n(x)=\begin{cases} \ds\varphi\bigg(\frac{x}{1-2\delta_n}\bigg) & \text{for $x\in B_{1-2\delta_n}$}\,,\\[10pt] \ds \Pi\bigg(v_n(x),\phi(x), \frac{1-\delta_n-|x|}{\delta_n}\bigg) & \text{for $x\in B_{1-\delta_n}\setminus B_{1-2\delta_n}$}\,,\\[10pt] \ds \bigg(\frac{1-|x|}{\delta_n}+|u_n(x)|\frac{|x|-1+\delta_n}{\delta_n}\bigg)v_n(x) & \text{for $x\in B_1\setminus B_{1-\delta_n}$}\,. 
\end{cases} $$ One may easily check that $\varphi_n\in H^1(B_1;\R^3)$ and that \begin{equation}\label{energapprox} \int_{B_1}e_{R_n}(\varphi_n)dx=\frac{1-2\delta_n}{2}\int_{B_1}|\nabla\varphi|^2dx+\frac{1}{2}\int_{B_{1-\delta_n}\setminus B_{1-2\delta_n}}|\nabla \varphi_n|^2dx+\int_{B_1\setminus B_{1-\delta_n}}e_{R_n}(\varphi_n)dx\,. \end{equation} Straightforward computations yield $$\int_{B_{1-\delta_n}\setminus B_{1-2\delta_n}}|\nabla \varphi_n|^2dx \leq C \int_{B_{1-\delta_n}\setminus B_{1-2\delta_n}}\bigg(|\nabla \varphi|^2+|\nabla u_n|^2+\delta_n^{-2}| v_n-\phi|^2\bigg)dx\mathop{\longrightarrow}\limits_{n\to+\infty} 0\,,$$ and $$\int_{B_{1}\setminus B_{1-\delta_n}}e_{R_n}(\varphi_n)dx \leq C \int_{B_{1}\setminus B_{1-\delta_n}}\bigg(|\nabla u_n|^2+(\delta_n^{-2}+R_n^{2}) (1-|u_n|^2)^2\bigg)dx\mathop{\longrightarrow}\limits_{n\to+\infty} 0\,, $$ where we used the fact that $(1-|\varphi_n|^2)^2\leq (1-|u_n|^2)^2$ for $n$ large enough (again by convexity of the double well potential near its minima), together with \eqref{vanpot} in the last estimate. In view of \eqref{energapprox}, this completes the proof of \eqref{approx}. \vskip5pt \noindent{\it Step 2.} In view of the monotonicity with respect to $R$ of $R^{-1}E(u,B_R)$, if $u$ is nonconstant then \eqref{convmeas} yields \begin{equation}\label{limeng} 0<\lim_{R\to+\infty} R^{-1}E(u,B_{R})=\lim_{n\to+\infty} E_{R_n}(u_n,B_1)=\frac{1}{2} \int_{B_1}|\nabla\phi|^2dx\,, \end{equation} and thus $\phi$ is nonconstant. Then the conclusion follows from Theorem 7.3 and Theorem 7.4 in \cite{BCL} together with \eqref{minphi}. \prbox \vskip10pt \noindent{\bf Proof of Theorem \ref{quantization}.} Let $R_n\to+\infty$ be an arbitrary sequence of radii. By \req{lingro}, Proposition \ref{descripblowdown}, Proposition \ref{propcomp} and Corollary \ref{strtangmap}, we can find a subsequence (not relabelled) and $T\in O(3)$ such that the sequence of scaled maps $u_n(x)=u(R_nx)$ converges strongly in $H^1_{\rm loc}(\R^3;\R^3)$ to $\phi(x)=Tx/|x|$.
Therefore \eqref{limeng} gives $R^{-1}E(u,B_{R})\to 4\pi$ as $R\to+\infty$, and the proof is complete.\prbox \section{Asymptotic symmetry} In order to study the asymptotic behaviour of local minimizers we first derive some decay properties of solutions to \req{GL} at infinity. It will be clear that the crucial ingredients are \req{lingro}, the $H^1_{\rm{loc}}(\R^3;\R^3)$ compactness of the scaled maps and the small energy regularity lemma recalled in Section 3. Then we bootstrap the first order estimates to get higher order estimates and compactness of the rescaled maps and their derivatives of all orders. Finally we prove a decay property of the radial derivative which will give uniqueness of the asymptotic limit at infinity in the $L^2$-topology, whence uniqueness of the limit in any topology follows. \vskip5pt We start with the following result. \begin{proposition} \label{Firstderbounds} Let $u$ be a smooth solution to \req{GL} satisfying \req{lingro} and such that the scaled maps $\{u_R \}_{R>0}$ are relatively compact in $H^{1}_{\rm{loc}}(\mathbb{R}^3;\mathbb{R}^3)$. Then there is a constant $C>0$ such that for all $x\in\R^3$, \begin{equation} \label{1derbound} |x|^2(1-|u(x)|^2)+|x||\nabla u(x)| \leq C \, . \end{equation} \end{proposition} \begin{proof} We prove the statement by contradiction. If \req{1derbound} were false, there would be a sequence $\{ x_n \} \subset \mathbb{R}^3$ such that $R_n=|x_n| \to +\infty$ as $n \to +\infty$ and \begin{equation}\label{hypcontr} |x_n| |\nabla u (x_n)|+|x_n|^2 (1-|u(x_n)|^2) \mathop{\longrightarrow}\limits_{n \to +\infty} +\infty \, . \end{equation} For each integer $n$, let us consider $u_n(x):=u_{R_n}(x)=u(R_nx)$ as an entire solution of \eqref{GLresc}. Up to the extraction of a subsequence, we may assume that $x_n/R_n\to \bar{x} \in \partial B_1$ as $n \to +\infty$.
By Proposition \ref{descripblowdown}, up to a further subsequence the sequence of scaled maps $\{u_n\}$ converges to $u_\infty(x)= \omega \left(x/|x|\right)$ strongly in $H^1_{\rm{loc}}(\mathbb{R}^3;\mathbb{R}^3)$ as $n\to +\infty$, where $\omega:\mathbb{S}^2 \to \mathbb{S}^2$ is a harmonic map. In addition, $e_{R_n}(u_n)(x)dx \overset{*}{\rightharpoonup} \frac12 |\nabla u_\infty|^2dx+\nu$ where $\nu$ is a quantized cone-measure. Combining this property with the strong convergence in $H^1_{\rm{loc}}(\R^3;\R^3)$ and Lemma \ref{nopotential}, we conclude that $\nu\equiv0$. Since $\omega$ is a smooth map we have $u_\infty \in C^\infty(\mathbb{R}^3 \setminus \{ 0\};\mathbb{S}^2)$. In particular $u_\infty$ is smooth around $\bar{x} \in\partial B_1$. Now we can argue as in Step 1 in the proof of Proposition \ref{propcomp} to find $\delta >0$ such that $|\nabla u_n|+R_n^2(1-|u_n|^2)\leq C_\delta$ in $B_\delta(\bar x)$ for some constant $C_\delta$ independent of $n$. Scaling back we obtain for $n$ large enough, $$ |x_n| |\nabla u (x_n)|+|x_n|^2 (1-|u(x_n)|^2) \leq C_\delta\,,$$ which obviously contradicts \eqref{hypcontr}. \end{proof} \begin{remark} \label{modtoonerate} For an arbitrary entire solution $u$ to \eqref{GL}, the estimate \req{1derbound} still holds under the assumption $|u(x)|=1+\mathcal{O}(|x|^{-2})$ as $|x|\to +\infty$. Indeed, since the scaled map $u_R$ given by \eqref{defscmap} satisfies \eqref{GLresc}, $\{\Delta u_R\}_{R>0}$ is equibounded in $L^\infty_{\rm{loc}} (\mathbb{R}^3 \setminus \{ 0 \})$. Therefore standard $W^{2,p}_{\rm{loc}}$ estimates and the Sobolev embedding show that $\{\nabla u_R\}_{R>0}$ is equibounded in $L^\infty_{\rm{loc}} (\mathbb{R}^3 \setminus \{ 0 \})$ which proves \req{1derbound}. Note also that \req{1derbound} implies \req{lingro}.
\end{remark} For a solution $u$ to \eqref{GL} satisfying the assumptions of Proposition \ref{Firstderbounds}, we have $|u(x)|=1 +\mathcal{O}(|x|^{-2})$ and $|\nabla u (x)| =\mathcal{O}(|x|^{-1})$ as $|x| \to +\infty$. In order to get bounds on the higher order derivatives of $u$ at infinity it is very convenient to use the polar decomposition for $u$, {\it i.e.}, to write $u=\rho w$ for some nonnegative function $\rho$ and some $\mathbb{S}^2$-valued map $w$. The following result gives the $3D$ counterpart of the asymptotic estimates of \cite{S} for the $2D$ case, and it is essentially based on the techniques introduced in the proof of \cite[Theorem~1]{BBH2}. \begin{proposition} \label{BBH} Let $u$ be an entire solution of \req{GL} satisfying \req{1derbound}. Let $R_0 \geq 1$ be such that $|u(x)| \geq 1/2$ for $|x|\geq R_0/4$. For $R \geq R_0$ and $|x|\geq 1/4$, define $u_R(x)=u(Rx)= \rho_{R}(x) w_R(x)$ the polar decomposition of the scaled maps, i.e., $\rho_R(x):=|u_R(x)|$ and $w_R(x):=u_R(x)/|u_R(x)|$. Then for each $k \in\NN$ and each $\sigma \in (1,2)$ there exist constants $C^\prime=C^\prime(k,\sigma)>0$ and $C^{\prime\prime}=C^{\prime\prime}(k,\sigma)>0$ independent of $R$ such that \begin{equation} \label{polarderbounds} \begin{array}{ll} (\mathcal{P}_k^\prime) \qquad \qquad & \| \nabla w_R \|_{C^k\big(\overline B_{2\sigma} \setminus B_{1/{2\sigma}}\big)} \leq C^\prime(k,\sigma) \,, \\[8pt] (\mathcal{P}_k^{\prime\prime}) \qquad \qquad & \| R^2 (1- \rho_R) \|_{C^k\big(\overline B_{2\sigma} \setminus B_{1/{2\sigma}}\big)} \leq C^{\prime \prime}(k,\sigma) \, . \end{array} \end{equation} As a consequence, for each $k \in\NN$ there is a constant $C(k)>0$ such that \begin{equation} \label{highderbound} \sup_{x \in \mathbb{R}^3} \big( |x|^{k+1} |\nabla^{k+1} u(x)|+|x|^{k+2} |\nabla^{k} (1-|u(x)|^2)| \big) \leq C(k) \, . \end{equation} \end{proposition} \begin{proof} Observe that it suffices to prove \req{polarderbounds} since \req{highderbound} follows by scaling.
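For orientation, let us recall the (formal) derivation of the polar system: writing $u=\rho w$ with $|w|\equiv1$, so that $w\cdot\partial_j w=0$ and $w\cdot\Delta w=-|\nabla w|^2$, and inserting $u=\rho w$ into \req{GL}, the components along $w$ and orthogonal to $w$ give
$$\begin{cases} {\rm div} (\rho^2\nabla w )+w \rho^2|\nabla w|^2=0 \\[5pt] \Delta \rho+\rho(1-\rho^2)=\rho |\nabla w|^2 \end{cases} \quad \text{wherever $\rho>0$}\,,$$
which is the system \req{polarsystem} used below; its rescaled version appears in the proof.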
For $|x| \geq R_0/4$ we have $|u(x)| \geq 1/2$ so we can write $u(x)=\rho(x) w(x)$ with $\rho(x):=|u(x)|$ and $ w(x):=u(x) \rho(x)^{-1}$, and the system \req{polarsystem} is satisfied in $\R^3\setminus \overline B_{R_0/4}$. Hence, for each $R\geq R_0$ the scaled maps $u_R$, $\rho_R$ and $w_R$ are well defined and smooth in $\R^3\setminus \overline B_{1/4}$. In addition, \req{polarsystem} yields by scaling the following Euler--Lagrange equations, \begin{equation} \label{scaledpolarsystem} \begin{cases} {\rm div} (\rho_R^2\nabla w_R )+w_R \rho_R^2|\nabla w_R|^2=0 \\[5pt] \Delta \rho_R+\rho_R R^2(1-\rho_R^2)=\rho_R |\nabla w_R|^2 \end{cases} \quad \text{in $\R^3\setminus \overline B_{1/4}$}\,. \end{equation} We will prove \req{polarderbounds} by induction over $k$, the case $k=0$ being easily true by assumption \req{1derbound}. We closely follow \cite[pp.~136-137]{BBH2} with minor modifications. \vskip5pt First we prove that $(\mathcal{P}_k^\prime)$-$(\mathcal{P}_{k}^{\prime \prime})$ implies $(\mathcal{P}_{k+1}^\prime)$. We set for simplicity \begin{equation} \label{X_R} X_R:=R^2(1- \rho_R) \, , \end{equation} so that the second equation in \req{scaledpolarsystem} can be rewritten as \begin{equation} \label{eqrhoR} -\Delta \rho_R=-\rho_R |\nabla w_R|^2+ \rho_R (1+\rho_R) X_R \, . \end{equation} By the inductive assumptions \req{polarderbounds} the right hand side in \req{eqrhoR} is bounded in $C^k_{\rm{loc}}(B_4 \setminus \overline B_{1/4})$ uniformly with respect to $R \geq R_0$. Hence $\{\rho_R\}_{R\geq R_0}$ is bounded in $W^{k+2,p}_{\rm{loc}}(B_4 \setminus \overline B_{1/4})$ for each $p<+\infty$ by standard elliptic regularity theory. Then the Sobolev embedding implies that $\{\nabla \rho_R\}_{R\geq R_0}$ is also bounded in $C^k_{\rm{loc}}(B_4 \setminus \overline B_{1/4})$. Next rewrite the first equation in \req{scaledpolarsystem} as \begin{equation} \label{eqwR} -\Delta w_R=w_R |\nabla w_R|^2+\frac{2\nabla \rho_R}{\rho_R} \nabla w_R \, .
\end{equation} Since all the terms in the right hand side in \eqref{eqwR} are now bounded in $C^k_{\rm{loc}}(B_4 \setminus \overline B_{1/4})$ uniformly with respect to $R\geq R_0$, standard linear theory (differentiating the equation $k$-times) also gives that $\{w_R\}_{R\geq R_0}$ is equibounded in $W^{k+2,p}_{\rm{loc}}(B_4 \setminus \overline B_{1/4})$ for each $p<+\infty$. Therefore the right hand side in \eqref{eqwR} is in fact bounded in $W^{k+1,p}_{\rm{loc}}(B_4 \setminus \overline B_{1/4})$ uniformly with respect to $R\geq R_0$. Hence the linear $L^p$-theory yields the boundedness of $\{w_R\}_{R\geq R_0}$ in $W^{k+3,p}_{\rm{loc}}(B_4 \setminus \overline B_{1/4})$ for each $p<+\infty$. Then, by the Sobolev embedding, $\{\nabla w_R\}_{R\geq R_0}$ is bounded in $C^{k+1}_{\rm{loc}} (B_4 \setminus \overline B_{1/4})$, {\it i.e.}, $(\mathcal{P}_{k+1}^\prime)$ holds. \vskip5pt Now we prove that $(\mathcal{P}_k^\prime)$-$(\mathcal{P}_{k}^{\prime \prime})$ implies $(\mathcal{P}_{k+1}^{\prime\prime})$. We fix $\sigma \in (1,2)$ and we apply $(\mathcal{P}_k^\prime)$, $ (\mathcal{P}_{k}^{\prime \prime})$ and $ (\mathcal{P}_{k+1}^\prime)$ in $\overline B_{2\sigma^\prime}\setminus B_{1/2\sigma^\prime}$ for a fixed $\sigma<\sigma^\prime<2$, {\it e.g.}, $\sigma^\prime:=1+\sigma/2$. Since $K:=\overline B_{2\sigma}\setminus B_{1/2\sigma}$ is compact we can find finitely many points $\{ P_1, \ldots, P_m\} \subset K$ such that $K \subset \cup_{i=1}^m B_{\sigma^\prime-\sigma}(P_i)$ with $B_{2(\sigma^\prime -\sigma)}(P_i) \subset \overline B_{2\sigma^\prime} \setminus B_{1/2 \sigma^\prime}$ for each $i=1,\ldots, m$. Then it suffices to show that $(\mathcal{P}_{k+1}^{\prime \prime})$ holds in each ball $B_i:=B_{\sigma^\prime-\sigma}(P_i)$ assuming that $(\mathcal{P}_k^\prime)$, $ (\mathcal{P}_{k}^{\prime \prime})$ and $ (\mathcal{P}_{k+1}^\prime)$ hold in $B^\prime_i:= B_{2(\sigma^\prime-\sigma)}(P_i)$. For simplicity we shall drop the subscript $i$.
Taking \req{X_R} into account, we rewrite \req{eqrhoR} as \begin{equation} \label{eqX_R} R^{-2} \Delta X_R=-\rho_R |\nabla w_R|^2+\rho_R (1+\rho_R) X_R \, . \end{equation} Denoting by $D^k$ any $k$-th derivative, since $\{\rho_R\}_{R\geq R_0}$, $\{X_R\}_{R\geq R_0}$, $\{w_R\}_{R\geq R_0}$ and $\{\nabla w_R\}_{R\geq R_0}$ are bounded in $C^k(\overline{B^\prime})$ by inductive assumption, differentiating \req{eqX_R} $k$-times leads to $$\| D^k X_R\|_{L^\infty (B^\prime)}+ R^{-2} \| \Delta D^k X_R \|_{L^\infty(B^\prime)} \leq C \, , $$ for some $C>0$ independent of $R\geq R_0$. Now we combine the above estimate with \cite[Lemma A.1]{BBH2} in $B \subset B^{\prime\prime} \subset B^\prime$ where $B^{\prime \prime}:=B_{3 (\sigma^\prime-\sigma)/2} (P_i)$ to obtain \begin{equation} \label{derk+1X_R} R^{-1} \| D^{k+1} X_R \|_{L^{\infty}(B^{\prime\prime})} \leq C \, \end{equation} for a constant $C>0$ independent of $R\geq R_0$. Finally we rewrite \req{eqX_R} as \begin{equation} \label{neweqX_R} -R^{-2}\Delta X_R +2 X_R= 3 R^{-2}X_R^2-R^{-4} X_R^3 +\rho_R |\nabla w_R|^2=: \mathcal{T}_R \, . \end{equation} As we already proved that $D^{k+1} \rho_R$ is bounded in $B^{\prime \prime}$ independently of $R\geq R_0$ and that $(\mathcal{P}_k^{\prime\prime})$, $(\mathcal{P}_{k+1}^\prime)$ hold in $B^{\prime\prime}$, taking \req{derk+1X_R} into account we infer that $f_R:=D^{k+1} \mathcal{T}_R$ satisfies $ \| f_R\|_{L^{\infty}(B^{\prime\prime})} \leq C$ for a constant $C>0$ independent of $R\geq R_0$. Then differentiating $(k+1)$-times \req{neweqX_R} we derive that $g_R:=D^{k+1} X_R$ satisfies \begin{equation} \label{eqg_R} \begin{cases} -R^{-2} \Delta g_R +2 g_R =f_R & \text{in $B^{\prime\prime}$} \, ,\\ \| g_R\|_{L^\infty (B^{\prime\prime})} \leq CR \, , \\ \| f_R \|_{L^\infty (B^{\prime\prime})} \leq C \, , \end{cases} \end{equation} for some $C>0$ independent of $R\geq R_0$. 
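Before solving \eqref{eqg_R}, we record an elementary comparison observation that will absorb the inhomogeneous term: if $-R^{-2}\Delta\psi+2\psi=f$ in a ball $B$ with $\psi=0$ on $\partial B$, then the constants $\pm\frac12\|f\|_{L^\infty(B)}$ are, respectively, a supersolution and a subsolution, whence
$$\|\psi\|_{L^\infty(B)}\leq \tfrac12\,\|f\|_{L^\infty(B)}\,.$$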
Next we write $g_R=\varphi_R +\psi_R$ in $\overline{B^{\prime\prime}}$ where $\varphi_R$ and $\psi_R$ are the unique smooth solutions of \begin{equation} \label{eqphi_R} \begin{cases} -R^{-2} \Delta \varphi_R +2 \varphi_R =0 & \text{in $B^{\prime\prime}$} \, ,\\ \varphi_R= g_R & \text{on $\partial B^{\prime\prime}$} \, , \end{cases} \end{equation} and \begin{equation} \label{eqpsi_R} \begin{cases} -R^{-2} \Delta \psi_R +2 \psi_R =f_R & \text{in $B^{\prime\prime}$} \, ,\\ \psi_R= 0 & \text{on $\partial B^{\prime\prime}$} \, . \end{cases} \end{equation} Applying \cite[Lemma 2]{BBH2} in $B \subset B^{\prime\prime}$ to \req{eqphi_R}, the comparison principle in $B^{\prime\prime}$ to \req{eqpsi_R}, and the estimates in \req{eqg_R}, we finally conclude $$\| D^{k+1} X_R \|_{L^\infty (B)}= \| g_R \|_{L^\infty(B)} \leq \| \varphi_R \|_{L^\infty(B)}+\| \psi_R \|_{L^\infty(B^{\prime\prime})}\leq C \,,$$ for some $C>0$ independent of $R\geq R_0$, {\it i.e.}, $(\mathcal{P}_{k+1}^{\prime\prime})$ holds in $B$. \end{proof} \begin{remark} \label{C2compactness} As a consequence of Proposition \ref{BBH}, Remark \ref{modtoonerate} and Proposition \ref{descripblowdown}, if $u$ is an entire solution to \eqref{GL} satisfying \req{1derbound}, then $\{{u_R}_{|\mathbb{S}^2} \}_{R>0} $ is a compact subset of $C^2(\mathbb{S}^2;\mathbb{R}^3)$ and the limit as $R_n\to+\infty$ of any convergent sequence $\{{u_{R_n}}_{|\mathbb{S}^2} \}$ is a harmonic map $\omega\in C^2(\mathbb{S}^2;\mathbb{S}^2)$ (more precisely $\omega:=\phi_{|\mathbb{S}^2}$ where $\phi$ is given by Proposition \ref{descripblowdown}). In addition, for $n$ large the topological degree of ${u_{R_n}}_{|\mathbb{S}^2}$ is well defined and ${\rm deg}\,\omega={\rm deg}\,{u_{R_n}}_{|\mathbb{S}^2}={\rm deg}_{\infty}u\,$. \end{remark} In order to prove uniqueness of the asymptotic limit of a solution $u$ at infinity, we need to establish a decay estimate on the radial derivative of $u$.
As will become clear below, such an estimate yields the existence of a limit for the scaled maps $u_R$ in $L^2(\mathbb{S}^2;\mathbb{R}^3)$ as $R \to +\infty$. Since the a priori estimates in Proposition \ref{BBH} yield compactness even in stronger topologies, this in turn implies convergence to an $\mathbb{S}^2$-valued harmonic map in $C^k(\mathbb{S}^2;\mathbb{R}^3)$ for every $k\in\NN$. \begin{proposition} \label{radderdecay} Let $u$ be an entire solution of \req{GL} satisfying \req{simonbound}. Then there exist $R_0\geq e$ and $C>0$ such that for any $R \geq R_0$, \begin{equation} \label{radderinequality} \int_{\{|x|>R\}} \frac{1}{|x|} \left| \frac{\partial u}{\partial r}\right|^2 dx \leq C\, \frac{\log R}{R^2} \, . \end{equation} \end{proposition} \begin{proof} By \req{simonbound} we can find $R_0 \geq e$ such that $|u(x)|\geq 1/2$ whenever $|x|\geq R_0$. Then we perform the polar decomposition of $u$, {\it i.e.}, for $|x|\geq R_0$ we write $u(x)=\rho(x) w(x)$ where $\rho(x)=|u(x)| \geq 1/2$ and $w(x) \in \mathbb{S}^2$. Due to \req{simonbound} and \req{highderbound}, it is enough to prove \req{radderinequality} for $w$ since $\rho(x) \leq 1$ and $|\nabla \rho(x)|= \mathcal{O}(|x|^{-3})$ as $|x|\to +\infty$. Taking \req{polarderbounds} into account, we have $\nabla w(x)= \mathcal{O}(|x|^{-1})$ and $\Delta w(x)=\mathcal{O}(|x|^{-2})$ as $|x|\to +\infty$, so that equation \req{polarsystem} can be rewritten as \begin{equation} \label{polareqw} \Delta w(x) +w(x)|\nabla w(x)|^2=G(x) \, , \end{equation} where $$G(x)=(1-\rho^2(x)) \big( \Delta w(x)+w(x) |\nabla w(x)|^2 \big)+ \nabla w(x) \cdot \nabla (1-\rho^2(x)) =\mathcal{O}(|x|^{-4}) $$ as $|x|\to+ \infty$ thanks to \req{highderbound}.
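The decay rate of $G$ follows from a term-by-term bookkeeping of the bounds above: since \req{simonbound} gives $1-\rho^2(x)=\mathcal{O}(|x|^{-2})$ and \req{highderbound} gives $\nabla (1-\rho^2(x))=-2\rho(x)\nabla\rho(x)=\mathcal{O}(|x|^{-3})$, we have $$(1-\rho^2) \big( \Delta w+w |\nabla w|^2 \big)=\mathcal{O}(|x|^{-2})\cdot \mathcal{O}(|x|^{-2})=\mathcal{O}(|x|^{-4}) \quad\text{and}\quad \nabla w \cdot \nabla (1-\rho^2)=\mathcal{O}(|x|^{-1})\cdot \mathcal{O}(|x|^{-3})=\mathcal{O}(|x|^{-4})\,.$$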
Next we multiply \req{polareqw} by $\ds \frac{\partial w}{\partial r}=\frac{x}{|x|} \cdot \nabla w$, and since $w$ and $\ds \frac{\partial w}{\partial r}$ are orthogonal, we obtain \begin{equation} \label{radderdecay0} 0=\left(\Delta w-G(x) \right) \cdot \frac{\partial w}{\partial r}=\frac{1}{|x|} \left| \frac{\partial w}{\partial r}\right|^2 +{\rm div}\, \Psi(x)-H(x) \, , \end{equation} where $$\Psi(x)=\nabla w(x) \cdot \frac{\partial w}{\partial r}- \frac12|\nabla w(x)|^2 \frac{x}{|x|} \quad \hbox{and} \quad H(x)=G(x) \cdot \frac{\partial w}{\partial r} = \mathcal{O}(|x|^{-5}) $$ as $|x|\to +\infty$ by \req{simonbound}, \req{polarderbounds} and \req{highderbound}. Integrating \req{radderdecay0} by parts in the annulus $A_{R^\prime,R}:=B_{R^\prime} \setminus \overline{B_R}$, with $R_0 \leq R< R^\prime$, gives \begin{multline} \label{radderdecay1} \int_{A_{R^\prime,R}} \frac{1}{|x|} \left| \frac{\partial w}{\partial r}\right|^2 dx- \frac12 \int_{\partial B_R}\left| \frac{\partial w}{\partial r}\right|^2 d\mathcal{H}^2= \frac{1}{2}\int_{\mathbb{S}^2}|\nabla_T\, w_{R^\prime}|^2d\mathcal{H}^2 -\frac{1}{2}\int_{\mathbb{S}^2}|\nabla_T \,w_{R}|^2d\mathcal{H}^2+\\ - \frac12 \int_{\partial B_{R^\prime}}\left| \frac{\partial w}{\partial r}\right|^2 d\mathcal{H}^2 +\int_{A_{R^\prime,R}} H\,dx \, , \end{multline} where $w_R$ and $w_{R^\prime}$ are defined as in Proposition \ref{BBH} and $\nabla_T$ denotes the tangential gradient. Since \req{simonbound} obviously implies \eqref{lingro}, the Monotonicity Formula \req{monotonicity} yields $$\int_{\{|x|>R_0\}} \frac{1}{|x|} \left| \frac{\partial u}{\partial r}\right|^2 dx<+\infty\,.$$ Hence we can find a sequence $R^\prime_n \to +\infty$ such that \begin{equation} \label{radderdecay2a} \int_{\partial B_{R^\prime_n}} \left| \frac{\partial u}{\partial r}\right|^2 d\mathcal{H}^2 \mathop{\longrightarrow}\limits_{n\to+\infty} 0 \,.
\end{equation} In view of Remark \ref{C2compactness} we can pass to a subsequence, still denoted by $\{R^\prime_n\}$, such that \begin{equation} \label{radderdecay2} \| {u_{R_n^\prime}}_{|\mathbb{S}^2}-\omega \|_{C^2(\mathbb{S}^2;\mathbb{R}^3)} \mathop{\longrightarrow}\limits_{n\to+\infty} 0 \, , \end{equation} for some smooth harmonic map $\omega:\mathbb{S}^2 \to \mathbb{S}^2$ satisfying ${\rm deg} \, \omega= {\rm deg}_\infty u$. Taking \req{simonbound} again into account, one may easily check that \begin{equation} \label{radderdecay3} \int_{\{|x|>R_0\}} \frac{1}{|x|}\left| \frac{\partial w}{\partial r}\right|^2dx<+\infty\,,\;\; \int_{\partial B_{R^\prime_n}} \left| \frac{\partial w}{\partial r}\right|^2 d\mathcal{H}^2 \mathop{\longrightarrow}\limits_{n\to +\infty} 0 \, , \;\; \int_{\mathbb{S}^2} |\nabla_T\,w_{R^\prime_n}|^2d\mathcal{H}^2 \mathop{\longrightarrow}\limits_{n\to +\infty} \int_{\mathbb{S}^2} |\nabla_T\,\omega|^2d\mathcal{H}^2 \, . \end{equation} Choosing $R^\prime=R^\prime_n$ in \req{radderdecay1}, and taking into account \req{radderdecay3} together with the integrability of $H$ at infinity, we can pass to the limit $R^\prime_n\to+\infty$ and obtain \begin{equation} \label{radderdecay4} \int_{\{|x|>R\}} \frac{1}{|x|} \left| \frac{\partial w}{\partial r}\right|^2 dx- \frac12 \int_{\partial B_R}\left| \frac{\partial w}{\partial r}\right|^2 d\mathcal{H}^2 = \frac{1}{2}\int_{\mathbb{S}^2} |\nabla_T\,\omega|^2d\mathcal{H}^2 - \frac{1}{2}\int_{\mathbb{S}^2} |\nabla_T\,w_R|^2d\mathcal{H}^2+\int_{\{|x|>R\}} H\,dx \, , \end{equation} for each $R \geq R_0$. Then observe that ${\rm deg} \, {w_R}_{|\mathbb{S}^2}={\rm deg} \, \omega$ for each $R\geq R_0$ by Remark \ref{C2compactness}. On the other hand, $\omega:\mathbb{S}^2 \to \mathbb{S}^2$ is a harmonic map, so that $\omega$ is energy minimizing in its own homotopy class. Therefore, \begin{equation}\label{enminhom} \int_{\mathbb{S}^2} |\nabla_T\,\omega|^2d\mathcal{H}^2\leq\int_{\mathbb{S}^2} |\nabla_T\,w_R|^2d\mathcal{H}^2\,.
\end{equation} Multiplying \req{radderdecay4} by $2R$ and using \eqref{enminhom}, we derive $$\frac{d}{dR} \left( R^2 \int_{\{|x|>R\}} \frac{1}{|x|} \left| \frac{\partial w}{\partial r}\right|^2dx \right)\leq 2R \int_{\{|x|>R\}} H\, dx \, , $$ for every $R>R_0$. Integrating the above inequality between $R_0$ and $R>R_0$, using $H(x)=\mathcal{O}(|x|^{-5})$ and \eqref{radderdecay3}, we finally obtain $$R^2 \int_{\{|x|>R\}} \frac{1}{|x|} \left| \frac{\partial w}{\partial r}\right|^2dx \leq R_0^2 \int_{\{|x|>R_0\}} \frac{1}{|x|} \left| \frac{\partial w}{\partial r}\right|^2dx +C\int_{R_0}^R \frac{1}{r}dr \leq C( \log R +1)\, , $$ and the proof is complete. \end{proof} Now we are in a position to prove the asymptotic symmetry of entire solutions of \req{GL}. \vskip5pt \noindent {\rm \bf Proof of Theorem \ref{asymmetry}.} Since $u$ satisfies \eqref{lingro} and $\{ u_R \}_{R>0}$ is relatively compact in $H^1_{\rm{loc}}(\R^3;\R^3)$, we can apply Proposition \ref{Firstderbounds} and Proposition \ref{BBH} to obtain \req{simonbound}. Next we fix $R_0 $ as in Proposition \ref{radderdecay} and we estimate for $R_0\leq \tau_1 \leq \tau_2 \leq 2\tau_1$, $$|u_{\tau_1}(\sigma)-u_{\tau_2}(\sigma)|^2\leq (\tau_2-\tau_1)\int_{\tau_1}^{\tau_2}\bigg|\frac{\partial u}{\partial r}(r\sigma)\bigg|^2dr\leq \int_{\tau_1}^{\tau_2}\bigg|\frac{\partial u} {\partial r}(r\sigma)\bigg|^2rdr\quad\text{for every $\sigma\in\mathbb{S}^2$}\,. $$ Integrating the previous inequality with respect to $\sigma$, we infer from \eqref{radderinequality} that \begin{equation}\label{dyadic} \int_{\mathbb{S}^2}|u_{\tau_1}-u_{\tau_2}|^2d\mathcal{H}^2\leq \int_{\{\tau_1\leq |x|\leq \tau_2\}} \frac{1}{|x|}\bigg|\frac{\partial u}{\partial r}\bigg|^2dx \leq C\,\frac{\log\tau_1}{\tau_1^2}\quad\text{for every $R_0\leq \tau_1\leq\tau_2\leq 2\tau_1$}\,, \end{equation} where the constant $C$ only depends on $R_0$. Next we consider $R_0\leq R <R^\prime$ arbitrary.
Define $k\in\NN$ to be the largest integer satisfying $2^kR\leq R^\prime$, and set $\tau_j:=2^jR$ for $j=0,\ldots,k$ and $\tau_{k+1}:=R^\prime$. Using \eqref{dyadic} together with the triangle inequality, we estimate $$\| u_R-u_{R^\prime}\|_{L^2(\mathbb{S}^2)}\leq \sum_{j=0}^k\| u_{\tau_j}-u_{\tau_{j+1}}\|_{L^2(\mathbb{S}^2)}\leq C\sum_{j=0}^k\frac{\sqrt{\log\tau_j}}{\tau_j}\leq \frac{C}{R}\sum_{j=0}^{\infty} \frac{\sqrt{j\log 2 +\log R}}{2^j}\leq C\, \frac{\sqrt{\log R}}{R}\,,$$ for a constant $C$ which only depends on $R_0$. Obviously this estimate yields the uniqueness of the limit $\ds\omega:=\lim_{R \to +\infty} {u_R}_{|\mathbb{S}^2}$ in the $L^2$-topology. In view of Remark \ref{C2compactness} the convergence also holds in the $C^2$-topology and $\omega:\mathbb{S}^2\to\mathbb{S}^2$ is a smooth harmonic map satisfying ${\rm deg}\,\omega={\rm deg}_\infty u$. So claim {\it (i)} in the theorem is proved. Then from claim {\it (i)}, \req{simonbound} and Proposition \ref{descripblowdown} we deduce that $u_R\to u_\infty$ strongly in $H^1_{\rm loc}(\R^3;\R^3)$ as $R\to+\infty$ with $u_\infty(x)=\omega(x/|x|)$, which proves claim {\it (ii)}. Moreover claim {\it (ii)} in Proposition \ref{descripblowdown} yields $$\int_{\mathbb{S}^2} x\, |\nabla_T \,\omega |^2 d\mathcal{H}^2=0\,.$$ As a consequence, if ${\rm deg}_\infty u=\pm 1={\rm deg} \, \omega $, the balancing condition above gives $\omega(x) = Tx$ for some $T \in O(3)$ by \cite[Proof of Theorem 7.3]{BCL}. \prbox \section{Proof of Theorem \ref{SYMMETRY}} \noindent {\bf Proof of {\it (i)} $\Rightarrow$ {\it (ii)}.} This is just Theorem \ref{quantization}. \prbox \vskip5pt \noindent{\bf Proof of {\it (ii)} $\Rightarrow$ {\it (iii)}.} First we claim that the scaled maps $\{u_R\}_{R>0}$ given by \eqref{defscmap} are compact in $H^1_{\rm{loc}}(\R^3;\R^3)$.
Indeed, by {\it (ii)} we can apply Proposition \ref{descripblowdown} to infer that for any weakly convergent sequence $\{u_{R_n}\}$ with $R_n\to+\infty$ we have $$\int_{B_1}\frac12 |\nabla \phi|^2 dx+\nu(B_1)=4\pi\,,$$ where $\phi$ is the weak limit of $\{u_{R_n}\}$ and $\nu$ is the defect measure as in Proposition \ref{descripblowdown}. If $\nu \neq 0$, the above equality together with the structure of $\nu$ yields $\phi\equiv {\rm const}$ and $l=k_1=1$, which contradicts the balancing condition in Proposition \ref{descripblowdown}, claim {\it (ii)}. Hence $\nu \equiv 0$ and $\{u_{R_n}\}$ is strongly convergent in $H^1_{\rm{loc}}(\R^3;\R^3)$. Now we can apply Theorem \ref{asymmetry} to get \req{simonbound}, which obviously implies $|u(x)|=1+\mathcal{O}(|x|^{-2})$ as $|x|\to +\infty$. Moreover $u_R\to u_\infty$ strongly in $H^1_{\rm loc}(\R^3;\R^3)$ as $R\to+\infty$ where $u_\infty(x)=\omega(x/|x|)$ for some smooth harmonic map $\omega:\mathbb{S}^2\to \mathbb{S}^2$ satisfying ${\rm deg}\,\omega={\rm deg}_\infty u$. Therefore, $$4\pi |{\rm deg} \, \omega|=\int_{B_1} \frac12|\nabla u_\infty|^2dx = \lim_{R\to+ \infty} E_R(u_R,B_1)=\lim_{R\to+ \infty} \frac{1}{R} E(u,B_R)=4\pi \, , $$ so that ${\rm deg} \, \omega={\rm deg}_\infty u=\pm 1$. \prbox \vskip5pt \noindent{\bf Proof of {\it (iii)} $\Rightarrow$ {\it (iv)}.} From Remark \ref{modtoonerate} we deduce that $u$ satisfies \eqref{lingro} and that the scaled maps $\{ u_R \}_{R>0}$ are compact in $H^1_{\rm{loc}}(\R^3;\R^3)$. As a consequence we can apply Theorem \ref{asymmetry} to obtain estimate \req{simonbound}. In addition, up to an orthogonal transformation we may assume ${\rm deg}_\infty u=1$ and $\|u_R-{\rm Id}\|_{C^2(\mathbb{S}^2;\mathbb{R}^3)}\to 0$ as $R \to +\infty$. By degree theory we have $u^{-1}(\{0\}) \neq \emptyset$, and up to a translation we may also assume that $u(0)=0$. Now we are in a position to apply the division trick of \cite{M2} (see also \cite{R} for another application).
Let $f \in C^2([0,\infty))$ be the function given by Lemma \ref{radode} and define $$v(x):=\frac{u(x)}{f(|x|)}\,.$$ Clearly $v \in C^2(\mathbb{R}^3 \setminus \{ 0\};\mathbb{R}^3)$, and it is straightforward to check that as $|x|\to 0$, \begin{equation} \label{asymptorigin} v(x)=B\,\frac{x}{|x|}+o(1) \quad\text{and} \quad \nabla v (x)=\nabla \bigg(B\, \frac{x}{|x|}\bigg)+ o(|x|^{-1}) \, , \quad\text{where $B:=\frac{\nabla u(0)}{f^\prime(0)}$} \, . \end{equation} On the other hand, using Lemma \ref{radode} and the behaviour of $u$ at infinity, one may check that as $|x|\to +\infty$, \begin{equation} \label{asymptinfinity} v(x)=\frac{x}{|x|}+o(1) \quad\text{and}\quad \nabla v(x)=\nabla \bigg(\frac{x}{|x|}\bigg)+o(|x|^{-1}) \, . \end{equation} Since $u$ solves \req{GL} and $f$ solves \req{cauchypb}, simple computations lead to $$\Delta v + f^2v(1-|v|^2)=-2\frac{f^\prime}{f}\,\frac{x}{|x|} \cdot \nabla v -\frac{2}{|x|^2}\,v \, .$$ Multiplying this equation by $\ds \frac{\partial v}{\partial r}=\frac{x}{|x|}\cdot \nabla v$ yields \begin{equation} \label{phozaevforv} 0\leq \left| \frac{\partial v}{\partial r}\right|^2\left( \frac{1}{|x|}+2 \frac{f^\prime}{f}\right)+\left( \frac{(1-|v|^2)^2}{4}\right) \left( 2 f f^\prime +\frac{2}{|x|}\right)={\rm div} \, \Phi(x) \, , \end{equation} where $$\Phi(x):= \left( \frac12 |\nabla v|^2 \frac{x}{|x|}\right)- \left( \nabla v \cdot \frac{\partial v}{\partial r}\right) +\left( \frac{x}{|x|}f^2 \frac{(1-|v|^2)^2}{4} \right)+ \left( \frac{x}{|x|^3} (1-|v|^2) \right) \, . $$ Now we claim that \begin{equation}\label{lastclaim} \int_{B_R \setminus B_\delta} {\rm div} \, \Phi\, dx=\int_{\{|x|=R\}} \Phi(x)\cdot \frac{x}{|x|}d\mathcal{H}^2- \int_{\{|x|=\delta\}} \Phi(x)\cdot \frac{x}{|x|}d\mathcal{H}^2\to 0 \end{equation} as $R\to +\infty$ and $\delta \to 0$. Assume for the moment that the claim is proved. Then from \req{phozaevforv} we infer that $|v|\equiv 1$ and $\ds\frac{\partial v}{\partial r} \equiv 0$.
As a consequence, in view of \req{asymptinfinity} we derive that $|u(x)|\equiv f(|x|)$ and $v(x)\equiv x/|x|$, which concludes the proof. \vskip5pt In order to prove \eqref{lastclaim}, we first observe that as $|x|\to +\infty$, $$ |\nabla v|^2=\frac{2}{|x|^2}+o(|x|^{-2}) \, , \qquad \frac{\partial v}{\partial r}=o(|x|^{-1}) \, , \qquad 1-|v|^2=\mathcal{O}(|x|^{-2}) \, , $$ thanks to \req{asymptinfinity} and {\it (iii)}. Therefore, \begin{equation} \label{divPhiinfinity} \int_{\{|x|=R\}} \Phi(x)\cdot \frac{x}{|x|} d\mathcal{H}^2=\int_{\{|x|=R\}} \left( \frac{1}{|x|^2} +o(|x|^{-2}) \right)d\mathcal{H}^2=4\pi +o(1) \, \quad \hbox{as} \quad R\to +\infty \, . \end{equation} Next, using \req{asymptorigin}, we estimate as $|x|\to 0$, $$|\nabla v|^2=\bigg|\nabla\bigg( B\frac{x}{|x|}\bigg)\bigg|^2+o(|x|^{-2}) \,, \qquad \frac{\partial v}{\partial r}=o(|x|^{-1}) \, , \qquad 1-|v|^2=\frac{|x|^2-|Bx|^2}{|x|^2}+o(1) \, . $$ Consequently, \begin{multline} \label{divPhiorigin} \int_{\{|x|=\delta\}} \Phi(x) \cdot \frac{x}{|x|} d\mathcal{H}^2= \int_{\{|x|=\delta\}} \left( \frac{1}{2} \bigg|\nabla \bigg(B\frac{x}{|x|}\bigg)\bigg|^2+ \frac{|x|^2-|Bx|^2}{|x|^4}+o(|x|^{-2})\right) d\mathcal{H}^2= \\ =\int_{\{|x|=1\}} \left( \frac{1}{2} \bigg|\nabla \bigg(B\frac{x}{|x|}\bigg)\bigg|^2- \frac{|Bx|^2}{|x|^4} \right) d\mathcal{H}^2 +4\pi+o(1) \quad \hbox{as} \quad \delta \to 0 \, . \end{multline} Since a direct computation gives $$\int_{\{|x|=1\}} \left( \frac{1}{2} \bigg|\nabla \bigg(A\frac{x}{|x|}\bigg)\bigg|^2- \frac{|Ax|^2}{|x|^4} \right) d\mathcal{H}^2=0$$ for any constant matrix $A\in\mathbb{R}^{3\times 3}$ (indeed, on $\{|x|=1\}$ one has $|\nabla (Ax/|x|)|^2=|A|^2-|Ax|^2$ while $\int_{\{|x|=1\}} |Ax|^2 \,d\mathcal{H}^2=\frac{4\pi}{3}|A|^2$, so that the integral equals $2\pi|A|^2-\frac{2\pi}{3}|A|^2-\frac{4\pi}{3}|A|^2=0$), claim \eqref{lastclaim} follows by combining \req{divPhiinfinity} and \req{divPhiorigin}. \prbox \vskip5pt \noindent{\bf Proof of {\it (iv)} $\Rightarrow$ {\it (i)}.} Let $u$ be a nonconstant local minimizer as given by Theorem \ref{existence}.
Since $R^{-1}E(u,B_R) \to 4\pi$ as $R \to+\infty$ and $u(0)=0$, and since we already proved {\it (ii)} $\Rightarrow$ {\it (iii)} $\Rightarrow$ {\it (iv)}, we conclude that up to a rotation $u(x)=U(x)$ as given by \req{GLsolutions}. Hence $U$ is a nonconstant local minimizer of the energy, and so are its compositions with translations and orthogonal transformations. \prbox \section*{Acknowledgments} The authors would like to thank Fabrice Bethuel, Alberto Farina and Giovanni Leoni for useful discussions. This work was initiated while A.P. was visiting Carnegie Mellon University. He would like to thank Irene Fonseca for the kind invitation and the warm hospitality. V.M. was partially supported by the Center for Nonlinear Analysis (CNA) under the National Science Foundation Grant No. 0405343.
The following is a total numbering with this property.\<close> definition "r_prenum \<equiv> Cn 2 r_ifless [Id 2 1, Cn 2 r_length [Id 2 0], Cn 2 r_nth [Id 2 0, Id 2 1], r_constn 1 0]" lemma r_prenum_prim [simp]: "prim_recfn 2 r_prenum" unfolding r_prenum_def by simp_all lemma r_prenum [simp]: "eval r_prenum [e, x] \<down>= (if x < e_length e then e_nth e x else 0)" by (simp add: r_prenum_def) definition prenum :: partial2 where "prenum e x \<equiv> Some (if x < e_length e then e_nth e x else 0)" lemma prenum_in_R2: "prenum \<in> \<R>\<^sup>2" using prenum_def Prim2I[OF r_prenum_prim, of prenum] by simp lemma prenum [simp]: "prenum e x \<down>= (if x < e_length e then e_nth e x else 0)" unfolding prenum_def .. lemma prenum_encode: "prenum (list_encode vs) x \<down>= (if x < length vs then vs ! x else 0)" using prenum_def by (cases "x < length vs") simp_all text \<open>Prepending a list of numbers to a function:\<close> definition prepend :: "nat list \<Rightarrow> partial1 \<Rightarrow> partial1" (infixr "\<odot>" 64) where "vs \<odot> f \<equiv> \<lambda>x. if x < length vs then Some (vs ! x) else f (x - length vs)" lemma prepend [simp]: "(vs \<odot> f) x = (if x < length vs then Some (vs ! x) else f (x - length vs))" unfolding prepend_def .. lemma prepend_total: "total1 f \<Longrightarrow> total1 (vs \<odot> f)" unfolding total1_def by simp lemma prepend_at_less: assumes "n < length vs" shows "(vs \<odot> f) n \<down>= vs ! n" using assms by simp lemma prepend_at_ge: assumes "n \<ge> length vs" shows "(vs \<odot> f) n = f (n - length vs)" using assms by simp lemma prefix_prepend_less: assumes "n < length vs" shows "prefix (vs \<odot> f) n = take (Suc n) vs" using assms length_prefix by (intro nth_equalityI) simp_all lemma prepend_eqI: assumes "\<And>x. x < length vs \<Longrightarrow> g x \<down>= vs ! x" and "\<And>x. 
g (length vs + x) = f x" shows "g = vs \<odot> f" proof fix x show "g x = (vs \<odot> f) x" proof (cases "x < length vs") case True then show ?thesis using assms by simp next case False then show ?thesis using assms prepend by (metis add_diff_inverse_nat) qed qed fun r_prepend :: "nat list \<Rightarrow> recf \<Rightarrow> recf" where "r_prepend [] r = r" | "r_prepend (v # vs) r = Cn 1 (r_lifz (r_const v) (Cn 1 (r_prepend vs r) [r_dec])) [Id 1 0, Id 1 0]" lemma r_prepend_recfn: assumes "recfn 1 r" shows "recfn 1 (r_prepend vs r)" using assms by (induction vs) simp_all lemma r_prepend: assumes "recfn 1 r" shows "eval (r_prepend vs r) [x] = (if x < length vs then Some (vs ! x) else eval r [x - length vs])" proof (induction vs arbitrary: x) case Nil then show ?case using assms by simp next case (Cons v vs) show ?case using assms Cons by (cases "x = 0") (auto simp add: r_prepend_recfn) qed lemma r_prepend_total: assumes "recfn 1 r" and "total r" shows "eval (r_prepend vs r) [x] \<down>= (if x < length vs then vs ! x else the (eval r [x - length vs]))" proof (induction vs arbitrary: x) case Nil then show ?case using assms by simp next case (Cons v vs) show ?case using assms Cons by (cases "x = 0") (auto simp add: r_prepend_recfn) qed lemma prepend_in_P1: assumes "f \<in> \<P>" shows "vs \<odot> f \<in> \<P>" proof - obtain r where r: "recfn 1 r" "\<And>x. eval r [x] = f x" using assms by auto moreover have "recfn 1 (r_prepend vs r)" using r r_prepend_recfn by simp moreover have "eval (r_prepend vs r) [x] = (vs \<odot> f) x" for x using r r_prepend by simp ultimately show ?thesis by blast qed lemma prepend_in_R1: assumes "f \<in> \<R>" shows "vs \<odot> f \<in> \<R>" proof - obtain r where r: "recfn 1 r" "total r" "\<And>x. 
eval r [x] = f x" using assms by auto then have "total1 f" using R1_imp_total1[OF assms] by simp have "total (r_prepend vs r)" using r r_prepend_total r_prepend_recfn totalI1[of "r_prepend vs r"] by simp with r have "total (r_prepend vs r)" by simp moreover have "recfn 1 (r_prepend vs r)" using r r_prepend_recfn by simp moreover have "eval (r_prepend vs r) [x] = (vs \<odot> f) x" for x using r r_prepend \<open>total1 f\<close> total1E by simp ultimately show ?thesis by auto qed lemma prepend_associative: "(us @ vs) \<odot> f = us \<odot> vs \<odot> f" (is "?lhs = ?rhs") proof fix x consider "x < length us" | "x \<ge> length us \<and> x < length (us @ vs)" | "x \<ge> length (us @ vs)" by linarith then show "?lhs x = ?rhs x" proof (cases) case 1 then show ?thesis by (metis le_add1 length_append less_le_trans nth_append prepend_at_less) next case 2 then show ?thesis by (smt add_diff_inverse_nat add_less_cancel_left length_append nth_append prepend) next case 3 then show ?thesis using prepend_at_ge by auto qed qed abbreviation constant_divergent :: partial1 ("\<up>\<^sup>\<infinity>") where "\<up>\<^sup>\<infinity> \<equiv> \<lambda>_. None" abbreviation constant_zero :: partial1 ("0\<^sup>\<infinity>") where "0\<^sup>\<infinity> \<equiv> \<lambda>_. Some 0" lemma almost0_in_R1: "vs \<odot> 0\<^sup>\<infinity> \<in> \<R>" using RPred1_subseteq_R1 const0_in_RPred1 prepend_in_R1 by auto text \<open>The class $U_0$ of all total recursive functions that are almost everywhere zero will be used several times to construct (counter-)examples.\<close> definition U0 :: "partial1 set" ("U\<^sub>0") where "U\<^sub>0 \<equiv> {vs \<odot> 0\<^sup>\<infinity> |vs. vs \<in> UNIV}" text \<open>The class @{term U0} contains exactly the functions in the numbering @{term prenum}.\<close> lemma U0_altdef: "U\<^sub>0 = {prenum e| e. 
e \<in> UNIV}" (is "U\<^sub>0 = ?W") proof show "U\<^sub>0 \<subseteq> ?W" proof fix f assume "f \<in> U\<^sub>0" with U0_def obtain vs where "f = vs \<odot> 0\<^sup>\<infinity>" by auto then have "f = prenum (list_encode vs)" using prenum_encode by auto then show "f \<in> ?W" by auto qed show "?W \<subseteq> U\<^sub>0" unfolding U0_def by fastforce qed lemma U0_in_NUM: "U\<^sub>0 \<in> NUM" using prenum_in_R2 U0_altdef by (intro NUM_I[of prenum]; force) text \<open>Every almost-zero function can be represented by $v0^\infty$ for a list $v$ not ending in zero.\<close> lemma almost0_canonical: assumes "f = vs \<odot> 0\<^sup>\<infinity>" and "f \<noteq> 0\<^sup>\<infinity>" obtains ws where "length ws > 0" and "last ws \<noteq> 0" and "f = ws \<odot> 0\<^sup>\<infinity>" proof - let ?P = "\<lambda>k. k < length vs \<and> vs ! k \<noteq> 0" from assms have "vs \<noteq> []" by auto then have ex: "\<exists>k<length vs. vs ! k \<noteq> 0" using assms by auto define m where "m = Greatest ?P" moreover have le: "\<forall>y. 
?P y \<longrightarrow> y \<le> length vs" by simp ultimately have "?P m" using ex GreatestI_ex_nat[of ?P "length vs"] by simp have not_gr: "\<not> ?P k" if "k > m" for k using Greatest_le_nat[of ?P _ "length vs"] m_def ex le not_less that by blast let ?ws = "take (Suc m) vs" have "vs \<odot> 0\<^sup>\<infinity> = ?ws \<odot> 0\<^sup>\<infinity>" proof fix x show "(vs \<odot> 0\<^sup>\<infinity>) x = (?ws \<odot> 0\<^sup>\<infinity>) x" proof (cases "x < Suc m") case True then show ?thesis using \<open>?P m\<close> by simp next case False moreover from this have "(?ws \<odot> 0\<^sup>\<infinity>) x \<down>= 0" by simp ultimately show ?thesis using not_gr by (cases "x < length vs") simp_all qed qed then have "f = ?ws \<odot> 0\<^sup>\<infinity>" using assms(1) by simp moreover have "length ?ws > 0" by (simp add: \<open>vs \<noteq> []\<close>) moreover have "last ?ws \<noteq> 0" by (simp add: \<open>?P m\<close> take_Suc_conv_app_nth) ultimately show ?thesis using that by blast qed section \<open>Types of inference\label{s:inference_types}\<close> text \<open>This section introduces all inference types that we are going to consider together with some of their simple properties. All these inference types share the following condition, which essentially says that everything must be computable:\<close> abbreviation environment :: "partial2 \<Rightarrow> (partial1 set) \<Rightarrow> partial1 \<Rightarrow> bool" where "environment \<psi> U s \<equiv> \<psi> \<in> \<P>\<^sup>2 \<and> U \<subseteq> \<R> \<and> s \<in> \<P> \<and> (\<forall>f\<in>U. \<forall>n. s (f \<triangleright> n) \<down>)" subsection \<open>LIM: Learning in the limit\<close> text \<open>A strategy $S$ learns a class $U$ in the limit with respect to a hypothesis space @{term "\<psi> \<in> \<P>\<^sup>2"} if for all $f\in U$, the sequence $(S(f^n))_{n\in\mathbb{N}}$ converges to an $i$ with $\psi_i = f$. Convergence for a sequence of natural numbers means that almost all elements are the same. 
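Before the formal definitions, a minimal Python sketch may help; it assumes the class of almost-everywhere-zero functions from above and uses plain lists as hypotheses (all names are illustrative, not part of the formal development):

```python
def strategy(prefix):
    # Toy learner for almost-everywhere-zero functions: guess the
    # observed prefix with trailing zeros stripped. On such functions
    # the guess changes only finitely often, i.e. it converges.
    e = list(prefix)
    while e and e[-1] == 0:
        e.pop()
    return e

# Target: f = 3, 0, 7, 0, 0, ... (zero from position 3 on).
f = lambda x: [3, 0, 7][x] if x < 3 else 0
guesses = [strategy([f(x) for x in range(n + 1)]) for n in range(10)]
assert all(g == [3, 0, 7] for g in guesses[2:])   # stable from n = 2 on
```

Reading each guess as an index in a prefix numbering in the style of prenum, the strategy converges to a correct hypothesis on every function of this kind.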
We express this with the following notation.\<close> abbreviation Almost_All :: "(nat \<Rightarrow> bool) \<Rightarrow> bool" (binder "\<forall>\<^sup>\<infinity>" 10) where "\<forall>\<^sup>\<infinity>n. P n \<equiv> \<exists>n\<^sub>0. \<forall>n\<ge>n\<^sub>0. P n" definition learn_lim :: "partial2 \<Rightarrow> (partial1 set) \<Rightarrow> partial1 \<Rightarrow> bool" where "learn_lim \<psi> U s \<equiv> environment \<psi> U s \<and> (\<forall>f\<in>U. \<exists>i. \<psi> i = f \<and> (\<forall>\<^sup>\<infinity>n. s (f \<triangleright> n) \<down>= i))" lemma learn_limE: assumes "learn_lim \<psi> U s" shows "environment \<psi> U s" and "\<And>f. f \<in> U \<Longrightarrow> \<exists>i. \<psi> i = f \<and> (\<forall>\<^sup>\<infinity>n. s (f \<triangleright> n) \<down>= i)" using assms learn_lim_def by auto lemma learn_limI: assumes "environment \<psi> U s" and "\<And>f. f \<in> U \<Longrightarrow> \<exists>i. \<psi> i = f \<and> (\<forall>\<^sup>\<infinity>n. s (f \<triangleright> n) \<down>= i)" shows "learn_lim \<psi> U s" using assms learn_lim_def by auto definition LIM_wrt :: "partial2 \<Rightarrow> partial1 set set" where "LIM_wrt \<psi> \<equiv> {U. \<exists>s. learn_lim \<psi> U s}" definition Lim :: "partial1 set set" ("LIM") where "LIM \<equiv> {U. \<exists>\<psi> s. learn_lim \<psi> U s}" text \<open>LIM is closed under the subset relation.\<close> lemma learn_lim_closed_subseteq: assumes "learn_lim \<psi> U s" and "V \<subseteq> U" shows "learn_lim \<psi> V s" using assms learn_lim_def by auto corollary LIM_closed_subseteq: assumes "U \<in> LIM" and "V \<subseteq> U" shows "V \<in> LIM" using assms learn_lim_closed_subseteq by (smt Lim_def mem_Collect_eq) text \<open>Changing the hypothesis infinitely often precludes learning in the limit.\<close> lemma infinite_hyp_changes_not_Lim: assumes "f \<in> U" and "\<forall>n. \<exists>m\<^sub>1>n. \<exists>m\<^sub>2>n.
s (f \<triangleright> m\<^sub>1) \<noteq> s (f \<triangleright> m\<^sub>2)" shows "\<not> learn_lim \<psi> U s" using assms learn_lim_def by (metis less_imp_le) lemma always_hyp_change_not_Lim: assumes "\<And>x. s (f \<triangleright> (Suc x)) \<noteq> s (f \<triangleright> x)" shows "\<not> learn_lim \<psi> {f} s" using assms learn_limE by (metis le_SucI order_refl singletonI) text \<open>Guessing a wrong hypothesis infinitely often precludes learning in the limit.\<close> lemma infinite_hyp_wrong_not_Lim: assumes "f \<in> U" and "\<forall>n. \<exists>m>n. \<psi> (the (s (f \<triangleright> m))) \<noteq> f" shows "\<not> learn_lim \<psi> U s" using assms learn_limE by (metis less_imp_le option.sel) text \<open>Converging to the same hypothesis on two functions precludes learning in the limit.\<close> lemma same_hyp_for_two_not_Lim: assumes "f\<^sub>1 \<in> U" and "f\<^sub>2 \<in> U" and "f\<^sub>1 \<noteq> f\<^sub>2" and "\<forall>n\<ge>n\<^sub>1. s (f\<^sub>1 \<triangleright> n) = h" and "\<forall>n\<ge>n\<^sub>2. s (f\<^sub>2 \<triangleright> n) = h" shows "\<not> learn_lim \<psi> U s" using assms learn_limE by (metis le_cases option.sel) text \<open>Every class that can be learned in the limit can be learned in the limit with respect to any Gödel numbering. We prove a generalization in which hypotheses may have to satisfy an extra condition, so we can re-use it for other inference types later.\<close> lemma learn_lim_extra_wrt_goedel: fixes extra :: "(partial1 set) \<Rightarrow> partial1 \<Rightarrow> nat \<Rightarrow> partial1 \<Rightarrow> bool" assumes "goedel_numbering \<chi>" and "learn_lim \<psi> U s" and "\<And>f n. f \<in> U \<Longrightarrow> extra U f n (\<psi> (the (s (f \<triangleright> n))))" shows "\<exists>t. learn_lim \<chi> U t \<and> (\<forall>f\<in>U. \<forall>n. extra U f n (\<chi> (the (t (f \<triangleright> n)))))" proof - have env: "environment \<psi> U s" and lim: "learn_lim \<psi> U s" and extra: "\<forall>f\<in>U. \<forall>n. 
extra U f n (\<psi> (the (s (f \<triangleright> n))))" using assms learn_limE by auto obtain c where c: "c \<in> \<R>" "\<forall>i. \<psi> i = \<chi> (the (c i))" using env goedel_numberingE[OF assms(1), of \<psi>] by auto define t where "t \<equiv> (\<lambda>x. if s x \<down> \<and> c (the (s x)) \<down> then Some (the (c (the (s x)))) else None)" have "t \<in> \<P>" unfolding t_def using env c concat_P1_P1[of c s] by auto have "t x = (if s x \<down> then Some (the (c (the (s x)))) else None)" for x using t_def c(1) R1_imp_total1 by auto then have t: "t (f \<triangleright> n) \<down>= the (c (the (s (f \<triangleright> n))))" if "f \<in> U" for f n using lim learn_limE that by simp have "learn_lim \<chi> U t" proof (rule learn_limI) show "environment \<chi> U t" using t by (simp add: \<open>t \<in> \<P>\<close> env goedel_numbering_P2[OF assms(1)]) show "\<exists>i. \<chi> i = f \<and> (\<forall>\<^sup>\<infinity>n. t (f \<triangleright> n) \<down>= i)" if "f \<in> U" for f proof - from lim learn_limE(2) obtain i n\<^sub>0 where i: "\<psi> i = f \<and> (\<forall>n\<ge>n\<^sub>0. s (f \<triangleright> n) \<down>= i)" using \<open>f \<in> U\<close> by blast let ?j = "the (c i)" have "\<chi> ?j = f" using c(2) i by simp moreover have "t (f \<triangleright> n) \<down>= ?j" if "n \<ge> n\<^sub>0" for n by (simp add: \<open>f \<in> U\<close> i t that) ultimately show ?thesis by auto qed qed moreover have "extra U f n (\<chi> (the (t (f \<triangleright> n))))" if "f \<in> U" for f n proof - from t have "the (t (f \<triangleright> n)) = the (c (the (s (f \<triangleright> n))))" by (simp add: that) then have "\<chi> (the (t (f \<triangleright> n))) = \<psi> (the (s (f \<triangleright> n)))" using c(2) by simp with extra show ?thesis using that by simp qed ultimately show ?thesis by auto qed lemma learn_lim_wrt_goedel: assumes "goedel_numbering \<chi>" and "learn_lim \<psi> U s" shows "\<exists>t. 
learn_lim \<chi> U t" using assms learn_lim_extra_wrt_goedel[where ?extra="\<lambda>U f n h. True"] by simp lemma LIM_wrt_phi_eq_Lim: "LIM_wrt \<phi> = LIM" using LIM_wrt_def Lim_def learn_lim_wrt_goedel[OF goedel_numbering_phi] by blast subsection \<open>BC: Behaviorally correct learning in the limit\<close> text \<open>Behaviorally correct learning in the limit relaxes LIM by requiring that the strategy almost always output an index for the target function, but not necessarily the same index. In other words, convergence of $(S(f^n))_{n\in\mathbb{N}}$ is replaced by convergence of $(\psi_{S(f^n)})_{n\in\mathbb{N}}$.\<close> definition learn_bc :: "partial2 \<Rightarrow> (partial1 set) \<Rightarrow> partial1 \<Rightarrow> bool" where "learn_bc \<psi> U s \<equiv> environment \<psi> U s \<and> (\<forall>f\<in>U. \<forall>\<^sup>\<infinity>n. \<psi> (the (s (f \<triangleright> n))) = f)" lemma learn_bcE: assumes "learn_bc \<psi> U s" shows "environment \<psi> U s" and "\<And>f. f \<in> U \<Longrightarrow> \<forall>\<^sup>\<infinity>n. \<psi> (the (s (f \<triangleright> n))) = f" using assms learn_bc_def by auto lemma learn_bcI: assumes "environment \<psi> U s" and "\<And>f. f \<in> U \<Longrightarrow> \<forall>\<^sup>\<infinity>n. \<psi> (the (s (f \<triangleright> n))) = f" shows "learn_bc \<psi> U s" using assms learn_bc_def by auto definition BC_wrt :: "partial2 \<Rightarrow> partial1 set set" where "BC_wrt \<psi> \<equiv> {U. \<exists>s. learn_bc \<psi> U s}" definition BC :: "partial1 set set" where "BC \<equiv> {U. \<exists>\<psi> s.
learn_bc \<psi> U s}" text \<open>BC is a superset of LIM and closed under the subset relation.\<close> lemma learn_lim_imp_BC: "learn_lim \<psi> U s \<Longrightarrow> learn_bc \<psi> U s" using learn_limE learn_bcI[of \<psi> U s] by fastforce lemma Lim_subseteq_BC: "LIM \<subseteq> BC" using learn_lim_imp_BC Lim_def BC_def by blast lemma learn_bc_closed_subseteq: assumes "learn_bc \<psi> U s" and "V \<subseteq> U" shows "learn_bc \<psi> V s" using assms learn_bc_def by auto corollary BC_closed_subseteq: assumes "U \<in> BC" and "V \<subseteq> U" shows "V \<in> BC" using assms by (smt BC_def learn_bc_closed_subseteq mem_Collect_eq) text \<open>Just like with LIM, guessing a wrong hypothesis infinitely often precludes BC-style learning.\<close> lemma infinite_hyp_wrong_not_BC: assumes "f \<in> U" and "\<forall>n. \<exists>m>n. \<psi> (the (s (f \<triangleright> m))) \<noteq> f" shows "\<not> learn_bc \<psi> U s" proof assume "learn_bc \<psi> U s" then obtain n\<^sub>0 where "\<forall>n\<ge>n\<^sub>0. \<psi> (the (s (f \<triangleright> n))) = f" using learn_bcE assms(1) by metis with assms(2) show False using less_imp_le by blast qed text \<open>The proof that Gödel numberings suffice as hypothesis spaces for BC is similar to the one for @{thm[source] learn_lim_extra_wrt_goedel}. We do not need the @{term extra} part for BC, but we get it for free.\<close> lemma learn_bc_extra_wrt_goedel: fixes extra :: "(partial1 set) \<Rightarrow> partial1 \<Rightarrow> nat \<Rightarrow> partial1 \<Rightarrow> bool" assumes "goedel_numbering \<chi>" and "learn_bc \<psi> U s" and "\<And>f n. f \<in> U \<Longrightarrow> extra U f n (\<psi> (the (s (f \<triangleright> n))))" shows "\<exists>t. learn_bc \<chi> U t \<and> (\<forall>f\<in>U. \<forall>n. extra U f n (\<chi> (the (t (f \<triangleright> n)))))" proof - have env: "environment \<psi> U s" and lim: "learn_bc \<psi> U s" and extra: "\<forall>f\<in>U. \<forall>n. 
extra U f n (\<psi> (the (s (f \<triangleright> n))))" using assms learn_bc_def by auto obtain c where c: "c \<in> \<R>" "\<forall>i. \<psi> i = \<chi> (the (c i))" using env goedel_numberingE[OF assms(1), of \<psi>] by auto define t where "t = (\<lambda>x. if s x \<down> \<and> c (the (s x)) \<down> then Some (the (c (the (s x)))) else None)" have "t \<in> \<P>" unfolding t_def using env c concat_P1_P1[of c s] by auto have "t x = (if s x \<down> then Some (the (c (the (s x)))) else None)" for x using t_def c(1) R1_imp_total1 by auto then have t: "t (f \<triangleright> n) \<down>= the (c (the (s (f \<triangleright> n))))" if "f \<in> U" for f n using lim learn_bcE(1) that by simp have "learn_bc \<chi> U t" proof (rule learn_bcI) show "environment \<chi> U t" using t by (simp add: \<open>t \<in> \<P>\<close> env goedel_numbering_P2[OF assms(1)]) show "\<forall>\<^sup>\<infinity>n. \<chi> (the (t (f \<triangleright> n))) = f" if "f \<in> U" for f proof - obtain n\<^sub>0 where "\<forall>n\<ge>n\<^sub>0. \<psi> (the (s (f \<triangleright> n))) = f" using lim learn_bcE(2) \<open>f \<in> U\<close> by blast then show ?thesis using that t c(2) by auto qed qed moreover have "extra U f n (\<chi> (the (t (f \<triangleright> n))))" if "f \<in> U" for f n proof - from t have "the (t (f \<triangleright> n)) = the (c (the (s (f \<triangleright> n))))" by (simp add: that) then have "\<chi> (the (t (f \<triangleright> n))) = \<psi> (the (s (f \<triangleright> n)))" using c(2) by simp with extra show ?thesis using that by simp qed ultimately show ?thesis by auto qed corollary learn_bc_wrt_goedel: assumes "goedel_numbering \<chi>" and "learn_bc \<psi> U s" shows "\<exists>t. learn_bc \<chi> U t" using assms learn_bc_extra_wrt_goedel[where ?extra="\<lambda>_ _ _ _. 
True"] by simp corollary BC_wrt_phi_eq_BC: "BC_wrt \<phi> = BC" using learn_bc_wrt_goedel goedel_numbering_phi BC_def BC_wrt_def by blast subsection \<open>CONS: Learning in the limit with consistent hypotheses\<close> text \<open>A hypothesis is \emph{consistent} if it matches all values in the prefix given to the strategy. Consistent learning in the limit requires the strategy to output only consistent hypotheses for prefixes from the class.\<close> definition learn_cons :: "partial2 \<Rightarrow> (partial1 set) \<Rightarrow> partial1 \<Rightarrow> bool" where "learn_cons \<psi> U s \<equiv> learn_lim \<psi> U s \<and> (\<forall>f\<in>U. \<forall>n. \<forall>k\<le>n. \<psi> (the (s (f \<triangleright> n))) k = f k)" definition CONS_wrt :: "partial2 \<Rightarrow> partial1 set set" where "CONS_wrt \<psi> \<equiv> {U. \<exists>s. learn_cons \<psi> U s}" definition CONS :: "partial1 set set" where "CONS \<equiv> {U. \<exists>\<psi> s. learn_cons \<psi> U s}" lemma CONS_subseteq_Lim: "CONS \<subseteq> LIM" using CONS_def Lim_def learn_cons_def by blast lemma learn_consI: assumes "environment \<psi> U s" and "\<And>f. f \<in> U \<Longrightarrow> \<exists>i. \<psi> i = f \<and> (\<forall>\<^sup>\<infinity>n. s (f \<triangleright> n) \<down>= i)" and "\<And>f n. f \<in> U \<Longrightarrow> \<forall>k\<le>n. \<psi> (the (s (f \<triangleright> n))) k = f k" shows "learn_cons \<psi> U s" using assms learn_lim_def learn_cons_def by simp text \<open>If a consistent strategy converges, it automatically converges to a correct hypothesis. Thus we can remove @{term "\<psi> i = f"} from the second assumption in the previous lemma.\<close> lemma learn_consI2: assumes "environment \<psi> U s" and "\<And>f. f \<in> U \<Longrightarrow> \<exists>i. \<forall>\<^sup>\<infinity>n. s (f \<triangleright> n) \<down>= i" and "\<And>f n. f \<in> U \<Longrightarrow> \<forall>k\<le>n. 
\<psi> (the (s (f \<triangleright> n))) k = f k" shows "learn_cons \<psi> U s" proof (rule learn_consI) show "environment \<psi> U s" and cons: "\<And>f n. f \<in> U \<Longrightarrow> \<forall>k\<le>n. \<psi> (the (s (f \<triangleright> n))) k = f k" using assms by simp_all show "\<exists>i. \<psi> i = f \<and> (\<forall>\<^sup>\<infinity>n. s (f \<triangleright> n) \<down>= i)" if "f \<in> U" for f proof - from that assms(2) obtain i n\<^sub>0 where i_n0: "\<forall>n\<ge>n\<^sub>0. s (f \<triangleright> n) \<down>= i" by blast have "\<psi> i x = f x" for x proof (cases "x \<le> n\<^sub>0") case True then show ?thesis using i_n0 cons that by fastforce next case False moreover have "\<forall>k\<le>x. \<psi> (the (s (f \<triangleright> x))) k = f k" using cons that by simp ultimately show ?thesis using i_n0 by simp qed with i_n0 show ?thesis by auto qed qed lemma learn_consE: assumes "learn_cons \<psi> U s" shows "environment \<psi> U s" and "\<And>f. f \<in> U \<Longrightarrow> \<exists>i n\<^sub>0. \<psi> i = f \<and> (\<forall>n\<ge>n\<^sub>0. s (f \<triangleright> n) \<down>= i)" and "\<And>f n. f \<in> U \<Longrightarrow> \<forall>k\<le>n. \<psi> (the (s (f \<triangleright> n))) k = f k" using assms learn_cons_def learn_lim_def by auto lemma learn_cons_wrt_goedel: assumes "goedel_numbering \<chi>" and "learn_cons \<psi> U s" shows "\<exists>t. learn_cons \<chi> U t" using learn_cons_def assms learn_lim_extra_wrt_goedel[where ?extra="\<lambda>U f n h. \<forall>k\<le>n. 
h k = f k"] by auto lemma CONS_wrt_phi_eq_CONS: "CONS_wrt \<phi> = CONS" using CONS_wrt_def CONS_def learn_cons_wrt_goedel goedel_numbering_phi by blast lemma learn_cons_closed_subseteq: assumes "learn_cons \<psi> U s" and "V \<subseteq> U" shows "learn_cons \<psi> V s" using assms learn_cons_def learn_lim_closed_subseteq by auto lemma CONS_closed_subseteq: assumes "U \<in> CONS" and "V \<subseteq> U" shows "V \<in> CONS" using assms learn_cons_closed_subseteq by (smt CONS_def mem_Collect_eq) text \<open>A consistent strategy cannot output the same hypothesis for two different prefixes from the class to be learned.\<close> lemma same_hyp_different_init_not_cons: assumes "f \<in> U" and "g \<in> U" and "f \<triangleright> n \<noteq> g \<triangleright> n" and "s (f \<triangleright> n) = s (g \<triangleright> n)" shows "\<not> learn_cons \<phi> U s" unfolding learn_cons_def by (auto, metis assms init_eqI) subsection \<open>TOTAL: Learning in the limit with total hypotheses\<close> text \<open>Total learning in the limit requires the strategy to hypothesize only total functions for prefixes from the class.\<close> definition learn_total :: "partial2 \<Rightarrow> (partial1 set) \<Rightarrow> partial1 \<Rightarrow> bool" where "learn_total \<psi> U s \<equiv> learn_lim \<psi> U s \<and> (\<forall>f\<in>U. \<forall>n. \<psi> (the (s (f \<triangleright> n))) \<in> \<R>)" definition TOTAL_wrt :: "partial2 \<Rightarrow> partial1 set set" where "TOTAL_wrt \<psi> \<equiv> {U. \<exists>s. learn_total \<psi> U s}" definition TOTAL :: "partial1 set set" where "TOTAL \<equiv> {U. \<exists>\<psi> s. learn_total \<psi> U s}" lemma TOTAL_subseteq_LIM: "TOTAL \<subseteq> LIM" unfolding TOTAL_def Lim_def using learn_total_def by auto lemma learn_totalI: assumes "environment \<psi> U s" and "\<And>f. f \<in> U \<Longrightarrow> \<exists>i. \<psi> i = f \<and> (\<forall>\<^sup>\<infinity>n. s (f \<triangleright> n) \<down>= i)" and "\<And>f n. 
f \<in> U \<Longrightarrow> \<psi> (the (s (f \<triangleright> n))) \<in> \<R>" shows "learn_total \<psi> U s" using assms learn_lim_def learn_total_def by auto lemma learn_totalE: assumes "learn_total \<psi> U s" shows "environment \<psi> U s" and "\<And>f. f \<in> U \<Longrightarrow> \<exists>i n\<^sub>0. \<psi> i = f \<and> (\<forall>n\<ge>n\<^sub>0. s (f \<triangleright> n) \<down>= i)" and "\<And>f n. f \<in> U \<Longrightarrow> \<psi> (the (s (f \<triangleright> n))) \<in> \<R>" using assms learn_lim_def learn_total_def by auto lemma learn_total_wrt_goedel: assumes "goedel_numbering \<chi>" and "learn_total \<psi> U s" shows "\<exists>t. learn_total \<chi> U t" using learn_total_def assms learn_lim_extra_wrt_goedel[where ?extra="\<lambda>U f n h. h \<in> \<R>"] by auto lemma TOTAL_wrt_phi_eq_TOTAL: "TOTAL_wrt \<phi> = TOTAL" using TOTAL_wrt_def TOTAL_def learn_total_wrt_goedel goedel_numbering_phi by blast lemma learn_total_closed_subseteq: assumes "learn_total \<psi> U s" and "V \<subseteq> U" shows "learn_total \<psi> V s" using assms learn_total_def learn_lim_closed_subseteq by auto lemma TOTAL_closed_subseteq: assumes "U \<in> TOTAL" and "V \<subseteq> U" shows "V \<in> TOTAL" using assms learn_total_closed_subseteq by (smt TOTAL_def mem_Collect_eq) subsection \<open>CP: Learning in the limit with class-preserving hypotheses\<close> text \<open>Class-preserving learning in the limit requires all hypotheses for prefixes from the class to be functions from the class.\<close> definition learn_cp :: "partial2 \<Rightarrow> (partial1 set) \<Rightarrow> partial1 \<Rightarrow> bool" where "learn_cp \<psi> U s \<equiv> learn_lim \<psi> U s \<and> (\<forall>f\<in>U. \<forall>n. \<psi> (the (s (f \<triangleright> n))) \<in> U)" definition CP_wrt :: "partial2 \<Rightarrow> partial1 set set" where "CP_wrt \<psi> \<equiv> {U. \<exists>s. learn_cp \<psi> U s}" definition CP :: "partial1 set set" where "CP \<equiv> {U. \<exists>\<psi> s. 
learn_cp \<psi> U s}" lemma learn_cp_wrt_goedel: assumes "goedel_numbering \<chi>" and "learn_cp \<psi> U s" shows "\<exists>t. learn_cp \<chi> U t" using learn_cp_def assms learn_lim_extra_wrt_goedel[where ?extra="\<lambda>U f n h. h \<in> U"] by auto corollary CP_wrt_phi: "CP = CP_wrt \<phi>" using learn_cp_wrt_goedel[OF goedel_numbering_phi] by (smt CP_def CP_wrt_def Collect_cong) lemma learn_cpI: assumes "environment \<psi> U s" and "\<And>f. f \<in> U \<Longrightarrow> \<exists>i. \<psi> i = f \<and> (\<forall>\<^sup>\<infinity>n. s (f \<triangleright> n) \<down>= i)" and "\<And>f n. f \<in> U \<Longrightarrow> \<psi> (the (s (f \<triangleright> n))) \<in> U" shows "learn_cp \<psi> U s" using assms learn_cp_def learn_lim_def by auto lemma learn_cpE: assumes "learn_cp \<psi> U s" shows "environment \<psi> U s" and "\<And>f. f \<in> U \<Longrightarrow> \<exists>i n\<^sub>0. \<psi> i = f \<and> (\<forall>n\<ge>n\<^sub>0. s (f \<triangleright> n) \<down>= i)" and "\<And>f n. f \<in> U \<Longrightarrow> \<psi> (the (s (f \<triangleright> n))) \<in> U" using assms learn_lim_def learn_cp_def by auto text \<open>Since classes contain only total functions, a CP strategy is also a TOTAL strategy.\<close> lemma learn_cp_imp_total: "learn_cp \<psi> U s \<Longrightarrow> learn_total \<psi> U s" using learn_cp_def learn_total_def learn_lim_def by auto lemma CP_subseteq_TOTAL: "CP \<subseteq> TOTAL" using learn_cp_imp_total CP_def TOTAL_def by blast subsection \<open>FIN: Finite learning\<close> text \<open>In general it is undecidable whether a LIM strategy has reached its final hypothesis. By contrast, in finite learning (also called ``one-shot learning'') the strategy signals when it is ready to output a hypothesis. Up until then it outputs a ``don't know yet'' value. 
This value is represented by zero and the actual hypothesis $i$ by $i + 1$.\<close> definition learn_fin :: "partial2 \<Rightarrow> partial1 set \<Rightarrow> partial1 \<Rightarrow> bool" where "learn_fin \<psi> U s \<equiv> environment \<psi> U s \<and> (\<forall>f \<in> U. \<exists>i n\<^sub>0. \<psi> i = f \<and> (\<forall>n<n\<^sub>0. s (f \<triangleright> n) \<down>= 0) \<and> (\<forall>n\<ge>n\<^sub>0. s (f \<triangleright> n) \<down>= Suc i))" definition FIN_wrt :: "partial2 \<Rightarrow> partial1 set set" where "FIN_wrt \<psi> \<equiv> {U. \<exists>s. learn_fin \<psi> U s}" definition FIN :: "partial1 set set" where "FIN \<equiv> {U. \<exists>\<psi> s. learn_fin \<psi> U s}" lemma learn_finI: assumes "environment \<psi> U s" and "\<And>f. f \<in> U \<Longrightarrow> \<exists>i n\<^sub>0. \<psi> i = f \<and> (\<forall>n<n\<^sub>0. s (f \<triangleright> n) \<down>= 0) \<and> (\<forall>n\<ge>n\<^sub>0. s (f \<triangleright> n) \<down>= Suc i)" shows "learn_fin \<psi> U s" using assms learn_fin_def by auto lemma learn_finE: assumes "learn_fin \<psi> U s" shows "environment \<psi> U s" and "\<And>f. f \<in> U \<Longrightarrow> \<exists>i n\<^sub>0. \<psi> i = f \<and> (\<forall>n<n\<^sub>0. s (f \<triangleright> n) \<down>= 0) \<and> (\<forall>n\<ge>n\<^sub>0. s (f \<triangleright> n) \<down>= Suc i)" using assms learn_fin_def by auto lemma learn_fin_closed_subseteq: assumes "learn_fin \<psi> U s" and "V \<subseteq> U" shows "learn_fin \<psi> V s" using assms learn_fin_def by auto lemma learn_fin_wrt_goedel: assumes "goedel_numbering \<chi>" and "learn_fin \<psi> U s" shows "\<exists>t. learn_fin \<chi> U t" proof - have env: "environment \<psi> U s" and fin: "\<And>f. f \<in> U \<Longrightarrow> \<exists>i n\<^sub>0. \<psi> i = f \<and> (\<forall>n<n\<^sub>0. s (f \<triangleright> n) \<down>= 0) \<and> (\<forall>n\<ge>n\<^sub>0. s (f \<triangleright> n) \<down>= Suc i)" using assms(2) learn_finE by auto obtain c where c: "c \<in> \<R>" "\<forall>i. 
\<psi> i = \<chi> (the (c i))" using env goedel_numberingE[OF assms(1), of \<psi>] by auto define t where "t \<equiv> \<lambda>x. if s x \<up> then None else if s x = Some 0 then Some 0 else Some (Suc (the (c (the (s x) - 1))))" have "t \<in> \<P>" proof - from c obtain rc where rc: "recfn 1 rc" "total rc" "\<forall>x. c x = eval rc [x]" by auto from env obtain rs where rs: "recfn 1 rs" "\<forall>x. s x = eval rs [x]" by auto then have "eval rs [f \<triangleright> n] \<down>" if "f \<in> U" for f n using env that by simp define rt where "rt = Cn 1 r_ifz [rs, Z, Cn 1 S [Cn 1 rc [Cn 1 r_dec [rs]]]]" then have "recfn 1 rt" using rc(1) rs(1) by simp have "eval rt [x] \<up>" if "eval rs [x] \<up>" for x using rc(1) rs(1) rt_def that by auto moreover have "eval rt [x] \<down>= 0" if "eval rs [x] \<down>= 0" for x using rt_def that rc(1,2) rs(1) by simp moreover have "eval rt [x] \<down>= Suc (the (c (the (s x) - 1)))" if "eval rs [x] \<down>\<noteq> 0" for x using rt_def that rc rs by auto ultimately have "eval rt [x] = t x" for x by (simp add: rs(2) t_def) with \<open>recfn 1 rt\<close> show ?thesis by auto qed have t: "t (f \<triangleright> n) \<down>= (if s (f \<triangleright> n) = Some 0 then 0 else Suc (the (c (the (s (f \<triangleright> n)) - 1))))" if "f \<in> U" for f n using that env by (simp add: t_def) have "learn_fin \<chi> U t" proof (rule learn_finI) show "environment \<chi> U t" using t by (simp add: \<open>t \<in> \<P>\<close> env goedel_numbering_P2[OF assms(1)]) show "\<exists>i n\<^sub>0. \<chi> i = f \<and> (\<forall>n<n\<^sub>0. t (f \<triangleright> n) \<down>= 0) \<and> (\<forall>n\<ge>n\<^sub>0. t (f \<triangleright> n) \<down>= Suc i)" if "f \<in> U" for f proof - from fin obtain i n\<^sub>0 where i: "\<psi> i = f \<and> (\<forall>n<n\<^sub>0. s (f \<triangleright> n) \<down>= 0) \<and> (\<forall>n\<ge>n\<^sub>0. 
s (f \<triangleright> n) \<down>= Suc i)" using \<open>f \<in> U\<close> by blast let ?j = "the (c i)" have "\<chi> ?j = f" using c(2) i by simp moreover have "\<forall>n<n\<^sub>0. t (f \<triangleright> n) \<down>= 0" using t[OF that] i by simp moreover have "t (f \<triangleright> n) \<down>= Suc ?j" if "n \<ge> n\<^sub>0" for n using that i t[OF \<open>f \<in> U\<close>] by simp ultimately show ?thesis by auto qed qed then show ?thesis by auto qed end
\section{Explicit formulas} \label{sec:formulas} In this appendix we list explicit formulas for categorical invariants of Euler characteristic $\chi\geq -3$. The formulas will be written in terms of partially directed graphs. Our conventions when drawing such a graph $(G,L_G^{\sf in}\coprod L_G^{\sf out},E^{\sf dir},K)$ are as follows: \begin{itemize} \item[--] we shall omit the genus decoration of a vertex if it is clear from the combinatorics of the graph; \item[--] we shall omit the drawing of the spanning tree $K \subset E^{\sf dir}$ if there is a unique choice of it; otherwise, the spanning tree $K$ will be drawn in blue; \item[--] when drawing ribbon graphs, vertices decorated by $u^0$ will not be marked; \item[--] the orientation of ribbon graphs is the one described in~\cite{CalChe}. \end{itemize} \paragraph{{\bf The formula for the $(0,1,2)$-component.}} We begin with the case when $g=0$ and $n=3$. In this case there is a unique partially directed stable graph. Thus we have \[ \iota_*F^{A,s}_{0,3}= \frac{1}{2} \includegraphics[trim=-1cm 1cm 0 0, scale=.3]{png/012.png}\] The coefficient $\frac{1}{2}$ is due to the automorphism that switches the two outputs in the stable graph. Its vertex is decorated by the tensor \[ \widehat\beta^A_{0,1,2}= \rho^A(\widehat{\mathcal{V}}_{0,1,2})=-\frac{1}{2}\rho^A\big( \begin{tikzpicture}[baseline={([yshift=-1ex]current bounding box.center)},scale=0.2] \draw [thick] (0,0) to (0,2); \draw [thick] (-0.2, 1.8) to (0.2, 2.2); \draw [thick] (0.2, 1.8) to (-0.2, 2.2); \draw [thick] (0,0) to (-2,0); \draw [thick] (0,0) to (2,0); \draw [thick] (-2.2,0) circle [radius=0.2]; \draw [thick] (2.2,0) circle [radius=0.2]; \end{tikzpicture}\big)\] using the action $\rho^A$ on the first combinatorial string vertex $\widehat{\mathcal{V}}_{0,1,2}$, see~\cite{CCT}. Note that the latter ``T''-shaped graph is a ribbon graph, not to be confused with the first graph, which is a (partially directed) stable graph.
(The negative sign appears due to our choice of orientation of ribbon graphs.) \paragraph{{\bf The formula for the $(1,1,0)$-component.}} In this case there are two stable graphs. The component $\iota_*F_{1,1}^{A,s}$ is given by \[ \includegraphics[trim=-1cm 1cm 0 0, scale=.4]{png/110.png} \] For the first graph the unique vertex is decorated by the image under $\rho^A$ of the combinatorial string vertex $\widehat{\mathcal{V}}_{1,1,0}$. It was computed in~\cite{CalTu} and is explicitly given by the following linear combination of ribbon graphs: \[ \widehat{\mathcal{V}}^{\sf comb}_{1,1,0}=-\frac{1}{24}\;\;\; \begin{tikzpicture}[baseline={(current bounding box.center)},scale=0.3] \draw [thick] (0,2) circle [radius=2]; \draw [thick] (-2,2) to (-0.6,2); \draw [thick] (-0.8,2.2) to (-0.4,1.8); \draw [thick] (-0.8, 1.8) to (-0.4, 2.2); \draw [thick] (-1.4142, 0.5858) to [out=40, in=140] (1, 0.5); \draw [thick] (1.25, 0.3) to [out=-45, in=225] (2, 0.2); \draw [thick] (2,0.2) to [out=45, in=-50] (1.732, 1); \node at (0.7, 2.2) {$u^{-1}$}; \end{tikzpicture} \;\;+\;\;\frac{1}{4}\;\;\; \begin{tikzpicture}[baseline={(current bounding box.center)},scale=0.3] \draw [thick] (0,2) circle [radius=2]; \draw [thick] (0,0) to (0,1.4); \draw [thick] (-0.2, 1.2) to (0.2, 1.6); \draw [thick] (-0.2, 1.6) to (0.2, 1.2); \draw [thick] (0,0) to [out=80, in=180] (0.5, 1); \draw [thick] (0.5,1) to [out=0, in=100] (0.9, 0.4); \draw [thick] (0,0) to [out=-80, in=180] (0.5, -1); \draw [thick] (0.5, -1) to [out=0, in=-100] (0.9, 0); \end{tikzpicture}.\] \paragraph{{\bf The formula for the $(0,1,3)$-component.}} In this case $\iota_*F^{A,s}_{0,4}$ is equal to \[ \includegraphics[scale=.5]{png/013.png}\] Note that the coefficient $\frac{1}{2}$ disappears due to the symmetry of the two outgoing leaves on the right-hand side of the stable graph.
The combinatorial string vertex $ \widehat{\mathcal{V}}^{\sf comb}_{0,1,3}$ is computed explicitly in~\cite{CCT} and it is given by \[\widehat{\mathcal{V}}^{\sf comb}_{0,1,3} =-\frac{1}{2}\; \begin{tikzpicture}[baseline={(current bounding box.center)},scale=0.4] \draw (0,2) node[cross=3pt,label=above:{}] {}; \draw [thick] (0,0.2) to (0,2); \draw [thick] (-0.2,0) to (-2,0); \draw [thick] (0.2,0) to (2,0); \draw [thick] (-2.2,0) circle [radius=0.2]; \draw [thick] (2.2,0) circle [radius=0.2]; \draw [thick] (0,0) circle [radius=0.2]; \draw [ultra thick] (.2,0) to (1,0); \end{tikzpicture}-\frac{1}{2}\; \begin{tikzpicture}[baseline={(current bounding box.center)},scale=0.4] \draw (1,1) node[cross=3pt,label=above:{}] {}; \draw [thick] (1,0) to (1,1); \draw [thick] (-0.2,0) to (-2,0); \draw [thick] (0.2,0) to (2,0); \draw [thick] (-2.2,0) circle [radius=0.2]; \draw [thick] (2.2,0) circle [radius=0.2]; \draw [thick] (0,0) circle [radius=0.2]; \draw [ultra thick] (0,-.2) to (0,-1);\end{tikzpicture} +\frac{1}{2}\; \begin{tikzpicture}[baseline={(current bounding box.center)},scale=0.4] \draw (1,1) node[cross=3pt,label=above:{}] {}; \draw [thick] (1,0) to (1,1); \draw [thick] (0,0) to (-2,0); \draw [thick] (0,0) to (2,0); \draw [thick] (-2.2,0) circle [radius=0.2]; \draw [thick] (2.2,0) circle [radius=0.2]; \draw [thick] (0,-1) circle [radius=0.2]; \draw [thick] (0,0) to (0,-.8); \node at (1,-1) {$u^{-1}$}; \end{tikzpicture}+\frac{1}{6}\; \begin{tikzpicture}[baseline={(current bounding box.center)},scale=0.4] \draw (-1,1) node[cross=3pt,label=above:{}] {}; \draw [thick] (-1,0) to (-1,1); \draw [thick] (0,0) to (-2,0); \draw [thick] (0,0) to (2,0); \draw [thick] (-2.2,0) circle [radius=0.2]; \draw [thick] (2.2,0) circle [radius=0.2]; \draw [thick] (1,1) circle [radius=0.2]; \draw [thick] (1,0) to (1,.8); \node at (-1,1.5) {$u^{-1}$};\end{tikzpicture}\] \paragraph{{\bf The formula for the $(1,1,1)$-component.}} In this case we get \begin{align*} \iota_*F_{1,2}^{A,s} & = 
\begin{tikzpicture}[baseline={(current bounding box.center)},scale=0.5] \draw [thick,directed] (3.4,4) to (3.4,2); \node at (3.4,2) {$\bullet$}; \node at (2.4,2) {\small $g=1$}; \draw [thick,directed] (3.4,2) to (3.4,0); \end{tikzpicture} +\frac{1}{2}\begin{tikzpicture}[baseline={(current bounding box.center)},scale=0.5] \draw [thick,directed] (3.4,4) to (3.4,2); \node at (3.4,2) {$\bullet$}; \draw [thick] (2.7,1.3) circle [radius=1]; \draw [thick,directed] (3.4,2) to (4.8, 2); \end{tikzpicture} + \begin{tikzpicture}[baseline={(current bounding box.center)},scale=0.5] \draw [thick,directed] (3.4,4) to (3.4,2); \node at (3.4,2) {$\bullet$}; \draw [thick,directed] (3.4,2) to (1.4,2); \draw [thick,directed] (3.4,2) to (7.4,2); \node at (7.4,2) {$\bullet$}; \node at (8.4,2) {\small $g=1$}; \end{tikzpicture}+\\ & + \frac{1}{2} \begin{tikzpicture}[baseline={(current bounding box.center)},scale=0.5] \draw [thick,directed] (3.4,4) to (3.4,2); \node at (3.4,2) {$\bullet$}; \node at (3.4,-2) {$\bullet$}; \draw [thick,blue,directed] (3.4,2) to [out=240, in=120] (3.4,-2); \draw [thick,directed] (3.4,2) to [out=300, in=60] (3.4,-2); \draw [thick,directed] (3.4,-2) to (3.4,-4); \end{tikzpicture}+\frac{1}{2}\begin{tikzpicture}[baseline={(current bounding box.center)},scale=0.5] \draw [thick,directed] (3.4,4) to (3.4,2); \node at (3.4,2) {$\bullet$}; \node at (3.4,-2) {$\bullet$}; \draw [thick,directed] (3.4,2) to [out=240, in=120] (3.4,-2); \draw [thick] (3.4,2) to [out=300, in=60] (3.4,-2); \draw [thick,directed] (3.4,-2) to (3.4,-4); \end{tikzpicture}+\frac{1}{2}\begin{tikzpicture}[baseline={(current bounding box.center)},scale=0.5] \draw [thick,directed] (3.4,4) to (3.4,2); \node at (3.4,2) {$\bullet$}; \draw [thick,directed] (3.4,2) to (5.4,2); \draw [thick,directed] (3.4,2) to (3.4,-1); \draw [thick] (3.4,-2) circle [radius=1]; \end{tikzpicture} \end{align*} Observe that in the first graph of the second line, there are $2$ directed edges between the two vertices. 
This explains how the tensors $\widehat{\beta}^A_{g,k,l}$ with $k\geq 2$ can contribute to the categorical enumerative invariants. \paragraph{{\bf The formula for the $(0,1,4)$-component.}} In this case $\iota_*F_{0,5}^{A,s}$ is given by \[\includegraphics[scale=.4]{png/014.png}\] \paragraph{{\bf The formula for the $(1,1,2)$-component.}} In this case $\iota_*F_{1,3}^{A,s}$ is given by \[\includegraphics[scale=.5]{png/112.png}\] Note that in the above graphs the genus decoration is also omitted, since in this case it is evident from the graph itself. For example, in the first graph there is a unique vertex of genus $1$. In the third graph the genus $1$ decoration is forced on the left vertex; otherwise the graph would not be stable. \paragraph{{\bf A partial formula for the $(2,1,0)$-component.}} We list a few terms of $\iota_*F_{2,1}^{A,s}$, organized by the number of edges in the stable graphs. There is a unique star graph in $\Gamma\laurent{2,1,0}$ with no edges. Then, the terms with only one edge are given by \[\includegraphics[scale=.6]{png/210-1.png}\] The terms with two edges are \[\includegraphics[scale=.5]{png/210-2.png}\]
TITLE: Ellipse Tangents in 3D QUESTION [1 upvotes]: I know that we can find the tangent of an ellipse in 2D by taking the derivative of the equation defining the ellipse. But I'm a little bit confused about finding the tangent of an ellipse in 3D, where the ellipse orientation could be in any direction. Suppose we have the following information: The center of the ellipse in 3D, $(cx, cy, cz)$. The surface normal of the plane of the ellipse, $(v_1, v_2, v_3)$. The major and minor axes. How can we find the tangent of the ellipse at a given point on the ellipse? Kindly help. REPLY [3 votes]: Let $\mathbf c = (cx,cy,cz)$ denote the center, let $a$ and $b$ denote the semi-axis lengths, or "radii", as usual, and let $\mathbf u$ and $\mathbf v$ be two unit vectors in the directions of the major and minor axes respectively. Then the ellipse can be parametrized as $$ \mathbf x(\theta) = \mathbf c + (a\cos\theta)\mathbf u + (b\sin\theta)\mathbf v $$ The first derivative vector is then $$ \mathbf x'(\theta) = - (a\sin\theta)\mathbf u + (b\cos\theta)\mathbf v $$ This derivative vector gives you the direction of the tangent line at the point $\mathbf x(\theta)$ on the ellipse.
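For concreteness, here is a short numerical sketch (added here, not part of the original answer; the center, axis directions, and radii are made-up example values) that evaluates the parametrization and checks the analytic tangent against a finite-difference derivative:

```python
import math

def ellipse_point(c, u, v, a, b, theta):
    """Point x(theta) = c + (a cos theta) u + (b sin theta) v on the 3D ellipse."""
    return [c[i] + a * math.cos(theta) * u[i] + b * math.sin(theta) * v[i] for i in range(3)]

def ellipse_tangent(u, v, a, b, theta):
    """Tangent direction x'(theta) = -(a sin theta) u + (b cos theta) v."""
    return [-a * math.sin(theta) * u[i] + b * math.cos(theta) * v[i] for i in range(3)]

# Example data (made up): center, orthonormal in-plane axis directions, radii.
c = (1.0, -2.0, 3.0)
u = (1.0, 0.0, 0.0)   # major-axis direction (unit vector)
v = (0.0, 0.6, 0.8)   # minor-axis direction (unit vector, orthogonal to u)
a, b = 5.0, 2.0

theta = 0.7
tangent = ellipse_tangent(u, v, a, b, theta)

# Compare with a central finite difference of the parametrization.
h = 1e-6
p_plus = ellipse_point(c, u, v, a, b, theta + h)
p_minus = ellipse_point(c, u, v, a, b, theta - h)
numeric = [(p_plus[i] - p_minus[i]) / (2 * h) for i in range(3)]
print(max(abs(tangent[i] - numeric[i]) for i in range(3)))  # very close to zero
```

The surface normal from the question is not needed once $\mathbf u$ and $\mathbf v$ are known; it equals $\mathbf u \times \mathbf v$ up to sign.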
TITLE: Hierarchy of Mathematical Spaces QUESTION [5 upvotes]: I really got lost among the many different spaces in mathematics, and I am confused about what is a special case of what. For example, for a long time I knew vector spaces, then Hilbert spaces, which I thought of as infinite-dimensional vector spaces; then I discovered that there are topological vector spaces, that Hilbert spaces are actually just a special case of those, and that the vector spaces "usual" to me are actually algebraic vector spaces, etc. Is there some visual or written hierarchy for these many different spaces, so one can understand what the most fundamental object we start from is, and what is a special case of what? Something like: topology => topological space => if we equip it with a metric we get ... => if we equip it with a norm we get ... Thank you in advance. Edit Something like what is mentioned in Space (Wikipedia) or Connections between metrics, norms and scalar products.. (StackExchange), but with more details and subfields, and with the additional assumed structure stated. REPLY [5 votes]: This is a (probably incomplete) hierarchy of vector spaces from the point of view of functional analysis. Vector spaces: algebraic structure (addition and multiplication by scalars). Topological vector space: vector space with a topology such that addition and multiplication by scalars are continuous. Locally convex topological vector spaces: TVS in which $0$ has a basis of convex neighbourhoods. Fréchet space: LCTVS whose topology is derived from a translation-invariant metric, complete. Banach spaces: Fréchet space in which the metric is given by a norm. Reflexive Banach spaces: the canonical homomorphism between the space and its double dual is an isomorphism. Hilbert space: Banach space in which the norm comes from an inner product.
TITLE: Arranging Couples in a Row QUESTION [1 upvotes]: Three couples are sitting in a row. Compute the number of arrangements in which no person is sitting next to his or her partner. The answer is 240. From Wikipedia, this problem is called the ménage problem and has a formula called Touchard's formula: Let $M_n$ denote the number of seating arrangements for $n$ couples. Touchard (1934) derived the formula $M_n = 2 \cdot n! \sum_{k=0}^n (-1)^k \frac{2n}{2n-k} {2n-k\choose k} (n-k)!$ How does one manage to prove it? REPLY [6 votes]: There are $(2n)!$ ways of getting $2n$ people to sit in a row. If we wanted all of them to sit as couples there would be $n!$ ways of arranging the couples and $2^n$ ways of arranging within the couples. If we require $k$ named couples to sit together then this becomes $(2n-k)$ units to arrange, giving $(2n-k)!$ possibilities. There are $2^k$ ways to arrange within the named couples. But there are ${n \choose k}$ ways of naming the couples. So this gives $2^k{n \choose k}(2n-k)!$. That previous figure involves double counting, so we need to use inclusion-exclusion to answer your question and get $$(2n)! - 2^1{n \choose 1} (2n-1)!+ 2^2{n \choose 2} (2n-2)! - \cdots + (-2)^k{n \choose k} (2n-k)! + \cdots + (-2)^n n!$$ $$=\sum_{k=0}^n (-2)^k{n \choose k} (2n-k)!$$ and for $n=3$ this gives $720-720+288-48=240$.
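The inclusion-exclusion count can be verified by brute force for small $n$; this quick sketch (added here, not part of the original answer) compares the formula with a direct enumeration of all $(2n)!$ seatings:

```python
from itertools import permutations
from math import comb, factorial

def row_count_formula(n):
    """Inclusion-exclusion count of rows of n couples with no couple adjacent."""
    return sum((-2) ** k * comb(n, k) * factorial(2 * n - k) for k in range(n + 1))

def row_count_brute(n):
    """Direct enumeration: persons 2i and 2i+1 form couple i, so p // 2 identifies the couple."""
    count = 0
    for p in permutations(range(2 * n)):
        if all(p[i] // 2 != p[i + 1] // 2 for i in range(2 * n - 1)):
            count += 1
    return count

print(row_count_formula(3), row_count_brute(3))  # 240 240
```

Both counts agree with the value $240$ computed in the answer for $n=3$.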
TITLE: Limit of $\lim_{n\to\infty} n-ne\left(1-\frac{1}{n}\right)^n$? QUESTION [1 upvotes]: How do I calculate $$\lim_{n\to\infty} n-ne\left(1-\frac{1}{n}\right)^n$$ Edit: changed it to the correct question with $1-1/n$ REPLY [0 votes]: \begin{align} n-ne\left(1-\frac{1}{n}\right)^n &= n-ne\,e^{n\ln\left(1-\frac{1}{n}\right)} \\ &= n-ne\,e^{\left(-1-\frac{1}{2n}+O(\frac{1}{n^2})\right)} \\ &= n-n\left(1-\frac{1}{2n}+O(\frac{1}{n^2})\right) \\ &= \frac{1}{2}+O(\frac{1}{n}) \end{align} then $$\lim_{n\to\infty} n-ne\left(1-\frac{1}{n}\right)^n=\color{blue}{\frac12}$$
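As a numerical sanity check (added here, not part of the original answer), the expression can be evaluated for increasing $n$ and watched approaching $1/2$:

```python
import math

def f(n):
    """The expression n - n*e*(1 - 1/n)**n, which should tend to 1/2."""
    return n - n * math.e * (1.0 - 1.0 / n) ** n

for n in (10, 1000, 100000):
    print(n, f(n))
# The deviation from 1/2 shrinks like O(1/n), matching the expansion above.
```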
\begin{document} \title{Finite-Memory Prediction as Well as \\ the Empirical Mean} \author{Ronen~Dar and Meir~Feder,~\IEEEmembership{Fellow,~IEEE} \IEEEcompsocitemizethanks{ The work of Ronen Dar was supported by the Yitzhak and Chaya Weinstein research institute for signal processing. The work was also partially supported by a grant number 634/09 of the Israeli Science Foundation (ISF). This paper was presented in part at the IEEE International Symposium on Information Theory, St. Petersburg, Russia, August 2011. Ronen Dar and Meir Feder are with the Department of Electrical Engineering-Systems, Tel Aviv University, Ramat Aviv 69978, Israel (e-mail: ronendar@post.tau.ac.il ; meir@eng.tau.ac.il). \protect\\ }} \maketitle \begin{abstract} The problem of universally predicting an individual continuous sequence using a deterministic finite-state machine (FSM) is considered. The empirical mean is used as a reference as it is the constant that fits a given sequence within a minimal square error. With this reference, a reasonable prediction performance is the regret, namely the excess square-error over the reference loss, the empirical variance. The paper analyzes the tradeoff between the number of states of the universal FSM and the attainable regret. It first studies the case of a small number of states. A class of machines, denoted Degenerated Tracking Memory (DTM), is defined and the optimal machine in this class is shown to be the optimal among {\em all} machines for small enough number of states. Unfortunately, DTM machines become suboptimal as the number of available states increases. Next, the Exponential Decaying Memory (EDM) machine, previously used for predicting binary sequences, is considered. While this machine has poorer performance for small number of states, it achieves a vanishing regret for large number of states. Following that, an asymptotic lower bound of $O(k^{-2/3})$ on the achievable regret of any $k$-state machine is derived. 
This bound is attained asymptotically by the EDM machine. Furthermore, a new machine, denoted the Enhanced Exponential Decaying Memory machine, is shown to outperform the EDM machine for any number of states. \end{abstract} \begin{keywords} Universal prediction, individual continuous sequences, finite-memory, least-squares. \end{keywords} \section{Introduction} \label{Introduction} Consider a continuous-valued {\em individual} sequence $x_1,\ldots,x_n$, where each sample is assumed to be bounded in the interval $[a,b]$ but otherwise arbitrary with no underlying statistics. Suppose that at each time $t$, after observing $x_1^t=x_1,\ldots,x_t$, a predictor guesses the next outcome $\hat{x}_{t+1}$ and incurs a square error prediction loss $(x_{t+1}-\hat{x}_{t+1})^2$. A reasonable reference for the predictor is the best constant that fits the entire sequence within a minimal square error. This constant is the empirical mean $\bar{x}=\frac{1}{n}\sum_{t=1}^n x_t$, and its square error is the sequence's empirical variance $\frac{1}{n}\sum_{t=1}^n(x_t-\bar{x})^2$. Let $\hat{x}_{u,1},\ldots,\hat{x}_{u,n}$ denote the predictions of a (universal) predictor $U$. When the empirical mean is used as a reference, the excess loss of $U$ over the empirical mean, for an individual sequence $x_1^n$, is named the regret: \begin{equation} R(U,x_1^n)=\frac{1}{n}\sum_{t=1}^n(x_t-\hat{x}_{u,t})^2-\frac{1}{n}\sum_{t=1}^n(x_t-\bar{x})^2. \end{equation} In the setting discussed in this paper, the individual setting, the performance of $U$ is judged by the incurred regret of the worst sequence, i.e., \[\max_{x_1^n} R(U,x_1^n)~.\] Thus, the optimal $U$ should attain \[ \min_U \max_{x_1^n} R(U,x_1^n)~. 
\] When there are no constraints on the universal predictor, this optimal $U$ is the Cumulative Moving Average (CMA): \begin{equation} \label{CMA} \hat{x}_{t+1}=(1-\frac{1}{t+1})\hat{x}_t+\frac{1}{t+1}x_t, \end{equation} where the maximal regret tends to zero with the sequence length $n$ \cite{UniversalPredictionSurvey,UniversalSchemes}. Note that while the reference, the empirical mean predictor, is a constant and needs a single state memory, the CMA predictor is unconstrained and requires an ever growing amount of memory. A natural question arises - what happens if the universal predictor is constrained to be a finite $k$-state machine? This is the problem considered in this paper. Universal estimation and prediction problems where the estimator/predictor is a $k$-state machine have been explored extensively in past years. Cover \cite{CoverHypothesisTesting} studied a hypothesis testing problem where the tester has a finite memory. Hellman \cite{HellmanFiniteMemoryEstimation} studied the problem of estimating the mean of a Gaussian (or more generally stochastic) sequence using a finite state machine. This problem is closely related to ours and may be considered a stochastic version of it: if one assumes that the data is Gaussian, then predicting it with a minimal mean square error essentially boils down to estimating its mean. More recently, the finite-memory universal prediction problem for individual {\em binary} sequences with various loss functions was explored thoroughly in \cite{RajwanFederDCC00,MeronFederDCC04,MeronFederPaper04,IngberFederNonAsy,IngberFederAsy,IngberThesis}. The finite-memory universal portfolio selection problem (which dealt with continuous-valued sequences but considered a very particular loss function) was also explored recently \cite{TavoryFeder}. Yet, the basic problem of finite-memory universal prediction of {\em continuous-valued, individual} sequences with square error loss has been left unexplored so far.
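As a quick illustration (ours, not part of the paper), the CMA recursion \eqref{CMA} can be simulated in a few lines; initialized at $\hat{x}_1$, the recursion makes $\hat{x}_{t+1}$ a running average of $\hat{x}_1, x_1,\ldots,x_t$, so the predictions converge to the empirical mean and the regret is small for long sequences:

```python
def cma_predictions(x, xhat1=0.5):
    """CMA predictor: xhat_{t+1} = (1 - 1/(t+1)) * xhat_t + x_t / (t+1)."""
    preds = [xhat1]
    for t, xt in enumerate(x, start=1):
        preds.append((1.0 - 1.0 / (t + 1)) * preds[-1] + xt / (t + 1))
    return preds[:len(x)]  # prediction made before each sample arrives

def regret(x, preds):
    """Average square loss of the predictor minus that of the empirical mean."""
    n = len(x)
    mean = sum(x) / n
    loss = sum((xt - p) ** 2 for xt, p in zip(x, preds)) / n
    ref = sum((xt - mean) ** 2 for xt in x) / n
    return loss - ref

x = [0.1, 0.9, 0.4, 0.6, 0.5] * 200   # an arbitrary bounded sequence
print(regret(x, cma_predictions(x)))  # small, shrinking as n grows
```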
This paper provides a solution for this problem, presenting universal predictors that attain a vanishing regret when a large memory is allowed, while also maintaining an optimal tradeoff between the regret and the number of states used by the universal predictor. The outline of the paper is as follows. In section \ref{sec:ProblemFormulation} we formulate the discussed problem and present guidelines that will be used throughout this paper. Section \ref{chapter:LowNumOfStates} is devoted to universal prediction with a small number of states. We present the class of Degenerated Tracking Memory (DTM) machines, an algorithm for constructing the optimal DTM machine and a lower bound on the achievable regret. The optimal DTM machine is shown to be the optimal solution among {\em all} machines when a small enough number of states is available. Sections \ref{EDM}, \ref{sec:lowerBoundHigh} and \ref{sec:DesigningEEDM} are devoted to universal prediction using a large number of states. We start in section \ref{EDM} by proposing a known universal machine - the Exponential Decaying Memory (EDM) machine - proving asymptotic lower and upper bounds on its worst regret. In section \ref{sec:lowerBoundHigh} we present an asymptotic lower bound on the worst regret of any deterministic $k$-state machine, and in section \ref{sec:DesigningEEDM} we present a new machine, named the Enhanced Exponential Decaying Memory (E-EDM) machine, that can attain any desired vanishing regret while outperforming the EDM machine. In section \ref{sec:Summary} we summarize the results and discuss further research. \section{Preliminaries} \label{sec:ProblemFormulation} We consider universal predictors with continuous-valued input samples that are assumed to be bounded in the interval $[a,b]$. Given a sequential predictor, we would like to compare the square error incurred by its predictions to the loss incurred by the empirical mean - the best off-line constant predictor.
In other words, the reference class comprises all predictors that know the entire sequence in advance but can use only a single prediction value throughout. The best predictor among this class is the empirical mean, whose induced loss is the empirical variance. \begin{defn} For a given sequence $\{x_1,\ldots,x_n\}$, the excess loss of a universal predictor $U$ with predictions $\{\hat{x}_{1},...,\hat{x}_{n}\}$ over the best constant predictor, the empirical mean $\bar{x}=\tfrac{1}{n}\sum_{t=1}^n x_t$, is termed the regret of the sequence and is therefore given by \begin{equation} R(U,x_1^n)=\frac{1}{n}\sum_{t=1}^n(x_t-\hat{x}_{t})^2-\frac{1}{n}\sum_{t=1}^n(x_t-\bar{x})^2. \end{equation} \end{defn} We analyze the performance of a universal predictor $U$ by its worst sequence, i.e., by the sequence that induces maximal regret \begin{equation} R_{max}(U)=\max_{x_1^n}R(U,x_1^n), \end{equation} where we shall take the length of the sequence, $n$, to infinity. The notations $x_1^n$ and $\{{x}_t\}_{t=1}^n$ are used throughout this paper to denote $\{x_1,\ldots,x_n\}$. The universal predictors considered in this work are memory limited. A Finite-State Machine (FSM) is a commonly used model for sequential machines with a limited amount of storage. We focus here on time-invariant FSMs. \begin{defn} A deterministic finite-state machine is defined by: \begin{itemize} \item An array of $k$ states, where $\{S_1,\ldots,S_k\}$ denote the values assigned to the states. \item The transition of the machine between states is defined by a threshold set $\underline{T}_{~i}=\{T_{i,-m_{d,i}-1},T_{i,-m_{d,i}},\ldots,T_{i,m_{u,i}-1},T_{i,m_{u,i}}\}$ for each state $i$, where $m_{u,i}$ and $m_{d,i}$ are the maximum number of states allowed to be crossed on the way up and down from state $i$, respectively. Hence, if at time $t$ the machine is at state $i$ and the input sample $x_t$ satisfies $T_{i,j-1} \leq x_t < T_{i,j}$, the machine jumps $j$ states.
Note that the thresholds are non-intersecting, where the union of them covers the interval $[a,b]$ (where each input sample is assumed to be bounded in $[a,b]$). \item Equivalently, a transition function $\varphi(i,x)$, that is, the next state given that the current state and input sample are $i$ and $x$, can be defined \begin{equation*} \varphi(i,x) = \left\{ \begin{array}{rl} i-m_{d,i} &,T_{i,-m_{d,i}-1} \leq x < T_{i,-m_{d,i}}\\ i-m_{d,i}+1 &,T_{i,-m_{d,i}}\leq x < T_{i,-m_{d,i}+1}\\ \vdots\\ i+m_{u,i}-1 &, T_{i,m_{u,i}-2}\leq x < T_{i,m_{u,i}-1}\\ i+m_{u,i} &, T_{i,m_{u,i}-1}\leq x < T_{i,m_{u,i}} \end{array} \right. \end{equation*} \end{itemize} An FSM predictor works as follows - suppose at time $t$ the machine is at state $i$, then the prediction is $\hat{x}_t=S_i$, the value assigned to state $i$. On receiving the input sample $x_t$, the machine jumps to the next state $\varphi(i,x_t)$. The incurred loss for time $t$ is then $(x_t-\hat{x}_t)^2$. \end{defn} Throughout this paper we discuss predictors designed for input samples that are bounded in $[0,1]$. One can easily verify that any FSM that achieves a regret smaller than $R$ for any sequence bounded in $[0,1]$, can be transformed into an FSM that achieves a regret smaller than $R(b-a)^2$ for any sequence bounded in $[a,b]$, by applying the following simple transformation - each state value $S_i$ is transformed into $a+(b-a)S_i$ and each threshold set $\underline{T}_{~i}$ into $a+(b-a)\underline{T}_{~i}$. Thus, all the results presented in this paper can be expanded to the more general case, where each individual sequence is assumed to be bounded in $[a,b]$. To conclude this section, we provide the definition of a minimal circle and a Theorem that we will use throughout this paper. A version of this Theorem was first given in \cite[Theorem 6.5]{RajwanThesis} - the worst {\em binary} sequence for a given FSM with respect to (w.r.t.) 
the {\em log-loss} function endlessly rotates the machine in a minimal circle. Here we rederive the proof with emphasis on our case - {\em continuous} sequences and the {\em square-loss} function. \begin{defn} A cyclic closed set of $L$ states/predictions $\{\hat{x}_t\}_{t=1}^L$ is a circle if there are input samples $\{x_t\}_{t=1}^L$ that rotate the machine between these states. A minimal circle is a circle that does not contain the same state more than once. An example is depicted in Figure \ref{fig:minimalCircleExmple}. \end{defn} \begin{figure}[htb] \centering \includegraphics[width=0.5 \columnwidth,height=0.07\textheight]{minimal_circle.jpg} \caption[Minimal circle - example]{Five states minimal circle - arrows represent the jump at each time $t=1,\ldots,5$. \label{fig:minimalCircleExmple}} \end{figure} \begin{thm} \label{thm:problemForm} The sequence that induces maximal regret over a given FSM endlessly rotates the machine in a minimal circle. \end{thm} \begin{proof} Let $\{x_t\}_{t=1}^n$ be any sequence of samples and $\{\hat{x}_t\}_{t=1}^n$ the induced sequence of states/predictions on a $k$-state FSM, denoted $U$. Note that $\{\hat{x}_t\}_{t=1}^n$ can be broken into a sequence of minimal circles, denoted $\{c_i\}_{i=1}^m$, and a residual sequence of transient states (whose number is less than $k$). A simple algorithm that generates this sequence of minimal circles works as follows - first search for the first minimal circle in the sequence, that is, the first pair $i$ and $j$ that satisfy $\hat{x}_i=\hat{x}_{j+1}$ where all $\{\hat{x}_t\}_{t=i}^{j}$ are different. Take out these states and their corresponding input samples $\{{x}_t\}_{t=i}^{j}$ to form the first minimal circle $c_1$. Repeat this procedure to construct a sequence of minimal circles. Note that at most $k$ samples are left as a finite residual sequence.
Now, denote the length of the minimal circle $c_i$ by $n_i$ and the states and samples that form this circle by $\{\hat{x}_{i,t}\}_{t=1}^{n_i}$ and $\{{x}_{i,t}\}_{t=1}^{n_i}$, respectively. For now, assume that there is no residual sequence; then the regret of the complete sequence satisfies \begin{align} R(U,x_1^n)&=\frac{1}{n}\sum_{t=1}^n\left[(x_t-\hat{x}_{t})^2-(x_t-\bar{x})^2\right]\\ &=\frac{1}{n}\sum_{i=1}^{m}\sum_{t=1}^{n_i}\left[(x_{i,t}-\hat{x}_{i,t})^2-(x_{i,t}-\bar{x})^2\right]\\ &\leq\frac{1}{n}\sum_{i=1}^{m}\sum_{t=1}^{n_i}\left[(x_{i,t}-\hat{x}_{i,t})^2-(x_{i,t}-\bar{x}_i)^2\right]~, \end{align} where $\bar{x}_i=\sum_{t=1}^{n_i} x_{i,t}/{n_i}$ is the empirical mean of minimal circle $c_i$. Let the regret of the minimal circle $c_i$ be $R_i$; then we can write \begin{align} R(U,x_1^n)&\leq \frac{1}{n}\sum_{i=1}^{m}n_i R_i~. \end{align} Let the minimal circle with the maximal induced regret be $c_j$. Then this regret satisfies $R_j\geq R(U,x_1^n)$. This is true since otherwise, that is, if all $R_i$ satisfied $R_i< R(U,x_1^n)$, we would get \begin{align} R(U,x_1^n)&\leq \frac{1}{n}\sum_{i=1}^{m}n_i R_i\\ &< R(U,x_1^n)~, \end{align} which is clearly wrong. Thus, by further noting that for $n\gg k$ the regret induced by the residual sequence is negligible, and that there is a finite number of minimal circles in a given FSM, the theorem follows. \end{proof} \section{Designing an optimal FSM with a small number of states} \label{chapter:LowNumOfStates} In this section we search for the best universal predictor with a relatively small number of states. We start by presenting the optimal machines for a single, two and three states. The optimality is in the sense of achieving the lowest maximal regret using the allowed number of states. We then define in subsection \ref{subsec:theDTMClass} a new class of machines termed the Degenerated Tracking Memory (DTM) machines. This class contains the optimal solutions presented for a single, two and three states.
In subsection \ref{sec1} a schematic algorithm for constructing the optimal DTM machine is given. A lower bound on the achievable (maximal) regret of any DTM machine is proven in subsection \ref{subsec:lowerBoundDTM}. We conclude this section in subsection \ref{DTM:conc} by presenting the tradeoff between the number of states and the regret achieved by the optimal DTM machine. We further discuss the fact that up to a certain number of states, this machine is optimal not only among the class of DTM machines, but among \textbf{all} machines. \subsection{Single state universal predictor} \label{single} The problem of finding the optimal single state machine has a trivial solution: by symmetry, the optimal state is assigned the value $\frac{1}{2}$ and the worst sequence, all $1$'s or all $0$'s, incurs a (maximal) regret of $R=\frac{1}{4}$. \subsection{Two states universal predictor}\label{two} \begin{figure}[htb] \centering \includegraphics[width=0.75 \columnwidth,height=0.075\textheight]{twoStates_1.jpg} \caption[Two states machine]{Two states machine described geometrically over the interval $[0,1]$. \label{fig:twoStates_1}} \end{figure} A two states machine has two possible minimal circles: a zero-step circle (staying at the same state) and a two-step circle (toggling between the two states). The lowest maximal regret is achieved when the (maximal) regrets of both minimal circles are equal. Thus, let the lower state be assigned the value $S_1=\sqrt{R}$ with transition threshold $2\sqrt{R}$, and the upper state the value $S_2=1-\sqrt{R}$ with transition threshold $1-2\sqrt{R}$. In that case, the regret of the zero-step circles is no more than $R$. Now, let us analyze the regret induced by a sequence $x_1,~x_2,~x_1,~x_2,~...$ that endlessly rotates the machine in the two-step minimal circle.
Since the regret is convex in the input samples, the maximal regret is attained at the edges of the transition regions, that is, when $x_1=0$ or $x_1=1-2\sqrt{R}$ induces the down-step and $x_2=1$ or $x_2=2\sqrt{R}$ induces the up-step (assuming that the machine starts at the highest state). Therefore there are four combinations that may bring the regret of this minimal circle to its maximum. By computing these regrets one gets that the sequence $0,1,0,1,...$ incurs the highest regret: $R(U,x_1^n)=R-2\sqrt{R}+3/4$. Equating this regret with $R$ results in $R=(\frac{3}{8})^2$, and the maximal regrets of both minimal circles are equalized. Therefore the optimal two states machine can be summarized as follows: \begin{itemize} \item State values are: \[S_1=\frac{3}{8}~~~,~~~ S_2=\frac{5}{8}\] \item The states transition function satisfies: \begin{align*} &\varphi(1,x) = \left\{ \begin{array}{rl} 1 & ~~\text{if } ~~x <~ \tfrac{3}{4}\\ 2 & ~~\text{otherwise} \end{array} \right. \\ &\varphi(2,x) = \left\{ \begin{array}{rl} 1 & ~~\text{if } ~~x <~ \tfrac{1}{4}\\ 2 & ~~\text{otherwise} \end{array} \right. \end{align*} \end{itemize} The worst sequence that endlessly rotates the machine in one of the minimal circles incurs a (maximal) regret of $R=(\frac{3}{8})^2\approx 0.14$. Thus, if the desired regret is smaller than $(\frac{3}{8})^2$, we need to design a machine with more than two states. \subsection{Three states universal predictor} \label{three} \begin{figure}[ht] \centering \includegraphics[width=0.75 \columnwidth,height=0.075\textheight]{threeStates_1.jpg} \caption[Three states machine]{Three states machine described geometrically over the $[0,1]$ axis. \label{fig:threeStates_1}} \end{figure} With the same considerations as for the two states machine, the lowest state is assigned $S_1=\sqrt{R}$ and the upper state $S_3=1-\sqrt{R}$. By symmetry, the middle state is assigned $S_2=\frac{1}{2}$.
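As a quick numerical sanity check (a sketch of our own, not part of the paper's construction), simulating the two states machine above on the worst toggling sequence recovers the regret $(\frac{3}{8})^2$ exactly:

```python
def run_two_state(xs, s0=0):
    """Simulate the optimal two-state machine: values 3/8 and 5/8,
    up-jump threshold 3/4 from the lower state, down-jump threshold
    1/4 from the upper state. Returns the predictions."""
    values = (3 / 8, 5 / 8)
    preds, s = [], s0
    for x in xs:
        preds.append(values[s])
        if s == 0:
            s = 1 if x >= 3 / 4 else 0   # up-jump threshold 3/4
        else:
            s = 0 if x < 1 / 4 else 1    # down-jump threshold 1/4
    return preds

def regret(xs, preds):
    """Average square loss of the machine minus that of the best
    constant predictor (the empirical mean)."""
    n, mean = len(xs), sum(xs) / len(xs)
    return (sum((x - p) ** 2 for x, p in zip(xs, preds))
            - sum((x - mean) ** 2 for x in xs)) / n

toggling = [1.0, 0.0] * 1000   # rotates the machine in the two-step circle
```

Running `regret(toggling, run_two_state(toggling))` returns $9/64=(\frac{3}{8})^2$.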
We also note that if a two-state jump is allowed from the lower state to the upper state, the sequence $0,1,0,1,...$ toggles the machine between these states. In that case, as was done for the two states machine, the incurred regret is no less than $(\frac{3}{8})^2$. Hence, only a single state jump is allowed; otherwise the three states machine has no gain over the two states machine. Thus, in the same manner as for the two states machine, one gets that the optimal three states machine satisfies: \begin{itemize} \item State values are: \[S_1=0.3285~~~,~~~S_2=0.5000~~~,~~~ S_3=0.6715\] \item The states transition function satisfies: \begin{align*} &\varphi(1,x) = \left\{ \begin{array}{rl} 1 & ~~\text{if } ~~x <~ 0.6570\\ 2 & ~~\text{otherwise} \end{array} \right. \\ &\varphi(2,x) = \left\{ \begin{array}{rl} 1 & ~~\text{if } ~~x <~ 0.1715\\ 2 & ~~\text{if } ~~0.1715 \leq ~~x <~ 0.8285\\ 3 & ~~\text{otherwise} \end{array} \right. \\ &\varphi(3,x) = \left\{ \begin{array}{rl} 2 & ~~\text{if } ~~~x <~ 0.3430\\ 3 & ~~\text{otherwise} \end{array} \right. \end{align*} \end{itemize} The worst sequence that endlessly rotates the machine in one of the minimal circles incurs a (maximal) regret of $R=0.1079$. Figure \ref{fig:threeStates_2} depicts the states and the transition thresholds over the interval $[0,1]$. Note the {\em hysteresis} characteristics of the machine, providing ``memory'' or ``inertia'' to the finite-state predictor: an extreme input sample is needed for the machine to jump from the current state, that is, to change the prediction value. \begin{figure}[htb] \centering \includegraphics[width=1\columnwidth]{Hysteresis_new.jpg} \caption[Optimal three states machine]{Optimal three states machine described geometrically over the interval $[0,1]$ along with the transition thresholds of the lower state (dashed line), middle state (dotted line) and upper state (solid line). The X's represent the value assigned to each state.
\label{fig:threeStates_2}} \end{figure} \subsection{The class of DTM machines} \label{subsec:theDTMClass} We now want to find a more general solution for the best universal predictor with a small number of states. We start by defining a new class of machines and then provide an algorithm to construct the optimal machine among this class. This optimality is in the sense of achieving the lowest maximal regret using the allowed number of states. We prove the optimality of our algorithm among the class of DTM machines. We further show that for a small enough number of available states, this optimal DTM machine is also optimal among {\textbf {all}} machines. \begin{defn} The class of all $k$-states \textbf{\em Degenerated Tracking Memory} (DTM) machines is of the form: \begin{itemize} \item An array of $k$ states: $\{S_{k_l},...,S_{1}\}$ are the states in the lower half (in descending order, where $S_1$ is the nearest state to $\frac{1}{2}$ and $S_i\leq \frac{1}{2}$ for all $1\leq i\leq k_l$), and $\{\bar{S}_{1},...,\bar{S}_{k_u}\}$ are the states in the upper half (in ascending order, where $\bar{S}_1$ is the nearest state to $\frac{1}{2}$ and $\bar{S}_i> \frac{1}{2}$ for all $1\leq i\leq k_u$), where $k_l+k_u=k$. \item The maximum down-step in the lower half, i.e., from states $\{S_{k_l},...,S_{1}\}$, is no more than a single state jump. The maximum up-step in the upper half, i.e., from states $\{\bar{S}_1,...,\bar{S}_{k_u}\}$, is no more than a single state jump. \item A transition between the lower and upper halves is allowed only from and to the nearest states to $\frac{1}{2}$, $S_1$ and $\bar{S}_1$ (implying that the maximum up-jump (down-jump) from $S_1$ ($\bar{S}_1$) is a single state jump). \end{itemize} \end{defn} \label{def1} An example of a DTM machine is depicted in Figure \ref{fig:DtmMachine_example}. Note that the optimal solutions presented above for one, two and three states belong to the class of DTM machines.
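As an illustration (our own sketch, with the published constants of the three states machine copied in), the stated maximal regret $R=0.1079$ is attained, up to rounding of those constants, by a sequence toggling between $S_1$ and $S_2$ at the edges of the transition regions:

```python
S = [0.3285, 0.5000, 0.6715]   # state values of the three-state machine

def step(s, x):
    """Transition function of the optimal three-state machine."""
    if s == 0:
        return 1 if x >= 0.6570 else 0
    if s == 1:
        if x < 0.1715:
            return 0
        return 2 if x >= 0.8285 else 1
    return 1 if x < 0.3430 else 2

def regret3(xs, s0=0):
    preds, s = [], s0
    for x in xs:
        preds.append(S[s])
        s = step(s, x)
    mean = sum(xs) / len(xs)
    return (sum((x - p) ** 2 for x, p in zip(xs, preds))
            - sum((x - mean) ** 2 for x in xs)) / len(xs)

# x = 1 induces the up-jump from S_1; x just below the threshold 0.1715
# induces the down-jump from S_2 -- the boundary combination.
worst = [1.0, 0.1715 - 1e-9] * 500
```

Here `regret3(worst)` evaluates to about $0.1078$, matching the stated $R=0.1079$ to the precision of the rounded constants, while e.g. the plain toggle $1,0,1,0,\ldots$ incurs only about $0.1005$.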
\begin{figure}[ht] \centering \includegraphics[width=1 \columnwidth,height=0.07\textheight]{DtmMachine_example.jpg} \caption[Example of a DTM machine]{An example of a DTM machine - note that a transition between the lower and upper halves is allowed only from (and to) $S_1$ and $\bar{S}_{1}$. Arrows represent the maximum up or down jumps from each state. \label{fig:DtmMachine_example}} \end{figure} Thus, two constraints define the class of DTM machines: no more than a single state down-step and up-step from all states in the lower and upper halves, respectively, and a transition between these halves is allowed only from and to the nearest states to $\frac{1}{2}$, $S_1$ and $\bar{S}_1$. These constraints facilitate the algorithm for constructing the optimal DTM machine. \subsection{Constructing the optimal DTM machine} \label{sec1} We now present a schematic algorithm for constructing the optimal DTM machine. Given a desired regret $R_d$, the task of finding the optimal DTM machine can be viewed as a covering problem, that is, placing the smallest number of states in the interval $[0,1]$ while achieving a regret smaller than $R_d$ for all sequences. We note that in an optimal $k$-states machine, the upper half of the states is the mirror image of the lower half. This symmetry property arises from the fact that any sequence $\{x_1,...,x_n\}$ can be transformed into the symmetric sequence $\{1-x_1,...,1-x_n\}$. Both sequences induce the same regret if full symmetry between the lower and upper halves is applied. Thus, assuming that the lower half is optimal in the sense of achieving the desired regret with the smallest number of states, the upper half must be its reflected image to achieve optimality. Note that this property allows us to design only the lower half of the optimal DTM machine. The algorithm we present here recursively finds the optimal states' allocation and their transition thresholds.
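The mirror-symmetry argument can be illustrated concretely (a sketch of ours, using the two states machine of subsection \ref{two}, which happens to be its own mirror image): running the machine on a sequence from the lower state and on the reflected sequence from the upper state yields identical regrets.

```python
import random

VALUES = (3 / 8, 5 / 8)   # the self-mirrored optimal two-state machine

def step2(s, x):
    if s == 0:
        return 1 if x >= 3 / 4 else 0
    return 0 if x < 1 / 4 else 1

def regret2(xs, s0):
    preds, s = [], s0
    for x in xs:
        preds.append(VALUES[s])
        s = step2(s, x)
    mean = sum(xs) / len(xs)
    return (sum((x - p) ** 2 for x, p in zip(xs, preds))
            - sum((x - mean) ** 2 for x in xs)) / len(xs)

random.seed(0)
xs = [random.random() for _ in range(1000)]
mirrored = [1 - x for x in xs]   # the symmetric sequence {1 - x_t}
```

With the machine started in mirrored states, `regret2(xs, 0)` and `regret2(mirrored, 1)` agree to floating-point precision, as the symmetry property predicts.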
Suppose that states $\{S_{i-1},...,S_{1}\}$ in the lower half (in descending order, where $S_1$ is the nearest state to $\frac{1}{2}$) and their transition threshold set $\{\underline{T}_{~i-1},...,\underline{T}_{1}\}$ are given and satisfy a regret smaller than $R_d$ for all minimal circles between them. Our algorithm generates the optimal $S_i$, i.e., the optimal allocation for state $i$, and a threshold set $\underline{T}_{~i}$ satisfying a regret smaller than $R_d$ for all minimal circles starting at that state. We start by finding $S_1$, the nearest state to $\frac{1}{2}$ in the lower half, in the optimal DTM machine. \begin{lem} \label{lem4} In the optimal $k$-states DTM machine for a given desired regret $R_d$, $S_1=\frac{1}{2}$ if $k$ is odd and \[S_{1}=\max \Big\{1-\sqrt{R_d+\tfrac{1}{4}}~,~2+\sqrt{R_d}-2\sqrt{R_d+\sqrt{R_d}+\tfrac{1}{2}}\Big\}\] if $k$ is even. \end{lem} \begin{proof} By symmetry, $S_1=\frac{1}{2}$ in the optimal DTM machine with an odd number of states; otherwise there are more states in one of the halves and the symmetry property presented above does not hold. For even $k$, the nearest state to $\frac{1}{2}$ in the upper half, $\bar{S}_1$, is the mirror image of $S_1$, hence $\bar{S}_{1}=1-S_{1}$. By definition, only a single state up-jump is allowed from $S_1$ and only a single state down-jump is allowed from $\bar{S}_1$. Thus, the machine can be rotated between these states, constructing a two-step minimal circle. Denote by $x_1$ and $x_2$ the samples that induce the up and down jumps, respectively. These samples must satisfy the transition thresholds, i.e., \begin{align} &S_{1}+\sqrt{R_d} \leq ~x_1~\leq 1 \nonumber\\ &0 \leq ~x_2~\leq \bar{S}_{1}-\sqrt{R_d}=1-S_{1}-\sqrt{R_d}~. \end{align} Since the regret is a convex function of the input samples, the regret of a minimal circle is brought to its maximum by samples at the edges of the constraint regions.
Thus, in a two-step minimal circle there are four combinations that may maximize the regret and need to be analyzed. By examining the regrets in all four cases we get that $S_{1}$ must satisfy the two constraints $S_1\geq 1-\sqrt{R_d+\tfrac{1}{4}}$ and $S_1 \geq 2+\sqrt{R_d}-2\sqrt{R_d+\sqrt{R_d}+\tfrac{1}{2}}$.\\ \end{proof} Note that $S_1$ must satisfy $S_{1}\leq\tfrac{1}{2}$, which does not hold for small enough $R_d$, implying a lower bound on the achievable regret of the optimal DTM machine (see subsection \ref{subsec:lowerBoundDTM}).\\ Now, after presenting the starting state of the algorithm, we present the complete algorithm for constructing the optimal DTM machine: \begin{enumerate} \item {\em Set $i=1$ and the corresponding starting state $S_{1}$ (see Lemma \ref{lem4}). Set the maximum up-step from the starting state $m_{u,1}=1$. \item Set the next state index $i=i+1$. \item Set the maximal up-step from state $i$ to $m=1$. Find the minimal value that can be assigned to that state with a valid threshold set (below we present an algorithm for finding a valid threshold set). Denote this value by $S_{i,m}$ and the threshold set by $\underline{T}_{~i,m}$. Repeat this procedure for all $m=1,\ldots,i-1$ (a jump of $i-1$ states from state $i$ brings the machine to state $S_1$; remember that a higher jump is not allowed in a DTM machine). \item Choose the minimal $S_{i,m}$ among all possible maximum up-steps, that is, set \[m_{u,i}=\arg\min_{1\leq m \leq i-1}S_{i,m}\] \[S_i=S_{i,m_{u,i}}\] \[\underline{T}_{~i}=\underline{T}_{~i,m_{u,i}}~.\] Thus we have set the parameters of state $i$: assigned value $S_i$, maximum up-jump of $m_{u,i}$ states and transition thresholds $\underline{T}_{~i}$. \item If $S_i> \sqrt{R_d}$ go to step (2).
\item Set the upper half of the states to be the mirror image of the lower half.}\\ \end{enumerate} \textbf{Explanations and Comments}: \begin{itemize} \item For a given desired regret $R_d$, one should run the algorithm presented above twice: once for an odd and once for an even number of states, with the corresponding starting state $S_1$. The optimal DTM machine is the one with the fewer states of the two (they differ by a single state). \item Note that the transition thresholds for state $1$ need to be given: a single state up-jump if the input sample satisfies $x\geq S_1+\sqrt{R_d}$ and a single state down-jump if the input sample satisfies $x\leq S_1-\sqrt{R_d}$. These are the optimal transition thresholds since the wider the transition interval, the smaller the number of possible worst sequences in other minimal circles. Furthermore, with these transition thresholds the maximal regret of a zero-step minimal circle (staying at $S_1$) is $R_d$. \item A valid threshold set for state $i$ is a set of transition thresholds that satisfies a regret smaller than $R_d$ for all minimal circles starting at state $i$.\\ \end{itemize} To complete the construction of the optimal DTM machine, we still need to present an algorithm for finding the optimal transition thresholds at each iteration (Step $(3)$). Suppose that states $\{S_{i-1},...,S_{1}\}$ in the lower half and their transition threshold set $\{\underline{T}_{~i-1},...,\underline{T}_{1}\}$ are given and satisfy a regret smaller than $R_d$ for all minimal circles between them. Suppose also that $S_i$ and $m$ are given, where $m$ denotes the maximum up-step from state $i$. Note that there are $m+1$ minimal circles starting at state $i$ (depicted in Figure \ref{fig:DtmMachine_1}): \begin{itemize} \item A zero-step minimal circle (staying at state $i$). \item For any $2 \leq j \leq m+1$, a minimal circle of $j$ steps: one up-step (of $j-1$ states) and $j-1$ down-steps (of a single state each).
\end{itemize} Also note that these $m+1$ minimal circles are within the lower half, that is, within the states $\{S_{i-1},...,S_1\}$ (see Step $(3)$). \begin{figure}[ht] \centering \includegraphics[width=1 \columnwidth,height=0.07\textheight]{DtmMachine_1.jpg} \caption[DTM machine minimal circle]{$m+1$ possible minimal circles starting at $S_i$, where $m$ is the maximum up-step from state $i$. \label{fig:DtmMachine_1}} \end{figure} Let $x_1^j$ be the samples that endlessly rotate the machine in a minimal circle, where $x_1$ induces the up-step from state $i$ and $x_2^j$ induce the down-steps. Since the regret is convex in the input samples, the samples $x_2^j$ that bring the regret to its maximum are at the edges of the transition regions, that is, they satisfy \begin{equation} \label{eq:comb} x_t=\hat{x}_t-\sqrt{R_d} \text{~~or~~} x_t=0 ~~~\forall~~~ 2 \leq t\leq j~. \end{equation} Thus, there are $2^{j-1}$ combinations of $x_2^j$ that may maximize the regret. Now, given $x_2^j$, Lemma \ref{lem:boundaryUpSample} below provides upper ($C_h(x_2^j)$) and lower ($C_l(x_2^j)$) bounds on $x_1$ such that in this region the induced regret is smaller than $R_d$. Therefore, by computing these bounds for all $2^{j-1}$ combinations of $x_2^j$, one may find a region for $x_1$ in which the regret is lower than $R_d$ for all of these combinations. This region is given by \begin{equation} \tilde{C_l}=\max_{x_2^j\in A_j} C_l(x_2^j)\leq x_1\leq \min_{x_2^j\in A_j} C_h(x_2^j)=\tilde{C_h} \end{equation} where $A_j$ is the set of $2^{j-1}$ combinations of $x_2^j$. Note that this interval is valid only if $\tilde{C_l}\leq \tilde{C_h}$. In that case the maximal regret of this minimal circle is guaranteed to be lower than $R_d$, and we conclude that the transition thresholds for an up-jump of $j-1$ states from state $i$ must satisfy \begin{align} &\tilde{C_l}\leq T_{i,j-2}~,\nonumber\\ &T_{i,j-1}\leq \tilde{C_h}~.
\end{align} Going over all minimal circles, $2\leq j\leq m+1$, yields upper and lower bounds, $\tilde{C_l}$ and $\tilde{C_h}$, for each transition threshold. Thus, if a threshold set can be found that satisfies all bounds and covers the interval $[S_i+\sqrt{R_d}~,~1]$ (that is, $T_{i,m}\geq 1$ and $T_{i,0}\leq S_i+\sqrt{R_d}$), we say that valid transition thresholds for state $i$ were found; otherwise there are no valid thresholds for the given $S_i$ and $m$. \begin{lem} \label{lem:boundaryUpSample} Consider a sequence $x_1^j$ that rotates a DTM machine in a minimal circle starting at state $i$. Given states $\{S_i,\ldots,S_{i-j+1}\}$, the regret is smaller than $R_d$ if $x_1$ satisfies: \[a(x_2^j)-b(x_2^j) \leq x_1 \leq a(x_2^j)+b(x_2^j)~,\] where: \begin{align} \label{eqT1} &a(x_2^j)=S_i+\sum_{t=2}^j(S_i-x_t)~, \nonumber\\ &b(x_2^j)=j\sqrt{R_d-\frac{1}{j}\sum_{t=2}^j(S_{i-j+t-1}-S_i)(S_{i-j+t-1}+S_i-2x_t)}~. \end{align} \end{lem} \begin{proof} Analyzing the regret of the sequence and requiring it to be smaller than $R_d$ yields the constraint on $x_1$: \begin{equation} \frac{1}{j}\sum_{t=1}^j[(x_t-\hat{x}_t)^2-(x_t-\bar{x})^2]\leq R_d~, \end{equation} where $\hat{x}_1=S_{i}$ and $\hat{x}_t=S_{i-j+t-1}$ for $2\leq t\leq j$. \end{proof} \bigskip We can now present the algorithm for finding a threshold set for state $i$, given $S_i$ and the maximum up-step $m$: \begin{enumerate} \item {\em Find $C_{j,l}$ and $C_{j,h}$ for all $2 \leq j\leq m+1$ as follows: \begin{align} &C_{j,l}=\max_{x_2^j\in A_j} ~\bigg\{a(x_2^j)-b(x_2^j)\bigg\}~,\nonumber\\ &C_{j,h}=\min_{x_2^j\in A_j} ~\bigg\{a(x_2^j)+b(x_2^j)\bigg\}~, \label{eq:algoConstrain} \end{align} where $a(x_2^j)$ and $b(x_2^j)$ are given in \eqref{eqT1} and $A_j$ is the set of $2^{j-1}$ combinations of $x_2^j$: \begin{equation} x_t=S_{i-j+t-1}-\sqrt{R_d} \text{~~or~~} x_t=0 ~~~\forall~~~ 2 \leq t\leq j~.
\end{equation} \item If one of the following constraints does not hold, return and declare that there are no valid thresholds: \begin{align} &C_{j,l}< C_{j,h} \qquad \forall ~2\leq j\leq m ~,\nonumber\\ &C_{j+1,l}\leq C_{j,h} \qquad \forall ~2\leq j\leq m ~,\nonumber\\ &C_{2,l}\leq S_i+\sqrt{R_d}~, \nonumber\\ &1< C_{m+1,h}~. \end{align} \item Find valid monotonically increasing transition thresholds $\{T_{i,0},\ldots,T_{i,m}\}$ that satisfy: \begin{align} &C_{j+1,l}\leq T_{i,j-1}\leq C_{j,h} \qquad \forall ~2\leq j\leq m ~,\nonumber\\ &C_{2,l}\leq T_{i,0}\leq S_i+\sqrt{R_d}~, \nonumber\\ &1< T_{i,m}\leq C_{m+1,h}~. \end{align} \item Set the transition thresholds for the down-step to $\{0,S_i-\sqrt{R_d}\}$.}\\ \end{enumerate} \textbf{Explanations and Comments}: \begin{itemize} \item $C_{j,l}< C_{j,h}$ must hold; otherwise there is no $x_1$ that satisfies a regret smaller than $R_d$ for all $2^{j-1}$ combinations of $x_2^j$. \item $C_{j+1,l}\leq C_{j,h}$ must hold; otherwise there is no $T_{i,j-1}$ satisfying both $T_{i,j-1}\leq C_{j,h}$ and $C_{j+1,l}\leq T_{i,j-1}$. \item $T_{i,0}\leq x_1 <T_{i,1}$ induces a single state up-jump; hence $T_{i,0}$ must satisfy $C_{2,l}\leq T_{i,0}$. Also, $T_{i,0}$ must satisfy $T_{i,0}\leq S_i+\sqrt{R_d}$ to ensure a regret smaller than $R_d$ for the zero-step minimal circle (staying at state $i$). \item $T_{i,m-1}\leq x_1 <T_{i,m}$ induces an $m$ states up-jump; hence $T_{i,m}$ must satisfy $T_{i,m}\leq C_{m+1,h}$. The transition thresholds must cover the interval $[S_i+\sqrt{R_d},1]$, therefore $T_{i,m}$ must also satisfy $1< T_{i,m}$. \item This algorithm provides a threshold set given the states $\{S_{i-1},...,S_{1}\}$ and $m$, the maximum up-step from state $i$. It also requires $S_i$. Recalling the algorithm for finding $S_i$: we search for the minimal $S_{i,m}$ with a valid threshold set for a given $m$.
Thus, one can start from a high $S_{i,m}$ and reduce it until no valid threshold set can be found.\\ \end{itemize} \begin{thm} \label{thm:optimalDTM} The algorithm given in this section constructs the optimal DTM machine for a given desired regret $R_d$, i.e., the machine with the lowest number of states among all DTM machines with maximal regret smaller than $R_d$. \end{thm} \begin{proof} In each iteration the algorithm finds the minimal $S_i$ with a valid threshold set. Note that in DTM machines the transition thresholds for up-steps, $\{T_{i,0},...,T_{i,m_{u,i}}\}$, have no impact on the regrets of minimal circles other than those starting at state $i$. Thus, given $S_i$, the optimality of these thresholds is only in the sense of satisfying a regret smaller than $R_d$ for these minimal circles. As for the down thresholds: an input sample $x$ induces a down-step from state $s$ if it satisfies $0\leq x <T_{s,-1}$. The smaller $T_{s,-1}$ is for the states $s=i-1,...,1$, the smaller the achievable $S_i$ with a valid threshold set (the constraints are more relaxed). We choose the smallest $T_{s,-1}$ for all states, i.e., $S_s-\sqrt{R_d}$. Furthermore, each $S_s$ is chosen to be minimal. We further show that optimality is achieved when assigning the minimal value to all states. Suppose that $\{S_{\lceil \tfrac{k}{2}\rceil},...,S_1\}$ in the lower half are the outputs of the algorithm for a given desired regret $R_d$. Let us examine the case where the value assigned to state $i-1$ is $\tilde{S}_{i-1}$, satisfying $\tilde{S}_{i-1}>S_{i-1}$. We note that the value assigned to state $i-1$ has no impact on the optimality of states $i-2,...,1$. Furthermore, the constraints on the up thresholds of state $i$ depend only on $S_s-S_i$ or $S_s^2-S_i^2$, where $s=i-1,...,1$ (applying $x_t=0$ or $x_t=S_{i-j+t-1}-\sqrt{R_d}$ in Equation \eqref{eqT1}).
Since $S_i$ is the minimal value with valid thresholds for $\{S_{i-1},...,S_1\}~$, the minimal value with valid thresholds for $\{\tilde{S}_{i-1},S_{i-2},...,S_1\}$ is not smaller than $S_i$. This holds for all states $\lceil \tfrac{k}{2}\rceil,...,i$, and therefore choosing $\tilde{S}_{i-1}$ does not reduce the number of states. Thus, in all aspects, optimality is achieved at each iteration of the algorithm by assigning state $i$ the minimal value $S_i$, the down thresholds $\{0,S_i-\sqrt{R_d}\}$ and valid up thresholds. \end{proof} \subsection{Lower Bound on the Maximal Regret of DTM Machines} \label{subsec:lowerBoundDTM} Here we show that no DTM machine can attain a maximal regret lower than $(\tfrac{1}{6})^2$. The constraints imposed on this class of machines (as described in subsection \ref{subsec:theDTMClass}) yield this lower bound. \begin{thm} The maximal regret of any DTM machine is lower bounded by \[R=(\tfrac{1}{6})^2\approx 0.0278~.\] \end{thm} \begin{proof} In an optimal $k$-states DTM machine, where $k$ is even, the starting state $S_1$ must satisfy \begin{equation} \label{eq:lowerBound} S_1=\max \{1-\sqrt{R_d+\tfrac{1}{4}},2+\sqrt{R_d}-2\sqrt{R_d+\sqrt{R_d}+\tfrac{1}{2}}\}\leq \tfrac{1}{2}, \end{equation} implying that if the desired regret satisfies $\sqrt{R_d}<\frac{1}{6}$, then $S_1>\frac{1}{2}$ and no DTM machine with an even number of states can be formed. We conclude that a DTM machine with an odd number of states cannot be formed either (since otherwise a sub-optimal DTM machine with an even number of states could have been formed by adding another state). \end{proof} \subsection{Conclusions} \label{DTM:conc} In Figure \ref{fig:RvsNumStates1} we present the number of states vs.\ the maximal regret of the machines constructed by the algorithm presented above. Note how the optimal machine cannot attain a maximal regret smaller than $1/36$.
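The binding constraint can be checked numerically with a short sketch (our own code; the formula is the one from Lemma \ref{lem4} and Equation \eqref{eq:lowerBound}): the even-$k$ starting state $S_1$ reaches exactly $\frac{1}{2}$ at $R_d=1/36$ and exceeds it for smaller $R_d$.

```python
import math

def s1_even(R_d):
    """Starting state S_1 of an even-k DTM machine (Lemma lem4):
    the maximum of the two constraints derived from the two-step circle."""
    c1 = 1 - math.sqrt(R_d + 1 / 4)
    c2 = 2 + math.sqrt(R_d) - 2 * math.sqrt(R_d + math.sqrt(R_d) + 1 / 2)
    return max(c1, c2)
```

Evaluating `s1_even(1/36)` gives $0.5$, while any $R_d<1/36$ pushes $S_1$ above $\frac{1}{2}$, so no valid DTM machine exists below the bound.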
\begin{figure}[h] \includegraphics[width=1\columnwidth,height=0.25\textheight]{Res_smallNumOfStates_withColors_noEDM.jpg} \caption[Regret vs. number of states - optimal DTM machine]{Performance of the optimal DTM machine. \label{fig:RvsNumStates1}} \end{figure} In this section we started by presenting the optimal solutions for machines with one, two and three states. These solutions belong to the class of DTM machines. Furthermore, one can verify that for these numbers of states our algorithm generates machines identical to these optimal solutions. Thus, in addition to Theorem \ref{thm:optimalDTM}, we can conclude that up to a certain number of states, our algorithm generates the optimal solution among {\textbf {all}} machines. This number, however, is yet unknown. \section{The Exponential Decaying Memory machine} \label{EDM} In the previous section we studied the case of tracking the empirical mean when a small number of states is available. In the rest of the paper we examine the case of a large number of states. We start by proposing the Exponential Decaying Memory (EDM) machine. This machine was presented in \cite{MeronThesis} as a universal predictor for individual {\em binary} sequences. It was further shown that with $k$ states it achieves an asymptotic regret of $O(k^{-2/3})$ compared to the class of constant predictors, w.r.t.\ both the log-loss (code length) and the square-error loss. Here we start by describing and adjusting the EDM machine to our case, predicting individual {\em continuous} sequences. \begin{defn} The \textbf{\emph{Exponential Decaying Memory}} machine is defined by: \begin{itemize} \item $k$ states $\{S_1,...,S_{k}\}$ distributed uniformly over the interval $[k^{-1/3},1-k^{-1/3}]$.
\item The transition function between states satisfies: \begin{equation} \label{EDMeq1} \hat{x}_{t+1}=Q(\hat{x}_t(1-k^{-2/3})+x_tk^{-2/3})~, \end{equation} where $\hat{x}_t$ is the prediction (state) at time $t$ and $Q$ is the quantization function to the nearest state.\\ \end{itemize} \end{defn} Note that the spacing gap between states, denoted $\Delta$, satisfies: \begin{equation} \label{eq:DeltaEDM} \Delta=\tfrac{1-2k^{-1/3}}{k-1} \sim k^{-1}~, \end{equation} and the quantization function satisfies $Q(y)=\hat{x}_{t+1}$ if $y$ satisfies $\hat{x}_{t+1}-\tfrac{1}{2}\Delta \leq y < \hat{x}_{t+1}+\tfrac{1}{2}\Delta$. Also note that the EDM machine is a finite-memory approximation of the Cumulative Moving Average predictor given in Equation \eqref{CMA}, where $\frac{1}{t+1}$ is replaced by the constant value $k^{-2/3}$ (which was shown to be optimal in \cite{MeronThesis}). We now present asymptotic bounds on the regret attained by the EDM machine when used to predict individual \textbf{continuous} sequences. \begin{thm} \label{thm:EDMregret} The maximal regret of the $k$-states EDM machine, denoted $U_{EDM_k}$, attained by the worst continuous sequence, is bounded by \[\tfrac{1}{2}k^{-2/3}+O(k^{-1}) \leq~ \max_{x_1^n}R(U_{EDM_k},x_1^n) \leq \tfrac{17}{4}k^{-2/3}\] \end{thm} \begin{proof} Consider an $L$-length sequence $\{x_t\}_{t=1}^L$ that endlessly rotates the machine in a minimal circle of $L$ states $\{\hat{x}_t\}_{t=1}^L$. The input sample at each time $t$ can be written as follows: \begin{equation} x_t=\hat{x}_t+(P_t\Delta+\delta_t) k^{2/3}~, \label{1} \end{equation} where $P_t\in \mathbb{Z}$ denotes the number of states crossed by the machine at time $t$, and $\delta_t$ is a quantization residual that satisfies $\abs{\delta_t}<\tfrac{1}{2}\Delta$ and has no impact on the jump at time $t$, i.e., no impact on the prediction at time $t+1$.
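The machine of this definition can be sketched as follows (illustrative code of our own; the class name and attributes are ours, while the state grid and the update rule are those of Equation \eqref{EDMeq1}):

```python
class EDM:
    """k uniformly spaced states on [k^(-1/3), 1 - k^(-1/3)], updated by
    the quantized recursion x_{t+1} = Q(x_t (1 - k^(-2/3)) + sample k^(-2/3))."""
    def __init__(self, k):
        self.k = k
        lo, hi = k ** (-1 / 3), 1 - k ** (-1 / 3)
        self.delta = (hi - lo) / (k - 1)   # spacing gap between states
        self.states = [lo + i * self.delta for i in range(k)]
        self.idx = 0                       # start at the lowest state

    def predict(self):
        return self.states[self.idx]

    def update(self, x):
        eps = self.k ** (-2 / 3)
        y = self.predict() * (1 - eps) + x * eps
        # Q: quantize to the nearest state
        self.idx = min(range(self.k), key=lambda i: abs(self.states[i] - y))
```

Feeding a constant sample repeatedly drives the prediction toward it until the per-step move $k^{-2/3}\,\abs{x-\hat{x}_t}$ falls below half the spacing gap, at which point the quantizer $Q$ holds the state fixed.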
Since we examine a minimal circle, the number of states crossed on the way up equals the number of states crossed on the way down, i.e., $\sum_{t=1}^L P_t=0$. This means that the empirical mean of the sequence is \begin{equation} \bar{x}=\frac{1}{L}\sum_{t=1}^L(\hat x_t+\delta_tk^{2/3})~. \end{equation} Now, we can write \begin{align} R(U_{EDM_k},x_1^L) &=\frac{1}{L}\sum_{t=1}^L(x_t-\hat{x}_{t})^2-(x_t-\bar{x})^2\\ &=\bar{x}^2+\frac{1}{L}\sum_{t=1}^L(\hat x_t^2-2x_t\hat x_t)~. \label{eq:regSpacQuantFirst} \end{align} By Jensen's inequality we have $\bar{x}^2\leq \sum_{t=1}^L (\hat x_t+\delta_tk^{2/3})^2/L$. Substituting this and \eqref{1} into Equation \eqref{eq:regSpacQuantFirst} yields \begin{align} R(U_{EDM_k},x_1^L)&\leq \tfrac{1}{L}\sum_{t=1}^L \delta_t^2 k^{4/3}-\tfrac{1}{L}\sum_{t=1}^L 2P_t\Delta k^{2/3}\hat{x}_t~. \label{eq:regSpacQuant} \end{align} The first term on the right hand side depends only on the quantization of the input samples, $\delta_t$; thus we term it the {\em quantization loss}. The second term depends on the spacing gap between states, $\Delta$; thus we term it the {\em spacing loss}. Hence, the regret of the sequence is upper bounded by a loss incurred by the quantization of the input samples and a loss incurred by the quantization of the states' values, i.e., the prediction values. By applying $\abs{\delta_t}<\tfrac{1}{2}\Delta$ we bound the {\em quantization loss}: \begin{equation} \text{{\em quantization loss}} = \tfrac{1}{L}\sum_{t=1}^L \delta_t^2 k^{4/3} \leq \tfrac{1}{4}k^{-2/3}~. \end{equation} Let us now upper bound the {\em spacing loss}. We define a sub-step as a single state step that is associated with a full step. For example, a step at time $t$ of $P_t>0$ states consists of $P_t$ sub-steps. We denote these up sub-steps by $\{USS_{t,1},\ldots,USS_{t,P_t}\}$. Note that all of them are associated with a full up-step from state $\hat{x}_t$.
Since in a minimal circle the numbers of states crossed on the way up and down are equal, we can divide all sub-steps into pairs of up and down sub-steps that cross the same state. For example, an up sub-step $USS_{t,j}$ is paired with a down sub-step that crosses the same state. The up sub-step is associated with a full up-step from state $\hat x_t$. The paired down sub-step is associated with a full down-step from a state which we denote by $\hat x_{USS_{t,j}}$. Noting that $P_t$ is positive for up-steps and negative for down-steps, we can write \begin{align} -\tfrac{1}{L}\sum_{t=1}^L P_t\hat{x}_t &= -\tfrac{1}{L}\sum_{t\in \text{\{up steps\}}}P_t\hat{x}_t+\tfrac{1}{L}\sum_{t\in \text{\{down steps\}}}\abs{P_t}\hat{x}_t \nonumber\\ & =\tfrac{1}{L}\sum_{t\in \text{\{up steps\}}} \big( -P_t\hat{x}_t+\sum_{j=1}^{P_t}\hat{x}_{USS_{t,j}}\big)~. \label{eq:subStepEq} \end{align} Now, the up sub-step $USS_{t,j}$ crosses one of the states between $\hat x_t$ and $\hat x_t+P_t\Delta$. The paired down sub-step has to cross the same state. Since the farthest up or down-step in an EDM machine is $k^{-2/3}$, we can conclude that the paired down sub-step is associated with a full down-step from a state that satisfies $\hat x_{USS_{t,j}}\leq \hat{x}_t+P_t\Delta+k^{-2/3}$. Applying this to Equation \eqref{eq:subStepEq} we get \begin{align} -\tfrac{1}{L}\sum_{t=1}^L P_t\hat{x}_t &\leq \tfrac{1}{L}\sum_{t\in \text{\{up steps\}}}P_t(P_t\Delta+k^{-2/3}) \leq 2\tfrac{k^{-4/3}}{\Delta}~, \end{align} where in the last inequality we used $P_t \leq \tfrac{k^{-2/3}}{\Delta}$ (since the farthest step is $k^{-2/3}$). The {\em spacing loss} thus satisfies: \begin{equation} \text{{\em spacing loss}}=2\Delta k^{2/3}(-\tfrac{1}{L}\sum_{t=1}^L P_t\hat{x}_t) \leq 4k^{-2/3}~. \end{equation} By using Theorem \ref{thm:problemForm}, the upper bound is proven.
The proof for the lower bound is given in Appendix \ref{app:lowerBoundTheorem} where we show that there is a sequence that endlessly rotates the $k$-states EDM machine in a minimal circle, incurring a regret of $\frac{1}{2}k^{-2/3}+O(k^{-1})$. \end{proof} Note that Theorem \ref{thm:EDMregret} implies that the $k$-state EDM machine achieves a regret smaller than $\tfrac{17}{4}k^{-2/3}$ for any individual continuous sequence bounded in $[0,1]$. Moreover, the regret of the worst sequence, that is, the maximal regret, is at least $\tfrac{1}{2}k^{-2/3}+O(k^{-1})$. Figure \ref{fig:RvsNumStates2} plots the maximal regret achieved by the EDM machine (a regret of $\frac{1}{2}k^{-2/3}$) against the number of states. Also plotted is the performance of the optimal DTM machine. Note that the latter outperforms the EDM machine for a small number of states. Nevertheless, while the achievable (maximal) regret of the optimal DTM machine is bounded away from zero, the EDM machine can attain any vanishing regret with a large enough number of states. \begin{figure}[h] \includegraphics[width=1\columnwidth,height=0.25\textheight]{Res_smallNumOfStates_withColors_new.jpg} \caption[Regret vs. number of states - EDM and DTM machines]{Performance of EDM and optimal DTM machines. \label{fig:RvsNumStates2}} \end{figure} \section{Lower bound on the achievable maximal regret of any $k$-states machine} \label{sec:lowerBoundHigh} In Section \ref{chapter:LowNumOfStates} we analyzed machines with a relatively small number of states. We then examined the case of a large number of states and proposed the EDM machine as a universal predictor. We showed that asymptotically, using enough states, it can achieve any vanishing regret. However, is it the optimal solution? Does it attain a desired (maximal) regret with the lowest number of states? In this section we present an asymptotic lower bound on the number of states used by any machine with maximal regret $R$.
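As a concrete illustration of the machine analyzed above, the EDM transition rule can be sketched in a few lines. This is our reading of the machine, assuming $k$ states uniformly spaced on $[0,1]$ (so $\Delta=1/(k-1)$) and transition thresholds $T_{i,j}=S_i+(j+\tfrac{1}{2})\Delta k^{2/3}$; the function names and the clipping at the range ends are our assumptions.

```python
import math

def make_edm(k):
    """Sketch of a k-state EDM machine: states S_i = i/(k-1) on [0,1], and
    an outcome x at state i induces a jump of j states whenever
    S_i + (j - 1/2)*Delta*k^(2/3) <= x < S_i + (j + 1/2)*Delta*k^(2/3)."""
    delta = 1.0 / (k - 1)
    step = delta * k ** (2.0 / 3.0)       # transition-threshold spacing

    def update(state, x):
        s = state * delta                  # prediction value of the current state
        j = math.floor((x - s) / step + 0.5)
        return min(max(state + j, 0), k - 1)

    return delta, update

# Feed a constant sequence: the machine settles within half a threshold step of it.
k = 1000
delta, update = make_edm(k)
state = 0
for _ in range(1000):
    state = update(state, 0.7)
final_error = abs(state * delta - 0.7)
```

Each jump changes the prediction by roughly $k^{-2/3}(x_t-\hat{x}_t)$, which is the exponentially decaying memory behaviour the machine approximates.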
\begin{defn} Given a starting state $S_i$, a \textbf{\em{Threshold Sequence x}}, denoted $TS(x)$, is constructed for any $x$ in the following manner - if the current state is smaller than $x$, the next sample in the sequence is $1$ (inducing an up-step); if not, the next sample is $0$ (inducing a down-step). \end{defn} For any starting state and any $x$, the constructed $TS(x)$ induces monotone jumps to the vicinity of $x$ and then rotates the machine in a minimal circle. If the starting state is below $x$, the $TS(x)$ induces monotone up-steps until the machine crosses $x$ (or monotone down-steps if the starting state is above $x$). In the vicinity of $x$ the $TS(x)$ rotates the machine only in a bounded number of states - the lowest possible state is bounded from below by the maximum down-jump from the nearest state to $x$ and the highest possible state is upper bounded by the maximum up-jump from the nearest state to $x$. Therefore, the $TS(x)$ endlessly rotates the machine in a finite number of states, thus inducing a minimal circle. Since the regret induced by the monotone sequence is negligible, this part can be ignored, and therefore we shall assume that any $TS(x)$ endlessly rotates the machine in a minimal circle, without the monotone part. \begin{lem} \label{lemMinAway} Consider an FSM with maximal regret $R$. A $TS(x)$ induces a minimal circle where at least half of its states are within $\tfrac{R}{x}$ of $x$ for any $x \leq \tfrac{1}{2}$, and within $\tfrac{R}{1-x}$ of $x$ for any $x > \tfrac{1}{2}$. \end{lem} \begin{proof} Let us examine the regret of a $TS(x)$, where $x \leq \tfrac{1}{2}$, that rotates an FSM, denoted $U$, in a minimal circle of length $L$. Since the empirical mean of the sequence, $\bar{x}$, induces the minimal square error, the regret satisfies \begin{align} R(U,x_1^L) & \geq \tfrac{1}{L}\sum_{t=1}^{L}(x_t-\hat{x}_t)^2-(x_t-x)^2 \nonumber\\ & \geq \tfrac{1}{L}\sum_{t=1}^{L}2(x-\hat{x}_t)(x_t-x)~.
\end{align} We note that by construction, $(x-\hat{x}_t)(x_t-x)$ is nonnegative for all $t$. Moreover, since $x \leq \tfrac{1}{2}$ and $x_t=1$ for up-steps and $x_t=0$ for down-steps, it follows that: \begin{equation} R(U,x_1^L) \geq \tfrac{1}{L}\sum_{t=1}^{L}2\abs{x-\hat{x}_t}x~. \end{equation} Hence at least half of the states have to be within $\tfrac{R}{x}$ of $x$; otherwise the regret would exceed $R$. In the same manner it can be shown that for $x > \tfrac{1}{2}$ at least half of the states have to be within $\tfrac{R}{1-x}$ of $x$.\\ \end{proof} \begin{lem} \label{lemMinStep} Consider an FSM with maximal regret $R$. The maximum numbers of states crossed in an up-step and in a down-step from state $S_i$, for any $i$, must satisfy \begin{align} &m_{u,i} \geq \tfrac{1-(S_i+\sqrt{R})}{2\sqrt{R}},\\ &m_{d,i} \geq \tfrac{S_i-\sqrt{R}}{2\sqrt{R}} ~.\label{lemMinStepDown} \end{align} \end{lem} \begin{proof} See Appendix \ref{app:LemmalemMinStep}.\\ \end{proof} Note that Lemma \ref{lemMinStep} implies the same lower bound on the achievable regret of any DTM machine, $R\geq (\frac{1}{6})^2$ (as presented in Section \ref{chapter:LowNumOfStates}). Any DTM machine allows only a single state down-jump from all states below $\frac{1}{2}$. Thus, a DTM machine may attain maximal regret $R$ only if all states below $\frac{1}{2}$ satisfy Equation \eqref{lemMinStepDown} with $m_{d,i}=1$, hence: \begin{equation} \label{eq:lowerBoundDTM3} \tfrac{\tfrac{1}{2}-\sqrt{R}}{2\sqrt{R}}\leq 1~. \end{equation} Furthermore, Lemma \ref{lemMinStep} provides a lower bound on the maximal regret of any machine that allocates a state $S_i$ with maximum up and down jumps of $m_{u,i}$ and $m_{d,i}$ states. \bigskip \begin{thm} \label{thm:nStatesLowerBound} The number of states in any deterministic FSM with maximal regret $R$ is lower bounded by \[\tfrac{1}{24}R^{-3/2}+O(R^{-1})~.\] \end{thm} \begin{proof} Consider a $k$-states machine with maximal regret $R$.
Lemma \ref{lemMinAway} implies that for any $x\leq \frac{1}{2}$ there is a $TS(x)$ that forms a minimal circle in the vicinity of $x$ where at least half of the states are within $\tfrac{R}{x}$ of $x$. Since the samples of the $TS(x)$ are either $0$ or $1$, the constructed minimal circle consists of at least $m_{u,i}$ states, where $m_{u,i}$ is the maximum up-jump from the nearest state to $x$, denoted state $i$. Thus, there are at least $\frac{1}{2}m_{u,i}$ states within $\tfrac{R}{x}$ of $x$. Lemma \ref{lemMinStep} implies that the maximum up-step from state $i$ is at least $m_{u,i}=\lceil\tfrac{1-S_i-\sqrt{R}}{2\sqrt{R}}\rceil$ states, where $S_i$ is the value assigned to state $i$. We define the interval $B(m_u)$ as all $x$'s satisfying \begin{equation} m_u=\lceil\tfrac{1-x-\sqrt{R}}{2\sqrt{R}}\rceil~. \end{equation} In other words, $B(m_u)$ is the interval \[(1-\sqrt{R}(2m_u+1),1-\sqrt{R}(2m_u-1)]~.\] Note that the length of this interval, $\abs{B(m_u)}$, is always equal to $2\sqrt{R}$. Now, let $N_1$ be the largest integer satisfying $1-\sqrt{R}(2N_1-1) \geq \tfrac{1}{2}$, and $N_2$ be the smallest integer satisfying $1-\sqrt{R}(2N_2+1) \leq 0$. We can then write \begin{equation} \bigcup_{m_u=N_1}^{N_2}B(m_u)\supseteq[0,\tfrac{1}{2}]~, \end{equation} where we note that $\{B(N_1),\ldots,B(N_2)\}$ are non-intersecting intervals. Also note that the smallest value in $B(N_1)$ (that is, $1-\sqrt{R}(2N_1+1)$) is greater than $\tfrac{1}{2}-2\sqrt{R}$. In the same manner, the smallest value in $B(N_1+i)$ (where $i$ is a positive integer) is greater than $\tfrac{1}{2}-2\sqrt{R}(i+1)$. For $x\in B(m_u)$ there are at least $\tfrac{1}{2}m_u$ states within $\tfrac{R}{x}$ of $x$. Therefore, in the interval $B(m_u)$ there are at least \[\min_{x\in B(m_u)}\tfrac{\abs{B(m_u)}}{R/x}\tfrac{1}{2}m_u\] states.
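The interval bookkeeping can be checked mechanically. The following sketch (with the illustrative value $R=10^{-4}$, i.e., $\sqrt{R}=0.01$) verifies that the $B(m_u)$ are pairwise adjacent intervals of length $2\sqrt{R}$ whose union covers $[0,\tfrac{1}{2}]$; the function names are ours.

```python
import math

def B(m, R):
    # B(m) = (1 - sqrt(R)(2m+1), 1 - sqrt(R)(2m-1)] : the x's with
    # ceil((1 - x - sqrt(R)) / (2 sqrt(R))) == m
    s = math.sqrt(R)
    return (1 - s * (2 * m + 1), 1 - s * (2 * m - 1))

R = 1e-4
s = math.sqrt(R)
# N1: largest m whose interval still reaches 1/2; N2: smallest m reaching 0
N1 = max(m for m in range(1, 1000) if 1 - s * (2 * m - 1) >= 0.5)
N2 = min(m for m in range(1, 1000) if 1 - s * (2 * m + 1) <= 0)
lengths = [B(m, R)[1] - B(m, R)[0] for m in range(N1, N2 + 1)]
adjacent = all(B(m, R)[0] == B(m + 1, R)[1] for m in range(N1, N2))
covers = B(N1, R)[1] >= 0.5 and B(N2, R)[0] <= 0
```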
Using the fact that in an optimal machine the minimal number of states in the lower and upper halves is equal (see Section \ref{sec1}), we can conclude that $k$, the number of states, satisfies \begin{align} k & \geq 2\sum_{m_u =N_1+1}^{N_2-1}\min_{x\in B(m_u)}\tfrac{\abs{B(m_u)}}{R/x}\tfrac{1}{2}m_u \nonumber\\ &= \sum_{m_u =N_1+1}^{N_2-1}\min_{x\in B(m_u)}\tfrac{2\sqrt{R}}{R/x}\lceil\tfrac{1-x-\sqrt{R}}{2\sqrt{R}}\rceil \nonumber\\ &\geq R^{-1}\sum_{m_u =N_1+1}^{N_2-1}\min_{x\in B(m_u)}x(1-x-\sqrt{R})~. \label{eq:last} \end{align} The function $x(1-x-\sqrt{R})$ is concave and has a single maximum point at $\tfrac{1}{2}(1-\sqrt{R})$. Thus, $\min_{x\in B(m_u)}x(1-x-\sqrt{R})$ is attained at the smallest value in the interval $B(m_u)$ (that is, $1-\sqrt{R}(2m_u+1)$). As mentioned above, this value is greater than $\tfrac{1}{2}-2\sqrt{R}(m_u-N_1+1)$, and since $x(1-x-\sqrt{R})$ is increasing on this range, evaluating it at this smaller value yields a further lower bound. Thus, we can write \begin{align} k & \geq \tfrac{1}{2}R^{-3/2}\sum_{i=2}^{\lfloor 1/(4\sqrt{R})\rfloor}2\sqrt{R} (\tfrac{1}{2}-2\sqrt{R}i) (\tfrac{1}{2}+2\sqrt{R}i-\sqrt{R}) \nonumber\\ & \geq \tfrac{1}{24}R^{-3/2}-\tfrac{7}{16}R^{-1}+\tfrac{7}{12}R^{-1/2}+2~. \end{align} This concludes the proof. \end{proof} Note that Theorem \ref{thm:nStatesLowerBound} implies that a $k$-states FSM cannot attain maximal regret smaller than \begin{equation} \label{lowerBound1} (24k)^{-2/3}+O(k^{-1})~. \end{equation} \section{Enhanced Exponential Decaying Memory machine} \label{sec:DesigningEEDM} In Section \ref{EDM} we showed that the EDM machine can achieve any maximal regret, as small as desired. In this section we present a new FSM named the \textbf{\emph{Enhanced Exponential Decaying Memory}} (E-EDM) machine. We prove that it outperforms the EDM machine and better approaches the lower bound presented in the previous section.
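Ignoring the $O(\cdot)$ terms, the state bound of Theorem \ref{thm:nStatesLowerBound} and the regret bound \eqref{lowerBound1} are inverses of one another, and both can be compared with the $(2R_d)^{-3/2}$ states used by the EDM machine; a quick sanity check (function names are ours):

```python
def min_states(R):
    # Leading term of the theorem: any FSM with maximal regret R
    # needs at least R^(-3/2)/24 states (O(R^{-1}) terms dropped).
    return R ** -1.5 / 24

def min_regret(k):
    # Equivalent corollary: k states cannot achieve maximal regret
    # below (24k)^(-2/3) (O(k^{-1}) terms dropped).
    return (24 * k) ** (-2 / 3)

def edm_states(R):
    # Per the EDM analysis in this paper: at least (2R)^(-3/2) states.
    return (2 * R) ** -1.5

R = 1e-4
k = min_states(R)            # ~41667 states in the lower bound
recovered = min_regret(k)    # inverting the bound recovers R
gap = edm_states(R) / k      # EDM uses ~8.5x the lower-bound state count
```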
\subsection{Designing the E-EDM machine} \label{sec:DesigningEEDMAlgo} The algorithm for constructing the E-EDM machine for a desired regret $R_d$ is as follows. \begin{itemize} \label{algEEDM} \item Set $R=\tfrac{R_d}{2}$. \item Divide the interval $[0,1]$ into segments, denoted $A(m_u,m_d)$, where each contains all $x$'s satisfying both \begin{align} &m_u=\lceil \tfrac{1-x-\sqrt{R}}{2\sqrt{R}}\rceil~, \nonumber\\ &m_d=\lceil \tfrac{x-\sqrt{R}}{2\sqrt{R}}\rceil~. \label{maxUpDown} \end{align} Note that these segments are non-intersecting. \item Linearly spread states in each segment $A(m_u,m_d)$ with a $\Delta(m_u,m_d)$ spacing gap between them, where \begin{equation} \Delta(m_u,m_d)=\tfrac{\sqrt{R}}{2m_{u}\cdot m_{d}}~. \end{equation} \item Assign all states in segment $A(m_u,m_d)$ maximum up and down jumps of $m_u$ and $m_d$ states, respectively. Note that according to Lemma \ref{lemMinStep}, these are the minimal maximum jumps allowed in order to achieve maximal regret smaller than $R$. \item Assign transition thresholds for each state $i$ as follows: \begin{equation} T_{i,j}=S_i+(2j+1)\sqrt{R} \quad \forall \quad -m_{d,i}\leq j \leq m_{u,i}~, \end{equation} that is, if the machine at time $t$ is at state $i$, it jumps $j$ states if the current outcome, $x_t$, satisfies: \begin{equation} S_i+(2j-1)\sqrt{R} \leq x_t < S_i+(2j+1)\sqrt{R}~. \end{equation} Note that, as required, the transition thresholds cover the $[0,1]$ axis (this follows from the chosen maximum up and down jumps). \item We further need to guarantee the desired regret when the machine traverses between segments. Consider two adjacent segments $A(m_{u,1},m_{d,1})$ and $A(m_{u,2},m_{d,2})$ and suppose the spacing gap in the second segment is smaller. Add states to the first segment such that the closest $m_{u,1}+m_{d,1}$ states to the second segment have a spacing gap of $\Delta(m_{u,2},m_{d,2})$. It can be shown that at most two states need to be added to each segment.
Figure \ref{fig:EEDM1} depicts the spacing gap in two adjacent segments. \end{itemize} \begin{figure}[ht] \centering \includegraphics[width=1 \columnwidth]{EEDM1.jpg} \caption[E-EDM machine - spacing Gap]{Spacing gap of the E-EDM machine. Adjacent segments $A(m_{u,1},m_{d,1})$ and $A(m_{u,2},m_{d,2})$ with spacing gap $\Delta_s=\tfrac{\sqrt{R}}{2m_{u,s}m_{d,s}}$ where $s=1,2$ and $\Delta_2<\Delta_1$. Note that the spacing gap between the highest $m_{u,1}+m_{d,1}$ states in segment $A(m_{u,1},m_{d,1})$ is $\Delta_2$ while the maximum up and down jumps from these states are $m_{u,1}$ and $m_{d,1}$ states. \label{fig:EEDM1}} \end{figure} Recall that the transition thresholds in the EDM machine are $T_{i,j}=S_i+(j+\tfrac{1}{2})\Delta k^{2/3}$. Since $\Delta\sim k^{-1}$, if we take the desired regret to be $R_d=\frac{1}{2}k^{-2/3}$, that is, $R=\frac{1}{4}k^{-2/3}$, we get that the transition thresholds in the E-EDM machine are identical to those defined for the EDM machine. Furthermore, recall that according to Theorem \ref{thm:EDMregret}, the maximal regret of the $k$-states EDM machine is greater than $\frac{1}{2}k^{-2/3}$. Thus, the new machine presented here achieves a lower maximal regret by better allocating the states - the states of the EDM machine are uniformly distributed over the interval $[0,1]$, while in the E-EDM machine the interval $[0,1]$ is divided into segments and states are uniformly distributed with a different spacing in each segment. This will be proved more rigorously in the sequel. We shall now prove that the maximal regret of an E-EDM machine constructed by the algorithm above is indeed no more than the desired regret $R_d$. \begin{thm} \label{thm:E-EDMRegret} The construction of the E-EDM machine according to the algorithm \ref{algEEDM} yields a machine with maximal regret no more than $R_d$.
\end{thm} \begin{proof} Consider a sequence $x_1^L$ that endlessly rotates the E-EDM machine (denoted $U_{E-EDM}$) in a minimal circle of $L$ states $\hat{x}_1^L$. Each input sample $x_t$ can be written as follows: \begin{equation} x_t=\hat{x}_t+2\sqrt{R}\cdot P_t+\delta_t~, \end{equation} where $P_t$ is the number of states the machine crosses at time $t$ ($-m_{d}\leq P_t \leq m_{u}$) and $\delta_t$ satisfies $\abs{\delta_t}\leq \sqrt{R}$ and can be regarded as a quantization addition that has no impact on the jump at time $t$, i.e., no impact on the next prediction. Since we examine a minimal circle, the sum of states crossed on the way up is equal to the sum of states crossed on the way down, i.e., $\sum_{t=1}^L P_t=0$. By applying this and Jensen's inequality, the regret of the sequence satisfies: \begin{align} R(U_{E-EDM},x_1^L) & \leq \tfrac{1}{L}\sum_{t=1}^L\delta_t^2 -4\sqrt{R}\tfrac{1}{L}\sum_{t=1}^L P_t(\hat{x}_t-\hat{x}_1)~. \label{lemRegret:eq2} \end{align} We term the first loss on the right-hand side of Equation \eqref{lemRegret:eq2} the {\em quantization loss} (since it depends only on $\delta_t$, the quantization of the input sample $x_t$). By applying $\abs{\delta_t}\leq \sqrt{R}$ we get: \begin{equation} \text{\em quantization loss}=\tfrac{1}{L}\sum_{t=1}^L\delta_t^2 \leq R~. \end{equation} We term the second loss on the right-hand side of Equation \eqref{lemRegret:eq2} the {\em spacing loss} (since $\hat{x}_t-\hat{x}_1$ depends only on the spacing gap between states). Thus, as we showed for the EDM machine, the regret of the sequence is upper bounded by a loss incurred by the quantization of the input samples and a loss incurred by the quantization of the states' values, i.e., the prediction values.
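For circles confined to one segment (equal spacing $\Delta$), the spacing loss has a useful order-invariance: writing $\hat{x}_t-\hat{x}_1=\Delta\sum_{j<t}P_j$ and using $\sum_t P_t=0$ gives $\sum_t P_t(\hat{x}_t-\hat{x}_1)=-\tfrac{\Delta}{2}\sum_t P_t^2$, so the sum depends only on the multiset of jumps, not on their order. A quick numeric check with a hypothetical jump sequence:

```python
def spacing_sum(jumps):
    # sum_t P_t * (xhat_t - xhat_1) / Delta for jumps P_1..P_L closing a circle;
    # xhat_t - xhat_1 = Delta * (P_1 + ... + P_{t-1}) under equal spacing Delta
    total, prefix = 0, 0
    for p in jumps:
        total += p * prefix
        prefix += p
    assert prefix == 0  # a minimal circle: the jumps sum to zero
    return total

mixed = [3, -2, 2, -1, 1, -3]            # a hypothetical interlaced circle
straight = sorted(mixed, reverse=True)   # same jumps, all up-jumps first
a = spacing_sum(mixed)
b = spacing_sum(straight)
identity = -sum(p * p for p in mixed) / 2   # -(1/2) * sum_t P_t^2
```

This is the algebra behind the mixed-versus-straight reordering used in the appendix proofs.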
\begin{lem} \label{lem:spacingLossWithinSegment} For any sequence $x_1^L$ that endlessly rotates the E-EDM machine in a minimal circle of states $\hat{x}_1^L$, where the spacing gap between all states is identical, the spacing loss satisfies: \begin{equation} \text{\em spacing loss}=-4\sqrt{R}\tfrac{1}{L}\sum_{t=1}^L P_t(\hat{x}_t-\hat{x}_1) \leq R~. \end{equation} \end{lem} \begin{proof} See Appendix \ref{app:LemmaspacingLossWithinSegment}. \end{proof} \begin{lem} \label{lem:spacinglossSegments} For any sequence $x_1^L$ that rotates the E-EDM machine in a minimal circle of states $\hat{x}_1^L$, where the spacing gap is not equal between all states, the spacing loss satisfies: \[\text{\em spacing loss}=-4\sqrt{R}\tfrac{1}{L}\sum_{t=1}^L P_t(\hat{x}_t-\hat{x}_1) \leq R~.\] \end{lem} \begin{proof} See Appendix \ref{app:LemmaspacinglossSegments}. \end{proof} Since $R=\tfrac{R_d}{2}$, applying Theorem \ref{thm:problemForm} concludes the proof.\\ \end{proof} \subsection{Performance Evaluation} The following theorem gives the number of states used by an E-EDM machine designed with a desired regret $R_d$. \begin{thm} \label{thm:nStatesEEDM} The number of states in an E-EDM machine designed to achieve maximal regret smaller than $R_d$ is \[ \tfrac{1}{12}(\tfrac{R_d}{2})^{-3/2}+O(R_d^{-1})~. \] \end{thm} \begin{proof} See Appendix \ref{app:TheoremnStatesEEDM}. \end{proof} Theorem \ref{thm:EDMregret} implies that the asymptotic worst-case regret of the $k$-states EDM machine is at least $\tfrac{1}{2}k^{-2/3}$. Thus, the number of states in an EDM machine with maximal regret $R_d$ is at least $(2R_d)^{-3/2}$ states. Theorem \ref{thm:nStatesLowerBound} implies that the asymptotic number of states of any deterministic FSM with maximal regret $R_d$ is at least $\tfrac{1}{24}R_d^{-3/2}$.
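Theorem \ref{thm:nStatesEEDM} can be checked numerically against the construction: the state density at $x$ is $1/\Delta(m_u,m_d)=2m_um_d/\sqrt{R}$, and integrating it over $[0,1]$ reproduces the $\tfrac{1}{12}R^{-3/2}$ leading term (with $R=R_d/2$). The sketch below clamps $m_u,m_d$ to at least one state near the endpoints, where the ceiling formulas give nonpositive values, and ignores the few extra boundary states between segments; both are our simplifying assumptions.

```python
import math

def eedm_state_count(R, grid=100000):
    """Estimate the number of states the E-EDM construction places on [0,1]
    by integrating the state density 1/Delta(m_u, m_d) = 2*m_u*m_d/sqrt(R),
    with m_u, m_d given by the ceiling formulas of the construction."""
    s = math.sqrt(R)
    dx = 1.0 / grid
    count = 0.0
    for i in range(grid):
        x = (i + 0.5) * dx
        m_u = max(1, math.ceil((1 - x - s) / (2 * s)))  # clamped near x = 1
        m_d = max(1, math.ceil((x - s) / (2 * s)))      # clamped near x = 0
        count += dx * 2 * m_u * m_d / s
    return count

R = 1e-4
estimate = eedm_state_count(R)
leading = R ** -1.5 / 12      # leading term of the theorem, with R = R_d / 2
ratio = estimate / leading    # close to 1, slightly above due to the ceilings
```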
Theorem \ref{thm:nStatesEEDM} implies that the asymptotic number of states in an E-EDM machine with maximal regret $R_d$ is $\tfrac{1}{12}(\tfrac{R_d}{2})^{-3/2}$. Thus we can conclude that: \begin{enumerate} \item For a given desired regret, the E-EDM machine outperforms the EDM machine in number of states by a factor of: \[\tfrac{\tfrac{2^{3/2}}{12}R_d^{-3/2}}{(2R_d)^{-3/2}}=\tfrac{2}{3}~,\] i.e., it uses only $\frac{2}{3}$ of the states needed for the EDM machine to achieve the same maximal regret. \item For a given desired regret, the E-EDM machine approaches the lower bound within a factor of about: \[\tfrac{\tfrac{2^{3/2}}{12}R_d^{-3/2}}{\tfrac{1}{24}R_d^{-3/2}}=2^{5/2}\approx 5.7~.\] \end{enumerate} In Figure \ref{fig:RvsNumStatesAsyPerf} we plot the (maximal) regret attained by the EDM and E-EDM machines as a function of the number of states, together with the lower bound given in Theorem \ref{thm:nStatesLowerBound}. Note that for a large number of states the E-EDM machine indeed outperforms the EDM machine by a factor of $\sim\frac{2}{3}$ and approaches the lower bound within a factor of $\sim6$. \begin{figure}[t] \includegraphics[width=1\columnwidth,height=0.25\textheight]{Res_asymptotic_performance.jpg} \caption[Regret vs. number of states - E-EDM, EDM machines and the lower bound] {Comparing the performance of the E-EDM machine, the EDM machine and the lower bound. \label{fig:RvsNumStatesAsyPerf}} \end{figure} \section{Summary and conclusions} \label{sec:Summary} In this paper we studied the problem of predicting an individual continuous sequence as well as the empirical mean with a finite-state machine. For a small number of states, or equivalently, when the desired maximal regret is relatively large, we presented a new class of machines, termed the Degenerated Tracking Memory (DTM) machines. An algorithm for constructing the best predictor among this class was given.
For a small enough number of states, this optimal DTM machine was shown to be optimal among \textbf{all} machines. It is still unknown up to which number of states this result holds true. Nevertheless, for a larger number of states, one can try to attain better performance by easing the constraints imposed on the class of DTM machines and allowing more than a single state down-jump (up-jump) from all states in the lower (upper) half. The construction of the optimal machine in that case is, however, much more complex. Another important implication of these restrictions is a lower bound of $R=0.0278$ on the achievable maximal regret of any DTM machine. For universal predictors with a large number of states, or equivalently, when the desired maximal regret is relatively small, we proved a lower bound of order $k^{-2/3}$ on the maximal regret of any $k$-states machine. We proposed the Exponential Decaying Memory (EDM) machine and showed that the worst sequence incurs a regret of $O(k^{-2/3})$, where $k$ is the number of states. We further presented the Enhanced Exponential Decaying Memory (E-EDM) machine, which outperforms the EDM machine and better approaches the lower bound. An interesting observation is that both machines are equivalent up to the prediction values, where a better state allocation is performed when constructing the E-EDM machine. Recalling that the EDM machine is a finite-memory approximation of the Cumulative Moving Average predictor, which is the best unlimited-resources universal predictor (w.r.t. the non-universal empirical mean predictor) \cite{UniversalSchemes}, we can understand why both the EDM and the E-EDM machines approach optimal performance.
Analyzing the performance of the EDM and the E-EDM machines showed that the regret of any sequence can be upper bounded by the sum of two losses - the {\em quantization loss}, incurred by the quantization of the input samples, and the {\em spacing loss}, incurred by the quantization of the prediction values. It is worth mentioning that the maximal regret of the optimal DTM machine can also be upper bounded by the sum of these losses. As the number of states in the optimal DTM machine increases, the {\em quantization loss} goes to the lower bound, $R=0.0278$, and the {\em spacing loss} goes to zero. Thus, understanding the optimal allocation between these two losses may lead to the answer of up to which number of states the optimal DTM machine is the best universal predictor. It is also worth mentioning that the E-EDM machine is constructed by allocating half of the desired regret to the {\em quantization loss} and the other half to the {\em spacing loss}. A further improvement may be obtained by a different allocation. Throughout this paper we assumed that the sequence's outcomes are bounded. Note that this constraint is mandatory since the performance of a universal predictor is analyzed by the regret of the worst sequence; in the unbounded case, for any finite-memory predictor one can find a sequence that incurs an infinite regret. However, a possible further study is to extend the results presented here to a more relaxed setting, e.g., sequences with a bounded difference between consecutive outcomes. In this study we essentially examined finite-memory universal predictors trying to attain the performance of the (non-universal) ``zero-order'' predictor, i.e., the empirical variance of any individual continuous sequence. We believe that our work is the first step in the search for the best finite-memory universal predictor trying to attain the performance of the best (non-universal) $L$-order predictor, for any $L$.
\appendices \section{Proof of the lower bound given in Theorem \ref{thm:EDMregret}} \label{app:lowerBoundTheorem} \begin{proof} Here we show that there is a continuous-valued sequence which rotates the EDM machine (denoted $U_{EDM}$) in a minimal circle incurring a regret of $\frac{1}{2}k^{-2/3}+O(k^{-1})$. Consider the following minimal circle - an $m$ states up-step, an $m-1$ states down-step, an $m$ states up-step, an $m-1$ states down-step and so on, $m-1$ times. The last step is a down-step of $m-1$ states that closes the circle and returns the machine to the initial state. Denoting the states' gap by $\Delta$, the described sequence can be written as follows\footnote{Note that we can always apply $\xi>0$ as small as desired to ensure that the samples are not exactly equal to the transition threshold, but otherwise inside the regions of transition. For example, we could have taken $x_1=\hat{x}_1+(m+\tfrac{1}{2}-\xi)\Delta k^{2/3}$ with $\xi\rightarrow 0$.}: \begin{align*} & x_1=\hat{x}_1+(m+\tfrac{1}{2})\Delta k^{2/3} \\ & x_2=\hat{x}_1+m\Delta-(m-1-\tfrac{1}{2})\Delta k^{2/3} \\ & x_3=\hat{x}_1+\Delta+(m+\tfrac{1}{2})\Delta k^{2/3} \\ &\vdots\\ & x_{2m-3}=\hat{x}_1+(m-2)\Delta+(m+\tfrac{1}{2})\Delta k^{2/3} \\ & x_{2m-2}=\hat{x}_1+(2m-2)\Delta-(m-1-\tfrac{1}{2})\Delta k^{2/3} \\ & x_{2m-1}=\hat{x}_1+(m-1)\Delta-(m-1-\tfrac{1}{2})\Delta k^{2/3}~. \end{align*} Now, assuming that all of these samples are between $0$ and $1$, one can note that they form a minimal circle of $2m-2$ states $\{\hat x_1,\ldots,\hat x_{2m-1}\}$ with equal $\Delta$ spacing between them. The circle is as follows: $\hat x_1 \hookrightarrow \hat x_{m+1} \mapsto \hat x_2 \hookrightarrow \hat x_{m+2} \mapsto \hat x_{3} \hookrightarrow \ldots \hookrightarrow \hat x_{2m-1} \mapsto \hat x_{m} \mapsto \hat x_1$, where $\hookrightarrow$ and $\mapsto$ denote up and down-steps, respectively.
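Before carrying out the regret computation, the end result can be previewed numerically: evaluating the closed-form circle regret $\Delta^2\big(\tfrac{1}{4}k^{4/3}+m(m-1)k^{2/3}-\tfrac{m(m-1)}{3}\big)$ obtained below, with the choice $m=\big\lfloor \tfrac{1}{2}k^{-2/3}/\Delta\big\rfloor$, indeed approaches $\tfrac{1}{2}k^{-2/3}$. A sketch under the assumption $\Delta=1/(k-1)$:

```python
def circle_regret(k):
    # Closed-form regret of the described minimal circle, with
    # m = floor((k^(-2/3)/2) / Delta); Delta = 1/(k-1) is our assumption.
    delta = 1.0 / (k - 1)
    m = int(0.5 * k ** (-2 / 3) / delta)
    return delta ** 2 * (0.25 * k ** (4 / 3)
                         + m * (m - 1) * k ** (2 / 3)
                         - m * (m - 1) / 3)

k = 10 ** 8
ratio = circle_regret(k) / (0.5 * k ** (-2 / 3))   # tends to 1 as k grows
```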
Analyzing the regret of the described sequence results in \begin{align} R(U_{EDM},x_1^{2m-1})&=\Delta^2(\tfrac{1}{4} k^{4/3}+m(m-1)k^{2/3}-\tfrac{m(m-1)}{3}). \label{appA:eq1} \end{align} Let us choose \begin{align} m=\lfloor\tfrac{\tfrac{1}{2}k^{-2/3}}{\Delta}\rfloor \label{appA:eq2}~, \end{align} where $\lfloor x \rfloor$ denotes the largest integer not exceeding $x$. In that case the highest sample, $x_{2m-3}$, satisfies $x_{2m-3}\leq \hat{x}_1+\tfrac{1}{2}k^{-2/3}-2\Delta+\tfrac{1}{2}+\tfrac{1}{2}k^{-1/3}$, and the lowest sample, $x_{2m-1}$, satisfies $x_{2m-1}\geq \hat{x}_1+\tfrac{1}{2}k^{-2/3}-2\Delta-\tfrac{1}{2}+\tfrac{3}{2}k^{-1/3}$. Choosing, for example, \[~\hat{x}_1=Q(\tfrac{1}{2}-\tfrac{1}{2}k^{-1/3}-\tfrac{1}{2}k^{-2/3}+\Delta)~,\] where $Q(\cdot)$ denotes the quantization to the nearest state, results in $x_{2m-3}\leq 1$ and $x_{2m-1}\geq 0$, and thus all samples $\{ x_1,\ldots, x_{2m-1}\}$ are valid, that is, satisfy $0\leq x_t \leq 1$. Now, substituting Equation \eqref{appA:eq2} into Equation \eqref{appA:eq1} we get \begin{align} R(U_{EDM},x_1^{2m-1})&=\tfrac{1}{4}\Delta^2 k^{4/3}+\tfrac{1}{4} k^{-2/3}+O(k^{-1}) \nonumber\\ &= \tfrac{1}{2}k^{-2/3}+O(k^{-1})~. \end{align} \end{proof} \section{Proof of Lemma \ref{lemMinStep}} \label{app:LemmalemMinStep} \begin{proof} Consider a sequence $x_1,\ldots,x_{L+1}$ that rotates an FSM, denoted $U$, in a minimal circle, where $x_1$ induces a single up-jump of $L$ states and $x_2^{L+1}$ induce down-jumps of a single state. Since the regret of any zero-step minimal circle is smaller than $R$, an input sample that satisfies $x=\hat{x}_t-\sqrt{R}-\varepsilon$, where $\varepsilon\rightarrow0^+$, must induce a down-jump of at least one state. Thus, we can always choose the input samples $x_2^{L+1}$ to satisfy $x_t \geq \hat{x}_t-\sqrt{R}$. We shall also assume that $x_1$ satisfies: \begin{equation} x_1 > \hat{x}_1+(1+2L)\sqrt{R}~, \end{equation} where $\hat{x}_1=S_i$. \emph{\textbf{We show that this assumption cannot hold true}}.
By denoting $\lambda_t=\hat{x}_t-\hat{x}_1$ we note that the empirical mean of the sequence satisfies: \begin{align} \bar{x} & \geq \hat{x}_1+\sqrt{R}+\tfrac{1}{L+1}\sum_{t=1}^{L+1}\lambda_t ~.\label{lemMinStep:eq3} \end{align} Now, let us examine the regret incurred by the described sequence: \begin{align} R(U,x_1^{L+1}) & = \tfrac{1}{L+1}\sum_{t=1}^{L+1}(x_t-\hat{x}_t)^2-(x_t-\bar{x})^2 \nonumber\\ &= (\bar{x}-\hat{x}_1)^2+\tfrac{1}{L+1}\sum_{t=1}^{L+1}\lambda_t^2-2\lambda_t(x_t-\hat{x}_1) \nonumber\\ & {\geq}(\bar{x}-\hat{x}_1)^2-\tfrac{1}{L+1}\sum_{t=1}^{L+1}\lambda_t^2 \label{lem6:eq1}\\ & {\geq} (\sqrt{R}+\tfrac{1}{L+1}\sum_{t=1}^{L+1}\lambda_t)^2-\tfrac{1}{L+1}\sum_{t=1}^{L+1}\lambda_t^2 \label{lem6:eq2}\\ & > R+\tfrac{1}{L+1}\sum_{t=1}^{L+1}(2\sqrt{R}-\lambda_t)\lambda_t~,\label{lemMinStep:eq5} \end{align} where \eqref{lem6:eq1} follows from $\lambda_t\geq 0$ and $x_t\leq \hat{x}_t$ for all the down samples $x_2^{L+1}$, and \eqref{lem6:eq2} follows from \eqref{lemMinStep:eq3}. In \cite{IngberThesis} it is shown that in an FSM with maximal regret $R$ w.r.t. binary sequences, the maximal up-jump is no more than $2\sqrt{R}$. Therefore, this must hold also for continuous-valued sequences. Hence, in the discussed minimal circle all states are within $2\sqrt{R}$ of the initial state, that is, $2\sqrt{R}\geq \lambda_t$ for all $t$, and we get $R(U,x_1^{L+1})>R$. We can now conclude that to attain a regret smaller than $R$, any input sample $x$ that induces an $L$ states up-jump from state $i$ must satisfy: \begin{equation} x \leq S_i+(1+2L)\sqrt{R}~. \end{equation} Since an input sample $1$ induces an $m_{u,i}$ states jump from state $i$ we conclude that the following must be satisfied: \begin{equation} 1 \leq S_i+(1+2m_{u,i})\sqrt{R}~.
\end{equation} In the same manner it can be shown that $0 \geq S_i-(1+2m_{d,i})\sqrt{R}$.\\ \end{proof} \section{Proof of Lemma \ref{lem:spacingLossWithinSegment}} \label{app:LemmaspacingLossWithinSegment} \begin{proof} First we note that: \begin{align} &-\tfrac{1}{L}\sum_{t=1}^L P_t(\hat{x}_t-\hat{x}_1)=-\tfrac{1}{L}\sum_{t=1}^L P_t\hat{x}_t~, \end{align} where we used $\sum_{t=1}^L P_t=0$. Note that $P_t\hat{x}_t$ is positive for up-steps and negative for down-steps. We consider a minimal circle within a segment $A(m_u,m_d)$ that crosses states with the same spacing gap, denoted $\Delta=\Delta(m_u,m_d)$. It follows that: \begin{align} &-\tfrac{1}{L}\sum_{t=1}^L P_t(\hat{x}_t-\hat{x}_1) = -\tfrac{1}{L}\sum_{t=1}^L P_t\sum_{j=1}^{t-1}P_j \Delta ~.\nonumber \end{align} Define {\em{\textbf{mixed}}} sequences as sequences where the up and down steps are interlaced. Define {\em{\textbf{straight}}} sequences as sequences where all the up-steps come first, followed by all the down-steps (consecutive in time). We show that any {\em{\textbf{mixed}}} sequence with $\{P_t\}_{t=1}^L$ jumps that rotates the machine in a minimal circle with the same spacing gap for all states can be transformed into a {\em{\textbf{straight}}} sequence with the same jumps only in a different order (up-jumps first) without changing the {\em spacing loss} of the sequence. First we note that for any three interlaced jumps \[\text{up jump $\rightarrow$ down jump $\rightarrow$ up jump},\] that cross \[P_{u,1}~ \rightarrow~ P_d ~\rightarrow~ P_{u,2}~\] states (respectively), the following holds true: \begin{align} &P_{u,1}\hat{x}_{u,1}+P_d(\hat{x}_{u,1}+P_{u,1}\Delta)+ \nonumber\\ & \qquad \qquad +P_{u,2}(\hat{x}_{u,1}+(P_{u,1}+P_d)\Delta) \nonumber\\ &\qquad = P_{u,1}\hat{x}_{u,1}+P_{u,2}(\hat{x}_{u,1}+ \nonumber\\ & \qquad \qquad+P_{u,1}\Delta)+P_d(\hat{x}_{u,1}+(P_{u,1}+P_{u,2})\Delta)~.
\label{straightSeq} \end{align} Thus, Equation \eqref{straightSeq} implies that the {\em spacing loss} of these three jumps does not change when the order of the jumps is: \[\text{up jump $\rightarrow$ up jump $\rightarrow$ down jump}.\] This can be shown also for a sequence with several consecutive down-jumps between two up-steps: \[\text{up jump $\rightarrow$ down jump $\rightarrow$ ... $\rightarrow$ down jump $\rightarrow$ up jump}~.\] Hence, in a recursive way any {\em{\textbf{mixed}}} sequence can be transformed into a {\em{\textbf{straight}}} sequence without changing the {\em spacing loss} by moving all the down-jumps to the end of the sequence. In the rest of the proof we shall assume {\em{\textbf{straight}}} sequences. Note that this transformation changes the states of the minimal circle, but since we transform the sequence only to ease the analysis, we can assume that all states still have the same spacing gap. Figure \ref{fig:EEDM2_mixedStraight} gives an example. \begin{figure}[ht] \centering \includegraphics[width=0.8\columnwidth,height=0.15\textheight]{EEDM2_mixedStraight.jpg} \caption[{\em{Mixed}} - {\em{Straight}} sequences]{An example for a {\em{\textbf{mixed}}} sequence transformed into a {\em{\textbf{straight}}} sequence. \label{fig:EEDM2_mixedStraight}} \end{figure} We continue by proving that applying maximum up and down steps maximizes the {\em spacing loss}. Consider two consecutive down-steps of $P_{d,1},P_{d,2}$ states starting at state $\hat{x}$, with a total of $C$ states, i.e., $\abs{P_{d,1}}+\abs{P_{d,2}}=C$. Note that we examine two down-steps, thus $C\leq 2m_d$. The {\em spacing loss} of these two down-steps is: \begin{equation} \hat{x}\cdot \abs{P_{d,1}}+(\hat{x}-\abs{P_{d,1}}\Delta)\cdot \abs{P_{d,2}}=\hat{x}\cdot C-\abs{P_{d,1}}(C-\abs{P_{d,1}})\Delta~. \end{equation} If $C\leq m_d$ the {\em spacing loss} is maximized for $\abs{P_{d,1}}=C$ and $\abs{P_{d,2}}=0$.
If $m_d \leq C \leq 2m_d$ then the {\em spacing loss} is maximized for $\abs{P_{d,1}}=m_d$. We conclude that the {\em spacing loss} is maximized by taking a couple of down-steps and merging them into a single down-step (if together they cross no more than $m_d$ states), or by applying the maximum down-step, $m_d$, to the first and $C-m_d$ to the second (if together they cross more than $m_d$ states). Thus, assuming {\em{\textbf{straight}}} sequences, we can start with the first couple of down-steps, maximize the {\em spacing loss} by applying the maximum down-step, then take the third down-step and apply the maximum down-step with the new down-steps that were created. In a recursive way we can maximize the {\em spacing loss} by applying maximum down-steps (note that the number of down-steps decreases, which also increases the {\em spacing loss}). In the same manner it can be shown that applying maximum up-steps maximizes the {\em spacing loss}. \begin{figure}[ht] \centering \includegraphics[width=0.4\columnwidth,height=0.06\textheight]{EEDM3_worstCaseSpacingloss.jpg} \caption[Worst case spacing loss - an example]{An example for the worst case {\em spacing loss} of a minimal circle that crosses $5$ states in the segment $A(3,2)$. \label{fig:EEDM3_worstCaseSpacingloss}} \end{figure} Consider a minimal circle of $C$ states crossed on the way up and down, all in the segment $A(m_u,m_d)$. The worst case scenario for the {\em spacing loss} is composed of $N_u$ up-steps each of $m_u$ states (the maximum up-jump), a single up-step of $c_u$ states, where $c_u=C \bmod m_u$, $N_d$ down-steps each of $m_d$ states (the maximum down-jump), and a single down-step of $c_d$ states, where $c_d=C \bmod m_d$. $N_d$ and $N_u$ satisfy $C=N_um_u+c_u$ and $C=N_dm_d+c_d$. It can be shown that the position in the sequence of the single up-step (of $c_u$ states) and the single down-step (of $c_d$ states) has no impact on the {\em spacing loss}. Let us analyze the {\em spacing loss} of the {\em straight} sequence.
First, the up-steps together satisfy: \begin{align} -\tfrac{1}{L}\sum_{t\in \text{\{up steps\}}} &P_t(\hat{x}_t-\hat{x}_1)= \nonumber\\ &=-\tfrac{1}{L}\Delta(\sum_{i=0}^{N_u-1}m_u(i\cdot m_u)+N_um_uc_u)\nonumber\\ &=-\tfrac{1}{L}\Delta(m_u^2\tfrac{N_u(N_u-1)}{2}+N_um_uc_u)\nonumber\\ &=-\tfrac{1}{L}\tfrac{\Delta}{2}(C^2-m_uC+c_u(m_u-c_u))~. \end{align} In the same manner, the down-steps together satisfy: \begin{align} -\tfrac{1}{L}\sum_{t\in \text{\{down steps\}}} &P_t(\hat{x}_t-\hat{x}_1)= \nonumber\\ &=\tfrac{1}{L}\Delta(\sum_{i=1}^{N_d}m_d(i\cdot m_d)+c_dC)\nonumber\\ &=\tfrac{1}{L}\tfrac{\Delta}{2}(C^2+m_dC-c_d(m_d-c_d))~. \end{align} Thus, the worst case {\em spacing loss} satisfies: \begin{align} -\tfrac{1}{L}\sum_{t=1}^L &P_t(\hat{x}_t-\hat{x}_1)= \nonumber\\ &=\tfrac{1}{L}\tfrac{\Delta}{2}(C(m_u+m_d)-c_u(m_u-c_u)-c_d(m_d-c_d)) \label{eq:forLemma7}\\ &\leq \tfrac{1}{L}\tfrac{\Delta}{2}C(m_u+m_d)~, \end{align} where the length of the circle satisfies: \begin{equation} L=\lceil\tfrac{C}{m_u}\rceil+\lceil\tfrac{C}{m_d}\rceil \geq \tfrac{C}{m_u}+\tfrac{C}{m_d}~. \end{equation} Therefore, the worst case scenario satisfies: \begin{align} -\tfrac{1}{L}\sum_{t=1}^L P_t(\hat{x}_t-\hat{x}_1)&\leq \tfrac{m_um_d}{2}\Delta~. \end{align} Since $\Delta= \Delta(m_u,m_d)=\tfrac{\sqrt{R}}{2m_um_d}$, we conclude that the {\em spacing loss} of any minimal circle within a segment (with an identical spacing gap between all states) satisfies: \begin{align} \text{\em spacing loss} \leq 4\sqrt{R}\tfrac{m_um_d}{2}\Delta(m_u,m_d) = R~. \end{align} \end{proof} \section{Proof of Lemma \ref{lem:spacinglossSegments}} \label{app:LemmaspacinglossSegments} \begin{proof} We denote two adjacent segments by $A(m_{u,1},m_{d,1})$ and $A(m_{u,2},m_{d,2})$. Assume that $A(m_{u,1},m_{d,1})$ is the lower segment and that the minimal circle starts at the lowest state. Denote the spacing gap of each segment by $\Delta_1=\Delta(m_{u,1},m_{d,1})$ and $\Delta_2=\Delta(m_{u,2},m_{d,2})$.
Note that if $\Delta_1<\Delta_2$ then $m_{u,2}=m_{u,1}-1~,~m_{d,2}=m_{d,1}$, and if $\Delta_1 > \Delta_2$ then $m_{u,2}=m_{u,1}~,~m_{d,2}=m_{d,1}+1$. \begin{figure}[ht] \centering \includegraphics[width=1\columnwidth]{appA1.jpg} \caption[Spacing gap in the E-EDM machine]{Spacing gap between states in the connection between the segments $A(m_{u,1},m_{d,1})$ and $A(m_{u,2},m_{d,2})$. See the E-EDM machine definitions in section \ref{sec:DesigningEEDM}. \label{fig:appA_spacingGap}} \end{figure} First we assume that the minimal circle traverses between the segments only once (that is, once on the way up and once on the way down). We also assume that $\Delta_1<\Delta_2$. We can now divide the minimal circle into two virtual minimal circles - take the up-step that traverses the machine to the higher segment and denote the destination state of this jump by $\hat{x}_c$. Take a down-step that crosses state $\hat{x}_c$ and split it into two steps - assuming the down-step crosses $P_d$ states, a jump of $c_d$ states ending at state $\hat{x}_c$ and a jump of $(P_d-c_d)$ states starting from state $\hat{x}_c$. This constructs two minimal circles - a left minimal circle that traverses $C_1$ states and a right minimal circle that traverses $C_2$ states, as depicted in Figure \ref{fig:appA_splitDownStep}. The {\em spacing loss} of the down-step satisfies: \begin{equation} P_d(\hat{x}_c+c_d\Delta_1)=c_d(\hat{x}_c+c_d\Delta_1)+(P_d-c_d)\hat{x}_c+(P_d-c_d)c_d\Delta_1~. \label{eq:downStep} \end{equation} \begin{figure}[ht] \centering \includegraphics[width=1\columnwidth]{appA_leftSmall.jpg} \caption[Minimal circle between segments - splitting a down-step]{Minimal circle that traverses once between segments. Splitting the marked down-step that crosses state $\hat{x}_c$ into two down-steps, creating two virtual minimal circles to the right and left.
Note that since the first $m_{u,2}+m_{d,2}$ states of the second segment have spacing gap $\Delta_1$, the marked down-step only crosses states with spacing gap $\Delta_1$. \label{fig:appA_splitDownStep}} \end{figure} Note that $\hat{x}_c$ lies in the upper segment, but we used $\Delta_1$ since the first $m_{u,2}+m_{d,2}$ states in the upper segment have spacing gap $\Delta_1$ (see the construction of the E-EDM machine in section \ref{sec:DesigningEEDMAlgo}). The first term on the right-hand side of Equation \eqref{eq:downStep} belongs to the {\em spacing loss} of the right minimal circle, and the middle term belongs to the {\em spacing loss} of the left minimal circle. Hence the {\em spacing loss} of the minimal circle is composed of the {\em spacing loss} of the left and right minimal circles plus the last term in Equation \eqref{eq:downStep}. The left minimal circle traverses $C_1$ states, all with spacing gap $\Delta_1$. The right minimal circle traverses $C_2$ states, some with spacing gap $\Delta_1$ and some with $\Delta_2$. We can now conclude that the {\em spacing loss} satisfies: \begin{align} \text{\em spacing} & \text{\em ~loss}\leq 4\sqrt{R}\tfrac{1}{L} \big(~[C_1(m_{u,1}+m_{d,1}) \nonumber \\ &-(P_d-c_d)(m_{d,1}-(P_d-c_d))]\tfrac{\Delta_1}{2}\nonumber\\ &+[C_2(m_{u,2}+m_{d,2})-c_d(m_{d,2}-c_d)]\tfrac{\Delta_2}{2} \nonumber\\ &+c_d(P_d-c_d)\Delta_1~\big)~, \label{eq:app1} \end{align} where we applied Lemma \ref{lem:spacingLossWithinSegment} (Equation \eqref{eq:forLemma7}) to bound the {\em spacing loss} of the left and right minimal circles. Note that Lemma \ref{lem:spacingLossWithinSegment} applies to the right minimal circle since all of its states have a spacing gap of at most $\Delta_2$.
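As a quick numerical sanity check (a sketch of ours, not part of the original argument; variable names are hypothetical), the algebraic identities used here can be verified for random parameters: the down-step splitting identity \eqref{eq:downStep} and the closed-form up-/down-step sums behind Equation \eqref{eq:forLemma7}.

```python
import random

random.seed(0)
for _ in range(1000):
    # Down-step splitting identity (eq:downStep), with D1 playing Delta_1:
    P_d = random.randint(1, 50)
    c_d = random.randint(0, P_d)
    x_c = random.uniform(0.0, 1.0)
    D1 = random.uniform(0.0, 0.1)
    lhs = P_d * (x_c + c_d * D1)
    rhs = c_d * (x_c + c_d * D1) + (P_d - c_d) * x_c + (P_d - c_d) * c_d * D1
    assert abs(lhs - rhs) < 1e-9

    # Closed form of the up-step sum in the worst-case straight sequence:
    m_u, N_u = random.randint(1, 10), random.randint(1, 10)
    c_u = random.randint(0, m_u - 1)
    C = N_u * m_u + c_u
    up_direct = sum(m_u * (i * m_u) for i in range(N_u)) + N_u * m_u * c_u
    assert 2 * up_direct == C * C - m_u * C + c_u * (m_u - c_u)

    # Closed form of the down-step sum:
    m_d, N_d = random.randint(1, 10), random.randint(1, 10)
    cd = random.randint(0, m_d - 1)
    C2 = N_d * m_d + cd
    down_direct = sum(m_d * (i * m_d) for i in range(1, N_d + 1)) + cd * C2
    assert 2 * down_direct == C2 * C2 + m_d * C2 - cd * (m_d - cd)
```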
Now, since $m_{d,1}=m_{d,2}$ and $\Delta_1<\Delta_2$, we get: \begin{align} \text{\em spacing loss}&\leq 4\sqrt{R}\tfrac{1}{L}(C_1(m_{u,1}+m_{d,1})\tfrac{\Delta_1}{2}+ \nonumber\\ &\qquad \qquad +C_2(m_{u,2}+m_{d,2})\tfrac{\Delta_2}{2}) \nonumber\\ &=R\tfrac{1}{L}(\tfrac{C_1}{m_{d,1}}+\tfrac{C_1}{m_{u,1}}+\tfrac{C_2}{m_{d,2}}+\tfrac{C_2}{m_{u,2}})~. \label{eq:app2} \end{align} Let us bound the length of the minimal circle: \begin{align} L &\geq \lceil\tfrac{C_1}{m_{u,1}}\rceil +\lceil\tfrac{C_2}{m_{u,2}}\rceil+\lceil\tfrac{C_1+C_2}{m_{d,1}}\rceil \nonumber\\ &\geq \tfrac{C_1}{m_{u,1}} +\tfrac{C_2}{m_{u,2}}+\tfrac{C_1+C_2}{m_{d,1}}~. \end{align} Substituting this into Equation \eqref{eq:app2} yields: \begin{equation} \text{\em spacing loss}\leq R~. \end{equation} Assume again that the minimal circle traverses between the segments only once, but now that $\Delta_1>\Delta_2$. Divide the minimal circle into two virtual minimal circles in the same manner as above, but now take the down-step that traverses the machine to the lower segment and split an up-step. In the same manner we can show that the {\em spacing loss} is at most $R$. If the minimal circle traverses between the segments $m$ times, we can in the same manner divide the circle into $m$ left minimal circles and $m$ right minimal circles and bound the {\em spacing loss}.\\ \end{proof} \section{Proof of Theorem \ref{thm:nStatesEEDM}} \label{app:TheoremnStatesEEDM} \begin{proof} Consider an E-EDM machine that was designed to attain maximal regret $R_d$. Denoting $R=\frac{R_d}{2}$, the number of states satisfies: \begin{equation} \label{eq:nEEDMStates1} k \leq \sum_{m_u,m_d \in \mathbb{N}}(\tfrac{\abs{A(m_u,m_d)}}{\Delta(m_u,m_d)}+2)~, \end{equation} where all states in the segment $A(m_u,m_d)$ have maximum up- and down-steps of $m_u,~m_d$ states and spacing gap $\Delta(m_u,m_d)$.
As shown in the definitions of the E-EDM machine in section \ref{sec:DesigningEEDM}, we add to each segment at most two states to ensure regret smaller than $R_d$ for sequences that rotate the E-EDM machine in a minimal circle that traverses between segments. Note that there are at most $\lceil\tfrac{1}{2\sqrt{R}}\rceil$ segments.\\ Let us examine Equation \eqref{eq:nEEDMStates1}: \begin{align} k &\leq R^{-1/2}+2+\sum_{m_u,m_d \in \mathbb{N}}\tfrac{\abs{A(m_u,m_d)}}{\Delta(m_u,m_d)} \nonumber\\ & =R^{-1/2}+2+\sum_{m_u,m_d \in \mathbb{N}}\tfrac{\abs{A(m_u,m_d)}}{\sqrt{R}}2 m_u m_d \nonumber\\ & =R^{-1/2}+2+2 R^{-1/2}\sum_{m_u,m_d \in \mathbb{N}}\abs{A(m_u,m_d)}\cdot \nonumber\\ & \qquad \qquad \qquad \qquad \qquad \cdot \lceil\tfrac{1-x-\sqrt{R}} {2\sqrt{R}}\rceil \cdot \lceil\tfrac{x-\sqrt{R}}{2\sqrt{R}}\rceil \Big|_{x\in A(m_u,m_d)} \nonumber\\ & \leq R^{-1/2}+2+\tfrac{1}{2} R^{-3/2}\sum_{m_u,m_d \in \mathbb{N}}\abs{A(m_u,m_d)}\cdot \nonumber\\ & \qquad \qquad \qquad \qquad \qquad \cdot \big(x(1-x)+\sqrt{R}+R\big)\Big|_{x\in A(m_u,m_d)}~. \end{align} Denoting the segments with the same maximum up-step by $B(m_u)$, we can further bound the number of states: \begin{align} k & \leq \tfrac{1}{2} (R^{-1} +3R^{-1/2})+2+\tfrac{1}{2}R^{-3/2}\sum_{m_u \in \mathbb{N}}\abs{B(m_u)}\cdot \nonumber\\ & \qquad \qquad \qquad \qquad \qquad \cdot \max_{x\in B(m_u)}x(1-x)~.
\end{align} Since $\abs{B(m_u)}=2\sqrt{R}$ for almost all $m_u$ (with $\abs{B(m_u)}\leq 2\sqrt{R}$ at the edges of the interval $[0,\tfrac{1}{2}]$), since $x(1-x)$ is a concave function with a unique maximum at $\frac{1}{2}$, and since the number of states in the lower and upper halves is equal, we get: \begin{align} k &\leq \tfrac{1}{2} \big(R^{-1} +3R^{-1/2}\big)+2+ \nonumber \\ &\qquad +R^{-3/2}\sum_{i=1}^{\lceil\tfrac{1}{4\sqrt{R}}\rceil}2\sqrt{R}(\sqrt{R}+i2\sqrt{R})(1-(\sqrt{R}+i2\sqrt{R})) \nonumber\\ &\leq \tfrac{1}{12}R^{-3/2}-\tfrac{5}{12}R^{-1}-12R^{-1/2}-32 \nonumber\\ &= \tfrac{2^{3/2}}{12}R_d^{-3/2}+O(R_d^{-1})~, \end{align} where we applied $R=\frac{R_d}{2}$. We can also bound the number of states from below by: \begin{align} k &\geq \sum_{m_u,m_d \in \mathbb{N}}\tfrac{\abs{A(m_u,m_d)}}{\Delta(m_u,m_d)} \nonumber\\ & \geq \tfrac{1}{2} R^{-3/2}\sum_{m_u,m_d \in \mathbb{N}}\abs{A(m_u,m_d)}\cdot \big(x(1-x)- \nonumber\\ &\qquad \qquad \qquad \qquad \qquad -\sqrt{R}+R\big)\Big|_{x\in A(m_u,m_d)}~. \end{align} As before, denoting the segments with the same maximum up-step by $B(m_u)$, we can bound the number of states from below: \begin{align} k & \geq \tfrac{1}{2} \big(-R^{-1} +R^{-1/2}+ \nonumber\\ & \qquad \qquad \qquad +R^{-3/2}\sum_{m_u \in \mathbb{N}}\abs{B(m_u)}\cdot \min_{x\in B(m_u)}x(1-x)\big)~. \end{align} Using the same approximation as in the upper bound computation, we get: \begin{align} k &\geq \tfrac{1}{12} (R^{-3/2}-15R^{-1}+2R^{-1/2}) \nonumber\\ &= \tfrac{1}{12}(\tfrac{R_d}{2})^{-3/2}+O(R_d^{-1})~. \end{align} Thus, the number of states in the E-EDM machine is bounded from above and from below by $\tfrac{1}{12}(\tfrac{R_d}{2})^{-3/2}+O(R_d^{-1})$. \end{proof} \bibliographystyle{IEEEtran} \bibliography{mybib2} \end{document}
\begin{document} \begin{abstract} We investigate the asymptotics of the expected number of real roots of random trigonometric polynomials $$ X_n(t)=u+\frac{1}{\sqrt{n}}\sum_{k=1}^n (A_k\cos(kt)+B_k\sin(kt)), \quad t\in [0,2\pi],\quad u\in\R $$ whose coefficients $A_k, B_k$, $k\in\N$, are independent identically distributed random variables with zero mean and unit variance. If $N_n[a, b]$ denotes the number of real roots of $X_n$ in an interval $[a,b]\subseteq [0,2\pi]$, we prove that $$ \lim_{n\rightarrow\infty} \frac{\E N_n[a,b]}{n}=\frac{b-a}{\pi\sqrt{3}} \EXP{-\frac{u^2}{2}}. $$ \end{abstract} \maketitle \section{Introduction} \subsection{Main result} In this paper we are interested in the number of real roots of a random trigonometric polynomial $X_n:[0,2\pi]\rightarrow \R$ defined as \begin{equation}\label{eq:trigpolynomial} X_n(t):=u+\frac{1}{\sqrt{n}}\sum_{k=1}^n (A_k\cos(kt)+B_k\sin(kt)), \end{equation} where $n\in \N$, $u\in\R$, and the coefficients $(A_k)_{k\in\N}$ and $(B_k)_{k\in \N}$ are independent identically distributed random variables with \begin{equation}\label{eq:exp_var} \E A_k = \E B_k = 0, \quad \E [A_k^2] = \E [B_k^2] =1. \end{equation} The random variable which counts the number of real roots of $X_n$ in an interval $[a,b]\subseteq [0,2\pi]$ is denoted by $N_n[a,b]$. By convention, the roots are counted with multiplicities and a root at $a$ or $b$ is counted with weight $1/2$. The main result of this paper is as follows. \begin{theorem}\label{satz:hauptaussage} Under assumption~\eqref{eq:exp_var} and for arbitrary $0\leq a<b\leq 2\pi$, the expected number of real roots of $X_n$ satisfies \begin{equation}\label{eq:hauptaussage} \lim_{n\rightarrow\infty} \frac{\E N_n[a,b]}{n}=\frac{b-a}{\pi\sqrt{3}} \EXP{-\frac{u^2}{2}}. 
\end{equation} \end{theorem} The number of real roots of random trigonometric polynomials has been much studied in the case when the coefficients $A_k, B_k$ are Gaussian; see~\cite{dunnage}, \cite{das}, \cite{qualls}, \cite{wilkins}, \cite{farahmand1}, \cite{sambandham1}, to mention only a few references, and the books~\cite{farahmand_book}, \cite{bharucha_reid_book}, where further references can be found. In particular, a proof of~\eqref{eq:hauptaussage} in the Gaussian case can be found in~\cite{dunnage}. Recently, a central limit theorem for the number of real roots was obtained in~\cite{granville_wigman} and then, by a different method employing Wiener chaos expansions, in~\cite{azais_leon}. For random trigonometric polynomials involving only cosines, the asymptotics for the variance (again, only in the Gaussian case) was obtained in~\cite{su_shao}. All references mentioned above rely heavily on the Gaussian assumption, which allows for explicit computations. Much less is known when the coefficients are non-Gaussian. In the case when the coefficients are uniform on $[-1,1]$ and there are no terms involving the sine, an analogue of~\eqref{eq:hauptaussage} was obtained in~\cite{sambandham}. The case when the third moment of the coefficients is finite has been studied in~\cite{sambandham_thangaraj}. After the main part of this work was completed, we became aware of the work of Jamrom~\cite{jamrom} and a recent paper by Angst and Poly~\cite{angst_poly}. Angst and Poly~\cite{angst_poly} proved~\eqref{eq:hauptaussage} (with $u=0$) assuming that the coefficients $A_k$ and $B_k$ have a finite $5$-th moment and satisfy a certain Cram\'er-type condition. Although this condition is satisfied by some discrete probability distributions, it excludes the very natural case of $\pm1$-valued Bernoulli random variables.
Another recent work by Aza\"is et al.~\cite{azais_etal} studies the local distribution of zeros of random trigonometric polynomials and also involves conditions stronger than just the existence of the variance. In the paper of Jamrom~\cite{jamrom}, Theorem~\ref{satz:hauptaussage} (and even its generalization to coefficients from an $\alpha$-stable domain of attraction) is stated without proof. Since full details of Jamrom's proof do not seem to be available and since there were at least three works following~\cite{jamrom} in which the result was established under more restrictive conditions (namely, \cite{sambandham}, \cite{sambandham_thangaraj}, \cite{angst_poly}), it seems of interest to provide a full proof of Theorem~\ref{satz:hauptaussage}. \subsection{Method of proof} The proof uses ideas introduced by Ibragimov and Maslova~\cite{ibragimov_maslova1} (see also the paper by Erd\"os and Offord~\cite{erdoes_offord}) who studied the expected number of real zeros of a random algebraic polynomial of the form \begin{equation*} Q_n(t):=\sum_{k=1}^n A_k t^k. \end{equation*} For an interval $[a,b]\subset[0,2\pi]$ and $n\in\N$ we introduce the random variable $N_n^*[a,b]$, which is the indicator of a \emph{sign change} of $X_n$ at the endpoints of $[a,b]$ and is defined precisely as follows: \begin{equation}\label{eq:definitionnstern} N_n^*[a,b]:=\frac{1}{2}-\frac{1}{2}\sgn(X_n(a)X_n(b))=\begin{cases} 0 &\textrm{if } X_n(a)X_n(b)>0, \\ 1/2 &\textrm{if } X_n(a)X_n(b)=0, \\ 1 &\textrm{if } X_n(a)X_n(b)<0. \end{cases} \end{equation} The proof of Theorem \ref{satz:hauptaussage} consists of two main steps. \vspace*{2mm} \noindent \textit{Step 1: Reduce the study of roots to the study of sign changes.} Intuition tells us that $N_n[\alpha,\beta]$ and $N_n^*[\alpha,\beta]$ should not differ much if the interval $[\alpha,\beta]$ becomes small.
More concretely, one expects that the number of real zeros of $X_n$ on $[0,2\pi]$ should be of order $n$; hence the distance between consecutive roots should be of order $1/n$. This suggests that on an interval $[\alpha,\beta]$ of length $\delta n^{-1}$ (with small $\delta>0$) the event of having at least two roots (or a root with multiplicity at least $2$) should be very improbable. The corresponding estimate will be given in Lemma~\ref{lemma:abschaetzungdmj}. For this reason, it seems plausible that on intervals of length $\delta n^{-1}$ the events ``there is at least one root'', ``there is exactly one root'' and ``there is a sign change'' should almost coincide. A precise statement will be given in Lemma~\ref{lemma:ewuntnsternn}. This part of the proof relies heavily on the techniques introduced by Ibragimov and Maslova~\cite{ibragimov_maslova1} in the case of algebraic polynomials. \vspace*{2mm} \noindent \textit{Step 2: Count sign changes.} We compute the limit of $\E N_n^*[\alpha_n,\beta_n]$ on an interval $[\alpha_n,\beta_n]$ of length $\delta n^{-1}$. This is done by establishing a bivariate central limit theorem stating that as $n\to\infty$ the random vector $(X_n(\alpha_n), X_n(\beta_n))$ converges in distribution to a Gaussian random vector with mean $(u,u)$, unit variance, and covariance $\delta^{-1}\sin\delta$. From this we conclude that $\E N_n^*[\alpha_n,\beta_n]$ converges to the probability of a sign change of this Gaussian vector. Approximating the interval $[a,b]$ by a lattice with mesh size $\delta n^{-1}$ and passing to the limits $n\to\infty$ and then $\delta\downarrow 0$ completes the proof. This part of the proof is much simpler than the corresponding argument of Ibragimov and Maslova~\cite{ibragimov_maslova1}. \vspace*{2mm} \noindent \textit{Notation.} The common characteristic function of the random variables $(A_k)_{k\in \N}$ and $(B_k)_{k\in\N}$ is denoted by $$ \phi(t):=\E \EXP{\I t A_1}, \quad t\in\R.
$$ Due to the assumptions on the coefficients in~\eqref{eq:trigpolynomial}, we can write \begin{equation}\label{eq:charfuncchar} \phi(t)=\EXP{-\frac{t^2}{2}H(t)} \end{equation} for sufficiently small $|t|$, where $H$ is a continuous function with $H(0)=1$. In what follows, $C$ denotes a generic positive constant which may change from line to line. \section{Estimate for $\E N_n[a,b]-\E N_n^*[a,b]$ on small intervals}\label{abschnitt:unterschied} In this section we investigate the expected difference between $N_n[\alpha,\beta]$ and $N_n^*[\alpha,\beta]$ on small intervals $[\alpha,\beta]$ of length $n^{-1} \delta $, where $\delta>0$ is fixed. \subsection{Expectation and variance} The following lemma will be frequently needed. \begin{lemma}\label{lemma:varianzderableitung} For $j\in\N_0$ let $X_n^{(j)}(t)$ denote the $j$th derivative of $X_n(t)$. The expectation and the variance of $X_n^{(j)}$ are given by \begin{equation*} \E X_n^{(j)}(t)= \begin{cases} u ,&j=0,\\ 0 ,&j\in\N, \end{cases} \quad \quad \VAR{X^{(j)}_n(t)} = \frac{1}{n}\sum_{k=1}^n k^{2j}. \end{equation*} \end{lemma} \begin{proof} The $j$th derivative of $X_n$ reads as follows: \begin{align*} &\quad X_n^{(j)}(t)-u\IND{j=0} \\ &=\frac{1}{\sqrt{n}}\sum_{k=1}^n \left(A_k\frac{\d^j}{\d t^j}\cos(kt) +B_k\frac{\d^j}{\d t^j}\sin(kt)\right) \\ &= \frac{1}{\sqrt{n}}\sum_{k=1}^n k^j \begin{cases} (-1)^{j/2}A_k\cos(kt)+(-1)^{j/2}B_k\sin(kt), &\textrm{if }j \textrm{ is even,} \\ (-1)^{\frac{j+1}{2}}A_k\sin(kt)+(-1)^{\frac{j-1}{2}}B_k\cos(kt), &\textrm{if }j \textrm{ is odd.} \end{cases} \end{align*} Recalling that $(A_k)_{k\in \N}$ and $(B_k)_{k\in\N}$ have zero mean and unit variance, we immediately obtain the required formula.
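As an independent numerical cross-check of the variance formula above (a sketch of ours, not part of the paper), one can compute the coefficients of $A_k$ and $B_k$ in $X_n^{(j)}(t)$ via $\frac{\d^j}{\d t^j}\cos(kt)=k^j\cos(kt+j\pi/2)$ and $\frac{\d^j}{\d t^j}\sin(kt)=k^j\sin(kt+j\pi/2)$, and verify that the sum of their squares reproduces $\frac{1}{n}\sum_{k=1}^n k^{2j}$ independently of $t$:

```python
import math

# Var X_n^{(j)}(t) equals the sum of squared coefficients of the i.i.d.
# unit-variance coefficients A_k, B_k in the j-th derivative of X_n.
def variance_of_derivative(n, j, t):
    var = 0.0
    for k in range(1, n + 1):
        coef_a = k**j * math.cos(k * t + j * math.pi / 2) / math.sqrt(n)
        coef_b = k**j * math.sin(k * t + j * math.pi / 2) / math.sqrt(n)
        var += coef_a**2 + coef_b**2
    return var

for (n, j, t) in [(5, 0, 0.7), (10, 2, 1.3), (8, 5, 2.1)]:
    expected = sum(k**(2 * j) for k in range(1, n + 1)) / n
    assert abs(variance_of_derivative(n, j, t) - expected) < 1e-6 * expected
```

The check succeeds for any $t$ because $\cos^2+\sin^2=1$, mirroring the cancellation in the proof.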
\end{proof} \subsection{Estimate for the probability that $X_n^{(j)}$ has many roots}\label{subsec:D_m_j} Given any interval $[\alpha,\beta]\subset [0,2\pi]$, denote by $D^{(j)}_m=D^{(j)}_m(n; \alpha, \beta)$ the event that the $j$th derivative of $X_n(t)$ has at least $m$ roots in $[\alpha,\beta]$ (the roots are counted with their multiplicities and the roots on the boundary are counted without the weight $1/2$). Here, $j\in \N_0$ and $m\in\N$. A key element in our proofs is an estimate for the probability of this event presented in the next lemma. \begin{lemma}\label{lemma:abschaetzungdmj} Fix $j\in \N_0$ and $m\in\N$. For $\delta>0$ and $n\in\N$ let $[\alpha,\beta]\subset [0,2\pi]$ be any interval of length $\beta-\alpha=n^{-1}\delta$. Then, \begin{equation*} \P{D_m^{(j)}} \leq C(\delta^{(2/3)m} +\delta^{-(1/3)m}n^{-(2j+1)/4}), \end{equation*} where $C=C(j,m)>0$ is a constant independent of $n$, $\delta$, $\alpha$, $\beta$. \end{lemma} \begin{proof} For arbitrary $T>0$ we may write \begin{align*} \P{D^{(j)}_m}\leq \P{D^{(j)} _m\cap\left\{\left|\frac{X^{(j)}_n(\beta)}{n^j}\right|\geq T\right\}} + \P{\left|\frac{X^{(j)}_n(\beta)}{n^j}\right|< T}. \end{align*} The terms on the right-hand side will be estimated in Lemmas \ref{lemma:abschaetzungzwei} and \ref{lemma:abschaetzungdrei} below. Using these lemmas, we obtain \begin{align*} \P{D^{(j)}_m} \leq C\left[\frac{n^m}{T}\frac{(\beta-\alpha)^m}{m!}\right]^2 + C\left(T+T^{-1/2}n^{-(2j+1)/4}\right). \end{align*} Setting $T=\delta^{(2/3)m}$ yields the statement. \end{proof} \begin{lemma}\label{lemma:abschaetzungzwei} For all $j\in\N_0$, $m\in\N$ there exists a constant $C=C(j,m)>0$ such that the estimate \begin{equation*} \P{D^{(j)} _m\cap\left\{\left|\frac{X^{(j)}_n(\beta)}{n^j}\right|\geq T\right\}}\leq C\left[\frac{n^m}{T}\frac{(\beta-\alpha)^m}{m!}\right]^2 \end{equation*} holds for all $T>0$, $n\in \N$ and all intervals $[\alpha,\beta]\subseteq [0,2\pi]$. 
\end{lemma} \begin{proof} By Rolle's theorem, on the event $D^{(j)}_m$ we can find (random) $t_0\geq \ldots \geq t_{m-1}$ in the interval $[\alpha,\beta]$ such that $$ X^{(j+l)}_n(t_l)=0 \text{ for all } l\in\{0,\dots, m-1\}. $$ Thus we may consider the random variable \begin{equation*} Y^{(j)}_n:=\IND{D^{(j)}_m}\times \INT{t_0}{\beta}\INT{t_1}{x_1}\dots \INT{t_{m-1}}{x_{m-1}} X^{(j+m)}_n(x_m)\D x_m\dots \d x_1. \end{equation*} On the event $D^{(j)}_m$, the random variables $X^{(j)}_n(\beta)$ and $Y^{(j)}_n$ are equal. On the complement of $D^{(j)}_m$, $Y^{(j)}_n=0$. Hence, it follows that \begin{equation*} \P{D_m^{(j)}\cap\left\{\frac{|X_n^{(j)}(\beta)|}{n^j}\geq T\right\}}\leq \P{\frac{|Y_n^{(j)}|}{n^j}\geq T}. \end{equation*} Markov's inequality yields \begin{align*} \P{|Y^{(j)}_n|\geq Tn^{j}} \leq\frac{1}{T^2n^{2j}}\E \left|\int_{t_0}^{\beta}\int_{t_1}^{x_1}\dots \int_{t_{m-1}}^{x_{m-1}}X_n^{(j+m)}(x_m)\D x_m\dots\d x_1\right|^2. \end{align*} Using Hölder's inequality we may proceed as follows \begin{align*} \P{|Y^{(j)}_n|\geq Tn^{j}}&\leq \frac{1}{T^2n^{2j}}\frac{(\beta-\alpha)^m}{m!} \E \int_{t_0}^{\beta}\int_{t_1}^{x_1} \dots \int_{t_{m-1}}^{x_{m-1}}|X^{(j+m)}_n(x_m)|^2\D x_m\dots \d x_1 \\ &\leq \frac{1}{T^2n^{2j}}\left[\frac{(\beta-\alpha)^m}{m!}\right]^2 \sup_{x\in[\alpha,\beta]} \mathbb{E}|X_n^{(j+m)}(x)|^2. \end{align*} It remains to find a suitable estimate for $\sup_{x\in [\alpha,\beta]} \mathbb{E}|X^{(j+m)}_n(x)|^2$. From Lemma~\ref{lemma:varianzderableitung} it follows that \begin{equation*} \mathbb{E}|X^{(m+j)}_n(x)|^2 = \VAR X^{(j+m)}_n(x) = \frac{1}{n}\sum_{k=1}^n k^{2(j+m)}\leq C(j,m) n^{2(j+m)} \end{equation*} holds, whence the statement follows immediately. \end{proof} \begin{lemma}\label{lemma:abschaetzungdrei} Fix $j\in\N_0$. 
There exists a constant $C=C(j)>0$ such that for all $n\in\N$, $T>0$, $\beta\in[0,2\pi]$, \begin{equation} \label{eq:concentration} \P{\left|\frac{X^{(j)}_n(\beta)}{n^j}\right|\leq T}\leq C\left(T+T^{-1/2}n^{-(2j+1)/4}\right). \end{equation} \end{lemma} \begin{proof} For $\lambda >0$ let $\eta$ be a random variable (independent of $X_n^{(j)}(\beta)$) with characteristic function \begin{equation*} \psi(t):=\EW{\EXP{\I t\eta}}=\frac{\sin^2(t\lambda)}{t^2\lambda^2}. \end{equation*} That is, $\eta$ is the sum of two independent random variables which are uniformly distributed on $[-\lambda,\lambda]$. Consider the random variable $$ \tilde{X}^{(j)}_n(\beta):=n^{-j} X^{(j)}_n(\beta)+\eta. $$ For all $T>0$ we have \begin{equation}\label{eq:abschaetzungdrei} \P{\left|\frac{X^{(j)}_n(\beta)}{n^j}\right|\leq T}\leq \P{|\tilde{X}^{(j)}_n(\beta)|\leq \frac{3}{2}T} +\P{|\eta|\geq \frac{1}{2}T} \end{equation} and we estimate the terms on the right-hand side separately. \vspace*{2mm} \noindent \textit{First term on the RHS of~\eqref{eq:abschaetzungdrei}}. The density of $\tilde{X}_n^{(j)}(\beta)$ exists and can be expressed using the inverse Fourier transform of its characteristic function denoted in the following by $$ \tilde{\phi}_n(t):=\E\EXP{\I t \tilde{X}_n^{(j)}(\beta)}. $$ Using the representation for $X_n^{(j)}(\beta)$ obtained in the proof of Lemma~\ref{lemma:varianzderableitung} and recalling that $\phi$ is the characteristic function of $A_k$ and $B_k$, we obtain $$ |\tilde{\phi}_n(t)| =\psi(t) \prod_{k=1}^n\left|\phi\left(k^j\frac{t\cos(k \beta)}{n^{j+1/2}}\right)\right|\left|\phi\left(k^j\frac{t\sin(k\beta)}{n^{j+1/2}}\right)\right|.
$$ Using Fourier inversion, for every $y\geq 0$ we may write \begin{align*} \P{|\tilde{X}^{(j)}_n(\beta)|\leq y} &=\frac{2}{\pi} \INT{0}{\infty} \frac{\sin(yt)}{t} \RE \tilde{\phi}_n(t) \D t\\ &\leq \frac{2y}{\pi} \INT{0}{\infty} \psi(t) \prod_{k=1}^n \left|\phi\left(k^j\frac{t\cos(k \beta)}{n^{j+1/2}}\right)\right|\left|\phi\left(k^j\frac{t\sin(k\beta)}{n^{j+1/2}}\right)\right| \D t. \end{align*} We used that $|t^{-1}\sin(yt)|\leq y$ for every $y\geq 0$ and $t\neq 0$. The coefficients $A_k$ and $B_k$ are supposed to have zero mean and unit variance. From this we can conclude that \begin{equation}\label{eq:varphi_est} |\phi(t)|\leq \exp(-t^2/4) \text{ for } t\in [-c,c], \end{equation} where $c>0$ is a sufficiently small constant. Let $\{\Gamma_l:l=0,\dots, n\}$ be a disjoint partition of $\R_+$ defined by \begin{align*} \Gamma_{l}&:=\left\{ t:\frac{cn^{j+1/2}}{(l+1)^j}\leq t<\frac{cn^{j+1/2}}{l^j} \right\} \quad \textrm{for }l=1,\dots, n-1,\\ \Gamma_{n}&:=\left\{t:0\leq t< c\sqrt{n}\right\},\\ \Gamma_{0}&:=\{t:t\geq cn^{j+1/2}\}. \end{align*} We decompose the integral above as follows: $$ \P{|\tilde{X}^{(j)}_n(\beta)|\leq y} \leq \frac{2y}{\pi}\sum_{l=0}^n I_l, $$ where $$ I_l := \int_{\Gamma_l} \psi(t) \prod_{k=1}^n \left|\phi\left(k^j\frac{t\cos(k \beta)}{n^{j+1/2}}\right)\right|\left|\phi\left(k^j\frac{t\sin(k\beta)}{n^{j+1/2}}\right)\right|\D t. $$ For the integral over $\Gamma_0$ we may write using $|\phi(t)|\leq 1$ and $\sin^2(\lambda t)\leq 1$, \begin{equation*} I_0 \leq \INT{cn^{j+1/2}}{\infty} \psi(t) \D t= \INT{cn^{j+1/2}}{\infty} \frac{\sin^2(\lambda t)}{\lambda^2t^2} \D t \leq \frac{1}{c\lambda^2} n^{-(j+1/2)}. 
\end{equation*} The integral over $\Gamma_n$ is smaller than a positive constant $C>0$ independent of $n$ because we can estimate all terms involving $\phi$ by means of~\eqref{eq:varphi_est} as follows: \begin{equation*} I_n\leq \INT{0}{c\sqrt{n}}\psi(t) \EXP{-\frac{1}{4}\frac{t^2}{n^{2j+1}}\sum_{k=1}^n k^{2j}}\D t \leq \INT{0}{\infty} \EXP{-t^2\gamma}\D t \leq C, \end{equation*} where $\gamma>0$ is a small constant and we used that \begin{equation*} \sum_{k=1}^n k^{2j}\sim \frac{n^{2j+1}}{2j+1} \quad \textrm{as }n\rightarrow\infty. \end{equation*} For $t\in\Gamma_l$ with $l=1,\dots, n-1$ we have \begin{equation*} \left|l^j\frac{t\cos(l\beta)}{n^{j+1/2}}\right| \leq \frac{tl^j}{n^{j+1/2}} \leq c, \quad \left|l^j\frac{t\sin(l\beta)}{n^{j+1/2}}\right| \leq \frac{tl^j}{n^{j+1/2}} \leq c. \end{equation*} Thus, we can estimate all factors with $k=1,\ldots,l$ using~\eqref{eq:varphi_est}, whereas for all other factors we use the trivial estimate $|\phi(t)|\leq 1$: \begin{align*} I_l&\leq \int_{\Gamma_l} \psi(t)\EXP{-\frac{1}{4}\frac{t^2}{n^{2j+1}}\sum_{k=1}^l k^{2j}} \D t \\ &\leq \int_{\frac{cn^{j+1/2}}{(l+1)^j}}^{\frac{cn^{j+1/2}}{l^j}} \frac{1}{\lambda^2t^2} \EXP{-\gamma_1 t^2\left(\frac{l}{n}\right)^{2j+1}} \D t \\ &=\frac{1}{\lambda^2} \left(\frac{l}{n}\right)^{j+1/2} \int_{c\frac{l^{j+1/2}}{(l+1)^j}}^{c\sqrt{l}}\frac{1}{u^2} \EXP{-\gamma_1u^2}\D u \\ &\leq \frac{C}{\lambda^2} \left(\frac{l}{n}\right)^{j+1/2}\EXP{-\gamma_2 l}, \end{align*} where $\gamma_1,\gamma_2>0$ are small constants and we substituted $u^2=t^2(l/n)^{2j+1}$. Summing up yields \begin{equation*} \sum_{l=1}^{n-1} I_l \leq C \lambda^{-2}n^{-(j+1/2)}\sum_{l=1}^{n-1}l^{j+1/2}\EXP{-\gamma_2l} \leq C'\lambda^{-2}n^{-(j+1/2)}. \end{equation*} Taking the estimates for $I_0,\ldots,I_n$ together, for every $y\geq 0$ we obtain \begin{equation}\label{eq:beweisesteins} \P{|\tilde{X}_n^{(j)}(\beta)|\leq y}\leq Cy\left(\frac{1}{\lambda^2}n^{-(j+1/2)}+1\right).
\end{equation} \vspace*{2mm} \noindent \textit{Second term on the RHS of~\eqref{eq:abschaetzungdrei}}. The second term on the right-hand side of \eqref{eq:abschaetzungdrei} can be estimated using Chebyshev's inequality (and $\E \eta=0$). Namely, for every $z>0$, \begin{equation}\label{eq:beweisestzwei} \P{|\eta|\geq z} \leq \frac{\VAR{\eta}}{z^2} = \frac{2}{3}\frac{\lambda^2}{z^2}. \end{equation} \vspace*{2mm} \noindent \textit{Proof of~\eqref{eq:concentration}}. We arrive at the final estimate by setting $y=3T/2$ and $z=T/2$ in \eqref{eq:beweisesteins} and \eqref{eq:beweisestzwei}, respectively. We obtain that for every $\lambda>0$ and $T>0$ the inequality \begin{equation*} \P{\left|\frac{X^{(j)}_n(\beta)}{n^j}\right|\leq T}\leq C\left(\frac{T}{\lambda^2}n^{-(j+1/2)}+T+\frac{\lambda^2}{T^2}\right) \end{equation*} holds for a positive constant $C=C(j)>0$. This bound can be optimized by choosing a suitable $\lambda>0$. Setting $\lambda=T^{3/4} n^{-(j/4+1/8)}$, the statement of the lemma follows. \end{proof} \subsection{Roots and sign changes} The next lemma contains the main result of this section. \begin{lemma}\label{lemma:ewuntnsternn} For every $\delta\in (0,1/2)$ there exists $n_0=n_0(\delta)\in \N$ such that for all $n\geq n_0$ and every interval $[\alpha,\beta]\subset [0,2\pi]$ of length $\beta-\alpha=\delta n^{-1}$ we have the estimate \begin{equation*} 0\leq \E N_n[\alpha,\beta]- \E N_{n}^*[\alpha,\beta] \leq C (\delta^{4/3}+\delta^{-7}n^{-1/4}), \end{equation*} where $C>0$ is a constant independent of $n$, $\delta$, $\alpha$, $\beta$. \end{lemma} A crucial feature of this estimate is that the exponent $4/3$ of $\delta$ is $>1$, while the exponent of $n$ is negative. \begin{proof} Let $D^{(j)}_m$ be the random event defined as in Section~\ref{subsec:D_m_j}.
Observe that, due to the convention by which $N_n[\alpha,\beta]$ counts the roots, the difference between $N_n^*[\alpha,\beta]$ and $N_n[\alpha,\beta]$ vanishes in the following cases: \begin{itemize} \item $X_n$ has no roots in $[\alpha,\beta]$ (in which case $N_n[\alpha,\beta]=N_n^*[\alpha,\beta]=0$); \item $X_n$ has exactly one simple root in $(\alpha,\beta)$ and no roots on the boundary (in which case $N_n[\alpha,\beta]=N_n^*[\alpha,\beta]=1$); \item $X_n$ has no roots in $(\alpha,\beta)$ and one simple root (counted as $1/2$) at either $\alpha$ or $\beta$ (in which case $N_n[\alpha,\beta]=N_n^*[\alpha,\beta]=1/2$). \end{itemize} In all other cases (namely, on the event $D^{(0)}_2$ when the number of roots in $[\alpha,\beta]$, with multiplicities, but without $1/2$-weights on the boundary, is at least $2$) we only have the trivial estimate $$ 0\leq N_n[\alpha,\beta]-N_n^*[\alpha,\beta]\leq N_n[\alpha,\beta]. $$ Since $D_2^{(0)}\supseteq D_3^{(0)}\supseteq \ldots$ and on the event $D^{(0)}_m \backslash D^{(0)}_{m+1}$ it holds that $N_n[\alpha,\beta]\leq m$, we obtain \begin{align*} 0\leq \E N_n[\alpha,\beta]-\E N_n^*[\alpha,\beta] &\leq \EW{N_n[\alpha,\beta]\IND{D^{(0)}_2}}\\ &\leq\P{D^{(0)}_2}+\sum_{m=2}^{2n}\P{D^{(0)}_m} \\ &\leq \P{D^{(0)}_2}+\sum_{m=2}^{21}\P{D^{(0)}_m} +\sum_{m=2}^{2n-20} \P{D^{(20)}_{m}}, \end{align*} where in the last step we passed to the $20$-th derivative of $X_n$ using Rolle's theorem. The upper bounds for the first two terms on the right-hand side follow immediately from Lemma \ref{lemma:abschaetzungdmj}, namely $$ \P{D^{(0)}_2} + \sum_{m=2}^{21}\P{D^{(0)}_m} \leq C(\delta^{4/3} + \delta^{-7} n^{-1/4}). $$ Thus we focus on the last term. For every $\delta>0$ (and $n$ large enough) we can find a number $k_0 = k_0(\delta,n)\in\{2,\dots, 2n\}$ such that \begin{equation*} n^{2}\leq\delta^{-k_0/3}<\delta^{-2k_0/3} \leq n^{5}.
\end{equation*} For $m=2,\dots, k_0$ the estimate for the probability of $D^{(20)}_m$ presented in Lemma \ref{lemma:abschaetzungdmj} is good enough, whereas for $m=k_0+1,\dots, 2n-20$ we use the fact that $D^{(20)}_{k_0}\supseteq D^{(20)}_{k_0+l}$ for all $l\in \N$. This yields \begin{align*} \sum_{m=2}^{2n-20} \P{D^{(20)}_{m}} &\leq \sum_{m=2}^{k_0}\P{D^{(20)}_{m}} +\sum_{m=k_0+1}^{2n-20}\P{D^{(20)}_{k_0}} \\ & \leq \sum_{m=2}^{k_0} C(\delta^{2m/3}+ \delta^{-m/3} n^{-10}) + 2Cn (\delta^{2k_0/3} + \delta^{-k_0/3}n^{-10})\\ &\leq C(\delta^{4/3}+n^{-5})+2Cn (n^{-2} + n^{-5})\\ &\leq C(\delta^{4/3}+\delta^{-7}n^{-1/4}). \end{align*} Combining the above estimates yields the statement of the lemma. \end{proof} \section{The related stationary Gaussian process}\label{abschnitt:berechnung} \subsection{Convergence to the Gaussian case} In the following let $(Z(t))_{t\in\R}$ denote the stationary Gaussian process with $\E Z(t)=u$, $\VAR Z(t)=1$, and covariance \begin{equation*} \Cov\left[Z(t),Z(s)\right]=\frac{\sin(t-s)}{t-s}, \quad t\neq s. \end{equation*} The following lemma states the weak convergence of the bivariate distribution of $(X_n(\alpha),X_n(\beta))$ with $\beta-\alpha = n^{-1}\delta$ to $(Z(0), Z(\delta))$, as $n\to\infty$. \begin{lemma}\label{lem:lim_S_n} Let $\delta>0$ be arbitrary but fixed. For $n\in\N$ let $[\alpha_n,\beta_n]\subseteq [0,2\pi]$ be an interval of length $\beta_n-\alpha_n=n^{-1}\delta$. Then \begin{equation*} \begin{pmatrix} X_n(\alpha_n)\\ X_n(\beta_n) \end{pmatrix} \to \begin{pmatrix} Z(0) \\ Z(\delta) \end{pmatrix} \quad \textrm{in distribution as $n\to\infty$}. \end{equation*} \end{lemma} \begin{proof} To prove the statement it suffices to show the pointwise convergence of the corresponding characteristic functions. Let \begin{equation*} \phi_n(\lambda,\mu):=\E \e^{\I(\lambda X_n(\alpha_n)+\mu X_n(\beta_n))} \end{equation*} denote the characteristic function of $(X_n(\alpha_n),X_n(\beta_n))$. 
Recall that $\phi$ represents the common characteristic function of the coefficients $(A_k)_{k\in\N}$ and $(B_k)_{k\in\N}$. Then $\phi_n$ factorizes as \begin{align*} &\phi_n(\lambda,\mu)= \\ &\e^{\I u(\lambda+\mu)} \prod_{k=1}^n \phi\left(\frac{\lambda \cos(k\alpha_n)+\mu\cos(k\beta_n)}{\sqrt{n}}\right) \phi\left(\frac{\lambda \sin(k\alpha_n)+\mu\sin(k\beta_n)}{\sqrt{n}}\right). \end{align*} Using \eqref{eq:charfuncchar} we have \begin{align*} \phi_n(\lambda,\mu)&=\e^{-S_n(\lambda,\mu)}, \\ S_n(\lambda, \mu)&:= -\I u(\lambda+\mu)+\frac{1}{2n}\sum_{k=1}^n (\lambda\cos(k\alpha_n)+\mu\cos(k\beta_n ))^2H_1(n,k)\\ &+\frac{1}{2n}\sum_{k=1}^n (\lambda\sin(k\alpha_n)+\mu\sin(k\beta_n))^2H_2(n,k), \end{align*} where we have set \begin{align*} H_1(n,k)&:=H\left(\frac{\lambda\cos(k\alpha_n)+\mu\cos(k\beta_n)}{\sqrt{n}}\right),\\ H_2(n,k)&:=H\left(\frac{\lambda\sin(k\alpha_n)+\mu\sin(k\beta_n)}{\sqrt{n}}\right). \end{align*} After elementary transformations and using that $\beta_n-\alpha_n=n^{-1}\delta$ we obtain \begin{align*} & S_n(\lambda, \mu)= \\ &-\I u(\lambda+\mu) +\frac{1}{n}\sum_{k=1}^n H_1(n,k) \left( \frac{\lambda^2}{2}+\frac{\mu^2}{2}+\lambda\mu \cos\left(k\frac{\delta}{n}\right) \right) +R_n(\lambda,\mu), \end{align*} where we have abbreviated \begin{equation*} R_n(\lambda,\mu):=\frac{1}{2n}\sum_{k=1}^n(\lambda\sin(k\alpha_n)+\mu\sin(k\beta_n ))^2(H_2(n,k)-H_1(n,k)). \end{equation*} Since Riemann sums converge to Riemann integrals, we have $$ \lim_{n\to\infty} \frac{1}{n}\sum_{k=1}^n \left( \frac{\lambda^2}{2}+\frac{\mu^2}{2}+\lambda\mu \cos\left(k\frac{\delta}{n}\right) \right) = \frac{\lambda^2}{2}+\frac{\mu^2}{2}+\lambda\mu \frac{\sin \delta}{\delta}. $$ For $i=1,2$ we have that $\lim_{n\to\infty} H_i(n,k) = H(0)=1$ uniformly in $k=1,2,\dots, n$.
Hence, \begin{equation*} \left|\frac{1}{n}\sum_{k=1}^n (H_1(n,k)-1)\left( \frac{\lambda^2}{2}+\frac{\mu^2}{2}+\lambda\mu \cos\left(k\frac{\delta}{n}\right) \right)\right| \leq \frac{C}{n}\sum_{k=1}^n |H_1(n,k)-1| \longrightarrow 0 \end{equation*} as $n\to\infty$. Similarly, the remaining term satisfies \begin{equation*} |R_n(\lambda,\mu)|\leq \frac{1}{2n}\sum_{k=1}^nC|H_2(n,k)-H_1(n,k)| \longrightarrow 0 \end{equation*} for all fixed $\lambda,\mu$, as $n\to \infty$. Therefore we have \begin{equation}\label{eq:S_infty} S_\infty(\lambda,\mu):=\lim_{n\rightarrow\infty} S_n(\lambda, \mu)= -\I u(\lambda+\mu)+ \frac{\lambda^2+\mu^2}{2} + \lambda\mu \frac{\sin(\delta)}{\delta} \end{equation} and $\phi_\infty(\lambda,\mu):= \EXP{-S_\infty(\lambda,\mu)}$ is nothing but the characteristic function of $(Z(0),Z(\delta))$. This implies the statement. \end{proof} \subsection{The Gaussian case} Denote by $\tilde N^*[\alpha,\beta]$ the analogue of $N_n^*[\alpha, \beta]$ for the process $Z$, that is \begin{equation}\label{eq:definitionnstern} \tilde N^*[\alpha,\beta]:=\frac{1}{2}-\frac{1}{2}\sgn(Z(\alpha)Z(\beta)). \end{equation} \begin{lemma}\label{lem:gaussian} As $\delta\downarrow 0$, we have \begin{equation}\label{eq:gaussian_crossing_probab} \E \tilde{N}^*[0,\delta]= \frac{1}{\pi\sqrt{3}}\EXP{-\frac{u^2}{2}}\delta+o(\delta). \end{equation} \end{lemma} \begin{proof} The bivariate random vector $(Z(0),Z(\delta))$ is normally distributed with mean $(u,u)$ and correlation $\rho=\delta^{-1}\sin \delta$. We have \begin{align*} \E \tilde{N}^*[0,\delta] &=\P{Z(0)Z(\delta)<0}\\ &=2\P{Z(0)-u < -u, Z(\delta)-u>-u}\\ &\sim\frac{\sqrt{1-\rho^2}}{\pi}\EXP{-\frac{u^2}{2}} \end{align*} as $\delta\downarrow 0$ (equivalently, $\rho\uparrow 1$), where the last step will be justified in Lemma~\ref{lemma:levelcrossings}, below.
Using the Taylor expansion of $\delta^{-1}\sin \delta$, which is given by \begin{equation} \frac{\sin(\delta)}{\delta} = 1-\frac{\delta^2}{6}+o(\delta^2) \quad \textrm{as } \delta \downarrow 0, \end{equation} we obtain the required relation~\eqref{eq:gaussian_crossing_probab}. \end{proof} \begin{lemma}\label{lemma:levelcrossings} Let $(X,Y)\sim N(\mu,\Sigma)$ be bivariate normally distributed with parameters \begin{equation*} \mu=\begin{pmatrix} 0 \\ 0 \end{pmatrix} \quad \textrm{and} \quad \Sigma=\begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}. \end{equation*} Let $u\in \R$ be arbitrary but fixed. Then, \begin{equation*} \P{X\leq u,Y\geq u}\sim \frac{\sqrt{1-\rho^2}}{2\pi}\EXP{-\frac{u^2}{2}} \qquad \textrm{as } \rho \uparrow 1. \end{equation*} \end{lemma} \begin{proof} In the special case $u=0$ the lemma can be deduced from the explicit formula $$ \P{X\geq 0,Y\geq 0} = \frac 14 + \frac{\arcsin \rho}{2\pi} $$ due to W.~F.\ Sheppard; see~\cite{bingham_doney} and the references therein. For general $u$, no similar formula seems to exist and we need a different method. By the formula for the density of the random vector $(X,Y)$, we have to investigate the integral \begin{equation*} \int_{x\leq u} \int_{y\geq u} \frac{1}{2\pi\sqrt{(1-\rho^2)}} \exp\left(-\frac{1}{2(1-\rho^2)}(x^2+y^2-2\rho xy)\right)\ \d x \d y \end{equation*} as $\rho\uparrow 1$. After the substitution $x=u-\epsilon v$ and $y=u+\epsilon w$ with a parameter $\epsilon>0$ to be chosen below, the integral becomes \begin{align*} &\frac{\epsilon^2}{2\pi\sqrt{1-\rho^2}}\exp\left(-\frac{u^2}{1+\rho}\right) \\ &\quad \times \INT{0}{\infty}\INT{0}{\infty} \EXP{ \frac{u\epsilon}{1+\rho}(v-w) -\frac{\epsilon^2}{2(1-\rho^2)}(v^2+w^2+2\rho vw) }\mathrm{d}v\mathrm{d}w.
\end{align*} Setting $\epsilon:=\sqrt{1-\rho^2}$ we have $\epsilon\rightarrow 0$ as $\rho\uparrow 1$ and furthermore (using the dominated convergence theorem) \begin{align*} \P{X\leq u,Y\geq u}&\sim \frac{\sqrt{1-\rho^2}}{2\pi}\exp\left(-\frac{u^2}{2}\right) \INT{0}{\infty}\INT{0}{\infty} \exp\left(-\frac{1}{2}(v+w)^2\right)\mathrm{d}v\mathrm{d}w \\ &=\frac{\sqrt{1-\rho^2}}{2\pi}\exp\left(-\frac{u^2}{2}\right) \quad \textrm{as }\rho \uparrow 1, \end{align*} where we have used that \begin{equation*} \INT{0}{\infty}\INT{0}{\infty} \exp\left(-\frac{1}{2}(v+w)^2\right)\d v\d w = \INT{0}{\infty}z\EXP{-\frac{1}{2}z^2}\d z =1. \end{equation*} \end{proof} \section{Proof of the main result} \subsection{Approximation by a lattice} Fix an interval $[a,b]\subset [0,2\pi]$ and take some $\delta>0$. We will study the sign changes of $X_n$ on the lattice $\delta n^{-1} \Z$. Unfortunately, the endpoints of the interval $[a,b]$ need not be elements of this lattice. To avoid boundary effects, we notice that $[a_n', b_n']\subset [a,b] \subset [a_n,b_n]$ with \begin{equation*} a_n:=\frac{\delta}{n}\left\lfloor \frac{an}{\delta} \right\rfloor, \quad b_n:=\frac{\delta}{n}\left\lceil \frac{bn}{\delta} \right\rceil, \quad a'_n:=\frac{\delta}{n}\left\lceil \frac{an}{\delta} \right\rceil, \quad b'_n:=\frac{\delta}{n}\left\lfloor \frac{bn}{\delta} \right\rfloor. \end{equation*} Since $N_n[a_n',b_n']\leq N_n[a,b]\leq N_n[a_n,b_n]$, it suffices to show that $$ \lim_{n\to\infty} \frac{\E N_n[a_n,b_n]}{n} = \lim_{n\to\infty} \frac{\E N_n[a_n',b_n']}{n} =\frac{b-a}{\pi\sqrt{3}}\EXP{-\frac{u^2}{2}}. $$ In the following, we compute the first limit because the second one is completely analogous.
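The outer/inner rounding of $[a,b]$ to the lattice $\delta n^{-1}\Z$ is elementary but easy to get wrong. The following sketch (in Python, with illustrative values of $a$, $b$, $\delta$, $n$ not taken from the text) checks the nesting $[a_n',b_n']\subset [a,b]\subset [a_n,b_n]$ and that the outward rounding loses at most one lattice cell per side:

```python
from math import floor, ceil

def lattice_endpoints(a, b, delta, n):
    """Round [a, b] outward to [a_n, b_n] and inward to [a'_n, b'_n]
    on the lattice (delta/n) * Z, as defined in the text."""
    h = delta / n  # lattice mesh size
    a_n  = h * floor(a / h)   # outward (left)
    b_n  = h * ceil(b / h)    # outward (right)
    a_pn = h * ceil(a / h)    # inward (left)
    b_pn = h * floor(b / h)   # inward (right)
    return a_n, b_n, a_pn, b_pn

# illustrative values
a, b, delta, n = 1.0, 2.5, 0.3, 50
a_n, b_n, a_pn, b_pn = lattice_endpoints(a, b, delta, n)
assert a_n <= a <= a_pn and b_pn <= b <= b_n        # nesting of the three intervals
assert b_n - a_n <= (b - a) + 2 * delta / n          # at most one extra cell per side
```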
Let $N^*_{n,\delta}[a_n,b_n]$ be a random variable counting the number of sign changes of $X_n$ on the lattice $\delta n^{-1}\Z$ between $a_n$ and $b_n$, namely \begin{equation*} N^*_{n,\delta}[a_n,b_n]:=\sum_{k=\lfloor \delta^{-1}an\rfloor}^{\lceil\delta^{-1}bn\rceil-1} N_n^*\left[\frac{k\delta}{n},\frac{(k+1)\delta}{n}\right]. \end{equation*} The following lemma claims that the expected difference between the number of roots and the number of sign changes is asymptotically small. \begin{lemma}\label{lemma:hauptbeweiszwei} It holds that \begin{equation*} \lim_{\delta\downarrow 0}\limsup_{n\to\infty} \frac{\E N_{n}[a_n,b_n] - \E N^*_{n,\delta}[a_n,b_n]}{n}=0. \end{equation*} \end{lemma} \begin{proof} The triangle inequality and Lemma \ref{lemma:ewuntnsternn} yield that for any $\delta\in (0,1/2)$ and all sufficiently large $n$, \begin{align*} &\left|\E N_n[a_n,b_n]-\E N^*_{n,\delta}[a_n,b_n]\right| \\ =&\quad \left|\sum_{k=\lfloor \delta^{-1}an\rfloor}^{\lceil\delta^{-1}bn\rceil-1} \left(\E N_n\left[\frac{k\delta}{n},\frac{(k+1)\delta}{n}\right] - \E N^*_n\left[\frac{k\delta}{n},\frac{(k+1)\delta}{n}\right]\right)\right| \\ \leq&\quad \sum_{k=\lfloor \delta^{-1}an\rfloor}^{\lceil\delta^{-1}bn\rceil-1} \left|\E N_n\left[\frac{k\delta}{n},\frac{(k+1)\delta}{n}\right] - \E N^*_n\left[\frac{k\delta}{n},\frac{(k+1)\delta}{n}\right]\right| \\ \leq &\quad C\frac{n}{\delta}\left(\delta^{4/3}+\delta^{-7}n^{-1/4}\right). \end{align*} It follows that for every fixed $\delta\in (0,1/2)$, \begin{equation*} \limsup_{n\to\infty} \frac{\left|\E N_n[a_n,b_n]-\E N^*_{n,\delta}[a_n,b_n]\right|}{n} \leq C\delta^{1/3}. \end{equation*} Letting $\delta \to 0$ completes the proof. \end{proof} \subsection{Sign changes over the lattice} In the next lemma we find the asymptotic number of sign changes of $X_n$ over a lattice with mesh size $\delta n^{-1}$, as $n\to\infty$ and then $\delta\downarrow 0$. 
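Concretely, $N^*_{n,\delta}$ is nothing but the number of sign changes (with exact zeros weighted by $1/2$, matching the boundary convention) along the finite sequence of lattice values of $X_n$. A toy Python sketch of this counting rule, applied to an artificial sequence rather than to the random polynomial:

```python
def sign(x):
    """sgn(x) in {-1, 0, 1}."""
    return (x > 0) - (x < 0)

def lattice_sign_changes(values):
    """Sum of 1/2 - 1/2 * sgn(x_i * x_{i+1}) over consecutive lattice
    values, mirroring the definition of N*_{n,delta} as a sum of the
    cell-wise counts N*_n."""
    return sum(0.5 - 0.5 * sign(x * y) for x, y in zip(values, values[1:]))

# artificial sequence with sign pattern +, -, +, +, -
vals = [1.0, -2.0, 3.0, 4.0, -5.0]
assert lattice_sign_changes(vals) == 3.0   # changes at (1,-2), (-2,3), (4,-5)
assert lattice_sign_changes([1.0, 0.0]) == 0.5   # a zero contributes weight 1/2
```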
\begin{lemma}\label{lemma:hauptbeweisdrei} It holds that \begin{equation*} \lim_{\delta\downarrow 0} \lim_{n\to\infty} \frac{\E N^*_{n,\delta}[a_n,b_n]}{n}=\frac{b-a}{\pi\sqrt{3}}\EXP{-\frac{u^2}{2}}. \end{equation*} \end{lemma} \begin{proof} Fix $0< \delta\leq 1$. For every $x\in\R$ we can find a unique integer $k=k(x;n,\delta)$ such that $x\in (n^{-1}\delta k,n^{-1}\delta (k+1)]$. Thus the function $f_n: \R\rightarrow [0,1]$ given by \begin{equation*} f_n(x):=\E N_n^*\left[\frac{k\delta}{n},\frac{(k+1)\delta}{n}\right] \quad \textrm{for} \quad \frac{k\delta}{n}<x\leq \frac{(k+1)\delta}{n} \end{equation*} is well-defined for all $n\in\N$. Now $\E N^*_{n,\delta}[a_n,b_n]$ can be expressed as \begin{equation*} \frac{\E N_{n,\delta}^*[a_n,b_n]}{n}=\frac{1}{\delta}\INT{a_n}{b_n}f_n(x)\D x. \end{equation*} Recall that $(Z(t))_{t\in\R}$ denotes the stationary Gaussian process with mean $\E Z(t)=u$ and covariance \begin{equation*} \Cov\left[Z(t),Z(s)\right]=\frac{\sin(t-s)}{t-s}. \end{equation*} We want to show that for all $x\in \R$, \begin{equation}\label{eq:konvergenz} \lim_{n\to\infty} f_n(x)=\P{Z(0)Z(\delta)\leq 0}. \end{equation} Write $\alpha_n:=k n^{-1}\delta$ and $\beta_n:=(k+1)n^{-1}\delta$, so that $\beta_n-\alpha_n=n^{-1}\delta$. We obtain from Lemma~\ref{lem:lim_S_n} that \begin{equation*} \begin{pmatrix} X_n(\alpha_n) \\ X_n(\beta_n) \end{pmatrix} \to \begin{pmatrix} Z(0) \\Z(\delta) \end{pmatrix} \quad \textrm{in distribution as $n\to\infty$.} \end{equation*} Now consider the function \begin{equation*} h:\R^2\to \{-1,0,1\},\quad h(x,y):=\sgn(x)\sgn(y). \end{equation*} Let $D_h\subseteq \R^2$ be the set of discontinuities of $h$, which in this case is the union of the coordinate axes. Since $(Z(0),Z(\delta))^T$ is bivariate normal with unit variances, it follows that \begin{equation*} \P{ \begin{pmatrix} Z(0) \\Z(\delta) \end{pmatrix} \in D_h}=0. 
\end{equation*} Using the continuous mapping theorem (see, e.g., \cite[Theorem 2.7]{billingsley_book}), we conclude that \begin{equation*} h(X_n(\alpha_n),X_n(\beta_n)) \to h(Z(0), Z(\delta)) \quad \textrm{in distribution as $n\to\infty$}. \end{equation*} Since these random variables are bounded by $1$, it follows that \begin{equation*} \lim_{n\to\infty} \EW{\sgn X_n(\alpha_n)\sgn X_n(\beta_n)}= \EW{\sgn Z(0)\sgn Z(\delta)}. \end{equation*} Recalling that \begin{align*} f_n(x)=\E N^*_n[\alpha_n,\beta_n]&=\EW{\frac{1}{2}-\frac{1}{2}\sgn X_n(\alpha_n)\sgn X_n(\beta_n)},\\ \P{Z(0)Z(\delta)\leq 0}&=\EW{\frac{1}{2}-\frac{1}{2}\sgn Z(0)\sgn Z(\delta)} \end{align*} completes the proof of~\eqref{eq:konvergenz}. Since $0\leq f_n\leq 1$ for all $n\in \N$, we may use the dominated convergence theorem to obtain \begin{align*} \lim_{n\to\infty} \INT{a_n}{b_n} f_n(x)\D x =\INT{a}{b} \P{Z(0)Z(\delta)\leq 0} \D x =(b-a)\P{Z(0)Z(\delta)\leq 0}. \end{align*} Therefore for all $\delta>0$, \begin{equation*} \lim_{n\to\infty} \frac{\E N_{n,\delta}^* [a_n,b_n]}{n}= \frac{\P{Z(0)Z(\delta)\leq 0}}{\delta} (b-a). \end{equation*} Due to Lemma \ref{lem:gaussian}, \begin{equation*} \lim_{\delta\downarrow 0}\frac{\P{Z(0)Z(\delta)\leq 0}}{\delta}= \EXP{-\frac{u^2}{2}}\frac{1}{\pi\sqrt{3}}, \end{equation*} whence the statement follows. \end{proof} \noindent \emph{Proof of Theorem 1.} The triangle inequality yields \begin{align*} &\quad \left|\frac{\E N_n[a_n,b_n]}{n}-\frac{b-a}{\pi\sqrt{3}}\EXP{-\frac{u^2}{2}}\right| \\ &\leq \left|\frac{\E N_{n}[a_n,b_n] - \E N^*_{n,\delta}[a_n,b_n]}{n}\right|+ \left|\frac{\E N^*_{n,\delta}[a_n,b_n]}{n} -\frac{b-a}{\pi\sqrt{3}}\EXP{-\frac{u^2}{2}}\right|. \end{align*} Letting first $n\to\infty$ and then $\delta\downarrow 0$, the first term of the sum on the right-hand side converges to $0$ due to Lemma \ref{lemma:hauptbeweiszwei}, while the second term of the sum converges to $0$ due to Lemma \ref{lemma:hauptbeweisdrei}.
This proves that $$ \lim_{n\to\infty} \frac{\E N_n[a_n,b_n]}{n} = \frac{b-a}{\pi\sqrt{3}}\EXP{-\frac{u^2}{2}}. $$ An analogous argument shows that $[a_n,b_n]$ can be replaced by $[a_n', b_n']$. This completes the proof. \beweisende \bibliographystyle{alpha} \bibliography{random_trig} \end{document}
\begin{document} \maketitle \begin{abstract} Model order selection (MOS) in linear regression models is a widely studied problem in signal processing. Techniques based on information theoretic criteria (ITC) are algorithms of choice in MOS problems. This article proposes a novel technique, called residual ratio thresholding (RRT), for MOS in linear regression models which is fundamentally different from the ITC based MOS criteria widely discussed in literature. This article also provides a rigorous mathematical analysis of the high signal to noise ratio (SNR) and large sample size behaviour of RRT. RRT is numerically shown to deliver a highly competitive performance when compared to popular model order selection criteria like Akaike information criterion (AIC), Bayesian information criterion (BIC), penalised adaptive likelihood (PAL) etc. especially when the sample size is small. \end{abstract} \section{Introduction} Consider a linear regression model ${\bf y}={\bf X}\boldsymbol{\beta}+{\bf w}$, where ${\bf X}=[{\bf x}_1,\dotsc,{\bf x}_p] \in \mathbb{R}^{n \times p}$ is a known design matrix with columns $\{{\bf x}_k\}_{k=1}^p$, $\boldsymbol{\beta}\in \mathbb{R}^p$ is an unknown regression vector and ${\bf w}$ is a Gaussian distributed noise vector with mean ${\bf 0}_n$ and covariance matrix $\sigma^2{\bf I}_n$. Here ${\bf 0}_n$ is the $n \times 1$ zero vector and ${\bf I}_n$ is the $n\times n$ identity matrix. We assume that the design matrix ${\bf X}$ has full column rank, i.e., $rank({\bf X})=p$ which is possible only if $n \geq p$. The noise variance $\sigma^2$ is assumed to be unknown. The model order $k_0$ of the regression vector is the largest index $k$ with $\boldsymbol{\beta}_k\neq 0$. Mathematically, $k_0=\max\{k:\boldsymbol{\beta}_k\neq 0\}$ or equivalently $k_0=\min\{k:\boldsymbol{\beta}_j= 0,\forall j> k\}$. In many cases of practical interest, $\boldsymbol{\beta}_k\neq 0$ for all $k\leq k_0$.
In those situations, the model order also corresponds to the number of non-zero entries in the regression vector $\boldsymbol{\beta}$. We also assume that the regression vector $\boldsymbol{\beta}\neq {\bf 0}_p$ which ensures that $k_0\geq 1$. Model order selection (MOS)\cite{stoica2004model}, i.e., identification or detection of the model order $k_0$ using ${\bf y}$ and ${\bf X}$, has many applications including channel estimation in wireless communications\cite{raghavendra2005improving,tomasoni2013efficient}, fixing filter lengths in digital signal processing\cite{filter_design}, fixing the order in auto regressive (AR) time series models\cite{schmidt2011estimating} etc. This article deals with the development of novel techniques for MOS. After presenting the notations used in this article, we discuss the prior art on MOS and the novel contributions in this article. \subsection{Notations used} Bold upper case letters represent matrices and bold lower case letters represent vectors. $span({\bf X})$ is the column space of ${\bf X}$. ${\bf X}^T$ is the transpose and ${\bf X}^{\dagger}=({\bf X}^T{\bf X})^{-1}{\bf X}^T$ is the pseudo inverse of ${\bf X}$. $[k]$ denotes the set $\{1,2,\dotsc,k\}$. ${\bf X}_{\mathcal{J}}$ denotes the sub-matrix of ${\bf X}$ formed using the columns indexed by $\mathcal{J}$. In particular ${\bf X}_{[k]}=[{\bf x}_1,\dotsc,{\bf x}_k]$. ${\bf P}_{k}={\bf X}_{[k]}{\bf X}^{\dagger}_{[k]}$ is the projection matrix onto $span({\bf X}_{[k]})$. ${\bf a}_{\mathcal{J}}$ and ${\bf a}({\mathcal{J}})$ both denote the entries of vector ${\bf a}$ indexed by $\mathcal{J}$. $\|{\bf a}\|_q=(\sum\limits_{j=1}^m|{\bf a}_j|^q)^{1/q} $ is the $l_q$ norm of ${\bf a}\in \mathbb{R}^m$. $\phi$ represents the null set. ${\bf O}_n$ is the $n\times n$ zero matrix. For any two index sets $\mathcal{J}_1$ and $\mathcal{J}_2$, the set difference $\mathcal{J}_1/\mathcal{J}_2=\{j:j \in \mathcal{J}_1\& j\notin \mathcal{J}_2\}$.
$f(m)=O(g(m))$ iff $\underset{m \rightarrow \infty}{\limsup}\frac{f(m)}{g(m)}<\infty$. $\mathbb{E}(Z)$ represents the expectation of random variable/vector (R.V) $Z$ and $\mathbb{P}(\mathcal{A})$ represents the probability of event $\mathcal{A}$. ${\bf a}\sim \mathcal{N}({\bf u},{\bf C})$ implies that ${\bf a}$ is a Gaussian R.V with mean ${\bf u}$ and covariance matrix ${\bf C}$. $\mathbb{B}(a,b)$ denotes a Beta R.V with parameters $a$ and $b$. $B(a,b)=\int_{t=0}^1t^{a-1}(1-t)^{b-1}dt$ is the beta function with parameters $a$ and $b$. $\chi^2_k$ is a central chi square R.V with $k$ degrees of freedom (d.o.f), whereas, $\chi^2_k(\lambda)$ is a non-central chi square R.V with $k$ d.o.f and non-centrality parameter $\lambda$. $G(x)=\int_{t=0}^{\infty}e^{-t}t^{x-1}dt$ is the Gamma function. A R.V $Z$ converges in probability to a constant $c$ (i.e., $Z\overset{P}{\rightarrow } c$) as $n \rightarrow \infty$ (or as $\sigma^2 \rightarrow 0$) if $\underset{n \rightarrow \infty}{\lim}\mathbb{P}(|Z-c|> \epsilon)=0$ (or $\underset{\sigma^2 \rightarrow 0}{\lim}\mathbb{P}(|Z-c|> \epsilon)=0$) for every fixed $\epsilon>0$. \subsection{Prior art on MOS} MOS is one of the most widely studied topics in signal processing. Among the plethora of MOS techniques discussed in literature, the ones based on information theoretic criteria (ITC) are the most popular. The operational form of ITC based MOS techniques is given by \begin{equation}\label{itc} \hat{k_0}=\underset{k=1,\dotsc,p}{\min}n\log(\sigma^2_k)+h(k,\sigma^2_k), \end{equation} where $\sigma^2_k=\|({\bf I}_n-{\bf P}_k){\bf y}\|_2^2/n$ is the maximum likelihood estimate of $\sigma^2$ assuming that the model ${\bf y}={\bf X}_{[k]}\boldsymbol{\beta}_{[k]}+{\bf w}$ is true. The term $n\log(\sigma^2_k)$ measures how well the observation ${\bf y}$ is approximated using the columns in ${\bf X}_{[k]}$. Since one can approximate ${\bf y}$ better using more columns, the term $n\log(\sigma^2_k)$ is a decreasing function of $k$.
Hence, $\underset{k=1,\dotsc,p}{\arg\min}\ n\log(\sigma^2_k)$ always equals the maximum possible model order, i.e., $p$. The second term $h(k,\sigma^2_k)$ in (\ref{itc}), popularly called the penalty function, is typically an increasing function of $k$. Consequently, the ITC in (\ref{itc}) selects as the model order estimate the $k$ which provides a good trade-off between data fit represented by $n\log(\sigma^2_k)$ and complexity represented by $h(k,\sigma^2_k)$. Please note that the term $n\log(\sigma^2_k)$ is twice the negative log likelihood of the data ${\bf y}$ maximized w.r.t the unknown parameters $\{\boldsymbol{\beta}_{[k]},\sigma^2\}$. Consequently, (\ref{itc}) has an interpretation of minimizing penalised log likelihood, a widely popular statistical concept. Another interesting interpretation of (\ref{itc}) from the perspective of sequential hypothesis testing is derived in \cite{stoica2004information}. The properties of an ITC are completely determined by the penalty function $h(k,\sigma^2_k)$, and different penalty functions give different ITC. The penalty function $h(k,\sigma^2_k)$ in (\ref{itc}) can either be a deterministic function of $k$ as in Akaike information criteria \textit{aka} AIC ($h(k,\sigma^2_k)=2k$), large sample version of Bayesian information criteria \textit{aka} BIC ($h(k,\sigma^2_k)=k\log(n)$)\cite{stoica2004model} etc. or a stochastic function as in penalized adaptive likelihood PAL\cite{stoica2013model}, normalised minimum description length NMDL\cite{rissanen2000mdl}, finite sample forms of BIC\cite{stoica2012proper}, empirical BIC\cite{nielsen2013bayesian}, exponentially embedded families (EEF)\cite{EEFPDF,ding2011inconsistency} etc. Penalty functions in popular ITC like AIC, BIC, MDL, NMDL, EEF etc. are derived using statistical concepts like Kullback--Leibler divergence, Laplace approximation for integrals, information theoretic complexity, exponential family of distributions etc.
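Although the various ITC differ only in $h(k,\sigma^2_k)$, the shared skeleton of (\ref{itc}) is worth making concrete. The following sketch (Python/NumPy; the synthetic design matrix, noise level and true order are illustrative, not from the paper) computes $\sigma^2_k$ by least squares on ${\bf X}_{[k]}$ and minimizes the penalized criterion, with the AIC and BIC penalties quoted above as examples:

```python
import numpy as np

def itc_order(y, X, penalty):
    """Generic ITC model order selection: argmin_k n*log(sigma_k^2) + h(k)."""
    n, p = X.shape
    scores = []
    for k in range(1, p + 1):
        # residual of y after least-squares fit on the first k columns
        resid = y - X[:, :k] @ np.linalg.lstsq(X[:, :k], y, rcond=None)[0]
        sigma2_k = np.sum(resid**2) / n
        scores.append(n * np.log(sigma2_k) + penalty(k, n))
    return 1 + int(np.argmin(scores))

aic = lambda k, n: 2 * k           # deterministic penalty, AIC
bic = lambda k, n: k * np.log(n)   # deterministic penalty, large-sample BIC

# illustrative synthetic problem with true model order k0 = 5
rng = np.random.default_rng(0)
n, p, k0 = 200, 20, 5
X = rng.standard_normal((n, p)) / np.sqrt(n)
beta = np.concatenate([np.ones(k0), np.zeros(p - k0)])
y = X @ beta + 0.01 * rng.standard_normal(n)
k_bic = itc_order(y, X, bic)
assert k0 <= k_bic <= p   # at this high SNR, BIC should not underfit
```

Any of the stochastic penalties from the text can be plugged in by swapping the `penalty` callable.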
Please see \cite{tsp} for a list of popular penalty functions. Most of the analytical results on ITC are based on either the large sample asymptotics, i.e., $n \rightarrow \infty, p/n \rightarrow 0$ \cite{nishii1988maximum,asymptotic_map,rao1989strongly,shao1997asymptotic,zheng1995consistent,minimal} or the high signal to noise ratio (SNR) asymptotics, i.e., as $\sigma^2\rightarrow 0$\cite{ding2011inconsistency,tsp,schmidt2012consistency,stoica2012proper}. These asymptotic results are summarized in the following lemma. \begin{lemma} The ITC based MOS estimate in (\ref{itc}) satisfies the following consistency results.\cite{nishii1988maximum,tsp}\\ a). Suppose that $h(k,\sigma^2_k)=vk$ and the maximum model order $p$ is fixed. Then $v/\log(\log(n))\rightarrow \infty$ and $v/n\rightarrow 0$ are sufficient for the large sample consistency, i.e., the probability of correct selection $PCS=\mathbb{P}(\hat{k}_0=k_0)\rightarrow 1$ as $n \rightarrow \infty$. \\ b). ITC with $h(k,\sigma^2_k)=(ak+b)\log(1/\sigma^2_k)$ in (\ref{itc}) is high SNR consistent (i.e., $PCS\rightarrow 1$ as $\sigma^2\rightarrow 0$) if $ak_0+b<n$. \end{lemma} Using these consistency results, it is easy to show that ITC like NMDL, proper forms of BIC, EEF etc. are high SNR consistent. Similarly, one can show that BIC, NMDL etc. are also large sample consistent. Techniques to create novel penalty functions based on the high SNR behaviour of ITC were proposed in \cite{designITC, tsp, stoica2013model}. Please note that ITC based MOS rules have also been developed for non-linear order selection problems like source number enumeration in \cite{lu2013generalized,haddadi2010statistical}, sinusoidal enumeration\cite{nielsen2014model,asymptotic_map} etc. \subsection{Contribution of this article} This article proposes a novel technique for MOS called residual ratio thresholding (RRT).
RRT is based on the behaviour of the adjacent residual norm ratio $RR(k)=\dfrac{\sigma^2_k}{\sigma^2_{k-1}}$ and is structurally different from the ITC based MOS criteria in (\ref{itc}). { Unlike popular algorithms like BIC, AIC etc. which are motivated by large sample asymptotics, RRT is motivated by a finite sample distributional result (see Theorem \ref{thm:beta_mos}). This finite sample nature of RRT is reflected in its superior empirical performance compared to AIC, BIC etc. when the sample size is small. } RRT involves a tuning parameter $\alpha$ for which we give a proper semantic interpretation using high SNR and large sample analysis. In particular, one can set the tuning parameter in RRT so as to achieve a predetermined high SNR and large sample lower bound on PCS. In this sense, RRT is similar to the ITC design technique in \cite{designITC}. However, for the same high SNR error bound, RRT is numerically shown to deliver better PCS in the low to medium SNR regime than \cite{designITC}. Further, the conditions required for the large sample consistency of RRT are also derived. { Numerical simulations indicate that RRT performs better than existing ITC based MOS techniques in many situations including but not limited to the case of small $n$ and $p$. In situations where RRT is outperformed by other MOS criteria, RRT performs close to the best performing MOS criterion.} Based on the derived analytical results and observed numerical results, we believe that RRT deserves a place in the algorithmic toolkit for MOS problems. This article is organized as follows. Section \rom{2} analyses the behaviour of the residual ratio $RR(k)$. Section \rom{3} presents and analyses the RRT based MOS technique. Section \rom{4} presents numerical simulations. \section{Behaviour of residual ratios} Define ${\bf r}^k=({\bf I}_n-{\bf P}_k){\bf y}$, the residual after projecting onto the column space of ${\bf X}_{[k]}$. In terms of $\sigma^2_k$, $\|{\bf r}^k\|_2^2=n\sigma^2_k$.
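Since the matrices ${\bf X}_{[k]}$ are nested, all $p$ residual norms $\|{\bf r}^k\|_2^2$ can be obtained from a single thin QR factorization of ${\bf X}$ rather than $p$ separate least squares fits. A minimal NumPy sketch (the random test instance is illustrative):

```python
import numpy as np

def residual_norms(y, X):
    """||r^k||^2 = ||(I - P_k) y||^2 for k = 1..p, via a thin QR of X.
    Column k of Q spans the increment from span(X_[k-1]) to span(X_[k]),
    assuming X has full column rank."""
    Q, _ = np.linalg.qr(X)
    c = Q.T @ y                        # coordinates of y in the Q basis
    # ||r^k||^2 = ||y||^2 - sum_{j <= k} c_j^2
    return np.dot(y, y) - np.cumsum(c**2)

# illustrative random instance
rng = np.random.default_rng(1)
n, p = 30, 20
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)
sq = residual_norms(y, X)
assert np.all(np.diff(sq) <= 1e-9)     # more columns never increase the residual
# agrees with a direct least-squares residual at k = p
direct = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
assert abs(sq[-1] - np.dot(direct, direct)) < 1e-8
```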
In this section, we rigorously analyse the behaviour of residual ratios $RR(k)=\dfrac{\|{\bf r}^k\|_2^2}{\|{\bf r}^{k-1}\|_2^2}=\dfrac{\sigma^2_k}{\sigma^2_{k-1}}=\dfrac{\|({\bf I}_n-{\bf P}_k){\bf y}\|_2^2}{\|({\bf I}_n-{\bf P}_{k-1}){\bf y}\|_2^2}$ for $k\geq k_0$. The proposed RRT technique for MOS is based on this analysis. The basic distributional results are listed in the following lemma. \begin{lemma}\label{lemma:basic_distributions} $RR(k)$ satisfies the following for all $\sigma^2>0$.\\ a). $RR(k)$ for $k>k_0$ satisfies $RR(k)\sim \mathbb{B}(\dfrac{n-k}{2},\dfrac{1}{2})$.\\ b). $RR(k_0)=\dfrac{Z_1}{Z_1+Z_2}$, where $Z_1=\|({\bf I}_n-{\bf P}_{k_0}){\bf w}\|_2^2\sim \sigma^2\chi^2_{n-k_0}$ and $Z_2=\|({\bf P}_{k_0}-{\bf P}_{k_0-1}){\bf y}\|_2^2\sim \sigma^2\chi^2_1\left(\dfrac{\|({\bf I}_{n}-{\bf P}_{k_0-1}){\bf x}_{k_0}\|_2^2\boldsymbol{\beta}_{k_0}^2}{\sigma^2}\right)$. \end{lemma} \begin{proof} Please see Appendix A. \end{proof} We now give a bound in Theorem \ref{thm:beta_mos} which is a direct consequence of the distributional result a) in Lemma \ref{lemma:basic_distributions}. \begin{thm}\label{thm:beta_mos} Define $\Gamma_{RRT}^{\alpha}(k)=F_{\frac{n-k}{2},\frac{1}{2}}^{-1}\left(\frac{\alpha}{p}\right)$ for $1\leq k\leq p<n$, where $F_{\frac{n-k}{2},\frac{1}{2}}^{-1}()$ is the inverse cumulative distribution function (CDF) of a $\mathbb{B}(\frac{n-k}{2},\frac{1}{2})$ R.V. Then $\mathbb{P}(RR(k)>\Gamma_{RRT}^{\alpha}(k),\forall k> k_0)\geq 1-\alpha$ for each $0<\alpha<1$ and $\sigma^2>0$. \end{thm} \begin{proof} The proof follows directly from the union bound\footnote{For any $n$ events $\{\mathcal{A}_i\}_{i=1}^n$, union bound is $\mathbb{P}(\cup_{i=1}^nA_i)\leq \sum \limits_{i=1}^n\mathbb{P}(A_i)$}, the definition of $\Gamma_{RRT}^{\alpha}(k)$ and the result $RR(k)=\dfrac{\|{\bf r}^k\|_2^2/\sigma^2}{\|{\bf r}^{k-1}\|_2^2/\sigma^2}\sim \mathbb{B}(\dfrac{n-k}{2},\dfrac{1}{2})$ for $k>k_0$. 
\begin{equation} \begin{array}{ll} \mathbb{P}(RR(k)>\Gamma_{RRT}^{\alpha}(k),\forall k> k_0)\\ \ \ \ \ \ \ =1-\mathbb{P}(\exists k>k_0: RR(k)<\Gamma_{RRT}^{\alpha}(k))\\ \ \ \ \ \ \ \geq 1-\sum\limits_{k>k_0}^p\mathbb{P}(RR(k)<\Gamma_{RRT}^{\alpha}(k))\\ \ \ \ \ \ \ =1-\sum\limits_{k>k_0}^pF_{\frac{n-k}{2},\frac{1}{2}}\left(F_{\frac{n-k}{2},\frac{1}{2}}^{-1}\left(\frac{\alpha}{p}\right)\right)\\ \ \ \ \ \ \ = 1-\frac{(p-k_0)}{p}\alpha\geq 1-\alpha. \end{array} \end{equation} \end{proof} Theorem \ref{thm:beta_mos} implies that $RR(k)$ for $k>k_0$ is lower bounded by $\Gamma_{RRT}^{\alpha}(k)$ with a very high probability (for small values of $\alpha$). Please note that the bound in Theorem \ref{thm:beta_mos} holds true irrespective of the value of $k_0$. Also note that the lower bound $\Gamma_{RRT}^{\alpha}(k)$ on $RR(k)$ for $k>k_0$ is independent of $\sigma^2$. Certain other interesting properties of $\Gamma_{RRT}^{\alpha}(k)$ are listed below. \begin{lemma} $\Gamma_{RRT}^{\alpha}(k)$ satisfies the following properties. \\ a). For fixed $n$ and $p$, $\Gamma_{RRT}^{\alpha}(k)$ decreases with decreasing $\alpha$. In particular $\Gamma_{RRT}^0(k)=0$ and $\Gamma_{RRT}^p(k)=1$.\\ b). For fixed $n$ and $\alpha$, $\Gamma_{RRT}^{\alpha}(k)$ decreases with increasing $p$.\\ c). For fixed $n$, $p$ and $\alpha$, $\Gamma_{RRT}^{\alpha}(k)$ decreases with increasing $k$. \end{lemma} \begin{proof} a) and b) follow from the monotonicity of CDF and the fact that a Beta distribution has support only in [0,1]. c) is true since the Beta CDF $F_{a,b}(x)$ is a decreasing function of $a$ for fixed values of $b$ and inverse CDF $F^{-1}_{a,b}(x)$ is an increasing function of $a$ for fixed values of $b$. \end{proof} We next consider the behaviour of $RR(k_0)$ as $\sigma^2 \rightarrow 0$. The main result is stated in the following Theorem. \begin{thm}\label{thm:rrknot} $RR(k_0)\overset{P}{\rightarrow}0$ as $\sigma^2 \rightarrow 0$. \end{thm} \begin{proof}Please see Appendix B.
\end{proof} Theorem \ref{thm:rrknot} implies that $RR(k_0)$ takes smaller and smaller values with increasing SNR. This is in contrast with $RR(k)$ for $k>k_0$ which is lower bounded by a constant independent of the operating SNR. { The analysis of $RR(k)$ for $k<k_0$ is not relevant to the RRT algorithm discussed in this article and is therefore omitted. However, following the proof of Theorem \ref{thm:rrknot}, one can easily show that $RR(k)$ for $k<k_0$ converges in probability to a constant $c_k=\dfrac{\|({\bf I}_n-{\bf P}_k){\bf X}\boldsymbol{\beta}\|_2^2}{\|({\bf I}_n-{\bf P}_k){\bf X}\boldsymbol{\beta}\|_2^2+\|({\bf P}_{k}-{\bf P}_{k-1}){\bf X}\boldsymbol{\beta}\|_2^2}$ which is strictly bounded away from zero and one. } \subsection{Numerical Validation } \begin{figure*}[htb] \begin{multicols}{2} \includegraphics[width=1\linewidth]{plotting_0dB.eps} \caption*{a). SNR=0dB. $\{RR(k)>\Gamma_{RRT}^{\alpha}(k),\forall\ k>k_{0}\}$ $93.5\%$ for ($\alpha=0.1$), $99\%$ for ($\alpha=0.01$) } \includegraphics[width=1\linewidth]{plotting_20dB.eps} \caption*{b). SNR=20dB. $\{RR(k)>\Gamma_{RRT}^{\alpha}(k),\forall\ k>k_{0}\}$ $94.1\%$ for ($\alpha=0.1$), $99.3\%$ for ($\alpha=0.01$) } \end{multicols} \squeezeup \caption{Behaviour of $RR(k)$. $k_0=5$. SNR=0dB (left) and SNR=20dB (right). Circles in Fig.1 represent the values of $RR(k)$, squares represent $\Gamma_{RRT}^{\alpha}(k)$ with $\alpha=0.1$ and diamonds represent $\Gamma_{RRT}^{\alpha}(k)$ with $\alpha=0.01$. } \label{fig:evolution} \squeezeup \end{figure*} We next numerically validate the distributional results derived in the previous subsections, \textit{viz}. Theorem \ref{thm:beta_mos} and Theorem \ref{thm:rrknot}. Consider a $30 \times 20$ design matrix ${\bf X}$ generated using independent $\mathcal{N}(0,1/n)$ entries. $k_0$ is set at $k_0=5$ and $\boldsymbol{\beta}_k=\pm 1$ for all $k\leq k_0$.
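The Beta law of Lemma \ref{lemma:basic_distributions}a) can also be probed by direct simulation. The sketch below (Python/NumPy) mimics the setup just described, a $30\times 20$ Gaussian design with $k_0=5$ (for simplicity all nonzero coefficients are set to $+1$ and the noise level is arbitrary, since the law of $RR(k)$ for $k>k_0$ does not depend on $\sigma^2$), and compares the empirical mean of $RR(10)$ with the Beta mean $(n-k)/(n-k+1)$:

```python
import numpy as np

def RR(y, X, k):
    """Residual ratio ||(I - P_k) y||^2 / ||(I - P_{k-1}) y||^2."""
    def sq_res(j):
        r = y - X[:, :j] @ np.linalg.lstsq(X[:, :j], y, rcond=None)[0]
        return np.dot(r, r)
    return sq_res(k) / sq_res(k - 1)

rng = np.random.default_rng(2)
n, p, k0, k, sigma = 30, 20, 5, 10, 0.5
beta = np.concatenate([np.ones(k0), np.zeros(p - k0)])
samples = []
for _ in range(1000):
    X = rng.standard_normal((n, p)) / np.sqrt(n)
    y = X @ beta + sigma * rng.standard_normal(n)
    samples.append(RR(y, X, k))
# For k > k0, RR(k) ~ Beta((n-k)/2, 1/2), whose mean is (n-k)/(n-k+1),
# independently of sigma; the tolerance is several standard errors wide.
assert abs(np.mean(samples) - (n - k) / (n - k + 1)) < 0.02
```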
We plot 1000 realizations of $\{RR(k)\}_{k=1}^{p}$ at two different SNRs, \textit{viz}. SNR=0dB (Fig.1.a) and SNR=20dB (Fig.1.b). From these plots and the empirically evaluated probabilities of $\{RR(k)>\Gamma_{RRT}^{\alpha}(k),\forall k>k_0\}$ reported alongside, it is clear that the $1-\alpha$ probability bound predicted by Theorem \ref{thm:beta_mos} holds true. Further, as one can see from Fig.1.a and Fig.1.b, the value of $RR(k_0)=RR(5)$ decreases with increasing SNR. This observation is in agreement with the convergence result $RR(k_0)\overset{P}{\rightarrow} 0$ as $\sigma^2\rightarrow 0$ in Theorem \ref{thm:rrknot}. \section{Residual ratio thresholding based MOS} From the behaviour of $RR(k)$ discussed analytically and numerically in section \rom{2}, it is clear that $RR(k)$ for $k>k_0$ is larger than $\Gamma_{RRT}^{\alpha}(k)$ with a very high probability (for smaller values of $\alpha$), whereas, $RR(k_0)$ decreases to zero with increasing SNR or equivalently decreasing $\sigma^2$. Consequently, $RR(k_0)$ will be smaller than $\Gamma_{RRT}^{\alpha}(k_0)$ at high SNR, whereas, $RR(k)>\Gamma_{RRT}^{\alpha}(k)$ for all $k>k_0$ with a high probability. Hence, with increasing SNR, the model order estimate \begin{equation}\label{rrt} \hat{k}_{RRT}=\max\{k:RR(k)\leq \Gamma_{RRT}^{\alpha}(k)\} \end{equation} will correspond to $k_0$ with a very high probability. This is the RRT based MOS criterion proposed in this article. The efficacy of RRT is visible from Fig.1.b itself, where $\hat{k}_{RRT}=k_0$ with probability $94.1\%$ for $\alpha=0.1$ and $99.3\%$ for $\alpha=0.01$ respectively. \begin{remark} \label{rem:substitute} An important aspect regarding the RRT based MOS in (\ref{rrt}) is the choice of $\hat{k}_{RRT}$ when the set $\{k:RR(k)<\Gamma_{RRT}^{\alpha}(k)\}=\phi$. This situation happens only at very low SNR. Further, throughout this article, we assumed that $k_0\geq 1$.
Hence, setting $\hat{k}_{RRT}=0$ when $\{k:RR(k)<\Gamma_{RRT}^{\alpha}(k)\}=\phi$ is not a prudent choice. In this article, we set $\hat{k}_{RRT}=\max\{k:RR(k)\leq \Gamma_{RRT}^{\alpha_{new}}(k)\}$ where \begin{equation}\label{alphanew} \alpha_{new}=\underset{a>\alpha}{\min} \{a: \{k:RR(k)\leq \Gamma_{RRT}^{a}(k)\}\neq\phi\}. \end{equation} Since $a=p$ gives $\Gamma_{RRT}^{a}(1)=1$ and $RR(1)\leq 1$, a value of $\alpha_{new}\leq p$ always exists. $\alpha_{new}$ can be easily computed by first pre-computing $\{\Gamma_{RRT}^{a}(k)\}_{k=1}^p$ for say 100 prefixed values of $a$ in the interval $(\alpha,p]$. Note that the value of $\alpha_{new}$ can be greater than 1 and hence $\alpha_{new}$ does not have any probabilistic interpretation. Please note that $\Gamma_{RRT}^{\alpha}(k)$ and $\{\Gamma_{RRT}^{a}(k)\}_{k=1}^p$ for $a$ in $(\alpha,p]$ can all be precomputed. Hence, the online computational complexity of RRT is the same as that of ITC in (\ref{itc}). \end{remark} \begin{remark} RRT is directly based on the evolution of residual norms and residual ratios with increasing SNR. This is in contrast with AIC, BIC etc., whose penalty terms are based on information theoretic arguments and their asymptotic approximations. This is a fundamental philosophical difference between AIC, BIC etc. and RRT. In this sense, RRT is philosophically closer to PAL \cite{stoica2013model}, whose penalty term is also derived directly from the behaviour of residual norms. { The fact that RRT is based on finite sample results directly related to the statistics involved in MOS explains the superior performance of RRT \textit{vis-\`a-vis} BIC, AIC etc. when the sample size is small (see section \rom{4}).} \end{remark} \subsection{High SNR behaviour and interpretation of $\alpha$} We next explain the high SNR behaviour of $\hat{k}_{RRT}$.
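As a concrete aid, the selection rule (\ref{rrt}) together with the fallback of (\ref{alphanew}) can be sketched as follows (a hedged sketch: the helper names and the coarse grid of 100 values of $a$ are our own choices, mirroring the pre-computation suggested above):

```python
import numpy as np
from scipy.stats import beta as beta_dist

def gamma_rrt(a, n, p, ks):
    """Gamma_RRT^a(k) = F^{-1}_{(n-k)/2, 1/2}(a / p) for the model orders in ks."""
    return beta_dist.ppf(a / p, (n - np.asarray(ks)) / 2.0, 0.5)

def rrt_estimate(RR, n, p, alpha):
    """k_hat = max{k : RR(k) <= Gamma_RRT^alpha(k)}; if that set is empty,
    retry with the smallest a > alpha on a coarse grid (up to a = p)."""
    ks = np.arange(1, p + 1)
    for a in np.concatenate(([alpha], np.linspace(alpha, p, 100)[1:])):
        below = ks[RR <= gamma_rrt(a, n, p, ks)]
        if below.size > 0:
            return int(below.max())
    return 0  # unreachable: a = p gives Gamma(1) = 1 >= RR(1)

# Deterministic check: craft RR(k) lying below the threshold only at k = k0.
n, p, k0, alpha = 30, 20, 5, 0.1
ks = np.arange(1, p + 1)
g = gamma_rrt(alpha, n, p, ks)
RR = (1.0 + g) / 2.0           # strictly between Gamma(k) and 1 ...
RR[k0 - 1] = g[k0 - 1] / 2.0   # ... except at k = k0
```

With the crafted ratios above, `rrt_estimate` returns $k_0$; feeding it ratios identically equal to one exercises the $\alpha_{new}$ fallback instead.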
Define the probabilities of overestimation and underestimation of $\hat{k}_{RRT}$ as $\mathbb{P}_{\mathcal{O}}=\mathbb{P}(\{\hat{k}_{RRT}>k_0\})$ for $k_0<p$ and $\mathbb{P}_{\mathcal{U}}=\mathbb{P}(\{\hat{k}_{RRT}<k_0\})$ for $k_0>1$ respectively. \begin{thm} \label{thm:highSNR}Overestimation and underestimation probabilities of RRT satisfy $\underset{\sigma^2 \rightarrow 0}{\lim }\mathbb{P}_{\mathcal{O}}\leq \alpha$ and $\underset{\sigma^2 \rightarrow 0}{\lim }\mathbb{P}_{\mathcal{U}}=0$ respectively. Consequently, $\underset{\sigma^2 \rightarrow 0}{\lim }PCS\geq 1-\alpha$. \end{thm} \begin{proof} Please see Appendix C. \end{proof} \begin{remark} Theorem \ref{thm:highSNR} gives a straightforward operational interpretation for the tuning parameter $\alpha$ as the high SNR upper bound on the probability of overestimation $\underset{\sigma^2 \rightarrow 0}{\lim }\mathbb{P}_{\mathcal{O}}\leq \alpha$ and the probability of error $\underset{\sigma^2 \rightarrow 0}{\lim }(1-PCS)\leq \alpha$. Such a straightforward semantic interpretation is not available for the tuning parameters in AIC, BIC etc. At all SNR where the set $\{k:RR(k)<\Gamma_{RRT}^{\alpha}(k)\}\neq \phi$, the overestimation probability is given by $\mathbb{P}(\exists k>k_0: RR(k)<\Gamma_{RRT}^{\alpha}(k))$, which by Theorem \ref{thm:beta_mos} is less than $\alpha$. Further, the probability that the set $\{k:RR(k)<\Gamma_{RRT}^{\alpha}(k)\}= \phi$ is very low at all practical SNR regimes. Hence, the bound $\mathbb{P}_{\mathcal{O}}\leq \alpha$ holds true even when the SNR is very low. However, the bound $1-PCS\leq \alpha$ holds true only when the SNR is very high. These observations will be numerically validated in section \rom{4}. \end{remark} \begin{remark} While designing the penalty function $h(k,\sigma^2_k)$ in ITC (\ref{itc}) or the parameter $\alpha$ in RRT, the user has control only over the high SNR behaviour of $\mathbb{P}_{\mathcal{O}}$.
When $h(k,\sigma^2_k)$ is of the form $vk$ for some fixed parameter $v>0$ (like AIC, BIC etc.) and the user requires the high SNR $\mathbb{P}_{\mathcal{O}}$ to be lower than a predefined value $\mathbb{P}^{des}_{\mathcal{O}}$, \cite{designITC} proposed to set $v=v^{des}$, where $v^{des}$ is the minimum value of $v$ that delivers $\underset{\sigma^2 \rightarrow 0}{\lim}\mathbb{P}_{\mathcal{O}}\leq \mathbb{P}^{des}_{\mathcal{O}}$ assuming that $k_0=0$. The case of $k_0=0$ is the worst case scenario in terms of overestimation. To operate RRT satisfying $\underset{\sigma^2 \rightarrow 0}{\lim}\mathbb{P}_{\mathcal{O}}\leq \mathbb{P}^{des}_{\mathcal{O}}$, one can set $\alpha=\mathbb{P}^{des}_{\mathcal{O}}$. Numerical simulations indicate that for the same value of $\mathbb{P}^{des}_{\mathcal{O}}$, RRT very often delivers a PCS higher than that of the design criteria in \cite{designITC} in the low to moderate SNR regime. \end{remark} \subsection{High SNR inconsistency of RRT} From the $RR(k)\sim \mathbb{B}\left(\dfrac{n-k}{2},\dfrac{1}{2}\right)$ distribution in (\ref{beta_prelim}) and $\Gamma_{RRT}^{\alpha}(k)=F_{\frac{n-k}{2},\frac{1}{2}}^{-1}(\frac{\alpha}{p})$, it is true that $\mathbb{P}(RR(k)<\Gamma_{RRT}^{\alpha}(k))=F_{\frac{n-k}{2},\frac{1}{2}}(F_{\frac{n-k}{2},\frac{1}{2}}^{-1}(\frac{\alpha}{p}))=\alpha/p$ for $k>k_0$. This implies that \begin{equation} \mathbb{P}_{\mathcal{O}}\geq \mathbb{P}(\exists k>k_0: RR(k)<\Gamma_{RRT}^{\alpha}(k))\geq \alpha/p>0,\forall \sigma^2>0. \end{equation} Consequently, RRT with $\sigma^2$-independent values of $\alpha$ is not high SNR consistent. However, even in a small scale problem with $p=10$, the lower bound on $\mathbb{P}_{\mathcal{O}}$ gives $0.01$ for $\alpha=0.1$ and $0.001$ for $\alpha=0.01$, whereas, the upper bound gives $0.1$ and $0.01$ respectively. Hence, for values of $\alpha$ like $\alpha=0.1$ or $\alpha=0.01$, the difference in PCS between a high SNR consistent MOS and RRT at high SNR would be negligible.
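The floor $\alpha/p\leq \mathbb{P}(\exists k>k_0: RR(k)<\Gamma_{RRT}^{\alpha}(k))\leq \alpha$ can be illustrated by sampling $RR(k)$, $k>k_0$, directly from its exact Beta law (a sketch only; treating the ratios as independent across $k$ is an extra assumption made here purely for the simulation):

```python
import numpy as np
from scipy.stats import beta as beta_dist

rng = np.random.default_rng(1)
n, p, k0, alpha = 20, 10, 5, 0.1
trials = 5000

ks = np.arange(k0 + 1, p + 1)
Gamma = beta_dist.ppf(alpha / p, (n - ks) / 2.0, 0.5)

# Draw RR(k) ~ Beta((n-k)/2, 1/2) for k > k0; each column is one k.
RR = beta_dist.rvs((n - ks) / 2.0, 0.5, size=(trials, ks.size),
                   random_state=rng)

# Fraction of trials where at least one RR(k), k > k0, falls below the
# threshold: the high SNR overestimation event. Per-k probability is alpha/p.
p_over = np.mean((RR < Gamma).any(axis=1))
```

With these numbers the estimate lands between the lower bound $\alpha/p=0.01$ and the upper bound $\alpha=0.1$, matching the discussion above.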
Please note that MOS criteria like AIC, BIC, the design criteria in \cite{designITC} etc. are also inconsistent at high SNR. Further, numerical simulations indicate that RRT outperforms high SNR consistent MOS criteria such as \cite{tsp, stoica2013model} etc. in the low and moderate SNR regimes. Consequently, the negligible performance loss at high SNR due to inconsistency is compensated by the good overall performance of RRT. Also please note that the high SNR performance of RRT with $\alpha=0.1$ or $\alpha=0.01$ is better than that of other high SNR inconsistent criteria like AIC, BIC etc. when the sample size is small. \subsection{Large sample behaviour of $\Gamma_{RRT}^{\alpha}(k_0)$} \begin{figure*}[htb] \begin{multicols}{3} \includegraphics[width=1\linewidth]{n_increasing_p10_ko5.eps} \caption*{a). $p$, $k_0$ fixed. $n$ increasing.} \includegraphics[width=1\linewidth]{n_increasing_p9n_ko5.eps} \caption*{b). $k_0$ fixed. $p=0.9n$, $n$ increasing.} \includegraphics[width=1\linewidth]{n_increasing_p9n_ko8n.eps} \caption*{c). $k_0=0.8n$, $p=0.9n$, $n$ increasing. } \end{multicols} \squeezeup \caption{Asymptotic behaviour of $\Gamma_{RRT}^{\alpha}(k_0)$. } \label{fig:asymptotic} \squeezeup \end{figure*} In the following two subsections, we evaluate the large sample behaviour of RRT. As a prelude, we first analyse the behaviour of the function $\Gamma_{RRT}^{\alpha}(k_0)$ as $n \rightarrow \infty$. \begin{thm}\label{thm:asymptotic_rrt} Let $n$ increase to $\infty$ such that $\underset{n\rightarrow \infty}{\lim}\,p/n\in [0,1)$ and $k_{lim}=\underset{n\rightarrow \infty}{\lim}k_0/n\in[0,1)$. The parameter $0\leq \alpha\leq 1$ is either a fixed number or a function of $n$ with limits $\underset{n \rightarrow \infty}{\lim}\alpha=0$ and $-\infty\leq \alpha_{lim}=\underset{n\rightarrow \infty}{\lim}\log(\alpha)/n\leq 0$. Then, $\Gamma_{RRT}^{\alpha}(k_0)=F_{\frac{n-k_0}{2},\frac{1}{2}}^{-1}(\frac{\alpha}{p})$ satisfies the following asymptotic limits.\\ A1).
$\underset{n \rightarrow \infty}{\lim}\Gamma_{RRT}^{\alpha}(k_0)=1$ if $\alpha_{lim}=0$.\\ A2). $0<\underset{n \rightarrow \infty}{\lim}\Gamma_{RRT}^{\alpha}(k_0)=e^{\frac{2\alpha_{lim}}{1-k_{lim}}}<1$ if $-\infty<\alpha_{lim}<0$.\\ A3). $\underset{n \rightarrow \infty}{\lim}\Gamma_{RRT}^{\alpha}(k_0)=0$ if $\alpha_{lim}=-\infty$. \end{thm} \begin{proof}Please see Appendix D. \end{proof} Theorem \ref{thm:asymptotic_rrt} implies that when $\alpha$ is reduced to zero with increasing $n$ at a rate slower than $a^{-n}$ for every $a>1$ (i.e., $\alpha_{lim}=0$), then it is possible to achieve a value of $\Gamma_{RRT}^{\alpha}(k_0)$ arbitrarily close to one at large $n$. Choices of $\alpha$ that satisfy $\alpha_{lim}=0$ include $\alpha=\text{constant}$, $\alpha=1/\log(n)$, $\alpha=1/n^c$ for some $c>0$ etc. However, if one decreases $\alpha$ to zero at a rate $a^{-n}$ for some $a>1$ (i.e., $-\infty<\alpha_{lim}<0$), then it is impossible to achieve a value of $\Gamma_{RRT}^{\alpha}(k_0)$ close to one. However, $\Gamma_{RRT}^{\alpha}(k_0)$ will still be bounded away from zero. When $\alpha$ is reduced to zero at a rate faster than $a^{-n}$ for every $a>1$ (say $a^{-n^2}$), then $\Gamma_{RRT}^{\alpha}(k_0)$ converges to zero with increasing $n$. This behaviour of $\Gamma_{RRT}^{\alpha}({k_0})$ has a profound impact on the large sample behaviour of RRT. Theorem \ref{thm:asymptotic_rrt} is numerically validated in Fig.\ref{fig:asymptotic}, where we plot $\Gamma_{RRT}^{\alpha}(k_0)$ for three asymptotic regimes of practical interest, \textit{viz.}, a). $(p,k_0)$ fixed and $n\rightarrow \infty$, b). $k_0$ fixed while $(p,n)\rightarrow \infty$ and c). $(n,p,k_0)\rightarrow \infty$. In all the three asymptotic regimes, adaptations of $\alpha$ satisfying $\alpha_{lim}= 0$ achieve $\underset{n \rightarrow \infty}{\lim}\Gamma_{RRT}^{\alpha}(k_0)=1$. These numerical results are in accordance with Theorem \ref{thm:asymptotic_rrt}.
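Regimes A1 and A2 are also easy to check directly (a sketch with our own parameter choices; the A2 computation pushes the `scipy` quantile inversion into an extreme tail, so its agreement with the limit $e^{2\alpha_{lim}/(1-k_{lim})}$ is only approximate at finite $n$):

```python
import numpy as np
from scipy.stats import beta as beta_dist

def gamma_k0(n, p, k0, alpha):
    """Gamma_RRT^alpha(k0) = F^{-1}_{(n-k0)/2, 1/2}(alpha / p)."""
    return beta_dist.ppf(alpha / p, (n - k0) / 2.0, 0.5)

p, k0 = 10, 5

# A1: alpha = 1/log(n) gives alpha_lim = 0, so Gamma(k0) should creep up to 1.
g_a1 = [gamma_k0(n, p, k0, 1.0 / np.log(n)) for n in (100, 1000, 10000)]

# A2: alpha = exp(-n) gives alpha_lim = -1 and k_lim = 0, so the theorem
# predicts a limit of exp(-2), roughly 0.135: bounded away from 0 and 1.
g_a2 = gamma_k0(200, p, k0, np.exp(-200))
```

Extending the `g_a1` sweep over a finer grid of $n$ reproduces the trend shown in Fig.\ref{fig:asymptotic}.a).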
\subsection{Large sample consistency of RRT} In this section, we establish the conditions required for the large sample consistency of RRT, i.e., $\underset{n \rightarrow \infty}{\lim}PCS=1$. The main result in this section is Theorem \ref{thm:large_sample} presented below. \begin{thm}\label{thm:large_sample} Consider a situation where $n \rightarrow \infty$ such that \\ a). $0\leq k_{lim}=\underset{n \rightarrow \infty}{\lim}k_0/n<1$. \\ b). $\exists M_1>0$ and $n_0<\infty$ satisfying $\|({\bf I}_{n}-{\bf P}_{k_0-1}){\bf x}_{k_0}\|_2^2|\boldsymbol{\beta}_{k_0}|^2/\sigma^2\geq M_1n>0$ for all $n>n_0$. Then \\ R1). RRT is large sample consistent provided that the parameter $\alpha$ satisfies $\alpha_{lim}=\underset{n \rightarrow \infty}{\lim}\log(\alpha)/n=0$ and $\underset{n \rightarrow \infty}{\lim}\alpha=0$.\\ R2). For a fixed $0<\alpha\leq 1$, $\underset{n \rightarrow \infty}{\lim}\mathbb{P}_{\mathcal{U}}=0$ and $\underset{n \rightarrow \infty}{\lim}\mathbb{P}_{\mathcal{O}}\leq \alpha$. \end{thm} \begin{proof} Please see Appendix E. \end{proof} Theorem \ref{thm:large_sample} implies that with proper adaptations of the parameter $\alpha$, it is possible to achieve a PCS arbitrarily close to one at large sample sizes. We first relate the requirements on $\alpha$ to the probabilities of overestimation and underestimation. \begin{remark} To avoid underestimation at large $n$, i.e., $\underset{n \rightarrow \infty}{\lim}\mathbb{P}_{\mathcal{U}}=0$, it is sufficient that $\alpha_{lim}=0$. By Theorem \ref{thm:asymptotic_rrt}, a fixed value of $\alpha=0.1$ or $\alpha=0.01$ is sufficient for this. The adaptation $\alpha \rightarrow 0$ as $n \rightarrow \infty$ is necessary only to prevent overestimation.
Further, in addition to the worst case high SNR overestimation probability, the bound $\underset{n \rightarrow \infty}{\lim}\mathbb{P}_{\mathcal{O}}\leq \alpha$ implies that the parameter $\alpha$ in RRT, when set independent of $n$, also has the semantic interpretation of the worst case large sample overestimation probability. \end{remark} \begin{remark} The only user specified parameter in RRT is $\alpha$. Theorem \ref{thm:large_sample} implies that for all choices of $\alpha$ that satisfy $\alpha_{lim}=0$ and $\underset{n \rightarrow \infty}{\lim}\alpha=0$, RRT will have a similar value of PCS at large values of $n$. Note that the conditions $\alpha_{lim}=0$ and $\underset{n \rightarrow \infty}{\lim}\alpha=0$ are satisfied by a wide range of adaptations like $\alpha=1/\log(n)$, $\alpha=1/n$ etc. This points to the insensitivity of RRT to the choice of $\alpha$ as $ n\rightarrow \infty$, i.e., RRT is asymptotically tuning free. \end{remark} We next discuss as corollaries the specific conditions under which the SNR condition in Theorem \ref{thm:large_sample}, i.e., $\exists M_1>0$ and $n_0<\infty$ such that $\|({\bf I}_{n}-{\bf P}_{k_0-1}){\bf x}_{k_0}\|_2^2|\boldsymbol{\beta}_{k_0}|^2/\sigma^2\geq M_1n>0$ for all $n>n_0$, holds true. \begin{corollary} Let ${\bf X} \in \mathbb{R}^{n \times p}$ with $p<n$ be a matrix with orthonormal columns and $\boldsymbol{\beta}_k=b$ for all $k\leq k_0$ with $-\infty<b<\infty$. Then the SNR is given by SNR=$\|{\bf X}\boldsymbol{\beta}\|_2^2/(n\sigma^2)=\|\boldsymbol{\beta}\|_2^2/(n\sigma^2)= k_0b^2/(n\sigma^2)$. Further, orthonormality of the columns of ${\bf X}$ implies that $({\bf I}_n-{\bf P}_{k_0-1}){\bf x}_{k_0}={\bf x}_{k_0}$ and hence $\|({\bf I}_{n}-{\bf P}_{k_0-1}){\bf x}_{k_0}\|_2^2|\boldsymbol{\beta}_{k_0}|^2/\sigma^2=b^2/\sigma^2=n SNR/k_0$. This setting has $M_1=SNR/k_0$ and $n_0=1$. Hence, when $n$ is increased to infinity keeping $k_0$ and SNR constant, RRT is large sample consistent.
\end{corollary} \begin{corollary} Following Corollary 1, consider a situation where $k_0$ is increasing with $n$ and the SNR increases at least linearly with $k_0$ asymptotically, i.e., $SNR/k_0>1$ for all $n\geq n_0$. Then $\|({\bf I}_{n}-{\bf P}_{k_0-1}){\bf x}_{k_0}\|_2^2|\boldsymbol{\beta}_{k_0}|^2/\sigma^2=n SNR/k_0>n$ for $n>n_0$. Here $M_1=1$. Hence, if the SNR increases at least linearly with $k_0$, then RRT is large sample consistent with increasing $k_0$ as long as $k_{lim}=\underset{n \rightarrow \infty}{\lim}k_0/n<1$. When $k_0$ increases and the SNR is kept fixed, then $\|({\bf I}_{n}-{\bf P}_{k_0-1}){\bf x}_{k_0}\|_2^2|\boldsymbol{\beta}_{k_0}|^2/\sigma^2=nSNR/k_0$ increases at most sub-linearly with $n$, denying the existence of $M_1>0$ and $n_0<\infty$. In that situation, RRT may not be large sample consistent. \end{corollary} \begin{corollary} Next consider the situation ${\bf X} \in \mathbb{R}^{n\times p}$ with $p<n$ and ${\bf X}_{i,j}\overset{i.i.d}{\sim}\mathcal{N}(0,1/n)$. By Lemma 5 of \cite{cai2011orthogonal}, \begin{equation} \|({\bf I}_{n}-{\bf P}_{k_0-1}){\bf x}_{k_0}\|_2^2\geq \lambda_{min}({\bf X}_{[k_0]}^T{\bf X}_{[k_0]}), \end{equation} where $\lambda_{min}({\bf X}_{[k_0]}^T{\bf X}_{[k_0]})$ is the minimum eigenvalue of the matrix ${\bf X}_{[k_0]}^T{\bf X}_{[k_0]}$. Under the limit $0\leq k_{lim}<1$, it is true that \cite{decoding_candes} \begin{equation} \lambda_{min}({\bf X}_{[k_0]}^T{\bf X}_{[k_0]})\overset{P}{\rightarrow }(1-\sqrt{k_{lim}})^2\ \text{as} \ n \rightarrow \infty. \end{equation} Further, the SNR is fixed at $SNR=\dfrac{\mathbb{E}(\|{\bf X}\boldsymbol{\beta}\|_2^2)}{\mathbb{E}(\|{\bf w}\|_2^2)}=\dfrac{\|\boldsymbol{\beta}\|_2^2}{n\sigma^2}=\dfrac{k_0b^2}{n\sigma^2}$. Consequently, at large sample sizes, $\|({\bf I}_{n}-{\bf P}_{k_0-1}){\bf x}_{k_0}\|_2^2|\boldsymbol{\beta}_{k_0}|^2/\sigma^2\gtrsim(1-\sqrt{k_{lim}})^2b^2/\sigma^2=n (1-\sqrt{k_{lim}})^2 SNR/k_0$.
It then follows from Corollaries 1-2 that RRT is consistent when $n$ increases to $\infty$ such that \\ a). $k_0$ and SNR are kept fixed. \\ b). $k_0$ increases to $\infty$ and SNR increases at least linearly with $k_0$. \end{corollary} \subsection{Comparison between RRT and ITC hyperparameters} { In this subsection, we briefly compare the role played by the hyperparameter $\alpha$ in RRT and the hyperparameter $v$ in the MOS criteria of the form $\hat{k_0}=\underset{k=1,2,\dotsc,p}{\arg\min}\|({\bf I}_n-{\bf P}_k){\bf y}\|_2^2+vk$ (like AIC, BIC). It is well known that with increasing values of $v$, $\mathbb{P}_{\mathcal{O}}$ decreases, whereas, $\mathbb{P}_{\mathcal{U}}$ increases. A similar behaviour is visible in RRT with decreasing values of $\alpha$, i.e., a smaller value of $\alpha$ is qualitatively equivalent to a larger value of the penalty parameter $v$. This observation explains the similarity in the conditions required for large sample consistency of ITC and RRT. Note that to avoid overestimation as $n \rightarrow \infty$, one needs $v\rightarrow \infty$ at the rate $v/\log\log(n)\rightarrow \infty$, whereas, to avoid underestimation one would require $v/n \rightarrow 0$, i.e., $v$ should not grow to $\infty$ at a very fast rate \cite{nishii1988maximum}. Once we take into account the fact that a smaller $\alpha$ is equivalent to a higher $v$, the conditions that $\alpha \rightarrow 0$ to avoid overestimation and $\log(\alpha)/n \rightarrow 0$ to avoid underestimation are similar to the rules imposed on $v$.
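The qualitative equivalence between a larger $v$ and a smaller $\alpha$ can be seen in a toy sketch of the fixed-penalty rule discussed in this subsection (all residual energies below are crafted, purely illustrative numbers):

```python
import numpy as np

def itc_estimate(res_sq, v):
    """Generic fixed-penalty ITC from this subsection:
    k_hat = argmin_k ||(I - P_k) y||_2^2 + v * k, over k = 1, ..., p."""
    ks = np.arange(1, len(res_sq) + 1)
    return int(ks[np.argmin(np.asarray(res_sq) + v * ks)])

# Crafted residual energies with a sharp drop at k = 3 and a slow decay after.
res_sq = [10.0, 5.0, 1.0, 0.9, 0.8]
estimates = [itc_estimate(res_sq, v) for v in (0.01, 0.05, 0.12, 1.0, 5.0)]
```

Here `estimates` is non-increasing in $v$: a heavier penalty selects a smaller order, exactly as a smaller $\alpha$ shrinks the set $\{k:RR(k)\leq \Gamma_{RRT}^{\alpha}(k)\}$ in RRT.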
Similarly, for fixed values of $v$ and $\alpha$, both ITC and RRT overestimate the model order at high SNR, i.e., $\underset{\sigma^2\rightarrow 0}{\lim}\mathbb{P}_{\mathcal{O}}>0$, whereas, the underestimation probability $\mathbb{P}_{\mathcal{U}}$ satisfies $\underset{\sigma^2\rightarrow 0}{\lim}\mathbb{P}_{\mathcal{U}}=0$ \cite{tsp}.} \section{Numerical Simulations} In this section, we numerically validate the high SNR and large sample consistency results derived in Section \rom{3}. We also compare the performance of RRT with popular MOS techniques. We compare RRT with classical ITC based MOS like AIC ($h(k,\sigma^2_k)=2k$), BIC ($h(k,\sigma^2_k)=k\log(n)$) and the recently proposed PAL in \cite{stoica2013model}. We also consider a recently proposed high SNR consistent (HSC) MOS with penalty $h(k,\sigma^2_k)=\max(k\log(n),2k\log(\frac{1}{\sigma^2_k}))$ \cite{tsp}. By Lemma 1, this technique is HSC as long as $n>2k_0$ and this condition is true in all our experiments. A technique to design penalty functions based on the high SNR behaviour of $\mathbb{P}_{\mathcal{O}}$ is proposed in \cite{designITC}. This technique is also implemented (called ``Design'' in figures) with desired error levels $0.1$ and $0.01$ shown in brackets. Simulation results for other popular algorithms like EEF, NMDL, g-MDL etc. are not included because of space constraints. However, we have observed that the relative performance comparisons between RRT and algorithms like PAL, Design, BIC etc. also hold true for NMDL, EEF etc. The entries of the matrix ${\bf X}$ are sampled independently from $\mathcal{N}(0,1)$ and the columns are later normalised to have unit $l_2$ norm.
We consider two models for $\boldsymbol{\beta}$: 1) model 1 has $\boldsymbol{\beta}_k=\pm 1$ for all $k\leq k_0$ (i.e., the signal component of $\boldsymbol{\beta}$ given by $\boldsymbol{\beta}_{[k_0]}$ is not sparse) and 2) model 2 has $\boldsymbol{\beta}_k=\pm 1$ only for a few entries between $k=1$ and $k_0$ (i.e., $\boldsymbol{\beta}_{[k_0]}$ is sparse). The non-zero locations will be reported alongside the figures. Model 2 is typical of autoregressive (AR) model order selection where the maximum lag (i.e., the true order of the AR process, $k_0$) can be very high, however, the generator polynomial has only a few non-zero coefficients. Likewise, in sparse channel estimation \cite{raghavendra2005improving,tomasoni2013efficient}, it is likely that the length of the channel impulse response (CIR), i.e., $k_0$, is high. However, the CIR contains only a few non-zero coefficients. Model 2 represents this scenario too. All the results presented in this section are obtained after $10^4$ iterations. \subsection{Validating Theorem \ref{thm:highSNR} and Theorem \ref{thm:large_sample}} \begin{figure*} \begin{multicols}{2} \includegraphics[width=\linewidth]{Theorem2_rrt.eps} \caption*{a). Verification of Theorem \ref{thm:highSNR}. $n=20$, $p=10$ and $k_0=3$.} \includegraphics[width=\linewidth]{Theorem4_rrt.eps} \caption*{b). Verification of Theorem \ref{thm:large_sample}. $p=5$, $k_0=3$ and SNR=-10dB (left). $p=0.3n$, $k_0=0.1n$ and SNR=0.1$k_0$ (right).} \end{multicols} \squeezeup \caption{Verification of Theorem \ref{thm:highSNR} and Theorem \ref{thm:large_sample}. $\boldsymbol{\beta}_k=\pm 1$ for all $k\leq k_0$.} \label{fig:verification} \squeezeup \end{figure*} In this section, we numerically validate the high SNR and large sample results presented in Theorem \ref{thm:highSNR} and Theorem \ref{thm:large_sample} of Section \rom{3}. Fig.\ref{fig:verification}.a) presents the variations in $\mathbb{P}_{\mathcal{O}}$ and $\mathbb{P}_{\mathcal{U}}$ with increasing SNR.
From the L.H.S of Fig.\ref{fig:verification}.a), one can see that $\mathbb{P}_{\mathcal{O}}$ floors at $\approx 10^{-1.5}$ from 0dB SNR onwards when $\alpha=0.1$ and at $\approx 10^{-2.5}$ from 3dB SNR onwards when $\alpha=0.01$. These evaluated values of $\mathbb{P}_{\mathcal{O}}$ satisfy the bound $\underset{\sigma^2 \rightarrow 0}{\lim}\mathbb{P}_{\mathcal{O}}\leq \alpha$ predicted by Theorem \ref{thm:highSNR}. In fact, the bound $\mathbb{P}_{\mathcal{O}}\leq \alpha$ holds true even at a low SNR of 3dB. Likewise, as one can see from the R.H.S of Fig.\ref{fig:verification}.a), $\mathbb{P}_{\mathcal{U}}$ decreases with increasing SNR. This is also in accordance with the limit $\underset{\sigma^2 \rightarrow 0}{\lim}\mathbb{P}_{\mathcal{U}}=0$ predicted by Theorem \ref{thm:highSNR}. Note that restricting the overestimation probability to smaller values in MOS problems always leads to an increase in the finite SNR underestimation probability for any MOS criterion. This explains the increase in $\mathbb{P}_{\mathcal{U}}$ for $\alpha=0.01$ at finite SNR compared to $\alpha=0.1$. Fig.\ref{fig:verification}.b) presents the variations in PCS with increasing sample size $n$. Among the four choices of $\alpha$ considered, only $\alpha=1/\sqrt{n}$ and $\alpha=1/n$ can lead to large sample consistency according to Theorem \ref{thm:large_sample}. We consider two regimes of interest. Regime 1, depicted in the L.H.S of Fig.\ref{fig:verification}.b), deals with the situation where $n$ increases to $\infty$ keeping $p$, $k_0$ and SNR fixed. As one can see from Fig.\ref{fig:verification}.b), the PCS for all values of $\alpha$ increases to one with increasing $n$. However, the PCS for $\alpha=0.01$ and $\alpha=0.1$ floors near one satisfying the bound $\underset{n \rightarrow \infty}{\lim}PCS\geq 1-\alpha$ in R2) of Theorem \ref{thm:large_sample}, whereas, the PCS for $\alpha=1/\sqrt{n}$ and $\alpha=1/n$ converges to one satisfying R1) of Theorem \ref{thm:large_sample}.
Regime 2 deals with a situation where $n$, $p$ and $k_0$ all increase to $\infty$ with SNR increasing linearly with $k_0$. As one can see from Fig.\ref{fig:verification}.b), the PCS for $\alpha=1/n$ and $\alpha=1/\sqrt{n}$ converges to one, whereas, the PCS for $\alpha=0.01$ and $\alpha=0.1$ floors near one satisfying the bound $\underset{n \rightarrow \infty}{\lim}PCS\geq 1-\alpha$. These results validate Theorem \ref{thm:large_sample} and its corollaries. \begin{figure*} \begin{multicols}{2} \includegraphics[width=1\linewidth]{small_sample_n10_p5_nact2.eps} \caption*{a). $n=10$, $p=5$ and $k_0=2$.} \includegraphics[width=1\linewidth]{small_sample_n10_p5_nact4.eps} \caption*{b). $n=10$, $p=5$ and $k_0=4$.} \end{multicols} \begin{multicols}{2} \includegraphics[width=1\linewidth]{small_sample_n10_p9_nact2.eps} \caption*{c). $n=10$, $p=9$ and $k_0=2$.} \includegraphics[width=1\linewidth]{small_sample_n10_p9_nact4.eps} \caption*{d). $n=10$, $p=9$ and $k_0=4$.} \end{multicols} \squeezeup \caption{Small sample performance: $\boldsymbol{\beta}_k=\pm 1$ for all $k\leq k_0$.} \label{fig:small_sample} \squeezeup \end{figure*} \subsection{Experiment 1: PCS when both $n$ and $p$ are small } We first compare the PCS performance of MOS techniques when the sample size $n$ is very small in absolute terms. Fig.\ref{fig:small_sample}.a) and Fig.\ref{fig:small_sample}.b) illustrate a situation where $p$ is much smaller than $n$, i.e., $p=n/2$. When $k_0=2$, one can see that RRT with $\alpha=0.1$ and $\alpha=0.01$ outperform other algorithms at very low SNR. In the medium SNR regime, HSC, PAL, Design($0.1$) and RRT with both $\alpha=0.1$ and $\alpha=0.01$ have similar performances. At high SNR, the best performance is delivered by RRT with $\alpha=0.01$, Design($0.01$) and HSC. The PCS of PAL appears to floor below one. The reason for this is the slow growth of the penalty function in PAL with increasing SNR \cite{tsp}, which causes overestimation.
The PCS of AIC and BIC is much inferior compared to other MOS techniques. When $k_0$ is increased to $k_0=4$, the performance of HSC deteriorates significantly. The high SNR performance of PAL, AIC, MDL, RRT etc. improves when $k_0=4$. {This can be reasoned as follows. At high SNR, the error in MOS criteria like AIC, BIC, PAL etc. is overwhelmingly due to overestimation. Please note that when $k_0$ is increased keeping $p$ constant, the probability of overestimation decreases. This explains the improvement in PCS with increasing $k_0$ for MOS criteria like PAL, AIC, BIC etc. which have a tendency to overestimate at high SNR. HSC incorporates an SNR adaptation to the BIC penalty to decrease its $\mathbb{P}_{\mathcal{O}}$. Note that any attempt to decrease the overestimation probability will result in an increase in the underestimation probability in the low to moderate SNR regime. Since, with increasing $k_0$, the importance of overestimation decreases and that of underestimation increases, the SNR adaptation intended for avoiding overestimation will result in more underestimation in the low to moderate SNR regime. This explains the deteriorating performance of HSC with increasing $k_0$.} \begin{figure*}[htb] \begin{multicols}{2} \includegraphics[width=1\linewidth]{model2_n100_p30_nact10_notsparse.eps} \caption*{a). $n=100$, $p=30$ and $k_0=10$.} \includegraphics[width=1\linewidth]{model2_n100_p30_nact25_notsparse.eps} \caption*{b). $n=100$, $p=30$ and $k_0=25$.} \end{multicols} \begin{multicols}{2} \includegraphics[width=1\linewidth]{model2_n100_p60_nact10_notsparse.eps} \caption*{c). $n=100$, $p=60$ and $k_0=10$.} \includegraphics[width=1\linewidth]{model2_n100_p60_nact25_notsparse.eps} \caption*{d). $n=100$, $p=60$ and $k_0=25$.} \end{multicols} \squeezeup \caption{$\boldsymbol{\beta}_k=\pm 1$ for $k=1,2,\dotsc,k_0$.
$\boldsymbol{\beta}_{[k_0]}$ is long and dense.} \label{fig:nonsparse} \squeezeup \end{figure*} Next we compare the performance of MOS techniques when $p$ and $n$ are nearly the same. As one can see from Fig.\ref{fig:small_sample}.c) and Fig.\ref{fig:small_sample}.d), the performances of PAL, AIC and BIC are much worse in this case than with $p=5$. Again, this is because $\mathbb{P}_{\mathcal{O}}$ increases when $p$ is increased while keeping $k_0$ fixed. When $k_0=2$, HSC achieves the best overall performance. However, when $k_0=4$, the performance of HSC is remarkably poor in the low to moderately high SNR regime. This general trend of HSC performing badly with increasing $k_0$ is observed in many other simulations too. When $k_0=4$ and $p=9$, RRT with both $\alpha=0.1$ and $\alpha=0.01$ outperform all other algorithms by a significant margin. From Fig.\ref{fig:small_sample}, it is clear that no single algorithm outperformed all the others in all four scenarios. However, RRT delivered the best performance in at least one scenario and near-best performance in the other three scenarios. Further, RRT with $\alpha=0.1$ outperformed Design($0.1$) and RRT with $\alpha=0.01$ outperformed Design($0.01$) in all four experiments. This is significant considering the fact that both these schemes guarantee the same high SNR error probability. \subsection{Experiment 2: PCS when $n$ is large and SNR is varying } \begin{figure*}[htb] \begin{multicols}{2} \includegraphics[width=1\linewidth]{model2_n100_p30_nact10.eps} \caption*{a). $n=100$, $p=30$ and $k_0=10$.} \includegraphics[width=1\linewidth]{model2_n100_p30_nact25.eps} \caption*{b). $n=100$, $p=30$ and $k_0=25$.} \end{multicols} \begin{multicols}{2} \includegraphics[width=1\linewidth]{model2_n100_p60_nact10.eps} \caption*{c). $n=100$, $p=60$ and $k_0=10$.} \includegraphics[width=1\linewidth]{model2_n100_p60_nact25.eps} \caption*{d).
$n=100$, $p=60$ and $k_0=25$.} \end{multicols} \squeezeup \caption{$\boldsymbol{\beta}_k=\pm 1$ for $k=1,6,\dotsc$ and $k=k_0$. $\boldsymbol{\beta}_{[k_0]}$ is long but sparse.} \label{fig:sparse} \squeezeup \end{figure*} Next we consider the performance of the algorithms with increasing SNR when the problem dimensions $(n,p,k_0)$ are moderately large. From the PCS figures for Model 1 given in Fig.\ref{fig:nonsparse}, it is clear that the performance of algorithms like AIC, BIC, PAL etc. has improved tremendously compared to the case when $n$ was set at $n=10$. From the four scenarios considered in Fig.\ref{fig:nonsparse}, it is difficult to pick a single winner. However, apart from Fig.\ref{fig:nonsparse}.c, in all the other situations RRT with $\alpha=0.1$ performed on par with most of the other algorithms for all values of SNR. Unlike the previous case, Design($0.1$) does outperform RRT with $\alpha=0.1$ quite often. Next we consider the performance of the algorithms when $\boldsymbol{\beta}$ is of Model 2, i.e., sparse. Unlike the case of Model 1, RRT with $\alpha=0.1$ is a clear winner throughout the low to high SNR regime in all four experiments considered in Fig.\ref{fig:sparse}. In fact, this trend of RRT performance improving with increasing sparsity of $\boldsymbol{\beta}_{[k_0]}$, with a corresponding deterioration in the performance of ITC based MOS techniques, was observed in a large number of experiments. \subsection{Experiment 3: PCS of algorithms with increasing $n$} \begin{figure*}[htb] \begin{multicols}{2} \includegraphics[width=1\linewidth]{largesample_model1_SNR_minus10.eps} \caption*{a). $p=10$, $k_0=5$ and SNR=$-10$dB.} \includegraphics[width=1\linewidth]{largesample_model1_SNR_0.eps} \caption*{b). $p=10$, $k_0=5$ and SNR=$0$dB.} \end{multicols} \begin{multicols}{2} \includegraphics[width=1\linewidth]{largesample_model2_SNR_minus10.eps} \caption*{c).
$p=10$, $k_0=5$ and SNR$=-10$dB.} \includegraphics[width=1\linewidth]{largesample_model2_SNR_0.eps} \caption*{d). $p=10$, $k_0=5$ and SNR$=0$dB.} \end{multicols} \squeezeup \caption{Large sample performance: $\boldsymbol{\beta}_k=\pm1,\forall k\leq k_0$ for a) and b). $\boldsymbol{\beta}_k=\pm1$ for $k=1$ and $k= k_0=5$ for c) and d). } \label{fig:largesample} \squeezeup \end{figure*} We depict in Fig.\ref{fig:largesample} the performance of MOS criteria when the sample size $n$ is increasing while keeping SNR, $k_0$ and $p$ fixed. When $\boldsymbol{\beta}$ is of Model 1, one can see from Fig.\ref{fig:largesample}.a) and b) that RRT performs as well as most of the other algorithms under consideration. However, when $\boldsymbol{\beta}$ is of Model 2, it is clear from Fig.\ref{fig:largesample}.c) and d) that RRT with $\alpha=0.1$ and $\alpha=0.01$ clearly outperform all the other algorithms. To summarize, RRT has definitive performance advantages over many existing MOS techniques when the sample size $n$ is very small. When the sample size $n$ is large and $\boldsymbol{\beta}_{[k_0]}$ is dense, RRT did not exhibit any significant performance advantages. Indeed, the observed performance of RRT in the low to moderately high SNR regime is inferior to that of the best performing MOS criteria like Design($0.1$). However, when the vector $\boldsymbol{\beta}_{[k_0]}$ is sparse, RRT clearly outperformed all the other MOS criteria under consideration. \subsection{Choice of $\alpha$ in RRT } The performance of RRT depends crucially on the choice of $\alpha$. From the 18 experiments presented in this section and many other experiments not shown in this article, we found that $\alpha=0.1$ delivered the best overall PCS performance in the low to moderately high SNR regime. Indeed, this choice is purely empirical. However, even with this choice, one can guarantee a value of $\mathbb{P}_{\mathcal{O}}$ less than $10\%$ throughout the operating SNR regimes.
When the SNR is very high, a situation not so common in practical applications, one can set $\alpha$ to smaller values like $\alpha=0.01$. Likewise, when the sample size $n$ is very large, one can set $\alpha=1/\sqrt{n}$, which was also found to deliver very good performance. Finding a completely data dependent choice of $\alpha$ in RRT is of tremendous operational importance and will be part of future research. \section{Conclusions and directions of future research} This article proposes a novel MOS criterion based on the behaviour of residual norm ratios. The proposed technique is philosophically different from the widely used ITC based MOS techniques. This article also provides high SNR and large sample performance guarantees for RRT. In particular, the large sample consistency of RRT is established. Numerical simulations also demonstrate the highly competitive performance of the proposed technique compared with many widely used MOS techniques. Extending the operational philosophy of RRT to nonlinear model order selection problems like source number enumeration and developing completely data dependent choices for the hyperparameter $\alpha$ are two possible avenues for future research. \section*{Appendix A: Proof of Lemma \ref{lemma:basic_distributions} \cite{yanai2011projection},\cite{tsp}} Since the model order is $k_0$, $\boldsymbol{\beta}_{k}=0$ for $k> k_0$. Consequently, the signal component in ${\bf y}$, i.e., ${\bf X}\boldsymbol{\beta}$, is equal to ${\bf X}_{[k_0]}\boldsymbol{\beta}_{[k_0]}$. Hence, ${\bf X}\boldsymbol{\beta}\in span({\bf X}_{[k_0]})$. This along with the full rank assumption on ${\bf X}$ implies that $({\bf I}_n-{\bf P}_k){\bf X}\boldsymbol{\beta}=({\bf I}_n-{\bf P}_k){\bf X}_{[k_0]/[k]}\boldsymbol{\beta}_{[k_0]/[k]}\neq {\bf 0}_n$ for $k<k_0$, whereas $({\bf I}_n-{\bf P}_k){\bf X}\boldsymbol{\beta}={\bf 0}_n$ for $k\geq k_0$. 
Consequently, $({\bf I}_n-{\bf P}_k){\bf y}=({\bf I}_n-{\bf P}_k){\bf X}_{[k_0]/[k]}\boldsymbol{\beta}_{[k_0]/[k]}+({\bf I}_n-{\bf P}_k){\bf w}$ for $k<k_0$ and $({\bf I}_n-{\bf P}_k){\bf y}=({\bf I}_n-{\bf P}_k){\bf w}$ for $k\geq k_0$. The distribution of the norm of a projected Gaussian vector is given in the following lemma. \begin{lemma}\label{lemma:chi2}\cite{yanai2011projection} Let ${\bf x} \sim \mathcal{N}({\bf u},\sigma^2{\bf I}_n)$ and ${\bf P}\in \mathbb{R}^{n \times n}$ be any projection matrix of rank $j$. Then, \\ a). ${\bf P}{\bf x}\sim \mathcal{N}({\bf P}{\bf u},\sigma^2{\bf P} )$.\\ b). $\|{\bf P}{\bf x}\|_2^2/\sigma^2\sim \chi^2_j(\frac{\|{\bf P}{\bf u}\|_2^2}{\sigma^2})$ if ${\bf Pu}\neq {\bf 0}_n$.\\ c). $\|{\bf P}{\bf x}\|_2^2/\sigma^2\sim \chi^2_j$ if ${\bf Pu}= {\bf 0}_n$. \end{lemma} Since ${\bf P}_k$ is a projection matrix of rank $k$, $({\bf I}_n-{\bf P}_k)$ is a projection matrix of rank $n-k$. Hence, by Lemma \ref{lemma:chi2}, \begin{equation} \|{\bf r}^k\|_2^2/\sigma^2\sim\chi^2_{n-k}\left(\dfrac{\|({\bf I}_n-{\bf P}_k){\bf X}_{[k_0]/[k]}\boldsymbol{\beta}_{[k_0]/[k]}\|_2^2}{\sigma^2}\right)\text{\ for\ } k<k_0 \end{equation} \begin{equation}\label{residual1} \text{and} \ \|{\bf r}^k\|_2^2/\sigma^2\sim\chi^2_{n-k}\text{\ for\ } k\geq k_0. \end{equation} Since ${\bf P}_k{\bf P}_{k-1}={\bf P}_{k-1}$, $({\bf I}_n-{\bf P}_k)({\bf P}_k-{\bf P}_{k-1})={\bf O}_n$. This implies that $\|{\bf r}^{k-1}\|_2^2=\|({\bf I}_n-{\bf P}_k+{\bf P}_k-{\bf P}_{k-1}){\bf y}\|_2^2=\|{\bf r}^{k}\|_2^2+\|({\bf P}_k-{\bf P}_{k-1}){\bf y}\|_2^2$. Note that $({\bf P}_k-{\bf P}_{k-1})$ is a projection matrix of rank one projecting onto the subspace $span({\bf X}_{[k]})\cap span({\bf X}_{[k-1]})^{\perp}$, i.e., $\{{\bf v} \in \mathbb{R}^n: {\bf v} \in span({\bf X}_{[k]}),\ {\bf v} \perp span({\bf X}_{[k-1]})\}$. 
This implies that $({\bf P}_k-{\bf P}_{k-1}){\bf X}\boldsymbol{\beta}=({\bf P}_k-{\bf P}_{k-1}){\bf X}_{[k_0]}\boldsymbol{\beta}_{[k_0]}\neq {\bf 0}_n$ for $k\leq k_0$, whereas $({\bf P}_k-{\bf P}_{k-1}){\bf X}\boldsymbol{\beta}={\bf 0}_n$ for $k>k_0$. This implies \begin{equation}\label{residual12} \dfrac{\|({\bf P}_k-{\bf P}_{k-1}){\bf y}\|_2^2}{\sigma^2}\sim \chi^2_1\left(\dfrac{\|({\bf P}_k-{\bf P}_{k-1}){\bf X}_{[k_0]}\boldsymbol{\beta}_{[k_0]}\|_2^2}{\sigma^2}\right) \text{\ for \ } k\leq k_0 \end{equation} \begin{equation}\label{residual2} \text{and} \ \|({\bf P}_k-{\bf P}_{k-1}){\bf y}\|_2^2/\sigma^2\sim \chi^2_1 \text{\ for\ } k> k_0. \end{equation} Since $({\bf I}_n-{\bf P}_k)({\bf P}_k-{\bf P}_{k-1})={\bf O}_n$, the Gaussian vectors ${\bf r}^k=({\bf I}_n-{\bf P}_k){\bf y}$ and $({\bf P}_k-{\bf P}_{k-1}){\bf y}$ are uncorrelated and hence independent, and so are their squared norms $\|{\bf r}^{k}\|_2^2$ and $\|({\bf P}_k-{\bf P}_{k-1}){\bf y}\|_2^2$. \begin{lemma}\label{lemma:Beta_def}\cite{ravishanker2001first} Let $Z_1\sim \chi^2_{k_1}$ and $Z_2\sim \chi^2_{k_2}$ be two independent $\chi^2$ R.Vs. Then the ratio $\dfrac{Z_1}{Z_1+Z_2}\sim \mathbb{B}(\dfrac{k_1}{2},\dfrac{k_2}{2})$. \end{lemma} Using (\ref{residual1}) and (\ref{residual2}) along with the independence of $({\bf I}_n-{\bf P}_k){\bf y}$ and $({\bf P}_k-{\bf P}_{k-1}){\bf y}$ in Lemma \ref{lemma:Beta_def} gives the following distributional result: \begin{equation}\label{beta_prelim} RR(k)=\dfrac{\|{\bf r}^k\|_2^2/\sigma^2}{\|{\bf r}^{k-1}\|_2^2/\sigma^2}\sim \dfrac{\chi^2_{n-k}}{\chi^2_{n-k}+\chi^2_1}\sim \mathbb{B}(\dfrac{n-k}{2},\dfrac{1}{2}) \end{equation} for $k>k_0$ and all $\sigma^2>0$. This is a) of Lemma \ref{lemma:basic_distributions}. 
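Part a) above is easy to check by Monte Carlo simulation: for $k>k_0$, the ratio of a $\chi^2_{n-k}$ variable to the sum of itself and an independent $\chi^2_1$ variable should match the $\mathbb{B}(\frac{n-k}{2},\frac{1}{2})$ law. The following Python sketch (not part of the paper; $n$, $k$ and the trial count are arbitrary illustrative choices) compares the empirical mean of the ratio with the Beta mean $\frac{a}{a+b}$:

```python
import numpy as np

# Monte Carlo check of part a): for k > k_0, RR(k) has the same law as
# chi2_{n-k} / (chi2_{n-k} + chi2_1) ~ Beta((n-k)/2, 1/2).
rng = np.random.default_rng(0)
n, k, trials = 50, 12, 200_000          # illustrative values only

z1 = rng.chisquare(n - k, size=trials)  # plays the role of ||r^k||^2 / sigma^2
z2 = rng.chisquare(1, size=trials)      # plays the role of ||(P_k - P_{k-1}) y||^2 / sigma^2
rr = z1 / (z1 + z2)

a, b = (n - k) / 2.0, 0.5
beta_mean = a / (a + b)                 # mean of Beta(a, b)
print(abs(rr.mean() - beta_mean))       # close to 0 for large trial counts
```

The empirical mean agrees with $\frac{(n-k)/2}{(n-k)/2+1/2}=\frac{n-k}{n-k+1}$ to within Monte Carlo error, and the ratio is free of $\sigma^2$, as asserted.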
From (\ref{residual1}) and (\ref{residual12}), $RR(k_0)=\frac{{ Z}_1}{Z_1+Z_2}$, where $Z_1=\|({\bf I}_n-{\bf P}_{k_0}){\bf w}\|_2^2\sim \sigma^2\chi^2_{n-k_0}$ and $Z_2=\|({\bf P}_{k_0}-{\bf P}_{k_0-1}){\bf y}\|_2^2\sim\sigma^2 \chi^2_{1}\left(\dfrac{\|({\bf P}_{k_0}-{\bf P}_{k_0-1}){\bf X}_{[k_0]}\boldsymbol{\beta}_{[k_0]}\|_2^2}{\sigma^2}\right) $. Note that $({\bf P}_{k_0}-{\bf P}_{k_0-1}){\bf X}_{[k_0]}\boldsymbol{\beta}_{[k_0]}= ({\bf P}_{k_0}-{\bf I}_n+{\bf I}_n-{\bf P}_{k_0-1}){\bf X}_{[k_0]}\boldsymbol{\beta}_{[k_0]}={\bf 0}_n+({\bf I}_n-{\bf P}_{k_0-1}){\bf X}_{[k_0]}\boldsymbol{\beta}_{[k_0]}=({\bf I}_n-{\bf P}_{k_0-1}){\bf x}_{k_0}\boldsymbol{\beta}_{k_0}$. Hence, $Z_2\sim \sigma^2\chi^2_1\left(\dfrac{\|({\bf I}_n-{\bf P}_{k_0-1}){\bf x}_{k_0}\|_2^2\boldsymbol{\beta}_{k_0}^2}{\sigma^2}\right)$. This proves b) of Lemma \ref{lemma:basic_distributions}. \section*{Appendix B: Proof of Theorem \ref{thm:rrknot}} \begin{proof} Note that $RR(k_0)=\dfrac{Z_1}{Z_1+Z_2}$, where $Z_1=\|({\bf I}_n-{\bf P}_{k_0}){\bf w}\|_2^2\sim \sigma^2\chi^2_{n-k_0}$ and $Z_2=\|({\bf P}_{k_0}-{\bf P}_{k_0-1}){\bf y}\|_2^2\sim \sigma^2\chi^2_1\left(\dfrac{\|({\bf I}_{n}-{\bf P}_{k_0-1}){\bf x}_{k_0}\|_2^2\boldsymbol{\beta}_{k_0}^2}{\sigma^2}\right)$ as discussed in Lemma \ref{lemma:basic_distributions}. The proof of Theorem \ref{thm:rrknot} is based on the following lemma. \begin{lemma} \label{lemma:noncentral} $\chi^2$ R.Vs satisfy the following limits \cite{tsp}.\\ a). Let $Z\sim \chi^2_k$ for a fixed $k\in \mathbb{N}$. Then $\sigma^2Z\overset{P}{\rightarrow }0$ as $\sigma^2\rightarrow 0$. \\ b). Let $Z\sim \chi^2_{k}(\lambda/\sigma^2)$ for fixed $k\in \mathbb{N}$ and fixed $\lambda>0$. Then $\sigma^2 Z\overset{P}{\rightarrow } \lambda$ as $\sigma^2 \rightarrow 0$. 
\end{lemma} It follows directly from Lemma \ref{lemma:noncentral} that $Z_1\overset{P}{\rightarrow} 0 \ \text{as} \ \sigma^2\rightarrow 0$ and $Z_2\overset{P}{\rightarrow} \|({\bf I}_n-{\bf P}_{k_0-1}){\bf x}_{k_0}\|_2^2\boldsymbol{\beta}_{k_0}^2>0 \ \text{as} \ \sigma^2\rightarrow 0$. Hence, the numerator $Z_1$ in $RR(k_0)=\dfrac{Z_1}{Z_1+Z_2}$ converges in probability to zero, whereas the denominator $Z_1+Z_2$ converges in probability to the positive constant $ \|({\bf I}_n-{\bf P}_{k_0-1}){\bf x}_{k_0}\|_2^2\boldsymbol{\beta}_{k_0}^2$. Consequently\footnote{Note that if $X_n \overset{P}{\rightarrow }c_1$ and $Y_n\overset{P}{\rightarrow } c_2\neq 0$ as $n\rightarrow \infty$, then $X_n/Y_n\overset{P}{\rightarrow } c_1/c_2$. Likewise, $X_n+Y_n\overset{P}{\rightarrow}c_1+c_2$ [Theorem 5.5,\cite{wasserman2013all}].}, $RR(k_0)\overset{P}{\rightarrow } 0$ as $\sigma^2 \rightarrow 0$. \end{proof} \section*{Appendix C: Proof of Theorem \ref{thm:highSNR}} \begin{proof} The event $\{\hat{k}_{RRT}<k_0\}$ can happen only when one of the events $\mathcal{A}_1=\{\{\exists k<k_0 : RR(k)<\Gamma_{RRT}^{\alpha}(k)\}\cap \{RR(k)>\Gamma_{RRT}^{\alpha}(k),\forall k\geq k_0\}\}$ or $\mathcal{A}_2=\{\{k:RR(k)<\Gamma_{RRT}^{\alpha}(k)\}=\emptyset\}=\{RR(k)>\Gamma_{RRT}^{\alpha}(k),\ \forall k\}$ is true. $\mathcal{A}_1$ definitely results in $\hat{k}_{RRT}<k_0$, whereas, when $\mathcal{A}_2$ is true, $\hat{k}_{RRT}=\max\{k:RR(k)<\Gamma_{RRT}^{\alpha_{new}}\}$ in (\ref{alphanew}) can be larger or smaller than $k_0$. Thus, \begin{equation}\label{under} \mathbb{P}_{\mathcal{U}}\leq \mathbb{P}(\mathcal{A}_1)+\mathbb{P}(\mathcal{A}_2). 
\end{equation} Using the bound $\mathbb{P}(\mathcal{B}_1\cap \mathcal{B}_2)\leq \mathbb{P}(\mathcal{B}_1)$ for any two events $\mathcal{B}_1$ and $\mathcal{B}_2$, one can bound \begin{equation} \mathbb{P}(\mathcal{A}_1)\leq \mathbb{P}(\{RR(k_0)>\Gamma_{RRT}^{\alpha}(k_0)\}). \end{equation} The convergence $RR(k_0)\overset{P}{\rightarrow } 0$ established in Theorem \ref{thm:rrknot} implies that \begin{equation} \underset{\sigma^2 \rightarrow 0}{\lim}\mathbb{P}(\mathcal{A}_1)\leq \underset{\sigma^2 \rightarrow 0}{\lim}\mathbb{P}(\{RR(k_0)>\Gamma_{RRT}^{\alpha}(k_0)\})=0. \end{equation} $RR(k_0)\overset{P}{\rightarrow } 0$ also implies that \begin{equation} \begin{array}{ll} \underset{\sigma^2 \rightarrow 0}{\lim}\mathbb{P}(\mathcal{A}_2)&=\underset{\sigma^2 \rightarrow 0}{\lim}\mathbb{P}(\{RR(k)>\Gamma_{RRT}^{\alpha}(k),\ \forall k\})\\ &\leq \underset{\sigma^2 \rightarrow 0}{\lim}\mathbb{P}(\{RR(k_0)>\Gamma_{RRT}^{\alpha}({k_0})\})=0. \end{array} \end{equation} Applying these limits in (\ref{under}) gives $\underset{\sigma^2 \rightarrow 0}{\lim }\mathbb{P}_{\mathcal{U}}=0$. Similarly, the event $\hat{k}_{RRT}>k_0$ can happen only when either $\mathcal{A}_3=\{\exists k>k_0:RR(k)<\Gamma_{RRT}^{\alpha}(k)\}$ or $\mathcal{A}_2$ is true. When $\mathcal{A}_3$ is true, then definitely $\hat{k}_{RRT}>k_0$, whereas, when $\mathcal{A}_2$ is true, $\hat{k}_{RRT}$ can be either greater than or smaller than $k_0$. Hence, \begin{equation}\label{over} \mathbb{P}_{\mathcal{O}}\leq \mathbb{P}(\mathcal{A}_3)+\mathbb{P}(\mathcal{A}_2). \end{equation} By Theorem \ref{thm:beta_mos}, $\mathbb{P}(\mathcal{A}_3)=1-\mathbb{P}( \{RR(k)>\Gamma_{RRT}^{\alpha}(k),\forall k>k_0\})\leq \alpha$ for all $\sigma^2>0$. Applying this along with $\underset{\sigma^2 \rightarrow 0}{\lim}\mathbb{P}(\mathcal{A}_2)=0$ in (\ref{over}) gives $\underset{\sigma^2 \rightarrow 0}{\lim }\mathbb{P}_{\mathcal{O}}\leq \alpha$. 
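As an aside, the threshold $\Gamma_{RRT}^{\alpha}(k)$ used in these bounds is the $\alpha/p$ quantile of the $\mathbb{B}(\frac{n-k}{2},\frac{1}{2})$ distribution, i.e., $F^{-1}_{\frac{n-k}{2},\frac{1}{2}}(\alpha/p)$, which is readily evaluated with a standard inverse Beta CDF. A minimal Python sketch (the values of $n$, $p$ and $\alpha$ below are illustrative, not from the paper):

```python
from scipy.stats import beta

# Gamma_RRT^alpha(k) is the (alpha/p)-quantile of Beta((n-k)/2, 1/2);
# scipy's beta.ppf evaluates the inverse CDF F^{-1}_{a,b}.
n, p, alpha = 100, 20, 0.1              # illustrative values only

def gamma_rrt(k, n=n, p=p, alpha=alpha):
    return beta.ppf(alpha / p, (n - k) / 2.0, 0.5)

thresholds = [gamma_rrt(k) for k in range(1, p + 1)]
# All thresholds lie strictly between 0 and 1.
print(min(thresholds), max(thresholds))
```

Consistent with Case 1 of the asymptotic analysis later in the paper, the threshold approaches $1$ as $n$ grows with $p$ and $\alpha$ fixed.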
Note that the events $\{\hat{k}_{RRT}<k_0\}$ and $\{\hat{k}_{RRT}>k_0\}$ are disjoint and hence \begin{equation} PCS=1-\mathbb{P}(\hat{k}_{RRT}\neq k_0)=1-\mathbb{P}_{\mathcal{U}}-\mathbb{P}_{\mathcal{O}}. \end{equation} Thus the limit $\underset{\sigma^2 \rightarrow 0}{\lim }PCS\geq 1-\alpha$ directly follows from the limits on $\mathbb{P}_{\mathcal{U}}$ and $\mathbb{P}_{\mathcal{O}}$. \end{proof} \section*{Appendix D: Proof of Theorem \ref{thm:asymptotic_rrt}} $\Gamma_{RRT}^{\alpha}(k_0)=F^{-1}_{\frac{n-k_0}{2},\frac{1}{2}}(x_n)$, where $x_n=\frac{\alpha}{p}$ is an implicit function of $n$. Depending on the behaviour of $x_n$ as $n \rightarrow \infty$, we consider the following two cases. {\bf Case 1:} $p$ fixed and $\alpha$ fixed. Here $x_n$ is a constant function of $n$ and $k_{lim}=\underset{n \rightarrow \infty}{\lim}k_0/n<1$. Using the limit $\underset{a \rightarrow \infty}{\lim}F^{-1}_{a,b}(x)=1$ for every fixed $b \in (0,\infty)$ and $x \in(0,1)$ (see Proposition 1 of \cite{askitis2016asymptotic}), it follows that $\underset{n\rightarrow \infty}{\lim}\Gamma_{RRT}^{\alpha}(k_0)=\underset{n \rightarrow \infty}{\lim}F^{-1}_{\frac{n-k_0}{2},\frac{1}{2}}(x_n)=1$.\\ {\bf Case 2:} ($p$ fixed, $\alpha \rightarrow 0$), ($p\rightarrow \infty$, $\alpha$ fixed) or ($p\rightarrow \infty$, $\alpha \rightarrow 0$). In all these cases, $x_n\rightarrow 0$ as $n\rightarrow \infty$. Expanding $F^{-1}_{a,b}(z)$ at $z=0$ using the expansion given in [http://functions.wolfram.com/06.23.06.0001.01] gives \begin{equation}\label{beta_exp} \begin{array}{ll} F^{-1}_{a,b}(z)=\rho(n,1)+\dfrac{b-1}{a+1}\rho(n,2) \\ +\dfrac{(b-1)(a^2+3ab-a+5b-4)}{2(a+1)^2(a+2)}\rho(n,3) +O(z^{(4/a)}) \end{array} \end{equation} for all $a>0$. We associate $a=\frac{n-k_0}{2}$, $b=1/2$, $z=x_n$ and $\rho(n,l)=(az{B}(a,b))^{(l/a)}=\left(\frac{\left(\frac{n-k_0}{2}\right)\alpha{B}(\frac{n-k_0}{2},0.5)}{p}\right)^{\frac{2l}{n-k_0}}$ for $l\geq 1$. 
Then $\log(\rho(n,l))$ is given by \begin{equation}\label{log_rho} \begin{array}{ll} \log(\rho(n,l))=\frac{2l}{n-k_0}\log\left({\frac{n-k_0}{2p}}\right) +\frac{2l}{n-k_0}\log({B}(\frac{n-k_0}{2},0.5))\\ +\frac{2l}{n-k_0}\log(\alpha). \end{array} \end{equation} In the limits $n\rightarrow \infty$, $0\leq\underset{n\rightarrow \infty}{\lim} p/n<1$ and $0\leq k_{lim}<1$, one has $\underset{n \rightarrow \infty}{\lim}\frac{2l}{n-k_0}\log\left({\frac{n-k_0}{2p}}\right)=0$. Using the asymptotic expansion ${B}(a,b)=G(b)a^{-b}\left(1-\frac{b(b-1)}{2a}(1+O(\frac{1}{a}))\right)$ as $a \rightarrow \infty$ (given in [http://functions.wolfram.com/06.18.06.0006.01]) in the second term of (\ref{log_rho}) gives \begin{equation} \underset{n \rightarrow \infty}{\lim}\frac{2l}{n-k_0}\log\left({B}(\frac{n-k_0}{2},0.5)\right)=0. \end{equation} Hence, only the behaviour of $\frac{2l}{n-k_0}\log(\alpha)$ needs to be considered. We now consider three cases depending on the behaviour of $\alpha$. {\bf Case 2.A:} When $\underset{n \rightarrow \infty}{\lim}\log(\alpha)/n=0$, one has $\underset{n \rightarrow \infty}{\lim}\log(\rho(n,l))=0$, which in turn implies that $\underset{n \rightarrow \infty}{\lim}\rho(n,l)=1$ for every $l$. {\bf Case 2.B:} When $-\infty<\alpha_{lim}=\underset{n \rightarrow \infty}{\lim}\log(\alpha)/n<0$ and $\underset{n \rightarrow \infty}{\lim}\dfrac{k_0}{n}=k_{lim}<1$, one has $-\infty<\underset{n \rightarrow \infty}{\lim}\log(\rho(n,l))=(2l\alpha_{lim})/(1-k_{lim})<0$. This in turn implies that $0<\underset{n \rightarrow \infty}{\lim}\rho(n,l)=e^{\dfrac{2l\alpha_{lim}}{1-k_{lim}}}<1$ for every $l$. {\bf Case 2.C:} When $\underset{n \rightarrow \infty}{\lim}\log(\alpha)/n=-\infty$, one has $\underset{n \rightarrow \infty}{\lim}\log(\rho(n,l))=-\infty$, which in turn implies that $\underset{n \rightarrow \infty}{\lim}\rho(n,l)=0$ for every $l$. Note that the coefficient of $\rho(n,l)$ in (\ref{beta_exp}) for $l>1$ is asymptotically $1/a\approx 2/(n-k_0)$. 
Hence, these coefficients decay to zero as $n\rightarrow \infty$ when $0\leq k_{lim}<1$. Consequently, only the $\rho(n,1)$ term in (\ref{beta_exp}) survives as $n \rightarrow \infty$. This implies that $\underset{n \rightarrow \infty}{\lim}\Gamma_{RRT}^{\alpha}(k_0)=1$ for Case 2.A, $0<\underset{n \rightarrow \infty}{\lim}\Gamma_{RRT}^{\alpha}(k_0)=e^{\dfrac{2\alpha_{lim}}{1-k_{lim}}}<1$ for Case 2.B and $\underset{n \rightarrow \infty}{\lim}\Gamma_{RRT}^{\alpha}(k_0)=0$ for Case 2.C. This proves Theorem \ref{thm:asymptotic_rrt}. \section*{Appendix E: Proof of Theorem \ref{thm:large_sample}} \begin{proof} Consider the events $\mathcal{A}_1$, $\mathcal{A}_2$ and $\mathcal{A}_3$ defined in the proof of Theorem \ref{thm:highSNR}. Following the proof of Theorem \ref{thm:highSNR}, one has $\mathbb{P}_{\mathcal{U}}\leq \mathbb{P}(\mathcal{A}_1)+\mathbb{P}(\mathcal{A}_2)$ and $\mathbb{P}_{\mathcal{O}}\leq \mathbb{P}(\mathcal{A}_3)+\mathbb{P}(\mathcal{A}_2)$, where $\mathbb{P}(\mathcal{A}_1)\leq \mathbb{P}(\{RR(k_0)>\Gamma_{RRT}^{\alpha}(k_0)\})$, $\mathbb{P}(\mathcal{A}_2)\leq \mathbb{P}(\{RR(k_0)>\Gamma_{RRT}^{\alpha}(k_0)\})$ and $\mathbb{P}(\mathcal{A}_3)\leq \alpha,\forall n$. Hence, only the large sample behaviour of $\mathbb{P}(\{RR(k_0)>\Gamma_{RRT}^{\alpha}(k_0)\})$ needs to be analysed. Let $Z_1=\|({\bf I}_n-{\bf P}_{k_0}){\bf w}\|_2^2\sim \sigma^2\chi^2_{n-k_0}$ and $Z_2=\|({\bf P}_{k_0}-{\bf P}_{k_0-1}){\bf y}\|_2^2\sim \sigma^2 \chi^2_1\left(\dfrac{\|({\bf I}_n-{\bf P}_{k_0-1}){\bf x}_{k_0}\|_2^2\boldsymbol{\beta}_{k_0}^2}{\sigma^2}\right)$. Following Lemma \ref{lemma:basic_distributions}, $RR(k_0)=\frac{Z_1}{Z_1+Z_2}$. 
Hence, \begin{equation}\label{under_asymptptic} \begin{array}{ll} \mathbb{P}(\mathcal{A}_1)=\mathbb{P}\left(\frac{Z_1}{Z_1+Z_2}>\Gamma_{RRT}^{\alpha}(k_0)\right) =\mathbb{P}\left(\frac{1-\Gamma_{RRT}^{\alpha}(k_0)}{\Gamma_{RRT}^{\alpha}(k_0)}\frac{Z_1}{n\sigma^2}>\frac{Z_2}{n\sigma^2}\right). \end{array} \end{equation} The large sample behaviour of $\chi^2$ R.Vs is characterized in the following lemma. \begin{lemma}\label{lemma:noncentral2} $\chi^2$ R.Vs satisfy the following limits \cite{eefenumeration}.\\ A1). Let $Z\sim {\chi^2_l}$; then $Z/l \overset{P}{\rightarrow} 1$ as $l \rightarrow \infty$. \\ A2). Let $Z\sim \chi^2_k(M l)$ for a fixed $k$ and $M>0$; then $Z/l\overset{P}{\rightarrow} M$ as $l \rightarrow \infty$. \end{lemma} By Theorem \ref{thm:asymptotic_rrt}, $\alpha_{lim}=0$ implies that $\Gamma_{RRT}^{\alpha}(k_0)\rightarrow 1$ and $\dfrac{1-\Gamma_{RRT}^{\alpha}(k_0)}{\Gamma_{RRT}^{\alpha}(k_0)}\rightarrow 0$ as $n \rightarrow \infty$. A1) of Lemma \ref{lemma:noncentral2} and $0\leq k_{lim}<1$ imply $\dfrac{Z_1}{n\sigma^2}=\dfrac{Z_1}{(n-k_0)\sigma^2}\dfrac{n-k_0}{n}\overset{P}{\rightarrow }1-k_{lim}$. Combining these limits, the L.H.S. of (\ref{under_asymptptic}), i.e., $\dfrac{1-\Gamma_{RRT}^{\alpha}(k_0)}{\Gamma_{RRT}^{\alpha}(k_0)}\dfrac{Z_1}{n\sigma^2}$, converges in probability to zero as $n \rightarrow \infty$. Next we consider the behaviour of $\dfrac{Z_2}{n\sigma^2}$. Let $\tilde{Z}_2\sim \chi^2_1(M_1n)$ be some R.V. Then \begin{equation}\label{gargon} \mathbb{P}\left(\frac{Z_2}{n\sigma^2}>\frac{M_1}{2}\right)\geq \mathbb{P}\left(\frac{\tilde{Z}_2}{n}>\frac{M_1}{2}\right), \ \forall n>n_0. \end{equation} Eq.~(\ref{gargon}) follows from the monotonicity of $\chi^2_k(\lambda)$ w.r.t $\lambda$ and the fact that the noncentrality parameter in $Z_2$ satisfies $\dfrac{\|({\bf I}_n-{\bf P}_{k_0-1}){\bf x}_{k_0}\|_2^2\boldsymbol{\beta}_{k_0}^2}{\sigma^2}\geq M_1n$ for all $n>n_0$. 
A2) of Lemma \ref{lemma:noncentral2} implies that $\tilde{Z}_2/n\overset{P}{\rightarrow }M_1$ as $n \rightarrow \infty$. This implies that \begin{equation} \underset{n \rightarrow \infty}{\lim}\mathbb{P}\left(\frac{\tilde{Z}_2}{n}>\frac{M_1}{2}\right)=1\ \text{and} \ \ \underset{n \rightarrow \infty}{\lim} \mathbb{P}\left(\frac{Z_2}{n\sigma^2}>\frac{M_1}{2}\right)=1. \end{equation} Since the L.H.S. of (\ref{under_asymptptic}) converges to zero and the R.H.S. is bounded away from zero with probability approaching one, it is true that $\underset{n \rightarrow \infty}{\lim}\mathbb{P}(\mathcal{A}_1)=0$. Similarly, $\underset{n \rightarrow \infty}{\lim}\mathbb{P}(\mathcal{A}_2)=0$. Note that the limits derived so far assumed only $\alpha_{lim}=0$. Hence, as long as $\alpha_{lim}=0$, it is true that $\underset{n \rightarrow \infty}{\lim}\mathbb{P}_{\mathcal{U}}= 0$ and $\underset{n \rightarrow \infty}{\lim}\mathbb{P}_{\mathcal{O}}\leq \alpha$. Since $\alpha_{lim}=0$ for fixed $0< \alpha\leq 1$, this proves R2) of Theorem \ref{thm:large_sample}. If $\underset{n \rightarrow \infty}{\lim}\alpha=0$ is also true, then $\underset{n \rightarrow \infty}{\lim}\mathbb{P}_{\mathcal{O}}= 0$ and $\underset{n \rightarrow \infty}{\lim}PCS=1-\underset{n \rightarrow \infty}{\lim}\mathbb{P}_{\mathcal{O}}-\underset{n \rightarrow \infty}{\lim}\mathbb{P}_{\mathcal{U}}=1$. This proves R1) of Theorem \ref{thm:large_sample}. \end{proof} \bibliographystyle{IEEEtran} \bibliography{compressive} \end{document}
TITLE: Deformation Retraction and Projection/Closest Vector QUESTION [1 upvotes]: How do I compute the projection/closest vector to a subset? I have been thinking about this for far too long without any progress. If it helps, I am working in $\Bbb{R}^2$, but I would like formulae in terms of norms and inner products, if possible. For context, I am trying to prove that the figure eight is a deformation retract of the doubly punctured plane. And it is annoying that everything hinges on this annoyingly simple question. I have already shown that $\overline{B}(0,1) \setminus \{p,q\}$ is a deformation retract of $\Bbb{R}^2 \setminus \{p,q\}$, where $p = (-\frac{1}{2},0)$ and $q = (\frac{1}{2},0)$. Now I just need to show that $\overline{B}(0,1) \setminus \{p,q\}$ deformation retracts to the union of the two discs, with one centered at $p$ and the other centered at $q$, but I am currently facing the obstacle discussed above. EDIT: It just occurred to me that the deformation retraction I had in mind won't be well-defined. Any point on the y-axis contained in $\overline{B}(0,1)$ won't have a unique projection/closest vector in the union of the two open discs contained in $\overline{B}(0,1)$... Hmm.. need to rethink my approach... Of course, I wouldn't be opposed to any suggestions! REPLY [1 votes]: I do not really understand what you mean by "the projection/closest vector to a subset". However, I shall explain how to get the desired strong deformation retraction. Let $p_{\pm1}$ denote the points $(\pm1,0) \in \mathbb{R}^2$, let the figure eight be the space $E = S_{+1} \cup S_{-1}$, where $S_{\pm1}$ is the circle around $p_{\pm1}$ with radius $1$, and let the doubly punctured plane be the space $P = \mathbb{R}^2 \setminus \{ p_{+1}, p_{-1} \}$. Define $r : P \to E$ as follows. 
For $p = (x,y)$ set $$r(p) = \begin{cases} p_{+1} + \dfrac{p - p_{+1}}{\lVert p - p_{+1} \rVert} & \lVert p - p_{+1} \rVert \le 1 \\ p_{-1} + \dfrac{p - p_{-1}}{\lVert p - p_{-1} \rVert} & \lVert p - p_{-1} \rVert \le 1 \\ \dfrac{2xp}{\lVert p \rVert^2} & \lVert p - p_{+1} \rVert \ge 1, p \ne 0, x \ge 0 \\ -\dfrac{2xp}{\lVert p \rVert^2} & \lVert p - p_{-1} \rVert \ge 1, p \ne 0, x \le 0 \end{cases} $$ Here $\lVert - \rVert$ denotes the Euclidean norm $\lVert (x,y) \rVert = \sqrt{x^2+ y^2}$. Note that the denominator $\lVert p - p_{\pm 1} \rVert$ does not vanish on $P$. What happens geometrically? Let $D_{\pm 1}$ = closed unit disk with center $p_{\pm 1}$, $H_{\pm 1}$ = right/left half plane minus the interior of $D_{\pm 1}$. The first two lines describe the radial strong deformation retractions from $B_{\pm 1} = D_{\pm 1} \setminus \{ p_{\pm 1} \}$ to $S_{\pm 1}$. In fact, for $p \in B_{\pm 1}$ we have $\lVert r(p) - p_{\pm 1} \rVert = \lVert \dfrac{p - p_{\pm 1}}{\lVert p - p_{\pm 1} \rVert} \rVert = 1$, and for $p \in S_{\pm 1}$ we have $p_{\pm 1} + \dfrac{p - p_{\pm 1}}{\lVert p - p_{\pm 1} \rVert} = p$. Note that $B_{+1} \cap B_{-1} = \{ 0 \}$. Both line 1 and line 2 yield $r(0) = 0$. The last two lines (together with $r(0) = 0$) describe strong deformation retractions of $H_{\pm 1}$ to $S_{\pm 1}$. This is done by shifting each point $p \ne 0$ along the line through $0$ and $p$ until it reaches $S_{\pm 1}$. To be formal, this line is given by $l_p(t) = t p$, and for $p \in H_{\pm 1} \setminus \{ 0 \}$ we must find $t$ such that $\lVert t p - p_{\pm 1} \rVert = 1$. Easy computations show $t = \pm \dfrac{2x}{\lVert p \rVert^2}$, and in fact we defined $r(p) = l_p(t) = t p$. Note that for $p \in Y = H_{+1} \cap H_{-1}$ = $y$-axis = set of points with $x = 0$ we have $r(p) = 0$. Thus all four lines give us a consistent definition on the whole space $P$. It remains to show that $r \mid_{H_{\pm 1}}$ is continuous in $p = 0$. 
We have $\lVert p - p_{\pm 1} \rVert \ge 1$, i.e. $(x \mp 1)^2 + y^2 \ge 1$. This is equivalent to $\pm 2x \le x^2 + y^2$, which means $2\lvert x \rvert \le \lVert p \rVert^2$ since $p \in H_{\pm 1}$. Hence $\lVert r(p) \rVert = \dfrac{2 \lvert x \rvert}{\lVert p \rVert} \le \lVert p \rVert$ for $p \ne 0$. This immediately implies continuity. We have now constructed a retraction $r$. To see that it is a strong deformation retraction, define a homotopy $$H : P \times I \to P, H(p,t) = (1-t)p + tr(p) .$$ It is readily verified that in fact $H(p,t) \ne p_{\pm 1}$ for all $(p,t)$ (check all 4 lines in the definition of $r$). This is a homotopy from $id_P$ to $r$ which is stationary on $E$.
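A quick numerical sanity check of the retraction defined above (a sketch, not part of the answer; the sampled points are arbitrary): every value of $r$ should land on the figure eight $E$, i.e. at distance $1$ from $p_{+1}=(1,0)$ or from $p_{-1}=(-1,0)$.

```python
import numpy as np

P1 = np.array([1.0, 0.0])    # p_{+1}
M1 = np.array([-1.0, 0.0])   # p_{-1}

def r(p):
    # Direct transcription of the four-case definition of r : P -> E.
    p = np.asarray(p, dtype=float)
    x = p[0]
    if np.linalg.norm(p - P1) <= 1:      # inside the right closed disk
        return P1 + (p - P1) / np.linalg.norm(p - P1)
    if np.linalg.norm(p - M1) <= 1:      # inside the left closed disk
        return M1 + (p - M1) / np.linalg.norm(p - M1)
    if np.allclose(p, 0):                # r(0) = 0 (which lies on E)
        return np.zeros(2)
    s = 2 * abs(x) / np.dot(p, p)        # +-2x/||p||^2, same sign handling for both half planes
    return s * p

rng = np.random.default_rng(1)
for _ in range(1000):
    p = rng.uniform(-3, 3, size=2)
    if np.allclose(p, P1) or np.allclose(p, M1):
        continue                         # the punctures are excluded from P
    q = r(p)
    d = min(np.linalg.norm(q - P1), np.linalg.norm(q - M1))
    assert abs(d - 1.0) < 1e-9           # q lies on one of the two unit circles
print("all sampled points retract onto the figure eight")
```

The algebraic identity $\lVert t p - p_{\pm 1} \rVert = 1$ with $t = \pm 2x/\lVert p \rVert^2$ is exactly what makes the assertion hold to machine precision.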
TITLE: Cardinal numbers of Setminus QUESTION [0 upvotes]: Could you help me check the following fact? Let $A,B,C$ be sets such that $C\subseteq A$ and $C\subseteq B$. Then $1.$ $|A\setminus C|=|A|−|C|$ $2.$ $|A\setminus C|=|B\setminus C|$ if and only if $|A|=|B|$ where $|A|$ is the cardinal number of $A$ and $A\setminus C$ is the set of all elements of $A$ that are not elements of $C$. I'm worrying about an infinite cardinal number. Thank you. REPLY [0 votes]: If $A,C$ are infinite with $C \subseteq A$, even only countably infinite, you cannot really say anything about $|A\setminus C|$: it could be any finite number, including $0$, or countably infinite (so $\aleph_0$). So there is no really good way to define $|A| - |C|$ in this case, and rule 1 does not even make sense. Subtraction of infinite cardinals is not really done. Rule 2 holds in the left-to-right direction, as $|A \setminus C| = |B \setminus C|$ gives us a bijection $f$ between $A \setminus C$ and $B \setminus C$, which we extend by the identity on $C$ to get a bijection between $B$ and $A$, so $|A| = |B|$. However, the reverse does not hold for infinite sets: $C = \mathbb{N}, A = \mathbb{N} \cup \{-1\}, B = \mathbb{N} \cup \{-1,-2\}$ shows that while $|A| = |B| = \aleph_0$, the differences have size 1 resp. 2.
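The left-to-right direction of rule 2 can be illustrated with small finite sets (a sketch; the sets below are arbitrary examples): a bijection between $A \setminus C$ and $B \setminus C$, extended by the identity on $C$, is a bijection between $A$ and $B$.

```python
# Finite-set illustration of the answer's argument: a bijection f between
# A \ C and B \ C, extended by the identity on C, gives a bijection A -> B.
C = {0, 1, 2}
A = C | {10, 11}
B = C | {20, 21}

f = dict(zip(sorted(A - C), sorted(B - C)))   # bijection A\C -> B\C
g = {**f, **{c: c for c in C}}                # extend by the identity on C

# g is a bijection from A onto B, hence |A| = |B|.
assert set(g) == A
assert set(g.values()) == B
assert len(set(g.values())) == len(g)         # injectivity
print(len(A) == len(B))                       # True
```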
TITLE: How do I stop overcomplicating proofs? QUESTION [30 upvotes]: I'm a third year student majoring in Math. Whenever I sit down and try to prove something, I just don't know what and where to start with. The first proofs course I took was graded very strictly, so missing a very tiny detail made me lose a lot of marks (which does make sense since it is an introductory class to proofs and the "little details" could have been not so "little"). But after that, I just get way too anxious when I do proofs because I don't know what kind of detail I would be missing. I end up completing the proofs by getting a lot of hints on where to start, and it takes way too much time for me to do a single proof (almost 2-3 days per theorem). And because I don't want to get the proofs wrong, I keep searching up resources to do the proofs; so I kind of end up not doing the proofs myself. But when I see the "solutions" to the proofs, I realize they were very simple and I have been over-complicating it a lot. I really love math and I want to be able to really understand courses like Real Analysis, and how scared I am of proofs definitely is an issue that I want to overcome. So my question is (i) If you have gone through this stage, how did you overcome it? (ii) Are there any general tips on starting proofs? Thanks. REPLY [3 votes]: Lots of great suggestions here, especially working with other people. But here's one thing I did through most of college which is a bit extreme but really helps with these skills. I would solve homework problems on a whiteboard, and when I had solved a problem I would look at it until I thought I understood it fully, and then I would erase the board. Then the next day I would write up the solution. If my solution was a huge mess then there's no way I would remember it the next morning, but if I understood the ideas then it's usually pretty easy to reconstruct the argument.
TITLE: Approach for optimization problem with polynomial constraints? QUESTION [0 upvotes]: I have a problem where the objective function is linear and constraints have polynomials (in one variable). So, my question is what are the main approaches to this issue? I can construct a small example, just to illustrate it. $ \max \sum_{i} a_i x_i - \sum_{j} b_j y_j $ $\qquad c_1 x_i + c_2 x_i^2 + c_3 x_i^3 +\ldots + c_k x_i^k = \sum_{j} d_j y_j, \quad \forall i\in N $ $\qquad x_i \geq 0, \quad i\in N $ $\qquad y_j \in \{0,1\}, \quad j\in M $ REPLY [1 votes]: For small scale problems, simply using a global solver appears to work very well, at least for the data I tried. Here is some YALMIP code (MATLAB Toolbox, developed by me) to solve a small instance using YALMIP's global solver bmibnb. It is solved in a second or so if you have a good MILP solver installed. Similar results are obtained with SCIP's global solver.

N = 10; M = 20; degree = 4;
a = randn(N,1); b = randn(M,1);
c = rand(M,degree); d = randn(M,1);
x = sdpvar(N,1); y = binvar(M,1);
objective = a'*x - b'*y;
Model = x>=0;
for i = 1:N
    Model = [Model, [-d'*y c(i,:)]*monolist(x(i),4) == 0];
end
optimize(Model,objective,sdpsettings('solver','bmibnb'))
value(x)
value(y)
%optimize(Model,objective,sdpsettings('solver','scip'))

EDIT: To follow up on your comment, here is a model based on a PWA approximation (using sos2 constructs in cplex, as that speeds up things). Of course, the bound 5 should be chosen more carefully by performing bound propagation etc. Solved in a fraction of a second, but the drawback is of course the lack of an exact solution.

xi = linspace(0,5,200)';
Model = x>=0;
for i = 1:N
    fi = c(i,1)*xi + c(i,2)*xi.^2 + c(i,3)*xi.^3 + c(i,4)*xi.^4;
    lambda = sdpvar(length(fi),1);
    Model = [Model, sos2(lambda)];
    Model = [Model, x(i) == lambda'*xi, d'*y == lambda'*fi, lambda>=0, sum(lambda)==1];
end
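The $\lambda$/SOS2 construction in the EDIT can be sketched without any solver: a point $x$ on a grid is written as a convex combination of two adjacent breakpoints, and the polynomial is replaced by the same combination of its breakpoint values. The following Python sketch (the quartic $f$ and the grid are illustrative, not the question's data) shows why a dense grid makes the piecewise-linear error small.

```python
import numpy as np

def f(x):
    # Example quartic of the form c1*x + c2*x^2 + c3*x^3 + c4*x^4 (made-up coefficients).
    return 0.5 * x + 0.2 * x**2 - 0.1 * x**3 + 0.05 * x**4

xi = np.linspace(0.0, 5.0, 200)              # breakpoints, as in the YALMIP code
fi = f(xi)                                   # function values at the breakpoints

def pwa(x):
    # Lambda (SOS2) interpolation: only two adjacent breakpoints get nonzero
    # convex weights (1 - t, t); the same weights reproduce x and approximate f(x).
    i = np.searchsorted(xi, x) - 1           # index of the left neighbour
    i = min(max(i, 0), len(xi) - 2)
    t = (x - xi[i]) / (xi[i + 1] - xi[i])
    return (1 - t) * fi[i] + t * fi[i + 1]

errs = [abs(pwa(x) - f(x)) for x in np.linspace(0, 5, 1000)]
print(max(errs))                             # small: dense grid => tight PWA approximation
```

In the MILP model the SOS2 constraint enforces exactly this "at most two adjacent nonzero $\lambda$'s" pattern, so the surrogate constraint $d^\top y = \lambda^\top f_i$ is the piecewise-linear chord shown here.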
\begin{document} \begin{frontmatter} \title{On Strong Data-Processing and Majorization Inequalities with Applications to Coding Problems} \author[First]{Igal Sason} \address[First]{The Andrew and Erna Viterbi Faculty of Electrical Engineering, Technion - Israel Institute of Technology, Technion City,\\ Haifa 3200003, Israel, (e-mail: sason@ee.technion.ac.il).} \begin{abstract} This work provides data-processing and majorization inequalities for $f$-divergences, and it considers some of their applications to coding problems. This work also provides tight bounds on the R\'{e}nyi entropy of a function of a discrete random variable with a finite number of possible values, where the considered function is not one-to-one, and their derivation is based on majorization and the Schur-concavity of the R\'{e}nyi entropy. One application of the $f$-divergence inequalities refers to the performance analysis of list decoding with either fixed or variable list sizes; some earlier bounds on the list decoding error probability are reproduced in a unified way, and new bounds are obtained and exemplified numerically. Another application is related to a study of the quality of approximating a probability mass function, which is induced by the leaves of a Tunstall tree, by an equiprobable distribution. The compression rates of finite-length Tunstall codes are further analyzed for asserting their closeness to the Shannon entropy of a memoryless and stationary discrete source. In view of the tight bounds for the R\'{e}nyi entropy and the work by Campbell, non-asymptotic bounds are derived for lossless data compression of discrete memoryless sources. \end{abstract} \vspace*{0.2cm} \begin{keyword} Cumulant generating functions; $f$-divergences; list decoding; lossless source coding; R\'{e}nyi entropy. 
\end{keyword} \vspace*{0.2cm} \end{frontmatter} \section{Introduction} \vspace*{-0.35cm} Divergences are non-negative measures of dissimilarity between pairs of probability measures which are defined on the same measurable space. They play a key role in the development of information theory, probability theory, statistics, learning, signal processing, and other related fields. One important class of divergence measures is defined by means of convex functions $f$, and it is called the class of $f$-divergences. It unifies fundamental and independently-introduced concepts in several branches of mathematics such as the chi-squared test for the goodness of fit in statistics, the total variation distance in functional analysis, the relative entropy in information theory and statistics, and it is closely related to the R\'{e}nyi divergence which generalizes the relative entropy. The class of $f$-divergences satisfies pleasing features such as the data-processing inequality, convexity, continuity and duality properties, finding interesting applications in information theory and statistics. Majorization theory is a simple and productive concept in the theory of inequalities, which also unifies a variety of familiar bounds (see the book by \cite{MarshallOA}). The concept of majorization finds various applications in diverse fields of pure and applied mathematics, including information theory and communication. This work, presented in the papers by \cite{IS18, IS19}, is focused on new data-processing and majorization inequalities for $f$-divergences and the R\'{e}nyi entropy. 
The reason for discussing both types of inequalities in this work is the interplay between majorization and data processing: a probability mass function $P$, defined over a finite set, is majorized by another probability mass function $Q$ defined over the same set if and only if there exists a doubly-stochastic transformation $W_{Y|X}$ under which the input distribution $Q$ yields the output distribution $P$ (denoted by $Q \rightarrow W_{Y|X} \rightarrow P$). We consider applications of the inequalities derived in this work to information theory, statistics, and coding problems. One application refers to the performance analysis of list decoding with either fixed or variable list sizes; some earlier bounds on the list decoding error probability are reproduced in a unified way, and new bounds are obtained and exemplified numerically. A second application, covered in \cite{IS19}, is related to a study of the quality of approximating a probability mass function, induced by the leaves of a Tunstall tree, by an equiprobable distribution. The compression rates of finite-length Tunstall codes are further analyzed to assert their closeness to the Shannon entropy of a memoryless and stationary discrete source. A third application combines our tight bounds for the R\'{e}nyi entropy (see \cite{IS18}) with the source coding theorem by \cite{Campbell65} to obtain tight non-asymptotic bounds for lossless compression of discrete memoryless sources.
\section{Coding Problems and Main Results} \subsection{Bounds on the List Decoding Error Probability with $f$-divergences} \label{subsection: Fano - list decoder} The minimum probability of error of a random variable $X$ given $Y$, denoted by $\varepsilon_{X|Y}$, can be achieved by a deterministic function (\textit{maximum-a-posteriori} decision rule) $\mathcal{L}^\ast \colon \mathcal{Y} \to \mathcal{X}$ (see \cite{ISSV18}): \begin{align} \varepsilon_{X|Y} &= \min_{\mathcal{L} \colon \mathcal{Y} \to \mathcal{X}} \mathbb{P} [ X \neq \mathcal{L} (Y) ] \label{20170904} \\ &= \mathbb{P} [ X \neq \mathcal{L}^\ast (Y) ] \label{eq:MAP}\\ &= 1- \mathbb{E} \left[ \max_{x \in \mathcal{X}} P_{X|Y}(x|Y) \right]. \label{eq1: cond. epsilon} \end{align} Fano's inequality gives an upper bound on the conditional entropy $H(X|Y)$ as a function of $\varepsilon_{X|Y}$ (or, equivalently, a lower bound on $\varepsilon_{X|Y}$ as a function of $H(X|Y)$) when $X$ takes a finite number of possible values. The list decoding setting, in which the hypothesis tester is allowed to output a subset of given cardinality and an error occurs if the true hypothesis is not in the list, is of great interest in information theory. A generalization of Fano's inequality to list decoding, in conjunction with the blowing-up lemma \cite[Lemma~1.5.4]{Csiszar_Korner}, leads to strong converse results in multi-user information theory. The main idea behind the successful combination of these two tools is that, given a code, it is possible to blow up the decoding sets in such a way that the probability of decoding error can be made as small as desired for sufficiently large blocklengths; since the blown-up decoding sets are no longer disjoint, the resulting setup is a list decoder with sub-exponential list size (as a function of the block length). In this section, we further study the setup of list decoding, and derive bounds on the average list decoding error probability.
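As a quick sanity check on \eqref{20170904}--\eqref{eq1: cond. epsilon} (not part of the original analysis), the sketch below confirms, for a small randomly generated joint distribution chosen purely for illustration, that the MAP rule attains the minimum error probability over all deterministic decision rules:

```python
import itertools

import numpy as np

rng = np.random.default_rng(1)
M, Ny = 4, 3                             # |X| = M, |Y| = Ny (illustrative sizes)
P_XY = rng.random((M, Ny))
P_XY /= P_XY.sum()                       # joint pmf P_{XY}(x, y)
P_Y = P_XY.sum(axis=0)

# eps_{X|Y} = 1 - E[ max_x P_{X|Y}(x|Y) ]  (MAP decision rule)
eps_map = 1 - sum(P_Y[y] * (P_XY[:, y] / P_Y[y]).max() for y in range(Ny))

# brute force over all deterministic rules L : Y -> X
eps_min = min(1 - sum(P_XY[Lx[y], y] for y in range(Ny))
              for Lx in itertools.product(range(M), repeat=Ny))
print(abs(eps_map - eps_min))            # the two coincide
```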
We first consider the special case where the list size is fixed, and then the more general case of a list size which depends on the channel observation. All of the following bounds on the list decoding error probability are derived in the paper by \cite{IS19}. \subsubsection{Fixed-Size List Decoding} \label{subsubsection: fixed-size list decoding} The next result provides a generalized Fano's inequality for fixed-size list decoding, expressed in terms of an arbitrary $f$-divergence; several earlier results in the literature are recovered from it as special cases. \begin{thm} \label{theorem: generalized Fano Df} Let $P_{XY}$ be a probability measure defined on $\mathcal{X} \times \mathcal{Y}$ with $|\mathcal{X}|=M$. Consider a decision rule $\mathcal{L} \colon \mathcal{Y} \to \binom{\mathcal{X}}{L}$, where $\binom{\mathcal{X}}{L}$ stands for the set of subsets of $\mathcal{X}$ with cardinality $L$, and $L < M$ is fixed. Denote the list decoding error probability by $P_{\mathcal{L}} := \Prob \bigl[ X \notin \mathcal{L}(Y) \bigr]$. Let $U_M$ denote an equiprobable probability mass function on $\mathcal{X}$. Then, for every convex function $f \colon (0, \infty) \to \Reals$ with $f(1)=0$, \begin{align} \label{generalized Fano Df} & \expectation\Bigl[D_f \bigl(P_{X|Y}(\cdot|Y) \, \| \, U_M \bigr) \Bigr] \nonumber \\ & \geq \frac{L}{M} \; f\biggl(\frac{M \, (1-P_{\mathcal{L}})}{L} \biggr) + \biggl(1-\frac{L}{M}\biggr) \; f\biggl(\frac{M P_{\mathcal{L}}}{M-L} \biggr). \end{align} \end{thm} The special case where $L=1$ (i.e., a decoder with a single output) gives \cite[(5)]{Guntuboyina11}. As a first consequence of Theorem~\ref{theorem: generalized Fano Df}, we reproduce the following earlier result.
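A minimal numerical sketch of Theorem~\ref{theorem: generalized Fano Df} (the joint distribution below is randomly generated and purely illustrative; the choice $f(t)=t\log t$ recovers the relative entropy):

```python
import numpy as np

rng = np.random.default_rng(0)
M, Ny, L = 6, 4, 2                         # |X| = M, fixed list size L < M

P_XY = rng.random((M, Ny))
P_XY /= P_XY.sum()
P_Y = P_XY.sum(axis=0)
P_XgY = P_XY / P_Y                         # columns are P_{X|Y}(.|y)

def f(t):                                  # f(t) = t log t, so D_f = KL divergence
    return t * np.log(t) if t > 0 else 0.0

# E[ D_f( P_{X|Y}(.|Y) || U_M ) ], with D_f(P||Q) = sum_x Q(x) f(P(x)/Q(x))
lhs = sum(P_Y[y] * sum(f(M * P_XgY[x, y]) / M for x in range(M))
          for y in range(Ny))

# list decoder choosing the L most probable symbols given Y = y
P_err = sum(P_Y[y] * (1 - np.sort(P_XgY[:, y])[-L:].sum()) for y in range(Ny))

rhs = (L / M) * f(M * (1 - P_err) / L) + (1 - L / M) * f(M * P_err / (M - L))
print(lhs >= rhs)                          # the bound holds
```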
\begin{thm} \cite[(139)]{ISSV18} \label{corollary: Fano - list} Under the assumptions in Theorem~\ref{theorem: generalized Fano Df}, \begin{align} \label{ISSV18 - Fano} H(X|Y) \leq \log M - d\biggl(P_{\mathcal{L}} \, \| \, 1-\frac{L}{M} \biggr) \end{align} where $d(\cdot \| \cdot) \colon [0,1] \times [0,1] \to [0, +\infty]$ denotes the binary relative entropy, defined as the continuous extension of $D([p, 1-p] \| [q, 1-q]) := p \log \frac{p}{q} + (1-p) \log \frac{1-p}{1-q}$ for $p,q \in (0,1)$. \end{thm} \vspace*{0.2cm} The following refinement of the generalized Fano's inequality in Theorem~\ref{theorem: generalized Fano Df} relies on the version of the strong data-processing inequality for $f$-divergences in \cite[Theorem~1]{IS19}. \begin{thm} \label{theorem: refined Fano's inequality} Under the assumptions in Theorem~\ref{theorem: generalized Fano Df}, let $f \colon (0, \infty) \to \Reals$ be twice differentiable, and assume that there exists a constant $m_f>0$ such that \begin{align} \label{m_f} f''(t) \geq m_f, \quad \forall \, t \in \mathcal{I}(\xi_1^\ast, \xi_2^\ast), \end{align} where \begin{align} \label{28062019a1} & \xi_1^\ast := M \inf_{(x,y) \in \mathcal{X} \times \mathcal{Y}} P_{X|Y}(x|y), \\ \label{28062019a2} & \xi_2^\ast := M \sup_{(x,y) \in \mathcal{X} \times \mathcal{Y}} P_{X|Y}(x|y), \end{align} and the interval $\mathcal{I}(\cdot, \cdot)$ is the interval \begin{align} \label{I_interval} \mathcal{I} := \mathcal{I}(\xi_1, \xi_2) = [\xi_1, \xi_2] \cap (0, \infty). \end{align} Let $u^+ := \max\{u, 0\}$ for $u \in \Reals$. 
Then, \begin{enumerate}[a)] \item \label{Part a - refined Fano's inequality} \begin{align} \label{list dec.-26062019a} & \expectation\Bigl[D_f \bigl(P_{X|Y}(\cdot|Y) \, \| \, U_M \bigr) \Bigr] \\ & \geq \frac{L}{M} \; f\biggl(\frac{M \, (1-P_{\mathcal{L}})}{L} \biggr) + \left(1-\frac{L}{M}\right) \; f\biggl(\frac{M P_{\mathcal{L}}}{M-L} \biggr) \nonumber \\ & \hspace*{0.4cm} + \tfrac12 m_f \, M \left( \expectation\bigl[P_{X|Y}(X|Y)\bigr] -\frac{1-P_{\mathcal{L}}}{L} - \frac{P_{\mathcal{L}}}{M-L} \right)^+. \nonumber \end{align} \item \label{Part b - refined Fano's inequality} If the list decoder selects the $L$ most probable elements from $\mathcal{X}$, given the value of $Y \in \mathcal{Y}$, then \eqref{list dec.-26062019a} is strengthened to \begin{align} & \expectation\Bigl[D_f \bigl(P_{X|Y}(\cdot|Y) \, \| \, U_M \bigr) \Bigr] \nonumber \\ & \geq \frac{L}{M} \; f\biggl(\frac{M \, (1-P_{\mathcal{L}})}{L} \biggr) + \biggl(1-\frac{L}{M}\biggr) \; f\biggl(\frac{M P_{\mathcal{L}}}{M-L} \biggr) \nonumber \\ \label{list dec.-26062019b} & \hspace*{0.4cm} + \tfrac12 m_f \, M \left( \expectation\bigl[P_{X|Y}(X|Y)\bigr] -\frac{1-P_{\mathcal{L}}}{L} \right), \end{align} where the last term in the right side of \eqref{list dec.-26062019b} is necessarily non-negative. \end{enumerate} \end{thm} Discussions and numerical experimentation of these proposed bounds are provided in the paper by \cite{IS19}, showing the obtained improvement over Fano's inequality. \subsubsection{Variable-Size List Decoding} \label{subsubsection: variable-size list decoding} In the more general setting of list decoding where the size of the list may depend on the channel observation, Fano's inequality has been generalized as follows. \begin{thm} (\cite{AhlswedeK75} and \cite[Appendix~3.E]{RS_FnT19}) \label{prop: Fano-Ahlswede-Korner} Let $P_{XY}$ be a probability measure defined on $\mathcal{X} \times \mathcal{Y}$ with $|\mathcal{X}|=M$. 
Consider a decision rule $\mathcal{L} \colon \mathcal{Y} \to 2^{\mathcal{X}}$, and let the (average) list decoding error probability be given by $P_{\mathcal{L}} := \Prob \bigl[ X \notin \mathcal{L}(Y) \bigr]$ with $|\mathcal{L}(y)| \geq 1$ for all $y \in \mathcal{Y}$. Then, \begin{align} \label{Fano-Ahlswede-Korner 1} H(X|Y) \leq h(P_\mathcal{L}) + \expectation[\log |\mathcal{L}(Y)|] + P_{\mathcal{L}} \log M, \end{align} where $h \colon [0,1] \to [0, \log 2]$ denotes the binary entropy function. If $|\mathcal{L}(Y)| \leq N$ almost surely, then also \begin{align} \label{Fano-Ahlswede-Korner 2} H(X|Y) \leq h(P_\mathcal{L}) + (1-P_{\mathcal{L}}) \log N + P_{\mathcal{L}} \log M. \end{align} \end{thm} By relying on the data-processing inequality for $f$-divergences, we derive in the following an alternative explicit lower bound on the average list decoding error probability $P_{\mathcal{L}}$. The derivation relies on the $E_\gamma$ divergence (see, e.g., \cite{LCV17}), which forms a subclass of the $f$-divergences. \begin{thm} \label{theorem: LB - variable list size} Under the assumptions in Theorem~\ref{prop: Fano-Ahlswede-Korner}, for all $\gamma \geq 1$, \begin{align} \label{LB - variable list size} P_{\mathcal{L}} \geq \frac{1+\gamma}{2} - \frac{\gamma \expectation[|\mathcal{L}(Y)|]}{M} - \frac12 \, \expectation \left[ \, \sum_{x \in \mathcal{X}} \, \biggl| P_{X|Y}(x|Y) - \frac{\gamma}{M} \biggr| \right]. \end{align} Let $\gamma \geq 1$, and let $|\mathcal{L}(y)| \leq \frac{M}{\gamma}$ for all $y \in \mathcal{Y}$.
Then, \eqref{LB - variable list size} holds with equality if, for every $y \in \mathcal{Y}$, the list decoder selects the $|\mathcal{L}(y)|$ most probable elements in $\mathcal{X}$ given $Y=y$; if $x_\ell(y)$ denotes the $\ell$-th most probable element in $\mathcal{X}$ given $Y=y$, where ties in probabilities are resolved arbitrarily, then \eqref{LB - variable list size} holds with equality if \begin{align} & P_{X|Y}(x_\ell(y) \, | y) \nonumber \\ \label{02072019a19} &= \begin{dcases} \alpha(y), \quad & \forall \, \ell \in \bigl\{1, \ldots, |\mathcal{L}(y)| \bigr\}, \\ \frac{1-\alpha(y) \, |\mathcal{L}(y)|}{M-|\mathcal{L}(y)|}, \quad & \forall \, \ell \in \bigl\{|\mathcal{L}(y)|+1, \ldots, M\}, \end{dcases} \end{align} with $\alpha \colon \mathcal{Y} \to [0,1]$ being an arbitrary function which satisfies \begin{align} \label{02072019a20} \frac{\gamma}{M} \leq \alpha(y) \leq \frac1{|\mathcal{L}(y)|}, \quad \forall \, y \in \mathcal{Y}. \end{align} \end{thm} As an example, let $X$ and $Y$ be random variables taking their values in $\mathcal{X} = \{0, 1, 2, 3, 4\}$ and $\mathcal{Y} = \{0, 1\}$, respectively, and let $P_{XY}$ be their joint probability mass function, which is given by \begin{align} \label{03072019a1} \begin{dcases} & P_{XY}(0,0) = P_{XY}(1,0) = P_{XY}(2,0) = \tfrac18, \\[0.1cm] & P_{XY}(3,0) = P_{XY}(4,0) = \tfrac1{16}, \\[0.1cm] & P_{XY}(0,1) = P_{XY}(1,1) = P_{XY}(2,1) = \tfrac1{24}, \\[0.1cm] & P_{XY}(3,1) = P_{XY}(4,1) = \tfrac3{16}. \end{dcases} \end{align} Let $\mathcal{L}(0) := \{0,1,2\}$ and $\mathcal{L}(1) := \{3,4\}$ be the lists in $\mathcal{X}$, given the value of $Y \in \mathcal{Y}$. We get $P_Y(0) = P_Y(1) = \tfrac12$, so the conditional probability mass function of $X$ given $Y$ satisfies $P_{X|Y}(x|y) = 2 P_{XY}(x,y)$ for all $(x,y) \in \mathcal{X} \times \mathcal{Y}$. 
It can be verified that, if $\gamma = \tfrac54$, then $\max\{|\mathcal{L}(0)|, |\mathcal{L}(1)|\} = 3 \leq \frac{M}{\gamma}$, and also \eqref{02072019a19} and \eqref{02072019a20} are satisfied (here, $M:=|\mathcal{X}|=5$, $\alpha(0) = \tfrac14 = \frac{\gamma}{M}$ and $\alpha(1) = \tfrac38 \in \bigl[\tfrac14, \tfrac12\bigr]$). By Theorem~\ref{theorem: LB - variable list size}, it follows that \eqref{LB - variable list size} holds in this case with equality, and the list decoding error probability is equal to $P_{\mathcal{L}}=1-\expectation\bigl[ \alpha(Y) \, |\mathcal{L}(Y)| \bigr]=\tfrac14$ (i.e., it coincides with the lower bound in the right side of \eqref{LB - variable list size} with $\gamma = \tfrac54$). On the other hand, the generalized Fano's inequality in \eqref{Fano-Ahlswede-Korner 1} gives that $P_\mathcal{L} \geq 0.1206$ (the left side of \eqref{Fano-Ahlswede-Korner 1} is $H(X|Y) = \tfrac52 \, \log 2 - \tfrac14 \, \log 3 = 2.1038$~bits); moreover, by letting $N := \underset{y \in \mathcal{Y}}{\max} \, |\mathcal{L}(y)| = 3$, \eqref{Fano-Ahlswede-Korner 2} gives the looser bound $P_\mathcal{L} \geq 0.0939$. This exemplifies a case where the lower bound in Theorem~\ref{theorem: LB - variable list size} is tight, whereas the generalized Fano's inequalities in \eqref{Fano-Ahlswede-Korner 1} and \eqref{Fano-Ahlswede-Korner 2} are looser. \subsection{Lossless Source Coding} \label{subsubsection: lossless source coding} For uniquely-decodable (UD) source codes, \cite{Campbell65} proposed the cumulant generating function of the codeword lengths as a generalization to the frequently used design criterion of average code length. 
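The numbers in this example are easy to reproduce; the following sketch recomputes $P_{\mathcal{L}}$ and the lower bound in \eqref{LB - variable list size} for $\gamma=\tfrac54$:

```python
import numpy as np

# joint pmf of the example: rows are x in {0,...,4}, columns are y in {0,1}
P_XY = np.array([[1/8,  1/24],
                 [1/8,  1/24],
                 [1/8,  1/24],
                 [1/16, 3/16],
                 [1/16, 3/16]])
M, gamma = 5, 5/4
lists = {0: {0, 1, 2}, 1: {3, 4}}          # the lists L(0) and L(1)

P_Y = P_XY.sum(axis=0)                     # (1/2, 1/2)
P_XgY = P_XY / P_Y

# exact list decoding error probability
P_L = sum(P_Y[y] * (1 - sum(P_XgY[x, y] for x in lists[y])) for y in (0, 1))

# lower bound based on the E_gamma divergence
bound = ((1 + gamma) / 2
         - gamma * sum(P_Y[y] * len(lists[y]) for y in (0, 1)) / M
         - 0.5 * sum(P_Y[y] * np.abs(P_XgY[:, y] - gamma / M).sum()
                     for y in (0, 1)))
print(P_L, bound)                          # both equal 1/4
```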
The motivation in the paper by \cite{Campbell65} was to control the contribution of the longer codewords via a free parameter in the cumulant generating function: if the value of this parameter tends to zero, then the resulting design criterion becomes the average code length per source symbol; on the other hand, by increasing the value of the free parameter, the penalty for longer codewords is more severe, and the resulting code optimization yields a reduction in the fluctuations of the codeword lengths. We next state the coding theorem by \cite{Campbell65} for lossless compression of a discrete memoryless source (DMS) with UD codes, which serves as the basis for our analysis (see \cite{IS18}). \begin{thm} \label{theorem: Campbell} Consider a DMS which emits symbols with a probability mass function $P_X$ defined on a (finite or countably infinite) set $\mathcal{X}$. Consider a UD fixed-to-variable source code operating on source sequences of $k$ symbols with a codeword alphabet of size $D$. Let $\ell(x^k)$ be the length of the codeword which corresponds to the source sequence $x^k := (x_1, \ldots, x_k) \in \mathcal{X}^k$. Consider the scaled {\em cumulant generating function} of the codeword lengths: \begin{align} \label{eq: cumulant generating function} \Lambda_k(\rho) := \frac1{k} \, \log_D \left( \, \sum_{x^k \in \mathcal{X}^k} P_{X^k}(x^k) \, D^{\rho \, \ell(x^k)} \right), \quad \rho > 0 \end{align} where \begin{align} \label{eq: pmf} P_{X^k}(x^k) = \prod_{i=1}^k P_X(x_i), \quad \forall \, x^k \in \mathcal{X}^k. \end{align} Then, for every $\rho > 0$, the following hold: \begin{enumerate}[a)] \item Converse result: \begin{align} \label{eq: Campbell's converse result} \frac{\Lambda_k(\rho)}{\rho} \geq \frac{1}{\log D} \; H_{\frac1{1+\rho}}(X).
\end{align} \item Achievability result: there exists a UD source code, for which \begin{align} \label{eq: Campbell's achievability result} \frac{\Lambda_k(\rho)}{\rho} \leq \frac{1}{\log D} \; H_{\frac1{1+\rho}}(X) + \frac{1}{k}. \end{align} \end{enumerate} \end{thm} The bounds in Theorem~\ref{theorem: Campbell}, expressed in terms of the R\'{e}nyi entropy, imply that for sufficiently long source sequences, it is possible to make the scaled cumulant generating function of the codeword lengths approach the R\'{e}nyi entropy as closely as desired by a proper fixed-to-variable UD source code; moreover, the converse result shows that there is no UD source code for which the scaled cumulant generating function of its codeword lengths lies below the R\'{e}nyi entropy. By invoking L'H\^{o}pital's rule, one gets from \eqref{eq: cumulant generating function} \begin{align} \label{eq: limit rho tends to zero} \lim_{\rho \downarrow 0} \frac{\Lambda_k(\rho)}{\rho} = \frac1k \sum_{x^k \in \mathcal{X}^k} P_{X^k}(x^k) \, \ell(x^k) = \frac1k \, \expectation[\ell(X^k)]. \end{align} Hence, by letting $\rho$ tend to zero in \eqref{eq: Campbell's converse result} and \eqref{eq: Campbell's achievability result}, it follows that Campbell's result in Theorem~\ref{theorem: Campbell} generalizes the well-known bounds on the optimal average length of UD fixed-to-variable source codes: \begin{align} \label{eq: Shannon} \frac{1}{\log D} \; H(X) \leq \frac1k \; \expectation[\ell(X^k)] \leq \frac{1}{\log D} \; H(X) + \frac1k, \end{align} and \eqref{eq: Shannon} is satisfied by Huffman coding. Campbell's result therefore generalizes Shannon's fundamental result for the average codeword lengths of lossless compression codes, expressed in terms of the Shannon entropy. Following the work by \cite{Campbell65}, non-asymptotic bounds were derived by \cite{CV2014a} for the scaled cumulant generating function of the codeword lengths for $P_X$-optimal variable-length lossless codes. 
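Since the order $\frac1{1+\rho} \to 1$ as $\rho \downarrow 0$, the R\'{e}nyi entropy in \eqref{eq: Campbell's converse result} tends to the Shannon entropy, consistently with \eqref{eq: Shannon}. A small numerical illustration of this limit (the pmf below is an arbitrary choice):

```python
import math

def renyi(p, alpha):
    # Renyi entropy of order alpha, in nats; alpha -> 1 recovers Shannon
    if abs(alpha - 1.0) < 1e-12:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return math.log(sum(pi ** alpha for pi in p)) / (1 - alpha)

p = [0.5, 0.25, 0.125, 0.125]
shannon = renyi(p, 1.0)
for rho in (1.0, 0.1, 0.001):
    alpha = 1 / (1 + rho)
    print(rho, renyi(p, alpha))   # decreases toward the Shannon entropy
```

The printed values decrease toward the Shannon entropy because the R\'{e}nyi entropy is nonincreasing in its order, and here the order increases to $1$ as $\rho \downarrow 0$.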
These bounds were used by \cite{CV2014a} to obtain simple proofs of the asymptotic normality of the distribution of codeword lengths, and of the reliability function of memoryless sources with countably infinite alphabets. The analysis which leads to the following result for lossless source compression with UD codes is provided in the paper by \cite{IS18}. Let $X_1, \ldots, X_k$ be i.i.d. symbols which are emitted from a DMS according to a probability mass function $P_X$ whose support is a finite set $\mathcal{X}$ with $|\mathcal{X}|=n$. In order to cluster the data, suppose that each symbol $X_i$ is mapped to $Y_i = f(X_i)$ where $f \in \mathcal{F}_{n,m}$ is an arbitrary deterministic function (independent of the index $i$) with $m<n$. Consequently, the i.i.d. symbols $Y_1, \ldots, Y_k$ take values in a set $\mathcal{Y}$ with $|\mathcal{Y}|=m<|\mathcal{X}|$. Consider two UD fixed-to-variable source codes: one operating on the sequences $x^k \in \mathcal{X}^k$, and the other operating on the sequences $y^k \in \mathcal{Y}^k$; let $D$ be the size of the alphabets of both source codes. Let $\ell(x^k)$ and $\overline{\ell}(y^k)$ denote the lengths of the codewords for the source sequences $x^k$ and $y^k$, respectively, and let $\Lambda_k(\cdot)$ and $\overline{\Lambda}_k(\cdot)$ denote their corresponding scaled cumulant generating functions (see \eqref{eq: cumulant generating function}). Relying on our tight bounds on the R\'{e}nyi entropy (of any positive order) in \cite[Theorems~1, 2]{IS18} and Theorem~\ref{theorem: Campbell}, we obtain upper and lower bounds on $\frac{\Lambda_k(\rho) - \overline{\Lambda}_k(\rho)}{\rho}$ for all $\rho > 0$ (see \cite[Theorem~5]{IS18}).
To that end, for $m \in \{2, \ldots, n-1\}$, if $P_X(1) < \frac1m$, let $\widetilde{X}_m$ be the equiprobable random variable on $\{1, \ldots, m\}$; otherwise, if $P_X(1) \geq \frac1m$, let $\widetilde{X}_m \in \{1, \ldots, m\}$ be a random variable with the probability mass function \begin{align*} P_{\widetilde{X}_m}(i) = \begin{dcases} P_X(i), & i \in \{1, \ldots, n^\ast\}, \\ \frac1{m-n^\ast} \sum_{j = n^\ast+1}^n P_X(j), & i \in \{n^\ast+1, \ldots, m\}, \end{dcases} \end{align*} where $n^\ast$ is the maximal integer $i \in \{1, \ldots, m-1\}$ such that \begin{align} \label{eq: n ast} P_X(i) \geq \frac1{m-i} \sum_{j=i+1}^n P_X(j). \end{align} The result in \cite[Theorem~5]{IS18} is of interest since it provides upper and lower bounds on the reduction in the cumulant generating function of close-to-optimal UD source codes as a result of clustering data, and \cite[Remark~11]{IS18} suggests an algorithm to construct such UD codes which are also prefix codes. For long enough sequences (as $k \to \infty$), the upper and lower bounds on the difference between the scaled cumulant generating functions of the suggested source codes for the original and clustered data almost match, being roughly equal to $\rho \left( H_{\frac1{1+\rho}}(X)- H_{\frac1{1+\rho}}(\widetilde{X}_m) \right)$ (with logarithms on base $D$, which is the alphabet size of the source codes), and as $k \to \infty$, the gap between these upper and lower bounds is less than $0.08607 \log_D 2$. 
Furthermore, in view of \eqref{eq: limit rho tends to zero}, \begin{align} \lim_{\rho \downarrow 0} \frac{\Lambda_k(\rho) - \overline{\Lambda}_k(\rho)}{\rho} = \frac1k \left( \expectation[\ell(X^k)] - \expectation[\overline{\ell}(Y^k)] \right), \end{align} so, it follows from \cite[Theorem~5]{IS18} that the difference between the average code lengths (normalized by~$k$) of the original and clustered data satisfies \begin{align} - \frac1k & \leq \frac{\expectation[\ell(X^k)] - \expectation[\overline{\ell}(Y^k)]}{k} - \frac{H(X) - H(\widetilde{X}_m)}{\log D} \nonumber \\ \label{eq: 20181030e} & \leq 0.08607 \log_D 2, \end{align} and the gap between the upper and lower bounds is small.
TITLE: how to solve this differential equation $\tau\frac{dc}{dt} = -(1-\lambda)c(t) + \textbf{k}\cdot \textbf{h}(t)$ QUESTION [1 upvotes]: I haven't solved diff eq in a very long time and I'm having trouble with the following differential equation: $$\tau\frac{dc}{dt} = -(1-\lambda)c(t) + \textbf{k}\cdot \textbf{h}(t)$$ where $\textbf{k}$ and $\textbf{h}$ are vectors, and $\textbf{k}\cdot \textbf{h}(t)$ is the dot product of the two vectors. How do I separate $\textbf{h}$ from $\textbf{c}$ so I can integrate both sides? REPLY [1 votes]: You have to use the Duhamel formula, the scalar product is irrelevant. This will yield $$ c(t)=c(0)e^{-\frac{1-\lambda}{\tau}t}+\frac{1}{\tau}\int_0^tdt'e^{-\frac{1-\lambda}{\tau}(t-t')}{\bf k\cdot h}(t'). $$ This solution can be obtained with the following procedure (Duhamel formula). Rewrite the equation as $$ \frac{dc}{dt} = -\frac{1-\lambda}{\tau}c(t) + \frac{1}{\tau}\textbf{k}\cdot \textbf{h}(t). $$ Then note that, without the last term, the homogeneous equation has the solution $$ c_0(t)=c_0(0)e^{-\frac{1-\lambda}{\tau}t}. $$ So, you assume that your equation would have a solution given by $$ c(t)=u(t)e^{-\frac{1-\lambda}{\tau}t}. $$ Put this into the given equation and you are left with $$ \frac{du}{dt}=\frac{1}{\tau}\textbf{k}\cdot \textbf{h}(t)e^{\frac{1-\lambda}{\tau}t} $$ that is very easy to solve giving the final solution written at the beginning.
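The answer above can be checked numerically. The sketch below (with an arbitrary choice of $\tau$, $\lambda$, $\mathbf{k}$, and $\mathbf{h}$, purely for illustration) integrates the ODE with a classical Runge–Kutta step and compares the result with the Duhamel formula evaluated by quadrature:

```python
import math

tau, lam = 0.5, 0.2
k = (1.0, -2.0)                                  # example vector k
h = lambda t: (math.sin(t), math.cos(2 * t))     # example forcing h(t)
kh = lambda t: sum(ki * hi for ki, hi in zip(k, h(t)))

a = (1 - lam) / tau
c0, T, N = 1.0, 2.0, 20000
dt = T / N

def rhs(t, c):                                   # tau c' = -(1-lam) c + k.h(t)
    return (-(1 - lam) * c + kh(t)) / tau

# classical RK4 integration of the ODE
c = c0
for i in range(N):
    t = i * dt
    k1 = rhs(t, c)
    k2 = rhs(t + dt / 2, c + dt / 2 * k1)
    k3 = rhs(t + dt / 2, c + dt / 2 * k2)
    k4 = rhs(t + dt, c + dt * k3)
    c += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Duhamel formula: c(T) = c(0) e^{-aT} + (1/tau) int_0^T e^{-a(T-t')} k.h(t') dt'
integral = 0.0
for i in range(N):
    t0, t1 = i * dt, (i + 1) * dt
    integral += dt / 2 * (math.exp(-a * (T - t0)) * kh(t0)
                          + math.exp(-a * (T - t1)) * kh(t1))
duhamel = c0 * math.exp(-a * T) + integral / tau
print(abs(c - duhamel))                          # agreement to quadrature accuracy
```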
TITLE: Physical meaning of the sign basis in quantum mechanics QUESTION [5 upvotes]: If we take a hydrogen atom as qubit, let $\lvert0\rangle$ = unexcited state $\lvert1\rangle$ = excited state then what is the meaning of measuring the qubit value in the sign basis? If the atom may only be in excited or unexcited state, but $\lvert+\rangle$ and $\lvert-\rangle$ are superpositions of those states — then what would the outcome of the measurement be — also a superposition of $\lvert+\rangle$ and $\lvert-\rangle$? Can anyone please help to understand the idea behind the sign basis? REPLY [4 votes]: As the measurement postulate says, if you projectively measure a qubit, initially in a state $|\psi\rangle$, in the basis $\{|+\rangle,|-\rangle\}$, you will get the state $|+\rangle$ with probability $|\langle+|\psi\rangle|^2$, and similarly for $|-\rangle$. For the particular implementation you mention, a two-level atom whose eigenstates are the logical $|0\rangle,|1\rangle$ states, there is no general, useful, real physical quantity${}^1$ represented by the operator $$X=|0\rangle\langle1|+|1\rangle\langle0|$$ whose eigenstates are $|+\rangle$ and $|-\rangle$ (check it!). To do a projective measurement on that basis, the standard (though not necessarily unique) procedure is to apply a $\pi/2$ Rabi pulse which will bring $|+\rangle$ to $|0\rangle$ and $|-\rangle$ to $|1\rangle$, and measure in the computational basis. One can then apply an inverse pulse if needed. There are other implementations, however, where this basis has a more physical significance. For example, if your logical states are the up and down states of a spin-$\frac{1}{2}$ particle measured along the $z$ direction, then $X$ is the spin along the $x$ direction (which is no coincidence). ${}^1$ For any given atom, though, you can probably find detectable physical properties of interest. 
If, say, $|0\rangle$ is an $s$ state and $|1\rangle$ is a $p_z$ state, which may very well be the case, you'll find that the $|\pm\rangle$ states are localized towards either pole. A measurement of position above/below the $xy$ plane will closely approximate an $X$ measurement in most such circumstances. Similarly, a measurement of momentum going to positive or negative $z$ will approximate a measurement along $Y=i|0\rangle\langle1|-i|1\rangle\langle0|$, whose eigenstates $|\pm i\rangle=\frac{1}{\sqrt{2}}(|0\rangle\pm| i\rangle)$ look like $e^{\pm ikz}$ near the origin.
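A small sketch of the Born-rule computation and the rotate-then-measure trick described in this answer, using the Hadamard matrix as the idealized $\pi/2$ pulse (the preparation in $|0\rangle$ is an arbitrary illustrative choice):

```python
import numpy as np

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

psi = ket0                                # atom prepared in the unexcited state

# Born rule: the outcome is |+> or |->, never a superposition of the two
p_plus, p_minus = abs(plus @ psi) ** 2, abs(minus @ psi) ** 2
print(p_plus, p_minus)                    # both approximately 1/2

# measuring X in the sign basis = rotating |+> -> |0>, |-> -> |1>
# (here via the Hadamard matrix), then measuring in the computational basis
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
print(abs((H @ psi) @ ket0) ** 2)         # same probability in the rotated picture
```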
TITLE: Why the Feynman diagrams contributing to the effective action $\Gamma[\phi_{\rm cl}]$ are stripped/amputated/have no external lines? QUESTION [8 upvotes]: I am reading P&S Chapter 11 and specifically I am trying to understand the derivation of $\Gamma[\phi_{\rm cl}]$. All the algebra is okay, but I am failing to understand the connection to Feynman diagrams. I have also read Chapter 9 from Srednicki and the reply on this stack-exchange question, which I find very illuminating: Perturbation expansion of effective action My question, however, is: Why does the author say that the "the Feynman diagrams contributing to $\Gamma[\phi_{\rm cl}]$ have no external lines"? How can I understand that (pictorially or algebraically)? My guess is that it has something to do with the source term missing from the expression of the effective action, but I do not understand it that much. REPLY [1 votes]: Here is one argument: Recall that the 1PI effective/proper action$^1$ $$\Gamma[\phi_{\rm cl}]~=~W_c[J]-J_k \phi^k_{\rm cl} \tag{1} $$ is the Legendre transformation of the generator $W_c[J]$ of connected diagrams. We can recursively construct higher and higher $n$-point 1PI correlator functions $\Gamma_{n,k_1\ldots k_n}$ from pertinent combinations of connected $m$-point correlation functions $W_{c,m}^{k_1,\ldots k_m}$, where $m\leq n$, cf. e.g. my Phys.SE answer here. Notice that in this context the connected 2-point function $W_{c,2}^{k\ell}$ plays the role of an (inverse) metric that raises and lowers the DeWitt indices. The connected $m$-point correlation function $W_{c,m}^{k_1,\ldots k_m}$ has upper indices because it includes its external legs (which are attached to the sources $J_{k_1}\ldots J_{k_m}$ with lower indices). The $n$-point 1PI correlator function $\Gamma_{n,k_1\ldots k_n}$ has lower indices because its external legs are stripped/amputated. 
Instead it is attached to the classical fields $\phi_{\rm cl}^{k_1}\ldots \phi_{\rm cl}^{k_n}$ with upper indices in the effective action $\Gamma[\phi_{\rm cl}]$. Conversely, and perhaps more illuminating diagrammatically, the connected $m$-point correlation function $W_{c,m}^{k_1,\ldots k_m}$ is a sum of all possible trees made from connected propagators $W_{c,2}^{k\ell}$ and (amputated) 1PI vertices $\Gamma_{n,k_1\ldots k_n}$, where $n\leq m$, cf. e.g. this Phys.SE post. -- $^1$ We use DeWitt condensed notation to avoid cluttering the notation.
TITLE: Does this operator have $0$ as an eigenvalue / where is my error? QUESTION [3 upvotes]: I know of a theorem that tells me that every compact linear operator on an infinite-dimensional Hilbert space has to have the eigenvalue $0$. On the other hand I have the operator \begin{eqnarray*} & T:\ell^{2}\rightarrow\ell^{2}\\ & \left(x_{1},x_{2},\ldots\right)\mapsto\left(\lambda_{1}x_{1},\lambda_{2}x_{2},\ldots\right), \end{eqnarray*} where $\left(\lambda_{n}\right)_{n}$ is a sequence of real positive numbers, tending to $0$. Then this mapping can't have $0$ as an eigenvalue, since if that were the case, there would have to be a $\left(y_{1},y_{2},\ldots\right)\in\ell^{2}$ with not all $y_{n}$'s being zero, such that $\lambda_{n}y_{n}=0$ for all $n\in\mathbb{N}$. Since $\lambda_{n}\neq0$, that would imply that all $y_{n}$'s are zero, a contradiction. Where is my error? The operator $T$ is compact and $\ell^{2}$ is infinite-dimensional, so this should be a counterexample to the theorem above. REPLY [5 votes]: $0$ being in the spectrum means that $T$ isn't invertible, which in infinite-dimensional space no longer means that it's not injective. You should be able to show that $T$ isn't surjective.
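The hint can be made concrete with the illustrative choice $\lambda_n = 1/n$: the only candidate preimage of $y=(\lambda_1, \lambda_2, \ldots) \in \ell^2$ under $T$ is the constant sequence $(1,1,\ldots)$, which is not in $\ell^2$, so $T$ is not surjective. A quick numerical look at the two partial sums:

```python
import math

lam = lambda n: 1.0 / n            # a concrete choice with lam_n -> 0

N = 100000
# y = (lam_1, lam_2, ...) is in l^2: its squared norm converges to pi^2/6
norm_y_sq = sum(lam(n) ** 2 for n in range(1, N + 1))
# the only candidate preimage x = (1, 1, ...) is not in l^2: this diverges
norm_x_sq = sum(1.0 for n in range(1, N + 1))
print(norm_y_sq, norm_x_sq)        # bounded (~1.6449) vs. growing with N
```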
TITLE: What is $\limsup\limits_{n\to\infty} \cos (n)$, when $n$ is a natural number? QUESTION [3 upvotes]: I think the answer should be $1$, but am having some difficulties proving it. I can't seem to show that, for any $\delta > 0$, there are natural numbers $n$ and $k$ with $|n - 2\pi k| < \delta$. Is there another approach to this or is there something I'm missing? REPLY [1 votes]: You are on the right track. If $|n-2\pi k|<\delta$ then $|\frac{n}{k}-2\pi|<\frac \delta k$. So $\frac{n}{k}$ must be a "good" approximation for $2\pi$ to even have a chance. Then it depends on what you know about rational approximations of irrational numbers. Do you know about continued fractions?
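Numerically, the running maximum of $\cos(n)$ creeps toward $1$: good rational approximations of $2\pi$ (such as $\frac{710}{113}$, coming from the continued fraction of $\pi$) produce integers landing very close to multiples of $2\pi$:

```python
import math

# running maximum of cos(n): it approaches 1 along integers n that are
# close to multiples of 2*pi (e.g. n = 710 ~ 113 * 2*pi)
for N in (10, 1000, 100000):
    best = max(math.cos(n) for n in range(1, N + 1))
    print(N, best)
print(abs(710 - 113 * 2 * math.pi))     # ~6e-5, so cos(710) is almost 1
```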
TITLE: How can infinite sine waves localize to a single pulse in space? QUESTION [2 upvotes]: I have heard countless times (and not just when discussing Heisenberg's Uncertainty Principle) that making a short pulse using sine waves requires more and more sine waves to localize the pulse closer and closer, and for a pulse to be localized perfectly you would need infinitely many sine waves. Unfortunately, this explanation makes no mathematical sense to me. Sine waves cycle infinitely, no matter the phase, amplitude, or frequency. So, how can they destructively interfere everywhere except at one location? If there is any location that doesn't destructively interfere, you should see infinite other locations where it also doesn't interfere, presumably at some set spacing, much more like a square wave for computer timing than a single pulse. The waves are infinite and unchanging throughout space; how can infinitely repeating waves all add up to produce something at only one location? My only guess towards a plausible answer (as I've found no direct explanation of this, and in college they always used cyclic examples when discussing Fourier transforms) has me simply limiting the extents of each sine wave such that you only get one localized pulse (instead of infinite). However, creating arbitrary constraints to make the math give you only a single pulse seems wholly inconsistent with the typical physics explanations provided. Thank you for any guidance and understanding related to this topic. I debated if this should be in the math stackexchange (as it's most directly related to Fourier transforms), but because this comes up so often while discussing Heisenberg's Uncertainty Principle, and that is where I keep seeing it come up, I felt this was a more appropriate outlet for this question. REPLY [0 votes]: The Fourier transform of a rectangle is the sinc function.
Since the Fourier transform is simply a superposition of different sine waves (a cosine wave is a sine wave shifted by $\pi/2$), this is an example of your problem. Please go ahead and check out the derivation of the above-mentioned Fourier transform.
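A tiny numerical illustration of the mechanism discussed in this thread: averaging the cosines $\cos(kx)$ for $k=1,\dots,K$ gives height $1$ at $x=0$, while the bounded oscillating sum elsewhere gets divided by $K$, so the waveform concentrates near $x=0$ as $K$ grows. With integer frequencies the pulse does recur with period $2\pi$, which is exactly the asker's point, and why a truly isolated pulse needs a continuum of frequencies:

```python
import math

def pulse(x, K):
    # average of K cosine waves; at x = 0 every term contributes 1
    return sum(math.cos(k * x) for k in range(1, K + 1)) / K

for K in (5, 50, 500):
    print(K, pulse(0.0, K), pulse(1.0, K))   # peak stays 1, side value shrinks
```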
{"set_name": "stack_exchange", "score": 2, "question_id": 560127}
/- Copyright (c) 2022 Yaël Dillies. All rights reserved. Released under Apache 2.0 license as described in the file LICENSE. Authors: Yaël Dillies -/ import data.sum.order import order.locally_finite /-! # Finite intervals in a disjoint union This file provides the `locally_finite_order` instance for the disjoint sum of two orders. ## TODO Do the same for the lexicographic sum of orders. -/ open function sum namespace finset variables {α₁ α₂ β₁ β₂ γ₁ γ₂ : Type*} section sum_lift₂ variables (f f₁ g₁ : α₁ → β₁ → finset γ₁) (g f₂ g₂ : α₂ → β₂ → finset γ₂) /-- Lifts maps `α₁ → β₁ → finset γ₁` and `α₂ → β₂ → finset γ₂` to a map `α₁ ⊕ α₂ → β₁ ⊕ β₂ → finset (γ₁ ⊕ γ₂)`. Could be generalized to `alternative` functors if we can make sure to keep computability and universe polymorphism. -/ @[simp] def sum_lift₂ : Π (a : α₁ ⊕ α₂) (b : β₁ ⊕ β₂), finset (γ₁ ⊕ γ₂) | (inl a) (inl b) := (f a b).map embedding.inl | (inl a) (inr b) := ∅ | (inr a) (inl b) := ∅ | (inr a) (inr b) := (g a b).map embedding.inr variables {f f₁ g₁ g f₂ g₂} {a : α₁ ⊕ α₂} {b : β₁ ⊕ β₂} {c : γ₁ ⊕ γ₂} lemma mem_sum_lift₂ : c ∈ sum_lift₂ f g a b ↔ (∃ a₁ b₁ c₁, a = inl a₁ ∧ b = inl b₁ ∧ c = inl c₁ ∧ c₁ ∈ f a₁ b₁) ∨ ∃ a₂ b₂ c₂, a = inr a₂ ∧ b = inr b₂ ∧ c = inr c₂ ∧ c₂ ∈ g a₂ b₂ := begin split, { cases a; cases b, { rw [sum_lift₂, mem_map], rintro ⟨c, hc, rfl⟩, exact or.inl ⟨a, b, c, rfl, rfl, rfl, hc⟩ }, { refine λ h, (not_mem_empty _ h).elim }, { refine λ h, (not_mem_empty _ h).elim }, { rw [sum_lift₂, mem_map], rintro ⟨c, hc, rfl⟩, exact or.inr ⟨a, b, c, rfl, rfl, rfl, hc⟩ } }, { rintro (⟨a, b, c, rfl, rfl, rfl, h⟩ | ⟨a, b, c, rfl, rfl, rfl, h⟩); exact mem_map_of_mem _ h } end lemma inl_mem_sum_lift₂ {c₁ : γ₁} : inl c₁ ∈ sum_lift₂ f g a b ↔ ∃ a₁ b₁, a = inl a₁ ∧ b = inl b₁ ∧ c₁ ∈ f a₁ b₁ := begin rw [mem_sum_lift₂, or_iff_left], simp only [exists_and_distrib_left, exists_eq_left'], rintro ⟨_, _, c₂, _, _, h, _⟩, exact inl_ne_inr h, end lemma inr_mem_sum_lift₂ {c₂ : γ₂} : inr c₂ ∈ sum_lift₂ f g a b ↔ ∃ a₂ b₂, a 
= inr a₂ ∧ b = inr b₂ ∧ c₂ ∈ g a₂ b₂ := begin rw [mem_sum_lift₂, or_iff_right], simp only [exists_and_distrib_left, exists_eq_left'], rintro ⟨_, _, c₂, _, _, h, _⟩, exact inr_ne_inl h, end lemma sum_lift₂_eq_empty : (sum_lift₂ f g a b) = ∅ ↔ (∀ a₁ b₁, a = inl a₁ → b = inl b₁ → f a₁ b₁ = ∅) ∧ ∀ a₂ b₂, a = inr a₂ → b = inr b₂ → g a₂ b₂ = ∅ := begin refine ⟨λ h, _, λ h, _⟩, { split; { rintro a b rfl rfl, exact map_eq_empty.1 h } }, cases a; cases b, { exact map_eq_empty.2 (h.1 _ _ rfl rfl) }, { refl }, { refl }, { exact map_eq_empty.2 (h.2 _ _ rfl rfl) } end lemma sum_lift₂_nonempty : (sum_lift₂ f g a b).nonempty ↔ (∃ a₁ b₁, a = inl a₁ ∧ b = inl b₁ ∧ (f a₁ b₁).nonempty) ∨ ∃ a₂ b₂, a = inr a₂ ∧ b = inr b₂ ∧ (g a₂ b₂).nonempty := by simp [nonempty_iff_ne_empty, sum_lift₂_eq_empty, not_and_distrib] lemma sum_lift₂_mono (h₁ : ∀ a b, f₁ a b ⊆ g₁ a b) (h₂ : ∀ a b, f₂ a b ⊆ g₂ a b) : ∀ a b, sum_lift₂ f₁ f₂ a b ⊆ sum_lift₂ g₁ g₂ a b | (inl a) (inl b) := map_subset_map.2 (h₁ _ _) | (inl a) (inr b) := subset.rfl | (inr a) (inl b) := subset.rfl | (inr a) (inr b) := map_subset_map.2 (h₂ _ _) end sum_lift₂ end finset open finset function namespace sum variables {α β : Type*} /-! 
### Disjoint sum of orders -/ section disjoint variables [preorder α] [preorder β] [locally_finite_order α] [locally_finite_order β] instance : locally_finite_order (α ⊕ β) := { finset_Icc := sum_lift₂ Icc Icc, finset_Ico := sum_lift₂ Ico Ico, finset_Ioc := sum_lift₂ Ioc Ioc, finset_Ioo := sum_lift₂ Ioo Ioo, finset_mem_Icc := by rintro (a | a) (b | b) (x | x); simp, finset_mem_Ico := by rintro (a | a) (b | b) (x | x); simp, finset_mem_Ioc := by rintro (a | a) (b | b) (x | x); simp, finset_mem_Ioo := by rintro (a | a) (b | b) (x | x); simp } variables (a₁ a₂ : α) (b₁ b₂ : β) (a b : α ⊕ β) lemma Icc_inl_inl : Icc (inl a₁ : α ⊕ β) (inl a₂) = (Icc a₁ a₂).map embedding.inl := rfl lemma Ico_inl_inl : Ico (inl a₁ : α ⊕ β) (inl a₂) = (Ico a₁ a₂).map embedding.inl := rfl lemma Ioc_inl_inl : Ioc (inl a₁ : α ⊕ β) (inl a₂) = (Ioc a₁ a₂).map embedding.inl := rfl lemma Ioo_inl_inl : Ioo (inl a₁ : α ⊕ β) (inl a₂) = (Ioo a₁ a₂).map embedding.inl := rfl @[simp] lemma Icc_inl_inr : Icc (inl a₁) (inr b₂) = ∅ := rfl @[simp] lemma Ico_inl_inr : Ico (inl a₁) (inr b₂) = ∅ := rfl @[simp] lemma Ioc_inl_inr : Ioc (inl a₁) (inr b₂) = ∅ := rfl @[simp] lemma Ioo_inl_inr : Ioo (inl a₁) (inr b₂) = ∅ := rfl @[simp] lemma Icc_inr_inl : Icc (inr b₁) (inl a₂) = ∅ := rfl @[simp] lemma Ico_inr_inl : Ico (inr b₁) (inl a₂) = ∅ := rfl @[simp] lemma Ioc_inr_inl : Ioc (inr b₁) (inl a₂) = ∅ := rfl @[simp] lemma Ioo_inr_inl : Ioo (inr b₁) (inl a₂) = ∅ := rfl lemma Icc_inr_inr : Icc (inr b₁ : α ⊕ β) (inr b₂) = (Icc b₁ b₂).map embedding.inr := rfl lemma Ico_inr_inr : Ico (inr b₁ : α ⊕ β) (inr b₂) = (Ico b₁ b₂).map embedding.inr := rfl lemma Ioc_inr_inr : Ioc (inr b₁ : α ⊕ β) (inr b₂) = (Ioc b₁ b₂).map embedding.inr := rfl lemma Ioo_inr_inr : Ioo (inr b₁ : α ⊕ β) (inr b₂) = (Ioo b₁ b₂).map embedding.inr := rfl end disjoint end sum
{"subset_name": "curated", "file": "formal/lean/mathlib/data/sum/interval.lean"}
TITLE: Estimating Gaussian parameters of a set of data points QUESTION [2 upvotes]: I have a set of data points. When I draw a histogram of them, plotting their frequency of occurrence against them, I get a curve that looks like a normal curve. I am also able to perform a test on the data set to know whether it follows a normal distribution, or more precisely whether the population it comes from follows a normal probability distribution. I am using the Shapiro-Wilk test for it. However, how can I know what the equation of that normal curve will be? Moreover, is there a way I can test whether other standard distributions fit the points more accurately, and estimate their parameters? REPLY [1 votes]: You can estimate the parameters $\mu$ and $\sigma$ by using the statistics: $$\hat{\mu}=\bar{X}=\frac{1}{n}\sum X_i$$ and $$\hat{\sigma}^2=\frac{1}{n-1}\sum(X_i-\bar{X})^2$$ where $X_i$ is the $i$th sample element; thus $\bar{X}$ is the sample mean. So the equation of the fitted distribution would be: $$f(x)=\dfrac{1}{\sqrt{2\pi\hat{\sigma}^2}}e^{-\dfrac{(x-\hat{\mu})^2}{2\hat{\sigma}^2}}$$ You can use the Pearson chi-squared test to check the hypothesis that the data comes from the distribution being tested.
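As a quick sketch of these estimators in code (my own illustration; the synthetic sample and variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=10_000)  # synthetic sample

# Sample mean and the unbiased (1/(n-1)) sample variance from the answer.
mu_hat = data.mean()
sigma2_hat = data.var(ddof=1)

def fitted_pdf(x):
    # Normal density with the estimated parameters plugged in.
    return np.exp(-(x - mu_hat) ** 2 / (2.0 * sigma2_hat)) / np.sqrt(2.0 * np.pi * sigma2_hat)
```

With 10,000 points, `mu_hat` and `sigma2_hat` land close to the true values 5 and 4. For the tests mentioned in the thread, `scipy.stats` provides `shapiro` and `chisquare`.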
{"set_name": "stack_exchange", "score": 2, "question_id": 2735188}
TITLE: If $S$ is an $R$-algebra, how does $S$ being a finitely generated $R$-module imply that $S$ is a finite-type $R$-algebra? QUESTION [0 upvotes]: Let $R$ be a commutative ring. If $S$ is a finitely generated $R$-module, then there is an onto $R$-module homomorphism $$R^{\oplus n} \to S.$$ If $S$ is a finite-type $R$-algebra, then there is an onto $R$-algebra homomorphism $$R[x_1, \dots, x_k] \to S.$$ If $S$ is an $R$-algebra and a finitely generated $R$-module, then according to this wiki, $S$ is a finite-type $R$-algebra. I'm having trouble seeing this. How is the onto $R$-algebra hom defined? REPLY [2 votes]: Let $x_1, …, x_n$ be a generating system for $S$ as an $R$-module. Then the $R$-algebra morphism $$R[X_1,…, X_n] → S,~X_1 ↦ x_1,~…,~X_n ↦ x_n$$ is surjective, as its image certainly contains $⟨x_1, …, x_n⟩_R = S$.
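A concrete instance may help (my own example, not from the thread): take $R = \mathbb{Z}$ and $S = \mathbb{Z}[i]$, which is generated as a $\mathbb{Z}$-module by $1$ and $i$.

```latex
% The recipe in the answer, applied to R = \mathbb{Z}, S = \mathbb{Z}[i]:
% send one polynomial variable to each module generator.
\[
  \mathbb{Z}[X_1, X_2] \longrightarrow \mathbb{Z}[i],
  \qquad X_1 \longmapsto 1, \quad X_2 \longmapsto i.
\]
% The image is a subring containing \langle 1, i \rangle_{\mathbb{Z}}
% = \mathbb{Z}[i], so the morphism is surjective and S is a finite-type
% \mathbb{Z}-algebra. (Here X_1 is redundant, since 1 is always in the image.)
```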
{"set_name": "stack_exchange", "score": 0, "question_id": 3305081}
TITLE: History of the theory of equations: John Colson QUESTION [22 upvotes]: This is an EDITED version of my original question: Recently I've been interested in the history of the Theory of Equations. The thing is that I learned about a mathematician named John Colson, who published a very interesting paper, Aequationum Cubicarum & Biquadraticarum, tum Analytica, tum Geometrica & Mechanica, Resolutio Universalis, in the Philosophical Transactions. He was a contemporary of De Moivre, although De Moivre published his famous formula in 1722, also in the Philosophical Transactions (I asked a question about it before), and Colson published in 1706. Now, on to the paper. Remember, I'm not fluent in Latin, so there might be things that I miss. He presents his work with the three roots of a given universal cubic equation, & the way he presents them is by representing them as linear combinations of the roots of unity; first he shows with 7 examples that considering the roots as such works! After the examples, he actually shows how he derived that the roots can be written like that. I was thinking of writing down the whole procedure, but it is very long, and it is actually quite understandable from the paper. However, if you want to know my interpretation of a particular sentence, I'm pretty much done with the translation (just the cubic part); just remember I'm not fluent in Latin, & I probably made big mistakes in the interpretation.
I consider his work to be very original (different) for his time. I don't know when the use of roots of unity was introduced, probably before him, but he definitely gave them some play, since (almost) everything related to solving those equations seemed very geometrically based; i.e., a lot of the things people would use to solve the cubic, for example, were derived from geometric properties. We can see that he works the other way around; remember the name of the article: "Universal solution of the biquadratic and cubic equations, both analytical and geometrical and mechanical". From algebra he gets the geometry. I don't know if I'm making much sense, but let's see if this picture helps. I believe this is something Descartes said, I don't remember very well, but this was the way people of Colson's time used to think. What this represents is that if you have a problem in geometry, then you can represent it with algebra, and if you have a problem in algebra, then it belongs to a problem in geometry. I hope this helps to illustrate my point. (BTW, Galois, Abel and others later showed us that this is not true.) Now the point of my question is whether Colson could represent the start of the independence of algebra from geometry. If we look at the big picture, and since many of us were born in the XX century, we know how this is going to end, so would it be so naive on my part to consider Colson as this kind of hero? Who could be a better representative for this? Thanks! REPLY [2 votes]: Now the point of my question is, if Colson could represent the start of the independence of algebra from geometry. [...] Who could be a better representative for this? I would imagine Thomas Harriot would be a better representative than John Colson in terms of separating algebra from geometry. From page 490 of Jacqueline A. Stedall. (2000). "Rob’d of Glories: The Posthumous Misfortunes of Thomas Harriot and His Algebra," Archive for History of Exact Sciences, vol. 54, pp.
455–497: What should we now consider to be the ‘Improvements of Algebra to be found in Harriot’? The first and most obvious must be his notation: the use of lower case letters, with repetition to indicate multiplication, freed algebra for the first time from the geometrical connotations it had always previously carried. [...] Dispensing with geometrical baggage, however, led to more than just the simplification of notation: it also made possible Harriot’s second great achievement, the handling of equations at a purely symbolic level. If the achievement of Descartes was to show how algebra could be applied to geometry, the achievement of Harriot was to liberate algebra from geometry altogether, so that for the first time it could become truly a subject in its own right. [...] Harriot’s finest contribution, however, was ‘to treat of Algebra purely by itself, and from its own principles, without dependance on Geometry, or any connexion therewith’. [...] Harriot should be seen as the first to dispense entirely with geometric considerations, and as the first forerunner of modern abstract algebra. Harriot's Artis analyticae praxis (The Practice of the Analytic Art) was published in 1631, ten years after his death.
{"set_name": "stack_exchange", "score": 22, "question_id": 493016}
TITLE: why are motives more serious than "naive" motives? QUESTION [16 upvotes]: I know my question is a bit vague, sorry for this. Let $k$ be a field of characteristic zero. Consider the Grothendieck ring of varieties over $k$, usually denoted by $K_0(Var_k)$. This is generated by isomorphism classes of varieties over $k$ modulo the relations [X]=[Y]+[X-Y] whenever $Y$ is a closed subvariety of $X$. People usually refer to [X] as the "naive" motive of $X$. On the other hand, one has Voevodsky's "true" motives $DM_{gm}(k)$ (not as true as we would like, I know!) and to any variety $X$ we can attach an object $M(X)$ in $DM_{gm}(k)$. Why is this $M(X)$ more serious than the naive one? That is, can you give some examples of properties that cannot be read at the level of $K_0(Var_k)$ but that one sees when working in $DM_{gm}(k)$? REPLY [10 votes]: Note that the Grothendieck ring of varieties does, at least conjecturally, remember some information about varieties that the category of motives does not. Under the cut-and-paste conjecture, two varieties are equivalent in the Grothendieck ring if and only if they can both be decomposed into the same set of locally closed pieces. There are many pairs of varieties which have the same motive but cannot be cut and pasted into each other (like a $\mathbb P^2$ and a fake projective plane). So assuming the cut-and-paste conjecture, the Grothendieck group remembers a lot of extra information about the varieties. In particular, because the fundamental group is a birational invariant for smooth projective varieties, the Grothendieck group class tells you the fundamental group, which is a highly non-abelian invariant. So the category of motives loses some information that is contained in the Grothendieck group of varieties - in some sense, the non-abelian information. One reason that the category of motives is good to work with is that it simplifies things by getting rid of this additional structure.
So for tasks where that extra, non-abelian structure is not needed, the category of motives is the simpler, and in that sense more serious, object to work with.
{"set_name": "stack_exchange", "score": 16, "question_id": 186290}
TITLE: Observer in the double slit experiment with photons QUESTION [1 upvotes]: In the double slit experiment with photons, the interacting observer is an instrument, a detector… If you replace the detector with a piece of metal with the same mass as the detector, will the wave collapse? REPLY [1 votes]: In this experiment a switchable detection scheme is designed: Overall, the results suggest that the type of scattering an electron undergoes determines the mark it leaves on the back wall, and that a detector at one of the slits can change the type of scattering. The physicists concluded that, while elastically scattered electrons can cause an interference pattern, the inelastically scattered electrons do not contribute to the interference process. So, imo, a double slit experiment with detectors at the slits changes the boundary conditions, whether the detectors are active or not; at least that is how this more recent experiment can be interpreted.
{"set_name": "stack_exchange", "score": 1, "question_id": 247329}
\subsection{Complex surfaces} Throughout this paper, $X$ will denote a complex surface, by which we mean a connected compact complex manifold of complex dimension two. Usually $X$ will be rational. The book \cite{BHPV} is a good general reference for complex surfaces. Here we recount only needed facts. Given divisors, $D,D'$ on $X$, we will write $D\sim D'$ to denote linear equivalence, $D\leq D'$ if $D' = D + E$ where $E$ is an effective divisor and $D\lesssim D'$ if $D' - D$ is linearly equivalent to an effective divisor. By a \emph{curve} in $X$, we will mean a reduced effective divisor. We let $\pic(X)$ denote the Picard group on $X$, i.e. divisors modulo linear equivalence. We let $K_X\in\pic(X)$ denote the (class of) a canonical divisor on $X$, which is to say, the divisor of a meromorphic two form on $X$. Taking chern classes associates each element of $\pic(X)$ with a cohomology class in $H^{1,1}(X)\cap H^2(X,\Z)$. We will have need of the larger group $H^{1,1}_\R(X) \eqdef H^{1,1}(X)\cap H^2(X,\R)$. We call a class $\theta\in H^{1,1}_\R(X)$ \emph{nef} if $\theta^2 \geq 0$ and $\theta\cdot C\geq 0$ for any complex curve. We will repeatedly rely on the following consequence of the Hodge index theorem. \begin{thm} \label{hodgethm} If $\theta\in H_\R^{1,1}(X)$ is a non-trivial nef class, and $C$ is a curve, then $\theta\cdot C = 0$ implies that either \begin{itemize} \item the intersection form is negative when restricted to divisors supported on $C$; or \item $\theta^2=0$ and there exists an effective divisor $D$ supported on $C$ such that $D \sim t\theta$ for some $t>0$. \end{itemize} In particular if $\theta$ has positive self-intersection, then the intersection form is negative definite on $C$. \end{thm} \begin{proof} The hypotheses imply that $\theta\cdot D = 0$ for every divisor $D$ supported on $C$. Suppose that the intersection form restricted to $C$ is not negative definite. 
That is, there is a non-trivial divisor $D$ with $\supp D\subset C$ and $D^2 \geq 0$. Then we may write $D = D_+ - D_-$ as a difference of effective divisors supported on $C$ with no irreducible components in common. Since $D_+\cdot D_-\geq 0$, we have $$ 0\leq D^2 \leq D_+^2 + D_-^2, $$ so replacing $D$ with $D_+$ or $D_-$ allows us to assume that $D$ is effective. In particular, $D$ represents a non-trivial class in $H^{1,1}_{\R}(X)$. Since $D\cdot \theta = 0$ and $\theta^2,D^2 \geq 0$, we see that the intersection form is non-negative on the subspace of $H^{1,1}_\R(X)$ generated by $D$ and $\theta$. By Corollary 2.15 in \cite[page 143]{BHPV}, such a subspace must be one-dimensional. Thus $D = t\theta$ for some $t>0$. \end{proof} By the \emph{genus} $g(C)$ of a curve $C\subset X$, we will mean the quantity $1-\chi(\mathcal{O}_C)$, or equivalently, $1+h^0(K_C)$ minus the number of connected components of $C$. If $C$ is smooth and irreducible, then $g(C)$ is just the usual genus of $C$ as a Riemann surface. If $C$ is merely irreducible, then $g(C)$ is usually called the \emph{arithmetic genus} of $C$, and in this case it dominates the genus of the Riemann surface obtained by desingularizing $C$. If $C$ is connected then $g(C) \geq 0$, but our notion of genus is a bit non-standard in that we do not generally require connectedness of $C$ in what follows. For any curve $C$, connected or not, we have the following \emph{genus formula} \begin{equation} \label{genusformula} g(C) = \frac{C\cdot(C+K_X)}{2} + 1. \end{equation} \subsection{Birational maps} Now suppose that $Y$ is a second complex surface and $f:X\to Y$ is a birational map of $X$ onto $Y$. That is, $f$ maps some Zariski open subset of $X$ biholomorphically onto its image in $Y$. In general the complement of this subset will consist of a finite union of rational curves collapsed by $f$ to points, and a finite set $I(f)$ of points on which $f$ cannot be defined as a continuous map. 
We call the contracted curves \emph{exceptional} and the points in $I(f)$ \emph{indeterminate} for $f$. The birational inverse $f^{-1}:Y\to X$ of $f$ is obtained by inverting $f$ on the Zariski open set where $f$ acts biholomorphically. Note that what we call a birational map is perhaps more commonly called a birational correspondence, the former term often being understood to mean that $I(f)=\emptyset$. We adopt the following conventions concerning images of proper subvarieties of $X$. If $C\subset X$ is an irreducible curve, then $f(C)$ is defined to be $\overline{f(C-I(f))}$, which is a point if $C$ is exceptional for $f$ and a curve otherwise. If $p\in X$ is a point of indeterminacy, then $f(p)$ will denote the union of $f^{-1}$-exceptional curves that $f^{-1}$ maps to $p$. We apply the same conventions to images under $f^{-1}$. Our convention for the inverse image of an irreducible curve extends by linearity to a \emph{proper transform} action $f^\sharp D$ of $f$ on divisors $D$, provided we identify points with zero. We also have the \emph{total transform} action $f^* D$ of $f$ on divisors obtained by pulling back local defining functions for $D$ by $f$. Total transform has the advantage that it preserves linear equivalence and therefore descends to a linear map $f^*:\pic(Y)\to \pic(X)$. We denote the proper and total transform under $f^{-1}$ by $f_*$ and $f_\sharp$, respectively. In general, $f^* D - f^\sharp D$ is an effective divisor with support equal to a union of exceptional curves mapped by $f$ to points in $\supp D$. It will be important for us to be more precise about this point. To do so, we use the `graph' $\Gamma(f)$ of $f$ obtained by minimally desingularizing the variety $$ \overline{\{(x,f(x))\in X\times Y:x\notin I(f)\}}. $$ We let $\pi_1:\Gamma(f)\to X$, $\pi_2:\Gamma(f)\to Y$ denote projections onto first and second coordinates. 
Thus $\Gamma(f)$ is an irreducible complex surface and $\pi_1,\pi_2$ are \emph{proper modifications} of their respective targets, each holomorphic and birational and therefore each equal to a finite composition of point blowups. One sees readily that $f = \pi_2\circ \pi_1^{-1}$, and that the exceptional and indeterminacy sets of $f$ are the images under $\pi_1$ of the exceptional sets of $\pi_2$ and $\pi_1$, respectively. Given a decomposition $\sigma_n\circ\dots \circ\sigma_1$ of $\pi_2$ into point blowups, we let $E(\sigma_j)$ denote the center of the blowup $\sigma_j$ and $$ \hat E_j(f) = \sigma_1^*\dots \sigma_{j-1}^* E(\sigma_j),\quad E_j(f) = \pi_{1*}\hat E_j(f). $$ In particular, $\bigcup \supp E_j(f)$ is the exceptional set. We call the individual divisors $E_j(f)$ the \emph{exceptional components} of $f$ and call their sum $E(f)\eqdef \sum E_j(f)$ the \emph{exceptional divisor} of $f$. It should be noted that, as we have defined them, the exceptional components of $f$ are connected, but in general they are neither reduced nor irreducible. The following proposition assembles some further information about the exceptional components. These can be readily deduced from well-known facts about point blowups. We recall that the \emph{multiplicity} of a curve $C$ at a point $p$ is just the minimal multiplicity of the intersection of $C$ with an analytic disk meeting $C$ only at $p$. \begin{prop} \label{exceptional} Let $\sigma_j$, $E_j(f)$, and $E(f)$ be as above, and $C\subset X$ be a curve. \begin{itemize} \item $E(f) = (f^*\eta) - f^*(\eta)$ for any meromorphic two form $\eta$ on $X$ (here $(\eta)$ denotes the divisor of $\eta$). Less precisely, $E(f)\sim K_X - f^* K_X.$ \item $E_j(f)$ and $E_i(f)$ have irreducible components in common if and only if $f(E_j(f)) = f(E_i(f))$. If this is the case, then $i \leq j$ implies that $E_i(f)\leq E_j(f)$. 
\item The multiplicity with which an irreducible curve $E$ occurs in $E(f)$ is bounded above by a constant that depends only on the number of exceptional components $E_j(f)$ that include $E$. \item $f^* C - f^\sharp C = \sum c_j E_j(f)$ where $c_j$ is the multiplicity of $(\sigma_n\circ \dots\circ\sigma_{j+1})^\sharp(C)$ at the point $\sigma_j(E(\sigma_j))$. \item In particular, $c_j$ vanishes if $p_j \eqdef f(E_j(f)) \notin C$, $c_j\leq 1$ if $p_j$ is a smooth point of $C$, and $c_j>0$ if $p_j\in C$ and $E_j(f)$ is not dominated by any other exceptional component of $f$. \item Hence (in light of the 2nd and 5th items), $\supp f^* C - f^\sharp C = f^{-1}(C\cap I(f^{-1}))$. \end{itemize} \end{prop} We will also need the following elementary fact. \begin{lem} \label{sing1} Let $C\subset X$ be a curve such that no component of $C$ is exceptional for $f$. If $p\in C-I(f)$, then multiplicity of $f(C)$ at $f(p)$ is no smaller than that of $C$ at $p$. In particular, $f(p)$ is singular for $f(C)$ if $p$ is singular for $C$. \end{lem} \ignore{\begin{proof} If $f$ acts biholomorphically at $p$, the result is obvious. Otherwise $f$ is holomorphic near $p$ and decomposes locally as a composition of point blowups. Hence it suffices to verify the lemma for the case where $f$ is a point blowup. In this case, the result follows from the facts that the multiplicity of the intersection of $C$ with a generic smooth disk will be minimal and that the image of a generic smooth disk under a point blowup will be smooth. \end{proof}} \subsection{Classification of birational self-maps} Supposing that $f:X\self$ is a birational self-map, we now recall some additional information from \cite{DiFa01}. First of all, there are pullback and pushforward actions $f^*,f_*:H^{1,1}_\R (X)\self$ compatible with the total transforms $f^*,f_*:\pic(X)\self$. 
The actions are adjoint with respect to intersections, which is to say that \begin{equation} \label{adjoint} f^*\alpha \cdot \beta = \alpha\cdot f_*\beta, \end{equation} for all $\alpha,\beta\in H^{1,1}_\R(X)$. Less obviously, $f^{n*}$ is `intersection increasing', meaning $$ (f^{n*}\alpha)^2 \geq \alpha^2 $$ The \emph{first dynamical degree} of $f$ is the quantity $$ \lambda(f) := \lim_{n\to\infty} \norm{f^{n*}}^{1/n} \geq 1. $$ It is less clear than it might seem that $\lambda(f)$ is well-defined, as it can happen that $(f^n)^* \neq (f^*)^n$ for $n$ large enough. However, $\lambda(f)$ can be shown to be invariant under birational change of coordinate and one can take advantage of this to choose a good surface on which to work. \begin{thm} \label{asthm} The following are equivalent for a birational map $f:X\self$ on a complex surface. \begin{itemize} \item $(f^n)^* = (f^*)^n$ for all $n\in\Z$. \item $I(f^n)\cap I(f^{-n}) = \emptyset$ for all $n\in\N$. \item $f^n(C) \notin I(f)$ for any $f$-exceptional curve $C$. \item $f^{-n}(C)\notin I(f^{-1})$ for any $f^{-1}$ exceptional curve $C$. \end{itemize} By blowing up finitely many points in $X$, one can always arrange that these conditions are satisfied. \end{thm} We will call maps satisfying the equivalent conditions of this theorem \as (for \emph{algebraically} or \emph{analytically stable}). If $f$ is \as, then $\lambda = \lambda(f)$ is just the spectral radius of $f^*$. If $X$ is K\"ahler, then there is a nef class $\theta^+$ satisfying $$ f^* \theta^+ = \lambda\theta^+. $$ From \eqref{adjoint} we have that $\lambda(f^{-1}) = \lambda(f)$, so we let $\theta^-$ denote the corresponding class for $f^{-1}$. The following theorem summarizes many of the main results of \cite{DiFa01}, and we will rely heavily on it here. 
\begin{thm} \label{classthm} If $f:X\self$ is an \as birational map of a complex K\"ahler surface $X$ with $\lambda(f)=1$, then exactly one of the following is true (after contracting curves in $\supp E(f^n)$, if necessary). \begin{itemize} \item $\norm{f^{n*}}$ is bounded independent of $n$, and $f$ is an automorphism some iterate of which is isotopic to the identity. \item $\norm{f^{n*}} \sim n$ and $f$ preserves a rational fibration. In this case $\theta^+ = \theta^-$ is the class of a generic fiber. \item $\norm{f^{n*}} \sim n^2$ and $f$ is an automorphism preserving an elliptic fibration. Again $\theta^+=\theta^-$ is the class of a generic fiber. \end{itemize} If, on the other hand, $\lambda(f) > 1$, then $\theta^+\cdot \theta^- > 0$ and either $X$ is rational or $f$ is (up to contracting exceptional curves) an automorphism of a torus, an Enriques surface, or a K3 surface. \end{thm} \noindent We remark that the classes $\theta^\pm$ are unique up to positive multiples whenever $\norm{f^{n*}}$ is unbounded, and indeed under the unboundedness assumption, we have $$ \lim_{n\to\infty} \frac{f^{n*}\theta}{\norm{f^{n*}}} = c\theta^+ $$ for any K\"ahler class $\theta$ and some constant $c = c(\theta) > 0$. In what follows, we will largely ignore the case in which $\norm{f^{n*}}$ is bounded. After all, if some iterate of $f$ is the identity map, then every curve in $X$ will be $f$-invariant. To close this section, we recall a result from \cite{BeDi05a}, which we will use in section \ref{elliptic}. \begin{thm} \label{criticalthm} If $f:X\self$ is an \as birational map of a complex K\"ahler surface $X$ with $\lambda(f)>1$, then after contracting curves in $\supp E(f^n)$, we can arrange additionally that $\theta^+\cdot f(p) > 0$ for every $p\in I(f)$ and $\theta^-\cdot f^{-1}(p) > 0$ for every $p\in I(f^{-1})$. \end{thm}
{"config": "arxiv", "file": "math0505014/texfiles/background.tex"}
TITLE: Supremum and infimum on more general spaces QUESTION [0 upvotes]: What is the definition of supremum and infimum on more general spaces, say $\mathbb{R}^2$, $\mathbb{R}^n$? REPLY [1 votes]: Once we have a partially ordered set, we can define supremum and infimum in the same way we define them in $\mathbb{R}$: the supremum of a set is its least upper bound, if one exists. Now, we can put an order on $\mathbb{R}^n$ (e.g. the lexicographic order, or the componentwise product order). The question is whether or not that supremum/infimum exists.
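A sketch of one such order (my own illustration, with hypothetical helper names): under the componentwise "product" order on $\mathbb{R}^n$, every finite set of points does have a supremum, computed coordinate by coordinate.

```python
def leq(p, q):
    """Componentwise (product) partial order on R^n."""
    return all(a <= b for a, b in zip(p, q))

def supremum(points):
    """Least upper bound of a finite set of points in the product order."""
    return tuple(max(coords) for coords in zip(*points))

s = supremum([(1.0, 5.0), (3.0, 2.0)])
# s = (3.0, 5.0): it bounds both points, yet neither point bounds the other,
# so the supremum need not belong to the set, unlike in a total order.
```

Under the lexicographic order the situation is subtler: a set such as $\{(0, n) : n \in \mathbb{N}\}$ is bounded above by $(1, 0)$, but the set of upper bounds $\{(x, y) : x > 0\}$ has no least element, so no supremum exists. That existence question is exactly the one the answer raises.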
{"set_name": "stack_exchange", "score": 0, "question_id": 1764217}
In mathematics, in the field of harmonic analysis, an oscillatory integral operator is an integral operator of the form $$ T_\lambda u(x)=\int_{\R^n}e^{i\lambda S(x, y)} a(x, y) u(y)dy, \qquad x\in\R^m, \quad y\in\R^n, $$ where the function S(x,y) is called the phase of the operator and the function a(x,y) is called the symbol of the operator. $\lambda$ is a parameter. One often considers S(x,y) to be real-valued and smooth, and a(x,y) smooth and compactly supported. Usually one is interested in the behavior of $T_\lambda$ for large values of $\lambda$. Oscillatory integral operators often appear in many fields of mathematics (analysis, partial differential equations, integral geometry, number theory) and in physics. Properties of oscillatory integral operators have been studied by Elias Stein and his school. The following bound on the $L^2 \to L^2$ action of oscillatory integral operators (or $L^2 \to L^2$ operator norm) was obtained by Lars Hörmander in his paper on Fourier integral operators: Assume that $x, y \in \mathbf{R}^n$, $n \ge 1$. Let S(x,y) be real-valued and smooth, and let a(x,y) be smooth and compactly supported. If $\det_{j,k} \frac{\partial^2 S}{\partial x_j \partial y_k}(x,y)\ne 0$ everywhere on the support of a(x,y), then there is a constant $C$ such that $T_\lambda$, which is initially defined on smooth functions, extends to a continuous operator from $L^2(\mathbf{R}^n)$ to $L^2(\mathbf{R}^n)$, with the norm bounded by $C \lambda^{-n/2}$, for any $\lambda \ge 1$: $$ \|T_\lambda\|_{L^2(\mathbf{R}^n)\to L^2(\mathbf{R}^n)}\le C\lambda^{-n/2}. $$
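A rough numerical sanity check of the decay rate (my own sketch, not from the article: $n = 1$, the nondegenerate phase $S(x, y) = xy$, and a smooth compactly supported symbol chosen for convenience):

```python
import numpy as np

def op_norm(lam, n_grid=400):
    # Discretize T_lam u(x) = integral of exp(i*lam*x*y) a(x, y) u(y) dy
    # over [-1, 1], with phase S(x, y) = x*y (so the mixed Hessian is 1,
    # nonzero everywhere) and symbol a(x, y) = b(x) b(y).
    t = np.linspace(-1.0, 1.0, n_grid)
    dt = t[1] - t[0]
    b = np.cos(np.pi * t / 2.0) ** 4          # bump vanishing at the endpoints
    A = np.exp(1j * lam * np.outer(t, t)) * np.outer(b, b) * dt
    # With equal grid spacing in x and y, the discrete L^2 -> L^2 operator
    # norm is the largest singular value of the matrix A.
    return np.linalg.svd(A, compute_uv=False)[0]

ratio = op_norm(25.0) / op_norm(100.0)
# Hörmander's bound predicts decay like lam^{-1/2} for n = 1, so the ratio
# should come out near (100 / 25) ** 0.5 = 2.
```

This is only an illustration on a fixed grid, not a proof; refining the grid and increasing $\lambda$ makes the $\lambda^{-1/2}$ scaling sharper.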
{"config": "wiki", "file": "wikipedia/2464.txt"}
TITLE: Funny Time Dilation Relation QUESTION [0 upvotes]: Today I was curiously calculating/comparing the times of moving observers and the time recorded by a corresponding stationary observer using Einstein's time dilation equation as detailed in Special Relativity. I was just trying to see the relation between the time recorded by the moving observer compared to the stationary one at various fractions of the speed of light. I noticed that the dilation experienced by the stationary observer when 1 second passes for the moving observer is the same as the value I get when I divide the time passed for the stationary observer by the time passed for the moving observer at different spans of time (e.g. 1 hour and 1 year passed for the moving observer). I'll post a picture of my results below (an Excel table with example values for clarification). Can anyone tell me the connection between the ratio of time dilation of the moving observer and the stationary observer and the values I'm getting for the time dilation of 1 sec? REPLY [0 votes]: The ratio of $\Delta t_s$ to $\Delta t_r$ is always $$ \frac{ \Delta t_s }{ \Delta t_r } = \frac{1}{\sqrt{1-\frac{v^2}{c^2} } } = \gamma $$ This is completely independent of what $\Delta t_s$ or $\Delta t_r$ you choose. Also, if you choose $\Delta t_{r_1} = 1$, then $\Delta t_{s_1} = \gamma$. We must therefore have $$ \Delta t_{s_1} = \frac{ \Delta t_{s_2} }{ \Delta t_{r_2} } = \frac{ \Delta t_{s_3} }{ \Delta t_{r_3} } = \frac{1}{\sqrt{1-\frac{v^2}{c^2} } } = \gamma $$ as you are finding.
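The identity the asker noticed can be reproduced in a few lines (my own sketch; the 0.8c speed and the particular time spans are arbitrary choices):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def gamma(v):
    """Lorentz factor 1 / sqrt(1 - v^2 / c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

v = 0.8 * C

# Dilation of a single second of the moving observer's time:
one_second_dilated = gamma(v) * 1.0

# Ratios dt_s / dt_r over longer spans (an hour, a year of moving-observer time):
hour_ratio = (gamma(v) * 3600.0) / 3600.0
year_ratio = (gamma(v) * 3.156e7) / 3.156e7

# All three agree with gamma(v) up to rounding, which is the connection the
# question asks about: the ratio is independent of the time span chosen.
```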
TITLE: Show that this family of open sets forms a topology of pointwise convergence QUESTION [0 upvotes]: This is Exercise 22.12 (b) on page 185 of Elementary Analysis, second edition, written by Kenneth Ross. I searched the site for similar questions and found a couple that looked similar (this and particularly this), but neither of those questions match mine in scope, and the language and notation used are too advanced. I am about halfway through a book called "Elementary Analysis" and I think that is a reasonable description of my mathematical skill set in regards to this question. (In other words, I don't even understand real analysis very well, so I know essentially nothing about topology.) Background Let $S$ be a subset of $\mathbb{R}$. Let $C(S)$ be the set of all bounded continuous real-valued functions on $S$. For $f, g \in C(S)$, let $d(f, g) = \sup \{ \lvert f(x) - g(x) \rvert \colon x \in S \}$. This makes $\big( C(S), d \big)$ a metric space. Now I quote from Exercise 22.12 (b): 22.12 Consider a subset $\mathcal{E}$ of $C(S), S \subseteq \mathbb{R}$. For this exercise, we say a function $f_0$ in $\mathcal{E}$ is interior to $\mathcal{E}$ if there exists a finite subset $F$ of $S$ and an $\epsilon > 0$ such that $\{ f \in C(S) \colon \lvert f(x) - f_0 (x) \rvert < \epsilon \text{ for } x \in F \} \subseteq \mathcal{E}$. The set $\mathcal{E}$ is open if every function in $\mathcal{E}$ is interior to $\mathcal{E}$. (b) Show the family of open sets defined above forms a topology for $C(S)$. From Discussion 13.7 on page 87, I gather that I need to show $C(S)$ is open in $C(S)$. The empty set $\emptyset$ is open in $C(S)$. The union of any collection of open sets is open. The intersection of finitely many open sets is again an open set. My attempt Consider $f_0 \in C(S)$. We want to show that there exists a finite subset $F$ and $\epsilon > 0$ such that $\{ f \in C(S) \colon \lvert f(x) - f_0(x) \rvert < \epsilon \text{ for } x \in F \} \subseteq C(S)$. 
It seems to me that any $F$ and $\epsilon$ will suffice here. Consider $f_0 \in \emptyset$. (Bear with me.) We want to show that there exists a finite subset $F$ and $\epsilon > 0$ such that $\{ f \in C(S) \colon \lvert f(x) - f_0(x) \rvert < \epsilon \text{ for } x \in F \} \subseteq \emptyset$. Question This is nonsense as stated. How do I conclude that $\emptyset$ is open? I would say: there are no functions in $\emptyset$. So the set of functions in $C(S)$ that are within $\epsilon$ of some non-existent function on some finite set is empty. Therefore $\emptyset \subseteq \emptyset$. Is that right? Consider open sets $\mathcal{E}_1, \mathcal{E}_2, \mathcal{E}_3, \dots$ and $f_0 \in \bigcup_{i = 1}^{\infty} \mathcal{E}_i$. (We can presumably recover the finite case by letting $\mathcal{E}_n = \emptyset$ for all $n$ greater than some $N$.) We want to show that there exists a finite subset $F$ and $\epsilon > 0$ such that $\{ f \in C(S) \colon \lvert f(x) - f_0(x) \rvert < \epsilon \text{ for } x \in F \} \subseteq \bigcup_{i = 1}^{\infty} \mathcal{E}_i$. Consider any one $\mathcal{E}_i$ such that $f_0 \in \mathcal{E}_i$. (We know there is at least one such $\mathcal{E}_i$.) This $\mathcal{E}_i$ has a corresponding $F$ and $\epsilon$, call them $F_i$ and $\epsilon_i$. We can use $F_i$ and $\epsilon_i$ to show that the set above in curly brackets is a subset of $\mathcal{E}_i$, which means it's in the infinite union. Consider open sets $\mathcal{E}_1, \dots, \mathcal{E}_n$ and $f_0 \in \bigcap_{i = 1}^n \mathcal{E}_i$. We want to show that there exists a finite subset $F$ and $\epsilon > 0$ such that $\{ f \in C(S) \colon \lvert f(x) - f_0(x) \rvert < \epsilon \text{ for } x \in F \} \subseteq \bigcap_{i = 1}^n \mathcal{E}_i$. There exist $F_1, \dots, F_n$ and $\epsilon_1, \dots, \epsilon_n$ such that $\{ f \in C(S) \colon \lvert f(x) - f_0(x) \rvert < \epsilon_1 \text{ for } x \in F_1 \} \subseteq \mathcal{E}_1$. 
$\vdots$ $\{ f \in C(S) \colon \lvert f(x) - f_0(x) \rvert < \epsilon_n \text{ for } x \in F_n \} \subseteq \mathcal{E}_n$. It seems intuitively clear that $\epsilon = \min \{\epsilon_1, \dots, \epsilon_n \}$ will work. Question What do I do for $F$? I thought about using $F = \bigcap_{i = 1}^n F_i$ but I have a feeling that is wrong because I have no assurance the intersection contains anything. I thought about using $F = \bigcup_{i = 1}^n F_i$ but how do I know that any function $f$ will actually be within $\min \{\epsilon_1, \dots, \epsilon_n \}$ of $f_0$ over all elements of $\bigcup_{i = 1}^n F_i$? I guess worst case, only $f_0$ meets that criterion? In that case we could say $\{ f_0 \} \subseteq \bigcap_{i = 1}^n \mathcal{E}_i$. Final comment Sorry for such a long post. I am trying to be thorough and provide relevant background. I have no intuition for what I am doing in this exercise---I am just manipulating symbols. Is there any intuition for this topic that can be provided to someone at my mathematical level? REPLY [1 votes]: Your $(1)$ is fine. $(2)$ is vacuously true: there is no $f\in\varnothing$, so there is nothing to be verified. In $(3)$ you cannot assume that the collection of open sets is countable: the union of any collection of open sets must be open. Let $\Bbb E$ be a family of open sets, and let $\mathscr{U}=\bigcup\Bbb E$; we want to show that $\mathscr{U}$ is open. To that end let $f\in\mathscr{U}$; then there is an $\mathscr{E}\in\Bbb E$ such that $f\in\mathscr{E}$, which by definition means that there are a finite $F\subseteq S$ and $\epsilon>0$ such that $$\{g\in C(S):|g(x)-f(x)|<\epsilon\text{ for each }x\in F\}\subseteq\mathscr{E}\;.$$ But $\mathscr{E}\subseteq\mathscr{U}$, so $\{g\in C(S):|g(x)-f(x)|<\epsilon\text{ for each }x\in F\}\subseteq\mathscr{U}$, and therefore $\mathscr{U}$ is open. In $(4)$ your choice of $\epsilon$ is fine, but you want to take $F=\bigcup_{i=1}^nF_i$, the union of the finite sets $F_i$. 
Then for any $f\in C(S)$ and any $i\in\{1,\ldots,n\}$ you have $$\begin{align*} &\{f\in C(S):|f(x)-f_0(x)|<\epsilon\text{ for each }x\in F\}\tag{1}\\ \subseteq\;&\{f\in C(S):|f(x)-f_0(x)|<\epsilon_i\text{ for each }x\in F_i\}\tag{2}\\ \subseteq\;&\mathscr{E}_i\;, \end{align*}$$ which is exactly what you want. The point is that by making $\epsilon$ at least as small as each $\epsilon_i$ and making $F$ at least as big as each $F_i$, you are ensuring that the restrictions defining the set in $(1)$ are at least as strong as those in $(2)$ that are needed to ensure that all of the functions in the set that you’re defining are also in the set $\mathscr{E}_i$.
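The choice $\epsilon=\min\{\epsilon_1,\epsilon_2\}$, $F=F_1\cup F_2$ from the answer can be sanity-checked on finite samples. The snippet below is a sketch; the sample points, trial functions, and tolerances are all made-up assumptions, not part of the exercise:

```python
# Finite-sample check: membership in the combined basic set
# {g : |g(x) - f0(x)| < eps on F} implies membership in each original set.
F1, F2 = {0.2, 0.5}, {0.5, 0.9}         # assumed finite subsets of S
eps1, eps2 = 0.3, 0.1                   # assumed tolerances
F, eps = F1 | F2, min(eps1, eps2)       # the answer's combined choice

def in_basic(f, f0, pts, e):
    """Membership in {g in C(S) : |g(x) - f0(x)| < e for all x in pts}."""
    return all(abs(f(x) - f0(x)) < e for x in pts)

f0 = lambda x: x * x
# A few trial functions: small and large perturbations of f0.
trials = [lambda x, a=a: x * x + a for a in (-0.05, 0.0, 0.05, 0.2)]

# If f lies in the combined basic set, it lies in both original ones.
implication_holds = all(
    (not in_basic(f, f0, F, eps))
    or (in_basic(f, f0, F1, eps1) and in_basic(f, f0, F2, eps2))
    for f in trials
)
```

Of course the code only samples a few functions; the actual proof is the containment argument in the answer above, which holds for all of $C(S)$.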
\begin{document} \title{Partial Euler Characteristic, Normal Generations and the stable $D(2)$ problem} \author{Feng Ji and Shengkui Ye} \maketitle \begin{abstract} We obtain relations among normal generation of perfect groups, Swan's inequality involving the partial Euler characteristic, and deficiency of finite groups. The proof is based on the study of a stable version of Wall's $D(2)$ problem. Moreover, we prove that a finite 3-dimensional CW complex of cohomological dimension at most $2$ with fundamental group $G$ is homotopy equivalent to a 2-dimensional CW complex after wedging $n$ copies of the $2$-sphere $S^{2},$ where $n$ depends only on $G.$ \end{abstract} \section{Introduction} In this article, we study several classical problems in low-dimensional homotopy theory and group theory, focusing on the interplay among these problems. We start by describing Swan's problem. Let $G$ be a group and $\mathbb{Z}G$ be the group ring. Swan \cite{sw} defines the partial Euler characteristic $\mu _{n}(G)$ as follows. Let $F$ be a resolution \begin{equation*} \cdots \rightarrow F_{2}\rightarrow F_{1}\rightarrow F_{0}\rightarrow \mathbb{Z}\rightarrow 0 \end{equation*} of the trivial $\mathbb{Z}G$-module $\mathbb{Z}$ in which each $F_{i}$ is $\mathbb{Z}G$-free on $f_{i}$ generators. If \begin{equation*} f_{0},f_{1},f_{2},\cdots ,f_{n} \end{equation*} are finite, define \begin{equation*} \mu _{n}(F)=f_{n}-f_{n-1}+f_{n-2}-\cdots +(-1)^{n}f_{0}. \end{equation*} If there exists a resolution $F$ such that $\mu _{n}(F)$ is defined, we define $\mu _{n}(G)$ as the infimum of $\mu _{n}(F)$ over all such resolutions $F.$ We call the truncated free resolution \begin{equation*} F_{n}\rightarrow \ldots \rightarrow F_{1}\rightarrow F_{0}\rightarrow \mathbb{Z}\rightarrow 0 \end{equation*} an algebraic $n$-complex (following the terminology of Johnson \cite{Jo}). On the other hand, we have the following geometric counterpart in the case $n=2$.
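As a concrete illustration of these definitions (a standard example, not taken from this paper): for the cyclic group $G=\mathbb{Z}/m$ one may truncate the periodic free resolution \begin{equation*} \cdots \rightarrow \mathbb{Z}G\overset{N}{\rightarrow }\mathbb{Z}G\overset{x-1}{\rightarrow }\mathbb{Z}G\rightarrow \mathbb{Z}\rightarrow 0, \qquad N=1+x+\cdots +x^{m-1}, \end{equation*} so that $f_{0}=f_{1}=f_{2}=1$ and $\mu _{2}(F)=1-1+1=1$. Since the rational Betti numbers of a finite group are $b_{0}=1$ and $b_{1}=b_{2}=0$, Swan's lower bound via rational Betti numbers (Theorem 1.2 of \cite{sw}) gives $\mu _{2}(\mathbb{Z}/m)\geq b_{2}-b_{1}+b_{0}=1$, and hence $\mu _{2}(\mathbb{Z}/m)=1$.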
For a finitely presentable $G,$ the deficiency $\mathrm{def}(G)$ is the maximum of $d-k$ over all presentations $\langle g_{1},g_{2},\cdots ,g_{d}\mid r_{1},r_{2},\cdots ,r_{k}\rangle $ of $G.$ It is not hard to see that \begin{equation*} \mathrm{def}(G)\leq 1-\mu _{2}(G) \end{equation*} (\cite{sw}, Proposition (1)). However, Swan mentioned in \cite{sw} that \textquotedblleft the problem of determining when $\mathrm{def}(G) = 1-\mu _{2}(G)$ seems very difficult even if $G$ is a finite $p$-group\textquotedblright . \bigskip It is a well-known open problem, since the 1970s, whether every finitely generated perfect group can be normally generated by a single element. We formulate this problem in the following conjecture. \begin{conjecture}[Normal Generation Conjecture] \label{ng}Let $G$ be any finitely generated perfect group, i.e. $G=[G,G]$, the commutator subgroup of $G$. Then $G$ can be normally generated by a single element. \end{conjecture} \bigskip This conjecture is known to be true when $G$ is finite (see e.g. [\cite{lw}, 4.2]). For infinite groups, it is a long-standing open problem attributed to J. Wiegold (cf. [\cite{bm}, FP14] and [\cite{mk}, 5.52]). One of our results relates Wiegold's normal generation conjecture to Swan's inequality for the partial Euler characteristic, as follows. \begin{theorem} \label{th1}Assume that Conjecture \ref{ng} is true. Then \begin{equation*} \mathrm{def}(G)\geq -\mu _{2}(G) \end{equation*} for any finite group $G.$ \end{theorem} \bigskip The proof of Theorem \ref{th1} is based on the study of a stable version of the $D(2)$ problem (for details, see Section \ref{sec}). Recall that Wall's $D(2)$ problem asks whether a finite 3-dimensional CW complex $X$ of cohomological dimension $\leq 2$ is homotopy equivalent to a 2-dimensional CW complex.
A positive answer to this problem would imply a finite version of the Eilenberg-Ganea conjecture, which says that a group of cohomological dimension two has a $2$-dimensional classifying space. We obtain the following result, which shows that such a CW complex $X$ with a finite fundamental group is homotopy equivalent to a $2$-dimensional CW complex after wedging several copies of the sphere $S^{2}$. \begin{theorem} \label{th3}Let $G$ be a finite group. For a finite 3-dimensional CW complex $X$ of cohomological dimension at most $2$ with fundamental group $\pi _{1}(X)=G,$ the wedge $X\vee (S^{2})^{n}$ of $X$ and $n$ copies of $S^{2}$ is homotopy equivalent to a 2-dimensional CW complex, where $n$ depends only on $G.$ \end{theorem} \bigskip The integer $n$ in the previous theorem can be determined completely (see Theorem \ref{igia}). On the other hand, for groups of low geometric dimension, we have the following result, which confirms the equality of the partial Euler characteristic and the deficiency. \begin{theorem} \label{th2}Let $G$ be a group having a finite classifying space $\mathrm{B}G$ of dimension at most $2.$ Then $\mathrm{def}(G)=1-\mu _{2}(G).$ \end{theorem} Finally, we present an application (cf. Corollary \ref{sxia}) of these results to the Whitehead conjecture, which claims that any subcomplex of an aspherical $2$-complex is aspherical. \newline The article is organized as follows. In Section 2, we discuss the Quillen plus construction for 2-dimensional CW complexes. This motivates the stable Wall's $D(2)$ property discussed in Section 3. In the last section, Euler characteristics are studied for both finite groups and groups of low geometric dimension. \section{Quillen's plus construction of 2-dimensional CW complexes} Let $X$ be a CW complex with fundamental group $G$ and $P$ a perfect normal subgroup of $G$, i.e.
$P=[P,P].$ Quillen shows that there exists a CW complex $X_{P}^{+}$, whose fundamental group is $G/P,$ and an inclusion $f:X\rightarrow X_{P}^{+}$ such that \begin{equation*} H_{n}(X;f_{\ast }M)\cong H_{n}(X_{P}^{+};M) \end{equation*} for any integer $n$ and local coefficient system $M$ over $X_{P}^{+}.$ Here $X_{P}^{+}$ is called the plus-construction of $X$ with respect to $P$ and is unique up to homotopy equivalence. One of the main applications of the plus construction is to define higher algebraic $K$-theory. In general, the space $X_{P}^{+}$ is obtained from $X$ by attaching 2-cells and 3-cells. The following discussion shows that for certain 2-dimensional CW complexes $X,$ the Quillen plus construction is homotopy equivalent to a 2-dimensional CW complex. We need the following definition. \begin{definition} The cohomological dimension $\mathrm{cd}(X)$ of a CW complex $X$ is defined as the smallest integer $n$ ($\infty $ is allowed) such that $H^{m}(X;M)=0$ for any integer $m>n$ and any local coefficient system $M$. \end{definition} Clearly, an $n$-dimensional CW complex is of cohomological dimension $\leq n$. We start with a lemma showing a property enjoyed by any $3$-dimensional CW complex of cohomological dimension $2$. \begin{lemma} \label{2.2}Suppose that $X$ is a $3$-dimensional CW complex and $\tilde{X}$ is the universal cover of $X$. Let $C_{\ast }(\tilde{X})$ be the cellular chain complex of $\tilde{X}$. Then $X$ is of cohomological dimension $2$ if and only if $C_{3}(\tilde{X})$ is a direct summand of $C_{2}(\tilde{X})$ as $\mathbb{Z}\pi _{1}(X)$-modules. \end{lemma} This result is well known and implicit in the literature. We include a proof as we are unable to locate a suitable reference. \begin{proof} Suppose that $X$ is of cohomological dimension $2$, and consider the $\mathbb{Z}\pi _{1}(X)$-module $C_{3}(\tilde{X})$ as coefficients.
The condition that $H^{3}(X,C_{3}(\tilde{X}))=0$ implies that the identity map $C_{3}(\tilde{X})\rightarrow C_{3}(\tilde{X})$ factors through $C_{2}(\tilde{X})$. As all these are free $\mathbb{Z}\pi _{1}(X)$-modules, $C_{3}(\tilde{X})$ is a direct summand of $C_{2}(\tilde{X})$. Conversely, if $C_{3}(\tilde{X})$ is a direct summand of $C_{2}(\tilde{X})$, then any homomorphism from $C_{3}(\tilde{X})$ to a $\mathbb{Z}\pi _{1}(X)$-module factors through $C_{2}(\tilde{X})$. \end{proof} \begin{theorem} \label{lxba} Let $X$ be a finite 2-dimensional CW complex. Suppose that a perfect normal subgroup $P$ in $\pi _{1}(X)$ is normally generated by $n$ elements. Then the plus construction $(X\vee (S^{2})^{n})^{+}$, taken with respect to $P$, of the wedge of $X$ and $n$ copies of $S^{2}$ is homotopy equivalent to the $2$-skeleton of $X^{+},$ which is a 2-dimensional CW complex. \end{theorem} \begin{proof} It is not hard to see that the plus construction $(X\vee (S^{2})^{n})^{+}$ is the same as the wedge $X^{+}\vee (S^{2})^{n}.$ Indeed, a simple calculation of the homology groups (for any local coefficient system) shows that $X^{+}\vee (S^{2})^{n}$ satisfies the defining properties of $(X\vee (S^{2})^{n})^{+}$. Denote by $Y$ the complex $X^{+}\vee (S^{2})^{n}.$ Consider the cellular chain complex $C_{\ast }(\tilde{Y})$ of the universal cover $\tilde{Y}.$ By the process of Quillen's plus construction (cf. the proof of Theorem 1 in \cite{ye}), we see that the number of attached $3$-cells and the number of attached $2$-cells are both $n$. Since $X$ is 2-dimensional, so is $X\vee (S^{2})^{n}$. As the plus construction does not change homology groups (for any local coefficient system), $Y$ is of cohomological dimension $2.$ This implies that $C_{3}(\tilde{Y})\cong \mathbb{Z}\pi _{1}(Y)^{n}$ is isomorphic to a direct summand of $C_{2}(\tilde{X}^{+})$ by the previous lemma.
Moreover, $C_{\ast }(\tilde{Y})$ is chain homotopy equivalent to the following chain complex \begin{equation*} 0\rightarrow C_{2}(\tilde{Y})/C_{3}(\tilde{Y})\rightarrow C_{1}(\tilde{Y})(=C_{1}(\tilde{X}^{+}))\rightarrow \mathbb{Z}\pi _{1}(Y)(=\mathbb{Z}\pi _{1}(X^{+}))\rightarrow \mathbb{Z}\rightarrow 0. \qquad (\ast ) \end{equation*} Notice that $C_{2}(\tilde{Y})$ is isomorphic to $C_{2}(\tilde{X}^{+})\bigoplus \mathbb{Z}\pi _{1}(Y)^{n}.$ Therefore, there is an isomorphism \begin{equation*} C_{2}(\tilde{Y})/C_{3}(\tilde{Y})\cong C_{2}(\tilde{X}^{+}). \end{equation*} This gives a chain homotopy equivalence from $(\ast )$ to the chain complex of the universal cover of the $2$-skeleton of $X^{+}.$ By the following lemma, we see that $Y$ is homotopy equivalent to the $2$-skeleton of $X^{+}.$ \end{proof} \begin{lemma}[Johnson \protect\cite{Jo}, Mannan \protect\cite{ma1}] \label{ge}Let $Y$ be a finite 3-dimensional CW complex of cohomological dimension 2. If the chain complex \begin{equation*} 0\rightarrow C_{2}(\tilde{Y})/C_{3}(\tilde{Y})\rightarrow C_{1}(\tilde{Y})\rightarrow \mathbb{Z}\pi _{1}(Y)\rightarrow \mathbb{Z}\rightarrow 0 \end{equation*} is homotopy equivalent to the chain complex of a $2$-dimensional CW complex $X$, then $Y$ is homotopy equivalent to $X$. \end{lemma} \section{Wall's D(2) problem and its stable version\label{sec}} In this section, we apply the results obtained in the previous section to the $D(2)$ problem. Let us recall the $D(2)$ problem raised in \cite{wa}. \begin{conjecture} (The $D(2)$ problem) If $X$ is a finite 3-dimensional CW complex of cohomological dimension $\leq 2$, then $X$ is homotopy equivalent to a 2-dimensional CW complex. \end{conjecture} In \cite{Jo}, Johnson proposes to systematically study the problem by categorizing 3-dimensional CW complexes according to their fundamental groups.
For a finitely presentable group $G$, we say the $D(2)$ problem is true for $G$ if any finite 3-dimensional CW complex $X,$ of cohomological dimension $\leq 2$ with fundamental group $\pi _{1}(X)=G,$ is homotopy equivalent to a 2-dimensional CW complex. The $D(2)$ problem is very difficult in general, and it is known to be true only for a limited class of groups (\cite{Ed}, \cite{Ma}). We propose the following stable version, in which one is allowed to wedge on copies of $S^{2}$. \begin{conjecture} (The $D(2,n)$ problem) For a finitely presentable group $G$ and $n\geq 0$, we say that the $D(2,n)$ problem holds for $G$ if the following is true. If $X$ is a finite 3-dimensional CW complex of cohomological dimension $\leq 2$ with fundamental group $\pi _{1}(X)=G$, then $X\vee (S^{2})^{n}$ is homotopy equivalent to a 2-dimensional CW complex. \end{conjecture} It is immediate that $D(2)$ implies $D(2,n)$ and $D(2,n)$ implies $D(2,n+1)$ for any group $G$ and any integer $n\geq 0$. The $D(2,0)$ problem is the original $D(2)$ problem. We first consider CW complexes with finite fundamental groups. In order to relate the problem to the previous section, we record the following observation due to Mannan. \begin{lemma}[Mannan \protect\cite{ma}] \label{ma}A finite 3-dimensional CW complex $X$ of cohomological dimension 2 is a Quillen plus construction of some 2-dimensional complex $Y.$ \end{lemma} Let $G$ be a finitely generated perfect group. It is conjectured that $G$ can be normally generated by one element (cf. Conjecture \ref{ng}). Assuming Conjecture \ref{ng} holds, we have the following. \begin{theorem} \label{thq}Let $X$ be a finite 3-dimensional complex of cohomological dimension 2 with $\pi _{1}(X)$ finite. Suppose that Conjecture \ref{ng} holds. Then $X\vee S^{2}$ is homotopy equivalent to a 2-complex, i.e. the $D(2,1)$ problem holds for any finite group.
\end{theorem} \begin{proof} By Lemma \ref{ma}, $X$ is the plus construction of a finite 2-complex $Z$ with respect to a perfect normal subgroup $P\leq \pi _{1}(Z).$ Therefore we have a short exact sequence of groups \begin{equation*} 1\rightarrow P\rightarrow \pi _{1}(Z)\rightarrow \pi _{1}(X)\rightarrow 1. \end{equation*} Since $\pi _{1}(Z)/P=\pi _{1}(X)$ is finite and $Z$ is finite, we claim that $P$ is finitely generated. To see this, note that as $\pi _{1}(X)$ is finite, the covering space of $Z$ with fundamental group $P$ is again a finite CW complex. Hence $P$ is finitely generated. If the normal generation conjecture holds, $P$ is normally generated by a single element. Theorem \ref{lxba} of the previous section then says that $X\vee S^{2}$ is homotopy equivalent to a 2-dimensional CW complex. \end{proof} \bigskip We now study the relation between stabilization by \textquotedblleft wedging\textquotedblright\ copies of $S^{2}$ and stabilization by \textquotedblleft attaching\textquotedblright\ 3-cells. \begin{proposition} Suppose that $X$ is a finite 3-dimensional CW complex of cohomological dimension $\leq 2$. Then $X\vee (S^{2})^{n}$ is homotopy equivalent to a finite 2-dimensional CW complex if and only if $X$ is homotopy equivalent to a 3-dimensional CW complex with $n$ 3-cells. \end{proposition} \begin{proof} Assume that $X$ is homotopy equivalent to a 3-dimensional CW complex with $n$ 3-cells. By Lemma \ref{ma}, $X$ is the plus construction of a 2-complex $Y$ with respect to a perfect normal subgroup $K$ of $\pi _{1}(Y)$. We thus have a short exact sequence \begin{equation*} 1\rightarrow K\rightarrow \pi _{1}(Y)\rightarrow \pi _{1}(X)\rightarrow 1. \end{equation*} Moreover, $K$ is normally generated by $n$ elements. Therefore $X\vee (S^{2})^{n}$ is homotopy equivalent to a 2-dimensional CW complex by Theorem \ref{lxba}. Conversely, suppose that $X\vee (S^{2})^{n}$ is homotopy equivalent to a finite 2-complex $Y$ via $f:X\vee (S^{2})^{n}\rightarrow Y$.
It is clear that \begin{equation*} \pi _{1}(X)=\pi _{1}(X\vee (S^{2})^{n})\cong \pi _{1}(Y). \end{equation*} Let $G=\pi _{1}(X)$ and $\tilde{X},\tilde{Y}$ be the universal covers of $X,Y$ respectively. We proceed as follows. By the Hurewicz theorem, we have isomorphisms \begin{equation*} \pi _{2}(Y)\cong \pi _{2}(\tilde{Y})\cong H_{2}(\tilde{Y})\cong \pi _{2}(\tilde{X})\oplus \mathbb{Z}G^{n}. \end{equation*} Therefore, there are $n$ maps $f_{i}:S^{2}\rightarrow Y,1\leq i\leq n,$ corresponding to the following inclusion onto the second factor (for a fixed basis of $\mathbb{Z}G^{n}$) \begin{equation*} \mathbb{Z}G^{n}\rightarrow H_{2}(\tilde{Y})\cong \pi _{2}(\tilde{X})\oplus \mathbb{Z}G^{n}. \end{equation*} Using these $f_{i},1\leq i\leq n,$ as the attaching maps, we obtain a 3-dimensional CW complex $Y\cup _{f_{i},1\leq i\leq n}e_{3}^{n}$. Let $i:X\rightarrow X\vee (S^{2})^{n}$ be the natural inclusion. By our construction, the canonical composition \begin{equation*} f^{\prime }:X\overset{i}{\rightarrow }X\vee (S^{2})^{n}\overset{f}{\rightarrow }Y\rightarrow Y\cup _{f_{i},1\leq i\leq n}e_{3}^{n} \end{equation*} induces isomorphisms on both $\pi _{1}$ and $\pi _{2}$ (the latter being the second homology groups of the universal covers). It is not hard to see that $H_{3}(\tilde{X})=H_{3}(\widetilde{Y\cup _{f_{i},1\leq i\leq n}e_{3}^{n}})=0.$ Therefore, $f^{\prime }$ induces a homotopy equivalence between the chain complexes of the universal covering spaces. By the Whitehead theorem, $f^{\prime }$ is a homotopy equivalence. \end{proof} \bigskip Combining this proposition with Theorem \ref{thq}, we obtain the following immediately. \begin{corollary} \label{sagg} Suppose that a group $G$ satisfies $D(2,n)$ for some $n\geq 0$. Then any finite 3-dimensional CW complex of cohomological dimension 2 with fundamental group $G$ is homotopy equivalent to a complex with at most $n$ 3-cells.
In particular, if Conjecture \ref{ng} holds, then any finite 3-dimensional CW complex of cohomological dimension 2 with finite fundamental group is homotopy equivalent to a complex with at most a single 3-cell. \end{corollary} We remark that, in fact, we shall see in the next section that each finite group $G$ satisfies $D(2,n)$ for some $n$, even if we do not assume the normal generation conjecture. Other examples of groups satisfying $D(2,n)$ for some $n$ include groups with both cohomological dimension $\leq 2$ and the cancellation property. \section{Partial Euler characteristic of groups and $(G,n)$-complexes} Recall the definitions of $\mu _{n}(F)$ for an algebraic $n$-complex $F_{\ast }$ and of $\mu _{n}(G)$ from the Introduction. For a finitely presentable group $G$, we start with the following lemma. It follows easily from Swan \cite{sw}, although it is not stated there explicitly. \begin{lemma} \label{agif} Assume that $G$ is finitely presentable. The invariant $\mu _{2}(G)$ can be realized by an algebraic 2-complex. In other words, there exists an algebraic 2-complex \begin{equation*} F_{2}\rightarrow F_{1}\rightarrow F_{0}\rightarrow \mathbb{Z}\rightarrow 0 \end{equation*} such that \begin{equation*} \mu _{2}(G)=\dim _{\mathbb{Z}G}F_{2}+\dim _{\mathbb{Z}G}F_{0}-\dim _{\mathbb{Z}G}F_{1}. \end{equation*} \end{lemma} \begin{proof} Let $M = \mathbb{Q}$ in Theorem 1.2 of \cite{sw}. As $G$ is finitely presentable, all the Betti numbers are finite for $n\leq 2$. Therefore Theorem 1.2 of \cite{sw} asserts that $\mu_2(G)$ is finite and hence realizable by an algebraic 2-complex. \end{proof} \begin{lemma}[Johnson \protect\cite{Jo} Theorem 60.2] \label{ea2c} Every algebraic 2-complex is geometrically realizable by a $3$-dimensional CW complex.
In other words, for every algebraic 2-complex \begin{equation*} (F_{\ast }):F_{2}\rightarrow F_{1}\rightarrow F_{0}\rightarrow \mathbb{Z}\rightarrow 0 \end{equation*} over a finitely presentable group $G,$ there is a finite 3-dimensional CW complex $Y$ of cohomological dimension 2 such that the reduced chain complex \begin{equation*} C_{2}(\tilde{Y})/C_{3}(\tilde{Y})\rightarrow C_{1}(\tilde{Y})\rightarrow \mathbb{Z}\pi _{1}(Y)\rightarrow \mathbb{Z}\rightarrow 0 \end{equation*} is homotopy equivalent to $(F_{\ast }).$ \end{lemma} \begin{proposition} If a finitely presentable group $G$ satisfies the $D(2,n)$ problem, then $\mathrm{def}(G)\geq (1-n)-\mu _{2}(G)$. \end{proposition} Theorem \ref{th1} follows immediately from this proposition in view of Theorem \ref{thq}. \begin{proof} By Lemma \ref{agif}, we can choose an algebraic 2-complex \begin{equation*} F_{2}\rightarrow F_{1}\rightarrow F_{0}\rightarrow \mathbb{Z}\rightarrow 0 \end{equation*} such that \begin{equation*} \mu _{2}(G)=\dim _{\mathbb{Z}G}F_{2}+\dim _{\mathbb{Z}G}F_{0}-\dim _{\mathbb{Z}G}F_{1}. \end{equation*} By Lemma \ref{ea2c}, there is a finite 3-dimensional CW complex $Y$ of cohomological dimension 2 such that the reduced chain complex \begin{equation*} 0\rightarrow C_{3}(\tilde{Y})\rightarrow C_{2}(\tilde{Y})\rightarrow C_{1}(\tilde{Y})\rightarrow \mathbb{Z}\pi _{1}(Y)\rightarrow \mathbb{Z}\rightarrow 0 \end{equation*} is homotopy equivalent to $(F_{\ast }).$ Assuming that $G$ satisfies the $D(2,n)$ problem, the wedge $Y\vee (S^{2})^{n}$ is homotopy equivalent to a 2-dimensional CW complex, which gives a presentation of $G.$ This implies that $\mu _{2}(G)+n\geq 1-\mathrm{def}(G),$ i.e. $\mathrm{def}(G)\geq (1-n)-\mu _{2}(G).$ \end{proof} \bigskip It is possible to place $\mu _{2}(G)$ in a broader setting following \cite{Ha}. \begin{definition} We define a $(G,n)$-complex as a finite $n$-dimensional CW complex with fundamental group $G$ and vanishing higher homotopy groups up to dimension $n-1$.
\end{definition} In particular, a $(G,2)$-complex is a usual finite 2-dimensional CW complex. \begin{definition} Let $G$ be a finitely presentable group. Define \begin{equation*} \mu _{n}^{g}(G)=\min \{(-1)^{n}\chi (X)\mid X \text{ is a }(G,n) \text{ -complex}\}. \end{equation*} If no such $X$ exists, define $\mu _{n}^{g}(G)=+\infty .$ We say that a $(G,n)$-complex $X$ with $(-1)^n\chi(X) = \mu_n^g(G)$ realizes $\mu_n^g(G)$. \end{definition} A few observations are immediate. Clearly $\mu _{n}(G)\leq \mu _{n}^{g}(G).$ Therefore, $\mu _{n}^{g}(G)>-\infty $ since $\mu _{n}(G)>-\infty $ (cf. Swan \cite{sw}). Moreover, $\mu _{2}(G)=\mu _{2}^{g}(G)$ if and only if $\mu _{2}(G)=1-\mathrm{def}(G).$ We can use this language to discuss the $D(2,n)$ problem for finite groups without assuming the normal generation conjecture. \begin{theorem} \label{igia} If $G$ is a finite group, then the $D(2,n)$ problem holds for $G$ when $n=2-\mathrm{def}(G)-\mu _{2}(G)$. \end{theorem} \begin{proof} The key point here is that algebraic $m$-complexes over $\mathbb{Z}G$ are classified by their partial Euler characteristics when $G$ is finite. More precisely, for any two algebraic $m$-complexes $F$ and $F^{\prime }$ with $\mu _{m}(F)=\mu _{m}(F^{\prime })\neq \mu _{m}(G)$, the complexes $F$ and $F^{\prime }$ are chain homotopy equivalent. For a proof, see \cite{Gr}, for example. Apply this result to the case $m=2$. As $G$ is finitely presentable, we fix a $(G,2)$-complex $X$ with Euler characteristic $1-\mathrm{def}(G),$ where $\mathrm{def}(G)$ is the deficiency of $G$. We claim that $G$ satisfies $D(2,n)$ for $n=(1-\mathrm{def}(G))-\mu _{2}(G)+1$. To see this, note that any algebraic 2-complex $F$ such that $\mu _{2}(F)>1-\mathrm{def}(G)$ is homotopy equivalent to the chain complex of the universal covering space of the wedge product $X\vee (S^{2})^{\mu _{2}(F)+\mathrm{def}(G)-1},$ since both complexes have the same Euler characteristic (cf. \cite{Gr}).
For any finite 3-complex $X^{\prime }$ of cohomological dimension $2$, denote by $Y$ the wedge $X^{\prime }\vee (S^{2})^{n}$; the chain complex of its universal covering space is homotopy equivalent to the algebraic $2$-complex \begin{equation*} F:C_{2}(\tilde{Y})/C_{3}(\tilde{Y})\rightarrow C_{1}(\tilde{Y})\rightarrow \mathbb{Z}G\rightarrow \mathbb{Z}\rightarrow 0. \end{equation*} Clearly, \begin{equation*} \mu _{2}(F)=\chi (X^{\prime })+n-1>1-\mathrm{def}(G). \end{equation*} By Lemma \ref{ge} and Lemma \ref{ea2c}, the complex $X^{\prime }\vee (S^{2})^{n}$ is homotopy equivalent to a $2$-dimensional CW complex. The proof is finished. \end{proof} \bigskip Theorem \ref{th3} is an easy corollary of the previous theorem. In view of the results of the previous section, for each finite group $G$, there is an integer $n,$ depending only on $G,$ such that each 3-complex with fundamental group $G$ and cohomological dimension $\leq 2$ is homotopy equivalent to a 3-complex with at most $n$ 3-cells (Corollary \ref{sagg}). The argument in the proof of Theorem \ref{igia} does not work for an infinite finitely presentable group. For example, for the trefoil knot group, the Euler characteristic is not enough to classify the chain homotopy classes of algebraic 2-complexes (see \cite{Du}). The $D(2,n)$ problem stems from the $D(2)$ problem. On the other hand, the question of the equality $\mu _{2}(G)=1-\mathrm{def}(G)$ is a second \textquotedblleft face\textquotedblright\ of the $D(2)$ problem. We can say something about this question when $G$ is torsion free of low geometric dimension. Recall that for a group $G$, the classifying space $\mathrm{B}G$ of $G$ is defined as the connected CW complex with $\pi _{1}(\mathrm{B}G)=G$ and $\pi _{i}(\mathrm{B}G)=0,i\geq 2$. It is unique up to homotopy. Theorem \ref{th2} is a special case of (i) in the following theorem. \begin{theorem} \label{lgba} Let $G$ be a group having a finite $n$-dimensional classifying space $\mathrm{B}G.$ We have the following.
\begin{enumerate} \item[(i)] $\mu _{n}(G)=\mu _{n}^{g}(G).$ In particular, $\mu _{2}(G)=1-\mathrm{def}(G)$ if $G$ has a finite $2$-dimensional $\mathrm{B}G.$ \item[(ii)] Any finite CW complex $X$ with the following properties: \begin{description} \item[a)] the dimension is at most $n+1;$ \item[b)] the cohomological dimension $\mathrm{cd}(X)$ is at most $n;$ \item[c)] if $n\geq 3,$ the homotopy groups $\pi _{i}(X)=0$ for $2\leq i\leq n-1;$ \item[d)] $(-1)^{n}\chi (X)=\mu _{n}^{g}(G),$ \end{description} is homotopy equivalent to $\mathrm{B}G.$ \end{enumerate} \end{theorem} \begin{proof} Let $\mathrm{E}G$ be the universal cover of $\mathrm{B}G$. Since $\mathrm{E}G$ is contractible, one obtains the exact cellular chain complex of $\mathrm{E}G$: \begin{equation*} C_{\ast }(\mathrm{E}G):0\rightarrow C_{n}(\mathrm{E}G)\rightarrow C_{n-1}(\mathrm{E}G)\ldots \rightarrow \mathbb{Z}G\rightarrow 0. \end{equation*} This gives a (truncated) free resolution of $G$. In order to prove (i), it suffices to show that this resolution attains the minimal Euler characteristic $\mu _{n}(G)$ (as we noticed earlier that $\mu _{n}(G)\leq \mu _{n}^{g}(G)$). Suppose that $\mu _{n}(G)$ is obtained from the following partial resolution of finitely generated free $\mathbb{Z}G$-modules: \begin{equation*} F_{\ast }:F_{n}\overset{d}{\rightarrow }F_{n-1}\ldots \rightarrow F_{1}\rightarrow \mathbb{Z}G\rightarrow 0. \end{equation*} We claim that $F_{\ast }$ is exact at $d:F_{n}\rightarrow F_{n-1}$. Once this is proved, $C_{\ast }(\mathrm{E}G)$ and $F_{\ast }$ are chain homotopy equivalent to each other, and hence have the same Euler characteristic. To prove the claim, let $J$ be the kernel of $d$. By Schanuel's lemma, there is an isomorphism \begin{equation*} J\oplus C_{n}(\mathrm{E}G)\oplus F_{n-1}\ldots \cong F_{n}\oplus C_{n-1}(\mathrm{E}G)\ldots .
\end{equation*} Applying the functor $-\otimes _{\mathbb{Z}G}\mathbb{Z}$ to both sides of this isomorphism, we see that $\mu _{n}(F)=(-1)^{n}\chi (\mathrm{B}G)$ and $J\otimes _{\mathbb{Z}G}\mathbb{Z}=0$, by noticing that the complex $F_{\ast }$ attains the minimal Euler characteristic, multiplied by $(-1)^{n}$, among all the algebraic $n$-complexes. This implies that $C_{n}(\mathrm{E}G)\oplus F_{n-1}\oplus \ldots $ and $F_{n}\oplus C_{n-1}(\mathrm{E}G)\oplus \ldots $ have the same finite free $\mathbb{Z}G$-rank. By Kaplansky's theorem, $J$ is the trivial $\mathbb{Z}G$-module (cf. \cite{Ka}, p. 328). This proves (i). We now prove (ii). Let $C_{\ast }(\tilde{X})$ be the chain complex of the universal covering space of $X.$ Since \textrm{cd}$(X)\leq n,$ $C_{n+1}(\tilde{X})$ is a direct summand of $C_{n}(\tilde{X}),$ by the same argument as Lemma \ref{2.2}. Let $F$ be the chain complex \begin{equation*} F_{\ast }:C_{n}(\tilde{X})/C_{n+1}(\tilde{X})\overset{d}{\rightarrow } C_{n-1}(\tilde{X})\rightarrow \cdots \rightarrow C_{1}(\tilde{X})\rightarrow \mathbb{Z}G\rightarrow 0. \end{equation*} It is not hard to see that $\pi _{n}(X)\cong \ker d.$ Note that \begin{equation*} \mu _{n}(F)=(-1)^{n}\chi (X)=\mu _{n}(G). \end{equation*} By the same argument as in the first part of the proof, we get that $\ker d=0.$ This implies that $\tilde{X}$ is $n$-connected. Since $H_{n+1}(\tilde{X})=0,$ $\tilde{X}$ is contractible and $X$ is homotopy equivalent to $\mathrm{B}G.$ \end{proof} \begin{remark} Under the conditions of Theorem \ref{lgba}, Harlander and Jensen \cite{Ha} have already proved that a $(G,n)$-complex realizing $\mu _{n}^{g}(G)$ is homotopy equivalent to $BG.$ Note that a $(G,n)$-complex is a special case of $X$ in Theorem \ref{lgba}. \end{remark} Since the cohomological dimension of a nontrivial finite group is always infinite, such a finite group $G$ cannot have a finite dimensional $BG$. 
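As a concrete illustration of part (i) of Theorem \ref{lgba} (this example is ours; it uses only the standard facts that the presentation $2$-complex of a torsion-free one-relator group is aspherical, by Lyndon's theorem, and that $\mathrm{def}(G)\leq \mathrm{rk}\,H_{1}(G)-d(H_{2}(G))$): for the trefoil knot group $G=\langle x,y\mid x^{2}=y^{3}\rangle $, the presentation $2$-complex is a finite $2$-dimensional $\mathrm{B}G$, $\mathrm{def}(G)=2-1=1$, and \begin{equation*} \mu _{2}(G)=\mu _{2}^{g}(G)=1-\mathrm{def}(G)=0=\chi (\mathrm{B}G). \end{equation*}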
The previous theorem thus concerns torsion-free \textquotedblleft low dimension\textquotedblright\ groups. The trefoil knot group is an example in the case $n=2$. We conclude with an application to another situation in low dimensional homotopy theory. Suppose that $G$ is a finitely presentable group and \begin{equation*} P=\langle x_{1},\cdots ,x_{n}\mid r_{1},\cdots ,r_{m}\rangle \end{equation*} is a presentation of $G$. Call \begin{equation*} P^{\prime }=\langle y_{1},\cdots ,y_{n^{\prime }}\mid s_{1},\cdots ,s_{m^{\prime }}\rangle \end{equation*} a sub-presentation of $P$ if \begin{equation*} \{y_{1},\cdots ,y_{n^{\prime }}\}\subseteq \{x_{1},\cdots ,x_{n}\} \end{equation*} and \begin{equation*} \{s_{1},\cdots ,s_{m^{\prime }}\}\subseteq \{r_{1},\cdots ,r_{m}\}. \end{equation*} Denote by $G_{P^{\prime }}$ the group given by the presentation $P^{\prime }$. From each finite $2$-complex $X$, one obtains a finite presentation of $\pi_1(X)$: the 1-cells correspond one-to-one to a set of generators, while the 2-cells correspond one-to-one to a set of relators. \begin{lemma} \label{spog} Suppose that \begin{equation*} P^{\prime }=\langle y_{1},\cdots ,y_{n^{\prime }}\mid s_{1},\cdots ,s_{m^{\prime }}\rangle \end{equation*} is a sub-presentation of \begin{equation*} P=\langle x_{1},\cdots ,x_{n}\mid r_{1},\cdots ,r_{m}\rangle \end{equation*} of $G$ as above. If $P^{\prime \prime }$ is another finite presentation of $G_{P^{\prime }}$, then one can obtain a presentation of $G$ from $P^{\prime \prime }$ by adding $n-n^{\prime }$ generators and $m-m^{\prime }$ relations. In particular, if $P$ realizes $\mu _{2}^{g}(G)$, then $P^{\prime }$ realizes $\mu _{2}^{g}(G_{P^{\prime }})$. \end{lemma} \begin{proof} Re-indexing and re-naming if necessary, we assume that \begin{equation*} y_{1}=x_{1},\cdots ,y_{n^{\prime }}=x_{n^{\prime }},\quad n^{\prime }\leq n \end{equation*} and \begin{equation*} s_{1}=r_{1},\cdots ,s_{m^{\prime }}=r_{m^{\prime }},\quad m^{\prime }\leq m. 
\end{equation*} It is clear that the words corresponding to $s_{1},\cdots ,s_{m^{\prime }}$ do not involve $x_{n^{\prime }+1},\cdots ,x_{n}$. If \begin{equation*} P^{\prime \prime }=\langle y_{1}^{\prime },\cdots ,y_{u}^{\prime }\mid s_{1}^{\prime },\cdots ,s_{v}^{\prime }\rangle \end{equation*} is another presentation of $G_{P^{\prime }}$, we form a group $G^{\prime \prime }$ with the presentation \begin{equation*} \langle y_{1}^{\prime },\cdots ,y_{u}^{\prime },x_{n^{\prime }+1},\cdots ,x_{n}\mid s_{1}^{\prime },\cdots ,s_{v}^{\prime }\rangle \end{equation*} by adding $n-n^{\prime }$ free generators to $P^{\prime \prime }$. For each $1\leq i\leq n^{\prime },$ the letter $x_{i},$ viewed as an element in $G_{P^{\prime }},$ has a lifting $w_{i}$ in the free group $\langle y_{1}^{\prime },\cdots ,y_{u}^{\prime }\rangle .$ For each $1\leq i\leq n,$ define the word $\omega _{i}$ of $\{y_{1}^{\prime },\cdots ,y_{u}^{\prime },x_{n^{\prime }+1},\cdots ,x_{n}\}$ as \begin{equation*} \omega _{i}=\left\{ \begin{array}{c} w_{i},\ 1\leq i\leq n^{\prime }; \\ x_{i},\ n^{\prime }<i\leq n. \end{array} \right. \end{equation*} Denote by $\phi $ the bijection \begin{equation*} \phi :\{x_{1},\cdots ,x_{n}\}\rightarrow \{\omega _{1},\cdots ,\omega _{n}\} \end{equation*} given by $x_{i}\mapsto \omega _{i}.$ For each $m^{\prime }<i\leq m,$ write $r_{i}=\Pi _{j=1}^{k_{i}}x_{ij}$ as a reduced word of $\{x_{1},\cdots ,x_{n}\}.$ Let $r_{i}^{\prime }=\Pi _{j=1}^{k_{i}}\phi (x_{ij})$ be the corresponding word of \begin{equation*} \{y_{1}^{\prime },\cdots ,y_{u}^{\prime },x_{n^{\prime }+1},\cdots ,x_{n}\}. \end{equation*} Let $K$ be the normal subgroup of $G^{\prime \prime }$ normally generated by the $m-m^{\prime }$ elements $r_{m^{\prime }+1}^{\prime },\cdots ,r_{m}^{\prime }$. 
We obtain a short exact sequence of groups \begin{equation*} 1\rightarrow K\rightarrow G^{\prime \prime }\rightarrow G\rightarrow 1, \end{equation*} where the third arrow is induced by the map $G_{P^{\prime }}\rightarrow G$ coming from the natural inclusions of generators and relators. From this exact sequence, one obtains the desired presentation \begin{equation*} \langle y_{1}^{\prime },\cdots ,y_{u}^{\prime },x_{n^{\prime }+1},\cdots ,x_{n}\mid s_{1}^{\prime },\cdots ,s_{v}^{\prime },r_{m^{\prime }+1}^{\prime },\cdots ,r_{m}^{\prime }\rangle \end{equation*} of $G$. Assume that $P$ realizes $\mu _{2}^{g}(G)$, while $P^{\prime }$ does not realize $\mu _{2}^{g}(G_{P^{\prime }})$. We apply the above construction to a presentation $P^{\prime \prime }$ of $G_{P^{\prime }}$ realizing $\mu _{2}^{g}(G_{P^{\prime }})$. In doing so, we obtain a presentation of $G$ with Euler characteristic strictly smaller than that of $P$. This gives a contradiction. \end{proof} Recall that the famous Whitehead conjecture says that any subcomplex $X^{\prime }$ of an aspherical complex $X$ is aspherical as well (for more details, see the survey article \cite{Bo}). As an application of the results proved above, we give an equivalent condition for the asphericity of $X^{\prime },$ as follows. \begin{corollary} \label{sxia} Suppose that $X$ is a finite aspherical 2-complex and $X^{\prime }$ is a subcomplex of $X$. We have the following. \begin{enumerate} \item[(i)] The complex $X^{\prime }$ realizes $\mu _{2}^{g}(\pi _{1}(X^{\prime }));$ \item[(ii)] The complex $X^{\prime }$ is aspherical if and only if the fundamental group $\pi _{1}(X^{\prime })$ is of geometric dimension at most 2. \end{enumerate} \end{corollary} \begin{proof} As $X$ is aspherical, it realizes $\mu _{2}^{g}(\pi _{1}(X))$ by Theorem \ref{lgba}. Notice that the presentation of $\pi _{1}(X^{\prime })$ given by $X^{\prime }$ is a sub-presentation of the presentation of $\pi _{1}(X)$ corresponding to $X$. 
Lemma \ref{spog} implies that $X^{\prime }$ realizes $\mu _{2}^{g}(\pi _{1}(X^{\prime }))$. This proves part (i). If $X^{\prime }$ is aspherical, it is a $B\pi _{1}(X^{\prime })$ and hence $\pi _{1}(X^{\prime })$ is of geometric dimension at most 2. Conversely, assume that $\pi _{1}(X^{\prime })$ is of geometric dimension at most 2. By Theorem \ref{lgba}, all the $(\pi _{1}(X^{\prime }),2)$-complexes realizing $\mu _{2}^{g}(\pi _{1}(X^{\prime }))$ are homotopy equivalent to $B\pi _{1}(X^{\prime })$. Therefore $X^{\prime }$ is aspherical by part (i). \end{proof} \bigskip We remark that in a recent article \cite{Ge}, Gersten obtains a result stronger than Corollary \ref{sxia} (ii) using the method of $L_{2}$-theory. Namely, he is able to replace the condition \textquotedblleft geometric dimension 2\textquotedblright\ by \textquotedblleft cohomological dimension 2\textquotedblright . These two conditions are equivalent to each other if the Eilenberg-Ganea conjecture (\cite{Br} VIII. 7) is true.
TITLE: intersection of the complement of two disjoint sets is not disjoint QUESTION [1 upvotes]: I have a question regarding whether the intersection of the complements of two disjoint sets is empty or not. I mean, given $A$ and $B$ disjoint, i.e. $A \subset X$, $B \subset X$, and $A \cap B = \emptyset$. It seems that $A^{c} \cap B^{c} \ne \emptyset$; at least when I draw a Venn diagram, the intersection of the complements looks nonempty given that $A$ and $B$ are disjoint. But somehow I am having some difficulty proving it. Could someone give me a hint? It seems it is not a very difficult proof, but I am stuck. Thank you REPLY [0 votes]: The phenomenon you observed doesn't always hold true. Since $B=(B\cap A)\cup(B\cap A^c)$ and we know that $B\cap A=\emptyset$, we get $B\cap A^{c}=B$. Therefore $B\subseteq A^{c}$. If $B=A^{c}$, then $A^{c}\cap B^{c}=A^{c}\setminus B=\emptyset$. If $B\neq A^{c}$, then $A^{c}\cap B^{c}=A^{c}\setminus B\neq\emptyset$.
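To make the answer concrete, here is a quick check in Python (a toy illustration added here; the universe $X=\{0,\dots,5\}$ and the sets are arbitrary choices). By De Morgan, $A^{c}\cap B^{c}=(A\cup B)^{c}$, so for disjoint $A,B$ the intersection of the complements is empty exactly when $B=A^{c}$:

```python
# Toy check: for disjoint A, B inside a universe X,
# A^c ∩ B^c = (A ∪ B)^c, which is empty iff A ∪ B = X (i.e. B = A^c).
X = set(range(6))

def compl(S):
    """Complement of S relative to the universe X."""
    return X - S

# Disjoint sets that do not cover X: the complements do intersect.
A, B = {0, 1}, {2, 3}
assert A & B == set()
assert compl(A) & compl(B) == {4, 5}          # nonempty

# Disjoint sets with B = A^c (they cover X): the intersection is empty.
A, B = {0, 1, 2}, {3, 4, 5}
assert A & B == set() and B == compl(A)
assert compl(A) & compl(B) == set()           # empty
```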
TITLE: Prob. 6, Sec. 3.4 in Kreyszig's functional analysis book: The fourier coefficients minimise the distance. QUESTION [0 upvotes]: Let $n \in \mathbb{N}$, let $\{ e_1, \ldots, e_n \}$ be an orthonormal set in an inner product space $X$, let $x \in X$, let $y(x) \colon= \sum_{j=1}^n \langle x, e_j \rangle e_j$, and let $z \colon= \sum_{j=1}^n \gamma_j e_j$, where $\gamma_1, \ldots, \gamma_n$ are any scalars. Then how to show that $$\Vert x-y(x) \Vert \leq \Vert x-z \Vert,$$ where $\Vert \cdot \Vert$ is the norm induced by the inner product $\langle \cdot \rangle$? My effort: \begin{align} \Vert x-z \Vert^2 - \Vert x-y(x) \Vert^2 &= \langle x-z, \ x-z \rangle - \langle x-y(x), \ x-y(x) \rangle \\ &= - 2 \Re \langle x, z \rangle + \Vert z \Vert^2 + 2 \Re \langle x, y(x) \rangle - \Vert y(x) \Vert^2 \\ &= - 2 \Re \sum_{j=1}^n \langle x, e_j \rangle \overline{\gamma_j} + \sum_{j=1}^n \vert \gamma_j \vert^2 + 2 \sum_{j=1}^n \vert \langle x, e_j \rangle \vert^2 - \sum_{j=1}^n \vert \langle x, e_j \rangle \vert^2 \\ &= - 2 \Re \sum_{j=1}^n \langle x, e_j \rangle \overline{\gamma_j} + \sum_{j=1}^n \vert \gamma_j \vert^2 + \sum_{j=1}^n \vert \langle x, e_j \rangle \vert^2 \\ &\geq - 2 \left\vert \sum_{j=1}^n \langle x, e_j \rangle \overline{\gamma_j} \right\vert + \sum_{j=1}^n \vert \gamma_j \vert^2 + \sum_{j=1}^n \vert \langle x, e_j \rangle \vert^2 \\ &\geq - 2 \sum_{j=1}^n \left\vert \langle x, e_j \rangle \overline{\gamma_j} \right\vert + \sum_{j=1}^n \vert \gamma_j \vert^2 + \sum_{j=1}^n \vert \langle x, e_j \rangle \vert^2 \\ &\geq - 2 \sqrt{ \sum_{j=1}^n \left\vert \langle x, e_j \rangle \right\vert^2} \cdot \sqrt{\sum_{j=1}^n \left\vert \overline{\gamma_j} \right\vert^2} + \sum_{j=1}^n \vert \gamma_j \vert^2 + \sum_{j=1}^n \vert \langle x, e_j \rangle \vert^2 \\ &= - 2 \sqrt{ \sum_{j=1}^n \left\vert \langle x, e_j \rangle \right\vert^2} \cdot \sqrt{\sum_{j=1}^n \left\vert \gamma_j \right\vert^2} + \sum_{j=1}^n \vert \gamma_j \vert^2 + \sum_{j=1}^n \vert \langle 
x, e_j \rangle \vert^2 \\ &= \left(\sqrt{ \sum_{j=1}^n \left\vert \langle x, e_j \rangle \right\vert^2} - \sqrt{\sum_{j=1}^n \left\vert \gamma_j \right\vert^2} \right)^2 \\ &\geq 0. \end{align} Is this calculation correct? If not, then where does the problem lie? REPLY [2 votes]: Notice that $$ \left(x-\sum_{k=1}^{n}\langle x,e_k\rangle e_k\right) \perp e_j,\;\;\; j=1,2,3,\cdots, n. $$ Therefore, for all scalars $\beta_1,\beta_2,\cdots,\beta_n$, $$ \left(x-\sum_{k=1}^{n}\langle x,e_k\rangle e_k\right)\perp\sum_{j=1}^{n}\beta_j e_j $$ Therefore, by the Pythagorean Theorem, \begin{align} \left\|x-\sum_{j=1}^{n}\alpha_je_j\right\|^2&= \left\|\left(x-\sum_{j=1}^{n}\langle x,e_j\rangle e_j\right)+\sum_{j=1}^{n}(\langle x,e_j\rangle-\alpha_j)e_j\right\|^2 \\ & = \left\|x-\sum_{j=1}^{n}\langle x,e_j\rangle e_j\right\|^2+\left\|\sum_{j=1}^{n}(\langle x,e_j\rangle-\alpha_j)e_j\right\|^2 \end{align} Now you can see that the following holds for all choices of $\alpha_j$ $$ \left\|x-\sum_{j=1}^{n}\alpha_je_j\right\| \ge \left\|x-\sum_{j=1}^{n}\langle x,e_j\rangle e_j\right\|, $$ and you have equality iff $\alpha_j = \langle x,e_j\rangle$ for all $j$.
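The Pythagorean argument above is easy to confirm numerically; the following sketch (mine, with an arbitrary orthonormal set built from a reduced QR factorization in $\mathbb{C}^{8}$) checks that no choice of scalars $\gamma_j$ beats the Fourier coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)

# An orthonormal set {e_1, ..., e_n} in C^d: columns of a reduced QR factor.
d, n = 8, 3
Q, _ = np.linalg.qr(rng.normal(size=(d, n)) + 1j * rng.normal(size=(d, n)))
e = Q[:, :n]

x = rng.normal(size=d) + 1j * rng.normal(size=d)
fourier = e.conj().T @ x        # the Fourier coefficients <x, e_j>
y = e @ fourier                 # y(x) = sum_j <x, e_j> e_j

best = np.linalg.norm(x - y)
# Random scalars gamma_j never do better than the Fourier coefficients.
for _ in range(1000):
    gamma = rng.normal(size=n) + 1j * rng.normal(size=n)
    assert np.linalg.norm(x - e @ gamma) >= best - 1e-12
```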
\begin{document} \begin{abstract} We extend the results of \cite{ZZ} on LDP's (large deviations principles) for the empirical measures $$ Z_s: = \frac{1}{N} \sum_{\zeta: s(\zeta) = 0} \delta_{\zeta}, \;\;\; (N: = \# \{\zeta: s(\zeta) = 0\})$$ of zeros of Gaussian random polynomials $s$ in one variable to $P(\phi)_2$ random polynomials. The speed and rate function are the same as in the associated Gaussian case. It follows that the expected distribution of zeros in the $P(\phi)_2$ ensembles tends to the same equilibrium measure as in the Gaussian case. \end{abstract} \maketitle The purpose of this note is to extend the LDP (large deviation principle) of \cite{ZZ} for the empirical measure \begin{equation}\label{ZN} Z_s: = d\mu_{\zeta}: = \frac{1}{N} \sum_{\zeta: s(\zeta) = 0} \delta_{\zeta}, \;\;\; N: = \# \{\zeta: s(\zeta) = 0\} \end{equation} of zeros of Gaussian random holomorphic polynomials $s$ of degree $N$ in one variable to certain non-Gaussian measures which we call $P(\phi)_2$ random polynomials. These are finite dimensional analogues of (or approximations to) the ensembles of quantum field theory, where the probability measure on the space of functions (or distributions) has the form $ e^{- S(f)} df$, with \begin{equation}\label{Sa} S(f) = \int \big(|\nabla f|^2 + |f|^2 + Q(|f|^2)\big)d \nu, \end{equation} where $Q$ is a semi-bounded polynomial. A more precise definition is given below; we refer to \cite{Si} for background on $P(\phi)_2$ theories. Our main results are that the empirical measures of zeros for such $P(\phi)_2$ random polynomials satisfy an LDP with precisely the same speed and rate functional as in the Gaussian case in \cite{ZZ} where $Q = 0$. In fact, our proof is to reduce the LDP to that case. As a corollary, the expected distribution $\E_N Z_s$ of zeros in the $P(\phi)_2$ case tends to the same weighted equilibrium measure as in the Gaussian case. 
In the Gaussian case, the proof of the last statement is derived from the asymptotics of the two point function (see \cite{SZ1,SZ2,B}); in the $P(\phi)_2$ case, the large deviations proof is the first and only one we know. To state the result precisely, we need some notation and terminology. By a random polynomial, one means a probability measure $\gamma_N$ on the vector space $\pcal_N$ of polynomials $p(z) = \sum_{j = 0}^N a_j z^j$ of degree $N$. As in \cite{ZZ}, we identify polynomials $p(z)$ on $\C$ with holomorphic sections $s \in H^0(\CP^1, \ocal(N))$, where $\ocal(N)$ is the $N$th power of the hyperplane section line bundle $\ocal(1)$; strictly speaking, in the local coordinate, $s = p e^N$ where $p$ is the polynomial of degree $N$ and $e^N$ is a frame for $\ocal(N)$. The geometric language is useful for compactifying the problem to $\CP^1$, and we refer to \cite{SZ1,ZZ} for background. In \cite{ZZ}, the authors chose $\gamma_N$ to be a Gaussian measure, $$\gamma_N = e^{- ||s||_{(h^N, \nu)}^2} ds,$$ determined by an inner product on $\pcal_N$, \begin{equation} \label{INNERPRODUCT} ||s||^2_{(h^N, \nu)} := \int_{\CP^1} |s(z)|^2_{h^N} d\nu(z). \end{equation} Here, $\nu$ is an auxiliary probability measure and $h $ is a smooth Hermitian metric on $\ocal(1)$ and $h^N$ is the induced metric on the powers $\ocal(N)$. In the local frame $e$, $h$ takes the classical form of a weight $h = e^{- \phi}$; the assumption is that it extends smoothly to $\ocal(1) \to \CP^1$. Thus in the local coordinate, we rewrite \begin{equation} ||s||^2_{(h^N, \nu)} = \int_{\C} |p(z)|^2 e^{- N \phi(z)} d\nu(z). 
\end{equation} In this article, we study the probability measures \begin{equation} \label{GL} \gamma_N = e^{- S(s)} ds \;\;\; \mbox{on}\;\; \pcal_N, \end{equation} where $ds$ denotes Lebesgue measure and the action $S$ has the form, \begin{equation}\label{S} S(s) = \int_{\CP^1} |\nabla s(z)|_{h^N \otimes g}^2 d\nu + \int_{\CP^1} P(|s|_{h^N}^2) d \nu, \end{equation} where \begin{equation}\label{mnS} P(x) = \sum_{j = 1}^k c_j x^j, \;\; \mbox{with}\; c_k = 1 \end{equation} is a semi-bounded polynomial. Here, $\nabla: C^{\infty}(\CP^1, \ocal(N)) \to C^{\infty}(\CP^1, \ocal(N) \otimes T^*)$ is a smooth connection on the line bundle $\ocal(N) \to \CP^1$, and $g$ is a smooth Riemannian metric on $\mathbb{CP}^1$. We recall that connections are the first order derivatives which are well-defined on sections of line bundles. We will take $\nabla$ to be the Chern connection of a smooth connection $h$ on $\ocal(1)$ and its extension to the tensor powers $\ocal(N)$ (which strictly speaking should be denoted by $\nabla_N$). Note that the more elementary holomorphic derivative $\partial p(z) = p'(z)$ defines a meromorphic connection on $\ocal(N)$ with a pole at infinity, rather than a smooth connection. We refer to \S \ref{KINETIC} and \cite{GH,ZZ} for further background. The integral $\int_{\CP^1} |\nabla s(z)|_{h^N \otimes g}^2 d\nu$ is expressed in (\ref{kic}) in local coordinates. We often denote the first integral in $S(s)$ as $\|\nabla s\|_{(h^N\otimes g, \nu)}^2$ and the second as $\int P(|s|_{h^N}^2)$. In $P(\phi)_2$ Euclidean quantum field theory, $S(s)$ is known as the action, $\|\nabla s\|^2$ is known as the kinetic energy term, $P$ is the potential, and $\lcal(s) = |\nabla s|^2 + P(|s|^2)$ is the Lagrangian (see e.g. \cite{GJ,Si}). The Gaussian case is the `non-interacting' or free field theory with quadratic Lagrangian $\lcal_0 = |\nabla s|^2 + m|s|^2$; while in the general $P(\phi)_2$ case, the non-quadratic part of $P$ is known as the interaction term. 
The Gaussian case was studied in \cite{ZZ} without the (also Gaussian) kinetic term. The large deviations result for empirical measures of zeros concerns a sequence $\{\PR_N\}$ of probability measures on the space $\mcal(\CP^1)$ of probability measures on $\CP^1$. Roughly, $\PR_N(B)$ is the probability that the empirical measure of zeros of a random $p \in \pcal_N$ lies in the set $B$. To be precise, we recall some of the definitions from \cite{ZZ}. The zero set $\{\zeta_1, \dots, \zeta_N\}$ of a polynomial of degree $N$ is a point of the $N$th configuration space, \begin{equation} \label{CONFIG} (\CP^1)^{(N)} = Sym^{N} \CP^1 := \underbrace{\CP^1\times\cdots\times \CP^1}_N /S_{N}. \end{equation} Here, $S_N$ is the symmetric group on $N$ letters. We push forward the measure $\gamma_N$ on $\pcal_N$ under the `zeros' map \begin{equation} \label{ZEROSMAP} \dcal: \pcal_N \to (\CP^1)^{(N)}, \;\;\;\dcal(s)= \{\zeta_1, \dots, \zeta_N\}, \end{equation} where $\{\zeta_1, \dots, \zeta_N\}$ is the zero set of $s$, to obtain a measure \begin{equation} \label{JPCDEF}\vec K^N(\zeta_1, \dots, \zeta_N) : = \dcal_* d \gamma_N \end{equation} on $ (\CP^1)^{(N)}$, known as the joint probability current (or distribution), which we abbreviate by JPC. We then embed the configuration spaces into $\mcal(\CP^1)$ (the space of probability measures on $\CP^1$) under the map, \begin{equation}\label{DELTADEF} \mu : (\CP^1)^{(N)} \to \mcal(\CP^1), \;\;\; d\mu_{\zeta}: = \frac{1}{N} \sum_{j = 1}^{N} \delta_{\zeta_j}. \end{equation} The measure $d\mu_{\zeta}$ is known as the empirical measure of zeros of $p$. We then push forward the joint probability current to obtain a probability measure \begin{equation} \label{LDPNa} \PR_N = \mu_* \dcal_* \gamma^N \end{equation} on $\mcal(\CP^1)$. 
The sequence $\{\PR_N\}$ is said to satisfy a large deviations principle with speed $N^2$ and rate functional (or rate function) $I$ if (roughly speaking) for any Borel subset $E \subset \mcal(X)$, $$\frac{1}{N^2} \log \PR_N \{\sigma \in \mcal: \sigma \in E\} \to - \inf_{\sigma \in E} I(\sigma). $$ To be precise, the condition is that \begin{equation} \label{eq-ref1} - I(\sigma):= \limsup_{\delta \to 0} \limsup_{N \to \infty} \frac{1}{N^2} \log {\bf Prob} _N(B(\sigma, \delta)) = \liminf_{\delta \to 0} \liminf_{N \to \infty} \frac{1}{N^2} \log {\bf Prob}_N(B(\sigma, \delta)), \end{equation} for balls in the natural (Wasserstein) metric (see Theorem 4.1.11 of \cite{DZ}). \subsection{Statement of results} Our first results give an LDP for slightly simpler $P(\phi)_2$ ensembles where the action does not contain the kinetic term, i.e., we choose the probability measure to be $\gamma_N=e^{-S(s)}ds$ where $S(s)=\int P(|s|_{h^N}^2)$. In \S \ref{KINETIC} we add the kinetic term. To obtain a large deviations result, we need to impose some conditions on the probability measure $\nu$ that is used to define the integration measure on $\CP^1$ in the inner product (\ref{INNERPRODUCT}) and the $P(\phi)_2$ measures (\ref{GL}). In the pure potential case in \S \ref{without}, it must satisfy the mild conditions of \cite{ZZ}: (i) the Bernstein-Markov condition, and (ii) the support $K$ of $\nu$ is `regular' in the sense that it is non-thin at all of its points. We call such measures {\it admissible}. We refer to \cite{B,ZZ} for background on Bernstein-Markov measures and regularity. When we include the kinetic term, we must assume more about $\nu$ (see below). If $\gamma_N$ is defined by an \textit{admissible} measure $\nu$, then we prove that the speed and the rate function are the same as in the associated Gaussian case \cite{ZZ} where $P(x)=x$. 
\begin{maintheo}\label{POTENTIAL} Let $h = e^{- \phi}$ be a smooth Hermitian metric on $\ocal(1) \to \CP^1$ and let $\nu \in \mcal(\CP^1)$ be an admissible measure. Let $P(|s|^2_{h^N})$ be a semi-bounded polynomial defined by (\ref{mnS}), and let $\gamma_N$ be the probability measure defined by the action $S(s)=\int_{\mathbb{CP}^1}P(|s|^2_{h^N})d\nu$ without the kinetic term. Then the sequence of probability measures $\{ \PR_N\}$ on $\mcal(\CP^1)$ defined by (\ref{LDPNa}) satisfies a large deviations principle with speed $N^2$ and rate functional \begin{equation} \label{IGREEN} I^{h, K} (\mu) = - \frac{1}{2} \ecal_{h}(\mu) + \sup_K U^{\mu}_{h} + E(h). \end{equation} This rate functional is lower semi-continuous, proper and convex, and its unique minimizer $\nu_{h, K} \in \mcal(\CP^1)$ is the Green's equilibrium measure of $K$ with respect to $h$. \end{maintheo} Here, $\ecal_h(\mu) = \int_{\CP^1 \times \CP^1} G_h(z,w) d\mu(z) d\mu(w)$ is the Green's energy, where $G_h(z,w)$ is the Green's function with respect to $h$ (see \cite{ZZ} (6)). Also, $U_h^{\mu}(z) = \int_{\CP^1} G_h(z,w) d\mu(w)$ is the Green's potential of $\mu$. Things become more complicated when the action includes the kinetic term. We could choose independently the integration measures in the kinetic and potential terms, but for the sake of simplicity we only use the same measure $\nu$ for both terms. We then impose an extra condition on $\nu$ (and $\nabla$), namely that $\nabla$ satisfies a weighted $L^2$ Bernstein inequality, \begin{equation} \label{BERN} \|\nabla s\|^2_{(h^N \otimes g,\nu)}\leq CN^k \|s\|^2_{(h^N, \nu)} \end{equation} on all $H^0(\CP^1, \ocal(N))$, for some $k, C(h,g,\nu)> 0.$ When $\nu$ is admissible and such bounds hold, we say that $\nu$ (or $(h, \nu, \nabla)$) is {\it kinetic admissible}. 
In Lemma \ref{volume}, we show that if $h=e^{-\phi}$ is a Hermitian metric on $\ocal(1)$ with positive curvature form $\omega_h$ and $g$ is any fixed Riemannian metric, then $\nu = \omega_h$ is kinetic admissible, and in fact \begin{equation}\label{crucialt}\|\nabla s\|^2_{(h^N \otimes g,\nu)}\leq CN^2\|s\|^2_{(h^N, \nu)}. \end{equation} We then extend Theorem \ref{POTENTIAL} to the full $P(\phi)_2$ case. Perhaps surprisingly, when $(h,\nu, \nabla)$ is kinetic admissible, the kinetic term becomes a `lower order term' if $P(x)$ contains non-quadratic terms. \begin{maintheo}\label{POTENTIALKINETIC} Let $(h,\nu, \nabla)$ be kinetic admissible in the sense that (\ref{BERN}) holds. Let $P(|s|^2_{h^N})$ be a semi-bounded polynomial as above, and let $\gamma_N$ be the associated $P(\phi)_2$ measure defined by the action (\ref{S}). Then the sequence of probability measures $\{ \PR_N\}$ on $\mcal(\CP^1)$ defined by (\ref{LDPNa}) satisfies a large deviations principle with speed $N^2$ and the same rate functional $I^{h, K} (\mu)$ as in Theorem \ref{POTENTIAL}. \end{maintheo} The proofs of Theorems \ref{POTENTIAL}-\ref{POTENTIALKINETIC} are to relate the LDP for the $P(\phi)_2$ ensemble to the LDP for the associated (quadratic) Gaussian ensemble without kinetic term studied in \cite{ZZ}. To avoid duplication, we refer the reader to the earlier article for steps in the proof which carry over to $P(\phi)_2$ measures with no essential change. There are two new steps that are not in \cite{ZZ}. The first new step (Propositions \ref{FSVOLZETA2intro} and \ref{FSVOLZETA2introb}) is the calculation of the JPC (joint probability current, or distribution) of zeros in the $P(\phi)_2$ ensembles. The main observation underlying this note is that the calculation of the JPC in the Gaussian ensemble in \cite{ZZ} extends easily to the $P(\phi)_2$ case. The second new step (loc. cit.) 
is the reduction of the proof of the LDP to that of \cite{ZZ} by bounding the approximate rate function in the $P(\phi)_2$ case above and below by that in the Gaussian case. As a direct consequence of Theorems \ref{POTENTIAL}-\ref{POTENTIALKINETIC} we obtain, \begin{cor}\label{EQDIST} With all assumptions in Theorems \ref{POTENTIAL}-\ref{POTENTIALKINETIC}, let $\E_N (Z_s)$ be the expected value of the empirical measure with respect to $\gamma_N$. Then, $\E_N(Z_s) \to \nu_{h, K}$, which is the equilibrium measure determined by $h$ and $K$. \end{cor} Indeed, the limit measure $\lim_{N \to \infty} \E_N(Z_s)$ must be the unique minimizer of the rate functional. Convergence of the expected distribution of zeros to the equilibrium measure was first proved for Gaussian random polynomials with `subharmonic weights' in \cite{SZ1} and for flat weights and real analytic $K$ in \cite{SZ2}. In \cite{B}, the flat result was generalized to admissible measures. Corollary \ref{EQDIST} is the first result to our knowledge for probability measures of the form (\ref{GL}). In fact, we are not aware of prior results on these finite dimensional approximations to $P(\phi)_2$ quantum field theories. The results may have an independent interest in illustrating a novel kind of high frequency cutoff for such theories (in a holomorphic sector). In conclusion, we thank O. Zeitouni for discussions and correspondence on this note. \subsection{An example: Kac-Hammersley} As an illustration of the methods and results, we consider a $P(\phi)_2$ generalization of the Kac-Hammersley ensemble. The classical Kac-Hammersley ensemble is the Gaussian random polynomial $$s(z)=\sum_{j=0}^N a_j z^j, \,\,\,\, z\in \mathbb{C}$$ where the coefficients $a_j$ are independent complex Gaussian random variables of mean 0 and variance 1. In this case, $\E(Z_s)\rightarrow \delta_{S^1}$ weakly. 
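The concentration of Kac zeros on the unit circle is easy to observe numerically; the following Monte Carlo sketch (ours, not from the references cited above; the degree $N=200$, the random seed, and the annulus width are arbitrary choices) samples one polynomial with i.i.d. standard complex Gaussian coefficients and measures how much of the empirical measure sits near $S^1$:

```python
import numpy as np

rng = np.random.default_rng(1)

# One draw from the Kac-Hammersley ensemble: s(z) = sum_{j=0}^N a_j z^j
# with i.i.d. standard complex Gaussian coefficients a_j.
N = 200
a = rng.normal(size=N + 1) + 1j * rng.normal(size=N + 1)

# np.roots expects coefficients ordered from the highest degree down.
zeros = np.roots(a[::-1])

# Empirical measure (1/N) sum_j delta_{zeta_j}: almost all mass sits
# in a thin annulus around |z| = 1, consistent with E(Z_s) -> delta_{S^1}.
frac_near_circle = np.mean(np.abs(np.abs(zeros) - 1.0) < 0.25)
print(frac_near_circle)
```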
In the Gaussian case, $d\nu = \delta_{S^1}$ (the invariant probability measure on the unit circle), the weight $e^{-\phi} = 1$ and $g$ is the flat metric. Hence the inner product (\ref{INNERPRODUCT}) reads $$\|s\|_{\delta_{S^1}}^2=\frac{1}{2\pi}\int_{0}^{2\pi}|s(e^{i\theta})|^2d\theta$$ where $s$ is a polynomial of degree $N$. We now use the same metrics and measures, together with any semi-bounded polynomial $P(|s|^2)$, to define the kinetic $P(\phi)_2$ Kac-Hammersley ensemble. First, we note that $\delta_{S^1}$ is \emph{admissible} \cite{ZZ}. Second, inequality (\ref{crucialt}) holds for all polynomials: in the setting of Kac-Hammersley, the connection $\nabla$ is equal to $d=\partial+\bar \partial$, thus $$\nabla s=\Big(\sum _{j=1}^N ja_j z^{j-1}\Big)dz$$ and hence $$\|\nabla s\|_{\delta_{S^1}}^2=\sum _{j=1}^N j^2|a_j|^2 \leq N^2 \sum _{j=0}^N |a_j|^2=N^2\| s\|_{\delta_{S^1}}^2.$$ Hence, Theorems \ref{POTENTIAL} - \ref{POTENTIALKINETIC} hold in this case and we have \begin{cor} In the setting of Kac-Hammersley, let $\E_N(Z_s)$ be the expected value of the empirical measure with respect to $\gamma_N$ defined by the $P(\phi)_2$ action (\ref{S}) with the kinetic term. Then, $\E_N(Z_s)\rightarrow \delta_{S^1}$. \end{cor} \section{Proof of Theorem \ref{POTENTIAL}}\label{without} In this section, we drop the kinetic term $\|\nabla s\|_{(h^N \otimes g,\nu)}^2 $ and only consider actions of the form $\int P(|s|^2_{h^N}) d\nu$. We assume that $c_{k} > 0$ and with no essential loss of generality we put $c_{k} = 1$ (the coefficient could be re-scaled in the calculation). The following calculation generalizes Proposition 3 of \cite{ZZ}. \begin{prop} \label{FSVOLZETA2intro} Let $(\pcal_N, \gamma_N)$ be the $P(\phi)_2$ ensemble with $S(s) = \int_{\CP^1} P(|s|^2_{h^N}) d\nu$, where $d\nu$ is an admissible measure. Denote by $k$ the maximal non-zero power occurring in $P$ (\ref{mnS}). Let $\vec K^N$ be the joint probability current (\ref{JPCDEF}). 
Then, \begin{eqnarray} \label{eq-030209ba} \vec K^N(\zeta_1, \dots, \zeta_N) & = & \frac{\Gamma_N(\zeta_1, \dots, \zeta_N)}{Z_N(h)} \frac{|\Delta(\zeta_1, \dots, \zeta_N)|^2 d^2 \zeta_1 \cdots d^2 \zeta_N}{\left(\int_{\CP^1} \prod_{j = 1}^N |(z - \zeta_j)|^{2k} e^{-k N \phi(z)} d\nu(z) \right)^{\frac{N+1}{k}}} \\ \label{eq-030209d} & = & \frac{\Gamma_N(\zeta_1, \dots, \zeta_N)}{\hat{Z}_N(h)} \frac{\exp \left( \sum_{i < j} G_{h}(\zeta_i, \zeta_j) \right) \prod_{j = 1}^N e^{- 2 N \phi(\zeta_j)} d^2 \zeta_j }{\left(\int_{\CP^1} e^{k N \int_{\CP^1} G_{h}(z,w) d\mu_{\zeta}} d\nu(z) \right)^{\frac{N+1}{k}}}. \end{eqnarray} where $$\sup_{\{\zeta_1, \dots, \zeta_N\} \in (\CP^1)^{(N)}} \frac{1}{N^2} \log \Gamma_N (\zeta_1, \dots, \zeta_N) \to 0$$ and where $ Z_N(h)$, resp. $\hat{Z}_N(h)$, is the normalizing constant in Proposition 3 of \cite{ZZ}. \end{prop} We note that (\ref{eq-030209ba}) (resp. (\ref{eq-030209d})) is almost the same as (23) (resp. (24)) in Proposition 3 of \cite{ZZ} except that we raise $||s||_{h^N}$ to the power $2k$ instead of the power $2$. It is shown in \cite{ZZ} that $\frac{1}{\hat{Z}_N} = e^{\left(-\frac{1}{2}N(N-1)+N(N+1)\right)E(h)}. $ The existence of such an explicit JPC in the general $P(\phi)_2$ case is the reason why it is possible to prove Theorem \ref{POTENTIAL}. \begin{proof} We coordinatize $\pcal_N$ using the basis $z^j$ and put $$s = a_0 \prod_{j = 1}^N (z - \zeta_j) = \sum_{j = 0}^N a_{N -j} z^j. $$ Any smooth probability measure on $\pcal_N$ thus has a density $\dcal(a_0, \dots, a_N) \prod_{j = 0}^N d^2 a_j, $ where $d^2 a = da \wedge d \bar{a}$ is Lebesgue measure. As in \cite{ZZ}, the first step is to push this measure forward under the natural projection from $\pcal_N$ to the projective space $\PP \pcal_N$ of polynomials, whose points consist of lines $\C s$ of polynomials. This is natural since $Z_s$ is the same for all multiples of $s$. Monic polynomials with $a_0 = 1$ form an affine chart of $\PP \pcal_N$. 
As affine coordinates on $\PP \pcal_N$ we use $[1:b_1:\cdots :b_N]$ with $b_j=a_j/a_0$. We then change variables from the affine coordinates $b_j$ to the zero coordinates $\zeta_k$. Since $a_{N - j} = e_{N - j}(\zeta_1, \dots, \zeta_N)$ (the $(N - j)$th elementary symmetric polynomial), the pushed forward probability measure on $\PP \pcal_N$ then has the form \begin{equation} \label{JPCa} \vec K^N(\zeta_1, \dots, \zeta_N) = \left(\int \dcal(a_0; \zeta_1, \dots, \zeta_N) |a_0|^{2 N} d^2a_0 \right) \times |\Delta(\zeta_1, \dots, \zeta_N)|^2 d^2\zeta_1 \cdots d^2\zeta_N, \end{equation} where $\dcal(a_0; \zeta_1, \dots, \zeta_N)$ is the density of the JPC in the coordinates $(a_0, \dots, a_N)$ followed by the change of coordinates, and $\Delta(\zeta_1, \dots, \zeta_N) = \prod_{i < j} (\zeta_i - \zeta_j)$ is the Vandermonde determinant. We refer to \cite{ZZ} (proof of Proposition 3) for further details. For the $P(\phi)_2$ measures (\ref{GL}) without a kinetic term, \begin{equation} \label{JPCb} \dcal(a_0; \zeta_1, \dots, \zeta_N) = e^{- \int_{\CP^1} P (|a_0|^2 | \prod_{j = 1}^N (z - \zeta_j)|^2_{h^N}) d\nu(z)}. \end{equation} Put \begin{equation} \label{alphaj} \alpha_i = \alpha_i(\zeta_1, \dots, \zeta_N) : = \int_{\cp}|\prod_{j=1}^N(z-\zeta_j)|^{2i}_{h^N}d\nu(z). \end{equation} Then \begin{equation} \label{DCAL} \dcal(a_0; \zeta_1, \dots, \zeta_N) = e^{-(\alpha_k|a_0|^{2k}+\alpha_{k-1}c_{k - 1} |a_0|^{2k-2}+\cdots + \alpha_1 c_1 |a_0|^2)}, \end{equation} and the pushed-forward density is \begin{equation} \label{DCALb} \begin{array}{l} \int \dcal(a_0; \zeta_1, \dots, \zeta_N) |a_0|^{2 N} d^2a_0 \\ \\= \int_{\mathbb{C}}e^{-(\alpha_k|a_0|^{2k}+\alpha_{k-1}c_{k - 1} |a_0|^{2k-2}+\cdots + \alpha_1 c_1 |a_0|^2)}|a_0|^{2N}da_0 \wedge d \bar a_0. 
\end{array} \end{equation} We set $\rho = |a_0|^2$ and change variables $\rho \to \alpha_k^{-\frac{1}{k}} \rho$ to get \begin{equation} \label{alphak} \int_0^{\infty}e^{-(\alpha_k\rho^{k}+\alpha_{k-1} c_{k-1}\rho^{k-1}+\cdots + \alpha_1 c_1 \rho)} \rho^{N}d\rho=(\alpha_k)^{-\frac{N+1}{k}} \Gamma_N, \end{equation} where \begin{equation} \label{GAMMADEF} \Gamma_N(\zeta_1, \dots, \zeta_N): = \int_0^{\infty}e^{-(\rho^{k}+\beta_{k-1} c_{k-1} \rho^{k-1}+\cdots + \beta_1 c_1 \rho)}\rho^{N}d\rho, \end{equation} with $\beta_i=\frac{\alpha_i}{\alpha_k^{\frac{i}{k}}}$. We observe that \begin{equation} \label{alphajk} (\alpha_k)^{\frac{N+1}{k}} = \left(\int_{\cp}|\prod_{j=1}^N(z- \zeta_j)|^{2k}_{h^N}d\nu(z) \right)^{\frac{N + 1}{k}}, \end{equation} so that (\ref{alphak}) implies the identity (\ref{eq-030209ba}). The identity (\ref{eq-030209d}) is derived from (\ref{eq-030209ba}) exactly as in Proposition 3 of \cite{ZZ}, so we refer there for the details. To complete the proof of the Proposition, we prove the key \begin{lem} \label{BOUNDS} We have $$\sup_{\{\zeta_1, \dots, \zeta_N\} \in (\CP^1)^{(N)}} \frac{1}{N^2}\log \Gamma_N(\zeta_1, \dots, \zeta_N) \rightarrow 0.$$ \end{lem} \begin{proof} By the H\"older inequality with exponent $\frac{k}{i}$, $\beta_i \leq (\int_{\cp}d\nu)^{1-\frac{i}{k}}= 1$, hence $\beta_i$ is bounded independently of $N$ for any polynomial $s$ or roots $\{\zeta_1, \dots, \zeta_N\}$. We first note that $$\begin{array}{lll} \rho^{k}+\beta_{k-1} c_{k-1} \rho^{k-1}+\cdots + \beta_1 c_1 \rho &\geq & \rho^{k} - | c_{k-1}| \rho^{k-1} - \dots - |c_1| \rho \\ && \\ & \geq & \frac{1}{2} \rho^k, \;\;\;\mbox{for}\;\; \rho \geq \rho_k : = \rho_k(c_1, \dots, c_{k-1}), \end{array} $$ where $\rho_k$ is chosen so that $\frac{|c_{k-1}|}{\rho} + \cdots + \frac{|c_1|}{\rho^{k-1}} \leq \half$ for $\rho \geq \rho_k$.
It follows that $$\begin{array}{lll} \Gamma_N(\zeta_1, \dots, \zeta_N) & \leq & \int_0^{\rho_k} e^{-(\rho^{k}+\beta_{k-1} c_{k-1} \rho^{k-1}+\cdots + \beta_1 c_1 \rho)}\rho^{N}d\rho + \int_{\rho_k}^{\infty} e^{- \half \rho^k} \rho^N d \rho \\ && \\ & \leq & \int_0^{\rho_k} e^{-(\rho^{k} - |c_{k-1}|\rho^{k-1}-\cdots - | c_1| \rho)} \rho^{N}d\rho+ \int_{0}^{\infty} e^{- \half \rho^k} \rho^N d \rho. \end{array} $$ But $$ \int_0^{\infty}e^{- \half \rho^{k}}\rho^{N}d\rho =N^{\frac{N+1}{k}} \int_0^\infty e^{N(\log \rho- \half \rho^k)}d\rho \sim N^{\frac{N+1}{k}} e^{N(\frac{1}{k}\log \frac{2}{k}-\frac{1}{k})}\frac{1}{\sqrt{N}}.$$ Also, $$\int_0^{\rho_k} e^{-(\rho^{k} - |c_{k-1}|\rho^{k-1}- \cdots - | c_1 | \rho)} \rho^{N} d \rho \leq (\rho_k)^N C_k,$$ where $C_k$ is a constant independent of $N$ and $\{\zeta_1, \dots, \zeta_N\}$. Hence, $$\Gamma_N \leq (\rho_k)^N C_k + N^{\frac{N+1}{k}} e^{N(\frac{1}{k}\log \frac{2}{k}-\frac{1}{k})}\frac{1}{\sqrt{N}}.$$ To obtain a lower bound, we write $$\int_0^{\infty}e^{-(\rho^{k}+\beta_{k-1} c_{k-1} \rho^{k-1}+\cdots + \beta_1 c_1 \rho)}\rho^{N}d\rho= \int_0^{1} +\int_1^{\infty}.$$ For $\rho \in [0, 1]$ we have $$\rho^{k}+\beta_{k-1} |c_{k-1}| \rho^{k-1}+\cdots + \beta_1 |c_1| \rho \leq k C, \;\;\; C = \max\{1, |c_1|, \dots, |c_{k-1}|\},$$ since each $\beta_i$ is bounded by $1$, thus $$\int_0^{1}e^{-(\rho^{k}+\beta_{k-1} c_{k-1} \rho^{k-1}+\cdots + \beta_1 c_1 \rho)}\rho^{N}d\rho \geq \int_0^{1} e^{- C k}\rho^{N}d\rho = e^{- Ck}\frac{1}{N+1}.$$ For $\rho \geq 1$ we have $$\rho^{k}+\beta_{k-1} c_{k-1} \rho^{k-1}+\cdots + \beta_1 c_1 \rho \leq k C \rho^{k},$$ hence $$\begin{array}{l} \int_1^{\infty}e^{-(\rho^{k}+\beta_{k-1} c_{k-1} \rho^{k-1}+\cdots + \beta_1 c_1 \rho)}\rho^{N}d\rho \geq \int_1^{\infty}e^{- C k\rho^k}\rho^Nd\rho\\ \\ = (Ck)^{- (N + 1)/k} \int_{(Ck)^{1/k}}^{\infty}e^{- \rho^k} \rho^N d\rho \geq (Ck)^{- (N + 1)/k}.
\end{array} $$ Putting together the two bounds, we get, for $N$ large enough, $$(Ck)^{- (N + 1)/k} +e^{-Ck}\frac{1}{N+1} \leq \Gamma_N \leq (\rho_k)^N C_k + N^{\frac{N+1}{k}} e^{N(\frac{1}{k}\log \frac{2}{k}-\frac{1}{k})}\frac{1}{\sqrt{N}}.$$ This completes the proof of Lemma \ref{BOUNDS}, and hence of the Proposition. \end{proof} \begin{rem} In retrospect, what we proved is that \begin{equation} \label{DCALab} \frac{1}{N^2} \log \int \dcal(a_0; \zeta_1, \dots, \zeta_N) |a_0|^{2 N} d^2a_0 \sim \frac{1}{N^2} \log \int_{0}^{\infty} e^{-\alpha_k \rho^{k}} \rho^{N} d \rho. \end{equation} We could obtain the limit by a slight generalization of the saddle point method, \begin{equation} \begin{array}{lll} \frac{1}{N^2} \log \int_{0}^{\infty} e^{-\alpha_k \rho^{k}} \rho^{N} d \rho & \sim & -\frac{1}{N^2} \inf_{\rho \in \R_+} (\alpha_k \rho^{k} - N \log \rho) \\ && \\ & \sim & -\frac{1}{k N} \log \alpha_k = - \frac{1}{k N} \log \int_{\cp}|\prod_{j=1}^N(z-\zeta_j)|^{2k}_{h^N}d\nu, \end{array} \end{equation} since the minimum occurs at $\rho_N = (\frac{N}{k})^{\frac{1}{k}} \alpha_k^{- \frac{1}{k}}$. This is the same answer obtained below by the more rigorous argument of \cite{ZZ}. \end{rem} \subsection{\label{COMPLETE} Completion of the proof of Theorem \ref{POTENTIAL} without kinetic term } We now modify the calculation of the approximate rate function $I_N$ from \cite{ZZ}, Section 4.7. As in that section, we define $$\mathcal{E}_N^h(\mu_{\zeta})=\int_{\cp \times \cp \backslash \Delta}G_h(z,w)d\mu_{\zeta}(z)d\mu_{\zeta} (w),$$ where $\Delta \subset \cp \times \cp$ is the diagonal. We also define \begin{equation} \label{JCAL} {\mathcal{J}_N^{h,\nu}(\mu_{\zeta})=\log \|e^{U_h^{\mu_{\zeta}}}\|_{L^{kN}(\nu)}}. \end{equation} It is almost the same as the functional with the same notation in \cite{ZZ}, Section 4.7, except that the $L^N$ norm there becomes the $L^{kN}$ norm.
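The reason the change $L^N \to L^{kN}$ is harmless in the limit is the elementary fact that, for a probability measure $\nu$, $\log\|e^{U}\|_{L^{p}(\nu)}$ increases to $\sup U$ as $p \to \infty$. A quick numerical illustration (in log-space, to avoid overflow) with a made-up potential $U$ on $[0,1]$:

```python
import numpy as np

# log of the L^p(nu)-norm of e^U for nu = Lebesgue measure on [0, 1]
x = np.linspace(0.0, 1.0, 200001)
dx = x[1] - x[0]
U = np.sin(2*np.pi*x)                     # made-up potential, sup U = 1

def log_norm(p):
    a = p * U
    m = a.max()
    return (m + np.log(np.sum(np.exp(a - m)) * dx)) / p

vals = [log_norm(p) for p in (10, 100, 1000)]
# increasing in p and approaching sup U = 1
assert vals[0] < vals[1] < vals[2] < 1.0
assert 1.0 - vals[2] < 0.01
```
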
We define the approximate rate functional by \begin{equation} \label{IN} I_N(\mu_{\zeta}) : = -\frac{1}{2}\mathcal{E}_N^h(\mu_{\zeta})+\frac{N+1}{N}\mathcal{J}_N^{h,\nu}(\mu_{\zeta}), \end{equation} so that the density in the next Proposition is proportional to $e^{-N^2 I_N(\mu_{\zeta})}$. The following is the analogue of Lemma 18 of \cite{ZZ}. \begin{prop}\label{MAIN1} With the same notation as in Proposition \ref{FSVOLZETA2intro}, we have $$\vec K^N(\zeta_1, \dots, \zeta_N) = \frac{\Gamma_N(\zeta_1, \dots, \zeta_N)}{\hat{Z}_N(h)} e^{- N^2 \left(-\frac{1}{2}\mathcal{E}_N^h(\mu_{\zeta})+\frac{N+1}{N}\mathcal{J}_N^{h,\nu}(\mu_{\zeta})\right)}. $$ \end{prop} The proof is the same calculation as in \cite{ZZ} and we therefore omit most of the details. Indeed, the remainder of the proof of Theorem \ref{POTENTIAL} for the $P(\phi)_2$ measure without kinetic term is identical to that of Theorem 1 of \cite{ZZ}, since the only change in the approximate rate functional is the change $1 \to k$ in $\mathcal{J}_N^{h,\nu}$ and the factor $\Gamma_N$. The change in $\jcal_N^{h, \nu}$ cancels out in the limit, since (as in \cite{ZZ}) we have $$\mathcal{J}_N^{h,\nu}(\mu_{\zeta})=\log \|e^{U_h^{\mu_{\zeta}}}\|_{L^{kN}(\nu)}\uparrow \log \|e^{U_h^{\mu_{\zeta}}}\|_{L^{\infty}(\nu)}=\sup_K U_h^{\mu_{\zeta}} \qquad (N \to \infty).$$ We briefly re-do the calculation for the sake of completeness, referring to \cite{ZZ} for further details: \begin{equation} \begin{array}{lll}\int_{\CP^1} \prod_{j = 1}^N |(z - \zeta_j)|^{2k} e^{- k N \phi} d\nu(z) & = & \left( \int_{\CP^1} e^{k \int_{\CP^1} G_{h}(z,w) dd^c \log ||s_{\zeta}(w)||_{h^N}^2} d\nu \right) e^{ k \int_{\CP^1} \log ||s_{\zeta}||_{h^N}^2(z) \omega_h} \\ && \\ & = & \left( \int_{\CP^1} e^{k N \int_{\CP^1} G_{h}(z,w) d\mu_{\zeta}(w)} d\nu \right) e^{ k \int_{\CP^1} \log ||s_{\zeta}||_{h^N}^2(z) \omega_h}. \end{array} \end{equation} The right side is then raised to the power $- \frac{N + 1}{k}$.
If we take $\frac{1}{N^2} \log$ of the result, in the limit we get the supremum of $\int_{\CP^1} G_{h}(z,w) d\mu_{\zeta}(w)$ on the support of $d\nu$. Further, by Proposition \ref{FSVOLZETA2intro} the $\Gamma_N$ factor does not contribute to the rate function $I^{h, K}$. Therefore the special case of Theorem \ref{POTENTIAL} for $P(\phi)_2$ measures where the $\|\nabla s\|_{(h^N\otimes g,\nu)}^2$ term is omitted follows from Proposition \ref{MAIN1} and from the proof of Theorem 1 in \cite{ZZ}. \end{proof} \section{\label{KINETIC} Large deviations for Lagrangians with kinetic term.} We now include the kinetic energy term. In order to define $\nabla s$, we need to introduce a connection $\nabla: C^{\infty}(\CP^1, \ocal(1)) \to C^{\infty} (\CP^1, \ocal(1) \otimes T^*)$. To define the norm-square $||\nabla s||_{(H^N\otimes g,\nu)}^2$, we introduce a metric $g$ on $\CP^1$ and a Hermitian metric $H$ on $\ocal(1)$ to define $|\nabla s|^2_{H^N \otimes g}$ pointwise, and a measure $d\mu$ on $\CP^1$ to integrate the result. The kinetic term is independent of the potential term, and we could choose $H, \mu$ differently from $h, \nu$ in the potential term. But to avoid excessive technical complications, we choose the metrics and connections to be closely related to those in the potential term. We first assume that $h=e^{-\phi}$ is a Hermitian metric on $\ocal(1) \rightarrow \CP^1$ with positive $(1,1)$ curvature, $\omega_h = \frac{i}{\pi} \ddbar \phi>0$. We then choose $\nabla$ to be the Chern connection of $h$. Thus, $\nabla s \in C^{\infty}(\CP^1, \ocal(N) \otimes T^{*(1,0)})$ if $s \in H^0(\CP^1, \ocal(N))$. We fix a local frame $e$ over $\C$ and express holomorphic sections of $\ocal(N)$ as $s = p e^N$. The connection 1-form is defined by $\nabla e = e \otimes \alpha$, and in the case of the Chern connection for $h$ it is given by $\alpha = h^{-1} \partial h = \partial \phi$. We further fix a smooth Riemannian metric $g$ on $\CP^1$ (which could be $\omega_h$ but need not be).
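As a concrete example (not fixed by the text), for a Fubini-Study-type potential $\phi(z) = \log(1+|z|^2)$ in the chart $\C$, the connection coefficient $\partial\phi$ can be checked against a finite-difference $(1,0)$-derivative:

```python
import numpy as np

def phi(z):
    # Fubini-Study-type potential in the chart C (an example choice)
    return np.log1p(np.abs(z)**2)

def del_z(f, z, h=1e-6):
    # (1,0)-derivative d/dz = (d/dx - i d/dy)/2 via central differences
    fx = (f(z + h) - f(z - h)) / (2*h)
    fy = (f(z + 1j*h) - f(z - 1j*h)) / (2*h)
    return 0.5*(fx - 1j*fy)

z0 = 0.7 - 0.3j
# for this phi one has: del phi = conj(z) / (1 + |z|^2)
expected = np.conj(z0) / (1 + abs(z0)**2)
assert abs(del_z(phi, z0) - expected) < 1e-8
```
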
We assume that the auxiliary probability measure $d\nu$ on $\CP^1$ satisfies the following $L^2$-condition: there exists $r \geq 0$ so that \begin{equation}\label{condition} \int_{\CP^1} |p|^2e^{-N\phi}\omega_h \leq C N^r\int_{\CP^1} |p|^2e^{-N\phi}d\nu,\end{equation} for all $p \in \pcal_N$. That is, the inner product defined by $(h^N, \omega_h)$ is polynomially bounded by the inner product defined by $(h^N, \nu)$. We say that $(h, \nu)$ is {\it kinetic admissible} if the data satisfies these conditions. The metrics $h$ and $g$ and the measure $\nu$ induce an inner product on $\Gamma(L^N \otimes T^{*(1, 0)})$ whose associated norm is given by $$\| s \otimes dz \|^2_{h^N \otimes g} = \int_{\CP^1} (s, s)_{h^N} (dz, dz)_g\; d\nu. $$ Since $\nabla (p e^N) = e^N \otimes \partial p + N p\, e^N \otimes \alpha$, the kinetic energy is given in local coordinates by \begin{equation}\label{kic}\begin{array}{lll} \int_{\CP^1} |\nabla s|^2_{h^N \otimes g}d\nu : & = & \int_{\C} (e^N \otimes \partial p + N p e^N \otimes \alpha, e^N \otimes \partial p + N p e^N \otimes \alpha)_{h^N \otimes g} d\nu \\ && \\ & = & \int_{\C} \left( | \partial p |^2_g + N p (\alpha, \partial p)_g + N \bar{p} (\partial p, \alpha)_g + N^2 |p|^2 |\alpha|^2_g \right) e^{- N \phi} d\nu. \end{array}\end{equation} \subsection{Kinetic admissible $(h, \nu,\nabla)$} We now show that some natural choices of $(h, \nu,\nabla)$ are kinetic admissible. We first observe that $\frac{1}{N} \nabla$ is a bounded operator on $H^0(M, L^N)$, uniformly in $N$, for any positive line bundle $L$ over a projective \kahler manifold $M$, when the inner product is defined by a smooth volume form. This is a standard consequence of the Toeplitz calculus, but we provide a proof using the Boutet de Monvel-Sj\"ostrand parametrix for the \szego kernel. It is at this point that we need the assumption that $\omega_h > 0$.
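The integrand in (\ref{kic}) is the pointwise expansion of $|\partial p + N p\,\alpha|^2$; for the flat pairing $(dz,dz)_g \equiv 1$ this bookkeeping can be verified numerically on a grid (the test polynomial and connection coefficient below are arbitrary choices, not fixed by the text):

```python
import numpy as np

# grid in the chart C
n = 301
x = np.linspace(-2.0, 2.0, n)
X, Y = np.meshgrid(x, x)
z = X + 1j*Y

N = 5
p = z**2 + 1.0                           # arbitrary test polynomial p(z)
dp = 2*z                                 # its derivative
alpha = np.conj(z) / (1 + np.abs(z)**2)  # sample connection coefficient

# |dp + N p alpha|^2  ==  |dp|^2 + 2N Re(conj(p) conj(alpha) dp) + N^2 |p|^2 |alpha|^2
lhs = np.abs(dp + N*p*alpha)**2
rhs = (np.abs(dp)**2
       + 2*N*np.real(np.conj(p)*np.conj(alpha)*dp)
       + N**2*np.abs(p)**2*np.abs(alpha)**2)
assert np.allclose(lhs, rhs)
```
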
\begin{lem} \label{volume} Assume $h = e^{-\phi}$ is a Hermitian metric on a holomorphic line bundle $L \rightarrow M$ over a compact projective \kahler manifold with $\omega_h=\frac{i}{\pi}\partial \bar \partial \phi > 0$. Assume $d\nu$ is a smooth volume form and $g$ is a Riemannian metric over $M$. Then we have $$\|\nabla s\|_{(h^N \otimes g,\nu)}^2\leq C(h,g,\nu) N^2 \|s\|^2_{(h^N,\nu)} $$ for every holomorphic section $s$ of the line bundle $L^N$.\end{lem} \begin{proof} Let $\Pi_{N,\nu}: L^2(M,L^N)\rightarrow H^0(M,L^N)$ be the orthogonal projection with respect to the inner product $$\langle f_1\otimes e^N,f_2\otimes e^N \rangle=\int_M f_1\bar f_2 e^{-N\phi}d\nu.$$ Here, in the local frame, we write a section $s$ of the line bundle $L^N$ as $s=fe^N$. Let $\Pi_{N,\nu}(z,w)$ be its Schwartz kernel with respect to $d\nu$, $$(\Pi_{N,\nu} s)(z)=\int_M \Pi_{N,\nu}(z,w)f(w) e^{-N\phi(w)}d\nu(w).$$ Then the Bergman kernel has the parametrix \cite{BBS, BS} $$\Pi_{N,\nu}(z,w)=e^{N\phi(z\cdot w)}A_{N}e^N(z)\otimes \bar e^N(w),$$ where $A_N$ is a symbol of order $m = \dim M$ depending on $h$ and $\nu$, and where $\phi(z\cdot w)$ is the almost-analytic extension of $\phi(z)$. It follows that the Schwartz kernel of $\frac{1}{N} \nabla \Pi_{N, \nu}$ has the local form $$\begin{array}{l} \frac{1}{N}\nabla\Pi_{N,\nu}(z,w) = \left((\frac{1}{N} \partial +\partial\phi dz) e^{N\phi(z\cdot w)}A_{N}(z,w) \right)e^N(z) \otimes \bar{e}^N(w)\\ \\ =\left((\partial \phi+\partial_z \phi(z\cdot w) + \frac{1}{N} \partial \log A_N) e^{N\phi(z\cdot w)}A_{N}\right) e^N(z) \otimes \bar{e}^N(w). \end{array}$$ Put $\Phi(z,w):=\partial \phi+\partial_z \phi(z\cdot w) +\frac{1}{N}\partial \log A_N$. Denote by $\Phi \Pi_{N, \nu}$ the product of $\Phi$ and the Schwartz kernel of $\Pi_{N, \nu}$.
Then, $$\|\frac{1}{N}\nabla s\|_{(h^N\otimes g,\nu)}^2= \|\frac{1}{N}\nabla\Pi_{N,\nu} s\|_{(h^N\otimes g,\nu)}^2= \|(\Phi\Pi_{N,\nu}) s\|_{(h^N\otimes g,\nu)}^2.$$ We now claim that $$\|(\Phi\Pi_N)s\|_{L^2(h^N\otimes g,\nu)}\leq C \|s\|_{L^2(h^N,\nu)}. $$ This follows from the Schur-Young bound on the $L^2 \to L^2$ mapping norm of the integral operator $\Phi\Pi_N$, \begin{equation} \label{SY} \|\Phi\Pi_N\| \leq C \sup_{w \in M} \int_M |\Pi_{N,\nu}(z,w)|d\nu(z), \end{equation} since for any metric $g$ on $M$, $|\Phi|_{g}\leq C$ uniformly on $M$. To estimate the norm, we use the following known estimates on the Bergman kernel (see \cite{SZ4} for a similar estimate and for background): when $d(z,w)\leq CN^{-\frac{1}{3}}$, we have $$|\Pi_{N,\nu}(z,w)|_{h^N \otimes h^N}\leq C N^m e^{-\frac{1}{4}Nd^2(z,w)}+O(N^{-\infty}),$$ and in general, $$|\Pi_{N,\nu}(z,w)|\leq C N^m e^{-\lambda \sqrt{N}d{(z,w)}}$$ for some constants $C$ and $\lambda$. Since we assume $d\nu$ is a volume form on $M$, there exists a positive function $J\in C^\infty(M)$ such that $d\nu=J \omega_h^m$. We break up the right side of (\ref{SY}) into $$ \int_{d(z,w)\leq N^{-1/3}}+\int_{d(z,w)\geq N^{-1/3}}.$$ The first term is bounded as follows: $$\begin{array}{l} \leq CN^m \int_{d(z,w)\leq N^{-1/3}} e^{-\frac{1}{4}Nd^2(z,w)} J \omega^m(z) \\ \\ \leq C(h,\nu)N^m \int_{0}^\infty e^{-\frac{1}{4}N\rho^2}d \rho^{2m} +O(N^{-\infty})\leq C'(h,\nu).\end{array}$$ The second term is bounded as follows: $$\begin{array}{l} \leq CN^m \int_ {d(z,w)\geq N^{-1/3}}e^{-\lambda \sqrt{N}d(z,w)}J\omega^m \\ \\ \leq CN^m \int_{M}e^{-\lambda N^{\frac{1}{6}}} d\nu\leq O(N^{-\infty}) \end{array}$$ for $N$ large enough. Thus the operator norm of $\Phi\Pi_N$ is bounded by a constant $C'(h,g,\nu)$. \end{proof} \begin{rem} The assumption that $d\nu$ is a smooth volume form allows us to take the adjoint of $\nabla$. \end{rem} We now give a more general estimate. We assume again that $h=e^{-\phi}$ has positive curvature $\omega_h > 0$.
But we now relax the assumption that $d\nu$ is a smooth volume form, and only assume that $d\nu$ satisfies the $L^2$ condition: $$ \int_M |s|^2e^{-N\phi}\omega_h^m \leq C N^r\int_M |s|^2e^{-N\phi}d\nu$$ for any $s \in H^0(M, L^N)$ and for some $r\geq 0$. \begin{lem} \label{carl} Let $\dim M = m$. Under the above assumptions, we have $$\|\nabla s\|_{(h^N\otimes g,\nu)}^2\leq C N^{r+2m + 2} \|s\|^2_{(h^N,\nu)} $$ for every $s \in H^0(M, L^N)$. \end{lem} \begin{proof} First we consider the Bergman kernel $\Pi_{N,\omega_h}(z,w)$ with respect to the inner product defined by $(h^N, \omega_h^m)$: $$\Pi_{N,\omega_h} (f e^N)(z)=\int _M \Pi_{N,\omega_h}(z,w) f (w)e^{-N\phi(w)} \omega_h^m(w) .$$ As above, we write $\Phi(z,w)=\partial \phi+\partial_z \phi(z\cdot w) +\frac{1}{N}\partial \log A_N$. By the Cauchy-Schwarz inequality, we have (in an obvious notation) $$\begin{array}{l}\|\frac{1}{N}\nabla \Pi_{N,\omega_h} s\|^2_{L^2(h^N\otimes g, \nu)}\\ \\ \leq (\int_M |f|^2 e^{-N\phi} \omega_h^m)(\int_{M\times M} |\Phi|^2|\Pi_{N,\omega_h}|^2e^{-N\phi(w)-N\phi(z)} |dz|_g^2 \omega^m_h(w) d\nu(z)).\end{array}$$ Since $|\Phi\Pi_{N,\omega_h}|^2 |dz|_g^2 \leq CN^{2m}$ uniformly, this implies $$\begin{array}{lll}\|\nabla s\|^2_{L^2(h^N\otimes g, \nu)} &= &\|\nabla \Pi_{N,\omega_h} s\|^2_{L^2(h^N\otimes g, \nu)} \\ && \\ & \leq & C N^{2m + 2}\int_M |f|^2 e^{-N\phi} \omega_h^m \leq CN^{r+2 m + 2} \|s\|^2_{L^2(h^N, \nu)},\end{array} $$ under the $L^2$ condition. \end{proof} \subsection{Proof of Theorem \ref{POTENTIALKINETIC}.} We now prove Theorem \ref{POTENTIALKINETIC}. At first one might expect the kinetic term to dominate the action, since its square root is the $H^1_2(d\nu)$ norm of $s$ and since that norm cannot be bounded by the $L^p$ norm for any $p < \infty$, at least when $\nu$ is a smooth area form. However, we are only integrating over holomorphic sections of $\ocal(N)$, and with the admissibility assumption the ratios of all the relevant norms are bounded above and below by constants of at most polynomial growth in $N$.
Taking logarithmic asymptotics erases any essential difference between these norms. The main step in the proof is the following generalization of Proposition \ref{FSVOLZETA2intro}. \begin{prop} \label{FSVOLZETA2introb} Let $(\pcal_N, \gamma_N)$ be the $P(\phi)_2$ ensemble with action (\ref{S}), where $(h, \nabla, \nu)$ is kinetic admissible. Let $\vec K^N$ be the joint probability current (\ref{JPCDEF}). Then, \begin{eqnarray} \label{eq-030209b} \vec K^N(\zeta_1, \dots, \zeta_N) & = & \frac{\tilde{\Gamma}_N(\zeta_1, \dots, \zeta_N)}{\hat{Z}_N(h)} \frac{\exp \left( \sum_{i < j} G_{h}(\zeta_i, \zeta_j) \right) \prod_{j = 1}^N e^{- 2 N \phi(\zeta_j)} d^2 \zeta_j }{\left(\int_{\CP^1} e^{k N \int_{\CP^1} G_{h}(z,w) d\mu_{\zeta}} d\nu(z) \right)^{\frac{N+1}{k}}}, \end{eqnarray} where $$(**) \;\;\sup_{\{\zeta_1, \dots, \zeta_N\} \in (\CP^1)^{(N)}} \frac{1}{N^2} \log \tilde{\Gamma}_N (\zeta_1, \dots, \zeta_N) \to 0,$$ and where $\hat{Z}_N(h)$ is the normalizing constant in Proposition 3 of \cite{ZZ}. \end{prop} \begin{proof} We closely follow the proof of Proposition \ref{FSVOLZETA2intro}, and do not repeat the common steps. For the $P(\phi)_2$ measures (\ref{GL}) with kinetic term, \begin{equation} \label{JPCc} \begin{array}{lll} \dcal(a_0; \zeta_1, \dots, \zeta_N)& =& e^{- \int_{\CP^1} \left( |a_0|^2 |\nabla \prod_{j = 1}^N (z - \zeta_j)|^2_{h^N \otimes g} + P (|a_0|^2 | \prod_{j = 1}^N (z - \zeta_j)|^2_{h^N}) \right) d\nu(z)} \\ && \\ &= & e^{-(\alpha_k|a_0|^{2k}+\alpha_{k-1}c_{k - 1} |a_0|^{2k-2}+\cdots + \alpha_1 c_1 |a_0|^2 + \eta |a_0|^2)}, \end{array} \end{equation} where \begin{equation} \label{eta} \eta = |a_0|^{-2}\|\nabla s\|^2_{L^2(h^N\otimes g,\nu)}.
\end{equation} Thus, the addition of the kinetic term changes the pushed-forward probability density from (\ref{DCAL}) to $$\begin{array}{l} \int \dcal(a_0; \zeta_1, \dots, \zeta_N) |a_0|^{2 N} d^2a_0 \\ \\ = \int_{\mathbb{C}}e^{-(\alpha_k|a_0|^{2k}+\alpha_{k-1}c_{k - 1} |a_0|^{2k-2}+\cdots + c_1 \alpha_1 |a_0|^2 + \eta |a_0|^2)}|a_0|^{2N}da_0 \wedge d \bar a_0 \\ \\ = \int_{0}^\infty e^{-(\alpha_k\rho^{k}+\alpha_{k-1}c_{k - 1} \rho^{k-1}+\cdots + c_1 \alpha_1 \rho + \eta \rho)}\rho^{N}d\rho, \end{array}$$ where $\rho=|a_0|^2$ and $\alpha_i$ is defined by (\ref{alphaj}). We only need to understand the effect of the new $\eta$ term. We change variables $\rho \to \alpha_k^{-\frac{1}{k}} \rho$, as before, to get $$ \int \dcal(a_0; \zeta_1, \dots, \zeta_N) |a_0|^{2 N} d^2a_0 = \alpha _k^{-\frac{N+1}{k}} \tilde{\Gamma}_N(\zeta_1, \dots, \zeta_N), $$ where $$\tilde{\Gamma}_N(\zeta_1, \dots, \zeta_N) : = \int_{0}^\infty e^{-(\rho^{k}+\beta_{k-1}c_{k - 1} \rho^{k-1}+\cdots + c_1 \beta_1 \rho + \frac{ \eta}{\alpha_k^{\frac{1}{k}}} \rho)}\rho^{N}d\rho.$$ This is the same expression as in Proposition \ref{FSVOLZETA2intro} except that the $\Gamma_N$ factor has changed. Hence to prove (**), it suffices to prove $$\frac{1}{N^2}\log \int_{0}^\infty e^{-(\rho^{k}+\beta_{k-1}c_{k - 1} \rho^{k-1}+\cdots + c_1 \beta_1 \rho + \frac{ \eta}{\alpha_k^{\frac{1}{k}}} \rho)}\rho^{N}d\rho \rightarrow 0.$$ We first prove that the limsup is at most $0$.
Since the addition of the positive term $\eta \alpha_k^{-\frac{1}{k}} \rho$ increases the exponent, we have $$\begin{array}{l}\frac{1}{N^2}\log \int_{0}^\infty e^{-(\rho^{k}+\beta_{k-1}c_{k - 1} \rho^{k-1}+\cdots + c_1 \beta_1 \rho + \frac{ \eta}{\alpha_k^{\frac{1}{k}}} \rho)}\rho^{N}d\rho \\ \\ \leq \frac{1}{N^2}\log \int_{0}^\infty e^{-(\rho^{k}+\beta_{k-1}c_{k - 1} \rho^{k-1}+\cdots + c_1 \beta_1 \rho )}\rho^{N}d\rho, \end{array}$$ so the integral is bounded above by its analogue in the pure potential case, and it follows from the proof in Section \ref{without} that the last expression tends to $0$. We now consider the lower bound. By Lemmas \ref{volume} and \ref{carl} (with $m = 1$) and by the H\"older inequality, we have $$\eta\leq CN^n|a_0|^{-2}\|s\|^2_{L^2(h^N,\nu)}\leq CN^n \alpha_k^{\frac{1}{k}},$$ in the cases $n=2$ with $\nu$ a smooth volume form, or $n\geq 4$ when $\nu$ satisfies the weighted $L^2$ Bernstein inequality (\ref{BERN}). We then have $$\begin{array}{l} \frac{1}{N^2}\log \int_{0}^\infty e^{-(\rho^{k}+\beta_{k-1}c_{k - 1} \rho^{k-1}+\cdots + c_1 \beta_1 \rho + \frac{ \eta}{\alpha_k^{\frac{1}{k}}} \rho)}\rho^{N}d\rho\\ \\ \geq \frac{1}{N^2}\log \int_{0}^\infty e^{-(\rho^{k}+\beta_{k-1}c_{k - 1} \rho^{k-1}+\cdots + c_1 \beta_1 \rho + CN^n \rho)}\rho^{N}d\rho \\ \\ \geq \frac{1}{N^2}\log \int_{0}^\infty e^{-(\rho^{k}+\beta_{k-1}|c_{k - 1}| \rho^{k-1}+\cdots + |c_1| \beta_1 \rho + CN^n \rho)}\rho^{N}d\rho. \end{array}$$ Hence, it suffices to prove that $$\frac{1}{N^2}\log \int_{0}^\infty e^{-(\rho^{k}+\beta_{k-1}|c_{k - 1}| \rho^{k-1}+\cdots + |c_1| \beta_1 \rho + CN^n \rho)}\rho^{N}d\rho \rightarrow 0, $$ which we show using the steepest descent method. The maximum of the phase function occurs when $$k\rho_N^{k}+(k-1)\beta_{k-1}|c_{k - 1}| \rho_N^{k-1}+\cdots + |c_1| \beta_1 \rho_N + CN^n \rho_N =N.$$ It follows first that $\rho_N \leq \frac{1}{CN^{ n-1}}<1$.
Thus $$\begin{array}{lll}N&=&k\rho_N^{k}+(k-1)\beta_{k-1}|c_{k - 1}| \rho_N^{k-1}+\cdots + |c_1| \beta_1 \rho_N + CN^n \rho_N\\ &&\\ &\leq & k\rho_N+(k-1)\beta_{k-1}|c_{k - 1}| \rho_N+\cdots + |c_1| \beta_1 \rho_N + CN^n \rho_N, \end{array}$$ which implies $$\rho_N \geq \frac{N}{C(k,c_{k-1},\cdots,c_1)+CN^n},$$ and therefore $$\rho_N \sim \frac{1}{C'N^{n-1}}$$ as $N \to \infty$. Thus, by the steepest descent formula, $$\begin{array}{l}\frac{1}{N^2}\log \int_{0}^\infty e^{-(\rho^{k}+\beta_{k-1}|c_{k - 1}| \rho^{k-1}+\cdots + |c_1| \beta_1 \rho + CN^n \rho)}\rho^{N}d\rho \\ \\ \sim \frac{1}{N}\log\rho_N-\frac{1}{N^2}( \rho_N^{k}+\beta_{k-1}|c_{k - 1}| \rho^{k-1}_N+\cdots + |c_1| \beta_1 \rho_N + CN^n \rho_N)\\ \\ \sim -\frac{(n-1)\log (C'N)}{N}-\frac{1}{N^2}((\frac{1}{C'N^{n-1}})^k+\cdots+\beta_1|c_1|\frac{1}{C'N^{n-1}})-C\frac{1}{C'N}, \end{array}$$ which goes to $0$ as $N\rightarrow \infty$, and (**) holds. This completes the proof of Proposition \ref{FSVOLZETA2introb}. \end{proof} The rest of the argument proceeds exactly as in \S \ref{COMPLETE}, completing the proof of Theorem \ref{POTENTIALKINETIC}.
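The steepest-descent estimate above can be probed numerically, working in log-space to avoid overflow; with the hypothetical values $k=2$, $n=2$, $C=1$ the scaled log-integral visibly shrinks toward $0$:

```python
import numpy as np

k, n, C = 2, 2, 1.0

def scaled_log_integral(N):
    # (1/N^2) log of int_0^inf e^{-(rho^k + C N^n rho)} rho^N drho,
    # evaluated via a log-sum-exp Riemann sum
    rho = np.linspace(1e-12, 5.0, 400001)
    logf = N*np.log(rho) - (rho**k + C*N**n*rho)
    m = logf.max()
    return (m + np.log(np.sum(np.exp(logf - m)) * (rho[1] - rho[0]))) / N**2

vals = [abs(scaled_log_integral(N)) for N in (20, 80, 320)]
assert vals[0] > vals[1] > vals[2]   # decreasing toward 0
```
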
TITLE: Find direction of tangent vector based on trajectory along circle QUESTION [1 upvotes]: A particle travels along an arc (in green) from A = $(x_A, y_A)$ to B = $(x_B, y_B)$. The arc is on a circle defined by its center C = $(x_C, y_C)$ and its radius r. The vector u points from C to A and the vector v points from C to B. The goal is to find the direction vectors at the beginning (point A) and at the end (point B) of the trajectory. It is easy to find the gradient m of the tangent line at point A from the gradient n of the radius from C to A, using the fact that the radius and the tangent line are perpendicular and hence that the product of m and n is equal to -1. For instance, the equation of the tangent line at point A is $y = m(x - x_A) + y_A$. However, I do not need a line, but rather a vector pointing toward the direction of motion at the beginning of the trajectory. Knowing the equation of the tangent line is not enough to determine the direction of this vector. The data is from real measurements and hence all possible scenarios are present. The angle $\alpha$ between u and v can be large, as shown in the figure, or, on the contrary, very small, and in addition, it can run in the clockwise (as in the figure) or in the anti-clockwise direction. The only information I have is the equation of the circle and the coordinates of the many trajectory points that form the arc along the circle. Many thanks. This figure may help understand the problem. REPLY [0 votes]: As you say, the tangent vector to a circle is always perpendicular to the radial vector. Since you know the radial vectors at the two points, finding the corresponding direction vectors is a simple matter of rotating the radial vectors 90° in the appropriate direction: the same direction in which you're measuring the angle of arc.
Assuming the standard mathematical convention of angles measured counterclockwise being positive, if the radial vector from $C$ to a point has coordinates $(a,b)$, then the direction vector at that point is $(-b,a)$ for a counterclockwise motion along the arc and $(b,-a)$ for a clockwise motion. Adjust the length of this vector as desired. If you don’t happen to have the direction of motion handy, you can determine it from a pair of points near each other along the path. Let $(x_1,y_1)$ and $(x_2,y_2)$ be the coordinates of the radial vectors from $C$ to these two points, with the latter point being reached later in time. Compute the cross product $$(x_1,y_1,0)\times(x_2,y_2,0) = (0,0,x_1y_2-x_2y_1).$$ By the right-hand rule, the sign of the last coordinate of the result will give you the direction of motion: positive for counterclockwise, negative for clockwise. You want the two points to be near each other so that the direction is computed correctly. If they’re too far apart, the direction computed by the cross product will be the reverse of the actual direction of motion, as would happen if you used the two end points in your illustration.
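A compact implementation of this recipe (rotate the radial vector by 90° in the direction of motion, with that direction inferred from the cross-product sign of two nearby radial vectors):

```python
import math

def is_ccw(cx, cy, x1, y1, x2, y2):
    # sign of the z-component of the cross product of the two radial vectors
    return (x1 - cx)*(y2 - cy) - (x2 - cx)*(y1 - cy) > 0

def tangent_direction(cx, cy, px, py, ccw):
    # unit tangent at point (px, py) on a circle centered at (cx, cy)
    a, b = px - cx, py - cy            # radial vector
    tx, ty = (-b, a) if ccw else (b, -a)
    r = math.hypot(tx, ty)
    return tx / r, ty / r

# motion on the unit circle from (1, 0) toward (cos 0.1, sin 0.1): counterclockwise
ccw = is_ccw(0, 0, 1, 0, math.cos(0.1), math.sin(0.1))
print(tangent_direction(0, 0, 1, 0, ccw))   # → (0.0, 1.0)
```
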
TITLE: Conditional Probability QUESTION [0 upvotes]: 100 coins with two sides (head and tail) 20 coins are fair (50% of getting head and 50% of getting tail) 80 coins are biased (70% of getting head and 30% of getting tail) What is the probability of getting a head if we throw a randomly chosen coin from the 100 coins once I said .2*.5 + .8*.7 = .66 Now, given that we got a head, what is the conditional probability that the coin we threw was biased? This is tricky for me as to how to set it up. I understand that the formula is P(A|B) = P(A intersect B)/ P(B) but I'm not sure how to get P(A intersect B) or P(B) or how to assign those REPLY [1 votes]: Let $A$ be the event that the coin you chose is biased and $B$ the event that the coin shows heads. Then $A\cap B$ is the event that your chosen coin is biased and shows heads, i.e. $\mathbb{P}(A\cap B) = 0.8 * 0.7$ and, as you correctly calculated, $\mathbb{P}(B) = 0.66$. Therefore $\mathbb{P}(A|B) = \frac{\mathbb{P}(A\cap B)}{\mathbb{P}(B)} \approx 0.85.$
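The whole computation, written out:

```python
p_fair, p_biased = 0.2, 0.8
p_h_fair, p_h_biased = 0.5, 0.7

p_heads = p_fair*p_h_fair + p_biased*p_h_biased        # P(B)
p_biased_and_heads = p_biased*p_h_biased               # P(A ∩ B)
p_biased_given_heads = p_biased_and_heads / p_heads    # P(A | B)

print(round(p_heads, 2), round(p_biased_given_heads, 2))  # → 0.66 0.85
```
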
TITLE: Find maximum and minimum argument of $f(z)=(z^4-1)/4$ where $z$ belongs to square with vertices $\{1,i,-1,-i\}$ QUESTION [0 upvotes]: let z be a complex number with |z|<1, now how to find the max and min argument of $(z^4-1)/4$ This original problem given below comes from kheavan.files.wordpress.com/2011/10/mathematical-olympiads-1997-1998-problems-solutions-from-around-the-world-maa-problem-book-225p-b002kypabi.pdf (question number 1 section 1.9) Do read the solution to understand what I am really asking. Let P be a point inside or on the sides of a square ABCD. Determine the minimum and maximum possible values of f(P) = ∠ABP + ∠BCP + ∠CDP + ∠DAP. I want the answer to my question, not to the original problem REPLY [1 votes]: Your function $$f(z)=(z^4-1)/4$$ can be considered as the composition of 3 functions/transforms: $$z \to z^4=u \mapsto u-1=v \mapsto v/4$$ the two last transforms are a translation and a homothety ("shrinking" from the origin with ratio 4:1). The closed unit disk $C((0,0),1)$ is sent onto itself by the first transform (each point being reached 4 times). Then it is translated to $C((-1,0),1)$, and finally shrunk to $C((-\tfrac14,0),\tfrac14)$ (each point again being reached 4 times, as can be understood by looking at the second figure). Edit: I have had a look at the original question, with which I had some difficulty like you had. I understand it properly now with the aid of a figure. This figure represents the initial square with circumscribed unit circle, and their images: no surprise for the image of the circle, but one couldn't really expect that the image of the square is a kind of droplet with limiting angles $3 \pi/4$ and $5 \pi/4$. All the vertices of the square are mapped onto the apex of the drop: maybe you know that a complex-analytic transformation preserves angles (where the derivative is nonzero): the image of a $\pi/2$ angle remains a $\pi/2$ angle... Fig. 1: The image by $f$ 1) of the unit circle is the small circle 2) of the square is the shape looking like a drop.
See Fig. 2. Matlab program for the generation of this figure:

    clear all; close all; hold on; axis equal
    t = 0:0.01:1;
    f = @(z) ((z.^4-1)/4);
    for k = 1:4
        z = (i^k)*(t+(1-t)*i); plot(z,'b'); plot(f(z),'b'); % the 4 sides of the square
    end
    z = exp(2*pi*i*t); plot(z,'r'); plot(f(z),'r')

In order to understand the mapping, here is a complementary representation displaying arrows joining $z$ to $f(z)$ for different values of $z$, either at the boundary of a quarter of circle (green arrows) or of a side of the square (blue arrows); please note for example that the midpoint of a side (any side in fact) of the square is mapped onto the bottom of the drop. Fig. 2: Some examples of points $z$ and their resp. images $f(z)$ connected by an arrow. A quarter of the unit circle or a single side of the square is enough to define the little circle and the drop.
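The claimed image of the unit circle, the circle $C((-\tfrac14,0),\tfrac14)$, can be confirmed in a couple of lines (here in Python rather than Matlab):

```python
import numpy as np

t = np.linspace(0.0, 2*np.pi, 1000)
z = np.exp(1j*t)                 # the unit circle
w = (z**4 - 1) / 4               # f(z)

# every image point is at distance 1/4 from the center (-1/4, 0)
assert np.allclose(np.abs(w + 0.25), 0.25)
```
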
\section{Proofs of Section~\ref{jc}} \subsection{Proof of Lemma~\ref{pk}} \label{wx} Fix $X,$ and let $W$ be its orthogonal projection onto the closed subspace $\calV.$ Consider the symmetric matrix $\bM=\BE[\bV \bV^T].$ Since $\bc^T \bM \bc = \BE[|\bc^T \bV|^2]$ for each $\bc\in \BR^{n+1},$ linear independence of the $V_j$ implies that $\bM$ is positive-definite. Let $\bM^{1/2}$ be the unique lower-triangular matrix with positive diagonal such that $\bM^{1/2} \left(\bM^{1/2}\right)^{T} = \bM,$ and denote its inverse by $\bM^{-1/2}.$ Let $(U_0,\cdots,U_n)^T = \bU = \bM^{-1/2}\bV.$ Then, $\{U_j\}_{j\in [n]} \subset \calV$ and $\BE[\bU\bU^T] = \bI_{n+1}.$ Therefore, $\{U_j\}_{j\in [n]}$ is an orthonormal basis of $\calV$; indeed, it is the output of Gram-Schmidt orthonormalization on $\{V_j\}_{j\in [n]}.$ Then, because $W$ is the orthogonal projection of $X$ onto $\calV,$ we can express $W$ as \begin{equation} \label{pl} W = \sum_{j\in [n]} \BE[XU_j] ~ U_j = \BE[X\bU^T] \bU. \end{equation} Plugging the defining formulas of $\bU$ and $\bM$ into \eqref{pl}, \begin{equation} W = \BE\left[ X \bV \right]^T \BE\left[ \bV \bV^T \right]^{-1} \bV. \end{equation} Finally, being the orthogonal projection of $X$ onto $\calV,$ $W$ is the unique closest element in $\calV$ to $X$; hence, equation \eqref{pj} follows. \subsection{Proof of Corollary~\ref{pd}} \label{wy} Fix $q\ge 1.$ For each $j,$ Carleman's condition on $Y_j$ yields that the set of polynomials in $Y_j,$ i.e., $\bigcup_{n\in \BN} \SP_n(Y_j),$ is dense in $L^{2q}(\sigma(Y_j)).$ Therefore, by Theorem \ref{pe}, \begin{equation} \overline{\bigcup_{n\in \BN} \SP_{n,m}(\bY)} = L^q(\sigma(\bY)). \end{equation} Now, fix $(f_1,\cdots,f_m)^T=\bff \in L^{q}(\BR^m,\sigma(\bY)).$ For each $j,$ $f_j \in L^{q}(\sigma(\bY)).$ Hence, there is a sequence $\{g_{j,n} \}_{n\in \BN}$ with $g_{j,n} \in \SP_{n,m}(\bY)$ for each $n,$ such that $f_j = \lim_{n\to \infty} g_{j,n}$ in $L^{q}(\sigma(\bY))$-norm.
Set $\bg_n = (g_{1,n},\cdots,g_{m,n})^T,$ and note that $\bg_n \in \SP_{n,m}^m(\bY).$ By definition of the norm in $L^q(\BR^m,\sigma(\bY)),$ we deduce \begin{equation} \lim_{n\to \infty} \|\bff - \bg_n \|_q^q = \lim_{n\to \infty} \sum_{j=1}^m \|f_j - g_{j,n}\|_q^q = 0, \end{equation} and the desired denseness result follows. \subsection{Proof of Theorem~\ref{wu}} \label{xa} Since the $Y_j$ do not satisfy a polynomial relation, the matrix $\bM_{\bY,n}$ is invertible for each $n\in \BN.$ Further, the entries of $\bY^{(n,m)}$ are linearly independent for each $n.$ Then, by Lemma~\ref{pk}, equation~\eqref{wz} follows, i.e., $E_n[\bX\mid \bY]$ is the $\ell$-RV whose $k$-th entry is $\BE\left[X_k \bY^{(n,m)}\right]^T \bM_{\bY,n}^{-1} \bY^{(n,m)}.$ By Corollary~\ref{pd}, since each $Y_j$ satisfies Carleman's condition, the set of vectors of polynomials $\bigcup_{n\in \BN} \SP_{n,m}^m(\bY)$ is dense in $L^2(\BR^m,\sigma(\bY)).$ In particular, $\bigcup_{n\in \BN} \SP_{n,m}(\bY)$ is dense in $L^2(\sigma(\bY)).$ By Theorem~\ref{pi}, we have the $L^2(\sigma(\bY))$ limits \begin{equation} \BE[X_k\mid \bY] = \lim_{n\to \infty} \BE\left[X_k \bY^{(n,m)}\right]^T \bM_{\bY,n}^{-1} \bY^{(n,m)} \end{equation} for each $k\in \{1,\cdots,\ell\}.$ We conclude that $E_n[\bX\mid \bY] \to \BE[\bX \mid \bY]$ in $L^2(\BR^\ell,\sigma(\bY)),$ as desired. \subsection{Proof of Proposition~\ref{xb}} \label{xc} Set $\bY = (Y_1,Y_2)^T.$ Equation \eqref{pm} is straightforward: since $E_n[X \mid Y_1] \in \SP_n(Y_1) \subset \SP_{n,2}(\bY),$ the projection of $E_n[X \mid Y_1]$ onto $\SP_{n,2}(\bY)$ is $E_n[X \mid Y_1]$ again. Equation \eqref{pn} also follows by an orthogonal projection argument. 
There is a unique representation $X=p_{1,2}+p_{1,2}^{\perp}$ for $(p_{1,2},p_{1,2}^{\perp}) \in \SP_{n,2}(\bY)\times \SP_{n,2}(\bY)^{\perp}.$ There is also a unique representation $p_{1,2}=q_2 + q_2^{\perp}$ for $(q_2,q_2^{\perp})\in \SP_n(Y_2)\times \SP_n(Y_2)^{\perp}.$ The projection of $X$ onto $\SP_{n,2}(\bY)$ is $p_{1,2},$ whose projection onto $\SP_n(Y_2)$ is $q_2,$ i.e., \begin{equation} \label{po} E_n\left[ E_n[X \mid Y_1,Y_2] \mid Y_2 \right] = q_2. \end{equation} Furthermore, we have the representation $X=q_2+(q_2^{\perp}+p_{1,2}^{\perp}),$ for which $(q_2,q_2^{\perp}+p_{1,2}^{\perp}) \in \SP_n(Y_2)\times \SP_n(Y_2)^{\perp}.$ Hence, the projection of $X$ onto $\SP_n(Y_2)$ is $q_2$ too, i.e., \begin{equation} \label{pp} E_n[X\mid Y_2] = q_2. \end{equation} From \eqref{po} and \eqref{pp} we get \eqref{pn}. Equation \eqref{pn} can also be deduced from the explicit formula for $W:=E_n[X \mid \bY].$ Denote $\bY_2^{(n)} = (1,Y_2,\cdots,Y_2^n)^T.$ We have that \begin{equation} \label{pq} W = \BE\left[ X \bY^{(n,2)} \right]^T \bM_{\bY,n}^{-1} \bY^{(n,2)} \end{equation} and \begin{equation} \label{pr} E_n[W \mid Y_2] = \BE\left[ W \bY_2^{(n)} \right]^T \bM_{Y_2,n}^{-1} \bY_2^{(n)}. \end{equation} For $k\in [n],$ let $\delta(k)\in \left[ \binom{n+2}{2} - 1 \right]$ be the index of the entry in $\bY^{(n,2)}$ that equals $Y_2^k.$ Then, \begin{equation} \BE\left[ Y_2^k \bY^{(n,2)} \right] = \bM_{\bY,n} \be_{\delta(k)}, \end{equation} where $\be_0,\cdots,\be_{\binom{n+2}{2}-1}$ are the standard basis vectors of $\BR^{\binom{n+2}{2}}.$ Therefore, plugging \eqref{pq} into \eqref{pr}, we obtain \begin{equation} E_n[W \mid Y_2] = \BE\left[ X \bY_2^{(n)} \right]^T \bM_{Y_2,n}^{-1} \bY_2^{(n)}, \end{equation} which is just $E_n[X \mid Y_2],$ as desired.
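The closed-form projection used throughout these proofs, $E_n[X\mid \bY] = \BE[X\bY^{(n,m)}]^T \bM_{\bY,n}^{-1} \bY^{(n,m)}$, is easy to check numerically. Below is a minimal Monte Carlo sketch (illustrative only: sample averages stand in for the exact expectations, and the helper names are ours, not the paper's):

```python
import numpy as np
from itertools import product

def poly_features(y, n):
    """All monomials y_1^a_1 * ... * y_m^a_m of total degree <= n, for samples y of shape (N, m)."""
    exps = [e for e in product(range(n + 1), repeat=y.shape[1]) if sum(e) <= n]
    return np.column_stack([np.prod(y ** np.array(e), axis=1) for e in exps])

def En(x, y, n):
    """Sample version of E_n[X|Y]: L^2 projection of x onto polynomials in y of degree <= n."""
    V = poly_features(y, n)            # entries of Y^(n,m), one column per monomial
    M = V.T @ V / len(x)               # sample moment matrix E[V V^T]
    b = V.T @ x / len(x)               # sample E[X V]
    return V @ np.linalg.solve(M, b)   # E[X V]^T M^{-1} V, evaluated at each sample

rng = np.random.default_rng(0)
y = rng.normal(size=(50_000, 1))
x = y[:, 0] ** 2 + 0.1 * rng.normal(size=50_000)      # here E[X|Y] = Y^2 exactly
approx = En(x, y, 2)
err = np.sqrt(np.mean((approx - y[:, 0] ** 2) ** 2))  # small: projection recovers Y^2
```

With $X = Y^2 + {}$noise, the degree-$2$ projection recovers $Y^2$ up to sampling error, as the moment-matrix formula predicts.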
TITLE: Show that there exists a $c\in \mathbb{C}$ such that $f(z)=c \bar{z}$ for all $\vert z\vert=1$. QUESTION [2 upvotes]: The problem is: Suppose that $f(z)$ is continuous on a domain $D$ that contains the unit circle, and that $f(z)$ satisfies: $$\vert f(e^{i\theta})\vert \leq M \; \forall \theta \, \in [0, 2\pi )$$ and $$\bigg \vert \int _{\vert z \vert=1} f(z) \text{d}z \bigg \vert =2\pi M $$ show that there exists a $c\in \mathbb{C}$ such that $f(z)=c \bar{z}$ for all $\vert z\vert=1$. I've tried lots of different approaches such as defining a function $g(z)=\frac{f(z)}{\bar{z}}$ and trying to show that $g'(z)=0$ so that $g$ is constant on the circle, or trying to use some bounding theorems for analytic functions to show that $g$ is constant but this requires $g$ to be analytic. Some other methods were using the Cauchy-Riemann equations but I couldn't find any reason why $g$ should be analytic. Any help would be greatly appreciated, thanks! REPLY [2 votes]: The two conditions imply that $|f(z)|=M$ for all $|z|=1$, because if the inequality is strict on some $\theta$, then it will be so in an interval, and that prevents the equality in the integral (note that $2\pi M$ is an upper bound for the integral). So $f(e^{i\theta})=Me^{ig(\theta)}$ for some continuous $g$. Then $$ \int_{|z|=1}f(z)dz=M\int_{0}^{2\pi}e^{ig(\theta)}\,ie^{i\theta}\,d\theta=iM\int_0^{2\pi}e^{i(\theta+g(\theta))}\,d\theta. $$ So $$ \left|\int_0^{2\pi}e^{i(\theta+g(\theta))}\,d\theta\right|=2\pi, $$ and then $$ 2\pi=\int_0^{2\pi}e^{i(\theta+g(\theta)+d)}\,d\theta $$ for an appropriate $d$. This last equality shows that the integral of the imaginary part is zero, and the integral of the real part is $2\pi$. If at any point the real part were less than $1$, we would not achieve the $2\pi$: we deduce that the real part of $e^{i(\theta+g(\theta)+d)}$ is $1$, which implies $$ e^{i(\theta+g(\theta)+d)}=1 $$ for all $\theta$. 
This implies that $\theta+g(\theta)+d \in 2\pi\mathbb{Z}$ for every $\theta$, i.e. $g(\theta)=2k(\theta)\pi-\theta-d$ with $k(\theta)$ an integer, and so $$ f(e^{i\theta})=Me^{ig(\theta)}=Me^{-i\theta-id}=Me^{-id}\,e^{-i\theta}, $$ i.e. $$ f(z)=c\overline z $$ where $c=Me^{-id}$.
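For completeness, the extremal case is easy to verify numerically: with $f(z)=c\bar z$ and $|c|=M$, the contour integral has modulus exactly $2\pi M$. A quick sketch (a plain Riemann sum on the parametrized circle; the integrand $f(z)\,dz = c\,e^{-i\theta}\,ie^{i\theta}\,d\theta = ic\,d\theta$ is constant, so the sum is exact up to floating point):

```python
import numpy as np

# Check: for f(z) = c*conj(z) with |c| = M, |∮_{|z|=1} f(z) dz| = 2*pi*M.
M = 3.0
c = M * np.exp(-1j * 0.7)                    # any c with |c| = M
theta = np.linspace(0.0, 2.0 * np.pi, 100_001)
dtheta = theta[1] - theta[0]
z = np.exp(1j * theta)
integrand = c * np.conj(z) * 1j * z          # f(z) dz/dtheta, with dz = i e^{i theta} dtheta
integral = np.sum(integrand[:-1]) * dtheta   # left Riemann sum; equals 2*pi*i*c
```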
\begin{document} \title [Stable broken {${\boldsymbol H}(\ccurl)$} polynomial extensions \& $\MakeLowercase{p}$-robust \revision{broken} equilibration] {Stable broken {${\boldsymbol H}(\MakeLowercase{\ccurl})$} polynomial extensions\\ and $\MakeLowercase{p}$-robust a posteriori error estimates \revision{by broken patchwise equilibration} for \revision{the curl--curl problem}$^\star$} \author{T. Chaumont-Frelet$^{1,2}$} \author{A. Ern$^{3,4}$} \author{M. Vohral\'ik$^{4,3}$} \address{\vspace{-.5cm}} \address{\noindent \tiny \textup{$^\star$This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020}} \address{\noindent \tiny \textup{\hspace{.2cm}research and innovation program (grant agreement No 647134 GATIPOR).}} \address{\noindent \tiny \textup{$^1$Inria, 2004 Route des Lucioles, 06902 Valbonne, France}} \address{\noindent \tiny \textup{$^2$Laboratoire J.A. Dieudonn\'e, Parc Valrose, 28 Avenue Valrose, 06108 Nice Cedex 02, 06000 Nice, France}} \address{\noindent \tiny \textup{$^3$Universit\'e Paris-Est, CERMICS (ENPC), 6 et 8 av. Blaise Pascal 77455 Marne la Vall\'ee cedex 2, France}} \address{\noindent \tiny \textup{$^4$Inria, 2 rue Simone Iff, 75589 Paris, France}} \date{} \begin{abstract} We study extensions of piecewise polynomial data prescribed in a patch of tetrahedra sharing an edge. We show stability in the sense that the minimizers over piecewise polynomial spaces with prescribed tangential component jumps across faces and prescribed piecewise curl in elements are subordinate in the broken energy norm to the minimizers over the \revision{broken} $\HH(\ccurl)$ space with the same prescriptions. Our proofs are constructive and yield constants independent of the polynomial degree. We then detail the application of this result to the a posteriori error analysis of \revision{the curl--curl problem} discretized with N\'ed\'elec finite elements of arbitrary order. 
The resulting estimators are \revision{reliable,} locally efficient, polynomial-degree-robust, and inexpensive. \revision{They are constructed by a \revision{broken patchwise} equilibration which, in particular, does not produce a globally $\HH(\ccurl)$-conforming flux. The equilibration is only related to} edge patches and can be realized without solutions of patch problems by a sweep through tetrahedra around every mesh edge. The error estimates become guaranteed when the regularity pick-up constant is explicitly known. Numerical experiments illustrate the theoretical findings. \noindent {\sc Key Words.} A posteriori error estimates; Finite element methods; \revision{Electromagnetics}; High order methods. \noindent {\sc AMS subject classification.} Primary 65N30, 78M10, 65N15. \end{abstract} \maketitle \section{Introduction} \label{sec_int} The so-called N\'ed\'elec or also edge element spaces of~\cite{Ned_mix_R_3_80} form, on meshes consisting of tetrahedra, the most natural piecewise polynomial subspace of the space $\HH(\ccurl)$ composed of square-integrable fields with square-integrable weak curl. They are instrumental in numerous applications in link with electromagnetism, see for example~\cite{Ass_Ciar_Lab_foundat_electr_18,Boss_elctr_98,Hiptmair_acta_numer_2002,Monk_FEs_Maxwell_03}. The goal of this paper is to study two different but connected questions related to these spaces. \subsection{Stable broken {\em H}$(\ccurl)$ polynomial extensions} Polynomial extension operators are an essential tool in numerical analysis involving N\'ed\'elec spaces, in particular in the case of high-order discretizations. Let $K$ be a tetrahedron. 
Then, given a boundary datum in the form of a suitable polynomial on each face of $K$, satisfying some compatibility conditions, a {\em polynomial extension} operator constructs a curl-free polynomial in the interior of the tetrahedron $K$ whose tangential trace fits the {\em boundary datum} and which is stable with respect to the datum in the intrinsic norm. Such an operator was derived in~\cite{Demk_Gop_Sch_ext_II_09}, as a part of equivalent developments in the $H^1$ and $\HH(\ddiv)$ spaces respectively in~\cite{Demk_Gop_Sch_ext_I_09} and~\cite{Demk_Gop_Sch_ext_III_12}, see also~\cite{MunSol_pol_lift_97} and the references therein. An important achievement extending in a similar stable way a given polynomial {\em volume datum} to a polynomial with curl given by this datum in a single simplex, along with a similar result in the $H^1$ and $\HH(\ddiv)$ settings, was presented in~\cite{Cost_McInt_Bog_Poinc_10}. The above results were then combined together and extended from a single simplex to a patch of simplices sharing the given vertex in several cases: in $\HH(\ddiv)$ in two space dimensions in~\cite{Brae_Pill_Sch_p_rob_09} and in $H^1$ and $\HH(\ddiv)$ in three space dimensions in~\cite{Ern_Voh_p_rob_3D_20}. These results have important applications to a posteriori analysis but also to localization and optimal $hp$ estimates in a priori analysis, see~\cite{Ern_Gud_Sme_Voh_loc_glob_div_21}. To the best of our knowledge, a similar patchwise result in the $\HH(\ccurl)$ setting is not available yet, and it is our goal to establish it here. We achieve it in our first main result, Theorem~\ref{theorem_stability}, see also the equivalent form in Proposition~\ref{prop_stability_patch} \revision{and the construction in Theorem~\ref{thm_sweep}}. Let $\TTe$ be a patch of tetrahedra sharing a given edge $\edge$ from a shape-regular mesh $\TT_h$ and let $\ome$ be the corresponding patch subdomain. Let $p\ge0$ be a polynomial degree. 
\revision{Let $\jj_p \in \RT_p(\TTe) \cap \HH(\ddiv,\ome)$ with $\div \jj_p = 0$ be a divergence-free Raviart--Thomas field, and let $\ch_p$ be in the broken N\'ed\'elec space $\NN_p(\TTe)$.} In this work, we establish that \begin{equation} \label{eq_BPE} \min_{\substack{\vv_p \in \NN_p(\TTe) \cap \HH(\ccurl,\ome) \\ \curl \vv_p = \jj_p}} \|\ch_p - \vv_p\|_\ome \leq C \min_{\substack{\vv \in \HH(\ccurl,\ome) \\ \curl \vv = \jj_p}} \|\ch_p - \vv\|_\ome, \end{equation} which means that the {\em discrete} constrained {\em best-approximation error} in the patch is subordinate to the {\em continuous} constrained best-approximation error up to a constant $C$. Importantly, $C$ only depends on the shape-regularity of the edge patch and does {\em not depend} on the {\em polynomial degree $p$} under consideration. Our proofs are constructive, which has a particular application in a posteriori error analysis, as we discuss now. \subsection{$p$-robust a posteriori error estimates \revision{by broken patchwise equilibration} for \revision{the curl--curl problem}} \label{sec_a_post_intr} Let $\Omega \subset \mathbb R^3$ be a Lipschitz polyhedral domain with unit outward normal $\nn$. Let $\GD,\GN$ be two disjoint, open, possibly empty subsets of $\partial \Omega$ such that $\partial \Omega = \overline \GD \cup \overline \GN$. Given a divergence-free field $\jj: \Omega \to \mathbb R^3$ \revision{with zero normal trace on $\GN$}, \revision{the curl--curl problem} amounts to seeking a field $\ee: \Omega \to \mathbb R^3$ satisfying \begin{subequations} \label{eq_maxwell_strong} \begin{alignat}{2} \label{eq_maxwell_strong_volume} &\curl \curl \ee = \jj, \quad \div \ee = 0,&\qquad&\text{in $\Omega$}, \\ &\ee \times \nn = \bzero,&\qquad&\text{on $\GD$},\\ &(\curl \ee) \times \nn = \bzero,\quad \ee \cdot \nn = 0,&\qquad&\text{on $\GN$}. \end{alignat} Note that $\ee \times \nn = 0$ implies that $(\curl \ee) \cdot \nn=0$ on $\GD$. 
\revision{When $\Omega$ is not simply connected and/or when $\GD$ is not connected, the additional conditions \begin{equation} \label{eq_maxwell_cohomology} (\ee,\ttheta)_\Omega = 0, \qquad (\jj,\ttheta)_\Omega = 0, \qquad \forall \ttheta \in \LH \end{equation} must be added in order to ensure existence and uniqueness of a solution to~\eqref{eq_maxwell_strong}, where $\LH$ is the finite-dimensional ``cohomology'' space associated with $\Omega$ and the partition of its boundary (see Section \ref{sec_notat}).} \end{subequations} \revision{The boundary-value problem \eqref{eq_maxwell_strong} appears immediately in this form in magnetostatics. In this case, $\jj$ and $\ee$ respectively represent a (known) current density and the (unknown) associated magnetic vector potential, while the key quantity of interest is the magnetic field $\hh \eq \curl \ee$. We refer the reader to \cite{Ass_Ciar_Lab_foundat_electr_18,Boss_elctr_98,Hiptmair_acta_numer_2002,Monk_FEs_Maxwell_03} for reviews of models considered in computational electromagnetism.} In the rest of the introduction, we assume for simplicity that $\GD=\partial\Omega$ (so that the boundary conditions reduce to $\ee \times \nn = \bzero$ on $\partial\Omega$) and that $\jj$ is a piecewise polynomial in the Raviart--Thomas space, $\jj \in \RT_p(\TT_h) \cap \HH(\ddiv,\Omega)$, $p \geq 0$. Let $\ee_h \in \NN_p(\TT_h) \cap \HH_0(\ccurl,\Omega)$ be a numerical approximation to $\ee$ in the N\'ed\'elec space. Then, the Prager--Synge equality~\cite{Prag_Syng_47}, \cf, \eg, \cite[equation~(3.4)]{Rep_a_post_Maxw_07} or~\cite[Theorem~10]{Braess_Scho_a_post_edge_08}, implies that \begin{equation} \label{eq_PS} \|\curl(\ee - \ee_h)\|_\Omega \leq \min_{\substack{ \hh_h \in \NN_p(\TT_h) \cap \HH(\ccurl,\Omega) \\ \curl \hh_h = \jj}} \|\hh_h - \curl \ee_h\|_\Omega. 
\end{equation} Bounds such as~\eqref{eq_PS} have been used in, \eg, \cite{ Creus_Men_Nic_Pir_Tit_guar_har_19, Creus_Nic_Tit_guar_Maxw_17, Han_a_post_Maxw_08, Neit_Rep_a_post_Maxw_10}, see also the references therein. The estimate~\eqref{eq_PS} leads to a guaranteed and sharp upper bound. Unfortunately, as written, it involves a global minimization over $\NN_p(\TT_h) \cap \HH(\ccurl,\Omega)$, and is consequently too expensive in practical computations. Of course, a further upper bound follows from~\eqref{eq_PS} for {\em any} $\hh_h \in \NN_p(\TT_h) \cap \HH(\ccurl,\Omega)$ such that $\curl \hh_h = \jj$. At this stage, though, it is not clear how to find an {\em inexpensive local} way of constructing a suitable field $\hh_h$, called an {\em equilibrated flux}. A proposition for the lowest degree $p=0$ was given in~\cite{Braess_Scho_a_post_edge_08}, but suggestions for higher-order cases were not available until very recently in~\cite{Ged_Gee_Per_a_post_Maxw_19,Licht_FEEC_a_post_H_curl_19}. In particular, the authors in~\cite{Ged_Gee_Per_a_post_Maxw_19} also prove efficiency, \ie, they devise a field $\hh_h^* \in \NN_p(\TT_h) \cap \HH(\ccurl,\Omega)$ such that, up to a generic constant $C$ independent of the mesh size $h$ but possibly depending on the polynomial degree $p$, \begin{equation} \label{eq_eff} \|\hh_h^* - \curl \ee_h\|_{\Omega} \leq C \|\curl(\ee - \ee_h)\|_{\Omega}, \end{equation} as well as a local version of~\eqref{eq_eff}. Numerical experiments in~\cite{Ged_Gee_Per_a_post_Maxw_19} reveal very good effectivity indices, also for high polynomial degrees $p$. A number of a posteriori error estimates that are {\em polynomial-degree robust}, \ie, where no generic constant depends on $p$, were obtained recently. For equilibrations (reconstructions) in the $\HH(\ddiv)$ setting in two space dimensions, they were first obtained in~\cite{Brae_Pill_Sch_p_rob_09}. 
Later, they were extended to the $H^1$ setting in two space dimensions in~\cite{Ern_Voh_p_rob_15} and to both $H^1$ and $\HH(\ddiv)$ settings in three space dimensions in~\cite{Ern_Voh_p_rob_3D_20}. Applications to problems with arbitrarily jumping diffusion coefficients, second-order eigenvalue problems, the Stokes problem, linear elasticity, or the heat equation are reviewed in~\cite{Ern_Voh_p_rob_3D_20}. In the $\HH(\ccurl)$ setting, with application to \revision{the curl--curl problem}~\eqref{eq_maxwell_strong}, however, to the best of our knowledge, such a result was missing\footnote{We have learned very recently that a modification of~\cite{Ged_Gee_Per_a_post_Maxw_19} can lead to a polynomial-degree-robust error estimate, see~\cite{Ged_Gee_Per_Sch_post_Maxw_20}.}. It is our goal to establish it here, and we do so in our second main result, Theorem~\ref{theorem_aposteriori}. Our upper bound in Theorem~\ref{theorem_aposteriori} actually does {\em not derive} from the Prager--Synge equality to take the form~\eqref{eq_PS}, since we do not construct an equilibrated flux $\hh_h^* \in \NN_p(\TT_h) \cap \HH(\ccurl,\Omega)$. We instead perform a {\em \revision{broken patchwise} equilibration} producing locally on each edge patch $\TTe$ a piecewise polynomial $\hh_h^{\edge} \in \NN_p(\TTe) \cap \HH(\ccurl,\ome)$ such that $\curl \hh_h^{\edge} = \jj$. Consequently, our error estimate rather takes the form \begin{equation} \label{eq_up_intr} \|\curl(\ee - \ee_h)\|_\Omega \leq \sqrt{6} \Clift \Ccont \left (\sum_{\edge \in \EE_h} \|\hh_h^{\edge} - \curl \ee_h\|_{\ome}^2 \right )^{1/2}. \end{equation} We obtain each local contribution $\hh_h^{\edge}$ in a single-stage procedure, in contrast to the three-stage procedure of~\cite{Ged_Gee_Per_a_post_Maxw_19}. Our \revision{broken patchwise} equilibration is also rather inexpensive, since the edge patches are smaller than the usual vertex patches employed in~\cite{Braess_Scho_a_post_edge_08, Ged_Gee_Per_a_post_Maxw_19}. 
Moreover, we can either solve the patch problems, see~\eqref{eq_definition_estimator_2}, or replace them by a {\em sequential sweep} through tetrahedra sharing the given edge $e$, see~\eqref{eq_definition_estimator_sweep_2}. This second option yields \revision{a cheaper procedure where merely elementwise, in place of patchwise, problems are to be solved and even delivers} a {\em fully explicit} a posteriori error estimate in the {\em lowest-order} setting $p=0$. The price we pay for these advantages is the emergence of the constant $\sqrt{6} \Clift \Ccont$ in our upper bound~\eqref{eq_up_intr}; here $\Ccont$ is fully computable, only depends on the mesh shape-regularity, and takes values around 10 for usual meshes, whereas $\Clift$ only depends on the shape of the domain $\Omega$ \revision{and boundaries $\GD$ and $\GN$}, with in particular $\Clift = 1$ whenever $\Omega$ is convex. Crucially, our error estimates are {\em locally efficient} and polynomial-degree robust in that \begin{equation} \label{eq_low_intr} \|\hh_h^{\edge} - \curl \ee_h\|_{\ome} \leq C \|\curl(\ee - \ee_h)\|_{\ome} \end{equation} for all edges $\edge$, where the constant $C$ only depends on the shape-regularity of the mesh, as an immediate application of our first main result in Theorem~\ref{theorem_stability}. It is worth noting that the lower bound~\eqref{eq_low_intr} is completely local to the edge patches $\ome$ and does not comprise any neighborhood. \subsection{Organization of this contribution} The rest of this contribution is organised as follows. In Section~\ref{sec_not}, we recall the functional spaces, state a weak formulation of problem~\eqref{eq_maxwell_strong}, describe the finite-dimensional Lagrange, N\'ed\'elec, and Raviart--Thomas spaces, and introduce the numerical discretization of~\eqref{eq_maxwell_strong}. 
Our two main results, Theorem~\ref{theorem_stability} \revision{(together with its sequential form in Theorem~\ref{thm_sweep})} and Theorem~\ref{theorem_aposteriori}, are formulated and discussed in Section~\ref{sec_main_res}. Section~\ref{sec_num} presents a numerical illustration of our a posteriori error estimates for \revision{the curl--curl problem}~\eqref{eq_maxwell_strong}. Sections~\ref{sec_proof_a_post} and~\ref{sec_proof_stability} are then dedicated to the proofs of our two main results. \revision{Finally, Appendix~\ref{appendix_weber} establishes an auxiliary result of independent interest: a Poincar\'e-like inequality using the curl of divergence-free fields in an edge patch.} \section{\revision{Curl--curl problem} and N\'ed\'elec finite element discretization} \label{sec_not} \subsection{Basic notation} \label{sec_notat} Consider a Lipschitz polyhedral subdomain $\omega \subseteq \Omega$. We denote by $H^1(\omega)$ the space of scalar-valued $L^2(\omega)$ functions with $\LL^2(\omega)$ weak gradient, $\HH(\ccurl,\omega)$ the space of vector-valued $\LL^2(\omega)$ fields with $\LL^2(\omega)$ weak curl, and $\HH(\ddiv,\omega)$ the space of vector-valued $\LL^2(\omega)$ fields with $L^2(\omega)$ weak divergence. Below, we use the notation $({\cdot},{\cdot})_\omega$ for the $L^2(\omega)$ or $\LL^2(\omega)$ scalar product and $\|{\cdot}\|_\omega$ for the associated norm. $L^\infty(\omega)$ and $\LL^\infty(\omega)$ are the spaces of essentially bounded functions with norm $\|{\cdot}\|_{\infty,\omega}$. Let $\HH^1(\omega) \eq \{\vv \in \LL^2(\omega)| \, v_i \in H^1(\omega), \, i=1, 2, 3\}$. Let $\gD$, $\gN$ be two disjoint, open, possibly empty subsets of $\partial \omega$ such that $\partial \omega = \overline \gD \cup \overline \gN$. Then $H^1_\gD(\omega) \eq \{v \in H^1(\omega)| \, v=0$ on $\gD\}$ is the subspace of $H^1(\omega)$ formed by functions vanishing on $\gD$ in the sense of traces.
Furthermore, $\HH_\gD(\ccurl,\omega)$ is the subspace of $\HH(\ccurl,\omega)$ composed of fields with vanishing tangential trace on $\gD$, $\HH_\gD(\ccurl,\omega) \eq \{\vv \in \HH(\ccurl,\omega)$ such that $(\curl \vv, \bvf)_\omega - (\vv, \curl \bvf)_\omega = 0$ for all functions $\bvf \in \HH^1(\omega)$ such that $\bvf \times \nn_{\omega} = \bzero$ on $\partial \omega \setminus \gD\}$, where $\nn_{\omega}$ is the unit outward normal to $\omega$. Similarly, $\HH_\gN(\ddiv,\omega)$ is the subspace of $\HH(\ddiv,\omega)$ composed of fields with vanishing normal trace on $\gN$, $\HH_\gN(\ddiv,\omega) \eq \{\vv \in \HH(\ddiv,\omega)$ such that $(\div \vv, \varphi)_\omega + (\vv, \grad \varphi)_\omega = 0$ for all functions $\varphi \in H^1_\gD(\omega)\}$. We refer the reader to~\cite{Fer_Gil_Maxw_BC_97} for further insight on vector-valued Sobolev spaces with mixed boundary conditions. \revision{The space $\KK(\Omega) \eq \{ \vv \in \HH_\GD(\ccurl,\Omega) \; | \; \curl \vv = \bzero \}$ will also play an important role. When $\Omega$ is simply connected and $\GD$ is connected, one simply has $\KK(\Omega) = \grad \left (H^1_\GD(\Omega)\right )$. 
In the general case, one has $\KK(\Omega) = \grad \left (H^1_\GD(\Omega)\right ) \oplus \LH$, where $\LH$ is a finite-dimensional space called the ``cohomology space'' associated with $\Omega$ and the partition of its boundary \cite{Fer_Gil_Maxw_BC_97}.} \subsection{\revision{The curl--curl problem}} \label{sec_Maxw} If $\jj \in \revision{\KK(\Omega)^\perp}$ \revision{(the orthogonality being understood in $\LL^2(\Omega)$)}, then the classical weak formulation of~\eqref{eq_maxwell_strong} consists in finding a pair $(\ee,\revision{\vvphi}) \in \HH_{\GD}(\ccurl,\Omega) \times \revision{\KK(\Omega)}$ such that \begin{equation} \label{eq_maxwell_weak} \left \{ \begin{alignedat}{2} (\ee,\revision{\ttheta})_\Omega &= 0 &\quad& \forall \revision{\ttheta \in \KK(\Omega)} \\ (\curl \ee,\curl \vv)_\Omega + (\revision{\vvphi},\vv)_\Omega &= (\jj,\vv)_\Omega &\quad& \forall \vv \in \HH_{\GD}(\ccurl,\Omega). \end{alignedat} \right . \end{equation} Picking the test function $\vv = \revision{\vvphi}$ in the second equation of~\eqref{eq_maxwell_weak} shows that $\revision{\vvphi = \bzero}$, so that we actually have \begin{equation} \label{eq_maxwell_weak_II} (\curl \ee,\curl \vv)_\Omega = (\jj,\vv)_\Omega \quad \forall \vv \in \HH_{\GD}(\ccurl,\Omega). \end{equation} \revision{Note that when $\Omega$ is simply connected and $\GD$ is connected, the condition $\jj \in \KK(\Omega)^\perp$ simply means that $\jj$ is divergence-free with vanishing normal trace on $\GN$, $\jj \in \HH_{\GN}(\ddiv,\Omega)$ with $\div \jj = 0$, and the same constraint follows from the first equation of~\eqref{eq_maxwell_weak} for $\ee$.} \subsection{Tetrahedral mesh} \label{sec_mesh} We consider a matching tetrahedral mesh $\TT_h$ of $\Omega$, \ie, $\bigcup_{K \in \TT_h} \overline K$ $= \overline \Omega$, each $K$ is a tetrahedron, and the intersection of two distinct tetrahedra is either empty or their common vertex, edge, or face. 
We also assume that $\TT_h$ is compatible with the partition $\partial \Omega = \overline{\GD} \cup \overline{\GN}$ of the boundary, which means that each boundary face entirely lies either in $\overline{\GD}$ or in $\overline{\GN}$. We denote by $\EE_h$ the set of edges of the mesh $\TT_h$ and by $\FF_h$ the set of faces. The mesh is oriented which means that every edge $\edge\in\EE_h$ is equipped with a fixed unit tangent vector $\ttau_\edge$ and every face $F\in\FF_h$ is equipped with a fixed unit normal vector $\nn_F$ (see~\cite[Chapter~10]{Ern_Guermond_FEs_I_21}). Finally for every mesh cell $K\in\TT_h$, $\nn_K$ denotes its unit outward normal vector. The choice of the orientation is not relevant in what follows, but we keep it fixed in the whole work. If $K \in \TT_h$, $\EE_K \subset \EE_h$ denotes the set of edges of $K$, whereas for each edge $\edge \in \EE_h$, we denote by $\TTe$ the associated ``edge patch'' that consists of those tetrahedra $K \in \TT_h$ for which $\edge \in \EE_K$, see Figure~\ref{fig_patch}. We also employ the notation $\ome \subset \Omega$ for the open subdomain associated with the patch $\TTe$. We say that $\edge\in\EE_h$ is a boundary edge if it lies on $\partial\Omega$ and that it is an interior edge otherwise (in this case, $\edge$ may touch the boundary at one of its endpoints). The set of boundary edges is partitioned into the subset of Dirichlet edges $\EED_h$ with edges $\edge$ that lie in $\overline{\GD}$ and the subset of Neumann edges $\EEN_h$ collecting the remaining boundary edges. For all edges $\edge \in \EE_h$, we denote by $\GeN$ the open subset of $\partial \ome$ corresponding to the collection of faces having $e$ as edge and lying in $\overline \GN$. Note that for interior edges, $\GeN$ is empty and that for boundary edges, $\GeN$ never equals the whole $\partial \ome$. We also set $\GeD \eq (\partial \ome \setminus \GeN)^\circ$. 
\revision{Note that, in all situations, $\ome$ is simply connected and $\GeD$ is connected, so that we do not need to invoke here the cohomology spaces}. \begin{figure}[htb] \centerline{\includegraphics[height=0.35\textwidth]{figures/patch_edge.pdf} \qquad \qquad \includegraphics[height=0.35\textwidth]{figures/patch_edge_bound.pdf}} \caption{Interior (left) and Dirichlet boundary (right) edge patch $\TTe$} \label{fig_patch} \end{figure} For every tetrahedron $K \in \TT_h$, we denote the diameter and the inscribed ball diameter respectively by \begin{equation*} h_K \eq \sup_{\xx,\yy \in K} |\xx - \yy|, \quad \rho_K \eq \sup \left \{ r > 0 \; | \; \exists \xx \in K; B(\xx,r) \subset K \right \}, \end{equation*} where $B(\xx,r)$ is the ball of diameter $r$ centered at $\xx$. For every edge $\edge \in \EE_h$, $|\edge|$ is its measure (length) and \begin{equation} \label{eq_patch_not} h_\ome \eq \sup_{\xx,\yy \in \ome} |\xx - \yy|, \quad \rho_\edge \eq \min_{K \in \TTe} \rho_K. \end{equation} The shape-regularity parameters of the tetrahedron $K$ and of the edge patch $\TTe$ are respectively defined by \begin{equation} \label{eq_regularities} \kappa_K \eq h_K/\rho_K \quad \text{ and } \quad \kappa_\edge \eq h_{\ome}/\rho_\edge. \end{equation} \subsection{Lagrange, N\'ed\'elec, and Raviart--Thomas elements} If $K$ is a tetrahedron and $\pp \geq 0$ is an integer, we employ the notation $\mathcal P_\pp(K)$ for the space of scalar-valued (Lagrange) polynomials of degree less than or equal to $\pp$ on $K$ and $\widetilde{\mathcal P}_\pp(K)$ for homogeneous polynomials of degree $\pp$. The notation $\PP_\pp(K)$ (resp. $\widetilde{\PP}_\pp(K)$) then stands for the space of vector-valued polynomials such that all their components belong to $\mathcal P_\pp(K)$ (resp. $\widetilde{\mathcal P}_\pp(K)$). 
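For orientation, the local polynomial spaces just introduced, and the N\'ed\'elec and Raviart--Thomas spaces recalled next, have dimensions given by standard binomial counts. A short sketch of these counts (the surjectivity of $\vv \mapsto \xx \cdot \vv(\xx)$ between homogeneous polynomial spaces, which underlies the N\'ed\'elec count, is assumed here without proof):

```python
from math import comb

def dim_P(p):
    # scalar polynomials of degree <= p on a tetrahedron: C(p+3, 3)
    return comb(p + 3, 3)

def dim_P_hom(p):
    # homogeneous scalar polynomials of degree p in 3 variables: C(p+2, 2)
    return comb(p + 2, 2)

def dim_RT(p):
    # RT_p(K) = P_p(K)^3 + x * ~P_p(K), a direct sum (degrees p vs p+1)
    return 3 * dim_P(p) + dim_P_hom(p)

def dim_N(p):
    # N_p(K) = P_p(K)^3 + ~S_{p+1}(K); since v -> x.v maps ~P_{p+1}^3
    # onto ~P_{p+2}, dim ~S_{p+1} = 3*C(p+3,2) - C(p+4,2)
    return 3 * dim_P(p) + 3 * dim_P_hom(p + 1) - dim_P_hom(p + 2)
```

The lowest-order values match the classical element counts on a tetrahedron: $\dim \NN_0(K)=6$ (one degree of freedom per edge) and $\dim \RT_0(K)=4$ (one per face).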
Following~\cite{Ned_mix_R_3_80} and~\cite{Ra_Tho_MFE_77}, we then define on each tetrahedron $K \in \TT_h$ the polynomial spaces of N\'ed\'elec and Raviart--Thomas functions as follows: \begin{equation} \label{eq_RT_N} \NN_\pp(K) \eq \PP_\pp(K) + \widetilde{\boldsymbol{\mathcal S}}_{\pp+1}(K) \quad \text{ and } \quad \RT_\pp(K) \eq \PP_\pp(K) + \xx \widetilde{\mathcal P}_\pp(K), \end{equation} where $\widetilde{\boldsymbol{\mathcal S}}_\pp(K) \eq \big\{ \vv \in \widetilde{\PP}_\pp(K) \; | \; \xx \cdot \vv(\xx) = 0 \quad \forall \xx \in \overline{K} \big \}$. For any collection of tetrahedra $\TT = \bigcup_{K \in \TT}\{K\}$ and the corresponding open subdomain $\omega = \big(\bigcup_{K \in \TT} \overline{K} \big)^\circ \subset \Omega$, we also write \begin{align*} \mathcal P_\pp(\TT) &\eq \left \{ v \in L^2(\omega) \; | \; v|_K \in \mathcal P_\pp(K) \quad \forall K \in \TT \right \}, \\ \NN_\pp(\TT) &\eq \left \{ \vv \in \LL^2(\omega) \; | \; \vv|_K \in \NN_\pp(K) \quad \forall K \in \TT \right \}, \\ \RT_\pp(\TT) &\eq \left \{ \vv \in \LL^2(\omega) \; | \; \vv|_K \in \RT_\pp(K) \quad \forall K \in \TT \right \}. \end{align*} \subsection{N\'ed\'elec finite element discretization} For the discretization of problem~\eqref{eq_maxwell_weak}, we consider in this work\revision{, for a fixed polynomial degree $p \geq 0$, the N\'ed\'elec finite element space given by} \begin{equation*} \VVh \eq \NN_p(\TT_h) \cap \HH_{\GD}(\ccurl,\Omega). 
\end{equation*} \revision{The discrete counterpart of $\KK(\Omega)$, namely \begin{equation*} \KK_h \eq \left \{\vv_h \in \VVh \; | \; \curl \vv_h = \bzero \right \} \end{equation*} can be readily identified as a preprocessing step by introducing cuts in the mesh \cite[Chapter 6]{gross_kotiuga_2004a}.} The discrete problem then consists in finding a pair $(\ee_h,\revision{\vvphi}_h) \in \VVh \times \revision{\KK_h}$ such that \begin{equation} \label{eq_maxwell_discrete} \left \{ \begin{alignedat}{2} (\ee_h,\revision{\ttheta_h})_{\Omega} &= 0 && \quad \forall \revision{\ttheta_h \in \KK_h} \\ (\curl \ee_h,\curl \vv_h)_{\Omega} + (\revision{\vvphi_h},\vv_h)_{\Omega} &= (\jj,\vv_h)_{\Omega} && \quad \forall \vv_{h} \in \VVh. \end{alignedat} \right. \end{equation} Since \revision{$\KK_h \subset \KK(\Omega)$}, picking $\vv_h = \revision{\vvphi_h}$ in the second equation of~\eqref{eq_maxwell_discrete} shows that \revision{$\vvphi_h = \bzero$}, so that we actually have \begin{equation} \label{eq_maxwell_discrete_II} (\curl \ee_h,\curl \vv_h)_{\Omega} = (\jj,\vv_h)_{\Omega} \quad \forall \vv_h \in \VVh. \end{equation} \revision{As for the continuous problem, we remark that when $\Omega$ is simply connected and $\GD$ is connected, $\KK_h = \grad S_h$, where $S_h \eq \mathcal P_{p+1}(\TT_h) \cap H^1_\GD(\Omega)$ is the usual Lagrange finite element space.} \section{Main results} \label{sec_main_res} This section presents our two main results. \subsection{Stable discrete best-approximation of broken polynomials in {\em H}$(\ccurl)$} Our first main result is the combination and extension of~\cite[Theorem~7.2]{Demk_Gop_Sch_ext_II_09} and~\cite[Corollary~3.4]{Cost_McInt_Bog_Poinc_10} to the edge patches $\TTe$, complementing similar previous achievements in $\HH(\ddiv)$ in two space dimensions in~\cite[Theorem~7]{Brae_Pill_Sch_p_rob_09} and in $H^1$ and $\HH(\ddiv)$ in three space dimensions~\cite[Corollaries~3.1 and~3.3]{Ern_Voh_p_rob_3D_20}. 
\begin{theorem}[$\HH(\ccurl)$ best-approximation in an edge patch] \label{theorem_stability} Let an edge $\edge\in\EE_h$ and the associated edge patch $\TTe$ with subdomain $\ome$ be fixed. Then, for every polynomial degree $p \geq 0$, all $\jj_h^{\edge} \in \RT_p(\TTe) \cap \HH_{\GeN}(\ddiv,\ome)$ with $\div \jj_h^{\edge} = 0$, and all $\ch_h \in \NN_p(\TTe)$, the following holds: \begin{equation} \label{eq_stab} \min_{\substack{ \hh_h \in \NN_p(\TTe) \cap \HH_{\GeN}(\ccurl,\ome) \\ \curl \hh_h = \jj_h^\edge}} \|\hh_h - \ch_h\|_\ome \leq \Cste \min_{\substack{ \hh \in \HH_{\GeN}(\ccurl,\ome) \\ \curl \hh = \jj_h^{\edge}}} \|\hh - \ch_h\|_\ome. \end{equation} Here, both minimizers are uniquely defined and the constant $\Cste$ only depends on the shape-regularity parameter $\kappa_\edge$ of the patch $\TTe$ defined in~\eqref{eq_regularities}. \end{theorem} Note that the converse inequality to~\eqref{eq_stab} holds trivially with constant $1$, \ie, \[ \min_{\substack{ \hh \in \HH_{\GeN}(\ccurl,\ome) \\ \curl \hh = \jj_h^\edge}} \|\hh - \ch_h\|_\ome \leq \min_{\substack{ \hh_h \in \NN_p(\TTe) \cap \HH_{\GeN}(\ccurl,\ome) \\ \curl \hh_h = \jj_h^\edge}} \|\hh_h - \ch_h\|_\ome. \] This also makes apparent the power of the result~\eqref{eq_stab}, stating that for piecewise polynomial data $\jj_h^\edge$ and $\ch_h$, the best-approximation error over a piecewise polynomial subspace of $\HH_{\GeN}(\ccurl,\ome)$ of degree $p$ is, up to a $p$-independent constant, equivalent to the best-approximation error over the entire space $\HH_{\GeN}(\ccurl,\ome)$. The proof of this result is presented in Section~\ref{sec_proof_stability}. We remark that Proposition~\ref{prop_stability_patch} below gives an equivalent reformulation of Theorem~\ref{theorem_stability} in the form of a stable broken $\HH(\ccurl)$ polynomial extension in the edge patch. 
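To make the practical content of such constrained best-approximation problems concrete: in any finite-dimensional setting, a minimization of the type appearing in~\eqref{eq_stab} is a quadratic problem under linear constraints and is solved through its Euler--Lagrange (KKT) saddle-point system. The following sketch is purely illustrative (generic algebraic data and our own function name, not the discretization of this paper):

```python
import numpy as np

def constrained_min(c, B, d):
    """Solve min ||x - c|| subject to B x = d (B assumed full row rank)
    via the KKT saddle-point system [[I, B^T], [B, 0]] [x; lam] = [c; d]."""
    n, m = c.size, d.size
    K = np.block([[np.eye(n), B.T], [B, np.zeros((m, m))]])
    return np.linalg.solve(K, np.concatenate([c, d]))[:n]

rng = np.random.default_rng(0)
c = rng.standard_normal(5)            # target, playing the role of ch_h
B = rng.standard_normal((2, 5))       # linear constraint, playing the role of curl
d = rng.standard_normal(2)            # prescribed right-hand side
x = constrained_min(c, B, d)          # feasible point closest to c
```

The same saddle-point structure underlies the patchwise mixed finite element problems written below in~\eqref{broken_equil}.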
\revision{Finally, the following form, which follows from the proof in Section~\ref{sec:proof_stability_patch}, see Remark~\ref{rem_sweep_brok}, has important practical applications: \begin{theorem}[$\HH(\ccurl)$ best-approximation by an explicit sweep through an edge patch] \label{thm_sweep} Let the assumptions of Theorem~\ref{theorem_stability} be satisfied. Consider a sequential sweep over all elements $K \in \TTe$ sharing the edge $\edge$ such that \textup{(i)} the enumeration starts from an arbitrary tetrahedron if $\edge$ is an interior edge and from a tetrahedron containing a face that lies in $\GeN$ (if any) or in $\GeD$ (if none in $\GeN$) if $\edge$ is a boundary edge; \textup{(ii)} two consecutive tetrahedra in the enumeration share a face. On each $K \in \TTe$, consider \begin{equation} \label{eq_def_estimator_sweep} \hh_h^{\edge,\heartsuit}|_K \eq \argmin{\substack{ \hh_h \in \NN_p(K)\\ \curl \hh_h = \jj_h^\edge\\ \hh_h|_{\FF}^\ttau = \rr_\FF}} \|\hh_h - \ch_h\|_K. \end{equation} Here, $\FF$ is the set of faces that $K$ shares with elements $K'$ previously considered or lying in $\GeN$, and $\hh_h|_{\FF}^\ttau$ denotes the restriction of the tangential trace of $\hh_h$ to the faces of $\FF$ (see Definition \ref{definition_partial_trace} below for details). The boundary datum $\rr_{\FF}$ is either the tangential trace of $\hh_h^{\edge,\heartsuit}|_{K'}$ obtained after minimization over the previous tetrahedron $K'$, or $\mathbf 0$ on $\GeN$. Then, \begin{equation} \label{eq_stab_sweep} \|\hh_h^{\edge,\heartsuit} - \ch_h\|_\ome \leq \Cste \min_{\substack{ \hh \in \HH_{\GeN}(\ccurl,\ome) \\ \curl \hh = \jj_h^{\edge}}} \|\hh - \ch_h\|_\ome.
\end{equation} \end{theorem} } \subsection{$p$-robust \revision{broken patchwise} equilibration a posteriori error estimates for \revision{the curl--curl problem}} Our second main result is a polynomial-degree-robust a posteriori error analysis of N\'ed\'elec finite elements~\eqref{eq_maxwell_discrete} applied to \revision{curl--curl problem}~\eqref{eq_maxwell_strong}. The local efficiency proof is an important application of Theorem~\ref{theorem_stability}. To present these results in detail, we need to prepare a few tools. \subsubsection{Functional inequalities and data oscillation} \label{sec_Poinc} For every edge $\edge \in \EE_h$, we associate with the subdomain $\ome$ a local Sobolev space $H^1_\star(\ome)$ with mean/boundary value zero, \begin{equation} \label{eq_Hse} H^1_\star(\ome) \eq \begin{cases} \{ v \in H^1(\ome)\; | \; v=0 \text{ on faces having $e$ as edge}\\ \hspace{5.3cm}\text{and lying in $\overline \GD$}\} \qquad & \text{ if } \edge \in \EED_h,\\ \left \{ v \in H^1(\ome) \; | \; \int_\ome v = 0 \right \} \qquad & \text{ otherwise}. \end{cases} \end{equation} Poincar\'e's inequality then states that there exists a constant $\CPe$ only depending on the shape-regularity parameter $\kappa_\edge$ such that \begin{equation} \label{eq_local_poincare} \|v\|_{\ome} \leq \CPe h_\ome \|\grad v\|_{\ome} \qquad \forall v \in H^1_\star(\ome). \end{equation} To define our error estimators, it is convenient to introduce a piecewise polynomial approximation of the datum $\jj\in \HH_{\GN}(\ddiv,\Omega)$ by setting on every edge patch $\TTe$ associated with the edge $\edge \in \EE_h$, \begin{equation} \label{eq_definition_jj_h} \jj_h^\edge \eq \argmin{\jj_h \in \RT_p(\TTe) \cap \HH_{\GeN}(\ddiv,\ome) \\ \div \jj_h = 0} \|\jj - \jj_h\|_{\ome}. 
\end{equation} This leads to the following data oscillation estimators: \begin{equation} \label{eq_definition_osc} \osc_\edge \eq \CPVe h_\ome \|\jj - \jj_h^\edge\|_{\ome}, \end{equation} \revision{where the constant $\CPVe$ is such that for every edge $\edge \in \EE_h$, we have} \begin{equation} \label{eq_local_poincare_vectorial} \|\vv\|_{\ome} \leq \CPVe h_{\ome}\|\curl \vv\|_{\ome} \quad \forall \vv \in \HH_{\GeD}(\ccurl,\ome) \cap \HH_{\GeN}(\ddiv,\ome) \text{ with } \div \vv = 0. \end{equation} \revision{We show in Appendix~\ref{appendix_weber} that $\CPVe$ only depends on the shape-regularity parameter $\kappa_\edge$. Notice that \eqref{eq_local_poincare_vectorial} is a local Poincar\'e-like inequality using the curl of divergence-free fields in the edge patch. This type of inequality is known under various names in the literature. Seminal contributions can be found in the work of Friedrichs~\cite[equation~(5)]{Fried:55} for smooth manifolds (see also Gaffney~\cite[equation~(2)]{Gaffney:55}) and later in Weber~\cite{Weber:80} for Lipschitz domains. This motivates the present use of the subscript ${}_{\rm PFW}$ in~\eqref{eq_local_poincare_vectorial}.} \revision{Besides the above local functional inequalities, we shall also use the fact} that there exists a constant $\Clift$ such that for all $\vv \in \HH_\GD(\ccurl,\Omega)$, there exists $\ww \in \HH^1(\Omega) \cap \HH_\GD(\ccurl,\Omega)$ such that $\curl \ww = \curl \vv$ and \begin{equation} \label{eq_estimate_lift} \|\grad \ww\|_{\Omega} \leq \Clift \|\curl \vv\|_\Omega. \end{equation} When either $\GD$ or $\GN$ has zero measure, the existence of $\Clift$ follows from Theorems 3.4 and 3.5 of~\cite{Cost_Dau_Nic_sing_Maxw_99}. If in addition $\Omega$ is convex, one can take $\Clift = 1$ (see~\cite{Cost_Dau_Nic_sing_Maxw_99} together with~\cite[Theorem~3.7]{Gir_Rav_NS_86} for Dirichlet boundary conditions and \cite[Theorem~3.9]{Gir_Rav_NS_86} for Neumann boundary conditions).
\revision{For mixed boundary conditions, the existence of $\Clift$ can be obtained as a consequence of \cite[Section~2]{Hipt_Pechs_discr_reg_dec_19}. Indeed, we first project $\vv \in \HH_\GD(\ccurl,\Omega)$ onto $\tvv \in \KK(\Omega)^\perp$ without changing its curl. Then, we define $\ww \in \HH^1(\Omega)$ from $\tvv$ using \cite{Hipt_Pechs_discr_reg_dec_19}. Finally, we control $\|\tvv\|_\Omega$ by $\|\curl \vv\|_\Omega$ with the inequality from \cite[Proposition 7.4]{Fer_Gil_Maxw_BC_97} which is a global Poincar\'e-like inequality in the spirit of~\eqref{eq_local_poincare_vectorial}.} \subsubsection{\revision{Broken patchwise} equilibration by edge-patch problems} Our a posteriori error estimator is constructed via a simple restriction of the right-hand side of~\eqref{eq_PS} to edge patches, where no hat function is employed, no modification of the source term appears, and no boundary condition is imposed for interior edges, in contrast to the usual equilibration in~\cite{Dest_Met_expl_err_CFE_99, Braess_Scho_a_post_edge_08, Ern_Voh_p_rob_15}. For each edge $\edge \in \EE_h$, introduce \begin{subequations}\label{eq_definition_estimator} \begin{equation} \eta_\edge \eq \|\hh_h^{\edge,\star} - \curl \ee_h\|_\ome, \label{eq_definition_estimator_1} \end{equation} \revision{where $\hh_h^{\edge,\star}$ is the argument of the left minimizer in~\eqref{eq_stab} for the datum $\jj_h^\edge$ from~\eqref{eq_definition_jj_h} and $\ch_h \eq (\curl \ee_h)|_{\ome}$, \ie,} \begin{equation} \hh_h^{\edge,\star} \eq \argmin{\substack{ \hh_h \in \NN_p(\TTe) \cap \HH_{\GeN}(\ccurl,\ome) \\ \curl \hh_h = \jj_h^\edge}} \|\hh_h - \curl \ee_h\|_\ome. \label{eq_definition_estimator_2} \end{equation} \end{subequations} \revision{In practice, $\hh_h^{\edge,\star}$ is computed from the Euler--Lagrange conditions for the minimization problem~\eqref{eq_definition_estimator_2}. 
This leads to the} following patchwise mixed finite element problem: Find $\hh_h^{\revision{\edge},\star} \in \NN_p(\TTe) \cap \HH_{\GeN}(\ccurl,\ome)$, $\sig_h^{\revision{\edge},\star} \in \RT_p(\TTe) \cap \HH_{\GeN}(\ddiv,\ome)$, and $\zeta^{\revision{\edge},\star}_h \in \mathcal P_p(\TTe)$ such that \begin{equation} \label{broken_equil} \left \{ \arraycolsep=2pt \begin{array}{rclclcl} (\hh_h^{\revision{\edge},\star},\vv_h)_{\ome} &+& (\sig_h^{\revision{\edge},\star},\curl \vv_h)_{\ome} & & &=& (\curl \ee_h,\vv_h)_{\ome}, \\ (\curl \hh_h^{\revision{\edge},\star},\ww_h)_{\ome} && &+ & (\zeta^{\revision{\edge},\star}_h,\div \ww_h)_\ome &=& (\jj,\ww_h)_{\ome}, \\ & & (\div \sig^{\revision{\edge},\star}_h,\varphi_h)_\ome & & &=& 0 \end{array} \right . \end{equation} for all $\vv_h \in \NN_p(\TTe) \cap \HH_{\GeN}(\ccurl,\ome)$, $\ww_h \in \RT_p(\TTe) \cap \HH_{\GeN}(\ddiv,\ome)$, and $\varphi_h \in \mathcal P_p(\TTe)$. We note that from the optimality condition associated with~\eqref{eq_definition_jj_h}, using $\jj$ or $\jj_h^\edge$ in~\eqref{broken_equil} is equivalent. \subsubsection{\revision{Broken patchwise} equilibration by sequential sweeps} The patch problems~\eqref{eq_definition_estimator_2} lead to the solution of the linear systems~\eqref{broken_equil}. Although these are local around each edge and are mutually independent, they \revision{entail} some computational cost. This cost can be significantly reduced by taking inspiration from~\cite{Dest_Met_expl_err_CFE_99}, \cite{Luce_Wohl_local_a_post_fluxes_04}, \cite[Section~4.3.3]{Voh_guar_rob_VCFV_FE_11}, the proof of~\cite[Theorem~7]{Brae_Pill_Sch_p_rob_09}, or~\cite[Section~6]{Ern_Voh_p_rob_3D_20} and literally following the proof in Section~\ref{sec:proof_stability_patch} below\revision{, as summarized in Theorem~\ref{thm_sweep}}. 
This leads to \revision{an alternative error estimator} whose price is the sequential sweep through tetrahedra sharing the given edge, \revision{where for each tetrahedron, one} solve\revision{s the elementwise problem~\eqref{eq_def_estimator_sweep} for the datum $\jj_h^\edge$ from~\eqref{eq_definition_jj_h} and $\ch_h \eq (\curl \ee_h)|_{\ome}$, \ie, \begin{subequations} \label{eq_definition_estimator_sweep} \begin{equation} \label{eq_definition_estimator_sweep_2} \hh_h^{\edge,\heartsuit}|_K \eq \argmin{\substack{ \hh_h \in \NN_p(K)\\ \curl \hh_h = \jj_h^\edge\\ \hh_h|_{\FF}^\ttau = \rr_\FF}} \|\hh_h - \curl \ee_h\|_K \qquad \forall K \in \TTe, \end{equation} and then set \begin{equation} \label{eq_definition_estimator_sweep_1} \eta_\edge \eq \|\hh_h^{\edge,\heartsuit} - \curl \ee_h\|_{\ome}. \end{equation} \end{subequations}} \subsubsection{Guaranteed, locally efficient, and $p$-robust a posteriori error estimates} For each edge $\edge \in \EE_h$, let $\ppsi_\edge$ be the (scaled) edge basis function of the lowest-order N\'ed\'elec space, in particular satisfying $\supp \ppsi_\edge = \overline{\ome}$. More precisely, let $\ppsi_\edge$ be the unique function in $\NN_0(\TT_h) \cap \HH(\ccurl,\Omega)$ such that \begin{equation} \label{eq_BF} \int_{\edge'} \ppsi_\edge \cdot \ttau_{\edge'} = \delta_{\edge,\edge'} |\edge|, \end{equation} recalling that $\ttau_{\edge'}$ is the unit tangent vector orienting the edge $\edge'$. We define \begin{equation} \label{eq_definition_Ccont} \Cconte \eq \|\ppsi_\edge\|_{\infty,\ome} + \CPe h_\ome \|\curl \ppsi_\edge\|_{\infty,\ome} \quad \forall \edge \in \EE_h, \end{equation} where $\CPe$ is Poincar\'e's constant from~\eqref{eq_local_poincare} and $h_\ome$ is the diameter of the patch domain $\ome$.
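Since $\Cconte$ admits the fully geometric upper bound established in Lemma~\ref{lem_c_stab} below, it can be evaluated directly from the patch shape parameters. A minimal sketch, with our own function name and assuming the quantities $|\edge|$, $\rho_\edge$, and $\kappa_\edge$ have already been extracted from the mesh:

```python
import math

def C_k_e(edge_len, rho_e, kappa_e, C_P=1.0 / math.pi):
    """Computable bound (2|e|/rho_e) * (1 + C_{P,e} * kappa_e) on the
    cut-off constant C_{cont,e}; C_P = 1/pi is valid for convex interior
    patches (see the comments on Poincare constants below)."""
    return 2.0 * edge_len / rho_e * (1.0 + C_P * kappa_e)
```

For instance, `C_k_e(1.0, 0.5, 2.0, 0.25)` evaluates the bound for a patch with unit edge length, $\rho_\edge = 0.5$, and $\kappa_\edge = 2$.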
We actually show in Lemma~\ref{lem_c_stab} below that \begin{equation} \label{eq_bound_Ccont} \Cconte \leq \Cke \eq \frac{2|\edge|}{\rho_\edge} \left ( 1 + \CPe \kappa_\edge \right ) \quad \forall \edge \in \EE_h, \end{equation} where $\rho_\edge$ is defined in~\eqref{eq_patch_not}; $\Cconte$ is thus uniformly bounded in terms of the patch-regularity parameter $\kappa_\edge$ defined in~\eqref{eq_regularities}. \begin{theorem}[$p$-robust a posteriori error estimate] \label{theorem_aposteriori} Let $\ee$ be the weak solution of \revision{the curl--curl problem}~\eqref{eq_maxwell_weak} and let $\ee_h$ be its N\'ed\'elec finite element approximation solving~\eqref{eq_maxwell_discrete}. Let the data oscillation estimators $\osc_\edge$ be defined in~\eqref{eq_definition_osc} and the \revision{broken patchwise} equilibration estimators $\eta_\edge$ be defined in either~\eqref{eq_definition_estimator} or~\eqref{eq_definition_estimator_sweep}. Then, with the constants $\Clift$, $\Cconte$, and $\Cste$ from~\eqref{eq_estimate_lift}, \eqref{eq_definition_Ccont}, and~\eqref{eq_stab}, respectively, the following global upper bound holds true: \begin{equation} \label{eq_upper_bound} \|\curl(\ee - \ee_h)\|_\Omega \leq \sqrt{6} \Clift \left (\sum_{\edge \in \EE_h}\Cconte^2 \left (\eta_\edge + \osc_\edge\right )^2 \right )^{1/2}, \end{equation} as well as the following lower bound local to the edge patches $\ome$: \begin{equation} \label{eq_lower_bound} \eta_\edge \leq \Cste \left (\|\curl(\ee - \ee_h)\|_\ome + \osc_\edge\right ) \qquad \forall \edge \in \EE_h. \end{equation} \end{theorem} \subsection{Comments} A few comments about Theorem~\ref{theorem_aposteriori} are in order. \begin{itemize} \item The constant $\Clift$ from~\eqref{eq_estimate_lift} can be taken as $1$ for convex domains $\Omega$ and if either $\GD$ or $\GN$ is empty. In the general case, however, we do not know the value of this constant.
The presence of the constant $\Clift$ is customary in a posteriori error analysis of \revision{the curl--curl problem}; it appears, e.g., in Lemma 3.10 of~\cite{Nic_Creus_a_post_Maxw_03} and Assumption 2 of~\cite{Beck_Hipt_Hopp_Wohl_a_post_Maxw_00}. \item The constant $\Cconte$ defined in~\eqref{eq_definition_Ccont} can be fully computed in practical implementations. Indeed, computable values of Poincar\'e's constant $\CPe$ from~\eqref{eq_local_poincare} \revision{are discussed in, \eg, \cite{Chua_Whee_est_Poin_06, Vees_Verf_Poin_stars_12}, see also the concise discussion in~\cite{Blech_Mal_Voh_loc_res_NL_20}; $\CPe$ can be taken as $1/\pi$ for convex interior patches and as $1$ for most Dirichlet boundary patches.} Recall also that $\Cconte$ only depends on the shape-regularity parameter $\kappa_\edge$ of the edge patch $\TTe$. \item A computable upper bound on the constant $\Cste$ from~\eqref{eq_stab} can be obtained by proceeding as in~\cite[Lemma~3.23]{Ern_Voh_p_rob_15}. The crucial property is again that $\Cste$ can be uniformly bounded in terms of the shape-regularity parameter $\kappa_\edge$ of the edge patch $\TTe$. \item The key feature of the error estimators of Theorem~\ref{theorem_aposteriori} is their polynomial-degree-ro\-bust\-ness (or, in short, $p$-robustness). This suggests using them in $hp$-adaptation strategies, \cf, \eg, \cite{Dan_Ern_Sme_Voh_guar_red_18,Demk_hp_book_07,Schwab_hp_98} and the references therein. \item In contrast to~\cite{Braess_Scho_a_post_edge_08, Ged_Gee_Per_a_post_Maxw_19,Ged_Gee_Per_Sch_post_Maxw_20, Licht_FEEC_a_post_H_curl_19}, we do not obtain here an equilibrated flux, \ie, a piecewise polynomial $\hh_h^\star$ in the global space $\NN_p(\TT_h) \cap \HH_{\GN}(\ccurl,\Omega)$ satisfying, for piecewise polynomial $\jj$, $\curl \hh_h^\star = \jj$.
We only obtain from~\eqref{eq_definition_estimator_2} or~\eqref{eq_definition_estimator_sweep_2} that $\hh_h^{\edge,\star} \in \NN_p(\TTe) \cap \HH_{\GeN}(\ccurl,\ome)$ and $\curl \hh_h^{\edge,\star} = \jj$ locally in every edge patch $\TTe$ and similarly for $\hh_h^{\edge,\heartsuit}$, but we do not build an $\HH_{\GN}(\ccurl,\Omega)$-conforming discrete field; we call this process \revision{broken patchwise} equilibration. \item The upper bound~\eqref{eq_upper_bound} does not come from the Prager--Synge inequality~\eqref{eq_PS} and is typically larger than those obtained from~\eqref{eq_PS} with an equilibrated flux $\hh_h^\star \in \NN_p(\TT_h) \cap \HH_{\GN}(\ccurl,\Omega)$, because of the presence of the multiplicative factors $\sqrt{6} \Clift \Cconte$. On the other hand, it is typically cheaper to compute the upper bound~\eqref{eq_upper_bound} than those based on an equilibrated flux since 1) the problems~\eqref{eq_definition_estimator} and~\eqref{eq_definition_estimator_sweep} involve edge patches, whereas full equilibration would require solving problems also on vertex patches, which are larger than edge patches; 2) the error estimators are computed in one stage, only solving the problems~\eqref{eq_definition_estimator_2} or~\eqref{eq_definition_estimator_sweep_2}; 3) \revision{ the broken patchwise equilibration procedure enables the construction of a $p$-robust error estimator using polynomials of degree $p$, in contrast to the usual procedure requiring the use of polynomials of degree $p+1$, \cf~\cite{Brae_Pill_Sch_p_rob_09, Ern_Voh_p_rob_15, Ern_Voh_p_rob_3D_20}; the reason is that the usual procedure involves multiplication by the ``hat function'' $\psi_{\boldsymbol a}$ inside the estimators, which increases the polynomial degree by one, whereas the current procedure only encapsulates an operation featuring $\ppsi_\edge$ into the multiplicative constant $\Cconte$, see~\eqref{eq_definition_Ccont}.} \item The sequential sweep through the patch
in~\eqref{eq_definition_estimator_sweep_2} eliminates the patchwise problems~\eqref{eq_definition_estimator_2} and leads instead to merely elementwise problems. These are much cheaper than~\eqref{eq_definition_estimator_2}, and, in particular, for $p=0$, \ie, for lowest-order N\'ed\'elec elements in~\eqref{eq_maxwell_discrete} with one unknown per edge, \revision{they can be made} explicit. \revision{Indeed, there is only one unknown in~\eqref{eq_definition_estimator_sweep_2} for each tetrahedron $K \in \TTe$ if $K$ is not the first or the last tetrahedron in the sweep. In the last tetrahedron, there is no unknown left except if it contains a face that lies in $\GeD$, in which case there is also only one unknown in~\eqref{eq_definition_estimator_sweep_2}. If the first tetrahedron contains a face that lies in $\GeN$, there is again only one unknown in~\eqref{eq_definition_estimator_sweep_2}. Finally, if the first tetrahedron does not contain a face that lies in $\GeN$, it is possible, instead of $\FF = \emptyset$, to consider the set $\FF$ formed by the face $F$ that either 1) lies in $\GeD$ (if any) or 2) is shared with the last element and to employ for the boundary datum $\rr_{\FF}$ in~\eqref{eq_definition_estimator_sweep_2} the 1) value or 2) the mean value of the tangential trace of $\curl \ee_h$ on $F$. This again leads to only one unknown in~\eqref{eq_definition_estimator_sweep_2}, with all the theoretical properties maintained.} \end{itemize} \section{Numerical experiments} \label{sec_num} In this section, we present some numerical experiments to illustrate the a posteriori error estimates from Theorem~\ref{theorem_aposteriori} and their use within an adaptive mesh refinement procedure. We consider a test case with a smooth solution and a test case with a solution featuring an edge singularity. \revision{Below, we rely on the indicator $\eta_\edge$ evaluated using~\eqref{eq_definition_estimator}, \ie, involving the edge-patch solves~\eqref{broken_equil}.
Moreover, we let \begin{equation} \label{eq_ests} \left (\eta_{\rm ofree}\right )^2 \eq 6 \sum_{e \in \EE_h} \left (\Cconte \eta_\edge\right )^2, \qquad \left (\eta_{{\rm cofree}}\right )^2 \eq \sum_{e \in \EE_h} \left (\eta_\edge\right )^2. \end{equation} Here, $\eta_{\rm ofree}$ corresponds to an ``oscillation-free'' error estimator, obtained by discarding the oscillation terms $\osc_\edge$ in~\eqref{eq_upper_bound}, whereas $\eta_{{\rm cofree}}$ corresponds to a ``constant-and-oscillation-free'' error estimator, discarding in addition the multiplicative constants $\sqrt{6} \Clift$ and $\Cconte$}. \subsection{Smooth solution in the unit cube} We first consider an example in the unit cube $\Omega \eq (0,1)^3$ and Neumann boundary conditions, $\GN \eq \partial\Omega$ in~\eqref{eq_maxwell_strong} and its weak form~\eqref{eq_maxwell_weak}. The analytical solution reads \begin{equation*} \ee(\xx) \eq \left ( \begin{array}{c} \sin(\pi \xx_1)\cos(\pi \xx_2)\cos(\pi \xx_3) \\ -\cos(\pi \xx_1)\sin(\pi \xx_2)\cos(\pi \xx_3) \\ 0 \end{array} \right ). \end{equation*} One checks that $\div \ee = 0$ and that \begin{equation*} \jj(\xx) \eq (\curl \curl \ee)(\xx) = 3\pi^2 \left ( \begin{array}{c} \sin(\pi \xx_1)\cos(\pi \xx_2)\cos(\pi \xx_3) \\ -\cos(\pi \xx_1)\sin(\pi \xx_2)\cos(\pi \xx_3) \\ 0 \end{array} \right ). \end{equation*} We notice that $\Clift = 1$ since $\Omega$ is convex. We first propose an ``$h$-convergence'' test case in which, for a fixed polynomial degree $p$, we study the behavior of the N\'ed\'elec approximation $\ee_h$ solving~\eqref{eq_maxwell_discrete} and of the error estimator of Theorem~\ref{theorem_aposteriori}. We consider a sequence of meshes obtained by first splitting the unit cube into an $N \times N \times N$ Cartesian grid and then splitting each of the small cubes into six tetrahedra, with the resulting mesh size $h = \sqrt{3}/N$. 
More precisely, each resulting edge patch is convex here, so that the constant $\CPe$ in~\eqref{eq_definition_Ccont} can be taken as $1/\pi$ for all internal patches, see the discussion in Section~\ref{sec_Poinc}. Figure~\ref{figure_unit_cube_hconv} presents the results. The top-left panel shows that the expected convergence rates of $\ee_h$ are obtained for $p=0,\dots,3$. The top-right panel presents the local efficiency of the error estimator based on the \revision{indicator} $\eta_\edge$ evaluated using~\eqref{eq_definition_estimator}. We see that it is very good, the ratio of the patch indicator to the patch error being at most $2$ for $p=0$, and close to $1$ for higher-order polynomials. This seems to indicate that the constant $\Cste$ in~\eqref{eq_stab} is rather small. The bottom panels of Figure~\ref{figure_unit_cube_hconv} report on the global efficiency of the error indicators \revision{$\eta_{\rm ofree}$ and $\eta_{{\rm cofree}}$ from~\eqref{eq_ests}.} As shown in the bottom-right panel, the global efficiency of $\eta_{{\rm cofree}}$ is independent of the mesh size. The bottom-left panel shows a slight dependency of the global efficiency of $\eta_{\rm ofree}$ on the mesh size, but this is only due to the fact that Poincar\'e's constants differ for boundary and internal patches. These two panels show that the efficiency actually slightly improves as the polynomial degree is increased, highlighting the $p$-robustness of the proposed error estimator. \revision{We also notice that the multiplicative factor $\sqrt{6}\Cconte$ can lead to some error overestimation.} \input{figures/numerics/unit_cube/hconv.tex} We then present a ``$p$-convergence'' test case where, for a fixed mesh, we study the behavior of the solution and of the error estimator when the polynomial degree $p$ is increased. We provide this analysis for four different meshes. The first three meshes are structured as previously described with $N=1,2$, and $4$, whereas the last mesh is unstructured.
The unstructured mesh has $358$ elements, $1774$ edges, and $h = 0.37$. Figure~\ref{figure_unit_cube_pconv} presents the results. The top-left panel shows an exponential convergence rate as $p$ is increased for all the meshes, which is in agreement with the theory, since the solution is analytic. The top-right panel shows that the local patch-by-patch efficiency is very good, and seems to tend to $1$ as $p$ increases. The bottom-right panel shows that the global efficiency of $\eta_{{\rm cofree}}$ also slightly improves as $p$ is increased, and it seems to be rather independent of the mesh. The bottom-left panel shows that the global efficiency of $\eta_{\rm ofree}$ is significantly worse on the unstructured mesh. This is because in the absence of convex patches, we employ for $\CPe$ the estimate from~\cite{Vees_Verf_Poin_stars_12} instead of the constant $1/\pi$. We believe that this performance could be improved by providing sharper Poincar\'e constants. \input{figures/numerics/unit_cube/pconv.tex} \subsection{Singular solution in an L-shaped domain} We now turn our attention to an L-shaped domain featuring a singular solution. Specifically, $\Omega \eq L \times (0,1)$, where \begin{equation*} L \eq \left \{ \xx = (r\cos\theta,r\sin\theta) \ | \ |\xx_1|,|\xx_2| \leq 1 \quad 0 \leq \theta \leq 3\pi/2 \right \}, \end{equation*} see Figure~\ref{figure_lshape_errors}, where $L$ is represented. \revision{We consider the case $\GD \eq \partial \Omega$, and t}he solution reads $\ee(\xx) \eq \big(0,0,\chi(r) r^\alpha \sin(\alpha \theta)\big)^{\mathrm{T}}$, where $\alpha \eq 3/2$, $r^2 \eq |\xx_1|^2 + |\xx_2|^2$, $(\xx_1,\xx_2) = r(\cos\theta,\sin\theta)$, and $\chi:(0,1) \to \mathbb R$ is a smooth cutoff function such that $\chi = 0$ in a neighborhood of $1$. We emphasize that $\div \ee = 0$ and that, since $\Delta \left (r^\alpha \sin(\alpha \theta)\right ) = 0$ near the origin, the right-hand side $\jj$ associated with $\ee$ belongs to $\HH(\ddiv,\Omega)$. 
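For reference, the exact solution above can be evaluated pointwise as follows; the cutoff $\chi$ below is one admissible choice (the text only requires $\chi$ smooth and vanishing in a neighborhood of $1$), so this is an illustrative sketch rather than the authors' implementation:

```python
import numpy as np

ALPHA = 1.5  # singularity exponent alpha = 3/2

def chi(r):
    """One possible cutoff: equal to 1 near the origin, 0 near r = 1."""
    r = np.asarray(r, dtype=float)
    return np.where(r < 0.5, 1.0,
                    np.where(r > 0.9, 0.0,
                             0.5 * (1.0 + np.cos(np.pi * (r - 0.5) / 0.4))))

def e_exact(x1, x2, x3):
    """Exact solution (0, 0, chi(r) r^alpha sin(alpha*theta)); the third
    component is independent of x3, so div e = 0 holds automatically."""
    r = np.hypot(x1, x2)
    theta = np.arctan2(x2, x1) % (2.0 * np.pi)  # angle measured in [0, 2*pi)
    return np.array([0.0, 0.0, float(chi(r) * r**ALPHA * np.sin(ALPHA * theta))])
```

Note that the third component vanishes on the boundary face $\theta = 0$, consistently with the Dirichlet condition $\GD = \partial\Omega$.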
We use an adaptive mesh-refinement strategy based on D\"orfler's marking~\cite{Dorf_cvg_FE_96}. The initial mesh we employ for $p=0$ and $p=1$ consists of $294$ elements and $1418$ edges with $h=0.57$, whereas a mesh with $23$ elements, $86$ edges, and $h=2$ is employed for $p=2$ and $3$. The meshing package {\tt MMG3D} is employed to generate the sequence of adapted meshes~\cite{Dobr_mmg3d}. Figure~\ref{figure_lshape_conv} shows the convergence histories of the adaptive algorithm for different values of $p$. In the top-left panel, we observe the optimal convergence rate (limited to $N_{\rm dofs}^{-2/3}$ for isotropic elements in the presence of an edge singularity). We employ the indicator $\eta_{{\rm cofree}}$ defined in~\eqref{eq_ests}. The top-right and bottom-left panels respectively present the \revision{local} and \revision{global} efficiency indices. In both cases, the efficiency is good considering that the mesh is fully unstructured with localized features. We also emphasize that the efficiency does not deteriorate when $p$ increases. Finally, Figure~\ref{figure_lshape_errors} depicts the estimated and the actual errors at the last iteration of the adaptive algorithm. The face on the top of the domain $\Omega$ is represented, and the colors are associated with the edges of the mesh. The left panels correspond to the values of the estimator $\eta_\edge$ of~\eqref{eq_definition_estimator}, whereas the value of $\|\curl(\ee-\ee_h)\|_\ome$ is represented in the right panels. Overall, this figure shows excellent agreement between the estimated and actual error distributions.
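The D\"orfler (bulk-chasing) marking step used by the adaptive loop selects a smallest set of edges carrying a fixed fraction of the total squared indicator. A generic sketch (the function name and the default bulk parameter $\theta$ are ours):

```python
import numpy as np

def doerfler_mark(eta, theta=0.5):
    """Return indices of a smallest set M of edges with
    sum_{e in M} eta_e^2 >= theta^2 * sum_e eta_e^2 (bulk criterion)."""
    order = np.argsort(eta)[::-1]                 # largest indicators first
    cum = np.cumsum(np.asarray(eta)[order] ** 2)  # running squared mass
    k = int(np.searchsorted(cum, theta**2 * cum[-1])) + 1
    return order[:k]
```

The marked edges (and the elements of the corresponding patches) are then refined, and the loop solve--estimate--mark--refine is repeated.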
\input{figures/numerics/lshape/conv.tex} \begin{figure} \includegraphics[width=\linewidth]{figures/numerics/lshape/P0.pdf} $p=0$ \includegraphics[width=\linewidth]{figures/numerics/lshape/P1.pdf} $p=1$ \includegraphics[width=\linewidth]{figures/numerics/lshape/P3.pdf} $p=3$ \caption{Estimated error (left) and actual error (right) for the L-shaped domain experiment} \label{figure_lshape_errors} \end{figure} \section{Proof of Theorem~\ref{theorem_aposteriori} ($p$-robust a posteriori error estimate)} \label{sec_proof_a_post} In this section, we prove Theorem~\ref{theorem_aposteriori}. \subsection{Residuals} Recall that $\ee_h \in \VVh \subset \HH_{\GD}(\ccurl,\Omega)$ solves \eqref{eq_maxwell_discrete} and satisfies~\eqref{eq_maxwell_discrete_II}. In view of the characterization of the weak solution~\eqref{eq_maxwell_weak_II}, we define the residual $\RR \in (\HH_{\GD}(\ccurl,\Omega))'$ by setting \begin{equation*} \langle \RR,\vv \rangle \eq (\jj,\vv)_{\Omega} - (\curl \ee_h,\curl \vv)_{\Omega} = (\curl (\ee - \ee_h),\curl \vv)_{\Omega} \quad \forall \vv \in \HH_{\GD}(\ccurl,\Omega). \end{equation*} Taking $\vv = \ee - \ee_h$ and using a duality characterization, we have the error--residual link \begin{equation} \label{eq_res} \|\curl(\ee - \ee_h)\|_{\Omega} = \langle \RR,\ee-\ee_h\rangle^{1/2} = \sup_{\substack{ \vv \in \HH_{\GD}(\ccurl,\Omega)\\ \|\curl \vv\|_{\Omega} = 1} } \langle \RR,\vv \rangle. \end{equation} We will also employ local dual norms of the residual $\RR$. Specifically, for each edge $\edge \in \EE_h$, we set \begin{equation} \label{eq_RR_ome} \|\RR\|_{\star,\edge} \eq \sup_{\substack{ \vv \in \HH_{\GeD}(\ccurl,\ome)\\ \|\curl \vv\|_{\ome} = 1} } \langle \RR,\vv \rangle. 
\end{equation} For each $\edge \in \EE_h$, we will also need an oscillation-free residual $\RR_h^\edge \in \left (\HH_{\GeD}(\ccurl,\ome)\right )'$ defined using the projected right-hand side introduced in~\eqref{eq_definition_jj_h}, \begin{equation*} \langle \RR_h^\edge,\vv \rangle \eq (\jj_h^\edge,\vv)_{\ome} - (\curl \ee_h,\curl \vv)_{\ome} \quad \forall \vv \in \HH_{\GeD}(\ccurl,\ome). \end{equation*} We also employ the notation \begin{equation*} \|\RR^\edge_h\|_{\star,\edge} \eq \sup_{\substack{ \vv \in \HH_{\GeD}(\ccurl,\ome)\\ \|\curl \vv\|_{\ome} = 1}} \langle \RR_h^\edge,\vv \rangle \end{equation*} for the dual norm of $\RR_h^\edge$. Note that $\RR^\edge_h = \RR_{|\HH_{\GeD}(\ccurl,\ome)}$ whenever the source term $\jj$ is a piecewise $\RT_p(\TT_h)$ polynomial. \subsection{Data oscillation} Recalling the definition~\eqref{eq_definition_osc} of $\osc_\edge$, we have the following comparison: \begin{lemma}[Data oscillation] The following holds true: \begin{subequations}\begin{equation} \label{eq_data_oscillations_lower_bound} \|\RR_h^\edge\|_{\star,\edge} \leq \|\RR\|_{\star,\edge} + \osc_\edge \end{equation} and \begin{equation} \label{eq_data_oscillations_upper_bound} \|\RR\|_{\star,\edge} \leq \|\RR_h^\edge\|_{\star,\edge} + \osc_\edge. \end{equation}\end{subequations} \end{lemma} \begin{proof} Let $\vv \in \HH_{\GeD}(\ccurl,\ome)$ with $\|\curl \vv\|_{\ome} = 1$ be fixed. We have \begin{equation*} \langle \RR_h^\edge,\vv \rangle = \langle \RR,\vv \rangle - (\jj-\jj_h^\edge,\vv)_{\ome}. \end{equation*} We define $q$ as the unique element of $H^1_{\GeD}(\ome)$ such that \begin{equation*} (\grad q,\grad w) = (\vv,\grad w) \quad \forall w \in H^1_{\GeD}(\ome), \end{equation*} and set $\widetilde \vv \eq \vv - \grad q$. 
Since $\div \jj = \div \jj_h^\edge = 0$ and $\jj-\jj_h^\edge \in \HH_{\GeN}(\ddiv,\ome)$, we have $(\jj-\jj_h^\edge,\grad q)_{\ome} = 0$, and it follows that \begin{equation*} \langle \RR_h^\edge,\vv \rangle = \langle \RR,\vv \rangle - (\jj-\jj_h^\edge,\widetilde \vv)_{\ome} \leq \|\RR\|_{\star,\edge} + \|\jj - \jj_h^\edge\|_{\ome}\|\widetilde \vv\|_{\ome}. \end{equation*} Since $\tvv \in \HH_{\GeD}(\ccurl,\ome) \cap \HH_{\GeN}(\ddiv,\ome)$ with $\div \tvv = 0$ in $\ome$, recalling~\eqref{eq_local_poincare_vectorial}, we have \begin{equation*} \|\widetilde \vv\|_{\ome} \leq \CPVe h_\ome \|\curl \widetilde \vv\|_{\ome} = \CPVe h_\ome \|\curl \vv\|_{\ome} = \CPVe h_\ome, \end{equation*} and we obtain~\eqref{eq_data_oscillations_lower_bound} by taking the supremum over all $\vv$. The proof of~\eqref{eq_data_oscillations_upper_bound} follows exactly the same path. \end{proof} \subsection{Partition of unity and cut-off estimates} We now analyze a partition of unity for vector-valued functions that we later employ to localize the error onto edge patches. Recalling the notation $\ttau_{\edge}$ for the unit tangent vector orienting $\edge$, we quote the following classical partition of unity~\cite[Chapter~15]{Ern_Guermond_FEs_I_21}: \begin{lemma}[Vectorial partition of unity] \label{lem_PU} Let $\mathbb I$ be the identity matrix in ${\mathbb R}^{3\times 3}$. The edge basis functions $\ppsi_\edge$ from~\eqref{eq_BF} satisfy \[ \sum_{\edge \in \EE_h}\ttau_\edge \otimes \ppsi_\edge = \sum_{\edge \in \EE_h}\ppsi_\edge \otimes \ttau_\edge = {\mathbb I}, \] where $\otimes$ denotes the outer product, so that we have \begin{equation} \label{eq_partition_unity} \ww = \sum_{\edge \in \EE_h} (\ww \cdot \ttau_\edge )|_\ome \ppsi_\edge \qquad \forall \ww \in \LL^2(\Omega). 
\end{equation} \end{lemma} \begin{lemma}[Cut-off stability] \label{lem_c_stab} For every edge $\edge \in \EE_h$, recalling the space $H^1_\star(\ome)$ defined in~\eqref{eq_Hse} and the constant $\Cconte$ defined in~\eqref{eq_definition_Ccont}, we have \begin{equation} \label{eq_estimate_cont} \|\curl(v \ppsi_\edge)\|_{\ome} \leq \Cconte \|\grad v\|_\ome \qquad \forall v \in H^1_\star(\ome). \end{equation} Moreover, the upper bound~\eqref{eq_bound_Ccont} on $\Cconte$ holds true. \end{lemma} \begin{proof} Let an edge $\edge \in \EE_h$ and $v \in H^1_\star(\ome)$ be fixed. Since $\curl (v \ppsi_\edge) = v \curl \ppsi_\edge + \grad v \times \ppsi_\edge$, we have, using~\eqref{eq_local_poincare}, \begin{align*} \|\curl (v \ppsi_\edge)\|_{\ome} &\leq \|\curl \ppsi_\edge\|_{\infty,\ome}\|v\|_\ome + \|\ppsi_\edge\|_{\infty,\ome}\|\grad v\|_\ome \\ &\leq (\|\curl \ppsi_\edge\|_{\infty,\ome}\CPe h_\ome + \|\ppsi_\edge\|_{\infty,\ome})\|\grad v\|_\ome \\ &= \Cconte \|\grad v\|_\ome. \end{align*} This proves~\eqref{eq_estimate_cont}. To prove~\eqref{eq_bound_Ccont}, we remark that in every tetrahedron $K \in \TTe$, we have (see for instance~\cite[Section~5.5.1]{Monk_FEs_Maxwell_03}, \cite[Chapter~15]{Ern_Guermond_FEs_I_21}) \begin{equation*} \ppsi_\edge|_K = |\edge| (\lambda_1 \grad \lambda_2 - \lambda_2 \grad \lambda_1), \quad (\curl \ppsi_\edge)|_K = 2 |\edge|\grad \lambda_1 \times \grad \lambda_2, \end{equation*} where $\lambda_1$ and $\lambda_2$ are the barycentric coordinates of $K$ associated with the two endpoints of $\edge$ such that $\ttau_\edge$ points from the first to the second vertex. Since $\|\lambda_j\|_{\infty,K} = 1$ and $\|\grad \lambda_j\|_{\infty,K} \leq \rho_K^{-1}$, we have \begin{equation*} \|\ppsi_\edge\|_{\infty,K} \leq \frac{2}{\rho_K}|\edge|, \quad \|\curl \ppsi_\edge\|_{\infty,K} \leq \frac{2}{\rho_K^2}|\edge| \end{equation*} for every $K \in \TTe$. 
Recalling the definition~\eqref{eq_patch_not} of $\rho_\edge$, which implies that $\rho_\edge \leq \rho_K$, as well as the definition of $\kappa_\edge$ in~\eqref{eq_regularities}, we conclude that \begin{align*} \Cconte = \|\ppsi_\edge\|_{\infty,\ome} + \CPe h_\ome \|\curl \ppsi_\edge\|_{\infty,\ome} \leq \frac{2 |\edge|}{\rho_\edge}\left(1 + \CPe \frac{h_\ome}{\rho_\edge} \right) = \Cke. \end{align*} \end{proof} \subsection{Upper bound using localized residual dual norms} We now establish an upper bound on the error using the localized residual dual norms $\|\RR_h^\edge\|_{\star,\edge}$, in the spirit of~\cite{Blech_Mal_Voh_loc_res_NL_20}, \cite[Chapter~34]{Ern_Guermond_FEs_II_21}, and the references therein. \begin{proposition}[Upper bound by localized residual dual norms] Let $\Cconte$ and $\Clift$ be defined in~\eqref{eq_definition_Ccont} and~\eqref{eq_estimate_lift}, respectively. Then the following holds: \begin{equation} \label{eq_upper_bound_residual} \|\curl(\ee - \ee_h)\|_{\Omega} \leq \sqrt{6} \Clift \left ( \sum_{\edge \in \EE_h} \Cconte^2 \left (\|\RR_h^\edge\|_{\star,\edge} + \osc_\edge\right )^2 \right )^{1/2}. \end{equation} \end{proposition} \begin{proof} We start with~\eqref{eq_res}. Let $\vv \in \HH_{\GD}(\ccurl,\Omega)$ with $\|\curl \vv\|_{\Omega} = 1$ be fixed. Following~\eqref{eq_estimate_lift}, we define $\ww \in \HH^1(\Omega) \cap \HH_{\GD}(\ccurl,\Omega)$ such that $\curl \ww = \curl \vv$ with \begin{equation} \label{tmp_estimate_lift} \|\grad \ww\|_{\Omega} \leq \Clift \|\curl \vv\|_{\Omega}. \end{equation} As a consequence of~\eqref{eq_maxwell_weak_II} and~\eqref{eq_maxwell_discrete_II}, the residual $\RR$ is (in particular) orthogonal to $\NN_0(\TT_h) \cap \HH_{\GD}(\ccurl,\Omega)$.
Thus, by employing the partition of unity~\eqref{eq_partition_unity}, we have \begin{equation*} \langle \RR,\vv \rangle = \langle \RR,\ww \rangle = \sum_{\edge \in \EE_h} \langle \RR, (\ww \cdot \ttau_\edge )|_\ome \ppsi_\edge \rangle = \sum_{\edge \in \EE_h} \langle \RR, (\ww \cdot \ttau_\edge - \overline{w_\edge})|_\ome \ppsi_\edge \rangle, \end{equation*} where $\overline{w_\edge} \eq 0$ if $\edge \in \EED_h$ and \begin{equation*} \overline{w_\edge} \eq \frac{1}{|\ome|} \int_\ome \ww \cdot \ttau_\edge \end{equation*} otherwise. Since $(\ww \cdot \ttau_\edge - \overline{w_\edge}) \ppsi_\edge \in \HH_{\GeD}(\ccurl,\ome)$ for all $\edge\in \EE_h$, we have from~\eqref{eq_RR_ome} \begin{equation*} \langle \RR,\vv \rangle \leq \sum_{\edge \in \EE_h} \|\RR\|_{\star,\edge} \|\curl \left ( \left (\ww \cdot \ttau_\edge - \overline{w_\edge} \right ) \ppsi_\edge \right )\|_{\ome}. \end{equation*} We observe that $\ww \cdot \ttau_\edge - \overline{w_\edge} \in H^1_\star(\ome)$ for all $\edge \in \EE_h$ and that \begin{equation*} \|\grad(\ww \cdot \ttau_\edge - \overline{w_\edge})\|_{\ome} = \|\grad (\ww \cdot \ttau_\edge)\|_{\ome} \leq \|\grad\ww\|_{\ome}. \end{equation*} As a result, \eqref{eq_estimate_cont} shows that \begin{equation*} \langle \RR,\vv \rangle \leq \sum_{\edge \in \EE_h} \Cconte \|\RR\|_{\star,\edge}\|\grad \ww\|_{\ome} \leq \left (\sum_{\edge \in \EE_h} \Cconte^2 \|\RR\|_{\star,\edge}^2\right )^{1/2} \left (\sum_{\edge \in \EE_h} \|\grad \ww\|_{\ome}^2\right )^{1/2}. \end{equation*} At this point, as each tetrahedron $K \in \TT_h$ has $6$ edges, we have \begin{equation*} \sum_{\edge \in \EE_h} \|\grad \ww\|_{\ome}^2 \revision{=} 6 \|\grad \ww\|_{\Omega}^2, \end{equation*} and using~\eqref{tmp_estimate_lift}, we infer that \begin{equation*} \langle \RR,\vv \rangle\leq \sqrt{6} \Clift \left (\sum_{\edge \in \EE_h} \Cconte^2\|\RR\|_{\star,\edge}^2\right )^{1/2} \|\curl \vv\|_{\Omega}. 
\end{equation*} Then, we conclude with~\eqref{eq_data_oscillations_upper_bound}. \end{proof} \subsection{Lower bound using localized residual dual norms} We now consider the derivation of local lower bounds on the error using the residual dual norms. We first establish a result for the residual $\RR$. \begin{lemma}[Local residual] For every edge $\edge \in \EE_h$, the following holds: \begin{equation} \label{eq_minimization} \|\RR\|_{\star,\edge} = \min_{\substack{ \hh \in \HH_{\GeN}(\ccurl,\ome)\\ \curl \hh = \jj}} \|\hh - \curl \ee_h\|_{\ome}, \end{equation} as well as \begin{equation} \label{eq_lower_bound_residual} \|\RR\|_{\star,\edge} \leq \|\curl(\ee - \ee_h)\|_{\ome}. \end{equation} \end{lemma} \begin{proof} Let us define $\hh^\star$ as the unique element of $\LL^2(\ome)$ such that \begin{equation} \label{tmp_definition_minimizer} \left \{ \begin{array}{rcll} \div \hh^\star &=& 0 & \text{ in } \ome, \\ \curl \hh^\star &=& \jj & \text{ in } \ome, \\ \hh^\star \cdot \nn_{\omega_\edge} &=& \curl \ee_h \cdot \nn_{\omega_\edge} & \text{ on } \GeD, \\ \hh^\star \times \nn_{\omega_\edge} &=& \boldsymbol 0 & \text{ on } \GeN. \end{array} \right . \end{equation} The existence and uniqueness of $\hh^\star$ follows from \cite[Proposition 7.4]{Fer_Gil_Maxw_BC_97} after lifting by $\curl \ee_h$. The second and fourth equations in~\eqref{tmp_definition_minimizer} imply that $\hh^\star$ belongs to the minimization set of~\eqref{eq_minimization}. 
If $\hh' \in \HH_{\GeN}(\ccurl,\ome)$ with $\curl \hh' = \jj$ is another element of the minimization set, then $\hh^\star - \hh' = \grad q$ for some $q \in H^1_{\GeN}(\ome)$, and we see that \begin{align*} \|\hh' - \curl \ee_h\|_\ome^2 &= \|\hh^\star - \curl \ee_h - \grad q\|_\ome^2 \\ &= \|\hh^\star - \curl \ee_h\|_\ome^2 - 2(\hh^\star - \curl \ee_h,\grad q)_\ome + \|\grad q\|_\ome^2 \\ &= \|\hh^\star - \curl \ee_h\|_\ome^2 + \|\grad q\|_\ome^2 \\ &\geq \|\hh^\star - \curl \ee_h\|_\ome^2, \end{align*} where we used that $\hh^\star$ is divergence-free, $(\hh^\star - \curl \ee_h) \cdot \nn_{\omega_\edge} = 0$ on $\GeD$, and $q=0$ on $\GeN$ to infer that $(\hh^\star - \curl \ee_h,\grad q)_\ome=0$. Hence, $\hh^\star$ is a minimizer of~\eqref{eq_minimization}. Let $\vv \in \HH_{\GeD}(\ccurl,\ome)$. Since $(\curl \hh^\star,\vv)_\ome = (\hh^\star,\curl \vv)_\ome$, we have \begin{align*} \langle \RR,\vv \rangle &= (\jj,\vv)_\ome - (\curl \ee_h,\curl \vv)_\ome \\ &= (\curl \hh^\star,\vv)_\ome - (\curl \ee_h,\curl \vv)_\ome \\ &= (\hh^\star,\curl \vv)_\ome - (\curl \ee_h,\curl \vv)_\ome \\ &= (\pphi,\curl \vv)_\ome, \end{align*} where we have set $\pphi \eq \hh^\star - \curl \ee_h$. As above, $\div \pphi = 0$ in $\ome$ and $\pphi \cdot \nn_{\omega_\edge} = 0$ on $\GeD$. Therefore, \cite[Theorem~8.1]{Fer_Gil_Maxw_BC_97} shows that $\pphi = \curl \oome$ for some $\oome \in \HH_{\GeD}(\ccurl,\ome)$, and \begin{equation*} \langle \RR,\vv\rangle = (\curl \oome,\curl \vv)_\ome \quad \forall \vv \in \HH_{\GeD}(\ccurl,\ome). \end{equation*} At this point, it is clear that \begin{equation*} \|\RR\|_{\star,\edge} = \sup_{\substack{ \vv \in \HH_{\GeD}(\ccurl,\ome)\\ \|\curl \vv\|_{\ome} = 1}} (\curl \oome,\curl \vv)_\ome = \|\curl \oome\|_\ome = \|\hh^\star - \curl \ee_h\|_\ome. \end{equation*} Finally, we obtain~\eqref{eq_lower_bound_residual} by observing that $\widetilde \hh \eq (\curl \ee )|_\ome$ is in the minimization set of~\eqref{eq_minimization}.
\end{proof} We are now ready to state our results for the oscillation-free residuals $\RR_h^\edge$. \begin{lemma}[Local oscillation-free residual] \label{lem_res} For every edge $\edge \in \EE_h$, the following holds: \begin{equation} \label{eq_minimization_jh} \|\RR_h^\edge\|_{\star,\edge} = \min_{\substack{ \hh \in \HH_{\GeN}(\ccurl,\ome)\\ \curl \hh = \jj_h^\edge}} \|\hh - \curl \ee_h\|_{\ome}, \end{equation} as well as \begin{equation} \label{eq_lower_bound_residual_osc} \|\RR_h^\edge\|_{\star,\edge} \leq \|\curl(\ee - \ee_h)\|_{\ome} + \osc_\edge. \end{equation} \end{lemma} \begin{proof} We establish~\eqref{eq_minimization_jh} by following the same path as for~\eqref{eq_minimization}. On the other hand, we simply obtain~\eqref{eq_lower_bound_residual_osc} as a consequence of~\eqref{eq_data_oscillations_lower_bound} and~\eqref{eq_lower_bound_residual}. \end{proof} \subsection{Proof of Theorem~\ref{theorem_aposteriori}} We are now ready to give a proof of Theorem~\ref{theorem_aposteriori}. On the one hand, the \revision{broken patchwise} equilibration estimator $\eta_\edge$ defined in~\eqref{eq_definition_estimator} is evaluated from a field $\hh_h^{\edge,\star} \in \NN_p(\TTe) \cap \HH_{\GeN}(\ccurl,\ome)$ such that $\curl \hh_h^{\edge,\star} = \jj_h^\edge$, and the sequential sweep~\eqref{eq_definition_estimator_sweep} produces $\hh_h^{\edge,\heartsuit}$ also satisfying these two properties. Since the minimization set in~\eqref{eq_minimization_jh} is larger, it is clear that \begin{equation*} \|\RR_h^\edge\|_{\star,\edge} \leq \eta_\edge \end{equation*} for both estimators $\eta_\edge$. Then, \eqref{eq_upper_bound} immediately follows from~\eqref{eq_upper_bound_residual}. 
On the other hand, Theorem~\ref{theorem_stability} with the choice $\ch_h \eq (\curl \ee_h)|_\ome$ and the polynomial degree $p$ together with~\eqref{eq_minimization_jh} of Lemma~\ref{lem_res} implies that \begin{equation*} \eta_\edge \leq \Cste \|\RR_h^\edge\|_{\star,\edge} \end{equation*} for the estimator~\eqref{eq_definition_estimator}, whereas \revision{the same result for} $\hh_h^{\edge,\heartsuit}$ from~\eqref{eq_definition_estimator_sweep} \revision{follows from Theorem~\ref{thm_sweep} with again $\ch_h \eq (\curl \ee_h)|_\ome$.} Therefore, \eqref{eq_lower_bound} is a direct consequence of~\eqref{eq_lower_bound_residual_osc}. \section{Equivalent reformulation and proof of Theorem~\ref{theorem_stability} ($\HH(\ccurl)$ best-approximation in an edge patch)} \label{sec_proof_stability} In this section, we consider the minimization problem over an edge patch as posed in the statement of Theorem~\ref{theorem_stability}\revision{, as well as its sweep variant of Theorem~\ref{thm_sweep}}, which were central tools to establish the efficiency of the \revision{broken patchwise equilibrated} error estimators in Theorem~\ref{theorem_aposteriori}. These minimization problems are similar to the ones considered in~\cite{Brae_Pill_Sch_p_rob_09, Ern_Voh_p_rob_15, Ern_Voh_p_rob_3D_20} in the framework of $H^1$ and $\HH(\ddiv)$ spaces. We prove here Theorem~\ref{theorem_stability} via its equivalence with a stable broken $\HH(\ccurl)$ polynomial extension on an edge patch, as formulated in Proposition~\ref{prop_stability_patch} below. \revision{By virtue of Remark~\ref{rem_sweep_brok}, this also establishes the validity of Theorem~\ref{thm_sweep}}. \subsection{Stability of discrete minimization in a tetrahedron} \subsubsection{Preliminaries} \label{sec:preliminaries_K} We first recall some necessary notation from~\cite{Chaum_Ern_Voh_curl_elm_20}. Consider an arbitrary mesh face $F\in \FF_h$ oriented by the fixed unit normal vector $\nn_F$.
For all $\ww \in \LL^2(F)$, we define the tangential component of $\ww$ as \begin{equation} \label{eq:def_pi_tau_F} \ppi^\ttau_F (\ww) \eq \ww - (\ww \cdot \nn_F) \nn_F. \end{equation} Note that the orientation of $\nn_F$ is not important here. Let $K\in\TT_h$ and let $\FF_K$ be the collection of the faces of $K$. For all $\vv \in \HH^1(K)$ and all $F\in\FF_K$, the tangential trace of $\vv$ on $F$ is defined (with a slight abuse of notation) as $\ppi^\ttau_F(\vv) \eq \ppi^\ttau_F (\vv|_F)$. Consider now a nonempty subset $\FF \subseteq \FF_K$. We denote by $\Gamma_\FF \subset \partial K$ the corresponding part of the boundary of $K$. Let $p\ge0$ be the polynomial degree and recall that $\NN_p(K)$ is the N\'ed\'elec space on the tetrahedron $K$, see~\eqref{eq_RT_N}. We define the piecewise polynomial space on $\Gamma_\FF$ \begin{equation} \label{eq_tr_K} \NN_p^\ttau(\Gamma_\FF) \eq \left \{ \ww_\FF \in \LL^2(\Gamma_\FF) \; | \; \exists \vv_p \in \NN_p(K); \ww_F \eq (\ww_\FF)|_F = \ppi^\ttau_F (\vv_p) \quad \forall F \in \FF \right \}. \end{equation} Note that $\ww_\FF \in \NN_p^\ttau(\Gamma_\FF)$ if and only if $\ww_F\in \NN_p^\ttau(\Gamma_{\{F\}})$ for all $F\in\FF$ and, whenever $|\FF|\ge2$, for every pair $(F_-,F_+)$ of distinct faces in $\FF$, the following tangential trace compatibility condition holds true along their common edge $\edge \eq F_+\cap F_-$: \begin{equation} \label{eq_edge_compatibility_condition} (\ww_{F_+})|_\edge \cdot \ttau_\edge = (\ww_{F_-})|_\edge \cdot \ttau_\edge. \end{equation} For all $\ww_\FF \in \NN_p^\ttau(\Gamma_\FF)$, we set \begin{equation} \label{eq_scurl_curl_el} \scurl_F (\ww_F) \eq (\curl \vv_p)|_F \cdot \nn_F \qquad \forall F \in \FF, \end{equation} which is well-defined independently of the choice of $\vv_p$. Note that the orientation of $\nn_F$ is relevant here. The definition~\eqref{eq:def_pi_tau_F} of the tangential trace cannot be applied to fields with the minimal regularity $\vv \in \HH(\ccurl,K)$.
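The elementary properties of the tangential component map just introduced, namely that $\ppi^\ttau_F$ is the orthogonal projection onto the tangent plane of $F$ and that it is insensitive to the orientation of $\nn_F$, can be checked numerically. The following minimal sketch (an illustration only, not part of the analysis; it assumes only numpy) verifies them for a random unit normal and a random vector:

```python
import numpy as np

rng = np.random.default_rng(0)
n = rng.normal(size=3)
n /= np.linalg.norm(n)   # unit normal n_F to the face F
w = rng.normal(size=3)   # value of an arbitrary field on F

def pi_tau(w, n):
    # tangential component w - (w . n) n, cf. the definition of pi^tau_F
    return w - np.dot(w, n) * n

t = pi_tau(w, n)
assert abs(np.dot(t, n)) < 1e-12        # the result is tangential to F
assert np.allclose(pi_tau(t, n), t)     # the map is a projection (idempotent)
assert np.allclose(pi_tau(w, -n), t)    # the orientation of n_F is irrelevant
```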
In what follows, we use the following notion to prescribe the tangential trace of a field in $\HH(\ccurl,K)$. \begin{definition}[Tangential trace by integration by parts in a single tetrahedron] \label{definition_partial_trace} Let $K$ be a tetrahedron and $\FF \subseteq \FF_K$ a nonempty (sub)set of its faces. Given $\rr_\FF \in \NN_p^\ttau(\Gamma_\FF)$ and $\vv \in \HH(\ccurl,K)$, we employ the notation ``$\vv|^\ttau_\FF = \rr_\FF$'' to say that \begin{equation*} (\curl \vv,\pphi)_K - (\vv,\curl \pphi)_K = \sum_{F \in \FF} (\rr_F,\pphi \times \nn_K)_F \quad \forall \pphi \in \HH^1_{\ttau,\FF^{\mathrm{c}}}(K), \end{equation*} where \begin{equation*} \HH_{\ttau,\FF^{\mathrm{c}}}^1(K) \eq \left \{ \pphi \in \HH^1(K) \; | \; \pphi|_F \times \nn_{\revision{K}} = \boldsymbol 0 \quad \forall F \in \FF^{\mathrm{c}} \eq \FF_K \setminus \FF \right \}. \end{equation*} Whenever $\vv \in \HH^1(K)$, $\vv|^\ttau_\FF = \rr_\FF$ if and only if $\ppi^\ttau_F (\vv) = \rr_F$ for all $F \in \FF$. \end{definition} \subsubsection{Statement of the stability result in a tetrahedron} Recall the Raviart--Thomas space $\RT_p(K)$ on the simplex $K$, see~\eqref{eq_RT_N}. We are now ready to state a key technical tool from~\cite[Theorem~2]{Chaum_Ern_Voh_curl_elm_20}, based on~\cite[Theorem~7.2]{Demk_Gop_Sch_ext_II_09} and~\cite[Proposition~4.2]{Cost_McInt_Bog_Poinc_10}. \begin{proposition}[Stable $\HH(\ccurl)$ polynomial extension on a tetrahedron] \label{prop_stability_tetrahedra} Let $K$ be a tetrahedron and let $\emptyset\subseteq \FF \subseteq \FF_K$ be a (sub)set of its faces. 
Then, for every polynomial degree $p \geq 0$, for all $\rr_K \in \RT_p(K)$ such that $\div \rr_K = 0$, and if $\FF \ne \emptyset$, for all $\rr_\FF \in \NN_p^\ttau(\Gamma_\FF)$ such that $\rr_K \cdot \nn_{F} = \scurl_F (\rr_F)$ for all $F \in \FF$, the following holds: \begin{equation} \label{eq_minimization_element_K} \min_{\substack{ \vv_p \in \NN_p(K) \\ \curl \vv_p = \rr_K \\ \vv_{p}|^\ttau_\FF = \rr_\FF }} \|\vv_p\|_{K} \le C_{\mathrm{st},K} \min_{\substack{ \vv \in \HH(\ccurl,K) \\ \curl \vv = \rr_K \\ \vv|^\ttau_\FF = \rr_\FF }} \|\vv\|_{K}, \end{equation} where the condition on the tangential trace in the minimizing sets is void if $\FF=\emptyset$. Both minimizers in~\eqref{eq_minimization_element_K} are uniquely defined and the constant $C_{\mathrm{st},K}$ only depends on the shape-regularity parameter $\kappa_K$ of $K$. \end{proposition} \subsection{Piola mappings} \label{sec_Piola} This short section reviews some useful properties of Piola mappings used below, see~\cite[Chapter~9]{Ern_Guermond_FEs_I_21}. Consider two tetrahedra $\Kin,\Kout \subset \mathbb R^3$ and an invertible affine mapping $\TTT: \mathbb R^3 \to \mathbb R^3$ such that $\Kout = \TTT(\Kin)$. Let $\JJJ_{\TTT}$ be the (constant) Jacobian matrix of $\TTT$. Note that we do not require that $\det \JJJ_{\TTT}$ is positive. The affine mapping $\TTT$ can be identified by specifying the image of each vertex of $\Kin$. We consider the covariant and contravariant Piola mappings \begin{equation*} \ppsi^{\mathrm{c}}_{\TTT}(\vv) = \left (\JJJ_{\TTT}\right )^{-T} \left (\vv \circ \TTT^{-1} \right ), \qquad \ppsi^{\mathrm{d}}_{\TTT}(\vv) = \frac{1}{\det \left (\JJJ_{\TTT}\right )} \JJJ_{\TTT} (\vv \circ \TTT^{-1}) \end{equation*} for vector-valued fields $\vv: \Kin \to \mathbb R^3$. It is well-known that $\ppsi^{\mathrm{c}}_{\TTT}$ maps $\HH(\ccurl,\Kin)$ onto $\HH(\ccurl,\Kout)$ and it maps $\NN_p(\Kin)$ onto $\NN_p(\Kout)$ for any polynomial degree $p\ge0$.
Similarly, $\ppsi^{\mathrm{d}}_{\TTT}$ maps $\HH(\ddiv,\Kin)$ onto $\HH(\ddiv,\Kout)$ and it maps $\RT_p(\Kin)$ onto $\RT_p(\Kout)$. Moreover, the Piola mappings $\ppsi^{\mathrm{c}}_{\TTT}$ and $\ppsi^{\mathrm{d}}_{\TTT}$ commute with the curl operator in the sense that \begin{equation} \label{eq_piola_commute} \curl \left (\ppsi^{\mathrm{c}}_{\TTT}(\vv)\right ) = \ppsi^{\mathrm{d}}_{\TTT}\left (\curl \vv\right ) \quad \forall \vv \in \HH(\ccurl,\Kin). \end{equation} In addition, we have \begin{equation} \label{eq_piola_adjoint} (\ppsi^{\mathrm{c}}_{\TTT}(\vv_{\rm in}),\vv_{\rm out})_{\Kout} = \sign (\det \JJJ_{\TTT}) (\vv_{\rm in},(\ppsi^{\mathrm{d}}_{\TTT})^{-1}(\vv_{\rm out}))_{\Kin}, \end{equation} for all $\vv_{\rm in} \in \HH(\ccurl,\Kin)$ and $\vv_{\rm out} \in \HH(\ccurl,\Kout)$. We also have $\|\ppsi_{\TTT}^{\mathrm{c}}(\vv)\|_{\Kout} \leq \frac{h_{\Kin}}{\rho_{\Kout}} \|\vv\|_{\Kin}$ for all $\vv \in \LL^2(\Kin)$, so that whenever $\Kin,\Kout$ belong to the same edge patch $\TTe$, we have \begin{equation} \label{eq_stab_piola_L2} \|\ppsi_{\TTT}^{\mathrm{c}}(\vv)\|_{\Kout} \leq C \|\vv\|_{\Kin} \quad \forall \vv \in \LL^2(\Kin), \end{equation} for a constant $C$ only depending on the shape-regularity $\kappa_\edge$ of the patch $\TTe$ defined in~\eqref{eq_regularities}. \subsection{Stability of discrete minimization in an edge patch} \subsubsection{Preliminaries} In this section, we consider an edge patch $\TTe$ associated with a mesh edge $\edge\in\EE_h$ consisting of tetrahedral elements $K$ sharing the edge $\edge$, cf. Figure~\ref{fig_patch}. We denote by $n \eq |\TTe|$ the number of tetrahedra in the patch, by $\FFe$ the set of all faces of the patch, by $\FFei \subset \FFe$ the set of ``internal'' faces, i.e., those being shared by two different tetrahedra from the patch, and finally, by $\FFee \eq \FFe \setminus \FFei$ the set of ``external'' faces. 
The patch is either of ``interior'' type, corresponding to an edge in the interior of the domain $\Omega$, in which case there is a full loop around $\edge$, see Figure~\ref{fig_patch}, left, or of ``boundary'' type, corresponding to an edge on the boundary of the domain $\Omega$, in which case there is no full loop around $\edge$, see Figure~\ref{fig_patch}, right, and Figure~\ref{figure_numbering_patch}. We further distinguish three types of patches of boundary type depending on the status of the two boundary faces sharing the associated boundary edge: the patch is of ``Dirichlet boundary'' type if both faces lie in $\overline \GD$, of ``mixed boundary'' type if one face lies in $\overline \GD$ and the other in $\overline \GN$, and of ``Neumann boundary'' type if both faces lie in $\overline \GN$. Note that for an interior patch, $|\FFei| = n$, whereas $|\FFei| = n-1$ for a boundary patch. The open domain associated with $\TTe$ is denoted by $\ome$, and $\nn_{\ome}$ stands for the unit normal vector to $\partial \ome$ pointing outward $\ome$. \begin{figure}[htb] \centerline{\includegraphics[height=0.35\textwidth]{figures/patch_edge_bound_DN.pdf} \qquad \qquad \includegraphics[height=0.35\textwidth]{figures/patch_edge_bound_NN.pdf}} \caption{Mixed (left) and Neumann (right) boundary patch $\TTe$} \label{figure_numbering_patch} \end{figure} We denote by $\adown$ and $\aup$ the two vertices of the edge $\edge$. The remaining vertices are numbered consecutively in one sense of rotation around the edge $\edge$ (this sense is only specific for the ``mixed boundary'' patches) and denoted by $\aaa_{0},\aaa_{1},\dots,\aaa_{n}$, with $\aaa_{0} = \aaa_{n}$ if the patch is interior. Then $\TTe = \bigcup_{j\in\{1:n\}} K_j$ for $K_j \eq \conv(\aaa_{j-1},\aaa_{j},\adown,\aup)$; we also denote $K_0 \eq K_n$ and $K_{n+1} \eq K_1$.
For all $j\in\{0:n\}$, we define $F_{j} \eq \conv(\aaa_{j},\adown,\aup)$, and for all $j\in\{1:n\}$, we let $\Fdown_j \eq \conv(\aaa_{j-1},\aaa_{j},\adown)$ and $\Fup_j \eq \conv(\aaa_{j-1},\aaa_{j},\aup)$. Then $\FF_{K_j} = \{F_{j-1},F_{j},\Fdown_j,\Fup_j\}$, and $F_0 = F_n$ if the patch is interior. We observe that, respectively for interior and boundary patches, $\FFei = \bigcup_{j\in\{0:n-1\}} \{F_{j}\}$, $\FFee = \bigcup_{j\in\{1:n\}} \{\Fdown_j,\Fup_j\}$ and $\FFei = \bigcup_{j\in\{1:n-1\}} \{F_{j}\}$, $\FFee = \bigcup_{j\in\{1:n\}} \{\Fdown_j,\Fup_j\} \cup \{F_0,F_n\}$. Finally, if $F_{j} \in \FFei$ is an internal face, we define its normal vector by $\nn_{F_{j}} \eq \nn_{K_{j+1}} = -\nn_{K_j}$, whereas for any external face $F \in \FFee$, we define its normal vector to coincide with the normal vector pointing outward the patch, $\nn_F \eq \nn_{\ome}$. We now extend the notions of Section~\ref{sec:preliminaries_K} to the edge patch $\TTe$. Consider the following broken Sobolev spaces: \begin{align*} \HH(\ccurl,\TTe) &\eq \left \{ \vv \in \LL^2(\ome) \; | \; \vv|_K \in \HH(\ccurl,K) \; \forall K \in \TTe \right \}, \\ \HH^1(\TTe) &\eq \left \{ \vv \in \LL^2(\ome) \; | \; \vv|_K \in \HH^1(K) \; \forall K \in \TTe \right \}, \end{align*} as well as the broken N\'ed\'elec space $\NN_{p}(\TTe)$. For all $\vv \in \HH^1(\TTe)$, we employ the notation $\jump{\vv}_F \in \LL^2(F)$ for the ``(strong) jump'' of $\vv$ across any face $F\in\FF^\edge$. Specifically, for an internal face $F_{j} \in \FFei$, we set $\jump{\vv}_{F_{j}} \eq (\vv|_{K_{j+1}})|_{F_{j}} - (\vv|_{K_j})|_{F_{j}}$, whereas for an external face $F \in \FFee$, we set $\jump{\vv}_F \eq \vv|_F$. Note in particular that piecewise polynomial functions from $\NN_{p}(\TTe)$ belong to $\HH^1(\TTe)$, so that their strong jumps are well-defined. 
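Since the broken space $\NN_{p}(\TTe)$ is built from the same lowest-order edge functions as in Lemma~\ref{lem_PU}, it may help to see the vectorial partition of unity verified concretely. The following minimal numerical sketch (an illustration only, assuming numpy) checks $\sum_{\edge}\ttau_\edge \otimes \ppsi_\edge = \mathbb I$ on a single tetrahedron, using the explicit formula $\ppsi_\edge|_K = |\edge| (\lambda_1 \grad \lambda_2 - \lambda_2 \grad \lambda_1)$ recalled in the proof of Lemma~\ref{lem_c_stab}:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
V = rng.normal(size=(4, 3))  # vertices a_0,...,a_3 of a (random) tetrahedron K

# Barycentric coordinates: lambda_i(x) = c_i + g_i . x with lambda_i(a_j) = delta_ij
A = np.hstack([np.ones((4, 1)), V])   # row j is [1, a_j]
X = np.linalg.inv(A)                  # column i holds the coefficients (c_i, g_i)
c, G = X[0, :], X[1:, :].T            # G[i] = grad lambda_i (constant on K)

x = rng.normal(size=3)                # arbitrary evaluation point
lam = c + G @ x                       # lambda_i(x), summing to 1

S = np.zeros((3, 3))
for i, j in itertools.combinations(range(4), 2):  # the 6 edges e = (a_i, a_j)
    edge = V[j] - V[i]
    tau = edge / np.linalg.norm(edge)             # unit tangent tau_e
    psi = np.linalg.norm(edge) * (lam[i] * G[j] - lam[j] * G[i])  # psi_e(x)
    S += np.outer(tau, psi)

assert np.allclose(S, np.eye(3))      # vectorial partition of unity of Lemma lem_PU
```

The identity is affine in $x$, so the check succeeds at any evaluation point, not only inside $K$.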
To define a notion of a ``weak tangential jump'' for functions of $\HH(\ccurl,\TTe)$, for which a strong (pointwise) definition cannot apply, some preparation is necessary. Let $\FF$ be a subset of the faces of an edge patch $\TTe$ containing the internal faces, i.e., $\FFei \subseteq \FF \subseteq \FFe$, and denote by $\Gamma_\FF$ the corresponding open set. The set $\FF$ \revision{represents the set of faces appearing in the minimization. It} depends on the type of edge patch and is reported in Table~\ref{Tab:type_of_calF}. Extending~\eqref{eq_tr_K}, we define the piecewise polynomial space on $\Gamma_\FF$ \begin{equation} \label{eq_tr_Te} \NN_p^\ttau(\Gamma_\FF) \eq \left \{ \ww_\FF \in \LL^2(\Gamma_\FF) \; | \; \exists \vv_p \in \NN_{p}(\TTe); \ww_F \eq (\ww_\FF)|_F = \ppi^\ttau_F (\jump{\vv_p}_F) \quad \forall F \in \FF \right \}. \end{equation} Extending~\eqref{eq_scurl_curl_el}, for all $\ww_\FF \in \NN_p^\ttau(\Gamma_\FF)$, we set \begin{equation} \label{eq_scurl_curl_patch} \scurl_F (\ww_F) \eq \jump{\curl \vv_p}_F \cdot \nn_F \qquad \forall F \in \FF.
\end{equation} Then we can extend Definition~\ref{definition_partial_trace} to prescribe weak tangential jumps of functions in $\HH(\ccurl,\TTe)$ as follows: \begin{table}[tb] \begin{center}\begin{tabular}{|l|l|l|} \hline patch type&\hfil $\FFei$&\hfil $\FF$\\ \hline interior&$\{F_1,\ldots,F_n\}$&$\FFei = \{F_1,\ldots,F_n\}$\\ Dirichlet boundary&$\{F_1,\ldots,F_{n-1}\}$&$\FFei = \{F_1,\ldots,F_{n-1}\}$\\ mixed boundary&$\{F_1,\ldots,F_{n-1}\}$&$\{F_0\}\cup \FFei = \{F_0,F_1,\ldots,F_{n-1}\}$\\ Neumann boundary&$\{F_1,\ldots,F_{n-1}\}$&$\{F_0\}\cup\FFei\cup\{F_n\}=\{F_0,F_1,\ldots,F_{n-1},F_n\}$\\ \hline \end{tabular}\end{center} \caption{The set of internal faces $\FFei$ and the set $\FF$ used for the minimization problems on the edge patch for the four patch types.} \label{Tab:type_of_calF} \end{table} \begin{definition}[Tangential jumps by integration by parts in an edge patch] \label{definition_jumps} Given $\rr_{\FF} \in \NN_p^\ttau(\Gamma_\FF)$ and $\vv \in \HH(\ccurl,\TTe)$, we employ the notation ``$\jump{\vv}^\ttau_\FF = \rr_\FF$'' to say that \begin{equation}\label{eq_prescription_tang_jump_e} \sum_{K \in \TTe} \left \{ (\curl \vv,\pphi)_K - (\vv,\curl \pphi)_K \right \} = \sum_{F \in \FF} (\rr_F,\pphi \times \nn_F)_F \quad \forall \pphi \in \HH^1_{\ttau,\FF^{\mathrm{c}}}(\TTe), \end{equation} where \begin{equation} \label{eq_Htpatch} \HH^1_{\ttau,\FF^{\mathrm{c}}}(\TTe) \eq \left \{ \pphi \in \HH^1(\TTe) \; | \; \jump{\pphi}_F \times \nn_F = \boldsymbol 0 \quad \forall F \in \FFei \cup (\FFee \setminus \FF) \right \}. \end{equation} Whenever $\vv \in \HH^1(\TTe)$, $\jump{\vv}^\ttau_\FF = \rr_\FF$ if and only if $\ppi^\ttau_F(\jump{\vv}_F) = \rr_F$ for all $F \in \FF$. Note that $\pphi\times \nn_F$ in~\eqref{eq_prescription_tang_jump_e} is uniquely defined for all $\pphi \in \HH^1_{\ttau,\FF^{\mathrm{c}}}(\TTe)$. 
\end{definition} \subsubsection{Statement of the stability result in an edge patch} Henceforth, if $\rr_\TT \in \RT_p(\TTe)$ is an elementwise Raviart--Thomas function, we will employ the notation $\rr_K \eq \rr_\TT|_K$ for all $K \in \TTe$. In addition, if $\vv_{\TT} \in \NN_{p}(\TTe)$ is an elementwise N\'ed\'elec function, the notations $\div \rr_\TT$ and $\curl \vv_{\TT}$ will be understood elementwise. \begin{definition}[Compatible data] \label{definition_compatible_data} Let $\rr_\TT \in \RT_p(\TTe)$ and $\rr_\FF \in \NN_p^\ttau(\Gamma_\FF)$. We say that the data $\rr_\TT$ and $\rr_\FF$ are compatible if \bse\begin{align} \div \rr_\TT & = 0, \label{eq_comp_a} \\ \jump{\rr_\TT}_{F} \cdot \nn_F & = \scurl_F (\rr_F) \quad \forall F \in \FF, \label{eq_comp_b} \end{align} and with the following additional condition whenever the patch is either of interior or Neumann boundary type: \begin{alignat}{2} \sum_{j\in\{1:n\}} \rr_{F_j}|_\edge \cdot \ttau_\edge &= 0 &\qquad& \textrm{(interior type)}, \label{eq_comp_c} \\ \sum_{j\in\{0:n-1\}} \rr_{F_j}|_\edge \cdot \ttau_\edge &= \rr_{F_n}|_\edge \cdot \ttau_\edge &\qquad& \textrm{(Neumann boundary type)}. \label{eq_comp_d} \end{alignat}\ese \end{definition} \begin{definition}[Broken patch spaces]\label{def_spaces} Let $\rr_\TT \in \RT_p(\TTe)$ and $\rr_\FF \in \NN_p^\ttau(\Gamma_\FF)$ be compatible data as per Definition~\ref{definition_compatible_data}. We define \bse\begin{align} \boldsymbol V(\TTe) & \eq \left \{ \vv \in \HH(\ccurl,\TTe) \; \left | \begin{array}{rl} \curl \vv &= \rr_\TT \\ \jump{\vv}^\ttau_\FF &= \rr_\FF \end{array} \right . \right \}, \label{eq_VTe}\\ \boldsymbol V_p(\TTe) & \eq \boldsymbol V(\TTe) \cap \NN_p(\TTe). \label{eq_VqTe} \end{align}\ese \end{definition} We will show in Lemma~\ref{lem_EU} below that the space $\boldsymbol V_p(\TTe)$ (and therefore also $\boldsymbol V(\TTe)$) is nonempty. We are now ready to present our central result of independent interest. 
To facilitate the reading, the proof is postponed to Section~\ref{sec:proof_stability_patch}. \begin{proposition}[Stable broken $\HH(\ccurl)$ polynomial extension in an edge patch] \label{prop_stability_patch} Let an edge $\edge\in\EE_h$ and the associated edge patch $\TTe$ with subdomain $\ome$ be fixed. Let the set of faces $\FF$ be specified in Table~\ref{Tab:type_of_calF}. Then, for every polynomial degree $p \geq 0$, all $\rr_\TT \in \RT_p(\TTe)$, and all $\rr_\FF \in \NN_p^\ttau(\Gamma_\FF)$ compatible as per Definition~\ref{definition_compatible_data}, the following holds: \begin{equation} \label{eq_stab_shift} \min_{\vv_p\in {\boldsymbol V}_p(\TTe)} \|\vv_p\|_{\ome} = \min_{\substack{ \vv_p \in \NN_p(\TTe) \\ \curl \vv_p = \rr_\TT \\ \jump{\vv_p}^\ttau_\FF = \rr_\FF }} \|\vv_p\|_{\ome} \le C_{\mathrm{st},\edge} \min_{\substack{ \vv \in \HH(\ccurl,\TTe) \\ \curl \vv = \rr_\TT \\ \jump{\vv}^\ttau_\FF = \rr_\FF }} \|\vv\|_{\ome} = C_{\mathrm{st},\edge} \min_{\vv\in {\boldsymbol V}(\TTe)} \|\vv\|_{\ome}. \end{equation} Here, all the minimizers are uniquely defined and the constant $C_{\mathrm{st},\edge}$ only depends on the shape-regularity parameter $\kappa_\edge$ of the patch $\TTe$ defined in~\eqref{eq_regularities}. \end{proposition} \begin{remark}[Converse inequality in Proposition~\ref{prop_stability_patch}] \label{rem_conv} Note that the converse to the inequality~\eqref{eq_stab_shift} holds trivially with constant one. \end{remark} \subsection{Equivalence of Theorem~\ref{theorem_stability} with Proposition~\ref{prop_stability_patch}} $ $ \revision{We have the following important link, establishing Theorem~\ref{theorem_stability}, including the existence and uniqueness of the minimizers. \begin{lemma}[Equivalence of Theorem~\ref{theorem_stability} with Proposition~\ref{prop_stability_patch}] Theorem~\ref{theorem_stability} holds if and only if Proposition~\ref{prop_stability_patch} holds. 
{\color{black} More precisely, let $\hh_p^\star \in \NN_p(\TTe) \cap \HH_{\GeN}(\ccurl,\ome)$ and $\hh^\star \in \HH_{\GeN}(\ccurl,\ome)$ be any solutions to the minimization problems of Theorem~\ref{theorem_stability} for the data $\jj_h^{\edge} \in \RT_p(\TTe) \cap \HH_{\GeN}(\ddiv,\ome)$ with $\div \jj_h^{\edge} = 0$ and $\ch_h \in \NN_p(\TTe)$. Let $\vv^\star_p \in {\boldsymbol V}_p(\TTe)$ and $\vv^\star \in \boldsymbol V(\TTe)$ be any minimizers of Proposition~\ref{prop_stability_patch} for the data \begin{equation} \label{eq_data_eq} \rr_\TT \eq \jj_h^{\edge}-\curl \ch_h, \qquad \rr_F \eq - \ppi^\ttau_F \left (\jump{\ch_h}_F\right ) \quad \forall F \in \FF, \end{equation} where $\FF$ is specified in Table~\ref{Tab:type_of_calF}.} Then {\color{black} \begin{equation} \label{eq_eq} \hh_p^\star - \ch_h = \vv^\star_p, \quad \hh^\star - \ch_h = \vv^\star. \end{equation} In the converse direction, for given data $\rr_\TT$ and $\rr_\FF$} in Proposition~\ref{prop_stability_patch}, {\color{black} compatible as per Definition~\ref{definition_compatible_data}, taking any $\ch_h \in \NN_p(\TTe)$ such that $- \ppi^\ttau_F \left (\jump{\ch_h}_F\right ) = \rr_F$ for all $F \in \FF$ and $\jj_h^{\edge} \eq \rr_\TT + \curl \ch_h$} gives minimizers of Theorem~\ref{theorem_stability} such that{\color{black} ~\eqref{eq_eq} holds true}. \end{lemma} } \begin{proof} \revision{The proof follows} via a shift by the datum $\ch_h$. In order to show~\eqref{eq_eq} in the forward direction \revision{(the converse direction is actually easier)}, we merely need to show that $\rr_\TT$ and $\rr_\FF$ prescribed by~\eqref{eq_data_eq} are compatible data as per Definition~\ref{definition_compatible_data}. Indeed, to start with, since $\jj_h^{\edge}, \curl \ch_h \in \RT_p(\TTe)$, we have $\rr_\TT \in \RT_p(\TTe)$.
In addition, since $\div \jj_h^{\edge} = 0$ from~\eqref{eq_definition_jj_h}, \begin{equation*} \div \rr_\TT = \div \jj_h^{\edge} - \div (\curl \ch_h) = 0, \end{equation*} which is~\eqref{eq_comp_a}. Then, for all $j\in\{1:n\}$ if the patch is of interior type and for all $j\in\{1:n-1\}$ if the patch is of boundary type, we have \begin{equation*} \rr_{F_{j}} = \ppi^\ttau_{F_j} (\ch_h|_{K_j}) - \ppi^\ttau_{F_j} (\ch_h|_{K_{j+1}}), \end{equation*} and therefore, recalling the definition~\eqref{eq_scurl_curl_el} of the surface curl, we infer that \begin{align*} \scurl_{F_{j}} (\rr_{F_{j}}) &= \scurl_{F_{j}} (\ppi^\ttau_{F_j} (\ch_h|_{K_j})) - \scurl_{F_{j}} (\ppi^\ttau_{F_j} (\ch_h|_{K_{j+1}})) \\ &= (\curl \ch_h)|_{K_{j}}|_{F_j} \cdot \nn_{F_{j}} - (\curl \ch_h)|_{K_{j+1}}|_{F_j} \cdot \nn_{F_{j}} \\ &= - \jump{\curl \ch_h}_{F_{j}} \cdot \nn_{F_{j}}. \end{align*} On the other hand, since $\jj_h^{\edge} \in \HH(\ddiv,\ome)$, we have $\jump{\jj_h^{\edge}}_{F_{j}} \cdot \nn_{F_{j}} = 0$, and therefore \begin{align*} \jump{\rr_\TT}_{F_{j}} \cdot \nn_{F_{j}} &= - \jump{\curl \ch_h}_{F_{j}} \cdot \nn_{F_{j}} = \scurl_{F_{j}} (\rr_{F_{j}}). \end{align*} Since a similar reasoning applies on the face $F_0$ if the patch is of Neumann or mixed boundary type and on the face $F_n$ if the patch is of Neumann boundary type, \eqref{eq_comp_b} is established. It remains to show that $\rr_\FF$ satisfies the edge compatibility condition~\eqref{eq_comp_c} or~\eqref{eq_comp_d} if the patch is of interior or Neumann boundary type, respectively. Let us treat the first case (the other case is treated similarly). Owing to the convention $K_{n+1}=K_1$, we infer that \begin{equation*} \sum_{j\in\{1:n\}} \rr_{F_{j}}|_\edge \cdot \ttau_\edge = \sum_{j\in\{1:n\}} (\ch_h|_{K_j} - \ch_h|_{K_{j+1}})|_\edge \cdot \ttau_\edge = 0, \end{equation*} which establishes~\eqref{eq_comp_c}. We have thus shown that $\rr_\TT$ and $\rr_\FF$ are compatible data as per Definition~\ref{definition_compatible_data}.
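The identity $\div(\curl \ch_h)=0$ invoked above is the pointwise vector-calculus identity, which can be checked numerically; the following pure-Python finite-difference sketch is purely illustrative (the field \texttt{h} is an arbitrary smooth stand-in, not the paper's $\ch_h$):

```python
# Numerical sanity check of div(curl h) = 0, the identity used to verify
# the compatibility condition div r_T = 0.  'h' is an arbitrary smooth
# stand-in field; central finite differences commute, so the discrete
# divergence of the discrete curl cancels up to rounding.
import math

def h(p):
    x, y, z = p
    return (x * y * z, x + z * z, math.sin(x) * y)

EPS = 1e-4

def partial(f, p, i, comp):
    """Central difference of component `comp` of f in direction i."""
    q1, q2 = list(p), list(p)
    q1[i] += EPS
    q2[i] -= EPS
    return (f(q1)[comp] - f(q2)[comp]) / (2 * EPS)

def curl(f, p):
    return (partial(f, p, 1, 2) - partial(f, p, 2, 1),
            partial(f, p, 2, 0) - partial(f, p, 0, 2),
            partial(f, p, 0, 1) - partial(f, p, 1, 0))

def div(f, p):
    return sum(partial(f, p, i, i) for i in range(3))

p0 = (0.3, -0.7, 1.1)
assert abs(div(lambda q: curl(h, q), p0)) < 1e-6
```

Because the discrete shift operators commute, the cancellation is exact up to floating-point rounding, mirroring the exact identity used in the proof.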
\end{proof} \subsection{Proof of Proposition~\ref{prop_stability_patch}} \label{sec:proof_stability_patch} The proof of Proposition~\ref{prop_stability_patch} is performed in two steps. First we prove that ${\boldsymbol V}_p(\TTe)$ is nonempty by providing a generic elementwise construction of a field in ${\boldsymbol V}_p(\TTe)$; this in particular implies the existence and uniqueness of all minimizers in~\eqref{eq_stab_shift}. Then we prove the inequality~\eqref{eq_stab_shift} by using one such field $\xxi_p^\star \in \boldsymbol V_p(\TTe)$. Throughout this section, if $A,B \geq 0$ are two real numbers, we employ the notation $A \lesssim B$ to say that there exists a constant $C$ that only depends on the shape-regularity parameter $\kappa_\edge$ of the patch $\TTe$ defined in~\eqref{eq_regularities}, such that $A \leq CB$. We note that in particular we have $n = |\TTe| \lesssim 1$ owing to the shape-regularity of the mesh $\TT_h$. \subsubsection{Generic elementwise construction of fields in ${\boldsymbol V}_p(\TTe)$} The generic construction of fields in ${\boldsymbol V}_p(\TTe)$ is based on a loop over the mesh elements composing the edge patch $\TTe$. This loop is enumerated by means of an index $j\in\{1:n\}$. \begin{definition}[Element spaces]\label{def_spaces_elm} For each $j\in\{1:n\}$, let $\emptyset\subseteq \FF_j \subseteq \FF_{K_j}$ be a (sub)set of the faces of $K_j$. Let $\rr_{K_j} \in \RT_p(K_j)$ with $\div \rr_{K_j} = 0$, and if $\emptyset\ne \FF_j$, let $\trr_{\FF_j}^j \in \NN_p^\ttau(\Gamma_{\FF_j})$ in the sense of~\eqref{eq_tr_K} be given data. We define \bse\begin{align} \boldsymbol V(K_j) & \eq \left \{ \vv \in \HH(\ccurl,K_j) \; \left | \; \begin{array}{l} \curl \vv = \rr_{K_j} \\ \vv|^\ttau_{\FF_j} = \trr^j_{\FF_j}, \end{array} \right . \right \}, \label{eq_VKj} \\ \boldsymbol V_p(K_j) & \eq \boldsymbol V(K_j) \cap \NN_p(K_j). 
\label{eq_VqKj} \end{align}\ese \end{definition} In what follows, we are only concerned with the cases where $\FF_j$ is either empty or composed of one or two faces of $K_j$. In this situation, the subspace ${\boldsymbol V}_p(K_j)$ is nonempty if and only if \bse \label{eq:cond_comp} \begin{alignat}{2} &\scurl_{F} (\trr_{F}^{j}) = \rr_{K_j} \cdot \nn_F&\quad&\forall F\in\FF_j,\label{eq:cond_comp1} \\ &\trr_{F_+}^{j}|_{\edge} \cdot \ttau_\edge = \trr_{F_-}^{j}|_{\edge} \cdot \ttau_\edge&\quad&\text{if $\FF_j=\{F_+,F_-\}$ with $\edge=F_+\cap F_-$}, \label{eq:cond_comp2} \end{alignat} \ese where $\nn_F$ is the unit normal orienting $F$ used in the definition of the surface curl (see~\eqref{eq_scurl_curl_el}). The second condition~\eqref{eq:cond_comp2} is relevant only if $|\FF_j|=2$. \begin{lemma}[Generic elementwise construction]\label{lem_EU} Let $\edge\in\EE_h$, let $\TTe$ be the edge patch associated with $\edge$, and let the set of faces $\FF$ be specified in Table~\ref{Tab:type_of_calF}. Let $\rr_\TT \in \RT_p(\TTe)$ and $\rr_\FF \in \NN_p^\ttau(\Gamma_\FF)$ be compatible data as per Definition~\ref{definition_compatible_data}. Define $\rr_{K_j}\eq \rr_{\TT}|_{K_j}$ for all $j\in\{1:n\}$. Then, the following inductive procedure yields a sequence of nonempty spaces $({\boldsymbol V}_p(K_j))_{j\in\{1:n\}}$ in the sense of Definition~\ref{def_spaces_elm}, as well as a sequence of fields $(\xxi_p^j)_{j\in\{1:n\}}$ such that $\xxi_p^j\in {\boldsymbol V}_p(K_j)$ for all $j\in\{1:n\}$. 
Moreover, the field $\xxi_p$ prescribed by $\xxi_p|_{K_j} \eq \xxi_p^j$ for all $j\in\{1:n\}$ belongs to the space $\boldsymbol V_p(\TTe)$ of Definition~\ref{def_spaces}: \\ \bse {\bf 1)} First element ($j=1$): Set $\FF_1 \eq \emptyset$ if the patch is of interior or Dirichlet boundary type and set $\FF_1 \eq \{F_0\}$ if the patch is of Neumann or mixed boundary type together with \begin{equation} \trr^1_{F_0} \eq \rr_{F_0}.\label{eq_F_0} \end{equation} Define the space ${\boldsymbol V}_p(K_1)$ according to~\eqref{eq_VqKj} and pick any $\xxi_p^1\in {\boldsymbol V}_p(K_1)$. \\ {\bf 2)} Middle elements ($j\in\{2:n-1\}$): Set $\FF_j\eq \{F_{j-1}\}$ together with \begin{equation} \trr^j_{F_{j-1}} \eq \ppi^\ttau_{F_{j-1}} (\xxi^{j-1}_p) + \rr_{F_{j-1}}, \label{eq_F_j} \end{equation} with $\xxi^{j-1}_p$ obtained in the previous step of the procedure. Define the space ${\boldsymbol V}_p(K_j)$ according to~\eqref{eq_VqKj} and pick any $\xxi_p^j\in {\boldsymbol V}_p(K_j)$. \\ {\bf 3)} Last element ($j=n$): Set $\FF_{n} \eq \{F_{n-1}\}$ if the patch is of Dirichlet or mixed boundary type and set $\FF_n \eq \{ F_{n-1}, F_n \}$ if the patch is of interior or Neumann boundary type and define $\trr^n_{\FF_n}$ as follows: For the four cases of the patch, \begin{equation} \trr^n_{F_{n-1}} \eq \ppi^\ttau_{F_{n-1}} (\xxi^{n-1}_p) + \rr_{F_{n-1}}, \label{eq_F_nn} \end{equation} with $\xxi^{n-1}_p$ obtained in the previous step of the procedure, and in the two cases where $\FF_n$ also contains $F_n$: \begin{alignat}{2} \trr_{F_n}^n & \eq \ppi^\ttau_{F_n} (\xxi_p^1) - \rr_{F_n} &\qquad&\text{interior type}, \label{eq_F_int}\\ \trr_{F_n}^n & \eq \rr_{F_n} &\qquad&\text{Neumann boundary type}. \label{eq_F_n} \end{alignat} Define the space ${\boldsymbol V}_p(K_n)$ according to~\eqref{eq_VqKj} and pick any $\xxi_p^n\in {\boldsymbol V}_p(K_n)$. \\ \ese \end{lemma} \begin{proof} We first show that $\xxi^j_p$ is well-defined in ${\boldsymbol V}_p(K_j)$ for all $j\in\{1:n\}$. 
We do so by verifying that $\trr^{j}_{\FF_{j}} \in \NN_p^\ttau(\Gamma_{\FF_{j}})$ (recall~\eqref{eq_tr_K}) and that the conditions~\eqref{eq:cond_comp} hold true for all $j\in\{1:n\}$. Then, we show that $\xxi_p \in \boldsymbol V_p(\TTe)$. {\bf (1)} First element ($j=1$). If the patch is of interior or Dirichlet boundary type, there is nothing to verify since $\FF_1$ is empty. If the patch is of Neumann or mixed boundary type, $\FF_1 = \{F_0\}$ and we need to verify that $\trr^1_{F_0} \in \NN_p^\ttau(\Gamma_{\{F_0\}})$ and that $\scurl_{F_0} (\trr^{1}_{F_0})=\rr_{K_1} \cdot \nn_{F_0}$, see~\eqref{eq:cond_comp1}. Since $\trr^1_{\FF_1}=\rr_{F_0}\in \NN_p^\ttau(\Gamma_{\{F_0\}})$ by assumption, the first requirement is met. The second one follows from $\rr_{K_1} \cdot \nn_{F_0} = \jump{\rr_\TT}_{F_0} \cdot \nn_{F_0} = \scurl_{F_0}(\rr_{F_0}) = \scurl_{F_0} (\trr^{1}_{F_0})$ owing to~\eqref{eq_comp_b}. {\bf (2)} Middle elements ($j\in\{2:n-1\}$). Since $\FF_j = \{F_{j-1}\}$, we need to show that $\trr^j_{F_{j-1}} \in \NN_p^\ttau(\Gamma_{\{F_{j-1}\}})$ and that $\scurl_{F_{j-1}} (\trr^{j}_{F_{j-1}})=\rr_{K_j} \cdot \nn_{F_{j-1}}$. The first requirement follows from the definition~\eqref{eq_F_j} of $\trr^j_{F_{j-1}}$. To verify the second requirement, we recall the definition~\eqref{eq_scurl_curl_el} of the surface curl and use the curl constraint from~\eqref{eq_VKj} to infer that \begin{align*} \scurl_{F_{j-1}} (\trr_{F_{j-1}}^{j}) &= \scurl_{F_{j-1}} (\ppi^\ttau_{F_{j-1}} (\xxi_p^{j-1})) + \scurl_{F_{j-1}} (\rr_{F_{j-1}}) \\ &= \left (\curl \xxi_p^{j-1}\right )|_{F_{j-1}} \cdot \nn_{F_{j-1}} + \scurl_{F_{j-1}} (\rr_{F_{j-1}}) \\ &= \rr_{K_{j-1}} \cdot \nn_{F_{j-1}} + \scurl_{F_{j-1}} (\rr_{F_{j-1}}). 
\end{align*} By virtue of assumption~\eqref{eq_comp_b}, it follows that \begin{align*} \rr_{K_{j}} \cdot \nn_{F_{j-1}} - \scurl_{F_{j-1}} (\trr_{F_{j-1}}^{j}) &= \rr_{K_{j}} \cdot \nn_{F_{j-1}} - \rr_{K_{j-1}} \cdot \nn_{F_{j-1}} - \scurl_{F_{j-1}} (\rr_{F_{j-1}}) \\ &= \jump{\rr_\TT}_{F_{j-1}} \cdot \nn_{F_{j-1}} - \scurl_{F_{j-1}} (\rr_{F_{j-1}}) = 0. \end{align*} {\bf (3)} Last element ($j=n$). We distinguish two cases. {\bf (3a)} Patch of Dirichlet or mixed boundary type. In this case, $\FF_n = \{F_{n-1}\}$ and the reasoning is identical to the case of a middle element. {\bf (3b)} Patch of interior or Neumann boundary type. In this case, $\FF_n = \{F_{n-1},F_n\}$. First, the prescriptions~\eqref{eq_F_nn}--\eqref{eq_F_int}--\eqref{eq_F_n} imply that $\trr^n_{\FF_n} \in \NN^\ttau_p(\Gamma_{\FF_n})$ in the sense of~\eqref{eq_tr_K}. It remains to show~\eqref{eq:cond_comp1}, i.e. \begin{equation} \label{eq:cond_comp_n1} \scurl_{F_{n-1}}(\trr^n_{F_{n-1}})=\rr_{K_n}\cdot\nn_{F_{n-1}}, \qquad \scurl_{F_{n}}(\trr^n_{F_{n}})=\rr_{K_n}\cdot\nn_{F_{n}}, \end{equation} and, since $\FF_n$ is composed of two faces, we also need to show the edge compatibility condition~\eqref{eq:cond_comp2}, i.e. \begin{equation} \label{eq:cond_comp_n2} \trr^n_{F_{n-1}}|_\edge \cdot \ttau_\edge = \trr^n_{F_{n}}|_\edge \cdot \ttau_\edge. \end{equation} The proof of the first identity in~\eqref{eq:cond_comp_n1} is as above, so we now detail the proof of the second identity in~\eqref{eq:cond_comp_n1} and the proof of~\eqref{eq:cond_comp_n2}. {\bf (3b-I)} Let us consider the case of a patch of interior type. 
To prove the second identity in~\eqref{eq:cond_comp_n1}, we use definition~\eqref{eq_scurl_curl_el} of the surface curl together with the curl constraint in~\eqref{eq_VKj} and infer that \begin{align*} \scurl_{F_n} (\trr_{F_n}^n) &= \scurl_{F_n} (\ppi^\ttau_{F_n}(\xxi_p^1) - \rr_{F_n}) \\ &= \curl \xxi_p^1 \cdot \nn_{F_n} - \scurl_{F_n} (\rr_{F_n}) \\ &= \rr_{K_1} \cdot \nn_{F_n} - \scurl_{F_n} (\rr_{F_n}). \end{align*} This gives \begin{align*} \rr_{K_n} \cdot \nn_{F_n} - \scurl_{F_n} (\trr_{F_n}^n) &= (\rr_{K_n} - \rr_{K_1}) \cdot \nn_{F_n} + \scurl_{F_n} (\rr_{F_n}) \\ &= -\jump{\rr_\TT}_{F_n} \cdot \nn_{F_n} + \scurl_{F_n} (\rr_{F_n}) = 0, \end{align*} where the last equality follows from~\eqref{eq_comp_b}. This proves the expected identity on the curl. Let us now prove~\eqref{eq:cond_comp_n2}. For all $j\in\{1:n-1\}$, since $\xxi_p^j \in \NN_p(K_j)$, its tangential traces satisfy the edge compatibility condition \begin{equation} \label{eq:edge_compatibility_xxi} \left . \left (\ppi^\ttau_{F_{j-1}} (\xxi_p^j)\right ) \right |_\edge \cdot \ttau_\edge = \left .\left (\ppi^\ttau_{F_{j}} (\xxi_p^j)\right ) \right |_\edge \cdot \ttau_\edge. \end{equation} Moreover, for all $j\in\{1:n-2\}$, we have $F_j\in\FF_{j+1}$, so that by~\eqref{eq_VKj} and the definition~\eqref{eq_F_j} of $\trr^{j+1}_{F_{j}}$, we have \begin{equation*} \ppi^\ttau_{F_{j}} (\xxi_p^{j+1}) = \trr^{j+1}_{F_{j}} = \ppi^\ttau_{F_{j}} (\xxi_p^j) + \rr_{F_{j}}, \end{equation*} and, therefore, using~\eqref{eq:edge_compatibility_xxi} yields \begin{equation*} \left . \left (\ppi^\ttau_{F_{j-1}} (\xxi_p^j)\right ) \right |_\edge \cdot \ttau_\edge = \left . \left (\ppi^\ttau_{F_{j}} (\xxi_p^{j+1})\right ) \right |_\edge \cdot \ttau_\edge - \rr_{F_{j}}|_\edge \cdot \ttau_\edge. \end{equation*} Summing this identity for all $j\in\{1:n-2\}$ leads to \[ \left . \left (\ppi^\ttau_{F_{0}} (\xxi_p^1)\right ) \right |_\edge \cdot \ttau_\edge = \left .
\left (\ppi^\ttau_{F_{n-2}} (\xxi_p^{n-1})\right ) \right |_\edge \cdot \ttau_\edge - \sum_{j\in\{1:n-2\}} \rr_{F_{j}}|_\edge \cdot \ttau_\edge. \] In addition, using again the edge compatibility condition~\eqref{eq:edge_compatibility_xxi} for $j=n-1$ and the definition~\eqref{eq_F_nn} of $\trr_{F_{n-1}}^n$ leads to \[ \left . \left (\ppi^\ttau_{F_{n-2}} (\xxi_p^{n-1})\right ) \right |_\edge \cdot \ttau_\edge = \trr_{F_{n-1}}^n|_\edge \cdot \ttau_\edge - \rr_{F_{n-1}}|_\edge \cdot \ttau_\edge. \] Summing the above two identities gives \begin{equation} \label{tmp_induction_edge2} \left . \left (\ppi^\ttau_{F_{0}} (\xxi_p^1)\right ) \right |_\edge \cdot \ttau_\edge = \trr_{F_{n-1}}^n|_\edge \cdot \ttau_\edge - \sum_{j\in\{1:n-1\}} \rr_{F_{j}}|_\edge \cdot \ttau_\edge. \end{equation} Since $F_0=F_n$ for a patch of interior type and $\trr_{F_n}^n = \ppi^\ttau_{F_n} (\xxi_p^1) - \rr_{F_n}$ owing to~\eqref{eq_F_int}, the identity~\eqref{tmp_induction_edge2} gives \begin{align*} \trr_{F_n}^n |_\edge\cdot \ttau_\edge &= \left . \left (\ppi^\ttau_{F_{0}} (\xxi_p^1)\right ) \right |_\edge \cdot \ttau_\edge - \left ( \rr_{F_n} |_\edge \cdot \ttau_\edge \right ) \\ &= \trr_{F_{n-1}}^n|_\edge \cdot \ttau_\edge - \sum_{j\in\{1:n-1\}} \rr_{F_{j}}|_\edge \cdot \ttau_\edge - \left ( \rr_{F_n} |_\edge \cdot \ttau_\edge \right ) \\ &= \trr^n_{F_{n-1}}|_\edge \cdot \ttau_\edge - \sum_{j\in\{1:n\}} \rr_{F_{j}}|_\edge \cdot \ttau_\edge = \trr^n_{F_{n-1}}|_\edge \cdot \ttau_\edge, \end{align*} where we used the edge compatibility condition~\eqref{eq_comp_c} satisfied by $\rr_\FF$ in the last equality. This proves~\eqref{eq:cond_comp_n2} in the interior case. {\bf (3b-N)} Let us finally consider a patch of Neumann boundary type. The second identity in~\eqref{eq:cond_comp_n1} follows directly from~\eqref{eq_comp_b} and~\eqref{eq_F_n}. Let us now prove~\eqref{eq:cond_comp_n2}. The identity~\eqref{tmp_induction_edge2} still holds true. 
Using that $(\ppi^\ttau_{F_{0}} (\xxi_p^1)) |_\edge \cdot \ttau_\edge = \rr_{F_0}|_\edge \cdot \ttau_\edge$, this identity is rewritten as \[ \trr_{F_{n-1}}^n|_\edge \cdot \ttau_\edge = \sum_{j\in\{0:n-1\}} \rr_{F_{j}}|_\edge \cdot \ttau_\edge = \rr_{F_n}|_\edge \cdot \ttau_\edge, \] where the last equality follows from the edge compatibility condition~\eqref{eq_comp_d} satisfied by $\rr_\FF$. But since $\trr_{F_n}^n = \rr_{F_n}$ owing to~\eqref{eq_F_n}, this again proves~\eqref{eq:cond_comp_n2}. {\bf (4)} It remains to show that $\xxi_p \in \boldsymbol V_p(\TTe)$ as per Definition~\ref{def_spaces}. By construction, we have $\ppi^\ttau_{F_{j}}(\xxi_p|_{K_{j+1}}) - \ppi^\ttau_{F_{j}}(\xxi_p|_{K_j}) = \rr_{F_{j}}$ for all $j\in\{1:n-1\}$, $\ppi^\ttau_{F_{n}}(\xxi_p|_{K_{1}}) - \ppi^\ttau_{F_{n}}(\xxi_p|_{K_n}) = \rr_{F_{n}}$ if the patch is of interior type, $\ppi^\ttau_{F_{0}}(\xxi_p|_{K_{1}}) = \rr_{F_{0}}$ if the patch is of Neumann or mixed boundary type, and $\ppi^\ttau_{F_{n}}(\xxi_p|_{K_{n}}) = \rr_{F_{n}}$ if the patch is of Neumann boundary type. This proves that $\ppi^\ttau_F(\jump{\xxi_p}_F) = \rr_F$ for all $F \in \FF$, i.e., $\jump{\xxi_p}^\ttau_\FF = \rr_\FF$ in the sense of Definition~\ref{definition_jumps}. \end{proof} \subsubsection{The actual proof} We are now ready to prove Proposition~\ref{prop_stability_patch}. \begin{proof}[Proof of Proposition~\ref{prop_stability_patch}] Owing to Lemma~\ref{lem_EU}, the fields \begin{equation} \label{eq_min_K} \xxi_p^{\star j} \eq \argmin{\vv_p \in \boldsymbol V_p(K_j)} \|\vv_p\|_{K_j}, \qquad j\in\{1:n\}, \end{equation} are uniquely defined in ${\boldsymbol V}_p(K_j)$, and the field $\xxi_p^{\star}$ such that $\xxi_p^{\star}|_{K_j} \eq \xxi_p^{\star j}$ for all $j\in\{1:n\}$ satisfies $\xxi_p^\star \in \boldsymbol V_p(\TTe)$.
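The reason the sweep of Lemma~\ref{lem_EU} closes up on the last face can be mimicked by a scalar toy model. In the hypothetical Python sketch below, the scalars \texttt{t[j]} stand in for the edge tangential traces of the fields $\xxi_p^j$ on an interior patch, and \texttt{r[j]} for the prescribed jumps; none of these names come from the paper:

```python
# Scalar toy model of the elementwise sweep of Lemma lem_EU on an interior
# edge patch with elements K_1..K_n and faces F_1..F_n (K_{n+1} = K_1).
# t[j] mimics the edge tangential trace on element K_{j+1}; r[j] mimics
# the prescribed jump r_{F_{j+1}} . tau_e.  Illustrative stand-ins only.

def sweep(r):
    n = len(r)
    t = [0.0]                       # first element: value chosen freely
    for j in range(1, n):           # middle/last: t_j = t_{j-1} + r_{F_{j-1}}
        t.append(t[-1] + r[j - 1])
    # realized jumps across F_1..F_{n-1}, then across the closing face F_n
    return [t[j + 1] - t[j] for j in range(n - 1)] + [t[0] - t[-1]]

r = [1.0, -2.0, 0.5, 0.5]           # compatible data: sum(r) == 0
assert sweep(r) == r                # every prescribed jump is realized
assert sum(r) == 0.0                # analogue of the edge compatibility condition
```

The first $n-1$ jumps are realized by construction; the jump across the closing face comes out right precisely because the data sum to zero, which is the scalar analogue of the edge compatibility condition~\eqref{eq_comp_c}.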
Since the minimizing sets in~\eqref{eq_stab_shift} are nonempty (they all contain $\xxi_p^{\star}$), both the discrete and the continuous minimizers are uniquely defined owing to standard convexity arguments. Let us set \begin{equation*} \vv^\star \eq \argmin{\vv \in \boldsymbol V(\TTe)} \|\vv\|_{\ome}, \qquad \vv^\star_j \eq \vv^\star|_{K_j}, \quad j\in\{1:n\}. \end{equation*} To prove Proposition~\ref{prop_stability_patch}, it is enough to show that \begin{equation} \label{eq res} \|\xxi_p^\star\|_{\ome} \lesssim \|\vv^\star\|_{\ome}. \end{equation} Owing to Proposition~\ref{prop_stability_tetrahedra} applied with $K\eq K_j$ and $\FF\eq\FF_j$ for all $j\in\{1:n\}$, we have \begin{equation} \label{eq_ineq_K} \|\xxi_p^\star\|_{K_j} \lesssim \min_{\zzeta \in \boldsymbol V(K_j)} \|\zzeta\|_{K_j}, \end{equation} where $\boldsymbol V(K_{j})$ is defined in~\eqref{eq_VKj}. Therefore, recalling that $|\TTe|\lesssim 1$, \eqref{eq res} will be proved if for all $j\in\{1:n\}$, we can construct a field $\zzeta_j \in \boldsymbol V(K_j)$ such that $\|\zzeta_j\|_{K_j} \revision{\lesssim} \|\vv^\star\|_{\ome}$. To do so, we proceed once again by induction. {\bf (1)} First element ($j=1$). Since $\vv^\star_1 \in \boldsymbol V(K_1)$, the claim is established with $\zzeta_1\eq \vv^\star_1$ which trivially satisfies $\|\zzeta_1\|_{K_1} = \|\vv^\star\|_{K_1} \leq \|\vv^\star\|_{\ome}$. {\bf (2)} Middle elements ($j\in\{2:n-1\}$). We proceed by induction. Given $\zzeta_{j-1}\in {\boldsymbol V}(K_{j-1})$ such that $\|\zzeta_{j-1}\|_{K_{j-1}} \lesssim \|\vv^\star\|_{\ome}$, let us construct a suitable $\zzeta_{j}\in {\boldsymbol V}(K_{j})$ such that $\|\zzeta_j\|_{K_j} \lesssim \|\vv^\star\|_{\ome}$. We consider the affine geometric mapping $\TTT_{j-1,j}: K_{j-1} \to K_{j}$ that leaves the three vertices $\adown$, $\aaa_{j-1}$, and $\aup$ (and consequently the face $F_{j-1}$) invariant, whereas $\TTT_{j-1,j}(\aaa_{j-2}) = \aaa_{j}$. 
We denote by $\ppsi^{\mathrm{c}}_{j-1,j} \eq \ppsi^{\mathrm{c}}_{\TTT_{j-1,j}}$ the associated Piola mapping, see Section~\ref{sec_Piola}. Let us define the function $\zzeta_{j} \in \HH(\ccurl,K_{j})$ by \begin{equation} \label{eq_zeta_j} \zzeta_{j} \eq \vv^\star_{j} - \epsilon_{j-1,j} \ppsi^{\mathrm{c}}_{j-1,j}(\xxi_p^{\star j-1} - \vv^\star_{j-1}), \end{equation} where $\epsilon_{j-1,j} \eq \sign \left (\det \JJJ_{\TTT_{j-1,j}}\right)$ \revision{(notice that here $\epsilon_{j-1,j}=-1$)}. Using the triangle inequality, the $L^2$-stability of the Piola mapping (see~\eqref{eq_stab_piola_L2}), inequality~\eqref{eq_ineq_K}, and the induction hypothesis, we have \begin{equation} \label{eq_bound} \begin{split} \|\zzeta_{j}\|_{K_j} &\leq \|\vv^\star\|_{K_j} + \|\ppsi^{\mathrm{c}}_{j-1,j}(\xxi_p^{\star j-1} - \vv^\star_{j-1})\|_{K_j} \\ &\lesssim \|\vv^\star\|_{K_j} + \|\xxi_p^\star - \vv^\star\|_{K_{j-1}} \\ &\leq \|\vv^\star\|_{K_j} + \|\xxi_p^\star\|_{K_{j-1}} + \|\vv^\star\|_{K_{j-1}}\\ &\lesssim \|\vv^\star\|_{K_j} + \|\zzeta_{j-1}\|_{K_{j-1}} + \|\vv^\star\|_{K_{j-1}} \lesssim \|\vv^\star\|_{\ome}. \end{split}\end{equation} Thus it remains to establish that $\zzeta_{j} \in \boldsymbol V(K_{j})$ in the sense of Definition~\ref{def_spaces_elm}, i.e., we need to show that $\curl \zzeta_{j}=\rr_{K_{j}}$ and $\zzeta_{j}|^\ttau_{\FF_{j}} = \trr^{j}_{\FF_{j}}$. Recalling the curl constraints in~\eqref{eq_VTe} and~\eqref{eq_VKj} which yield $\curl \xxi_p^\star = \curl \vv^\star = \rr_{\TT}$ and using~\eqref{eq_piola_commute}, we have \begin{equation} \label{tmp_curl_vv_middle} \begin{split} \curl \zzeta_{j} &= \curl \vv^\star_{j} - \epsilon_{j-1,j}\curl \ppsi^{\mathrm{c}}_{j-1,j}(\xxi_p^{\star j-1} - \vv^\star_{j-1}) \\ &= \rr_{K_{j}} - \epsilon_{j-1,j} \ppsi^{\mathrm{d}}_{j-1,j}\left (\curl (\xxi_p^{\star j-1} - \vv^\star_{j-1})\right ) = \rr_{K_{j}}, \end{split} \end{equation} which proves the expected condition on the curl of $\zzeta_j$. 
It remains to verify the weak tangential trace condition $\zzeta_{j}|^\ttau_{\FF_{j}} = \trr^{j}_{\FF_{j}}$ as per Definition~\ref{definition_partial_trace}. To this purpose, let $\pphi \in \HH^1_{\ttau,\FF_{j}^{\mathrm{c}}}(K_{j})$ and define $\tpphi$ by \begin{equation}\label{eq_thpi} \tpphi|_{K_{j}} \eq \pphi, \quad \tpphi|_{K_{j-1}} \eq (\ppsi_{j-1,j}^{\mathrm{c}})^{-1}(\pphi), \quad \tpphi|_{K_l} \eq \boldsymbol 0 \quad \forall l \in\{1:n\}\setminus\{ j-1,j\}. \end{equation} These definitions imply that $\tpphi \in \HH(\ccurl,\ome) \cap \HH^1_{\ttau,\FF^{\mathrm{c}}}(\TTe)$ (recall~\eqref{eq_Htpatch}) with \begin{equation*} \left . \left (\tpphi|_{K_{j-1}} \right ) \right |_{F_{j-1}} \times \nn_{F_{j-1}} = \left . \left (\tpphi|_{K_{j}} \right ) \right |_{F_{j-1}} \times \nn_{F_{j-1}} = \pphi|_{F_{j-1}} \times \nn_{F_{j-1}}, \end{equation*} as well as \begin{equation*} \tpphi|_F \times \nn_F = \boldsymbol 0 \quad \forall F \in \FFe \setminus \{F_{j-1}\}. \end{equation*} (Note that $\tpphi|_F\times\nn_F$ is uniquely defined by assumption.)
Recalling definition~\eqref{eq_zeta_j} of $\zzeta_j$ and that $\curl \zzeta_j = \rr_{K_{j}} = \curl\vv^\star_{j}$, see~\eqref{tmp_curl_vv_middle}, we have \begin{align*} & (\curl \zzeta_j,\pphi)_{K_{j}} - (\zzeta_j,\curl \pphi)_{K_{j}} \\ &= (\curl \vv^\star,\pphi)_{K_{j}} - (\vv^\star,\curl \pphi)_{K_{j}} + \epsilon_{j-1,j} (\ppsi^{\mathrm{c}}_{j-1,j} (\xxi_p^{\star j-1} - \vv^\star_{j-1}),\curl \pphi)_{K_{j}} \\ &= (\curl \vv^\star,\tpphi)_{K_{j}} - (\vv^\star,\curl \tpphi)_{K_{j}} + (\xxi_p^\star - \vv^\star,\curl \tpphi )_{K_{j-1}}, \end{align*} where we used the definition of $\tpphi$, properties~\eqref{eq_piola_adjoint}, \eqref{eq_piola_commute} of the Piola mapping, and the definition of $\epsilon_{j-1,j}$ to infer that \begin{align*} \epsilon_{j-1,j} (\ppsi^{\mathrm{c}}_{j-1,j} (\xxi_p^{\star j-1} - \vv^\star_{j-1}),\curl \pphi)_{K_{j}} &= \epsilon_{j-1,j}^2 (\xxi_p^\star - \vv^\star,\curl \left ((\ppsi^{\mathrm{c}}_{j-1,j})^{-1}\pphi|_{K_j} \right ))_{K_{j-1}} \\ &= (\xxi_p^\star - \vv^\star,\curl \tpphi )_{K_{j-1}}. \end{align*} Since $\curl \xxi_p^\star = \rr_{\TT} = \curl \vv^\star$ and $\tpphi = \boldsymbol 0$ outside $K_{j-1} \cup K_{j}$, this gives \begin{align} \label{tmp_middle_trace0} & (\curl \zzeta_j,\pphi)_{K_{j}} - (\zzeta_j,\curl \pphi)_{K_{j}} \\ \nonumber &= (\curl \vv^\star,\tpphi)_{K_{j}} - (\vv^\star,\curl \tpphi)_{K_{j}} + (\xxi_p^\star - \vv^\star,\curl \tpphi )_{K_{j-1}} + (\curl (\vv^\star-\xxi_p^\star),\tpphi)_{K_{j-1}} \\ \nonumber &= \sum_{K \in \TTe} \left \{ (\curl \vv^\star,\tpphi)_K - (\vv^\star,\curl \tpphi)_K \right \} - \left ( (\curl \xxi_p^\star,\tpphi)_{K_{j-1}} - (\xxi_p^\star,\curl \tpphi)_{K_{j-1}} \right ).
\end{align} Since $\vv^\star \in \boldsymbol V(\TTe)$, $\tpphi \in \HH^1_{\ttau,\FF^{\mathrm{c}}}(\TTe)$, and $\jump{\vv^\star}^\ttau_\FF=\rr_\FF$, we have from Definitions~\ref{definition_jumps} and~\ref{def_spaces} \begin{align} \label{tmp_middle_trace1} \sum_{K \in \TTe} \left \{ (\curl \vv^\star,\tpphi)_K - (\vv^\star,\curl \tpphi)_K \right \} &= \sum_{F \in \FF} (\rr_F,\tpphi \times \nn_F)_{F} \\ \nonumber &= (\rr_{F_{j-1}},\pphi \times\nn_{F_{j-1}})_{F_{j-1}}, \end{align} where in the last equality, we employed the definition~\eqref{eq_thpi} of $\tpphi$. On the other hand, since $\xxi^\star_p|_{K_{j-1}},\tpphi|_{K_{j-1}} \in \HH^1(K_{j-1})$, we can employ the pointwise definition of the trace and infer that \begin{align} \label{tmp_middle_trace2} (\curl \xxi_p^\star,\tpphi)_{K_{j-1}} - (\xxi_p^{\star},\curl \tpphi)_{K_{j-1}} &= (\ppi^\ttau_{F_{j-1}} (\xxi_p^{\star j-1}),\tpphi|_{K_{j-1}} \times \nn_{K_{j-1}})_{F_{j-1}} \\ \nonumber &= -(\ppi^\ttau_{F_{j-1}} (\xxi_p^{\star j-1}),\pphi \times \nn_{F_{j-1}})_{F_{j-1}}, \end{align} where we used that $\nn_{K_{j-1}} = -\nn_{F_{j-1}}$. Then, plugging~\eqref{tmp_middle_trace1} and~\eqref{tmp_middle_trace2} into~\eqref{tmp_middle_trace0} and employing~\eqref{eq_F_j} and $\nn_{K_{j}} = \nn_{F_{j-1}}$, we obtain \begin{align*} (\curl \zzeta_j,\pphi)_{K_{j}} - (\zzeta_j,\curl \pphi)_{K_{j}} &= (\rr_{F_{j-1}},\pphi \times\nn_{F_{j-1}})_{F_{j-1}} +(\ppi^\ttau_{F_{j-1}} (\xxi_p^{\star j-1}),\pphi \times \nn_{F_{j-1}})_{F_{j-1}} \\ &= (\rr_{F_{j-1}}+\ppi^\ttau_{F_{j-1}} (\xxi_p^{\star j-1}),\pphi \times \nn_{F_{j-1}})_{F_{j-1}} \\ &= (\trr^{j}_{F_{j-1}},\pphi \times \nn_{K_{j}})_{F_{j-1}}. \end{align*} Since $\FF_j\eq \{F_{j-1}\}$, this shows that $\zzeta_j$ satisfies the weak tangential trace condition in $\boldsymbol V(K_{j})$ by virtue of Definition~\ref{definition_partial_trace}. {\bf (3)} Last element ($j=n$). We need to distinguish the type of patch. {\bf (3a)} Patch of Dirichlet or mixed boundary type. 
In this case, we can employ the same argument as for the middle elements since $\FF_n=\{F_{n-1}\}$ is composed of only one face. {\bf (3b)} Patch of interior type. Owing to the induction hypothesis, we have $\|\zzeta_j\|_{K_j} \lesssim \|\vv^\star\|_{\ome}$ for all $j\in\{1:n-1\}$. Let us first assume that there is an even number of tetrahedra in the patch $\TTe$, as in Figure~\ref{fig_patch}, left. The case where this number is odd will be discussed below. We build a geometric mapping $\TTT_{j,n}:K_j\to K_n$ for all $j\in\{1:n-1\}$ as follows: $\TTT_{j,n}$ leaves the edge $\edge$ pointwise invariant, $\TTT_{j,n}(\aaa_{j-1})\eq \aaa_{n}$, $\TTT_{j,n}(\aaa_j)\eq \aaa_{n-1}$ if $(n-j)$ is odd, and $\TTT_{j,n}(\aaa_{j})\eq \aaa_{n}$, $\TTT_{j,n}(\aaa_{j-1})\eq \aaa_{n-1}$ if $(n-j)$ is even. Since $n$ is by assumption even, one readily sees that $\TTT_{j,n}(F_j)=\TTT_{j+1,n}(F_j)$ with $F_j=K_j\cap K_{j+1}$ for all $j\in\{1:n-2\}$. We define $\zzeta_n \in \HH(\ccurl,K_n)$ by setting \begin{equation} \label{eq_v_def} \zzeta_n \eq \vv^\star_n - \sum_{j\in\{1:n-1\}} \epsilon_{j,n} \ppsi^{\mathrm{c}}_{j,n}(\xxi_p^{\star j}-\vv^\star_j), \end{equation} where $\epsilon_{j,n} \eq \sign(\det \JJJ_{\TTT_{j,n}})$ and $\ppsi^{\mathrm{c}}_{j,n}$ is the Piola mapping associated with $\TTT_{j,n}$. Reasoning as above in~\eqref{eq_bound} shows that \begin{equation*} \|\zzeta_n\|_{K_n} \lesssim \|\vv^\star\|_{\ome}. \end{equation*} It now remains to establish that $\zzeta_{n} \in \boldsymbol V(K_{n})$ as per Definition~\ref{def_spaces_elm}, i.e. $\curl \zzeta_{n}=\rr_{K_{n}}$ and $\zzeta_{n}|^\ttau_{\FF_{n}} = \trr^{n}_{\FF_{n}}$ with $\FF_n\eq\{F_{n-1},F_n\}$. Since $\curl \xxi_p^\star = \rr_{\TT} = \curl \vv^\star$, using~\eqref{eq_piola_commute} leads to $\curl \zzeta_n = \curl \vv^\star_n = \rr_{K_n}$ as above in~\eqref{tmp_curl_vv_middle}, which proves the expected condition on the curl of $\zzeta_n$.
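The parity rule defining the mappings $\TTT_{j,n}$ can be checked combinatorially. In the hypothetical Python sketch below, each apex $\aaa_i$ is represented by its index, $T(j, n, i)$ returns the index of $\TTT_{j,n}(\aaa_i)$ for the two apexes $i \in \{j-1, j\}$ of $K_j$, and the invariant edge $\edge$ is omitted; the names are illustrative, not from the paper:

```python
# Combinatorial check of the parity rule for T_{j,n} in the interior case:
# the shared face F_j (spanned by the edge e and the apex a_j) receives the
# same image from K_j and from K_{j+1}, and the closing face F_0 = F_n is
# preserved when n is even.  Indices stand in for the apexes a_0..a_n
# (with a_0 identified with a_n).

def T(j, n, i):
    """Image index of apex a_i (i in {j-1, j}) under T_{j,n}."""
    if (n - j) % 2 == 1:             # (n-j) odd:  a_{j-1} -> a_n, a_j -> a_{n-1}
        return {j - 1: n, j: n - 1}[i]
    return {j: n, j - 1: n - 1}[i]   # (n-j) even: a_j -> a_n, a_{j-1} -> a_{n-1}

n = 6                                # even number of tetrahedra in the patch
for j in range(1, n - 1):            # T_{j,n}(F_j) = T_{j+1,n}(F_j): images of a_j agree
    assert T(j, n, j) == T(j + 1, n, j)
assert T(1, n, 0) == n               # a_0 = a_n is fixed, so F_0 = F_n maps onto itself
assert T(1, 7, 0) != 7               # for odd n the closing face is not preserved
```

The last assertion hints at why the odd case requires the auxiliary subdivision argument invoked at the end of this step.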
It remains to verify the weak tangential trace condition as per Definition~\ref{definition_partial_trace}. To this purpose, let $\pphi \in \HH_{\ttau,\FF_n^{\mathrm{c}}}^1(K_n)$ and define $\tpphi$ by \begin{equation} \label{eq_ttet} \tpphi|_{K_n} \eq \pphi, \qquad \tpphi|_{K_j} \eq \left (\ppsi_{j,n}^{\mathrm{c}}\right )^{-1}(\pphi) \quad \forall j\in\{1:n-1\}. \end{equation} As $\pphi \in \HH_{\ttau,\FF_n^{\mathrm{c}}}^1(K_n)$, its trace is defined in a strong sense, and the preservation of tangential traces by Piola mappings shows that $\tpphi \in \HH^1_{\ttau,\FF^{\mathrm{c}}}(\TTe)$ in the sense of~\eqref{eq_Htpatch}. Then, using $\curl \zzeta_n = \curl \vv^\star_n$ and~\eqref{eq_v_def}, we have \begin{align*} & (\curl \zzeta_n,\pphi)_{K_n} - (\zzeta_n,\curl \pphi)_{K_n} \\ &= (\curl \vv^\star,\tpphi)_{K_n} - (\vv^\star,\curl \tpphi)_{K_n} + \sum_{j\in\{1:n-1\}} \epsilon_{j,n} (\ppsi^{\mathrm{c}}_{j,n}(\xxi^{\star j}_p - \vv^\star_{j}),\curl \pphi)_{K_n}, \end{align*} where we used the definition of $\tpphi$ for the first two terms on the right-hand side. Moreover, using~\eqref{eq_piola_adjoint} and~\eqref{eq_piola_commute} for all $j\in\{1:n-1\}$, we have \begin{align*} \epsilon_{j,n} (\ppsi^{\mathrm{c}}_{j,n}(\xxi^{\star j}_p - \vv^\star_{j}),\curl \pphi)_{K_n} &= \epsilon_{j,n}^2 (\xxi^\star_p - \vv^\star,\curl ((\ppsi^{\mathrm{c}}_{j,n})^{-1}(\pphi|_{K_n})))_{K_j} \\ &= (\xxi^\star_p - \vv^\star,\curl \tpphi)_{K_j} \\ &= (\xxi^\star_p - \vv^\star,\curl \tpphi)_{K_j} - (\curl (\xxi^\star_p - \vv^\star),\tpphi)_{K_j}, \end{align*} since $\curl \xxi^\star_p = \rr_\TT = \curl \vv^\star$. 
It follows that \begin{align*} & (\curl \zzeta_n,\pphi)_{K_n} - (\zzeta_n,\curl \pphi)_{K_n}\\ &= \sum_{j\in\{1:n\}} \left \{ (\curl \vv^\star,\tpphi)_{K_j} - (\vv^\star,\curl \tpphi)_{K_j} \right \} - \sum_{j\in\{1:n-1\}} \left \{ (\curl \xxi_p^\star,\tpphi)_{K_j} - (\xxi_p^\star,\curl \tpphi)_{K_j} \right \} \\ &= (\curl \xxi_p^\star,\tpphi)_{K_n} - (\xxi_p^\star,\curl \tpphi)_{K_n} + \sum_{j\in\{1:n\}} \left \{ (\curl \vv^\star,\tpphi)_{K_j} - (\vv^\star,\curl \tpphi)_{K_j} \right \} \\ & \qquad - \sum_{j\in\{1:n\}} \left \{ (\curl \xxi_p^\star,\tpphi)_{K_j} - (\xxi_p^\star,\curl \tpphi)_{K_j} \right \} \\ &= (\curl \xxi_p^\star,\tpphi)_{K_n} - (\xxi_p^\star,\curl \tpphi)_{K_n} = (\curl \xxi_p^\star,\pphi)_{K_n} - (\xxi_p^\star,\curl \pphi)_{K_n}\\ &= \sum_{F\in\FF_n} (\trr^{n}_{F},\pphi \times \nn_{K_n})_{F}, \end{align*} where we employed the fact that, since both $\xxi_p^\star,\vv^\star \in \VV(\TTe)$, Definition~\ref{definition_jumps} gives \begin{align*} \sum_{j\in\{1:n\}} \left \{ (\curl \vv^\star,\tpphi)_{K_j} - (\vv^\star,\curl \tpphi)_{K_j} \right \} &= \sum_{F \in \FF} (\rr_F,\tpphi \times \nn_F)_{F} \\ &= \sum_{j\in\{1:n\}} \left \{ (\curl \xxi_p^\star,\tpphi)_{K_j} - (\xxi_p^\star,\curl \tpphi)_{K_j} \right \}. \end{align*} Thus $\zzeta_n|^\ttau_{\FF_n} = \trr^n_{\FF_n}$ in the sense of Definition~\ref{definition_partial_trace}. This establishes the weak tangential trace condition on $\zzeta_n$ when $n$ is even. If $n$ is odd, one can proceed as in~\cite[Section~6.3]{Ern_Voh_p_rob_3D_20}. For the purpose of the proof only, one tetrahedron different from $K_n$ is subdivided into two subtetrahedra as in~\cite[Lemma~B.2]{Ern_Voh_p_rob_3D_20}. Then, the above construction of $\zzeta_n$ can be applied on the newly created patch which has an even number of elements, and one verifies as above that $\zzeta_n\in {\boldsymbol V}(K_n)$. {\bf (3c)} Patch of Neumann boundary type. 
In this case, a similar argument as for a patch of interior type applies, and we omit the proof for the sake of brevity. \end{proof} \begin{remark}[Quasi-optimality of $\xxi_p^\star$] \label{rem_sweep_brok} Let $\xxi_p^\star\in{\boldsymbol V}_p(\TTe)$ be defined in the above proof (see in particular~\eqref{eq_min_K}). Since $\|\vv^\star\|_{\ome} \le \min_{\vv_p\in {\boldsymbol V}_p(\TTe)}\|\vv_p\|_{\ome}$, inequality~\eqref{eq res} implies that $\|\xxi_p^\star\|_{\ome} \lesssim \min_{\vv_p\in {\boldsymbol V}_p(\TTe)}\|\vv_p\|_{\ome}$ (note that the converse inequality is trivial with constant one). This elementwise minimizer is the one used in \revision{Theorem~\ref{thm_sweep} and in} the simplified a posteriori error estimator~\eqref{eq_definition_estimator_sweep_2}. \end{remark} \bibliographystyle{amsplain} \bibliography{biblio} \appendix \section{Poincar\'e-like inequality using the curl of divergence-free fields} \label{appendix_weber} \begin{theorem}[Constant in the Poincar\'e-like inequality~\eqref{eq_local_poincare_vectorial}] For every edge $\edge \in \EE_h$, the constant \begin{equation} \CPVe \eq \frac{1}{h_\ome} \sup_{\substack{ \vv \in \HH_{\GeD}(\ccurl,\ome) \cap \HH_{\GeN}(\ddiv,\ome) \\ \div\vv=0\\ \|\curl \vv\|_{\ome} = 1 }} \|\vv\|_{\ome} \end{equation} only depends on the shape-regularity parameter $\kappa_\edge$ of the edge patch $\TT^\edge$. \end{theorem} \begin{proof} We proceed in two steps. (1) Let us first establish a result regarding the transformation of this type of constant by a bilipschitz mapping. Consider a Lipschitz and simply connected domain $U$ with its boundary $\partial U$ partitioned into two disjoint relatively open subdomains $\Gamma$ and $\Gamma_{\mathrm{c}}$. Let $\TTT: U \to \tU$ be a bilipschitz mapping with Jacobian matrix $\JJJ$, and let $\tG \eq \TTT(\Gamma)$ and $\tG_{\mathrm{c}} \eq \TTT(\Gamma_{\mathrm{c}})$. 
Let us set \begin{equation*} C_{\rm PFW}(U,\Gamma) \eq \sup_{\substack{ \uu \in \HH_{\Gamma}(\ccurl,U) \cap \HH_{\Gamma_{\mathrm{c}}}(\ddiv,U) \\ \div \uu = 0\\ \|\curl \uu\|_U = 1 }} \|\uu\|_U, \qquad C_{\rm PFW}(\tU,\tG) \eq \sup_{\substack{ \tuu \in \HH_{\tG}(\ccurl,\tU) \cap \HH_{\tG_{\mathrm{c}}}(\ddiv,\tU) \\ \div \tuu = 0\\ \|\curl \tuu\|_{\tU} = 1 }} \|\tuu\|_{\tU}. \end{equation*} Remark that both constants are well-defined real numbers owing to \cite[Proposition 7.4]{Fer_Gil_Maxw_BC_97}. Then, we have \begin{equation} \label{eq:transfo_C_PFW} C_{\rm PFW}(U,\Gamma) \leq \|\phi\|_{L^\infty(U)}^2 C_{\rm PFW}(\tU,\tG), \end{equation} with $\phi(\xx)\eq |\det \JJJ(\xx)|^{-\frac12} \|\JJJ(\xx)\|$ for all $\xx\in U$. To show~\eqref{eq:transfo_C_PFW}, let $\uu \in \HH_{\Gamma}(\ccurl,U) \cap \HH_{\Gamma_{\mathrm{c}}}(\ddiv,U)$ be such that $\div\uu=0$. Let us set $\tuu \eq (\ppsi^{\mathrm{c}}_U)^{-1}(\uu)$ where $\ppsi^{\mathrm{c}}_U:\HH_{\tG}(\ccurl,\tU)\to \HH_{\Gamma}(\ccurl,U)$ is the covariant Piola mapping. Since $\tuu$ is not necessarily divergence-free and does not necessarily have a zero normal trace on $\tG_{\mathrm{c}}$, we define (up to a constant) the function $\tq \in H^1_{\tG}(\tU)$ such that \begin{equation*} (\grad \tq,\grad {\widetilde w})_{\tU} = (\tuu,\grad {\widetilde w})_{\tU} \qquad \forall {\widetilde w} \in H^1_{\tG}(\tU). \end{equation*} Then, the field $\tvv \eq \tuu-\grad \tq$ is in $\HH_{\tG}(\ccurl,\tU) \cap \HH_{\tG_{\mathrm{c}}}(\ddiv,\tU)$ and is divergence-free. Therefore, we have \begin{equation*} \|\tvv\|_{\tU} \leq C_{\rm PFW}(\tU,\tG) \|\curl \tvv\|_{\tU} = C_{\rm PFW}(\tU,\tG) \|\curl \tuu\|_{\tU}. \end{equation*} Let us set \begin{equation*} \vv \eq \ppsi^{\mathrm{c}}_U(\tvv) = \uu - \ppsi^{\mathrm{c}}_U(\grad \tq) = \uu - \grad q, \end{equation*} with $q\eq\psi^{\mathrm{g}}_U(\tq)\eq \tq\circ \TTT$.
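Recall that the pullback mappings employed here are defined, for all $\tq$ and $\tuu$ defined on $\tU$, by (see, e.g., \cite{Ern_Guermond_FEs_I_21})
\begin{equation*}
\psi^{\mathrm{g}}_U(\tq) \eq \tq \circ \TTT, \qquad
\ppsi^{\mathrm{c}}_U(\tuu) \eq \JJJ^{\mathsf{T}} (\tuu \circ \TTT), \qquad
\ppsi^{\mathrm{d}}_U(\tuu) \eq \det(\JJJ)\, \JJJ^{-1} (\tuu \circ \TTT),
\end{equation*}
and that they satisfy the commuting properties $\grad\, \psi^{\mathrm{g}}_U(\tq) = \ppsi^{\mathrm{c}}_U(\grad \tq)$ and $\curl\, \ppsi^{\mathrm{c}}_U(\tuu) = \ppsi^{\mathrm{d}}_U(\curl \tuu)$; in particular, $\grad q = \ppsi^{\mathrm{c}}_U(\grad \tq)$ and $\curl \uu = \ppsi^{\mathrm{d}}_U(\curl \tuu)$.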
Since $\uu \in \HH_{\Gamma_{\mathrm{c}}}(\ddiv,U)$ with $\div\uu=0$ and $q \in H^1_{\Gamma}(U)$, there holds $(\uu,\grad q)_{U} = 0$, which implies that $\|\uu\|_U \leq \|\uu-\grad q\|_U=\|\vv\|_U$. Moreover, proceeding as in the proof of \cite[Lemma 11.7]{Ern_Guermond_FEs_I_21} shows that \begin{equation*} \|\vv\|_U \leq \|\phi\|_{L^\infty(U)} \|\tvv\|_{\tU}. \end{equation*} Combining the above bounds shows that \begin{equation*} \|\uu\|_U \le \|\phi\|_{L^\infty(U)} C_{\rm PFW}(\tU,\tG) \|\curl \tuu\|_{\tU}. \end{equation*} Finally, we have $\curl \tuu=(\ppsi^{\mathrm{d}}_U)^{-1}(\curl \uu)$ where $\ppsi^{\mathrm{d}}_U$ is the contravariant Piola mapping, and proceeding as in the proof of \cite[Lemma 11.7]{Ern_Guermond_FEs_I_21} shows that \begin{equation*} \|\curl\tuu\|_{\tU} \leq \|\phi\|_{L^\infty(U)} \|\curl\uu\|_{U}. \end{equation*} Altogether, this yields \begin{equation*} \|\uu\|_U \le \|\phi\|_{L^\infty(U)}^2 C_{\rm PFW}(\tU,\tG) \|\curl\uu\|_{U}, \end{equation*} and \eqref{eq:transfo_C_PFW} follows from the definition of $C_{\rm PFW}(U,\Gamma)$. (2) The shape-regularity parameter $\kappa_\edge$ implicitly bounds from below the minimum angle between two faces of each tetrahedron in the edge patch $\TT^\edge$. Therefore, there exists an integer $n(\kappa_\edge)$ such that $|\TT^\edge| \leq n(\kappa_\edge)$. Moreover, there are only finitely many possibilities for choosing the Dirichlet faces composing $\GeD$. As a result, there exists a finite set of pairs $\{(\widehat \TT,\widehat \Gamma)\}$ (where $\widehat\TT$ is a reference edge patch and $\widehat\Gamma$ is a (possibly empty) collection of its boundary faces) such that, for every $\edge\in \EE_h$, there is a pair $(\widehat \TT,\widehat \Gamma)$ and a bilipschitz, piecewise affine mapping $\TTT_\edge: \widehat \omega \to \ome$ such that $\TTT_\edge(\widehat \Gamma) = \GeD$, where $\widehat \omega$ is the simply connected domain associated with $\widehat \TT$.
Step (1) above implies that \begin{equation*} \CPVe \leq \frac{1}{h_\ome} \max_{\widehat \xx \in \widehat \omega} \left ( \frac{\|\JJJ_\edge(\widehat \xx)\|^2}{|\det \JJJ_\edge(\widehat \xx)|} \right ) C_{\rm PFW}(\widehat \omega,\widehat \Gamma), \end{equation*} where $\JJJ_\edge$ is the Jacobian matrix of $\TTT_\edge$. Standard properties of affine mappings show that \begin{equation*} \max_{\widehat \xx \in \widehat \omega} \left ( \frac{\|\JJJ_\edge(\widehat \xx)\|^2}{|\det \JJJ_\edge(\widehat \xx)|} \right ) = \max_{K \in \TT^\edge} \frac{h_{\widehat K}^2}{\rho_K^{2}}\frac{|K|}{|\widehat K|}, \end{equation*} where $\widehat K=\TTT_\edge^{-1}(K)$ for all $K\in\TT^\edge$. Since $|K| \leq h_K^3$, we have \begin{equation*} \max_{\widehat \xx \in \widehat \omega} \left ( \frac{\|\JJJ_\edge(\widehat \xx)\|^2}{|\det \JJJ_\edge(\widehat \xx)|} \right ) \leq \left ( \max_{\widehat K \in \widehat \TT} \frac{h_{\widehat K}^2}{|\widehat K|} \right ) \kappa_\edge^2 h_{\ome}. \end{equation*} Combining the two above bounds, the factor $h_\ome$ cancels, and the resulting bound on $\CPVe$ only depends on $\kappa_\edge$ and on the finite set of reference pairs $\{(\widehat \TT,\widehat \Gamma)\}$. This concludes the proof. \end{proof} \end{document}
\begin{document} \date{\today} \subjclass{} \keywords{} \begin{abstract} We study two--generated subgroups $\langle f,g\rangle<\Homeo^+(I)$ such that $\langle f^2,g^2\rangle$ is isomorphic to Thompson's group $F$, and such that the supports of $f$ and $g$ form a chain of two intervals. We show that this class contains uncountably many isomorphism types. These include examples with nonabelian free subgroups, examples which do not admit faithful actions by $C^2$ diffeomorphisms on $1$--manifolds, examples which do not admit faithful actions by $PL$ homeomorphisms on an interval, and examples which are not finitely presented. We thus answer questions due to M. Brin. We also show that many relatively uncomplicated groups of homeomorphisms can have very complicated square roots, thus establishing the behavior of square roots of $F$ as part of a general phenomenon among subgroups of $\Homeo^+(I)$. \end{abstract} \maketitle \section{Introduction}\label{sec:intro} Thompson's group $F$ is a remarkable group of piecewise linear (abbreviated $PL$) homeomorphisms of the interval $I=[0,1]$ that occurs naturally and abundantly as a group of homeomorphisms of the real line, and that has been extensively studied since the 1970s. The group $F$ has been shown to satisfy various exotic properties, and has been proposed as a counterexample to well--known conjectures in group theory \cite{BrownGeoghegan,CFP1996,BieriStrebel16}. Among the most well--known facts about Thompson's group $F$ are the following: \begin{thm}[Brin--Squier,~\cite{BS1985}]\label{thm:brinsquier} The group $F$ satisfies no law and contains no nonabelian free subgroups. \end{thm} \begin{thm}[Ghys--Sergiescu,~\cite{GS1987}]\label{thm:ghyssergiescu} The group $F$ admits a faithful action by $C^{\infty}$ diffeomorphisms of the circle. \end{thm} \begin{thm}[Thompson, see~\cite{CFP96}]\label{thm:fp} The group $F$ is finitely presented. 
\end{thm} \begin{thm}[See~\cite{CFP96,BurilloBook,Higman,HigmanBook,Brown1985}]\label{thm:simple} The commutator subgroup of $F$ is an infinite simple group. \end{thm} In this article, we study a certain class of groups which we call \emph{square roots of Thompson's group $F$}. These are two--generated subgroups $\langle f,g\rangle <\Homeo^+(I)$ of the group of orientation-preserving homeomorphisms of the interval, which satisfy \[\langle f^2,g^2\rangle\cong \form{A,B \mid [A, (AB)^{-k}B(AB)^k]\text{ for }k\in\{1,2\}} \cong F,\] and for which the supports $\supp f$ and $\supp g$ of $f$ and $g$ respectively form a \emph{two--chain} of intervals. That is, $\supp f$ and $\supp g$ are both open intervals, and the intersection $\supp f\cap \supp g$ is a proper subinterval of both $\supp f$ and $\supp g$. Among other things, we demonstrate that (the second part of) Theorem~\ref{thm:brinsquier}, Theorem~\ref{thm:ghyssergiescu}, and Theorem \ref{thm:fp} all fail for square roots of $F$. In particular, we show that there are square roots of $F$ which contain nonabelian free subgroups, that there are square roots of $F$ which do not admit faithful actions by $C^2$ diffeomorphisms on the interval, circle, or real line, and that there are uncountably many isomorphism types of square roots of $F$. \subsection{Main results} We denote the set of isomorphism classes of square roots of $F$ by $\mathcal{S}$. The goal of this paper is to produce interesting elements of $\mathcal{S}$. Note that $\mathcal{S}$ contains $F$ for example, since squaring the generators in the standard presentation for $F$ as given in the previous subsection results in a group isomorphic to $F$. In this article we use two different finite presentations of the group $F$. 
The first presentation, which was mentioned in the previous section, is: \[\form{A,B \mid [A, (AB)^{-1}B(AB)], [A, (AB)^{-2}B(AB)^2]} \cong F.\] The second presentation is obtained by performing a Tietze transformation to produce generators $a=AB,\, b=B$, and is given by: \[\form{a,b \mid [ab^{-1}, a^{-1}ba], [ab^{-1}, a^{-2}ba^2]} \cong F.\] Next, we describe a certain subgroup $P$ of $F$, which will be needed to state and prove our results. We fix two copies of $F$: \[F_1=\form{p_1,p_2 \mid [p_1p_2^{-1}, p_1^{-1}p_2p_1],[p_1p_2^{-1}, p_1^{-2}p_2p_1^2]},\] \[F_2=\form{q_1,q_2 \mid [q_1q_2^{-1}, q_1^{-1}q_2q_1],[q_1q_2^{-1}, q_1^{-2}q_2q_1^2]}.\] We will write $P$ for the subgroup of $F_1\times F_2$ generated by $(p_1,q_2)$ and $(p_2,q_1)$. Note that the group $P$ is isomorphic to a subgroup of $F$ which itself contains an isomorphic copy of $F$ as a subgroup. The fact that $P$ is isomorphic to a subgroup of $F$ is an elementary exercise that we leave to the reader, and that $F$ is isomorphic to a subgroup of $P$ is a direct consequence of Brin's Ubiquity Theorem (see~\cite{BrinJLMS99}). We denote the free group on two generators by ${\bf F_2}$, and we call a group $H=\langle h_1,h_2\rangle$ a \emph{marked extension} of $P$ if there exists a surjective homomorphism $H\to P$, where \[h_1\mapsto (p_1,q_2),\,h_2\mapsto (p_2,q_1).\] Even though the map $H\to P$ may be suppressed from the notation, we always think of a marked extension of $P$ as equipped with such a homomorphism. A (countable) group is \emph{left orderable} if it admits a left invariant total ordering, or equivalently if it admits a faithful action by orientation preserving homeomorphisms of the real line (see Proposition 1.1.8 of~\cite{DeroinNavasRivas} or Theorem 2.2.19 of~\cite{Navas2011}). Our main result is the following: \begin{thm}\label{thm:main} Let $H$ be a marked, left orderable extension of $P$. Then there exists a square root $G\in\mathcal{S}$ of Thompson's group $F$ such that $H<G$.
\end{thm} Since the free group $\bf F_2$ is left orderable and is naturally a marked extension of $P$, we immediately obtain the following: \begin{cor}\label{cor:free} There exists a square root $G\in\mathcal{S}$ such that ${\bf F_2}<G$. \end{cor} We will show that square roots of $F$ can contain torsion--free nilpotent groups of arbitrary nilpotence degree. As a consequence of Theorem \ref{thm:main} and the Plante--Thurston Theorem~\cite{PT1976}, we have the following: \begin{cor}\label{cor:smooth} There exists a square root $G\in\mathcal{S}$ such that $G$ does not admit a faithful action by $C^2$ diffeomorphisms on a compact one--manifold or on the real line. \end{cor} Corollary \ref{cor:smooth} gives an example of a subgroup $\langle f,g\rangle<\Homeo^+(I)$ which admits no faithful $C^2$ action on the interval, the circle, or the real line, but where $\langle f^2,g^2\rangle$ admits a faithful $C^{\infty}$ action on every one--manifold (cf.~\cite{GS1987,KKL16}). Nonabelian nilpotent groups cannot act faithfully by piecewise--linear homeomorphisms on $I$ or on $S^1$: \begin{cor}\label{cor:pl} There exists a square root $G\in\mathcal{S}$ such that $G$ does not admit a faithful action by $PL$ homeomorphisms of a compact one--manifold. \end{cor} Corollary~\ref{cor:pl} stands in contrast to the standard definition of $F$, which is as a group of $PL$ homeomorphisms of the interval. Corollary~\ref{cor:pl} answers a question due to M. Brin~\cite{BrinPersonal}. In order to show that square roots of $F$ may fail to be finitely presented, we prove the following result which is similar in spirit to some of the methods in~\cite{KKL16}: \begin{thm}\label{thm:uncountable} The class $\mathcal{S}$ contains uncountably many distinct isomorphism types. \end{thm} Since there are only countably many isomorphism types of finitely presented groups, we immediately obtain the following: \begin{cor}\label{cor:fp} There exists an element $G\in\mathcal{S}$ which admits no finite presentation.
\end{cor} Analogues of Theorem~\ref{thm:simple} for square roots of $F$ are not the primary topic of this paper, though we can give the following statement which follows immediately from the discussion of commutator subgroups of chain groups in~\cite{KKL16}. Recall that the action of a group on a topological space is \emph{minimal} if every point has a dense orbit: \begin{prop}\label{prop:simple} Let $G\in\mathcal{S}$ act minimally on its support. Then the commutator subgroup $[G,G]$ is simple. \end{prop} It follows from Proposition~\ref{prop:simple} that if $G=\langle f,g\rangle\in\mathcal{S}$ and the group $\langle f^2,g^2\rangle$ is a copy of Thompson's group $F$ acting minimally on the interior of $I$, then the commutator subgroup of $G$ is simple. \subsection{Square roots of other groups}\label{subsec:other} Essential in the discussion of square roots of $F$ in this paper is the \emph{dynamical realization} of $F$ on a two--chain of intervals, which is a dynamical setup in which $F$ occurs naturally (see Subsection~\ref{subsec:2prechain}, cf. Proposition 1.1.8 of~\cite{DeroinNavasRivas}). If one abandons the dynamical framework of chains of intervals, the group theoretic diversity phenomena witnessed by Theorems~\ref{thm:main} and~\ref{thm:uncountable} become so common as to be a general feature of homeomorphism groups. To be precise, let $H=\langle h_1,\ldots,h_n\rangle<\Homeo^+(I)$ be a finitely generated subgroup. An $n$--generated subgroup $G=\langle g_1,\ldots,g_n\rangle<\Homeo^+(I)$ is called a \emph{square root of $H$} if \[H\cong\langle g_1^2,\ldots,g_n^2\rangle.\] We note that the definition of a square root of $H$ depends implicitly on a choice of generators for $H$, and is therefore really a square root of a marked group.
If $\{h_1,\ldots,h_n\}$ is a generating set for a group $H$, we will define the \emph{skew subdirect product} of $H$ to be the subgroup of $H\times H$ generated by $\{(h_i,h_i^{-1})\}_{i=1}^n$, and we will denote this group by $\yh{H}$. \begin{thm}\label{thm:Z^n} Let $\Z=\langle t_1,\ldots,t_{n+1}\mid t_1=\cdots=t_{n+1}\rangle$, and let $H<\Homeo^+(I)$ be an $n$--generated group. Then there exists a square root $G$ of $\Z$ such that $\yh H<G$. \end{thm} \begin{cor}\label{cor:uncgeneral} There exist uncountably many isomorphism types of three--generated subgroups of $\Homeo^+(I)$ such that the squares of the generators generate a cyclic group. Moreover, there exists a three--generated subgroup of $\Homeo^+(I)$ such that the squares of the generators generate a cyclic group and which contains a nonabelian free group. \end{cor} \begin{thm}\label{thm:lamplighter} Let $L=\Z\wr\Z$ be the lamplighter group, equipped with standard cyclic generators of the two factors of the wreath product. Then $L$ has uncountably many isomorphism types of (marked) square roots. \end{thm} In Subsection~\ref{subsec:generalroot}, we will define the notion of a formal square root of a finitely generated group. We will show that formal square roots of left orderable groups are again left orderable, and generally contain nonabelian free groups. \subsection{Notes and references} \subsubsection{Remarks on context} The bulk of the present work could just as well be a discussion of the very general setup of two--generated subgroups of $\Homeo^+(I)$ whose generators are supported on intervals $J_1$ and $J_2$, which in turn form a chain. It is well--known that under suitable dynamical hypotheses (cf. Subsection~\ref{sssec:relation} below), the resulting subgroup is isomorphic to $F$. The class $\mathcal{S}$ of square roots of $F$ is merely the first instance of interesting algebraic behavior for such homeomorphism groups which does not follow from the properties of Thompson's group $F$.
In particular, the results of this article apply to higher roots of $F$ beyond the square root. \subsubsection{Relation to other authors' work}\label{sssec:relation} To the authors' knowledge, it was M. Brin~\cite{BrinPersonal} who first asked what sorts of groups can occur as square roots of $F$, and in particular whether square roots of $F$ can contain nonabelian free groups, whether they can fail to be finitely presented, and whether they can fail to act by $PL$ homeomorphisms on the interval. The main results of this paper form a natural complement to the joint work of the authors with S. Kim in~\cite{KKL16}. In that paper, Kim and the authors introduced the notions of a \emph{prechain group} and of a \emph{chain group}. In the terminology of~\cite{KKL16}, square roots of $F$ form a restricted subclass of $2$--prechain groups, namely those which square to become $2$--chain groups. The class of $2$--chain groups in turn consists of just one isomorphism type (i.e. Thompson's group $F$). Chain groups with ``fast'' dynamics also fall into very few isomorphism types (namely the Higman--Thompson groups $\{F_n\}_{n\geq 2}$), and their subgroup structure has been studied independently by Bleak--Brin--Kassabov--Moore--Zaremsky~\cite{BBKMZ16} (cf.~\cite{Brown1985}). For generalities on Thompson's group $F$, the reader is directed to the classical Cannon--Floyd--Parry notes~\cite{CFP96}, as well as Burillo's book~\cite{BurilloBook}. \subsubsection{Bi-orderability} We briefly remark that many of the groups we construct in this paper, though they are manifestly orderable, will fail to be bi-orderable. Indeed, bi-orderable groups are known to have the \emph{unique root property}. That is, if $f^n=g^n$ for some elements $f$ and $g$ in a biorderable group for some $n\neq 0$, then $f=g$ (see Section 1.4.2 of~\cite{DeroinNavasRivas}). One of the themes of this paper is the non-uniqueness of roots of homeomorphisms.
Thus, the moment a given element in a left orderable group has two distinct square roots, the group cannot be bi-orderable. See for instance Corollary~\ref{cor:uncgeneral}. \section{Square roots of $F$} In this section we establish the main result, after gathering some relevant preliminary facts and terminology. \subsection{$2$--prechain groups and variations thereupon}\label{subsec:2prechain} Let $\mathcal{J}=\{J_1,J_2\}$ be two nonempty open subintervals of $\bR$. We call $\mathcal{J}$ a \emph{chain of intervals} if $J_1\cap J_2$ is a proper nonempty subinterval of $J_1$ and of $J_{2}$. See Figure~\ref{f:coint}. \begin{figure}[h!] \begin{tikzpicture}[ultra thick,scale=.5] \draw [red] (-5,0) -- (-1,0); \draw (-3,0) node [above] {\small $J_1$}; \draw [blue] (-2,.5) -- (2,.5); \draw (0,.5) node [above] {\small $J_2$}; \end{tikzpicture} \caption{A chain of two intervals.} \label{f:coint} \end{figure} If $f\in\Homeo^+(\R)$, we write $\supp f=\{x\in\R\mid f(x)\neq x\}$. Let $f$ and $g$ satisfy $\supp f= J_1$ and $\supp g=J_2$. In the terminology of~\cite{KKL16}, the group $\langle f,g\rangle$ is a $2$--prechain group. Note that, up to replacing $f$ and $g$ by their inverses, we may assume $f(x),g(x)\geq x$ for $x\in\R$. Writing $J_1=(a,c)$ and $J_2=(b,d)$ with $a<b<c<d$, we have the following basic dynamical stability result, a proof of which can be found as a special case of Lemma 3.1 of~\cite{KKL16}: \begin{lem}\label{lem:dyn criterion} Suppose $g\circ f(b)\geq c$. Then $\langle f,g\rangle\cong F$. \end{lem} Under the dynamical hypotheses of Lemma \ref{lem:dyn criterion}, the group $\langle f,g\rangle$ is a chain group. There is another configuration of intervals and homeomorphisms closely related to chain groups, which naturally gives rise to $F$, which we will need in the sequel, and which we will describe in the next subsection. 
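The defining relators of $F$ recalled in the introduction can also be tested concretely on the standard piecewise linear generators $x_0,x_1$ of $F$. The following sketch (purely illustrative, not part of any proof) realizes $x_0,x_1$ with their usual dyadic breakpoints in exact rational arithmetic; here, words are read as compositions of functions, so that $ab^{-1}$ denotes $a\circ b^{-1}$, with $a=x_0$ and $b=x_1$.

```python
from fractions import Fraction as Fr

def x0(t):
    # Standard PL generator of Thompson's group F (breakpoints 1/2, 3/4).
    if t <= Fr(1, 2):
        return t / 2
    if t <= Fr(3, 4):
        return t - Fr(1, 4)
    return 2 * t - 1

def x0_inv(t):
    if t <= Fr(1, 4):
        return 2 * t
    if t <= Fr(1, 2):
        return t + Fr(1, 4)
    return (t + 1) / 2

def x1(t):
    # Identity on [0, 1/2]; a copy of x0 rescaled into [1/2, 1].
    return t if t <= Fr(1, 2) else Fr(1, 2) + x0(2 * t - 1) / 2

def x1_inv(t):
    return t if t <= Fr(1, 2) else Fr(1, 2) + x0_inv(2 * t - 1) / 2

# Words as compositions of functions: a = x0, b = x1.  Then a b^{-1} is
# the identity on [3/4, 1], while a^{-k} b a^k is supported in
# [1 - 2^{-(k+1)}, 1], so the pairs below commute.
u  = lambda t: x0(x1_inv(t))                  # a b^{-1}
v1 = lambda t: x0_inv(x1(x0(t)))              # a^{-1} b a
v2 = lambda t: x0_inv(x0_inv(x1(x0(x0(t)))))  # a^{-2} b a^2

# The relators [a b^{-1}, a^{-k} b a^k] (k = 1, 2) are trivial iff the
# pairs commute; check this on a grid of dyadic points of [0, 1].
samples = [Fr(i, 128) for i in range(129)]
assert all(u(v(t)) == v(u(t)) for v in (v1, v2) for t in samples)
print("relators of F verified on", len(samples), "sample points")
```

The disjointness of supports visible in the comments is exactly the mechanism exploited throughout this section: commutation of the relator pairs is forced dynamically, not checked symbolically.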
\subsection{Nested generators for $F$} A natural generating set for $F$ emerges as homeomorphisms supported on a nested pair of intervals, satisfying elementary dynamical conditions. This shall be useful in our construction to follow. \begin{lem}\label{nestedL} Let $[a,b_1]$ and $ [a,b_2]$ be compact intervals in $\mathbb{R}$ such that $b_1<b_2$. Let $f,g$ be homeomorphisms satisfying: \begin{enumerate} \item The supports of $g$ and $f$ are contained in $[a,b_1]$ and $ [a,b_2]$ respectively. \item $f(x)<x$ for every $x\in (a,b_2)$. \item $f,g$ agree on the interval $[a, f(b_1)]$. \end{enumerate} Then $\langle f,g \rangle\cong F$. \end{lem} \begin{lem}\label{nestedR} Let $[a_1,b]$ and $ [a_2,b]$ be compact intervals in $\mathbb{R}$ such that $a_1<a_2$. Let $f,g$ be homeomorphisms satisfying: \begin{enumerate} \item The supports of $f$ and $g$ are contained in $[a_1,b]$ and $[a_2,b]$ respectively. \item $f(x)>x$ for every $x\in (a_1,b)$. \item $f,g$ agree on the interval $[f(a_2), b]$. \end{enumerate} Then $\langle f,g \rangle\cong F$. \end{lem} \begin{proof}[Proofs of Lemmas~\ref{nestedL} and~\ref{nestedR}] The proofs of both lemmas above follow from checking that the homeomorphisms $f$ and $g$ in each lemma satisfy the relations $$[fg^{-1},f^{-1}gf]=1\qquad \text{and} \qquad [fg^{-1}, f^{-2}g f^2]=1.$$ Since $f$ and $g$ do not commute, and since every proper quotient of $F$ is abelian (see Theorem 4.3 of~\cite{CFP96}), they generate a group isomorphic to $F$. \end{proof}
\end{lem} The second claim of the lemma is implied by the fact that $\R\cong (0,1)$. The following lemma is obvious, after the observation that $F\times F$ is left orderable, applying the Brin--Squier Theorem~\cite{BS1985}, and Brin's Ubiquity Theorem~\cite{BrinJLMS99}: \begin{lem}\label{lem:P} The group $P$ is a two--generated sub-direct product of $F\times F$. It is a left orderable group which contains no free subgroups. \end{lem} Let $\mathcal{R}<{\bf F_2}=\langle A,B\rangle$ be such that ${\bf F_2}/\mathcal{R}\cong P$. Here, the generators $A$ and $B$ of ${\bf F_2}$ get sent to the generators $(p_1,q_2)$ and $(p_2,q_1)$ respectively. Note that since $P<F\times F$, we have $\mathcal{R}\neq 1$. Let $\mathcal{R}_k$ denote the $k^{th}$ term of the derived series of $\mathcal{R}$ and let $\mathcal{R}^k$ denote the $k^{th}$ term of the lower central series of $\mathcal{R}$, with the convention $\mathcal{R}_1=\mathcal{R}^1=\mathcal{R}$. \begin{lem}\label{lem:solvable} For each $k\geq 1$, the groups \[S_k=\langle A,B\mid \mathcal{R}_k\rangle\,\,\,\,\textrm{ and } \,\,\,\, N_k=\langle A,B\mid \mathcal{R}^k\rangle\] are marked, left orderable extensions of $P$. \end{lem} \begin{proof} It is clear that for each $k$, the groups $S_k$ and $N_k$ are quotients of the free group $\bf F_2$ via the canonical map. Since $\mathcal{R}_k,\mathcal{R}^k\subset\mathcal{R}$, we have that $S_k$ and $N_k$ both surject to $P$ simply by imposing the relations in $\mathcal{R}$. It therefore suffices to show that $S_k$ and $N_k$ are both left orderable, which since $P$ is left orderable, reduces to showing that $\mathcal{R}/\mathcal{R}_k$ and $\mathcal{R}/\mathcal{R}^k$ are left orderable by Lemma \ref{lem:extension}. Since $\mathcal{R}$ is an infinitely generated free group, these quotients are merely the universal $k$--step solvable and nilpotent quotients of the infinitely generated free group. We proceed by induction on $k$. 
The case $k=1$ is trivial, and in the case $k=2$, we obtain the group $\Z^{\infty}$ which is easily seen to be left orderable. By induction, $\mathcal{R}/\mathcal{R}_k$ (resp. $\mathcal{R}/\mathcal{R}^k$) is left orderable, and $\mathcal{R}_k/\mathcal{R}_{k+1}$ (resp. $\mathcal{R}^k/\mathcal{R}^{k+1}$) is again isomorphic to $\Z^{\infty}$, so the conclusion follows by applying Lemma \ref{lem:extension} again. \end{proof} \subsection{Building square roots of $F$}\label{subsec:building} In this section we provide a recipe that produces a square root of $F$ that contains a given group $H$ as a subgroup, provided $H$ is an orderable marked extension of $P$. {\bf Step 1}: Partition $[1,2)$ into left closed, right open intervals $\{J_1,\ldots,J_{16}\}$ so that $J_i$ occurs to the left of $J_j$ in $\mathbb{R}$ whenever $i<j$. Moreover, we require that these intervals are of the same length. For ease of notation, we denote by $J_X$ for some $X\subset \{1,\ldots,16\}$ the union $\bigcup_{i\in X} J_i$. For example, $$J_{\{1,\ldots,4\}}=J_1\cup J_2\cup J_3\cup J_4.$$ {\bf Step 2}: Construct homeomorphisms $f$ and $g$ of the real line that satisfy the following: \begin{enumerate} \item $f$ and $g$ are increasing maps on $(0,2)$ and on $(1,3)$, respectively, and equal the identity outside these respective intervals. \item $f$ maps $J_i$ isometrically onto $J_{i+4}$ for $1\leq i\leq 11$. \item $g$ maps $J_i$ isometrically onto $J_{i+4}$ for $2\leq i\leq 12$. \item The map $gf^{-1}$ has two components of support, which are $$[0,1]\cup J_1,\qquad J_{\{12,\ldots,16\}}\cup [2,3].$$ \end{enumerate} It is elementary to construct homeomorphisms $f$ and $g$ that satisfy (1)--(3) above. If $f$ and $g$ satisfy (1)--(3), then it holds that $gf^{-1}$ is the identity on $J_{\{2,\ldots,11\}}$. 
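Conditions (2)--(3) alone already force this cancellation: each $J_i$ has length $1/16$, so on the translated intervals both $f$ and $g$ act as translation by $1/4$. The following sketch checks this mechanically in exact rational arithmetic; the behavior of $f$ and $g$ outside the translated intervals is left unspecified (the identity below is merely a placeholder), and compositions are read left to right, so that $gf^{-1}$ sends $x$ to $f^{-1}(g(x))$.

```python
from fractions import Fraction as Fr

# J_i = [1 + (i-1)/16, 1 + i/16) for i = 1..16 partitions [1, 2).
def index(x):
    # Index i such that x lies in J_i (assumes 1 <= x < 2).
    return int((x - 1) * 16) + 1

def g(x):
    # Condition (3): g translates J_i onto J_{i+4} for 2 <= i <= 12.
    # Elsewhere g is unspecified; the identity is only a placeholder.
    return x + Fr(1, 4) if 2 <= index(x) <= 12 else x

def f_inv(y):
    # Condition (2): f translates J_i onto J_{i+4} for 1 <= i <= 11,
    # hence f^{-1} translates J_j onto J_{j-4} for 5 <= j <= 15.
    return y - Fr(1, 4) if 5 <= index(y) <= 15 else y

# Left-to-right composition: (g f^{-1})(x) = f^{-1}(g(x)).
samples = [1 + Fr(k, 256) for k in range(16, 176)]  # dyadic points of J_{2..11}
assert all(f_inv(g(x)) == x for x in samples)
print("g f^{-1} fixes every sample point of J_{2..11}")
```

For $x\in J_i$ with $2\leq i\leq 11$, the point $g(x)=x+1/4$ lands in $J_{i+4}$ with $6\leq i+4\leq 15$, where $f^{-1}$ subtracts $1/4$ again, which is the cancellation used above.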
Hence the support of $gf^{-1}$ is contained in $$[0,1]\cup J_1\bigcup J_{\{12,\ldots,16\}}\cup [2,3].$$ To ensure that the components of support of $gf^{-1}$ are precisely as stated in $(4)$, we choose $g$ such that it is sufficiently slow on $J_1$, and $f$ so that it is sufficiently slow on $J_{\{12,\ldots,16\}}$. Note that $gf^{-1}$ is decreasing on the interior of $[0,1]\cup J_1$ and increasing on the interior of $J_{\{12,\ldots,16\}}\cup [2,3]$. {\bf Step 3}: Let $H=\langle h_1,h_2\rangle$ be a marked, left orderable extension of $P$. We identify the elements $h_1$ and $h_2$ with their dynamical realizations, both supported on the interval $J_6$. Here, by \emph{dynamical realization} of a countable left orderable group $H$, we mean an embedding of $H$ into $\Homeo^+(I)$ (see Proposition 1.1.8 of~\cite{DeroinNavasRivas} or Theorem 2.2.19 of~\cite{Navas2011}). Define a map $h_3$ as: $$h_3=g^{-1}h_2g=f^{-1}h_2f.$$ By definition, $h_3$ is supported on the interval $J_{10}$. Finally, we define homeomorphisms $$\lambda_1= h_1^{-1} h_3^{-1} f,\qquad \lambda_2=g.$$ Our goal for the rest of this section will be to demonstrate the following: \begin{prop}\label{main} The group $\langle \lambda_1,\lambda_2\rangle$ is a marked square root of $F$ which contains $H$ as a subgroup. \end{prop} The group $\langle \lambda_1,\lambda_2\rangle$ is manifestly orderable, since it is presented as a group of orientation preserving homeomorphisms of the interval. It is clear by our construction that $\lambda_1^2$ and $ \lambda_2^2$ satisfy the dynamical condition of Lemma \ref{lem:dyn criterion}, and hence generate a copy of $F$. So it suffices to show that $H<\langle \lambda_1,\lambda_2\rangle$. \begin{prop}\label{mainsub} The elements $\lambda_2\lambda_1^{-1}$ and $\lambda_1^{-1}\lambda_2$ generate an isomorphic copy of $H$.
\end{prop} \begin{proof} The element $\lambda_2\lambda_1^{-1}$ has four components of support: $$[0,1]\cup J_1,\qquad J_6,\qquad J_{10},\qquad J_{\{12,\ldots,16\}}\cup [2,3].$$ Note that: $$\lambda_2\lambda_1^{-1}\restriction J_6= h_1,\qquad \lambda_2\lambda_1^{-1}\restriction J_{10}= h_3=f^{-1}h_2f.$$ We denote by $p_2$ the following restriction: $$\lambda_2\lambda_1^{-1}\restriction [0,1]\cup J_1= gf^{-1}\restriction [0,1]\cup J_1.$$ We denote by $q_1$ the following restriction: $$\lambda_2\lambda_1^{-1}\restriction J_{\{12,\ldots,16\}}\cup [2,3]=gf^{-1}\restriction J_{\{12,\ldots,16\}}\cup [2,3].$$ The element $\lambda_1^{-1}\lambda_2$ has four components of support: $$[0,1]\cup J_{\{1,\ldots,5\}},\qquad J_{10},\qquad J_{14},\qquad J_{16}\cup [2,3].$$ Note that $$\lambda_1^{-1}\lambda_2\restriction J_{10}= f^{-1}h_1 f.$$ Denote by $p_1$ the following restriction: $$\lambda_1^{-1}\lambda_2\restriction [0,1]\cup J_{\{1,\ldots,5\}}=f^{-1}g\restriction [0,1]\cup J_{\{1,\ldots,5\}}.$$ Denote by $q_2$ the following restriction: $$\lambda_1^{-1}\lambda_2\restriction J_{\{14,\ldots,16\}}\cup [2,3].$$ First observe that the restrictions on $J_{10}$ are: $$\lambda_1^{-1}\lambda_2\restriction J_{10}=f^{-1}h_1f\restriction J_{10},\qquad \lambda_2\lambda_1^{-1}\restriction J_{10}=h_3\restriction J_{10}=f^{-1} h_2 f\restriction J_{10}.$$ It follows that this restriction to $J_{10}$ corresponds to the isomorphism $$H\to \langle\lambda_1^{-1}\lambda_2\restriction J_{10}, \lambda_2\lambda_1^{-1}\restriction J_{10} \rangle,$$ given by $$h_2\mapsto \lambda_2\lambda_1^{-1}\restriction J_{10},\qquad h_1\mapsto \lambda_1^{-1}\lambda_2\restriction J_{10},$$ since these restrictions generate a dynamical realization of $H$ on $J_{10}$. Next observe that $$\lambda_2\lambda_1^{-1}\restriction J_6=h_1\restriction J_6,\qquad \lambda_1^{-1}\lambda_2\restriction J_6=id\restriction J_6.$$ Since $H$ is a marked extension of $P$, every relation in $H$ is necessarily a product of commutators. 
It follows that the abelianization of $H$ is $\mathbb{Z}^2$. It then follows that this restriction to $J_6$ corresponds to the quotient $$H\to \langle\lambda_1^{-1}\lambda_2\restriction J_6, \lambda_2\lambda_1^{-1}\restriction J_6 \rangle,$$ given by $$h_2\mapsto \lambda_2\lambda_1^{-1}\restriction J_6,\qquad h_1\mapsto \lambda_1^{-1}\lambda_2\restriction J_6,$$ which is a homomorphism whose kernel is the normal closure of $h_1$ in $H$. Next, we observe that by construction, the maps $p_1,p_2$ and $q_1,q_2$ satisfy the dynamical conditions described in Lemmas \ref{nestedL} and \ref{nestedR} respectively. Define $$j_1=\sup(J_1),\qquad j_2=\inf(J_{14}).$$ By construction, we have $$p_1(j_1)=\lambda_1^{-1}\lambda_2(j_1)= f^{-1}g(j_1)=g (f^{-1}(j_1))< 1,$$ and $$p_1\restriction [0,1]=\lambda_1^{-1}\lambda_2\restriction [0,1]= f^{-1}\restriction [0,1]= \lambda_2\lambda_1^{-1}\restriction [0,1]=p_2\restriction [0,1].$$ It follows that: $$\langle p_1,p_2\rangle \cong \langle p_1,p_2\mid [p_1p_2^{-1},p_1^{-1}p_2p_1], [p_1p_2^{-1},p_1^{-2}p_2p_1^2]\rangle\cong F.$$ Next, observe that by construction we have $$q_1(j_2)=\lambda_2\lambda_1^{-1}(j_2)= gf^{-1}(j_2)=f^{-1}(g (j_2))> 2,$$ and $$q_2\restriction [2,3]=\lambda_1^{-1}\lambda_2\restriction [2,3]= g\restriction [2,3]= \lambda_2\lambda_1^{-1}\restriction [2,3]=q_1\restriction [2,3].$$ It follows that: $$ \langle q_1,q_2\rangle\cong \langle q_1,q_2\mid [q_1q_2^{-1},q_1^{-1}q_2q_1], [q_1q_2^{-1},q_1^{-2}q_2q_1^2]\rangle\cong F.$$ In particular, the subgroup of $ \langle p_1,p_2\rangle\times \langle q_1,q_2\rangle$ generated by the elements $(p_1,q_2)$ and $(p_2,q_1)$ is isomorphic to $P$. Now we claim that the map $$h_2\mapsto \lambda_2\lambda_1^{-1},\qquad h_1\mapsto \lambda_1^{-1}\lambda_2,$$ extends to an embedding $$\langle h_1,h_2 \rangle\to \langle \lambda_1,\lambda_2\rangle.$$ This is true on the component $J_{10}$, where the action is a dynamical realization of $H$.
So it suffices to show that each relation in $h_1$ and $h_2$ is satisfied by the restrictions of $\lambda_1^{-1}\lambda_2$ and $\lambda_2\lambda_1^{-1}$ on the other components. As we saw before, for $J_6$ this holds via the $\mathbb{Z}$--quotient obtained by killing the normal closure of the generator $h_1\in H$, which factors through the abelianization map. For the components $$[0,1]\bigcup J_{\{1,\ldots,5\}},\qquad J_{\{12,\ldots,16\}}\bigcup [2,3],$$ the restrictions act precisely as the generators of $P$; since $H$ is a marked extension of $P$, every relation of $H$ holds in $P$, whence the desired conclusion. \end{proof} \subsection{Smoothability}\label{sec:smooth} To construct square roots of $F$ which are not conjugate into $\Diff^2(I)$ or $\Diff^2(\R)$, the group of $C^2$ orientation-preserving diffeomorphisms of the interval and the real line respectively, we use the following result due to Plante--Thurston and its generalizations due to Farb--Franks: \begin{thm}[See~\cite{PT1976,FF2003}]\label{thm:PT} Let $N<\Diff^2(M)$ be a finitely generated nilpotent subgroup, where here $M$ is a compact and connected one--manifold. Then $N$ is abelian. Moreover, any nilpotent subgroup of $\Diff^2(\R)$ is metabelian. \end{thm} \begin{proof}[Proof of Corollary~\ref{cor:smooth}] Let $N_k={\bf F_2}/\mathcal{R}^k$ be as in Lemma \ref{lem:solvable}. Then $\mathcal{R}/\mathcal{R}^k<N_k$. Taking a finite subset $S$ of a free generating set for $\mathcal{R}$, we have that the image of $S$ in $N_k$ generates a nilpotent subgroup $\Gamma_S<N_k$. It is straightforward to check that $\Gamma_S$ is a retract of $\mathcal{R}/\mathcal{R}^k$, and is therefore a nonabelian nilpotent subgroup of $N_k$ whenever $k\geq 3$. Applying Theorem \ref{thm:main} and Theorem \ref{thm:PT} gives the desired conclusion in the case where $M$ is compact. Choosing a $k\gg 0$ such that $N_k$ contains a nilpotent subgroup which is not metabelian, we get the desired conclusion for $\R$ as well.
\end{proof} Corollary~\ref{cor:pl} similarly follows from Theorem~\ref{thm:main} and Theorem 4.1 of~\cite{FF2003}. \section{Uncountability of $\mathcal{S}$ and infinitely presented examples} In this section, we prove Theorem \ref{thm:uncountable}. For this, we retain the notation from the previous discussion. \subsection{Sources of uncountability} A construction of P. Hall (sometimes attributed to B. Neumann) establishing the existence of uncountably many distinct isomorphism classes of two--generated groups, as outlined by de la Harpe in part III.C.40 of~\cite{MR1786869}, has the advantage that the resulting groups are all left orderable, as observed in~\cite{KKL16}. We summarize the relevant conclusions here: \begin{prop}\label{prop:neumann} There exists an uncountable class $\mathcal{N}$ of pairwise non--isomorphic groups such that if $N\in\mathcal{N}$ then $N$ is two--generated, left orderable, and $N^{ab}=\Z^2$. In particular, $N$ can be realized as a subgroup of $\Homeo^+(\R)$. \end{prop} The reader will also find groups in the class $\mathcal{N}$ described explicitly below in the proof of Corollary~\ref{cor:uncgeneral}. \subsection{Equations over $\Homeo^+(\R)$} In order to prove Theorem~\ref{thm:uncountable}, we will construct an explicit orderable marked extension of $P$ which contains a given element of $\mathcal{N}$ as a subgroup. To do this, we will need to solve equations over $\Homeo^+(\R)$. Let $\{f_1,\ldots,f_k,g\}\subset\Homeo^+(\R)$ be given, and let $w\in \bf F_n$ be a reduced word in the free group on $n$ fixed generators, where $k<n$. An \emph{equation} over $\Homeo^+(\R)$ is an expression of the form \[w(f_1,\ldots,f_k,x_1,\ldots,x_{n-k})=g.\] A tuple $\{y_1,\ldots,y_{n-k}\}\subset\Homeo^+(\R)$ is a \emph{solution} to the equation if this expression becomes an equality after substituting $y_i$ for $x_i$ for each $i$, and interpreting the expression in $\Homeo^+(\R)$. We will restrict our attention to the case where $n=2$. 
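To fix ideas, here is a toy instance (our illustration, not an example from the text): take $k=1$, $n=2$, and the word $w=x_1f_1x_1^{-1}$, so that the equation asks for a conjugator. For $f_1(x)=x+1$ and $g(x)=x+2$, the homeomorphism $y(x)=2x$ is a solution, which is easily checked numerically:

```python
# Toy equation over Homeo^+(R): solve  x f x^{-1} = g  for x.
# With f(x) = x + 1 and g(x) = x + 2, the conjugator y(x) = 2x works,
# since y(f(y^{-1}(x))) = 2(x/2 + 1) = x + 2.  (Illustrative only.)
f = lambda x: x + 1.0
g = lambda x: x + 2.0
y = lambda x: 2.0 * x
y_inv = lambda x: x / 2.0

def word(x_pt):
    """Evaluate the word x f x^{-1} at the candidate solution y."""
    return y(f(y_inv(x_pt)))

for t in [-3.0, 0.0, 0.5, 7.25]:
    assert abs(word(t) - g(t)) < 1e-12
```

The same bookkeeping, with a longer word $w$, is what the lemma below requires.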
Even here, equations may not admit solutions. A trivial example can be given by taking $f\neq g$ and setting $w$ to be the first free generator. A slightly less trivial example can be given by taking $f$ to be fixed point free, taking $g$ to have at least one fixed point, and setting $w$ to be a conjugate of the first free generator. We will concern ourselves with a particular commutator word $w$ with free generators $s$ and $t$, so that under the map ${\bf F_2}\to P$ given by $s\mapsto (p_1,q_2)$ and $t\mapsto (p_2,q_1)$, the element $w$ lies in the kernel. The following lemma is key to proving Theorem~\ref{thm:uncountable}: \begin{lem}\label{lem:equation} Fix a group $N\in \mathcal{N}$ and let $\tau$ be the map $\tau(x)=x+1$. There is a homeomorphism $\kappa\in \Homeo^+(\R)$ and a nontrivial commutator word $w\in \ker\{{\bf F_2}\to P\}$ such that: \begin{enumerate} \item The group $\langle \kappa,\tau \rangle$ contains $N$ as a subgroup. \item The equation $w(\tau,x)=\kappa$ admits a solution $y\in\Homeo^+(\R)$. \end{enumerate} \end{lem} We first show how Lemma~\ref{lem:equation} implies Theorem~\ref{thm:uncountable}: \begin{proof}[Proof of Theorem~\ref{thm:uncountable}] We recall some of the notation and the construction in Subsection~\ref{subsec:building}. We will use informal language below, since we already have a precise description in that subsection. Given any $h_1,h_2\in \Homeo^+(I)$, we can build a square root $G\in\mathcal{S}$ generated by $\lambda_1,\lambda_2$ such that the group $H=\langle \lambda_1^{-1}\lambda_2,\lambda_2\lambda_1^{-1}\rangle$ satisfies the following. \begin{enumerate} \item $H$ acts as a dynamical realization of $P$ on $$([0,1]\cup J_{\{1,...,5\}})\bigcup (J_{\{12,...,16\}}\cup [2,3])$$ \item The group $\langle \lambda_2\lambda_1^{-1}\rangle$ acts faithfully as $\mathbb{Z}$ on the interval $J_6$ and the element $ \lambda_1^{-1}\lambda_2$ acts trivially on the interval $J_6$. 
\item The element $\lambda_1^{-1}\lambda_2$ acts as $h_1$ on $J_{10}$ and the element $\lambda_2\lambda_1^{-1}$ acts as $h_2$ on $J_{10}$. \item The action of $H$ outside the above intervals is trivial. \end{enumerate} Let $\tau,\kappa$ and $y$ be the homeomorphisms of the real line from Lemma \ref{lem:equation}. For the rest of the proof, we fix dynamical realizations of $\tau,\kappa,y$ on $J_{10}$, obtained by conjugating by a homeomorphism from $\mathbb{R}$ to the interior of $J_{10}$. We shall henceforth denote by $\tau,\kappa,y$ these homeomorphisms supported on $J_{10}$. We take as input $h_1=\tau$ and $h_2=y$ to produce a square root $G$ of $F$. Consider the subgroup $K$ of $H$ generated by $$k_1=\lambda_1^{-1}\lambda_2,\qquad k_2=w(\lambda_1^{-1}\lambda_2,\lambda_2\lambda_1^{-1}).$$ We check the following: \begin{enumerate} \item $k_1\restriction J_{10}=\tau$ and $k_2\restriction J_{10}=\kappa$. \item $k_2$ acts trivially outside $J_{10}$ since $w(s,t)$ represents the identity in $P$ under the map $s\mapsto (p_1,q_2)$ and $t\mapsto (p_2,q_1)$. \item Any commutator acts trivially on $J_6$, since the action of $H$ on $J_6$ is abelian. \item $k_1$ acts trivially on $J_6$ and as $\mathbb{Z}$ on $$([0,1]\cup J_{\{1,...,5\}})\bigcup (J_{\{12,...,16\}}\cup [2,3])$$ \end{enumerate} By our assumption, $N<\langle k_1,k_2\rangle\restriction J_{10}$. We claim that in fact, $N<\langle k_1,k_2\rangle$. This follows from the fact that the relations in $N$ are elements of the commutator subgroup of the free group, and the fact that $\langle k_1, k_2\rangle$ acts as $\mathbb{Z}$ outside $J_{10}$. Therefore $N<G$ where $G$ is the corresponding square root of $F$. We thus obtain that if $N\in\mathcal{N}$ is given, then there is a square root $G_N\in\mathcal{S}$ which contains $N$ as a subgroup. 
Since the class $\mathcal{N}$ contains uncountably many different isomorphism types and since any element of $\mathcal{S}$ is two--generated and hence countable, the class $\{G_N\mid N\in\mathcal{N}\}\subset\mathcal{S}$ consists of uncountably many different isomorphism types. \end{proof} \begin{proof}[Proof of Lemma~\ref{lem:equation}] We shall use the commutator word $$w(s,t)=[w_1(s,t), w_2(s,t)],$$ where $$w_1=[st^{-1},s^{-2} t s^2],\qquad w_2=t [st^{-1}, t^{-1} s^{-1} t] t^{-1}.$$ It is straightforward to check that for the map ${\bf F_2}\to P$ given by $s\mapsto (p_1,q_2)$ and $t\mapsto (p_2,q_1)$, the element $w$ lies in the kernel. Let $\phi,\psi\in\Homeo^+(I)$ be given generators of $N$. We first choose homeomorphisms $\mu,\nu\in\Homeo^+(I)$ such that $\psi=[\mu,\nu^{-1}]$ and homeomorphisms $\chi,\xi\in\Homeo^+(I)$ such that $\phi=[\xi,\chi^{-1}]$. Such choices are possible, since every element of $\Homeo^+(I)$ is a commutator (see Theorem 2.65 of~\cite{Calegari2007}, for instance). Identify $I$ with the unit interval $[0,1]\subset\R$. Recall that $\tau$ is translation by $1$ on $\R$. We set $\kappa\in\Homeo^+(\R)$ to be $$\kappa=(\tau^{-2}\psi\tau^2) (\tau^{-102}\phi\tau^{102}).$$ Intuitively, $\kappa$ acts by $\psi$ on the interval $[2,3]$, by $\phi$ on the interval $[102,103]$, and by the identity otherwise. We now verify that $\kappa$ witnesses the conditions of the lemma. We set \[y=(\tau^{-1}\mu\tau^{1})(\tau^{-2}\nu\tau^{2}) (\tau^{-101}\chi\tau^{101}) (\tau^{-102}\xi\tau^{102}).\] Intuitively, the homeomorphism $y$ acts by $\mu$ on $[1,2]$, by $\nu$ on $[2,3]$, by $\chi$ on $[101,102]$, and by $\xi$ on $[102,103]$. We check that $y$ is a solution to the equation. We proceed by analysing the two inner commutators separately, and then considering the outer commutator. Consider the commutator $[\tau y^{-1},\tau^{-2}y\tau^2]$. 
First note that since $y^{-1}$ has disjoint support from $\tau^{-2} y \tau^2$, they commute, and hence $$[\tau y^{-1},\tau^{-2}y\tau^2]=[\tau,\tau^{-2}y\tau^2].$$ We can now easily check that the action of the resulting homeomorphism is as follows: \begin{enumerate} \item It acts by $\mu$ on $[2,3]$, by $\nu\mu^{-1}$ on $[3,4]$, and by $\nu^{-1}$ on $[4,5]$. \item It acts by $\chi$ on $[102,103]$, by $\xi\chi^{-1}$ on $[103,104]$, and by $\xi^{-1}$ on $[104,105]$. \end{enumerate} We denote this homeomorphism by $\alpha$. Next, consider the commutator $$[\tau y^{-1}, y^{-1} \tau^{-1} y].$$ First, note that $$[\tau y^{-1},y^{-1}\tau^{-1}y]=[\tau,y^{-2}][\tau^{-1},y^{-1}].$$ It is straightforward to check that the homeomorphism resulting from the product of these commutators is as follows: \begin{enumerate} \item It acts by $\mu^{-2}$ on $[0,1]$, by $\nu^{-2}\mu^3$ on $[1,2]$, by $\nu^2\mu^{-1}\nu$ on $[2,3]$, and by $\nu^{-1}$ on $[3,4]$. \item It acts by $\chi^{-2}$ on $[100,101]$, by $\xi^{-2}\chi^3$ on $[101,102]$, by $\xi^2\chi^{-1}\xi$ on $[102,103]$, and by $\xi^{-1}$ on $[103,104]$. \end{enumerate} We denote by $\beta$ the homeomorphism $y [\tau y^{-1}, y^{-1} \tau^{-1} y] y^{-1}$, corresponding to $w_2(\tau,y)$. Finally, we consider the homeomorphism $[\alpha,\beta]$. Observe that the supports of $\alpha$ and $\beta$ intersect in the intervals $[2,3]$ and $[102,103]$. Since $\psi=[\mu,\nu^{-1}]$, we see that $[\alpha,\beta]$ acts by $\psi$ on $[2,3]$. Similarly, since $\phi=[\chi,\xi^{-1}]$, we have that $[\alpha,\beta]$ acts by $\phi$ on $[102,103]$. It follows that $[\alpha,\beta]$ agrees with $\kappa$, whence $y$ is a solution to the equation as claimed. Finally, we show that $N<\langle \kappa, \tau\rangle$. Indeed, the group generated by $\tau^{-100}\kappa \tau^{100}$ and $\kappa$ acts as $N$ on $[102,103]$ and as $\mathbb{Z}$ outside this interval. Since the relations in $N$ are elements of the commutator subgroup of the free group, it follows that this group is isomorphic to $N$. 
\end{proof} \section{General square root phenomena}\label{sec:generalroot} In this section, we pass to the completely general setup of finitely generated subgroups of $\Homeo^+(I)$ and address the results in Subsection~\ref{subsec:other}. \subsection{Roots of homeomorphisms} We begin with a completely general construction in $\Homeo^+(I)$ for producing roots of homeomorphisms. The following is a well--known fact, whose proof we recall for the convenience of the reader: \begin{lem}\label{lem:root} Let $f\in\Homeo^+(I)$. Then for all $n\in\bN$, there exists an element $g=g_n\in\Homeo^+(I)$ such that $g^n=f$. Moreover, there are uncountably many possible choices of such a map $g$. \end{lem} \begin{proof} By considering the components of the support of $f$ separately, we may reduce to the case where $f$ has no fixed points in the interval $(0,1)$. In this case, $f$ is topologically conjugate to the homeomorphism of $\R\cup\{\pm\infty\}$ given by $x\mapsto x+1$. We now build an $n^{th}$ root of $f$ defined on all of $\R$ in the following manner. First, we choose arbitrary orientation-preserving homeomorphisms \[h_m:[\frac{m}{n},\frac{m+1}{n}]\to [\frac{m+1}{n},\frac{m+2}{n}]\] for $0\leq m\leq n-2$. Next, we inductively define homeomorphisms \[h_m:[\frac{m}{n},\frac{m+1}{n}]\to [\frac{m+1}{n},\frac{m+2}{n}]\] for all $m\in \Z$, such that \[h_{k+(n-1)}\circ \cdots \circ h_{k}=x+1\] for each $k\in \mathbb{Z}$. It is clear then that the homeomorphisms $h_m$ piece together to give a homeomorphism $g$ of $\R\cup\{\pm\infty\}$, whose $n^{th}$ power is translation by one. Moreover, the arbitrariness of the choices made guarantees that there are uncountably many choices for $g$. \end{proof} \subsection{Free groups} Classical results from combinatorial and geometric group theory show that there exist two--generated groups $\langle a,b\rangle$ which are not free, but such that $\langle a^2,b^2\rangle$ is free. 
Moreover, one can arrange for these groups to be left orderable, and hence to be realized as subgroups of $\Homeo^+(I)$. For instance, we take the braid group on three strands \[B_3=\langle a,b\mid aba=bab\rangle.\] All braid groups are left orderable (in fact the reader may consult~\cite{DDRW08} as a book dedicated entirely to this subject), and it is a standard fact that the squares of the standard braids generate a free group (see Chapter 3, Section 5 of~\cite{FM2012}). \subsection{The lamplighter group} Using square roots of $F$, we can produce many square roots of the lamplighter group $L=\Z\wr\Z$. Recall that \[\Z\wr\Z\cong\Z\ltimes \big(\bigoplus_{i\in\Z}\Z_i\big),\] where the natural action of $\Z$ is by translating the index $\Z_i\mapsto\Z_{i+1}$. The group $L$ is naturally realized as a subgroup of $\Homeo^+(\R)<\Homeo^+(I)$ as follows. We choose an arbitrary homeomorphism $\psi$ such that $\supp\psi=(0,1)\subset\R$, and then we consider the group generated by $\psi$ and $\tau$, where as before $\tau(x)=x+1$. It is clear that $\langle\psi,\tau\rangle\cong L$. The following result clearly implies Theorem~\ref{thm:lamplighter}, in light of Theorem~\ref{thm:uncountable}: \begin{thm}\label{thm:lamproot} Let $G$ be a left orderable marked extension of $P$. Then there exists a square root of $L$ containing an isomorphic copy of $G$. \end{thm} \begin{proof}[Sketch of proof] Let $T\in\Homeo^+(\R)$ be given by $T(x)=x+1/2$. Note that the intervals $(0,1)$ and $T((0,1))$ together form a chain of intervals. Let $G$ be a given left orderable marked extension of $P$, with distinguished generators $g_1$ and $g_2$. Let $\psi$ be a homeomorphism supported on $(0,1)$ satisfying the following conditions: \begin{enumerate} \item $\psi(1/2)=1/2$. \item The group $\langle\psi,T\psi T^{-1}\rangle<\Homeo^+(\R)$ contains a copy of $G$. 
\end{enumerate} It is easy to see that such a $\psi$ exists, since besides the two conditions above it is otherwise arbitrary, and its action on the two halves of $(0,1)$ can be chosen independently. Thus, one may arrange that the action on $(0,1/2)$ recovers the generator $g_1$ and the action on $(1/2,1)$ recovers the generator $g_2$. It is clear that squaring $\psi$ and $T$ gives rise to a group isomorphic to the lamplighter group. If one would like the copy of $G$ to lie in a square root of $F$, one can further compose $\psi$ with a suitably chosen increasing homeomorphism of $(0,1)$. The reader should compare with Subsection~\ref{subsec:building} for the details of the latter construction. \end{proof} \subsection{Square roots of $\Z$} In this section, we give a recipe for producing many $n$--generated groups of homeomorphisms of the interval, so that the squares of the generators generate a cyclic group. \begin{proof}[Proof of Theorem~\ref{thm:Z^n}] Let $\tau_1,\ldots,\tau_{n+1}$ be $n+1$ copies of the translation $\tau(x)=x+1$ viewed as a homeomorphism of $\R$, and let \[\langle h_1,\ldots,h_n\rangle=H<\Homeo^+(I)\] be an arbitrary $n$--generated subgroup. We set $T_{n+1}(x)=x+1/2$. Lemma~\ref{lem:root} constructs all possible square roots of $\tau$, and we follow the construction given there. We first scale down $H$ to be a group of homeomorphisms of $[0,1/2]$, and we abuse notation and label the generators of $H$ by $\{h_1,\ldots,h_n\}$. We now define $T_i$ to be the homeomorphism $T_{n+1}\circ h_i$ on $[0,1/2]$. The requirement $T_i^2=\tau$ determines the values of $T_i$ on the rest of $\R$. Now let $S_i=T_{n+1}^{-1}\circ T_i$ for $1\leq i\leq n$. Observe that $S_i$ acts by $h_i$ on each interval of the form $[k,k+1/2]$ and by $h_i^{-1}$ on each interval of the form $[k-1/2,k]$, where $k\in\Z$. It is clear then that $\yh H\cong \langle S_1,\ldots,S_n\rangle<\langle T_1,\ldots,T_{n+1}\rangle$. 
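The construction just described can be checked numerically. The following sketch (our illustration; the specific generator $h(x)=2x^2$ on $[0,1/2]$ is a hypothetical choice) builds $T_1$ with $T_1^2=\tau$ for $n=1$ and verifies the stated behavior of $S_1=T_{n+1}^{-1}\circ T_1$:

```python
import math

def h(x):
    """A hypothetical generator: a homeomorphism of [0, 1/2] fixing 0 and 1/2."""
    return 2.0 * x * x

def h_inv(x):
    return math.sqrt(x / 2.0)

def T1(x):
    """A square root of tau(x) = x + 1 built from h as in the proof."""
    k = math.floor(x)
    r = x - k
    if r < 0.5:
        return k + 0.5 + h(r)           # T_1 = T o h on [k, k + 1/2]
    return k + 1.0 + h_inv(r - 0.5)     # forced by the requirement T_1^2 = tau

def S1(x):
    """S_1 = T^{-1} o T_1, where T(x) = x + 1/2."""
    return T1(x) - 0.5

# T_1 squares to the unit translation; S_1 acts by h on [k, k + 1/2]
# and by a conjugate of h^{-1} on [k + 1/2, k + 1]:
for t in [-1.3, 0.0, 0.2, 0.5, 0.75, 2.1]:
    assert abs(T1(T1(t)) - (t + 1.0)) < 1e-12
assert abs(S1(0.25) - h(0.25)) < 1e-12
```

The branch for $r\geq 1/2$ is exactly the inductive determination of the remaining pieces $h_m$ in Lemma~\ref{lem:root}.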
\end{proof} \begin{proof}[Proof of Corollary~\ref{cor:uncgeneral}] First, note that if $H$ is free then the skew subdirect product $\yh H$ is also free, which establishes the second part of the corollary. For the first part, we perform a mild modification of the Hall--Neumann groups as discussed in Proposition~\ref{prop:neumann} (see~\cite{KKL16} for a detailed discussion of these groups). All the Hall--Neumann groups are quotients of a single two--generated group $\Gamma=\langle t,s_0\rangle$ which is left orderable. The elements of $\mathcal{N}$ are given as quotients of $\Gamma$ by certain central normal subgroups $N_X<\Gamma$. We recall the definition of $\Gamma$ for the convenience of the reader, following III.C.40 of de la Harpe's book~\cite{MR1786869} (cf. Lemma 5.1 of~\cite{KKL16}). We begin with a set $S=\{s_i\}_{i\in\Z}$. Then, we define \[R=\{[[s_i,s_j],s_k]=1\}_{i,j,k\in\Z}\cup \{[s_i,s_j]=[s_{i+k},s_{j+k}]\}_{i,j,k\in\Z}.\] The group $\Gamma_0$ is defined by $\langle S\mid R\rangle$, and $\Gamma$ is defined as a semidirect product of $\Z$ with $\Gamma_0$, where the generator $t$ of the $\Z$--factor acts by $t^{-1}s_it=s_{i+1}$. One sets $u_i=[s_0,s_i]$, and if $X\subset \Z\setminus \{0\}$, we write $N_X=\langle \{u_i\}_{i\in X}\rangle$. Note that the group $\Gamma$ is generated by $t$ and $s_0$. It is straightforward to check that the map given by $t\mapsto t^{-1}$ and $s_0\mapsto s_0^{-1}$ extends to a well--defined automorphism of $\Gamma$, whence $\yh\Gamma\cong\Gamma$. Moreover, the subgroups $N_X$ are all stable under this automorphism of $\Gamma$. In particular, it follows that if $N\in\mathcal{N}$ is one of the Hall--Neumann groups, then $N\cong\yh N$. The first claim of the corollary follows from Theorem~\ref{thm:Z^n}. 
\end{proof} \subsection{General groups}\label{subsec:generalroot} For a general finitely generated group $H=\langle x_1,\ldots,x_n\mid R\rangle$, one can \emph{formally take the square root} of $H$ by setting \[G=\langle y_1,\ldots,y_n,x_1,\ldots,x_n\mid R, x_1=y_1^2,\ldots,x_n=y_n^2\rangle.\] Note that this definition depends on the presentation of $H$ which is given. If $H$ is given as a free group with no relations then $G$ will be free of the same rank. However, if $H$ is not freely presented then $G$ can be very complicated. We will call a presentation for a group $H$ \emph{reduced} if $x_i$ is nontrivial in $H$ for each $i$. \begin{thm}\label{thm:formalorder} Let $H$ be a left orderable finitely generated group with a reduced presentation. Then the formal square root $G$ of $H$ is left orderable. \end{thm} \begin{proof} If $H=\langle x_1,\ldots,x_n\mid R\rangle$, set \[K=\langle y,x_1,\ldots,x_n\mid R,x_1=y^2\rangle.\] If we can prove that $K$ is left orderable then the result will follow by induction on $n$. To this end, note that $K$ admits a description as an amalgamated product via \[K=\Z*_{2\Z=\langle x_1\rangle}H.\] Since we can order $\Z$ either positively or negatively, we may assume that the isomorphism $2\Z\cong \langle x_1\rangle$ is order preserving. Then, a result of Bludov--Glass~\cite{BludovGlass} (cf. Bergman~\cite{Bergman90}) implies that the corresponding amalgamated product is again orderable. \end{proof} Note that the assumption that the presentation for $H$ in Theorem~\ref{thm:formalorder} is reduced was essential, since otherwise the formal square root would contain torsion. Moreover, we need not assume that $H$ be finitely generated in Theorem~\ref{thm:formalorder}, and this hypothesis could be replaced by countable generation. We include this hypothesis since nearly all groups under consideration in this paper are finitely generated. 
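For a concrete illustration (our example, not one appearing in the text), take the reduced presentation $H=\Z^2=\langle x_1,x_2\mid [x_1,x_2]\rangle$. Eliminating the generators $x_i=y_i^2$ from the formal square root gives

```latex
G=\langle y_1,y_2,x_1,x_2\mid [x_1,x_2],\ x_1=y_1^2,\ x_2=y_2^2\rangle
  \;\cong\; \langle y_1,y_2\mid [y_1^2,y_2^2]\rangle,
```

which is left orderable by Theorem~\ref{thm:formalorder}, even though only the squares of the generators are required to commute.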
Finally, we show that formal square roots generally contain nonabelian free groups, so that free subgroups are in some precise sense a general phenomenon in square roots of groups of homeomorphisms: \begin{thm}\label{thm:generalfree} Let $H=\langle x_1,\ldots,x_n\mid R\rangle$ be a reduced presentation for a non--cyclic finitely generated left orderable group, and let $K=\langle y,x_1,\ldots,x_n\mid R,x_1=y^2\rangle$. Then $K$ contains a nonabelian free group. \end{thm} Thus, Theorem~\ref{thm:generalfree} implies that the formal square root of a non--cyclic group always contains nonabelian free groups. \begin{proof}[Proof of Theorem~\ref{thm:generalfree}] The result follows from general Bass--Serre theory. One can construct free subgroups explicitly using the standard theory of normal forms for amalgamated products (see~\cite{Trees,LS2001} for general introductions to combinatorial group theory and in particular Theorem 2.6 in \cite{LS2001} for the normal form theorem for amalgamated products). To do this, let $z\in H\setminus\langle x_1\rangle$, which exists since $H$ is assumed not to be cyclic. Note that $z$ has infinite order since $H$ is left orderable. Consider the group $\langle z,yzy^{-1}\rangle$. An arbitrary word in these generators will be of the form \[z^{n_1}yz^{m_1}y^{-1}\cdots z^{n_k}yz^{m_k}y^{-1},\] where all these exponents are nonzero except possibly $n_1$ and $m_k$. This word cannot collapse to the identity since it is in normal form. It follows that the group $\langle z,yzy^{-1}\rangle$ is free. \end{proof} M. Kassabov has pointed out to the authors that if $H$ is given a presentation with at least three generators then the formal square root $G$ of $H$ surjects onto \[\Z/2\Z*\Z/2\Z*\Z/2\Z,\] which contains a nonabelian free group. Finally, we remark that there appears to be little general compatibility between formal square roots of groups and ``dynamical" square roots of groups, such as in our discussion of square roots of $F$ in this paper. 
That is, let $G=\langle f,g\rangle\in\mathcal{S}$ be a square root of $F$, so that the supports of $f$ and $g$ form a two--chain. Then it is never the case that $G$ is the formal square root of $F$. Indeed, by Theorem~\ref{thm:generalfree}, we would have that $g$ and $h=f^{-1}gf$ generate a free group. This cannot happen, since there is an endpoint $x$ of $\supp g$ which is fixed by both $g$ and $h$, and where the germs of these two homeomorphisms at $x$ agree. By conjugating $g$ or $h$ suitably, one can obtain a homeomorphism $k$ such that $\supp k$ is contained in a neighborhood of $x$ on which $g$ and $h$ agree. But then $g^{-1}kg=h^{-1}kh$, violating the fact that $\langle g,h\rangle$ is free. If on the other hand $f$ and $g$ are fully supported homeomorphisms, then fairly easy Baire Category methods as in Proposition 4.5 of~\cite{Ghys2001} show that by choosing generic square roots of $f$ and $g$, one obtains a group which is isomorphic to the formal square root of $\langle f,g\rangle$. \section*{Acknowledgements} The authors thank C. Bleak, M. Kassabov, J. Moore, and D. Osin for helpful discussions. The authors thank M. Brin for pointing out Corollary~\ref{cor:pl}. The authors are particularly grateful to M. Sapir for suggesting the content of Section~\ref{sec:generalroot}. The authors thank an anonymous referee for providing a large number of helpful comments. The first author was partially supported by Simons Foundation Collaboration Grant number 429836, and is partially supported by an Alfred P. Sloan Foundation Research Fellowship and by NSF Grant DMS-1711488. The second author has been supported by an EPFL-Marie Curie fellowship and the Swiss National Science Foundation Grant ``Ambizione" PZ00P2\_174137. \bibliographystyle{amsplain}
\begin{document} \maketitle \baselineskip=0.9 \normalbaselineskip \vspace{-3pt} \begin{center}{\footnotesize\em $^{\mbox{\tiny\rm 1}}$Centre for the mathematical sciences, Numerical Analysis, Lund University, Lund, Sweden\\ email: peter.meisrimel\symbol{'100}na.lu.se, azahar.sz\symbol{'100}gmail.com, philipp.birken\symbol{'100}na.lu.se\\ \footnotesize\em $^{\mbox{\tiny\rm 2}}$Chair of Computational Mathematics, University of Deusto, Bilbao, Spain} \end{center} \begin{abstract} We consider partitioned time integration for heterogeneous coupled heat equations. First and second order multirate, as well as time-adaptive Dirichlet-Neumann waveform relaxation (DNWR) methods are derived. In 1D and for implicit Euler time integration, we analytically determine optimal relaxation parameters for the fully discrete scheme. Similarly to a previously presented Neumann-Neumann waveform relaxation (NNWR) method, first and second order multirate methods are obtained. We test the robustness of the relaxation parameters on the second order multirate method in 2D. DNWR is shown to be very robust and consistently yielding fast convergence rates, whereas NNWR is slower or even diverges. The waveform approach naturally allows for different timesteps in the subproblems. In a performance comparison for DNWR, the time-adaptive method dominates the multirate method due to automatically finding suitable stepsize ratios. Overall, we obtain a fast, robust, multirate and time adaptive partitioned solver for unsteady conjugate heat transfer. 
\end{abstract} {\it {\bf Keywords}: Thermal Fluid-Structure Interaction, Coupled Problems, Dirichlet--Neumann Method, Multirate, Time Adaptivity, Waveform Relaxation}\\ {\it {\bf Mathematics Subject Classification (2000)}: 80M10, 35Q79, 65M22, 65F99}\medskip\\ {This research is supported by the Swedish e-Science collaboration eSSENCE, which we gratefully acknowledge.} \section{Introduction} We consider efficient numerical methods for the partitioned time integration of coupled multiphysics problems. In a partitioned approach different codes for the sub-problems are reused and the coupling is done by a master program which calls interface functions of the segregated codes \cite{causin:05, cristiano:11}. These algorithms are currently an active research topic driven by certain multiphysics applications where multiple physical models or multiple simultaneous physical phenomena involve solving coupled systems of partial differential equations (PDEs). An example of this is fluid structure interaction (FSI) \cite{vanBrummelen:11, bremicker:17}. Our prime motivation is thermal interaction between fluids and structures, also called conjugate heat transfer. There are two domains with jumps in the material coefficients across the connecting interface. Conjugate heat transfer plays an important role in many applications and its simulation has proved essential \cite{banka:05}. Examples for thermal fluid structure interaction are cooling of gas-turbine blades, thermal anti-icing systems of airplanes \cite{buchli:10}, supersonic reentry of vehicles from space \cite{mehta:05,hinrad:06}, gas quenching, which is an industrial heat treatment of metal workpieces \cite{hefiba:01,stshle:06} or the cooling of rocket nozzles \cite{kohoha:13,kotirh:13}. The most common form of coupling is a Dirichlet-Neumann (DN) approach, in which one problem has a Dirichlet boundary condition on the shared interface, while the other one uses a Neumann boundary condition. 
In the iteration, they provide each other with the suitable boundary data, i.e. a flux or the interface value. Thus, there is a connection to Domain Decomposition methods. From the partitioned time integration, we require that it allows for variable and adaptive time steps, preserves the order of the time integration methods in the subsolvers, and is robust and fast. A technique that promises to deliver these properties is the so-called waveform relaxation (WR). Here, one iterates over continuous interface functions in time. WR methods were originally introduced in \cite{lelarasmee:82} for ordinary differential equation (ODE) systems, and used for the first time to solve time dependent PDEs in \cite{gander:98,giladi:02}. They allow the use of different spatial and time discretizations for each subdomain. This is especially useful in problems with strong jumps in the material coefficients \cite{gander:03} or the coupling of different models for the subdomains \cite{Gander:07}. A key problem is to make the waveform iteration fast. A black box approach is to make use of Quasi-Newton methods, leading to Quasi-Newton waveform iterations \cite{rbmbmb:20}. Here, we instead follow the idea of tailoring very fast methods to a specific problem. In particular, we consider the Neumann-Neumann waveform relaxation (NNWR) and Dirichlet-Neumann waveform relaxation (DNWR) methods of Gander et al. \cite{Fwok:14,gander:16}, which are WR methods based on the classical Neumann-Neumann and Dirichlet-Neumann iterations. The DNWR method is serial, whereas with NNWR, one can solve the subproblems in parallel. Using an optimal relaxation parameter, convergence in two iterations is obtained for the continuous iteration in 1D. In \cite{MongeBirken:multirate}, a fully discrete multirate NNWR method for coupled heat equations with jumping coefficients is presented. Optimal relaxation parameters are determined for the 1D case. 
The method was extended to the time adaptive case in \cite{monbirDD25:19}. However, the NNWR method is extremely sensitive to the choice of the relaxation parameter, leading to a lack of robustness. In this paper, we therefore focus on the DNWR method. The standard DN method is known to be very fast for thermal interaction between air and steel \cite{biquhm:11,birquihame:10,biglkm:15}. This was thoroughly analyzed for the fully discrete case for two coupled heat equations with different material properties in \cite{Monge:2017} and for coupled Laplace equations in \cite{goebir:20}. Thus, we can expect DNWR to be a fast solver even when not using an optimal relaxation parameter. The technique employed here to determine optimal relaxation parameters follows \cite{MongeBirken:multirate}: we consider a fully discrete iteration for a 1D model problem. Then, using the Toeplitz structure of the arising matrices, a formula for the spectral radius of the iteration matrix can be found and the optimal relaxation parameter can be analytically determined. We present first and second order multirate WR methods, as well as a second order time adaptive method. The time integration methods we use as a base are implicit Euler and a second order singly diagonally implicit Runge-Kutta (SDIRK2) method. The optimal relaxation parameter $\Theta_{opt}$ from the 1D implicit Euler analysis yields good results for 2D and SDIRK2. We show how to adapt $\Theta_{opt}$ for use in the multirate and time-adaptive setting to get good convergence rates. Additionally, we experimentally show that the convergence results also extend to non-square geometries. The convergence rate turns out to be robust and small. 
\section{Model problem} The unsteady transmission problem reads as follows, where we consider a domain $\Omega \subset \mathbb{R}^d$ which is cut into two subdomains $\Omega = \Omega_1 \cup \Omega_2$ with transmission conditions at the interface $\Gamma = \partial \Omega_1 \cap \partial \Omega_2$: \begin{align} \begin{split}\label{EQ PROB MONO} \alpha_m \frac{\partial u_m(\bm{x},t)}{\partial t} - \nabla \cdot (\lambda_m \nabla u_m(\bm{x},t)) &= 0,\,\, \bm{x} \in \Omega_m \subset \mathbb{R}^d, \, m=1,2,\\ u_m(\bm{x},t) &= 0, \,\, \bm{x} \in \partial \Omega_m \backslash \Gamma,\\ u_1(\bm{x},t) &= u_2(\bm{x},t), \,\, \bm{x} \in \Gamma,\\ \lambda_2 \frac{\partial u_2(\bm{x},t)}{\partial \bm{n}_2} &= -\lambda_1 \frac{\partial u_1(\bm{x},t)}{\partial \bm{n}_1}, \,\, \bm{x} \in \Gamma,\\ u_m(\bm{x},0) &= u_m^0(\bm{x}), \,\, \bm{x} \in \Omega_m, \end{split} \end{align} where $t \in [0, T_f]$ and $\bm{n}_m$ is the outward normal to $\Omega_m$ for $m=1,2$. The constants $\lambda_1$ and $\lambda_2$ describe the thermal conductivities of the materials on $\Omega_1$ and $\Omega_2$ respectively. $D_1$ and $D_2$ represent the thermal diffusivities of the materials and are defined by \begin{align*} D_m = \lambda_m /\alpha_m, \quad \mbox{with} \quad \alpha_m = \rho_m c_{p_m}, \end{align*} where $\rho_m$ is the density and $c_{p_m}$ the specific heat capacity of the material placed in $\Omega_m$, $m=1,2$. \section{The Dirichlet-Neumann Waveform Relaxation algorithm} The Dirichlet-Neumann waveform relaxation (DNWR) method is inspired by substructuring methods from Domain Decomposition. The PDEs are solved sequentially using a Dirichlet and a Neumann boundary condition, respectively, with data given by the solution of the other problem, cf. \cite{Mandal:14,mandalphd}. It starts with an initial guess $g^{(0)} (\bm{x},t)$ on the interface $\Gamma \times (0, T_f]$, and then performs a three-step iteration. 
At each iteration $k$, imposing continuity of the solution across the interface, one first finds the local solution $u_1^{(k+1)}(\bm{x},t)$ on $\Omega_1$ by solving the Dirichlet problem: \begin{align} \begin{split}\label{EQ CONT DIR PROB} \alpha_1 \frac{\partial u_1^{(k+1)}(\bm{x},t)}{\partial t} - \nabla \cdot (\lambda_1 \nabla u_1^{(k+1)}(\bm{x},t)) = 0, \,\, \bm{x} \in \Omega_1,\\ u_1^{(k+1)}(\bm{x},t) = 0, \,\, \bm{x} \in \partial \Omega_1 \backslash \Gamma, \\ u_1^{(k+1)}(\bm{x},t) = g^{(k)}(\bm{x},t), \,\, \bm{x} \in \Gamma, \\ u_1^{(k+1)}(\bm{x},0) = u_1^{0}(\bm{x}), \,\, \bm{x} \in \Omega_1. \end{split} \end{align} Then, imposing continuity of the heat fluxes across the interface, one finds the local solution $u_2^{(k+1)}(\bm{x},t)$ on $\Omega_2$ by solving the Neumann problem: \begin{align} \begin{split}\label{EQ CONT NEU PROB} \alpha_2 \frac{\partial u_2^{(k+1)}(\bm{x},t)}{\partial t} - \nabla \cdot (\lambda_2 \nabla u_2^{(k+1)}(\bm{x},t)) = 0, \,\, \bm{x} \in \Omega_2,\\ u_2^{(k+1)}(\bm{x},t) = 0, \,\, \bm{x} \in \partial \Omega_2 \backslash \Gamma, \\ \lambda_2 \frac{\partial u_2^{(k+1)}(\bm{x},t)}{\partial \bm{n}_2} = -\lambda_1 \frac{\partial u_1^{(k+1)}(\bm{x},t)}{\partial \bm{n}_1}, \,\, \bm{x} \in \Gamma, \\ u_2^{(k+1)}(\bm{x},0) = u_2^0(\bm{x}), \,\, \bm{x} \in \Omega_2. \end{split} \end{align} Finally, the interface values are updated with \begin{align} \label{EQ CONT UPDATE PROB} g^{(k+1)}(\bm{x}, t) = \Theta u_2^{(k+1)}(\bm{x}, t) + (1 - \Theta) g^{(k)}(\bm{x}, t), \,\, \bm{x} \in \Gamma, \end{align} where $\Theta \in (0,1]$ is the relaxation parameter. Note that choosing an appropriate relaxation parameter is crucial to obtain a good convergence rate for the DNWR algorithm. In \cite{gander:16}, the optimal relaxation parameter has been proven to be $\Theta = 1/2$ for $\lambda_1=\lambda_2=\alpha_1=\alpha_2=1$. If one uses the optimal relaxation parameter for 1D problems, two iterations suffice for subdomains of equal length. 
\section{Semidiscrete method} We now describe a rather general space discretization of \eqref{EQ CONT DIR PROB}-\eqref{EQ CONT UPDATE PROB}. The core property we need is that the meshes of $\Omega_1$ and $\Omega_2$ share the same nodes on $\Gamma$ as shown in Figure \ref{FIG DOMAIN FE}. Furthermore, we assume that there is a specific set of unknowns associated with the interface nodes. Apart from this, we allow for arbitrary meshes on both sides at this point. \begin{figure} \centering \includegraphics[width=0.6\textwidth]{triangulation_subdomains.png} \caption{Splitting of $\Omega$ and finite element triangulation.} \label{FIG DOMAIN FE} \end{figure} Then, letting $\bm{u}_I^{(m)}: [0, T_f] \rightarrow \mathbb{R}^{R_m}$, where $R_m$ is the number of interior grid points on $\Omega_m$, $m=1,2$, and $\bm{u}_{\Gamma}: [0, T_f] \rightarrow \mathbb{R}^{s}$, where $s$ is the number of grid points at the interface $\Gamma$, we can write a general discretization of the first two equations in \eqref{EQ CONT DIR PROB} and \eqref{EQ CONT NEU PROB}, respectively, in a compact form as: \begin{align} \label{EQ SEMI DISCR DIR} \bm{M}_{II}^{(1)} \dot{\bm{u}}_I^{(1),(k+1)}(t) + \bm{A}_{II}^{(1)} \bm{u}_I^{(1),(k+1)}(t) = -\bm{M}_{I \Gamma}^{(1)} \dot{\bm{u}}_{\Gamma}^{(k)}(t) - \bm{A}_{I \Gamma}^{(1)} \bm{u}_{\Gamma}^{(k)}(t),\\ \label{EQ SEMI DISCR NEU SINGLE} \bm{M}_{II}^{(2)} \dot{\bm{u}}_I^{(2),(k+1)}(t) + \bm{M}_{I \Gamma}^{(2)} \dot{\bm{u}}_{\Gamma}^{(k+1)}(t) + \bm{A}_{II}^{(2)} \bm{u}_I^{(2),(k+1)}(t) + \bm{A}_{I \Gamma}^{(2)} \bm{u}_{\Gamma}^{(k+1)}(t) = \mathbf{0}, \end{align} with initial conditions $\bm{u}_I^{(m)}(0) \in \mathbb{R}^{R_m}$ and $\bm{u}_{\Gamma}(0) \in \mathbb{R}^{s}$, $m=1,2$. To close the system, we need an approximation of the normal derivatives on $\Gamma$.
Letting $\phi_j$ be a nodal FE basis function on $\Omega_m$ for a node on $\Gamma$, we observe that the normal derivative of $u_m$ with respect to the interface can be written as a linear functional using Green's formula \cite[p. 3]{Toselli:2004qr}. Thus, the approximation of the normal derivative is given by \begin{align*} \begin{split} & \lambda_m \int_{\Gamma} \frac{\partial u_m}{\partial \bm{n}_m} \phi_j dS = \lambda_m \int_{\Omega_m} (\Delta u_m \phi_j + \nabla u_m \cdot \nabla \phi_j) d \bm{x} \\ & = \alpha_m \int_{\Omega_m} \frac{d}{dt} u_m \phi_j d\bm{x} + \lambda_m \int_{\Omega_m} \nabla u_m \cdot \nabla \phi_j d \bm{x}, \,\, m=1,2. \end{split} \end{align*} Consequently, the equation \begin{align} \label{EQ SEMI DISCR NEU GAMMA} \begin{split} & \bm{M}_{\Gamma \Gamma}^{(2)} \dot{\bm{u}}_{\Gamma}^{(k+1)}(t) + \bm{M}_{\Gamma I}^{(2)} \dot{\bm{u}}_{I}^{(2),(k+1)}(t) + \bm{A}_{\Gamma \Gamma}^{(2)} \bm{u}_{\Gamma}^{(k+1)}(t) + \bm{A}_{\Gamma I}^{(2)} \bm{u}_{I}^{(2),(k+1)}(t) \\ & = - \left( \bm{M}_{\Gamma \Gamma}^{(1)} \dot{\bm{u}}_{\Gamma}^{(k)}(t) + \bm{M}_{\Gamma I}^{(1)} \dot{\bm{u}}_{I}^{(1),(k+1)}(t) + \bm{A}_{\Gamma \Gamma}^{(1)} \bm{u}_{\Gamma}^{(k)}(t) + \bm{A}_{\Gamma I}^{(1)} \bm{u}_{I}^{(1),(k+1)}(t) \right), \end{split} \end{align} is a semidiscrete version of the third equation in \eqref{EQ CONT NEU PROB} and completes the system \eqref{EQ SEMI DISCR DIR}-\eqref{EQ SEMI DISCR NEU SINGLE}. \begin{remark}\label{DEF MONO} Omitting the iteration indices, the system of IVPs defined by \mbox{\eqref{EQ SEMI DISCR DIR}}, \mbox{\eqref{EQ SEMI DISCR NEU SINGLE}} and \mbox{\eqref{EQ SEMI DISCR NEU GAMMA}} is the semidiscrete version of \mbox{\eqref{EQ PROB MONO}}. We refer to it as the (semidiscrete) monolithic system and its solution as the monolithic solution. \end{remark} We can now formulate a semidiscrete version of the DNWR algorithm. In each iteration $k$, one first solves the Dirichlet problem \eqref{EQ SEMI DISCR DIR}, obtaining $\bm{u}_I^{(1),(k+1)}(t)$.
Then, using the vector of unknowns $\bm{u}^{(k+1)}(t) = \left( {\bm{u}_I^{(2),(k+1)}}^T {\bm{u}_{\Gamma}^{(k+1)}}^T \right)^T(t)$, one solves the following Neumann problem that corresponds to equations \eqref{EQ SEMI DISCR NEU SINGLE} and \eqref{EQ SEMI DISCR NEU GAMMA}: \begin{align} \label{EQ SEMI DISCR NEU} \bm{M} \dot{\bm{u}}^{(k+1)}(t) + \bm{A} \bm{u}^{(k+1)}(t) = \bm{b}^{(k)}(t), \end{align} where \begin{align*} \bm{M} = \left( \begin{array}{cc} \bm{M}_{II}^{(2)} & \bm{M}_{I \Gamma}^{(2)} \\ \bm{M}_{\Gamma I}^{(2)} & \bm{M}_{\Gamma \Gamma}^{(2)}\\ \end{array} \right), \,\, \bm{A} = \left( \begin{array}{cc} \bm{A}_{II}^{(2)} & \bm{A}_{I \Gamma}^{(2)} \\ \bm{A}_{\Gamma I}^{(2)} & \bm{A}_{\Gamma \Gamma}^{(2)}\\ \end{array} \right), \,\, \bm{b}^{(k)} = \left( \begin{array}{c} \bm{0} \\ -\bm{q}^{(k+1)}(t)\\ \end{array} \right), \end{align*} with the heat flux \begin{equation}\label{EQ SEMI DISCR HEAT FLUX} \bm{q}^{(k+1)}(t) = \bm{M}_{\Gamma \Gamma}^{(1)} \dot{\bm{u}}_{\Gamma}^{(k)}(t) + \bm{M}_{\Gamma I}^{(1)} \dot{\bm{u}}_{I}^{(1),(k+1)}(t) + \bm{A}_{\Gamma \Gamma}^{(1)} \bm{u}_{\Gamma}^{(k)}(t) + \bm{A}_{\Gamma I}^{(1)} \bm{u}_{I}^{(1),(k+1)}(t). \end{equation} Finally, the interface values are updated by \begin{align} \label{EQ SEMI DISCR UPDATE} \bm{u}_{\Gamma}^{(k+1)}(t) \gets \Theta \bm{u}_{\Gamma}^{(k+1)}(t) + (1 - \Theta) \bm{u}_{\Gamma}^{(k)}(t). \end{align} The iteration starts with an initial guess $\bm{u}_{\Gamma}^{(0)}(t)$. Since the iteration is done on functions, one would like to terminate when $ \| \bm{u}_{\Gamma}^{(k+1)}(t) - \bm{u}_{\Gamma}^{(k)}(t) \| \leq TOL $ is met, where $TOL$ is a user-defined tolerance. However, checking such a criterion for all $t$ can be very memory-consuming. Since we expect the time integration error to grow in time, we only compare the updates at $T_f$.
Our termination criterion is \begin{equation} \label{EQ TERMINATION CRIT} \| \bm{u}_{\Gamma}^{(k+1)}(T_f) - \bm{u}_{\Gamma}^{(k)}(T_f) \|_{\Gamma} \leq TOL \cdot \| \bm{u}_\Gamma(0)\|_{\Gamma}, \end{equation} i.e., the relative update w.r.t. the initial value at the interface. We use the discrete $\mathcal{L}^2$ interface norm, given by \begin{equation*} \| \cdot \|_{\Gamma} = \| \cdot \|_2 \Delta x^{(d-1)/2}. \end{equation*} Here, $d$ is the spatial dimension of \mbox{\eqref{EQ PROB MONO}}. \section{Space-time interface interpolation} Both the Dirichlet and the Neumann problems \eqref{EQ SEMI DISCR DIR} and \eqref{EQ SEMI DISCR NEU} allow the use of an independent time discretization on each subdomain. Therefore, in the case of mismatched time grids, one needs to define an interface interpolation. Let $\tau_1 = \{ t_0^{(1)}, t_1^{(1)}, t_2^{(1)}, \ldots, t_{N_1}^{(1)} \}$ and $\tau_2 = \{ t_0^{(2)}, t_1^{(2)}, t_2^{(2)}, \ldots, t_{N_2}^{(2)}\}$ be two partitions of $[0,T_f]$ as illustrated in Figure \ref{FIG TIME INTERPOL}. In the WR algorithm, the solver of the Dirichlet problem \mbox{\eqref{EQ SEMI DISCR DIR}} takes a continuous function $\bm{u}^{(k)}_\Gamma(t)$ as input and outputs discrete fluxes $\bm{q}^{(k+1), 0}$, $\ldots$, $\bm{q}^{(k+1), N_1}$. We then use \textbf{linear interpolation} to get $\bm{q}^{(k+1)}(t)$ as the input to the Neumann problem \mbox{\eqref{EQ SEMI DISCR NEU}}, which returns discrete interface temperatures $\bm{u}_\Gamma^{(k+1), 0}$, \ldots, $\bm{u}_\Gamma^{(k+1), N_2}$ yielding $\bm{u}_\Gamma^{(k+1)}(t)$. The interpolation is done by the master program; as such, a solver need not know about the other time grid.
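As a concrete sketch of this interpolation step, the helper below builds a piecewise-linear-in-time interpolant of discrete interface data. This is a minimal illustration assuming the data are stored as NumPy arrays; the function name is ours, not the one used in the actual implementation.

```python
import numpy as np

def waveform_interpolant(t_grid, values):
    """Return a callable, linear-in-time interpolant of discrete interface
    data; t_grid has length N+1 and values has shape (N+1, s) for s
    interface unknowns."""
    t_grid = np.asarray(t_grid, dtype=float)
    values = np.asarray(values, dtype=float)

    def u(t):
        # np.interp interpolates componentwise over the s interface unknowns
        return np.array([np.interp(t, t_grid, values[:, j])
                         for j in range((values.shape[1]))])
    return u
```

For example, a flux waveform built from three samples can then be evaluated at any $t \in [0, T_f]$ by the receiving solver, regardless of its own time grid.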
\begin{figure} \centering \includegraphics[width=0.5\textwidth]{nonconforming_2d.png} \caption{Nonconforming time grids in the two-dimensional subdomains.} \label{FIG TIME INTERPOL} \end{figure} \section{Multirate time discretization}\label{SEC TIME DISCR} In this section we present a time-discretized version of the DNWR method presented in equations \eqref{EQ SEMI DISCR DIR}, \eqref{EQ SEMI DISCR NEU} and \eqref{EQ SEMI DISCR UPDATE}. We describe the algorithms in terms of a general $\Delta t$; a multirate version is obtained by choosing different $\Delta t$ for the Dirichlet and Neumann solvers. We present the algorithm for the following two time integration methods: the implicit Euler method and the second order singly diagonally implicit Runge-Kutta method (SDIRK2). \subsection{Implicit Euler} Given the initial value problem (IVP) \begin{equation}\label{EQ IVP} \dot{\bm{u}}(t) = \bm{f}(t, \bm{u}(t)), \quad \bm{u}(0) = \bm{u}_0, \quad t \in [0, T_f], \end{equation} the implicit Euler (IE) method is \begin{equation}\label{EQ IE} \bm{u}_{n+1} = \bm{u}_n + \Delta t_n \underbrace{\bm{f}(t_n + \Delta t_n, \bm{u}_{n+1})}_{\approx \dot{\bm{u}}(t_n + \Delta t_n)}. \end{equation} In each iteration $k$ of the WR algorithm, given the initial guess $\bm{u}_I^{(1),(k+1),0} = \bm{u}_I^{(1)}(0)$ and $\bm{u}^{(k)}_\Gamma(t)$, one first solves the Dirichlet problem \eqref{EQ SEMI DISCR DIR}: \begin{align} \label{EQ IE DIR} \begin{split} ( \bm{M}_{II}^{(1)} + \Delta t\bm{A}_{II}^{(1)} ) \bm{u}_I^{(1),(k+1),n+1} = & \bm{M}_{II}^{(1)} \bm{u}_I^{(1),(k+1),n} - \Delta t \Big( \bm{M}_{I \Gamma}^{(1)} \dot{\bm{u}}_{\Gamma}^{(k)}(t_{n+1})\\ & + \bm{A}_{I \Gamma}^{(1)} \bm{u}_{\Gamma}^{(k)}(t_{n+1}) \Big). \end{split} \end{align} We compute the discrete heat fluxes, cf.
\eqref{EQ SEMI DISCR HEAT FLUX}, as output of the Dirichlet solver by \begin{align}\label{EQ IE FLUX} \begin{split} \bm{q}^{(k+1), n+1} =& \bm{M}_{\Gamma \Gamma}^{(1)} \dot{\bm{u}}_{\Gamma}^{(k)}(t_{n+1}) + \bm{M}_{\Gamma I}^{(1)} \dot{\bm{u}}_I^{(1),(k+1)}(t_{n+1})\\ &+ \bm{A}_{\Gamma \Gamma}^{(1)} \bm{u}_{\Gamma}^{(k)}(t_{n+1}) + \bm{A}_{\Gamma I}^{(1)} \bm{u}_I^{(1),(k+1)}(t_{n+1}) ,\quad n = 0, 1, \ldots. \end{split} \end{align} To approximate the derivative terms in \eqref{EQ IE DIR} and \eqref{EQ IE FLUX}, we use \eqref{EQ IE}, i.e., $\dot{\bm{u}}^{(k)}_\Gamma(t_{n+1}) \approx (\bm{u}^{(k)}_\Gamma(t_{n+1}) - \bm{u}^{(k)}_\Gamma(t_{n}))/\Delta t$ and $\dot{\bm{u}}_I^{(1),(k+1)}(t_{n+1}) \approx (\bm{u}_I^{(1),(k+1), n+1} - \bm{u}_I^{(1),(k+1), n})/\Delta t$. Computation of the initial flux $\bm{q}^{(k+1), 0}$ is done using the analogous forward difference, i.e., $\dot{\bm{u}}(0) \approx (\bm{u}(\Delta t) - \bm{u}(0))/\Delta t$. Next, we rewrite the Neumann problem \eqref{EQ SEMI DISCR NEU} in terms of the vector of unknowns $\bm{u}^{(k+1),n+1} := \left( {\bm{u}_I^{(2),(k+1),n+1}}^T {\bm{u}_{\Gamma}^{(k+1),n+1}}^T \right)^T$. Given $\bm{u}^{(k+1),0} = \bm{u}(0)$ and $\bm{q}^{(k+1)}(t)$, one solves: \begin{align}\label{EQ IE NEU} \left( \bm{M} + \Delta t\bm{A} \right) \bm{u}^{(k+1),n+1} = \bm{M} \bm{u}^{(k+1),n} - \Delta t \begin{pmatrix} \bm{0} \\ \bm{q}^{(k+1)}(t_{n+1}) \end{pmatrix}, \quad n = 0, 1, \ldots. \end{align} Finally, the interface values are updated by \begin{align*} \bm{u}_{\Gamma}^{(k+1),n} \gets \Theta \bm{u}_{\Gamma}^{(k+1),n} + (1-\Theta) \bm{u}_{\Gamma}^{(k),n}. \end{align*} The pseudocode in Algorithm \ref{ALG IE FULL} provides an overview of this algorithm. \begin{algorithm}[ht!] \caption{Pseudocode of the DNWR IE method.
On domain $\Omega_m$ we do $N_m$ timesteps of size $\Delta t_m = T_f/N_m$, we denote the resulting time-grids by $\tau_m$, $m = 1, 2$.} \label{ALG IE FULL} \begin{algorithmic} \STATE{\textbf{DNWR\_IE}($T_f$, $N_1$, $N_2$, $(\bm{u}_0^{(1)}, \bm{u}_0^{(2)}, \bm{u}_\Gamma(0))$, $\Theta$, $TOL$, $k_{\max}$):} \STATE{$\bm{u}_\Gamma^{(0), 0}, \ldots, \bm{u}_\Gamma^{(0), N_2} = \bm{u}_{\Gamma} (0)$} \FOR{$k = 0, \ldots, k_{\max} - 1$} \STATE{$\bm{u}_{\Gamma}^{(k)}(t) \gets \text{Interpolation}(\tau_2,\, \bm{u}_\Gamma^{(k), 0}, \ldots, \bm{u}_\Gamma^{(k), N_2})$} \STATE{$\bm{q}^{(k+1), 0}, \ldots, \bm{q}^{(k+1), N_1}, \bm{u}_I^{(1), (k+1), N_1}$ $\gets$ \texttt{SolveDirichlet}($T_f$, $N_1$, $\bm{u}_0^{(1)}$, $\bm{u}_{\Gamma}^{(k)}(t)$)} \STATE{$\bm{q}^{(k+1)}(t) \gets \text{Interpolation}(\tau_1,\, \bm{q}^{(k+1), 0}, \ldots, \bm{q}^{(k+1), N_1})$} \STATE{$\bm{u}_{\Gamma}^{(k+1), 1}, \ldots, \bm{u}_{\Gamma}^{(k+1), N_2}$, $\bm{u}_I^{(2), (k+1), N_2}$ $\gets$ \texttt{SolveNeumann}($T_f$, $N_2$, $(\bm{u}_0^{(2)}, \bm{u}_0(x_\Gamma))$, $\bm{q}^{(k+1)}(t)$)} \STATE{$\bm{u}_{\Gamma}^{(k+1)}(t) \gets \Theta \bm{u}_{\Gamma}^{(k+1)}(t) + (1 - \Theta) \bm{u}_{\Gamma}^{(k)}(t)$} \IF{$\| \bm{u}_{\Gamma}^{(k+1), N_2} - \bm{u}_{\Gamma}^{(k), N_2}\|_{\Gamma} < TOL \, \| \bm{u}_\Gamma(0)\|_{\Gamma}$} \STATE{\textbf{break}} \ENDIF \ENDFOR \end{algorithmic} \end{algorithm} \subsection{SDIRK2}\label{SEC SDIRK2} We now introduce a higher order version of the same multirate algorithm. Specifically, we consider the second order singly diagonally implicit Runge-Kutta (SDIRK2) method as a basis to discretize the systems \eqref{EQ SEMI DISCR DIR}, \eqref{EQ SEMI DISCR NEU} and \eqref{EQ SEMI DISCR UPDATE} in time. 
For the IVP \eqref{EQ IVP}, an $s$-stage SDIRK method is defined as \begin{align}\label{EQ SDIRK2} \begin{split} \bm{k}_i & = \bm{f}(t_n + c_i \Delta t_n, \bm{U}_i) \approx \dot{\bm{u}}(t_n + c_i \Delta t_n), \\ \bm{U}_i & = \underbrace{\bm{u}_n + \Delta t_n \sum_{j=1}^{i-1} a_{ij}\bm{k}_j}_{=: \bm{s}_i} + \Delta t_n a_{ii} \bm{k}_i \approx \bm{u}(t_n + c_i \Delta t_n) ,\quad i = 1,\ldots, s, \\ \bm{u}_{n+1} & = \bm{u}_n + \Delta t_n \sum_{i=1}^s b_i \bm{k}_i, \end{split} \end{align} with coefficients $a_{ij}$, $b_i$ and $c_i$, $i, j = 1, \ldots, s$. The two-stage SDIRK2 method is defined by the coefficients in the Butcher tableau in Table \ref{TABLE BUTCHER}. \begin{table}[ht!] \begin{center} \begin{tabular}{c|cc} $\bm{c}$ & $\bm{A}$ \\ \hline & $\bm{b}$ \\ & $\hat{\bm{b}}$ \end{tabular} \qquad\qquad \begin{tabular}{c|cc} $a$ & $a$ & $0$ \\ $1$ & $1-a$ & $a$ \\ \hline & $1-a$ & $a$ \\ & $1-\hat{a}$ & $\hat{a}$ \end{tabular} \qquad\qquad \begin{tabular}{c} $a = 1 - \frac{1}{2} \sqrt{2}$ \\ $\hat{a} = 2 - \frac{5}{4} \sqrt{2}$ \end{tabular} \caption{Butcher tableau for SDIRK2.}\label{TABLE BUTCHER} \end{center} \end{table} Here, the last row of $\bm{A}$ and $\bm{b}$ coincide, such that $\bm{u}_{n+1} = \bm{U}_2$. One can rewrite \eqref{EQ SDIRK2} as \begin{align*} \begin{split} \bm{U}_1 & = \bm{s}_1 + a \Delta t \bm{f}(t_n + a \Delta t, \bm{U}_1) ,\quad \bm{s}_1 = \bm{u}_n,\\ \bm{u}_{n+1} & = \bm{s}_2 + a \Delta t \bm{f}(t_n + \Delta t, \bm{u}_{n+1}) ,\quad \bm{s}_2 = \bm{s}_1 + \frac{1-a}{a} (\bm{U}_1 - \bm{s}_1). \end{split} \end{align*} In the following we use superscripts $(m)$ on $\bm{s}_j$, $\bm{U}_j$ for variables associated with $\Omega_m$. The Dirichlet problem uses the initial guess $\bm{u}_I^{(1),(k+1),0} = \bm{u}_I^{(1)}(0)$.
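As an aside, the rewritten two-stage form can be checked on a generic linear IVP $\dot{\bm{u}} = \bm{F}\bm{u}$. The sketch below is a minimal illustration under that assumption (the matrix $\bm{F}$ and the test problem are ours, not the semidiscrete systems of this paper); since both stages share the coefficient $a$, one factorization of $\bm{I} - a\Delta t\,\bm{F}$ serves both solves.

```python
import numpy as np

A_COEF = 1.0 - np.sqrt(2.0) / 2.0  # a = 1 - sqrt(2)/2 from the tableau

def sdirk2_solve(f_mat, u0, T, N):
    """Integrate the linear IVP u' = f_mat @ u on [0, T] with N constant
    steps of the two-stage SDIRK2 scheme in the rewritten form above."""
    a = A_COEF
    dt = T / N
    I = np.eye(len(u0))
    W = I - a * dt * f_mat           # system matrix of both implicit stages
    u = np.array(u0, dtype=float)
    for _ in range(N):
        s1 = u
        U1 = np.linalg.solve(W, s1)  # U1 = s1 + a*dt*f(U1)
        s2 = s1 + (1.0 - a) / a * (U1 - s1)
        u = np.linalg.solve(W, s2)   # u_{n+1} = s2 + a*dt*f(u_{n+1})
    return u
```

On $\dot{u} = -u$, $u(0) = 1$, halving the step size reduces the error at $T=1$ by roughly a factor of four, consistent with second order.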
Given $\bm{u}_\Gamma^{(k)}(t)$, one solves \begin{align}\label{EQ SDIRK2 DIR} \begin{split} \left( \bm{M}_{II}^{(1)} + a \Delta t \bm{A}_{II}^{(1)} \right) \bm{U}_{j}^{(1)} =& \bm{M}_{II}^{(1)} \bm{s}_{j}^{(1)} - a \Delta t \Big(\bm{M}_{I \Gamma}^{(1)} \dot{\bm{u}}_{\Gamma}^{(k)}(t_n + c_j \Delta t)\\ & + \bm{A}_{I \Gamma}^{(1)} \bm{u}_{\Gamma}^{(k)}(t_n + c_j \Delta t)\Big), \quad j=1,2. \\ \end{split} \end{align} We approximate the derivatives based on \eqref{EQ SDIRK2}, i.e., \begin{equation}\label{EQ SDIRK2 DER APPROX 1} \dot{\bm{u}}(t_n + a \Delta t) \approx \bm{k}_1 = \frac{\bm{U}_1 - \bm{s}_1}{a \Delta t} = \frac{\bm{U}_1 - \bm{u}_n}{a \Delta t} \approx \frac{\bm{u}(t_n + a \Delta t) - \bm{u}(t_n)}{a \Delta t}. \end{equation} Using the same approach for $\dot{\bm{u}}(t_n + \Delta t) \approx \bm{k}_2$ yields: \begin{equation}\label{EQ SDIRK2 DER APPROX 2} \dot{\bm{u}}^{(k)}(t_n + \Delta t) \approx \frac{\bm{u}^{(k)}(t_n + \Delta t) - (\bm{u}^{(k)}(t_n) + (1 - a)\Delta t \dot{\bm{u}}^{(k)}(t_n + a \Delta t) )}{a \Delta t}. \end{equation} These approximations are sufficiently accurate to retain second order for the SDIRK2 WR method. In particular, \mbox{\eqref{EQ SDIRK2 DER APPROX 2}} remedies the insufficient accuracy of a simple backward difference, while only requiring evaluations within $[t_n, t_{n+1}]$. We use these approximations for $\dot{\bm{u}}_\Gamma^{(k)}$ in \mbox{\eqref{EQ SDIRK2 DIR}} and the flux computation. For the $\dot{\bm{u}}^{(1)}$ terms we directly use the corresponding $\bm{k}_j^{(1)}$. The resulting discrete fluxes are: \begin{align}\label{EQ SDIRK2 FLUX} \begin{split} \bm{q}_{j}^{(k+1),n+1} = & \bm{M}_{\Gamma \Gamma}^{(1)} \dot{\bm{u}}_{\Gamma}^{(k)}(t_n + c_j \Delta t) + \bm{M}_{\Gamma I}^{(1)} \bm{k}_j^{(1)} \\ & + \bm{A}_{\Gamma \Gamma}^{(1)} \bm{u}_{\Gamma}^{(k)}(t_n + c_j \Delta t) + \bm{A}_{\Gamma I}^{(1)} \bm{U}_j^{(1)}, \quad j= 1, 2, \quad n = 0, 1, \ldots.
\end{split} \end{align} The initial flux $\bm{q}^{(k+1), 0}$ has to be sufficiently accurate to prevent a loss of order due to error propagation. We suggest the following three-point forward difference to approximate both $\dot{\bm{u}}_\Gamma^{(k)}(0)$ and $\dot{\bm{u}}^{(k), (1)}(0)$: \begin{equation}\label{EQ SDIRK2 FLUX 0} \dot{\bm{u}}(0) \approx (-3 \bm{u}(0) + 4 \bm{u}(\Delta t) - \bm{u}(2 \Delta t))/ ( 2 \Delta t). \end{equation} We rewrite the Neumann problem in \eqref{EQ SEMI DISCR NEU} for the vector of unknowns $\bm{u}^{(k+1),n} := \left( {\bm{u}_I^{(2),(k+1),n}}^T {\bm{u}_{\Gamma}^{(k+1),n}}^T \right)^T$. Given $\bm{q}_j^{(k+1)}(t)$, $j = 1, 2$, one then solves the Neumann problem starting with $\bm{u}^{(k+1),0} = \bm{u}^{(k+1)}(0)$: \begin{align}\label{EQ SDIRK2 NEU} \begin{split} & \left( \bm{M} + a \Delta t \bm{A} \right) \bm{U}_j^{(2)} = \bm{M} \bm{s}_j^{(2)} - a \Delta t \begin{pmatrix} \bm{0} \\ \bm{q}_j^{(k+1)}(t_n + c_j \Delta t)\end{pmatrix} , \quad j=1,2,\\ & \bm{u}^{(k+1),n+1} = \bm{U}_2^{(2)}.\\ \end{split} \end{align} The update step and the termination check are identical to IE. The differences to Algorithm \ref{ALG IE FULL} lie in \texttt{SolveDirichlet} and \texttt{SolveNeumann}, which now follow \eqref{EQ SDIRK2 DIR}, \eqref{EQ SDIRK2 FLUX} and \eqref{EQ SDIRK2 NEU}. Additionally, \texttt{SolveDirichlet} now returns two fluxes, which are both interpolated and passed into \texttt{SolveNeumann}. \begin{remark} The fixed point of the DNWR SDIRK2 method is not the monolithic solution one gets from applying SDIRK2 to the semidiscrete monolithic system, cf. Remark \ref{DEF MONO}. That is, for given $N_1$, $N_2$, the coupling residual does not vanish for $k \rightarrow \infty$. This is due to the derivative approximations \eqref{EQ SDIRK2 DER APPROX 1} and \eqref{EQ SDIRK2 DER APPROX 2} in the Dirichlet solver. However, the coupling residual is of the same order as the time integration error and thus vanishes for $N_1, N_2, k \rightarrow \infty$.
\end{remark} \subsection{Optimal relaxation parameter}\label{SEC THETA} A waveform relaxation method for the equation $\bm{B} \dot{\bm{u}} + \bm{A} \bm{u} = \bm{0}$ can be defined using (block-)splittings $\bm{B} = \bm{M}_B - \bm{N}_B$ and $\bm{A} = \bm{M}_A - \bm{N}_A$, see \cite{Janssen1997}. The resulting iteration is \begin{equation*} \bm{M}_B \dot{\bm{u}}^{(k+1)} + \bm{M}_A \bm{u}^{(k+1)} = \bm{N}_B \dot{\bm{u}}^{(k)} + \bm{N}_A \bm{u}^{(k)}. \end{equation*} Applying implicit Euler with a fixed $\Delta t$ yields \begin{equation*} \underbrace{(\bm{M}_B + \Delta t \bm{M}_A)}_{=:\bm{M}^*}\bm{u}_{n+1}^{(k+1)} - \bm{M}_B \bm{u}_{n}^{(k+1)} = \underbrace{(\bm{N}_B + \Delta t \bm{N}_A)}_{=:\bm{N}^*}\bm{u}_{n+1}^{(k)} - \bm{N}_B \bm{u}_{n}^{(k)}. \end{equation*} Now, writing the iteration for $\bm{U} = (\bm{u}_1^T, \ldots, \bm{u}_N^T)^T$ gives \begin{equation*} \begin{pmatrix} \bm{M}^* & & &\\ -\bm{M}_B & \ddots & &\\ & \ddots & \ddots &\\ & & -\bm{M}_B & \bm{M}^* \end{pmatrix} \bm{U}^{(k+1)} = \begin{pmatrix} \bm{N}^* & & & \\ -\bm{N}_B & \ddots & &\\ & \ddots & \ddots & \\ & & -\bm{N}_B & \bm{N}^* \end{pmatrix} \bm{U}^{(k)} + \begin{pmatrix} \bm{B}\bm{u}(0) \\\bm{0} \\ \vdots \end{pmatrix}. \end{equation*} This describes the iteration for all timesteps, i.e., the whole discrete waveform $\bm{U}^{(k)}$. Since both matrices are block lower triangular, so is the resulting iteration matrix. The diagonal blocks are given by ${\bm{M}^*}^{-1}\bm{N}^*$ and determine the spectral radius, which is the asymptotic convergence rate. Since the diagonal blocks are identical, it is sufficient to look at a single timestep. In this setting the iteration matrix w.r.t. $\bm{u}_\Gamma^{(k)}$ was already determined in \cite{Monge:2017}.
It is given by \begin{equation*} \boldsymbol{\Sigma} = -{\bm{S}^{(2)}}^{-1} \bm{S}^{(1)}, \end{equation*} where \begin{equation}\label{EQ THETA S MAT} \bm{S}^{(m)} := \left( \frac{\bm{M}_{\Gamma \Gamma}^{(m)}}{\Delta t} + \bm{A}_{\Gamma \Gamma}^{(m)} \right) - \left( \frac{\bm{M}_{\Gamma I}^{(m)}}{\Delta t} + \bm{A}_{\Gamma I}^{(m)} \right) \left( \frac{\bm{M}_{II}^{(m)}}{\Delta t} + \bm{A}_{II}^{(m)} \right)^{-1} \left( \frac{\bm{M}_{I \Gamma}^{(m)}}{\Delta t} + \bm{A}_{I \Gamma}^{(m)} \right). \end{equation} This is obtained by solving \eqref{EQ IE DIR} for $\bm{u}_I^{(1), (k+1), n+1}$, assuming $\bm{u}_I^{(1),(k+1),n} = \bm{0}$, inserting the result into \eqref{EQ IE FLUX} and then into \eqref{EQ IE NEU}, and lastly solving \eqref{EQ IE NEU} for $\bm{u}_\Gamma^{(k+1),n+1}$, assuming $\bm{u}_{I}^{(2),(k+1),n} = \bm{0}$, using the Schur complement. Including the relaxation step yields the following iteration (for a single timestep): \begin{equation*} \bm{u}_{\Gamma}^{(k + 1)} = (\Theta \boldsymbol{\Sigma} + (1 - \Theta) \bm{I}) \bm{u}_{\Gamma}^{(k)}. \end{equation*} In the 1D case, $\bm{S}^{(m)}$ and $\boldsymbol{\Sigma}$ are scalars; thus the optimal relaxation parameter is \begin{equation}\label{EQ THETA OPT} \Theta_{opt} = \frac{1}{\left| 1 + {\bm{S}^{(2)}}^{-1} \bm{S}^{(1)}\right|}. \end{equation} In the following we specifically consider a 1D model problem on $\Omega = [-1, 1]$, split at $x_\Gamma = 0$, and an equidistant discretization using linear finite elements. The matrices $\bm{M}^{(m)}_{II}$ and $\bm{A}^{(m)}_{II}$ have a known Toeplitz structure. Thus, one can write down an exact expression of \mbox{\eqref{EQ THETA S MAT}} using an eigendecomposition to calculate the inverse of $\bm{M}_{II}^{(m)} /\Delta t + \bm{A}_{II}^{(m)}$.
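As a cross-check, the Schur complements \eqref{EQ THETA S MAT} and the resulting $\Theta_{opt}$ from \eqref{EQ THETA OPT} can also be evaluated numerically in this 1D setting. The sketch below assembles the standard 1D linear FE blocks for unit-length subdomains directly; the helper names and parameter values are illustrative assumptions, not taken from the actual implementation.

```python
import numpy as np

def fe_blocks(alpha, lam, n, dx):
    """1D linear FE blocks for one unit-length subdomain with n interior
    nodes plus the interface node; Dirichlet condition on the outer end."""
    A_II = lam / dx * (2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
    M_II = alpha * dx / 6.0 * (4*np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1))
    A_IG = np.zeros(n); A_IG[-1] = -lam / dx      # coupling to interface node
    M_IG = np.zeros(n); M_IG[-1] = alpha * dx / 6.0
    A_GG = lam / dx                               # one adjacent element only
    M_GG = alpha * dx / 3.0
    return M_II, A_II, M_IG, A_IG, M_GG, A_GG

def schur_S(alpha, lam, n, dx, dt):
    """Evaluate the scalar S^(m) for one subdomain."""
    M_II, A_II, M_IG, A_IG, M_GG, A_GG = fe_blocks(alpha, lam, n, dx)
    W = M_II / dt + A_II
    v = M_IG / dt + A_IG
    return (M_GG / dt + A_GG) - v @ np.linalg.solve(W, v)

def theta_opt(alpha1, lam1, alpha2, lam2, dx, dt, n=None):
    if n is None:
        n = round(1.0 / dx) - 1     # interior nodes for a unit subdomain
    S1 = schur_S(alpha1, lam1, n, dx, dt)
    S2 = schur_S(alpha2, lam2, n, dx, dt)
    return 1.0 / abs(1.0 + S1 / S2)
```

For very small and very large $\Delta t/\Delta x^2$ the computed values approach the limits $\alpha_2/(\alpha_1+\alpha_2)$ and $\lambda_2/(\lambda_1+\lambda_2)$, respectively.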
Through lengthy but straightforward calculations (see \cite{Monge:2017}), one obtains the following expressions: \begin{align} \bm{S}^{(m)} & = \frac{6 \Delta t \Delta x (\alpha_m \Delta x^2 + 3 \lambda_m \Delta t) - (\alpha_m \Delta x^2 - 6 \lambda_m \Delta t)s_m}{18 \Delta t^2 \Delta x}, \label{EQ THETA SM 1}\\ s_m & = \sum_{i=1}^{N} \frac{3 \Delta t \Delta x^2 \sin^2 (i \pi \Delta x)}{2 \alpha_m \Delta x^2 + 6 \lambda_m \Delta t + (\alpha_m \Delta x^2 - 6 \lambda_m \Delta t) \cos (i \pi \Delta x)}. \label{EQ THETA SM 2} \end{align} Using $c = \Delta t/\Delta x^2$, $\Theta_{opt}$ has the following temporal and spatial limits \cite{Monge:2017}: \begin{equation}\label{EQ THETA LIMS} \lim_{c \rightarrow 0} \Theta_{opt} = \frac{\alpha_2}{\alpha_1 + \alpha_2}, \quad \lim_{c \rightarrow \infty} \Theta_{opt} = \frac{\lambda_2}{\lambda_1 + \lambda_2}. \end{equation} These are consistent with the one-dimensional semidiscrete analysis performed in \cite{Fwok:14}. There, a convergence analysis using Laplace transforms for the DNWR method \eqref{EQ CONT DIR PROB}-\eqref{EQ CONT UPDATE PROB} on two identical subdomains $\Omega_1$ and $\Omega_2$ with constant coefficients shows that $\Theta_{opt} = 1/2$. Their result is recovered when approaching the continuous case in the limit $\Delta t/\Delta x^2 \rightarrow \infty$ for constant coefficients. Figure \ref{FIG THETA OPT VS CFL} shows $\Theta_{opt}$ for a few material combinations, see Table \ref{TABLE MATERIALS}. One can observe that $\Theta_{opt}$ is continuous and bounded by its spatial and temporal limits \eqref{EQ THETA LIMS}. \begin{figure} \centering \subfigure[Air-Water]{\includegraphics[width=4.5cm]{CFL_vs_opt_theta_air_water.png}} \hfill \subfigure[Air-Steel]{\includegraphics[width=4.5cm]{CFL_vs_opt_theta_air_steel.png}} \hfill \subfigure[Water-Steel]{\includegraphics[width=4.5cm]{CFL_vs_opt_theta_water_steel.png}} \hfill \caption{$\Theta_{opt}$ over $\Delta t / \Delta x^2$ for the DNWR algorithm.
$\Delta x = 1/100$, $T_f = 10^{10}$ and $\Delta t = T_f/2^0, \ldots, T_f/2^{50}$.} \label{FIG THETA OPT VS CFL} \end{figure} \subsubsection{Multirate relaxation parameter} For $\Delta t_1 \neq \Delta t_2$ the analysis in the previous section no longer applies. Instead, we determine $\Theta_{opt}$ based on numerical experiments in Section \ref{SEC THETA MR TEST}. These show that, on average, the optimal choice is to use $\Theta_{opt}$ based on the maximum of $\Delta t_1$ and $\Delta t_2$. This result coincides with the experiments to determine $\Theta_{opt}$ for the multirate NNWR method in \cite{monbirDD25:19}. \section{Time adaptive method}\label{SEC DT CONTROL} The goal of adaptivity is to use timesteps as large as possible and as small as necessary to reduce computational costs, while ensuring a target accuracy. In particular, using adaptive time stepping for both subdomains separately, one can attain comparable errors automatically, bypassing the need to determine a suitable step-size ratio for the multirate case. The basic idea is to control the timestep sizes such that a local error estimate stays close to a given tolerance $\tau$. We use a local error estimate obtained by an embedded technique \cite[chap. IV.8]{HairerII}, i.e., for the SDIRK2 method we use the coefficients $\hat{\bm{b}}$, see Table \ref{TABLE BUTCHER}, to obtain $\hat{\bm{u}}_n$, another solution of lower order. The local error estimate is $\boldsymbol{\ell}_n = \bm{u}_n -\hat{\bm{u}}_n$. We then control the timesteps using the proportional-integral (PI) controller \begin{equation*} \Delta t_{n+1} = \Delta t_n \left( \frac{\tau}{\|\boldsymbol{\ell}_n\|_I}\right)^{1/3} \left( \frac{\tau}{\|\boldsymbol{\ell}_{n-1}\|_I}\right)^{-1/6}, \end{equation*} cf. \cite{arevalo2017grid}, PI3333. We use this procedure on each subdomain independently.
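A minimal sketch of this controller (the function name and the example values below are ours):

```python
def pi_step(dt_n, err_n, err_prev, tol):
    """PI3333-type step-size update: keep the local error estimate
    ||l_n||_I close to the tolerance tau (= tol here)."""
    return dt_n * (tol / err_n) ** (1.0 / 3.0) \
                * (tol / err_prev) ** (-1.0 / 6.0)
```

If the error estimate sits exactly at the tolerance, the step size is kept; if the current estimate is eight times too large (with the previous one on target), the step is halved.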
As the initial stepsize we use \begin{equation*} \Delta t_0^{(m)} = \frac{T_f \, {\tau^{(m)}}^{1/2}}{100(1 + \| {\bm{M}_{II}^{(m)}}^{-1}\bm{A}_{II}^{(m)} \bm{u}_I^{(m)}(0)\|_I)}, \quad m = 1,\,2, \end{equation*} cf. \cite{monbirDD25:19}. We choose the tolerances $\tau^{(m)} = TOL/5$, $m = 1,\, 2$. This choice is motivated by \cite{Soderlind06} and was already used in a similar context in \cite{BirkenMonge17, monbirDD25:19}. We use the discrete $\mathcal{L}^2$ norm \begin{equation}\label{EQ NORM INNER} \| \bm{u} \|_I^2 = (\bm{u}^T \bm{M} \bm{u})/|\Omega_u|, \end{equation} where $\bm{M}$ is the corresponding mass matrix and $|\Omega_u|$ the area on which $\bm{u}$ is defined. Using this adaptive method, we get independent time grids for both subdomains, which are suitable for the given material parameters. A pseudocode of the adaptive SDIRK2 DNWR method is shown in Algorithm \mbox{\ref{ALG ADAPTIVE}}. \texttt{AdaptiveSolveDirichlet} and \texttt{AdaptiveSolveNeumann} perform time integration as described in Section \mbox{\ref{SEC SDIRK2}} with the timestep control described above. Due to non-constant timesteps, the difference formula for the derivative approximation to compute the initial flux \mbox{\eqref{EQ SDIRK2 FLUX 0}} becomes \begin{equation*} \dot{\bm{u}}(0) \approx \frac{-(1 - c^2) \bm{u}(0) + \bm{u}(\Delta t_0) - c^2\bm{u}( \Delta t_0 + \Delta t_1)}{\Delta t_0 (1 - c)}, \quad c = \frac{\Delta t_0}{\Delta t_0 + \Delta t_1}. \end{equation*} \begin{algorithm}[ht!] \caption{Pseudocode of the adaptive SDIRK2 DNWR method. We denote the time grids on $\Omega_m$, $m = 1,2$ by $\tau_m^{(k)}$. These mark the time-points corresponding to $\{ \bm{q}_2^{(k), j}\}_j$ resp. $\{ \bm{u}_\Gamma^{(k), j}\}_j$.
The time grid for the stage fluxes $\{ \bm{q}_1^{(k), j}\}_j$ is $\tau_{1, stage}^{(k)}$, which is defined by $\tau_1^{(k)}$.} \label{ALG ADAPTIVE} \begin{algorithmic} \STATE{\textbf{DNWR\_SDIRK2\_TA}($T_f$, $(\bm{u}_0^{(1)}, \bm{u}_0^{(2)}, \bm{u}_\Gamma(0))$, $\Theta$, $TOL$, $k_{\max}$):} \STATE{$\tau_2^{(0)} = \{ 0, T_f \}$; $\{\bm{u}_{\Gamma}^{(0), j}\}_{j = 0, 1} = \{ \bm{u}_{\Gamma}(0), \bm{u}_{\Gamma}(0) \}$} \FOR{$k = 0, \ldots, k_{\max} - 1$} \STATE{$\bm{u}_{\Gamma}^{(k)}(t) \gets \text{Interpolation}(\tau_2^{(k)}, \{\bm{u}_{\Gamma}^{(k), j}\}_{j})$} \STATE{$\tau_1^{(k+1)}, \{\bm{q}_1^{(k+1), j}\}_{j}, \{\bm{q}_2^{(k+1), j}\}_{j}$ $\gets$ \texttt{AdaptiveSolveDirichlet}($T_f$, $TOL/5$, $\bm{u}_0^{(1)}$, $\bm{u}_{\Gamma}^{(k)}(t)$)} \STATE{$\bm{q}_1^{(k+1)}(t) = \text{Interpolation}(\tau_{1, stage}^{(k+1)}, \{\bm{q}_1^{(k+1), j}\}_{j})$} \STATE{$\bm{q}_2^{(k+1)}(t) = \text{Interpolation}(\tau_1^{(k+1)}, \{\bm{q}_2^{(k+1), j}\}_{j})$} \STATE{$\tau_2^{(k+1)}, \{\bm{u}_\Gamma^{(k+1), j}\}_{j}$ $\gets$ \texttt{AdaptiveSolveNeumann}($T_f$, $TOL/5$, $(\bm{u}_0^{(2)}, \bm{u}_0(x_\Gamma))$, $\bm{q}_1^{(k+1)}(t)$, $\bm{q}_2^{(k+1)}(t)$)} \STATE{Compute $\Theta$, see Section \ref{SEC THETA ADAPTIVE}} \STATE{$\bm{u}_{\Gamma}^{(k+1)}(t) \gets \Theta \bm{u}_{\Gamma}^{(k+1)}(t) + (1 - \Theta) \bm{u}_{\Gamma}^{(k)}(t)$} \IF{$\| \bm{u}_{\Gamma}^{(k+1)}(T_f) - \bm{u}_{\Gamma}^{(k)}(T_f)\|_{\Gamma} < TOL \, \| \bm{u}_\Gamma(0)\|_{\Gamma}$} \STATE{\textbf{break}} \ENDIF \ENDFOR \end{algorithmic} \end{algorithm} \subsection{Relaxation parameter}\label{SEC THETA ADAPTIVE} In the adaptive method the Dirichlet-Neumann operator changes in every WR step, since the timestep sizes change. Consequently, we recompute $\Theta$ in every iteration. We use $\Theta$ as in the multirate case with the average timestep sizes from each subdomain. This approach improves upon \mbox{\cite{monbirDD25:19}}, where $\Theta$ was based on the time grids of the previous WR step.
\section{The Neumann-Neumann Waveform Relaxation algorithm} Here, we briefly recap the related Neumann-Neumann waveform relaxation (NNWR) method, c.f. \cite{Fwok:14,MongeBirken:multirate,monbirDD25:19}. Similar to DNWR, NNWR solves \eqref{EQ PROB MONO} in a partitioned manner. The continuous formulation of the algorithm is as follows: Given $g^{(k)}(\bm{x}, t)$ one first solves the following Dirichlet problem on each subdomain: \begin{align*} \begin{split} \alpha_m \frac{\partial u_m^{(k+1)}(\bm{x},t)}{\partial t} - \nabla \cdot (\lambda_m \nabla u_m^{(k+1)}(\bm{x},t)) = 0, \,\, \bm{x} \in \Omega_m,\\ u_m^{(k+1)}(\bm{x},t) = 0, \,\, \bm{x} \in \partial \Omega_m \backslash \Gamma, \\ u_m^{(k+1)}(\bm{x},t) = g^{(k)}(\bm{x},t), \,\, \bm{x} \in \Gamma, \\ u_m^{(k+1)}(\bm{x},0) = u_m^{0}(\bm{x}), \,\, \bm{x} \in \Omega_m. \end{split} \end{align*} Next, one solves the following Neumann problems: \begin{align} \begin{split}\label{EQ NNWR NEU PROB} \alpha_m \frac{\partial \psi_m^{(k+1)}(\bm{x},t)}{\partial t} - \nabla \cdot (\lambda_m \nabla \psi_m^{(k+1)}(\bm{x},t)) = 0, \,\, \bm{x} \in \Omega_m,\\ \psi_m^{(k+1)}(\bm{x},t) = 0, \,\, \bm{x} \in \partial \Omega_m \backslash \Gamma, \\ \lambda_m \frac{\partial \psi_m^{(k+1)}(\bm{x},t)}{\partial \bm{n}_m} = \lambda_1 \frac{\partial u_1^{(k+1)}(\bm{x},t)}{\partial \bm{n}_1} + \lambda_2 \frac{\partial u_2^{(k+1)}(\bm{x},t)}{\partial \bm{n}_2}, \,\, \bm{x} \in \Gamma, \\ \psi_m^{(k+1)}(\bm{x},0) = 0, \,\, \bm{x} \in \Omega_m. \end{split} \end{align} Finally, the update step is \begin{equation*} g^{(k+1)}(\bm{x}, t) = g^{(k)}(\bm{x}, t) - \Theta(\psi^{(k+1)}_1(\bm{x}, t) + \psi^{(k+1)}_2(\bm{x}, t)), \quad \bm{x} \in \Gamma. \end{equation*} For the fully discrete version and a detailed algorithmic description see \cite{MongeBirken:multirate}. One can solve on the subdomains in parallel. 
The NNWR algorithm is based on the exact same Dirichlet and Neumann problems as the DNWR algorithm, but with different input data, namely the fluxes and the initial value for the Neumann problem \mbox{\eqref{EQ NNWR NEU PROB}}. Hence one can directly reuse the solvers described in Section \ref{SEC TIME DISCR}, including time-adaptivity, cf. \cite{monbirDD25:19}. Under the same restrictions as in Section \mbox{\ref{SEC THETA}}, i.e., $\Omega = [-1,1]$ split at $x_\Gamma = 0$ and linear finite elements on an equidistant grid, one can analogously calculate $\Theta_{opt}$: \begin{equation}\label{EQ NNWR THETA OPT} \Theta_{opt} = \frac{1}{|2 + {\bm{S}^{(1)}}^{-1}\bm{S}^{(2)} + {\bm{S}^{(2)}}^{-1}\bm{S}^{(1)}|} \end{equation} with $\bm{S}^{(m)}$ given by \mbox{\eqref{EQ THETA SM 1}} and \mbox{\eqref{EQ THETA SM 2}}, see \mbox{\cite{MongeBirken:multirate}} for details. The spatial and temporal limits based on $c = \Delta t/\Delta x^2$ are \begin{equation}\label{EQ NNWR THETA LIMS} \lim_{c \rightarrow 0} \Theta_{opt} = \frac{\alpha_1 \alpha_2}{(\alpha_1 + \alpha_2)^2}, \quad \lim_{c \rightarrow \infty} \Theta_{opt} = \frac{\lambda_1 \lambda_2}{(\lambda_1 + \lambda_2)^2}. \end{equation} \section{Numerical results} We now present numerical experiments to illustrate the validity of the theoretical results. The methods and algorithms described have been implemented in Python 3.6; the code is available at \cite{DNWR_code}. We consider the domains $\Omega = [-1, 1]$ for 1D and $\Omega = [-1, 1] \times [0, 1]$ for 2D, with $\Omega_1$ and $\Omega_2$ split at $x_\Gamma = 0$. Our initial conditions are \begin{align}\label{EQ U0 1} u(x) = 500 \sin((x+1) \pi/2), \quad \text{resp.} \quad u(x, y) = 500 \sin((x+1) \pi/2) \sin(y \pi). \end{align} As the coefficients $\alpha$ and $\lambda$ in \eqref{EQ PROB MONO} we consider the materials shown in Table \ref{TABLE MATERIALS}. \begin{table}[ht!]
\begin{center} \begin{tabular}{|c|c|c|} \hline \textbf{Material} & $\alpha = \rho \cdot c_p [J/(K m^3)]$ & $\lambda [W/(m K)]$ \\ \hline Air & $1.293 \cdot 1005$ & $0.0243$ \\ \hline Water & $999.7 \cdot 4192.1 $ & $0.58 $ \\ \hline Steel & $7836 \cdot 443$ & $48.9$ \\ \hline \end{tabular} \caption{Material parameters.} \label{TABLE MATERIALS} \end{center} \end{table} The resulting heterogeneous cases are Air-Water, Air-Steel and Water-Steel. We use $T_f = 10^4$ in all cases. As spatial discretization we use linear finite elements on equidistant grids (1D: $\Delta x = 1/200$, 2D: $\Delta x = 1/100$) as shown in Figure \ref{FIG DOMAIN FE}; see \cite{BirkenMonge17} for more details. The resulting linear equation systems are solved with direct solvers. We define our multirate setup via $N$ as the number of base timesteps. In subdomain $\Omega_m$ we then use $N_m = c_m \cdot N$ timesteps. We consider the following cases: Coarse-coarse ($c_1 = c_2 = 1$), coarse-fine ($c_1 = 1$, $c_2 = 10$) and fine-coarse ($c_1 = 10$, $c_2 = 1$). \subsection{Multirate relaxation parameter}\label{SEC THETA MR TEST} The question is what $\Theta$ to choose for DNWR in the multirate case, i.e., which $\Delta t$ to use in \eqref{EQ THETA SM 1} and \eqref{EQ THETA SM 2}. We consider the following four choices: \textbf{Max}/\textbf{Min}/\textbf{Avg} by taking the maximum, minimum or average of $\Delta t_1$ and $\Delta t_2$ to compute $\Theta_{opt}$, and ``\textbf{Mix}'': $\Theta_{opt} = 1/(| 1 + {\bm{S}^{(2)}}^{-1}(\Delta t_2) \bm{S}^{(1)}(\Delta t_1)|)$. We experimentally determine the convergence rate via \begin{equation} \| \bm{u}_\Gamma^{(k)}(T_f) - \bm{u}_\Gamma^{(k-1)}(T_f)\|_{\Gamma}\, /\, \|\bm{u}_\Gamma^{(k-1)}(T_f) - \bm{u}_\Gamma^{(k-2)}(T_f)\|_{\Gamma}, \end{equation} i.e., the reduction rate in the update. Here, we perform up to $k_{\max} = 6$ iterations and take the mean of the update reductions, excluding the last iteration, which could be near machine precision.
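The rate estimator just described can be sketched as follows (a minimal version of our own; the actual implementation is in the repository cited above):

```python
def observed_rate(iterates):
    """Mean update-reduction rate from a list of interface iterates
    u^(0), ..., u^(k) at T_f, excluding the final update, which may
    already be near machine precision."""
    norms = [abs(b - a) for a, b in zip(iterates, iterates[1:])]
    ratios = [n2 / n1 for n1, n2 in zip(norms, norms[1:])]
    kept = ratios[:-1] if len(ratios) > 1 else ratios  # drop the last ratio
    return sum(kept) / len(kept)

# A linearly converging scalar iteration u^(k) = r**k is reduced by r per step.
r = 0.3
assert abs(observed_rate([r ** k for k in range(7)]) - r) < 1e-12
```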
This experiment is done using IE for the 1D test case and $N=1$, as we aim to determine the asymptotic convergence rates. \begin{figure}[ht!] \centering \subfigure[Coarse-fine.]{\includegraphics[width=7cm]{MR_theta_opt_test_air_water_1_10_10000.png}} \hfill \subfigure[Fine-coarse.]{\includegraphics[width=7cm]{MR_theta_opt_test_air_water_10_1_10000.png}} \hfill \caption{Observed convergence rates over $c = \Delta t / \Delta x^2$ for DNWR IE, 1D, air-water.} \label{FIG THETA MR AIR WATER} \end{figure} \begin{figure}[ht!] \centering \subfigure[Coarse-fine.]{\includegraphics[width=7cm]{MR_theta_opt_test_air_steel_1_10_10000.png}} \hfill \subfigure[Fine-coarse.]{\includegraphics[width=7cm]{MR_theta_opt_test_air_steel_10_1_10000.png}} \hfill \caption{Observed convergence rates over $c = \Delta t / \Delta x^2$ for DNWR IE, 1D, air-steel.} \label{FIG THETA MR AIR STEEL} \end{figure} \begin{figure}[ht!] \centering \subfigure[Coarse-fine.]{\includegraphics[width=7cm]{MR_theta_opt_test_water_steel_1_10_10000.png}} \hfill \subfigure[Fine-coarse.]{\includegraphics[width=7cm]{MR_theta_opt_test_water_steel_10_1_10000.png}} \hfill \caption{Observed convergence rates over $c = \Delta t / \Delta x^2$ for DNWR IE, 1D, water-steel.} \label{FIG THETA MR WATER STEEL} \end{figure} The results in Figures \ref{FIG THETA MR AIR WATER}, \ref{FIG THETA MR AIR STEEL} and \ref{FIG THETA MR WATER STEEL} show that ``\textbf{Mix}'' gives consistently slow convergence rates, while the other options yield comparable results, with ``\textbf{Max}'' being most consistent, making it our choice for DNWR in the multirate setting. Numerical experiments in \mbox{\cite{MongeBirken:multirate}} yielded the same conclusion for the NNWR method. As such we use \mbox{\eqref{EQ NNWR THETA OPT}} based on the larger timestep.
\subsection{Optimality of relaxation parameter} We now verify the optimality of the relaxation parameters \eqref{EQ THETA OPT} and \eqref{EQ NNWR THETA OPT} in 1D with implicit Euler and otherwise test the convergence rate and robustness. To this end, we determine the experimental convergence rates as in Section \mbox{\ref{SEC THETA MR TEST}}, for varying $\Theta$ in 1D and 2D for both implicit Euler and SDIRK2. Integration is done in both the multirate and non-multirate settings, up to $T_f$ using $N = 100$ base timesteps. These tests are done for both DNWR and NNWR; results are reported in the following subsections. We expect little variation for implicit Euler and SDIRK2, since SDIRK2 consists of two successive implicit Euler steps. For the step from 1D to 2D, we anticipate more notable differences. Lastly, convergence rates might deviate due to transient effects of WR, since the iteration matrices are non-normal. In the plots, the blue highlighted range on the $x$-axis marks the spatial and temporal limits of $\Theta$, see \mbox{\eqref{EQ THETA LIMS}} resp. \mbox{\eqref{EQ NNWR THETA LIMS}}. \subsubsection{DNWR} \begin{figure}[ht!] \centering \subfigure[Air-water, fine-coarse.]{\includegraphics[width=4.5cm]{theta_opt_test_air_water_10_1_10000.png}} \hfill \subfigure[Air-steel, coarse-coarse.]{\includegraphics[width=4.5cm]{theta_opt_test_air_steel_1_1_10000.png}} \hfill \subfigure[Water-steel, coarse-fine.]{\includegraphics[width=4.5cm]{theta_opt_test_water_steel_1_10_10000.png}} \hfill \caption{Observed convergence rates for DNWR algorithm.} \label{FIG OBS CONV} \end{figure} Results are seen in Figure \ref{FIG OBS CONV}. In both 1D and 2D, SDIRK2 rates match those of implicit Euler. In all cases, 1D results closely align with the theoretical result marked by $\Sigma(\Theta)$. 2D results are slightly off, but still yield good error reduction rates using the 1D $\Theta_{opt}$.
For air-water, the error reduction rate is $\approx 10^{-2}$, i.e., the coupling residual gains two decimals in accuracy per iteration. The air-steel coupling yields very fast convergence with an error reduction rate of $\approx 10^{-4}$ and water-steel rates are between $0.1$ and $0.01$ for $\Theta_{opt}$. Additionally, we see that DNWR is convergent for all shown $\Theta$. Thus in the worst case DNWR convergence is slow, yet not divergent. \subsubsection{DNWR - non-square geometry} We test if the DNWR results extend to non-square domains. In particular, we consider the spatial domain with $x \in [-9, 1]$, $x_{\Gamma} = 0$. The initial conditions are stretched to fit the new domain, i.e., \begin{align*} u(x) = 500 \sin((x+9) \pi/10), \quad \text{resp.} \quad u(x, y) = 500 \sin((x+9) \pi/10) \sin(y \pi). \end{align*} We test only the non-multirate setting. Results are shown in Figure \mbox{\ref{FIG OBS CONV NON SQUARE}} and strongly resemble the results for square, identical domains in Figure \mbox{\ref{FIG OBS CONV}}, except for slower convergence rates for the 2D water-steel case. \begin{figure}[ht!] \centering \subfigure[Air-water, coarse-coarse.]{\includegraphics[width=4.5cm]{theta_opt_test_non_square_air_water_1_1_10000.png}} \hfill \subfigure[Air-steel, coarse-coarse.]{\includegraphics[width=4.5cm]{theta_opt_test_non_square_air_steel_1_1_10000.png}} \hfill \subfigure[Water-steel, coarse-coarse.]{\includegraphics[width=4.5cm]{theta_opt_test_non_square_water_steel_1_1_10000.png}} \hfill \caption{Observed convergence rates for DNWR algorithm using non-square geometry.} \label{FIG OBS CONV NON SQUARE} \end{figure} This similarity can be explained by looking at the Schur-complements \mbox{\eqref{EQ THETA S MAT}}. $\bm{M}_{\Gamma \Gamma}$ and $\bm{A}_{\Gamma \Gamma}$ remain unchanged. $\bm{M}_{II}$ resp. $\bm{A}_{II}$ increase in size, but values and structure persist. The remaining matrices are padded with additional zeros. 
The increased size of $\bm{M}_{II}/\Delta t + \bm{A}_{II}$ does affect its inverse, but after multiplication with $\bm{M}_{I\Gamma}/\Delta t + \bm{A}_{I\Gamma}$ and $\bm{M}_{\Gamma I}/\Delta t + \bm{A}_{\Gamma I}$, which are mostly zero, the effect on $\bm{S}^{(i)}$ is expected to be minor. \subsubsection{NNWR} \begin{figure}[ht!] \centering \subfigure[Air-water, fine-coarse.]{\includegraphics[width=4.5cm]{NNWR_theta_opt_test_air_water_10_1_10000.png}}\hfill \subfigure[Air-steel, coarse-coarse.]{\includegraphics[width=4.5cm]{NNWR_theta_opt_test_air_steel_1_1_10000.png}}\hfill \subfigure[Water-steel, coarse-fine.]{\includegraphics[width=4.5cm]{NNWR_theta_opt_test_water_steel_1_10_10000.png}}\hfill \caption{Observed convergence rates for NNWR algorithm.} \label{FIG NNWR OBS CONV} \end{figure} Results are shown in Figure \mbox{\ref{FIG NNWR OBS CONV}}. We additionally mark the divergence limit at $1$, showing that the range of viable $\Theta$ is very small for NNWR and, unlike DNWR, relaxation is non-optional for convergence. In particular, one may get divergence for $\Theta$ within the range marked by the temporal and spatial limits. Convergence rates for implicit Euler and SDIRK2 are almost identical. 1D convergence rates align well with the theoretical results in all cases. In 2D, the air-water and air-steel results match with the 1D results, yielding rates of $\approx 0.1 - 0.01$. However, water-steel shows divergence in 2D when using $\Theta_{opt}$. In the convergent cases, the observed error reduction rates are slower than for DNWR; this is particularly pronounced in the air-steel case, with a difference of $\approx$ 3 orders of magnitude. Overall, NNWR shows a lack of robustness. One might achieve better convergence rates using macrostepping, i.e., successively performing the algorithm on smaller time windows. This may speed up convergence on each time window, but coupling errors propagate to later windows through erroneous initial values.
On the other hand, DNWR performs well on the given time windows. \subsection{Multirate - convergence order} We show convergence of the error on the whole domain in the discrete $\mathcal{L}^2$ norm \mbox{\eqref{EQ NORM INNER}}, using $T_f = 1$, as $\Delta t \rightarrow 0$. Our reference solution is the monolithic solution with sufficiently small timesteps; we thus measure both the time integration error and the coupling residual. Results for $TOL = 10^{-13}$ can be seen in Figure \ref{FIG DNWR MR CONV} for DNWR and in Figure~\ref{FIG NNWR MR CONV} for NNWR. One can observe the desired first and second order convergence rates for $\Delta t \rightarrow 0$. \begin{figure}[h!] \centering \subfigure[Air-water, fine-coarse.]{\includegraphics[width=4.5cm]{DNWR_MR_order_air_water_10_1_1.png}} \hfill \subfigure[Air-steel, coarse-coarse.]{\includegraphics[width=4.5cm]{DNWR_MR_order_air_steel_1_1_1.png}} \hfill \subfigure[Water-steel, coarse-fine.]{\includegraphics[width=4.5cm]{DNWR_MR_order_water_steel_1_10_1.png}} \hfill \caption{Error over $\Delta t$ for DNWR and $T_f = 1$.} \label{FIG DNWR MR CONV} \end{figure} \begin{figure}[h!] \centering \subfigure[Air-water, fine-coarse.]{\includegraphics[width=4.5cm]{NNWR_air_water_10_1_MR_order_tf_1.png}} \hfill \subfigure[Air-steel, coarse-coarse.]{\includegraphics[width=4.5cm]{NNWR_air_steel_1_1_MR_order_tf_1.png}} \hfill \subfigure[Water-steel, coarse-fine.]{\includegraphics[width=4.5cm]{NNWR_water_steel_1_10_MR_order_tf_1.png}} \hfill \caption{Error over $\Delta t$ for NNWR and $T_f = 1$.} \label{FIG NNWR MR CONV} \end{figure} \subsection{Time adaptive results} We consider the time adaptive DNWR method described in Section \mbox{\ref{SEC DT CONTROL}}. The reference for error computation is the solution using $TOL = 10^{-8}$ in 1D and $TOL = 10^{-7}$ in 2D. We expect the errors to be proportional to the tolerance for $TOL \rightarrow 0$, which is observed in Figure \mbox{\ref{FIG TA}}.
Due to its lack of robustness, we do not consider time-adaptive NNWR. \begin{figure}[h!] \centering \subfigure[1D]{\includegraphics[width=7cm]{TA_dim_1.png}} \hfill \subfigure[2D]{\includegraphics[width=7cm]{TA_dim_2.png}}\hfill \caption{Error over $TOL$ for time adaptive DNWR method.} \label{FIG TA} \end{figure} \subsubsection{Error over work comparison} We now compare the efficiency of the adaptive and multirate methods for the 2D test case. For this we compare error over work, which we measure as the total number of timesteps. We choose the stepsize ratios in the multirate setting such that both domains use comparable CFL numbers, which is achieved by $c_2 = c_1 D_2/D_1$, $D_m = \lambda_m/\alpha_m$. However, we require $c_m \in \mathbb{N}$. W.l.o.g. assume $D_2/D_1 > 1$; we then set $c_1 = 1$ and $c_2 = \lfloor D_2/D_1 \rfloor$. See Table \mbox{\ref{TABLE ERR WORK STEPS}} for the resulting stepsize ratios for our material configurations. To compare the multirate method with the time-adaptive method, we parametrize the former by the number of base timesteps $N$. Given $\Delta t_m = T_f/(c_m \cdot N)$, $m = 1,\,2$, we compute the associated time integration error $e_{\Delta t_1, \Delta t_2}$ using $TOL = 10^{-12}$ and a monolithic reference solution with $\Delta t = \min(\Delta t_1, \Delta t_2)/2$. We then use $TOL = e_{\Delta t_1, \Delta t_2}/5$ in the termination criterion for the multirate method, for its error over work comparison. Finally, our references for the error computations in the error over work comparison are adaptive solutions with $TOL = 10^{-6}$. Results are shown in Figure \mbox{\ref{FIG WORK ERROR 1}} with the resulting stepsize ratios for the adaptive case in Table \mbox{\ref{TABLE ERR WORK STEPS}}. The adaptive method is 4 times more efficient in the water-steel case, of similar efficiency in the air-water case and less efficient in the air-steel case. This can be explained by the stepsize ratios in Table \mbox{\ref{TABLE ERR WORK STEPS}}.
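The multirate ratios can be reproduced directly from the material data in Table \ref{TABLE MATERIALS} (a sketch of our own; the function and variable names are not from the published code):

```python
# alpha = rho*c_p and lambda for the materials of the parameter table.
materials = {
    "air":   (1.293 * 1005,   0.0243),
    "water": (999.7 * 4192.1, 0.58),
    "steel": (7836 * 443,     48.9),
}

def multirate_ratios(m1, m2):
    """CFL-matched integer timestep ratio (c1, c2): the subdomain with the
    larger diffusivity D_m = lambda_m/alpha_m needs smaller timesteps and
    gets the rounded-down diffusivity ratio; the other side gets c_m = 1."""
    (a1, l1), (a2, l2) = materials[m1], materials[m2]
    d1, d2 = l1 / a1, l2 / a2
    return (1, int(d2 / d1)) if d2 > d1 else (int(d1 / d2), 1)

assert multirate_ratios("air", "water") == (135, 1)
assert multirate_ratios("air", "steel") == (1, 1)
assert multirate_ratios("water", "steel") == (1, 101)
```

The assertions match the multirate row of the timestep-ratio table.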
The more closely the multirate stepsize ratios match the adaptive ones, the better the multirate method performs in comparison with the adaptive one. \begin{table}[ht!] \begin{center} \begin{tabular}{|c||c|c|c|} \hline & \textbf{Air-water} & \textbf{Air-steel} & \textbf{Water-steel} \\ \hline multirate ($c_1:c_2$) & $135:1$ & $1:1$ & $1:101$\\ \hline adaptive $u_0^1$ & $33.88:1$ & $1.15:1$& $1:2.47$\\ \hline adaptive $u_0^2$ & $21.25:1$& $1.09:1$ & $1:3.20$\\ \hline \end{tabular} \caption{Timestep ratios for the multirate and adaptive method (final grid) by materials. $u_0^1$ is for the initial condition \eqref{EQ U0 1} and $u_0^2$ is for \eqref{EQ U0 2}.} \label{TABLE ERR WORK STEPS} \end{center} \end{table} \begin{figure}[ht!] \centering \subfigure[Air-water]{\includegraphics[width=4.5cm]{err_work_u0_1_air_water.png}} \hfill \subfigure[Air-steel]{\includegraphics[width=4.5cm]{err_work_u0_1_air_steel.png}} \hfill \subfigure[Water-steel]{\includegraphics[width=4.5cm]{err_work_u0_1_water_steel.png}} \hfill \caption{DNWR work over error comparison for 2D test case using initial condition \eqref{EQ U0 1}.} \label{FIG WORK ERROR 1} \end{figure} As a second test case we consider the initial condition \begin{align}\label{EQ U0 2} u(x, y) = 800 \sin((x+1) \pi)^2 \sin(y \pi). \end{align} Here, we have $\bm{u}_{\Gamma}(0) = \bm{0}$ and thus skip the relative norm for the termination check \eqref{EQ TERMINATION CRIT}. Results are shown in Figure \mbox{\ref{FIG WORK ERROR 2}} with stepsize ratios in Table \mbox{\ref{TABLE ERR WORK STEPS}}. In the air-steel case performance is approximately equal, whereas adaptive performance is about 4 resp. 25 times better in the air-water resp. water-steel case. Overall we see that performance depends on the stepsize ratios. This makes the adaptive method a more robust choice, since it automatically determines suitable stepsize ratios, which vary, e.g., with the initial condition. \begin{figure}[ht!]
\centering \subfigure[Air-water]{\includegraphics[width=4.5cm]{err_work_u0_2_air_water.png}} \hfill \subfigure[Air-steel]{\includegraphics[width=4.5cm]{err_work_u0_2_air_steel.png}} \hfill \subfigure[Water-steel]{\includegraphics[width=4.5cm]{err_work_u0_2_water_steel.png}} \hfill \caption{DNWR work over error comparison for 2D test case using initial condition \eqref{EQ U0 2}.} \label{FIG WORK ERROR 2} \end{figure} \section{Summary and conclusions} We derived first and second order, multirate resp. time-adaptive DNWR methods for heterogeneous coupled heat equations. The optimal relaxation parameter $\Theta_{opt}$ for WR is shown to be identical to the one for the basic DN iteration. We experimentally show how to adapt $\Theta$ in the multirate case. The observed convergence rates using the analytical $\Theta_{opt}$ for 1D implicit Euler are shown to be very robust, remaining fast for a second order method and in 2D for various material combinations and multirate settings on long time intervals. The same tests for the related NNWR methods employing identical Dirichlet and Neumann subsolvers, using an analytical $\Theta_{opt}$ for 1D implicit Euler, show a lack of robustness, often resulting in divergence. The time-adaptive DNWR method is experimentally shown to be favorable over multirate, due to its ease of use and superior performance. The latter is due to the resulting stepsizes being more suitably chosen than those of the multirate solver. Overall, we obtain a fast, robust, time adaptive (on each domain), partitioned solver for unsteady conjugate heat transfer. \bibliographystyle{siam} \bibliography{DNWR_multirate_MeisrimelMongeBirken_arxiv} \end{document}
TITLE: Terminal Paths in Kleene's O QUESTION [6 upvotes]: I'm stuck on a problem in Sacks' Higher Recursion Theory (#2.4) - any hints are welcome. He defines Kleene's O in the usual way, and the corresponding order $<_O$. A path through O is a linearly ordered subset Z s.t. $w<_Ov\in Z\rightarrow w\in Z$. A path can be continued if there is a $w\in O$ s.t. $\forall z\in Z\, (z<_O w)$. The problem is to find a path that can't be continued of order type $<\omega_1^{ck}$. I believe I can show that there is such a path using a counting argument, but I can't find one explicitly. Thank you. REPLY [3 votes]: I will use $\Phi_e(n)$ for $\{e\}(n)$ used by Sacks in his book. If $\delta$ is a constructive limit ordinal, then $\delta$ has infinitely many notations. This follows from the so called "Padding Lemma" which asserts that every computable function has infinitely many indices, i.e. for any $e$, there exist infinitely many $f$ such that $\Phi_e = \Phi_f$. Convince yourself that $\omega \cdot n$ and $\omega \cdot \omega$ are constructive ordinals. We will construct a sequence of notations $(a_i)_{i \in \omega}$ such that $a_i <_O a_{i + 1}$ and $|a_i| = \omega \cdot (i + 1)$. (The "+1" is just because $\omega$ starts with $0$.) First, if $3 \cdot 5^{0} \in O$ and $|3 \cdot 5^0| \geq \omega$, then since $\omega$ is a limit constructive ordinal it has infinitely many notations, so in particular it has a notation $a_0$ such that $a_0$ is not $<_O$ comparable with $3\cdot 5^0$. (You may want to use Theorem 2.2 (iii) to justify this.) If $3 \cdot 5^0 \notin O$ or $|3 \cdot 5^0| < \omega$, then we don't care and just let $a_0$ be any notation of $\omega$. Now suppose that $a_n$ has been defined with the desired properties, i.e. $|a_n| = \omega(n + 1)$. Suppose that $3 \cdot 5^{n + 1} \in O$, that $|3 \cdot 5^{n + 1}| \geq \omega(n + 2)$, and that $3 \cdot 5^{n+1} \geq_O a_i$ for all $i \leq n$.
Now choose a notation $a_{n + 1}' = 3 \cdot 5^{f}$ (some $f \in \omega$) for $\omega(n + 2)$ such that $a_{n + 1}' = 3 \cdot 5^f$ is not $<_O$ comparable with $3 \cdot 5^{n + 1}$. By modifying the computable function $\Phi_f$ in finitely many places, one can get a $\Phi_{g}$ such that $\Phi_g(i) = a_i$ for all $i \leq n$, $\Phi_g$ agrees with $\Phi_f$ for all but finitely many values, $|3 \cdot 5^{g}| = \omega(n + 2)$, and $3 \cdot 5^{g}$ is $<_O$ incomparable with $3 \cdot 5^{n + 1}$. Define $a_{n + 1} = 3 \cdot 5^{g}$. (Note that $\Phi_f$ needed to be modified in finitely many places to obtain $\Phi_g$ in order to ensure that $a_{n + 1}$ is comparable with the previous $a_i$'s.) If the above condition on $n + 1$ is not satisfied, just let $a_{n + 1}$ be any notation for $\omega(n + 2)$. In this way, we have produced the desired sequence $(a_n)_{n \in \omega}$. Let $Z$ be the set $\{u \in O : (\exists n)(u <_O a_n)\}$. It is clear that $Z$ is a path. (Use Theorem 2.2 iii if necessary.) By construction it is clear that $Z$ has order type $\omega \cdot \omega$ which is constructive and hence $\omega \cdot \omega < \omega_1^\text{CK}$. It remains to show that $Z$ cannot be continued. Clearly $Z$ does not have a largest element. Thus if $Z$ can be continued, there must exist an $e$ such that $\Phi_e(n) <_O \Phi_e(n + 1)$ and $z <_O 3 \cdot 5^{e}$ for all $z \in Z$. However this is impossible because $a_{e}$ is $<_O$ incomparable with $3 \cdot 5^e$ by construction. So $Z$ is your desired path that cannot be continued. Note that the main idea is to diagonalize against all computable functions satisfying Sacks 2.1 (2), but you have to add in a little argument to make sure the order type does not reach $\omega_1^\text{CK}$ and that the sequence satisfies $a_i <_O a_{i + 1}$.
TITLE: Does a collar along the boundary always find an embedding in the manifold? QUESTION [2 upvotes]: Suppose we have a smooth manifold $M$ with boundary $K$. $K$ always finds a natural embedding inside $M$. Can we extend this embedding to an embedding of $K\times [0,\epsilon)$ inside $M$ in general? It seems to be the case in all examples that I can think of, but I cannot find a proof. REPLY [2 votes]: A search for "collar neighborhood theorem" led to this, in which Theorem 1.0.5 is the theorem you want (assuming you're talking about smooth manifolds with smooth boundaries, which I'm assuming because you've put "differential geometry" as a tag). I think that there's also a nice proof of this in Milnor's Lectures on the h-cobordism Theorem.
\begin{document} \title{\bf Some non-standard ways to generate SIC-POVMs in dimensions 2 and 3} \author{Gary McConnell\\\it Controlled Quantum Dynamics Theory Group\\\it Imperial College London\\\rm \texttt{g.mcconnell@imperial.ac.uk}} \date{\today} \maketitle \bf The notion of Symmetric Informationally Complete Positive Operator-Valued Measures (SIC-POVMs) arose in physics as a kind of optimal measurement basis for quantum systems~\cite{zauner, renes}. However the question of the existence of such systems is identical to the question of the existence of a maximal set of~\emph{complex equiangular lines}. That is to say, given a complex Hilbert space of dimension~$\mathbf d$, what is the maximal number of (complex) lines one can find which all make a common (real) angle with one another, in the sense that the inner products between unit vectors spanning those lines all have a common absolute value? A maximal set would consist of~$\mathbf d^2$ lines all with a common angle, the absolute value of whose cosine is equal to~$\mathbf{\frac{1}{\sqrt{d+1}}}$. The same question has also been posed in the real case and some partial answers are known: see~\cite[A002853]{sloan} for the known results; and for some of the theory see~\cite[chapter 11]{godsil}. But at the time of writing no unifying theoretical result has been found in the real or the complex case: some sporadic low-dimensional numerical constructions have been converted into algebraic solutions but beyond this very little is known. It is conjectured~\cite{renes, zauner} that such maximal structures always arise as orbits of certain fiducial vectors under the action of the Weyl (or generalised Pauli) group. In this paper we point out some new construction methods in the lowest dimensions ($\mathbf{ d=2}$ and $\mathbf {d=3}$). We should mention that the SIC-POVMs so constructed are all unitarily equivalent to previously known SIC-POVMs. 
\rm \section*{SIC-POVMs and Complex Equiangular Lines} Let~$d>1$ be a positive integer and let~$\CC^d$ denote complex Hilbert space of dimension~$d$ equipped with the usual Hermitian positive-definite inner product, denoted by~$\langle\ ,\ \rangle$. A \emph{complex line} is a 1-dimensional complex subspace of~$\CC^d$. We shall view such a line as being spanned by a unit vector~$\bfu$ which is unique up to a~\emph{phase} (an element of the complex unit circle). As in the real case (where the phase ambiguity however only extends to~$\pm 1$) we may ask about the relative \emph{angle} between two such complex lines. Although the definition of such angles is open to several interpretations~\cite{scharnhorst} we shall adopt the usual convention here and define the angle~$\alpha_{\bfu,\bfv}$ between two lines spanned by unit vectors $\bfu,\bfv$ to be the inverse cosine of the absolute value of their Hermitian inner product $\langle\bfu,\bfv\rangle$, viz: $$ \alpha_{\bfu,\bfv} = \arccos\left(\vert\langle\bfu,\bfv\rangle\vert\right). $$ Notice that this definition is unchanged if we multiply $\bfu$ or $\bfv$ or both by (possibly distinct) phases. We follow Scharnhorst~\cite{scharnhorst} in referring to $\alpha_{\bfu,\bfv}$ as the \emph{Hermitian angle} between the vectors~$\bfu$ and~$\bfv$. In~\cite{renes} it is shown that the generating set $\mcd$ of unit vectors for a complete (maximal) set of equiangular lines in $\CC^d$ will necessarily have cardinality $d^2$ and each pair of distinct vectors $\bfu,\bfv$ will satisfy \begin{equation}\label{SICcond} \vert\langle\bfu,\bfv\rangle\vert = \frac{1}{\sqrt{(d+1)}}. \end{equation} We shall speak about SIC-POVMs and complete sets of equiangular lines as though they were the same object: the translation from one perspective to another may be found in~\cite{renes}. Also where it will cause no confusion we shall not distinguish between row and column vectors, to avoid cluttering up the exposition with transpose symbols. 
To illustrate the basic idea we shall look at the simplest non-trivial real Euclidean example. \begin{example} Let $d=2$ and consider $\RR^2$ equipped with the usual inner (dot) product. Then the three unit vectors $(1,0)$, $(\frac{1}{2},\frac{\sqrt{3}}{2})$ and $(-\frac{1}{2},\frac{\sqrt{3}}{2})$ span three one-dimensional subspaces which constitute a (maximal) set of 3 equiangular lines in~$\RR^2$, with the mutual angle between them being $\arccos(\frac{1}{2})=\frac{\pi}{3}$. \end{example} In~\cite{renes} we find the first systematic numerical search for SIC-POVMs in low dimensions, with the smaller dimensional examples being converted into complete algebraic solutions. This was followed by~\cite{appleby05},~\cite{scottgrassl} and~\cite{appleby12} (the literature is in fact much broader: for a much more extensive set of references see~\cite{appleby12}). The framework in which all of this previous work has been completed is that of the action of the standard $d$-dimensional (Heisenberg-)Weyl Group $W_d$ upon a single \emph{fiducial vector} $\bff_d$: the orbit (modulo phases) of $\bff_d$ under the action of $W_d$ is then the entire SIC-POVM. Hence the focus has been upon finding such fiducial vectors $\bff_d$ since the basis for the expression of the $X_d$ and $Z_d$ matrices which generate $W_d$ is assumed fixed, hence the numerics can focus on just one vector in each dimension. The focus of this paper is somewhat different: we explore some other ways in which such structures can arise in dimensions 2 and 3. The original idea behind these constructions was to try to find a way of generating all of the elements of a SIC-POVM from a single matrix, by somehow creating a (not necessarily unitary) matrix which takes a simple vector like $\bfv_0=(1,0,\ldots,0)$ and then successively `twists' it to new vectors which have the appropriate angle to all of the previous ones. 
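The angles in the real example above are immediate to verify numerically; the following snippet (our own check, purely illustrative) confirms that all pairwise dot products have absolute value $\cos(\frac{\pi}{3})=\frac12$:

```python
import itertools
import math

# Unit vectors spanning a maximal set of 3 equiangular lines in R^2.
h = math.sqrt(3) / 2
vectors = [(1.0, 0.0), (0.5, h), (-0.5, h)]

for u, v in itertools.combinations(vectors, 2):
    dot = u[0] * v[0] + u[1] * v[1]
    assert abs(abs(dot) - 0.5) < 1e-12   # common angle arccos(1/2) = pi/3
```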
As we shall see below, this was possible for $d=2$ but is too ambitious for higher dimensions, even for $d=3$. So instead we built SIC-POVMs starting with $\bfv_0$ and building up in a sequence via simple geometric steps, based on the single-matrix dimension~2 example, which give the appropriate angles as we go along. Once again, this works in dimensions~2 and~3 but so far we have not been able to generalise the method to higher dimensions. However it points to a possible new heuristic for achieving such constructions in the general case. Thanks to Marcus Appleby for valuable discussions and for his comments on an earlier draft of this paper. I would also like to thank Terry Rudolph for many helpful ideas and I am grateful for his group's hospitality at Imperial College, where this work was done. \section*{$d=2$: an almost-cyclic construction} \begin{theorem}\label{quasar} There is a~$2\times2$ complex matrix~$M$ whose first four powers applied to a fiducial vector generate a SIC-POVM in~$d=2$. \end{theorem} \begin{proof}[Proof (by construction)] Let $\bfv_0 = (1,0)\in\CC^2$. If we start with $\bfv_0$ as the first vector of a SIC-POVM it follows from~(\ref{SICcond}) that up to appropriate phases, the remaining~3~vectors (in dimension~$d=2$) must be of the form~$(\frac{1}{\sqrt{3}},\sqrt{\frac{2}{3}}e^{i\theta_j})$, for some angles~$\theta_j\in[0,2\pi),\ j=1,2,3$. So if we postulate the existence of a~$2\times2$ matrix~$M$ which begins with~$\bfv_0$ and cycles us around to three more vectors~$\bfv_1=M\bfv_0$, $\bfv_2=M^2\bfv_0$, $\bfv_3=M^3\bfv_0$ then the first column of~$M$ must be of this same form. We do \bf not \rm insist that $M$ be unitary: this would be unnecessarily restrictive given that we are only looking for equiangular \emph{lines}, not necessarily unit vectors. 
As it turns out the matrix that we end up constructing does in fact generate a sequence of four \emph{unit} vectors, but its eigenvalues are not of modulus one and so subsequent powers give non-unit vectors. Since any SIC-POVM in dimension~2 may be represented as a tetrahedron of vectors in the Bloch sphere, we may unitarily rotate it so that any chosen pair of its representative vectors lies in the $X,Z$-plane. Hence these two vectors may be viewed as \emph{real} vectors in the sense that their coordinates in the computational basis of $\CC^2$ are real numbers. So we may take the form of $M$ to be: $$ M = \frac{1}{\sqrt{3}}\left( \begin{array}{cc} 1&re^{i\rho}\\ \sqrt{2}&se^{i\sigma}\\ \end{array} \right) $$ for appropriate non-negative real numbers $r,s,\rho,\sigma$. For any integer~$j$ we shall write $\bfv_j=M^j\bfv_0$. If we write out the equations governing the absolute values of the inner products between the vectors~$\{\bfv_0,\ \bfv_1\}$ and the vector~$\bfv_2$ and try to solve them so that they satisfy equation~(\ref{SICcond}) then we see a neat solution for~$\langle\bfv_0,\bfv_2\rangle$ is $r=\frac{1}{\sqrt{2}}$, $\rho=\frac{\pi}{3}$. Moving on to~$\langle\bfv_1,\bfv_2\rangle$ then gives us another `obvious' solution as~$s=2$,~$\sigma=\frac{4\pi}{3}$. Somewhat surprisingly, it turns out that this solution for~$\bfv_2$ which was picked only because it was easy to understand, goes on to generate a fourth vector~$\bfv_3$ which has precisely the desired angles with the previous three. So we have a~SIC-POVM $$\{\bfv_j:j=0,1,2,3\}$$ all generated from the initial vector $\bfv_0$ by successive multiplication by the single matrix $$ M = \frac{1}{\sqrt{3}}\left( \begin{array}{cc} 1&\frac{1}{\sqrt{2}}e^{\frac{i\pi}{3}}\\ \sqrt{2}&-2e^{\frac{i\pi}{3}}\\ \end{array} \right). 
$$ For completeness we list the SIC-POVM vectors as $$ \bfv_0=\begin{pmatrix}1\\0\end{pmatrix},\ \bfv_1=\frac{1}{\sqrt{3}}\begin{pmatrix}1\\\sqrt{2}\end{pmatrix},\ \bfv_2=\frac{i}{\sqrt{3}}\begin{pmatrix}e^{\frac{-i\pi}{3}}\\-\sqrt{2}\end{pmatrix},\ \bfv_3=\frac{1}{\sqrt{3}}\begin{pmatrix}1\\-\sqrt{2}e^{\frac{-i\pi}{3}}\end{pmatrix}. $$ \end{proof} So it seems our matrix~$M$ is able to twist~$\bfv_0$ and the next~2 successive vectors~$\bfv_1,\bfv_2$ by exactly the right amount in order to manufacture a SIC-POVM; thereafter (on both sides, ie for positive and negative powers of~$M$) the vectors sacrifice the angle and begin to grow in magnitude. For example~$\bfv_{-1}$ and~$\bfv_4$ each have length~$\sqrt{2}$ and the magnitudes go on to grow symmetrically about the SIC-POVM from there onwards~(see below). It is as though the behaviour is perfectly constrained just while we need it to be, then it shakes off the constraints and spins off to infinity. The eigenvalues of~$M$ are~$\lambda_{\pm}=-\frac{i}{2}\pm\frac{1}{2}\sqrt{1+2\sqrt{3}i}$, so since they differ in magnitude it follows that the limiting behaviour of~$M^r\bfv_0$ as $r\rightarrow\pm\infty$ is for the vectors to head towards infinity in magnitude in both directions, with the Hermitian angle between successive vectors $\bfv_j$ and $\bfv_{j+1}$ tending to zero; however with the limiting \emph{pseudo-angle}~\cite[\S2]{scharnhorst} between successive vectors equal to the argument of the relevant eigenvalue (ie~$\lambda_{-}$ as~$r\rightarrow\infty$ and~$\lambda_{+}$ as~$r\rightarrow-\infty$). 
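The construction is easy to check numerically. The sketch below (our own verification, not part of the original work) builds $\bfv_1,\bfv_2,\bfv_3$ as successive powers of $M$ applied to $\bfv_0$, confirms condition~(\ref{SICcond}) with $d=2$ for all six pairs, and checks that $M^4\bfv_0=(0,\sqrt{2})$:

```python
import cmath
import itertools
import math

s2, s3 = math.sqrt(2), math.sqrt(3)
w = cmath.exp(1j * math.pi / 3)              # e^{i*pi/3}

# The (non-unitary) matrix M from the proof and the fiducial vector v0.
M = [[1 / s3, w / (s2 * s3)],
     [s2 / s3, -2 * w / s3]]

def apply(A, v):
    """Matrix-vector product for a 2x2 matrix and a 2-vector."""
    return (A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1])

vs = [(1.0, 0.0)]                            # v0
for _ in range(3):
    vs.append(apply(M, vs[-1]))              # v1, v2, v3

# All six pairs satisfy |<u,v>| = 1/sqrt(d+1) = 1/sqrt(3).
for a, b in itertools.combinations(vs, 2):
    ip = a[0].conjugate() * b[0] + a[1].conjugate() * b[1]
    assert abs(abs(ip) - 1 / s3) < 1e-12

# The fourth power leaves the SIC-POVM: M^4 v0 = (0, sqrt(2)).
v4 = apply(M, vs[3])
assert abs(v4[0]) < 1e-12 and abs(v4[1] - s2) < 1e-12
```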
The following image, taken from~\cite{nasa}, may help to visualise the behaviour of this matrix: \includegraphics[scale=0.58]{quasar-water.jpg} \noindent The fiducial vector lies somewhere in the centre; around it is a major cluster of vectors of constrained length, at the heart of which is the `glowing light' of the particular SIC-POVM configuration, but the sequence gradually (then exponentially) diverges in both directions. The central beam depicts the fact that the powers of~$M$ end up converging to the same vector in a Hermitian angle sense, whereas the widening beam schematically represents the constant non-zero pseudo-angle between successive vectors, which becomes more significant in absolute (Euclidean distance) terms as the vectors grow in magnitude. If we begin at the central point of the series, between~$\bfv_1$ and~$\bfv_2$, then these vectors yield a sequence of integers representing the squared absolute values in both directions as follows: $$1,\ 1,\ 2,\ 3,\ 5,\ 9,\ 15,\ 26,\ 45,\ 77,\ 133,\ 229,\ 394,\ 679,\ 1169,\ 2013,\ 3467,\ 5970,\ \ldots$$ This sequence does not appear in Sloane's On-Line Encyclopedia of Integer Sequences~\cite{sloan}. Another way of visualising the symmetry of this SIC-POVM is to consider what happens if we interpolate the infinite sequence~$\ldots,\ \bfv_0,\ \bfv_1=M\bfv_0,\ \bfv_2=M^2\bfv_0,\ \bfv_3=M^3\bfv_0,\ \ldots$ using any matrix square root of~$M$ (notice the eigenvalues tell us that~$M$ has precisely four (similarity classes of) distinct square roots~\cite[p54]{horn}). Choose any such matrix~$Q$ with~$Q^2=M$. Then the central part (namely the part in which we are most interested) can be indexed instead as $$\bfv_0=\bfu_{-3/2},\ \bfv_1=\bfu_{-1/2},\ \bfv_2=\bfu_{1/2},\ \bfv_3=\bfu_{3/2},$$ where the subscripts this time refer to half-integral powers of~$M$ as applied to a central vector~$\bfu_0=Q^3\bfv_0$.
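The integer sequence above can be reproduced directly by computing the squared norms $\Vert M^j\bfv_0\Vert^2$; by the symmetry about the centre it suffices to read outward in one direction, starting at $\bfv_2$. A quick sketch, assuming numpy:

```python
import numpy as np

w = np.exp(1j * np.pi / 3)
M = np.array([[1, w / np.sqrt(2)],
              [np.sqrt(2), -2 * w]]) / np.sqrt(3)

# Squared norms ||M^j v0||^2 for j = 2, 3, 4, ...; by the symmetry noted
# above the same integers appear in the other direction (j = 1, 0, -1, ...).
v0 = np.array([1, 0], dtype=complex)
vecs = [np.linalg.matrix_power(M, j) @ v0 for j in range(2, 20)]
sq = [round(float(np.vdot(x, x).real)) for x in vecs]
print(sq)
# -> [1, 1, 2, 3, 5, 9, 15, 26, 45, 77, 133, 229, 394, 679, 1169, 2013, 3467, 5970]
```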
One final curious fact is that the fourth power of~$M$ takes~$\bfv_0$ to the non-unit vector~$\bfv_4=(0,\sqrt{2})$ (which spans the subspace orthogonal to~$\bfv_0$). Let~$\bfu_0=(0,1)$ be a unit vector in the direction of~$\bfv_4$. If we now set~$B=(M^\dagger)^{-1}$ and define~$\bfu_r=B^r\bfu_0$, then the set~$\{\bfu_0,\bfu_1,\bfu_2,\bfu_3\}$ also forms a SIC-POVM which is a kind of `dual' to the above, in that for all integers~$j$, by the properties of the inner product, $$\langle\bfu_j,\bfv_j\rangle=\langle(M^\dagger)^{-j}\bfu_0,M^j\bfv_0\rangle=\langle(M^\dagger)^{-j}(M^\dagger)^j\bfu_0,\bfv_0\rangle=\langle\bfu_0,\bfv_0\rangle=0.$$ This is not, however, the natural dual coming from the adjoint structure; it seemingly depends upon the orthogonality of~$\bfv_0$ and~$\bfv_4$, something which \emph{a priori} is unexpected. If we denote by~$X$ the Pauli~$X$ matrix~$X = \left( \begin{smallmatrix} 0&1\\ 1&0 \end{smallmatrix} \right)$ which is the involution which flips~$\bfv_0$ and~$\bfu_0$, then saying that the~$\bfu_j$ form a SIC-POVM is the same as saying that the matrix~$XM^\dagger X$ also generates a SIC-POVM from the fiducial~$\bfv_0$. \section*{$d=2,3$: a bi-cyclic structure} Motivated by the `shape' of the~SIC-POVM constructed in the previous section, we began to look for an exact algebraic solution in dimensions~$d=2,3$ starting with a couple of simple assumptions about structure. Such solutions proved relatively straightforward in these low dimensions. In addition, for~$d=2$ there is a kind of internal exponential structure to this exact solution, which we shall explain below. However these techniques in their original form cannot be extended to higher dimensions. \begin{theorem}\label{d2d3} Let~$d=2$ or~3.
There exists a~$d\times d$ unitary matrix~$U_d$ of multiplicative order~$d$ which takes a fiducial vector~$\bfv_0$ to a set of~$d$ vectors~$\bfv_0,\bfv_1,\ldots,\bfv_{d-1}$, each of which represents one of the orbits~$\mathcal{O}_0,\mathcal{O}_1,\ldots,\mathcal{O}_{d-1}$ generated under left multiplication by a fixed~$d\times d$ diagonal unitary matrix~$D_d$ of multiplicative order~$\binom{d+1}{2}$. The disjoint union of these~$d$ orbits is a SIC-POVM. \end{theorem} \begin{proof} Once again the proof is by construction. For general~$d$ it is a fact of linear algebra~\cite[theorem~2.3.1]{horn} that given any basis of $\CC^d$ we can find unitaries to change the basis to one in which these~$d$ initial column vectors form an upper-triangular matrix. Now given any SIC-POVM set of~$d^2$ vectors it is always possible to take a subset of~$d$ vectors which forms a basis, and therefore in view of the result just stated we may choose these~$d$ such that following an appropriate unitary transformation the column vectors may be arranged to form an upper-triangular matrix. This observation will allow us to construct SIC-POVMs with a particularly transparent geometric structure, because once we have the triangular basis we multiply our basis vectors by a diagonal matrix whose non-zero entries are phases, to create a series of~$d$ orbits, each of which is determined -- by virtue of the `triangular' and diagonal substructures -- solely by the number of non-zero entries in the vector. So our SIC-POVM is then automatically partitioned into~$d$ orbits under the diagonal matrix~$D_d$ and we cycle between the orbits using a unitary matrix~$U_d$ of order~$d$, which we shall construct below. For any complex vector or matrix~$N$ we denote its transpose by~$N^T$, its entrywise complex conjugate by~$N^*$ and its conjugate transpose by~$N^\dagger={N^*}^T$. 
Let $\{\bfv_j\}$ be a basis for $\CC^d$ and let $\{\bfw_k\}$ be its dual basis, so $\bfw_k^\dagger\bfv_j=\langle\bfw_k,\bfv_j\rangle=\delta_{kj}$ for all~$j,k$, where~$\delta_{kj}$ is the Kronecker delta. We would like to find a unitary matrix~$U$ which cycles between these vectors, so that for all~$k$: $$U\bfv_k = \bfv_{k+1}$$ (where we understand the subscript indices as cycling modulo~$d$). I am grateful to Marcus Appleby for pointing out the following lemma, which shows that this is possible if and only if the Gram matrix~$G_\bfv$ of the chosen basis~$\{\bfv_k\}$ is circulant, that is $\langle\bfv_j,\bfv_k\rangle=\langle\bfv_{j+1},\bfv_{k+1}\rangle$ for all $j,k$. \begin{lemma}\label{unicirc} With notation as above, let~$\mathcal{A}$ be a~$d\times d$ complex matrix satisfying the following equivalent conditions: (i) $\mathcal{A}\bfv_j = \bfv_{j+1}$ for all $j$ (ii) $\mathcal{A} = \sum_{k=0}^{d-1}\bfv_k\otimes\bfw_{k-1}^\dagger$ Then $\mathcal{A}$ is unitary if and only if~$G_\bfv$ is circulant. \end{lemma} \begin{proof} We first need to prove the assertion that~(i) and~(ii) are equivalent. That (ii) implies (i) follows from the definitions; the converse is a consequence of the fact that since $\{\bfv_j\}$ is a basis for the space and $\{\bfw_k^\dagger\}$ is a basis for the dual space, the set~$\{\bfv_j\otimes\bfw_{k}^\dagger\}$ is a basis for the matrix operator space in which~$\mathcal{A}$ lives. So assume that~$\mathcal{A}$ is the matrix defined in~(ii): we must show that being unitary under the standard Hermitian inner product, in the sense that~$\mathcal{A}^\dagger\mathcal{A}=\mathcal{A}\mathcal{A}^\dagger=\mathbf{I}_d$ where~$\mathbf{I}_d$ is the~$d\times d$ identity matrix, is equivalent to the Gram matrix~$G_\bfv$ being circulant. Writing out the change-of-basis equations and using the definition of the dual basis, we see that \begin{equation}\label{cobgram} \bfv_l = \sum_{k=0}^{d-1} (G_\bfv^T)_{lk}\bfw_k. 
\end{equation} Since~$\{\bfv_j\}$ is a basis it follows that~$\mathcal{A}$ is unitary if and only if $\mathcal{A}^\dagger\mathcal{A}\bfv_l=\bfv_l$ for all~$l$, which means: $$\sum_{k=0}^{d-1}\bfw_{k-1}\otimes\bfv_k^\dagger\sum_{j=0}^{d-1}\bfv_j\otimes\bfw_{j-1}^\dagger\bfv_l=\bfv_l\hbox{\rm\ for\ all\ }l.$$ Now~$\bfw_{j-1}^\dagger\bfv_l=\delta_{j-1,l}=\delta_{j,l+1}$ and so the terms in the inner sum are non-zero only when~$j=l+1$. So by the definition of the Gram matrix~$G_\bfv$ the sum becomes: $$\sum_{k=0}^{d-1}(G_\bfv)_{k,l+1}\bfw_{k-1}=\bfv_l\hbox{\rm\ for\ all\ }l.$$ Using the index~$k+1$ in place of~$k$ and transposing gives $$\sum_{k=0}^{d-1}(G_\bfv^T)_{l+1,k+1}\bfw_k=\bfv_l\hbox{\rm\ for\ all\ }l$$ and since~$\{\bfv_j\}$ and~$\{\bfw_k\}$ are bases,~(\ref{cobgram}) shows that each of the above statements is equivalent to $$(G_\bfv)_{m,n}=(G_\bfv)_{m+1,n+1}$$ for all $m,n$. This completes the proof of the lemma. \end{proof} Let us specialise to the case~$d=2$ or~$3$ with our initial vector $\bfv_0$ which is $(1,0)$ for $d=2$ and $(1,0,0)$ for $d=3$. Armed with the above lemma we now search for~$\bfv_1,\ldots,\bfv_{d-1}$ such that the basis $\{\bfv_k\}$ has upper-triangular form and such that the Gram matrix~$G_\bfv$ is circulant. Since it is also automatically Hermitian this reduces considerably the possibilities for the vectors. Henceforth all of the vectors we consider will be assumed to be unit vectors. \ $\mathbf{d=2:}$ If we perform the same trick as in the previous section by identifying any SIC-POVM in dimension~2 with a tetrahedron in the Bloch sphere then we may assume once again that our second vector~$\bfv_1$ is~$\frac{1}{\sqrt{3}}(1,\sqrt{2})$. Notice that this automatically fulfils the circulant criterion, since in dimension~2 it boils down to the single requirement that~$\langle\bfv_0,\bfv_1\rangle=\langle\bfv_1,\bfv_0\rangle$ which by the fact that the inner product is Hermitian forces both to be real. 
So we may write our candidate for a unitary matrix which cycles between~$\bfv_0$ and~$\bfv_1$ as $$U_2=\frac{1}{\sqrt{3}}\left( \begin{array}{cc} 1&\alpha\\ \sqrt{2}&\beta\\ \end{array} \right) $$ for some complex numbers~$\alpha,\beta$. If we require that~$U_2$ be unitary and of multiplicative order~2, it follows that in fact~$U_2$ must be Hermitian and so~$\alpha=\sqrt{2}$ and~$\beta=\pm1$. Writing out the equations for~$U_2^2$ we find that the only possibility is: $$U_2=\frac{1}{\sqrt{3}}\left( \begin{array}{cc} 1&\sqrt{2}\\ \sqrt{2}&-1\\ \end{array} \right), $$ and we may verify that indeed~$U_2\bfv_0=\bfv_1$ and~$U_2\bfv_1=\bfv_0$. We now look for a diagonal matrix~$D_2$ of phases which will take our initial vector~$\bfv_1$ by left multiplication to~2 more vectors which comprise the remaining part of the generators for a maximal set of equiangular lines. Notice that the upper left-hand entry of~$D_2$ must be~1, since~$\bfv_0$ is always in a~$D_d$-orbit of its own (all other vectors in any SIC-POVM containing~$\bfv_0$ are forced to have their first entry equal to a phase times~$\frac{1}{\sqrt{d+1}}$). So our diagonal matrix in this case will look like $$D_2 = \left( \begin{array}{cc} 1&0\\ 0&\zeta\\ \end{array} \right) $$ where~$\zeta$ is some phase. We set~$\bfv_2=D_2\bfv_1$ and~$\bfv_3=D_2\bfv_2=D_2^2\bfv_1$. Writing out the equations for the set~$\{\bfv_0,\bfv_1,\bfv_2,\bfv_3\}$ to form a spanning set for a maximal set of equiangular lines in dimension~2, we observe first that the equiangularity between~$\bfv_0$ and the other three is automatic, by our choice of first entries (see the discussion of~$d=3$ below for a deeper insight into this property, which is the essence of the advantage of this construction method). So we only need worry about the angles among the remaining vectors~$\bfv_1,\bfv_2$ and~$\bfv_3$, which boil down to just three equations of the form $$\vert1+2\zeta^r\vert=\sqrt{3},$$ where~$r=1$ or 2.
This forces~$\zeta$ to be one of the primitive cube roots of unity, and we are done. Notice that the requirement that $D_2^3=\mathbf{I}_2$ would also have forced~$\zeta$ to be one of the cube roots of unity (without necessarily having been a solution which provided a SIC-POVM!). However we did not impose this \emph{a priori}, in case a similar situation should arise to that in the first section, where the generating matrix was not of finite order. \begin{remark} The way in which the above example and its counterpart below in dimension~3 were originally discovered was by considering `Hadamard' multiplication of rank~1 projectors with the density matrices corresponding to the upper-triangular vector set, since the structure shone through much more clearly there than in any other format; presumably because the phase ambiguities are removed. If we consider that our matrix~$D_2$ is in fact the diagonal matrix of a vector~$\bfh=(1,-e^\frac{\pi i}{3})$ say, and if we form from~$\bfh$ (regarded as a column vector) the rank~1 Hermitian matrix~$H = \bfh\bfh^\dagger = \left( \begin{smallmatrix} 1&-e^\frac{-\pi i}{3}\\ -e^\frac{\pi i}{3}&1 \end{smallmatrix} \right)$, then the following remarkable fact arises: the set $$\big\{\bfv_0,\ e^{i\theta_m H}\bfv_0,\ (H\ast e^{i\theta_m H})\bfv_0,\ (H\ast H\ast e^{i\theta_m H})\bfv_0\big\}$$ or equivalently $$\big\{\bfv_0,\ e^{i\theta_m H}\bfv_0,\ (e^{i\theta_m H}\bfv_0)\ast\bfh,\ (e^{i\theta_m H}\bfv_0)\ast\bfh\ast\bfh\big\}$$ is a SIC-POVM, where~$\theta_m$ denotes the so-called~\emph{magic angle}~$\theta_m=\arccos{\frac{1}{\sqrt{3}}}$, and where the~$\ast$ denotes Hadamard (elementwise) multiplication of vectors and/or matrices. What we lose however in this version is the finite order property of the transition unitary~$e^{i\theta_m H}$: while (under Hadamard multiplication) the matrix~$H$ still has finite order, the unitary matrix~$e^{i\theta_m H}$ has infinite multiplicative order.
In this context we mention that our original transition matrix~$U_2$ above may be expanded as the exponential $$U_2 = e^{-i\theta_m Y}Z,$$ where~$Y=\left( \begin{smallmatrix} 0&-i\\ i&0 \end{smallmatrix} \right)$ and~$Z=\left( \begin{smallmatrix} 1&0\\ 0&-1 \end{smallmatrix} \right)$ are the usual Pauli matrices. \end{remark} \ $\mathbf{d=3:}$ This time our vector~$\bfv_0=(1,0,0)$ and we must find a unitary matrix~$U_3$ which takes us from~$\bfv_0$ cyclically to vectors~$\bfv_1$ and~$\bfv_2$ which have respectively~2 and~3 non-zero entries (the upper-triangular format referred to above). We know by the same argument as in dimension~2 that the first entry of each of these vectors must have absolute value~$\frac{1}{2}$, so let the top entry of~$\bfv_1$ be~$\frac{1}{2}e^{ix}$ for some~$x\in[0,2\pi)$. The hypothesis that the Gram matrix of the set~$\{\bfv_0,\bfv_1,\bfv_2\}$ be circulant in particular forces~$\frac{1}{2}e^{ix}=\langle\bfv_0,\bfv_1\rangle=\langle\bfv_2,\bfv_0\rangle$ and so the top entry of~$\bfv_2$ must equal~$\frac{1}{2}e^{-ix}$. So let us write $$\bfv_1=\begin{pmatrix}\frac{1}{2}e^{ix}\\\frac{\sqrt{3}}{2}e^{iy}\\0\end{pmatrix},\ \bfv_2=\begin{pmatrix}\frac{1}{2}e^{-ix}\\re^{i\eta}\\\sqrt{\frac{3}{4}-r^2}e^{i\kappa}\end{pmatrix}$$ for suitable real non-negative~$y,r,\eta,\kappa$. We remark first that~$\kappa$ may be set to be zero since it has no impact upon any other quantities, including the effect of our target~$D_3$ matrix, as we shall explain below. It remains to ensure that the middle inner product~$\langle\bfv_1,\bfv_2\rangle$ then also equals~$\frac{1}{2}e^{ix}$. (Notice that the other~3 non-diagonal inner products in the Gram matrix are forced to obey the same circulant rule here because the Gram matrix is Hermitian and the dimension is only~3). 
So we only need solve the equation: \begin{equation}\label{ranga} \frac{1}{2}e^{ix}=\langle\bfv_1,\bfv_2\rangle=\frac{1}{4}e^{-2ix}+\frac{\sqrt{3}}{2}re^{i(\eta-y)}, \end{equation} which upon multiplying throughout by~$2e^{2ix}\neq0$ becomes $$(e^{ix})^3 - \sqrt{3}re^{i(\eta-y)}(e^{ix})^2 - \frac{1}{2} = 0.$$ Viewed as an equation in the variable~$e^{ix}$ and bearing in mind the role of sixth roots of unity in this theory, this equation has a particularly suggestive form: namely if we take the phase~$e^{i(\eta-y)}(e^{ix})^2$ in the central term to be~$\pm i$ then the whole equation has the shape of a sixth root of unity minus its real and imaginary components. That is, if we set~$(e^{ix})^3 = e^{\frac{\pi i}{3}} = \frac{1}{2}+\frac{\sqrt{3}}{2}i$, set~$r=\frac{1}{2}$ and ensure that the phase in the middle term is equal to~$i$, then we have a solution. So one neat form is to set~$\eta=\frac{\pi}{2}$,~$y=\frac{2\pi}{9}$ and so the vectors become: $$\bfv_0=\begin{pmatrix}1\\0\\0\end{pmatrix},\ \bfv_1=\begin{pmatrix}\frac{1}{2}e^{\frac{\pi i}{9}}\\\frac{\sqrt{3}}{2}e^{\frac{2\pi i}{9}}\\0\end{pmatrix},\ \bfv_2=\begin{pmatrix}\frac{1}{2}e^{-\frac{\pi i}{9}}\\\frac{i}{2}\\\frac{1}{\sqrt{2}}\end{pmatrix}$$ and using the formula in~(ii) of lemma~\ref{unicirc} gives our transition unitary~$U_3$ to be: $$U_3=\left( \begin{array}{ccc} \frac{1}{2}e^{\frac{\pi i}{9}} & -\frac{i}{2} & \frac{1}{\sqrt{2}} \\ \frac{\sqrt{3}}{2}e^{\frac{2\pi i}{9}} & \frac{i}{2\sqrt{3}} e^{\frac{\pi i}{9}} & - \frac{1}{\sqrt{6}} e^{\frac{\pi i}{9}} \\ 0 & \sqrt{\frac{2}{3}}e^{\frac{-2\pi i}{9}} & - \frac{i}{\sqrt{3}} e^{\frac{-2\pi i}{9}} \\ \end{array} \right) $$ which has multiplicative order~3. So we have our substructure of a triangular basis. It remains to search for a diagonal matrix~$D_3$ of phases such that the (subspaces generated by the) orbits of these vectors under left multiplication by~$D_3$ do in fact constitute a full set of equiangular lines. 
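Lemma~\ref{unicirc}(ii) also gives a quick numerical consistency check: building $\mathcal{A}=\sum_k\bfv_k\otimes\bfw_{k-1}^\dagger$ from the three vectors above (with the dual basis $\{\bfw_k\}$ read off from the inverse of the basis matrix) should reproduce a unitary of multiplicative order~3 cycling $\bfv_0\to\bfv_1\to\bfv_2\to\bfv_0$. A sketch, assuming numpy:

```python
import numpy as np

e = lambda t: np.exp(1j * t)
v0 = np.array([1, 0, 0], dtype=complex)
v1 = np.array([e(np.pi / 9) / 2, np.sqrt(3) / 2 * e(2 * np.pi / 9), 0])
v2 = np.array([e(-np.pi / 9) / 2, 0.5j, 1 / np.sqrt(2)])

V = np.column_stack([v0, v1, v2])
W = np.linalg.inv(V).conj().T      # dual basis: columns w_k with <w_k, v_j> = delta_kj
# U3 = sum_k v_k w_{k-1}^dagger (indices mod 3), as in lemma (ii).
U3 = sum(np.outer(V[:, k], W[:, (k - 1) % 3].conj()) for k in range(3))

assert np.allclose(U3 @ v0, v1) and np.allclose(U3 @ v1, v2) and np.allclose(U3 @ v2, v0)
assert np.allclose(U3.conj().T @ U3, np.eye(3))               # unitary: the Gram matrix is circulant
assert np.allclose(np.linalg.matrix_power(U3, 3), np.eye(3))  # multiplicative order 3
print("U3 is unitary of order 3 and cycles v0 -> v1 -> v2")
```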
As in the~$d=2$ case the top left-hand entry of~$D_3$ must be~1. So let us write $$D_3 = \left( \begin{array}{ccc} 1&0&0\\ 0&\xi&0\\ 0&0&\zeta\\ \end{array} \right) $$ for some phases~$\xi,\zeta$. We observe that $D_3\bfv_1=\begin{pmatrix}\frac{1}{2}e^{\frac{\pi i}{9}}\\\frac{\sqrt{3}}{2}e^{\frac{2\pi i}{9}}\xi\\0\end{pmatrix}$ and so~$\langle\bfv_1,D_3\bfv_1\rangle=\frac{1}{4}+\frac{3}{4}\xi$. For this to be of absolute value~$\frac{1}{2}$ we require that~$\xi=-1$. Substituting this in turn into the equation for~$\langle D_3\bfv_1,\bfv_2\rangle$ yields an inner product~$\frac{1}{2}e^{-\frac{5\pi i}{9}}$, which is also of the correct absolute value. So far, so good: we have a collection of four vectors which span four equiangular lines. The final step is to check whether there is an appropriate choice of~$\zeta$ to generate the other five. Returning for a moment to the case of general~$d$, observe that for any positive integers~$r,s$ and~$j$, since~$D_d$ is by construction unitary: $$\langle D_d^{r}\bfv_{s+j},\bfv_{s}\rangle = \langle\bfv_{s+j},{D_d^\dagger}^r\bfv_{s}\rangle = \langle\bfv_{s+j},D_d^{n-r}\bfv_{s}\rangle,$$ where~$n$ is the lowest common multiple of the orders of the eigenvalues chosen so far for~$D_d$. In other words, all of the vectors in orbit~$\mathcal{O}_{s+j}$ will have the correct Hermitian angle with all of those in orbit~$\mathcal{O}_{s}$, since by stage~$(s+j)$ we have already verified that~$\bfv_{s+j}$ makes the correct angle with all of orbit~$\mathcal{O}_{s}$ and since~$D_d$ does not affect anything in the vectors of orbit~$\mathcal{O}_{s}$ beyond the $s$-th entry, the same must be true of all of the~$D_d$-multiples of~$\bfv_{s+j}$ no matter what our choice of eigenvalue at the~$(s+j)$-level. 
So the point about the upper-triangular structure we have created may be seen here (for~$d=2$ it was rather trivial): once we have created~$k$ levels in the sense that we have vectors~$\bfv_0,\ldots,\bfv_{k-1}$ and all of their finite orbits~$\mathcal{O}_0,\ldots,\mathcal{O}_{k-1}$ under repeated multiplication by~$D_d$, and once we are sure that the subsequent vectors~$\bfv_{k},\ldots,\bfv_{d-1}$ make the correct Hermitian angle with all of these orbits, then we may choose~\emph{any} phases for the~$k$-th,\ldots,$(d-1)$-st eigenvalues of~$D_d$, safe in the knowledge that the images of the vectors~$\bfv_{k},\ldots,\bfv_{d-1}$ under any power of the resulting matrix~$D_d$ will automatically make the correct Hermitian angle with the orbits~$\mathcal{O}_0,\ldots,\mathcal{O}_{k-1}$. So we are reduced at each $(k+1)$-st stage to ensuring that the set of new vectors~$\{D_d^r\bfv_{k}\}$ has the correct set of mutual angles with one another and with the subsequent vectors~$\bfv_{k+1},\ldots,\bfv_{d-1}$; the previous orbits automatically `fall into line'. This also shows that within each level we only need to check~$\vert\mathcal{O}_k\vert$ equations rather than the usual~$\binom{\vert\mathcal{O}_k\vert}{2}$, since for any integers~$r,s$: $$\langle D_d^{r}\bfv_k,D_d^s\bfv_k\rangle = \langle\bfv_k,{D_d^\dagger}^rD_d^s\bfv_k\rangle = \langle\bfv_k,D_d^{s-r}\bfv_k\rangle.$$ So in dimension~3 it is a consequence of the above discussion that no matter what our choice of~$\zeta$, the vectors~$D_3^t\bfv_2$ for integer~$t$ will always have the correct angle with vectors~$\bfv_0$,~$\bfv_1$, and~$D_3\bfv_1$. So we only need to focus on the inner products between the vectors~$D_3^t\bfv_2$ for~$t=0,1,2,3,4,5$.
A glance at the shape of the vector~$\bfv_2$ shows that for any integers~$s,t$, since $D_3$ is automatically unitary: $$\langle D_3^s\bfv_2,D_3^t\bfv_2\rangle = \langle\bfv_2,D_3^{t-s}\bfv_2\rangle = \frac{1}{4} + (-1)^{t-s}\frac{1}{4} + \frac{1}{2}\zeta^{t-s},$$ explicitly showing that the individual vectors are unit vectors when~$s=t$. Without loss of generality we may assume when~$s\neq t$ that~$0\leq s<t\leq5$, so in particular~$1\leq t-s\leq 5$. The above expression shows immediately that if $t-s$ is odd then we have the correct absolute value of~$\frac{1}{2}$; when~$t-s$ is even (ie equal to~2 or~4) one sees that any primitive cube root or indeed sixth root of unity will once again yield the correct absolute value of~$\frac{1}{2}$. So for simplicity we shall set $$\zeta = e^\frac{2\pi i}{3},$$ hence~$D_3$ has the form $$D_3 = \left( \begin{array}{ccc} 1&0&0\\ 0&-1&0\\ 0&0&e^\frac{2\pi i}{3}\\ \end{array} \right), $$ whence our full set of vectors is: \begin{eqnarray*} \mathcal{O}_0 & = & \{ \begin{pmatrix}1\\0\\0\end{pmatrix} \}, \\ \mathcal{O}_1 & = & \{ \begin{pmatrix}\frac{1}{2}e^{\frac{\pi i}{9}}\\\frac{\sqrt{3}}{2}e^{\frac{2\pi i}{9}}\\0\end{pmatrix}, \begin{pmatrix}\frac{1}{2}e^{\frac{\pi i}{9}}\\-\frac{\sqrt{3}}{2}e^{\frac{2\pi i}{9}}\\0\end{pmatrix} \}, \\ \mathcal{O}_2 & = & \{ \begin{pmatrix}\frac{1}{2}e^{\frac{-\pi i}{9}}\\\frac{i}{2}\\\frac{1}{\sqrt{2}}\end{pmatrix}, \begin{pmatrix}\frac{1}{2}e^{\frac{-\pi i}{9}}\\-\frac{i}{2}\\\frac{1}{\sqrt{2}}e^\frac{2\pi i}{3}\end{pmatrix}, \begin{pmatrix}\frac{1}{2}e^{\frac{-\pi i}{9}}\\\frac{i}{2}\\\frac{1}{\sqrt{2}}e^\frac{4\pi i}{3}\end{pmatrix}, \begin{pmatrix}\frac{1}{2}e^{\frac{-\pi i}{9}}\\-\frac{i}{2}\\\frac{1}{\sqrt{2}}\end{pmatrix}, \begin{pmatrix}\frac{1}{2}e^{\frac{-\pi i}{9}}\\\frac{i}{2}\\\frac{1}{\sqrt{2}}e^\frac{2\pi i}{3}\end{pmatrix}, \begin{pmatrix}\frac{1}{2}e^{\frac{-\pi i}{9}}\\-\frac{i}{2}\\\frac{1}{\sqrt{2}}e^\frac{4\pi i}{3}\end{pmatrix} \}. 
\\ \end{eqnarray*} Notice we have split it into its three natural $D_3$-orbits:~$\mathcal{O}_0$ generated by~$\bfv_0$,~$\mathcal{O}_1$ generated by~$\bfv_1$ and~$\mathcal{O}_2$ generated by~$\bfv_2$. Also we remark that~$D_3^6=\mathbf{I}_3$, so in fact in this case we are able to stick to finite-order unitaries both for the transition matrix between orbits, and for the diagonal matrix which generates each orbit. This completes the proof of theorem~\ref{d2d3}.\end{proof} We should mention that the mere creation of the initial set~$\{\bfv_0,\bfv_1,\bfv_2\}$ does not in any way guarantee that it can be extended to a SIC-POVM in the above fashion. For example it is possible to create a set of three totally real vectors (using~$x=0$ above and then solving equation~(\ref{ranga})) which have no corresponding diagonal matrix to extend them to a full set. \begin{remark} Any attempt to extend this methodology beyond~$d=3$ using the na\"ive diagonal approach which worked in~$d=2,3$ is unfortunately doomed to fail: in a sense one `runs out of degrees of freedom' far too quickly. This does not rule out a kind of `block diagonal' approach, which we hope to be the subject of future work. \end{remark}
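As a final check, the nine vectors listed above can be verified wholesale by generating the orbits of $\bfv_0,\bfv_1,\bfv_2$ under $D_3$ and testing the SIC condition $|\langle\cdot,\cdot\rangle|^2=\frac{1}{d+1}=\frac14$ on all pairs. A sketch, assuming numpy:

```python
import numpy as np

e = lambda t: np.exp(1j * t)
v0 = np.array([1, 0, 0], dtype=complex)
v1 = np.array([e(np.pi / 9) / 2, np.sqrt(3) / 2 * e(2 * np.pi / 9), 0])
v2 = np.array([e(-np.pi / 9) / 2, 0.5j, 1 / np.sqrt(2)])
D3 = np.diag([1, -1, e(2 * np.pi / 3)])
assert np.allclose(np.linalg.matrix_power(D3, 6), np.eye(3))   # D3 has order 6

def orbit(v, n):
    """The first n vectors D3^t v, t = 0, ..., n-1."""
    return [np.linalg.matrix_power(D3, t) @ v for t in range(n)]

# Orbit sizes 1, 2 and 6 give d^2 = 9 vectors in all.
vecs = orbit(v0, 1) + orbit(v1, 2) + orbit(v2, 6)
assert len(vecs) == 9

for j in range(9):
    assert abs(np.vdot(vecs[j], vecs[j]) - 1) < 1e-12          # unit vectors
    for k in range(j + 1, 9):
        assert abs(abs(np.vdot(vecs[j], vecs[k])) ** 2 - 0.25) < 1e-12
print("9-vector SIC-POVM in dimension 3 verified")
```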
TITLE: Asymptotic of integral QUESTION [2 upvotes]: I have this definite integral: $$ \int_{0}^{\frac \pi 2} \left( 1 + t^2 \cot^2\theta \right )^{-\frac{n-1}{2}} d \theta , t\in (0,1)$$ where $n$ is an integer. I need to find the asymptotic as a function of $n$. I suspect it should be $O(\frac{1}{\sqrt{n}})$ but wasn't able to complete the calculation. Any ideas? REPLY [0 votes]: One may write: $$ \begin{align} I(n)=\int_{0}^{\large \frac \pi 2} \left( 1 + t^2 \cot^2\theta \right )^{-\frac{n-1}{2}} d \theta&=\int_{0}^{\large \frac \pi 2} \left( 1 + t^2 \tan^2\theta \right )^{-\frac{n-1}{2}} d \theta \\\\ &=\int_{0}^{\infty} \frac{1}{1+x^2}\left( 1 + t^2 x^2 \right )^{-\frac{n-1}{2}} dx \end{align} $$ Upper bound ($t\leq1$): $$ \begin{align} I(n) &=\int_{0}^{\infty} \frac{1}{1+x^2}\left( 1 + t^2 x^2 \right )^{-\frac{n-1}{2}} dx \\\\ &\leq\int_{0}^{\infty} \frac{1}{1+t^2x^2}\left( 1 + t^2 x^2 \right )^{-\frac{n-1}{2}} dx \\\\ &= \int_{0}^{\infty} \left( 1 + t^2 x^2 \right )^{-\frac{n+1}{2}} dx \\\\ &= \frac1t \int_{0}^{\infty} \left( 1 + x^2 \right )^{-\frac{n+1}{2}} dx = \frac1t f(n) \end{align} $$ Lower bound (for $t\leq 1$): $$ \begin{align} I(n) &\geq \int_{0}^{\infty} \frac{1}{1+x^2}\left( 1 + x^2 \right )^{-\frac{n-1}{2}} dx \\\\ &= \int_{0}^{\infty} \left( 1 + x^2 \right )^{-\frac{n+1}{2}} dx =f(n)\\\\ \end{align} $$ Thus: $$ f(n) \leq I(n) \leq \frac1t f(n)$$ which is a useful two-sided approximation whenever $t$ is strictly larger than $0$. A recursion can be obtained for $f(n)$. The Laplace approximation for $f(n)$ gives $f(n) \approx \sqrt{\pi \over 2n}$, which confirms the suspected $O(\frac{1}{\sqrt{n}})$ behaviour and shows that the bounds are tight up to the factor $\frac1t$.
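A quick numerical sanity check of the sandwich and of the Laplace estimate (Python with numpy; the improper integrals are truncated at a large cutoff $X$, which is harmless here since the integrands decay rapidly):

```python
import numpy as np

def trap(y, x):
    # simple trapezoidal rule on the grid x
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2))

def I_n(n, t, X=200.0, m=200001):
    # I(n) = int_0^inf (1+x^2)^(-1) (1+t^2 x^2)^(-(n-1)/2) dx, truncated at X
    x = np.linspace(0.0, X, m)
    return trap((1 + t**2 * x**2) ** (-(n - 1) / 2) / (1 + x**2), x)

def f_n(n, X=200.0, m=200001):
    # f(n) = int_0^inf (1+x^2)^(-(n+1)/2) dx, truncated at X
    x = np.linspace(0.0, X, m)
    return trap((1 + x**2) ** (-(n + 1) / 2), x)

n, t = 25, 0.5
f, I = f_n(n), I_n(n, t)
assert f <= I <= f / t                                  # the sandwich f(n) <= I(n) <= f(n)/t
assert abs(f / np.sqrt(np.pi / (2 * n)) - 1) < 0.05     # Laplace: f(n) ~ sqrt(pi/(2n))
print(f"f(n)={f:.5f}, I(n)={I:.5f}, f(n)/t={f/t:.5f}")
```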
TITLE: Find the tangent to the curve QUESTION [0 upvotes]: The equation of the tangent to the curve $y=\sin^2\left(\frac{\pi x^3}{6}\right)$ at $x=1$ is? I know the question is pretty simple and straightforward but I would like to cross check my answer, which is $y=\frac14 + \frac{\pi\sqrt3}{4}(x-1)$ REPLY [2 votes]: You are indeed correct. Let $y=\sin^2[{f(x)}]$ Then $y=\frac12-\frac12\cos[2f(x)]$ $\to\frac{dy}{dx}=f'(x)\sin[2f(x)]$ $f(x)=\frac\pi6x^3\to f'(x)=\frac\pi2x^2$ $\to \frac{dy}{dx}=\frac{\pi}{2}x^2\sin(\frac{\pi x^3}{3})$ Hence $x=1\to \frac{dy}{dx}=\frac\pi2\sin(\frac\pi3)=\frac{\pi\sqrt3}{4}$ $x=1\to y=\frac14, y-\frac14=\frac{\pi\sqrt3}{4}(x-1)\to y=\frac14+\frac{\pi\sqrt3}{4}(x-1)$
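The slope can be double-checked with a central finite difference at $x=1$ (plain Python, no libraries beyond `math`):

```python
import math

# y = sin^2(pi x^3 / 6)
f = lambda x: math.sin(math.pi * x**3 / 6) ** 2

# Central difference approximation to dy/dx at x = 1.
h = 1e-6
slope = (f(1 + h) - f(1 - h)) / (2 * h)

assert abs(f(1) - 0.25) < 1e-12                        # y(1) = 1/4
assert abs(slope - math.pi * math.sqrt(3) / 4) < 1e-6  # dy/dx|_{x=1} = pi*sqrt(3)/4
```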
TITLE: Let all eigenvalues of $A$ have a negative real part ( i.e. $A$ is stable ). Why does the following hold? QUESTION [2 upvotes]: Let all eigenvalues of $A$ have a negative real part ( i.e. $A$ is stable ). Why does the following hold? $$\int_{0}^{\infty} [Ae^{A\tau}BB^*e^{A^*\tau}+e^{A\tau}BB^*e^{A^*\tau}A^*] \, d\tau = \int_0^\infty d(e^{A\tau}BB^*e^{A^*\tau})$$ Please may you explain the meaning of $d(e^{A\tau}BB^*e^{A^*\tau})$ in the right hand side of the equation. Many thanks, Tri REPLY [1 votes]: Here I will give a partial answer without ruling out the possibility of finishing it later. I am mildly skeptical of the equality because it seems to be neglecting a product rule that would say $\displaystyle \frac d {d\tau} \left( e^{A\tau}BB^*e^{A^*\tau}\right) = Ae^{A\tau}B B^* e^{A^*\tau} + e^{A\tau}BB^* A^* e^{A^*\tau}.$ But the other question is: What is the meaning of $d(e^{A\tau}BB^*e^{A^*\tau})$? In the integral $\displaystyle \int_0^\infty d(e^{A\tau}BB^*e^{A^*\tau})$ one could wonder whether the variable that goes from $0$ to $\infty$ is $\tau$ or $A$ or $B$ or something else, but that is clear from the left side of the equality. The only meaning one can reasonably assign to this is that that integral is the total change in $e^{A\tau} B B^* e^{A^*\tau}$ as $\tau$ goes from $0$ to $\infty$, i.e. \begin{align} \int_0^\infty d\left(e^{A\tau} B B^* e^{A^*\tau}\right) & = \left( \lim_{\tau\to\infty} e^{A\tau} B B^* e^{A^*\tau} \right) - \left( e^{A0} B B^* e^{A^*0} \right) \\[6pt] & = \left( \lim_{\tau\to\infty} e^{A\tau} B B^* e^{A^*\tau} \right) - \left( B B^*\right). \end{align}
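For what it is worth, the two readings agree: since $A^*$ commutes with $e^{A^*\tau}$, the product-rule derivative $Ae^{A\tau}BB^*e^{A^*\tau}+e^{A\tau}BB^*A^*e^{A^*\tau}$ equals the integrand as written, and since $A$ is stable the limit term vanishes, so the integral evaluates to $-BB^*$. A numerical sketch (numpy, with an arbitrarily chosen stable diagonalizable $A$ and an arbitrary $B$; $e^{At}$ computed via eigendecomposition):

```python
import numpy as np

# A stable (eigenvalues -1 and -2) and diagonalizable; B arbitrary; Q = B B^*.
A = np.array([[-1.0, 1.0], [0.0, -2.0]])
B = np.array([[1.0, 0.5], [0.3, 2.0]])
Q = B @ B.T

evals, V = np.linalg.eig(A)
Vinv = np.linalg.inv(V)
expA = lambda t: ((V * np.exp(evals * t)) @ Vinv).real   # e^{A t}
F = lambda t: expA(t) @ Q @ expA(t).T                    # F(tau) = e^{A tau} B B^* e^{A^* tau}

# 1) The integrand equals dF/dtau (central difference check at tau = 0.7).
tau, h = 0.7, 1e-5
assert np.allclose(A @ F(tau) + F(tau) @ A.T,
                   (F(tau + h) - F(tau - h)) / (2 * h), atol=1e-6)

# 2) Hence the integral telescopes to F(inf) - F(0) = 0 - B B^* = -B B^*.
ts = np.linspace(0.0, 40.0, 40001)
vals = np.array([A @ F(t) + F(t) @ A.T for t in ts])
dt = ts[1] - ts[0]
integral = (vals[0] + vals[-1]) / 2 * dt + vals[1:-1].sum(axis=0) * dt
assert np.allclose(integral, -Q, atol=1e-3)
print("integral of the integrand over [0, 40] is approximately -BB*")
```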
\begin{document} \title[Exact asymptotics for constrained exponential random graphs]{Large deviations and exact asymptotics for constrained exponential random graphs} \author{Mei Yin} \thanks{Mei Yin's research was partially supported by NSF grant DMS-1308333.} \address{Department of Mathematics, University of Denver, Denver, CO 80208, USA} \email{mei.yin@du.edu} \dedicatory{\rm \today} \subjclass[2000]{60F10, 05C80, 60C05} \keywords{large deviations, normalization constants, exponential random graphs.} \begin{abstract} We present a technique for approximating generic normalization constants subject to constraints. The method is then applied to derive the exact asymptotics for the conditional normalization constant of constrained exponential random graphs. \end{abstract} \maketitle \section{Introduction} \label{intro} Exponential random graph models are widely used to characterize the structure and behavior of real-world networks as they are able to predict the global structure of the networked system based on a set of tractable local features. Let $s$ be a positive integer. We recall the definition of an $s$-parameter family of exponential random graphs. Let $H_1,\dots,H_s$ be fixed finite simple graphs (``simple'' means undirected, with no loops or multiple edges). By convention, we take $H_1$ to be a single edge. Let $\zeta_1,\dots,\zeta_s$ be $s$ real parameters and let $N$ be a positive integer. Consider the set $\G_N$ of all simple graphs $G_N$ on $N$ vertices. Let $\text{hom}(H_i, G_N)$ denote the number of homomorphisms (edge-preserving vertex maps) from the vertex set $V(H_i)$ into the vertex set $V(G_N)$ and $t(H_i, G_N)$ denote the homomorphism density of $H_i$ in $G_N$, \begin{equation} \label{t} t(H_i, G_N)=\frac{|\text{hom}(H_i, G_N)|}{|V(G_N)|^{|V(H_i)|}}. 
\end{equation} By an $s$-parameter family of exponential random graphs we mean a family of probability measures $\PR_N^{\zeta}$ on $\G_N$ defined by, for $G_N\in\G_N$, \begin{equation} \label{pmf} \PR_N^{\zeta}(G_N)=\exp\left(N^2\left(\zeta_1 t(H_1,G_N)+\cdots+ \zeta_s t(H_s,G_N)-\psi_N^{\zeta}\right)\right), \end{equation} where the parameters $\zeta_1,\dots,\zeta_s$ are used to tune the densities of different subgraphs $H_1,\dots,H_s$ of $G_N$ and $\psi_N^{\zeta}$ is the normalization constant, \begin{equation} \label{psi} \psi_N^{\zeta}=\frac{1}{N^2}\log\sum_{G_N \in \G_N} \exp\left(N^2 \left(\zeta_1 t(H_1,G_N)+\cdots+\zeta_s t(H_s,G_N)\right) \right). \end{equation} These exponential models are analogues of grand canonical ensembles in statistical physics, with particle and energy densities in place of subgraph densities, and temperature and chemical potentials in place of tuning parameters. A key objective while studying these models is to evaluate the normalization constant. It encodes essential information about the model since averages of various quantities of interest may be obtained by differentiating the normalization constant with respect to appropriate parameters. Indeed, a phase is commonly characterized as a connected region of the parameter space, maximal for the condition that the limiting normalization constant is analytic, and phase boundaries are determined by examining the singularities of its derivatives. Computation of the normalization constant is also important in statistics because it is crucial for carrying out maximum likelihood estimates and Bayesian inference of unknown parameters. The computation though is not always reliable for large $N$. For example, as shown by Chatterjee and Diaconis \cite{CD}, when $s=2$ and $\zeta_2>0$, all graphs drawn from the exponential model (\ref{pmf}) are not appreciably different from Erd\H{o}s-R\'{e}nyi in the large $N$ limit. 
This implies that sometimes subgraph densities cannot be tuned in the unconstrained model and exponential random graphs alone may not capture all desirable features of the networked system, such as interdependency and clustering. Furthermore, unlike standard statistical physics models, the equivalence of various ensembles (microcanonical, canonical, grand canonical) in the asymptotic regime does not hold in these models. One possible explanation is that since the normalization constant in the microcanonical ensemble is not always a convex function of the parameters \cite{RS}, the Legendre transform between the normalization constants in different ensembles is not invertible (see \cite{TET} for discussions about non-equivalence of ensembles). We are thus motivated to study the constrained exponential random graph model in \cite{KY}, where some subgraph density is controlled directly and others are tuned with parameters. In contrast to the above example where in the limit as $N\rightarrow \infty$, all graphs are close to Erd\H{o}s-R\'{e}nyi as $\zeta_2$ increases from $0$ to $\infty$, it was shown in \cite{KY} that for fixed edge density, a typical graph drawn from the constrained edge-triangle model still exhibits Erd\H{o}s-R\'{e}nyi structure for $\zeta_2$ close to $0$, but consists of one big clique and some isolated vertices as $\zeta_2$ gets sufficiently close to infinity. Notice that the transition observed in the constrained model is between graphs of different characters, whereas in the unconstrained model, although there is a curve in the parameter space across which the graph densities display sudden jumps \cite{CD, RY}, the transition is between graphs of similar characters (Erd\H{o}s-R\'{e}nyi graphs). Interesting mathematics is therefore expected from studying the constrained model, and in particular, the associated normalization constant directly; the normalization constant in the unconstrained model may sometimes be of no particular relevance. 
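For concreteness, homomorphism densities as in (\ref{t}) can be computed by brute force for small graphs. The sketch below (Python; the helper name is ours, not from the literature) evaluates the edge and triangle densities of the complete graph $K_4$, for which $t(K_2,K_4)=\frac{12}{16}=\frac34$ and $t(K_3,K_4)=\frac{24}{64}=\frac38$:

```python
from itertools import product

def hom_density(H_edges, nH, G_adj):
    """t(H, G): fraction of vertex maps V(H) -> V(G) that preserve edges."""
    nG = len(G_adj)
    hom = 0
    for phi in product(range(nG), repeat=nH):        # all vertex maps
        if all(G_adj[phi[a]][phi[b]] for a, b in H_edges):
            hom += 1
    return hom / nG**nH

# K4: complete graph on 4 vertices (symmetric adjacency, no loops).
K4 = [[0 if i == j else 1 for j in range(4)] for i in range(4)]
edge = [(0, 1)]                                      # H1 = a single edge
triangle = [(0, 1), (1, 2), (0, 2)]                  # H2 = K3

assert hom_density(edge, 2, K4) == 12 / 16           # = 3/4
assert hom_density(triangle, 3, K4) == 24 / 64       # = 3/8
```

(Note that, as in (\ref{t}), the count is over all vertex maps, not just injective ones; non-injective maps fail the edge test here because the adjacency matrix has a zero diagonal.)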
For clarity, we assume that the edge density of the graph is approximately known to be $e$, though the argument runs through without much modification if the density of some other more complicated subgraph is approximately described. Take $t>0$. The conditional normalization constant $\psi^{e, \zeta}_{N,t}$ is defined analogously to the normalization constant $\psi^{\zeta}_{N}$ for the unconstrained exponential random graph model, \begin{equation} \label{cpsi1} \psi^{e, \zeta}_{N,t}=\frac{1}{N^2}\log\sum_{G_N\in \mathcal{G}_N: |e(G_N)-e|\leq t}\exp\left(N^2 \left(\zeta_1 t(H_1,G_N)+\cdots+\zeta_s t(H_s,G_N)\right)\right), \end{equation} the difference being that we are only taking into account graphs $G_N$ whose edge density $e(G_N)$ is within a $t$ neighborhood of $e$. Correspondingly, the associated conditional probability measure $\PR^{e, \zeta}_{N,t}(G_N)$ is given by \begin{equation} \label{cpmf} \PR^{e, \zeta}_{N,t}(G_N)=\exp\left(N^2 \left(\zeta_1 t(H_1,G_N)+\cdots+\zeta_s t(H_s,G_N)-\psi^{e, \zeta}_{N,t}\right)\right)\mathbbm{1}_{|e(G_N)-e| \leq t}. \end{equation} Based on a large deviation principle for Erd\H{o}s-R\'{e}nyi graphs established in Chatterjee and Varadhan \cite{CV}, Chatterjee and Diaconis \cite{CD} developed an asymptotic approximation for the normalization constant $\psi_N^{\zeta}$ as $N\rightarrow \infty$ and connected the occurrence of a phase transition in the dense exponential model with the non-analyticity of the asymptotic limit of $\psi_N^{\zeta}$. Further investigations quickly followed, see for example \cite{AR, RRS, RS1, RS, RY, YRF, Z}. However, since the approximation relies on Szemer\'{e}di's regularity lemma, the error bound on $\psi_N^{\zeta}$ is of the order of some negative power of \begin{eqnarray} \log^* N=\left\{ \begin{array}{ll} 0, & \hbox{if $N\leq 1$;} \\ 1+\log^*(\log N), & \hbox{if $N>1$,} \\ \end{array} \right. 
\end{eqnarray} which is the number of times the logarithm function must be iteratively applied before the result is less than or equal to $1$, and this method is also not applicable for sparse exponential random graphs. Analogously, using the large deviation principle established in Chatterjee and Varadhan \cite{CV} and Chatterjee and Diaconis \cite{CD}, we developed an asymptotic approximation for the conditional normalization constant $\psi^{e, \zeta}_{N,t}$ as $N \rightarrow \infty$ and $t \rightarrow 0$, since it is in this limit that interesting singular behavior occurs \cite{KY}. Nevertheless, this approximation suffers from the same problem: the error bound on $\psi^{e, \zeta}_{N,t}$ is of the order of some negative power of $\log^* N$ and the method does not lead to an exact limit for $\psi^{e, \zeta}_{N,t}$ in the sparse setting. To improve on the approximation, Chatterjee and Dembo \cite{CD1} presented a general technique for computing large deviations of nonlinear functions of independent Bernoulli random variables in a recent work. In detail, for a function $f$ from $[0,1]^n$ to $\mathbb{R}$, they considered a generic normalization constant of the form \begin{equation} \label{free} F=\log \sum_{x\in \{0,1\}^n}e^{f(x)} \end{equation} and investigated conditions on $f$ such that the approximation \begin{equation} \label{valid} F=\sup_{x\in [0,1]^n}(f(x)-I(x))+\text{ lower order terms} \end{equation} is valid, where $I(x)=\sum_{i=1}^n I(x_i)$ and \begin{equation} \label{I} I(x_i)=x_i\log x_i+(1-x_i)\log(1-x_i). \end{equation} They then applied the general result and obtained bounds for the normalization constant $\psi_N^{\zeta}$ for finite $N$, which lead to a variational formula for the asymptotic normalization of exponential random graphs with a small amount of sparsity. Serious attempts have also been made at formulating a suitable ``sparse'' version of Szemer\'{e}di's lemma \cite{BCCZ1,BCCZ2}. 
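The iterated logarithm above grows extraordinarily slowly, which is why an error bound given by a negative power of $\log^* N$ is so weak. A direct implementation (a hypothetical helper, not part of the paper):

```python
import math

def log_star(n):
    """log* n: the number of times the logarithm must be applied to n
    before the result is less than or equal to 1."""
    count = 0
    while n > 1:
        n = math.log(n)
        count += 1
    return count
```

Even for an astronomically large input such as $10^{80}$, the value of $\log^*$ is only $4$, so any fixed negative power of $\log^* N$ decays essentially not at all in any practical range of $N$.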
These sparse regularity lemmas, however, may not always provide the precision required for large deviations, since random graphs do not necessarily satisfy the proposed regularity conditions in the large deviations regime. Seeing the power of nonlinear large deviations in deriving a concrete error bound for $\psi^{\zeta}_N$ as $N\rightarrow \infty$, we naturally wonder if it is possible to likewise obtain a better estimate for $\psi^{e, \zeta}_{N,t}$ as $N \rightarrow \infty$ and $t \rightarrow 0$, which will shed light on constrained exponential random graphs with sparsity. The following sections are dedicated to this goal. Due to the imposed constraint, instead of working with a generic normalization constant of the form (\ref{free}) as in Chatterjee and Dembo \cite{CD1}, we will work with a generic conditional normalization constant in Theorem \ref{main1} and then apply this result to derive a concrete error bound for the conditional normalization constant $\psi^{e, \zeta}_{N,t}$ of constrained exponential random graphs in Theorems \ref{general} and \ref{special}. \section{Overview of Chatterjee-Dembo results} \label{overview} Chatterjee and Dembo formulated a two-part sufficient condition under which the approximation (\ref{valid}) holds. They first assumed that $f$ is a twice continuously differentiable function on $[0,1]^n$ and introduced some shorthand notation. Let $\Vert \cdot \Vert$ denote the supremum norm. For each $i$ and $j$, let \begin{equation} f_i=\frac{\partial f}{\partial x_i} \text{ and } f_{ij}=\frac{\partial^2 f}{\partial x_i \partial x_j} \end{equation} and define $a=\Vert f \Vert$, $b_i=\Vert f_i \Vert$, and $c_{ij}=\Vert f_{ij} \Vert$. 
In addition to this minor smoothness condition on the function $f$, they further assumed that the gradient vector $\nabla f(x)=(\partial f/\partial x_1, \dots,\partial f/\partial x_n)$ satisfies a low complexity gradient condition: For any $\epsilon>0$, there is a finite subset of $\mathbb{R}^n$ denoted by $\D(\epsilon)$ such that for all $x\in [0,1]^n$, there exists $d=(d_1,\dots,d_n) \in \D(\epsilon)$ with \begin{equation} \label{D} \sum_{i=1}^n (f_i(x)-d_i)^2 \leq n\epsilon^2. \end{equation} \begin{theorem} [Theorem 1.5 in \cite{CD1}] \label{CD1} Let $F$, $a$, $b_i$, $c_{ij}$, and $\D(\epsilon)$ be defined as above. Let $I$ be defined as in (\ref{I}). Then for any $\epsilon>0$, $F$ satisfies the upper bound \begin{equation} F \leq \sup_{x \in [0,1]^n} (f(x)-I(x)) +\text{ complexity term }+\text{ smoothness term}, \end{equation} where \begin{equation} \text{complexity term }=\frac{1}{4}\left(n\sum_{i=1}^n b_i^2\right)^{1/2}\epsilon+3n\epsilon+\log|\D(\epsilon)|, \text{ and} \end{equation} \begin{equation} \text{smoothness term }=4\left(\sum_{i=1}^n (ac_{ii}+b_i^2)+\frac{1}{4}\sum_{i,j=1}^n (ac_{ij}^2+b_ib_jc_{ij}+4b_ic_{ij})\right)^{1/2} \end{equation} \begin{equation*} +\frac{1}{4}\left(\sum_{i=1}^n b_i^2\right)^{1/2}\left(\sum_{i=1}^n c_{ii}^2\right)^{1/2}+3\sum_{i=1}^n c_{ii}+\log 2. \end{equation*} \vskip.2truein \noindent Moreover, $F$ satisfies the lower bound \begin{equation} F\geq \sup_{x\in [0,1]^n}(f(x)-I(x))-\frac{1}{2}\sum_{i=1}^n c_{ii}. \end{equation} \end{theorem} To utilize Theorem \ref{CD1} in the exponential random graph setting, Chatterjee and Dembo introduced an equivalent definition of the homomorphism density so that the normalization constant for exponential random graphs (\ref{psi}) takes the same form as the generic normalization constant (\ref{free}). 
This notion of the homomorphism density, which dates back to Lov\'{a}sz, is denoted by $t(H,x)$ and may be constructed not only for simple graphs but also for more general objects (referred to as ``graphons'' in Lov\'{a}sz \cite{Lov}). Let $k$ be a positive integer and let $H$ be a finite simple graph on the vertex set $[k]=\{1,\dots,k\}$. Let $E$ be the set of edges of $H$ and let $m=|E|$. Let $N$ be another positive integer and let $n=\binom N2$. Index the elements of $[0,1]^n$ as $x=(x_{ij})_{1\leq i<j\leq N}$ with the understanding that if $i<j$, then $x_{ji}$ is the same as $x_{ij}$, and for all $i$, $x_{ii}=0$. Let $t(H,x)=T(x)/N^2$, where $T: [0,1]^n \rightarrow \mathbb{R}$ is defined as \begin{equation} \label{T} T(x)=\frac{1}{N^{k-2}}\sum_{q\in [N]^k}\prod_{\{l,l'\}\in E}x_{q_l q_{l'}}. \end{equation} For any graph $G_N$, if $x_{ij}=1$ means there is an edge between the vertices $i$ and $j$ and $x_{ij}=0$ means there is no edge, then $t(H,x)=t(H,G_N)$, where $t(H,G_N)$ is the homomorphism density defined by (\ref{t}). Furthermore, if we let $G_x$ denote the simple graph whose edges are independent, and edge $(i,j)$ is present with probability $x_{ij}$ and absent with probability $1-x_{ij}$, then $t(H,x)$ gives the expected value of $t(H, G_x)$. Chatterjee and Dembo checked that $T(x)$ satisfies both the smoothness condition and the low complexity gradient condition as assumed in Theorem \ref{CD1}. In detail, they showed in Lemmas 5.1 and 5.2 of \cite{CD1} that \begin{equation} \label{T1} \Vert T \Vert \leq N^2, \hspace{0.5cm} \Vert \frac{\partial T}{\partial x_{ij}} \Vert \leq 2m, \end{equation} \begin{eqnarray} \label{T2} \left \Vert \frac{\partial^2 T}{\partial x_{ij}\partial x_{i'j'}} \right \Vert \leq \left\{ \begin{array}{ll} 4m(m-1)N^{-1}, & \hbox{if $|\{i,j,i',j'\}|=2$ or $3$;} \\ 4m(m-1)N^{-2}, & \hbox{if $|\{i,j,i',j'\}|=4$,} \\ \end{array} \right. 
\end{eqnarray} and for any $\epsilon>0$, \begin{equation} \label{T3} |\D_T(\epsilon)| \leq \exp\left( \frac{cm^4k^4N}{\epsilon^4}\log \frac{Cm^4k^4}{\epsilon^4}\right), \end{equation} where $c$ and $C$ are universal constants. By taking $f(x)=\zeta_1T_1(x)+\cdots+\zeta_sT_s(x)$ in Theorem \ref{CD1}, they then gave a concrete error bound for the normalization constant $\psi_N^{\zeta}$, which is seen to be $F/N^2$ in this alternative interpretation of (\ref{free}). This error bound is significantly better than the negative power of $\log^* N$ and allows a small degree of sparsity for $\zeta_i$. As Theorem \ref{previous} shows, the difference between $\psi_N^{\zeta}$ and the approximation $\sup_{x\in [0,1]^n}\frac{f(x)-I(x)}{N^2}$ tends to zero as long as $\sum_{i=1}^s |\zeta_i|$ grows slower than $N^{1/8}(\log N)^{-1/8}$. \begin{theorem} [Theorem 1.6 in \cite{CD1}] \label{previous} Let $s$ be a positive integer and $H_1,\dots,H_s$ be fixed finite simple graphs. Let $N$ be another positive integer and let $n=\binom N2$. Define $T_1,\dots,T_s$ accordingly as in the above paragraph. Let $\zeta_1,\dots,\zeta_s$ be $s$ real parameters and define $\psi_N^{\zeta}$ as in (\ref{psi}). Let $f(x)=\zeta_1 T_1(x)+\cdots+\zeta_s T_s(x)$, $B=1+|\zeta_1|+\cdots+|\zeta_s|$, and $I$ be defined as in (\ref{I}). Then \begin{equation} -cBN^{-1}\leq \psi_N^{\zeta}-\sup_{x\in [0,1]^n}\frac{f(x)-I(x)}{N^2} \end{equation} \begin{equation*} \leq CB^{8/5}N^{-1/5}(\log N)^{1/5}\left(1+\frac{\log B}{\log N}\right)+CB^2N^{-1/2}, \end{equation*} where $c$ and $C$ are constants that may depend only on $H_1,\dots,H_s$. \end{theorem} \section{Nonlinear large deviations} \label{ld} Let $f$ and $h$ be two continuously differentiable functions from $[0,1]^n$ to $\mathbb{R}$. Assume that $f$ and $h$ satisfy both the smoothness condition and the low complexity gradient condition described at the beginning of this paper. 
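Before developing the constrained estimates, the encoding (\ref{T}) from the previous section can be checked numerically on a tiny example (hypothetical code, not part of the paper): for $H=K_3$ and $x$ the adjacency matrix of the complete graph on $N$ vertices, every map $q$ with distinct coordinates contributes $1$, so $t(H,x)=N(N-1)(N-2)/N^3$.

```python
from itertools import product

def t_hom(H_edges, k, x, N):
    """Homomorphism density t(H, x) = T(x) / N^2, with T as in (T):
    a sum over all maps q: [k] -> [N] of the product of x[q_l][q_l']
    over the edges {l, l'} of H.  x is symmetric with zero diagonal."""
    total = 0.0
    for q in product(range(N), repeat=k):
        p = 1.0
        for l, lp in H_edges:
            p *= x[q[l]][q[lp]]
        total += p
    return total / N**k  # equals (total / N^{k-2}) / N^2

# triangle K_3 evaluated on the complete graph K_N
N = 5
x = [[0 if i == j else 1 for j in range(N)] for i in range(N)]
density = t_hom([(0, 1), (0, 2), (1, 2)], 3, x, N)
```

The same function with `H_edges = [(0, 1)]` recovers the edge density $2E/N^2$, matching the identification $t(H,x)=t(H,G_N)$ for $0$-$1$ valued $x$.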
Let $a$, $b_i$, $c_{ij}$ be the supremum norms of $f$ and let $\alpha$, $\beta_i$, $\gamma_{ij}$ be the corresponding supremum norms of $h$. For any $\epsilon>0$, let $\D_f(\epsilon)$ and $\D_h(\epsilon)$ be finite subsets of $\mathbb{R}^n$ associated with the gradient vectors of $f$ and $h$ respectively. Take $t>0$. Consider a generic conditional normalization constant of the form \begin{equation} \label{F} F^c=\log \sum_{x\in \{0,1\}^n: |h(x)|\leq tn}e^{f(x)}. \end{equation} \begin{theorem} \label{main1} Let $F^c$, $a$, $b_i$, $c_{ij}$, $\alpha$, $\beta_i$, $\gamma_{ij}$, $\D_f(\epsilon)$, and $\D_h(\epsilon)$ be defined as above. Let $I$ be defined as in (\ref{I}). Let $K=\log 2+2a/n$. Then for any $\delta>0$ and $\epsilon>0$, $F^c$ satisfies the upper bound \vspace{0.05cm} \begin{equation} F^c\leq \sup_{x\in [0,1]^n: |h(x)|\leq (t+\delta)n}(f(x)-I(x))+\text{ complexity term }+\text{ smoothness term}, \end{equation} where \begin{equation} \text{complexity term }=\frac{1}{4}\left(n\sum_{i=1}^n m_i^2\right)^{1/2}\epsilon+3n\epsilon+\log\left(\frac{12K\left(\frac{1}{n}\sum_{i=1}^n \beta_i^2\right)^{1/2}}{\delta\epsilon}\right) \end{equation} \begin{equation*} +\log|\D_f(\epsilon/3)|+ \log|\D_h((\delta\epsilon)/(6K))|, \text{ and} \end{equation*} \begin{equation} \text{smoothness term }=4\left(\sum_{i=1}^n (ln_{ii}+m_i^2)+\frac{1}{4}\sum_{i,j=1}^n (ln_{ij}^2+m_im_jn_{ij}+4m_in_{ij})\right)^{1/2} \end{equation} \begin{equation*} +\frac{1}{4}\left(\sum_{i=1}^n m_i^2\right)^{1/2}\left(\sum_{i=1}^n n_{ii}^2\right)^{1/2}+3\sum_{i=1}^n n_{ii}+\log 2, \end{equation*} where \begin{equation} l=a+nK, \end{equation} \begin{equation} m_i=b_i+\frac{2K \beta_i}{\delta}, \end{equation} \begin{equation} n_{ij}=c_{ij}+\frac{2K \gamma_{ij}}{\delta}+\frac{6K \beta_i \beta_j}{n\delta^2}. 
\end{equation} \vskip.2truein \noindent Moreover, $F^c$ satisfies the lower bound \begin{equation} F^c\geq \sup_{x\in [0,1]^n: |h(x)|\leq (t-\delta_0)n}(f(x)-I(x))-\epsilon_0 n-\eta_0 n-\log 2, \end{equation} where \begin{equation} \delta_0=\frac{\sqrt{6}}{n}\left(\sum_{i=1}^n (\alpha \gamma_{ii}+\beta_i^2)\right)^{1/2}, \end{equation} \begin{equation} \epsilon_0=2\sqrt{\frac{6}{n}}, \end{equation} \begin{equation} \eta_0=\frac{\sqrt{6}}{n}\left(\sum_{i=1}^n (a c_{ii}+b_i^2)\right)^{1/2}. \end{equation} \end{theorem} The proof of Theorem \ref{main1} follows a similar line of reasoning as in the proof of Theorem 1.1 of Chatterjee and Dembo \cite{CD1}, however the argument is more involved due to the following reasons. First, instead of having a one-sided constraint $f \geq tn$ as in Theorem 1.1, we have a two-sided constraint $|h|\leq tn$, and this calls for a minor modification of the function $\psi$. Then, more importantly, in Theorem 1.1, the upper and lower bounds are established for a probability measure, whereas here we are trying to establish the upper and lower bounds for the normalization constant of a probability measure with exponential weights. So to justify the upper bound, rather than checking the smoothness condition and the low complexity gradient condition for a single function $g$, which is connected to the constraint on $f$ as in the proof of Theorem 1.1, we need to check the smoothness condition and the low complexity gradient condition for the sum of two functions $f+e$ in our proof, where $f$ is the weight in the exponent and $e$ is connected to the constraint on $h$; while to justify the lower bound, rather than considering two small probability sets $\mathcal{A}$ and $\mathcal{A}'$ as in the proof of Theorem 1.1, we need to consider the probability of one more set $A_3$, which deals with the weight deviation in the exponent in our proof. 
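Before turning to the proof, the object (\ref{F}) can be illustrated on a toy case (a hypothetical computation, not from the paper): take a linear $f(x)=\theta\sum_i x_i$ and $h(x)=\sum_i x_i-ne_0$, where $\theta$ and $e_0$ are illustrative parameters. The constrained sum then depends only on $k=\sum_i x_i$, so brute-force enumeration and a binomial regrouping must agree exactly.

```python
import math
from itertools import product

def Fc_bruteforce(n, theta, e0, t):
    """F^c of (F): log of the sum of exp(f(x)) over x in {0,1}^n with
    |h(x)| <= t*n, where f(x) = theta*sum(x) and h(x) = sum(x) - n*e0."""
    return math.log(sum(math.exp(theta * sum(x))
                        for x in product((0, 1), repeat=n)
                        if abs(sum(x) - n * e0) <= t * n))

def Fc_binomial(n, theta, e0, t):
    """The same quantity, grouping the 2^n points by k = sum(x)."""
    return math.log(sum(math.comb(n, k) * math.exp(theta * k)
                        for k in range(n + 1)
                        if abs(k - n * e0) <= t * n))
```

When the constraint is vacuous ($t\geq 1$), the binomial form collapses to $n\log(1+e^{\theta})$, the unconstrained normalization for a linear $f$.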
\vskip.2truein \noindent \textit{Proof of the upper bound.} Let $g: \mathbb{R} \rightarrow \mathbb{R}$ be a function that is twice continuously differentiable, non-decreasing, and satisfies $g(x)=-1$ if $x \leq -1$ and $g(x)=0$ if $x \geq 0$. Let $L_1=\Vert g' \Vert$ and $L_2=\Vert g'' \Vert$. Chatterjee and Dembo \cite{CD1} described one such $g$: \begin{equation} g(x)=10(x+1)^3-15(x+1)^4+6(x+1)^5-1, \end{equation} which gives $L_1 \leq 2$ and $L_2 \leq 6$. Define \begin{equation} \psi(x)=Kg((t-|x|)/\delta). \end{equation} Then clearly $\psi(x)=-K$ if $|x|\geq t+\delta$, $\psi(x)=0$ if $|x|\leq t$, and $\psi(x)$ is non-decreasing for $-(t+\delta)\leq x\leq -t$ and non-increasing for $t\leq x\leq t+\delta$. We also have \begin{equation} \Vert \psi \Vert \leq K, \hspace{0.5cm} \Vert \psi' \Vert \leq \frac{2 K}{\delta}, \hspace{0.5cm} \Vert \psi'' \Vert \leq \frac{6 K}{\delta^2}. \end{equation} Let $e(x)=n\psi(h(x)/n)$. The plan is to apply Theorem \ref{CD1} to the function $f+e$ instead of $f$ only. Note that \begin{equation} \sum_{x\in \{0,1\}^n: |h(x)|\leq tn}e^{f(x)} \leq \sum_{x\in \{0,1\}^n} e^{f(x)+e(x)}. \end{equation} We estimate $f(x)+e(x)-I(x)$ over $[0,1]^n$. There are three cases. \begin{itemize} \item If $|h(x)|\leq tn$, then \begin{equation} f(x)+e(x)-I(x)=f(x)-I(x) \leq \sup_{x\in [0,1]^n: |h(x)|\leq (t+\delta)n}(f(x)-I(x)). \end{equation} \item If $|h(x)|\geq (t+\delta)n$, then \begin{align} f(x)+e(x)-I(x)&=f(x)-nK-I(x)\leq a+n\log 2-nK \\ &\leq -a \leq \sup_{x\in [0,1]^n: |h(x)|\leq (t+\delta)n}(f(x)-I(x)). \nonumber \end{align} \item If $|h(x)|=(t+\delta')n$ for some $0<\delta'<\delta$, then \begin{equation} f(x)+e(x)-I(x)\leq f(x)-I(x) \leq \sup_{x\in [0,1]^n: |h(x)|\leq (t+\delta)n}(f(x)-I(x)). \end{equation} \end{itemize} This shows that \begin{equation} \sup_{x\in [0,1]^n}(f(x)+e(x)-I(x)) \leq \sup_{x\in [0,1]^n: |h(x)|\leq (t+\delta)n}(f(x)-I(x)). \end{equation} We check the smoothness condition for $f+e$ first. 
Note that \begin{equation} \Vert f+e \Vert \leq a+nK=l, \end{equation} and for any $i$, \begin{equation} \left \Vert \frac{\partial (f+e)}{\partial x_i} \right \Vert \leq b_i+\frac{2K \beta_i}{\delta}=m_i, \end{equation} and for any $i$, $j$, \begin{equation} \left \Vert \frac{\partial^2 (f+e)}{\partial x_i \partial x_j} \right \Vert \leq c_{ij}+\frac{2K \gamma_{ij}}{\delta}+\frac{6K \beta_i \beta_j}{n\delta^2}=n_{ij}. \end{equation} Next we check the low complexity gradient condition for $f+e$. Let \begin{equation} \epsilon'=\frac{\epsilon}{3\Vert \psi' \Vert} \text{ and } \tau=\frac{\epsilon}{3\left(\frac{1}{n} \sum_{i=1}^n \beta_i^2\right)^{1/2}}. \end{equation} Define \begin{multline} \D(\epsilon)=\{d^f+\theta d^h: d^f\in \D_f(\epsilon/3), d^h\in \D_h(\epsilon'), \\ \text{ and } \theta=j\tau \text{ for some integer } -\Vert \psi' \Vert /\tau<j<\Vert \psi' \Vert /\tau\}. \end{multline} Note that \begin{equation} |\D(\epsilon)| \leq \frac{2\Vert\psi'\Vert}{\tau}|\D_f(\epsilon/3)||\D_h(\epsilon')|. \end{equation} Let $e_i=\partial e/\partial x_i$. Take any $x\in [0,1]^n$ and choose $d^f \in \D_f(\epsilon/3)$ and $d^h\in \D_h(\epsilon')$. Choose an integer $j$ between $-\Vert \psi' \Vert /\tau$ and $\Vert \psi' \Vert /\tau$ such that $|\psi'(h(x)/n)-j\tau|\leq \tau$. Let $d=d^f+j\tau d^h$ so that $d\in \D(\epsilon)$. Then \begin{equation} \sum_{i=1}^n (f_i(x)+e_i(x)-d_i)^2=\sum_{i=1}^n\left((f_i(x)-d^f_i)+(\psi'(h(x)/n)h_i(x)-j\tau d^h_i)\right)^2 \end{equation} \begin{eqnarray*} &\leq& 3\sum_{i=1}^n (f_i(x)-d^f_i)^2+3(\psi'(h(x)/n)-j\tau)^2\sum_{i=1}^n h_i(x)^2+3\Vert \psi' \Vert^2\sum_{i=1}^n (h_i(x)-d^h_i)^2\\ &\leq& \frac{1}{3}n\epsilon^2+3\tau^2 \sum_{i=1}^n \beta_i^2+3\Vert \psi' \Vert^2 n\epsilon'^2=n\epsilon^2. \end{eqnarray*} Thus $\D(\epsilon)$ is a finite subset of $\mathbb{R}^n$ associated with the gradient vector of $f+e$. The proof is completed by applying Theorem \ref{CD1}. 
\qed \vskip.2truein \noindent \textit{Proof of the lower bound.} Fix any $y \in [0,1]^n$ such that $|h(y)|\leq (t-\delta_0)n$. Let $Y=(Y_1,\dots,Y_n)$ be a random vector with independent components, where each $Y_i$ is a $\text{Bernoulli}(y_i)$ random variable. Let $Y^{(i)}$ be the random vector $(Y_1,\dots,Y_{i-1}, 0, Y_{i+1}, \dots, Y_n)$. Define $g(x,y)=\sum_{i=1}^n \left(x_i\log y_i+(1-x_i)\log(1-y_i)\right)$, so that $\PR(Y=x)=e^{g(x,y)}$ for every $x\in \{0,1\}^n$. Let \begin{equation} A_1=\{x\in \{0,1\}^n: |h(x)| \leq tn\}, \end{equation} \begin{equation} A_2=\{x\in \{0,1\}^n: |g(x,y)-I(y)|\leq \epsilon_0 n\}, \end{equation} \begin{equation} A_3=\{x\in \{0,1\}^n: |f(x)-f(y)|\leq \eta_0 n\}. \end{equation} Let $A=A_1\cap A_2\cap A_3$. Then \begin{eqnarray} \label{lower} \sum_{x\in \{0,1\}^n: |h(x)|\leq tn}e^{f(x)} &=&\sum_{x\in A_1}e^{f(x)-g(x,y)+g(x,y)}\\\notag &\geq&\sum_{x\in A}e^{f(x)-g(x,y)+g(x,y)}\\\notag &\geq&e^{f(y)-I(y)-(\epsilon_0+\eta_0) n}\PR(Y\in A). \end{eqnarray} We first consider $\PR(Y\in A_1)$. Let $U=h(Y)-h(y)$. For $r\in [0,1]$ and $x\in [0,1]^n$ define $u_i(r,x)=h_i(rx+(1-r)y)$. Note that \begin{equation} U=\int_0^1 \sum_{i=1}^n (Y_i-y_i)u_i(r,Y)dr, \end{equation} which implies \begin{equation} \ER(U^2)=\int_0^1 \sum_{i=1}^n \ER((Y_i-y_i)u_i(r,Y)U)dr. \end{equation} Let $U_i=h(Y^{(i)})-h(y)$ so that $Y^{(i)}$ and $U_i$ are functions of the random variables $(Y_j)_{j\neq i}$ only. By the independence of $Y_i$ and $(Y^{(i)}, U_i)$, we have \begin{equation} \ER((Y_i-y_i)u_i(r,Y^{(i)})U_i)=0. \end{equation} Therefore \begin{eqnarray} |\ER((Y_i-y_i)u_i(r,Y)U)|&\leq& \ER|(u_i(r,Y)-u_i(r,Y^{(i)}))U|+\ER|u_i(r,Y^{(i)})(U-U_i)|\\\notag &\leq&\left\Vert\frac{\partial u_i}{\partial x_i}\right\Vert\Vert U \Vert+\Vert u_i \Vert \left\Vert U-U_i \right \Vert \\\notag &\leq& 2\alpha r\gamma_{ii}+\beta_i^2. \end{eqnarray} This gives \begin{equation} \label{YA1} \PR(Y \in A_1^c) \leq \PR(|U|\geq \delta_0 n)\leq \frac{\ER(U^2)}{\delta_0^2 n^2} \leq \frac{\sum_{i=1}^n (\alpha \gamma_{ii}+\beta_i^2)}{\delta_0^2 n^2}=\frac{1}{6}. \end{equation} Next we consider $\PR(Y\in A_2)$. 
Note that \begin{equation} \ER(g(Y,y))=I(y) \end{equation} and \begin{eqnarray} \VR(g(Y,y))&=&\sum_{i=1}^n \VR(Y_i\log y_i+(1-Y_i)\log(1-y_i))\\\notag &=&\sum_{i=1}^n y_i(1-y_i)\left(\log \frac{y_i}{1-y_i}\right)^2. \end{eqnarray} For $x\in [0,1]$, since $|\sqrt{x}\log x|\leq 1$, we have \begin{equation} x(1-x)\left(\log \frac{x}{1-x}\right)^2 \leq \left(|\sqrt{x}\log x|+|\sqrt{1-x}\log(1-x)|\right)^2\leq 4. \end{equation} Therefore \begin{equation} \label{YA2} \PR(Y\in A_2^c)\leq \PR(|g(Y,y)-I(y)|\geq \epsilon_0 n)\leq \frac{\VR (g(Y,y))}{\epsilon_0^2 n^2}\leq \frac{4}{\epsilon_0^2 n}=\frac{1}{6}. \end{equation} Finally we consider $\PR(Y\in A_3)$. Let $V=f(Y)-f(y)$. For $r\in [0,1]$ and $x\in [0,1]^n$ define $v_i(r,x)=f_i(rx+(1-r)y)$. Note that \begin{equation} V=\int_0^1 \sum_{i=1}^n (Y_i-y_i)v_i(r,Y)dr, \end{equation} which implies \begin{equation} \ER(V^2)=\int_0^1 \sum_{i=1}^n \ER((Y_i-y_i)v_i(r,Y)V)dr. \end{equation} Let $V_i=f(Y^{(i)})-f(y)$ so that $Y^{(i)}$ and $V_i$ are functions of the random variables $(Y_j)_{j\neq i}$ only. By the independence of $Y_i$ and $(Y^{(i)}, V_i)$, we have \begin{equation} \ER((Y_i-y_i)v_i(r,Y^{(i)})V_i)=0. \end{equation} Therefore \begin{eqnarray} |\ER((Y_i-y_i)v_i(r,Y)V)|&\leq& \ER|(v_i(r,Y)-v_i(r,Y^{(i)}))V|+\ER|v_i(r,Y^{(i)})(V-V_i)|\\\notag &\leq&\left\Vert\frac{\partial v_i}{\partial x_i}\right\Vert\Vert V \Vert+\Vert v_i \Vert \left\Vert V-V_i \right \Vert \\\notag &\leq& 2ar c_{ii}+b_i^2. \end{eqnarray} This gives \begin{equation} \label{YA3} \PR(Y \in A_3^c)\leq\PR(|V|\geq \eta_0 n) \leq \frac{\ER(V^2)}{\eta_0^2 n^2} \leq \frac{\sum_{i=1}^n (ac_{ii}+b_i^2)}{\eta_0^2 n^2}=\frac{1}{6}. \end{equation} Combining (\ref{YA1}), (\ref{YA2}) and (\ref{YA3}), we have \begin{equation} \PR(Y\in A)\geq 1-\PR(Y\in A_1^c)-\PR(Y\in A_2^c)-\PR(Y\in A_3^c)\geq \frac{1}{2}. \end{equation} Plugging this into (\ref{lower}) and taking the supremum over $y$ completes the proof. 
\qed \section{Application to exponential random graphs} \label{application} As mentioned earlier, we would like to apply Theorem \ref{main1} to derive the exact asymptotics for the conditional normalization constant of constrained exponential random graphs. Recall the definition of an $s$-parameter family of conditional exponential random graphs introduced earlier, where we assume that the ``ideal'' edge density of the graph is $e$. Let \begin{equation} f(x)=\zeta_1T_1(x)+\cdots+\zeta_sT_s(x) \text{ and } h(x)=T_1(x)-N^2e, \end{equation} where $T_i(x)/N^2$ is the equivalent notion of homomorphism density as defined in (\ref{T}). Let $n=\binom N2$. We compare the conditional normalization constant $\psi^{e, \zeta}_{N,t}$ (\ref{cpsi1}) for constrained exponential random graphs with the generic conditional normalization constant $F^c$ (\ref{F}). Note that the constraint $|e(G_N)-e|\leq t$ may be translated into $|T_1(x)-N^2e|\leq N^2t$, and if we further redefine $t$ to be $(1-1/N)t'/2$ then we arrive at the generic constraint $|h(x)|\leq t'n$ as in (\ref{F}). Thus $\psi^{e, \zeta}_{N,t}=F^c/N^2$. In the following we give a concrete error bound for $\psi^{e, \zeta}_{N,t}$ using the estimates in Theorem \ref{main1}. Our proof is analogous to the proof of Theorem 1.6 in Chatterjee and Dembo \cite{CD1}, where they analyzed various error bounds for the generic normalization constant obtained in Theorem 1.5 (referenced as Theorem \ref{CD1} in this paper) and applied it in the exponential setting. Instead, we analyze the various error bounds for the generic conditional normalization constant obtained in Theorem \ref{main1} and apply it in the constrained exponential setting. The rationales behind the two arguments are essentially the same, except that the argument to be presented in the proof of Theorem \ref{general} is more involved due to the imposed constraint. 
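For a concrete feel for (\ref{cpsi1}) in the edge-triangle case, the conditional normalization constant can be computed exactly for very small $N$ by enumerating all graphs (hypothetical code, not part of the paper; as an illustrative convention, the constraint is imposed here on the edge homomorphism density $2E(G_N)/N^2$, which matches $h(x)=T_1(x)-N^2e$ up to scaling):

```python
import math
from itertools import combinations, product

def cond_psi(N, e0, t, z1, z2):
    """Brute-force psi^{e,zeta}_{N,t} of (cpsi1) for the edge-triangle model
    (H_1 an edge, H_2 a triangle), keeping only graphs whose edge
    homomorphism density 2E/N^2 lies within t of e0."""
    pairs = list(combinations(range(N), 2))
    total = 0.0
    for bits in product((0, 1), repeat=len(pairs)):
        adj = dict(zip(pairs, bits))
        t_edge = 2 * sum(bits) / N**2          # edge hom. density
        if abs(t_edge - e0) > t:
            continue                           # outside the constraint
        tri = sum(adj[(i, j)] * adj[(i, k)] * adj[(j, k)]
                  for i, j, k in combinations(range(N), 3))
        t_tri = 6 * tri / N**3                 # triangle hom. density
        total += math.exp(N**2 * (z1 * t_edge + z2 * t_tri))
    return math.log(total) / N**2
```

With the constraint made vacuous ($t\geq 1$) and $\zeta_1=\zeta_2=0$, this reduces to $\binom{N}{2}\log 2/N^2$, the logarithm of the number of graphs; of course, only the $N\rightarrow\infty$, $t\rightarrow 0$ regime analyzed below exhibits the interesting behavior.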
\begin{theorem} \label{general} Let $s$ be a positive integer and $H_1,\dots,H_s$ be fixed finite simple graphs. Let $N$ be another positive integer and let $n=\binom N2$. Define $T_1,\dots,T_s$ accordingly as in the paragraph before Theorem \ref{previous}. Let $\zeta_1,\dots,\zeta_s$ be $s$ real parameters and define $\psi^{e, \zeta}_{N,t}$ as in (\ref{cpsi1}). Let $f(x)=\zeta_1 T_1(x)+\cdots+\zeta_s T_s(x)$, $B=1+|\zeta_1|+\cdots+|\zeta_s|$, and $I$ be defined as in (\ref{I}). Take $\kappa>8$. Then \begin{equation} \sup_{x\in [0,1]^n: |h(x)|\leq (t'-cn^{-1/(2\kappa)})n}\frac{f(x)-I(x)}{N^2}-CBN^{-1/2} \leq \psi^{e, \zeta}_{N,t} \end{equation} \begin{equation*} \leq \sup_{x\in [0,1]^n: |h(x)|\leq (t'+cn^{-1/(2\kappa)})n}\frac{f(x)-I(x)}{N^2}+CB^{8/5}N^{(8-\kappa)/(5\kappa)}(\log N)^{1/5}\left(1+\frac{\log B}{\log N}\right) \end{equation*} \begin{equation*} +CB^2N^{(2-\kappa)/(2\kappa)}, \end{equation*} where $t'=2Nt/(N-1)$ and $c$ and $C$ are constants that may depend only on $H_1,\dots,H_s$ and $e$. \end{theorem} \begin{proof} Chatterjee and Dembo \cite{CD1} checked that $T_i(x)$ satisfies both the smoothness condition and the low complexity gradient condition stated at the beginning of this paper, which readily implies that $f$ and $h$ satisfy the assumptions of Theorem \ref{main1}. Recall that the indexing set for quantities like $b_i$ and $\gamma_{ij}$, instead of being $\{1,\dots,n\}$, is now $\{(i,j): 1\leq i<j\leq N\}$, and for simplicity we write $(ij)$ instead of $(i, j)$. Let $a$, $b_{(ij)}$, $c_{(ij)(i'j')}$ be the supremum norms of $f$ and let $\alpha$, $\beta_{(ij)}$, $\gamma_{(ij)(i'j')}$ be the corresponding supremum norms of $h$. For any $\epsilon>0$, let $\D_f(\epsilon)$ and $\D_h(\epsilon)$ be finite subsets of $\mathbb{R}^n$ associated with the gradient vectors of $f$ and $h$ respectively. Based on the bounds for $T_i$ (\ref{T1}) (\ref{T2}) (\ref{T3}), we derive the bounds for $f$ and $h$. 
\begin{equation} a\leq CBN^2, \hspace{0.5cm} b_{(ij)}\leq CB, \end{equation} \begin{equation} c_{(ij)(i'j')}\leq \left\{ \begin{array}{ll} CBN^{-1}, & \hbox{if $|\{i,j,i',j'\}|=2$ or $3$;} \\ CBN^{-2}, & \hbox{if $|\{i,j,i',j'\}|=4$,} \\ \end{array} \right. \end{equation} \begin{equation} |\D_f(\epsilon)|\leq \prod_{i=1}^s |\D_i(\epsilon/(|\zeta_i| s))|\leq \exp\left(\frac{CB^4N}{\epsilon^4}\log\frac{CB}{\epsilon}\right). \end{equation} \begin{equation} \alpha\leq CN^2, \hspace{0.5cm} \beta_{(ij)}\leq C, \end{equation} \begin{equation} \gamma_{(ij)(i'j')}\leq \left\{ \begin{array}{ll} CN^{-1}, & \hbox{if $|\{i,j,i',j'\}|=2$ or $3$;} \\ CN^{-2}, & \hbox{if $|\{i,j,i',j'\}|=4$,} \\ \end{array} \right. \end{equation} \begin{equation} |\D_h(\epsilon)|=|\D_1(\epsilon)|\leq \exp\left(\frac{CN}{\epsilon^4}\log\frac{C}{\epsilon}\right). \end{equation} We then estimate the lower and upper error bounds for $\psi^{e, \zeta}_{N,t}$ using the bounds on $f$ and $h$ obtained above. First the lower bound: \begin{equation} \sum_{(ij)}ac_{(ij)(ij)}\leq CB^2N^3, \hspace{0.5cm} \sum_{(ij)}b_{(ij)}^2 \leq CB^2N^2. \end{equation} \begin{equation} \sum_{(ij)}\alpha \gamma_{(ij)(ij)}\leq CN^3, \hspace{0.5cm} \sum_{(ij)}\beta_{(ij)}^2 \leq CN^2. \end{equation} Therefore \begin{equation} \label{delta0} \delta_0 \leq cn^{-1/4}\leq cn^{-1/(2\kappa)}, \end{equation} \begin{equation} \frac{\epsilon_0 n+\eta_0n+\log 2}{N^2}\leq CN^{-1}+CBN^{-1/2}+CN^{-2}\leq CBN^{-1/2}. \end{equation} This gives \begin{equation} \psi^{e, \zeta}_{N,t} \geq \sup_{x\in [0,1]^n: |h(x)|\leq (t'-cn^{-1/(2\kappa)})n}\frac{f(x)-I(x)}{N^2}-CBN^{-1/2}. \end{equation} Next the more involved upper bound: Assume that $n^{-1/4}\leq \delta\leq 1$ and $0<\epsilon\leq 1$. 
Since $K\leq CB$, this implies that \begin{equation} l\leq CBN^2, \hspace{0.5cm} m_{(ij)}\leq CB\delta^{-1}, \end{equation} \begin{equation} n_{(ij)(i'j')}\leq \left\{ \begin{array}{ll} CBN^{-1}\delta^{-1}, & \hbox{if $|\{i,j,i',j'\}|=2$ or $3$;} \\ CBN^{-2}\delta^{-2}, & \hbox{if $|\{i,j,i',j'\}|=4$.} \\ \end{array} \right. \end{equation} The following estimates are direct consequences of the bounds on $l$, $m_{(ij)}$, and $n_{(ij)(i'j')}$. \begin{equation} \sum_{(ij)}ln_{(ij)(ij)}\leq CB^2N^3\delta^{-1}, \hspace{0.5cm} \sum_{(ij)}m_{(ij)}^2\leq CB^2N^2\delta^{-2}, \end{equation} \begin{equation} \sum_{(ij)(i'j')}ln_{(ij)(i'j')}^2 \leq CB^3N^3\delta^{-2}, \end{equation} \begin{equation} \sum_{(ij)(i'j')}m_{(ij)}(m_{(i'j')}+4)n_{(ij)(i'j')}\leq CB^3N^2\delta^{-4}, \end{equation} \begin{equation} \sum_{(ij)}n_{(ij)(ij)}^2\leq CB^2\delta^{-2}, \hspace{0.5cm} \sum_{(ij)}n_{(ij)(ij)}\leq CBN\delta^{-1}. \end{equation} Therefore \begin{align} \text{complexity term }&\leq CBN^2\delta^{-1}\epsilon+CN^2\epsilon+\log\frac{CB}{\delta\epsilon}+\frac{CB^4N}{\epsilon^4}\log\frac{CB}{\epsilon}+\frac{CB^4N}{\delta^4\epsilon^4}\log\frac{CB}{\delta\epsilon} \\ &\leq CBN^2\delta^{-1}\epsilon+\frac{CB^4N}{\delta^4\epsilon^4}\log\frac{CB}{\delta\epsilon}. \nonumber \end{align} \begin{equation} \text{smoothness term }\leq CB^{3/2}N^{3/2}\delta^{-1}+CB^2N\delta^{-2}+CBN\delta^{-1}+C\leq CB^{2}N^{3/2}\delta^{-1}. \end{equation} Taking $\epsilon=\left((B^3\log N)/(\delta^3 N)\right)^{1/5}$, this gives \begin{equation} \psi^{e, \zeta}_{N,t} \leq \sup_{x\in [0,1]^n: |h(x)|\leq (t'+\delta)n}\frac{f(x)-I(x)}{N^2}+CB^{8/5}N^{-1/5}(\log N)^{1/5}\delta^{-8/5}\left(1+\frac{\log B}{\log N}\right) \end{equation} \begin{equation*} +CB^2N^{-1/2}\delta^{-1}. 
\end{equation*} For $n$ large enough, we may choose $\delta=cn^{-1/(2\kappa)}$ as in (\ref{delta0}), which yields a further simplification \begin{equation} \psi^{e, \zeta}_{N,t} \leq \sup_{x\in [0,1]^n: |h(x)|\leq (t'+cn^{-1/(2\kappa)})n}\frac{f(x)-I(x)}{N^2}+CB^{8/5}N^{(8-\kappa)/(5\kappa)}(\log N)^{1/5}\left(1+\frac{\log B}{\log N}\right) \end{equation} \begin{equation*} +CB^2N^{(2-\kappa)/(2\kappa)}. \end{equation*} \end{proof} We can do a more refined analysis of Theorem \ref{general} when the $\zeta_i$ are non-negative for $i\geq 2$. \begin{theorem} \label{special} Let $s$ be a positive integer and $H_1,\dots,H_s$ be fixed finite simple graphs. Let $N$ be another positive integer and let $n=\binom N2$. Let $\zeta_1,\dots,\zeta_s$ be $s$ real parameters and suppose $\zeta_i\geq 0$ for $i\geq 2$. Define $\psi^{e, \zeta}_{N,t}$ as in (\ref{cpsi1}). Let $B=1+|\zeta_1|+\cdots+|\zeta_s|$ and $I$ be defined as in (\ref{I}). Take $\kappa>8$. Then \begin{equation} \label{last} -cBN^{-1/\kappa} \leq \psi^{e, \zeta}_{N,t}-\sup_{|x-e|\leq t}\left\{\zeta_1 x+\cdots+\zeta_s x^{e(H_s)}-\frac{1}{2}I(x)\right\} \end{equation} \begin{equation*} \leq CB^{8/5}N^{(8-\kappa)/(5\kappa)}(\log N)^{1/5}\left(1+\frac{\log B}{\log N}\right) \end{equation*} \begin{equation*} +CB^2N^{-1/\kappa}, \end{equation*} where $e(H_i)$ denotes the number of edges in $H_i$ and $c$ and $C$ are constants that may depend only on $H_1,\dots,H_s$, $e$, and $t$. \end{theorem} \begin{remark} If the $H_i$, $i\geq 2$, are all stars, then the conclusions of Theorem \ref{special} hold for any $\zeta_1,\dots,\zeta_s$. \end{remark} \begin{remark} As an example, consider the case where $s=2$, $H_1$ is a single edge and $H_2$ is a triangle. 
Theorem \ref{special} shows that the difference between $\psi^{e, \zeta}_{N,t}$ and $\sup_{|x-e|\leq t}\left\{\zeta_1 x+\zeta_2 x^{3}-\frac{1}{2}I(x)\right\}$ tends to zero as long as $|\zeta_1|+|\zeta_2|$ grows slower than $N^{(\kappa-8)/(8\kappa)}(\log N)^{-1/8}$, thereby allowing a small degree of sparsity for $\zeta_i$. When the $\zeta_i$ are fixed, it provides an approximation error bound of order $N^{(8-\kappa)/(5\kappa)}(\log N)^{1/5}$, substantially better than the negative power of $\log^* N$ given by Szemer\'{e}di's lemma. \end{remark} \begin{proof} Fix $t>0$. We find upper and lower bounds for \begin{equation} L_N=\sup_{x\in [0,1]^n: |h(x)|\leq (t'+cn^{-1/(2\kappa)})n}\frac{f(x)-I(x)}{N^2} \end{equation} and \begin{equation} M_N=\sup_{x\in [0,1]^n: |h(x)|\leq (t'-cn^{-1/(2\kappa)})n}\frac{f(x)-I(x)}{N^2} \end{equation} in Theorem \ref{general} when $N$ is large. On one hand, by associating with each $x\in [0,1]^n$ the symmetric step function $g$ that takes the value $x_{ij}$ on $(\frac{i-1}{N}, \frac{i}{N}] \times (\frac{j-1}{N}, \frac{j}{N}]$ for $i\neq j$, we have \begin{equation} L_N \leq \sup_{\substack{g: [0,1]^2 \rightarrow [0,1], g(x,y)=g(y,x)\\ |e(g)-e| \leq t+\frac{c}{2}n^{-1/(2\kappa)}}} \left\{\zeta_1t(H_1, g)+\cdots+\zeta_st(H_s, g)-\frac{1}{2}\iint_{[0,1]^2}I(g(x,y))dxdy\right\}, \end{equation} \begin{equation} M_N \leq \sup_{\substack{g: [0,1]^2 \rightarrow [0,1], g(x,y)=g(y,x)\\ |e(g)-e| \leq t}} \left\{\zeta_1t(H_1, g)+\cdots+\zeta_st(H_s, g)-\frac{1}{2}\iint_{[0,1]^2}I(g(x,y))dxdy\right\}. \end{equation} It was proved in Chatterjee and Diaconis \cite{CD} that when the $\zeta_i$ are non-negative for $i\geq 2$, the above suprema may only be attained at constant functions on $[0,1]^2$. Therefore \begin{equation} L_N \leq \sup_{ |x-e|\leq t+\frac{c}{2}n^{-1/(2\kappa)}} \left\{\zeta_1 x+\cdots+\zeta_s x^{e(H_s)}-\frac{1}{2}I(x)\right\}, \end{equation} \begin{equation} M_N \leq \sup_{ |x-e|\leq t} \left\{\zeta_1 x+\cdots+\zeta_s x^{e(H_s)}-\frac{1}{2}I(x)\right\}. 
\end{equation} On the other hand, by considering $g'(x,y)=x_{ij}\equiv x$ for any $i\neq j$, we have \begin{equation} L_N \geq \sup_{|\frac{N-1}{N}x-e|\leq t}\left\{\zeta_1 x+\cdots+\zeta_k x^{e(H_k)}-\frac{1}{2}I(x)\right\}+O(\frac{1}{N}), \end{equation} \begin{equation} M_N \geq \sup_{|\frac{N-1}{N}x-e|\leq t-\frac{c}{2}n^{-1/(2\kappa)}}\left\{\zeta_1 x+\cdots+\zeta_k x^{e(H_k)}-\frac{1}{2}I(x)\right\}+O(\frac{1}{N}). \end{equation} The $O(1/N)$ factor comes from the following consideration. The difference between $I(g')$ and $I(x)$ is easy to estimate, while the difference between $t(H_i, g')$ and $t(H_i, x)=x^{e(H_i)}$ is caused by the zero diagonal terms $x_{ii}$. We do a broad estimate of (\ref{T}) and find that it is bounded by $c_i/N$, where $c_i$ is a constant that only depends on $H_i$. Putting everything together, \begin{equation} L_N=M_N=\sup_{ |x-e|\leq t} \left\{\zeta_1 x+\cdots+\zeta_k x^{e(H_k)}-\frac{1}{2}I(x)\right\}+O(\frac{1}{N^{1/\kappa}}). \end{equation} The rest of the proof follows. \end{proof} \section*{Acknowledgements} The author thanks the anonymous referees for their helpful comments and suggestions.
TITLE: Span and Dimension: A subspace QUESTION [5 upvotes]: If $A$ is a finite set of linearly independent vectors then the dimension of the subspace spanned by $A$ is equal to the number of vectors in $A$. This is obviously true. Since $A$ is a finite set of linearly independent vectors and spans a subspace, $A$ is a basis for that subspace spanned by $A$, and by definition the dimension of a vector space is equal to the cardinality of any basis. I would like help with writing the above argument in a concise, precise manner with mathematical notation and other shorthand. Secondly, what tips and/or advice could you give in general to make my arguments and proofs as efficient (time-wise) as possible? REPLY [1 votes]: Here are some tips that I follow when writing proofs. Write in complete sentences including punctuation. (This seems contradictory since there are often so many symbols in math proofs. But symbols have exact meanings in words. For example, $\exists$ means "there exists". Anywhere you see $\exists$, in your mind you can replace that symbol with "there exists". In this way, math proofs should be paragraphs of complete sentences with punctuation.) Write down the relevant definitions first. Often, the proof is just showing that the circumstances match the definitions. I think you're trying to prove the statement: if $A$ is a finite set of linearly independent vectors then the dimension of the subspace spanned by $A$ is equal to the number of vectors in $A$. Here is one proof: The dimension of a vector subspace is the size of any of its bases. (Recall the theorem: all bases of a vector subspace have the same size.) A basis for a vector subspace $V$ is a set of linearly independent vectors that spans the subspace. We are given that $A$ is a set of linearly independent vectors, and $A$ certainly spans $\text{Span}(A)$. Therefore $A$ is a basis of the subspace $\text{Span}(A)$, and its dimension is $|A|$ (the number of elements in $A$).
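As a quick numerical sanity check (my own illustration, not part of the original answer): the dimension of the span equals the rank of the matrix whose columns are the vectors of $A$, so for independent columns the rank equals $|A|$.

```python
import numpy as np

# A hypothetical finite set A of three linearly independent vectors in R^4,
# stored as the columns of a matrix.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 3.0],
              [1.0, 1.0, 0.0]])

# dim Span(A) is the rank of this matrix; for independent columns it is |A|.
dim_span = np.linalg.matrix_rank(A)
num_vectors = A.shape[1]
print(dim_span == num_vectors)  # True: dimension equals the number of vectors
```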
TITLE: Solution curve for three-variable differential system QUESTION [2 upvotes]: Show that the functions $F(x,y,z)=x+y+z$ and $G(x,y,z)=x^2+y^2+z^2$ are integrals of the system of equations $dx/dt=y-z, dy/dt=z-x, dz/dt=x-y$, i.e. on any solution curve $(x(t),y(t),z(t))$ these functions are constant. Interpret the result geometrically. We have $dF/dt=dx/dt+dy/dt+dz/dt=(y-z)+(z-x)+(x-y)=0$, so $F(t)$ is constant. Also, $dG/dt=2xdx/dt+2ydy/dt+2zdz/dt=2x(y-z)+2y(z-x)+2z(x-y)=0$, so $G(t)$ is constant. But how can I interpret the result geometrically? I'm not even sure which geometric setting I should be looking at. REPLY [2 votes]: Since $F$ and $G$ are constant along every orbit, each solution curve lies simultaneously in a plane $x+y+z=F_0$ and on a sphere $x^2+y^2+z^2=G_0$, so it is contained in their intersection: a circle centred on the line $x=y=z$. (In the accompanying figure, the orbit of the ODE is the red coil, the level plane of $F$ is the green hexagonal patch parallel to it, and a scalar field depicts $G$.)
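To make the picture concrete, here is a small numerical check (my own sketch, not from the original answer): integrating the system with a classical RK4 scheme and confirming that $F$ and $G$ keep their initial values, so the orbit stays on the circle where the plane meets the sphere.

```python
import numpy as np

# The vector field of dx/dt = y - z, dy/dt = z - x, dz/dt = x - y.
def f(w):
    x, y, z = w
    return np.array([y - z, z - x, x - y])

# Classical RK4 integration starting from (1, 2, 3), so F = 6 and G = 14.
w = np.array([1.0, 2.0, 3.0])
h = 1e-3
for _ in range(10000):
    k1 = f(w); k2 = f(w + h/2*k1); k3 = f(w + h/2*k2); k4 = f(w + h*k3)
    w = w + h/6*(k1 + 2*k2 + 2*k3 + k4)

# F and G are (numerically) conserved: the orbit lies on the circle
# where the plane x+y+z = 6 meets the sphere x^2+y^2+z^2 = 14.
print(abs(w.sum() - 6.0) < 1e-8, abs((w**2).sum() - 14.0) < 1e-8)
```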
TITLE: Dimension of a space of matrices QUESTION [4 upvotes]: Let $m,n\in\mathbb{Z}$ and $r<\min(m,n)$. Denote by $M$ the set of $m\times n$ matrices over a field $k$, and let $M_r$ be the subset of matrices of rank at least equal to $r$. Now fix a matrix $A\in M_r$ and a subspace $W\in G(n-r,n)$ of dimension $n-r$ in $k^n$, such that $A\cdot W = 0$. Consider the set $$ T= \left\{ B\in M \mid B\cdot W \subset A\cdot k^n \right\}. $$ What is the easiest/most elegant way to see that $T$ has dimension $(n-r)(m-\operatorname{rank} A)$? EDIT: $T$ doesn't have dimension, but CO-dimension $(n-r)(m-\operatorname{rank} A)$ inside $M$. REPLY [1 votes]: That's not the answer I get. Let's assume that the rank of $A$ is $r$. Select a basis for $k^n$ such that $k^n\cong W_1\oplus V_1$, where $W_1=\operatorname{ker}(A)$ and $A$ restricted to $V_1$ is an isomorphism with the image of $A$. Now select a basis for $k^m$ such that $k^m\cong W_2\oplus V_2$, where $V_2=\operatorname{Im}(A)$. With respect to these bases, $A$ looks like $$ \begin{pmatrix} 0 & 0\\ 0& A_{22} \end{pmatrix} $$ where $A_{22}$ is an $r\times r$ block matrix with determinant nonzero. Suppose we have the special case where $W=W_1$. Then a $B$ satisfying what you want looks like $$ \begin{pmatrix} 0 & B_{12}\\ B_{21} & B_{22} \end{pmatrix}. $$ But, the dimension of the set of such matrices is $r(m-r)+r(n-r)+r^2$, which is not the same as $(n-r)(m-r)$. EDIT: With the added information and the assumption that the book is asking for the codimension of the subspace of matrices, here is an answer. Everything is as above, but the rank of $A$ may not be $r$. Thus $A_{22}$ will be a block matrix of size $s\times s$. Adjust accordingly for $B_{12}$, $B_{21}$, and $B_{22}$. Then the dimension we are looking for is the dimension of the block matrix $0$, which is exactly what is asked.
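One can test the codimension count numerically over $k=\mathbb{R}$. The sketch below (my own, with hypothetical sizes $m=5$, $n=4$, $r=2$) encodes the condition $B\cdot W\subset \operatorname{Im}(A)$ as linear constraints on the vectorization of $B$ and checks that the number of independent constraints is $(n-r)(m-\operatorname{rank} A)$.

```python
import numpy as np
rng = np.random.default_rng(0)

m, n, r = 5, 4, 2                                # hypothetical sizes
# Build a rank-r matrix A whose kernel contains a chosen (n-r)-dim subspace W.
W = rng.standard_normal((n, n - r))              # columns span W
P = np.eye(n) - W @ np.linalg.pinv(W)            # projector onto W^perp, so P W = 0
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n)) @ P
s = np.linalg.matrix_rank(A)                     # generically s = r = 2

# Orthonormal basis Q of Im(A)^perp:  B is in T  iff  Q^T B W = 0.
U, _, _ = np.linalg.svd(A)
Q = U[:, s:]                                     # m x (m - s)

# vec(Q^T B W) = (W^T kron Q^T) vec(B); the codimension of T in M is the
# rank of this constraint matrix, which equals rank(W) * rank(Q).
C = np.kron(W.T, Q.T)
codim = np.linalg.matrix_rank(C)
print(codim == (n - r) * (m - s))  # True
```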
TITLE: Radian, an arbitrary unit too? QUESTION [2 upvotes]: Why is the radian defined as the angle subtended at the centre of a circle when the arc length equals the radius? Why not the angle subtended when the arc length is twice as long as the radius, or the radius is twice as long as the arc length? Isn't putting $2\pi$ radians in a revolution similar to arbitrarily putting 360 degrees in a revolution? REPLY [6 votes]: You are right when you say that the angle can be described in the ways you specified as well. The radian is arbitrary in that sense. The relationship between subtended arc length and angle is not arbitrary. Radians are a convenient choice, which you can see if you relate arc length to radius. $$ s = r \cdot k \theta $$ If you want $k$ to be $1$, then you want to work in radians. Physicists run into a similar issue when choosing units. If the speed of light is $c$, the distance travelled by a light ray is $$ d = c t$$ The numerical value of $c$ depends on the units you are working in. If you choose to work in meters and seconds then $c=3.0\times 10^8 \frac{m}{s}$. If you choose to work in light-years and years then $c=1.0 \frac{\text{light-years}}{\text{year}}$. Update: There is a comment requesting clarification as to what I mean by "The relationship between subtended arc length and angle is not arbitrary". There are many ways you can assign numerical values to different angle measures, but that is still what you might call a "human convention". The way in which we agree to assign numbers to angles doesn't change what an angle is. No matter how you do it, there will still be a circular arc subtended by the angle which is unique to that angle. Therefore if you give me a scheme for assigning numbers to angles then it will always be possible for me to take a given angle measure and tell you the arc length for that angle. An example of this is in the way you proposed to define angles.
We could say that the value of an angle is twice the arc length subtended by the angle. If we did this there would still be a 1-1 correspondence between any angle measurement and the arc-length. In our case, if you told me you had an angle measured to be 2 meters in this convention, I could tell you the subtended arc length is 1 meter. Update about a year later Please don't consider the following as a formal proof of anything. There are some loose ends involving infinite limits that I didn't completely tidy up. Nevertheless this is a sketch of how one might get at the infinite series for sine using the idea of radian measure for angles, and it hopefully explains why this series specifically requires radians. You asked in the comment below why the Taylor series for $\sin$ is only valid for angles measured in radians. To understand why this is we need to have some idea where such a series would come from in the context of geometry. Remember we define the measure of an angle in radians in terms of the arc length subtended along a circle. Measuring the length of an arc is difficult to do geometrically. The standard approach involves approximating the circular arc with an arbitrarily large number of line segments which become indefinitely small. From this approach to measuring arc length we can conclude that if an angle is measured in radians then the following will be approximately true for very small angles, $$ \sin( \delta \theta ) \approx \delta \theta \qquad \cos(\delta\theta) \approx 1 $$ The above "small angle" formulas are the critical point in this derivation where we use the notion of radian measure in particular. If we measured angles in terms of degrees we would have some conversion factors showing up in these formulas. Now suppose we want to know the sine of some large angle $\theta$.
For a large enough integer $N$ we can write $\theta$ as a multiple $N$ of some very small angle $\delta \theta$, $$ N \delta \theta = \theta .$$ Using your favorite multiple angle identity it is possible to express $\sin(N \delta \theta)$ in terms of $\sin(\delta \theta)$ and $\cos(\delta \theta)$. In particular I'm going to use DeMoivre's Formula, $$ \cos(N\delta\theta) + i \sin( N \delta \theta) = (\cos(\delta\theta) + i \sin(\delta\theta))^N$$ Applying the binomial theorem on the right allows us to express this as , $$ \cos(N\delta\theta) + i \sin( N \delta \theta) = \sum_{k=0}^N i^k \sin^k(\delta\theta) \cos^{N-k}(\delta \theta) \frac{N!}{k!(N-k)!}$$ Now if $\delta \theta $ is small enough we can use the small angle identities for sine and cosine, $$ \cos(N\delta\theta) + i \sin( N \delta \theta) = \sum_{k=0}^N i^k (\delta\theta)^k \frac{N!}{k!(N-k)!}$$ Removing all explicit reference to $\delta \theta$ we get, $$ \cos(\theta) + i \sin(\theta) = \sum_{k=0}^N i^k (\theta)^k \frac{ N!}{k! N^k (N-k)!}$$ Now if we take the limit as $N$ becomes infinite the term $\frac{ N!}{k! N^k (N-k)!}$ will go to $\frac{1}{k!}$. Since all of the approximations we made become exact for infinite $N$ we take the limit and get, $$ \cos(\theta) + i \sin(\theta) = \sum_{k=0}^\infty i^k \theta^k \frac{1}{k!}$$ If we want the $\sin(\theta)$ then we need to take the imaginary part of this series which will give us only the odd terms in the series. We note that $Im(i) = 1$, $Im(i^3)=-1$, $Im(i^5)=1$, and so on. $$\sin(\theta) = \theta - \theta^3 / 3! + \theta^5 / 5! -\theta^7/7!+\cdots$$ So you can see that the form of this series is all based on a hypothesis about how to measure angles in terms of arc length and the multiple angle formula for $\sin$.
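The two limits used above are easy to check numerically (a sketch of mine, not part of the original answer): the coefficient $N!/(k!\,N^k(N-k)!)$ tends to $1/k!$ as $N$ grows, and the partial sums of the resulting series converge to $\sin\theta$.

```python
import math

# The coefficient N! / (k! N^k (N-k)!) tends to 1/k! as N grows.
def coeff(N, k):
    return math.comb(N, k) / N**k   # equals N! / (k! (N-k)! N^k)

for k in range(5):
    assert abs(coeff(10**6, k) - 1 / math.factorial(k)) < 1e-5

# Partial sums of theta - theta^3/3! + theta^5/5! - ... converge to sin(theta).
theta = 1.2
partial = sum((-1)**j * theta**(2*j + 1) / math.factorial(2*j + 1)
              for j in range(10))
print(abs(partial - math.sin(theta)) < 1e-12)  # True
```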
TITLE: Does Russell's paradox preclude us from using the power set to generate every possible set? QUESTION [0 upvotes]: Suppose I have the set of all things $\{a, b, c,... \}$. It seems to me that $ \mathcal P \{a, b, c,... \} $ would be the set of all sets, which sounds like it includes the set of all sets that do not contain themselves. However, I can't envision deriving that set from the power set of $\{a, b, c,... \}$. My thinking is that, so long as there are no sets in $\{a, b, c,... \}$, then the paradox doesn't arise. Otherwise, my reasoning, or my knowledge, is shamefully off. Where did I go wrong? REPLY [1 votes]: If you allow arbitrary subsets, you can prove that the set of all things cannot exist. Proof: Suppose this universal set $U$ exists such that $\forall x: x\in U$. Then there must exist a subset $S$ of $U$ such that $\forall x:[x\in S \iff x\in U \land x\notin x]$. Clearly, $S\in U$. Then, using only the rules of logic, we can obtain the contradiction $S\in S \iff S\notin S$. So, the set $U$ cannot exist. Corollary: Every set excludes something. The power set axiom presents no problems.
\begin{document} \maketitle \begin{abstract} We consider the defocusing cubic nonlinear Schr\"odinger equation (NLS) on the two-dimensional torus. The equation admits a special family of elliptic invariant quasiperiodic tori called finite-gap solutions. These are inherited from the integrable 1D model (cubic NLS on the circle) by considering solutions that depend only on one variable. We study the long-time stability of such invariant tori for the 2D NLS model and show that, under certain assumptions and over sufficiently long timescales, they exhibit a strong form of \emph{transverse instability} in Sobolev spaces $H^s(\T^2)$ ($0<s<1$). More precisely, we construct solutions of the 2D cubic NLS that start arbitrarily close to such invariant tori in the $H^s$ topology and whose $H^s$ norm can grow by any given factor. This work is partly motivated by the problem of infinite energy cascade for 2D NLS, and seems to be the first instance where (unstable) long-time nonlinear dynamics near (linearly stable) quasiperiodic tori is studied and constructed. \end{abstract} \section{Introduction} A widely held principle in dynamical systems theory is that invariant quasiperiodic tori play an important role in understanding the complicated long-time behavior of Hamiltonian ODE and PDE. In addition to being important in their own right, the hope is that such quasiperiodic tori can play an important role in understanding other, possibly more generic, dynamics of the system by acting as \emph{islands} in whose vicinity orbits might spend long periods of time before moving to other such islands. The construction of such invariant sets for Hamiltonian PDE has witnessed an explosion of activity over the past thirty years after the success of extending KAM techniques to infinite dimensions. However, the dynamics near such tori is still poorly understood, and often restricted to the linear theory. 
The purpose of this work is to take a step in the direction of understanding and constructing non-trivial \emph{nonlinear} dynamics in the vicinity of certain quasiperiodic solutions for the cubic defocusing NLS equation. In line with the above philosophy emphasizing the role of invariant quasiperiodic tori for other types of behavior, another aim is to push forward a program aimed at proving infinite Sobolev norm growth for the 2D cubic NLS equation, an outstanding open problem. \medskip \noindent{\bf 1.1. The dynamical system and its quasiperiodic objects.} We start by describing the dynamical system and its quasiperiodic invariant objects at the center of our analysis. Consider the periodic cubic defocusing nonlinear Schr\"odinger equation (NLS), \begin{equation}\label{NLS}\tag{2D-NLS} \im \partial_t u+\Delta u=|u|^2 u \end{equation} where $(x,y)\in\T^2=\R^2/(2\pi\Z)^2$, $t\in\R$ and $u:\R\times\T^2\rightarrow\C$. All the results in this paper extend trivially to higher dimensions $d\geq 3$ by considering solutions that only depend on two variables\footnote{We expect that the results also extend to the focusing sign of the nonlinearity ($-|u|^2u$ on the R.~H.~S.~ of \eqref{NLS}). The reason why we restrict to the defocusing sign comes from the fact that the linear analysis around our quasiperiodic tori has only been established in full detail in \cite{Maspero-Procesi} in this case.}. This is a Hamiltonian PDE with conserved quantities: i) the Hamiltonian \begin{equation}\label{def:Ham:Original} H_0(u)=\int_{\T^2}\left(\left|\nabla u(x,y)\right|^2+\frac{1}{2}|u(x,y)|^4\right) \di x \, \di y , \end{equation} ii) the mass \begin{equation}\label{def:NLS:mass} M(u)=\int_{\T^2}|u(x,y)|^2 \di x\, \di y, \end{equation} which is just the square of the $L^2$ norm of the solution, and iii) the momentum \begin{equation}\label{def:NLS:momentum} P(u)=\im\int_{\T^2} \overline{ u(x,y)} \nabla u(x,y) \, \di x \, \di y. 
\end{equation} Now, we describe the invariant objects around which we will study and construct our long-time nonlinear dynamics. Of course, such a task requires a very precise understanding of the linearized dynamics around such objects. For this reason, we take the simplest non-trivial family of invariant quasiperiodic tori admitted by \eqref{NLS}, namely those inherited from its completely integrable 1D counterpart \begin{equation}\label{def:NLS1D}\tag{1D-NLS} \im \partial_t q=-\partial_{xx} q+|q|^2q,\quad x\in \T. \end{equation} This is a subsystem of \eqref{NLS} if we consider solutions that depend only on the first spatial variable. It is well known that equation \eqref{def:NLS1D} is integrable and its phase space is foliated by tori of finite or infinite dimension with periodic, quasiperiodic, or almost periodic dynamics. The quasiperiodic orbits are usually called \emph{finite-gap solutions}. Such tori are Lyapunov stable (for all time!) as solutions of \eqref{def:NLS1D} (as will be clear once we exhibit its integrable structure) and some of them are linearly stable as solutions of \eqref{NLS}, but we will be interested in their \emph{long-time nonlinear stability} (or lack of it) as invariant objects for the 2D equation \eqref{NLS}. In fact, we shall show that they are \emph{nonlinearly unstable} as solutions of \eqref{NLS}, and in a strong sense, in certain topologies and after very long times. Such instability is transversal in the sense that one drifts along the purely 2-dimensional directions: solutions which are initially very close to 1-dimensional become strongly 2-dimensional after some long time scales\footnote{ The transversal instability phenomenon was already studied for solitary waves of the water waves equation \cite{RoussetT11} and the KP-I equation \cite{RoussetT12} by Rousset and Tzvetkov. However, their instability is a \emph{linear effect}, in the sense that the linearized dynamics is unstable.
In contrast, our result is a fundamentally nonlinear effect, as the linearized dynamics around some of the finite gap tori is stable.}. \medskip \noindent{\bf 1.2. Energy Cascade, Sobolev norm growth, and Lyapunov instability.} In addition to studying long-time dynamics close to invariant objects for NLS, another purpose of this work is to make progress on a fundamental problem in nonlinear wave theory, which is the transfer of energy between characteristically different scales for a nonlinear dispersive PDE. This is called the \emph{energy cascade} phenomenon. It is a purely nonlinear phenomenon (energy is static in frequency space for the linear system), and will be the underlying mechanism behind the long-time instability of the finite gap tori mentioned above. We shall exhibit solutions whose energy moves from very high frequencies towards low frequencies (\emph{backward or inverse cascade}), as well as ones that exhibit cascade in the opposite direction (\emph{forward or direct cascade}). Such cascade phenomena have attracted a lot of attention in the past few years as they are central aspects of various theories of turbulence for nonlinear systems. For dispersive PDE, this goes by the name of \emph{wave turbulence theory} which predicts the existence of solutions (and statistical states) of \eqref{NLS} that exhibit a cascade of energy between very different length-scales. In the mathematical community, Bourgain drew attention to such questions of energy cascade by first noting that it can be captured in a quantitative way by studying the behavior of the Sobolev norms of the solution $$ \|u\|_{H^s}=\left(\sum_{n \in \Z^2} (1+|n|)^{2s}|\widehat u_n|^2\right)^{\frac12}. 
$$ In his list of Problems on Hamiltonian PDE \cite{Bourgain00b}, Bourgain asked whether there exist solutions that exhibit a quantitative version of the forward energy cascade, namely solutions whose Sobolev norms $H^s$, with $s>1$, are unbounded in time \begin{equation}\label{infinite growth} \sup_{t\geq 0} \|u(t)\|_{H^s}= +\infty, \qquad s>1. \end{equation} We should point out here that such growth cannot happen for $s=0$ or $s=1$ due to the conservation laws of the equations. For other Sobolev indices, there exist polynomial upper bounds for the growth of Sobolev norms (cf. \cite{Bourgain96,Staffilani97, CollianderDKS01,Bourgain04,Zhong08, CatoireW10,Sohinger10a,Sohinger10b, Sohinger11,CollianderKO12,PTV17}). Nevertheless, results proving actual growth of Sobolev norms are much scarcer. After seminal works by Bourgain himself \cite{Bourgain96} and Kuksin \cite{Kuksin96, Kuksin97, Kuksin97b}, the landmark result in \cite{CKSTT} was of fundamental importance for the recent progress, including this work: It showed that for any $s>1$, $\de\ll1$, $K\gg 1$, there exist solutions $u$ of \eqref{NLS} such that \begin{equation}\label{eq:IteamGrowth} \|u(0)\|_{H^s}\leq \de \quad\text{ and }\quad \|u(T)\|_{H^s}\geq K \end{equation} for some $T>0$. Although not mentioned in that paper, the same techniques also lead to the same result for $s\in (0,1)$. This paper induced a lot of activity in the area \cite{GuardiaK12,Hani12,Guardia14,HaniPTV15, HausProcesi, GuardiaHP16} (see also \cite{GerardG10, Delort10,Pocovnicu11,GerardG11,Pocovnicu12,GerardG15,Maspero18g} on results about growth of Sobolev norms with different techniques). Despite all that, Bourgain's question about solutions exhibiting \eqref{infinite growth} remains open on $\T^d$ (however a positive answer holds for the cylindrical domain $\R\times \T^d$, \cite{HaniPTV15}). \medskip The above-cited works revealed an intimate connection between Lyapunov instability and Sobolev norm growth.
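For readers who wish to experiment, the Sobolev norm above is straightforward to evaluate from Fourier coefficients. The following minimal numerical sketch (ours, assuming a hypothetical truncated frequency grid) computes $\|u\|_{H^s}$ and checks it on a single mode, illustrating how concentrating mass at high frequencies inflates $H^s$ for $s>0$:

```python
import numpy as np

# H^s norm on T^2 from Fourier coefficients uhat, indexed on an N x N
# frequency grid (integer frequencies via fftfreq).
def hs_norm(uhat, s):
    N = uhat.shape[0]
    k = np.fft.fftfreq(N, d=1.0/N)                  # 0, 1, ..., -1 (integers)
    K1, K2 = np.meshgrid(k, k, indexing="ij")
    weight = (1.0 + np.hypot(K1, K2))**(2*s)        # (1 + |n|)^{2s}
    return np.sqrt(np.sum(weight * np.abs(uhat)**2))

# A single mode e^{i n.x} with n = (5, 0) has H^s norm (1 + 5)^s.
N = 32
uhat = np.zeros((N, N), dtype=complex)
uhat[5, 0] = 1.0
print(np.isclose(hs_norm(uhat, s=2.0), 6.0**2))  # (1+5)^2 = 36
```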
Indeed, the solution $u=0$ of \eqref{NLS} is an elliptic critical point and is linearly stable in all $H^s$. From this point of view, the result in \cite{CKSTT} given in \eqref{eq:IteamGrowth} can be interpreted as the Lyapunov instability in $H^s$, $s\neq 1$, of the elliptic critical point $u=0$ (the first integrals \eqref{def:Ham:Original} and \eqref{def:NLS:mass} imply Lyapunov stability in the $H^1$ and $L^2$ topology). It turns out that this connection runs further, particularly in relation to the question of finding solutions exhibiting \eqref{infinite growth}. As was observed in \cite{Hani12}, one way to prove the existence of such solutions is to prove that, for sufficiently many $\phi\in H^s$, an instability similar to that in \eqref{eq:IteamGrowth} holds, but with $\|u(0)-\phi\|_{H^s} \leq \delta$. In other words, proving long-time instability as in \eqref{eq:IteamGrowth} but with solutions starting $\delta$-close to $\phi$, and for sufficiently many $\phi\in H^s$, implies the existence (and possible genericity) of unbounded orbits satisfying \eqref{infinite growth}. Such a program (based on a Baire-Category argument) was applied successfully for the Szeg\"o equation on $\T$ in \cite{GerardG15}. Motivated by this, one is naturally led to studying the Lyapunov instability of more general invariant objects of \eqref{NLS} (or other Hamiltonian PDEs), or equivalently to investigate whether one can achieve Sobolev norm explosion starting arbitrarily close to a given invariant object. The first work in this direction is by one of the authors \cite{Hani12}. He considers the plane waves $u(t,x)=A e^{\im(mx-\omega t)}$ with $\omega=m^2+A^2$, periodic orbits of \eqref{NLS}, and proves that there are orbits which start $\de$-close to them and undergo $H^s$ Sobolev norm explosion, $0<s<1$. This implies that the plane waves are Lyapunov unstable in these topologies.
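The algebra behind the plane-wave family is easy to verify symbolically. The following sketch (an illustration of ours, not part of the original argument) checks that $u=Ae^{\im(mx-\omega t)}$ with $\omega=m^2+A^2$ solves the 1D cubic defocusing NLS $\im u_t + u_{xx} = |u|^2 u$:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
A, m = sp.symbols('A m', real=True)
omega = m**2 + A**2

# Plane wave u = A e^{i(mx - omega t)}; for it, |u|^2 = A^2.
u = A * sp.exp(sp.I*(m*x - omega*t))

# Residual of i u_t + u_xx = |u|^2 u; it vanishes identically.
residual = sp.I*sp.diff(u, t) + sp.diff(u, x, 2) - A**2*u
print(sp.simplify(residual) == 0)  # True
```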
Stability results for plane waves in $H^s$, $s>1$, on shorter time scales are provided in \cite{FaouGL14}. The next step in this program would be to study such instability phenomena near higher dimensional invariant objects, namely quasiperiodic orbits. This is the purpose of this work, in which we will address this question for the family of finite-gap tori of \eqref{def:NLS1D} as solutions to \eqref{NLS}. To control the linearized dynamics around such tori, we will impose some Diophantine (strongly non-resonant) conditions on the quasiperiodic frequency parameters. This allows us to obtain a stable linearized operator (at least with respect to the perturbations that we consider), which is crucial to control the delicate construction of the unstable nonlinear dynamics. \medskip {\bf 1.3. Statement of results.} Roughly speaking, we will construct solutions to \eqref{NLS} that start very close to the finite-gap tori in appropriate topologies, and exhibit either backward cascade of energy from high to low frequencies, or forward cascade of energy from low to high frequencies. In the former case, the solutions that exhibit backward cascade start in an arbitrarily small vicinity of a finite-gap torus in Sobolev spaces $H^s(\T^2)$ with $0< s<1$, but grow to become larger than any pre-assigned factor $K\gg1$ in the same $H^s$ (higher Sobolev norms $H^s$ with $s>1$ decrease, but they are large for all times). In the latter case, the solutions that exhibit forward cascade start in an arbitrarily small vicinity of a finite-gap torus in $L^2(\T^2)$, but their $H^s$ Sobolev norm (for $s>1$) exhibits a growth by a large multiplicative factor $K\gg 1$ after a large time. We shall comment further on those results after we state the theorems precisely. To do that, we need to introduce the Birkhoff coordinates for equation \eqref{def:NLS1D}.
Gr\'ebert and Kappeler showed in \cite{grebert_kappeler} that there exists a globally defined map, called the Birkhoff map, such that $\forall s \geq 0$ \begin{equation}\label{def:BirkhoffMap} \begin{split} \Phi :\,& H^s(\T) \longrightarrow\,\,\, h^s(\Z)\times h^s(\Z)\\ &\quad q \quad\,\longmapsto\ (z_m,\overline z_m)_{m\in\Z}, \end{split} \end{equation} such that equation \eqref{def:NLS1D} is transformed in the new coordinates $(z_m,\overline z_m)_{m\in\Z}=\Phi (q)$ to: \begin{equation}\label{def:NLSinBirkhoff} \im \dot z_m=\alpha_m(I)z_m \end{equation} where $I=(I_m)_{m \in \Z}$ and $I_m=|z_m|^2$ are the actions, which are conserved in time (since $\alpha_m(I)\in \R$). Therefore in these coordinates, called Birkhoff coordinates, equation \eqref{def:NLS1D} becomes a chain of nonlinear harmonic oscillators and it is clear that the phase space is foliated by finite and infinite dimensional tori with periodic, quasiperiodic or almost periodic dynamics, depending on how many of the actions $I_m$ (which are constant!) are nonzero and on the properties of rational dependence of the frequencies. In this paper we are interested in the finite dimensional tori with quasiperiodic dynamics. Fix $\tk\in \N$ and consider a set of modes \begin{equation}\label{def:SetOfModes} \cS_0=\{\tm_1, \ldots,\tm_\tk\}\subset \Z\times \{0\}. \end{equation} Fix also a value for the actions $I_{\tm_i}=I_{\tm_i}^0$ for $i=1,\ldots \tk$. Then we can define the $\tk$-dimensional torus \begin{equation}\label{def:torus} \tT^\tk=\tT^\tk(\cS_0,I^0_m)=\left\{z\in\ell^2: |z_{\tm_i}|^2=I_{\tm_i}^0, \,\,\text{ for }\,i=1,\ldots, \tk, \ \ \ z_m=0\,\,\text{ for }\, m\not\in \cS_0\right\}, \end{equation} which is supported on the set $\cS_0$. Any orbit on this torus is quasiperiodic (or periodic if the frequencies of the rigid rotation are completely resonant). We will impose conditions to have non-resonant quasiperiodic dynamics. This will imply that the orbits on $\tT^\tk$ are dense. 
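The rigid-rotation dynamics \eqref{def:NLSinBirkhoff} and the conservation of the actions can be illustrated with a toy finite truncation. In the sketch below the frequency map is hypothetical (the true $\alpha_m(I)$ comes from the integrable structure of 1D NLS); what matters is only that it is real and depends on the conserved actions, so every mode rotates rigidly and $I_m=|z_m|^2$ is preserved:

```python
import numpy as np

# In Birkhoff coordinates, i dz_m/dt = alpha_m(I) z_m with real alpha_m
# depending only on the actions I_m = |z_m|^2, so
# z_m(t) = exp(-i alpha_m(I) t) z_m(0) and each action is constant.
rng = np.random.default_rng(1)
z0 = rng.standard_normal(6) + 1j*rng.standard_normal(6)   # toy truncation, 6 modes
I0 = np.abs(z0)**2

# Hypothetical real frequency map alpha(I) (stand-in for the true one).
alpha = 1.0 + I0.sum() + np.arange(6)**2

t = 7.3
zt = np.exp(-1j*alpha*t) * z0
print(np.allclose(np.abs(zt)**2, I0))  # actions preserved exactly
```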
By equation \eqref{def:NLSinBirkhoff}, it is clear that this torus, as an invariant object of equation \eqref{def:NLS1D}, is stable for this equation for all times in the sense of Lyapunov. The torus \eqref{def:torus} {(actually, its pre-image $\Phi^{-1}(\tT^\tk)$ through the Birkhoff map)} is also an invariant object for the original equation \eqref{NLS}. The main result of this paper will show the instability (in the sense of Lyapunov) of this invariant object. Roughly speaking, we show that under certain assumptions (on the choices of modes \eqref{def:SetOfModes} and actions \eqref{def:torus}) these tori are unstable in the $H^s(\T^2)$ topology for $s\in (0,1)$. Even more, there exist orbits which start arbitrarily close to these tori and undergo an arbitrarily large $H^s$-norm explosion. We will abuse notation, and identify $H^s(\T)$ with the closed subspace of $H^s(\T^2)$ of functions depending only on the $x$ variable. Consequently, $\mathcal T^\tk:=\Phi^{-1}(\tT^\tk)$ (see \eqref{def:BirkhoffMap}) is a closed torus of $H^s(\T)\subset H^s(\T^2)$. \begin{theorem}\label{thm:main} Fix a positive integer $\tk$. For any choice of $\tk$ modes $\cS_0$ (see \eqref{def:SetOfModes}) satisfying a genericity condition (namely Definition \ref{Lgenericity} with sufficiently large $\tL$), there exists $\eps_\ast>0$ such that for any $\eps\in (0,\eps_\ast)$ there exists a positive measure Cantor-like set $\cI\subset (\eps/2,\eps)^\tk$ of actions, for which the following holds true for any torus $\tT^\tk=\tT^\tk(\cS_0,I^0_m)$ with $I_m^0\in \cI$: \begin{enumerate} \item For any $s\in (0,1)$, $\de>0$ small enough, and $K>0$ large enough, there exists an orbit $u(t)$ of \eqref{NLS} and a time \[ 0<T\leq e^{\left(\frac{K}{\de}\right)^\beta} \] such that $u(0)$ is $\de$-close to the torus $\mathcal T^\tk:=\Phi^{-1}(\tT^\tk)$ in $H^s(\T^2)$ and $\|u(T)\|_{H^s}\geq K$. Here $\beta>1$ is independent of $K, \delta$.
\item For any $s>1$, and any $K>0$ large enough, there exists an orbit $u(t)$ of \eqref{NLS} and a time \[ 0<T\leq e^{K^\sigma} \] such that $$ \mathrm{dist}\left(u(0), \mathcal T^\tk \right)_{L^2(\T^2)}\leq K^{-\sigma'}\quad \text{ and }\quad\|u(T)\|_{H^s(\T^2)}\geq K\|u(0)\|_{H^s(\T^2)}. $$ Here $\sigma, \sigma'>0$ are independent of $K$. \end{enumerate} \end{theorem} {\bf 1.4. Comments and remarks on Theorem \ref{thm:main}:} \begin{enumerate} \item The relative measure of the set $\cI$ of admissible actions can be taken as close to 1 as desired. Indeed, by taking smaller $\eps_\ast$, one has that the relative measure satisfies \[ |1-\mathrm{Meas}(\cI)|\leq C\eps_\ast^\kappa \] for some constant $C>0$ and $0<\kappa<1$ independent of $\eps_\ast>0$. The genericity condition on the set $\cS_0$ and the actions $(I_{\tm})_{\tm \in \cS_0}\in \cI$ ensure that the \emph{linearized} dynamics around the resulting torus $\mathcal T^\tk$ is stable for the perturbations we need to induce the nonlinear instability. In fact, a subset of those tori is even linearly stable for much more general perturbations as we remark below. \item \textit{Why does the finite gap solution need to be small?} To prove Theorem \ref{thm:main} we need to analyze the linearization of equation \eqref{NLS} at the finite gap solution (see Section \ref{sec:reducibility}). Roughly speaking, this leads to a Schr\"odinger equation with a quasi-periodic potential. Luckily, such operators can be \emph{reduced} to constant coefficients via a KAM scheme. This is known as \emph{reducibility theory} which allows one to construct a change of variables that casts the linearized operator into an essentially constant coefficient diagonal one. This KAM scheme was carried out in \cite{Maspero-Procesi}, and requires the quasi-periodic potential, given by the finite gap solution here, to be small for the KAM iteration to converge. That being said, we suspect a similar result to be true for non-small finite gap solutions. 
\item To put the complexity of this result in perspective, it is instructive to compare it with the stability result in \cite{Maspero-Procesi}. In that paper, it is shown that the tori in a proper {\sl subset} $\cI' \subset \cI$ of those considered in Theorem \ref{thm:main} are Lyapunov stable in $H^s$, $s>1$, but for shorter time scales than those considered in this theorem. More precisely, all orbits that are initially $\de$-close to $\mathcal T^\tk$ in $H^s$ stay $C\de$-close for some fixed $C>0$ for time scales $t\sim \delta^{-2}$. The same stability result (with a completely identical proof) holds if we replace the $H^s$ norm by the $\mathcal F\ell_1$ norm (functions whose Fourier series is in $\ell^1$). In fact, by trivially modifying the proof, one could also prove stability on the $\delta^{-2}$ timescale in $\mathcal F\ell_1\cap H^s$ for $0<s<1$. What this means is that the solutions in the first part of Theorem \ref{thm:main} remain within $C\delta$ of $\tT^\tk$ up to times $\sim \delta^{-2}$ but can diverge vigorously afterwards at much longer time scales. It is also worth mentioning that the complementary subset $\cI \setminus \cI'$ has a positive measure subset where tori are linearly unstable since they possess a finite set of modes that exhibit hyperbolic behavior. In principle, hyperbolic directions are good for instability, but they are not useful for our purposes since they live at very low frequencies, and hence cannot be used (at least not by themselves alone) to produce a substantial growth of Sobolev norms. We avoid dealing with these linearly unstable directions by restricting our solution to an invariant subspace on which these modes are at rest.
Nevertheless, this case cannot be tackled with the techniques considered in this paper. Indeed, one of the key points in the proof is to perform a (partial) Birkhoff normal form up to order 4 around the finite gap solution. The terms which lead to the instabilities in Theorem \ref{thm:main} are quasi-resonant instead of being completely resonant. Working in the $H^s$ topology with $s\in (0,1)$, such terms can be treated as completely resonant with little error on the timescales where the instability happens. However, this cannot be done for $s>1$, for which one might be able to eliminate those terms by a higher order normal form ($s>1$ gives a stronger topology and can thus handle worse small divisors). This would mean that one needs other resonant terms to achieve growth of Sobolev norms. The same difficulties were encountered in \cite{Hani12} in proving the instability of the plane waves of \eqref{NLS}. \item For finite dimensional Hamiltonian dynamical systems, proving Lyapunov instability for quasi-periodic Diophantine elliptic (or maximal dimensional Lagrangian) tori is an extremely difficult task. Indeed, all the available results \cite{ChengZ13,GuardiaK14} deal with $C^r$ or $C^\infty$ Hamiltonians, and not a single example of such instability is known for analytic Hamiltonian systems. In fact, there are no results of instabilities in the vicinity of non-resonant elliptic critical points or periodic orbits for analytic Hamiltonian systems (see \cite{LeCalvezDou83,Douady88, KaloshinMV04} for results in the $C^\infty$ topology). The present paper proves the existence of unstable Diophantine elliptic tori in an analytic infinite dimensional Hamiltonian system. Obtaining such instabilities in infinite dimensions is, in some sense, easier: having infinite dimensions gives ``more room'' for instabilities.
\item It is well known that many Hamiltonian PDEs possess quasiperiodic invariant tori \cite{Wayne90,Poschel96a,kuksin_poschel,Bourgain98,BB1,Eliasson10,GYX,BBi10, Wang2,PX, BCP,PP, PP13,BBHM18}. Most of these tori are normally elliptic and thus linearly stable. It is widely expected that the behavior given by Theorem \ref{thm:main} also arises in the neighborhoods of (many of) those tori. Nevertheless, it is not clear how to apply the techniques of the present paper to these settings. \end{enumerate} \medskip \medskip {\bf 1.5. Scheme of the proof.} Let us explain the main steps to prove Theorem \ref{thm:main}. \begin{enumerate} \item Analysis of the 1-dimensional cubic Schr\"odinger equation. We express the 1-dimensional cubic NLS in terms of the Birkhoff coordinates. We need a quite precise knowledge of the Birkhoff map (see Theorem \ref{thm:dnls}). In particular, we need that it ``behaves well'' in $\ell^1$. This is done in the paper \cite{AlbertoVeyPaper} and summarized in Section \ref{sec:AdaptedVarAndBirk}. In Birkhoff coordinates, the finite gap solutions are supported in a finite set of variables. We use such coordinates to express the Hamiltonian \eqref{def:Ham:Original} in a more convenient way. \item Reducibility of the 2-dimensional cubic NLS around a finite gap solution. We reduce the linearization of the vector field around the finite gap solutions to a constant coefficients diagonal vector field. This is done in \cite{Maspero-Procesi} and explained in Section \ref{sec:reducibility}. In Theorem \ref{thm:reducibility} we give the conditions to achieve full reducibility. In effect, this transforms the linearized operator around the finite gap into a constant coefficient diagonal (in Fourier space) operator, with eigenvalues $\{\Omega_{\jj}\}_{\jj\in \Z^2\setminus \cS_0}$. 
We give the asymptotics of these eigenvalues in Theorem \ref{thm:reducibility4}, which roughly speaking look like \begin{equation}\label{geraffe} \Omega_{\jj}=|\jj|^2 +O(J^{-2}) \end{equation} for frequencies $\jj=(m,n)$ satisfying $|m|, |n|\sim J$. This seemingly harmless $O(J^{-2})$ correction to the unperturbed Laplacian eigenvalues is sharp and will be responsible for the restriction to $s\in (0,1)$ in the first part of Theorem \ref{thm:main}, as we shall explain below. \item Degree three Birkhoff normal form around the finite gap solution. This is done in \cite{Maspero-Procesi}, but we shall need more precise information from this normal form that will be crucial for Steps 5 and 6 below; we carry it out in Section \ref{sec:CubicBirkhoff} (see Theorem \ref{thm:3b}). \item Partial normal form of degree four. We remove all degree four monomials which are not (too close to) resonant. This is done in Section \ref{sec:QuarticBirkhoff}, and leaves us with a Hamiltonian with (close to) resonant degree-four terms plus a higher-degree part which will be treated as a remainder in our construction. \item We follow the paradigm set forth in \cite{CKSTT, GuardiaK12} to construct solutions to the truncated Hamiltonian consisting of the (close to) resonant degree-four terms isolated above, and then afterwards to the full Hamiltonian by an approximation argument. This construction will be done at frequencies $\jj=(m, n)$ such that $|m|,|n| \sim J$ with $J$ very large, and for which the dynamics is effectively given by the following system of ODEs $$ \begin{cases} \im \dot a_{\jj} &= -|a_{ \jj}|^2a_{ \jj}+\sum_{\mathcal R(\jj)} a_{\jj_1}\overline{a_{\jj_2}} a_{\jj_3} e^{\im\Gamma t} \\ \mathcal R(\jj)&:=\{(\jj_1, \jj_2, \jj_3) \in (\Z^2\setminus \cS_0)^3: \jj_1, \jj_3\neq \jj, \ \ \ \jj_1-\jj_2+\jj_3=\jj, \ \ \ |\jj_1|^2-|\jj_2|^2+|\jj_3|^2=|\jj|^2\}\\ \Gamma&:= \Omega_{\jj_1}-\Omega_{\jj_2}+\Omega_{\jj_3}-\Omega_{\jj}.
\end{cases} $$ We remark that the conditions defining the set $\mathcal R(\jj)$ are essentially equivalent to saying that $(\jj_1, \jj_2, \jj_3, \jj)$ form a rectangle in $\mathbb{Z}^2$. Also note that by the asymptotics of $\Omega_{\jj}$ mentioned above in \eqref{geraffe}, one obtains that $\Gamma=O(J^{-2})$ if all the frequencies involved are in $\mathcal R(\jj)$ and satisfy $|m|,|n| \sim J$. The idea now is to reduce this system to a finite dimensional system called the ``Toy Model'' which is tractable enough for us to construct a solution that cascades energy. An obstruction to this plan is the presence of the oscillating factor $e^{\im \Gamma t}$, for which $\Gamma$ is not zero (in contrast to \cite{CKSTT}) but rather $O(J^{-2})$. The only way to proceed with this reduction is to approximate $e^{\im \Gamma t} \sim 1$, which is only possible provided $J^{-2}T \ll 1$. The solution coming from the Toy Model is supported on a finite number of modes $\jj\in \Z^2\setminus \cS_0$ satisfying $|\jj|\sim J$, and the time it takes for the energy to diffuse across its modes is $T = O(\nu^{-2})$ where $\nu$ is the characteristic size of the modes in $\ell^1$ norm. Requiring the solution to be initially close in $H^s$ to the finite gap would necessitate that $\nu J^s \lesssim \delta$, which gives $T\gtrsim_\delta J^{2s}$, and hence the condition $J^{-2}T \ll 1$ translates into the condition $s<1$. This explains the restriction to $s<1$ in the first part of Theorem \ref{thm:main}. If we only require our solutions to be close to the finite gap in $L^2$, then no such restriction on $\nu$ is needed, and hence there is no restriction on $s$ beyond $s>0$ and $s\neq 1$, which is the second part of the theorem. This analysis is done in Sections \ref{sec:ToyModel} and \ref{sec:Approximation}.
In the former, we perform the reduction to the effective degree 4 Hamiltonian taking into account all the changes of variables performed in the previous sections; while in Section \ref{sec:Approximation} we perform the above approximation argument allowing us to shadow the Toy Model solution mentioned above with a solution of \eqref{NLS} exhibiting the needed norm growth, thus completing the proof of Theorem \ref{thm:main}. \end{enumerate} \vspace{1em} \noindent{\bf Acknowledgements:} This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 757802) and under FP7-IDEAS (grant agreement No 306414). M. G. has also been partly supported by the Spanish MINECO-FEDER Grant MTM2015-65715-P. Z.~H.~ was partly supported by a Sloan Fellowship, and NSF grants DMS-1600561 and DMS-1654692. A.M. was partly supported by Progetto di Ricerca GNAMPA - INdAM 2018 ``Moti stabili ed instabili in equazioni di tipo Schr\"odinger''. \section{Notation and functional setting} \subsection{Notation} For a complex number $z$, it is often convenient to use the notation $$ z^{\sigma}=\begin{cases} z \qquad \text{if }\sigma=+1,\\ \bar z \qquad \text{if }\sigma=-1. \end{cases} $$ For any subset $\Gamma \subset \Z^2$, we denote by $h^s(\Gamma)$ the set of sequences $(a_{\jj})_{\jj \in \Gamma}$ with norm $$ \|a\|_{h^s(\Gamma)}=\left(\sum_{\jj \in \Gamma}\langle \jj \rangle^{2s}|a_{\jj}|^2\right)^{1/2}<\infty. $$ Our phase space will be obtained by an appropriate linearization around the finite gap solution with $\td$ frequencies/actions. For a finite set $\mathcal S_0 \subset \Z\times \{0\}$ of $\tk$ elements, we consider the phase space $\mathcal X= (\C^\tk\times\T^\tk)\times \ell^1(\Z^2 \setminus \cS_0)\times \ell^1(\Z^2 \setminus \cS_0)$.
The first part $( \C^\tk \times \T^\tk)$ corresponds to the finite-gap sites in action-angle coordinates, whereas $\ell^1(\Z^2 \setminus \cS_0)\times \ell^1(\Z^2 \setminus \cS_0)$ corresponds to the remaining orthogonal sites in frequency space. We shall often denote the $\ell^1$ norm by $\|\cdot \|_1$. We shall denote variables on $\mathcal X$ by $$ \mathcal X\ni( \yy,\theta, \ba): \qquad \yy \in \mathbb C^{\tk}, \ \ \theta \in \mathbb T^{\tk}, \ \ \ba=(a, \bar a) \in \ell^1(\Z^2\setminus \cS_0) \times \ell^1(\Z^2\setminus \cS_0). $$ We shall use multi-index notation to write monomials like $\yy^{l}$ and $\fm_{\al, \bt}=a^\al \bar a^\bt$ where $l \in \mathbb N^{\tk}$ and $\alpha, \beta \in\mathbb N^{\Z^2 \setminus \cS_0}$. Oftentimes we will abuse notation and simply write $\ba \in \ell^1$ to mean $\ba=(a, \bar a) \in \ell^1(\Z^2\setminus \cS_0) \times \ell^1(\Z^2\setminus \cS_0)$, and $\|\ba\|_{1}=\|a\|_{\ell^1(\Z^2\setminus \cS_0)}$. \begin{definition}\label{def:degree} For a monomial of the form $ e^{\im \ell\cdot \theta} \, \yy^l \, \fm_{\al,\bt}$, we define its degree to be $2|l|+|\alpha|+|\beta|-2$, where the modulus of a multi-index is given by its $\ell^1$ norm. \end{definition} \subsection{Regular Hamiltonians} Given a Hamiltonian function $F(\yy,\theta,\ba)$ on the phase space $\mathcal X$, we associate to it the Hamiltonian vector field \[ X_F:=\{ -\partial_\theta F, \partial_\yy F, \ \ - \im \partial_{\bar a} F, \im \partial_{a}F\}, \] where we have used the standard complex notation to denote the Fr\'echet derivatives of $F$ with respect to the variable $\ba \in \ell^1$.
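To illustrate Definition \ref{def:degree}, here is a small numerical sketch (the helper \texttt{degree} is ours, purely for illustration): each $\yy$-factor counts twice, each $a$- or $\bar a$-factor counts once, and $2$ is subtracted.

```python
# Degree of a monomial e^{i l.theta} Y^l a^alpha abar^beta:
# deg = 2|l| + |alpha| + |beta| - 2, with |.| the l^1 norm of the multi-index.
# Multi-indices are encoded as dicts {site: exponent}; the Fourier index ell
# in theta does not affect the degree.
def degree(l, alpha, beta):
    return 2 * sum(l.values()) + sum(alpha.values()) + sum(beta.values()) - 2

# |a_j|^2 = a_j * abar_j : degree 0 (the quadratic "normal form" terms).
print(degree({}, {(3, 1): 1}, {(3, 1): 1}))
# An action variable Y_i : degree 0 as well.
print(degree({0: 1}, {}, {}))
# A quartic term a_{j1} abar_{j2} a_{j3} abar_{j4} : degree 2.
print(degree({}, {(1, 0): 1, (0, 1): 1}, {(2, 2): 1, (-1, -1): 1}))
```

In particular, the degree-four monomials treated in the partial normal form have degree $2$ in this grading, consistent with the splitting used later in the paper.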
We will often need to complexify the variable $\theta \in \T^\tk$ into the domain $$ \T^\tk_\rho:=\{\theta\in \C^\tk: {\rm Re }(\theta)\in \T^\tk \,,\quad |{\rm Im }(\theta)|\le \rho \} $$ and consider vector fields which are functions $$\C^\tk \times \T^\tk_\rho \times \ell^1 \to \C^{\tk}\times \C^\tk\times \ell^1\times \ell^1\,:\;(\yy,\theta,\ba)\mapsto (X^{(\yy)},X^{(\theta)},X^{(a)},X^{(\bar a )}) $$ which are analytic in $\yy,\theta,\ba$. Our vector fields will be defined on the domain \begin{equation}\label{def:domain} D(\rho,r):=\T^\tk_\rho \times D(r) \ \ \ \mbox{ where } \ D(r):=\{ |\yy|\le r^2 ,\quad \|\ba\|_1 \le r \}. \end{equation} On vector fields, we use the norm $$ \bnorm{X}_r:=|X^{(\theta)}|+\frac{|X^{(\yy)}|}{r^2}+ \frac{\|X^{(a)}\|_{1}}{r}+ \frac{\|X^{(\bar a)}\|_{1}}{r}. $$ All Hamiltonians $F$ considered in this article are analytic, real valued and can be expanded in Taylor-Fourier series which are well defined and pointwise absolutely convergent \begin{equation} \label{h.funct} F(\yy,\theta,\ba)= \sum_{\al ,\bt\in\N^{\Z^2\setminus\cS_0},\ell\in \Z^\tk, l\in \N^\tk } F_{\al,\bt,l,\ell} \ e^{\im \ell\cdot \theta} \, \yy^l \, \fm_{\al,\bt}. \end{equation} Correspondingly we expand vector fields in Taylor-Fourier series (again well defined and pointwise absolutely convergent) $$ X^{(v)}(\yy,\theta,\ba)= \sum_{\al ,\bt\in\N^{\Z^2\setminus\cS_0},\ell\in \Z^\tk, l\in \N^\tk } X_{\al,\bt,l,\ell}^{(v)} \, e^{\im \ell\cdot \theta} \, \yy^l \, \fm_{\al,\bt}\,, $$ where $v$ denotes the components $\theta_i, \yy_i$ for $1 \leq i \leq \tk$ or $a_\jj,\bar a_\jj $ for $\jj \in \Z^2\setminus\cS_0$. To a vector field we associate its majorant $$ \und{X}_\rho^{(v)}[\yy,\ba]:= \sum_{\ell\in \Z^\tk,l\in \N^\tk,\al ,\bt\in\N^{\Z^2\setminus\cS_0} } |X^{(v)}_{\al,\bt,l,\ell}| \, e^{\rho\, |\ell|} \, \yy^l \, \fm_{\al,\bt} $$ and require that this is an analytic map on $D(r)$. Such a vector field is called \emph{majorant analytic}.
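As a concrete sketch of the weighted norm $\bnorm{X}_r$ (the encoding of the components below is our own illustrative choice; the particular equivalent norm used on the finite-dimensional factors is immaterial):

```python
# Weighted norm |X^theta| + |X^Y|/r^2 + ||X^a||_1/r + ||X^abar||_1/r of a
# vector field on D(rho, r). The weights mirror the scaling |Y| <= r^2,
# ||a||_1 <= r of the domain D(r), so each summand is dimensionless on D(r).
def field_norm(X_theta, X_Y, X_a, X_abar, r):
    sup = lambda v: max(abs(c) for c in v)  # norm on the C^k components
    l1 = lambda v: sum(abs(c) for c in v)   # l^1 norm on the a-components
    return sup(X_theta) + sup(X_Y) / r**2 + (l1(X_a) + l1(X_abar)) / r

# Example with r = 2: the Y-component is weighted by 1/4, the a-components by 1/2.
print(field_norm([1.0], [4.0], [0.5, 0.5], [1.0], r=2.0))  # 3.0
```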
Since Hamiltonian functions are defined modulo constants, we give the following definition of the {\em norm} of $F$: $$ |F|_{\rho,r}:=\sup_{(\yy,\ba)\in D(r)} \bnorm{ \und{(X_F)}_\rho}_{r} \ . $$ Note that the norm $| \cdot |_{\rho,r}$ controls the $| \cdot |_{\rho',r'}$ whenever $\rho'< \rho$, $r'<r$. Finally, we will also consider Hamiltonians $F(\lambda; \theta, a, \bar a) \equiv F(\lambda)$ depending on an external parameter $\lambda \in \cO\subset \R^\tk$. For those, we define the {\em inhomogeneous Lipschitz} norm: \[ |F|_{\rho,r}^\cO:= \sup_{\lambda \in \cO}|F(\lambda)|_{\rho,r}+ \sup_{\lambda_1\neq \lambda_2\in \cO} \frac{|F(\lambda_1)-F(\lambda_2) |_{\rho,r}}{|\lambda_1-\lambda_2|} \ . \] \subsection{Commutation rules} Given two Hamiltonians $F$ and $G$, we define their Poisson bracket as {$ \{F, G\} := \di F(X_G)$}; in coordinates $$ \{F, G\}=-\partial_\yy F \cdot \partial_\theta G+\partial_\theta F \cdot \partial_\yy G +\im \left(\sum_{\jj \in \Z^2\setminus \cS_0} \partial_{\bar a_\jj} F \partial_{a_\jj}G -\partial_{ a_\jj} F \partial_{\bar a_\jj}G\right). $$ Given $\al,\bt\in \N^{\Z^2\setminus \cS_0}$ we denote $\fm_{\al,\bt}:= a^\al \bar a^\bt$. To the monomial $e^{\im \ell\cdot \theta}\yy^l \fm_{\al,\bt}$ with $\ell\in \Z^\tk, l \in \mathbb N^{\tk}$ we associate various numbers. We denote by \begin{equation} \label{def.eta} \eta(\alpha, \beta) := \sum_{\jj \in \Z^2\setminus \cS_0} (\alpha_\jj - \beta_\jj) \ , \quad \qquad \eta(\ell):= \sum_{i=1}^\tk \ell_i \ . \end{equation} We also associate to $e^{\im \ell\cdot \theta}\yy^l\fm_{\al,\bt}$ the quantities $\pi({\al,\bt})=(\pix,\piy)$ and $\pi(\ell)$ defined by \begin{equation} \label{def.pi} \pi({\al,\bt})= \begin{bmatrix} \pi_x(\alpha, \beta) \\ \pi_y(\alpha, \beta) \end{bmatrix} = \sum_{\jj=(m,n)\in \Z^2\setminus \cS_0} \begin{bmatrix} m \\ n \end{bmatrix} (\al_\jj-\bt_\jj) \ , \quad \qquad \pi(\ell)= \sum_{i=1}^\tk \tm_i \ell_i \ . 
\end{equation} The above quantities are associated with the following mass $\mathcal M$ and momentum $\mathcal P=(\mathcal P_x, \mathcal P_y)$ functionals given by \begin{equation} \label{mp.1} \begin{aligned} &\mathcal M:= \sum_{i=1}^\tk \yy_i + \sum_{\jj \in \Z^2\setminus \cS_0 }|a_\jj|^2 \\ &\mathcal P_x:= \sum_{i=1}^\tk \tm_i \yy_i + \sum_{(m,n) \in \Z^2 \setminus \cS_0}\!\!\!\!m \, |a_{(m,n)}|^2 \\ &\mathcal P_y:= \sum_{(m,n) \in \Z^2 \setminus \cS_0} n |a_{(m,n)}|^2 \end{aligned} \end{equation} via the following commutation rules: given a monomial $e^{\im \ell\cdot\theta}\yy^l\fm_{\al,\bt}$, we have \begin{align*} \{\cM,e^{\im \ell\cdot\theta}\yy^l\fm_{\al,\bt}\}&=\im (\eta(\alpha, \beta)+\eta(\ell) )e^{\im \ell\cdot\theta}\yy^l\fm_{\al,\bt}\\ \{\cP_x,e^{\im \ell\cdot\theta}\yy^l\fm_{\al,\bt}\}&=\im (\pi_x(\alpha, \beta)+\pi(\ell) )e^{\im \ell\cdot\theta}\yy^l\fm_{\al,\bt} \\ \{\cP_y,e^{\im \ell\cdot\theta}\yy^l\fm_{\al,\bt}\}&=\im \, \pi_y(\alpha, \beta)\, e^{\im \ell\cdot\theta}\yy^l\fm_{\al,\bt}. \end{align*} \begin{remark} \label{leggi_sel} An analytic Hamiltonian function $\mathcal F$ (expanded as in \eqref{h.funct}) commutes with the mass $\cM$ and the momentum $\cP$ if and only if the following {\em selection rules} on its coefficients hold: \begin{align*} &\{ \cF, \cM\} = 0 \ \ \ \Leftrightarrow \ \ \ \cF_{\alpha, \beta, l, \ell} \, (\eta(\alpha, \beta) + \eta(\ell)) = 0 \\ & \{ \cF, \cP_x\} = 0 \ \ \ \Leftrightarrow \ \ \ \cF_{\alpha, \beta, l, \ell} \, (\pi_x(\alpha, \beta) + \pi(\ell)) = 0 \\ & \{ \cF, \cP_y\} = 0 \ \ \ \Leftrightarrow \ \ \ \cF_{\alpha, \beta, l, \ell} \, (\pi_y(\alpha, \beta)) = 0 \end{align*} where $\eta(\alpha, \beta), \eta(\ell)$ are defined in \eqref{def.eta} and $\pi(\alpha, \beta), \pi(\ell)$ are defined in \eqref{def.pi}.
\end{remark} \begin{definition} We will denote by $\cA_{\rho,r}$ the set of all real-valued Hamiltonians of the form \eqref{h.funct} with finite $| \cdot |_{\rho,r}$ norm and which Poisson commute with $\cM$, $\cP$. Given a compact set $\cO\subset \R^\tk$, we denote by $\cA^\cO_{\rho,r}$ the Banach space of Lipschitz maps $\cO\to \cA_{\rho,r}$ with the norm $|\cdot|_{\rho,r}^\cO$. \end{definition} From now on, all our Hamiltonians will belong to some set $\cA_{\rho, r}$ for some $\rho, r>0$. \section{Adapted variables and Hamiltonian formulation}\label{sec:AdaptedVarAndBirk} \subsection{Fourier expansion and phase shift}\label{sec:FourierPhase} Let us start by expanding $u$ in Fourier coefficients $$ u(x,y,t)= \sum_{\jj=(m,n)\in \Z^2} u_{\jj}(t) \, e^{\im(m x + n y)}. $$ Then, the Hamiltonian $H_0$ introduced in \eqref{def:Ham:Original} can be written as \begin{align*} H_0(u)=&\sum_{\jj\in \Z^2}|\jj|^2 |u_{\jj}|^2 + \frac12\sum_{\jj_i\in \Z^2 \atop \jj_1-\jj_2+\jj_3-\jj_4=0}u_{\jj_1}\bar u_{\jj_2}u_{\jj_3}\bar u_{\jj_4}\\ =& \sum_{\jj\in \Z^2}|\jj|^2 |u_{\jj}|^2 -\frac12 \sum_{\jj\in \Z^2}|u_{\jj}|^4 +\overbrace{\left(\sum_{\jj\in \Z^2}|u_\jj|^2\right)^2}^{M(u)^2}+ \frac12\sum_{\jj_i\in \Z^2 \atop \jj_1-\jj_2+\jj_3-\jj_4=0}^{\star}u_{\jj_1}\bar u_{\jj_2}u_{\jj_3}\bar u_{\jj_4} \end{align*} where the $\sum^\star$ means the sum over the quadruples $\jj_i$ such that $\{\jj_1,\jj_3\}\neq \{\jj_2,\jj_4\}$. Since the mass $M(u)$ in \eqref{def:NLS:mass} is a constant of motion, we make a trivial phase shift and consider an equivalent Hamiltonian $H(u)=H_0(u)-M(u)^2$, \begin{equation} \label{parto} H(u) = \int_{\T^2} \abs{\nabla u(x,y)}^2 \, \di x \, \di y + \frac{1}{2}\int_{\T^2}\abs{u(x,y)}^4 \, \di x \, \di y - M(u)^2 \end{equation} corresponding to the Hamilton equation \begin{equation}\label{piri0} \im \partial_t u = -\Delta u + |u|^2 u -2 M(u) u \ , \qquad (x,y)\in\T^2\ .
\end{equation} Clearly the solutions of \eqref{piri0} differ from the solutions of \eqref{NLS} only by a phase shift\footnote{In order to show the equivalence we consider any solution $u(x,t)$ of \eqref{piri0} and consider the invertible map $$ u\mapsto v= u\; e^{-2 \im M(u) t } \quad \mbox{with inverse}\quad v\mapsto u= v\; e^{2 \im M(v) t }. $$ Then a direct computation shows that $v$ solves 2D-NLS. }. Then, \begin{equation} \label{Ha0} H(u) = \sum_{\jj\in \Z^2}|\jj|^2 |u_{\jj}|^2 -\frac12 \sum_{\jj\in \Z^2}|u_{\jj}|^4 + \frac12\sum_{\jj_i\in \Z^2 \atop \jj_1-\jj_2+\jj_3-\jj_4=0}^{\star}u_{\jj_1}\bar u_{\jj_2}u_{\jj_3}\bar u_{\jj_4}. \end{equation} \subsection{The Birkhoff map for the 1D cubic NLS}\label{sec:1DNLS} We devote this section to gathering some properties of the Birkhoff map for the integrable 1D NLS equation. These will be used to write the Hamiltonian \eqref{Ha0} in a more convenient way. The main reference for this section is \cite{AlbertoVeyPaper}. We shall denote by $B^{s}(r)$ the ball of radius $r$ and center $0$ in the topology of $h^s \equiv h^s(\Z)$. \begin{theorem} \label{thm:dnls} There exist $r_* >0$ and a symplectic, real analytic map $\Phi$ with $\di\Phi(0) = \uno $ such that $\forall s \geq0$ one has the following \begin{itemize} \item[(i)] ${\Phi} : B^{s}(r_*) \to h^s$. More precisely, there exists a constant $C>0$ such that for all $0 \leq r \leq r_*$ \[\sup_{\norm{q}_{h^s} \leq r} \norm{({\Phi }- \uno)(q)}_{h^s} \leq C \, r^3 \ .\] The same estimate holds for $\Phi^{-1}-\uno$ or by replacing the space $h^s$ with the space $\ell^1$. \item[(ii)] Moreover, if $q\in h^s$ for $s \geq 1$, $\Phi$ introduces local Birkhoff coordinates for (NLS-1d) in $h^s$ as follows: the integrals of motion of (NLS-1d) are real analytic functions of the actions $I_j = |z_j|^2$ where $(z_j)_{j \in \Z}=\Phi(q)$. 
In particular, the Hamiltonian $H_{{\rm NLS1d}}(q) \equiv \int_\T \abs{\derx q(x)}^2 dx-M(q)^2+ \frac{1}{2} \int_\T \abs{q(x)}^4 dx$, the mass $M(q):= \int_\T \abs{q(x)}^2 dx$ and the momentum $P(q):= -\int_\T \bar q(x) \im \derx q(x) dx$ have the form \begin{align} \label{ham.bc} &\left(H_{{\rm NLS1d}} \circ \Phi^{-1}\right)(z) \equiv h_{\rm nls1d}\left((|z_m|^2)_{m \in \Z}\right) = \sum_{m\in \Z} m^2 |z_m|^2 - \frac{1}{2} \sum_{m\in \Z} |z_m|^4 + O(|z|^6) \ , \\ \notag &\left(M\circ \Phi^{-1}\right)(z) = \sum_{m \in \Z} |z_m|^2 \ , \\ \notag&\left(P\circ \Phi^{-1}\right)(z) = \sum_{m \in \Z} m |z_m|^2 \ . \end{align} \item[(iii)] Define the (NLS-1d) action-to-frequency map $I \mapsto \alpha^{{\rm nls1d}}(I)$, where $ \alpha^{{\rm nls1d}}_m(I) := \frac{\partial h_{\rm nls1d}}{\partial I_m}$, $\forall m \in \Z.$ Then one has the asymptotic expansion \begin{equation}\label{freq.bc} \alpha^{{\rm nls1d}}_m(I) = m^2 - I_m + \frac{\varpi_m(I)}{\la m\ra} \end{equation} where $\varpi_m(I)$ is at least quadratic in $I$. \end{itemize} \end{theorem} \begin{proof} Item $(i)$ is the main content of \cite{AlbertoVeyPaper}, where it is proved that the Birkhoff map is majorant analytic between some Fourier-Lebesgue spaces. Item $(ii)$ is proved in \cite{grebert_kappeler}. Item $(iii)$ is Theorem 1.3 of \cite{KST}. \end{proof} \begin{remark} Theorem \ref{thm:dnls} implies that all solutions of 1D NLS have Sobolev norms uniformly bounded in time (as it happens for other integrable systems, like KdV and Toda lattice, see e.g. \cite{BambusiM16, Kappeler16}). On the contrary, the Szeg\H{o} equation is an integrable system which exhibits growth of Sobolev norms \cite{GerardG15}. 
\end{remark} \subsection{Adapted variables} The aim of this section is to write the Hamiltonian \eqref{parto}, the mass $M$ \eqref{def:NLS:mass} and the momentum $P$ \eqref{def:NLS:momentum} in the local variables around the finite gap solution corresponding to \[ \begin{cases} |z_{\tm_k}|^2&=I_k , \qquad k=1, 2, \ldots, \tk \label{finitegapZ}\\ z_m&=0 , \qquad \,\ m\in \Z\setminus \cS_0. \end{cases} \] To begin with, we start from the Hamiltonian in Fourier coordinates \eqref{Ha0}, and set \[ q_m:= u_{(m,0)}\quad \mbox{ if } \ m\in \Z\,,\qquad a_{\jj}= u_{\jj} \quad \mbox{ if } \ \jj=(m,n)\in \Z^2\,,\; n\neq 0 \ . \] We rewrite the Hamiltonian accordingly in increasing degree in $a$, obtaining \begin{align} \notag H(q, a)= & \sum_{m \in \Z} m^2 |q_m|^2 -\frac12 \sum_{m \in \Z} |q_m|^4 + \frac12\sum_{m_i\in \Z \atop m_1-m_2+m_3-m_4=0}^{\star}q_{m_1}\bar q_{m_2}q_{m_3}\bar q_{m_4}+\\ \notag & + \sum_{\jj\in \Z^2\setminus \Z} |\jj|^2 |a_{\jj}|^2 + 2\sum^\star_{\jj_i=(m_i,n_i)\,,i=3,4\,,\; n_i\neq 0 \atop { m_1-m_2+m_3-m_4=0\atop n_3-n_4=0}} q_{m_1}\bar q_{m_2} a_{\jj_3}\bar a_{\jj_4} +\operatorname{Re} \sum_{ \jj_i=(m_i,n_i)\,,i=2,4\,,\;n_i\neq 0 \atop { m_1-m_2+m_3-m_4=0 \atop n_2+n_4=0} } \bar q_{m_1} a_{\jj_2} \bar q_{m_3} a_{\jj_4}\\ \notag & + 2 \operatorname{Re } \sum_{ \jj_i=(m_i,n_i)\,,i=2,3,4\,,\; n_i\neq 0 \atop {m_1-m_2+m_3-m_4=0\atop -n_2+n_3-n_4=0}} q_{m_1} \bar a_{\jj_2} a_{\jj_3} \bar a_{\jj_4} \\ \notag & + \frac{1}{2}\sum^\star_{ \jj_i=(m_i,n_i)\,,i=1,2,3,4\,,\; n_i\neq 0\atop{ \jj_1-\jj_2+\jj_3-\jj_4 = 0}} a_{\jj_1} \bar a_{\jj_2} a_{\jj_3} \bar a_{\jj_4}-\frac12\sum_{\jj\in \Z^2\setminus \Z}|a_\jj|^4\\ &=: H_{\rm nls1d}(q)+H^{\rm II}(q, a)+H^{\rm III}(q, a)+H^{\rm IV}(a).\notag \end{align} {\bf Step 1:} First we do the following change of coordinates, which amounts to introducing Birkhoff coordinates on the line $\Z\times \{0\}$. 
We set \begin{align} \notag&\left( (z_m)_{m \in \Z}, (a_{\jj})_{\jj\in \Z^2\setminus \Z}\right)\mapsto \left((q_m)_{m \in \Z}, (a_{\jj})_{\jj \in \Z^2\setminus \Z}\right)\\ \notag&(q_m)_{m \in \Z} =\Phi^{-1}\left((z_m)_{m \in \Z}\right) , \ \ \ a_{\jj}=u_{\jj}, \quad \jj\in \Z^2\setminus \Z. \end{align} In those new coordinates, the Hamiltonian becomes \begin{align*} H(z, a)=&H_{\rm nls1d}(\Phi^{-1}(z))+H^{\rm II}(\Phi^{-1}(z), a)+H^{\rm III}(\Phi^{-1}(z), a)+H^{\rm IV}(a), \end{align*} where \[ \quad H_{\rm nls1d}(\Phi^{-1}(z))=h_{\rm nls1d}((|z_m|^2)_{m \in \Z}) .\\ \] {\bf Step 2:} Next, we go to action-angle coordinates only on the set $\cS_0=\{\tm_1, \ldots, \tm_d\}\subset \Z\times \{0\}$ and rename $z_m$ for $m \notin \cS_0$ as $a_{(m, 0)}$, as follows \begin{align*} \left(\yy_i, \theta_i, a_{\jj}\right)_{\substack{1\leq i \leq \td\\\jj \in \Z^2\setminus \cS_0}} &\mapsto (z_m, a_{\jj})_{m \in \Z, \jj \in \Z^2\setminus \Z}\\% (z_{\tm_1}, \ldots, z_{\tm_\td})\\ z_{\tm_i}&= \sqrt{I_i +\yy_i} \ e^{\im \theta_i}, \qquad \tm_i \in \cS_0, \\ z_{m}&=a_{(m, 0)}, \qquad m\in \Z\setminus \cS_0,\\ a_{\jj}&=a_{\jj}, \qquad \jj \in \Z^2\setminus \Z. \end{align*} In those coordinates, the Hamiltonian becomes (using \eqref{ham.bc}) \begin{align} \mathcal{H}(\yy, \theta, a)=&\ h_{\rm nls1d}(I_1 +\yy_1, \ldots, I_\td +\yy_\td, \left(|a_{(m, 0)}|^2\right)_{m \notin \cS_0}) \label{penguin1}\\ &+H^{\rm II}\left(\Phi^{-1}\left(\sqrt{I_1 +\yy_1}e^{\im \theta_1}, \ldots, \sqrt{I_\td +\yy_\td}e^{\im \theta_\td}, (a_{(m, 0)})_{m \notin \cS_0}\right), (a_{(m,n)})_{n\neq 0}\right) \label{penguin2}\\ &+H^{\rm III}\left(\Phi^{-1}\left(\sqrt{I_1 +\yy_1}e^{\im \theta_1}, \ldots, \sqrt{I_\td +\yy_\td}e^{\im \theta_\td}, (a_{(m, 0)})_{m \notin \cS_0}\right), (a_{(m,n)})_{n\neq 0}\right)\label{penguin3}\\ &+H^{\rm IV}\left((a_{(m,n)})_{n\neq 0}\right)\label{penguin4}. \end{align} {\bf Step 3:} Now, we expand each line by itself. 
By Taylor expanding around the finite-gap torus corresponding to $(\yy, \theta, a)=(0,\theta, 0)$ we obtain, up to an additive constant, \begin{align*} h_{\rm nls1d}\left(I_1 +\yy_1, \ldots, I_\td +\yy_\td, (|a_{(m, 0)}|^2)_{m \notin \cS_0}\right)=&\sum_{i=1}^{\tk}\partial_{I_{\tm_i}} h_{\rm nls1d}(I_1, \ldots, I_\tk, 0)\yy_i\\ &+\sum_{m \in \Z\setminus \cS_0}\partial_{I_m} h_{\rm nls1d}(I_1, \ldots, I_\tk, 0)|a_{(m, 0)}|^2\\ &-\frac{1}{2}\left(|\yy|^2+\sum_{m \in \Z\setminus \cS_0}|a_{(m, 0)}|^4\right)\\ &+O\left(|I| \left\{\sum_{j=1}^\tk \yy_j +\sum_{m \notin \cS_0}|a_{(m,0)}|^2\right\}^2\right)\\ &+O\left( \left\{\sum_{j=1}^\tk \yy_j +\sum_{m \notin \cS_0}|a_{(m,0)}|^2\right\}^3\right), \end{align*} where we have used formula \eqref{ham.bc} in order to deduce that $\frac{\partial^2 h_{\rm nls1d}}{\partial I_m \partial I_n}(0)=-\delta_{n}^m$ where $\delta_n^m$ is the Kronecker delta. The following lemma follows easily from Theorem \ref{thm:dnls} (particularly formulae \eqref{ham.bc} and \eqref{freq.bc}): \begin{lemma}[Frequencies around the finite gap torus] Denote $$ \partial_{I_{\tm_j}} h_{\rm nls1d}(I_1, \ldots, I_\tk, 0)=\tm_j^2-\widetilde \lambda_j(I_1, \ldots, I_\tk). $$ Then, \begin{enumerate} \item The map $(I_1, \ldots, I_{\tk}) \mapsto \widetilde\lambda(I_1, \ldots, I_\tk)=(\widetilde\lambda_i (I_1, \ldots, I_{\tk}))_{1\leq i \leq \tk}$ is a diffeomorphism from a small neighborhood of $0$ in $\R^\tk$ to a small neighborhood of $0$ in $\R^\tk$. Indeed, $\widetilde \lambda$ is the identity plus a term quadratic in $I$. More precisely, there exists $\e_{1d}>0$ such that if $0<\e< \e_{1d}$ and $$ \widetilde \lambda(I_1, \ldots, I_\tk)=\e \lambda , \quad \lambda \in \left(\frac12, 1\right)^\tk $$ then $(I_1, \ldots, I_{\tk})=\e \lambda +O(\e^2)$.
From now on, and to simplify notation, we will use the vector $\lambda$ as a parameter as opposed to $(I_1, \ldots, I_{\tk})$, and we shall write $$ \omega_i(\lambda)=\tm_i^2-\e \lambda_i, \qquad 1\leq i\leq \tk $$ for the frequencies at the tangential sites in $\cS_0$. \item For $m \in \Z\setminus \cS_0$, denoting $\Omega_m(\lambda):=\partial_{I_m} h_{\rm nls1d}(I_1(\lambda), \ldots, I_\tk(\lambda), 0)$, we have \begin{equation}\notag \Omega_m(\lambda) = m^2 +\frac{\varpi_m(I(\lambda))}{\la m\ra}\,,\quad \mbox{with} \ \ \ \sup_{\lambda\in (\frac{1}{2}, 1)^\td}\, \sup_{m \in \Z }|\varpi_m(I(\lambda))| \le C\e^2 \ . \end{equation} \end{enumerate} \end{lemma} With this in mind, line \eqref{penguin1} becomes \begin{align*} h_{\rm nls1d}\left(I_1 +\yy_1, \ldots, I_\td +\yy_\td, (|a_{(m, 0)}|^2)_{m \notin \cS_0}\right)=\,&\omega(\lambda)\cdot \yy +\sum_{m \in \Z\setminus \cS_0}\Omega_m(\lambda) \left|a_{(m,0)}\right|^2\\ &-\frac{1}{2}\left(|\yy|^2+\sum_{m \in \Z\setminus \cS_0}\left|a_{(m, 0)}\right|^4\right)\\ &+O\left(|I| \left\{\sum_{j=1}^\tk \yy_j +\sum_{m \notin \cS_0}\left|a_{(m,0)}\right|^2\right\}^2\right)\\ &+O\left( \left\{\sum_{j=1}^\tk \yy_j +\sum_{m \notin \cS_0}\left|a_{(m,0)}\right|^2\right\}^3\right). \end{align*} We now analyze \eqref{penguin2}. This is given by \begin{align*} \eqref{penguin2}=\sum_{\jj\in \Z^2\setminus \Z} |\jj|^2 |a_{\jj}|^2 + 2\sum^\star_{\jj_i=(m_i,n_i)\,,i=3,4\,,\; n_i\neq 0 \atop { m_1-m_2+m_3-m_4=0\atop n_3-n_4=0}} q_{m_1}\bar q_{m_2} a_{\jj_3}\bar a_{\jj_4} +\operatorname{Re} \sum_{ \jj_i=(m_i,n_i)\,,i=2,4\,,\;n_i\neq 0 \atop { m_1-m_2+m_3-m_4=0 \atop n_2+n_4=0} } \bar q_{m_1} a_{\jj_2} \bar q_{m_3} a_{\jj_4}\\ \end{align*} where we now think of $q_m$ as a function of $\yy, \theta, a$.
By Taylor expanding it at $\yy = 0$ and $a = 0$, \begin{equation}\label{qmExpand} \begin{split} q_m=q_m(\lambda; \yy, \theta, (a_{(m_1, 0)})_{m_1 \in \Z\setminus \cS_0}) = & \overbrace{q_m(\lambda;0, \theta, 0)}^{=:q_m^{\rm fg}(\lambda; \theta)} + \sum_{i=1}^\td \frac{\partial q_m}{\partial \yy_i}(\lambda;0,\theta,0) \yy_i \\ &+ \sum_{m_1 \in \Z \setminus \cS_0} \left(\frac{\partial q_m}{\partial a_{(m_1,0)}}(\lambda;0,\theta,0) a_{(m_1, 0)}+\frac{\partial q_m}{\partial \bar a_{(m_1,0)}}(\lambda;0,\theta,0) \overline a_{(m_1, 0)}\right) \\ & + \sum_{\substack{m_1, m_2 \in \Z\setminus \cS_0\\ \sigma_1, \sigma_2 =\pm 1}} Q_{m,m_1 m_2}^{\sigma_1 \sigma_2}(\lambda; \theta) a_{(m_1,0)}^{\sigma_1}a_{(m_2,0)}^{\sigma_2}+\cO(\yy^2, \yy a , a^3), \end{split} \end{equation} where we have denoted by $(q_m^{\rm fg}(\lambda; \theta))_{m \in \Z}$ the finite gap torus (which corresponds to $\yy = 0$, $\ba = 0$), and $$ Q_{m,m_1 m_2}^{\sigma_1 \sigma_2}(\lambda; \theta)=\frac{1}{2} \frac{\partial^2 q_m}{\partial a_{m_1}^{\sigma_1}\partial a_{m_2}^{\sigma_2}}(\lambda;0,\theta,0).
$$ Therefore, we obtain \begin{align*} \eqref{penguin2}=&\sum_{\jj\in \Z^2\setminus \Z} |\jj|^2 |a_{\jj}|^2 + 2\sum^\star_{\jj_i=(m_i,n_i)\,,i=3,4\,,\; n_i\neq 0 \atop { m_1-m_2+m_3-m_4=0\atop n_3-n_4=0}} q^{\rm fg}_{m_1}(\lambda; \theta) \bar q^{\rm fg}_{m_2}(\lambda; \theta)a_{\jj_3}\bar a_{\jj_4} \\ & +\operatorname{Re} \sum_{ \jj_i=(m_i,n_i)\,,i=2,4\,,\;n_i\neq 0 \atop { m_1-m_2+m_3-m_4=0 \atop n_2+n_4=0} } \bar q^{\rm fg}_{m_1}(\lambda; \theta) a_{\jj_2}\bar q^{\rm fg}_{m_3}(\lambda; \theta) a_{\jj_4}\\ &+\left\{2\sum^\star_{\jj_i=(m_i,n_i)\,,i=3,4\,,\; n_i\neq 0 \atop { m_1-m_2+m_3-m_4=0\atop n_3-n_4=0}}\sum_{m_2'\in \Z\setminus \cS_0} {\frac{\partial \bar q_{m_2}}{\partial \bar a_{(m_2',0)}}(\lambda;0,\theta,0)} q^{\rm fg}_{m_1}(\lambda; \theta)\bar a_{(m_2', 0)} a_{\jj_3}\bar a_{\jj_4} +\text{similar cubic terms in } (a, \bar a)\right\}\\ &+\eqref{penguin2}^{(2)}+\eqref{penguin2}^{(\geq 3)} \end{align*} where $\eqref{penguin2}^{(2)}$ are degree 2 terms (cf. Definition \ref{def:degree}), $\eqref{penguin2}^{(\geq 3)}$ are of degree $\geq 3$. More precisely, \begin{equation}\label{penguin22} \begin{split} \eqref{penguin2}^{(2)}=&2\sum^\star_{\substack{\jj_i=(m_i,n_i)\,,i=3,4\,,\; n_i\neq 0 \\ m_1-m_2+m_3-m_4=0\atop n_3-n_4=0\\1\leq i\leq \tk}} q^{\rm fg}_{m_1}(\lambda; \theta) \frac{\partial \bar q_{m_2}}{\partial \yy_i}(\lambda;0,\theta,0) \yy_i a_{\jj_3}\bar a_{\jj_4}+\text{similar terms}\\ &+\sum^\star_{\substack{\jj_i=(m_i,n_i)\,,i=3,4\,,\; n_i\neq 0\\ m_1-m_2+m_3-m_4=0\\ n_3-n_4=0\\\sigma_1, \sigma_2=\pm 1, m_1', m_2'\in \Z\setminus \cS_0}} L_{m_1, m_2, m_1', m_2'}^{\sigma_1, \sigma_2}(\lambda; \theta) a_{(m_1', 0)}^{\sigma_1}a_{(m_2', 0)}^{\sigma_2}a_{\jj_3}\bar a_{\jj_4}+\text{similar terms}, \end{split} \end{equation} for some uniformly bounded coefficients $L_{m_1, m_2, m_1', m_2'}^{\sigma_1, \sigma_2}$. 
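Note that the constraints carried by the sums above ($m_1-m_2+m_3-m_4=0$, $n_3-n_4=0$, and their analogues) are exactly the selection rules of Remark \ref{leggi_sel}. A quick numerical sanity check, with hypothetical index choices and the $q$-factors treated as modes on the line $n=0$:

```python
# Mass/momentum count of a product prod_i a_{j_i}^{sigma_i}, following the
# definitions of eta and pi in (def.eta)-(def.pi) (no Y or theta factors here):
# eta = sum_i sigma_i,  (pi_x, pi_y) = sum_i sigma_i * j_i.
def eta_pi(factors):
    # factors: list of (sigma, (m, n)) with sigma = +1 for a and -1 for abar
    eta = sum(s for s, _ in factors)
    pix = sum(s * m for s, (m, n) in factors)
    piy = sum(s * n for s, (m, n) in factors)
    return eta, pix, piy

# A monomial q_{m1} qbar_{m2} a_{j3} abar_{j4} with m1 - m2 + m3 - m4 = 0 and
# n3 = n4, e.g. m1 = 5, m2 = 3, j3 = (1, 2), j4 = (3, 2):
factors = [(+1, (5, 0)), (-1, (3, 0)), (+1, (1, 2)), (-1, (3, 2))]
print(eta_pi(factors))  # (0, 0, 0): mass and both momenta are conserved
```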
Next, we move on to \eqref{penguin3}, for which we have using equation \eqref{qmExpand} \begin{equation}\label{penguin32} \begin{split} \eqref{penguin3}=&2 \operatorname{Re } \sum_{ \jj_i=(m_i,n_i)\,,i=2,3,4\,,\; n_i\neq 0 \atop {m_1-m_2+m_3-m_4=0\atop -n_2+n_3-n_4=0}} q^{\rm fg}_{m_1}(\lambda; \theta) \bar a_{\jj_2} a_{\jj_3} \bar a_{\jj_4} \\ &+\underbrace{2 \operatorname{Re } \sum_{ \substack{\jj_i=(m_i,n_i)\,,i=2,3,4\,,\; n_i\neq 0 \\ m_1-m_2+m_3-m_4=0\\ -n_2+n_3-n_4=0}} \frac{\partial q_{m_1}}{\partial a_{(m_1',0)}}(\lambda;0,\theta,0) a_{(m'_1, 0)} \bar a_{\jj_2} a_{\jj_3} \bar a_{\jj_4} +\text{similar terms}}_{\eqref{penguin3}^{(2)}}+\eqref{penguin3}^{(\geq 3)}, \end{split} \end{equation} where $\eqref{penguin3}^{(2)}$ are terms of degree 2 and $\eqref{penguin3}^{(\geq 3)}$ are terms of degree $\geq 3$. In conclusion, we obtain \begin{align} \label{H.2} \mathcal{H}(\lambda;\yy, \theta, \ba) = & \cN+\cH^\0(\lambda; \theta, {\bf a})+\cH^\1(\lambda; \theta, {\bf a})+\cH^\2(\lambda; \yy, \theta, {\bf a})+\cH^{(\geq 3)}(\lambda; \yy,\theta, {\bf a}), \end{align} where \begin{equation} \label{def:N} \cN = \sum_{i=1}^\tk \omega_{\tm_i} (\lambda) \yy_i + \sum_{m\notin \cS_0} \Omega_m(\lambda) |a_{(m,0)}|^2+ \sum_{\jj=(m,n) \in \Z^2 \atop n\neq 0} |\jj|^2 |a_{\jj}|^2 \\ \end{equation} \begin{align} \cH^\0(\lambda; \theta, {\bf a})=\,&2\sum^\star_{\jj_i=(m_i,n_i)\,,i=3,4\,,\; n_i\neq 0 \atop { m_1-m_2+m_3-m_4=0\atop n_3-n_4=0}} q^{\rm fg}_{m_1}(\lambda; \theta) \bar q^{\rm fg}_{m_2}(\lambda; \theta)a_{\jj_3}\bar a_{\jj_4} \label{def of H0}\\&+\operatorname{Re} \sum_{ \jj_i=(m_i,n_i)\,,i=2,4\,,\;n_i\neq 0 \atop { m_1-m_2+m_3-m_4=0 \atop n_2+n_4=0} } \bar q^{\rm fg}_{m_1}(\lambda; \theta) a_{\jj_2}\bar q^{\rm fg}_{m_3}(\lambda; \theta) a_{\jj_4} \notag\\ \cH^{(1)}(\lambda; \theta, {\bf a})=\,&2 \operatorname{Re } \sum_{ \jj_i=(m_i,n_i)\,,i=2,3,4\,,\; n_i\neq 0 \atop {m_1-m_2+m_3-m_4=0\atop -n_2+n_3-n_4=0}} q^{\rm fg}_{m_1}(\lambda; \theta) \bar a_{\jj_2} a_{\jj_3} \bar 
a_{\jj_4} \label{def of H1}\\ &+2\sum^\star_{\jj_i=(m_i,n_i)\,,i=3,4\,,\; n_i\neq 0 \atop { m_1-m_2+m_3-m_4=0\atop n_3-n_4=0}}\sum_{m_2' \in \Z\setminus \cS_0} {\frac{\partial \bar q_{m_2}}{\partial \bar a_{(m_2',0)}}(\lambda;0,\theta,0)} q^{\rm fg}_{m_1}(\lambda; \theta)\bar a_{(m_2', 0)} a_{\jj_3}\bar a_{\jj_4}\notag\\ &+ \text{similar cubic terms in } (a, \bar a)\notag\\ \cH^{(2)}(\lambda; \yy, \theta, {\bf a})=\,&H^{\rm IV}\left((a_{(m,n)})_{n\neq 0}\right)-\frac{1}{2}\left(|\yy|^2+\sum_{m \in \Z\setminus \cS_0}|a_{(m, 0)}|^4\right) \label{giraffe2}\\ &+O\left(\varepsilon\left\{\sum_{j=1}^\tk \yy_j +\sum_{m \notin \cS_0}|a_{(m,0)}|^2\right\}^2\right) +\eqref{penguin2}^{(2)}+\eqref{penguin3}^{(2)},\notag \end{align} where $\eqref{penguin2}^{(2)}$ and $\eqref{penguin3}^{(2)}$ were defined in \eqref{penguin22} and \eqref{penguin32} respectively. Finally, $\cH^{(\geq 3)}$ collects all remainder terms of degree $\geq 3$. For short, we write $\cN$ as $\cN=\omega(\lambda) \cdot \yy + \cD$ where $\cD$ is the diagonal operator \begin{equation}\notag \cD:=\sum_{\jj=(m,n) \in \Z^2\setminus\cS_0} \Omega_\jj^\0\, |a_\jj|^2 \end{equation} and the normal frequencies $\Omega_\jj^\0$ are defined by \begin{equation} \label{def:Omega} \Omega^{(0)}_\jj := \left\{ \begin{array}{ll} |\jj|^2 & \text{if}\; \jj=(m,n) \;\text{with} \; n\neq 0 \\ \Omega_m(\lambda) & \text{if}\; \jj=(m,0) , \ m \notin \cS_0 \end{array}\right. \ . \end{equation} Proceeding as in \cite{Maspero-Procesi}, one can prove the following result: \begin{lemma} \label{lem:norm.ham} Fix $\rho >0$.
There exists $\e_\ast>0$ such that for any $0 \leq \e \leq \e_\ast$ there exist $r_\ast \leq \sqrt{\e}/4 $ and $C >0$ such that $\cH^\0, \cH^\1, \cH^{(2)}$ and $\cH^{(\geq 3)}$ belong to $\cA_{\rho,r_\ast}^\cO$ and $\forall 0 < r \leq r_*$ \begin{equation} \label{lem:norm.ham1} |\cH^\0|_{\rho,r}^\cO \le C\e\,,\qquad |\cH^\1|_{\rho,r}^\cO \le C\sqrt{\e}r\,,\qquad |\cH^{(2)}|_{\rho,r}^\cO \le C r^2, \qquad |\cH^{(\geq 3)}|_{\rho,r}^\cO \le C \frac{r^3}{\sqrt{\e}} . \end{equation} \end{lemma} \section{Reducibility theory of the quadratic part}\label{sec:reducibility} In this section, we review the reducibility of the quadratic part $\cN+\cH^\0$ (see \eqref{def:N} and \eqref{def of H0}) of the Hamiltonian, which is the main part of the work \cite{Maspero-Procesi}. This will be a symplectic linear change of coordinates that transforms the quadratic part into an effectively diagonal, time independent expression. \subsection{Restriction to an invariant sublattice $\Z^2_N$} For $N\in \N$, we define the sublattice $\Z_N^2:= \Z\times N\Z$ and remark that it is invariant for the flow in the sense that the subspace $$ E_N:=\{a_\jj=\bar a_\jj=0\,,\quad \mbox{for} \quad \jj\, \notin \Z^2_N\} $$ is invariant under the original NLS dynamics and under that of the Hamiltonian \eqref{H.2}. From now on, we restrict our system to this invariant sublattice, with \begin{equation}\label{def:SizeN} N > \max_{1 \leq i \leq \td} |\tm_i|. \end{equation} The reason for this restriction is that it simplifies (in fact, eliminates the need for) some of the genericity requirements of \cite{Maspero-Procesi}, as well as some of the normal forms that we will perform later. It will also be important to introduce the following two subsets of $\mathbb Z_N^2$: \begin{equation}\label{def:SetZ} \sS:=\{(\tm, n): \tm \in \cS_0, \ n \in N \Z, \ n \neq 0\}, \qquad \fZ=\Z^2_N \setminus (\sS \cup \cS_0).
\end{equation} \begin{definition}[$\tL$-genericity]\label{Lgenericity} Given $\tL\in \N$, we say that $\cS_0$ is $\tL$-generic if it satisfies the condition \begin{equation}\label{pop} \sum_{i = 1}^\td \ell_i\tm_i\neq 0 \qquad \forall \; 0<|\ell|\le \tL. \end{equation} \end{definition} \subsection{Admissible monomials and reducibility} The reducibility of the quadratic part of the Hamiltonian will introduce a change of variables that modifies the expression of the mass $\cM$ and momentum $\cP$ as follows. Let us set \begin{equation} \label{mp.4} \begin{aligned} &\wtcM:= \sum_{i=1}^\tk \yy_i + \sum_{(m , n) \in \fZ }|a_{(m,n)}|^2 , \\ &\wtcP_x:= \sum_{i=1}^\tk \tm_i \yy_i + \sum_{(m,n) \in\fZ}\!\!\!\!m \, |a_{(m,n)}|^2 , \\ &\wtcP_y:= \sum_{(m,n) \in \Z_N^2} n |a_{(m,n)}|^2. \end{aligned} \end{equation} These will be the expressions for the mass and momentum after the change of variables introduced in the following two theorems. Notice the absence of the terms $\sum_{\substack{1\leq i \leq \tk\\n\in N\Z}} |a_{(\tm_i, n)}|^2$ and $\sum_{\substack{1\leq i \leq \tk\\n\in N\Z}} \tm_i |a_{(\tm_i, n)}|^2$ from the expressions of $\wtcM$ and $\wtcP_x$ above. These terms are absorbed into the new definition of the $\yy$ and ${\bf a}$ variables. \begin{definition}[Admissible monomials] \label{rem:adm3} Given $\bj = (\jj_1, \ldots, \jj_p) \in (\Z^2_N\setminus\cS_0)^p$, $\ell \in \Z^\tk$, $l \in \N^\td$, and $\sigma= (\sigma_1, \ldots, \sigma_p) \in \{-1, 1\}^p$, we say that $(\bj , \ell, \sigma)$ is {\em admissible}, and denote $(\bj , \ell, \sigma) \in \mathfrak A_p$, if the monomial $\mathfrak m= e^{\im \theta \cdot \ell} \cY^l\, a_{\jj_1}^{\sigma_1} \, \ldots a_{\jj_p}^{\sigma_p}$ Poisson commutes with $\wtcM,\wtcP_x, \wtcP_y$. We call a monomial $ e^{\im \theta \cdot \ell} \cY^l\, a_{\jj_1}^{\sigma_1} \, \ldots a_{\jj_p}^{\sigma_p}$ admissible if $(\bj , \ell, \sigma)$ is admissible.
\end{definition} \begin{definition}\label{def:R2} We define the {\em resonant set at degree 0}, \begin{equation} \label{res2} \fR_2:=\{ (\jj_1, \jj_2, \ell, \sigma_1, \sigma_2) \in \mathfrak A_2: \ell=0, \ \ \sigma_1=-\sigma_2, \ \ \jj_1=\jj_2\}. \end{equation} \end{definition} \begin{theorem} \label{thm:reducibility} Fix $\e_0>0$ sufficiently small. There exist positive $\rho_0, \gamma_0, \tau_0, r_0, \tL_0$ (with $\tL_0$ depending only on $\td$) such that the following holds true uniformly for all $0<\e\le \e_0$: For an $\tL_0$-generic choice of the set $\Tan$ (in the sense of Definition \ref{Lgenericity}), there exist a compact {\em domain} $\cO_0 \subseteq (1/2,1)^\tk$, satisfying $| (1/2,1)^\tk\setminus \cO_0|\leq \e_0$, and Lipschitz (in $\lambda$) functions $\{\Omega_\jj\}_{\jj\in \Z_N^2\setminus \cS_0}$ defined on $\cO_0$ (described more precisely in Theorem \ref{thm:reducibility4} below) such that: \begin{enumerate} \item The set \begin{equation} \label{2.mc} \cC^{(0)}:=\left\{\lambda\in \cO_0:\; \abs{\omega \cdot \ell + \s_1\Omega_{\jj_1}(\lambda, \e)+ \s_2 \Omega_{\jj_2}(\lambda, \e)} \geq \gamma_0 \frac{\e}{\la \ell \ra^{\tau_0}} \ ,\;\forall (\jj,\ell,\s)\in \fA_2\setminus \fR_2\right\} \end{equation} has positive measure. In fact $|\cO_0\setminus \cC^{(0)}|\lesssim \e_0^{\kappa_0}$ for some $\kappa_0>0$ independent of $\e_0$. \item For each $\lambda \in \cC^{(0)}$ and all $r \in [0, r_0]$, $\rho \in [\frac{\rho_0}{64}, \rho_0]$, there exists an invertible symplectic change of variables $\cL^{(0)}$, that is well defined and majorant analytic from $D(\rho/8, \zeta_0 r) \to D(\rho,r)$ (here $\zeta_0>0$ is a constant depending only on $\rho_0$ and $\max_k |\tm_k|^2$) and such that if $\ba\in h^1(\Z_N^2 \setminus \cS_0)$, then \begin{equation}\notag (\cN+\cH^{(0)})\circ \cL^{(0)}(\yy, \theta, \ba) = \omega \cdot \yy + \sum_{\jj \in \Z_N^2 \setminus \cS_0} \Omega_\jj\, |a_\jj|^2.
\end{equation} \item The mass $\cM$ and the momentum $\cP$ (defined in \eqref{mp.1}) in the new coordinates are given by \begin{equation}\label{mass.momentum.L} \cM\circ \cL^{(0)}= \wtcM\,,\quad \cP\circ \cL^{(0)}= \wtcP \ , \end{equation} where $\wtcM$ and $\wtcP$ are defined in \eqref{mp.4}. \item The map $\cL^{(0)}$ maps $h^1$ to itself and has the following form $$ \cL^{(0)}:\quad \ba \mapsto L(\lambda; \theta, \e)\ba, \qquad \yy \mapsto \yy + (\ba, Q(\lambda; \theta,\e)\ba), \qquad \theta \mapsto \theta. $$ The same holds for the inverse map $(\cL^{(0)})^{-1}$. \item The linear maps $L(\lambda; \theta, \e)$ and $Q(\lambda; \theta,\e)$ are block diagonal in the $y$ Fourier modes, in the sense that $L={\rm diag}_{n\in N\N}(L_{n})$ with each $L_{n}$ acting on the sequence $\{a_{(m,n)},a_{(m,-n)}\}_{m\in \Z}$ (and similarly for $Q$). Moreover, $L_0={\rm Id}$ and $L_n$ is of the form ${\rm Id}+S_n$ where $S_n$ is a smoothing operator in the following sense: with the smoothing norm $\lceil \cdot \rfloor_{\rho,-1}$ defined in \eqref{def.smoothingnorm} below $$ \sup_{n\neq 0} \lceil S_n\circ P_{\{|m|\geq (\tm_\tk+1)\}} \rfloor_{\rho,-1} \lesssim \varepsilon, $$ where $P_{\{|m|\geq K\}}$ is the orthogonal projection of a sequence $(c_m)_{m \in \Z}$ onto the modes $|m| \geq K$. \end{enumerate} \end{theorem} The above smoothing norm is defined as follows: Let $S(\lambda; \theta, \e)$ be an operator acting on sequences $(c_k)_{k \in \Z}$ through its matrix elements $S(\lambda; \theta, \e)_{m, k}$. Let us denote by $S(\lambda; \ell, \e)_{m, k}$ the $\theta$-Fourier coefficients of $S(\lambda; \theta, \e)_{m, k}$. For $\rho, \nu >0$ we define $\lceil S(\lambda; \theta, \e) \rfloor_{\rho, \nu}$ as: \begin{equation}\label{def.smoothingnorm} \lceil S(\lambda; \theta, \e) \rfloor_{\rho, \nu}:=\sup_{\|c\|_{\ell^1}\leq 1} \left\|\left(\sum_{\substack{k\in \Z\\ \ell\in \Z^\tk}} e^{\rho|\ell|} |S_{m, k}(\lambda; \ell, \e) | \langle k \rangle^{-\nu} c_k\right)\right\|_{\ell^1} . 
\end{equation} This definition is equivalent to the more general norm used in Definition 3.9 of \cite{Maspero-Procesi}. Roughly speaking, the boundedness of this norm means that, in terms of its action on sequences, $S$ maps $\langle k \rangle^\nu \ell^1 \to \ell^1$. As observed in Remark 3.10 of \cite{Maspero-Procesi}, thanks to the conservation of momentum this also means that $S$ maps $\ell^1 \to \langle k \rangle^{-\nu} \ell^1$. \begin{remark} Note that in \cite{Maspero-Procesi} Theorem \ref{thm:reducibility} is proved in $h^s$ norm with $s>1$, for instance in \eqref{def.smoothingnorm} the $\ell^1$ norm is substituted with the $h^s$ one. However the proof only relies on momentum conservation and on the fact that $h^s$ is an algebra w.r.t. convolution, which holds true also for $\ell^1$. Hence the proof of our case is identical and we do not repeat it. \end{remark} We are able to describe quite precisely the asymptotics of the frequencies $\Omega_\jj$ of Theorem \ref{thm:reducibility}. \begin{theorem} \label{thm:reducibility4} For any $0<\e\le\e_{0}$ and $\lambda\in \cC^\0$, the frequencies $\Omega_\jj \equiv \Omega_\jj(\lambda,\e)$, $\jj = (m,n)\in \Z_N^2\setminus \cS_0$, introduced in Theorem \ref{thm:reducibility} have the following asymptotics: \begin{equation} \label{as.omega} \Omega_\jj(\lambda, \e) = \begin{cases} \wtOmega_\jj(\lambda, \e)+\displaystyle{\frac{\varpi_m(\lambda, \e)}{\langle m \rangle}}, \qquad n=0 \\ \wtOmega_\jj(\lambda, \e) + \displaystyle{\frac{\Theta_{m}(\lambda, \e)}{\la m \ra^2} + \frac{\Theta_{m,n}(\lambda, \e)}{\la m \ra^2 + \la n \ra^2}}, \qquad n\neq 0 \end{cases} \ , \end{equation} where \begin{equation}\notag \wtOmega_\jj (\lambda, \e) := \begin{cases} m^2, & \jj=(m,0), m \notin \cS_0\\ m^2 + n^2 , & \jj=(m,n) \in \fZ \ , n \neq 0 \\ \e \mu_i(\lambda) + n^2 \ , & \jj=(\tm_i, n)\in \sS , n\neq 0 \end{cases} \end{equation} where $\fZ$ and $\sS$ are the sets defined in \eqref{def:SetZ}. 
Here the $\{\mu_i(\lambda)\}_{1 \leq i \leq \tk }$ are the roots of the polynomial \begin{equation}\notag P(t,\lambda):= \prod_{i=1}^\tk (t + \lambda_i) - 2 \sum_{i=1}^\tk \lambda_i \, \prod_{k \neq i} (t + \lambda_k), \end{equation} which is irreducible over $\mathbb{Q}(\lambda)[t]$. Finally $\mu_i(\lambda)$, $\{\varpi_m(\lambda, \e)\}_{m \in \Z\setminus \cS_0}$, $\{\Theta_m(\lambda, \e)\}_{m \in \Z}$ and $ \{\Theta_{m,n}(\lambda, \e)\}_{(m,n) \in \Z_N^2\setminus \Tan}$ fulfill \begin{equation} \label{theta.est} \sum_{1 \leq i \leq \tk} | \mu_i(\cdot) |^{\cO_0} + \sup_{\e \leq \e_0 } \frac{1}{\e^2}\Big( \sup_{m \in \Z\setminus \cS_0} |\varpi_m(\cdot, \e)|^{\cO_0} +\sup_{m \in \Z} |\Theta_m(\cdot, \e)|^{\cO_0} + \sup_{\substack{(m,n) \in \Z_N^2\\n\neq 0}} |\Theta_{m,n}(\cdot, \e)|^{\cO_0} \Big) \leq \tM_0 \ \end{equation} for some $\tM_0$ independent of $\e$. \end{theorem} Theorems \ref{thm:reducibility} and \ref{thm:reducibility4} follow from Theorems 5.1 and 5.3 of \cite{Maspero-Procesi}, together with the observation that the set $\sC$ defined in Definition 2.3 of \cite{Maspero-Procesi} satisfies $\sC \cap \Z_N^2 = \emptyset$ if $N>\max_i |\tm_i|$. We conclude this section with a series of remarks. \begin{remark} \label{rem:mu}\label{rmk:mus} Notice that the $\{\mu_i(\lambda)\}_{1 \leq i \leq \tk}$ depend on the number $\tk$ of tangential sites but \emph{ not on the $\{\tm_i\}_{1 \leq i \leq \td}$}. \end{remark} \begin{remark} \label{rem:asym} The asymptotic expansion \eqref{as.omega} of the normal frequencies does not contain any constant term. The reason is that we canceled such a term when we subtracted the quantity $M(u)^2$ from the Hamiltonian at the very beginning (see the footnote in Section \ref{sec:FourierPhase}). Of course if we had not removed $M(u)^2$, we would have had a constant correction to the frequencies, equal to $\norm{q(\omega t, \cdot)}^2_{L^2}$. 
Since $q(\omega t, x)$ is a solution of \eqref{NLS}, it enjoys mass conservation, and thus $\norm{q(\omega t, \cdot)}^2_{L^2} = \norm{q(0, \cdot)}^2_{L^2}$ is independent of time. \end{remark} \begin{remark} \label{leggi_sel1} In the new variables, the {\em selection rules} of Remark \ref{leggi_sel} become (with $\cH$ expanded as in \eqref{h.funct}): \begin{align*} &\{ \cH, \widetilde \cM\} = 0 \ \ \ \Leftrightarrow \ \ \ \cH_{\alpha, \beta, \ell} \, (\widetilde \eta(\alpha, \beta) + \eta(\ell)) = 0 \\ & \{ \cH, \widetilde \cP_x\} = 0 \ \ \ \Leftrightarrow \ \ \ \cH_{\alpha, \beta, \ell} \, (\widetilde \pi_x(\alpha, \beta) + \pi(\ell)) = 0 \\ & \{ \cH, \widetilde \cP_y\} = 0 \ \ \ \Leftrightarrow \ \ \ \cH_{\alpha, \beta, \ell} \, (\pi_y(\alpha, \beta)) = 0 \end{align*} where $\eta(\ell)$ is defined in \eqref{def.eta}, $\pi_y(\alpha, \beta), \pi(\ell)$ in \eqref{def.pi}, while $$ \widetilde \eta(\alpha, \beta):= \sum_{\jj \in \fZ }(\al_\jj-\bt_\jj) \ , $$ $$ \widetilde{\pi}_x(\alpha, \beta):= \sum_{\jj=(m,n) \in \fZ} m(\al_\jj-\bt_\jj). $$ \end{remark} \section{Elimination of cubic terms}\label{sec:CubicBirkhoff} If we apply the change $ \cL^{(0)}$ obtained in Theorem \ref{thm:reducibility} to Hamiltonian \eqref{H.2}, we obtain \begin{equation} \label{ham.bnf3} \begin{split} \cK (\lambda; \yy, \theta, \ba)&:= \cH\circ \cL^{(0)}(\lambda; \yy, \theta, \ba)=\omega \cdot \yy + \sum_{\jj \in \Z_N^2 \setminus \cS_0} \Omega_\jj\, |a_\jj|^2 + \cK^{\1} + \cK^{(2)} +\cK^{(\geq 3)}, \\ \cK^{(j)}&=\cH^{{(j)}} \circ \cL^{(0)}\quad (j=1, 2), \qquad \cK^{(\geq 3)}=\cH^{(\geq 3)}\circ \cL^{(0)}. \end{split} \end{equation} As a direct consequence of Lemma \ref{lem:norm.ham} and Theorem \ref{thm:reducibility}, estimates \eqref{lem:norm.ham1} hold also for $\cK^{(j)}$, $j=1,2$ and $\cK^{(\geq 3)}$. We now perform one step of Birkhoff normal form change of variables which cancels out $\cK^\1$ completely. 
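To see where such conditions come from, recall the standard computation behind any Birkhoff step: with the conventions used here (up to an overall sign of the Poisson bracket), for a cubic monomial $\mathfrak m = e^{\im \theta\cdot \ell}\, a_{\jj_1}^{\sigma_1} a_{\jj_2}^{\sigma_2} a_{\jj_3}^{\sigma_3}$ a direct computation gives

\begin{equation*}
\Big\{\, \omega \cdot \yy + \sum_{\jj \in \Z_N^2 \setminus \cS_0} \Omega_\jj\, |a_\jj|^2 \,,\ \mathfrak m \,\Big\} = \im \big( \omega \cdot \ell + \sigma_1 \Omega_{\jj_1} + \sigma_2 \Omega_{\jj_2} + \sigma_3 \Omega_{\jj_3} \big)\, \mathfrak m \ ,
\end{equation*}

so solving the homological equation for the generator of the change of variables amounts to dividing each coefficient of $\cK^\1$ by the corresponding combination of frequencies; the third order Melnikov conditions introduced below provide exactly the lower bounds on these divisors.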
In order to define such a change of variables we need to impose third order Melnikov conditions, which hold true on a subset of the set $\cC^{(0)}$ of Theorem \ref{thm:reducibility}. \begin{lemma}\label{lemma:cubic:MeasureEstimate} Fix $0 <\e_1<\e_0$ sufficiently small and $\tau_1 >\tau_0$ sufficiently large. There exist constants $\gamma_1>0, \tL_1>\tL_0$ (with $\tL_1$ depending only on $\td$), such that for all $0<\e\le \e_1$ and for an $\tL_1$-generic choice of the set $\Tan$ (in the sense of Definition \ref{Lgenericity}), the set \begin{equation}\notag \cC^{(1)}:=\left\{\lambda\in \cC^{(0)}:\; \abs{\omega \cdot \ell + \s_1\Omega_{\jj_1}(\lambda, \e)+ \s_2 \Omega_{\jj_2}(\lambda, \e)+\s_3 \Omega_{\jj_3}(\lambda, \e)} \geq \gamma_1 \frac{\e }{\la \ell \ra^{\tau_1}} \ ,\;\forall (\jj,\ell,\s)\in \fA_3\right\}, \end{equation} where $\fA_3$ is introduced in Definition \ref{rem:adm3}, has positive measure. More precisely, $|\cC^{(0)}\setminus \cC^{(1)} |\lesssim \e_1^{\kappa_1}$ for some constant $\kappa_1>0$ independent of $\e_1$. \end{lemma} This lemma is proven in Appendix C of \cite{Maspero-Procesi}. The main result of this section is the following theorem. \begin{theorem} \label{thm:3b} Assume the same hypotheses and use the same notation as in Lemma \ref{lemma:cubic:MeasureEstimate}. Consider the constants $\tL_1$, $\gamma_1$, $\tau_1$ given by Lemma \ref{lemma:cubic:MeasureEstimate}, the associated set $\cC^{(1)}$, and the constants $\eps_0$, $\rho_0$ and $r_0$ given in Theorem \ref{thm:reducibility}. There exist $0<\e_1\leq \e_0$, $0<\rho_1\leq \rho_0/64$, $0<r_1\leq r_0$ such that the following holds true for all $0<\e\le \e_1$. 
For each $\lambda \in \cC^{(1)}$ and all $0<r \leq r_1$, $0<\rho \leq \rho_1$, there exists a symplectic change of variables $\cL^{(1)}$, well defined and majorant analytic from $D(\rho/2, r/2) \to D(\rho,r)$, which applied to the Hamiltonian $\cK$ in \eqref{ham.bnf3} leads to \begin{equation}\label{def:HamAfterCubic} \cQ:=\cK \circ \cL^{(1)}(\lambda; \yy, \theta, \ba) =\omega \cdot \yy + \sum_{\jj\in \Z_N^2\setminus \Tan}\Omega_\jj(\lambda, \e) |a_\jj|^2 +\cQ^{( 2)} +\cQ^{(\geq 3)}\ , \end{equation} where \begin{itemize} \item[(i)] the map $\cL^{(1)}$ is the time-1 flow of a cubic Hamiltonian $\chi^\1$ such that $|{\chi}^\1|_{\rho/2,r/2}^{\cC^\1} \lesssim \frac{r }{\sqrt{\e}}$. \item[(ii)] $\cQ^{( 2)} $ is of degree 2 (in the sense of Definition \ref{def:degree}) and is given by \begin{equation}\label{zebra2} \cQ^{(2)}=\cK^{(2)} +\frac12 \{ \cK^{(1)}, \chi^{\1}\}, \end{equation} and satisfies $|\cQ^{(2)}|_{\rho/2,r/2}\lesssim r^2$. \item[(iii)] $\cQ^{( \geq 3)} $ is of degree at least 3 and satisfies \begin{equation}\label{zebra3} |\cQ^{(\geq 3)}|_{\rho/2, r/2}^{\cC^{(1)}} \lesssim \frac{r^3}{\sqrt\varepsilon}. \end{equation} \item [(iv)] $\cL^{(1)}$ satisfies $\wt \cM \circ \cL^{(1)} = \wt \cM$ and $\wt \cP \circ \cL^{(1)} = \wt \cP$. \item[(v)] $\cL^{(1)}$ maps $D(\rho/2, r/2) \cap h^1 \to D(\rho, r)\cap h^1$, and if we denote $( \widetilde \cY, \widetilde\theta, \widetilde {\bf a})=\cL^{(1)}( \cY, \theta, {\bf a})$, then \begin{equation}\label{cubic difference} \left\| \widetilde{\bf a}-\ba\right\|_{\ell^1}\lesssim \|\ba\|_{\ell^1}^2. \end{equation} \end{itemize} \end{theorem} To prove this theorem, we first state the following lemma, which is proved in \cite{Maspero-Procesi}. \begin{lemma}\label{lemma:Estimates} For every $ \rho,r>0$ the following holds true: \begin{itemize} \item[(i)] Let $f,\, g,\, h \in \cA_{\rho,r}^\cO$.
For any $0<\rho' <\rho$ and $0<r' < r$, one has \[ \left|\{f,g\}\right|_{\rho',r'}^\mathcal{O}\leq \upsilon^{-1}C \left|f\right|_{\rho,r}^\mathcal{O} \left|g\right|_{\rho,r}^\mathcal{O}, \] where $\upsilon := \min \left( 1-\frac{r'}{r}, \rho-\rho'\right)$. If $\upsilon^{-1} |f|_{\rho,r}^\cO<\zeta$ with $\zeta$ sufficiently small, then the (time-1 flow of the) Hamiltonian vector field $X_f$ defines a close to identity canonical change of variables $\cT_f$ such that $$ |h\circ\cT_f|_{\rho', r'}^\cO \leq (1+C\zeta)|h|_{\rho,r}^\cO \ , \qquad\text{for all }\, 0<\rho' <\rho \ , \ \ 0<r' < r \ . $$ \item[(ii)] Let $f,g \in \cA_{\rho,r}^\cO$ be of minimal degree respectively $\td_f$ and $\td_g$ (see Definition \ref{def:degree}) and define the function \begin{equation}\label{taylor} \re{\ti}(f; g)=\sum_{l=\ti}^\infty \frac{(\ad f)^l}{l!} g\,,\quad \ad(f)g:= \{g,f\} \ . \end{equation} Then $\re{\ti}(f; g)$ is of minimal degree $\td_f \ti +\td_g$ and we have the bound $$ \abs{\re{\ti}(f; g)}_{\rho',r'}^\cO \leq C(\rho) \upsilon^{-\ti} \left(|f|_{\rho,r}^\cO\right)^\ti \, |g|_{\rho,r}^\cO \ , \qquad \forall 0<\rho' <\rho \ , \ \ 0<r' < r \ . $$ \end{itemize} \end{lemma} \begin{proof}[Proof of Theorem \ref{thm:3b}] We look for $\cL^{(1)}$ as the time-one flow of a Hamiltonian $\chi^\1$. With $\widehat\cN := \omega \cdot \yy + \sum_{\jj\in \Z_N^2\setminus \Tan}\Omega_\jj(\lambda, \e) |a_\jj|^2$ and $\displaystyle{\re{j}(\chi^{(1)}; \,\cdot )=\sum_{k \geq j} \frac{{\rm ad}(\chi^\1)^{k-1}[ \{ \cdot, \chi^\1 \}]}{k!}}$, we have \begin{align} \label{blu.10} \cK\circ \cL^{(1)} = & \ \widehat\cN + \{\widehat\cN , \chi^\1 \} + \cK^\1 \\ \label{blu.20} & + \re2(\chi^\1; \, \widehat\cN ) + \{ \cK^\1, \chi^\1\}+ \re2(\chi^\1; \, \cK^\1 ) \\ \label{blu.30} & + \cK^{\2} +\re1(\chi^\1; \, \cK^\2 ) + \cK^{(\geq 3)} \circ \cL^{(1)} \end{align} We choose $\chi^\1$ to solve the homological equation $ \{ \widehat\cN , \chi^\1 \} + \cK^\1 = 0$.
To this end, we write $$ \cK^\1= \sum_{\ell,\bj,\vec{\s}\in\; \fA_3} K^{\vec{\sigma}}_{\ell,\bj}(\lambda, \e)\, e^{\im \theta\cdot \ell}a^{\sigma_1}_{\jj_1}a^{\sigma_2}_{\jj_2}a^{\sigma_3}_{\jj_3} \,, \qquad \chi^\1= \sum_{\ell,\bj,\vec{\s}\in\; \fA_3} \chi^{\vec{\sigma}}_{\ell,\bj}(\lambda, \e)\, e^{\im \theta\cdot \ell}a^{\sigma_1}_{\jj_1}a^{\sigma_2}_{\jj_2}a^{\sigma_3}_{\jj_3} $$ with $$ \chi^{\vec{\sigma}}_{\ell,\bj}(\lambda, \e) := \frac{\im K^{\vec{\sigma}}_{\ell,\bj}(\lambda, \e)}{\omega \cdot \ell + \s_1\Omega_{\jj_1}(\lambda, \e)+ \s_2 \Omega_{\jj_2}(\lambda, \e)+ \s_3 \Omega_{\jj_3}(\lambda, \e)} \ . $$ Since $\lambda \in \cC^\1$ and the terms $q_m^{\rm{fg}}$ appearing in $\cH^\1$ (and hence in $\cK^\1$) are $O(\sqrt\e)$, we have $$ |{\chi}^\1|_{\frac{\rho}{2},r}^{\cC^\1} \lesssim \frac{r }{\sqrt{\e}} . $$ We come to the terms of line \eqref{blu.20}. First we use the homological equation $ \{ \widehat\cN , \chi^\1 \} + \cK^\1 = 0$ to get that \begin{align*} \re2\left( \chi^\1; \widehat\cN \right) & = \sum_{k \geq 2} \frac{{\rm ad}(\chi^\1)^{k-1}[ \{ \widehat\cN, \chi^\1 \}]}{k!} = -\frac12\{ \cK^\1, \chi^\1\}- \sum_{k \geq 2} \frac{{\rm ad}(\chi^\1)^{k}[\cK^\1 ]}{(k+1)!}. \end{align*} Therefore, we set $\cQ^\2$ as in \eqref{zebra2} and $$\cQ^{(\geq 3)}=\re2(\chi^\1; \, \cK^\1 ) +\re1(\chi^\1; \, \cK^\2 ) + \cK^{(\geq 3)} \circ \cL^{(1)}- \sum_{k \geq 2} \frac{{\rm ad}(\chi^\1)^{k}[\cK^\1 ]}{(k+1)!}.$$ By Lemma \ref{lemma:Estimates}, $\cQ^{(\geq 3)}$ has degree at least 3 and fulfills the quantitative estimate \eqref{zebra3}. To prove $(iv)$, note that $\cK^\1$ commutes with $\wt\cM$ and $\wt \cP$, hence its monomials fulfill the selection rules of Remark \ref{leggi_sel1}. By the explicit formula for $\chi^\1$ above, the same selection rules hold for $\chi^\1$, so $\{ \wt\cM, \chi^\1\} = \{ \wt \cP, \chi^\1\} = 0$, and consequently $\cL^{(1)}$ preserves $\wt \cM$ and $\wt \cP$.
\medskip It remains to show the mapping properties of the operator $\cL^{(1)}$. First we show that it maps $D(\rho/2, r/2) \to D(\rho, r)$. Denote $( \widetilde \cY, \widetilde\theta, \widetilde {\bf a})=\cL^{(1)}( \cY, \theta, {\bf a})$; then $( \widetilde \cY, \widetilde\theta, \widetilde {\bf a})=( \widetilde \cY(s), \widetilde\theta(s), \widetilde {\bf a}(s))\big|_{s=1}$, where $( \widetilde \cY(s),\widetilde\theta(s), \widetilde {\bf a}(s))$ is the Hamiltonian flow generated by $\chi^{(1)}$ at time $0\leq s\leq 1$. Using the identity $$ ( \widetilde \cY(t),\widetilde\theta(t), \widetilde {\bf a}(t))=( \cY, \theta, {\bf a})+\int_0^t X_{\chi^{(1)}}\left( \widetilde \cY(s), \widetilde\theta(s), \widetilde {\bf a}(s)\right) \di s $$ where $X_{\chi^{(1)}}$ is the Hamiltonian vector field associated with $\chi^{(1)}$ above, together with a standard continuity (bootstrap) argument, we conclude that $( \widetilde \cY, \widetilde\theta,\widetilde {\bf a}) \in D(\rho, r)$. Similarly, one also obtains estimate \eqref{cubic difference}. Finally, to prove that $\cL^{(1)}$ maps $D(\rho/2, r/2) \cap h^1 \to h^1$, we note that $\widehat \cN$ is equivalent to the square of the $h^1$ norm, and $$ \widehat \cN \circ \cL^{(1)}= \widehat\cN + \re1(\chi^\1; \, \widehat\cN )=\widehat\cN - \sum_{k \geq 0} \frac{{\rm ad}(\chi^\1)^{k}[\cK^\1 ]}{(k+1)!}=\widehat \cN +O(\sqrt \e r^3), $$ and this completes the proof. \end{proof} \section{Analysis of the quartic part of the Hamiltonian}\label{sec:QuarticBirkhoff} At this stage, we are left with the Hamiltonian $\cQ$ given in \eqref{def:HamAfterCubic}. The aim of this section is to eliminate non-resonant terms from $\cQ^{(2)}$.
First note that $\cQ^\2$ contains monomials which have one of the two following forms $$ e^{\im \theta \cdot \ell} \, a_{\jj_1}^{\sigma_1} \, a_{\jj_2}^{\sigma_2} \, a_{\jj_3}^{\sigma_3}\, a_{\jj_4}^{\sigma_4} \quad \mbox{ or } \quad e^{\im \theta \cdot \ell} \,\yy^l a_{\jj_1}^{\sigma_1} \, a_{\jj_2}^{\sigma_2} \ \ \ \mbox{with} \ \ |l| = 1. $$ In order to cancel out the terms quadratic in $a$ by a Birkhoff normal form procedure, we only need the {\sl second Melnikov conditions} imposed in \eqref{2.mc}. In order to cancel out the quartic terms in $a$ we need {\sl fourth Melnikov conditions}, namely to control expressions of the form \begin{equation} \label{4m} \omega(\lambda) \cdot \ell + \sigma_1 \Omega_{\jj_1}(\lambda, \e) + \sigma_2 \Omega_{\jj_2}(\lambda, \e) + \sigma_3 \Omega_{\jj_3}(\lambda, \e) + \sigma_4 \Omega_{\jj_4}(\lambda, \e) \,,\quad \s_i=\pm 1. \end{equation} We start by defining the following set $\fR_4\subset\fA_4$ (see Definition \ref{rem:adm3}), \begin{align} \label{def:R4} \fR_4 := \Big\{(\bj, \ell, \sigma) \colon & \ell = 0 \mbox{ and } \jj_1, \jj_2, \jj_3, \jj_4\notin \sS \mbox{ form a rectangle}\\ & \ell = 0 \mbox{ and } \jj_1 , \jj_2 \notin\sS , \jj_3, \jj_4 \in \sS \mbox{ form a horizontal rectangle (even degenerate)}\notag\\ &\ell \neq 0, \ \jj_1, \jj_2, \jj_3 \in \sS, \ \jj_4 \not\in \sS \mbox{ and } |m_4|< M_0, \mbox{ where } M_0 \mbox{ is a universal constant} \notag\\ &\ell = 0, \ \jj_1, \jj_2, \jj_3, \jj_4 \in \sS \mbox{ form a horizontal trapezoid} \Big\} \notag \end{align} where $\sS$ is the set defined in \eqref{def:SetZ}. Here a trapezoid (or a rectangle) is said to be {\em horizontal} if two sides are parallel to the $x$-axis. \begin{figure}[ht]\centering \vskip-10pt\begin{minipage}[c]{5cm} \hskip-122pt {\centering \includegraphics[width=12cm]{rectangles.png} } \end{minipage} \caption{The black dots are the points in $\cS_0$. The two rectangles and the trapezoid correspond to cases 1, 2, 4 in $\fR_4$. In order to represent case 3,
we have highlighted three points in $\sS$. To each such triple we may associate at most one $\ell\neq 0$ and one $\jj_4\in \fZ$, which form a resonance of type 3.} \end{figure} \begin{proposition} \label{hopeful thinking} Fix $0<\e_2<\e_1$ sufficiently small and $\tau_2>\tau_1$ sufficiently large. There exist $\gamma_2>0$ and $\tL_2\ge \tL_1$ (with $\tL_2$ depending only on $\td$) such that for all $0<\e\le \e_2$ and for an $\tL_2$-generic choice of the set $\Tan$ (in the sense of Definition \ref{Lgenericity}), the set \begin{equation}\notag \begin{split} \cC^{(2)}:=\Big\{\lambda\in \cC^{(1)} :&\; \abs{\omega \cdot \ell + \s_1\Omega_{\jj_1}(\lambda, \e)+ \s_2 \Omega_{\jj_2}(\lambda, \e)+\s_3 \Omega_{\jj_3}(\lambda, \e)+\s_4\Omega_{\jj_4}(\lambda, \e)} \geq \frac{\gamma_2\e}{\la \ell \ra^{\tau_2}} \ ,\\ &\;\forall (\jj,\ell,\s)\in \fA_4 \setminus\fR_4\Big\} \end{split} \end{equation} has positive measure and $\abs{\cC^{(1)} \setminus \cC^{(2)}} \lesssim \e_2^{\kappa_2}$ for some $\kappa_2>0$ independent of $\e_2$. \end{proposition} The proof of the proposition, being quite technical, is postponed to Appendix \ref{app:mes.m}. An immediate consequence, following the same strategy as for the proof of Theorem \ref{thm:3b}, is the following result. We define $\Pi_{\fR_4}$ as the projection of a function in $D(\rho, r)$ onto the sum of monomials with indexes in $\fR_4$. Abusing notation, we define analogously $\Pi_{\fR_2}$ as the projection onto monomials $e^{\im\ell\cdot\theta}\yy^l a_{\jj_1}^{\sigma_1} a_{\jj_2}^{\sigma_2}$ with $|l|=1$ and $(\jj_1,\jj_2,\ell, \sigma_1,\sigma_2)\in\fR_2$.
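The first family in \eqref{def:R4} reflects the leading asymptotics $\Omega_\jj \approx |\jj|^2$ of \eqref{as.omega}: take $\ell=0$, alternating signs $\s=(1,-1,1,-1)$, and let $\jj_1,\jj_2,\jj_3,\jj_4$ be the vertices of a rectangle with $\jj_1,\jj_3$ opposite. Writing $\jj_2=\jj_1+v$, $\jj_4=\jj_1+w$, $\jj_3=\jj_1+v+w$ with $v \cdot w=0$, one checks

\begin{equation*}
|\jj_1|^2-|\jj_2|^2+|\jj_3|^2-|\jj_4|^2 = |v+w|^2-|v|^2-|w|^2 = 2\, v\cdot w = 0 \ ,
\end{equation*}

so for such quadruples the quantity \eqref{4m} reduces to the $O(\e^2)$ corrections in \eqref{as.omega} (cf. \eqref{theta.est}) and cannot be bounded below by $\gamma_2 \e \la \ell \ra^{-\tau_2}$; these resonances must therefore be excluded from the Melnikov conditions.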
\begin{theorem}\label{prop:Birkhoff4} There exist $0 < r_2\leq r_1$, $0<\rho_2 \leq\rho_1$ such that for all $0<\e\leq \e_2$, for all $\lambda\in \cC^\2$ and for all $r \in [0, r_2]$, $\rho \in [\frac{\rho_2}{2} , \rho_2]$ there exists a symplectic change of variables $\cL^\2$ well defined and majorant analytic from $D(\rho/2,r/2)\to D(\rho,r)$ such that \begin{equation}\label{def:HamAfterBirkhoff4} \cQ \circ \cL^\2 (\yy, \theta, \ba)= \omega \cdot \yy + \sum_{\jj\in \Z_N^2\setminus \cS_0}\Omega_\jj(\lambda, \e) |a_\jj|^2 +\cQ^{(2)}_{\rm Res}+\wt \cQ^{(\ge 3)} \end{equation} where \begin{equation}\label{q2res} \cQ^{(2)}_{\rm Res}= \Pi_{\fR_4} \cQ^{(2)} + \Pi_{\fR_2} \cQ^\2 \end{equation} with $\fR_4$ defined in \eqref{def:R4}, $\fR_2$ defined in \eqref{res2} and $$ |\cQ^{(2)}_{\rm Res}|_{\rho/2,r/2}\lesssim {r^2} \ , \qquad |\wt \cQ^{(\ge 3)}|_{\rho/2,r/2}\lesssim \frac{r^3}{\sqrt{\e}} . $$ Moreover $\cL^{(2)}$ maps $D(\rho/2, r/2) \cap h^1 \to D(\rho, r)\cap h^1$, and if we denote $( \widetilde \cY,\widetilde\theta, \widetilde {\bf a})=\cL^{(2)}( \cY, \theta, {\bf a})$, then \begin{equation}\notag \left\| \widetilde{\bf a}-\ba\right\|_{\ell^1}\lesssim \|\ba\|_{\ell^1}^3. \end{equation} \end{theorem} \begin{proof} The proof is analogous to that of Theorem \ref{thm:3b}, and we omit it. \end{proof} \section{Construction of the toy model}\label{sec:ToyModel} Once we have performed a (partial) Birkhoff normal form up to order 4, we can start applying the ideas developed in \cite{CKSTT} to Hamiltonian \eqref{def:HamAfterBirkhoff4}. Note that throughout this section $\eps>0$ is a fixed parameter: we neither use its smallness nor modify it.
We first apply to the Hamiltonian \eqref{def:HamAfterBirkhoff4} the (time dependent) change of variables to rotating coordinates \begin{equation}\label{def:rotating} a_\jj=\beta_\jj \ e^{\im \Omega_\jj(\lambda, \e)t}, \end{equation} which leads to the {\em corrected} Hamiltonian \begin{equation}\label{def:HamRotAfterNF} \cQ_\rot(\yy, \theta, \beta,t)=\cQ \circ \cL^\2 \left(\yy, \theta, \{\beta_\jj \ e^{\im \Omega_\jj(\lambda, \e)t}\}_{\jj\in\mathbb{Z}_N^2\setminus\cS_0}\right) - \sum_{\jj \in \Z^2_N\setminus \cS_0} \Omega_\jj(\lambda, \e) |\beta_\jj|^2. \end{equation} We split this Hamiltonian as a suitable first order truncation $\cG$ plus two remainders, \[ \cQ_\rot(\yy, \theta, \beta, t)=\cG(\yy, \theta, \beta) + \cJ_1(\yy, \theta, \beta, t)+\cR(\yy, \theta, \beta, t) \] with \begin{equation}\label{def:HamTruncRotatingSimpl} \begin{split} \cG(\yy, \theta, \beta)&=\omega \cdot \yy + \cQ^{(2)}_{\rm Res}(\yy, \theta, \beta)\\ \cJ_1(\yy, \theta, \beta,t)&=\cQ^{(2)}_{\rm Res}\left(\yy,\theta, \{\beta_\jj \ e^{\im \Omega_\jj(\lambda, \e)t}\}_{\jj\in\mathbb{Z}_N^2\setminus\cS_0}\right)-\cQ^{(2)}_{\rm Res}(\yy, \theta, \beta)\\ \cR(\yy, \theta, \beta,t)&=\wt \cQ^{(\ge 3)}\left(\yy, \theta, \{\beta_\jj \ e^{\im \Omega_\jj(\lambda, \e)t}\}_{\jj\in\mathbb{Z}_N^2\setminus\cS_0}\right) \end{split} \end{equation} where $\cQ^{(2)}_{\rm Res}$ and $\wt \cQ^{(\ge 3)}$ are the Hamiltonians introduced in Theorem \ref{prop:Birkhoff4}. For the rest of this section we focus our study on the truncated Hamiltonian $\cG$. Note that the remainder $\cJ_1$ is not smaller than $\cG$. Nevertheless, it will be smaller when evaluated on the particular solutions we consider. The term $\cR$ is smaller than $\cG$ for small data since it is the remainder of the normal form obtained in Theorem \ref{prop:Birkhoff4}. Later, in Section \ref{sec:Approximation}, we show that including the neglected terms $\cJ_1$ and $\cR$ barely alters the dynamics of the solutions of $\cG$ that we analyze.
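For the reader's convenience, let us recall where the correction term in \eqref{def:HamRotAfterNF} comes from. Assuming the convention $\dot a_\jj = \im\, \partial \cH' / \partial \bar a_\jj$ for the Hamiltonian flow (the computation is identical, up to an overall sign, with the opposite convention), where $\cH'$ is shorthand for $\cQ \circ \cL^\2$, and writing $\beta_\jj = a_\jj e^{-\im \Omega_\jj(\lambda,\e) t}$, one has

\begin{equation*}
\dot \beta_\jj = \dot a_\jj\, e^{-\im \Omega_\jj t} - \im \Omega_\jj \beta_\jj
= \im\, e^{-\im \Omega_\jj t}\, \frac{\partial \cH'}{\partial \bar a_\jj} - \im \Omega_\jj \beta_\jj
= \im\, \frac{\partial}{\partial \bar \beta_\jj} \Big( \cH'\big(\{\beta_{\jj'} e^{\im \Omega_{\jj'} t}\}_{\jj'}\big) - \sum_{\jj'} \Omega_{\jj'} |\beta_{\jj'}|^2 \Big) ,
\end{equation*}

since $\bar a_\jj = \bar \beta_\jj e^{-\im \Omega_\jj t}$ and the chain rule produces exactly the factor $e^{-\im \Omega_\jj t}$. Hence $\beta$ evolves according to the Hamiltonian $\cQ_\rot$ of \eqref{def:HamRotAfterNF}.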
\subsection{The finite set $\Lambda$} We now start constructing special dynamics for the Hamiltonian $\cG$ with the aim of treating the contributions of $\cJ_1$ and $\cR$ as remainder terms. Following \cite{CKSTT}, we do not study the full dynamics of $\cG$ but rather restrict it to invariant subspaces. Indeed, we shall construct a set $\Lambda\subset \fZ:=(\Z\times N\Z)\setminus (\cS_0\cup \sS)$ for some large $N$, in such a way that it generates an invariant subspace (for the dynamics of $\cG$) given by \begin{equation}\label{def:ULambda} U_\Lambda:= \{\beta_\jj=0:\jj\not\in\Lambda\}. \end{equation} This motivates the following definition. \begin{definition}[Completeness]\label{completeness} We say that a set $\Lambda\subset \fZ$ is {\em complete} if $U_\Lambda$ is invariant under the dynamics of $\cG$. \end{definition} \begin{remark} It is easily seen that if $\Lambda$ is complete, $U_\Lambda$ is also invariant under the dynamics of $\cG+\cJ_1$. \end{remark} We construct a complete set $\Lambda\subset \fZ$ (see Definition \ref{completeness}) and we study the restriction to it of the dynamics of the Hamiltonian $\cG$ in \eqref{def:HamTruncRotatingSimpl}. Following \cite{CKSTT}, we impose several conditions on $\Lambda$ to obtain dynamics as simple as possible. The set $\Lambda$ is constructed in two steps. First we construct a preliminary set ${\Lambda_0} \subset \Z^2$ on which we impose numerous geometrical conditions. Later on we scale ${\Lambda_0}$ by a factor $N$ to obtain $\Lambda\subset (N\Z\times N\Z)\subset \fZ$. The set ${\Lambda_0}$ is ``essentially'' the one described in \cite{CKSTT}. The crucial point in that paper is to choose the modes carefully so that each mode in $\Lambda_0$ only belongs to two rectangles with vertices in $\Lambda_0$. This simplifies the dynamics considerably and makes them easier to analyze. Of course, this requires imposing several conditions on $\Lambda_0$.
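Before listing the conditions, let us illustrate why completeness is essentially a closure condition on rectangles. As made precise in Lemma \ref{lemma:ResonantHamRectangles} below, the monomials of $\cG$ coupling different modes are of the form $\beta_{\jj_1}\bar \beta_{\jj_2}\beta_{\jj_3}\bar \beta_{\jj_4}$ with \begin{equation}\notag \jj_1-\jj_2+\jj_3-\jj_4=0, \qquad |\jj_1|^2-|\jj_2|^2+|\jj_3|^2-|\jj_4|^2=0; \end{equation} the first relation says the four points form a parallelogram, and substituting $\jj_4=\jj_1-\jj_2+\jj_3$ into the second gives $(\jj_1-\jj_2)\cdot(\jj_3-\jj_2)=0$, so the four points are the vertices of a rectangle. If $\jj_1,\jj_2,\jj_3\in\Lambda$ were three vertices of a rectangle whose fourth vertex $\jj_4=\jj_1-\jj_2+\jj_3$ did not belong to $\Lambda$, such a monomial would contribute a nonzero multiple of $\beta_{\jj_1}\bar \beta_{\jj_2}\beta_{\jj_3}$ to $\im \dot\beta_{\jj_4}=\partial_{\bar \beta_{\jj_4}}\cG$, which does not vanish identically on $U_\Lambda$: the mode $\jj_4$ would get excited and $U_\Lambda$ would not be invariant. Closure under rectangles rules out this obstruction.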
We add some extra conditions to adapt the set $\Lambda_0$ to the particular setting of the present paper. We start by describing them. We split $\Lambda_0$ into $\gen$ disjoint generations $\La_0=\La_{01}\cup\ldots \cup \La_{0\gen}$. We call a quadruplet $(\jj_1,\jj_2, \jj_3, \jj_4) \in \Lambda_0^4$ a \emph{nuclear family} if $\jj_1, \jj_3 \in \Lambda_{0k}$, $\jj_2, \jj_4 \in \Lambda_{0,k+1}$ for some $1\leq k<\gen$, and the four vertices form a non-degenerate rectangle. Then, we require the following conditions. \begin{itemize} \item Property $\mathrm{I}_{\Lambda_0}$ (Closure): If $\jj_1,\jj_2, \jj_3 \in {\Lambda_0}$ are three vertices of a rectangle, then the fourth vertex of that rectangle is also in ${\Lambda_0}$. \item Property $\mathrm{II}_{{\Lambda_0}}$ (Existence and uniqueness of spouse and children): For each $1\leq k <\gen$ and every $\jj_1\in \Lambda_{0k}$, there exists a unique spouse $\jj_3\in \Lambda_{0k}$ and unique (up to trivial permutations) children $\jj_2,\jj_4 \in \Lambda_{0,k+1}$ such that $(\jj_1,\jj_2,\jj_3,\jj_4)$ is a nuclear family in ${\Lambda_0}$. \item Property $\mathrm{III}_{{\Lambda_0}}$ (Existence and uniqueness of parents and siblings): For each $1\leq k <\gen$ and every $\jj_2 \in \Lambda_{0,k+1}$ there exists a unique sibling $\jj_4\in \Lambda_{0,k+1}$ and unique (up to permutation) parents $\jj_1,\jj_3 \in \Lambda_{0k}$ such that $(\jj_1,\jj_2,\jj_3,\jj_4)$ is a nuclear family in ${\Lambda_0}$. \item Property $\mathrm{IV}_{\Lambda_0}$ (Non-degeneracy): A sibling of any frequency $\jj$ is never equal to its spouse. \item Property $\mathrm{V}_{\Lambda_0}$ (Faithfulness): Apart from nuclear families, ${\Lambda_0}$ contains no other rectangles. In fact, by the closure property $\mathrm{I}_{\Lambda_0}$, this also means that it contains no right-angled triangles other than those coming from vertices of nuclear families. \item Property $\mathrm{VI}_{\Lambda_0}$: There are no two elements $\jj_1,\jj_2 \in {\Lambda_0}$ such that $\jj_1 \pm \jj_2 = 0 $.
There are no three elements $\jj_1,\jj_2,\jj_3 \in {\Lambda_0}$ such that $\jj_1-\jj_2+\jj_3=0$. If four points in ${\Lambda_0}$ satisfy $\jj_1-\jj_2+\jj_3-\jj_4=0$ then either the relation is trivial or such points form a family. \item Property $\mathrm{VII}_{\Lambda_0}$: There are no points in ${\Lambda_0}$ with one of the coordinates equal to zero, i.e., $${\Lambda_0} \cap \big(\Z\times \{0\} \cup \{0\}\times \Z\big)= \emptyset.$$ \item Property $\mathrm{VIII}_{\Lambda_0}$: There are no two points in ${\Lambda_0}$ which form a right angle with the origin. \end{itemize} Condition $\mathrm{I}_{\Lambda_0}$ is just a rephrasing of the completeness condition introduced in Definition \ref{completeness}. Properties $\mathrm{II}_{\Lambda_0}$, $\mathrm{III}_{\Lambda_0}$, $\mathrm{IV}_{\Lambda_0}$, $\mathrm{V}_{\Lambda_0}$ correspond to $\Lambda_0$ being a family tree in the sense of \cite{CKSTT}. \begin{theorem}\label{thm:SetLambda} Fix $\tK\gg 1$ and $s\in (0,1)$. Then, there exist $\gen\gg 1$, $A_0\gg 1$, $\eta>0$, and a set $\Lambda_0\subset \Z^2$ with \[ \Lambda_0=\Lambda_{01}\cup\ldots\cup\Lambda_{0\gen}, \] which satisfies conditions $\mathrm{I}_{\Lambda_0}$ -- $\mathrm{VIII}_{\Lambda_0}$ and also \begin{equation}\label{def:Growth} \frac{\sum_{\jj\in\Lambda_{0,\gen-1}}|\jj|^{2s}}{\sum_{\jj\in\Lambda_{03}}|\jj|^{2s}} \geq \dfrac 12 2^{(1-s)(\gen-4)}\ge \tK^2.
\end{equation} Moreover, for any $A\geq A_0$, there exist $\gen$ and a function $f(\gen)$ satisfying \begin{equation}\label{estimate on fg} e^{A^{\gen}}\leq f(\gen)\leq e^{2(1+\eta)A^{\gen}} \qquad \text{for $\gen$ large enough,} \end{equation} such that each generation $\Lambda_{0k}$ has $2^{\gen-1}$ distinct frequencies $\jj$ satisfying \begin{equation}\label{eq:BoundsS1:0} C^{-1}f(\gen)\leq |\jj|\leq C3^\gen f(\gen),\ \ \jj\in\Lambda_{0k}, \end{equation} and \begin{equation}\label{eq:BoundsSN:0} \frac{\sum_{\jj\in\Lambda_{0k}}|\jj|^{2s}}{\sum_{\jj\in\Lambda_{0i}}|\jj|^{2s}}\leq Ce^{s\gen} \end{equation} for any $1\leq i <k\leq \gen$ and some constant $C>0$ independent of $\gen$. \end{theorem} Sets of this kind were first constructed in \cite{CKSTT} (see also \cite{GuardiaK12, GuardiaK12Err, Guardia14, GuardiaHP16}), where the authors construct sets $\Lambda$ satisfying Properties $\mathrm{I}_\Lambda$-$\mathrm{V}_\Lambda$ and estimate \eqref{eq:BoundsSN:0}. The proof of Theorem \ref{thm:SetLambda} follows the same lines as the ones in those papers. Indeed, Properties $\mathrm{VI}_\Lambda$-$\mathrm{VIII}_\Lambda$ can be obtained through the same density argument. Finally, the estimate \eqref{eq:BoundsS1:0}, even if not stated explicitly in \cite{CKSTT}, is an easy consequence of the proof in that paper (in \cite{GuardiaK12, GuardiaK12Err, GuardiaHP16} a slightly weaker estimate is used). \begin{remark} Note that $s\in (0,1)$ implies that we are constructing a backward cascade orbit (energy is transferred from high to low modes). This means that the generations of $\Lambda_0$ are simply reversed, $\Lambda_{0j} \leftrightarrow \Lambda_{0, \gen -j +1}$, compared to the ones constructed in \cite{CKSTT}. The second statement of Theorem \ref{thm:main} considers $s>1$ and therefore a forward cascade orbit (energy transferred from low to high modes).
For this result, we need a set $\Lambda_0$ of the same kind as that of \cite{CKSTT}, which thus satisfies \[ \frac{\sum_{\jj\in\Lambda_{0,\gen-1}}|\jj|^{2s}}{\sum_{\jj\in\Lambda_{03}}|\jj|^{2s}} \geq \dfrac 12 2^{(s-1)(\gen-4)}\ge \tK^2 \] instead of estimate \eqref{def:Growth}. \end{remark} We now scale ${\Lambda_0}$ by a factor $N$ satisfying \eqref{def:SizeN} and we set ${\Lambda}:= N{\Lambda_0}$. Note that the listed properties $\mathrm{I}_{\Lambda_0}$ -- $\mathrm{VIII}_{\Lambda_0}$ are invariant under scaling. Thus, if they are satisfied by $\Lambda_0$, they are satisfied by $\Lambda$ too. \begin{lemma} \label{lem:rangle} There exists a set $\Lambda$ satisfying all statements of Theorem \ref{thm:SetLambda} (with a different $f(\gen)$ satisfying \eqref{estimate on fg}) and also the following additional properties. \begin{enumerate} \item If two points $\jj_1, \jj_2 \in {\Lambda}$ form a right angle with a point $(m,0) \in \Z \times \{0\}$, then $$ |m| \geq \sqrt{f(\gen)} \ . $$ \item $\Lambda\subset N\Z\times N\Z$ with \[ N=f(\gen)^{\frac{4}{5}}. \] \end{enumerate} \end{lemma} \begin{proof} Consider any of the sets $\Lambda$ obtained in Theorem \ref{thm:SetLambda}. By property $\mathrm{VIII}_{{\Lambda_0}}$ one has $m \neq 0$. Define $\jj_3= (m,0)$. The condition for orthogonality is either $$ (i) \ (\jj_1- \jj_2)\cdot (\jj_3- \jj_2 ) = 0 \ \text{ or } \ \ \ (ii) \ (\jj_1- \jj_3)\cdot (\jj_2- \jj_3) = 0 \ . $$ Taking $\jj_i=(m_i,n_i)$, $i=1,2$, condition $(i)$ implies (after some computations) that $m$ is given by $$ m = \frac{(n_1- n_2)n_2 + (m_1-m_2)m_2}{m_1- m_2} \ . $$ Then, since $|m_1- m_2| \leq 2 Cf(\gen)3^\gen$ and the numerator is a nonzero integer, we have \begin{equation}\label{eq:RightAngleCondition} |m|\geq \frac{1}{4Cf(\gen) 3^\gen}\geq\frac{1}{(f(\gen))^{3/2}}. \end{equation} Now we consider condition $(ii)$. One gets that $m$ is a root of the quadratic equation $$ m^2 - (m_1+m_2)m + (m_1m_2 + n_1n_2) = 0 \ .
$$ First we note that $m_1m_2 + n_1n_2 \neq 0$: otherwise $m=0$ would be a solution, which is excluded by property $\mathrm{VIII}_{\Lambda_0}$. Now consider the discriminant $\Delta= (m_1+m_2)^2 - 4(m_1m_2 + n_1n_2)$. If $\Delta <0$, then no right angle is possible. If $\Delta = 0$, then clearly $|m|\geq 1/2$, since once again $m = 0$ is not a solution. Finally let $\Delta >0$. Then $$ m = \frac{(m_1+ m_2)}{2} \left(1 \pm \sqrt{1- \frac{4(m_1m_2+ n_1n_2)}{(m_1+ m_2)^2}} \right) \ . $$ Setting $\gamma:=\frac{4(m_1m_2+ n_1n_2)}{(m_1+ m_2)^2}$, the condition $\Delta>0$ implies that $-\infty <\gamma< 1$. Splitting into the two cases $|\gamma|\leq 1$ and $\gamma<-1$, one easily obtains that in either case $m$ satisfies \eqref{eq:RightAngleCondition}. It only remains to scale the set $\Lambda$ by the factor $(f(\gen))^{4}$. Then, replacing $f(\gen)$ by $\wt f(\gen):=(f(\gen))^5$, the rescaled set $\Lambda$ satisfies all statements of Theorem \ref{thm:SetLambda} and also the statements of Lemma \ref{lem:rangle}. \end{proof} \subsection{The truncated Hamiltonian on the finite set $\Lambda$ and the \cite{CKSTT} toy model} We use the properties of the set $\Lambda$ given by Theorem \ref{thm:SetLambda} and Lemma \ref{lem:rangle} to compute the restriction of the Hamiltonian $\cG$ in \eqref{def:HamTruncRotatingSimpl} to the invariant subset $U_\Lambda$ (see \eqref{def:ULambda}). \begin{lemma}\label{lemma:ResonantHamRectangles} Consider the set $\Lambda\subset N\Z\times N\Z$ obtained in Theorem \ref{thm:SetLambda}. Then, the set \[ \cM_\Lambda=\left\{(\yy, \theta, \beta):\yy=0, \ \ \beta\in U_\Lambda\right\} \] is invariant under the flow associated to the Hamiltonian $\cG$.
Moreover, $\cG$ restricted to $\cM_\Lambda$ can be written as \begin{equation}\label{def:HamToyModelLattice} \cG\big\vert_{\cM_\Lambda}(\theta, \beta)=\cG_0(\beta) + \cJ_2(\theta,\beta) \end{equation} where \begin{equation}\label{def:HamLambdaIteam} \cG_0(\beta)=-\frac12 \sum_{\jj\in\Lambda}|\beta_\jj|^4+ \frac12 \sum_{\substack{(\jj_1,\jj_2,\jj_3,\jj_4)\in \Lambda^4\\ \jj_i \text{ form a rectangle}}}^* \beta_{\jj_1}\bar \beta_{\jj_2}\beta_{\jj_3}\bar \beta_{\jj_4} \end{equation} and the remainder $\cJ_2$ satisfies \begin{equation}\label{def:BoundJ2} |\cJ_2|_{\rho,r}\lesssim r^2(f(\gen))^{-\frac{4}{5}} . \end{equation} \end{lemma} \begin{proof} First we note that, since $\yy=0$ on $\cM_\Lambda$, \[ \cG\big\vert_{\cM_\Lambda}=\cQ^{(2)}_\mathrm{Res}\big\vert_{\cM_\Lambda}= \Pi_{\fR_4} \cQ^{(2)} \big\vert_{\cM_\Lambda} \] where $\cQ^{(2)}_\mathrm{Res}$ is the Hamiltonian defined in Theorem \ref{prop:Birkhoff4}. We start by analyzing the Hamiltonian $\cQ^\2$ introduced in Theorem \ref{thm:3b}, which is defined as $$\cQ^\2= \cK^\2 + \frac{1}{2}\left\{\cK^\1,\chi^\1\right\}.$$ We analyze each term. Here a crucial role is played by the fact that $\Lambda\subset N\Z\times N\Z$ with $N=f(\gen)^{4/5}$. In order to estimate $\cK^\2$, defined in \eqref{ham.bnf3}, we recall that $\Lambda$ does not have any mode on the $x$-axis and therefore the original quartic Hamiltonian has not been modified by the Birkhoff map \eqref{def:BirkhoffMap} (this is evident from the formula for $\cH^{(2)}$ in \eqref{giraffe2}). Thus, it is enough to analyze how the quartic Hamiltonian has been modified by the linear change $\cL^{(0)}$ analyzed in Theorems \ref{thm:reducibility} and \ref{thm:reducibility4}.
Using the smoothing property of the change of coordinates $\cL^{(0)}$ given in Statement 5 of Theorem \ref{thm:reducibility}, one obtains $$ \Pi_{\fR_4} \cK^{(2)} \big\vert_{\cM_\Lambda}= -\frac12 \sum_{\jj\in\Lambda}|a_\jj|^4+ \frac12 \sum_{{\rm Rectangles}\subset \Lambda} a_{\jj_1}\bar a_{\jj_2}a_{\jj_3}\bar a_{\jj_4} + O\left(\frac{ r^2 }{N}\right).$$ Now we deal with the term $\{\cK^\1,\chi^\1\}$. Since we only need to analyze $\Pi_{\fR_4}\{\cK^\1,\chi^\1\}\big\vert_{\cM_\Lambda}$, it suffices to consider monomials in $\cK^\1$ and in $\chi^\1$ which have at least two indices in $\Lambda$. We represent this by setting \begin{equation}\notag \chi^\1= \chi^\1_{\#\Lambda\le 1}+ \chi^\1_{\#\Lambda \ge 2} \,, \end{equation} where $\#\Lambda \ge 2$ means that we restrict to those monomials which have at least two indices in $\Lambda$. We then have $$ \{\cK^\1,\chi^\1\}\big\vert_{\cM_\Lambda}= \{\cK^\1,\chi^\1_{\#\Lambda \ge 2}\}\big\vert_{\cM_\Lambda}. $$ We estimate the size of $\chi^\1_{\#\Lambda \ge 2}$. As explained in the proof of Theorem \ref{thm:3b}, $\chi^\1_{\#\Lambda \ge 2}$ has coefficients \begin{equation}\label{def:Chi1} \chi^\1_{\ell,\bj,\vec{\s}}=\frac{\im \cK^\1_{\ell,\bj,\vec{\s}}}{\omega \cdot \ell + \s_1\Omega_{\jj_1}(\lambda, \e)+ \s_2 \Omega_{\jj_2}(\lambda, \e)+ \s_3 \Omega_{\jj_3}(\lambda, \e)} \end{equation} with $\jj_2,\jj_3\in \Lambda$. We first estimate the tails (in $\ell$) of $\chi^\1$ and then we analyze the finitely many remaining cases. For the tails, it is enough to use Theorem \ref{thm:3b} to deduce the following estimate for any $\rho\leq \rho_1/2$, where $\rho_1$ is the constant introduced in that theorem, $$ \left| \sum_{|\ell|>\sqrt[4]N} \chi^\1_{\ell,\bj,\vec{\s}} \, e^{\im \theta \cdot \ell} a_{\jj_1}^{\sigma_1}a_{\jj_2}^{\sigma_2}a_{\jj_3}^{\sigma_3} \right|_{\rho,r}^{\cC^\1}\lesssim e^{-(\rho_1-\rho)\sqrt[4]{N}} \left| \chi^\1 \right|_{ \rho_1 , r }^{\cC^\1} \le r e^{-(\rho_1-\rho)\sqrt[4]{N}}.
$$ We restrict our attention to monomials with $|\ell|\le \sqrt[4]{N}$. We take $\jj_2,\jj_3\in \Lambda$ and we consider different cases depending on $\jj_1$ and the properties of the monomial. In each case we show that the denominator of \eqref{def:Chi1} is larger than $N$. \paragraph{Case 1.} Suppose that $\jj_1\notin \sS$. The selection rules are (according to Remark \ref{leggi_sel1}) $$ \eta(\ell)+ \s_1+\s_2+\s_3=0\,,\quad \vec\tm\cdot \ell+ \s_1 m_1+\s_2m_2+\s_3m_3=0\,,\quad \s_1 n_1+\s_2n_2+\s_3n_3=0 $$ and the leading term in the denominator of \eqref{def:Chi1} is \begin{equation}\label{def:SmallDivisorChi} \vec{\tm}^2\cdot \ell+ \s_1 |\jj_1|^2+\s_2|\jj_2|^2+\s_3|\jj_3|^2 \end{equation} where $\vec{\tm}^2=(\tm_1^2,\dots,\tm_\tk^2)$. We consider the following subcases: \begin{itemize} \item[{\bf A1}] $\s_3=\s_1=+1$, $\s_2=-1$. In this case $\jj_1 - \jj_2 + \jj_3 -\tt v = 0$, where $\mathtt v:=( -\vec\tm\cdot \ell,0)$. We rewrite \eqref{def:SmallDivisorChi} as $$ \vec{\tm}^2\cdot \ell + (\vec\tm\cdot \ell)^2 - (\vec\tm\cdot \ell)^2 + |\jj_1|^2-|\jj_2|^2+|\jj_3|^2= \vec{\tm}^2\cdot \ell + (\vec\tm\cdot \ell)^2 - 2\Big( \mathtt v -\jj_3,\jj_3-\jj_2\Big) . $$ Assume first $\jj_2\neq \jj_3$. Since the set $\Lambda$ satisfies statement 1 of Lemma \ref{lem:rangle} and $\abs{\vec \tm \cdot \ell} \lesssim \sqrt[4]{N} \lesssim f(\gen)^{1/5}$, we can ensure that $\jj_2$ and $\jj_3$ do not form a right angle with $\tt v$, thus \[\Big( \mathtt v -\jj_3,\jj_3-\jj_2\Big)\in \Z \setminus \{0\}.\] Actually by the second statement of Lemma \ref{lem:rangle}, $\jj_3-\jj_2\in N\Z^2$ and hence, using also $|\ell|\le \sqrt[4]{N}$, $$ \Big| \vec{\tm}^2\cdot \ell + (\vec\tm\cdot \ell)^2 - 2\Big( \mathtt v -\jj_3,\jj_3-\jj_2\Big)\Big|\ge 2N - N/8 >N. $$ It remains to consider the case $\jj_2=\jj_3$. Such monomials cannot exist in $\cH^{(1)}$ in \eqref{def of H1} since the monomials with two equal modes have been removed in \eqref{Ha0} (that Hamiltonian contains no degenerate rectangles).
Of course, a degenerate rectangle may appear after we apply the change $\cL^{(0)}$ introduced in Theorem \ref{thm:reducibility}. Nevertheless, the map $\cL^{(0)}$ is the identity plus a smoothing operator (see Statement 5 of that theorem), which provides the needed factor $N^{-1}$. \item[{\bf B1}] $\s_3=\s_2=+1$, $\s_1=-1$. Now the selection rule reads $-\jj_1 + \jj_2 + \jj_3 -\tt v = 0$, with again $\mathtt v=( -\vec\tm\cdot \ell,0)$. We rewrite \eqref{def:SmallDivisorChi} as $$ \vec{\tm}^2\cdot \ell + (\vec\tm\cdot \ell)^2 - (\vec\tm\cdot \ell)^2 - |\jj_1|^2+|\jj_2|^2+|\jj_3|^2= \vec{\tm}^2\cdot \ell + (\vec\tm\cdot \ell)^2 - 2\Big( \mathtt v -\jj_3,\mathtt v-\jj_2\Big) . $$ By the first statement of Lemma \ref{lem:rangle}, $\Big( \mathtt v -\jj_2,\mathtt v-\jj_3\Big)\neq 0$. By Property $\mathrm{VIII}_\Lambda$ and the second statement of Lemma \ref{lem:rangle}, one has $|(\jj_2, \jj_3)|\geq N^2$ and estimate \eqref{eq:BoundsS1:0} implies $|\jj_2|, |\jj_3| \leq N^{3/2}$. Then \begin{equation}\notag \abs{\Big( \mathtt v -\jj_2,\mathtt v-\jj_3\Big)} \geq |(\jj_2, \jj_3)| - |(\mathtt v,\jj_2+ \jj_3)| - |\mathtt v|^2 \geq N^2/4 \ \end{equation} and one concludes as in {\bf A1}. \item[{\bf C1}] \; $\s_1=\s_3=\s_2=+1$. The denominator \eqref{def:SmallDivisorChi} satisfies $$ | \vec{\tm}^2\cdot \ell + |\jj_1|^2+|\jj_2|^2+|\jj_3|^2|\ge 2N -|\vec{\tm}^2\cdot \ell| \ge 2N-N/8\ge N. $$ \end{itemize} This completes the proof of Case 1. \paragraph{Case 2.} Suppose that $\jj_1\in \sS$. The selection rules are $$ \eta(\ell)+\s_2+\s_3=0\,,\quad \vec\tm\cdot \ell+\s_2m_2+\s_3m_3=0\,,\quad \s_1 n_1+\s_2n_2+\s_3n_3=0 $$ and the leading term in the denominator is \begin{equation} \label{miao} \vec{\tm}^2\cdot \ell+ \s_1 n_1^2+\s_2|\jj_2|^2+\s_3|\jj_3|^2, \end{equation} where $\vec{\tm}^2=(\tm_1^2,\dots,\tm_\tk^2)$. We can reduce Case 2 to Case 1. \begin{itemize} \item[{\bf B2}] $\s_2=\s_3=+1$, $\s_1=-1$. Assume w.l.o.g. that $\jj_1=(\tm_1,n_1)$.
Define $\tilde{\ell}=\ell+\be_1$. From the selection rules one obtains $$ \vec\tm\cdot \tilde\ell - \tm_1 +m_2+m_3= \vec\tm\cdot \ell+m_2+m_3 =0 \,. $$ Then the leading term \eqref{miao} in the denominator becomes $$ \vec\tm^2\cdot \tilde\ell - (\tm_1^2+n_1^2 )+|\jj_2|^2+|\jj_3|^2 $$ and one proceeds as in case {\bf B1} with $\tilde \ell$ in place of $\ell$. \end{itemize} The cases {\bf A2} and {\bf C2} are completely analogous. \smallskip In conclusion we have proved that \begin{equation}\label{def:ChiEstimate} \left|\chi^\1_{\#\Lambda \ge 2}\big\vert_{\cM_\Lambda}\right|_{\rho,r}^{\cC^\1}\le r N^{-1}. \end{equation} Item $(i)$ of Lemma \ref{lemma:Estimates}, jointly with estimate \eqref{def:ChiEstimate}, implies that, for $\rho'\in (0, \rho/2]$ and $r' \in (0, r/2]$, $$ \left|\left\{\cK^\1,\chi^\1_{\#\Lambda \ge 2}\right\}\big\vert_{\cM_\Lambda}\right|_{\rho',r'}^{\cC^\1}\lesssim r^2 N^{-1}. $$ This completes the proof of Lemma \ref{lemma:ResonantHamRectangles}. \end{proof} The Hamiltonian $\cG_0$ in \eqref{def:HamLambdaIteam} is the Hamiltonian that the I-team derived to construct their toy model. A posteriori we will check that the remainder $\cJ_2$ plays a small role in our analysis. The properties of $\Lambda$ imply that the equation associated to $\cG_0$ reads \begin{equation}\label{eq:InftyODE:FirstFiniteReduction} \im \dot \beta_\jj=-\beta_\jj|\beta_\jj|^2+2 \beta_{\jj_{\mathrm{child}_1}}\beta_{\jj_{\mathrm{child}_2}}\ol{\beta_{ \jj_\mathrm{ spouse}}}+2 \beta_{\jj_{\mathrm{parent}_1}}\beta_{\jj_{\mathrm{parent}_2}}\ol{\beta_{ \jj_\mathrm { sibling}}} \end{equation} for each $\jj\in \Lambda$. In the first generation the parents are set to zero, and in the last generation the children are set to zero. Moreover, the particular form of this equation implies the following corollary.
\begin{corollary}[\cite{CKSTT}]\label{coro:Invariant} Consider the subspace \[ \widetilde U_\Lambda=\left\{\beta\in U_\Lambda: \beta_{\jj_1}=\beta_{\jj_2}\,\,\text{for all }\jj_1,\jj_2\in\Lambda_k\,\,\text{and all }k\right\}, \] where all the members of a generation take the same value. Then, $\wt U_\Lambda$ is invariant under the flow associated to the Hamiltonian $\cG_0$. Therefore, equation \eqref{eq:InftyODE:FirstFiniteReduction} restricted to $ \widetilde U_\Lambda$ becomes \begin{equation}\label{def:model} \im \dot b_k=-b_k^2\overline b_k+2 \ol b_k\left(b_{k-1}^2+b_{k+1}^2\right),\,\,k=1,\ldots, \gen , \end{equation} where \begin{equation}\label{def:ChangeToToyModel} b_k=\beta_\jj\,\,\,\text{ for any }\jj\in\Lambda_k. \end{equation} \end{corollary} The dimension of $ \widetilde U_\Lambda$ is $2\gen$, where $\gen$ is the number of generations. In the papers \cite{CKSTT} and \cite{GuardiaK12}, the authors construct certain orbits of the \emph{toy model} \eqref{def:model} which shift their mass from being localized at $b_3$ to being localized at $b_{\gen-1}$. These orbits will lead to orbits of the original equation \eqref{NLS} undergoing growth of Sobolev norms. \begin{theorem}[\cite{GuardiaK12}]\label{thm:ToyModelOrbit} Fix $\gamma\gg 1$. Then for any large enough $\gen$ and $\mu=e^{-\gamma \gen}$, there exist an orbit of system \eqref{def:model}, a constant $\kk>0$ (independent of $\gamma$ and $\gen$) and a time $T_0>0$ such that \[ \begin{split} |b_3(0)|&>1-\mu^\kk\\ |b_i(0)|&< \mu^\kk\qquad\text{ for }i\neq 3 \end{split} \qquad \text{ and }\qquad \begin{split} |b_{\gen-1}(T_0)|&>1-\mu^\kk\\ |b_i(T_0)|&<\mu^\kk \qquad\text{ for }i\neq \gen-1. \end{split} \] Moreover, there exists a constant $C>0$ independent of $\gen$ such that $T_0$ satisfies \begin{equation}\notag 0<T_0< C \gen \ln \left(\frac 1 \mu \right)=C\,\gamma\,\gen^2. \end{equation} \end{theorem} This theorem is proven in \cite{CKSTT} without time estimates.
The time estimates were obtained in \cite{GuardiaK12}. \section{The approximation argument}\label{sec:Approximation} In Sections \ref{sec:reducibility}, \ref{sec:CubicBirkhoff} and \ref{sec:QuarticBirkhoff} we have applied several transformations and in Sections \ref{sec:QuarticBirkhoff} and \ref{sec:ToyModel} we have removed certain small remainders. This has allowed us to derive a simple equation, called the toy model in \cite{CKSTT}; then, in Section \ref{sec:ToyModel}, we have analyzed some special orbits of this system. The last step of the proof of Theorem \ref{thm:main} is to show that when incorporating back the removed remainders ($\cJ_1$ and $\cR$ in \eqref{def:HamTruncRotatingSimpl} and $\cJ_2$ in \eqref{def:HamToyModelLattice}) and undoing the changes of coordinates performed in Theorems \ref{thm:reducibility}, \ref{thm:3b} and \ref{prop:Birkhoff4} and in \eqref{def:rotating}, the toy model orbit obtained in Theorem \ref{thm:ToyModelOrbit} leads to a solution of the original equation \eqref{NLS} undergoing growth of Sobolev norms. Now we analyze each remainder and each change of coordinates. From the orbit obtained in Theorem \ref{thm:ToyModelOrbit} and using \eqref{def:ChangeToToyModel} one can obtain an orbit of Hamiltonian \eqref{def:HamLambdaIteam}. Moreover, both the equation associated to Hamiltonian \eqref{def:HamLambdaIteam} and equation \eqref{def:model} are invariant under the scaling \begin{equation}\label{def:Rescaling} b^\nu(t)=\nu^{-1}b\left(\nu^{-2}t\right). \end{equation} By Theorem \ref{thm:ToyModelOrbit}, the rescaled solution $b^\nu(t)$ performs the mass transfer in time \begin{equation}\label{def:Time:Rescaled} T=\nu^2 T_0\le \nu^2C\gamma \gen^2, \end{equation} where $T_0$ is the time obtained in Theorem \ref{thm:ToyModelOrbit}.
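The invariance under the scaling \eqref{def:Rescaling} follows directly from the cubic homogeneity of the vector field; let us record the one-line verification for \eqref{def:model} (the computation for the equation associated to \eqref{def:HamLambdaIteam} is identical). If $b(t)$ solves \eqref{def:model}, then, writing $\tau=\nu^{-2}t$, \begin{equation}\notag \im \dot b_k^\nu(t)=\nu^{-3}\,\im \dot b_k(\tau)=\nu^{-3}\left(-b_k^2(\tau)\ol b_k(\tau)+2 \ol b_k(\tau)\left(b_{k-1}^2(\tau)+b_{k+1}^2(\tau)\right)\right) =-\big(b_k^\nu(t)\big)^2\ol b_k^\nu(t)+2\, \ol b_k^\nu(t)\left(\big(b_{k-1}^\nu(t)\big)^2+\big(b_{k+1}^\nu(t)\big)^2\right), \end{equation} since every term in the right-hand side is cubic in $b$.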
Now we prove that one can construct a solution of Hamiltonian \eqref{def:HamRotAfterNF} ``close'' to the orbit $\beta^\nu$ of Hamiltonian \eqref{def:HamLambdaIteam} defined as \begin{equation}\label{def:RescaledApproxOrbit} \begin{split} \beta^\nu_\jj(t)&=\nu^{-1}b_k\left(\nu^{-2}t\right) \,\,\,\text{ for each }\ \jj\in\Lambda_k\\ \beta^\nu_\jj(t)&=0\,\,\,\text{ for each }\ \jj\not\in\Lambda, \end{split} \end{equation} where $b(t)$ is the orbit given by Theorem \ref{thm:ToyModelOrbit}. Note that this amounts to incorporating the remainders in \eqref{def:HamTruncRotatingSimpl} and \eqref{def:HamToyModelLattice}. We take a large $\nu$ so that \eqref{def:RescaledApproxOrbit} is small. In the original coordinates this will correspond to solutions close to the finite gap solution. Taking $\cJ=\cJ_1+\cJ_2$ (see \eqref{def:HamTruncRotatingSimpl} and \eqref{def:HamToyModelLattice}), the equations for $\beta$ and $\yy$ associated to Hamiltonian \eqref{def:HamRotAfterNF} can be written as \begin{equation}\label{eq:betay} \begin{split} \im\dot\beta&=\partial_{\ol \beta} \cG_0(\beta)+\partial_{\ol \beta} \cJ(\yy,\theta, \beta)+\partial_{\ol \beta} \cR(\yy,\theta, \beta)\\ \dot \yy&=-\partial_\theta \cJ(\yy,\theta, \beta)-\partial_{\theta} \cR(\yy,\theta, \beta). \end{split} \end{equation} Now we estimate the distance between the orbit of the toy model obtained in Theorem \ref{thm:ToyModelOrbit} and orbits of Hamiltonian \eqref{def:HamRotAfterNF}. \begin{theorem}\label{thm:Approximation} Consider a solution $(\yy, \theta, \beta)=(0,\theta_0, \beta^\nu(t))$ of Hamiltonian \eqref{def:HamLambdaIteam} for any $\theta_0\in \T^\tk$, where $\beta^\nu(t)=\{\beta^\nu_\jj(t)\}_{\jj\in \Z_N^2\setminus \cS_0}$ is the solution given by \eqref{def:RescaledApproxOrbit}. Fix $\sigma>0$ small, independent of $\gen$ and $\gamma$. Assume \begin{equation}\label{def:LambdaOfN} \frac12 \left(f(\gen)\right)^{1-\sigma}\leq \nu\leq \left(f(\gen)\right)^{1-\sigma}.
\end{equation} Then any solution $(\yy(t), \theta(t),\widetilde \beta(t))$ of \eqref{def:HamRotAfterNF} with initial condition $\widetilde\beta(0)=\widetilde\beta^0\in\ell^1$, $\yy(0)=\yy^0\in\R^\tk$ with $\|\widetilde\beta^0-\beta^\nu(0)\|_{\ell^1}\leq \nu^{-1-4\sigma}$ and $|\yy^0|\leq \nu^{-2-4\sigma}$ and any $\theta(0)=\theta_1\in \T^\tk$, satisfies \begin{equation}\notag \left\|\widetilde \beta(t)-\beta^\nu(t)\right\|_{\ell^1} \leq \nu^{-1-\sigma}, \qquad \left|\yy(t)\right|\leq \nu^{-2-\sigma}, \end{equation} for $0<t<T$, where $T$ is the time defined in \eqref{def:Time:Rescaled}. \end{theorem} The proof of this theorem is deferred to Section \ref{sec:Approx}. Note that the change to rotating coordinates in \eqref{def:rotating} does not alter the $\ell^1$ norm and therefore an analogous result can be stated for orbits of Hamiltonian \eqref{def:HamAfterBirkhoff4} (modulo adding the rotating phase). \begin{proof}[Proof of Theorem \ref{thm:main}] We use Theorem \ref{thm:Approximation} to obtain a solution of Hamiltonian \eqref{H.2} undergoing growth of Sobolev norms. We consider the solution $(\yy^*(t), \theta^*(t),\ba^*(t))$ of this Hamiltonian with initial condition \begin{equation}\label{def:IniCond} \begin{split} \yy^*&=0\\ \theta^*&=\theta_0\\ a^*_{\jj}&=\nu^{-1}b_k(0)\qquad\text{for each }\jj\in\Lambda_k\\ a^*_{\jj}&=0\qquad\qquad\,\quad\text{for each }\jj\not\in\Lambda \end{split} \end{equation} for an arbitrary choice of $\theta_0\in \mathbb T^\tk$. We need to prove that Theorem \ref{thm:Approximation} applies to this solution. To this end, we perform the changes of coordinates given in Theorems \ref{thm:reducibility}, \ref{thm:3b} and \ref{prop:Birkhoff4}, keeping track of the $\ell^1$ norm. For $\cL^{(j)}$, $j=1,2$, Theorems \ref{thm:3b} and \ref{prop:Birkhoff4} imply the following. Consider $(\yy,\theta,\ba)\in D(\rho,r)$ and define $\pi_\ba(\yy,\theta,\ba):=\ba$.
Then, we have \begin{equation}\label{def:EstimatesChange12} \left\|\pi_\ba \cL^{(j)}(\yy,\theta,\ba)-\ba\right\|_{\ell^1}\lesssim \|\ba\|_{\ell^1}^2. \end{equation} This estimate is not true for the change of coordinates $\cL^{(0)}$ given in Theorem \ref{thm:reducibility}. Nevertheless, this change is smoothing (see Statement 5 of Theorem \ref{thm:reducibility}). This implies that if all $\jj\in \mathrm{supp}\{\ba\}$ satisfy $|\jj|\geq J$ then \begin{equation}\label{def:EstimatesChange0} \left\|\pi_{\bf a} \cL^{(0)}\left(\yy, \theta,\ba\right)-\ba\right\|_{\ell^1}\lesssim J^{-1} \|\ba\|_{\ell^1}. \end{equation} Thanks to Theorem \ref{thm:SetLambda} (more precisely \eqref{eq:BoundsS1:0}), we can apply this estimate to \eqref{def:IniCond} with $J=C f(\gen)$. Using the fact that $ \left\|\ba^*\right\|_{\ell^1}\lesssim \nu^{-1}\gen 2^\gen $ and the condition on $\nu$ in \eqref{def:LambdaOfN}, one can check \[ \left\|\pi_{\bf a} \cL^{(0)}\left(0, \theta^*,\ba^*\right)-\ba^*\right\|_{\ell^1}\lesssim \nu^{-1}\gen 2^\gen f(\gen)^{-1}\leq \nu^{-3/2}. \] Therefore, we can conclude \begin{equation}\notag \left\|\pi_\ba \left( \cL^{(2)}\circ\cL^{(1)}\circ \cL^{(0)}\left(0, \theta^*,\ba^*\right)\right)-\ba^*\right\|_{\ell^1}\lesssim \nu^{-3/2}. \end{equation} We define $( \widetilde \yy^*,\widetilde \theta^*, \widetilde \ba^*)$ as the image of the point \eqref{def:IniCond} under the composition of these three changes. We apply Theorem \ref{thm:Approximation} to the solution of \eqref{def:HamRotAfterNF} with this initial condition. Note that Theorem \ref{thm:Approximation} is stated in rotating coordinates (see \eqref{def:rotating}). Nevertheless, since this change is the identity at time $t=0$, no further modification is needed. Moreover, the change \eqref{def:rotating} leaves invariant both the $\ell^1$ and Sobolev norms.
We show that this solution $( \widetilde \yy^*(t),\widetilde \theta^*(t), \widetilde \ba^*(t))$ expressed in the original coordinates satisfies the desired growth of Sobolev norms. Define \begin{equation}\notag S_i=\sum_{\jj\in\Lambda_i}|\jj|^{2s} \text{ for }i=1,\dots, \gen. \end{equation} To estimate the initial Sobolev norm of the solution $( \yy^*(t), \theta^*(t), \ba^*(t))$, we first prove that \begin{equation}\notag \left \|\ba^*(0)\right\|^2_{h^s}\lesssim \nu^{-2}S_3. \end{equation} The initial condition of the considered orbit given in \eqref{def:IniCond} has support $\Lambda$ (recall that $\yy=0$). Therefore, \[ \left\|\ba^*(0)\right\|^2_{h^s}=\sum_{i=1}^{\gen}\sum_{\jj\in\Lambda_i}|\jj|^{2s} \nu^{-2}\left|b_i(0)\right|^2. \] Then, taking into account Theorem \ref{thm:ToyModelOrbit}, \[ \begin{split} \sum_{i=1}^{\gen}\sum_{\jj\in\Lambda_i}|\jj|^{2s} \nu^{-2}\left|b_i(0)\right|^2&\leq \nu^{-2} S_3+\nu^{-2}\mu^{2\kk}\sum_{i\neq3}S_i\\ &\leq \nu^{-2}S_3\left(1+\mu^{2\kk}\sum_{i\neq3}\frac{S_i}{S_3}\right). \end{split} \] From Theorem \ref{thm:SetLambda} we know that for $i\neq 3$, \[ \frac{S_i}{S_3}\lesssim e^{s\gen}. \] To bound these terms we use the definition $\mu=e^{-\gamma\gen}$ from Theorem \ref{thm:ToyModelOrbit}. Taking $\gamma>\frac{1}{2\kk}$ and $\gen$ large enough, we have \[ \|\ba^*(0)\|_{h^s}^2\leq 2\nu^{-2}S_3. \] To control the initial Sobolev norm, we need that $2\nu^{-2}S_3\leq\de^2$. To this end, we use the estimates for $\nu$ given in Theorem \ref{thm:Approximation}, and the estimates for $|\jj|$ with $\jj \in \Lambda$ and for $f(\gen)$ given in Theorem \ref{thm:SetLambda}. Then, if we choose $\nu=(f(\gen))^{1-\sigma}$, we have \[ \|\ba^*(0)\|_{h^s}^2\lesssim (f(\gen))^{-2(1-\sigma-s)} 3^{2s\gen}2^\gen\leq e^{-2(1-\sigma-s)A^{\gen}} 3^{2s\gen}2^\gen. \] Note that Theorem \ref{thm:Approximation} is valid for any fixed small $\sigma>0$.
Thus, {\bf provided $s <1$}, we can choose $0<\sigma<1-s$ and take $\gen$ large enough, so that we obtain an arbitrarily small initial Sobolev norm. \begin{remark} \label{rem:l2} If we only require the $\ell^2$ norm of $\ba^*(0)$ to be small, we can drop the condition $s <1$. Indeed, $ \|\ba^*(0)\|_{\ell^2}\lesssim \nu^{-1} 2^{\gen }\gen$, which can be made arbitrarily small by simply taking $\gen$ large enough (and $\nu$ as in \eqref{def:LambdaOfN}). \end{remark} Now we estimate the final Sobolev norm. First we bound $\|\ba^*(T)\|_{h^s}$ in terms of $S_{\gen-1}$. Indeed, \begin{equation}\label{what the hell} \left \|\ba^*(T)\right\|^2_{h^s}\geq \sum_{\jj\in\Lambda_{\gen-1}}|\jj|^{2s} \left|a_\jj^*(T)\right|^2\geq S_{\gen-1}\inf_{\jj\in\Lambda_{\gen-1}}\left|a_\jj^*(T)\right|^2. \end{equation} Thus, it is enough to obtain a lower bound for $\left|a^*_\jj(T)\right|$ for $\jj\in\Lambda_{\gen-1}$. To obtain this estimate we need to express $\ba^*$ in normal form coordinates and use Theorem \ref{thm:Approximation}. We split $|a^*_\jj(T)|$ as follows. Define $( \widetilde \yy^*(t),\widetilde \theta^*(t), \widetilde \ba^*(t))$ as the image of the orbit with initial condition \eqref{def:IniCond} under the changes of variables in Theorems \ref{thm:reducibility}, \ref{thm:3b} and \ref{prop:Birkhoff4} and in \eqref{def:rotating}. Then, \[ \left|a^*_\jj(T)\right|\geq \left|\beta_\jj^\nu(T)\right|- \left|\widetilde a^*_\jj(T)-\beta_\jj^\nu(T)e^{\im \Omega_\jj(\lambda,\eps)T}\right|- \left|\widetilde a^*_\jj(T)-a^*_\jj(T)\right|. \] The first term, by Theorem \ref{thm:ToyModelOrbit}, satisfies $|\beta_\jj^\nu(T)|\geq \nu^{-1}/2$. For the second one, using Theorem \ref{thm:Approximation}, we have \[ \left|\widetilde a^*_\jj(T)-\beta_\jj^\nu(T)e^{\im \Omega_\jj(\lambda,\eps)T}\right| \leq \nu^{-1-\sigma}.
\] Finally, taking into account the estimates \eqref{def:EstimatesChange12} and \eqref{def:EstimatesChange0}, the third one can be bounded as \[ \left|\widetilde a^*_\jj(T)-a^*_\jj(T)\right|\leq \left\|\widetilde \ba^*(T)-\ba^*(T)\right\|_{\ell^1}\lesssim \|\ba^*(T)\|_{\ell^1}^2+\frac{\|\ba^*(T)\|_{\ell^1}}{|\jj|}. \] Now, by Theorem \ref{thm:Approximation} and Theorem \ref{thm:SetLambda} (more precisely, the fact that $|\jj|\gtrsim f(\gen)$ for $\jj\in\Lambda$), \[ \left|\widetilde a^*_\jj(T)-a^*_\jj(T)\right|\leq \nu^{-1-\sigma}. \] Thus, by \eqref{what the hell}, we can conclude that \[ \left \|\ba^*(T)\right\|^2_{h^s}\geq \frac{\nu^{-2}}{2}S_{\gen-1}, \] which, by Theorem \ref{thm:SetLambda}, implies \[ \frac{\left \|\ba^*(T)\right\|^2_{h^s}}{\left \|\ba^*(0)\right\|^2_{h^s}}\geq \frac{S_{\gen-1}}{4S_3} \geq \frac{1}{8}2^{(1-s)(\gen-4)} . \] Thus, taking $\gen$ large enough, we obtain growth by a factor $K/\de$. The time estimates can be easily deduced from \eqref{def:Time:Rescaled}, \eqref{def:LambdaOfN}, \eqref{estimate on fg} and Theorem \ref{thm:ToyModelOrbit}, which concludes the proof of the first statement of Theorem \ref{thm:main}. For the proof of the second statement of Theorem \ref{thm:main}, it is enough to point out that the condition $s<1$ has only been used to ensure that the initial Sobolev norm is small. The estimate for the $\ell^2$ norm can be obtained as explained in Remark \ref{rem:l2}. \end{proof} \subsection{Proof of Theorem \ref{thm:Approximation}}\label{sec:Approx} To prove Theorem \ref{thm:Approximation}, we define \[ \xi=\beta-\beta^\nu(t). \] We use the equations in \eqref{eq:betay} to deduce an equation for $\xi$.
It can be written as \begin{equation}\label{eq:ForXi} \im\dot \xi=\cZ_0(t)+\cZ_1(t)\xi+\cZ_1'(t)\ol \xi+\cZ_1''(t)\yy+\cZ_2(\xi,\yy,t), \end{equation} where \begin{equation}\label{def:Zs} \begin{split} \cZ_0(t)=&\partial_{\ol \beta} \cJ(0,\theta,\beta^\nu)+\partial_{\ol \beta} \cR(0,\theta,\beta^\nu)\\ \cZ_1(t)=&\partial_{\beta\ol \beta} \cG_0(\beta^\nu)+\partial_{\beta\ol \beta} \cJ(0,\theta,\beta^\nu)\\ \cZ_1'(t)=&\partial_{\ol\beta\ol \beta} \cG_0(\beta^\nu)+\partial_{\ol\beta\ol \beta} \cJ(0,\theta,\beta^\nu)\\ \cZ_1''(t)=&\partial_{\yy\ol \beta} \cG_0(\beta^\nu)+\partial_{\yy\ol \beta} \cJ(0,\theta,\beta^\nu)\\ \cZ_2(t)=&\partial_{\ol \beta} \cG_0(\beta^\nu+\xi)-\partial_{\ol \beta}\cG_0(\beta^\nu) -\partial_{\beta\ol \beta} \cG_0(\beta^\nu)\xi-\partial_{\ol\beta\ol \beta} \cG_0(\beta^\nu)\ol\xi\\ &+\partial_{\ol \beta} \cJ(\yy,\theta,\beta^\nu+\xi)-\partial_{\ol \beta}\cJ(0,\theta,\beta^\nu) -\partial_{\beta\ol \beta} \cJ(0,\theta,\beta^\nu)\xi-\partial_{\ol\beta\ol \beta} \cJ(0,\theta,\beta^\nu)\ol\xi\\ &-\partial_{\yy\ol \beta} \cJ(0,\theta,\beta^\nu)\yy +\partial_{\ol \beta} \cR(\yy,\theta,\beta^\nu+\xi)-\partial_{\ol \beta} \cR(0,\theta,\beta^\nu). \end{split} \end{equation} We now analyze the equations for $\xi$ in \eqref{eq:ForXi} and for $\yy$ in \eqref{eq:betay}. \begin{lemma}\label{lemma:EqForxi} Assume that $(\beta^\nu,\yy)$, $(\beta^\nu+\xi,\yy)\in D(r_2)$ (see \eqref{def:domain}), where $r_2$ has been given by Proposition \ref{prop:Birkhoff4}. Then, the function $\|\xi\|_{\ell^1}$ satisfies \[ \begin{split} \frac{d}{dt}\|\xi\|_{\ell^1}\leq & C \nu^{-4}\gen^42^{4\gen}+ C \nu^{-3}\gen^32^{3\gen} \left(f(\gen)^{-\frac{4}{5}} + t f(\gen)^{-2}\right)\\ &+ C\nu^{-2}\gen^22^{2\gen}\|\xi\|_{\ell^1}+C\nu^{-1}\gen 2^\gen |\yy|+C\nu^{-1}\gen 2^\gen \|\xi\|_{\ell^1} ^2+C\|\xi\|_{\ell^1}|\yy|+C|\yy|^2 \end{split} \] for some constant $C>0$ independent of $\nu$. \end{lemma} \begin{proof} We compute estimates for each term in \eqref{def:Zs}.
For $\cZ_0$, we use the fact that the definition of $\cR$ in \eqref{def:HamTruncRotatingSimpl} and Proposition \ref{prop:Birkhoff4} imply $\|\partial_{\ol \beta} \cR(0,\theta,\beta^\nu)\|_{\ell^1}\lesssim \|\beta^\nu\|_{\ell^1}^4$. Thus, it only remains to use the results of Theorem \ref{thm:ToyModelOrbit} (using \eqref{def:Rescaling}) and Theorem \ref{thm:SetLambda} to obtain \[ \|\partial_{\ol \beta} \cR(0,\theta,\beta^\nu)\|_{\ell^1}\leq C\nu^{-4}\gen^4 2^{4\gen}. \] To bound $\partial_{\ol \beta} \cJ(0,\theta,\beta^\nu)$, the other term in $\cZ_0$, recall that $\cJ=\cJ_1+\cJ_2$ (see \eqref{def:HamTruncRotatingSimpl} and \eqref{def:HamToyModelLattice}). Then, we split it into two terms $\partial_{\ol \beta} \cJ(0,\theta,\beta^\nu)=\partial_{\ol \beta} \cJ_1(0,\theta,\beta^\nu)+\partial_{\ol \beta} \cJ_2(\theta,\beta^\nu)$ as \begin{align} \partial_{\ol \beta} \cJ_1(0,\theta,\beta^\nu)&=\partial_{\ol \beta} \left\{ \cG\left(0,\theta, (\beta^\nu_\jj e^{\im\Omega_\jj(\lambda, \e)t})_{\jj\in\Z^2_N\setminus\cS_0}\right) - \cG\left(0,\theta,\beta^\nu\right) \right\}\notag\\ &= \partial_{\ol \beta}\left\{ \cQ^\2_{\rm Res} \left(0,\theta, (\beta^\nu_\jj e^{\im\Omega_\jj(\lambda, \e)t} )_{\jj\in\Z^2_N\setminus\cS_0}\right) - \cQ^\2_{\rm Res}\left(0,\theta,\beta^\nu\right) \right\}\label{j1}\\ \partial_{\ol \beta} \cJ_2(\theta,\beta^\nu)&= \partial_{\ol \beta} \left\{ \cG\left( 0,\theta, (\beta^\nu_\jj e^{\im\Omega_\jj(\lambda, \e)t})_{\jj\in\Z^2_N\setminus\cS_0}\right) - \cG_0\left((\beta^\nu_\jj e^{\im\Omega_\jj(\lambda, \e)t})_{\jj\in\Z^2_N\setminus\cS_0}\right) \right\}\label{j2} \end{align} To bound \eqref{j1}, recall that $\cQ^\2_{\rm Res}$ defined in \eqref{q2res} is the sum of two terms. Since $\Pi_{\fR_2}\cQ^\2$ is action preserving, the only terms contributing to \eqref{j1} are the ones coming from $\Pi_{\fR_4}\cQ^\2$.
Since $\beta^\nu$ is supported on $\Lambda$, it follows from \eqref{def:R4} that \begin{align} \partial_{\ol \beta} \cJ_1(0,\theta,\beta^\nu) = \left(\sum_{\jj_1, \jj_2, \jj_3 \in \Lambda \atop |\jj_1|^2 - |\jj_2|^2 + |\jj_3|^2 - |\jj|^2 = 0}\left( e^{\im t (\Omega_{\jj_1} - \Omega_{\jj_2} + \Omega_{\jj_3} - \Omega_{\jj})} - 1 \right)\, \cJ_{\jj_1 \jj_2 \jj_3 \jj } \, \beta^\nu_{\jj_1} \, \overline{\beta^\nu_{\jj_2}} \, \beta^\nu_{\jj_3} \right)_{\jj \in \Lambda} . \end{align} In order to bound the oscillating factor, we use the formula for the eigenvalues given in Theorem \ref{thm:reducibility4} to obtain that, for $\jj_1, \jj_2, \jj_3, \jj \in \Lambda$, one has $$ \abs{ e^{\im t (\Omega_{\jj_1} - \Omega_{\jj_2} + \Omega_{\jj_3} - \Omega_{\jj})} - 1 } \lesssim |t| \abs{\Omega_{\jj_1} - \Omega_{\jj_2} + \Omega_{\jj_3} - \Omega_{\jj}} \lesssim \frac{|t|}{f(\gen)^2} . $$ Hence, for $t\in [0,T]$, using the estimate for $\mathcal{Q}_\mathrm{Res}^{(2)}$ given by Proposition \ref{prop:Birkhoff4}, \[ \|\partial_{\ol \beta} \cJ_1(0,\theta,\beta^\nu)\|_{\ell^1}\leq C t f(\gen)^{-2} \|\beta^\nu\|_{\ell^1}^3\leq Ct\nu^{-3}\gen^32^{3\gen}f(\gen)^{-2}. \] To bound \eqref{j2}, it is enough to use \eqref{def:BoundJ2} and \eqref{estimate on fg} to obtain \[ \|\partial_{\ol \beta} \cJ_2(\theta,\beta^\nu)\|_{\ell^1}\leq Cf(\gen)^{-\frac{4}{5}}\|\beta^\nu\|_{\ell^1}^3\leq C\nu^{-3}\gen^32^{3\gen}f(\gen)^{-\frac{4}{5}}. \] For the linear terms, one can easily see that \[ \left\|\cZ_1(t)\xi\right\|_{\ell^1}\leq C\|\beta^\nu\|^2_{\ell^1}\|\xi\|_{\ell^1}\leq C\nu^{-2}\gen^22^{2\gen}\|\xi\|_{\ell^1} \] and the same for $ \left\|\cZ_1'(t)\ol\xi\right\|_{\ell^1}$. Analogously, \[ \left\|\cZ_1''(t)\yy\right\|_{\ell^1}\leq C\|\beta^\nu\|_{\ell^1}|\yy|\leq C\nu^{-1}\gen 2^\gen|\yy|.
\] Finally, it is enough to use the definition of $\cZ_2$, the definition of $\cR$ in \eqref{def:HamTruncRotatingSimpl} and Proposition \ref{prop:Birkhoff4} to show \[ \begin{split} \|\cZ_2\|_{\ell^1}\lesssim& \, \|\beta^\nu\|_{\ell^1} \|\xi\|^2_{\ell^1}+ \|\beta^\nu\|_{\ell^1}^2 |\yy| + \|\xi\|_{\ell^1}|\yy|+\|\beta^\nu\|_{\ell^1} ^3\|\xi\|_{\ell^1}+|\yy|^2\\ \leq &\, C\nu^{-1}\gen 2^\gen \|\xi\|^2_{\ell^1}+ C\nu^{-2}\gen^2 2^{2\gen} |\yy|+C\nu^{-3}\gen^3 2^{3\gen} \|\xi\|_{\ell^1}+C\|\xi\|_{\ell^1}|\yy|+C|\yy|^2. \end{split} \] \end{proof} \begin{lemma}\label{lemma:EqFory} Assume that $(\beta^\nu,\yy)$, $(\beta^\nu+\xi,\yy)\in D(r_2)$ (see \eqref{def:domain}), where $r_2$ has been given by Proposition \ref{prop:Birkhoff4}. Then, the function $|\yy|$ satisfies \[ \begin{split} \frac{d}{dt}|\yy|\leq&\, C\nu^{-5}\gen^52^{5\gen} +C\nu^{-3}\gen^32^{3\gen}\|\xi\|_{\ell^1} ^2\\&+C\nu^{-1}\gen2^{\gen}\|\xi\|_{\ell^1} ^3+C\|\xi\|_{\ell^1}|\yy|+C\nu^{-3}\gen^32^{3\gen}|\yy|^2 \end{split} \] for some constant $C>0$ independent of $\nu$. \end{lemma} \begin{proof} Proceeding as for $\dot \xi$, we write the equation for $\dot\yy$ as \begin{equation}\label{eq:ForY} \dot \yy= \cX_0(t)+\cX_1(t)\xi+\cX_1'(t)\ol\xi+\cX_1''(t)\yy+\cX_2(\xi,\yy,t), \end{equation} with \[ \begin{split} \cX_0(t)=& -\partial_\theta \cJ(0,\theta,\beta^\nu)-\partial_{\theta} \cR(0,\theta,\beta^\nu) \\ \cX_1(t)=& \partial_{\beta\theta} \cJ(0,\theta,\beta^\nu) \\ \cX_1'(t)=&\partial_{\ol\beta\theta} \cJ(0,\theta,\beta^\nu)\\ \cX_1''(t)=&\partial_{\yy\theta} \cJ(0,\theta,\beta^\nu)\\ \cX_2(t)=& -\partial_\theta \cJ(\yy,\theta,\beta^\nu+\xi)+\partial_\theta \cJ(0,\theta,\beta^\nu)\\ & -\partial_{\theta} \cR(\yy,\theta,\beta^\nu+\xi)+\partial_{\theta} \cR(0,\theta,\beta^\nu). \end{split} \] We claim that $\cX_1(t)$, $\cX_1'(t)$ and $\cX_1''(t)$ are identically zero. Then, proceeding as in the proof of Lemma \ref{lemma:EqForxi}, one can bound each term and complete the proof of Lemma \ref{lemma:EqFory}.
To explain the absence of linear terms, consider first $\partial_{\beta \theta}\cJ(0, \theta, \beta^\nu)$. It contains two types of monomials: those coming from $\fR_2$ (see \eqref{res2}), which however do not depend on $\theta$, and those coming from $\fR_4$ (see \eqref{def:R4}). But these last monomials also do not depend on $\theta$ once they are restricted to the set $\Lambda$ (indeed the only monomials of $\fR_4$ which are $\theta$ dependent are those of the third line of \eqref{def:R4}, which are supported outside $\Lambda$). Therefore $\partial_{\beta \theta}\cJ(0, \theta, \beta^\nu) \equiv 0$ (and similarly $\partial_{\ol\beta \theta}\cJ(0, \theta, \beta^\nu)\equiv 0$ and $\partial_{\yy \theta}\cJ(0, \theta, \beta^\nu)\equiv 0$). \end{proof} We define \begin{equation}\notag M=\|\xi\|_{\ell^1}+\nu |\yy|. \end{equation} Combining these two lemmas, we deduce that \[ \dot M\leq C\left(\nu^{-4}\gen^42^{4\gen}+ \nu^{-3}\gen^32^{3\gen}\left(f(\gen)^{-\frac{4}{5}} + t f(\gen)^{-2}\right)\right)+C\nu^{-2}\gen^2 2^{2\gen}M+\nu^{-1} \gen 2^\gen M^2 . \] Now we apply a bootstrap argument. Assume that for some $T^*>0$ and $0<t<T^*$ we have \begin{equation}\notag M(t)\leq C\nu^{-1-\sigma/2}. \end{equation} Recall that at $t=0$ this bound is already satisfied, since $M(0)\leq\nu^{-1-4\sigma}$. \emph{A posteriori} we will show that the time $T$ in \eqref{def:Time:Rescaled} satisfies $0<T<T^*$ and therefore the bootstrap assumption holds. Note that, taking $\gen$ large enough (and recalling \eqref{estimate on fg} and \eqref{def:LambdaOfN}), the bootstrap estimate implies that $(\beta^\nu,\yy)$ and $(\beta^\nu+\xi,\yy)$ satisfy the assumption of Lemmas \ref{lemma:EqForxi} and \ref{lemma:EqFory}. With the bootstrap assumption, we then have \[ \dot M\leq C\left(\nu^{-4}\gen^42^{4\gen}+ \nu^{-3}\gen^32^{3\gen}\left(f(\gen)^{-\frac{4}{5}}+tf(\gen)^{-2} \right)\right)+C\nu^{-2}\gen^2 2^{2\gen}M.
\] Applying Gronwall's inequality, \[ M\leq C\left(M(0)+\nu^{-4}\gen^42^{4\gen}t+ \nu^{-3}\gen^32^{3\gen}\left(tf(\gen)^{-\frac{4}{5}}+t^2f(\gen)^{-2} \right)\right) e^{\nu^{-2}\gen^22^{2\gen} t } \] and thus, using \eqref{def:Time:Rescaled} and the estimates for $T_0$ in Theorem \ref{thm:ToyModelOrbit}, \[ M\leq C\left(M(0)+\nu^{-2}\gen^62^{4\gen}+ \nu^{-1}\gen^52^{3\gen}f(\gen)^{-\frac{4}{5}}+ \nu\gen^72^{3\gen}f(\gen)^{-2}\right)e^{C\gen^4 2^{2\gen}}. \] Since we are assuming \eqref{def:LambdaOfN} and we can take $A$ large enough (see Theorem \ref{thm:SetLambda}), we obtain that for $t\in [0,T]$, provided $\gen$ is sufficiently large, \[ M(t)\leq \nu^{-1-\sigma}, \] which implies that $T\leq T^*$. That is, the bootstrap assumption was valid. This completes the proof. \appendix \section{Proof of Proposition \ref{hopeful thinking}} \label{app:mes.m} We split the proof into several steps. We first perform an algebraic analysis of the nonresonant monomials. \subsection{Analysis of monomials of the form $e^{\im \theta \cdot \ell} \, a_{\jj_1}^{\sigma_1} \, a_{\jj_2}^{\sigma_2} \, a_{\jj_3}^{\sigma_3}\, a_{\jj_4}^{\sigma_4}$} We analyze the small divisors \eqref{4m} related to these monomials. Taking advantage of the asymptotics of the eigenvalues given in Theorem \ref{thm:reducibility4}, we consider a ``good'' first order approximation of the small divisor given by \begin{equation} \label{res.4} \omega(\lambda) \cdot \ell + \sigma_1 \wt\Omega_{\jj_1}(\lambda, \e) + \sigma_2 \wt\Omega_{\jj_2}(\lambda, \e) + \sigma_3 \wt\Omega_{\jj_3}(\lambda, \e) + \sigma_4 \wt\Omega_{\jj_4}(\lambda, \e). \end{equation} Note that this is an affine function of $\eps$ and therefore it can be written as \[ \eqref{res.4} \equiv \tK_{\bj, \ell}^\sigma + \e \tF_{\bj, \ell}^{\sigma}(\lambda). \] We say that a monomial is Birkhoff non-resonant if, for any $\eps>0$, this expression does not vanish identically as a function of $\lambda$.
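Equivalently, the two coefficients of this affine function are obtained by evaluating at $\e=0$ and by differentiating in $\e$ (this is merely a restatement of the splitting above):
\[
\tK_{\bj, \ell}^\sigma = \Big( \omega(\lambda) \cdot \ell + \sum_{r=1}^4 \sigma_r \, \wt\Omega_{\jj_r}(\lambda, \e) \Big)\Big|_{\e = 0} ,
\qquad
\tF_{\bj, \ell}^{\sigma}(\lambda) = \partial_\e \Big( \omega(\lambda) \cdot \ell + \sum_{r=1}^4 \sigma_r \, \wt\Omega_{\jj_r}(\lambda, \e) \Big)\Big|_{\e = 0} ,
\]
so checking non-resonance reduces to studying the pair $(\tK_{\bj, \ell}^\sigma, \tF_{\bj, \ell}^{\sigma})$.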
\begin{lemma} Assume that the $\tm_k$'s do not solve any of the linear equations defined in \eqref{hyperplane} (this determines $\tL_2$ in the statement of Proposition \ref{hopeful thinking}). Consider a monomial of the form $e^{\im \theta \cdot \ell} \, a_{\jj_1}^{\sigma_1} \, a_{\jj_2}^{\sigma_2} \, a_{\jj_3}^{\sigma_3}\, a_{\jj_4}^{\sigma_4}$ with $(\bj, \ell, \sigma) \in\fA_4$. If $(\bj, \ell, \sigma) \not\in \fR_4$, then it is Birkhoff non-resonant. \end{lemma} \begin{proof} We write explicitly the functions $\tK_{\bj, \ell}^\sigma$ and $\tF_{\bj, \ell}^{\sigma}(\lambda)$ as \begin{align} \tK_{\bj, \ell}^\sigma & := \omega^\0 \cdot \ell + \s_1\wtOmega_{\jj_1}(\lambda, 0) + \sigma_2 \, \wtOmega_{\jj_2}(\lambda,0) +\s_3 \wtOmega_{\jj_3}(\lambda,0)+\s_4 \wtOmega_{\jj_4}(\lambda,0) \label{solapostdoc}\\ \tF_{\bj, \ell}^{\sigma}(\lambda) &:= \partial_{\e} \left. \Big(\omega(\lambda) \cdot \ell + \s_1 \wtOmega_{\jj_1}(\lambda, \e) +\sigma_2\wtOmega_{\jj_2}(\lambda, \e) + \sigma_3\wtOmega_{\jj_3}(\lambda, \e) +\sigma_4\wtOmega_{\jj_4}(\lambda, \e) \Big) \right|_{\e = 0}\notag\\ & =- \lambda\cdot \ell + \s_1\vartheta_{\jj_1}(\lambda) + \sigma_2 \vartheta_{\jj_2}(\lambda) + \sigma_3 \vartheta_{\jj_3}(\lambda) + \sigma_4 \vartheta_{\jj_4}(\lambda) \label{patata} \end{align} As in \cite{Maspero-Procesi}, $\tK_{\bj, \ell}^\sigma$ is an integer, while the functions $\vartheta_{\jj}(\lambda)$ belong to the finite list of functions $ \vartheta_{\jj}(\lambda)\in \Big\{0, \{\mu_i(\lambda)\}_{1 \leq i \leq \tk} \Big \}$ defined in Theorem \ref{thm:reducibility4}. Clearly, to prove that the resonance \eqref{res.4} does not hold identically, it is enough to ensure that \begin{equation} \label{K.F.0} \tK_{\bj, \ell}^\sigma = 0 \quad \mbox{ and } \quad \tF_{\bj, \ell}^\sigma(\lambda) \equiv 0 \ \end{equation} cannot occur for $(\bj,\ell, \sigma)\in \fA_4\setminus\fR_4$. We study all the possible combinations; in each case we assume that \eqref{K.F.0} holds and deduce a contradiction.
\begin{enumerate} \item $\jj_i \in \fZ$ for any $1 \leq i \leq 4$. In case $\ell \neq 0$, then $\tF_{\bj, \ell}^{\sigma}(\lambda) = -\lambda \cdot \ell$ is not identically $0$. Now take $\ell = 0$. By conservation of $\wt\cP_x$, $\wt\cP_y$ we have that $\sum_{i=1}^4 \sigma_i \jj_i=0$, and $\tK_{\bj, \ell}^\sigma = 0$ implies $\sum_{i=1}^4 \sigma_i\abs{\jj_i}^2 = 0$. Then, using mass conservation (see Remark \ref{leggi_sel1}), since $\ell=0$, one has $\sum_{i=1}^4 \sigma_i=0$ and therefore the $\jj_i$'s form a rectangle (and thus $(\bj,0,\sigma)$ belongs to $\fR_4$). \item $\jj_1, \jj_2, \jj_3 \in \fZ$, $\jj_4 \in \sS$. Then $\tF_{{\bf j}, \ell}^\sigma(\lambda) = -\lambda \cdot \ell + \sigma_4 \, \mu_{i}(\lambda)$ for some $1 \leq i \leq \tk$. If $\tF_{{\bf j}, \ell}^\sigma(\lambda) \equiv 0$ then $\mu_i(\lambda) = \sigma_4 \lambda \cdot \ell$ is a root in $\Z[\lambda]$ of the polynomial $P(t, \lambda)$ defined in Theorem \ref{thm:reducibility4}; however $P(t, \lambda)$ is irreducible over $\Q(\lambda)[t]$, thus leading to a contradiction. \item $\jj_1, \jj_2 \in \fZ$, $\jj_3, \jj_4 \in \sS$. W.l.o.g. let $\jj_3 = (\tm_{i}, n_3)$, $\jj_4 = (\tm_{k}, n_4)$ for some $1 \leq i, k \leq \tk$. Then $$ \tF_{{\bf j}, \ell}^\sigma(\lambda) = -\lambda \cdot \ell + \sigma_3\, \mu_{i}(\lambda) + \sigma_4 \mu_{k}(\lambda) \ . $$ {\bf Case $\ell \neq 0$}. Then $\tF_{\bj, \ell}^\bs(\lambda)\equiv 0$ iff $ \mu_i(\lambda) \equiv - \sigma_3 \sigma_4 \mu_k(\lambda)+ \sigma_3\lambda \cdot \ell$. This means that $\mu_k(\lambda)$ is a common root of $P(t,\lambda)$ and $P(-\sigma_3\sigma_4 t + \sigma_3 \lambda \cdot \ell,\lambda ) $. However this last polynomial is irreducible as well, being the translation of an irreducible polynomial. Hence the two polynomials must be equal (or opposite). A direct computation shows that this does not happen (see Lemma 6.1 of \cite{Maspero-Procesi} for details). \\ {\bf Case $\ell = 0$}.
Then $\tF_{{\bf j}, \ell}^\sigma(\lambda) \equiv 0$ iff $\mu_i(\lambda) \equiv -\sigma_3\sigma_4 \mu_k(\lambda)$. \begin{itemize} \item[-] If $i \neq k$ and $\sigma_3\sigma_4 = -1$, then $P(t, \lambda)$ would have a root with multiplicity 2. But $P(t, \lambda)$, being an irreducible polynomial, has no multiple roots. \item[-] If $i \neq k$ and $\sigma_3\sigma_4 = 1$, then $P(t, \lambda)$ and $P(-t, \lambda)$ would have $\mu_k(\lambda)$ as a common root. However $P(-t, \lambda)$ is irreducible over $\Z[\lambda]$ as well, and two irreducible polynomials sharing a common root must coincide (up to a sign), i.e. $P(t, \lambda) \equiv \pm P(-t, \lambda)$. A direct computation using the explicit expression of $P(t, \lambda)$ shows that this is not true. \item[-] If $i=k$ and $\sigma_3\sigma_4 = 1$, then $\mu_i(\lambda) \equiv 0$, so $0$ would be a root of $P(t, \lambda)$. But $P(t, \lambda)$ is irreducible over $\Z[\lambda]$, so it cannot have $0$ as a root. \item[-] If $i=k$ and $\sigma_3\sigma_4 = -1$ (w.l.o.g. assume $\sigma_3=1$, $\sigma_4=-1$), by mass conservation one has $\sigma_1 + \sigma_2 = 0 $ and by conservation of $\wt\cP_x$ one has $\sigma_1 m_1 + \sigma_2 m_2 = 0$, thus $m_1 = m_2$. Then by conservation of $\wt\cP_y$ we get $n_1 - n_2 + n_3 - n_4= 0$, which together with $0 = \tK_{\bj, \ell}^\sigma =n_1^2 - n_2^2 + n_3^2 - n_4^2 $ gives $\{n_1, n_3\} = \{n_2, n_4\}$. One verifies easily that in such a case the sites $\jj_r$'s form a horizontal rectangle (that could even be degenerate), and therefore they belong to $\fR_4$. \end{itemize} \item $\jj_1, \jj_2, \jj_3 \in \sS$, $\jj_4 \in \fZ$. W.l.o.g. let $\jj_1=(\tm_{i_1}, n_1)$, $\jj_2=(\tm_{i_2}, n_2)$, $\jj_3= (\tm_{i_3}, n_3)$ for some $1 \leq i_1,i_2,i_3 \leq \tk$ and $n_1, n_2, n_3 \neq 0$. Then $$ \tF_{\bj, \ell}^{\sigma}(\lambda) = -\lambda \cdot \ell + \sigma_1 \mu_{i_1}(\lambda)+ \sigma_2 \mu_{i_2}(\lambda) + \sigma_3 \mu_{i_3}(\lambda) \ .
$$ By conservation of mass $\eta(\ell) + \sigma_4 = 0$, hence $\ell \neq 0$. Assume $\tF_{{\bf j}, \ell}^\sigma(\lambda) \equiv 0$. This can happen only for (at most) one choice of $\ell = \ell^{({\bf i}, \sigma)} \in \Z^\tk$, where ${\bf i}:=(i_1, i_2, i_3)$. By conservation of $\wt\cP_x$ we have $\sum_{k} \tm_k \ell_k^{({\bf i}, \sigma)} + \sigma_4 m_4 = 0$. These two conditions fix $m_4 \equiv m_4^{({\bf i}, \sigma)}$ uniquely. In particular, if $m_4$ is sufficiently large, we have a contradiction. \item $\jj_r \in \sS$, $\forall 1 \leq r \leq 4$. Then $$ \tF_{\bj, \ell}^{\sigma}(\lambda) = -\lambda \cdot \ell + \sigma_1 \mu_{i_1}(\lambda)+ \sigma_2 \mu_{i_2}(\lambda) + \sigma_3 \mu_{i_3}(\lambda)+ \sigma_4 \mu_{i_4}(\lambda) \ . $$ If $\ell \neq 0$, the condition $\tF_{{\bf j}, \ell}^\sigma(\lambda) \equiv 0$ fixes $\ell = \ell^{({\bf i}, \sigma)} \in \Z^\tk$ uniquely, where ${\bf i}:=(i_1, i_2, i_3, i_4)$. By conservation of $\wt\cP_x$ we have the condition \begin{equation}\label{hyperplane} \sum_{k} \tm_k \ell_k^{({\bf i}, \sigma)} = 0 \end{equation} defining a hyperplane, which can be excluded by suitably choosing the tangential sites $\tm_k$ (recall that the functions $\mu_i (\lambda)$ are independent of this choice, see Remark \ref{rmk:mus}). If $\ell = 0$, we have $\sum_{r} \sigma_r n_r = \sum_r \sigma_r n_r^2 = 0$. Then $\{n_1, n_3\} = \{n_2, n_4\}$. One verifies easily that in such a case the sites $\jj_r$'s form a horizontal trapezoid (that could even be degenerate). \end{enumerate} \end{proof} \subsection{Analysis of monomials of the form $e^{\im \theta \cdot \ell} \,\yy^l a_{\jj_1}^{\sigma_1} \, a_{\jj_2}^{\sigma_2} $} In this case, since the factor $\yy^l$ does not affect the Poisson brackets, admissible monomials (in the sense of Definition \ref{rem:adm3}) are non-resonant provided they do not belong to the set $\fR_2$ introduced in Definition \ref{def:R2}.
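To see why the factor $\yy^l$ is harmless, note that the actions $\yy$ Poisson-commute with the quadratic part of the Hamiltonian, so (a sketch of the standard computation) the small divisor attached to such a monomial is
\[
\omega(\lambda) \cdot \ell + \sigma_1 \, \wt\Omega_{\jj_1}(\lambda, \e) + \sigma_2 \, \wt\Omega_{\jj_2}(\lambda, \e) ,
\]
which is independent of the exponent $l$; the analysis then proceeds as for the quartic monomials, with two sites instead of four.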
\begin{lemma}\label{weakBNR2} Any monomial of the form $e^{\im \theta \cdot \ell} \, a_{\jj_1}^{\sigma_1} \, a_{\jj_2}^{\sigma_2} \yy_i $ with $(\bj, \ell, \sigma)\notin\fR_2$ which is admissible in the sense of Definition \ref{rem:adm3} is Birkhoff non-resonant. \end{lemma} \begin{proof} We skip the proof since it is analogous to that of Lemma 6.1 of \cite{Maspero-Procesi}. \end{proof} \subsection{Quantitative measure estimate} We are now in a position to prove our quantitative non-resonance estimate. Recall that, by Theorem \ref{thm:reducibility}, the frequencies $\Omega_\jj(\lambda, \e)$ of the Hamiltonian \eqref{ham.bnf3} have the form \eqref{as.omega}. Expanding $\Omega_\jj(\lambda, \e)$ in a Taylor series in powers of $\e$, we get that \begin{equation} \label{bb0} \omega(\lambda) \cdot \ell + \sigma_1 \Omega_{\jj_1}(\lambda, \e) +\sigma_2 \Omega_{\jj_2}(\lambda, \e) + \sigma_3 \Omega_{\jj_3}(\lambda, \e) + \sigma_4 \Omega_{\jj_4}(\lambda, \e) = \tK_{\bj, \ell}^\bs+ \e \ \tF_{\bj, \ell}^\bs(\lambda) + \e^2 \ \tG_{\bj, \ell}^\bs(\lambda,\e) \ , \end{equation} where $\tK_{\bj, \ell}^\bs $ is defined in \eqref{solapostdoc} and $\tF_{\bj, \ell}^\bs(\lambda) $ is defined in \eqref{patata}. We wish to prove that the set of $\lambda\in \cC^{(2)}_{\e}$ such that \begin{equation} \label{eq:IIIm} \abs{\omega(\lambda) \cdot \ell + \s_1\Omega_{\jj_1}(\lambda, \e)+ \s_2 \Omega_{\jj_2}(\lambda, \e)+\s_3 \Omega_{\jj_3}(\lambda, \e) + \s_4 \Omega_{\jj_4}(\lambda, \e)} \geq \e \frac{\gamma_2}{\la \ell \ra^{\tau_2}} \ , \qquad \forall \, (\bj,\ell,\bs)\in \fA_4\setminus\fR_4 \end{equation} has positive measure for $\g_2$ and $\e$ small enough and $\tau_2$ large enough. We treat separately the cases $|\ell | \leq 4\tM_0$ and $|\ell| > 4\tM_0$. \subsubsection{Case $|\ell| \leq 4 \tM_0$} We start with the following lemma.
\begin{lemma} \label{lem:cono} There exists ${\tt k} \in \N$ such that for any $\gamma_c>0$ sufficiently small, there exists a compact domain $\cC_{\rm c}\subset\cO_0$, with $|\cO_0 \setminus \cC_{\rm c} | \sim \g_c^{1/{\tt k}}$ and \begin{equation}\notag \min\left\lbrace\abs{\tF_{\bj, \ell}^\bs(\lambda)} \colon \lambda \in \cC_{\rm c}, \, (\ell,\bj, \bs )\in \fA_4\setminus\fR_4, \, |\ell| \leq 4\tM_0\,, \, \tK_{\bj, \ell}^\bs=0 \right\rbrace \geq \g_c > 0 \ . \end{equation} \end{lemma} \begin{proof}See Lemma 6.4 of \cite{Maspero-Procesi}. The estimate on the measure follows from classical results on sublevels of analytic functions. \end{proof} We can now prove the following result. \begin{proposition} \label{prop:cC} There exist $\e_c>0$ and a set $\cC_c\subset \cO_0$ such that for any $\e \leq \e_c$ and any $\lambda \in \cC_{c}$, one has \begin{equation} \label{cCc.est} \abs{\omega(\lambda) \cdot \ell + \sum_{l = 1}^4 \s_l\Omega_{\jj_l}(\lambda, \e)} \geq \frac{\g_c \e }{2} \ , \qquad \forall \, (\bj,\ell,\bs)\in \fA_4\setminus\fR_4 \ , \quad |\ell| \leq 4 \tM_0. \end{equation} Moreover, one has that $|\cO_0\setminus \cC_c|\leq \alpha \e_c^\kappa$, where $\alpha, \kappa$ do not depend on $\e_c$. \end{proposition} \begin{proof} By the very definition of $\tM_0$ in \eqref{theta.est} and the estimates on the eigenvalues given in Theorem \ref{thm:reducibility4}, one has $\sup_{\lambda \in \cO_0} |\tF_{\bj, \ell}^\bs(\lambda)| \leq 8\, \tM_0 $ and $ \sup_{\lambda \in \cO_{0}} |\tG_{\bj, \ell}^\bs(\lambda,\e)| \leq 4\, \tM_0$. Assume first that $\tK_{\bj, \ell}^\bs \in \Z\setminus\{0\}$. Then, if $\e_c$ is sufficiently small, for $\e < \e_c$ one has $$ \abs{ \eqref{bb0} } \geq |\tK_{\bj, \ell}^\bs| - 8 \e \tM_0 - 4 \e^2 \tM_0 \geq \frac{1}{2}. $$ Hence, for such $\ell$'s, \eqref{cCc.est} is trivially fulfilled $\forall \lambda \in \cO_0$.
If instead $\tK_{\bj, \ell}^\bs = 0$, we use Lemma \ref{lem:cono} with $\gamma_c=10\tM_0 \e_c$ to obtain a set $\cC_c\subset \cO_0$ such that for any $\lambda \in \cC_{c}$ and any $(\bj,\ell,\bs)\in \fA_4\setminus\fR_4$ with $ |\ell| \leq 4 \tM_0$ $$ \abs{\eqref{bb0}} \geq \e \g_c - 4 \e^2 \tM_0 \geq \frac{\e\g_c}{2} \ . $$ \end{proof} \subsubsection{ Case $|\ell| > 4\tM_0$} In this case we prove the following result. \begin{proposition} \label{prop:Cs} Fix $\e_\star >0$ sufficiently small and $\tau_\star >0$ sufficiently large. For any $\e < \e_\star$, there exists a set $\cC_\star \subset \cO_0$ such that $\abs{\cO_0 \setminus \cC_\star} \leq \alpha \e_\star ^\kappa $ (with $\alpha, \kappa$ independent of $\e_\star$), and for any $\lambda \in \cC_\star$ and $ |\ell| > 4 \tM_0$ one has \begin{equation} \label{cCs.est} \abs{\omega(\lambda) \cdot \ell + \sum_{l = 1}^4 \s_l\Omega_{\jj_l}(\lambda, \e)} \geq \gamma_\star \frac{\e}{\la \ell \ra^{\tau_\star}} \end{equation} for some constant $\gamma_\star$ depending on $\e_\star$.
\end{proposition} To prove the proposition, first define, for $1 \leq i \leq \tk$ and $0 \leq k \leq \tk$, the functions \begin{equation}\notag \widehat\tF_{i,k}(\lambda) = \begin{cases} \e \mu_{i}(\lambda) & \mbox{ if } k = 0 \\ \e \mu_{i,k}^+ (\lambda) & \mbox{ if } 1 \leq i < k \leq \tk \\ \e \mu_{i,k}^- (\lambda) & \mbox{ if } 1 \leq k < i \leq \tk \\ 0 & \mbox{ if}\; 1 \leq i=k \leq \tk \end{cases} \end{equation} The right hand side of \eqref{bb0} is always of the form \begin{equation} \label{cc} \begin{aligned} \omega (\lambda) \cdot \ell + K &+\eta_1 \widehat\tF_{i_1,k_1}(\lambda) + \eta_2 \widehat\tF_{i_2,k_2}(\lambda) + \eta_3 \widehat\tF_{i_3,k_3}(\lambda) + \eta_4 \widehat\tF_{i_4,k_4}(\lambda) \\ &+ \eta_{11} \frac{\Theta_{m_1}(\lambda, \e)}{\la m_1 \ra^2} + \eta_{12} \frac{\Theta_{m_2}(\lambda, \e)}{\la m_2 \ra^2} + \eta_{13} \frac{\Theta_{m_3}(\lambda, \e)}{\la m_3 \ra^2} +\eta_{14} \frac{\Theta_{m_4}(\lambda, \e)}{\la m_4 \ra^2} \\ &+ \eta_{21} \frac{\Theta_{m_1,n_1}(\lambda, \e)}{\la m_1\ra^2 + \la n_1 \ra^2} + \eta_{22} \frac{\Theta_{m_2,n_2}(\lambda, \e)}{\la m_2 \ra^2 + \la n_2 \ra^2} + \eta_{23} \frac{\Theta_{m_3,n_3}(\lambda, \e)}{\la m_3 \ra^2+ \la n_3 \ra^2 } + \eta_{24} \frac{\Theta_{m_4,n_4}(\lambda, \e)}{\la m_4 \ra^2+ \la n_4 \ra^2 } \\ & {+ \eta_{31} \frac{\varpi_{m_1}(\lambda, \e)}{\la m_1 \ra} + \eta_{32} \frac{\varpi_{m_2}(\lambda, \e)}{\la m_2 \ra} + \eta_{33} \frac{\varpi_{m_3}(\lambda, \e)}{\la m_3 \ra}} + \eta_{34} \frac{\varpi_{m_4}(\lambda, \e)}{\la m_4 \ra} \end{aligned} \end{equation} for a particular choice of $K \in \Z$, $m_i\in \Z,n_i\in N\Z\setminus \{0\}$ and $\eta_r, \eta_{j j'} \in \{ -1, 0, 1 \}$. Therefore it is enough to show \eqref{cCs.est} where the left hand side is replaced by \eqref{cc}. 
\begin{proof}[Proof of Proposition \ref{prop:Cs}] If the integer $K$ is sufficiently large, namely $|K| \geq 4 \, |\ell| \displaystyle{\max_{1 \leq i \leq \tk} } (\tm_i^2) $, then the quantity in the left hand side of \eqref{cCs.est} is far from zero. More precisely one has \begin{align*} |\eqref{cc} | & \geq |K| - |\omega(\lambda)| \, |\ell| - \sum_{r=1}^4 \abs{\widehat\tF_{i_r,k_r}}^{\cO_1} - \sum_{r=1}^4 \frac{\abs{\Theta_{m_r}(\cdot, \e )}^{\cO_1}}{\la m_r \ra^2} - \sum_{r=1}^4 \frac{\abs{\Theta_{m_r, n_r}(\cdot, \e )}^{\cO_1}}{\la m_r \ra^2 + \la n_r \ra^2 } {- \sum_{r=1}^4 \frac{\abs{\varpi_{m_r}(\cdot, \e )}^{\cO_1}}{\la m_r \ra} }\\ \notag & \geq 4 \, \max_{1 \leq i \leq \tk} (\tm_i^2) \, |\ell| - \max_{1 \leq i \leq \tk} (\tm_i^2) \, |\ell| - \e |\ell| - 4\e \tM_0-4\e^2 \tM_0 \geq \tM_0 \ . \end{align*} So from now on we restrict ourselves to the case $|K| \leq 4 \, |\ell| \displaystyle{\max_{1 \leq i \leq \tk} } (\tm_i^2) $. We will repeatedly use the following result, which is an easy variant of Lemma 5 of \cite{Poschel96a}. \begin{lemma} \label{c.2} Fix arbitrary $K \in \Z$, $m_i\in \Z$, $n_i\in \Z\setminus \{0\}$, $\eta_r, \eta_{j j'} \in \{ -1, 0, 1 \}$. For any $\al >0$ one has \begin{equation}\notag {\rm meas}(\{ \lambda \in \cO_{0}: |\eqref{cc}| < \e\alpha \}) < 16 \alpha |\ell|^{-1} \ . \end{equation} \end{lemma} The proof relies on the fact that all the functions appearing in \eqref{cc} are Lipschitz in $\lambda$; for full details see, e.g., Lemma C.2 of \cite{Maspero-Procesi}. Now, let us fix \begin{equation} \label{cond12} \g_\star =100\, \e_\star \tM_0. \end{equation} We construct the set $\cC_\star$ by induction on the number $n$ defined by $$n:=|\eta_{11}|+ \cdots + |\eta_{34}| \leq 12 $$ which is nothing but the number of nonzero coefficients in \eqref{cc}.
For every $0 \leq n \leq 12$ we construct $(i)$ a positive increasing sequence $\tau_n$ and $(ii)$ a sequence of nested sets $\cC^n = \cC^n(\g_\star, \tau_n)$ such that \begin{enumerate} \item There exists $C >0$, independent of $\e$ and $\g_\star$, s.t. \begin{equation} \label{meas.cn} \meas(\cO_0 \setminus \cC^{0}) \leq C \g_\star \ , \quad \meas(\cC^n \setminus \cC^{n+1}) \leq C \g_\star \end{equation} \item For $\lambda \in \cC^n$ and $|\ell| \geq 4 \tM_0$ one has \begin{align} \label{eta.ind} \Big| \eqref{cc} \Big| \geq \frac{\e \ \g_\star}{\la \ell\ra ^{\tau_n}} \ . \end{align} \end{enumerate} Then the proposition follows by taking $\cC_\star := \cC^{12}$, $\tau_\star = \tau_{12}$, so that one has $\abs{\cO_0 \setminus \cC_\star} \leq 13 C \g_\star \sim \g_\star$, provided $\g_\star$ is small enough. \\ \noindent\textbf{Case $n=0$:} Define the set $$ G_{K, \bi, \bk, \eta,\ell}^0(\g_\star, \tau_0) := \left\{\lambda \in \cO_0 \ : \ |\eqref{cc}| \leq \frac{\e \ \g_\star}{\la \ell\ra ^{\tau_0}} \ \mbox{ and } \ \eta_{j j'} = 0 \ \ \ \forall j, j' \right\} \ , $$ where $K \in \Z$ with $|K|\leq 4 \, \displaystyle{\max_{1 \leq i \leq \tk} (\tm_i^2)} \, |\ell|$, $\bi = (i_1, i_2, i_3, i_4) \in \{1, \ldots \tk\}^4$, $\bk= (k_1, k_2, k_3, k_4) \in \{0,\ldots ,\tk\}^4$, $\ell \in \Z^\tk$ with $|\ell| \geq 4 \tM_0$, $\eta=(\eta_1,\eta_2,\eta_3, \eta_4) \in \{-1, 0, 1\}^4$. By Lemma \ref{c.2} with $\alpha = \g_\star \la \ell \ra^{-\tau_0}$ we have $$ \meas \left( G_{K, \bi, \bk, \eta,\ell}^0(\g_\star, \tau_0)\right) \leq \frac{ 16 \g_\star}{ \, \la \ell\ra ^{\tau_0+1}} \ . 
$$ Taking the union over all the possible values of $K, \bi, \bk,\eta, \ell$ one gets that $$ \meas \left ( \bigcup_{|\ell| \geq 4 \tM_0, \ \bi, \bk,\eta \atop |K| \leq 4 \, \max_i (\tm_i^2) \, |\ell| } G_{K, \bi, \bk, \eta, \ell}^0(\g_\star, \tau_0) \right) \leq C(\tk) \, \g_\star \, \sum_{|\ell| \geq 4 \tM_0} \frac{1}{ \, \la \ell\ra ^{\tau_0}} \leq C \g_\star \ , $$ where the last sum is finite provided $\tau_0 \geq \tk+1$. Letting $$\cC^0 := \cO_0 \setminus \bigcup_{|\ell| \geq 4 \tM_0, \ \bi, \bk,\eta \atop |K| \leq 4 \, \max_i (\tm_i^2) \, |\ell| } G_{K, \bi, \bk,\eta, \ell}^0(\g_\star, \tau_0) $$ one clearly has that $\meas (\cO_0 \setminus \cC^0) \leq C\g_\star$ and for $\lambda \in \cC^0 $ we have \begin{equation}\notag \abs{ \omega(\lambda) \cdot \ell + K + \sum_{r=1}^4 \eta_r \widehat\tF_{i_r,k_r}(\lambda) } \geq \frac{\e \ \g_\star}{ \la \ell\ra ^{\tau_0}} \end{equation} for any admissible choice of $\ell,K,\bi,\bk,\eta$. This proves the claim for $n=0$.\\ \noindent\textbf{Case $n \leadsto n+1$:} Assume that \eqref{eta.ind} holds for any possible choice of $\eta_{11}, \ldots, \eta_{34}$ s.t. $|\eta_{11}|+ \cdots + |\eta_{34}| \leq n \leq 11 $ for some $(\tau_j)_{j = 0}^n$. We now prove the step $n+1$. Let us fix $\tau_{n+1} \geq \tk + 1+ 6 \tau_n$. We shall show that for each $| \ell | \geq 4 \tM_0$, the set \begin{equation} \label{setC} G_\ell^{n+1}:= \left\lbrace \lambda \in \cC^n \colon \abs{\eqref{cc}} \leq \frac{\e \g_\star}{\la \ell \ra^{\tau_{n+1}}} \ , \ \ \ |\eta_{11}| +\ldots+ |\eta_{34}| = n + 1 \right\rbrace \end{equation} has measure $\displaystyle{\leq \frac{C(\tk) \g_\star}{\la \ell \ra^{\tk +1}}}$. Thus, defining $$ \cC^{n+1} := \cC^n\setminus \bigcup_{|\ell| \geq 4 \tM_0} G^{n+1}_{ \ell}(\g_\star, \tau_{n+1}) $$ we obtain the claimed estimates \eqref{meas.cn} and \eqref{eta.ind}.
To estimate the measure of \eqref{setC} we split into three cases.\\ \medskip \noindent {\em Case 1}: Assume that \begin{equation}\notag \exists \ m_i \mbox{ s.t. } |m_i| \geq \la \ell \ra^{\tau_n} \end{equation} (of course we also assume that one of the coefficients $\eta_{1i}, \eta_{2i}, \eta_{3i}$ is nonzero). W.l.o.g. assume it is $m_4$. Then we treat all the terms in \eqref{cc} which contain $m_4$ as perturbations, and we estimate all the other terms using the inductive assumption. Here are the details: first we have $$ \abs{\frac{\Theta_{m_4}(\lambda, \e)}{\la m_4\ra^2 }} + \abs{\frac{\Theta_{m_4,n_4}(\lambda, \e)}{\la m_4 \ra^2 + \la n_4 \ra^2 }}+ \abs{\frac{\varpi_{m_4}(\lambda, \e)}{\la m_4\ra }} \leq \frac{\tM_0 \, \e^2}{\la \ell \ra^{\tau_n}} . $$ By the inductive assumption \eqref{eta.ind} and \eqref{cond12}, for any $\lambda \in \cC^n$ one has \begin{align*} \Big| \eqref{cc} \Big| \geq& \left| \omega(\lambda) \cdot \ell + K +\sum_{j=1}^4 \eta_j \widehat\tF_{i_j,k_j}(\lambda) + \sum_{j=1}^3 \eta_{1j} \frac{\Theta_{m_j}(\lambda, \e)}{\la m_j \ra^2} + \sum_{j=1}^3\eta_{2j} \frac{\Theta_{m_j,n_j}(\lambda, \e)}{\la m_j\ra^2 + \la n_j \ra^2} + \sum_{j=1}^3 \eta_{3j} \frac{\varpi_{m_j}(\lambda, \e)}{\la m_j \ra} \right| \\ &- \frac{\tM_0 \, \e^2}{\la \ell \ra^{\tau_n}} \\ \geq& \frac{\e \ \g_\star}{ \la \ell\ra ^{\tau_n}} - \frac{\tM_0 \, \e^2}{\la \ell \ra^{\tau_n}} \geq \frac{\e \ \g_\star}{2 \la \ell\ra ^{\tau_{n}}} \geq \frac{\e \ \g_\star}{ \la \ell\ra ^{\tau_{n+1}}} \end{align*} provided $\tau_{n+1} \geq \tau_n +1$. Therefore, in this case, there are no $\lambda$'s contributing to the set \eqref{setC}.\\ \noindent{\em Case 2:} Assume that \begin{equation}\notag \exists \ n_i \mbox{ s.t. } |n_i|^2 \geq \la \ell \ra^{\tau_n} \ \end{equation} (again we also assume that one of the coefficients $\eta_{2i}$ is nonzero). W.l.o.g. assume it is $n_4$.
Similarly to the previous case, we treat the term in \eqref{cc} which contains $n_4$ as a perturbation, and we estimate all the other terms using the inductive assumption. More precisely we have $$ \abs{\frac{\Theta_{m_4,n_4}(\lambda, \e)}{\la m_4 \ra^2 + \la n_4 \ra^2}} \leq \frac{\tM_0 \, \e^2}{\la \ell \ra^{\tau_n}} , $$ so by the inductive assumption \eqref{eta.ind} and \eqref{cond12} \begin{align*} \Big| \eqref{cc} \Big| \geq & \Bigg| \omega(\lambda) \cdot \ell + K +\sum_{j=1}^4 \eta_j \widehat\tF_{i_j,k_j}(\lambda) + \sum_{j=1}^4 \eta_{1j} \frac{\Theta_{m_j}(\lambda, \e)}{\la m_j \ra^2} + \sum_{j=1}^3\eta_{2j} \frac{\Theta_{m_j,n_j}(\lambda, \e)}{\la m_j\ra^2+ \la n_j \ra^2} + \sum_{j=1}^4 \eta_{3j} \frac{\varpi_{m_j}(\lambda, \e)}{\la m_j \ra}\Bigg| \\ & - \frac{\tM_0 \, \e^2}{\la \ell \ra^{\tau_n}} \\ \geq & \frac{\e \ \g_\star}{2 \la \ell\ra ^{\tau_{n}}} \geq \frac{\e \ \g_\star}{\la \ell\ra ^{\tau_{n+1}}} \end{align*} provided $\tau_{n+1} \geq \tau_n +1$. Also in this case, there are no $\lambda$'s contributing to the set \eqref{setC}.\\ \noindent{\em Case 3:} We have $$|m_i|\, , |n_i|^2 \leq \la \ell \ra^{\tau_n}$$ for all the $m_i, n_i$ that appear in \eqref{cc} with nonzero coefficients. Furthermore, recall that we are considering only the case $|K| \leq 4 \, \max_i (\tm_i^2) \, | \ell | . $ Thus we are left with a finite number of cases and we can impose a finite number of Melnikov conditions. So define the sets $$ G^{n+1}_{K, \bi, \bk, \eta, \ell, \bm, \bn}(\g_\star, \tau_{n+1}) := \left\{\lambda \in \cC^n \ : \ |\eqref{cc}| \leq \frac{\e \ \g_\star}{ \la \ell\ra ^{\tau_{n+1}}} \ , \qquad |\eta_{11}|+ \cdots + |\eta_{34}| = n+1 \right\} .
$$ By Lemma \ref{c.2} with $\alpha = \g_\star \la \ell \ra^{-\tau_{n+1}}$ we have \begin{equation} \label{atl} \meas \left( G^{n+1}_{K, \bi, \bk, \eta, \ell, \bm, \bn}(\g_\star, \tau_{n+1}) \right) \leq \frac{ 16\g_\star}{ \, \la \ell\ra ^{\tau_{n+1}+1}} \ , \end{equation} and taking the union over the possible values of $K, \bi, \bk, \eta, \bm, \bn$ one gets that $$ G_\ell^{n+1} \equiv \bigcup_{ \bi, \bk, \eta} \ \ \ \bigcup_{|m_i| \, ,\ |n_i|^2 \leq \la \ell \ra^{\tau_n} \atop |K| \leq 4 \, \max_i (\tm_i^2) \, |\ell| } G^{n+1}_{ K, \bi, \bk, \eta, \ell, \bm, \bn}(\g_\star, \tau_{n+1}) . $$ Estimate \eqref{atl} immediately gives $$ \meas \Big( G_\ell^{n+1} \Big) \leq C(\tk) \, \g_\star \frac{ \la \ell \ra^{1+ 6 \tau_n}}{ \, \la \ell\ra ^{\tau_{n+1}+1}} \leq \frac{ C(\tk) \, \g_\star }{ \la \ell\ra ^{\tk +1 }} $$ which is what we claimed. \end{proof} We can finally prove Proposition \ref{hopeful thinking}. \begin{proof}[Proof of Proposition \ref{hopeful thinking}] Fix $ \g_c = \g_\star =: \g_2$ sufficiently small, and put $\e_2 := \min(\e_c, \e_\star)$, $\tau_2 := \tau_\star$ and $\cC^{(2)} := \cC_{\rm c} \cap \cC_\star$. Propositions \ref{prop:cC} and \ref{prop:Cs} guarantee that for any $\lambda \in \cC^{(2)}$, estimate \eqref{eq:IIIm} is fulfilled. Finally one has $\abs{\cC^{(1)} \setminus \cC^{(2)}} \lesssim \g_2^{1/{\tt k}} + \g_2 \sim \g_2^{1/{\tt k}}$. \end{proof} \bibliography{references} \bibliographystyle{alpha} \end{document}
TITLE: differentiability and compactness QUESTION [1 upvotes]: I have no idea how to show whether this statement is false or true: If every differentiable function on a subset $X\subseteq\mathbb{R}^n$ is bounded then $X$ is compact. Thank you REPLY [0 votes]: If $X$ is unbounded, take some variant of $\|x\|$. If $X\subset \mathbb{R}^n$ is bounded but not closed, then there is some $x\in\partial X\cap X'$. To have any smooth functions, $X$ must have interior, so take a small ball $B\subset X$ with $x\in\partial B$. Now your job boils down to finding a function on the unit ball which goes to $\infty$ as $x\to (1,0,\cdots,0)$ and goes to $0$ as $x\to$ any other point on the boundary.
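To make the reply's strategy concrete, here is a one-dimensional sanity check (added for illustration, not part of the original answer): on the bounded but non-closed set $X=(0,1]$, the differentiable function $f(x)=1/x$ is unbounded, and on an unbounded set the coordinate function (a variant of $\|x\|$) is unbounded.

```python
# Sketch illustrating the reply in one dimension (illustrative only).
# Bounded but not closed: X = (0, 1]. f(x) = 1/x is differentiable on X
# yet unbounded near the missing boundary point 0.
def f(x):
    return 1.0 / x

samples = [2.0 ** (-k) for k in range(1, 60)]   # points of X approaching 0
assert all(0 < x <= 1 for x in samples)         # the samples lie in X
assert f(samples[-1]) == 2.0 ** 59              # f blows up along the sequence

# Unbounded X, e.g. X = [0, oo): the differentiable function g(x) = x
# (a variant of ||x||) is unbounded, so such X also fails the hypothesis.
def g(x):
    return x

assert g(2.0 ** 50) == 2.0 ** 50
```

Both failure modes (not closed, not bounded) thus produce unbounded differentiable functions, which is exactly the contrapositive the answer outlines.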
TITLE: How to show that the graded dual of the universal enveloping algebra of a free Lie algebra on a finite set is the shuffle algebra QUESTION [4 upvotes]: In the article, the universal enveloping algebra of a free Lie algebra on a set X is defined to be the free associative algebra generated by X. It is said that the graded dual of the universal enveloping algebra of a free Lie algebra on a finite set is the shuffle algebra. What is the meaning of "graded dual"? How to show that the graded dual of the universal enveloping algebra of a free Lie algebra on a finite set is the shuffle algebra? Some examples to explain this result will be greatly appreciated! REPLY [3 votes]: All that has been said in the excellent comments by Darij is true for modules (the set of scalars being a ring) except maybe Radford's theorem (see below). I complete here what has been said, too long for a comment though. The shuffle product is called such because it "shuffles" the letters of two words considered as card decks. If you want to get all shuffles between two words you quickly come to the recursion (with $a,b$ being letters of an alphabet $X$ and $u,v\in X^*$ words with their letters in $X$) $$ au * bv=a(u * bv)+b(au * v)\quad (R2) $$ which means $$ \mbox{ all shuffles}=\mbox{shuffles beginning by "a"}+\mbox{shuffles beginning by "b"} $$ ($a$ being seen as the first card of the deck $au$ and $b$, the first of the deck $bv$). If you pursue the recursion, you arrive at the empty word and must initialize the recursion, considering it as a neutral $$ u* 1_{X^*}=1_{X^*}* u=u\quad (R1) $$ Of course, the output being a sum, you must linearize the situation and consider it as an algebra law on $R\langle X\rangle$ ($=R[X^*]$, the $R$-algebra of the free monoid $X^*$ i.e. the free algebra or the algebra of noncommutative polynomials) defined by $R1-R2$. 
It turns out that $(R\langle X\rangle,*,1_{X^*})$ is an $R$-CAAU (commutative, associative algebra with unit) in all cases (and an algebra of polynomials, i.e. a free CAAU, in case $R$ is a $\mathbb{Q}$-algebra, which is Radford's theorem). Now, to come to your question, given a set $X$ and a ring $R$, one builds the free monoid $X^*$, the free Lie algebra $Lie_R\langle X\rangle$ (in the category of $R$-Lie algebras), and the free algebra $R\langle X\rangle$ (in the category of $R$-AAU). By transitivity of free objects, one gets at once (and for all $R$) that $\mathcal{U}(Lie_R\langle X\rangle)$ is isomorphic to $R\langle X\rangle$. At this point, it is not automatic that $Lie_R\langle X\rangle$ be the Lie subalgebra of $R\langle X\rangle$ generated by the letters; that it is follows from the magic of Lyndon words [Lothaire, Combinatorics on words] (or Hall sets [Bourbaki, Lie chapter II; Reutenauer, Free Lie Algebras]), which also shows that, for all $R$, $Lie_R\langle X\rangle$ is free as an $R$-module. As for the graded dual, one can prove your statement by constructing the enveloping bialgebra $$ \mathcal{B}=(R\langle X\rangle,conc,1_{X^*},\Delta,\epsilon) $$ (considering that $R\langle X\rangle\simeq \mathcal{U}(Lie_R\langle X\rangle)$, with, this time, $Lie_R\langle X\rangle$ the Lie subalgebra of $R\langle X\rangle$ generated by the letters). Then, considering, for $x\in X$, $\Delta(x)=x\otimes 1+1\otimes x$, one gets the very nice expression (for any word $w\in X^*$ of length $n$) $$ \Delta(w)=\sum_{I+J=[1..n]} w[I]\otimes w[J]\quad (3)\ . $$ In order to get the dual, one must restrict the linear forms. The largest dual is Sweedler's one (you have a quick definition here); your question is about the graded dual. In both cases one gets (with a small abuse of language) $$ \mathcal{B}^\vee=(\mathcal{B}^\vee,*,1_{X^*},\Delta_{conc},\epsilon) $$ because the dual of $\Delta$ (which is a law of algebra) is precisely the shuffle product.
Now, just a final remark: if the alphabet is finite, you use the grading by the length of the words; if the alphabet is infinite, you must use the grading by multidegree (because $S_X=\sum_{x\in X}\,x$ is not rational, which means that $\Delta_{conc}\,S_X$ cannot be computed in our framework), and then you get $R\langle X\rangle^\vee=R\langle X\rangle$. Hope it helps! (Do not hesitate to ask for clarification.)
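To make the recursion (R1)-(R2) concrete, here is a small computational sketch (added for illustration; the function name is mine) that computes shuffle products as formal sums of words:

```python
from collections import Counter

def shuffle(u, v):
    """Shuffle product of two words, returned as a Counter (a formal sum).

    Implements the recursion (R1)-(R2):
      u * 1 = 1 * u = u,   au * bv = a(u * bv) + b(au * v).
    """
    if not u:
        return Counter({v: 1})
    if not v:
        return Counter({u: 1})
    result = Counter()
    for w, c in shuffle(u[1:], v).items():   # shuffles beginning with u[0]
        result[u[0] + w] += c
    for w, c in shuffle(u, v[1:]).items():   # shuffles beginning with v[0]
        result[v[0] + w] += c
    return result

# "ab" * "c": the three interleavings of the decks ab and c
assert shuffle("ab", "c") == Counter({"abc": 1, "acb": 1, "cab": 1})
# Commutativity, as expected for a CAAU
assert shuffle("ab", "cd") == shuffle("cd", "ab")
# "a" * "a" = 2 aa: coefficients matter, which is where the
# characteristic-zero hypothesis in Radford's theorem enters
assert shuffle("a", "a") == Counter({"aa": 2})
```

The Counter values are the integer coefficients of the formal sum, so the same code works over any $\mathbb{Z}$-algebra of scalars.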
TITLE: Tails of sums of Weibull random variables QUESTION [7 upvotes]: Suppose that $X_1, X_2, \ldots, X_n$ are i.i.d random variables distributed according to Weibull distribution with shape $0 < \epsilon < 1$ (it means that $\mathbf{Pr}[X_i \geq t] = e^{-\Theta(t^{\epsilon})}$). Now consider the random variable $S_n = X_1 + X_2 + \ldots + X_n$, when $n$ tends to infinity. Clearly, $\mathbf{E}[S_n] = O_{\epsilon}(n)$. Is it true that for some $C = C(\epsilon)$ we have $\mathbf{Pr}[S_n \geq C n] \leq e^{-\Omega_{\epsilon}(n^{\alpha})}$ for some $\alpha = \alpha(\epsilon) > 0$? If so, what is the largest $\alpha$ one can get? The standard MGF-based methods that work nicely in similar situations are not applicable here due to the fact that $X_i$'s are heavy-tailed. My feeling is that this question must be studied somewhere. REPLY [1 votes]: The affirmative answer can be found in this paper by A.V.Nagaev: essentially "the conjecture" is true for $\alpha = \varepsilon$ (which is clearly the best possible).
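A Monte Carlo sanity check (added here for illustration; the parameters $\epsilon = 1/2$, $C = 2\,\mathbf{E}[X_i]$ and the sample sizes are ad hoc choices) is consistent with the affirmative answer:

```python
# Monte Carlo sanity check with illustrative parameters.
# numpy.random.Generator.weibull(a) draws Weibull samples with shape a and
# scale 1, so E[X] = Gamma(1 + 1/a); for a = 0.5 this equals Gamma(3) = 2.
import math
import numpy as np

rng = np.random.default_rng(0)
eps, n, trials = 0.5, 1000, 200
mean = math.gamma(1 + 1 / eps)          # = 2 for eps = 0.5

sums = rng.weibull(eps, size=(trials, n)).sum(axis=1)
# Law of large numbers: S_n / n concentrates near E[X]
assert abs(sums.mean() / n - mean) < 0.5
# With C = 2 E[X], the event {S_n >= C n} is essentially never observed,
# consistent with P[S_n >= C n] <= exp(-Omega(n^eps)) and alpha = eps
frac = np.mean(sums >= 2 * mean * n)
assert frac <= 0.01
```

This does not, of course, distinguish between candidate values of $\alpha$; for that one needs the large-deviation analysis in the cited paper.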
TITLE: Solve the nonlinear equation $-xu_x+uu_y=y$ QUESTION [4 upvotes]: Solve the nonlinear equation $$\left\{\begin{matrix} -xu_x+uu_y=y & \\ u(x,2x)=0& \end{matrix}\right.$$ using the method of characteristic curves. My attempt: for the PDE we can write $\frac{dx}{-x}=\frac{dy}{u}=\frac{du}{y}$, but I can't get any further. A little help please... thank you. REPLY [1 votes]: $$-xu_x+uu_y=y $$ $$\dfrac{dx}{-x}=\dfrac{dy}{u}=\dfrac{du}{y}$$ First family of characteristic curves, from $\quad\dfrac{dy}{u}=\dfrac{du}{y}\quad\to\quad u^2-y^2=c_1$ $\dfrac{dy}{u}=\dfrac{du}{y}=\dfrac{dy+du}{u+y}=\dfrac{d(u+y)}{u+y}$ Second family of characteristic curves, from $\quad\dfrac{dx}{-x}=\dfrac{d(u+y)}{u+y} \quad\to\quad (u+y)x=c_2$ General solution in the form of an implicit equation: $$(u+y)x=F(u^2-y^2)$$ where $F(X)$ is any differentiable function. Condition: $u(x,2x)=0\quad\implies\quad (0+2x)x=F(0^2-(2x)^2) \quad\to\quad 2x^2=F(-4x^2)$ $$F(X)=-\frac{1}{2}X$$ Particular solution according to the boundary condition, with $X=u^2-y^2$: $\quad (u+y)x=-\frac{1}{2}(u^2-y^2)\quad$ and after simplification: $\quad x=-\frac{1}{2}(u-y)$ $$u(x,y)=y-2x$$
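The particular solution can be verified symbolically; the following check (added here for illustration, using sympy) confirms the PDE, the boundary condition, and the implicit form:

```python
# Symbolic check that u(x, y) = y - 2x solves -x u_x + u u_y = y with u(x, 2x) = 0.
import sympy as sp

x, y = sp.symbols('x y')
u = y - 2 * x

pde = -x * sp.diff(u, x) + u * sp.diff(u, y) - y
assert sp.simplify(pde) == 0                      # the PDE is satisfied
assert sp.simplify(u.subs(y, 2 * x)) == 0         # boundary condition u(x, 2x) = 0

# The implicit general solution (u + y) x = F(u^2 - y^2) with F(X) = -X/2:
F = lambda X: -X / 2
assert sp.simplify((u + y) * x - F(u**2 - y**2)) == 0
```

Indeed, by hand: $u_x=-2$, $u_y=1$, so $-x(-2)+(y-2x)\cdot 1 = y$.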
TITLE: Boundary conditions for $\mathbf D$ and $\mathbf H$ QUESTION [0 upvotes]: I understand the derivation for the boundary conditions for $\mathbf B$ and $\mathbf E$ as it was explained to me in Griffiths, but Griffiths states the following: $$H_{\text{above}}^{\bot} - H_{\text{below}}^{\bot} = -(M_{\text{above}}^{\bot}-M_{\text{below}}^{\bot})$$ and $$\mathbf D_{\text{above}}^{\parallel} - \mathbf D_{\text{below}}^{\parallel} = \mathbf P_{\text{above}}^{\parallel} - \mathbf P_{\text{below}}^{\parallel}$$ This feels like the right idea, due to the relations $\nabla \times \mathbf D = \nabla \times \mathbf P$ and $\nabla \cdot \mathbf H = - (\nabla \cdot \mathbf M)$; and since, from what I learned, $\nabla \cdot \mathbf E = \rho / \epsilon_0 $ implies a discontinuity in the perpendicular component of the electric field, it makes sense to me by analogy that the discontinuity in the parallel component of the $\mathbf D$ field is the way it is. However, I can't quite piece together a coherent argument justifying these boundary conditions. How is this proven? Also, the $\mathbf H$-field perpendicular component boundary conditions are expressed in magnitude in Griffiths, yet the $\mathbf D$-field parallel component boundary conditions were expressed as vectors, as I've shown. Why is that? If I made the $\mathbf H$-field a vector in the boundary condition I wrote for it, wouldn't that still be true? REPLY [0 votes]: In brief, the argument goes: $\nabla \cdot {\bf D}$ equation $\rightarrow$ condition on $D_\perp$. $\nabla \cdot {\bf B}$ equation $\rightarrow$ condition on $B_\perp$. $\nabla \wedge {\bf E}$ equation $\rightarrow$ condition on $E_\parallel$. $\nabla \wedge {\bf H}$ equation $\rightarrow$ condition on $H_\parallel$. The first two arguments involve a cylinder-shaped Gaussian surface. The second two arguments involve an integral around a loop hugging the boundary.
A complete argument will include the time-derivative terms and show that they are negligible in the limit where the Gaussian surface or the loop closely hugs the boundary. A complete argument will also include the surface conduction current in the $H_\parallel$ result, and the surface density of free charge in the $D_\perp$ result. Once you have the above then you can get the equations for things like $E_\perp$, $D_\parallel$, $B_\parallel$ and $H_\perp$ simply by using the definitions of $\bf D$ and $\bf H$ in terms of ${\bf E}$, ${\bf P}$, ${\bf B}$, ${\bf M}$.
TITLE: How to evaluate limiting value of sums of a specific type QUESTION [2 upvotes]: We know that if $f$ is integrable on $(0,1)$ then $$ \lim_{n \to \infty} \frac{1}{n}\sum_{k=1}^{n}f(k/n) = \int_{0}^{1}f(x)dx. $$ Recently I found the following sum $$ \lim_{n \to \infty} \sum_{k=1}^{n}\frac{k}{n^2 + k^2} = \frac{\ln 2}{2}. $$ This sum cannot be expressed in the form $\lim_{n \to \infty} \frac{1}{n}\sum_{k=1}^{n}f(k/n)$. Rather it is of the form $$ \lim_{n \to \infty} \frac{1}{n^2}\sum_{k=1}^{n} k f(k/n). $$ Part 1 Is there any technique to evaluate limits of this form? Part 2 Generalizing the question one step further, how can we find the asymptotics of summations of the form $$ \sum_{k=1}^{n} f(k)g\Big(\frac{k}{n}\Big). $$ REPLY [1 votes]: $$\lim_{n\rightarrow \infty }\sum_{k=1}^{n}\frac{k}{n^2+k^2}=\lim_{n\rightarrow \infty }\sum_{k=1}^{n}\frac{1}{n}\frac{nk}{n^2+k^2}$$ $$\lim_{n\rightarrow \infty }\sum_{k=1}^{n}\frac{1}{n}\frac{nk}{n^2+k^2}=\lim_{n\rightarrow \infty }\sum_{k=1}^{n}\frac{1}{n}\frac{\frac{k}{n}}{1+(\frac{k}{n})^2}$$ so $$f(\frac{k}{n})=\frac{\frac{k}{n}}{1+(\frac{k}{n})^2}$$ so $$f(x)=\frac{x}{1+x^2}$$ and $$\lim_{n\rightarrow \infty }\sum_{k=1}^{n}\frac{1}{n}\frac{nk}{n^2+k^2}=\lim_{n\rightarrow \infty }\frac{1}{n}\sum_{k=1}^{n}\frac{nk}{n^2+k^2}=\int_{0}^{1}\frac{x}{1+x^2}dx=\frac{\ln 2}{2}$$
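A quick numerical sanity check of the answer's computation (added here for illustration, not part of the original post):

```python
# Numerical check that sum_{k=1}^n k/(n^2 + k^2) -> (ln 2)/2 as n -> infinity.
import math

def partial_sum(n):
    return sum(k / (n * n + k * k) for k in range(1, n + 1))

target = math.log(2) / 2
assert abs(partial_sum(10_000) - target) < 1e-4
# The error shrinks as n grows, consistent with a Riemann sum of x/(1+x^2)
assert abs(partial_sum(20_000) - target) < abs(partial_sum(10_000) - target)
```

The observed error is roughly $1/(4n)$, the usual right-endpoint Riemann-sum error for the increasing integrand $x/(1+x^2)$ on $[0,1]$.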
TITLE: Show that a given set is a smooth submanifold. QUESTION [1 upvotes]: This is Exercise 2.1.14 of ``Lectures on the Geometry of Manifolds." Let $Z=\{(x,a,b,c) \in \mathbb{R}^{4}: a \neq 0, ax^2+bx+c=0 \}$. Show that $Z$ is a smooth submanifold of $\mathbb{R}^{4}.$ Clearly, if we just consider $Z' = \{(x,a,b,c) \in \mathbb{R}^{4}: ax^2+bx+c=0 \}$, then by letting $f:(x,a,b,c) \mapsto ax^{2}+bx+c$, which is clearly smooth and has $0$ as a regular value (because $\frac{\partial f}{\partial c}=1$), we know that $Z'$ is an embedded smooth submanifold of codimension 1. However, in the case of $Z$, if we construct the similar map $F:(x,a,b,c) \mapsto (a,ax^{2}+bx+c)$, then we cannot apply the regular value theorem, since $Z$ is not a level set of $F$. Moreover, $U =\{(x,y)\in \mathbb{R}^{2}: x\neq 0, y=0 \}$ is neither a closed set nor an open set. Now I'm stuck. Could you give any hint for solving this exercise? REPLY [1 votes]: If you argued that $Z'$ is a smooth submanifold (which happens to be a closed subset of $\Bbb R^4$), then you're just observing that $Z$ is an open subset of $Z'$, which immediately makes it a smooth submanifold.
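For completeness, the regular-value computation for $f$ can be checked symbolically; this small sketch (added here for illustration) uses sympy:

```python
# Check that 0 is a regular value of f(x, a, b, c) = a x^2 + b x + c,
# so Z' = f^{-1}(0) is a codimension-1 submanifold of R^4; Z is then the
# open subset {a != 0} of Z'.
import sympy as sp

x, a, b, c = sp.symbols('x a b c')
f = a * x**2 + b * x + c

grad = [sp.diff(f, v) for v in (x, a, b, c)]
assert grad == [2 * a * x + b, x**2, x, 1]
# The last component of the gradient is identically 1, so the gradient
# never vanishes on f^{-1}(0): every point of Z' is a regular point.
assert grad[-1] == 1
```

Since $a$ is a continuous coordinate on $Z'$, the condition $a \neq 0$ cuts out an open subset, which is the content of the reply.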
\begin{document} \title{Strong stationarity conditions for a class of optimization problems governed by variational inequalities of the 2nd kind} \date{\today} \author{J.~C.~De Los Reyes\footnotemark[3] \and C.~Meyer\footnotemark[4]} \renewcommand{\thefootnote}{\fnsymbol{footnote}} \footnotetext[3]{Research Center on Mathematical Modelling (MODEMAT), Escuela Politécnica Nacional, Quito-Ecuador} \footnotetext[4]{Faculty of Mathematics, Technische Universit\"at Dortmund, Dortmund-Germany.} \renewcommand{\thefootnote}{\arabic{footnote}} \maketitle \begin{abstract} We investigate optimality conditions for optimization problems constrained by a class of variational inequalities of the second kind. Based on a nonsmooth primal-dual reformulation of the governing inequality, the differentiability of the solution map is studied. Directional differentiability is proved both for finite-dimensional and function space problems, under suitable assumptions on the active set. A characterization of B- and strong stationary optimal solutions is obtained thereafter. Finally, based on the obtained first-order information, a trust-region algorithm is proposed for the solution of the optimization problems. \end{abstract} \begin{keywords} Variational inequalities, optimality conditions, mathematical programs with equilibrium constraints. \end{keywords} \section{Introduction} Optimization problems with variational inequality constraints have been intensively investigated in the last years with many important applications in focus. Problems in contact mechanics, phase separation or elastoplasticity are some of the most relevant application examples. Special analytical and numerical techniques have been developed for characterizing and finding optima of such problems, mainly in the finite-dimensional case (see \cite{LuoPangRalph} and references therein). 
In the function space framework much of the work has been devoted to optimization problems constrained by variational inequalities of the first kind: \begin{subequations} \begin{align} \min ~& j(y,u)\\ \text{subject to: }& (Ay,v-y) \geq (u, v-y), \text{ for all } v \in K, \end{align} \end{subequations} where $A : V \to V^*$ is an elliptic operator and $K \subset V$ is a closed convex set. This obstacle-type structure has made it possible to develop an analytical machinery for such problems. In addition, different types of stationarity concepts have been investigated in that framework (\emph{C-}, \emph{B-}, \emph{M-} and strong stationary points). The utilized proof techniques include regularization approaches as well as differentiability properties (directional, conic) of the solution map or elements of set valued analysis (see e.g. \cite{Mignot1976, MignotPuel1984, Barbu1993, Bergounioux1998, hintermuller2009mathematical, OutrataJarusekStara2011, KunischWachsmuth2012a, KunischWachsmuth2012b, HerzogMeyerWachsmuth, SchielaWachsmuth2011, HMS13}). For problems involving variational inequalities of the second kind: \begin{subequations} \begin{align} \min ~& j(y,u)\\ \text{subject to: }& (Ay,v-y) +\varphi(v)- \varphi(y) \geq (u, v-y), \text{ for all } v \in V, \end{align} \end{subequations} with $\varphi$ continuous and convex, only weak results have been obtained in the past, due to the very general structure (see e.g. \cite{Barbu1993,Bergounioux1998,BoCa2,outrata2000}). In \cite{Delosreyes2009} a special class of problems was investigated, where a richer structure of the nondifferentiability was exploited. Nonsmooth terms of the type $\varphi (y)=\int_S |B y|~ds$ were considered there and, by using a tailored regularization approach, a more detailed optimality system was obtained. The results were then extended to problems in fluid mechanics \cite{dlRe2010}, image processing \cite{dlReSchoen2013} and elastoplasticity \cite{dlRHM13}.
Thanks to the availability of primal and dual formulations in elastoplasticity, the kind of optimality systems obtained in \cite{Delosreyes2009} were proved to be equivalent to C-stationary optimality systems in optimization problems constrained by variational inequalities of the first kind, see \cite{dlRHM13}. In this paper we aim to characterize further stationary points by investigating differentiability properties of the solution map. In that spirit B- and strong stationarity conditions are in focus. To avoid problems related to the regularity of the variables, we start by considering the finite-dimensional case. A reformulation of the variational inequality as a nonsmooth system of primal dual equations enables us to take difference quotients and prove directional differentiability of the finite-dimensional solution operator. The technique is then extended to the function space setting. Since in this context the regularity of the functions as well as the structure of the active set play a crucial role, special functional analysis and measure theoretical methods have to be considered. As a preparatory step, the Lipschitz continuity of the solution operator from $L^p(\Omega) \to L^\infty(\Omega)$ is proved by using Stampacchia's technique. The directional differentiability of the solution map is then proved by assuming that the active set has a special structure, namely that it consists of the union of a regular subdomain of positive measure and a set of zero capacity (see Assumption \ref{assu:active} below). With the directional differentiability at hand, the characterization of B-stationarity points is carried out thereafter. The theoretical part of the paper ends with the derivation of strong stationarity conditions by an adaptation of the method of proof introduced by \cite{MignotPuel1984} for optimal control of the obstacle problem. 
In the last part of the paper the first order information related to the directional derivative is utilized within a trust-region algorithm for the solution of the VI-constrai\-ned optimization problem. The computed derivative information is treated as an inexact descent direction, which is inserted into the trust-region framework to get robust iterates. The performance of the resulting algorithm is tested on a representative test problem, showing the suitability of the approach. \section{Differentiability for a finite dimensional VI of second kind} \label{sec:finite} We start by considering the following prototypical VI in $\R^n$: \begin{equation}\label{eq:vi2} \dual{A y}{v-y} + |v|_1 - |y|_1 \geq \dual{u}{v-y} \quad \forall \, v\in \R^n. \end{equation} Throughout this section $\dual{.}{.} = \dual{.}{.}_{\R^n}$ denotes the Euclidean scalar product. Moreover, $A\in \R^{n\times n}$ is positive definite and $|v|_1 = \sum_{i=1}^n |v_i|$. Existence and uniqueness for \eqref{eq:vi2} for arbitrary right hand sides $u\in \R^n$ follows by classical arguments due to the maximal monotonicity of $A + \partial |\,.\,|_1$. \begin{definition} We denote the solution mapping associated to \eqref{eq:vi2} by $S: \R^n \ni u \mapsto y \in \R^n$. \end{definition} Next let us introduce a dual (slack) variable $q \in \R^n$ by $q := u - A y$. If we test \eqref{eq:vi2} with $v_i = 0$, $v_i = 2 y_i$, and $v_i = q_i + y_i$ and $v_j = y_j$ for all $j\neq i$, then the following complementarity-like equivalent problem is obtained: \begin{equation*} \left\{\; \begin{aligned} A y + q &= u &&\\ q_i y_i &= |y_i|, \quad && i = 1, 2, ..., n \\ |q_i| &\leq 1, \quad && i = 1, 2, ..., n, \end{aligned} \right. \end{equation*} which can be reformulated as the following system of nonsmooth equations \begin{equation}\label{eq:complsys} \left\{\; \begin{aligned} A y + q &= u &&\\ q_i y_i &= |y_i|, \quad && i = 1, 2, ..., n \\ \max\{|q_i|,1\} &= 1, \quad && i = 1, 2, ..., n. \end{aligned} \right. 
\end{equation} In order to derive a directional derivative for $S$, consider a perturbed version of \eqref{eq:vi2}, given by \begin{equation}\label{eq:pert} \begin{aligned} A y^t + q^t &= u + t\,h\\ q^t_i y^t_i &= |y^t_i|, \quad i = 1, 2, ..., n \\ \max\{|q^t_i|,1\} &= 1, \quad i = 1, 2, ..., n, \end{aligned} \end{equation} which leads to the following nonsmooth system for the difference quotient: \begin{equation}\label{eq:diffquot} \begin{aligned} A\, \frac{y^t - y}{t} + \frac{q^t - q}{t} &= h\\ \frac{q^t_i y^t_i - q_i y_i -(|y^t_i| - |y_i|)}{t} &= 0, \quad i = 1, 2, ..., n \\ \frac{\max\{|q^t_i|,1\} - \max\{|q_i|,1\}}{t} &= 0, \quad i = 1, 2, ..., n. \end{aligned} \end{equation} In the sequel, we will pass to the limit in \eqref{eq:diffquot} to obtain the relations determining the directional derivative of $S$. For this purpose we test the VI associated with \eqref{eq:pert}, given by \begin{equation}\label{eq:vi2pert} \dual{A y^t}{v-y^t} + |v|_1 - |y^t|_1 \geq \dual{u + t h}{v-y^t} \quad \forall \, v\in \R^n, \end{equation} with $v=y$. If we test \eqref{eq:vi2} with $v=y^t$ and add both inequalities, we arrive at \begin{equation*} \lambda_{\min}(A) \Big|\frac{y^t - y}{t}\Big|^2 \leq \vdual{\frac{y^t - y}{t}}{A\, \frac{y^t - y}{t}} \leq \vdual{h}{\frac{y^t - y}{t}}, \end{equation*} where $|\,.\,| = |\,.\,|_{\R^n}$ denotes the Euclidean norm and $\lambda_{\min}(A) > 0$ is the smallest eigenvalue of $A$. Thus \begin{equation*} \Big|\frac{y^t - y}{t}\Big| \leq \frac{1}{\lambda_{\min}(A)}\, |h| < \infty, \end{equation*} and so there exists a converging subsequence, w.l.o.g.\ $\left\{\frac{y^t - y}{t}\right\}_{t>0}$ itself, such that \begin{equation}\label{eq:convy} \frac{y^t - y}{t} \stackrel{t \searrow 0}{\longrightarrow} \eta. \end{equation} In Theorem \ref{thm:rablfinite} below we will see that the limit $\eta$ is unique so that the whole sequence $\{(y^t - y)/t\}$ converges. This justifies assuming the convergence of the whole sequence right from the beginning.
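As an aside (ours, for illustration only, and not part of the original development): for \emph{symmetric} positive definite $A$, the VI \eqref{eq:vi2} is precisely the first-order optimality condition of the convex problem $\min_y \tfrac12 \dual{Ay}{y} - \dual{u}{y} + |y|_1$, so $y = S(u)$ can be computed by a standard proximal gradient (soft-thresholding) iteration and the complementarity system \eqref{eq:complsys} verified a posteriori; the test data, step size and iteration count below are ad hoc choices.

```python
# Numerical sketch: solve the VI for symmetric positive definite A via
# proximal gradient (ISTA) and check the complementarity system
#   q = u - A y,   q_i y_i = |y_i|,   |q_i| <= 1.
import numpy as np

def solve_vi(A, u, iters=5000):
    t = 1.0 / np.linalg.norm(A, 2)          # step size 1 / lambda_max(A)
    y = np.zeros_like(u)
    for _ in range(iters):
        z = y - t * (A @ y - u)
        y = np.sign(z) * np.maximum(np.abs(z) - t, 0.0)   # prox of t*|.|_1
    return y

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)                 # symmetric positive definite
u = 2 * rng.standard_normal(5)

y = solve_vi(A, u)
q = u - A @ y                               # dual slack variable
assert np.all(np.abs(q) <= 1 + 1e-6)        # |q_i| <= 1
assert np.allclose(q * y, np.abs(y), atol=1e-6)   # q_i y_i = |y_i|
```

For nonsymmetric $A$ the VI is no longer an optimization problem, but the complementarity system itself remains valid and can still be used as an a posteriori check.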
By definition of $q$ we have \begin{equation}\label{eq:convq} \frac{q^t - q}{t} = h - A \, \frac{y^t - y}{t} \stackrel{t \searrow 0}{\longrightarrow} h - A \eta =: \lambda, \end{equation} which in particular implies $q^t \to q$. \begin{lemma}\label{lem:rablabs} For all $i=1, 2, ..., n$ there holds \begin{equation} \frac{q^t_i y^t_i - q_i y_i -(|y^t_i| - |y_i|)}{t} \stackrel{t \searrow 0}{\longrightarrow} \lambda_i y_i + q_i \eta_i - f_i(\eta_i) \end{equation} with \begin{equation*} f_i: \R\to \R, \quad f_i(x) := \begin{cases} \sign(y_i)x, & y_i \neq 0,\\ |x|, & y_i = 0. \end{cases} \end{equation*} \end{lemma} \begin{proof} We start by estimating \begin{equation*} \begin{aligned} &\Big|\frac{q^t_i y^t_i - q_i y_i -(|y^t_i| - |y_i|)}{t} - \lambda_i y_i - q_i \eta_i + f_i(\eta_i)\Big|\\ &\quad \leq \begin{aligned}[t] \Big|\Big(\frac{q^t_i - q_i}{t} - \lambda_i\Big) y_i\Big| &+ \Big|q^t_i\,\frac{y^t_i - y_i}{t} - q_i \eta_i \Big|\\ &+ \Big|\frac{|y^t_i| - |y_i + t\eta_i|}{t}\Big| + \Big| \frac{|y_i + t\eta_i| - |y_i|}{t} - f_i(\eta_i)\Big|. \end{aligned} \end{aligned} \end{equation*} Because of \eqref{eq:convy}, \eqref{eq:convq}, and $q^t \to q$, the first two terms converge to zero. Moreover, due to \eqref{eq:convy}, it holds \begin{equation*} \frac{y^t_i - y_i}{t} = \eta_i + o(1) \end{equation*} and thus the reverse triangle inequality gives \begin{equation*} \Big|\frac{|y^t_i| - |y_i + t\eta_i|}{t}\Big| \leq \Big|\frac{y^t_i - y_i - t\eta_i}{t}\Big| \stackrel{t\searrow 0}{\longrightarrow} 0. \end{equation*} Moreover, $f_i$ is just the directional derivative of $|\,.\,|: \R \to \R$ at $y_i$ so that \begin{equation*} \frac{|y_i + t\eta_i| - |y_i|}{t} - f_i(\eta_i) \stackrel{t\searrow 0}{\longrightarrow} 0. \end{equation*} Altogether this implies the assertion.
\end{proof} \begin{lemma}\label{lem:rablmax} The function $g: \R \to \R$, $g(x) = \max\{|x|, 1\}$ is directionally differentiable with \begin{equation}\label{eq:rablmax} g'(x;h) = \begin{cases} 0, & |x| < 1\\ \sign(x)h, & |x| > 1\\ \max\{0, x h\}, & |x| = 1. \end{cases} \end{equation} \end{lemma} \begin{proof} Let us define \begin{equation*} B_t := \frac{\max\{|x+ th|, 1\} - \max\{|x|,1\}}{t}. \end{equation*} If $|x| < 1$, then $|x+th| < 1$ and thus $B_t = 0$ for sufficiently small $t>0$. If $|x| > 1$, then $|x+th| > 1$ for sufficiently small $t>0$ and thus \begin{equation*} B_t = \frac{|x + th| - |x|}{t} \stackrel{t\searrow 0}{\longrightarrow} \sign(x)h. \end{equation*} If $|x| = 1$, then a simple distinction of cases shows that for $t> 0$ sufficiently small \begin{equation*} B_t = \begin{cases} 0, & h x < 0\\ \sign(x) h, & h x \geq 0. \end{cases} \end{equation*} The right hand side is equivalent to $\max\{0,x h\}$ as we will see in the following. This is clear for $h x < 0$. If $h x = 0$, then $h = 0$ (since $|x| = 1$) and thus $\max\{0,x h\} = 0 = \sign(x) h$. If $h x > 0$, then \begin{equation*} \max\{0,x h\} = x h = |x| |h| = |h| = \sign(h) h = \sign(x) h. \end{equation*} All in all we have proven the assertion. \end{proof} \begin{lemma}\label{lem:rablmax2} For every $i=1, 2, ..., n$ there holds \begin{equation*} \frac{\max\{|q^t_i|,1\} - \max\{|q_i|,1\}}{t} \stackrel{t\searrow 0}{\longrightarrow} g'(q_i;\lambda_i) \end{equation*} with $g'$ as defined in \eqref{eq:rablmax}. \end{lemma} \begin{proof} The proof is similar to the one of Lemma \ref{lem:rablabs}. We estimate \begin{multline*} \Big|\frac{\max\{|q^t_i|,1\} - \max\{|q_i|,1\}}{t} - g'(q_i;\lambda_i)\Big| \\ \leq \Big|\frac{\max\{|q^t_i|,1\} - \max\{|q_i + t \lambda_i|,1\}}{t}\Big| \\ + \Big|\frac{\max\{|q_i + t \lambda_i|,1\} - \max\{|q_i|,1\}}{t} - g'(q_i;\lambda_i)\Big|. \end{multline*} The second addend tends to zero due to Lemma \ref{lem:rablmax}.
Moreover, by \eqref{eq:convq} we have $(q_i^t - q_i)/t = \lambda_i + o(1)$ and thus we find for the first addend, by employing the Lipschitz continuity of $x\mapsto \max\{1,x\}$ and again the reverse triangle inequality, \begin{equation*} \Big|\frac{\max\{|q^t_i|,1\} - \max\{|q_i + t \lambda_i|,1\}}{t}\Big| \leq \Big|\frac{|q^t_i| - |q_i + t \lambda_i|}{t}\Big| \leq \Big| \frac{q^t_i - q_i - t \lambda_i}{t} \Big| \stackrel{t\searrow 0}{\longrightarrow} 0, \end{equation*} which gives the assertion. \end{proof} In view of \eqref{eq:convy}, \eqref{eq:convq}, and Lemmas \ref{lem:rablabs} and \ref{lem:rablmax2}, we can pass to the limit as $t\searrow 0$ in \eqref{eq:diffquot} and obtain in this way: \begin{subequations}\label{eq:rablcomplsys} \begin{align} A \eta + \lambda &= h\\ \lambda_i y_i + q_i \eta_i &= \begin{cases} \sign(y_i)\eta_i, & y_i \neq 0,\\ |\eta_i|, & y_i = 0, \end{cases} \qquad i = 1, 2, ..., n \label{eq:rablb}\\ \max\{0,q_i\lambda_i\} &= 0 \quad \text{for all } i \in \{1, ..., n\} \text{ with } |q_i| = 1. \label{eq:rablc} \end{align} \end{subequations} (Note that the case $|q_i|> 1$ cannot occur.) The system \eqref{eq:rablcomplsys} will lead to a VI satisfied by the limit $\eta$. To see this, we have to reformulate \eqref{eq:rablcomplsys} in the following way: \begin{lemma} The system \eqref{eq:rablcomplsys} is equivalent to \begin{subequations}\label{eq:rablcomplsys2} \begin{align} A \eta + \lambda &= h \label{eq:rabl2a}\\ \lambda_i &= 0 \quad \text{for all } i \in \{1, ..., n\} \text{ with } y_i \neq 0 \label{eq:rabl2b}\\ \eta_i &= 0 \quad \text{for all } i \in \{1, ..., n\} \text{ with } |q_i| < 1 \label{eq:rabl2c}\\ \eta_i q_i &\geq 0 \quad \text{for all } i \in \{1, ..., n\} \text{ with } y_i = 0,\; |q_i| = 1 \label{eq:rabl2d}\\ \lambda_i q_i &\leq 0 \quad \text{for all } i \in \{1, ..., n\} \text{ with } y_i = 0,\; |q_i| = 1.
\label{eq:rabl2e} \end{align} \end{subequations} \end{lemma} \begin{proof} $\eqref{eq:rablcomplsys} \Rightarrow \eqref{eq:rablcomplsys2}$:\\ It is evident that \begin{equation}\label{eq:maxequiv} \max\{0,q_i\lambda_i\} = 0 \text{ if } |q_i| = 1 \quad \Longleftrightarrow \quad q_i\lambda_i \leq 0 \text{ if } |q_i| = 1, \end{equation} which implies \eqref{eq:rabl2e}. Next, let $i\in \{1, ..., n\}$ such that $y_i \neq 0$. Then \begin{equation*} q_i = \frac{y_i}{|y_i|} = \sign(y_i), \end{equation*} and hence \eqref{eq:rablb} yields $\lambda_i y_i = 0$, which in turn gives \eqref{eq:rabl2b} due to $y_i \neq 0$. Now let $i\in \{1, ..., n\}$ with $|q_i| < 1$ be arbitrary. Then we have $y_i = 0$ by the slackness condition, and hence \eqref{eq:rablb} implies $q_i \eta_i = |\eta_i|$. Because of $|q_i| < 1$ this results in \eqref{eq:rabl2c}. To show \eqref{eq:rabl2d}, let $i\in \{1, ..., n\}$ with $y_i = 0$ and $|q_i| = 1$ be arbitrary. Then \eqref{eq:rablb} gives $q_i \eta_i = |\eta_i| \geq 0$. $\eqref{eq:rablcomplsys2} \Rightarrow \eqref{eq:rablcomplsys}$:\\ Due to \eqref{eq:rabl2b} and \eqref{eq:rabl2e} we have $\lambda_i q_i \leq 0$ whenever $|q_i| = 1$, which, in view of \eqref{eq:maxequiv}, implies \eqref{eq:rablc}. Because of \eqref{eq:rabl2b}, we have \begin{equation}\label{eq:rablequiv1} \lambda_i y_i + \eta_i q_i = \eta_i q_i \quad \forall \, i = 1, ..., n. \end{equation} Now, if $y_i \neq 0$, then $q_i = \sign(y_i)$ and thus $\eta_i q_i = \sign(y_i) \eta_i$. If $y_i = 0$ and $|q_i| < 1$, then, by \eqref{eq:rabl2c}, we obtain $\eta_i q_i = 0 = |\eta_i|$. If finally $y_i = 0$ and $|q_i| = 1$, then \eqref{eq:rabl2d} implies $\eta_i q_i = |\eta_i||q_i| = |\eta_i|$. In summary, \eqref{eq:rablb} is verified, which yields the assertion. \end{proof} System \eqref{eq:rablcomplsys2} is not yet complete, since there is still one relation missing to derive the VI fulfilled by $\eta$. The missing part is stated in the following lemma.
\begin{lemma}\label{lem:etalam} There holds \begin{equation*} \eta_i \lambda_i = 0 \quad \text{for all } i \in \{1, ..., n\} \text{ with } y_i = 0,\; |q_i| = 1. \end{equation*} \end{lemma} \begin{proof} Let $i\in \{1, ..., n\}$ with $y_i = 0$ and $|q_i| = 1$ be arbitrary. W.l.o.g.\ we assume that $q_i = 1$. The case $q_i = -1$ can be discussed analogously. If $\eta_i = 0$, the assertion is trivially fulfilled. So let $\eta_i \neq 0$. By \eqref{eq:rabl2d} and $q_i = 1$ we then have $\eta_i > 0$. Due to \eqref{eq:convy} this implies \begin{equation*} \frac{y_i^t - y_i}{t} > 0 \quad \text{for } t > 0 \text{ sufficiently small} \end{equation*} and thus, due to $y_i = 0$, \begin{equation*} y_i^t > 0 \quad \text{for } t > 0 \text{ sufficiently small.} \end{equation*} Consequently, $q_i^t = \sign(y^t_i) = 1$ for $t>0$ sufficiently small and hence, since $q_i = 1$ by assumption, \begin{equation*} \lambda_i = \lim_{t\searrow 0} \frac{q^t_i - q_i}{t} = 0, \end{equation*} which gives the assertion. \end{proof} Now we have everything at hand to prove the main result of this section, i.e., the directional differentiability of $S: u\mapsto y$. \begin{theorem}\label{thm:rablfinite} The solution mapping $S$ of \eqref{eq:vi2} is directionally differentiable at every point $u\in \R^n$ and the directional derivative $\eta = S'(u;h)$ in direction $h\in \R^n$ solves the following \emph{VI of first kind}: \begin{equation}\label{eq:vi1} \eta \in K(y), \quad \dual{A \eta}{v - \eta} \geq \dual{h}{v-\eta} \quad \forall\, v\in K(y) \end{equation} where $K(y)$ is the convex cone defined by \begin{equation}\label{eq:conefinite} K(y) := \{ v\in \R^n: v_i = 0 \text{ if } |q_i| < 1,\; v_i q_i \geq 0 \text{ if } y_i = 0, \, |q_i| = 1 \}. \end{equation} \end{theorem} \begin{proof} Define the biactive set by \begin{equation*} \BB := \{i \in \{1, ..., n\} : y_i = 0,\; |q_i| = 1\}. \end{equation*} First we show that the limit $\eta$ solves \eqref{eq:vi1}.
We already know that $\eta$ satisfies \eqref{eq:rablcomplsys2} and in addition $\eta_i \lambda_i = 0$ if $y_i = 0$ and $|q_i| = 1$. Thus \eqref{eq:rabl2c} and \eqref{eq:rabl2d} imply $\eta \in K(y)$, i.e., feasibility of $\eta$. Now let $v\in K(y)$ be arbitrary. Then \eqref{eq:rabl2b}, $v\in K(y)$, and \eqref{eq:rabl2e} yield \begin{equation}\label{eq:lambdasign} \dual{\lambda}{v} = \sum_{i\in\BB} \lambda_i v_i = \sum_{i\in\BB} \lambda_i \underbrace{q_i q_i}_{=1} v_i \leq 0. \end{equation} Similarly, we infer from \eqref{eq:rabl2b}, $\eta \in K(y)$, and Lemma \ref{lem:etalam} that \begin{equation*} \dual{\lambda}{\eta} = \sum_{i\in\BB} \lambda_i \eta_i = 0. \end{equation*} Therefore, if we test \eqref{eq:rabl2a} with $v-\eta$, then we arrive at \begin{equation*} \begin{aligned} \dual{h}{v-\eta} = \dual{A\eta}{v-\eta} + \dual{\lambda}{v} - \dual{\lambda}{\eta} \leq \dual{A\eta}{v-\eta}, \end{aligned} \end{equation*} so that the limit $\eta$ indeed solves \eqref{eq:vi1}. Since $A$ is positive definite and $K(y)$ is convex and closed, the operator $A + \partial I_{K(y)}(.): \R^n \to 2^{\R^n}$ is maximal monotone, where $I_{K(y)}$ denotes the indicator function of the set $K(y)$. Thus there is a unique solution of \eqref{eq:vi1}. Since every accumulation point of the difference quotient $(y^t-y)/t$ solves \eqref{eq:vi1}, the accumulation point is unique and consequently a well-known argument gives the convergence of the whole difference quotient as $t\searrow 0$. \end{proof} \begin{corollary} Let the biactive set have zero cardinality, i.e.\ $y_i = 0$ implies $|q_i| < 1$.
Then $S$ is G\^ateaux-differentiable, i.e.\ $S'(u;h)$ is linear and continuous w.r.t.\ $h$, and $\eta = S'(u)h$ is given by the unique solution of the following linear system: \begin{align} \eta_i &= 0 \quad \text{for all } i\in\{1, ..., n\} \text{ with } y_i = 0 \label{eq:etanull}\\ \sum_{j: y_j \not = 0} A_{ij} \eta_j &= h_i \quad \text{for all } i\in\{1, ..., n\} \text{ with } y_i \neq 0.\label{eq:rableq} \end{align} \end{corollary} \begin{proof} If the biactive set has zero cardinality, then \eqref{eq:rabl2c} implies \eqref{eq:etanull}. Moreover, \eqref{eq:rabl2a} and \eqref{eq:rabl2b} immediately yield \eqref{eq:rableq}. Since $A$ is positive definite, the same holds for $A_\II := (A_{ij})_{i,j\in\II}$ with $\II := \{ i \in \{1, ..., n\}: y_i \neq 0\}$. Thus $A_\II$ is invertible and $\eta_\II = A_\II^{-1} h_\II$. Together with \eqref{eq:etanull}, i.e.\ $\eta_{\{1, ..., n\}\setminus\II} = 0$, this implies that $\eta$ is uniquely determined by \eqref{eq:etanull} and \eqref{eq:rableq}. Moreover, due to the invertibility of $A_\II$, $\eta$ depends linearly and continuously on $h$ as claimed. \end{proof} \section{Weak differentiability for a VI of second kind in function space} Next we extend the result of the preceding section to a VI of second kind in function space. For this purpose, let $\Omega \subset \mathbb R^d$, $d\geq 1$, be a bounded domain with regular boundary satisfying the cone condition. We consider the following prototypical VI of second kind: \begin{equation}\tag{VI2}\label{eq:vi} \dual{A y}{v-y} + \int_\Omega |v|\,dx - \int_\Omega |y|\,dx \geq \dual{u}{v-y} \quad \forall \, v\in V, \end{equation} where we abbreviated $V := H^1_0(\Omega)$. From now on $\dual{.}{.}$ denotes the dual pairing in $V$.
Furthermore $A: V \to V^*$ stands for the following linear second-order elliptic differential operator: \begin{equation}\label{eq:diffop} A y = -\sum_{i=1}^d \ddp{}{x_i}\Big(\sum_{j=1}^d a_{ij}\, \ddp{y}{x_j}\Big) + \sum_{i=1}^d b_i\,\ddp{y}{x_i} + \gamma\,y, \end{equation} where $a_{ij}, b_i, \gamma \in L^\infty(\Omega)$, $i,j=1, ..., d$, are such that $A$ is coercive, i.e. \begin{equation}\label{eq:coer} \dual{A y}{y} \geq \alpha \, \|y\|_{V}^2 \quad \forall\, y\in V, \end{equation} with a constant $\alpha > 0$. In addition, we require \begin{equation}\label{eq:gammasign} \gamma \geq 0. \end{equation} Moreover, $u \in V^*$ is a given inhomogeneity. The plan of this section is as follows. First we state some well known results for \eqref{eq:vi} concerning existence, uniqueness, and an equivalent reformulation by means of a complementarity-like system. Then we introduce a perturbed problem, similar to \eqref{eq:pert}, and derive several auxiliary results for the associated difference quotients and their (weak) limits. In order to show an infinite dimensional analogue to \eqref{eq:rabl2b}, we unfortunately need to assume some properties of the active set, see Assumption \ref{assu:active} below. Based on this assumption, we can derive a weak directional differentiability result, similar to Theorem \ref{thm:rablfinite} (see Theorem \ref{thm:ablvi} below). \begin{lemma}\label{lem:lipschitz} For every $u\in V^*$ there exists a unique solution $y\in V$ of \eqref{eq:vi}, which we denote by $y = S(u)$. The associated solution operator $S: V^* \to V$ is globally Lipschitz continuous, i.e., there exists a constant $L > 0$ such that \begin{equation} \|S(u_1) - S(u_2)\|_V \leq L \, \|u_1 - u_2\|_{V^*} \quad \forall \, u_1, u_2 \in V^*. \end{equation} \end{lemma} \begin{proof} Existence and uniqueness for \eqref{eq:vi} follow by standard arguments from the maximal monotonicity of $A + \partial \|.\|_{L^1(\Omega)}$, see for instance \cite{Barbu1993}.
To prove the Lipschitz continuity we test the VI for $y_1 = S(u_1)$ with $y_2 = S(u_2)$ and vice versa and add the arising inequalities to obtain \begin{equation*} \dual{A(y_1 - y_2)}{y_1 - y_2} \leq \dual{u_1 - u_2}{y_1 - y_2}. \end{equation*} The coercivity of $A$ then yields the result. \end{proof} \begin{remark} Sometimes we will use $S$ with different domains and ranges, which may be inferred from the context. \end{remark} By standard arguments based on Fenchel duality or the Hahn-Banach theorem, the VI in \eqref{eq:vi} can be rewritten in terms of a complementarity-like system, see e.g. \cite{Delosreyes2009}. In this way the following result is obtained: \begin{lemma}\label{lem:slack} For every $u\in V^*$ there exists a unique function $q\in L^2(\Omega)$ such that the unique solution $y \in V$ of \eqref{eq:vi} fulfills the following complementarity-like system: \begin{subequations} \begin{gather} \dual{A y}{v} + \int_\Omega q\, v\,dx = \dual{u}{v} \quad \forall \, v\in V \label{eq:qdef}\\ q(x) y(x) = |y(x)|,\quad |q(x)| \leq 1 \quad \text{a.e.\ in } \Omega.\label{eq:slacklike} \end{gather} \end{subequations} The function $q$ is called the slack function, and we will refer to \eqref{eq:slacklike} as the slackness condition in the sequel. \end{lemma} Next let $h\in V^*$ be arbitrary and $\{t_n\} \subset \R^+$ be an arbitrary sequence of positive numbers tending to 0. We denote the solutions to the VI associated to $u+t_n h$ by $y_n$, i.e., \begin{equation}\label{eq:ynvi} \dual{A y_n}{v-y_n} + \int_\Omega |v|\,dx - \int_\Omega |y_n|\,dx \geq \dual{u+t_n h}{v-y_n} \quad \forall \, v\in V. \end{equation} The associated slack function is analogously denoted by $q_n \in L^2(\Omega)$, i.e. \begin{equation}\label{eq:qn} \begin{gathered} \dual{A y_n}{v} + \int_\Omega q_n\, v\,dx = \dual{u + t_n h}{v} \quad \forall \, v\in V, \\ q_n(x) y_n(x) = |y_n(x)|,\quad |q_n(x)| \leq 1 \quad \text{a.e.\ in } \Omega.
\end{gathered} \end{equation} By Lemma \ref{lem:lipschitz} it holds \begin{equation*} \Big\| \frac{y_n - y}{t_n}\Big\|_V \leq L\,\|h\|_{V^*} \end{equation*} and thus there is a weakly convergent subsequence, denoted by the same symbol, and a limit point $\eta \in V$ such that \begin{equation}\label{eq:weakV} \frac{y_n - y}{t_n} \weak \eta \text{ in } V. \end{equation} This simplification of notation will be justified by the uniqueness of the weak limit $\eta$, which implies the weak convergence of the whole sequence by a well-known argument (see Theorem \ref{thm:ablvi} below). For the slack functions we obtain \begin{equation*} \int_\Omega \frac{q_n - q}{t_n} \, v \, dx = \dual{h}{v} - \Bigdual{A \,\frac{y_n - y}{t_n}}{v} \to \dual{h - A\eta}{v} \quad \forall\, v\in V, \end{equation*} i.e., \begin{equation*} \frac{q_n - q}{t_n} \weak \lambda \text{ in } V^*, \end{equation*} with $\lambda = h - A\eta$. Note that it is in general not possible to show the boundedness of $(q_n - q)/t_n$ in any Lebesgue space so that one cannot expect $\lambda$ to be more regular. Next consider the first equation in the slackness condition \eqref{eq:slacklike} for $y$ and $y_n$. Multiplying these equations by $\varphi/t_n$ with an arbitrary $\varphi\in C^\infty_0(\Omega)$, integrating over $\Omega$, and taking the difference, we arrive at \begin{equation}\label{eq:slackdiff} \int_\Omega \frac{q_n - q}{t_n}\, y_n\, \varphi\, dx + \int_\Omega \frac{y_n - y}{t_n}\, q \, \varphi\, dx = \int_\Omega \frac{|y_n| - |y|}{t_n}\,\varphi \, dx, \quad \forall\, \varphi \in C^\infty_0(\Omega).
\end{equation} In order to pass to the limit in this relation, we have to define the following sets: \begin{definition}\label{def:sets} We define --up to sets of zero measure-- \begin{equation}\label{eq:defsets} \begin{aligned} \AA &:= \{x\in\Omega : y(x) = 0\}, & \AA_s &:= \{x\in \Omega: |q(x)| < 1\}\\ \II &:= \{x\in \Omega: y(x) \neq 0\}, & \BB &:= \{x\in \Omega: |q(x)| = 1, \; y(x) = 0\}\\ \II^+ &:= \{x\in \Omega: y(x) > 0\}, & \II^- &:= \{x\in \Omega: y(x) < 0\}\\ \BB^+ &:= \{x\in \Omega: q(x) = 1, \; y(x) = 0\}, & \BB^- &:= \{x\in \Omega: q(x) = -1, \; y(x) = 0\}. \end{aligned} \end{equation} The set $\AA$ is called \emph{active set}, while $\AA_s$ is the \emph{strongly active set}. Moreover, we call $\II$ and $\BB$ \emph{inactive} and \emph{biactive set}, respectively. \end{definition} Note that \begin{equation*} \Omega = \AA \cup \II \quad \text{and} \quad \AA = \AA_s \cup \BB, \end{equation*} due to \eqref{eq:slacklike}. The next lemma covers the directional differentiability of the $L^1$-norm. Its proof is straightforward and therefore postponed to Appendix \ref{sec:l1deriv}. \begin{lemma}\label{lem:l1deriv} For every $\varphi \in C^\infty_0(\Omega)$ it holds \begin{equation*} \int_\Omega \frac{|y_n| - |y|}{t_n}\,\varphi \, dx \to \int_\Omega \absop'(y;\eta)\,\varphi\, dx \end{equation*} with \begin{equation*} \absop'(y;\eta) \in L^2(\Omega), \quad \absop'(y;\eta)(x) := \begin{cases} \sign\big(y(x)\big) \eta(x), & y(x) \neq 0\\ |\eta(x)|, & y(x) = 0. \end{cases} \end{equation*} \end{lemma} Together with Lemma \ref{lem:l1deriv}, the weak convergence of $(q_n-q)/t_n$ in $V^*$ and of $(y_n - y)/t_n$ in $V$ and the strong convergence of $y_n$ to $y$ in $V$ allow us to pass to the limit in \eqref{eq:slackdiff}, which results in \begin{equation}\label{eq:slackabl} \dual{\lambda}{y\,\varphi} + \int_\Omega \eta\, q \, \varphi\, dx = \int_\Omega \absop'(y;\eta)\,\varphi\,dx \quad \forall \, \varphi \in C^\infty_0(\Omega).
\end{equation} Using this relation, we can prove the following result, which is just the infinite dimensional counterpart to \eqref{eq:rabl2c} and \eqref{eq:rabl2d}: \begin{lemma}\label{lem:etafeas} There holds \begin{align} \eta(x) &= 0 \quad \text{a.e., where } |q(x)| < 1 \label{eq:etaAs}\\ \eta(x)\, q(x) &\geq 0 \quad \text{a.e., where } |q(x)| = 1 \text{ and } y(x) = 0.\label{eq:etaB} \end{align} \end{lemma} \begin{proof} Let $\varphi \in C^\infty_0(\Omega)$ with $\varphi \geq 0$ a.e.\ in $\Omega$ be arbitrary. The slackness condition \eqref{eq:slacklike} implies for all $n\in \N$ that \begin{equation*} \frac{q_n(x) - q(x)}{t_n}\, y(x) \leq 0 \quad \text{a.e.\ in }\Omega. \end{equation*} Therefore we have \begin{equation*} \dual{\lambda}{y\, \varphi} = \lim_{n\to \infty} \int_\Omega \frac{q_n - q}{t_n}\, y\, \varphi \, dx \leq 0, \end{equation*} and thus \eqref{eq:slackabl} yields \begin{equation*} \int_\Omega \eta\, q \, \varphi\, dx \geq \int_\Omega \absop'(y;\eta)\,\varphi\,dx \quad \forall \, \varphi \in C^\infty_0(\Omega) \text{ with } \varphi \geq 0. \end{equation*} The fundamental lemma of the calculus of variations thus yields \begin{equation*} \eta(x)\, q(x) \geq \absop'(y;\eta)(x) \quad \text{a.e.\ in } \Omega, \end{equation*} which by definition of $\absop'(y;\eta)$ in turn gives \begin{equation*} \eta(x) \, q(x) \geq |\eta(x)| \quad \text{a.e. in } \AA. \end{equation*} Since $|q(x)| \leq 1$ a.e.\ in $\Omega$, this results in \begin{equation}\label{eq:etaqae} \eta(x) \, q(x) = |\eta(x)| \quad \text{a.e. in } \AA. \end{equation} As the slackness condition \eqref{eq:slacklike} implies $\{x\in\Omega: |q(x)| < 1\} \subset \{x\in\Omega: y(x) = 0\}$, the result follows immediately from \eqref{eq:etaqae}. \end{proof} \begin{lemma}\label{lem:etalambdanull} There holds $\dual{\lambda}{\eta} \geq 0$.
\end{lemma} \begin{proof} By inserting the definition of the slack variable $q$ into \eqref{eq:vi} one obtains \begin{equation} \int_\Omega q(v-y)\,dx \leq \int_\Omega |v|\, dx - \int_\Omega |y|\, dx \quad \forall\, v\in V \end{equation} and an analogous inequality for $q_n$ and $y_n$. Inserting $y_n \in V$ in this inequality and $y$ in the corresponding one for $q_n$ and $y_n$, adding both inequalities and dividing by $t_n^2$ yields \begin{equation*} \int_\Omega \frac{q_n - q}{t_n}\,\frac{y_n - y}{t_n}\,dx \geq 0. \end{equation*} Since $A$ is elliptic and bounded, the mapping $V \ni w \mapsto \dual{A w}{w} \in \R$ is convex and continuous and thus weakly lower semicontinuous. The equations for $q$ and $q_n$ and the weak convergence of $(y_n - y)/t_n$ in $V$ therefore imply \begin{equation*} \begin{aligned} 0 &\leq \liminf_{n\to\infty} \int_\Omega \frac{q_n - q}{t_n}\,\frac{y_n - y}{t_n}\,dx \\ &\leq \limsup_{n\to\infty} \int_\Omega \frac{q_n - q}{t_n}\,\frac{y_n - y}{t_n}\,dx \\ &= \limsup_{n\to\infty} \Big(\Bigdual{h}{\frac{y_n - y}{t_n}} - \Bigdual{A\Big(\frac{y_n - y}{t_n}\Big)}{\frac{y_n - y}{t_n}}\Big)\\ &\leq \lim_{n\to\infty} \Bigdual{h}{\frac{y_n - y}{t_n}} - \liminf_{n\to\infty} \Bigdual{A\Big(\frac{y_n - y}{t_n}\Big)}{\frac{y_n - y}{t_n}}\\ &\leq \dual{h}{\eta} - \dual{A \eta}{\eta} = \dual{\lambda}{\eta}. \end{aligned} \end{equation*} \end{proof} The most delicate issue, when transferring the finite dimensional findings of Section \ref{sec:finite} to the function space setting, is to verify the conditions \eqref{eq:rabl2b} and \eqref{eq:rabl2e} on $\lambda$. To do so, we first prove that $S$ is Lipschitz continuous in $L^\infty(\Omega)$, provided that the right hand sides in \eqref{eq:vi} are more regular. We employ the well-known technique of Stampacchia based on the following lemma, whose proof is presented in Appendix \ref{sec:stam} for convenience of the reader.
\begin{lemma}[Stampacchia]\label{lem:stam} For every function $w\in V$ and every $k\geq 0$, the function $w_k$ defined by \begin{equation}\label{eq:truncfunc} w_k(x) := \begin{cases} w(x) - k, & w(x) \geq k\\ 0, & |w(x)| < k\\ w(x) + k, & w(x) \leq -k \end{cases} \end{equation} is an element of $V$. Furthermore, if there is a constant $\alpha > 0$ such that \begin{equation}\label{eq:stamest} \alpha \|w_k\|_{H^1(\Omega)}^2 \leq \int_\Omega f\,w_k\,dx \quad \forall\, k \geq 0 \end{equation} with a function $f\in L^p(\Omega)$, $p> \max\{d/2, 1\}$, then $w$ is essentially bounded and there exists a constant $c>0$ so that \begin{equation}\label{eq:inftybound} \|w\|_{L^\infty(\Omega)} \leq c\,\|f\|_{L^p(\Omega)}. \end{equation} \end{lemma} \begin{lemma}\label{lem:inftylip} There exists a constant $K>0$ such that \begin{equation*} \|S(u_1) - S(u_2)\|_{L^\infty(\Omega)} \leq K\,\|u_1 - u_2\|_{L^p(\Omega)} \end{equation*} for all $u_1, u_2 \in L^p(\Omega)$ with $p> \max\{d/2, 1\}$. Here we identified $u \in L^p(\Omega)$ with an element of $V^*$. \end{lemma} \begin{proof} We apply Lemma \ref{lem:stam} to $w:= y_1 - y_2$ with $y_i = S(u_i)$, $i=1,2$. To this end we shall verify \eqref{eq:stamest} with $f = u_1 - u_2$. For this purpose we test the VI for $y_1$ with $y_1 - v$ and the one for $y_2$ with $y_2 + v$ and add the arising inequalities to obtain: \begin{equation}\label{eq:diffineq} \dual{A(y_1 - y_2)}{v} + \int_\Omega\big(|y_1| + |y_2| - |y_1 - v| - |y_2 + v|\big) dx \leq \int_\Omega (u_1 - u_2)v \, dx \quad \forall\, v\in V. \end{equation} Next let $k \geq 0$ be arbitrary and define $w_k = (y_1 - y_2)_k$ as in \eqref{eq:truncfunc}. In the following we will prove that \begin{equation}\label{eq:wksign} I(x) := |y_1(x)| + |y_2(x)| - |y_1(x) - w_k(x)| - |y_2(x) + w_k(x)| \geq 0 \quad\text{a.e.\ in } \Omega, \end{equation} by a simple distinction of cases. 
\emph{1st case: $|y_1(x) - y_2(x)| < k$:}\\ In this case we have $w_k(x) = 0$ and thus \eqref{eq:wksign} is trivially fulfilled with equality. \emph{2nd case: $y_1(x) - y_2(x) \geq k$:}\\ Now we obtain $w_k(x) = y_1(x) - y_2(x) - k$ and consequently \begin{equation*} I(x) = |y_1(x)| + |y_2(x)| - |y_2(x) + k| - |y_1(x) - k|. \end{equation*} If $y_1(x) \geq k$ and $y_2(x) \leq -k$, then \begin{equation*} I(x) = |y_1(x)| + |y_2(x)| + y_2(x) + k - y_1(x) + k \geq 2k \geq 0. \end{equation*} If $y_1(x) \leq k$ and $y_2(x) \geq -k$, then \begin{equation*} I(x) = |y_1(x)| + |y_2(x)| - y_2(x) - k + y_1(x) - k \geq 2\big( y_1(x) - y_2(x) - k \big)\geq 0, \end{equation*} where we used $y_1(x) - y_2(x) \geq k$ for the last estimate.\\ If $y_1(x) \geq k$ and $y_2(x) \geq -k$, then \begin{equation*} I(x) = |y_1(x)| + |y_2(x)| - y_2(x) - y_1(x) \geq 0. \end{equation*} If finally $y_1(x) \leq k$ and $y_2(x) \leq -k$, then \begin{equation*} I(x) = |y_1(x)| + |y_2(x)| + y_2(x) + y_1(x) \geq 0, \end{equation*} which gives the assertion of \eqref{eq:wksign} for this case. \emph{3rd case: $y_1(x) - y_2(x) \leq -k$:}\\ In this case we get that $y_2(x) - y_1(x) \geq k$ and thus $I(x) = |y_1(x)| + |y_2(x)| - |y_2(x) - k| - |y_1(x) + k|$. Interchanging the roles of $y_1(x)$ and $y_2(x)$ and repeating the arguments for the second case immediately yields \eqref{eq:wksign} in the third case. Let us now define $\AA_k := \{x\in \Omega: |w(x)| \geq k\}$. From the first part of Lemma \ref{lem:stam} we get that $w_k \in V$ and so we are allowed to insert $w_k$ as test function in \eqref{eq:diffineq}. 
Owing to the coercivity of $A$, the definition of $w_k$ in \eqref{eq:truncfunc}, \eqref{eq:gammasign}, and \eqref{eq:wksign}, we then obtain \begin{equation*} \begin{aligned} \alpha \, \|w_k\|_{H^1(\Omega)}^2 &\leq \dual{A w_k}{w_k}\\ &= \int_{\AA_k} \Big[\sum_{i=1}^d \Big( \sum_{j=1}^d a_{ij}\, \ddp{w_k}{x_j}\,\ddp{w_k}{x_i} + b_i \,\ddp{w_k}{x_i}\,w_k\Big) + \gamma\,\big(|w|-k\big)^2\Big]\dx\\ &\leq \int_\Omega \Big[\sum_{i=1}^d \Big( \sum_{j=1}^d a_{ij}\, \ddp{w}{x_j}\,\ddp{w_k}{x_i} + b_i \,\ddp{w}{x_i}\,w_k\Big) + \gamma\,w\,w_k\Big]\dx\\ &= \dual{A w}{w_k} = \dual{A(y_1 - y_2)}{w_k} \leq \int_\Omega (u_1 - u_2) w_k\,dx, \end{aligned} \end{equation*} which is \eqref{eq:stamest} with $f = u_1 - u_2$. Since $k\geq 0$ was arbitrary, all conditions of Lemma \ref{lem:stam} are satisfied so that it can be applied and gives the desired result. \end{proof} \begin{remark}\label{rem:infty} Since $S(0) = 0$, it immediately follows from Lemma \ref{lem:inftylip} that $$\|S(u)\|_{L^\infty(\Omega)} \leq c \|u\|_{L^p(\Omega)}.$$ \end{remark} \begin{corollary} If $u,h\in L^p(\Omega)$ with $p> \max\{d/2, 1\}$, then \begin{equation*} \frac{y_n - y}{t_n} \weak^* \eta \text{ in } L^\infty(\Omega), \end{equation*} which implies $\eta \in L^\infty(\Omega)$. \end{corollary} \begin{proof} By Lemma \ref{lem:inftylip} $(y_n - y)/t_n$ is bounded in $L^\infty(\Omega)$. Thus, there is a subsequence converging weakly-$*$ to an element $\tilde\eta \in L^\infty(\Omega)$. This subsequence therefore converges weakly in $L^2(\Omega)$ and in view of \eqref{eq:weakV} we find \begin{equation*} \int_\Omega \eta \, v \, dx = \int_\Omega \tilde\eta \, v \, dx, \quad \forall \, v\in L^2(\Omega). \end{equation*} The fundamental lemma of the calculus of variations implies $\tilde\eta = \eta$ a.e.\ in $\Omega$. Since the weak limit is therefore unique, a standard argument implies weak-$*$ convergence of the whole sequence as claimed.
\end{proof} Based on the Lipschitz continuity of $S$ in Lemma \ref{lem:inftylip}, we can prove a first result towards an infinite dimensional counterpart to \eqref{eq:rabl2b}. \begin{lemma}\label{lem:omega_rho} Assume that $u, h \in L^p(\Omega)$ with $p> \max\{d/2, 1\}$. Let moreover $\rho > 0$ be arbitrary and define --up to sets of measure zero-- \begin{equation*} \AA_\rho := \{ x\in \Omega: y(x) \in [-\rho,\rho]\}. \end{equation*} Then for all $v \in V$ with $v(x) = 0$ a.e.\ in $\AA_\rho$ there holds \begin{equation*} \dual{\lambda}{v} = 0. \end{equation*} \end{lemma} \begin{proof} Let $\rho>0$ and $v\in V$ with $v(x) = 0$ a.e.\ in $\AA_\rho$ be arbitrary. Thanks to Lemma \ref{lem:inftylip} we have \begin{equation}\label{eq:inftyconv} \|y_n - y\|_{L^\infty(\Omega)} \leq K\, t_n\, \|h\|_{L^p(\Omega)} \to 0. \end{equation} Therefore, for almost all $x\in \Omega$ with $y(x) > \rho$, it follows that \begin{equation*} y_n(x) \geq y(x) - |y(x) - y_n(x)| \geq \rho - \|y - y_n\|_{L^\infty(\Omega)} \geq \frac{\rho}{2} > 0, \quad \forall \, n \geq N_1, \end{equation*} with $N_1 \in \N$ depending on $\rho$ but not on $x$. Therefore, thanks to \eqref{eq:slacklike}, we have for all $n \geq N_1$ that \begin{equation}\label{eq:qdiffnull} q_n(x) = \frac{y_n(x)}{|y_n(x)|} = 1 \quad \Rightarrow \quad \frac{q_n(x) - q(x)}{t_n} = 0 \quad \text{f.a.a.\ } x \in \Omega \text{ with } y(x) > \rho, \end{equation} where we used that $q(x) = 1$ due to $y(x) > \rho > 0$. Completely analogously one can show the existence of $N_2 \in \N$, only depending on $\rho$, such that \begin{equation*} \frac{q_n(x) - q(x)}{t_n} = 0 \quad \text{f.a.a.\ } x \in \Omega \text{ with } y(x) < - \rho \end{equation*} for all $n \geq N_2$. Therefore, since $v(x) = 0$ a.e., where $y(x) \in [-\rho,\rho]$, we obtain \begin{equation*} \int_\Omega \frac{q_n - q}{t_n}\, v \, dx = 0 \quad \forall \, n \geq \max\{N_1, N_2\}.
\end{equation*} The convergence $(q_n - q)/t_n \weak \lambda$ in $V^*$ thus implies the assertion. \end{proof} The aim is now to drive $\rho$ in Lemma \ref{lem:omega_rho} to zero. This however requires several additional assumptions. The first one covers the regularity of $y$ and $q$. \begin{assumption}\label{assu:ycont} \begin{enumerate} \item\label{assu:ycont1} We assume that the solution $y = S(u)$ is continuous. \item\label{assu:ycont2} The slack function is continuous, i.e.\ $q\in C(\bar\Omega)$. \end{enumerate} \end{assumption} \begin{remark} Let us point out that Assumption \ref{assu:ycont}\eqref{assu:ycont1} is not restrictive at all. Indeed, Lemma \ref{lem:slack} implies that $y$ solves $A y = u - q$ and, if $u\in L^2(\Omega)$, then $y$ solves a second-order elliptic equation with right hand side in $L^2(\Omega)$. For problems of this type, standard regularity theory yields continuity of the solution under mild assumptions on the data, see for instance \cite{Evans}. In contrast to this, Assumption \ref{assu:ycont}\eqref{assu:ycont2} cannot be guaranteed in general. Nevertheless, multiple numerical observations indicate that $q$ is often continuous. \end{remark} If Assumption \ref{assu:ycont} is satisfied, i.e.\ if $y$ and $q$ have continuous representatives, then we can define the sets in Definition \ref{def:sets} in a pointwise manner, i.e., not only up to sets of zero measure. The sets arising in this way are denoted by the same symbols, and we always mean these sets in what follows when writing $\AA$, $\II$, $\BB$ etc. \begin{lemma}\label{lem:inactivedist} Under Assumption \ref{assu:ycont} the sets $\II^+$ and $\II^-$ are strictly separated, i.e., there exists $\delta > 0$ such that \begin{equation*} \dist(\II^+, \II^-) := \min\big\{|x-z|_{\R^d} : x\in \clos{\II^+}, z\in \clos{\II^-}\big\} > \delta.
\end{equation*} \end{lemma} \begin{proof} Since $\bar\Omega$ is compact, Assumption \ref{assu:ycont}\eqref{assu:ycont2} implies that $q$ is uniformly continuous. From the slackness condition \eqref{eq:slacklike} we infer $q=1$ in $\II^+$ so that the uniform continuity of $q$ yields the existence of $\delta > 0$ with \begin{equation}\label{eq:IplusB} q(x) \geq 1/2 \quad \text{for all } x \in \II^+ + B(0,\delta). \end{equation} Since $q = -1$ on $\II^-$ by \eqref{eq:slacklike}, no point of $\II^-$ can lie in $\II^+ + B(0,\delta)$, which gives the assertion. \end{proof} In addition to Assumption \ref{assu:ycont}, we need the following rather restrictive assumption on the active set. \begin{assumption}\label{assu:active} The active set $\AA = \{ x\in \Omega: y(x) = 0\}$ satisfies the following conditions: \begin{enumerate} \item\label{assu:active1} $\AA = \AA_1 \cup \AA_0$, where $\AA_1$ has positive measure and $\AA_0$ has zero capacity. \item\label{assu:active2} $\AA_1$ is closed and possesses non-empty interior. Moreover, it holds $\AA_1 = \clos{\interior(\AA_1)}$. \item\label{assu:active3} For the set $\JJ:= \Omega\setminus \AA_1$ it holds \begin{equation}\label{eq:innererrand} \partial\JJ \setminus (\partial\JJ\cap\partial\Omega) = \partial\AA_1\setminus(\partial\AA_1\cap\partial\Omega), \end{equation} and both $\AA_1$ and $\JJ$ are supposed to have regular boundaries. That is, the connected components of $\JJ$ and $\AA_1$ have positive distance from each other, and the boundary of each of them satisfies the cone condition. \end{enumerate} \end{assumption} Figures \ref{fig:cap0} and \ref{fig:cap1} illustrate Assumption \ref{assu:active} in the two-dimensional case. \begin{figure}[h!]
\centering \begin{minipage}[t]{0.48\linewidth} \centering \includegraphics[scale=0.5]{cap0.pdf} \put(-12,5){$\Omega$} \put(-90,100){$\AA_1$} \put(-53,97){$\AA_0$} \caption{Active set satisfying Assumption \ref{assu:active}}\label{fig:cap0} \end{minipage} \quad \begin{minipage}[t]{0.48\linewidth} \centering \includegraphics[scale=0.5]{cap1.pdf} \put(-12,5){$\Omega$} \put(-130,17){$\AA$} \caption{Active set not feasible for Assumption \ref{assu:active}}\label{fig:cap1} \end{minipage} \end{figure} With the help of Assumptions \ref{assu:ycont} and \ref{assu:active} we can now prove the following infinite dimensional counterpart to \eqref{eq:rabl2b}: \begin{lemma}\label{lem:lambdanull} Let $u, h \in L^p(\Omega)$, $p > \max\{d/2,1\}$, be given. Assume that $u$ is such that Assumptions \ref{assu:ycont} and \ref{assu:active} are fulfilled. Then \begin{equation*} \dual{\lambda}{v} = 0 \quad \text{for all } v \in V \text{ with } v(x) = 0 \text{ a.e.\ in }\AA \end{equation*} holds true. \end{lemma} \begin{proof} Let $v \in V$ with $v(x) = 0$ a.e.\ in $\AA$ be arbitrary. By Assumption \ref{assu:active}\eqref{assu:active3} there are linear and continuous trace operators $\tau_j: H^1(\Omega) \to L^2(\partial\JJ)$ and $\tau_a: H^1(\Omega) \to L^2(\partial\AA_1)$. Due to $v = 0$ a.e.\ in $\AA_1$, we have $\tau_a v = 0$ and, by \eqref{eq:innererrand} and $v\in V$, thus $\tau_j v = 0$. Since $\partial\JJ$ is regular, there exists a sequence $\{\varphi_k\}_{k\in \N} \subset C^\infty_0(\JJ)$ with $\varphi_k \to v$ in $H^1(\JJ)$, see e.g.\ \cite[Lemma 1.33]{GajewskiGroegerZacharias}. In particular it holds \begin{equation*} \omega_k := \supp(\varphi_k) \subset\subset \JJ. \end{equation*} We extend $\varphi_k$ by zero outside $\JJ$ to obtain a function in $C^\infty_0(\Omega)$, which we denote by the same symbol for simplicity. Because of $v = 0$ a.e.\ in $\AA_1$ it follows that \begin{equation}\label{eq:vconv} \varphi_k \stackrel{k\to\infty}{\longrightarrow} v \quad \text{in }V.
\end{equation} By construction we have $\JJ \subset \II \cup \AA_0$. Since $\AA_0$ has zero capacity, there is a sequence $\{w_m\}_{m\in \N} \subset V$ and a sequence of open neighborhoods of $\AA_0$, denoted by $\{\UU_m\}_{m\in \N}\subset \Omega$, such that \begin{gather*} w_m\geq 0 \text{ a.e.\ in } \Omega, \quad w_m = 1 \text{ a.e.\ in }\UU_m, \quad w_m \stackrel{m\to\infty}{\longrightarrow} 0 \text{ in } H^1(\Omega). \end{gather*} Now let $k,m\in \N$ be fixed but arbitrary and define \begin{equation*} \II_m^+ := (\omega_k \setminus \UU_m) \cap \II^+, \quad \II_m^- := (\omega_k \setminus \UU_m) \cap \II^-. \end{equation*} Since $\UU_m$ is open, $\omega_k\setminus\UU_m$ is closed. Moreover, in view of $\JJ \subset \II \cup \AA_0$, it holds $\omega_k\setminus\UU_m \subset \II$. Thus, Lemma \ref{lem:inactivedist} and the boundedness of $\Omega$ yield that $\II_m^+$ and $\II_m^-$ are compact. The continuity of $y$ therefore implies that there is $\xi \in \II_m^+$ such that \begin{equation*} y(\xi) = \min_{x\in \II_m^+} y(x) \end{equation*} and, due to $\xi \in \II^+$, one obtains $\rho_m^+ := y(\xi) > 0$. Analogously one derives $\rho_m^- := \max_{x\in \II_m^-} y(x) < 0$. As in the proof of Lemma \ref{lem:omega_rho} one proves the existence of $N^+_m\in \N$ such that for all $n\geq N^+_m$ there holds \begin{equation*} \frac{q_n(x) - q(x)}{t_n} = 0 \quad \text{f.a.a.\ $x\in \Omega$ with } y(x)\geq \rho_m^+, \end{equation*} see \eqref{eq:qdiffnull}. Clearly, there is $N_m^- \in \N$ so that the same equation holds for every $n\geq N_m^-$ and almost all $x\in \Omega$ with $y(x) \leq \rho_m^-$. Consequently, we obtain \begin{equation}\label{eq:diffqsupp} \frac{q_n - q}{t_n} = 0 \quad \text{a.e.\ in }\omega_k \setminus\UU_m = \II_m^+ \cup \II_m^-, \end{equation} provided that $n\geq N_m := \max\{N_m^+, N_m^-\}$.
Thanks to \eqref{eq:diffqsupp} and $w_m = 1$ a.e.\ in $\UU_m$, it follows \begin{equation}\label{eq:umomega} \begin{aligned} \int_\Omega \frac{q_n - q}{t_n}\, \varphi_k \, w_m\, dx &= \int_{\omega_k \setminus \UU_m} \frac{q_n - q}{t_n}\, \varphi_k\, w_m \,dx + \int_{\UU_m} \frac{q_n - q}{t_n}\, \varphi_k \, w_m dx\\ &= \int_{\UU_m} \frac{q_n - q}{t_n}\, \varphi_k \,dx \qquad \forall\, n \geq N_m. \end{aligned} \end{equation} On the other hand $\varphi_k w_m \in V$ is a feasible test function for \eqref{eq:qdef} and \eqref{eq:qn}. If we insert this test function and subtract the arising equation, then \eqref{eq:umomega} together with H\"older's inequality and Sobolev embeddings yield \begin{equation*} \begin{aligned} &\int_{\UU_m} \frac{q_n - q}{t_n}\, \varphi_k \,dx = \int_\Omega \frac{q_n - q}{t_n}\, \varphi_k \, w_m\, dx \\ &\quad = - \int_\Omega \nabla \Big(\frac{y_n - y}{t_n}\Big) \cdot \nabla(\varphi_k w_m)\,dx + \int_\Omega h\,\varphi_k\,w_m\,dx\\ &\quad \leq 2 \Big\|\frac{y_n - y}{t_n}\Big\|_{H^1(\Omega)} \|w_m\|_{H^1(\Omega)} \|\varphi_k\|_{W^{1,\infty}(\Omega)} + c\,\|h\|_{L^2(\Omega)} \|w_m\|_{H^1(\Omega)} \|\varphi_k\|_{H^1(\Omega)} \end{aligned} \end{equation*} for all $n\geq N_m$. Therefore, in view of \eqref{eq:diffqsupp}, the weak convergence (and thus boundedness) of $(y_n - y)/t_n$ gives \begin{equation*} \int_\Omega \frac{q_n - q}{t_n}\, \varphi_k \,dx = \int_{\UU_m} \frac{q_n - q}{t_n}\, \varphi_k \,dx \leq c\,\|w_m\|_{H^1(\Omega)} \|\varphi_k\|_{W^{1,\infty}(\Omega)} \end{equation*} for all $n\geq N_m$ and thus \begin{equation*} \dual{\lambda}{\varphi_k} = \lim_{n\to \infty} \int_\Omega \frac{q_n - q}{t_n}\, \varphi_k \,dx \leq c\,\|w_m\|_{H^1(\Omega)} \|\varphi_k\|_{W^{1,\infty}(\Omega)}. \end{equation*} Due to $w_m \to 0$ in $H^1(\Omega)$, passing to the limit $m\to\infty$ yields $\dual{\lambda}{\varphi_k} \leq 0$. The above arguments also apply to $-\varphi_k$ so that $\dual{\lambda}{\varphi_k} = 0$. 
Since $k\in \N$ was arbitrary, this equation holds for every $k\in \N$ and thus we can pass to the limit $k\to \infty$. The convergence in \eqref{eq:vconv} then gives the assertion. \end{proof} Similarly to \eqref{eq:conefinite}, we define \begin{equation}\label{eq:KKae} \begin{aligned} \KK(y) &:= \{v\in V: \;\, v(x) = 0 \text{ a.e.\ in } \AA_s,\; v(x)q(x) \geq 0 \text{ a.e.\ in } \BB\}\\ &= \begin{aligned}[t] \{v\in V: \;\, & v(x) = 0 \text{ a.e., where } |q(x)| < 1,\\ & v(x)q(x) \geq 0 \text{ a.e., where } |q(x)| = 1 \text{ and } y(x) = 0\} \end{aligned} \end{aligned} \end{equation} This set will be the feasible set of the VI belonging to the directional derivative of $S$ (see Theorem \ref{thm:ablvi} below). As seen in the proof of Theorem \ref{thm:rablfinite}, in the finite dimensional setting, there holds $\lambda^\top v \leq 0$ for all $v\in K(y)$, see \eqref{eq:lambdasign}. The infinite dimensional analogue is also true, provided that Assumptions \ref{assu:ycont} and \ref{assu:active} hold, as the following lemma shows. \begin{lemma}\label{lem:signlambda} Let $u,h \in L^p(\Omega)$ with $p > \max\{d/2,1\}$ be given, and assume that $u$ is such that Assumptions \ref{assu:ycont} and \ref{assu:active} are fulfilled. Then there holds \begin{equation*} \dual{\lambda}{v} \leq 0 \quad \text{for all } v\in \KK(y). \end{equation*} \end{lemma} \begin{proof} Let $v\in \KK(y)$ be fixed but arbitrary.
Due to $\AA_s \cup \BB \cup \II = \Omega$ and $v(x) = 0$ a.e.\ in $\AA_s$, we obtain \begin{equation}\label{eq:intest} \int_\Omega \frac{q_n - q}{t_n}\,v\,dx = \int_{\BB^+} \frac{q_n - 1}{t_n}\,v\,dx + \int_{\BB^-} \frac{q_n + 1}{t_n}\,v\,dx + \int_{\II} \frac{q_n - q}{t_n}\,v\,dx. \end{equation} Since $q_n \in [-1,1]$ a.e.\ in $\Omega$ and $q \,v \geq 0$ a.e.\ in $\BB$, which implies $v \geq 0$ a.e.\ in $\BB^+$ and $v \leq 0$ a.e.\ in $\BB^-$, we can further estimate \begin{equation*} \int_\Omega \frac{q_n - q}{t_n}\,v\,dx \leq \int_{\II} \frac{q_n - q}{t_n}\,v\,dx = \int_{\JJ} \frac{q_n - q}{t_n}\,v\,dx, \end{equation*} where $\JJ$ is the set from Assumption \ref{assu:active}\eqref{assu:active3}. For the last equality we used that $\JJ = \II \cup \AA_0$ and $\AA_0$ has zero capacity, thus zero Lebesgue measure. We now prove that $\JJ = \JJ^+ \cup \JJ^-$, where $\JJ^+$ and $\JJ^-$ possess regular boundaries and coincide with $\II^+$ and $\II^-$ up to sets of zero capacity. For this purpose, we show $\clos{\JJ} = \clos{\II}$. Due to $\II \subset \JJ$, we clearly have $\clos{\II}\subseteq\clos{\JJ}$. Let $\xi \in \clos{\JJ}$ be arbitrary. Then there is a sequence $\{x_k\}_{k\in \N}\subset \JJ$ so that $x_k \to \xi$. If $\{x_k\}$ contains a subsequence in $\II$, we immediately obtain $\xi \in \clos{\II}$. So assume the contrary, i.e., in view of $\JJ = \II \cup \AA_0$, $x_k\in \AA_0$ for all $k\in \N$ sufficiently large. W.l.o.g.\ we assume $\{x_k\}\subset \AA_0$ for the whole sequence. Since $\AA_0$ has zero capacity, thus zero measure, there is, for each $x_k$, a sequence $\{x_k^{(m)}\}_{m\in \N} \subset \Omega \setminus \AA_0$ with $x^{(m)}_k \to x_k$ for $m\to \infty$. Since $x_k^{(m)} \notin \AA_0$, we have either $x^{(m)}_k \in \AA_1$ or $x^{(m)}_k \in \II$. If $\{x_k^{(m)}\}$ contained a subsequence in $\AA_1$, then the closedness of $\AA_1$ would imply $x_k \in \AA_1$, in contradiction to $x_k \in \AA_0$.
Thus we may w.l.o.g.\ assume that $\{x_k^{(m)}\}\subset \II$. Therefore, there is a diagonal sequence $\{x_k^{(m(k))}\} \subset \II$ converging to $\xi$, which gives $\xi \in \clos{\II}$. Hence we have shown \begin{equation*} \clos{\JJ}=\clos{\II} = \clos{\II^+} \cup \clos{\II^-} \end{equation*} with $\II^+$ and $\II^-$ as defined in \eqref{eq:defsets}. Since $\clos{\II^+}$ and $\clos{\II^-}$ have positive distance from each other by Lemma \ref{lem:inactivedist}, there exist sets $\JJ^+, \JJ^-$ such that $\JJ^+ \cup \JJ^- = \JJ$ and $\dist(\JJ^+, \JJ^-) > \delta$. Moreover, thanks to Lemma \ref{lem:inactivedist} and $\JJ = \II \cup \AA_0$ with $\capa(\AA_0) = 0$, $\JJ^+$ differs from $\II^+$ only by a set of zero capacity and the same holds for $\JJ^-$ and $\II^-$. Finally, because of $\dist(\JJ^+, \JJ^-) > \delta$, Assumption \ref{assu:active}\eqref{assu:active3} yields that $\JJ^+$, $\JJ^-$, $\Omega \setminus \JJ^+$, and $\Omega\setminus\JJ^-$ possess regular boundaries. (This actually implies that $\JJ^\pm = \interior(\clos{\II^\pm}).$) Since $\JJ^+$ differs from $\II^+$ only on a set of zero measure, the definition of $\II^+$ and the slackness condition \eqref{eq:slacklike} imply $q = 1$ a.e.\ in $\JJ^+$, and analogously $q = -1$ a.e.\ in $\JJ^-$. Thus \eqref{eq:intest} can be further estimated by \begin{align} \int_\Omega \frac{q_n - q}{t_n}\,v\,dx &\leq \int_{\JJ^+} \underbrace{\frac{q_n - 1}{t_n}}_{\leq 0}\,\underbrace{\max\{0,v\}}_{\geq 0}\,dx + \int_{\JJ^+} \frac{q_n - q}{t_n}\,\min\{0,v\}\,dx \nonumber\\ &\quad + \int_{\JJ^-} \underbrace{\frac{q_n + 1}{t_n}}_{\geq 0}\,\underbrace{\min\{0,v\}}_{\geq 0}\,dx + \int_{\JJ^-} \frac{q_n - q}{t_n}\,\max\{0,v\}\,dx \nonumber\\ & \leq \int_{\JJ^+} \frac{q_n - q}{t_n}\,\min\{0,v\}\,dx + \int_{\JJ^-} \frac{q_n - q}{t_n}\,\max\{0,v\}\,dx. \label{eq:intest2} \end{align} Next we show that $\min\{0,v\} \in H^1_0(\JJ^+)$ and $\max\{0,v\} \in H^1_0(\JJ^-)$. 
The proof of Lemma \ref{lem:inactivedist} shows \begin{equation}\label{eq:incl} \big(\II^+ + B(0,\varepsilon)\big) \setminus \II^+ \subset \{x\in \Omega: q(x) \geq 1/2,\; y(x) = 0\} \subset \AA_s \cup \BB^+, \end{equation} see \eqref{eq:IplusB}. Because of $v\in \KK(y)$ we have $q\,v \geq 0$ a.e.\ in $\AA_s \cup \BB^+$ and thus \eqref{eq:incl} gives $v \geq 0$ a.e.\ in $\big(\II^+ + B(0,\varepsilon)\big) \setminus \II^+$. Since $\II^+$ and $\JJ^+$ differ only up to a set of zero measure, we thus get \begin{equation*} \min\{0,v\} = 0 \quad \text{a.e.\ in } (\JJ^+ + B(0,\varepsilon)) \setminus \JJ^+. \end{equation*} The regularity of $\partial\JJ^+$ and $\partial(\Omega\setminus\JJ^+)$ therefore gives \begin{equation*} \min\{0,v(x)\} = 0\quad \text{a.e.\ on }\partial\JJ^+, \end{equation*} and thus $\min\{0,v\} \in H^1_0(\JJ^+)$. An analogous argument shows that $\max\{0,v\} \in H^1_0(\JJ^-)$. Due to the zero trace and the regularity of $\partial\JJ^+$ by Assumption \ref{assu:active}\eqref{assu:active3}, we can extend $\min\{0,v\}$ by zero outside $\JJ^+$ to obtain a function in $V$, i.e., $\chi_{\JJ^+}\min\{0,v\} \in V$, where $\chi_{\JJ^+}$ denotes the characteristic function of $\JJ^+$. Thus the weak convergence $(q_n - q)/t_n \weak \lambda$ in $V^*$ gives \begin{equation*} \int_{\JJ^+} \frac{q_n - q}{t_n}\,\min\{0,v\}\,dx = \int_{\Omega} \frac{q_n - q}{t_n}\,\chi_{\JJ^+}\min\{0,v\}\,dx \to \dual{\lambda}{\chi_{\JJ^+}\min\{0,v\}}. \end{equation*} Since $\chi_{\JJ^+}\min\{0,v\} = 0$ a.e.\ in $\AA \subset \Omega\setminus\JJ^+$, Lemma \ref{lem:lambdanull} yields $\dual{\lambda}{\chi_{\JJ^+}\min\{0,v\}} = 0$. Analogously \begin{equation*} \int_{\JJ^-} \frac{q_n - q}{t_n}\,\max\{0,v\}\,dx \to \dual{\lambda}{\chi_{\JJ^-}\max\{0,v\}} = 0 \end{equation*} is obtained. Therefore, in view of \eqref{eq:intest2}, we finally arrive at $\dual{\lambda}{v} \leq 0$ and, since $v\in \KK(y)$ was arbitrary, this proves the assertion.
\end{proof} Now we are finally in the position to prove the main result of this section concerning the ``weak directional differentiability'' of the solution operator associated with the VI in \eqref{eq:vi}. \begin{theorem}\label{thm:ablvi} Let $u,h \in L^p(\Omega)$ with $p > \max\{d/2,1\}$ be given. Suppose further that Assumptions \ref{assu:ycont} and \ref{assu:active} are fulfilled by $y = S(u)$ and the associated slack variable $q$. Then there holds \begin{equation}\label{eq:weaklim} \frac{S(u + t\,h) - S(u)}{t} \weak \eta \quad \text{in } V, \quad \text{as } t \searrow 0, \end{equation} where $\eta \in V$ solves the following VI of first kind: \begin{equation}\label{eq:ablvi} \begin{aligned} \eta \in \KK(y),\quad \dual{A\eta}{v-\eta} \geq \dual{h}{v-\eta} \quad \forall\, v\in \KK(y) \end{aligned} \end{equation} with $\KK(y)$ as defined in \eqref{eq:KKae}. \end{theorem} \begin{proof} Lemma \ref{lem:etafeas} yields $\eta \in \KK(y)$. Furthermore, since $A \eta + \lambda = h$, Lemmas \ref{lem:etalambdanull} and \ref{lem:signlambda} give \begin{equation*} \dual{A\eta}{v-\eta} - \dual{h}{v-\eta} = \dual{\lambda}{\eta} - \dual{\lambda}{v} \geq 0 \end{equation*} for all $v\in \KK(y)$, which is just the VI in \eqref{eq:ablvi}. Since $\KK(y)$ is nonempty, convex, and closed and $A$ is bounded and coercive, standard arguments yield existence and uniqueness for this VI of first kind. Thus the weak limit $\eta$ is unique, which implies the weak convergence of the whole sequence. \end{proof} \begin{definition} With a little abuse of notation we call the weak limit $\eta$ in \eqref{eq:weaklim} \emph{weak directional derivative} and denote it by $\eta = S_w'(u;h)$. \end{definition} \begin{remark} If $\BB$ has zero measure, then $\KK(y)$ turns into \begin{equation*} \KK(y) =\{v\in V: \;\, v(x) = 0 \text{ a.e.\ in } \AA_s\}, \end{equation*} i.e., a linear and closed subspace of $V$. Thus, in this case, \eqref{eq:ablvi} becomes an equation.
If $\AA_s$ possesses a regular boundary, then this equation is equivalent to \begin{equation*} A \eta = h \quad \text{in } \II \quad\text{and}\quad \eta = 0\quad \text{a.e.\ in }\AA = \AA_s. \end{equation*} \end{remark} \begin{remark} It is very likely that Theorem \ref{thm:ablvi} could be proven without the restrictive Assumption \ref{assu:active}, if the weak limit $\eta$ would satisfy the conditions in \eqref{eq:etaAs} and \eqref{eq:etaB} not only almost everywhere, but \emph{quasi-everywhere} in $\Omega$. In this case, the feasible set of \eqref{eq:ablvi} would read \begin{equation*} \begin{aligned} \KK := \{v\in V: \;\, & v(x) = 0 \text{ q.e., where } |q(x)| < 1,\\ & v(x)q(x) \geq 0 \text{ q.e., where } |q(x)| = 1 \text{ and } y(x) = 0\}. \end{aligned} \end{equation*} However, unfortunately, so far we have neither been able to show that \eqref{eq:etaqae} holds quasi everywhere, nor to establish a counterexample which demonstrates that this is wrong in general. This question gives rise to future research. \end{remark} \section{Bouligand stationarity}\label{sec:bouli} With the differentiability result of Theorem \ref{thm:ablvi} at hand, it is now straightforward to establish first-order optimality conditions in purely primal form for optimization problems governed by \eqref{eq:vi}. To be more precise, we consider an optimization problem of the form \begin{equation}\label{eq:optcontrol} \left. \begin{aligned} \min \quad & J(y,u)\\ \text{s.t.} \quad & \dual{A y}{v-y} + \int_\Omega |v|\,dx - \int_\Omega |y|\,dx \geq \dual{u}{v-y} \quad \forall \, v\in V\\ \text{and}\quad & u \in \Uad, \end{aligned} \qquad \right\} \end{equation} where $\Uad\subset L^p(\Omega)$, $p > \max\{d/2,1\}$, is nonempty, closed, and convex. 
As shown in \cite[Lemma 3.9]{HerzogMeyerWachsmuth}, weak convergence of the difference quotient associated with the control-to-state mapping $S: u \mapsto y$ is sufficient to prove that the reduced objective, defined by \begin{equation*} j: L^p(\Omega) \to \R, \quad j(u) := J(S(u),u), \end{equation*} is directionally differentiable. This allows us to formulate the following purely primal optimality conditions, which, in case of optimal control of VIs of first kind, are known as Bouligand stationarity conditions. \begin{theorem}\label{thm:bstat} Let $p > \max\{d/2,1\}$ and assume that $J$ is Fr\'echet-differentiable from $V\times L^p(\Omega)$ to $\R$. Suppose moreover that $\bar u \in \Uad$ is a local optimal solution of \eqref{eq:optcontrol}, such that $\bar y = S(\bar u)$ and the associated slack variable $\bar q$ satisfy Assumptions \ref{assu:ycont} and \ref{assu:active}. Then the following primal stationarity conditions are fulfilled: \begin{equation}\label{eq:noc1} \partial_y J(\bar y, \bar u) \eta + \partial_u J(\bar y, \bar u)(u - \bar u) \geq 0 \quad \forall \, u \in \Uad, \end{equation} where $\eta\in V$ solves \eqref{eq:ablvi} with $\KK(y) = \KK(\bar y)$ and $h = u - \bar u$. \end{theorem} \begin{proof} As mentioned above, \cite[Lemma 3.9]{HerzogMeyerWachsmuth} and Theorem \ref{thm:ablvi} imply that $u \mapsto j(u)$ is directionally differentiable in every direction $h \in L^p(\Omega)$ with directional derivative $j'(\bar u;h) = \partial_y J(\bar y, \bar u)S_w'(\bar u;h) + \partial_u J(\bar y, \bar u)h$. Local optimality of $\bar u$ yields $j'(\bar u; u - \bar u)\geq 0$, which is the assertion. \end{proof} Next we derive a variant of the above optimality condition based on the cone tangent to the admissible set of \eqref{eq:optcontrol}. As a result, we obtain an optimality condition which can be interpreted as the counterpart of the implicit programming approach in the discussion of finite dimensional MPECs, see \cite[Section 3.3]{LuoPangRalph}.
Note that such similarities have already been observed in \cite{HerzogMeyerWachsmuth}. \begin{lemma}\label{lem:convrabl} Assume that $\bar u\in L^p(\Omega)$, $p > \max\{d/2,1\}$, is such that Assumptions \ref{assu:ycont} and \ref{assu:active} are fulfilled. Suppose moreover that the sequences $\{u_n\}\subset L^p(\Omega)$ and $\{t_n\}\subset \R^+$ satisfy \begin{equation*} t_n \searrow 0, \quad \frac{u_n - \bar u}{t_n} \weak h \quad \text{in } L^p(\Omega). \end{equation*} Then \begin{equation*} \frac{S(u_n) - S(\bar u)}{t_n} \weak S'_w(\bar u;h) \quad \text{in } V. \end{equation*} \end{lemma} \begin{proof} By adding a zero we obtain \begin{equation*} \begin{aligned} \frac{S(u_n) - S(\bar u)}{t_n} = \frac{S(u_n) - S(\bar u + t_n\, h)}{t_n} + \frac{S(\bar u + t_n\, h) - S(\bar u)}{t_n}. \end{aligned} \end{equation*} While the latter addend converges weakly to $S'_w(\bar u;h)$ by Theorem \ref{thm:ablvi}, the Lipschitz continuity of $S$ by Lemma \ref{lem:lipschitz} yields for the first addend that \begin{equation*} \Big\|\frac{S(u_n) - S(\bar u + t_n\, h)}{t_n}\Big\|_V \leq L\,\Big\|\frac{u_n - \bar u}{t_n} - h\Big\|_{V^*} \to 0, \end{equation*} where we used the compactness of the embedding $L^p(\Omega) \embed V^*$. \end{proof} We define the tangent cone to the admissible set of \eqref{eq:optcontrol} as follows: \begin{definition}[Tangent cone] For given $u \in \Uad$ we define the tangent cone at $u$ by \begin{equation*} \TT(u) := \begin{aligned}[t] &\Big\{(\eta,h) \in V \times L^p(\Omega): \exists \; \{u_n\}_{n\in\N} \subset \Uad, \, \{t_n\} \subset \R^+ \text{ such that }\\ &\qquad\qquad \frac{u_n - u}{t_n} \weak h \text{ in } L^p(\Omega) \quad\text{and}\quad \frac{S(u_n) - S(u)}{t_n} \weak \eta \text{ in } V\Big\}.
\end{aligned} \end{equation*} \end{definition} Since the VI in \eqref{eq:optcontrol} is uniquely solvable such that $y$ is determined by $u$, this cone coincides with the standard tangent cone in finite dimensions, except that we replace strong by weak convergence. Next consider the VI in \eqref{eq:ablvi} associated with the directional derivative of $S$ at $\bar u$. Due to the coercivity of $A$, this VI clearly admits a unique solution not only for right-hand sides in $L^p(\Omega)$, but also for inhomogeneities in $V^*$. We denote the associated solution operator by $G: V^* \to V$, i.e. \begin{equation}\label{eq:Gdef} \eta = G(h) \quad :\Longleftrightarrow\quad \eta \in \KK(\bar y), \quad \dual{A \eta}{v - \eta} \geq \dual{h}{v - \eta} \quad \forall\,v\in \KK(\bar y). \end{equation} Furthermore, owing again to the coercivity of $A$, this operator is Lipschitz continuous, i.e. \begin{equation}\label{eq:Glip} \|G(h_1) - G(h_2)\|_V \leq \frac{1}{\alpha}\, \|h_1 - h_2\|_{V^*} \quad \forall\, h_1, h_2\in V^*, \end{equation} where $\alpha$ is the coercivity constant of $A$. This enables us to show the following result. \begin{theorem} Suppose that the assumptions of Theorem \ref{thm:bstat} are fulfilled with a local optimum $\bar u\in \Uad$ of \eqref{eq:optcontrol}. Then there holds \begin{equation}\label{eq:noc2} \partial_y J(\bar y, \bar u) \eta + \partial_u J(\bar y, \bar u)h \geq 0 \quad \forall \, (\eta,h) \in \TT(\bar u). \end{equation} \end{theorem} \begin{proof} If $h_n \weak h$ in $L^p(\Omega)$ and consequently $h_n \to h$ in $V^*$, then \eqref{eq:Glip} gives $G(h_n) \to G(h)$ in $V$. Since $G(h) = S_w'(\bar u; h)$ for $h\in L^p(\Omega)$, this implies that $L^p(\Omega) \ni h \mapsto S'_w(\bar u;h) \in V$ is completely continuous. Now let $(\eta,h) \in \TT(\bar u)$ be arbitrary. Hence there is $\{u_n\}\subset\Uad$ so that $(u_n - \bar u)/t_n \weak h$ in $L^p(\Omega)$.
As seen above, $S_w'(\bar u;\cdot)$ is the solution operator of a VI of first kind with the cone $\KK(\bar y)$ as feasible set. Hence, $S_w'(\bar u;\cdot)$ is positively homogeneous such that Theorem \ref{thm:bstat} yields \begin{equation}\label{eq:gradineq} \partial_y J(\bar y, \bar u) S_w'\Big(\bar u;\frac{u_n - \bar u}{t_n}\Big) + \partial_u J(\bar y, \bar u)\Big(\frac{u_n - \bar u}{t_n}\Big) \geq 0. \end{equation} The complete continuity of $S_w'(\bar u;\cdot)$ together with Lemma \ref{lem:convrabl} implies \begin{equation*} S_w'\Big(\bar u;\frac{u_n - \bar u}{t_n}\Big) \to S_w'(\bar u; h) = \eta \quad \text{in } V. \end{equation*} Due to the weak continuity of $\partial_u J(\bar y, \bar u)$ the second addend in \eqref{eq:gradineq} converges to $\partial_u J(\bar y, \bar u)h$, which completes the proof. \end{proof} \section{Strong stationarity} In this section we aim at deriving optimality conditions which, in contrast to the ones presented in Section \ref{sec:bouli}, also involve dual variables. Given the differentiability result and the Bouligand stationarity conditions in Theorem \ref{thm:bstat}, we can follow the lines of \cite{MignotPuel1984}. For this purpose we have to require the following assumptions concerning the quantities in the optimal control problem \eqref{eq:optcontrol}: \begin{assumption}\label{assu:strong1} We suppose that $U_{\textup{ad}} = L^2(\Omega)$. Moreover, $J$ is continuously Fr\'echet-differentiable from $V\times L^2(\Omega)$ to $\R$. \end{assumption} In order to be able to utilize our differentiability result we furthermore assume the following: \begin{assumption}\label{assu:strong2} Assume that $\bar u$ is a local optimum such that the associated state $\bar y$ and the associated slack variable $\bar q$ satisfy Assumptions \ref{assu:ycont} and \ref{assu:active}.
\end{assumption} \begin{lemma}\label{lem:bstatV} Under Assumptions \ref{assu:strong1} and \ref{assu:strong2} there exists a $\bar p\in V$ such that \begin{equation*} \partial_y J(\bar y, \bar u)G(h) - \dual{\bar p}{h} \geq 0 \quad \forall\, h\in V^* \end{equation*} with $G$ as defined in \eqref{eq:Gdef}. \end{lemma} \begin{proof} By Theorem \ref{thm:bstat} and $S'_w(\bar u;h) = G(h)$ for $h\in L^2(\Omega)$, there holds \begin{equation}\label{eq:gradvi} \partial_y J(\bar y,\bar u)G(h) + \partial_u J(\bar y,\bar u)h \geq 0 \quad \forall\, h\in L^2(\Omega), \end{equation} which, together with \eqref{eq:Glip}, gives in turn \begin{equation*} \partial_u J(\bar y,\bar u)h \leq \|\partial_y J(\bar y,\bar u)\|_{V^*} \,\frac{1}{\alpha}\, \|h\|_{V^*} \quad \forall\, h\in L^2(\Omega). \end{equation*} Therefore, by the Hahn-Banach theorem, the linear functional $\partial_u J(\bar y, \bar u): L^2(\Omega) \to \R$ can be extended to a linear and bounded functional on $V^*$, which we identify with a function $\bar p\in V$, i.e. \begin{equation*} \dual{\bar p}{h} = - \partial_u J(\bar y, \bar u) h \quad \forall\, h \in L^2(\Omega). \end{equation*} The density of $L^2(\Omega) \embed V^*$ in combination with \eqref{eq:gradvi} then gives the assertion. \end{proof} Next define $q\in V$ as the solution of \begin{equation*} \dual{A^* q}{v} = \dual{\partial_y J(\bar y,\bar u)}{v} \quad \forall\, v\in V, \end{equation*} which is well defined because of the coercivity of $A$. Furthermore, we introduce the operator $\Pi: V \to \KK(\bar y)$ by \begin{equation*} \Pi := G \circ A. \end{equation*} Note that $\Pi$ can be interpreted as $A$-projection on $\KK(\bar y)$.
It is straightforward to see the following properties of $\Pi$: \begin{equation}\label{eq:idem} \begin{aligned} & \text{$\Pi$ as well as $I - \Pi$ are idempotent,}\\ & \Pi \circ (I-\Pi) = (I-\Pi) \circ \Pi = 0, \end{aligned} \end{equation} and, as $\KK(\bar y)$ is a convex cone, \begin{equation}\label{eq:QPsenkrecht} \dual{A(I-\Pi)\xi}{\Pi(\xi)} = 0 \quad \forall\, \xi \in V. \end{equation} Moreover by construction, we find $G = \Pi \circ A^{-1}$. Thus Lemma \ref{lem:bstatV} implies for every $h \in V^*$ that \begin{equation}\label{eq:gradineqmod} \begin{aligned} 0 &\leq \partial_y J(\bar y,\bar u)G(h) - \dual{\bar p}{h}\\ &= \dual{G(h)}{A^*q} - \dual{A A^{-1} h}{\bar p}\\ &= \dual{A\Pi(A^{-1}h)}{q - \bar p} - \dual{A(I - \Pi)(A^{-1}h)}{\bar p}\\ &= \begin{aligned}[t] & \dual{\Pi(A^{-1} h)}{A^*(q - \bar p)}\\ & - \dual{A(I - \Pi)(A^{-1} h)}{\Pi(\bar p)} - \dual{A(I - \Pi)(A^{-1} h)}{(I-\Pi)(\bar p)}. \end{aligned} \end{aligned} \end{equation} If we insert $h = A(I-\Pi)\bar p \in V^*$, then \eqref{eq:idem} and \eqref{eq:QPsenkrecht} yield \begin{equation*} \dual{A(I-\Pi)\bar p}{(I-\Pi)\bar p} \leq 0. \end{equation*} The coercivity of $A$ then implies $\bar p = \Pi(\bar p)$ and thus $\bar p\in \KK(\bar y)$, i.e. \begin{equation*} \begin{aligned} \bar p(x) &= 0 & & \text{a.e., where } |\bar q(x)| < 1,\\ \bar p(x)\bar q(x) &\geq 0 & & \text{a.e., where } |\bar q(x)| = 1 \text{ and } \bar y(x) = 0. \end{aligned} \end{equation*} Next we define $z\in V$ by \begin{equation}\label{eq:zgl} \dual{A z}{v} = \dual{v}{A^*(\bar p - q)}\quad \forall\, v\in V \end{equation} and insert $h = A\Pi(z)\in V$ in \eqref{eq:gradineqmod}. Together with \eqref{eq:idem}, \eqref{eq:zgl}, and \eqref{eq:QPsenkrecht}, we obtain in this way \begin{equation*} 0\leq \dual{\Pi(z)}{A^*(q - \bar p)} = -\dual{Az}{\Pi(z)} = -\dual{A\Pi(z)}{\Pi(z)} \end{equation*} so that $\Pi(z) = G(Az)= 0$ by the coercivity of $A$. 
Consequently, the definition of $G$ in \eqref{eq:Gdef} leads to \begin{equation*} \dual{A z}{v} \leq 0 \quad \forall\, v\in \KK(\bar y) \quad \Longrightarrow \quad \dual{A^*\bar p}{v} \leq \dual{A^* q}{v} = \dual{\partial_y J(\bar y,\bar u)}{v} \quad \forall\, v\in \KK(\bar y). \end{equation*} By defining $\bar\mu := \partial_y J(\bar y,\bar u) - A^*\bar p \in V^*$ we therefore arrive at \begin{equation*} \begin{aligned} A^* \bar p &= \partial_y J(\bar y,\bar u) - \bar\mu \quad \text{in } V^*\\ \dual{\bar\mu}{v} &\geq 0 \quad \forall\, v\in \KK(\bar y). \end{aligned} \end{equation*} All in all we have thus proven the following: \begin{theorem} Assume that Assumption \ref{assu:strong1} holds. Suppose moreover that $\bar u$ is a local optimum which satisfies Assumption \ref{assu:strong2}. Then there exists an adjoint state $\bar p\in V$ and a multiplier $\bar\mu \in V^*$ such that the following \emph{strong stationarity system} is fulfilled: \begin{subequations}\label{eq:strongstat} \begin{gather} A \bar y + \bar q = \bar u \quad \text{in } V^* \label{eq:state1}\\ \bar q(x) \,\bar y(x) = |\bar y(x)|, \quad |\bar q(x)| \leq 1\quad \text{a.e.\ in }\Omega \label{eq:state2}\\[1mm] A^* \bar p = \partial_y J(\bar y,\bar u) - \bar\mu \quad \text{in } V^* \label{eq:adjoint}\\ \bar p\in \KK(\bar y),\quad \dual{\bar\mu}{v} \geq 0 \quad \forall\, v\in \KK(\bar y) \label{eq:compl}\\[1mm] \bar p + \partial_u J(\bar y,\bar u) = 0\label{eq:gradeq} \end{gather} \end{subequations} with $\KK(\bar y)$ as defined in \eqref{eq:KKae}. \end{theorem} \begin{remark} A comparable result for optimal control problems governed by VIs of the first kind is known as strong stationarity conditions, see \cite{hintermuller2009mathematical}. This is why we have chosen the same terminology here. \end{remark} \begin{remark} We compare the optimality system \eqref{eq:strongstat} with results from \cite{Delosreyes2009} obtained via regularization and subsequent limit analysis.
The optimality system obtained in \cite{Delosreyes2009} coincides with \eqref{eq:strongstat} except that \eqref{eq:compl} is replaced by \begin{equation}\label{eq:cstat} \dual{\bar\mu}{\bar p} \geq 0, \quad \dual{\bar\mu}{\bar y} = 0. \end{equation} However, since the slackness relation \eqref{eq:state2} yields $\bar y = 0$ a.e.\ where $|\bar q| < 1$, we have $\pm\bar y \in \KK(\bar y)$, so that these relations are an immediate consequence of \eqref{eq:compl}. The optimality system in \eqref{eq:strongstat} is therefore sharper than the one obtained via regularization. We point out, however, that the analysis in \cite{Delosreyes2009} does not require the restrictive Assumptions \ref{assu:active} and \ref{assu:ycont} and in addition applies to more general VIs of the second kind. \end{remark} \section{An inexact trust-region algorithm} In this section we propose an inexact trust-region algorithm for the solution of the finite-dimensional optimization problem: \begin{align} \min ~& J(y,u)\\ \text{subject to: }& \langle Ay,v-y \rangle +g|v|_1- g|y|_1 \geq \langle u, v-y\rangle, \text{ for all } v \in \mathbb R^n, \end{align} with $g>0$. The main difficulty of the method consists in computing a descent direction along which the algorithm has to perform the next step. In the case of an empty biactive set, the derivative information is given by \eqref{eq:etanull}-\eqref{eq:rableq}. From the latter, existence of an adjoint state can be proved and an adjoint calculus may be performed. Since the information so obtained does not necessarily correspond to an element of the subdifferential, in case of a non-empty biactive set, we apply a trust-region scheme to provide robust iterates. In this context the adjoint related gradient is considered as an inexact version of a descent direction. Since in the applications we focus on, the biactive set is either empty or very small, such an approach is justified from the numerical point of view.
Indeed, by assuming that the biactive set $$B=\{ i: y_i=0, |q_i|=1 \}$$ is empty, the solution operator is G\^ateaux differentiable and the directional derivative $\eta=S'(u)h$ corresponds to the solution of the following system of equations: \begin{align*} \eta_i=0 &\text{ for }i:y_i=0,\\ \sum_{j:y_j \not =0} A_{i,j} \eta_j =h_i &\text{ for }i:y_i \not =0. \end{align*} To simplify the description of the algorithm, we confine ourselves to a quadratic cost functional of the form $J(y,u) = 1/2\, \|y - z\|^2 + \alpha/2\, \|u\|^2$, where $\|\,.\,\|$ denotes the Euclidean norm and $z\in \R^n$ is a given desired state. Considering the reduced cost functional $$j(u)= \frac{1}{2} \|S(u)-z\|^2 + \frac{\alpha}{2} \|u\|^2,$$ the directional derivative is given by $$j'(u)h= (S(u)-z, S'(u)h)+ \alpha (u,h)= \sum_i (y_i-z_i)\eta_i + \alpha \sum_i u_i h_i.$$ Let us recall that the inactive set is given by $\mathcal I :=\{ i \in \{1, \dots, n \}: y_i \not =0 \}$. By reordering the indices such that the active and inactive ones occur in consecutive order, and defining the adjoint state $p \in \mathbb R^n$ as the solution to the system: $$\begin{pmatrix} I &0\\ 0 & A_{\mathcal I}^T \end{pmatrix} p = y-z, $$ where $A_{\mathcal I}$ corresponds to the block of $A$ with indices $i, j$ such that $y_i \not = 0,y_j \not = 0$, we obtain that $$j'(u)h= \sum_{i \in \mathcal I}p_i h_i + \alpha \sum_i u_i h_i$$ or, equivalently, $j'(u)_i=\begin{cases} \alpha u_i & \text{ if } i \not \in \mathcal I\\ p_i+ \alpha u_i & \text{ if } i \in \mathcal I. \end{cases} $ Before stating the trust-region algorithm, let us introduce some notation to be used. The quadratic model of the reduced cost function is given by $$q_k(s)=j(u_k)+ g_k^T s+ \frac{1}{2} s^T H_k s,$$ where $g_k=j'(u_k)$ and $H_k$ is a matrix with second order information, obtained with the BFGS method.
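To make the adjoint calculus above concrete, the following minimal NumPy sketch computes $j'(u)$ via the adjoint state $p$, assuming the biactive set is empty; the function name, the dense linear algebra, and the test data are ours, purely illustrative, and not taken from the implementation used in the experiments below.

```python
import numpy as np

def reduced_gradient(A, y, u, z, alpha):
    """Adjoint-based gradient of j(u) = 1/2 ||S(u)-z||^2 + alpha/2 ||u||^2,
    assuming the biactive set is empty (so S is Gateaux differentiable)."""
    inactive = y != 0                        # inactive set I = {i : y_i != 0}
    p = (y - z).astype(float)                # identity block: p_i = (y - z)_i off I
    A_I = A[np.ix_(inactive, inactive)]      # block A_I of A (rows and columns in I)
    p[inactive] = np.linalg.solve(A_I.T, (y - z)[inactive])
    grad = alpha * np.asarray(u, dtype=float)
    grad[inactive] += p[inactive]            # j'(u)_i = p_i + alpha u_i for i in I
    return grad
```

On the active indices the gradient reduces to $\alpha u_i$, exactly as in the case distinction above; in the trust-region iteration this vector plays the role of $g_k$.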
The trust region radius is denoted by $\Delta_k$ and the actual and predicted reductions are given by $$ared_k(s^k):=j(u_k)-j(u_k+s^k) \text{ and }pred_k(s^k)=j(u_k)-q_k(s^k), \text{ respectively.}$$ The quality indicator is computed by $$\rho_k(s^k)=\frac{ared_k(s^k)}{pred_k(s^k)}.$$ The resulting trust region algorithm (of dogleg type) is given through the following steps: \paragraph{\textbf{Trust region algorithm}} \begin{enumerate} \item Choose the parameter values $0<\eta_1<\eta_2<1$, $0<\gamma_0<\gamma_1<1<\gamma_2$, $\Delta_{min}\geq0.$ \item Choose the initial iterate $u_0\in\mathbb{R}^n$ and the trust region radius $\Delta_0>0,\ \Delta_0\geq\Delta_{min}\geq0.$ \item Compute the Cauchy step $s_c^k=-t^* g_k,$ where $$t^*=\left\{ \begin{matrix} \displaystyle\frac{\Delta_k}{||g_k||},\ \ \text{ if }\ g_k^\top H_kg_k\leq0\\ \\ \min\left(\displaystyle\frac{||g_k||^2}{g_k^\top H_kg_k},\displaystyle\frac{\Delta_k}{||g_k||}\right),\ \ \text{ if }\ g_k^\top H_k g_k>0 \end{matrix} \right.$$ and the Newton step $s^k_n= -H_k^{-1} g_k$. If $s^k_n$ satisfies the fraction of Cauchy decrease condition, i.e., $$\exists\, \delta \in (0,1] \text{ and } \beta \geq 1 \text{ such that }\|s^k_n\| \leq \beta \Delta_k \text{ and }pred_k(s^k_n) \geq \delta ~pred_k(s_c^k),$$ then set $s^k=s^k_n$; else set $s^k=s^k_c$. \item If $\rho_k(s^k)>\eta_2$, then \[ u_{k+1}=u_k+s^k,\ \ \ \Delta_{k+1}\in\left[\Delta_k,\gamma_2\Delta_k\right] \] Else if $\rho_k(s^k)\in(\eta_1,\eta_2)$, then \[ u_{k+1}=u_k+s^k,\ \ \ \Delta_{k+1}\in\left[\max(\Delta_{min},\gamma_1\Delta_k),\Delta_k\right] \] Else if $\rho_k(s^k)\leq\eta_1$, then \[ u_{k+1}=u_k,\ \ \ \Delta_{k+1}\in\left[\gamma_0\Delta_k,\gamma_1\Delta_k\right] \] Repeat until the stopping criterion is satisfied.
\end{enumerate} \subsection{Example} We consider as test example the following finite-dimensional optimization problem: \begin{align} \min ~& J(y,u)=\frac{1}{2} \|y-z\|^2 + \frac{\alpha}{2} \|u\|^2\\ \text{subject to: }& \langle Ay,v-y \rangle +g|v|_1- g|y|_1 \geq \langle u, v-y\rangle, \text{ for all } v \in \mathbb R^n, \label{numerics VI} \end{align} where $A$ corresponds to the finite-difference discretization matrix of the negative Laplace operator in the two dimensional domain $\Omega=]0,1[^2$, $z =10 \sin(5 x_1) \cos(4 x_2)$ stands for the desired state and $\alpha$ and $g$ are positive constants. It is expected that as $g$ becomes larger the solution becomes sparser. For solving \eqref{numerics VI} within the trust region algorithm a semismooth Newton method is used. The method is built upon a huberization of the $l_1$ norm and the use of dual information. Specifically, we consider the solution of the regularized system: \begin{align} & Ay +q=u\\ & q-h_\gamma(y)=0, \end{align} where $\left( h_\gamma(y) \right)_i= g\frac{\gamma y_i}{\max(g, \gamma |y_i|)}$. Considering a generalized derivative of the max function, the following system has to be solved in each semismooth Newton iteration: \begin{align} & A \delta_y +\delta_q=u-Ay -q\\ & \delta_q-\frac{\gamma \delta_y}{\max(g,\gamma|y|)}+ \diag(\chi_{\mathcal I_\gamma}) \frac{\gamma^2 y^T \delta_y}{\max(g,\gamma|y|)^2} \frac{y}{|y|}=-q+h_\gamma(y), \label{eq:ssn2} \end{align} where $\left( \chi_{\mathcal I_\gamma} \right)_i:=\begin{cases}1 &\text{ if }\gamma |y_i| \geq g,\\ 0 &\text{ if not,} \end{cases}$ $\max(g,\gamma |y|):=\left( \max(g,\gamma |y_1|),\dots,\max(g,\gamma |y_n|) \right)^T$ and the division is to be understood componentwise.
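As an illustration of the huberization just described, the componentwise map $h_\gamma$ and the indicator $\chi_{\mathcal I_\gamma}$ can be sketched as follows; this is a minimal vectorized NumPy version under our own naming, not the implementation used in the experiments.

```python
import numpy as np

def h_gamma(y, g, gamma):
    """Componentwise huberized dual variable:
    (h_gamma(y))_i = g * gamma * y_i / max(g, gamma * |y_i|)."""
    return g * gamma * y / np.maximum(g, gamma * np.abs(y))

def chi_I_gamma(y, g, gamma):
    """Indicator chi_{I_gamma}: 1 where gamma * |y_i| >= g, else 0."""
    return (gamma * np.abs(y) >= g).astype(float)
```

Note that for $\gamma|y_i| \geq g$ the quotient equals $g\,\mathrm{sign}(y_i)$, so $h_\gamma(y)_i$ saturates at $\pm g$, mimicking the subdifferential of $g\,|\cdot|_1$ as $\gamma$ grows.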
By using dual information in the iteration matrix (as in \cite{HintermuellerStadler2007}, \cite{dlRe2010}) the following modified version of \eqref{eq:ssn2} is obtained: \begin{equation} \delta_q-\frac{\gamma \delta_y}{\max(g,\gamma|y|)}+ \diag(\chi_{\mathcal I_\gamma}) \frac{\gamma^2 y^T \delta_y}{\max(g,\gamma|y|)^2} \frac{q}{\max(g,|q|)}=-q+h_\gamma(y). \label{eq:ssn2.1} \end{equation} This leads to a globally convergent iterative algorithm, which converges locally at a superlinear rate. The trust region parameter values used are $\eta_1=0.25$, $\eta_2=0.75$, $\gamma_1=0.5$, $\gamma_2=1.5$ and $\beta=1$. For the parameter values $\alpha=0.0001$ and $g=15$, and the mesh size step $h=1/80$, the algorithm requires a total number of 35 iterations to converge, with the stopping criterion $\|u_{k+1}-u_k\| \leq 10^{-4}$. The optimized state is shown in Figure \ref{Bsp1E}, where a large zone in which the state takes the value zero can be observed. \begin{figure}[ht!] \centering \includegraphics[height=6cm]{controlledstate.png} \caption{Optimized state: in the left corner the sparse structure of the solution can be observed.}\label{Bsp1E} \end{figure} The algorithm was also tested for other values of the parameters $\alpha$ and $g$, yielding the convergence behaviour reported in Table \ref{table1}. Although the considered derivative information was inexact, the trust-region approach yields convergence in a relatively small number of iterations. \begin{table} \centering \begin{tabular}{|l|r|r|r|r|} \backslashbox{$\alpha$}{$g$} &1 &5 &10 &15\\ \hline 0.1 &20 &28 &53 &- \\ 0.01 &23 &24 &27 &32\\ 0.001 &33 &48 &54 &31\\ 0.0001 &69 &70 &62 &34 \end{tabular} \caption{Number of trust-region iterations for different $\alpha$ and $g$ values. Mesh size step $h=1/40$.}\label{table1} \end{table} Further descent type directions to be used in the context of the trust-region methodology, as well as the convergence theory of the combined approach, will be investigated in future work.
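The trust region steps listed above can be sketched in a few lines; the quadratic test problem below is hypothetical (a fixed SPD $2\times2$ system, so $H_k$ is exact), and this is an illustration only, not the implementation used for the numerical example:

```python
import math

# Hypothetical test problem: j(u) = 1/2 u^T A u - b^T u with SPD A, so
# g_k = A u - b, H_k = A; the minimizer solves A u = b, i.e. u* = (0.2, 0.4).
A = [[3.0, 1.0], [1.0, 2.0]]
b = [1.0, 1.0]

def matvec(v):
    return [A[0][0]*v[0] + A[0][1]*v[1], A[1][0]*v[0] + A[1][1]*v[1]]

def dot(v, w):
    return v[0]*w[0] + v[1]*w[1]

def j(u):
    return 0.5*dot(u, matvec(u)) - dot(b, u)

def grad(u):
    Au = matvec(u)
    return [Au[0] - b[0], Au[1] - b[1]]

def newton_step(g):
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]          # 2x2 solve by hand
    return [-(A[1][1]*g[0] - A[0][1]*g[1]) / det,
            -(-A[1][0]*g[0] + A[0][0]*g[1]) / det]

def pred(s, g):
    # predicted reduction of the quadratic model q_k
    return -(dot(g, s) + 0.5*dot(s, matvec(s)))

def trust_region(u, delta=1.0, d_min=0.0, eta1=0.25, eta2=0.75,
                 gam0=0.25, gam1=0.5, gam2=1.5, beta=1.0, tol=1e-10):
    for _ in range(100):
        g = grad(u)
        ng = math.sqrt(dot(g, g))
        if ng < tol:
            break
        gHg = dot(g, matvec(g))
        t = delta/ng if gHg <= 0 else min(dot(g, g)/gHg, delta/ng)
        s_c = [-t*g[0], -t*g[1]]                     # Cauchy step
        s_n = newton_step(g)                         # Newton step
        # fraction of Cauchy decrease test (here with delta_FCD = 1)
        ok = math.sqrt(dot(s_n, s_n)) <= beta*delta and pred(s_n, g) >= pred(s_c, g)
        s = s_n if ok else s_c
        rho = (j(u) - j([u[0]+s[0], u[1]+s[1]])) / pred(s, g)
        if rho > eta2:
            u, delta = [u[0]+s[0], u[1]+s[1]], gam2*delta
        elif rho > eta1:
            u, delta = [u[0]+s[0], u[1]+s[1]], max(d_min, gam1*delta)
        else:
            delta = max(d_min, gam0*delta)
    return u

print(trust_region([0.0, 0.0]))   # -> [0.2, 0.4]
```

For a quadratic $j$ the model is exact ($\rho_k=1$), so the Newton step is accepted immediately; the safeguarding only becomes active for genuinely nonlinear or nonsmooth problems such as \eqref{numerics VI}.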
\begin{appendix} \section{Directional derivative of the $L^1$-norm}\label{sec:l1deriv} \begin{proofof}{Lemma \ref{lem:l1deriv}} The definitions of $\AA$ and $\absop'$ imply \begin{equation}\label{eq:l1deriv1} \begin{aligned} & \Big|\int_\Omega \Big(\frac{|y_n| - |y|}{t_n} - \absop'(y;\eta)\Big) \varphi\, dx\Big|\\ &\quad \leq \Big|\int_\Omega \frac{|y_n| - |y + t_n\eta|}{t_n} \, \varphi\, dx\Big| + \underbrace{\Big|\int_{\AA} \Big(\frac{|y + t_n\eta| - |y|}{t_n} - |\eta|\Big) \varphi\, dx\Big|}_{\displaystyle{= 0}}\\ &\qquad + \Big|\int_{\Omega\setminus\AA} \Big(\frac{|y + t_n \eta| - |y|}{t_n} - \sign(y)\eta\Big) \varphi\, dx\Big|. \end{aligned} \end{equation} By the compact embedding $H^1(\Omega)\embed\embed L^2(\Omega)$ we have \begin{equation*} \frac{y_n - y}{t_n} \to \eta \text{ in } L^2(\Omega) \end{equation*} and thus \begin{equation}\label{eq:l1deriv2} \begin{aligned} \Big|\int_\Omega \frac{|y_n| - |y + t_n\eta|}{t_n} \, \varphi\, dx\Big| &\leq \int_\Omega \Big|\frac{y_n - y}{t_n} - \eta\Big| \, |\varphi|\, dx\\ &\leq \Big\|\frac{y_n - y}{t_n} - \eta\Big\|_{L^1(\Omega)} \, \|\varphi\|_{L^\infty(\Omega)} \to 0. \end{aligned} \end{equation} Let $x \in \Omega\setminus\AA$ be an arbitrary common Lebesgue point of $y$ and $\eta$. Then the directional differentiability of $\R \ni r \mapsto |r| \in \R$ yields \begin{equation*} \frac{|y(x) + t_n \eta(x)| - |y(x)|}{t_n} - \sign\big(y(x)\big)\eta(x) \to 0, \end{equation*} and, since almost all points in $\Omega$ are common Lebesgue points of $y$ and $\eta$, this pointwise convergence holds almost everywhere in $\Omega\setminus\AA$. Due to \begin{equation*} - 2 |\eta(x)|\leq \frac{|y(x) + t_n \eta(x)| - |y(x)|}{t_n} - \sign\big(y(x)\big)\eta(x) \leq 2 |\eta(x)| \quad \text{a.e.\ in }\Omega, \end{equation*} Lebesgue's dominated convergence theorem thus gives \begin{equation*} \frac{|y + t_n \eta| - |y|}{t_n} - \sign(y)\eta \to 0 \text{ in } L^1(\Omega\setminus\AA).
\end{equation*} Therefore, we arrive at \begin{multline}\label{eq:l1deriv3} \Big|\int_{\Omega\setminus\AA} \Big(\frac{|y + t_n \eta| - |y|}{t_n} - \sign(y)\eta\Big) \varphi\, dx\Big|\\ \leq \Big\|\frac{|y + t_n \eta| - |y|}{t_n} - \sign(y)\eta\Big\|_{L^1(\Omega\setminus\AA)} \, \|\varphi\|_{L^\infty(\Omega)} \to 0. \end{multline} Inserting \eqref{eq:l1deriv2} and \eqref{eq:l1deriv3} in \eqref{eq:l1deriv1} yields the assertion. \end{proofof} \section{Boundedness for functions in $H^1(\Omega)$}\label{sec:stam} For the convenience of the reader, we prove Lemma \ref{lem:stam}. The arguments are classical and go back to \cite{Stampacchia/Kinderlehrer}. \begin{proofof}{Lemma \ref{lem:stam}} The truncated function defined in \eqref{eq:truncfunc} is equivalent to \begin{equation*} w_k(x) = w(x) - \min\big(\max(w(x),-k),k\big) \end{equation*} and therefore \cite[Theorem A.1]{Stampacchia/Kinderlehrer} implies $w_k \in V$. It remains to verify the $L^\infty$-bound in \eqref{eq:inftybound}. If $d=1$, then the assertion follows directly from \eqref{eq:stamest} and the Sobolev embedding $H^1(\Omega) \embed L^\infty(\Omega)$. So assume that $d\geq 2$. Let $k \geq 0$ be given and set $A(k) :=\{x\in \Omega\;|\;|w(x)|\geq k\}$. Note that $w_k(x) = 0$ a.e.\ in $\Omega\setminus A(k)$. Next let $h \geq k$ be arbitrary, so that $|w(x)| \geq h \geq k$ a.e.\ in $A(h)$. Then Sobolev embeddings give that \begin{equation}\label{eq:stam1} \begin{aligned} \|w_k\|_{H^1(\Omega)}^2 \geq c\, \|w_k\|_{L^m(\Omega)}^2 &= c\Big(\int_{A(k)} \big| |w| - k \big|^m dx\Big)^{2/m}\\ &\geq c\Big(\int_{A(h)}(h-k)^m dx\Big)^{2/m} = c\,(h-k)^2 |A(h)|^{2/m}, \end{aligned} \end{equation} where $m = 2d/(d-2)$, see e.g.\ ... On the other hand, \eqref{eq:stamest} implies \begin{equation*} \alpha\,\|w_k\|^2 \leq \int_{A(k)} f\,w_k\,dx \leq \|f\|_{L^{m'}(A(k))} \, \|w_k\|_{L^m(A(k))} \leq c\,\|f\|_{L^{m'}(A(k))} \, \|w_k\|_{H^1(\Omega)}, \end{equation*} where $m'$ is the conjugate exponent to $m$, i.e.\ $1/m + 1/m' = 1$.
Note that \begin{equation*} m' = \frac{m}{m-1} = \frac{d}{d/2 + 1} \leq \frac{d}{2} < p, \quad \text{if } d \geq 2, \end{equation*} and thus $f\in L^{m'}(\Omega)$ by the assumption on $f$ in Lemma \ref{lem:stam}. H\"older's and Young's inequalities then yield \begin{equation}\label{eq:stam2} \|w_k\|^2 \leq c\Big(\int_{A(k)} |f|^{m'}\,dx\Big)^{2/m'} \leq c\,\|f\|_{L^p(\Omega)}^2\,|A(k)|^{2r/m'} \end{equation} with $r = p/(p-m') \geq 1$ so that $r' = r/(r-1) = p/m'$. By setting \begin{equation}\label{eq:expo} s = \frac{m}{m'}\, r = \frac{p}{(m'-1)(p-m')} \end{equation} we infer from \eqref{eq:stam1} and \eqref{eq:stam2} that \begin{equation}\label{eq:stam3} |A(h)|^{2/m} \leq c\,\|f\|_{L^p(\Omega)}^2\,\frac{1}{(h-k)^2} \, \big(|A(k)|^{2/m}\big)^s\quad \text{for all } h > k \geq 0. \end{equation} Since $m > 2$, we have $m'< 2$ and therefore $(m'-1)(p-m') < p-m' < p$, such that \eqref{eq:expo} gives in turn $s>1$. In this case, according to \cite[Lemma B.1]{Stampacchia/Kinderlehrer}, it follows from \eqref{eq:stam3} that the nonnegative and non-increasing function $\R\ni h \mapsto |A(h)|^{2/m} \in \R$ admits a zero at \begin{equation*} h^* = 2^{s/(s-1)} \sqrt{c \,|\Omega|^{2(s-1)/m}}\, \|f\|_{L^p(\Omega)}. \end{equation*} By definition, $|A(h^*)| = 0$ is equivalent to $|w(x)| \leq h^*$ a.e.\ in $\Omega$, which yields the assertion. \end{proofof} \end{appendix} \section*{Acknowledgement} The authors would like to thank Gerd Wachsmuth (TU Chemnitz) for his hint concerning strong stationarity. This work was supported by a DFG grant within the Collaborative Research Center SFB 708 (\emph{3D-Surface Engineering of Tools for Sheet Metal Forming – Manufacturing, Modeling, Machining}), which is gratefully acknowledged. \bibliographystyle{plain} \bibliography{biblio} \end{document}
{"config": "arxiv", "file": "1404.4787/r_abl_VI2_2014.tex"}
TITLE: Does the antidiagonal in this square matrix always contain a prime? QUESTION [5 upvotes]: Does the antidiagonal in the square matrix with entries $1,2,\ldots,n^2$ row by row in that order always contain a prime? For example: For n=2: $\begin{bmatrix}1 & 2\\3 & 4\end{bmatrix}$ the antidiagonal contains two primes (2,3). For n=3: $\begin{bmatrix}1 & 2&3\\4&5&6\\ 7&8& 9\end{bmatrix}$ the antidiagonal contains three primes (3,5,7). etc... Is there a matrix for n>1 that doesn't contain a prime in the antidiagonal? REPLY [1 votes]: While it may be hard to prove precise results, it is extremely likely that there are quite a lot of primes on this antidiagonal. If $a$ is an integer small relative to $n$, then the number of primes $p \equiv 1 \pmod a$ with $p \leq n^{2}$ is of the order of $\frac{1}{\phi(a)}\frac{n^{2}}{\log n^{2}}$ for $n$ large enough. This is the quantitative version of Dirichlet's theorem on primes in arithmetic progressions, and it is a proven result. Here we are looking at the case $a = n-1$, which is not small relative to $n$. Nevertheless, it would be interesting to compare the number of primes on the antidiagonal with $$\left( \prod_{\text{prime } p \,\mid\, n-1} \frac{p}{p-1} \right) \frac{n}{2 \log n}$$ for $n$ large, and to examine the limiting behaviour of the ratio of the two quantities.
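A quick computational check: the antidiagonal of the row-by-row matrix consists of the numbers $i(n-1)+1$ for $i=1,\dots,n$, all congruent to $1 \pmod{n-1}$, and a brute-force search finds no counterexample among small $n$:

```python
def is_prime(m):
    # simple trial division, fine for the small values involved here
    if m < 2:
        return False
    if m % 2 == 0:
        return m == 2
    f = 3
    while f * f <= m:
        if m % f == 0:
            return False
        f += 2
    return True

def antidiagonal(n):
    # Row i (1-indexed) holds (i-1)*n + 1, ..., i*n, so the antidiagonal
    # entry in row i is (i-1)*n + (n+1-i) = i*(n-1) + 1.
    return [i * (n - 1) + 1 for i in range(1, n + 1)]

print(antidiagonal(2))  # -> [2, 3]
print(antidiagonal(3))  # -> [3, 5, 7]
# search for an n whose antidiagonal contains no prime
print([n for n in range(2, 40) if not any(is_prime(m) for m in antidiagonal(n))])  # -> []
```

Of course this only rules out small counterexamples; the heuristic in the answer suggests (but does not prove) that none exist.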
{"set_name": "stack_exchange", "score": 5, "question_id": 227671}
TITLE: Explain why, if x, y and z is a primitive Pythagorean triple, then either x or y must be odd QUESTION [1 upvotes]: "Three positive integers $x$, $y$ and $z$ are called a Pythagorean triple if $x^2 + y^2 = z^2$. A Pythagorean triple is called primitive if the only positive integer that is a factor of all three integers $x$, $y$ and $z$ is $1$. Explain why, if $x$, $y$ and $z$ is a primitive Pythagorean triple, either $x$ or $y$ must be odd." - From book An exercise from my proof book that I'm confused about. I'm a beginner any help would be appreciated REPLY [2 votes]: In fact $x$ and $y$ must be coprime. If they have any common factor $d$ then $d^2$ divides $x^2$ and $y^2$ and thus also divides $z^2$ and so $d$ divides $z$, contradicting the definition of a primitive triple. The particular case of any even $d$, including $d=2$, shows that both $x$ and $y$ cannot be even. The reason it is worth thinking about primitive triples is that each such $(x,y,z)$ generates a whole family of triples $(kx,ky,kz), k\in \Bbb N$. Of course when $k$ is even, the whole resulting triple is even.
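The parity claim is easy to confirm by brute force; a small sketch enumerating primitive triples and checking that $x$ and $y$ are never both even (in fact exactly one of them is even, and $z$ is odd):

```python
from math import gcd

def primitive_triples(limit):
    """All primitive Pythagorean triples (x, y, z) with z <= limit and x < y."""
    triples = []
    for z in range(2, limit + 1):
        for x in range(1, z):
            for y in range(x + 1, z):
                if x*x + y*y == z*z and gcd(gcd(x, y), z) == 1:
                    triples.append((x, y, z))
    return triples

for x, y, z in primitive_triples(50):
    # never both even; exactly one of x, y is even and z is odd
    assert x % 2 + y % 2 == 1 and z % 2 == 1

print(primitive_triples(15))  # -> [(3, 4, 5), (5, 12, 13)]
```

Note that non-primitive triples like $(6,8,10)$ are correctly filtered out by the gcd test.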
{"set_name": "stack_exchange", "score": 1, "question_id": 2427497}
TITLE: Symmetric random walk with bounds QUESTION [4 upvotes]: can anyone help me with this: We are considering a symmetric random walk that ends if level 3 is reached or level -1 is reached. Start=0 What is the expected number of steps? So I am looking for: $E[{\tau}]$ with $\tau$=the stopping time. REPLY [0 votes]: $\newcommand{\Z}{\mathbb{Z}}\newcommand{\N}{\mathbb{N}}\newcommand{\E}{\mathbb{E}}\newcommand{\F}{\mathcal{F}}\newcommand{\P}{\mathrm{P}}$As @Did pointed out in a comment above, the optional stopping theorem (OST) holds under relaxed uniform integrability assumptions. I was unaware of this more general statement, so I used the more classical statement of the OST, which requires $\tau$ to be a.s. bounded, to compute the expected stopping time. I found these lecture notes and this book where the OST is stated under UI conditions (and the proof is not too difficult). In any case, I had already derived the result using the version of the OST I knew, so here it is: The a.s. boundedness assumption often fails to hold in practice. However, for every fixed $N\in\N$, $\tau \wedge N$ is a bounded stopping time. The idea is to apply the OST to $\tau \wedge N$ and take $N\to\infty$. Problem. Let $(X_n)_n$ be the standard symmetric random walk with $X_0=x$ and let $a < x < b$ with $a,b\in\Z$. Define $\tau = \inf\{n\in\N {}\mid{} X_n \in \{a,b\}\}$. Find $\E[\tau]$. Step 1. We will show that $Y_n = X_n^2 - n$ is a martingale with respect to the natural filtration of $X_n$, $(\F_n)_n$. Indeed, $Y_n$ is $\F_n$-measurable. Since $|X_n|\leq |x| + n$, we have $\E[|Y_n|] \leq \E[X_n^2] + n \leq (|x|+n)^2 + n < \infty$. Writing $X_{n+1} = X_n + W_{n+1}$, where the increments satisfy $\P[W_{n+1}=\pm1]=\frac12$ and $W_{n+1}$ is independent of $\F_n$, we obtain \begin{align} \E[Y_{n+1}\mid \F_n] {}={}& \E[X_{n+1}^2-n-1 {}\mid{} \F_n]\\ {}={}& \E[(X_{n}+W_{n+1})^2-n-1 {}\mid{} \F_n]\\ {}={}& \E[X_{n}^2+W_{n+1}^2+2X_nW_{n+1}-n-1 {}\mid{} \F_n]\\ {}={}& X_{n}^2+\E[W_{n+1}^2\mid \F_n] +2X_n\E[W_{n+1}\mid \F_n]-n-1\\ {}={}& X_{n}^2-n=Y_n\\ \end{align} Step 2. Let $N\in\N$.
Since $Y_n$ is an $\F_n$-martingale and $\tau\wedge N$ is a bounded $\F_n$-stopping time (indeed, $\tau\wedge N \leq N$), we may employ the OST: \begin{align} {}&{}\E[Y_{0}] = \E[Y_{\tau \wedge N}]\\ \Leftrightarrow{}&{} x^2 {}={} \E[X_{\tau \wedge N}^2 - \tau\wedge N]\\ \Leftrightarrow{}&{} x^2 {}={} \E[X_{\tau \wedge N}^2] - \E[\tau\wedge N] \tag{1} \end{align} Step 3. Note that $\tau\wedge N \leq \tau \wedge (N+1)$, therefore, by virtue of Lebesgue's monotone convergence theorem, \begin{align} \lim_{N\to\infty}\E[\tau \wedge N] = \E[\lim_{N\to\infty} \tau \wedge N] = \E[\tau]\tag{2} \end{align} Step 4. We need to compute $\lim_{N\to\infty} \E[X_{\tau \wedge N}^2]$. We will use a known fact for random walks: for $m\in\Z$ and starting point $x$, let $\tau(m,x) = \inf\{n \in \N {}\mid{} X_n = m,\ \text{with }X_0 = x\}$. Then, for $a<x<b$, \begin{align} \P[\tau(b,x) < \tau(a,x)] = \frac{x-a}{b-a}.\tag{3} \end{align} This is easy to prove. We also have that \begin{align} \E[X_{\tau\wedge N}^2] = \E[X_{\tau}^2 1_{\tau < N}] + \E[X_{N}^2 1_{\tau \geq N}]\tag{4} \end{align} For the second term in (4) we have that $0 \leq \E[X_N^2 1_{\tau \geq N}] \leq \max\{a^2, b^2\} \E[1_{\tau \geq N}] \to 0$ as $N\to\infty$. For the first term, we have \begin{align} \lim_{N\to\infty} \E[X_{\tau}^2 1_{\tau < N}] {}={}& a^2 \P[\tau(a,x) < \tau(b,x)] + b^2 \P[\tau(b,x) < \tau(a,x)]\\ {}={}& a^2 \left(1 - \frac{x-a}{b-a}\right) + b^2 \frac{x-a}{b-a}\\ {}={}& x(a+b) - ab\tag{5} \end{align} Therefore, from all the above we have \begin{align} \E[\tau] = (b-x)(x-a)\tag{*} \end{align}
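For the concrete question (start $0$, absorbing barriers at $-1$ and $3$), the formula gives $\E[\tau]=(b-x)(x-a)=(3-0)(0-(-1))=3$. This can be cross-checked without simulation by first-step analysis, i.e. solving $e(x)=1+\tfrac12 e(x-1)+\tfrac12 e(x+1)$ for $a<x<b$ with $e(a)=e(b)=0$; the small solver below is my own illustration, not part of the derivation above:

```python
def expected_steps(a, b, x, sweeps=20000):
    """Solve e(k) = 1 + (e(k-1) + e(k+1))/2, e(a) = e(b) = 0, by
    Gauss-Seidel sweeps (in-place updates); converges geometrically."""
    e = {k: 0.0 for k in range(a, b + 1)}
    for _ in range(sweeps):
        for k in range(a + 1, b):
            e[k] = 1.0 + 0.5 * (e[k - 1] + e[k + 1])
    return e[x]

# agrees with E[tau] = (b - x)(x - a)
print(round(expected_steps(-1, 3, 0), 6))   # -> 3.0
print(round(expected_steps(-1, 3, 1), 6))   # -> 4.0
```

The closed form $(b-x)(x-a)$ matches the solver for any small barrier configuration one cares to try.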
{"set_name": "stack_exchange", "score": 4, "question_id": 288298}
TITLE: Why is $n \log (n)$ more significant than $n^2 \log (n)$ in terms of efficiency? QUESTION [2 upvotes]: I am in a computer algorithms course where I was faced with a problem to determine the big theta efficiency of $n\log(n) + n^2\log(n)$. The solution is $\Theta(n\log(n))$. Why was this chosen over $n^2 \log(n)$? REPLY [0 votes]: Following everyone's analysis, here is an example where you can confirm their answers. Hope this helps!
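A quick numerical check of the dominance (my own illustration): the ratio of the full expression to $n^2\log(n)$ tends to $1$, while the ratio to $n\log(n)$ grows without bound, so $n^2\log(n)$ is the term that determines the asymptotic class, and the quoted $\Theta(n\log(n))$ is worth double-checking against the course materials.

```python
import math

def f(n):
    return n * math.log(n) + n**2 * math.log(n)

# middle column approaches 1 (f is Theta(n^2 log n));
# right column grows like n, so n log n is not a tight bound
for n in (10, 100, 1000, 10**6):
    print(n, f(n) / (n**2 * math.log(n)), f(n) / (n * math.log(n)))
```

Algebraically the same thing: $f(n)/(n^2\log n) = 1 + 1/n \to 1$ and $f(n)/(n\log n) = 1 + n \to \infty$.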
{"set_name": "stack_exchange", "score": 2, "question_id": 1927661}
TITLE: Derivative of Lagrangian with respect to velocity QUESTION [2 upvotes]: My question revolves around this lecture notes on page $109$ equation $(5.1.10)$. Let's stick to $\mathbb{R}^3$ and consider a particle in $3$-space with position vector $\mathbf{x} = (x, y, z)$. Denote its velocity by $\mathbf{v} = \dot{\mathbf{x}} = (\dot{x}, \dot{y}, \dot{z}).$ Basically, we have the Lagrangian describing the particle: \begin{equation} L = -mc^2 \sqrt{1 - \frac{v^2}{c^2}}, \end{equation} where $m$ is the mass of the particle, $c$ is the speed of light constant and $v = |\mathbf{v}| = \sqrt{\dot{x}^2 + \dot{y}^2 + \dot{z}^2}$ is the speed of the particle. Then the author derived in the notes: \begin{equation} \frac{\partial L}{\partial \mathbf{v}} = -mc^2\left(-\frac{\mathbf{v}}{c^2}\right)\frac{1}{\sqrt{1 - v^2/c^2}} = \frac{m \mathbf{v}}{\sqrt{1 - v^2/c^2}}. \end{equation} My question is, how did the author get the first equality in the derivation? I know that I can just do it by computing this: \begin{equation} \frac{\partial L}{\partial \mathbf{v}} = \left(\frac{\partial L}{\partial \dot{x}}, \frac{\partial L}{\partial \dot{y}}, \frac{\partial L}{\partial \dot{z}}\right). \end{equation} But my question is more specific: how did the author get the first equality that fast? Is there a trick I'm missing here? REPLY [0 votes]: Use the chain rule and the fact that $v^2 = \mathbf v\cdot\mathbf v$ whence $$\frac{\partial v^2}{\partial\mathbf v} = 2 \mathbf v.$$
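One can sanity-check the resulting momentum formula numerically by central-differencing $L$ in one velocity component (the values of $m$, $c$ and $\mathbf v$ below are arbitrary test values, not from the lecture notes):

```python
import math

def L(vx, vy, vz, m=2.0, c=3.0):
    # relativistic free-particle Lagrangian L = -m c^2 sqrt(1 - v^2/c^2)
    v2 = vx*vx + vy*vy + vz*vz
    return -m * c*c * math.sqrt(1.0 - v2 / (c*c))

def dL_dvx_formula(vx, vy, vz, m=2.0, c=3.0):
    # claimed derivative: m v_x / sqrt(1 - v^2/c^2)
    v2 = vx*vx + vy*vy + vz*vz
    return m * vx / math.sqrt(1.0 - v2 / (c*c))

vx, vy, vz, h = 1.0, 0.5, 0.2, 1e-6
numeric = (L(vx + h, vy, vz) - L(vx - h, vy, vz)) / (2 * h)
print(abs(numeric - dL_dvx_formula(vx, vy, vz)) < 1e-5)  # -> True
```

The agreement confirms the chain-rule shortcut: since $L$ depends on $\mathbf v$ only through $v^2=\mathbf v\cdot\mathbf v$, differentiating through $v^2$ with $\partial v^2/\partial\mathbf v = 2\mathbf v$ gives the whole gradient at once.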
{"set_name": "stack_exchange", "score": 2, "question_id": 546882}
TITLE: In how many ways can you choose between 5 places to visit, but two times each? QUESTION [2 upvotes]: Let's suppose you want to visit 5 places, A B C D E, you also want to go to each one two times. For example: You visit $A$, after a day you visit $B$, after a day you visit $B$ again, then $C$ and so on. The thing is, you cannot visit $E$ two times in a row, you have to visit another place first. My attempt: I can initially choose between 5 places, and I got stuck from here. For the second place, I may have already visited $E$, or I may have not, and so on for the other cases. I did not find a general formula to calculate it. I would appreciate any answers or hints. REPLY [4 votes]: Let's count the number of permutations of $AABBCCDDEE$ that are invalid, i.e. some city appears twice in a row. Let $n$ be the number of cities (in this example 5) and $S_X$ be the set of strings in which city $X$ appears consecutively. Note $\bigcup_{X \in \{A, B, C, D, E\}} S_X$ is exactly the set of strings that are invalid. By the principle of inclusion and exclusion $$ |\bigcup_{X \in \{A, B, C, D, E\}} S_X | = \sum_{J \subset \{A, B, C, D, E\}} (-1)^{|J| + 1} |\bigcap_{X \in J} S_X|. $$ Note that if $J$ is a subset of $k$ cities, $\bigcap_{X \in J} S_X$ is the set of strings in which you fix $k$ of the cities to be next to each other and the remaining cities can be visited in any order. The size of this set is $(2n-k)!/2^{n-k}$ since you `glue' the two symbols for each of the $k$ cities together resulting in $2n-k$ symbols where $n-k$ of them are repeated twice. Since this value only depends on the size of the set of cities, we can rewrite the sum by grouping sets of size $k$ together: $$ \sum_{k=1}^n (-1)^{k - 1}\binom{n}{k}\frac{(2n-k)!}{2^{n-k}}.
$$ There are then $(2n)!/2^n$ total permutations so the number of valid permutations is $$ \frac{(2n)!}{2^n} - \sum_{k=1}^n (-1)^{k - 1}\binom{n}{k}\frac{(2n-k)!}{2^{n-k}}, $$ which simplifies to $$ \sum_{k=0}^n (-1)^{k}\binom{n}{k}\frac{(2n-k)!}{2^{n-k}} $$
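The inclusion-exclusion count is easy to validate against brute force for small $n$:

```python
from itertools import permutations
from math import comb, factorial

def by_formula(n):
    # sum_{k=0}^{n} (-1)^k C(n,k) (2n-k)! / 2^(n-k)
    return sum((-1)**k * comb(n, k) * factorial(2*n - k) // 2**(n - k)
               for k in range(n + 1))

def by_brute_force(n):
    # enumerate distinct arrangements of AABB...(n letters, twice each)
    letters = ''.join(chr(ord('A') + i) for i in range(n)) * 2
    valid = {p for p in permutations(letters)
             if all(p[i] != p[i + 1] for i in range(len(p) - 1))}
    return len(valid)

print(by_formula(2), by_brute_force(2))  # -> 2 2
print(by_formula(3), by_brute_force(3))  # -> 30 30
```

(Brute force beyond $n=4$ gets slow, which is exactly why the closed formula is worth having.)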
{"set_name": "stack_exchange", "score": 2, "question_id": 4304675}
TITLE: Calculating closed forms of integrals QUESTION [1 upvotes]: So I've been told that you can't find the closed form of $\int e^{-\frac{x^2}{2}}\,dx$. Apparently, you know the exact result when you integrate over the whole of $\mathbb{R}$, but every other number needs to be calculated numerically. What I'm wondering is why is this considered worse (or maybe it isn't?) than say, $\int\cos x\,dx=\sin x$. As far as I'm aware, you can calculate the exact value of $\sin$ for a very limited number of points and the rest are again calculated numerically. So why not just give a special name to the first integral, or maybe something more elementary, and say that's also a closed form? What makes this different from trigonometric functions? Edit: I'm also a bit fuzzy on why/if getting $\pi$ as a result is considered exact. It's just a name for something that we can't express in a closed form, right? REPLY [4 votes]: Why is this considered worse $($or maybe it isn't?$)$ than say $\displaystyle\int\cos x~dx=\sin x$ ? Because all integrals whose integrand consists solely of trigonometric functions $($whose arguments are integer multiples of $x)$, in combination with the four basic operations and exponentiation to an integer power, can be expressed in closed form using the Weierstrass substitution, followed by partial fraction decomposition. $\big($Of course, certain algebraic constants might appear there, which are not expressible in radicals, but this is another story$\big)$. In other words, they don't create anything new, since $\cos x=\sin\bigg(x+\dfrac\pi2\bigg)$. But the indefinite integral of a Gaussian function does create something new, namely the error function, and then, when we further integrate that, we get something even newer $\bigg($since $\displaystyle\int\text{erf}(x)^3~dx$ cannot be expressed as a combination of elementary and error functions$\bigg)$, etc. and it just never stops.
So trigonometric, hyperbolic, and exponential functions are self-contained under integration, in a way in which Gaussian ones simply aren't.
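And indeed, the integral in the question does get a special name: in terms of the error function, $\int_0^x e^{-t^2/2}\,dt = \sqrt{\pi/2}\;\operatorname{erf}(x/\sqrt2)$. A numerical cross-check against Simpson's rule:

```python
import math

def integral_via_erf(x):
    # int_0^x exp(-t^2/2) dt = sqrt(pi/2) * erf(x / sqrt(2))
    return math.sqrt(math.pi / 2) * math.erf(x / math.sqrt(2))

def simpson(f, a, b, n=1000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i*h) for i in range(1, n))
    return s * h / 3

gauss = lambda t: math.exp(-t*t/2)
print(integral_via_erf(1.0))                                           # ~0.8556
print(abs(simpson(gauss, 0.0, 1.0) - integral_via_erf(1.0)) < 1e-10)   # -> True
```

So "no closed form" really means "not expressible in the previously agreed-upon elementary functions"; once erf is admitted to the toolbox, this antiderivative is as exact as $\sin$.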
{"set_name": "stack_exchange", "score": 1, "question_id": 1188320}
TITLE: Given the following set of equations, find: $xy + 2yz + 3zx$. QUESTION [1 upvotes]: The positive reals x, y, z satisfy the equations $$x^2 + xy + \frac{y^2}{3}=25$$ $$\frac{y^2}{3}+z^2 = 9$$ $$z^2+ zx + x^2 = 16$$ Find $$xy + 2yz + 3zx$$ My understanding: What struck me first were the squares $9, 16, 25$. This is the “Egyptian triangle.” It is a hint to the theorem of Pythagoras, to geometry, and to a geometrical interpretation. Instead of $x, y, z$ only $xy + 2yz + 3zx$ is required. This may be an area, maybe even the area $6$ of the Egyptian triangle. It is also a hint that I should not try to find $x, y, z$. $\frac{y^2}{3}$ occurs twice, so it may be helpful to set $t=\frac{y}{\sqrt{3}}$, so that $t^2=\frac{y^2}{3}$. The equations finally become: $$x^2+ \sqrt{3}xt + t^2 = 25$$ $$t^2+z^2=9$$ $$z^2+ zx + x^2 = 16$$ Also, one more observation, $zx$ must be a perfect square. I am not sure if I'm on the right track, but I don't know how to proceed further. Hints and help would be appreciated, thanks. REPLY [0 votes]: Let $f=xy + 2yz + 3zx.$ The second equation describes a circle with parametric equations $$y=3\sqrt3\sin t,\quad z=3\cos t$$ Adding the second and third equations and subtracting the first equation, we get $$2z^2+xz-xy=0,$$ $$x=\frac{2z^2}{y-z}=\frac{6\cos^2t }{\sqrt3\sin t-\cos t}.$$ The parametric expressions for $x,y,z$ are then substituted into $f^2$. Trigonometric simplification gives $$f^2=-\frac{1458(\cos2t+1)}{\sqrt3\sin2t+\cos2t-2}.$$ From the first equation $$\cos2t=\frac{37-32\sqrt3\sin2t}{59}.$$ Substituting this into $f^2$ we get $$f^2=1728.$$ Then $f=\sqrt{1728}=24\sqrt3$, taking the positive root since $x,y,z>0$. For the trigonometric simplifications I use a CAS (Maxima, Maple, ...)
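The value $24\sqrt3\approx41.569$ can also be double-checked numerically without a CAS, using the answer's parametrization and a simple bisection on the first equation (a sketch; the bracketing interval $[0.8, 1.0]$ was found by inspection, and the second and third equations hold automatically by construction):

```python
import math

def xyz(t):
    # parametrization from the answer: eq. 2 is satisfied for every t,
    # and x = 2 z^2 / (y - z) enforces eq. 3 once eq. 1 holds
    y = 3 * math.sqrt(3) * math.sin(t)
    z = 3 * math.cos(t)
    x = 6 * math.cos(t)**2 / (math.sqrt(3) * math.sin(t) - math.cos(t))
    return x, y, z

def eq1_residual(t):
    x, y, z = xyz(t)
    return x*x + x*y + y*y/3 - 25

lo, hi = 0.8, 1.0          # the residual changes sign on this interval
for _ in range(200):
    mid = (lo + hi) / 2
    if eq1_residual(lo) * eq1_residual(mid) <= 0:
        hi = mid
    else:
        lo = mid

x, y, z = xyz(lo)
f = x*y + 2*y*z + 3*z*x
print(abs(f - 24 * math.sqrt(3)) < 1e-9)  # -> True
```

At the root all three original equations are satisfied to floating-point accuracy, and $f$ agrees with $24\sqrt3$.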
{"set_name": "stack_exchange", "score": 1, "question_id": 4149496}
\begin{document} \title{Global solutions for $H^s$-critical nonlinear biharmonic \\ Schr\"{o}dinger equation} \author{Xuan Liu, Ting Zhang\\ \small{School of Mathematical Sciences, Zhejiang University, Hangzhou 310027, China}} \date{} \maketitle \begin{abstract} We consider the nonlinear biharmonic Schr\"odinger equation $$i\partial_tu+(\Delta^2+\mu\Delta)u+f(u)=0,\qquad (\text{BNLS})$$ in the critical Sobolev space $H^s(\R^N)$, where $N\ge1$, $\mu=0$ or $-1$, $0<s<\min\{\fc N2,8\}$ and $f(u)$ is a nonlinear function that behaves like $\lambda\left|u\right|^{\alpha}u$ with $\lambda\in\mathbb{C},\alpha=\frac{8}{N-2s}$. We prove the existence and uniqueness of the global solutions to (BNLS) for the small initial data. \\ \textbf{Keywords:} Fourth-order Schr\"odinger equation; Local well-posedness; Continuous dependence. \end{abstract} \section{Introduction} In this paper, we consider the following nonlinear biharmonic Schr\"odinger equation \begin{equation}\label{NLS} \begin{cases}i\partial_tu+(\Delta^2 +\mu\Delta) u+f(u)=0,\\ u(0,x)=\phi(x),\end{cases} \end{equation} where $t\in\mathbb{R}$, $x\in\mathbb{R}^N$, $N\ge1$, $\phi\in H^s(\R^N)$, $0<s<\min \left\{\frac{N}{2},8\right\}$, $\mu=-1$ or $\mu=0$, $u:\mathbb{R}\times\mathbb{R}^N\rightarrow\mathbb{C}$ is a complex-valued function and $f(u)$ is a nonlinear function that behaves like $\lambda \left|u\right|^{\alpha}u$ with $\lambda\in\mathbb{C}$, $\alpha>0$. Note that if $\mu=0$ and $f(u)=\lambda \left|u\right|^{\alpha}u$ with $\lambda\in\mathbb{C},\alpha>0$, the equation (\ref{NLS}) is invariant under the scaling, $ u_k(t,x)=k^{\frac{4}\alpha}u(k^4t,kx), k>0. $ This means if $u$ is a solution of (\ref{NLS}) with the initial datum $\phi$, so is $u_k$ with the initial datum $\phi_k=k^{\frac{4}\alpha}\phi(kx)$. Computing the homogeneous Sobolev norm, we get $$ \left\|\phi_k\right\|_{\dot{H}^s}=k^{s-\frac{N}{2}+\frac{4}{\alpha}}\left\|\phi\right\|_{\dot{H}^s}. 
$$ Hence the scale-invariant Sobolev space is $\dot{H}^{s_c}(\R^N)$, with the critical index $s_c=\frac N2-\frac{4}\alpha$. If $s_c=s$ (equivalently $\alpha=\frac{8}{N-2s}$), the Cauchy problem (\ref{NLS}) is known as $H^s$-critical; if in particular $s_c=2$ (equivalently $\alpha=\frac{8}{N-4}$), it is called energy-critical or $H^2$-critical. The nonlinear biharmonic Schr\"odinger equation (\ref{NLS}), also called the fourth-order Schr\"odinger equation, was introduced by Karpman \cite{Karpman1} and Karpman--Shagalov \cite{Karpman2} to take into account the role of small fourth-order dispersion terms in the propagation of intense laser beams in a bulk medium with Kerr nonlinearity. The biharmonic Schr\"odinger equation has attracted a lot of interest in the past decade. The sharp dispersive estimates for the fourth-order Schr\"odinger operator in (\ref{NLS}), namely for the linear group associated to $i\partial_t+\Delta^2+\mu\Delta$, were obtained by Ben-Artzi, Koch, and Saut \cite{Ben}. In \cite{Pa}, Pausader established the corresponding Strichartz's estimate for the biharmonic Schr\"odinger equation (\ref{NLS}). Since then, the local and global well-posedness for (\ref{NLS}) have been widely studied in recent years. See \cite{Guo3,Dinh,H,HLW1,HLW2,xuan,Miao2,Pa,Pa2,Wang} and references therein. We are interested in the global solutions to (\ref{NLS}) in the critical Sobolev space $H^s\left(\R^N\right)$. For $s=2$, Pausader \cite{Pa} established the global well-posedness for the defocusing energy-critical equation (\ref{NLS}) (i.e. $\mu=0$ or $-1, f(u)=\lambda \left|u\right|^\alpha u$ with $\lambda>0,\alpha=\frac{8}{N-4}$) in a radially symmetric setting. The global well-posedness for the defocusing energy-critical problem without the radial condition and the focusing energy-critical equation (\ref{NLS}) (i.e. $\mu=0$ or $-1$, $f(u)=\lambda \left|u\right|^\alpha u$ with $\lambda<0$, $\alpha=\frac{8}{N-4}$) were discussed in \cite{Miao, Miao2, Pa3, Pa2}.
For general $s$, Y. Wang \cite{Wang} established global solutions to the biharmonic Schr\"odinger equation for small radial initial data by using an improved Strichartz's estimate for spatially spherically symmetric functions. He proved the global existence of solutions to the Cauchy problem (\ref{NLS}) when $N\ge2$, $-\frac{3N-2}{2N+1}<s<\frac{N}{2},\alpha=\frac{8}{N-2s},\mu=0,f(u)=\lambda \left|u\right|^\alpha u,\lambda=\pm1$, and $\phi\in H^s(\R^N)$ is a small radial function. The goal of this paper is to establish global-in-time solutions to (\ref{NLS}) with small initial data in the critical Sobolev space $H^s(\R^N)$, where $N\ge1$, $0<s<\min \left\{\frac{N}{2},8\right\}$. Before stating our results, we define the class $\mathcal{C}(\alpha)$. \begin{definition} Let $\alpha>0$, $f\in C^{[\alpha]+1}(\mathbb{C},\mathbb{C})$ in the real sense, where $[\alpha]$ denotes the largest integer less than or equal to $\alpha$, and $f^{(j)}(0)=0$ for all $j$ with $0\leq j\leq [\alpha]$. We say that $f$ belongs to the class $\mathcal{C}(\alpha)$ if it satisfies one of the following two conditions:\\ (i) $\alpha\notin \mathbb{Z}$, $f^{([\alpha]+1)}(0)=0$, and there exists $C>0$ such that for any $z_1,z_2\in\mathbb{C}$ $$ \left|f^{([\alpha]+1)}(z_1)-f^{([\alpha]+1)}(z_2)\right|\le C \left(\left|z_1\right|^{\alpha-[\alpha]}+\left|z_2\right|^{\alpha-[\alpha]}\right)\left|z_1-z_2\right|, $$ (ii) $\alpha\in\mathbb{Z}$, and there exists $C>0$ such that for any $z\in\mathbb{C}$ $$ \left|f^{([\alpha]+1)}(z)\right|\le C. $$ \end{definition} \begin{remark} We note that the power-type nonlinearity $f(u)=\lambda \left|u\right|^\alpha u$ or $f(u)=\lambda \left|u\right|^{\alpha+1}$ with $\lambda\in\R, \alpha>0$ belongs to the class $\mathcal{C}(\alpha)$, and has been widely studied in the context of classical and biharmonic nonlinear Schr\"odinger equations. See \cite{Ca9,Ca10,Guo3,Dinh,HLW1,HLW2,xuan,Miao,Miao2,Pa,Pa3,Pa2} for instance.
\end{remark} \begin{remark} For any $\alpha>0$ and $f\in \mathcal{C}(\alpha)$, it is easy to check that there exists $C>0$ such that for any $u,v\in\mathbb{C}$, we have \begin{equation}\label{fu} \left|f(u)-f(v)\right|\le C \left(\left|u\right|^{\alpha}+\left|v\right|^{\alpha}\right)\left|u-v\right|,\qquad \left|\partial_tf(u)\right|\le C \left|u\right|^{\alpha}\left|\partial_tu\right|. \end{equation} \end{remark} Our main result is the following. For the definitions of vector-valued Besov spaces $B^{\theta }_{q,2}\left(\R,L^r\left(\R^N\right)\right)$ and $B^{\theta-\sigma/4}_{q,2}B^\sigma_{r,2}$, we refer to Section \ref{s2}. \begin{theorem}\label{T1} Assume $0<s<\min\{8,\fc N2\}$, $N\ge1$, $\mu=0$ or $-1$, $f\in\mathcal{C}(\alpha)$ and $\alpha=\fc8{N-2s}>\alpha(s)$, where $$ \alpha(s)=\begin{cases} 0,\qquad \qquad\qquad \qquad \text{if } 0<s<4,\\ \max \left\{\frac{s}{4}-1,s-5\right\}, \ \text{if } 4<s<8. \end{cases} $$ Given any $\phi\in H^s(\R^N)$ with $\left\|\phi\right\|_{H^s}$ sufficiently small, there exists a unique global solution $ u\in C\left(\R,H^s\left(\R^N\right)\right)\cap \mathcal{X}$ to the Cauchy problem (\ref{NLS}), where $$ \mathcal{X}=\left\{\begin{array}{ll} L^{q_1}\left(\R,B^{s}_{r_1,2}\right)\cap B^{s/4}_{q_1,2}\left(\R,L^{r_1}\left(\R^N\right)\right), &0<s\le4,\\ L^{q_2}\left(\R,H^{4,r_2}(\R^N)\right),&s=4,\\ L^{q_3}\left(\R,B^{s}_{r_3,2}\left(\R^N\right)\right)\cap B^{s/4}_{q_3,2}\left(\R,L^{r_3}\left(\R^N\right)\right)\cap H^{1,q_3}\left(\R,B^{s-4}_{r_3,2}\left(\R^N\right)\right),& 4<s<6,\\ B^{s/4}_{2,2}\left(\R,L^{r_4}\left(\R^N\right)\right)\cap B^{\left(s-2\right)/4}_{2,2}\left(\R,L^{r_4}\left(\R^N\right)\right),& 6\le s<8, \end{array}\right.$$ with $$ \begin{cases} q_1=\frac{\left(2N+8\right)\left(N-2s+8\right)}{\left(N-2s\right)\left(N+8\right)}, \ \ &r_1=\frac{2N\left(N+4\right)\left(N-2s+8\right)}{8N\left(N+4\right)+\left(N-2s\right)\left(N^2-32\right)},\\ q_2=\frac{2N-8}{N-8},\ &r_2=\frac{2N\left(N-4\right)}{N^2-8N+32},\\ 
q_3=\frac{2\left(N-2s+8\right)}{N-2s},\ &r_3=\frac{2N\left(N-2s+8\right)}{\left(N-4\right)\left(N-2s\right)+8N},\\ r_4=\frac{2N}{N-4}. \end{cases} $$ \end{theorem} \begin{remark} Note that the lower bound $\alpha(s)$ is a continuous function of $s$. Moreover, the condition $\alpha>\max \left\{\frac{s}{4}-1,s-5\right\}$ is natural for $s>4$, since one time derivative corresponds to four spatial derivatives and the $s$-derivative of $u$ by the spatial variables requires the $(s-4)$-derivatives of $f(u)$ by (\ref{NLS}). \end{remark} \begin{remark} Theorem \ref{T1} improves the result in Y. Wang \cite{Wang} in the case $0<s<4$, where he made an additional radial assumption for the initial datum. \end{remark} Theorem \ref{T1} may be considered as a generalization of the corresponding results for the classical nonlinear Schr\"odinger equation. In \cite{Ca10}, Cazenave and Weissler showed the existence of the time global solutions for the small initial data of the $H^s$ critical Cauchy problem \begin{equation}\label{CNLS} \begin{cases}i\partial_tu+\Delta u+\lambda|u|^\alpha u=0,\\ u(0,x)=\phi(x)\in H^s(\mathbb{R}^N),\end{cases} \end{equation} for $0 \leq s<\fc N2$ and $[s]<\alpha=\fc4{N-2s}$. The condition $[s]<\alpha$ is the required regularity for $f(u)$, which can be improved to $s-1<\alpha$ by applying the nonlinear estimates obtained in Ginibre--Ozawa--Velo \cite{G}, and Nakamura--Ozawa \cite{Na3}. Recently, Nakamura--Wada \cite{Na,Na2} constructed some modified Strichartz's estimate and Strichartz type estimates in mixed Besov spaces to obtain small global solutions with less regularity assumption for the nonlinear term. More precisely, they showed that if $1<s<4$, $s\neq2$, $\alpha_0(s)<\alpha=\fc4{N-2s}$, with $$ \alpha_{0}(s):=\left\{\begin{array}{ll} 0, & \text { for } 0<s<2, \\ \frac{s}{2}-1, & \text { for } 2<s<4, \end{array}\right. $$ the Cauchy problem (\ref{CNLS}) admits a unique time global solution for the small initial data. 
Theorem \ref{T1} extends the results in \cite{Na,Na2} to the biharmonic Schr\"odinger case. The main tool used to prove Theorem \ref{T1} is the following modified Strichartz's estimate for the fourth-order Schr\"odinger equation, by which we can replace the spatial derivative of order $4\theta$ with the time derivative of order $\theta$ in terms of Besov spaces. For the definitions of the biharmonic admissible pairs set $\Lambda_b$, and the Chemin--Lerner type space $l^2L^{\overline{q}}L^{\overline{r}}$, we refer to Section \ref{s2}. \begin{proposition}\label{p1} Assume $0<\theta<1$, $0\le\sigma<4\theta$, $(q,r),(\gamma,\rho)\in\Lambda_b$ are two biharmonic admissible pairs, and $\mu=0$ or $-1$. Assume also that $1\le \overline{q}\le q$, $1\le \overline{r}\le\infty$ satisfy $\frac{4}{\overline{q}}-N\left(\frac{1}{2}-\frac{1}{\overline{r}}\right)=4(1-\theta)$. Then for any $u_0\in H^{4\theta}$ and $f\in B^{\theta}_{\gamma',2}(\R,L^{\rho'})\cap l^2L^{\overline{q}}\left(\R,L^{\overline{r}}\right)$, we have $e^{it(\Delta^2+\mu\Delta)}u_0, Gf\in C(\R,H^{4\theta})$, where \begin{equation} (Gf)(t)=\int_0^te^{i(t-s)(\Delta^2+\mu\Delta)}f(s)ds.\notag \end{equation} Moreover, the following inequalities hold, \begin{equation}\label{i1} \|e^{it(\Delta^2+\mu\Delta)}u_0\|_{ L^qB^{4\theta}_{r,2}\cap B^{\theta-\sigma/4}_{q,2}{B^\sigma_{r,2}}}\lesssim \|u_0\|_{H^{4\theta}}, \end{equation} \begin{equation}\label{i2} \|Gf\|_{ L^q B^{4\theta}_{r,2}}\lesssim \|f\|_{B^{\theta}_{\gamma',2}L^{\rho'}}+\|f\|_{l^2L^{\overline{q}}L^{\overline{r}}}, \end{equation} \begin{equation}\label{i3} \|Gf\|_{B_{q, 2}^{\theta-\sigma/4}B_{r, 2}^{\sigma}} \lesssim\|f\|_{B_{\gamma^{\prime}, 2}^{\theta}L^{\rho^{\prime}}}+\|f\|_{l^2L^{\overline{q}}L^{\overline{r}}}. \end{equation} \end{proposition} In this paper, we first establish the modified Strichartz's estimates (\ref{i1})--(\ref{i3}) for the biharmonic Schr\"odinger equation in the spirit of \cite{Na,Na2}.
Then we establish various nonlinear estimates and use the contraction mapping principle based on the modified Strichartz's estimate to complete the proof of Theorem \ref{T1}. The rest of the paper is organized as follows. In Section \ref{s2}, we introduce some notation and give a review of the biharmonic Strichartz's estimates. In Section \ref{s3}, we establish the modified Strichartz's estimate. In Section \ref{s4}, we give the proof of Theorem \ref{T1}. \section{Preliminaries}\label{s2} If $X, Y$ are nonnegative quantities, we sometimes use $X\lesssim Y$ to denote the estimate $X\leq CY$ for some positive constant $C$. Pairs of conjugate indices are written as $p$ and $p'$, where $1\leq p\leq\infty$ and $\frac1p+\frac1{p'}=1$. We use $L^p (\mathbb{R}^N)$ to denote the usual Lebesgue space and $L^\gamma(I,L^\rho(\mathbb{R}^N))$ to denote the space-time Lebesgue spaces with the norm \begin{gather}\notag \|f\|_{L^\gamma(I,L^\rho(\mathbb{R}^N))}:=\left(\int_I\|f\|_{L_x^\rho}^\gamma dt\right)^{1/\gamma} \end{gather} for any time slab $I\subset\mathbb{R}$, with the usual modification when either $\gamma$ or $\rho$ is infinity. We define the Fourier transform on $\R,\R^N$ and $\R^{1+N}$ by \begin{align*} &\hat f(\tau)=\int_\R f(t)e^{-it\tau}dt,\qquad\qquad\quad\qquad\qquad\tau\in\R,\\ &\hat f(\xi)=\int_{\R^N}f(x)e^{-ix\cdot\xi}dx,\qquad\quad\qquad\qquad\xi\in\R^N,\\ &\widetilde f(\tau,\xi)=\int_{\R^{1+N}}f(t,x) e^{-ix\cdot\xi-it\tau}dxdt,\quad(\tau,\xi)\in\R\times\R^N, \end{align*} respectively. Next, we review the definition of Besov spaces. Let $\phi$ be a smooth function whose Fourier transform $\hat\phi$ is a non-negative even function which satisfies ${\rm supp}\,\hat\phi\subset\{\tau\in\R:1/2\le|\tau|\le2\}$ and $\sum_{k=-\infty}^{\infty}\hat\phi(\tau/{2^k})=1$ for any $\tau\neq0$. For $k\in\mathbb{Z}$, we put $\hat\phi_k(\cdot)=\hat\phi(\cdot/{2^k})$ and $\psi=\sum_{j=-\infty}^0\phi_j$.
Moreover, we define $\chi_k=\sum_{j=k-2}^{k+2}\phi_j$ for $k\ge1$ and $\chi_0=\psi+\phi_1+\phi_2$. For $s\in\R$ and $1 \leq p, q \leq \infty$, we define the Besov space $$ B_{p,q}^{s}\left({\R}^{N}\right)=\left\{u \in \mathcal{S}^{\prime}\left(\R^{N}\right), \|u\|_{B_{p,q}^{s}\left(\R^{N}\right)}<\infty\right\}, $$ where $\mathcal{S}^{\prime}\left({\R}^{N}\right)$ is the space of tempered distributions on $\R^{N},$ and $$ \|u\|_{B_{p, q}^{s}\left(\R^{N}\right)}=\left\|\psi *_{x} u\right\|_{L^{p}\left(\R^{N}\right)}+\left\{\begin{array}{ll} \left\{\sum_{k \geq 1}\left(2^{s k}\left\|\phi_{k} *_{x} u\right\|_{L^{p}\left(\R^{N}\right)}\right)^{q}\right\}^{\frac1q}, & q<\infty, \\ \sup _{k \geq 1} 2^{s k}\left\|\phi_{k}*_{x} u\right\|_{L^{p}\left(\R^{N}\right)}, & q=\infty, \end{array}\right. $$ where $*_{x}$ denotes the convolution with respect to the variables in $\R^{N}$. Here we use $\phi_k*_xu$ to denote $\phi_k(|\cdot|)*_xu$. We also define $\chi_k*_xu,\psi*_xu,\chi_0*_xu$ similarly. This is an abuse of notation, but no confusion is likely to arise. For $1\le q, \alpha\le \infty$ and a Banach space $V$, we denote the Lebesgue space of functions from $\R$ to $V$ by $L^q\left(\R,V\right)$ and the Lorentz space by $L^{q,\alpha}\left(\R,V\right)$. We define the Sobolev space $H^{1,q}\left(\R,V\right)=\left\{u:u\in L^q\left(\R,V\right),\partial_tu\in L^q \left(\R,V\right)\right\}$. For $1\le\alpha,r,q\le\infty$, we denote the Chemin--Lerner type space $$l^{\alpha} L^{q}\left(\mathbb{R}, L^{r}\left(\mathbb{R}^{N}\right)\right)=\left\{u \in L_{\mathrm{loc}}^{1}\left(\mathbb{R}, L^{r}\left(\mathbb{R}^{N}\right)\right), \|u\|_{l^{\alpha} L^{q}\left(\mathbb{R},\ L^{r}\left(\mathbb{R}^{N}\right)\right)}<\infty\right\}$$ with the norm defined by \[ \|u\|_{l^\alpha L^q(\R,L^r(\R^N))}=\|\psi*_xu\|_{L^q(\R,L^r(\R^N))}+\left(\sum_{k\ge1}\|\phi_k*_xu\|_{L^q(\R,L^r(\R^N))}^\alpha\right)^{1/\alpha} \] with trivial modification if $\alpha=\infty$.
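As a consistency check on these definitions (a standard fact, used implicitly below when passing between $H^{4\theta}$ and dyadic norms), Plancherel's theorem together with the localization $|\xi|\approx2^k$ on the support of $\hat\phi_k$ gives, for $p=q=2$,
$$
\|u\|_{B^s_{2,2}}^2\approx\|\psi*_xu\|_{L^2}^2+\sum_{k\ge1}2^{2sk}\|\phi_k*_xu\|_{L^2}^2\approx\int_{\R^N}\left(1+|\xi|^2\right)^s|\hat u(\xi)|^2d\xi,
$$
so that $B^s_{2,2}(\R^N)=H^s(\R^N)$ with equivalent norms.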
We also define $l^\alpha L^{q,\infty}\left(\R,L^r\left(\R^N\right)\right)$ similarly. Finally, we define the Besov space of vector-valued functions. Let $\theta \in\R, 1 \leq q, \alpha \leq \infty$ and $V$ be a Banach space. We put $$ B_{q, \alpha}^{\theta}(\R, V)=\left\{u \in \mathcal{S}^{\prime}(\R, V) ;\|u\|_{B_{q, \alpha}^{\theta}(\R, V)}<\infty\right\} $$ where \begin{equation}\label{9221} \|u\|_{B_{q, \alpha}^{\theta}(\R, V)}=\left\|\psi *_{t} u\right\|_{L^{q}(\R , V)}+\left\{\sum_{k \geq 1}\left(2^{\theta k}\left\|\phi_{k} *_{t} u\right\|_{L^{q}(\R , V)}\right)^{\alpha}\right\}^{1 / \alpha} \end{equation} with trivial modification if $\alpha=\infty.$ Here $*_{t}$ denotes the convolution in $\R$. In this paper, we omit the domain of integration for simplicity unless noted otherwise. For example, we write $l^\alpha L^qL^r=l^\alpha L^q\left(\R,L^r(\R^N)\right)$, $L^qB^s_{r,2}=L^q\left(\R,B^s_{r,2}(\R^N)\right)$ and $B^{\theta-\sigma/4}_{q,2}B^\sigma_{r,2}=B^{\theta-\sigma/4}_{q,2}(\R,$ $B^\sigma_{r,2}(\R^N))$ etc. Following standard notation, we introduce the biharmonic Schr\"odinger admissible pairs as well as the corresponding Strichartz's estimate for the biharmonic Schr\"odinger equation. \begin{definition}\label{bpair} A pair of Lebesgue space exponents $(\gamma, \rho)$ is called biharmonic Schr\"odinger admissible for the equation (\ref{NLS}) if $(\gamma, \rho)\in \Lambda_b$ where \begin{equation*} \Lambda_b=\{(\gamma, \rho):2\leq \gamma, \rho\leq\infty, \ \frac4\gamma+\frac N\rho=\frac N2, \ (\gamma, \rho, N)\neq(2, \infty, 4)\}. \end{equation*} \end{definition} \begin{lemma}[Strichartz's estimate for BNLS, \cite{Pa}]\label{L2.2S} Suppose that $(\gamma,\rho), (a,b)\in\Lambda_b $ are two biharmonic admissible pairs, and $\mu=0$ or $-1$.
Then for any $u\in L^2(\mathbb{R}^N)$ and $h\in L^{a'}(\R,L^{b'}(\mathbb{R}^N))$, we have \begin{gather}\label{sz} \|e^{it(\Delta^2+\mu\Delta)}u\|_{L^\gamma(\R, L^\rho)}\leq C\|u\|_{L^2}, \end{gather} \begin{equation}\label{sz1} \|\int_{\R}e^{-is(\Delta^2+\mu\Delta)}h(s)\ ds \|_{L^2}\leq C\|h\|_{L^{a'}(\R,L^{b'})}, \end{equation} \begin{equation}\label{SZ} \|\int_0^te^{i(t-s)(\Delta^2+\mu\Delta)}h(s)\ ds \|_{L^\gamma(\R, L^\rho)}\leq C\|h\|_{L^{a'}(\R,L^{b'})}. \end{equation} \end{lemma} \section{Modified Strichartz's estimate}\label{s3} In this section, we prove Proposition \ref{p1}. First, we prepare several lemmas. We assume the functions $\phi,\chi_0,\psi,\phi_j,\chi_j$ are as defined in Section \ref{s2}. \begin{lemma}\label{L1} Assume $N\ge1$, $\mu=0$ or $-1$, and $K(t,x)$, $K_j(t,x)\ (j\ge1):\R\times\R^N\rightarrow\mathbb{C}$ are defined by \begin{align*} K(t,x)=\fc1{(2\pi)^{1+N}}\int e^{it\tau+ix\cdot \xi}\fc{\hat\psi(|\xi|^4-\mu|\xi|^2)(1-\hat\chi_0(\tau))}{i(\tau-|\xi|^4+\mu|\xi|^2)} d\tau d\xi, \end{align*} \begin{equation} K_j(t,x)=\fc1{(2\pi)^{1+N}}\int e^{it\tau+ix\cdot \xi}\fc{\hat\phi_j(|\xi|^4-\mu|\xi|^2)(1-\hat\chi_j(\tau))}{i(\tau-|\xi|^4+\mu|\xi|^2)} d\tau d\xi.\notag \end{equation} Then for any $0<\theta<1$, $1\le q\le\infty$, $1\le r\le\infty$ with $\fc4{q}-N(1-\fc1{ r})=4\theta$, we have \begin{equation*} \|K\|_{L^{ q,1}L^{ r}}\le C,\quad\text{and }\quad\|K_j\|_{L^{ q,1}L^{ r}}\le C2^{-j\theta}, \end{equation*} where the constant $C$ is independent of $j\ge1$. \end{lemma} \begin{proof} The method used here is inspired by the last part of Lemma 2.4 in \cite{Wada}. We shall prove the estimate for $K_j(t,x)$; the estimate for $K(t,x)$ can be treated in a similar way.
Define $\chi=\sum_{j=-2}^{2}\phi_j$ and \begin{equation} \widetilde {L_j}(\tau,\xi)=\fc{\hat\phi(|\xi|^4-\mu2^{- j/2}|\xi|^2)(1-\hat\chi(\tau))}{i(\tau-|\xi|^4+\mu2^{- j/2}|\xi|^2)},\qquad j\ge1.\notag \end{equation} Then by the Fourier transform $$ \widetilde{K_j}(\tau,\xi)=\fc{\hat\phi_j(|\xi|^4-\mu|\xi|^2)(1-\hat\chi_j(\tau))}{i(\tau-|\xi|^4+\mu|\xi|^2)} = 2^{-j}\widetilde {L_j}(\tau/{2^j},\xi/{2^{ j/4}}),\notag $$ so that $K_j(t,x)=2^{{Nj}/4}L_j(2^jt,2^{ j/4}x)$. Moreover, by a change of variables, we have \begin{equation} \|K_j\|_{L^{q,1}L^{r}}=2^{j\left(\frac{N}{4}-\frac{1}{q}-\frac{N}{4r}\right)}\|L_j\|_{L^{q,1}L^{ r}}=2^{-j\theta}\|L_j\|_{L^{q,1}L^{ r}}.\notag \end{equation} Therefore, it suffices to show that for any $l\ge1$, there exists $C>0$ independent of $j\ge1$ such that \begin{equation}\label{1172} |L_j(t,x)|\le C(1+|t|+|x|)^{-l},\qquad \forall (t,x)\in \R\times\R^N. \end{equation} We now prove (\ref{1172}). By the Fourier inversion formula, \begin{equation} L_j(t,x)=\fc1{(2\pi)^{1+N}}\int\int_{\R^{1+N}}e^{it\tau+ix\cdot\xi}\fc{\hat\phi(|\xi|^4-\mu2^{-j/2}|\xi|^2)(1-\hat\chi(\tau))}{i(\tau-|\xi|^4+\mu2^{- j/2}|\xi|^2)} d\tau d\xi.\notag \end{equation} Note that on the support of the integrand of $L_{j},$ we must have $|\tau| \notin[1 / 4,4]$ and $\left||\xi|^4-\mu2^{-j/2}|\xi|^{2}\right| \in[1 / 2,2],$ so that $\left|\tau-|\xi|^4+\mu2^{-j/2}|\xi|^2\right| \geq 1/4$. Therefore, we deduce that the following integral \[ \int_{\left|\tau-|\xi|^4+\mu2^{-j/2}|\xi|^2\right| \leq 10} e^{it\tau+ix\cdot\xi}\fc{\hat\phi(|\xi|^4-\mu2^{- j/2}|\xi|^2)(1-\hat\chi(\tau))}{i(\tau-|\xi|^4+\mu2^{- j/2}|\xi|^2)} d\tau d\xi\] is bounded.
On the other hand, since $\hat{\chi}(\tau)=0$ when $\left|\tau-|\xi|^4+\mu2^{- j/2}|\xi|^2\right|\ge10$ and $1/2\le \left||\xi|^4-\mu2^{- j/2}|\xi|^{2}\right|\le2,$ we have \begin{eqnarray*} &&\int_{\left|\tau-|\xi|^4+\mu2^{- j/2}|\xi|^2\right|\geq 10} e^{i t \tau} \fc{\hat\phi(|\xi|^4-\mu2^{- j/2}|\xi|^2)(1-\hat\chi(\tau))}{i(\tau-|\xi|^4+\mu2^{- j/2}|\xi|^2)} d\tau\\ &=&\int_{\left|\tau-|\xi|^4+\mu2^{- j/2}|\xi|^2\right|\geq 10} e^{i t \tau} \fc{\hat\phi(|\xi|^4-\mu2^{- j/2}|\xi|^2)}{i(\tau-|\xi|^4+\mu2^{- j/2}|\xi|^2)} d\tau\\ &=&2 \operatorname{sign}(t) e^{i t(|\xi|^4-\mu2^{- j/2}|\xi|^{2})} \hat\phi(|\xi|^4-\mu2^{- j/2}|\xi|^2) \int_{10|t|}^{\infty} \frac{\sin \tau}{\tau} d \tau. \end{eqnarray*} This is also bounded, so that we have proved the boundedness of $L_{j}(t, x)$. Moreover, for $1 \leq l \leq N,$ integration by parts shows that \begin{eqnarray}\label{1194} &&x_{l} L_{j}(t, x)\notag\\ &=&\frac{1}{(2 \pi)^{1+N}} \iint_{\R^{1+N}} e^{i t \tau+i x\cdot \xi} \frac{\partial}{\partial \xi_{l}} \fc{\hat\phi(|\xi|^4-\mu2^{-j/2}|\xi|^2)(1-\hat\chi(\tau))}{\tau-|\xi|^4+\mu2^{- j/2}|\xi|^2} d\tau d\xi\notag\\ &=&\frac{1}{(2 \pi)^{1+N}} \iint_{\R^{1+N}} e^{i t \tau+i x\cdot \xi} (4|\xi|^2\xi_{l}-2\mu2^{- j/2}\xi_{l})\left\{\frac{\hat{\phi}^{\prime}\left(|\xi|^4-\mu2^{- j/2}|\xi|^{2}\right)\left(1-\hat{\chi}(\tau)\right)}{\tau-|\xi|^4+\mu2^{- j/2}|\xi|^{2}}\right.\notag\\ &&+\left.\fc{\hat\phi(|\xi|^4-\mu2^{-j/2}|\xi|^2)(1-\hat\chi(\tau))}{(\tau-|\xi|^4+\mu2^{- j/2}|\xi|^2)^2}\right\} d \tau d \xi. \end{eqnarray} The right-hand side of (\ref{1194}) is bounded as before. Similarly, $tL_j(t,x)$ is also bounded. Repeating this, we can obtain the desired estimate (\ref{1172}). \end{proof} \begin{lemma}\label{f} Let $N\ge1$, $0<\theta<1$, $1\le r_0,\overline{r},\overline{q},\gamma \le \infty$, $1<q_0,\rho<\infty$.
Assume that $2\le\overline r\le\infty,\fc4{\overline q}-N(\fc12-\fc1{\overline r})=\frac{4}{q_0}-N(\frac{1}{2}-\frac{1}{r_0})=4(1-\theta),(\gamma,\rho)\in\Lambda_b$ and $r_0$ satisfies $\rho'\le r_0<\overline{r}$ or $\overline{r}<r_0\le\rho'$. Then for any $f\in l^2L^{\overline q}L^{\overline r}\cap B^{\theta}_{\gamma',2}L^{\rho'}$, we have \begin{equation} \|f\|_{l^2L^{q_0,\infty}L^{r_0}}\lesssim \|f\|_{l^2L^{\overline q}L^{\overline r}}+\|f\|_{B^{\theta}_{\gamma',2}L^{\rho'}}.\notag \end{equation} \end{lemma} \begin{proof} The proof is an obvious adaptation of Lemma 2.5 in \cite{Na}. \end{proof} \begin{lemma}\label{dj} Let $s\in\R$, $1\le p,q\le\infty$, and $\mu=0$ or $-1$. Then the norm defined by \begin{eqnarray*} &&\|u\|_{\widetilde{B}_{p, q}^{s}\left(\mathbb{R}^{N}\right)}:=\left\|\left(\mathcal{F}_{\xi}^{-1}\left(\widehat{\psi}\left(|\xi|^{4}-\mu|\xi|^2\right)\right)\right) *_{x} u\right\|_{L^{p}\left(\mathbb{R}^{N}\right)} \\ &&+\begin{cases}\left\{\sum_{j \geq 1}\left(2^{s j / 4}\left\|\left(\mathcal{F}_{\xi}^{-1}\left(\widehat{\phi}\left((|\xi|^{4}-\mu|\xi|^2)/{2^{j}}\right)\right)\right) *_{x} u\right\|_{L^{p}\left(\mathbb{R}^{N}\right)}\right)^{q}\right\}^{\frac1q},\ \text{ if }q<\infty,\\ \sup _{j \geq 1} 2^{s j / 4}\left\|\left(\mathcal{F}_{\xi}^{-1}\left(\widehat{\phi}\left((|\xi|^{4}-\mu|\xi|^2)/{2^{j}}\right)\right)\right) *_{x} u\right\|_{L^{p}\left(\mathbb{R}^{N}\right)},\ \text{ if }q=\infty,\end{cases} \end{eqnarray*} is equivalent to the norm $\|u\|_{B^s_{p,q}(\R^N)}$ for any function $u$. \end{lemma} \begin{proof} For convenience and completeness, we briefly sketch the proof; readers seeking full details may consult Lemma 2.3 of \cite{Na}.
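Before turning to the details, we note why $2^{sj/4}$ is the natural weight here (taking $\mu=0$ for simplicity, as a sanity check): on the support of $\hat\phi\left(|\xi|^4/2^j\right)$ we have $2^{j-1}\le|\xi|^4\le2^{j+1}$, that is,
$$
2^{(j-1)/4}\le|\xi|\le2^{(j+1)/4},
$$
so the localization to $|\xi|^4\approx2^j$ coincides, up to finitely many overlapping blocks, with the usual localization to $|\xi|\approx2^{j/4}$, where $2^{sj/4}\approx|\xi|^s$.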
Firstly, we show that $\|u\|_{B_{p,q}^{s}\left(\mathbb{R}^{N}\right)} \lesssim\|u\|_{\widetilde{B}_{p,q}^{s}\left(\mathbb{R}^{N}\right)}.$ Since $$ \sum_{k=-6}^7 \widehat{\phi}\left((|\xi|^{4}-\mu|\xi|^2)/{2^{4j+k}}\right)=1 $$ on the support of $\widehat{\phi}\left(|\cdot|/2^j\right),$ it follows from Young's inequality that \begin{eqnarray*} 2^{js}\left\|\phi_{j} *_{x} u\right\|_{L^{p}\left(\mathbb{R}^{N}\right)} &=&2^{js}\left\|\left(\mathcal{F}_{\xi}^{-1} \widehat{\phi}\left({|\xi|}/{2^{j}}\right)\right) *_{x} u\right\|_{L^{p}\left(\mathbb{R}^{N}\right)} \\ & \lesssim& 2^{js}\sum_{k=-6}^{7}\left\|\left(\mathcal{F}_{\xi}^{-1}\left(\widehat{\phi}\left((|\xi|^{4}-\mu|\xi|^2)/{2^{4j+k}}\right)\right)\right) *_{x} u\right\|_{L^{p}\left(\mathbb{R}^{N}\right)}\\ &\lesssim& \sum_{k=-6}^{7}2^{\fc{ls}4}\left\|\left(\mathcal{F}_{\xi}^{-1}\left(\widehat{\phi}\left((|\xi|^{4}-\mu|\xi|^2)/{2^{l}}\right)\right)\right) *_{x} u\right\|_{L^{p}\left(\mathbb{R}^{N}\right)}, \end{eqnarray*} where $l=4j+k$. A similar inequality also holds for the low frequency part. Taking the $l^q(\mathbb{Z})$ norm, we obtain the desired inequality $\|u\|_{B_{p,q}^{s}\left(\mathbb{R}^{N}\right)} \lesssim\|u\|_{\widetilde{B}_{p,q}^{s}\left(\mathbb{R}^{N}\right)}$. Next, we show the opposite inequality. Note that $\sum_{k=-4}^{4}\hat\phi({|\xi|}/{2^{[ j/4]+k}})=1$ on the support of $\hat\phi((|\xi|^{4}-\mu|\xi|^2)/{2^j})$, so that \begin{eqnarray*} &&2^{{js}/4}\left\|\mathcal{F}_{\xi}^{-1}\left(\widehat{\phi}\left((|\xi|^{4}-\mu|\xi|^2)/{2^{j}}\right)\right) *_{x} u\right\|_{L^{p}\left(\mathbb{R}^{N}\right)} \\ &\lesssim& 2^{{js}/4}\sum_{k=-4}^{4}\left\|\mathcal{F}_{\xi}^{-1}\left(\widehat{\phi}\left({|\xi|}/{2^{[ j/4]+k}}\right)\right) *_{x} u\right\|_{L^{p}\left(\mathbb{R}^{N}\right)}\\ &\lesssim& \sum_{k=-4}^{4}2^{ls}\left\|\mathcal{F}_{\xi}^{-1}\left(\widehat{\phi}\left(|\xi|/{2^{l}}\right)\right) *_{x} u\right\|_{L^{p}\left(\mathbb{R}^{N}\right)}, \end{eqnarray*} where $l=[ j/4]+k$.
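For completeness, we record the elementary exponent bookkeeping behind the two displayed chains above (our own verification). In the first chain, $l=4j+k$ with $k$ ranging over a fixed finite set, so
$$
2^{ls/4}=2^{js}\,2^{ks/4},
$$
and hence $2^{js}\approx2^{ls/4}$ with constants uniform in $j$. In the second chain, $l=[j/4]+k$, so $|js/4-ls|=|s|\,\left|j/4-[j/4]-k\right|$ is bounded uniformly in $j$, and therefore $2^{js/4}\approx2^{ls}$.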
Since the low frequency parts are easier to treat, we can take the $l^q(\mathbb{Z})$ norm to obtain the desired inequality $\|u\|_{\widetilde{B}_{p,q}^{s}\left(\mathbb{R}^{N}\right)}\lesssim \|u\|_{B_{p,q}^{s}\left(\mathbb{R}^{N}\right)}.$ \end{proof} \begin{proof}[\textbf{Proof of Proposition \ref{p1}}] We use the notation $\phi_{ j/4}=\mathcal{F}_\xi^{-1}\left(\hat\phi_j(|\xi|^4-\mu|\xi|^2)\right)$. This is an abuse of notation, but no confusion is likely to arise. Under this notation, we obtain the following equivalence from Lemma \ref{dj}, \begin{equation}\label{1271} \|u\|_{{B}_{p, q}^{s}}\approx\left\|\left(\mathcal{F}_{\xi}^{-1}\left(\widehat{\psi}\left(|\xi|^{4}-\mu|\xi|^2\right)\right)\right) *_{x} u\right\|_{L^{p}} +\left\{\sum_{j=1}^\infty\left(2^{s j / 4}\left\| \phi_{ j/4}*_{x} u\right\|_{L^{p}}\right)^{q}\right\}^{\frac1q} \end{equation} with trivial modification if $q=\infty$. Then we claim that for any $f:\R^N\rightarrow \mathbb{C}$, we have \begin{equation}\label{a1} \phi_{j} *_{t}e^{it(\Delta^2+\mu\Delta)}f=e^{it(\Delta^2+\mu\Delta)}\phi_{ j/4} *_{x} f. \end{equation} In fact, by the Fourier transform \begin{eqnarray} \left(\phi_j*_te^{it(\Delta^2+\mu\Delta)}f\right)\hat{\phantom{f}}(t,\xi)&=&\int_{-\infty }^{\infty } \phi_j(\tau)e^{i(t-\tau)\left(\left|\xi\right|^{4}-\mu \left|\xi\right|^{2}\right)}\hat f(\xi)\mathrm{d}\tau \notag\\ &=& e^{it \left(\left|\xi\right|^{4}-\mu \left|\xi\right|^{2}\right)}\hat f(\xi)\hat\phi_j \left(\left|\xi\right|^{4}-\mu \left|\xi\right|^{2}\right).\notag \end{eqnarray} Taking the inverse Fourier transform, we obtain (\ref{a1}). We now resume the proof of Proposition \ref{p1}. We separate the proof into three parts.\\ \textbf{The proof of the inequality (\ref{i1})}.
Using the same method as in the proof of Corollary 2.3.9 in \cite{Ca9}, we deduce that $e^{it(\Delta^2+\mu\Delta)}u_0\in C\left(\R,H^{4\theta }\right)$ and \begin{equation}\label{1166} \left\|e^{it(\Delta^2+\mu\Delta)}u_0\right\|_{L^qB^{4\theta}_{r,2}}\lesssim \left\|u_0\right\|_{H^{4\theta}}. \end{equation} It remains to estimate $\left\|e^{it(\Delta^2+\mu\Delta)}u_0\right\|_{B^{\theta-\sigma/4}_{q,2}B^\sigma_{r,2}}$. Applying (\ref{a1}) and Strichartz's estimate (\ref{sz}), we conclude that \begin{eqnarray} &&\sum_{j=1}^{\infty} \sum_{k=1}^{\infty} 2^{(2 \theta-\sigma/2) j+\sigma k/2}\left\|\phi_{j} *_{t} \phi_{ k/4} *_{x} e^{it(\Delta^2+\mu\Delta)}u_0\right\|_{L^{q}L^{r}}^{2} \notag\\ & \lesssim& \sum_{j=1}^{\infty} \sum_{k=1}^{\infty} 2^{(2 \theta-\sigma/2) j+\sigma k/2 }\left\|\phi_{ j/4} *_{x} \phi_{k/4} *_{x} u_0\right\|_{L^{2}}^{2} \notag\\ & \lesssim& \sum_{k=1}^{\infty} 2^{2 \theta k}\left\|\phi_{ k/4} *_{x} u_0\right\|_{L^{2}}^{2} \lesssim\left\|u_0\right\|_{H^{4 \theta}}^{2},\notag \end{eqnarray} where we used (\ref{1271}) and the fact that $\hat{\phi}_{j}\left(|\xi|^4-\mu|\xi|^{2}\right) \hat{\phi}_{k}\left(|\xi|^4-\mu|\xi|^{2}\right)=0$ whenever $|j-k|\ge 2$. Since the low frequency parts are easier to treat, we obtain $$ \left\|e^{it(\Delta^2+\mu\Delta)}u_0\right\|_{B_{q, 2}^{\theta-\sigma/4}B_{r, 2}^{\sigma}} \lesssim\left\|u_0\right\|_{H^{4 \theta}}. $$ This inequality together with (\ref{1166}) yields (\ref{i1}). \noindent\textbf{The proof of the inequality (\ref{i2})}. Taking the Fourier transform, we get \begin{eqnarray}\label{9206} (Gf)\hat{\phantom{f}}(t,\xi)&=&\int_0^te^{i(t-s)(|\xi|^4-\mu|\xi|^2)}\hat f(s,\xi)ds \notag\\ &=&\fc1{2\pi}\int_0^te^{i(t-s)(|\xi|^4-\mu|\xi|^2)}ds \int_{-\infty}^{\infty}\widetilde f(\tau,\xi)e^{i\tau s}d\tau\notag\\ &=&\int_{-\infty}^{\infty}\fc{e^{it\tau}-e^{it(|\xi|^4-\mu|\xi|^2)}}{2\pi i(\tau-|\xi|^4+\mu|\xi|^2)}\widetilde f(\tau,\xi)d \tau.
\end{eqnarray} We then multiply (\ref{9206}) by $\hat\phi_k(|\xi|^4-\mu|\xi|^2)$ to obtain \begin{eqnarray}\label{1272} &&\hat\phi_k(|\xi|^4-\mu|\xi|^2)(Gf)\hat{\phantom{f}}(t,\xi)\notag\\ &=&\int_{-\infty}^{\infty}\fc{e^{it\tau}\hat\phi_k(|\xi|^4-\mu|\xi|^2)\hat\chi_k(\tau)}{{2\pi i(\tau-|\xi|^4+\mu|\xi|^2)}}\widetilde f(\tau,\xi) d\tau\notag\\ &&+\int_{-\infty}^{\infty}\fc{e^{it\tau}\hat\phi_k(|\xi|^4-\mu|\xi|^2)(1-\hat\chi_k(\tau))}{2\pi i(\tau-|\xi|^4+\mu|\xi|^2)}\cdot \hat\chi_k(|\xi|^4-\mu|\xi|^2)\widetilde f(\tau,\xi) d\tau\notag\\ &&-\int_{-\infty}^{\infty}\fc{e^{it(|\xi|^4-\mu|\xi|^2)}}{{2\pi i(\tau-|\xi|^4+\mu|\xi|^2)}}\hat\phi_k(|\xi|^4-\mu|\xi|^2)\hat\chi_k(\tau)\widetilde f(\tau,\xi) d\tau\notag\\ &&-\int_{-\infty}^{\infty}\fc{e^{it(|\xi|^4-\mu|\xi|^2)}\hat\phi_k(|\xi|^4-\mu|\xi|^2)(1-\hat\chi_k(\tau))}{2\pi i(\tau-|\xi|^4+\mu|\xi|^2)}\cdot \hat\chi_k(|\xi|^4-\mu|\xi|^2)\widetilde f(\tau,\xi) d\tau \end{eqnarray} where we used the fact that $\hat\chi_k=1$ on the support of $\hat\phi_k$. Since \begin{eqnarray*} \mathcal{F}_\tau^{-1}\{\fc1{i(\tau-|\xi|^4+\mu|\xi|^2)}\}(t)&=&\fc1{2\pi}\int_{-\infty}^{\infty}e^{it\tau}\fc1{i(\tau-|\xi|^4+\mu|\xi|^2)}d\tau\\ &=&\fc{1}{2\pi}e^{it(|\xi|^4-\mu|\xi|^2)}\int_{-\infty}^{\infty}\fc{e^{it\tau}}{i\tau} d\tau\\ &=&\fc12\text{sign}(t)e^{it(|\xi|^4-\mu|\xi|^2)}, \end{eqnarray*} it follows from (\ref{1272}) that \begin{eqnarray}\label{1168} \phi_{ k/4}*_x(Gf)&=&\fc12\int_{-\infty}^{\infty}\text{sign}(t-s)e^{i(t-s)(\Delta^2+\mu\Delta)}(\phi_{ k/4}*_x\chi_k*_tf)(s)ds\notag\\ &&+K_k*_{t,x}\chi_{ k/4}*_xf\notag\\ &&-\fc12e^{it(\Delta^2+\mu\Delta)}\int_{-\infty}^{\infty}\text{sign}(-s)e^{is(\Delta^2+\mu\Delta)}(\phi_{ k/4}*_x\chi_k*_tf)(s) ds \notag\\ &&-e^{it(\Delta^2+\mu\Delta)}\{K_k*_{t,x}\chi_{k/4}*_xf\}|_{t=0}, \end{eqnarray} where $K_j$ is the function defined in Lemma \ref{L1}.
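The oscillatory integral used in the last computation is the classical principal value identity, recorded here for the reader's convenience: for $t\neq0$,
$$
\mathrm{p.v.}\int_{-\infty}^{\infty}\fc{e^{it\tau}}{i\tau}\,d\tau=\int_{-\infty}^{\infty}\fc{\sin(t\tau)}{\tau}\,d\tau=\pi\,\mathrm{sign}(t),
$$
since the contribution of $\cos(t\tau)/(i\tau)$ vanishes by oddness and $\int_{-\infty}^{\infty}\fc{\sin u}{u}\,du=\pi$; after multiplication by $\fc1{2\pi}$, this accounts for the factor $\fc12\,\mathrm{sign}(t)$ above.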
Applying (\ref{1168}) and Strichartz's estimates (\ref{sz})--(\ref{SZ}), we conclude that \begin{equation}\label{9201} \|\phi_{k/4}*_x(Gf)\|_{L^qL^r} \lesssim \|\chi_k*_tf\|_{L^{\gamma'}L^{\rho'}}+\|K_k*_{t,x}\chi_{ k/4}*_xf\|_{L^qL^r\cap L^\infty L^2}. \end{equation} Next, we estimate $\|K_k*_{t,x}\chi_{ k/4}*_xf\|_{L^qL^r\cap L^\infty L^2}$. Let \begin{gather}\notag r_0=\begin{cases}\overline r,\qquad\text{if }1\le\overline r\le2,\\ 2, \qquad\text{if }\overline r\ge2,\end{cases} \qquad r_1=\begin{cases}\overline r,\qquad\text{if }1\le\overline r\le r,\\ 2, \qquad\text{if }\overline r\ge r.\end{cases} \end{gather} We then define $q_0,\widetilde{r_0},\widetilde{q_0}$, $q_1,\widetilde{r_1},\widetilde{q_1}$ such that $\fc4{q_0}-N(\fc12-\fc1{r_0})=4(1-\theta)$, $1+\fc12=\fc1{\widetilde{r_0}}+\fc1{r_0},1=\fc1{\widetilde {q_0}}+\fc1{q_0}$ and $\fc4{q_1}-N(\fc12-\fc1{r_1})=4(1-\theta)$, $1+\fc1r=\fc1{\widetilde{r_1}}+\fc1{r_1},1+\fc1q=\fc1{\widetilde {q_1}}+\fc1{q_1}$. Then it is easy to check that $1\le r_0,q_0,\widetilde{r_0},\widetilde{q_0}$, $r_1, q_1,\widetilde{r_1},\widetilde{q_1}\le \infty $ and $\fc4{\widetilde{q_0}}-N(1-\fc1{\widetilde{r_0}})=\fc4{\widetilde{q_1}}-N(1-\fc1{\widetilde{r_1}})=4\theta$. From Young's inequality and Lemma \ref{L1}, we have \begin{equation}\label{1167} \|K_k*_{t,x}\chi_{ k/4}*_xf\|_{L^\infty L^2}\lesssim \|K_k\|_{L^{\widetilde{q_0 },1}L^{\widetilde{r_0 }}}\|\chi_{ k/4}*_xf\|_{L^{q_0,\infty} L^{r_0}}\lesssim2^{-k\theta}\|\chi_{ k/4}*_xf\|_{L^{q_0,\infty} L^{r_0}}, \end{equation} and \begin{equation}\label{9213} \|K_k*_{t,x}\chi_{ k/4}*_xf\|_{L^q L^r}\lesssim \|K_k\|_{L^{\widetilde {q_1},1}L^{\widetilde {r_1}}}\|\chi_{ k/4}*_xf\|_{L^{q_1,\infty} L^{r_1}}\lesssim2^{-k\theta}\|\chi_{ k/4}*_xf\|_{L^{q_1,\infty} L^{r_1}}.
\end{equation} Estimates (\ref{9201}), (\ref{1167}) and (\ref{9213}) imply \begin{equation}\label{1241} \left\|\phi_{k/4}*_x\left(Gf\right)\right\|_{L^qL^r}\lesssim \left\|\chi_k*_tf\right\|_{L^{\gamma '}L^{\rho'}}+2^{-k\theta }\left\|\chi_{k/4}*_xf\right\|_{L^{q_0,\infty} L^{r_0}\cap L^{q_1,\infty }L^{r_1}}. \end{equation} Similarly, \begin{eqnarray}\label{9207} &&\|\mathcal{F}_\xi^{-1}(\hat\psi(|\xi|^4-\mu|\xi|^2))*_x(Gf)\|_{L^qL^r\cap L^\infty L^2}\notag\\ &\lesssim&\|\chi_0*_tf\|_{L^{\gamma'}L^{\rho'}}+\|\mathcal{F}^{-1}_\xi\left(\hat\chi_0(|\xi|^4-\mu|\xi|^2)\right)*_xf\|_{L^{q_0,\infty} L^{r_0}\cap L^{q_1,\infty} L^{r_1}}. \end{eqnarray} It now follows from (\ref{1241}), (\ref{9207}) and (\ref{1271}) that $$ \|Gf\|_{ L^{q}B^{4\theta}_{r,2}} \lesssim \|f\|_{B^{\theta}_{\gamma',2}L^{\rho'}}+\|f\|_{l^2L^{q_0,\infty} L^{r_0}}+\|f\|_{l^2 L^{q_1,\infty} L^{r_1}} \lesssim \|f\|_{B^{\theta}_{\gamma',2}L^{\rho'}}+\|f\|_{l^2L^{\overline q} L^{\overline r}}, $$ where we used Lemma \ref{f} when $\overline{r}\ge2$. This proves (\ref{i2}). \noindent\textbf{The proof of the inequality (\ref{i3})}. By the definition of the Besov norm and (\ref{1271}), we have \begin{equation}\label{1192} \|Gf\|_{B_{q, 2}^{\theta-\sigma/4}B_{r, 2}^{\sigma}} \lesssim\|Gf\|_{L^{q}B_{r, 2}^{\sigma}}+\|Gf\|_{B_{q, 2}^{\theta}L^{r}}+J, \end{equation} where $J=\left\{\sum_{j=1}^{\infty} \sum_{k=1}^{\infty} 2^{(2 \theta-\sigma/2) j+\sigma k/2}\left\|\phi_{j} *_{t} \phi_{ k/4} *_{x} (Gf)\right\|_{L^{q}L^{r}}^2\right\}^{{1}/{2}}$. Since $\|Gf\|_{L^{q}B_{r, 2}^{\sigma}}$ can be controlled by (\ref{i2}), it suffices to estimate the last two terms in (\ref{1192}). We first estimate $\|Gf\|_{B_{q, 2}^{\theta}L^{r}}$.
Since $\phi_j*_te^{ita}=e^{ita}\hat\phi_j(a)$ for any $a\in\R$, it follows from (\ref{9206}) that \begin{eqnarray} \phi_j*_t(\widehat{Gf}) &=&\int_{-\infty}^{\infty}\fc{e^{it\tau}\hat\phi_j(\tau)}{2\pi i(\tau-|\xi|^4+\mu|\xi|^2)}\widetilde f(\tau,\xi) d\tau\notag\\ &&-\int_{-\infty}^{\infty}\fc{e^{it(|\xi|^4-\mu|\xi|^2)}\hat\phi_j(|\xi|^4-\mu|\xi|^2)\hat\chi_j(\tau)}{{2\pi i(\tau-|\xi|^4+\mu|\xi|^2)}}\widetilde f(\tau,\xi) d\tau\notag\\ &&-\int_{-\infty}^{\infty}\fc{e^{it(|\xi|^4-\mu|\xi|^2)}\hat\phi_j(|\xi|^4-\mu|\xi|^2)(1-\hat\chi_j(\tau))}{2\pi i(\tau-|\xi|^4+\mu|\xi|^2)}\notag\\ &&\qquad\quad\cdot \hat\chi_j(|\xi|^4-\mu|\xi|^2)\widetilde f(\tau,\xi) d\tau,\notag \end{eqnarray} so that \begin{eqnarray}\label{1191} \phi_j*_t(Gf)&=&\fc12\int_{-\infty}^{\infty}\text{sign}(t-s)e^{i(t-s)(\Delta^2+\mu\Delta)}(\phi_j*_tf)(s)ds\notag\\ &&-\fc12e^{it(\Delta^2+\mu\Delta)}\int_{-\infty}^{\infty}\text{sign}(-s)e^{is(\Delta^2+\mu\Delta)}(\phi_{j/4}*_x\chi_j*_tf)(s) ds\notag \\ &&-\fc12e^{it(\Delta^2+\mu\Delta)}\{K_j*_{t,x}\chi_{ j/4}*_xf\}|_{t=0}. \end{eqnarray} This together with Strichartz's estimates (\ref{sz})--(\ref{SZ}) and (\ref{1167}) implies \begin{equation} \|\phi_j*_t(Gf)\|_{L^qL^r}\lesssim\|\phi_j*_tf\|_{L^{\gamma'}L^{\rho'}}+\|\chi_j*_tf\|_{L^{\gamma'}L^{\rho'}}+2^{-j\theta}\left\|\chi_{j/4}*_xf\right\|_{L^{q_0,\infty }L^{r_0}}.\notag \end{equation} Similarly, \begin{equation} \|\psi*_t(Gf)\|_{L^qL^r}\lesssim\|\psi*_tf\|_{L^{\gamma'}L^{\rho'}}+\|\chi_0*_tf\|_{L^{\gamma'}L^{\rho'}}+\|\mathcal{F}^{-1}_\xi \left\{\hat\chi_0(|\xi|^4-\mu|\xi|^2)\right\}*_xf\|_{L^{q_0,\infty} L^{r_0}}.\notag \end{equation} Combining the above two inequalities, we obtain $$ \|Gf\|_{B^{\theta}_{q,2}L^{r}} \lesssim \|f\|_{B^{\theta}_{\gamma',2}L^{\rho'}}+\|f\|_{l^2L^{q_0,\infty} L^{r_0}} \lesssim \|f\|_{B^{\theta}_{\gamma',2}L^{\rho'}}+\|f\|_{l^2L^{\overline q} L^{\overline r}}, $$ where we used Lemma \ref{f} when $\overline{r}\ge2$. Next, we estimate $J$.
Note that by (\ref{1168}) and (\ref{a1}) \begin{eqnarray*} && \phi_j*_t\phi_{k/4}*_x(Gf)\\ &=&\fc12\int_{-\infty}^{\infty}\text{sign}(t-s)e^{i(t-s)(\Delta^2+\mu\Delta)}(\phi_j*_t\phi_{ k/4}*_x\chi_k*_tf)(s)ds\\ &&+K_k*_{t,x}\phi_j*_t\chi_{ k/4}*_xf\\ &&-\fc12e^{it(\Delta^2+\mu\Delta)}\int_{-\infty}^{\infty}\text{sign}(-s)e^{is(\Delta^2+\mu\Delta)}(\phi_{ j/4}*_x\phi_{ k/4}*_x\chi_k*_tf)(s) ds \\ &&-e^{it(\Delta^2+\mu\Delta)}\{K_k*_{t,x}\phi_{ j/4}*_x\chi_{k/4}*_xf\}|_{t=0},\notag\\ &:=&\uppercase\expandafter{\romannumeral1}_{j,k}+\uppercase\expandafter{\romannumeral2}_{j,k}+\uppercase\expandafter{\romannumeral3}_{j,k}+\uppercase\expandafter{\romannumeral4}_{j,k}, \end{eqnarray*} so that \begin{eqnarray*} J &\lesssim &\left(\sum_{j=1}^{\infty} \sum_{k=1}^{\infty} 2^{(2 \theta-\sigma/2) j+\sigma k/2}\left\|\uppercase\expandafter{\romannumeral1}_{j,k}\right\|_{L^{q}L^{r}}^{2}\right)^{\frac12} +\left(\sum_{j=1}^{\infty} \sum_{k=1}^{\infty} 2^{(2 \theta-\sigma/2) j+\sigma k/2}\left\|\uppercase\expandafter{\romannumeral2}_{j,k}\right\|_{L^{q}L^{r}}^{2}\right)^{\frac12}\notag\\ &&+\left(\sum_{j=1}^{\infty} \sum_{k=1}^{\infty} 2^{(2 \theta-\sigma/2) j+\sigma k/2}\left\|\uppercase\expandafter{\romannumeral3}_{j,k}\right\|_{L^{q}L^{r}}^{2}\right)^{\frac12} +\left(\sum_{j=1}^{\infty} \sum_{k=1}^{\infty} 2^{(2 \theta-\sigma/2) j+\sigma k/2}\left\|\uppercase\expandafter{\romannumeral4}_{j,k}\right\|_{L^{q}L^{r}}^{2}\right)^{\frac12}\notag\\ &:=& J_{1}+J_{2}+J_{3}+J_{4}.\notag \end{eqnarray*} We first consider $J_1,J_3$. 
By Strichartz's estimates (\ref{sz})--(\ref{SZ}), we get \begin{equation} \|\uppercase\expandafter{\romannumeral1}_{j,k}\|_{L^qL^r} \lesssim \|\phi_{j}*_t\chi_k*_tf\|_{L^{\gamma'}L^{\rho'}} ,\quad\|\uppercase\expandafter{\romannumeral3}_{j,k}\|_{L^qL^r}\lesssim\|\phi_{ j/4}*_x\phi_{ k/4}*_x\chi_k*_tf\|_{L^{\gamma'}L^{\rho'}}.\notag \end{equation} Since $\phi_j*_t\chi_k=0$ whenever $|j-k|\ge4$, we have \begin{eqnarray} J_1^2&\lesssim&\sum_{j=1}^\infty\sum_{k=1}^\infty2^{(2\theta-\sigma/2)j+\sigma k/2}\|\phi_{j}*_t\chi_k*_tf\|_{L^{\gamma'}L^{\rho'}}^2\notag\\ &\lesssim&\sum_{j=1}^{\infty}2^{2\theta j}\|\phi_j*_tf\|_{L^{\gamma'}L^{\rho'}}^2\lesssim\|f\|_{B^\theta_{\gamma',2}L^{\rho'}}^2.\notag \end{eqnarray} Similarly, we have $J_3\lesssim\|f\|_{B^\theta_{\gamma',2}L^{\rho'}}.$ For $J_4$, we deduce from (\ref{SZ}) and (\ref{1167}) that \begin{equation} \|\uppercase\expandafter{\romannumeral4}_{j,k}\|_{L^qL^r}\lesssim2^{-k\theta}\|\phi_{ j/4}*_x\chi_{ k/4}*_xf\|_{L^{q_0,\infty} L^{r_0}}.\notag \end{equation} Since $\phi_{ j/4}*_x\chi_{ k/4}=0$ whenever $|j-k|\ge4$, we conclude that \begin{equation} J_4\lesssim\|f\|_{l^2L^{q_0,\infty}L^{r_0}}\lesssim \|f\|_{B^{\theta}_{\gamma',2}L^{\rho'}}+\|f\|_{l^2L^{\overline q} L^{\overline r}},\notag \end{equation} where we used Lemma \ref{f} when $\overline{r}\ge2$. Our final step is to estimate $J_2$. Similarly to (\ref{9213}), we have \begin{equation}\label{1169} \|\uppercase\expandafter{\romannumeral2}_{j,k}\|_{L^qL^r} \lesssim 2^{-k\theta }\|\phi_j*_t\chi_{ k/4}*_xf\|_{L^{q_1,\infty}L^{r_1}}. \end{equation} On the other hand, by Young's inequality \begin{equation}\label{11610} \|\uppercase\expandafter{\romannumeral2}_{j,k}\|_{L^qL^r} \lesssim \|\phi_j*_tf\|_{L^{\gamma'}L^{\rho'}}.
\end{equation} It follows from (\ref{1169}) and (\ref{11610}) that \begin{eqnarray*} J_{2} &\lesssim &\left(\sum_{j=1}^{\infty} \sum_{k=1}^{j} 2^{(2\theta-\sigma/2) j+\sigma k/2}\left\|\phi_{j} *_{t} f\right\|_{L^{\gamma'}L^{\rho'}}^{2}\right)^{1/2} \\ &&+\left(\sum_{j=1}^{\infty} \sum_{k=j+1}^{\infty} 2^{(2 \theta-\sigma/2) (j-k)}\left\|\phi_{j} *_{t} \chi_{ k/4} *_{x} f\right\|_{L^{q_1,\infty}L^{r_1}}^{2}\right)^{1/2} \\ &:=& J_{2,1}+J_{2,2}. \end{eqnarray*} Since $\sum_{k=1}^{j} 2^{\sigma k/2} \lesssim 2^{\sigma j/2},$ we have $J_{2,1} \lesssim\|f\|_{B_{\gamma^{\prime}, 2}^{\theta}L^{\rho^{\prime}}}$. To estimate $J_{2,2}$, we interchange the order of the summation to obtain \begin{eqnarray*} J_{2,2}^{2} &=&\sum_{k=1}^{\infty} \sum_{j=1}^{k-1} 2^{(2 \theta-\sigma/2) (j-k)}\left\|\phi_{j} *_{t} \chi_{ k/4} *_{x} f\right\|_{L^{q_{1}, \infty}L^{r_{1}}}^{2} \\ & \lesssim & \sum_{k=1}^{\infty}\left\|\chi_{ k/4} *_{x} f\right\|_{L^{q_1,\infty }L^{r_1}}^2 \lesssim\|f\|_{l^2L^{q_1,\infty }L^{r_1}}^2. \end{eqnarray*} Collecting these estimates, we obtain $$ J \lesssim \|f\|_{B_{\gamma^{\prime}, 2}^{\theta}L^{\rho^{\prime}}}+\left\|f\right\|_{l^2L^{\overline{q}}L^{\overline{r}}}+\|f\|_{l^2L^{q_1,\infty }L^{r_1}} \lesssim \|f\|_{B_{\gamma^{\prime}, 2}^{\theta}L^{\rho^{\prime}}}+\left\|f\right\|_{l^2L^{\overline{q}}L^{\overline{r}}}, $$ where we used Lemma \ref{f} when $\overline{r}\ge r$. This finishes the proof of (\ref{i3}). Finally, the continuity of $Gf$ in time follows from a density argument. This completes the proof of Proposition \ref{p1}. \end{proof} \section{Proof of Theorem \ref{T1}}\label{s4} In this section, we prove Theorem \ref{T1}. Firstly, we recall two lemmas that we will need to complete the contraction argument. \begin{lemma}[\cite{G}, Lemma 3.4]\label{l1} Assume $\alpha>0$, $0\le s<\alpha+1$, $f\in\mathcal{C}(\alpha)$, and $1<p, r, \rho<\infty$ satisfy $\fc1p=\fc\alpha \rho+\fc1r$.
Then for any $u\in L^\rho\cap B^s_{r,2}$, we have\\ (i) if $s\in\mathbb{Z}$, $$ \|f(u)\|_{ H^{s,p}}\lesssim\|u\|_{L^\rho}^\alpha\|u\|_{H^{s,r}},$$\\ (ii) if $s\notin \mathbb{Z}$, $$ \|f(u)\|_{ B^s_{p,2}}\lesssim\|u\|_{L^\rho}^\alpha\|u\|_{B^s_{r,2}}.$$ \end{lemma} \begin{lemma}[\cite{Na2}, Lemma 2.3]\label{l2} Assume $0<s<8,s\neq4$, $\max \left\{0,\frac{s}{4}-1\right\}<\alpha$ and $1\le\rho, r_0,r,q_0\le \infty $. Assume also that $1<\gamma, q<\infty $ and $\gamma ,\rho,q_0,r_0,q,r$ satisfy $\frac{1}{\gamma '}=\frac{\alpha}{q_0}+\frac{1}{q}, \frac{1}{\rho'}=\frac{\alpha}{r_0}+\frac{1}{r}$. Then for any $f\in \mathcal{C}(\alpha)$ and $u\in L^{q_0}L^{r_0}\cap B^{s/4}_{q,2}L^r$, we have \begin{equation} \left\|f(u)\right\|_{B^{s/4}_{\gamma ',2}L^{\rho'}}\lesssim \left\|u\right\|_{L^{q_0}L^{r_0}}^\alpha \left\|u\right\|_{B^{s/4}_{q,2}L^r}.\notag \end{equation} \end{lemma} We regard the solution of the Cauchy problem (\ref{NLS}) as a fixed point of the integral equation given by \begin{equation}\label{Su} u(t)=(Su)(t)=e^{it(\Delta^2+\mu\Delta)}\phi+i \int_{0}^{t}e^{i\left(t-s\right)(\Delta^2+\mu\Delta)}f(u)\left(s\right)\mathrm{d}s, \end{equation} for $t\in\R$, where $u(t):=u(t,\cdot)$. Note that $Su$ satisfies \begin{equation}\label{SNLS} \begin{cases} i\partial_t\left(Su\right)+\Delta^2\left(Su\right)+\mu\Delta \left(Su\right)+f(u)=0,\\ \left(Su\right)(0)=\phi, \end{cases} \end{equation} and that \begin{equation}\label{tNLS} \partial_t \left(Su\right) = i e^{it(\Delta^2+\mu\Delta)}\left[\left(\Delta^2+\mu\Delta\right)\phi+f(\phi)\right] +i \int_{0}^{t}e^{i\left(t-s\right)(\Delta^2+\mu\Delta)}\partial_sf(u)\left(s\right)\mathrm{d}s. \end{equation} We now begin the proof of Theorem \ref{T1}. We consider four cases: $0<s<4$, $s=4$, $4<s<6$ and $6\le s<8$. \subsection{The case $0<s<4$} Throughout this subsection, we fix \begin{equation}\label{ga1} \gamma =\frac{2N+8}{N},\qquad \rho=\frac{2N+8}{N}.
\end{equation} We then define $q,r,\overline{q},\overline{r}$ such that \begin{equation}\label{1153} \overline{q}=\gamma ',\qquad\frac{4}{\overline{q}}-N\left(\frac{1}{2}-\frac{1}{\overline{r}}\right)=4-s \end{equation} and \begin{equation} \frac{1}{\gamma '}=\frac{\alpha+1}{q},\qquad \frac{1}{\rho'}=\alpha \left(\frac{1}{r}-\frac{s}{N}\right)+\frac{1}{r}.\notag \end{equation} Since $\alpha=\frac{8}{N-2s}$, $0<s<\min \left\{\frac{N}{2},4\right\}$, it is straightforward to verify that $(\gamma ,\rho), (q,r)\in\Lambda_b$ are two biharmonic admissible pairs, $1<\overline{q}<2<\overline{r}<\infty $, and $r<\frac{N}{s}$. Assume $\left\|\phi\right\|_{H^s}$ sufficiently small such that \begin{equation}\label{251} \left(2C_1\right)^{\alpha+1}\left\|\phi\right\|_{H^s}^\alpha\le1,\qquad C_2\left(2C_1\left\|\phi\right\|_{H^s}\right)^\alpha\le \frac{1}{2}, \end{equation} where $C_1,C_2$ are the constants in (\ref{148}) and (\ref{147}), respectively. Set $M=2C_1\left\|\phi\right\|_{H^s}$ and consider the metric space $$ X_M=\left\{u\in L^\infty H^s\cap L^qB^s_{r,2}\cap B^{s/4}_{q,2}L^r:\left\|u\right\|_{L^\infty H^s \cap L^qB^s_{r,2}\cap B^{s/4}_{q,2}L^r}\le M\right\}.\notag $$ It follows that $X_M$ is a complete metric space when equipped with the distance \begin{equation}\label{d} d(u,v)=\left\|u-v\right\|_{L^\infty L^2\cap L^qL^r}. \end{equation} Next, we show that the map $S$, defined in (\ref{Su}), is a contraction on the space $X_M$. We first show that $S$ maps $X_M$ into itself. Using Proposition \ref{p1}, we get \begin{equation}\label{144} \left\|Su\right\|_{L^\infty H^s\cap L^qB^s_{r,2}\cap B^{s/4}_{q,2}L^r}\lesssim \left\|\phi\right\|_{H^s}+\left\|f(u)\right\|_{B^{s/4}_{\gamma ',2}L^{\rho'}}+\left\|f(u)\right\|_{l^2L^{\overline{q}}L^{\overline{r}}}. 
\end{equation} Since $\frac{1}{\gamma '}=\frac{\alpha+1}{q}$, and $\frac{1}{\rho'}=\alpha \frac{N-sr}{Nr}+\frac{1}{r}$, we deduce from Lemma \ref{l2} and Sobolev's embedding $B^s_{r,2}(\R^N)\hookrightarrow L^{\frac{Nr}{N-sr}}(\R^N)$ that \begin{equation}\label{145} \left\|f(u)\right\|_{B^{s/4}_{\gamma ',2}L^{\rho'}}\lesssim \left\|u\right\|_{L^q L^{\frac{Nr}{N-sr}} }^\alpha \left\|u\right\|_{B^{s/4}_{q,2}L^r} \lesssim \left\|u\right\|_{L^qB^s_{r,2}}^\alpha \left\|u\right\|_{B^{s/4}_{q,2}L^r}. \end{equation} Next, we estimate $\left\|f(u)\right\|_{l^2L^{\overline{q}}L^{\overline{r}}}$. Since $r,\overline{r}>2$, we can choose $\ep>0$ sufficiently small and $\rho_\ep,q_\ep>2$ such that \begin{equation}\label{1151} \frac{1}{\overline{r}}=\frac{1}{\rho_\ep}-\frac{\ep}{N},\qquad \frac{1}{q_\ep}=\frac{1}{r}-\frac{s-\ep}{N}. \end{equation} Then we deduce from Minkowski's inequality ($\overline{q}=\gamma '\le2$) and Sobolev's embedding $B^\ep_{\rho_\ep,2}(\R^N)\hookrightarrow B^0_{\overline{r},2}(\R^N)$ that \begin{equation}\label{141} \left\|f(u)\right\|_{l^2L^{\overline{q}}L^{\overline{r}}} \lesssim \left\|f(u)\right\|_{L^{\gamma '}B^0_{\overline{r},2}}\lesssim \left\|f(u)\right\|_{L^{\gamma '}B^\ep_{\rho_\ep,2}}. \end{equation} Moreover, since $\frac{1}{\rho_\ep}=\alpha \left(\frac{1}{r}-\frac{s}{N}\right)+\frac{1}{q_\ep}$ by (\ref{ga1}), (\ref{1153}) and (\ref{1151}), it follows from Lemma \ref{l1} and Sobolev's embedding $B^s_{r,2}\left(\R^N\right)\hookrightarrow L^{\frac{Nr}{N-sr}}\left(\R^N\right)$ that \begin{equation}\label{142} \left\|f(u)\right\|_{B^\ep_{\rho_\ep,2}}\lesssim \left\|u\right\|_{L^{\frac{Nr}{N-sr}}}^\alpha \left\|u\right\|_{B^\ep_{q_\ep,2}}\lesssim \left\|u\right\|_{B^s_{r,2}}^\alpha \left\|u\right\|_{B^\ep_{q_\ep,2}}.
\end{equation} Estimates (\ref{141}), (\ref{142}) and H\"older's inequality imply \begin{equation}\label{146} \left\|f(u)\right\|_{l^2L^{\overline{q}}L^{\overline{r}}} \lesssim \left\|u\right\|_{L^qB^s_{r,2}}^\alpha \left\|u\right\|_{L^qB^\ep_{q_\ep,2}}\lesssim \left\|u\right\|_{L^qB^s_{r,2}}^{\alpha+1}, \end{equation} where we used the embedding $B^s_{r,2}\left(\R^N\right)\hookrightarrow B^\ep_{q_\ep,2}\left(\R^N\right)$ (see (\ref{1151})) in the second inequality. It now follows from (\ref{144}), (\ref{145}) and (\ref{146}) that, for any $u\in X_M$, \begin{equation}\label{148} \left\|Su\right\|_{L^\infty H^s\cap L^qB^s_{r,2}\cap B^{s/4}_{q,2}L^r}\le C_1\left\|\phi\right\|_{H^s}+C_1\left\|u\right\|^{\alpha+1}_{L^qB^s_{r,2}\cap B^{s/4}_{q,2}L^r}\le M, \end{equation} where we used (\ref{251}) in the last inequality. Our next aim is the desired Lipschitz property of $S$ with respect to the metric $d$ defined in (\ref{d}). For any $u,v\in X_M$, we deduce from Strichartz's estimate (\ref{SZ}), (\ref{fu}), (\ref{251}), H\"older's inequality and Sobolev's embedding that \begin{eqnarray}\label{147} d(Su,Sv)&\lesssim &\left\|\left(\left|u\right|^{\alpha}+\left|v\right|^{\alpha}\right)\left(u-v\right)\right\|_{L^{\gamma '}L^{\rho'}}\notag\\ &\lesssim &\left(\left\|u\right\|^\alpha_{L^qB^s_{r,2}}+\left\|v\right\|^\alpha_{L^qB^s_{r,2}}\right)\left\|u-v\right\|_{L^qL^r}\notag\\ &\le& C_2M^\alpha d(u,v)\le\frac{1}{2}d(u,v). \end{eqnarray} Therefore, by Banach's fixed point theorem, we conclude that the Cauchy problem (\ref{NLS}) admits a unique global solution $u\in CH^s\cap L^qB^s_{r,2}\cap B^{s/4}_{q,2}L^r$, where the continuity of $u$ in time follows from Proposition \ref{p1}. 
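The claim above that the exponents fit together (``it is straightforward to verify'') amounts to finite arithmetic with (\ref{ga1}), (\ref{1153}), the two defining relations for $(q,r)$, and the biharmonic admissibility condition $\frac{4}{q}=N(\frac{1}{2}-\frac{1}{r})$. The following sketch, which is purely illustrative and not part of the proof (the sample values $N=10$, $s=2$ are arbitrary), runs this bookkeeping in exact rational arithmetic:

```python
from fractions import Fraction as F

def check_exponents(N, s):
    """Exact-arithmetic check of the exponent relations in the case
    0 < s < min(N/2, 4), alpha = 8/(N - 2s) (illustrative only)."""
    N, s = F(N), F(s)
    alpha = F(8) / (N - 2 * s)
    gamma = rho = (2 * N + 8) / N                    # the pair fixed in (ga1)
    # biharmonic admissibility: 4/gamma = N(1/2 - 1/rho)
    assert F(4) / gamma == N * (F(1, 2) - 1 / rho)
    inv_gamma_p = 1 - 1 / gamma                      # 1/gamma'
    inv_rho_p = 1 - 1 / rho                          # 1/rho'
    q = (alpha + 1) / inv_gamma_p                    # from 1/gamma' = (alpha+1)/q
    inv_r = F(1, 2) - F(4) / (N * q)                 # admissibility of (q, r)
    # the second defining relation then holds automatically:
    assert inv_rho_p == alpha * (inv_r - s / N) + inv_r
    # qbar = gamma', and rbar from (1153): 4/qbar - N(1/2 - 1/rbar) = 4 - s
    inv_qbar = inv_gamma_p
    inv_rbar = F(1, 2) - (4 * inv_qbar - (4 - s)) / N
    assert F(1, 2) < inv_qbar < 1                    # i.e. 1 < qbar < 2
    assert 0 < inv_rbar < F(1, 2)                    # i.e. rbar > 2
    assert inv_r > s / N                             # i.e. r < N/s
    return q, 1 / inv_r

q, r = check_exponents(10, 2)
```

Running the check for other admissible pairs $(N,s)$ only changes the sample arguments; the assertions encode exactly the relations used in the contraction argument above.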
\subsection{The case $s=4$} Throughout this subsection, we fix \begin{equation} \gamma=\frac{8(\alpha+2)}{(N-8)\alpha},\qquad \rho=\frac{N(\alpha+2)}{N+4\alpha}.\notag \end{equation} For $\phi\in H^4$ and $T>0$, we define \begin{eqnarray} F(\phi,T)&=&\|e^{it(\Delta^2+\mu\Delta)}(\Delta^2+\mu\Delta)\phi\|_{L^{\gamma}\left([0,T],L^\rho\right)}+\|e^{it(\Delta^2+\mu\Delta)}f(\phi)\|_{L^{\gamma}\left([0,T],L^\rho\right)}\notag\\ &&+\left\|e^{it(\Delta^2+\mu\Delta)}\phi\right\|_{L^\gamma \left([0,T], H^{4,\rho}\right)}. \notag \end{eqnarray} By Strichartz's estimate (\ref{sz}), (\ref{fu}) and Sobolev's embedding $H^4\left(\R^N\right) \hookrightarrow L^{2\alpha+2}\left(\R^N\right)$, we have \begin{equation}\label{1261} F(\phi,T) \lesssim \left\|\phi\right\|_{H^4}+\left\|f(\phi)\right\|_{L^2} \le C_3 \left(\left\|\phi\right\|_{H^4}+\left\|\phi\right\|_{H^4}^{\alpha+1}\right). \end{equation} We then recall the following result from \cite{xuan}. \begin{proposition}[Proposition 5.1 in \cite{xuan}]\label{p2} Let $N>8$, $\alpha=\frac{8}{N-8},\mu=0$ or $-1$ and $f\in\mathcal{C}(\alpha)$. There exist $M>0$ and $C_4>0$ such that for any $T>0$ with \begin{equation}\label{1281} C_4\left(1+\left\|\phi\right\|_{H^4}^\alpha\right)F(\phi,T)\le \frac{M}{2}, \end{equation} the Cauchy problem (\ref{NLS}) admits a unique solution $u\in C \left([0,T],H^4\right)\cap L^\gamma \left([0,T],H^{4,\rho}\right)$ satisfying $\left\|u\right\|_{H^{1,\gamma }\left([0,T],L^\rho\right)\cap L^\gamma \left([0,T],H^{4,\rho}\right)}\le M$. \end{proposition} Let $\left\|\phi\right\|_{H^4}$ be sufficiently small that \begin{equation}\label{1282} C_3C_4\left(1+\left\|\phi\right\|_{H^4}^\alpha\right)\left(\left\|\phi\right\|_{H^4}+\left\|\phi\right\|_{H^4}^{\alpha+1}\right)\le \frac{M}{2}, \end{equation} where $C_3,C_4$ are the constants in (\ref{1261}) and (\ref{1281}) respectively.
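As in the previous case, the biharmonic admissibility of the pair $(\gamma,\rho)$ fixed at the start of this subsection reduces to exact arithmetic. A small illustrative check (not part of the proof; the sample value $N=12$ is arbitrary):

```python
from fractions import Fraction as F

def check_s4_pair(N):
    """For N > 8 and alpha = 8/(N-8), check that the pair (gamma, rho)
    fixed in the case s = 4 is biharmonic admissible (illustrative only)."""
    N = F(N)
    alpha = F(8) / (N - 8)
    gamma = 8 * (alpha + 2) / ((N - 8) * alpha)      # simplifies to alpha + 2
    rho = N * (alpha + 2) / (N + 4 * alpha)
    assert F(4) / gamma == N * (F(1, 2) - 1 / rho)   # 4/gamma = N(1/2 - 1/rho)
    assert gamma > 2 and rho > 2                     # a non-endpoint admissible pair
    return gamma, rho

gamma, rho = check_s4_pair(12)
```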
It now follows from Proposition \ref{p2}, (\ref{1261}) and (\ref{1282}) that for any $T>0$, the Cauchy problem (\ref{NLS}) admits a unique global solution $u\in C \left([0,T],H^4\right)\cap L^\gamma \left([0,T],H^{4,\rho}\right)$ with $\left\|u\right\|_{H^{1,\gamma }\left([0,T],L^\rho\right)\cap L^\gamma \left([0,T],H^{4,\rho}\right) }\le M$. Since $T>0$ is arbitrary and $M>0$ is fixed, we deduce that (\ref{NLS}) admits a unique solution $u\in C \left([0,\infty ),H^4\right)\cap L^\gamma \left([0,\infty ),H^{4,\rho}\right)$. By symmetry, a similar conclusion is reached in the negative time direction. Therefore, we obtain a unique solution $u\in C H^4\cap L^\gamma H^{4,\rho}$ to (\ref{NLS}). \subsection{The case $4<s<6$} Throughout this subsection, we fix \begin{equation} \gamma =2, \qquad \rho=\frac{2N}{N-4}.\notag \end{equation} We then define $q,r,\overline{q},\overline{r}$ such that \begin{equation} \overline{q}=2,\qquad\frac{4}{\overline{q}}-N\left(\frac{1}{2}-\frac{1}{\overline{r}}\right)=8-s,\notag \end{equation} and \begin{equation} \frac{1}{\gamma '}=\frac{\alpha+1}{q},\qquad \frac{1}{\rho'}=\alpha \left(\frac{1}{r}-\frac{s}{N}\right)+\frac{1}{r}.\notag \end{equation} Since $4<s<6,N>2s$, it is straightforward to verify that $(\gamma ,\rho), (q,r)\in\Lambda_b$ are two biharmonic admissible pairs, $1<\overline{r}<2 $, $r<\frac{N}{s}$ and $\frac{1}{\overline{r}}=\alpha \left(\frac{1}{r}-\frac{s}{N}\right)+\frac{1}{r}-\frac{s-4}{N}$. Assume $\left\|\phi\right\|_{H^s}$ sufficiently small such that \begin{equation}\label{252} \left(2C_5\right)^{\alpha+1}\left(\left\|\phi\right\|_{H^s}+\left\|\phi\right\|_{H^s}^{\alpha+1}\right)^\alpha\le1,\qquad \left(C_6+C_7\right)\left(2C_5\left(\left\|\phi\right\|_{H^s}+\left\|\phi\right\|_{H^s}^{\alpha+1}\right)\right)^\alpha\le \frac{1}{2}, \end{equation} where $C_5,C_6,C_7$ are the constants in (\ref{1420}), (\ref{1421}) and (\ref{1283}), respectively. 
Set $M=2C_5\left(\left\|\phi\right\|_{H^s}+\left\|\phi\right\|_{H^s}^{\alpha+1}\right)$ and consider the metric space \begin{equation}\label{7273}\begin{array}{c} Y_{M}=\left\{u \in L^\infty H^s\cap L^qB^s_{r,2}\cap B^{s/4}_{q,2}L^r\cap H^{1,q}B^{s-4}_{r,2}:\right. \\ \qquad\qquad\qquad\left. \left\|u\right\|_{L^\infty H^s\cap L^qB^s_{r,2}\cap B^{s/4}_{q,2}L^r\cap H^{1,q}B^{s-4}_{r,2}}\le M\right\}. \end{array}\notag \end{equation} It follows that $Y_M$ is a complete metric space when equipped with the distance \begin{equation}\label{d2} d(u,v)=\left\|u-v\right\|_{L^\infty L^2\cap L^qL^r}. \end{equation} Next, we show that the map $S$, defined in (\ref{Su}), is a contraction on the space $Y_M$. We first show that $S$ maps $Y_M$ into itself. From the equation (\ref{SNLS}), we have \begin{equation}\label{149} \left\|Su\right\|_{L^\infty H^s}\le \left\|Su\right\|_{L^\infty L^2}+\left\|Su\right\|_{L^\infty H^{s-2}}+\left\|\partial_t\left(Su\right)\right\|_{L^\infty H^{s-4}}+\left\|f(u)\right\|_{L^\infty H^{s-4}} \end{equation} and \begin{equation}\label{1410} \left\|Su\right\|_{L^qB^s_{r,2}}\le \left\|Su\right\|_{L^qL^r}+\left\|Su\right\|_{L^qB^{s-2}_{r,2}}+\left\|\partial_t\left(Su\right)\right\|_{L^qB^{s-4}_{r,2}}+\left\|f(u)\right\|_{L^qB^{s-4}_{r,2}}.
\end{equation} Since $\left(H^s,L^2\right)_{2/s,2}=H^{s-2}$ and $\left(B^s_{r,2},B^0_{r,\infty }\right)_{2/s,2}=B^{s-2}_{r,2}$ (see Theorem 6.4.5 in \cite{Bergh}), it follows from H\"older's inequality and Young's inequality that \begin{equation}\label{1411} \left\|Su\right\|_{L^\infty H^{s-2}}\lesssim \left\|Su\right\|_{L^\infty H^s}^{1-2/s}\left\|Su\right\|_{L^\infty L^2}^{2/s}\le \frac{1}{2}\left\|Su\right\|_{L^\infty H^s}+C\left\|Su\right\|_{L^\infty L^2}, \end{equation} and \begin{equation}\label{1412} \left\|Su\right\|_{L^qB^{s-2}_{r,2}}\lesssim \left\|Su\right\|_{L^qB^s_{r,2}}^{1-2/s}\left\|Su\right\|_{L^qB^0_{r,\infty }}^{2/s}\le \frac{1}{2}\left\|Su\right\|_{L^qB^s_{r,2}}+C\left\|Su\right\|_{L^qL^r}, \end{equation} where we used the embedding $L^r\left(\R^N\right) \hookrightarrow B^0_{r,\infty }\left(\R^N\right)$ (see Theorem 6.4.4 in \cite{Bergh}) in (\ref{1412}). Estimates (\ref{149})--(\ref{1412}) imply \begin{equation}\label{1415} \left\|Su\right\|_{L^\infty H^s}\leq \left\|Su\right\|_{L^\infty L^2}+\left\|\partial_t\left(Su\right)\right\|_{L^\infty H^{s-4}}+\left\|f(u)\right\|_{L^\infty H^{s-4}} \end{equation} and \begin{equation}\label{1416} \left\|Su\right\|_{L^qB^s_{r,2}}\le \left\|Su\right\|_{L^qL^r}+\left\|\partial_t\left(Su\right)\right\|_{L^qB^{s-4}_{r,2}}+\left\|f(u)\right\|_{L^qB^{s-4}_{r,2}}. \end{equation} From (\ref{1415}) and (\ref{1416}), we have \begin{eqnarray}\label{271} &&\left\|Su\right\|_{L^\infty H^s\cap L^qB^{s}_{r,2}\cap B^{s/4}_{q,2}L^r}\\ &\lesssim& \left\|Su\right\|_{L^\infty L^2\cap L^qL^r}+\left\|f(u)\right\|_{L^\infty H^{s-4}\cap L^qB^{s-4}_{r,2}} +\left\|\partial_t \left(Su\right)\right\|_{L^\infty H^{s-4}\cap L^qB^{s-4}_{r,2}\cap B^{\left(s-4\right)/4}_{q,2}L^r}.\notag \end{eqnarray} We first estimate $\left\|Su\right\|_{L^\infty L^2\cap L^qL^r}$. 
Since $\frac{1}{\gamma '}=\frac{\alpha+1}{q}, \frac{1}{\rho'}=\alpha \left(\frac{1}{r}-\frac{s}{N}\right)+\frac{1}{r}$, it follows from Strichartz's estimates (\ref{sz})--(\ref{SZ}), (\ref{fu}), H\"older's inequality and Sobolev's embedding that \begin{equation}\label{1417} \left\|Su\right\|_{L^\infty L^2\cap L^qL^r} \lesssim \left\|\phi\right\|_{L^2}+\left\|\left|u\right|^{\alpha}u\right\|_{L^{\gamma '}L^{\rho'}} \lesssim \left\|\phi\right\|_{H^s}+\left\|u\right\|_{L^qB^s_{r,2}}^\alpha \left\|u\right\|_{L^qL^r}. \end{equation} Next, we estimate $\left\|f(u)\right\|_{L^\infty H^{s-4}\cap L^qB^{s-4}_{r,2}}$. Let $p_1=\frac{2N}{N-8}$. Since $\frac{1}{2}=\alpha \frac{N-2s}{2N}+\frac{1}{p_1}$ and $\alpha+1>s-4$, we deduce from Lemma \ref{l1} and Sobolev's embedding $H^s \left(\R^N\right) \hookrightarrow B^{s-4}_{p_1,2}\left(\R^N\right)\cap L^{\frac{2N}{N-2s}}\left(\R^N\right)$ that \begin{equation}\label{1413} \left\|f(u)\right\|_{L^\infty H^{s-4}}\lesssim \left\|\left\|u\right\|_{L^{\frac{2N}{N-2s}}}^\alpha \left\|u\right\|_{B^{s-4}_{p_1,2}}\right\|_{L^\infty }\lesssim \left\|u\right\|_{L^\infty H^s}^{\alpha+1}. \end{equation} Let $p_2$ be given by $\frac{1}{r}=\frac{\alpha(N-2s)}{2N}+\frac{1}{p_2}$. Similar to (\ref{1413}), we have \begin{equation}\label{1414} \left\|f(u)\right\|_{L^qB^{s-4}_{r,2}}\lesssim \left\|u\right\|_{L^\infty L^{\frac{2N}{N-2s}}}^\alpha \left\|u\right\|_{L^qB^{s-4}_{p_2,2}}\lesssim \left\|u\right\|_{L^\infty H^s}^\alpha \left\|u\right\|_{L^qB^s_{r,2}}. \end{equation} Finally, we claim that \begin{eqnarray} \label{272} &&\left\|\partial_t\left(Su\right)\right\|_{L^\infty H^{s-4}\cap L^qB^{s-4}_{r,2}\cap B^{(s-4)/4}_{q,2}L^r}\notag\\ &\lesssim & \left\|\phi\right\|_{H^s}+\left\|\phi\right\|_{H^s}^{\alpha+1}+\left\|u\right\|_{L^qB^{s}_{r,2}}^\alpha \left(\left\|u\right\|_{B^{s/4}_{q,2}L^r}+\left\|\partial_t u\right\|_{L^q B^{s-4}_{r,2}}\right).
\end{eqnarray} In fact, from the equation (\ref{tNLS}) and the inequality (\ref{i2}), we have \begin{eqnarray}\label{1197} && \left\|\partial_t\left(Su\right)\right\|_{L^\infty H^{s-4}\cap L^qB^{s-4}_{r,2}\cap B^{(s-4)/4}_{q,2}L^r}\notag\\ &\lesssim& \left\|\phi\right\| _{H^{s}}+\left\|f(\phi)\right\|_{H^{s-4}}+\left\|\partial_tf(u)\right\|_{B^{s/4-1}_{\gamma ',2}L^{\rho'}}+\left\|\partial_tf(u)\right\|_{l^2L^{\overline{q}}L^{\overline{r}}}. \end{eqnarray} Similar to (\ref{1413}), we have \begin{equation}\label{1198} \left\|f(\phi)\right\|_{H^{s-4}}\lesssim \left\|\phi\right\|_{H^s}^{\alpha+1}. \end{equation} Moreover, we deduce from Lemma \ref{l2} and Sobolev's embedding $B^s_{r,2}\left(\R^N\right)\hookrightarrow L^{\frac{Nr}{N-sr}}\left(\R^N\right)$ that \begin{equation}\label{1165} \left\|\partial_tf(u)\right\|_{B^{s/4-1}_{\gamma ',2}L^{\rho'}}\lesssim \left\|f(u)\right\|_{B^{s/4}_{\gamma ',2}L^{\rho'}} \lesssim \left\|u\right\|_{L^qB^s_{r,2}}^\alpha \left\|u\right\|_{B^{s/4}_{q,2}L^r}. \end{equation} On the other hand, since $\overline{q}=\gamma '=2$ and $1<\overline{r}\le2$, it follows from Minkowski's inequality and the embedding $L^{\overline{r}}\left(\R^N\right)\hookrightarrow B^0_{\overline{r},2}\left(\R^N\right)$ (see Theorem 6.4.4 in \cite{Bergh}) that \begin{equation}\label{11910} \left\|\partial_tf(u)\right\|_{l^2L^{\overline{q}}L^{\overline{r}}} \lesssim \left\|\partial_tf(u)\right\|_{L^{\overline{q}}B^0_{\overline{r},2}}\lesssim \left\|\partial_tf(u)\right\|_{L^{\gamma '}L^{\overline{r}}}.
\end{equation} Since $\frac{1}{\overline{r}}=\alpha \left(\frac{1}{r}-\frac{s}{N}\right)+\frac{1}{r}-\frac{s-4}{N}$, it follows from (\ref{fu}), H\"older's inequality and Sobolev's embedding $B^s_{r,2}\left(\R^N\right)\hookrightarrow L^{\frac{Nr}{N-sr}}\left(\R^N\right), B^{s-4}_{r,2}\left(\R^N\right)\hookrightarrow L^{\frac{Nr}{N-(s-4)r}}\left(\R^N\right)$ that $$ \left\|\partial_tf(u)\right\|_{L^{\overline{r}}}\lesssim \left\|u\right\|_{L^{\frac{Nr}{N-sr}}}^\alpha \left\|\partial_tu\right\|_{L^{\frac{Nr}{N-(s-4)r}}}\lesssim \left\|u\right\|_{B^s_{r,2}}^\alpha \left\|\partial_tu\right\|_{B^{s-4}_{r,2}}. $$ This inequality, together with (\ref{11910}) and H\"older's inequality, implies \begin{equation}\label{1419} \left\|\partial_tf(u)\right\|_{l^2L^{\overline{q}}L^{\overline{r}}} \lesssim \left\|u\right\|^\alpha_{L^q B^s_{r,2}}\left\|\partial_tu\right\|_{L^q B^{s-4}_{r,2}}. \end{equation} The inequality (\ref{272}) is now an immediate consequence of (\ref{1197}), (\ref{1198}), (\ref{1165}) and (\ref{1419}). Estimates (\ref{271})--(\ref{272}) imply that, for any $u\in Y_M$, \begin{equation}\label{1420} \left\|Su\right\|_{L^\infty H^s\cap L^qB^s_{r,2}\cap B^{s/4}_{q,2}L^r \cap H^{1,q}B^{s-4}_{r,2}}\le C_5\left(\left\|\phi\right\|_{H^s}+\left\|\phi\right\|_{H^s}^{\alpha+1}\right)+C_5M^{\alpha+1}\le M, \end{equation} where we used (\ref{252}) in the second inequality. Our next aim is the desired Lipschitz property of $S$ with respect to the metric $d$ defined in (\ref{d2}). Similar to (\ref{147}), we have for any $u,v\in Y_M$, \begin{eqnarray}\label{1421} d(Su,Sv)&\lesssim& \left(\left\|u\right\|^\alpha_{L^qB^s_{r,2}}+\left\|v\right\|^\alpha_{L^qB^s_{r,2}}\right)\left\|u-v\right\|_{L^qL^r}\notag\\ &\le& C_6M^\alpha d(u,v)\le\frac{1}{2}d(u,v). \end{eqnarray} Therefore, we deduce from Banach's fixed point argument that the Cauchy problem (\ref{NLS}) admits a unique global solution $u\in L^\infty H^s\cap L^qB^s_{r,2}\cap B^{s/4}_{q,2}L^r\cap H^{1,q}B^{s-4}_{r,2}$.
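The exponent algebra of this subsection, in particular the identity $\frac{1}{\overline{r}}=\alpha(\frac{1}{r}-\frac{s}{N})+\frac{1}{r}-\frac{s-4}{N}$ used just above, can likewise be verified in exact rational arithmetic. An illustrative sketch, not part of the proof (the sample values $N=14$, $s=5$ are arbitrary):

```python
from fractions import Fraction as F

def check_case_4_6(N, s):
    """Exact-arithmetic check of the exponent relations in the case
    4 < s < 6, N > 2s, alpha = 8/(N - 2s) (illustrative only)."""
    N, s = F(N), F(s)
    alpha = F(8) / (N - 2 * s)
    gamma, rho = F(2), 2 * N / (N - 4)                # the pair fixed above
    assert F(4) / gamma == N * (F(1, 2) - 1 / rho)    # (gamma, rho) admissible
    inv_rho_p = 1 - 1 / rho                           # 1/rho'
    q = 2 * (alpha + 1)                               # from 1/gamma' = (alpha+1)/q, gamma' = 2
    inv_r = F(1, 2) - F(4) / (N * q)                  # admissibility of (q, r)
    assert inv_rho_p == alpha * (inv_r - s / N) + inv_r
    inv_rbar = F(1, 2) + (6 - s) / N                  # from 4/2 - N(1/2 - 1/rbar) = 8 - s
    assert F(1, 2) < inv_rbar < 1                     # i.e. 1 < rbar < 2
    # the identity used to bound d_t f(u) in L^{rbar}:
    assert inv_rbar == alpha * (inv_r - s / N) + inv_r - (s - 4) / N
    assert inv_r > s / N                              # i.e. r < N/s
    return q, 1 / inv_r

q, r = check_case_4_6(14, 5)
```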
It remains to prove that $u\in C\left(\R, H^s\right)$. Similar to (\ref{1415}), we have \begin{eqnarray}\label{273} \left\|u(t_1)-u(t_2)\right\|_{H^s}&\lesssim& \left\|\partial_t u(t_1)-\partial_t u(t_2)\right\|_{H^{s-4}}+\left\|u(t_1)-u(t_2)\right\|_{L^2}\notag\\ &&+\left\|f\left(u(t_1)\right)-f\left(u(t_2)\right)\right\|_{H^{s-4}}. \end{eqnarray} Since $\partial_tf(u)\in B^{s/4-1}_{\gamma ',2}L^{\rho'}\cap l^2L^{\overline{q}}L^{\overline{r}}$ by (\ref{1165}) and (\ref{11910}), we deduce from (\ref{tNLS}) and Proposition \ref{p1} that $u\in C^1\left(\R,H^{s-4}\right)$, so that by (\ref{273}) it suffices to prove $f(u)\in C\left(\R,H^{s-4}\right)$. To this end, we first show that $f(u)\in C\left(\R,B^{0}_{\rho_0,\infty }\right)$, where $\rho_0$ is given by $\frac{1}{\rho_0}=\frac{1}{2}-\frac{s-4}{N}$. Indeed, using the same method as that used to derive (\ref{1416}), we obtain \begin{eqnarray}\label{1284} \left\|u(t_1)-u(t_2)\right\|_{H^{4,\rho_0}}&\lesssim& \left\|\partial_t u(t_1)-\partial_t u(t_2)\right\|_{L^{\rho_0}}+\left\|u(t_1)-u(t_2)\right\|_{L^{\rho_0}}\notag\\ &&+\left\|f\left(u(t_1)\right)-f\left(u(t_2)\right)\right\|_{L^{\rho_0}}. \end{eqnarray} Moreover, it follows from (\ref{fu}), H\"older's inequality, Sobolev's embedding $H^s\left(\R^N\right) \hookrightarrow L^{\frac{2N}{N-2s}}\left(\R^N\right)$ and $H^{4,\rho_0}\left(\R^N\right)\hookrightarrow L^{\frac{N\rho_0}{N-4\rho_0}}\left(\R^N\right)$ that \begin{eqnarray}\label{1221} &&\left\|f\left(u(t_1)\right)-f\left(u(t_2)\right)\right\|_{L^{\rho_0}}\notag\\ &\lesssim&\left(\left\|u(t_1)\right\|_{L^{\frac{2N}{N-2s}}}^\alpha+\left\|u(t_2)\right\|^\alpha_{L^\frac{2N}{N-2s}}\right)\left\|u(t_1)-u(t_2)\right\|_{L^{\frac{N\rho_0}{N-4\rho_0}}}\notag\\ &\lesssim & \left\|u\right\|_{L^\infty H^s}^\alpha \left\|u(t_1)-u(t_2)\right\|_{H^{4,\rho_0}}.
\end{eqnarray} Combining (\ref{1284}) and (\ref{1221}), we obtain \begin{eqnarray}\label{1283} \left\|u(t_1)-u(t_2)\right\|_{H^{4,\rho_0}}&\le& C_7\left\|\partial_t u(t_1)-\partial_t u(t_2)\right\|_{L^{\rho_0}}+C_7\left\|u(t_1)-u(t_2)\right\|_{L^{\rho_0}}\notag\\ &&+ C_7\left\|u\right\|_{L^\infty H^s}^\alpha \left\|u(t_1)-u(t_2)\right\|_{H^{4,\rho_0}}. \end{eqnarray} Since $C_7\left\|u\right\|_{L^\infty H^s}^\alpha\le C_7M^\alpha\le \frac{1}{2}$ by (\ref{252}), we have \begin{equation}\label{1222} \left\|u(t_1)-u(t_2)\right\|_{H^{4,\rho_0}}\lesssim \left\|\partial_t u(t_1)-\partial_t u(t_2)\right\|_{L^{\rho_0}}+\left\|u(t_1)-u(t_2)\right\|_{L^{\rho_0}}. \end{equation} On the other hand, since $u\in C^1\left(\R,H^{s-4}\right)$ and $H^{s-4}\left(\R^N\right)\hookrightarrow L^{\rho_0}\left(\R^N\right)$, we have $u\in C^1\left(\R,L^{\rho_0}\right)$. This together with (\ref{1222}) implies $u\in C\left(\R,H^{4,\rho_0}\right)$. So by (\ref{1221}) and Sobolev's embedding $L^{\rho_0}\left(\R^N\right) \hookrightarrow B^0_{\rho_0,\infty }\left(\R^N\right)$, we have $f(u)\in C\left(\R,B^0_{\rho_0,\infty }\right)$. We proceed to show $f(u)\in C\left(\R,H^{s-4}\right)$. Let $\frac{1}{\rho_\ep}=\frac{1}{2}+\frac{\ep}{N}$ and $p_\ep=\frac{2N}{N-8+2\ep}$, where $\ep>0$ is sufficiently small that $\alpha>s-5+\ep$. We then claim that $f(u)$ is bounded in $B^{s-4+\ep}_{\rho_\ep,2}$. In fact, this follows from Lemma \ref{l1} (with $\frac{1}{\rho_\ep}=\alpha \frac{N-2s}{2N}+\frac{1}{p_\ep}$) and Sobolev's embedding $H^s\left(\R^N\right) \hookrightarrow L^{\frac{2N}{N-2s}}\left(\R^N\right)\cap B^{s-4+\ep}_{p_\ep,2}\left(\R^N\right)$: \begin{equation}\label{1285} \left\|f(u)\right\|_{B^{s-4+\ep}_{\rho_\ep,2}}\lesssim \left\|u\right\|_{L^{\frac{2N}{N-2s}}}^\alpha \left\|u\right\|_{B^{s-4+\ep}_{p_\ep,2}}\lesssim \left\|u\right\|_{H^s}^{\alpha+1}.
\end{equation} Then by the interpolation theorem (see Theorem 6.4.5 in \cite{Bergh}), we have $$ \left(B^0_{\rho_0,\infty },B^{s-4+\ep}_{\rho_\ep,2}\right)_{\theta,2}=B^{s-4}_{2,2}=H^{s-4}, \qquad \theta=\frac{s-4}{s-4+\ep}. $$ This together with (\ref{1285}) and the fact $f(u)\in C\left(\R,B^0_{\rho_0,\infty }\right)$ implies $f(u)\in C\left(\R,H^{s-4}\right)$. Combining (\ref{273}) and $u\in C^1\left(\R,H^{s-4}\right)$, we immediately obtain $u\in C\left(\R, H^s\right)$. \subsection{The case $6\leq s<8$} Throughout this subsection, we fix $r=\frac{2N}{N-4}$. Assume $\left\|\phi\right\|_{H^s}$ sufficiently small such that \begin{equation}\label{253} \left(2C_8\right)^{\alpha+1}\left(\left\|\phi\right\|_{H^s}+\left\|\phi\right\|_{H^s}^{\alpha+1}\right)^\alpha\le1,\qquad C_9\left(2C_8\left(\left\|\phi\right\|_{H^s}+\left\|\phi\right\|_{H^s}^{\alpha+1}\right)\right)^\alpha\le \frac{1}{2}, \end{equation} where $C_8,C_9$ are the constants in (\ref{11431}) and (\ref{11432}), respectively. Set $M=2C_8\left(\left\|\phi\right\|_{H^s}+\left\|\phi\right\|_{H^s}^{\alpha+1}\right)$ and consider the metric space \begin{equation} Z_{M}=\left\{u \in L^\infty H^s\cap B^{s/4}_{2,2}L^r\cap B^{(s-2)/4}_{2,2}B^2_{r,2}: \left\|u\right\|_{L^\infty H^s\cap B^{s/4}_{2,2}L^r\cap B^{(s-2)/4}_{2,2}B^{2}_{r,2}}\le M\right\}.\notag \end{equation} It follows that $Z_M$ is a complete metric space when equipped with the distance \begin{equation}\label{d3} d(u,v)=\left\|u-v\right\|_{L^\infty L^2\cap L^2L^r}. \end{equation} Next, we show that the map $S$, defined in (\ref{Su}), is a contraction on the space $Z_M$. We first estimate $\left\|Su\right\|_{L^\infty H^s\cap B^{s/4}_{2,2}L^r}$.
Since (\ref{1415}), (\ref{1413}) and (\ref{1198}) still hold in the case $6\le s<8$, we have \begin{equation} \label{274} \left\|Su\right\|_{L^\infty H^s} \lesssim \left\|u\right\|_{L^\infty H^s}^{\alpha+1} +\left\|Su\right\|_{L^\infty L^2}+\left\|\partial_t \left(Su\right)\right\|_{L^\infty H^{s-4}}, \end{equation} so that \begin{equation} \label{275} \left\|Su\right\|_{L^\infty H^s\cap B^{s/4}_{2,2}L^r} \lesssim \left\|u\right\|_{L^\infty H^s}^{\alpha+1} +\left\|Su\right\|_{L^\infty L^2\cap L^2L^r} +\left\|\partial_t \left(Su\right)\right\|_{L^\infty H^{s-4}\cap B^{\left(s-4\right)/4}_{2,2}L^r}. \end{equation} From (\ref{SNLS}) and Strichartz's estimate (\ref{SZ}), we have \begin{equation}\label{1286} \left\|Su\right\|_{L^\infty L^2\cap L^2L^r} \lesssim \left\|\phi\right\|_{L^2}+\left\|f(u)\right\|_{L^2L^{r'}} \lesssim \left\|\phi\right\|_{H^s}+\left\|u\right\|_{L^\infty H^s}^\alpha \left\|u\right\|_{L^2L^r}, \end{equation} where we used (\ref{fu}), H\"older's inequality and Sobolev's embedding $H^s\left(\R^N\right) \hookrightarrow L^{\frac{2N}{N-2s}}\left(\R^N\right)$ in the second inequality. Next, we claim that \begin{eqnarray}\label{276} &&\left\|\partial_t \left(Su\right)\right\|_{L^\infty H^{s-4}\cap B^{\left(s-4\right)/4}_{2,2}L^r}\notag\\ &\lesssim & \left\|\phi\right\|_{H^s}+\left\|\phi\right\|_{H^s}^{\alpha+1}+\left\|u\right\|_{L^\infty H^s}^\alpha \left(\left\|u\right\|_{B^{s/4}_{2,2}L^r}+\left\|u\right\|_{B^{\left(s-2\right)/4}_{2,2}B^{2}_{r,2}}\right).
\end{eqnarray} In fact, from the equation (\ref{tNLS}), the inequalities (\ref{i2}) and (\ref{1198}), we have \begin{eqnarray}\label{11421} &&\left\|\partial_t \left(Su\right)\right\|_{L^\infty H^{s-4}\cap B^{\left(s-4\right)/4}_{2,2}L^r}\notag\\ &\lesssim &\left\|\phi\right\|_{H^{s}}+\left\|f(\phi)\right\|_{H^{s-4}}+\left\|\partial_tf(u)\right\|_{B^{(s-4)/4}_{2,2}L^{r'}}+\left\|\partial_tf(u)\right\|_{l^2L^{4/(8-s)}L^2}\notag\\ &\lesssim &\left\|\phi\right\|_{H^s}+\left\|\phi\right\|_{H^s}^{\alpha+1}+\left\|f(u)\right\|_{B^{s/4}_{2,2}L^{r'}}+\left\|\partial_tf(u)\right\|_{l^2L^{4/(8-s)}L^2}. \end{eqnarray} From Lemma \ref{l2} and Sobolev's embedding $H^s \left(\R^N\right)\hookrightarrow L^{\frac{2N}{N-2s}}\left(\R^N\right)$, we have \begin{equation}\label{11424} \left\|f(u)\right\|_{B^{s/4}_{2,2}L^{r'}} \lesssim \left\|u\right\| _{L^\infty L^{\frac{2N}{N-2s}}}^\alpha\left\|u\right\|_{B^{s/4}_{2,2}L^r} \lesssim \left\|u\right\|_{L^\infty H^s}^\alpha\left\|u\right\|_{B^{s/4}_{2,2}L^r}. \end{equation} It remains to estimate $\left\|\partial_tf(u)\right\|_{l^2L^{4/(8-s)}L^2}$. Note that $\frac{4}{8-s}\ge 2$, so that $B^{(s-6)/4}_{2,2}L^2\hookrightarrow B^{(s-6)/4}_{2,4/(8-s)}L^2\hookrightarrow L^{4/(8-s)}L^2$, which implies \begin{equation}\label{11425} \left\|\partial_tf(u)\right\|_{l^2L^{4/(8-s)}L^2} \lesssim \left\|\partial_tf(u)\right\|_{l^2B^{(s-6)/4}_{2,2}L^2}\lesssim \left\|f(u)\right\|_{B^{(s-2)/4}_{2,2}L^2}. \end{equation} Moreover, it follows from Lemma \ref{l2} and Sobolev's embedding $H^s\left(\R^N\right)\hookrightarrow L^{\frac{2N}{N-2s}}\left(\R^N\right)$, $B^2_{r,2}\left(\R^N\right)\hookrightarrow L^{\frac{2N}{N-8}}$ that \begin{equation}\label{11426} \left\|f(u)\right\|_{B^{(s-2)/4}_{2,2}L^2} \lesssim \left\|u\right\|^\alpha_{L^\infty L^{\frac{2N}{N-2s}}}\left\|u\right\|_{B^{(s-2)/4}_{2,2}L^{\frac{2N}{N-8}}} \lesssim \left\|u\right\|^\alpha_{L^\infty H^s}\left\|u\right\|_{B^{(s-2)/4}_{2,2}B^2_{r,2}}. 
\end{equation} The inequality (\ref{276}) is now an immediate consequence of (\ref{11421})--(\ref{11426}). \\ Estimates (\ref{275}), (\ref{1286}) and (\ref{276}) imply that, for any $u\in Z_M$, \begin{equation}\label{11428} \left\|Su\right\|_{L^\infty H^s\cap B^{s/4}_{2,2}L^r}\lesssim \left\|\phi\right\|_{H^s}+\left\|\phi\right\|_{H^s}^{\alpha+1}+M^{\alpha+1}. \end{equation} We now estimate $\left\|Su\right\|_{B^{(s-2)/4}_{2,2}B^2_{r,2}}$. When $s=6$, we deduce from the inequality (\ref{i2}) and the equation (\ref{tNLS}) that \begin{eqnarray}\label{261} &&\left\|\partial_t \left(Su\right)\right\|_{L^2 B^{2}_{r,2}}\notag\\ &\lesssim& \left\|\phi\right\|_{H^6}+\left\|f(\phi)\right\|_{H^2}+\left\|\partial_t f(u)\right\|_{B^{1/2}_{2,2}L^{r'}}+\left\|\partial_t f(u)\right\|_{l^2L^2L^2}\notag\\ &\lesssim &\left\|\phi\right\|_{H^6}+\left\|\phi\right\|_{H^6}^{\alpha+1}+\left\|f(u)\right\|_{B^{3/2}_{2,2}L^{r'}}+\left\|\partial_t f(u)\right\|_{L^2L^2}, \end{eqnarray} where we used (\ref{1198}) and the embedding $L^2L^2 \hookrightarrow l^2L^2L^2$ in the last inequality. When $6<s<8$, we deduce from the equation (\ref{tNLS}), the inequality (\ref{i3}) ($\sigma=2,\theta=(s-4)/4$) and (\ref{1198}) that \begin{eqnarray}\label{11429} &&\left\|\partial_t\left(Su\right)\right\|_{B^{(s-4)/4-2/4}_{2,2}B^2_{r,2}}\notag\\ &\lesssim &\left\|\phi\right\|_{H^{s}}+\left\|f(\phi)\right\|_{H^{s-4}}+\left\|\partial_tf(u)\right\|_{B^{(s-4)/4}_{2,2}L^{r'}}+\left\|\partial_tf(u)\right\|_{l^2L^{4/(8-s)}L^2}\notag\\ &\lesssim & \left\|\phi\right\|_{H^s}+\left\|\phi\right\|_{H^s}^{\alpha+1}+\left\|f(u)\right\|_{B^{s/4}_{2,2}L^{r'}}+\left\|\partial_tf(u)\right\|_{l^2L^{4/(8-s)}L^2}. 
\end{eqnarray} Estimates (\ref{261}), (\ref{11429}), (\ref{11424}), (\ref{11425}) and (\ref{11426}) imply that, for any $u\in Z_{M}$, \begin{equation} \label{262} \left\|\partial_t\left(Su\right)\right\|_{B^{(s-4)/4-2/4}_{2,2}B^2_{r,2}}\lesssim \left\|\phi\right\|_{H^s}+\left\|\phi\right\|_{H^s}^{\alpha+1}+M^{\alpha+1}, \qquad 6\le s<8. \end{equation} On the other hand, it follows from the inequality (\ref{i2}) and the embedding $L^2L^2 \hookrightarrow l^2L^2L^2$ that \begin{eqnarray} \label{277} \left\|Su\right\|_{L^2B^{2}_{r,2}}&\lesssim& \left\|\phi\right\|_{H^2}+\left\|f(u)\right\|_{B^{1/2}_{2,2}L^{r'}}+\left\|f(u)\right\|_{L^2L^2}\notag\\ &\lesssim & \left\|\phi\right\|_{H^2}+\left\|u\right\|_{L^\infty H^s}^{\alpha} \left(\left\|u\right\|_{B^{s/4}_{2,2}L^r}+\left\|u\right\|_{B^{\left(s-2\right)/4}_{2,2}B^{2}_{r,2}}\right), \end{eqnarray} where we used (\ref{11424}) and (\ref{11426}) in the second inequality. It now follows from (\ref{11428}), (\ref{262}), (\ref{277}) and (\ref{253}) that, for any $u\in Z_M$, \begin{equation}\label{11431} \left\|Su\right\|_{L^\infty H^s\cap B^{s/4}_{2,2}L^r\cap B^{(s-2)/4}_{2,2}B^2_{r,2}}\le C_8\left(\left\|\phi\right\|_{H^s}+\left\|\phi\right\|_{H^s}^{\alpha+1}\right)+C_8M^{\alpha+1}\le M. \end{equation} Our next aim is the desired Lipschitz property of $S$ with respect to the metric $d$ defined in (\ref{d3}). For any $u,v\in Z_M$, we deduce from Strichartz's estimate (\ref{SZ}), the inequality (\ref{fu}), (\ref{253}), H\"older's inequality and Sobolev's embedding $H^s\left(\R^N\right)\hookrightarrow L^{\frac{2N}{N-2s}}\left(\R^N\right)$ that \begin{eqnarray}\label{11432} d(Su,Sv)&\lesssim &\left\|\left(\left|u\right|^{\alpha}+\left|v\right|^{\alpha}\right)\left(u-v\right)\right\|_{L^2L^{r'}}\notag\\ &\lesssim &\left(\left\|u\right\|^\alpha_{L^\infty H^s}+\left\|v\right\|^\alpha_{L^\infty H^s}\right)\left\|u-v\right\|_{L^2L^r}\notag\\ &\le &C_9M^\alpha d(u,v)\le\frac{1}{2}d(u,v).
\end{eqnarray} Therefore, we deduce from Banach's fixed point argument that the Cauchy problem (\ref{NLS}) admits a unique global solution $u\in C\left(\R,H^s\right)\cap B^{s/4}_{2,2}L^r\cap B^{(s-2)/4}_{2,2}B^2_{r,2}$, where the continuity of $u$ in time follows from the same argument used in the case $4<s<6$. \section*{Funding} {This work is partially supported by the National Natural Science Foundation of China 11771389, 11931010 and 11621101.}
TITLE: compound distribution in Bayesian sense vs. compound distribution as random sum? QUESTION [0 upvotes]: I'm trying to sort out two different uses of the term "compound distribution" and figure out the relationship. The Wikipedia article on compound distribution -- which I wrote -- defines a compound distribution as an infinite mixture, i.e. if $p(x|a)$ is a distribution of type F, and $p(a|b)$ is a distribution of type G, then $p(x|b) = \int_a p(x|a) p(a|b) da$ is a compound distribution that results from compounding F with G. This is the distribution of prior and posterior predictive distributions in Bayesian statistics. However, the term "compound distribution" has another meaning as a random sum, i.e. a sum of i.i.d. variables where the number of variables is random. What's the relation between the two? And am I using "compound distribution" correctly for the first definition? REPLY [0 votes]: I can't say in general, but in the actuarial literature, the random sum of random variables is called an aggregate distribution, as in aggregate insured losses. Your definition of compound distribution is the one used in insurance.
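To make the contrast concrete, here is a small simulation sketch (the distribution choices are arbitrary, purely for illustration): the first meaning draws a parameter and then an observation from it (an infinite mixture), while the second draws a random count and sums that many i.i.d. terms. Both setups are tuned so the mean is 6.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Meaning 1 -- compounding as an infinite mixture, p(x|b) = int p(x|a) p(a|b) da:
# draw a ~ Gamma(shape=3, scale=2) (mean 6), then x ~ Poisson(a), so E[x] = E[a] = 6.
a = rng.gamma(shape=3.0, scale=2.0, size=n)
x_mixture = rng.poisson(a)

# Meaning 2 -- compound/aggregate distribution as a random sum S = X_1 + ... + X_N:
# N ~ Poisson(2), X_i ~ Exponential(mean 3); by Wald's identity E[S] = 2 * 3 = 6.
counts = rng.poisson(2.0, size=n)
draws = rng.exponential(scale=3.0, size=counts.sum())
owner = np.repeat(np.arange(n), counts)            # which sample each draw belongs to
s_aggregate = np.bincount(owner, weights=draws, minlength=n)

print(round(x_mixture.mean(), 2), round(s_aggregate.mean(), 2))
```

The two sample means agree here only because the parameters were chosen to match; the resulting distributions are quite different objects, which is exactly the point of the terminological distinction.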
\begin{document} \title{Micro-macro decomposition based asymptotic-preserving numerical schemes and numerical moments conservation for collisional nonlinear kinetic equations \footnote{The first and the third authors were supported by the funding DOE--Simulation Center for Runaway Electron Avoidance and Mitigation. The second author was supported by NSF grants DMS-1522184 and DMS-1107291: RNMS KI-Net. }} \author{Irene M. Gamba \footnote{Department of Mathematics and The Institute for Computational Engineering and Sciences (ICES), University of Texas at Austin, Austin, TX 78712, USA (gamba@math.utexas.edu).}, Shi Jin \footnote{Department of Mathematics, University of Wisconsin, Madison, WI 53706, USA and School of Mathematical Sciences, Institute of Natural Sciences, MOE-LSEC and SHL-MAC, Shanghai Jiao Tong University, Shanghai, China (sjin@wisc.edu). } and Liu Liu\footnote{The Institute for Computational Engineering and Sciences (ICES), University of Texas at Austin, Austin, TX 78712, USA (lliu@ices.utexas.edu). }} \maketitle \abstract{ In this paper, we first extend the micro-macro decomposition method for multiscale kinetic equations from the BGK model to general collisional kinetic equations, including the Boltzmann and the Fokker-Planck Landau equations. The main idea is to use a relation between the (numerically stiff) linearized collision operator and the nonlinear quadratic one; the latter's stiffness can be overcome using the BGK penalization method of Filbet and Jin for the Boltzmann equation, or the linear Fokker-Planck penalization method of Jin and Yan for the Fokker-Planck-Landau equation. Such a scheme allows the computation of multiscale collisional kinetic equations efficiently in all regimes, including the fluid regime in which the fluid dynamic behavior can be correctly computed even without resolving the small Knudsen number. A distinguished feature of these schemes is that although they contain implicit terms, they can be implemented {\it explicitly}.
These schemes preserve the moments (mass, momentum and energy) {\it exactly} thanks to the use of the macroscopic system, which is naturally in conservative form. We further utilize this conservation property for more general kinetic systems, using the Vlasov-Amp\'{e}re and Vlasov-Amp\'{e}re-Boltzmann systems as examples. The main idea is to evolve both the kinetic equation for the probability density distribution and the moment system; the latter naturally induces a scheme that conserves the moments exactly at the numerical level if they are physically conserved. } {\bf keywords: } Boltzmann equation, Landau equation, micro-macro decomposition, asymptotic preserving scheme, conservative scheme, Vlasov-Amp\'{e}re-Boltzmann \section{Introduction} The Boltzmann equation and the Fokker-Planck-Landau equation are among the most important kinetic equations, arising in the description of the dynamics of the probability density distribution of particles in rarefied gases and plasmas, respectively. One of the main computational challenges for these kinetic equations is that the problem often involves multiple time and spatial scales, characterized by the Knudsen number (denoted by $\varepsilon$), the dimensionless mean free path, which may vary by orders of magnitude in the computational domain, covering the fluid, transition, rarefied and even free streaming regimes. Asymptotic-Preserving (AP) schemes, which mimic the asymptotic transition from one scale to another at the discrete level, have been shown to be an effective computational paradigm in the last two decades \cite{jin1999efficient, jin2010asymptotic}. Such schemes allow efficient numerical approximations in {\it all} regimes, and coarse meshes and large time steps can be used even in the fluid dynamic regime, without numerically resolving the small Knudsen number. For the space inhomogeneous Boltzmann equation, AP schemes were first designed using a BGK-operator based penalty \cite{Filbet-Jin}.
Other approaches include the exponential integrator based methods \cite{dimarco2011exponential, li2014exponential}, or micro-macro (MM) decomposition \cite{MM-Lemou}. We also mention the relevant works \cite{xu2010unified, liu2016unified}. One should note that \cite{MM-Lemou, xu2010unified} only dealt with the BGK model, rather than the full Boltzmann equation. For AP schemes to deal with the stiff Landau collision operator, the BGK-penalization method was extended to the Fokker-Planck-Landau equation in \cite{JinYan}, using the linear Fokker-Planck operator as the penalty. The aim of this paper is not to compare all these different approaches; rather, we focus on the micro-macro decomposition method, which was formulated in \cite{MM-Lemou} for the Boltzmann equation but numerically realized only for the BGK model. One of the difficulties in this formulation is that one encounters a stiff linearized collision operator whose inversion could be computationally inefficient. One of the goals of this paper is to show how it can be extended to general collision operators, including the Boltzmann and Landau collision operators. Having its theoretical origin in \cite{liu2004boltzmann} (see also \cite{liu2006nonlinear}), the micro-macro decomposition has also found its advantage in designing AP schemes for radiative heat transfer \cite{klar2001numerical}, the linear transport equation \cite{Lemou-BC}, among others. One of the advantages of the micro-macro approach is that one can obtain good uniform numerical stability results \cite{liu2010analysis, LFY-DG}. Our main idea for the MM method is the use of a simple relation between a linearized collision operator (a numerically stiff term) and the quadratically nonlinear collision operator. For the (stiff) nonlinear collision operators, we then use the BGK-penalty method of Filbet-Jin \cite{Filbet-Jin} for the Boltzmann collision or the Fokker-Planck penalty of Jin-Yan \cite{JinYan} for the Fokker-Planck-Landau collision.
This allows us to extend the MM method of \cite{MM-Lemou} from the BGK model to the more physical Boltzmann and Fokker-Planck-Landau equations in a rather simple fashion. We would like to point out that in the MM formalism (as well as in the penalty methods in \cite{Filbet-Jin, JinYan}), one needs to solve the macroscopic system, which is in conservation form, giving rise to the conservation of mass, momentum and total energy. When discretizing the macroscopic system with a standard spatially conservative scheme, these physically conserved quantities are naturally conserved numerically. This is not the case if one uses the microscopic equation for the particle density distribution $f$ and then takes moments of the discrete $f$, since many collision solvers, for example the spectral methods \cite{gamba2017fast, pareschi2000numerical,gamba2009spectral, mouhot2006fast}, do not have the {\it exact} conservation properties, and extra efforts are needed for exact conservation, see \cite{mieussens2000discrete, zhang2017conservative, gamba2014conservative}. The advantage of obtaining the conserved moments from the {\it macro} system was noted and emphasized in \cite{JinYan}. In Section \ref{sec:7} we further extend this idea to design {\it conservative} schemes for {\it general} (collisional or non-collisional) kinetic systems, using the Vlasov-Poisson and Vlasov-Poisson-Boltzmann systems as examples. The general principle favored here is that one should solve the original kinetic equation and the moment system {\it simultaneously}. One first derives the moment system analytically; the discrete moment system, when using spatially conservative discretizations, then {\it automatically} yields the exact conservation of moments, if they are conserved physically.
Since the total energy also includes the electric energy, another idea introduced here is to replace the Poisson equation for the electric field by the Amp\'{e}re equation, and then the coupled system is discretized in time by a carefully designed explicit-implicit scheme. This paper is organized as follows. Section \ref{sec:Intro} introduces two kinetic equations: the Boltzmann and the Fokker-Planck-Landau equations. In Section \ref{sec:micro}, the basic idea of the micro-macro decomposition method is reviewed. Section \ref{sec:NA} studies the fully discretized AP numerical scheme, especially how to embed the penalization method in the micro-macro decomposition framework to solve the full nonlinear Boltzmann and Fokker-Planck-Landau equations. We also emphasize that our scheme conserves the moments (mass, momentum, energy) if these moment variables are obtained from the macroscopic system instead of from the particle density distribution $f$. Section \ref{Sec:Num} provides some implementation details, while in Section \ref{sec:NE} a number of numerical examples are used to study the conservation property as well as the performance of the new schemes in different regimes. In Section \ref{sec:7} we introduce conservative schemes for the Vlasov-Amp\'{e}re system and the Vlasov-Amp\'{e}re-Boltzmann system, with the conservation obtained through solving the moment systems and a specially designed time discretization. Finally, we conclude and list some future work in Section \ref{sec:FW}. \section{Introduction of two kinetic equations} \label{sec:Intro} \subsection{The Boltzmann equation} One of the most celebrated kinetic equations for rarefied gases is the Boltzmann equation, which describes the time evolution of the density distribution of a dilute gas of particles when the only interactions considered are binary elastic collisions.
A dimensionless form reads \begin{equation}\label{Boltz} \partial_t f + v\cdot \nabla_x f= \frac{1}{\varepsilon}\, \mathcal Q_{\text{B}}(f, f), \qquad t>0, \, (x,v)\in\Omega\times\mathbb R^d, \end{equation} where $f(t,x,v)$ is the probability density distribution (p.d.f.) function, modeling the probability of finding a particle at time $t$, at position $x\in\Omega$, with velocity $v\in\mathbb R^d$. The parameter $\varepsilon$ is the Knudsen number, defined as the ratio of the mean free path over a typical length scale such as the size of the spatial domain, which characterizes the degree of rarefaction of the gas. The Boltzmann collision operator $\mathcal Q_{\text{B}}$ is a bilinear functional acting only on the velocity dependence of $f$, \begin{equation} \mathcal Q_{\text{B}}(f, g)(t,x,v) = \int_{\mathbb R^d}\int_{\mathbb S^{d-1}}\, B(|v-v_{\ast}|, \cos\theta) \left(f(t,x,v^{\prime})g(t,x,v_{\ast}^{\prime}) - f(t,x,v)g(t,x,v_{\ast})\right) d\sigma\, dv_{\ast}\,. \end{equation} We consider elastic interactions, for which the velocity pairs before and after the collision, $(v, v_{\ast})$ and $(v^{\prime}, v_{\ast}^{\prime})$, are related by \begin{align} \begin{cases} &\displaystyle v^{\prime}= \frac{v+v_{\ast}}{2} + \frac{|v-v_{\ast}|}{2}\, \sigma, \\[6pt] &\displaystyle v_{\ast}^{\prime} = \frac{v+v_{\ast}}{2} - \frac{|v-v_{\ast}|}{2}\, \sigma. \end{cases} \end{align} Here $\sigma$ is the scattering direction varying on the unit sphere $\mathbb S^{d-1}$, defined by $$\sigma = \frac{u^{\prime}}{|u^{\prime}|} = \frac{u^{\prime}}{|u|}, $$ where the pre- and post-collisional relative velocities $u=v-v_{\ast}$ and $u^{\prime}=v^{\prime}-v_{\ast}^{\prime}$ have the same magnitude, i.e., $|u^{\prime}|=|u|$. The cosine of the deviation angle is given by $$\cos\theta = \frac{u\cdot u^{\prime}}{|u|^2} = \frac{u\cdot \sigma}{|u|} := \hat u \cdot\sigma\,.
$$ The collision kernel $B$ is a non-negative function, usually written as the product of a power of the relative speed and an angular scattering function $b$ depending on $\cos\theta$, that is, \begin{equation} B(|v-v_{\ast}|, \cos\theta) = B(|u|, \hat u \cdot\sigma) = C_{\lambda}\, |u|^{\lambda}\, b(\hat u \cdot\sigma), \qquad -d\leq \lambda\leq 1. \end{equation} Here $\lambda>0$ corresponds to hard potentials, $\lambda<0$ to soft potentials, and $\lambda=0$ to the Maxwell pseudo-molecules model. It is not hard to see that \begin{equation}\label{weak} \int_{\mathbb R^d}\, \mathcal Q_{\text{B}}(f, f)(v)\phi(v)\, dv = \frac{1}{2} \int_{\mathbb R^d}\int_{\mathbb R^d}\int_{\mathbb S^{d-1}}\, f f_{\ast}\left(\phi^{\prime} + \phi_{\ast}^{\prime} - \phi - \phi_{\ast}\right) B(|v-v_{\ast}|, \sigma)\, d\sigma\, dv_{\ast}\, dv \end{equation} vanishes if \begin{equation}\label{phi} \phi + \phi_{\ast} = \phi^{\prime} + \phi_{\ast}^{\prime}. \end{equation} One can prove that (\ref{phi}) holds if and only if $\phi(v)$ lies in the space spanned by $1$, $v$ and $\frac{|v|^2}{2}$. We call these $d+2$ test functions the {\it collision invariants} associated to $\mathcal Q_{B}$. Denote $$m(v) = \left(1, v, \frac{|v|^2}{2}\right)^{T}, $$ then \begin{equation}\label{Q_cons} \int_{\mathbb R^d}\, \mathcal Q_{B}(f, f)m(v)\, dv = 0, \end{equation} which corresponds to the conservation of mass, momentum and kinetic energy under $\mathcal Q_{B}$. Define $U=(\rho, \rho u, E)^{T}$ as the velocity averages of $f$ multiplied by the collision invariants $m$; it is a vector composed of the $d+2$ conserved moments of density, momentum and energy, \begin{equation}\label{U_eqn} \langle m M(U)\rangle = U = \int_{\mathbb R^d}\begin{pmatrix}1 \\ v \\ \frac{1}{2}|v|^2 \end{pmatrix}f(v)dv = \begin{pmatrix}\rho \\ \rho u \\ \frac{1}{2}\rho\, |u|^2 + \frac{d}{2}\rho\, T \end{pmatrix} = \begin{pmatrix} \rho \\ \rho u \\ E \end{pmatrix}.
\end{equation} Setting $\phi(v) = \ln f(v)$ in (\ref{weak}), one can prove the dissipation of entropy \begin{equation}\label{H-thm1}\int_{\mathbb R^d}\, \mathcal Q_{B}(f, f)\ln f\, dv \leq 0, \end{equation} which is the celebrated Boltzmann H-theorem. Furthermore, for elastic interactions one has \begin{equation}\label{H-thm2} \int_{\mathbb R^d}\, \mathcal Q_{B}(f, f)\ln f\, dv=0 \, \Leftrightarrow\, \mathcal Q_{B}(f, f)=0 \, \Leftrightarrow\, f = M, \end{equation} where $M$ is the equilibrium state given by the {\it Maxwellian distribution} \begin{equation}\label{Max} M(U)(v) = \frac{\rho}{(2\pi T)^{\frac{d}{2}}}\exp\left(-\frac{|v-u|^2}{2T}\right) := M_{U(x,t)}(v)\,. \end{equation} Here $\rho$, $u$ and $T$ are respectively the density, bulk velocity, and temperature, defined by $$\rho = \int_{\mathbb R^d}\, f(v)\, dv, \qquad u = \frac{1}{\rho}\, \int_{\mathbb R^d}\, f(v)v\, dv, \qquad T = \frac{1}{d \rho}\, \int_{\mathbb R^d}\, f(v)|v-u|^2\, dv. $$ \\[2pt] {\bf The fluid limit}\, We introduce the notation $\langle\, \cdot\, \rangle$ for the velocity average of the argument, i.e., $$ \langle f \rangle = \int_{\mathbb R^d}\, f(v)\, dv. $$ Multiplying (\ref{Boltz}) by $m(v)$ and integrating with respect to $v$, using the conservation property of $\mathcal Q_{\text{B}}$ given by (\ref{Q_cons}), one has $$ \partial_t \langle m f \rangle + \nabla_x\cdot \langle v m f \rangle = 0. $$ This gives a non-closed system of conservation laws \begin{equation}\label{INS} \partial_t \begin{pmatrix} \rho \\ \rho u \\ E \end{pmatrix} + \nabla_x \cdot \begin{pmatrix} \rho u \\ \rho u \otimes u + \mathbb P \\ E u + \mathbb P u + \mathbb Q \end{pmatrix} = 0, \end{equation} where $E$ is the energy defined in (\ref{U_eqn}), $\mathbb P = \langle (v-u)\otimes (v-u)f \rangle$ is the pressure tensor, and $\mathbb Q = \frac{1}{2}\langle (v-u) |v-u|^2 f \rangle$ is the heat flux vector. As $\varepsilon\to 0$, $f\to M(U)$.
Replacing $f$ by $M(U)$ and using expression (\ref{Max}), $\mathbb P$ and $\mathbb Q$ are given by $$ \mathbb P = p\, I, \qquad \mathbb Q = 0, $$ where $p = \rho T$ is the pressure and $I$ is the identity matrix. Then (\ref{INS}) reduces to the usual compressible Euler equations \begin{equation}\label{Euler} \partial_t \begin{pmatrix} \rho \\ \rho u \\ E \end{pmatrix} + \nabla_x \cdot \begin{pmatrix} \rho u \\ \rho u\otimes u + p\, I \\ (E+p)\, u \end{pmatrix} = 0. \end{equation} \subsection{The Fokker-Planck-Landau equation} The nonlinear Fokker-Planck-Landau (nFPL) equation is widely used in plasma physics. The rescaled nFPL equation reads \begin{equation}\label{LD} \partial_t f + v\cdot \nabla_x f= \frac{1}{\varepsilon}\, \mathcal Q_{\text{L}}(f, f), \qquad t>0, \, (x,v)\in\Omega\times\mathbb R^d, \end{equation} with the nFPL operator \begin{equation} \mathcal Q_{L}(f, f) = \nabla_v \cdot \int_{\mathbb R^d}\, A(v-v_{\ast}) \left(f(v_{\ast})\, \nabla_v f(v) - f(v)\, \nabla_v f(v_{\ast})\right) dv_{\ast}\,, \end{equation} where the positive semi-definite matrix $A(z)$ is $$ A(z) = \Psi(z)\, \left( I -\frac{z\otimes z}{|z|^2} \right), \qquad \Psi(z) = |z|^{\gamma+2}\,. $$ The parameter $\gamma$ characterizes the type of interaction between particles; inverse power laws give $\gamma\geq -3$. As for the Boltzmann collision operator, $\gamma>0$ corresponds to hard potentials, $\gamma=0$ to Maxwellian molecules and $\gamma<0$ to soft potentials; the case $\gamma=-3$ corresponds to Coulomb interactions. The nFPL equation is derived as a limit of the Boltzmann equation when all the collisions become grazing. Therefore, the nFPL operator possesses conservation laws and entropy decay (H-theorem) similar to those of the Boltzmann collision operator, given in (\ref{H-thm1})-(\ref{H-thm2}). \section{The micro-macro decomposition method} \label{sec:micro} When no confusion is possible, we set $M_{U(x,t)}(v)=M$ in the following.
Consider the Hilbert space $L^2_{M}=\left\{\phi \, \big|\, \phi\, M^{-\frac{1}{2}}\in L^2 (\mathbb R^d)\right\}$ endowed with the weighted scalar product $$ (\phi, \, \psi)_{M}= \langle \phi\, \psi\, M^{-1}\rangle. $$ It is well-known that the linearized operator $\mathcal L_{M}$, defined in (\ref{LL}) below, is a non-positive self-adjoint operator on $L^2_{M}$ and that its null space is $$\mathcal N(\mathcal L_{M}) =\text{Span}\left\{M, \, v M,\, |v|^2 M\right\}, $$ an orthogonal basis of which is $$\mathcal B = \left\{\frac{M}{\rho}, \, \frac{(v-u)}{\sqrt{T}}\frac{M}{\rho}, \, \left(\frac{|v-u|^2}{2T}-\frac{d}{2}\right)\frac{M}{\rho}\right\}. $$ The orthogonal projection $\Pi_{M}(\phi)$ of $\phi\in L^2_{M}$ onto $\mathcal N(\mathcal L_{M})$ is given by $$\Pi_{M}(\phi)= \frac{1}{\rho}\left[\langle\phi\rangle + \frac{(v-u)\cdot \langle(v-u)\phi\rangle}{T} + \left(\frac{|v-u|^2}{2T}-\frac{d}{2}\right)\frac{2}{d} \left\langle\left(\frac{|v-u|^2}{2T}-\frac{d}{2}\right)\phi\right\rangle\right] M. $$ We explain the main idea of the micro-macro decomposition, which mostly follows that in \cite{MM-Lemou}, where the BGK equation, with $\mathcal Q_{BGK}(f, f) = \frac{1}{\tau}(M - f)$, is numerically implemented ($\tau$ is the relaxation time). Let $f$ be the solution of the Boltzmann equation (\ref{Boltz}). We decompose $f = f(t,x,v)$ as \begin{equation}\label{ansatz} f = M + \varepsilon\, g(t,x,v) \end{equation} where $U$ and $M$ are given in (\ref{U_eqn}) and (\ref{Max}) respectively. Inserting (\ref{ansatz}) into (\ref{Boltz}), one obtains $$\partial_t M + v\cdot\nabla_x M + \varepsilon (\partial_t g + v\cdot\nabla_x g)=\frac{1}{\varepsilon}\mathcal Q(M+\varepsilon g, M+ \varepsilon g). $$ Denote the linearized collision operator \begin{equation}\mathcal L_{M}(g)=2\mathcal Q(M, g).
\label{LL}\end{equation} Since $\mathcal Q$ is bilinear and $\mathcal Q(M,M)=0$, we have $$\mathcal Q(M+\varepsilon g, M+ \varepsilon g)=\mathcal Q(M,M) + 2\varepsilon\mathcal Q(M,g) + \varepsilon^2 \mathcal Q(g,g)= \varepsilon\mathcal L_{M}(g) + \varepsilon^2 \mathcal Q(g,g), $$ thus \begin{equation}\label{M_1}\partial_t M + v\cdot\nabla_x M + \varepsilon (\partial_t g + v\cdot\nabla_x g) = \mathcal L_{M}(g) + \varepsilon \mathcal Q(g,g). \end{equation} Applying the operator $\mathbb I - \Pi_{M}$ to (\ref{M_1}), one gets \begin{equation}\label{g}\partial_t g + (\mathbb I - \Pi_{M})(v\cdot\nabla_x g)- \mathcal Q(g,g)= \frac{1}{\varepsilon}\left[\mathcal L_{M}(g) - (\mathbb I -\Pi_{M})(v\cdot\nabla_x M)\right]. \end{equation} On the other hand, taking the moments of equation (\ref{M_1}) yields \begin{equation}\label{M_2}\partial_t \langle mM \rangle + \nabla_x \cdot \langle v m M\rangle + \varepsilon \nabla_x \cdot \langle vmg\rangle =0. \end{equation} Denote the flux vector of $U$ by $$F(U)=\langle v m M\rangle = \begin{pmatrix} \rho u \\ \rho u \otimes u + \rho T\, I \\ E u + \rho T u \end{pmatrix}, $$ then (\ref{M_2}) becomes \begin{equation}\label{U}\partial_t U + \nabla_x \cdot F(U) + \varepsilon \nabla_x \cdot \langle vmg\rangle =0. \end{equation} Therefore, the coupled system (\ref{g}) and (\ref{U}) gives a kinetic/fluid formulation of the Boltzmann equation. It has been shown in \cite{MM-Lemou} that this coupled system is equivalent to the Boltzmann equation (\ref{Boltz}). \\[4pt] {\bf Initial and boundary conditions} \\ For the initial condition, we set $$ f(t=0, x, v) = f^{0}(x, v), $$ with $x$ in a bounded set $\Omega$ with boundary $\Gamma$. For the numerical implementation, we only consider periodic boundary conditions (BC) in $x$ in this paper. Nevertheless, we briefly mention other types of BC.
For points $x$ on the boundary $\Gamma$, the distribution function of incoming velocities (i.e., $v$ with $v\cdot n(x)<0$, where $n(x)$ is the outer normal vector of $\Gamma$ at $x$) should be specified. The Dirichlet BC reads \begin{equation}\label{BC1} f(t,x,v) = f_{\Gamma}(t,x,v) \qquad \forall x\in\Gamma, \, \forall v, \, \text{s.t.}\, v\cdot n(x)<0. \end{equation} The reflecting BC is given by \begin{equation}\label{BC2} f(t,x,v) = \int_{v^{\prime}\cdot n(x)>0}\, K(x,v,v^{\prime})f(t,x,v^{\prime})\, dv^{\prime} \qquad \forall x\in\Gamma, \, \forall v, \, \text{s.t.}\, v\cdot n(x)<0, \end{equation} where the kernel $K$ satisfies the zero normal mass flux condition across the boundary: $$ \int_{\Gamma}\, v\cdot n(x) f(t,x,v)\, dv = 0. $$ The periodic BC can be used when the shape of $\Omega$ is symmetric, $$ f(t,x,v) = f(t, Sx, v), \qquad x\in\Gamma_1, \, \forall v, $$ where $S$ is a one-to-one mapping from a part $\Gamma_1$ of $\Gamma$ onto another part $\Gamma_2$ of $\Gamma$. In general, inserting the micro-macro decomposition into the boundary conditions (\ref{BC1})-(\ref{BC2}) provides relations for $M + \varepsilon g$, but does not provide the values of $M$ and $g$ separately. Moreover, $f$ is generally known only for incoming velocities at boundary points, which makes it difficult to define the macroscopic moments $U$. Note that various numerical boundary conditions based on the micro-macro formulation for {\it linear} kinetic equations in the diffusion limit are studied in \cite{Lemou-BC}. \section{Numerical Approximation} \label{sec:NA} \subsection{Time discretization} We denote by $\Delta t$ a fixed time step and by $t_n=n\Delta t$, $n\in \mathbb N$, the discrete times. Let $U^n(x)\approx U(t_n, x)$, $g^n(x,v)\approx g(t_n, x, v)$.
Note that in equation (\ref{g}), $\varepsilon^{-1}\mathcal L_{M}(g)$ is the only stiff collision term; thus one needs an implicit discretization for this term, while the term $(\mathbb I -\Pi_{M})(v\cdot \nabla_x M)$ can remain explicit. The time discretization for (\ref{g}) is given by \begin{equation}\label{g_discrete}\frac{g^{n+1}-g^n}{\Delta t} + (\mathbb I - \Pi_{M^n})(v\cdot\nabla_x g^n) - \mathcal Q(g^n,g^n)= \frac{1}{\varepsilon}\left[\mathcal L_{M^n}(g^{n+1}) - (\mathbb I -\Pi_{M^n})(v\cdot\nabla_x M^n)\right]. \end{equation} For the time discretization of the fluid part (\ref{U}), the flux $F(U)$ at time $t_n$ is approximated by $F(U^n)=\langle v m M^n \rangle$, and the convection term $\nabla_x \cdot \langle v m g\rangle$ is discretized by $\nabla_x \cdot \langle v m g^{n+1}\rangle$, \begin{equation}\label{U_discrete}\frac{U^{n+1}-U^n}{\Delta t} + \nabla_x \cdot F(U^n) + \varepsilon\nabla_x \cdot \langle v m g^{n+1}\rangle =0. \end{equation} In \cite{MM-Lemou} only the BGK collision operator was considered, which avoids the difficulty of inverting the $\mathcal L_{M^n}(g^{n+1})$ term in (\ref{g_discrete}), since the implicit BGK operator can be inverted explicitly thanks to its conservation property (\ref{U_eqn}). For general collision operators this is no longer true. In the next subsection, we propose an efficient method to deal with the term $\mathcal L_{M^n}(g^{n+1})$, which is one of the main ideas of this paper.
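Why an implicit treatment of a stiff term proportional to $g^{n+1}$ can still be implemented explicitly is easiest to see on a scalar caricature. The following minimal sketch (in Python, with made-up parameter values; it is not the scheme of this paper) mimics a stiff relaxation $\partial_t g = -(\beta/\varepsilon)\, g + r$ with the stiff term discretized by backward Euler:

```python
# Scalar toy analogue of the stiff micro equation: dg/dt = -(beta/eps)*g + r.
# Backward Euler on the stiff term alone gives
#   (g_new - g)/dt = -(beta/eps)*g_new + r,
# which can be inverted in closed form, so no linear solve is needed:
def step(g, dt, eps, beta, r):
    return (g + dt * r) / (1.0 + dt * beta / eps)

eps, beta, r, dt = 1e-10, 2.0, 0.5, 0.1   # dt >> eps: stiffness unresolved
g = 1.0
for _ in range(5):
    g = step(g, dt, eps, beta, r)

# Asymptotic-preserving behavior: as eps -> 0 the update drives g toward the
# local equilibrium value eps*r/beta = O(eps), uniformly in dt.
print(g)
```

The same mechanism is what makes a penalized version of $\mathcal L_{M^n}(g^{n+1})$ explicitly solvable: once the implicit part reduces to a multiple of $g^{n+1}$, the update is a division rather than an operator inversion.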
\subsection{AP schemes by penalization} To avoid the complication of inverting the stiff, implicit linearized collision operator $\mathcal L_{M^n}(g^{n+1})$ in (\ref{g_discrete}), our proposed method is to use the relation $$ \mathcal Q(M, g)=\frac{1}{4}\left[ \mathcal Q(M+g, M+g) - \mathcal Q(M-g, M-g)\right], $$ which, combined with (\ref{LL}), namely $\mathcal L_{M}(g)=2\mathcal Q(M, g)$, gives \begin{equation}\label{L-Mg}\mathcal L_{M^n}(g^{n+1})=\frac{1}{2}\left[ \mathcal Q(M^n + g^{n+1}, M^n + g^{n+1}) - \mathcal Q(M^n - g^{n+1}, M^n - g^{n+1})\right]. \end{equation} To deal with the implicit collision operator $\mathcal Q$, we adopt the penalization method developed in \cite{Filbet-Jin} for the Boltzmann equation, and that in \cite{JinYan} for the Fokker-Planck-Landau equation. {\bf I}. For the Boltzmann equation, the linear BGK collision operator \cite{Filbet-Jin} \begin{equation} P(f) = P_{BGK}^{M}f = \beta(M - f) \end{equation} is used as the penalty operator. Now we replace $\mathcal L_{M^n}(g^{n+1})$ in (\ref{g_discrete}) by $\mathcal L_{M^n}^P(g^{n+1})$, given by \begin{align} &\displaystyle\mathcal L_{M^n}^P(g^{n+1}) = \frac{1}{2}\bigg[\mathcal Q_{\text{B}}(M^n+g^n, M^n+g^n) - \beta_1^n (M^n - (M^n+ g^n)) + \beta_1^{n+1} (M^{n+1}-(M^{n+1} + g^{n+1})) \notag\\[2pt] &\displaystyle \qquad\qquad\qquad - \bigg(\mathcal Q_{\text{B}}(M^n-g^n, M^n-g^n) - \beta_2^n (M^n - (M^n - g^n)) + \beta_2^{n+1} (M^{n+1}- (M^{n+1} - g^{n+1}))\bigg)\bigg] \notag\\[2pt] &\displaystyle \qquad\qquad\quad = \frac{1}{2}\bigg[\mathcal Q_{\text{B}}(M^n+g^n, M^n+g^n) + \beta_1^n g^n - \beta_1^{n+1} g^{n+1} \notag\\[2pt] &\displaystyle \qquad\qquad\qquad - \mathcal Q_{\text{B}}(M^n-g^n, M^n-g^n) + \beta_2^n g^n - \beta_2^{n+1} g^{n+1}\bigg] \notag\\[2pt] &\displaystyle\label{LG1} \qquad\qquad\quad = \frac{1}{2}\left(\mathcal Q_{\text{B}}(M^n+g^n, M^n+g^n) - \mathcal Q_{\text{B}}(M^n-g^n, M^n-g^n)\right)\notag\\[2pt] &\displaystyle\qquad\qquad\qquad + \frac{1}{2}(\beta_1^n + \beta_2^n) g^n -
\frac{1}{2}(\beta_1^{n+1} + \beta_2^{n+1})g^{n+1}\,. \end{align} For the Boltzmann equation, the parameters $\beta_1^n, \beta_2^n>0$ are chosen as upper bounds of $\|\nabla \mathcal Q(M)\|$ or some approximation of it, for example, \begin{eqnarray} &\displaystyle \beta_1^n = \sup_{v}\left|\frac{\mathcal Q(M^n+g^n, M^n+g^n) -\mathcal Q(M^n,M^n)}{g^n}\right| = \sup_{v}\left|\frac{\mathcal Q(M^n+g^n, M^n+g^n)}{g^n}\right|, \notag\\[4pt] &\displaystyle\label{penalty1} \beta_2^n = \sup_{v} \left|\frac{\mathcal Q(M^n-g^n, M^n-g^n) - \mathcal Q(M^n,M^n)}{g^n}\right| = \sup_{v} \left|\frac{\mathcal Q(M^n-g^n, M^n-g^n)}{g^n}\right|. \end{eqnarray} {\bf II}. For the nFPL equation, the linear Fokker-Planck (FP) operator \begin{equation}\label{FP}P f = P_{FP}^{M}f = \nabla_{v}\cdot \left(M \nabla_{v}\left(\frac{f}{M}\right)\right) \end{equation} is chosen as the penalty operator \cite{JinYan}. We now replace $\mathcal L_{M^n}(g^{n+1})$ in (\ref{g_discrete}) by $\mathcal L_{M^n}^P(g^{n+1})$ (and use the bracket notation $\{ \cdot \}$ to denote $P$ applied to the argument), \begin{align} &\displaystyle\mathcal L_{M^n}^P(g^{n+1}) = \frac{1}{2}\bigg[\mathcal Q_{\text{L}}(M^n+g^n, M^n+g^n) - \beta_1^n P^n \{ M^n+g^n \} + \beta_1^n P^{n+1}\{ M^{n+1}+g^{n+1}\} \notag\\[2pt] &\displaystyle \qquad\qquad\qquad - \left(\mathcal Q_{\text{L}}(M^n-g^n, M^n-g^n) - \beta_2^n P^n\{M^n-g^n\} + \beta_2^n P^{n+1}\{M^{n+1}-g^{n+1}\}\right)\bigg] \notag\\[2pt] &\displaystyle \qquad\qquad\quad = \frac{1}{2}\bigg[\mathcal Q_{\text{L}}(M^n+g^n, M^n+g^n) - \beta_1^n P^n g^n + \beta_1^n P^{n+1}g^{n+1}\notag\\[2pt] &\displaystyle \qquad\qquad\qquad - \mathcal Q_{\text{L}}(M^n-g^n, M^n-g^n) - \beta_2^n P^n g^n + \beta_2^n P^{n+1} g^{n+1} \bigg] \notag\\[2pt] &\displaystyle\label{LG2} \qquad\qquad\quad = \frac{1}{2}\left(\mathcal Q_{\text{L}}(M^n+g^n, M^n+g^n) - \mathcal Q_{\text{L}}(M^n-g^n, M^n-g^n)\right) \notag\\[2pt] &\displaystyle\qquad\qquad\qquad - \frac{1}{2}(\beta_1^n + \beta_2^n)P^n g^n +
\frac{1}{2}(\beta_1^n + \beta_2^n)P^{n+1} g^{n+1}\,, \end{align} where the well-balanced property of $P$, i.e., $P^n M^n =P^{n+1} M^{n+1} = 0$, is used. In (\ref{LG2}), $\beta_1^n$ and $\beta_2^n$ are chosen as \begin{align*} &\displaystyle\beta_1^n = \beta_0 \max_{v}\lambda(D_{A}(g^n+M^n)), \\[4pt] &\displaystyle \beta_2^n = \beta_0 \max_{v}\lambda(D_{A}(g^n-M^n)). \end{align*} $\beta_0$ is a constant satisfying $\beta_0>\frac{1}{2}$, and a simple choice is $\beta_0=1$. $\lambda(D_{A})$ is the spectral radius of the symmetric positive semi-definite matrix $D_{A}$, $$ D_{A}(f)= \int_{\mathbb R^d} A(v-v_{\ast})f_{\ast}\, dv_{\ast}. $$ \subsection{Space and velocity discretizations} {\bf Space discretization}\, For simplicity and clarity of notation, we only consider $x\in\mathbb R$. As done in \cite{MM-Lemou}, a finite volume discretization is used for the transport term in the left-hand-side of (\ref{g_discrete}); a central difference scheme is used to discretize the term $(\mathbb I -\Pi_{M^n})(v\cdot\nabla_x M^n)$ in (\ref{g_discrete}), and the term $\varepsilon\nabla_x \cdot\langle v m g^{n+1}\rangle$ in (\ref{U_discrete}). Consider spatial grid points $x_{i+\frac{1}{2}}$ and $x_i$ the center of the cell $[x_{i-\frac{1}{2}}, x_{i+\frac{1}{2}}]$, for $i=0, \cdots, N_x$. The uniform space step is $\Delta x=x_{i+\frac{1}{2}}-x_{i-\frac{1}{2}}=x_{i}-x_{i-1}$. Let $U_i^n \approx U(t_n, x_i)$ and $g_{i+\frac{1}{2}}^n \approx g(t_n, x_{i+\frac{1}{2}})$. We now define the finite difference operators. For every grid function $\phi=(\phi_{i+\frac{1}{2}})$, define the one-sided difference operators: $$ D^{-}\phi_{i+\frac{1}{2}}=\frac{\phi_{i+\frac{1}{2}}-\phi_{i-\frac{1}{2}}}{\Delta x}, \qquad D^{+}\phi_{i+\frac{1}{2}}=\frac{\phi_{i+\frac{3}{2}}-\phi_{i+\frac{1}{2}}}{\Delta x}. $$ For every grid function $\mu=(\mu_{i})$, we define the following centered operator: $$ \delta^{0}\mu_{i+\frac{1}{2}}=\frac{\mu_{i+1}-\mu_{i}}{\Delta x}.
$$ {\bf Velocity discretization}\, We adopt the simple trapezoidal rule to compute the numerical integral in velocity space. For example, the one-dimensional trapezoidal rule reads $$ \int_{\mathbb R}\, f\, dv \approx \Delta v \left(\frac{1}{2}f(v_0) + f(v_1) + \cdots + f(v_{N_{v}-1}) + \frac{1}{2}f(v_{N_v})\right) := \sum_{j=0}^{N_v}\, f(v_j)\, w_j\, \Delta v, $$ where $w=(\frac{1}{2}, 1, \cdots, 1, \frac{1}{2})$. \\[2pt] {\bf Macroscopic equations} \\ The fluid equation (\ref{U_discrete}) is approximated at points $x_i$. The flux $\partial_{x}F(U^n)$ at $x_i$ is discretized by \begin{equation} \partial_{x}F(U^n)\big|_{x_i} \approx\frac{F_{i+\frac{1}{2}}(U^n)- F_{i-\frac{1}{2}}(U^n)}{\Delta x}, \end{equation} where an upwind-based discretization is used to approximate $F(U^n)=\langle vm M^n \rangle$ at points $x_{i+\frac{1}{2}}$. The first order approximation is given by \begin{equation} F_{i+\frac{1}{2}}(U^n) = \langle m(v^{+}M_{i}^n + v^{-}M_{i+1}^n)\rangle. \end{equation} A second order approximation of the $\partial_x F(U)$ term will be discussed in Section \ref{Sec:Num}. The flux term $\partial_{x}\langle vm g^{n+1}\rangle$ at $x_i$ on the right-hand-side of (\ref{U_discrete}) is approximated by central differences, \begin{equation} \partial_{x}\langle vmg^{n+1}\rangle \big|_{x_i} \approx\bigg\langle v m \frac{g_{i+\frac{1}{2}}^{n+1}- g_{i-\frac{1}{2}}^{n+1}}{\Delta x}\bigg\rangle. \end{equation} The fully discretized scheme of the equation (\ref{U_discrete}) then reads \begin{equation}\label{full_U} \frac{U_i^{n+1}-U_i^n}{\Delta t} + \frac{F_{i+\frac{1}{2}}(U^n)-F_{i-\frac{1}{2}}(U^n)}{\Delta x} = - \varepsilon\sum_{j=0}^{N_v}\, v_j\, m(v_j)\, \frac{g_{i+\frac{1}{2}, j}^{n+1} - g_{i-\frac{1}{2}, j}^{n+1}}{\Delta x}\, w_j\, \Delta v, \end{equation} where $g_{i+\frac{1}{2}, j}^n \approx g(t_n, x_{i+\frac{1}{2}}, v_j)$. Next, we prove that the discrete macroscopic equations (\ref{full_U}) conserve mass, momentum and total energy.
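The conservation mechanism rests on a telescoping sum. A minimal numerical sketch (Python, with arbitrary made-up flux values; it is not the actual scheme) illustrates that any conservative flux-difference update with periodic indexing preserves the discrete sum up to round-off:

```python
import random

# Toy illustration of the telescoping-sum mechanism behind moment conservation:
# a conservative flux-difference update with periodic boundary conditions
# preserves the discrete sum for *any* flux values (made up here).
random.seed(0)
Nx, dt, dx = 64, 0.01, 0.1
U = [random.random() for _ in range(Nx)]   # one moment component (rho, rho*u or E)
F = [random.random() for _ in range(Nx)]   # F[i] plays the role of F_{i+1/2}

# U_i^{n+1} = U_i^n - dt/dx * (F_{i+1/2} - F_{i-1/2}), periodic in i
U_new = [U[i] - dt / dx * (F[i] - F[i - 1]) for i in range(Nx)]

print(abs(sum(U_new) - sum(U)))            # vanishes up to round-off
```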
\begin{theorem} {\bf (Conservation of moments $U$)}\\ For periodic or zero-flux boundary conditions, one has \begin{equation}\label{cons-U} \sum_{i=0}^{N_x}\, U_i^{n+1} = \sum_{i=0}^{N_x}\, U_i^n\,. \end{equation} Namely, the total mass, momentum and energy are all numerically conserved. \end{theorem} \begin{proof} Summing (\ref{full_U}) over $i=0, \cdots, N_x$, one has \begin{equation}\label{U_Cons} \frac{\sum_{i} U_i^{n+1} - \sum_{i} U_i^n}{\Delta t} + \sum_{i}\left(\frac{F_{i+\frac{1}{2}}(U^n) - F_{i-\frac{1}{2}}(U^n)}{\Delta x}\right) = -\varepsilon \sum_{i}\sum_{j=0}^{N_v}\, v_j\, m(v_j)\, \frac{g_{i+\frac{1}{2}, j}^{n+1} - g_{i-\frac{1}{2}, j}^{n+1}}{\Delta x}\, w_j\, \Delta v. \end{equation} By the assumption on the boundary condition, the telescoping summation terms vanish, and (\ref{cons-U}) follows. \end{proof} \begin{remark} If $\varepsilon$ is spatially dependent, then (\ref{U_discrete}) becomes $$\frac{U^{n+1}-U^n}{\Delta t} + \nabla_x \cdot F(U^n) + \nabla_x \cdot \langle \varepsilon v m g^{n+1}\rangle =0, $$ and (\ref{U_Cons}) correspondingly becomes \begin{align} &\displaystyle\quad\frac{\sum_{i} U_i^{n+1} - \sum_{i} U_i^n}{\Delta t} + \sum_{i}\left(\frac{F_{i+\frac{1}{2}}(U^n) - F_{i-\frac{1}{2}}(U^n)}{\Delta x}\right) \notag \\[4pt] &\label{U_Cons1}\displaystyle = - \sum_{i}\sum_{j=0}^{N_v}\, v_j\, m(v_j)\, \frac{\varepsilon_{i+\frac{1}{2}}\, g_{i+\frac{1}{2}, j}^{n+1} - \varepsilon_{i-\frac{1}{2}}\, g_{i-\frac{1}{2}, j}^{n+1}}{\Delta x}\, w_j\, \Delta v, \end{align} with $\varepsilon_{i+\frac{1}{2}}=\varepsilon(x_{i+\frac{1}{2}})$, $\varepsilon_{i-\frac{1}{2}}=\varepsilon(x_{i-\frac{1}{2}})$. This again has the conservation property (\ref{cons-U}).
\end{remark} \begin{remark}\label{rmk-cons} Typically, a discrete collision operator, particularly one based on spectral approximations in velocity space \cite{gamba2017fast, pareschi2000numerical,gamba2009spectral, mouhot2006fast}, does not {\it exactly} conserve the moments $U$ as in (\ref{cons-U}), which needs to be taken care of with extra efforts \cite{mieussens2000discrete, zhang2017conservative, gamba2014conservative}. What differs here is that the conserved variables $U$ are obtained from the macroscopic system (\ref{U}), which has a zero right-hand side, so the conservation property (\ref{cons-U}) is easily guaranteed by {\it any} conservative discretization of the spatial derivative in (\ref{U}). In contrast, in typical kinetic solvers such as \cite{zhang2017conservative, gamba2014conservative}, the moments are obtained by taking discrete moments of $f$, computed from the original kinetic equation for $f$, with the collision operator not discretized in an exactly conservative way. This observation is not new, and in fact was already pointed out in \cite{JinYan}. In Section \ref{sec:7} this point is further explored for general kinetic systems, offering a generic recipe for obtaining (exactly) conservative schemes through solving the moment systems. \end{remark} {\bf Microscopic equation}\\ Equation (\ref{g_discrete}) is approximated at grid point $x_{i+\frac{1}{2}}$; the term $(\mathbb I - \Pi_{M^n})(v\cdot\nabla_x g^n)$ in the left-hand-side is approximated by a first order upwind scheme \begin{equation} \label{1st-order} (\mathbb I - \Pi_{M^n})(v\, \partial_{x}g^n) \big|_{x_{i+\frac{1}{2}}} \approx \left(\mathbb I - \Pi_{i+\frac{1}{2}}^n \right)\left(v^{+}D^{-}+v^{-}D^{+}\right)g_{i+\frac{1}{2}}^n.
\end{equation} The transport term $(\mathbb I -\Pi_{M^n})(v\cdot\nabla_x M^n)$ on the right-hand side of (\ref{g_discrete}) is discretized by a central difference scheme \begin{equation} (\mathbb I -\Pi_{M^n})(v \, \partial_{x}M^n) \big|_{x_{i+\frac{1}{2}}} \approx \left(\mathbb I - \Pi_{i+\frac{1}{2}}^n \right)\left(v\, \delta^{0}M_{i+\frac{1}{2}}^n\right), \end{equation} where $\Pi_{i+\frac{1}{2}}^n$ is an approximation of $\Pi_{M(U(t_n, x_{i+\frac{1}{2}}))}$. A suitable choice of $\Pi_{i+\frac{1}{2}}^n$ is given by \cite{MM-Lemou} \begin{equation}\label{Pie} \Pi_{i+\frac{1}{2}}^n = \frac{\Pi_{i}^n +\Pi_{i+1}^n}{2}=\frac{\Pi(U_i^n)+\Pi(U_{i+1}^n)}{2}, \qquad \text{ or }\, \Pi_{i+\frac{1}{2}}^n=\Pi \left(\frac{U_{i}^n+U_{i+1}^n}{2}\right), \end{equation} and $M_{i+\frac{1}{2}}^n \approx\frac{M_{i}^n+M_{i+1}^n}{2}$. \\[2pt] {\bf I.}\, For the Boltzmann equation, the discretized scheme for the microscopic equation (\ref{g_discrete}) is given by \begin{align} &\displaystyle \quad\frac{g_{i+\frac{1}{2}}^{n+1}-g_{i+\frac{1}{2}}^{n}}{\Delta t} + \left(\mathbb I - \Pi_{i+\frac{1}{2}}^n \right)\left(v^{+}\, \frac{g_{i+\frac{1}{2}}^{n}-g_{i-\frac{1}{2}}^{n}}{\Delta x}+ v^{-}\, \frac{g_{i+\frac{3}{2}}^{n}-g_{i+\frac{1}{2}}^{n}}{\Delta x}\right) - \mathcal Q_{B}(g^n_{i+\frac{1}{2}}, g^n_{i+\frac{1}{2}}) \notag \\[8pt] &\displaystyle = \frac{1}{\varepsilon}\bigg[\frac{1}{2}\left(\mathcal Q_{B}(M_{i+\frac{1}{2}}^n+g_{i+\frac{1}{2}}^n, M_{i+\frac{1}{2}}^n+g_{i+\frac{1}{2}}^n) - \mathcal Q_{B}(M_{i+\frac{1}{2}}^n-g_{i+\frac{1}{2}}^n, M_{i+\frac{1}{2}}^n-g_{i+\frac{1}{2}}^n)\right) + \frac{1}{2}(\beta_1^n + \beta_2^n) g_{i+\frac{1}{2}}^n \notag\\[8pt] &\displaystyle\label{full_g0} \qquad - \frac{1}{2}(\beta_1^{n+1} + \beta_2^{n+1}) g_{i+\frac{1}{2}}^{n+1} - \left(\mathbb I - \Pi_{i+\frac{1}{2}}^n \right)\left(v\, \frac{M_{i+1}^n-M_{i}^n}{\Delta x}\right)\bigg], \end{align} thus \begin{align} &\displaystyle g_{i+\frac{1}{2}}^{n+1}=\frac{1}{1+\frac{\Delta
t}{2\varepsilon}(\beta_1^{n+1}+\beta_2^{n+1})}\, \bigg[g_{i+\frac{1}{2}}^{n} - \Delta t \left(\mathbb I - \Pi_{i+\frac{1}{2}}^n \right)\left(v^{+}\, \frac{g_{i+\frac{1}{2}}^{n}-g_{i-\frac{1}{2}}^{n}}{\Delta x}+ v^{-}\, \frac{g_{i+\frac{3}{2}}^{n}-g_{i+\frac{1}{2}}^{n}}{\Delta x}\right) \notag\\[8pt] &\displaystyle\qquad + \Delta t\, \mathcal Q_{B}(g^n_{i+\frac{1}{2}}, g^n_{i+\frac{1}{2}}) + \frac{\Delta t}{\varepsilon} \bigg(\frac{1}{2}\left(\mathcal Q_{B}(M_{i+\frac{1}{2}}^n+g_{i+\frac{1}{2}}^n, M_{i+\frac{1}{2}}^n+g_{i+\frac{1}{2}}^n) - \mathcal Q_{B}(M_{i+\frac{1}{2}}^n-g^n_{i+\frac{1}{2}}, M_{i+\frac{1}{2}}^n-g_{i+\frac{1}{2}}^n)\right)\notag\\[8pt] &\displaystyle \label{full_g}\qquad\qquad\qquad\qquad\qquad\qquad\quad + \frac{1}{2}(\beta_1^n + \beta_2^n) g_{i+\frac{1}{2}}^n - \left(\mathbb I - \Pi_{i+\frac{1}{2}}^n \right)\left(v\, \frac{M_{i+1}^n-M_{i}^n}{\Delta x}\right)\bigg)\bigg]\,. \end{align} \begin{remark} To achieve second-order accuracy in the spatial discretization of $v\, \partial_x g$ in the above equation, one uses a second-order upwind (MUSCL) discretization in (\ref{1st-order}), and then (\ref{full_g}) is replaced by \begin{align} &\displaystyle g_{i+\frac{1}{2}}^{n+1}=\frac{1}{1+\frac{\Delta t}{2\varepsilon}(\beta_1^{n+1}+\beta_2^{n+1})}\, \bigg[g_{i+\frac{1}{2}}^{n} - \Delta t \left(\mathbb I - \Pi_{i+\frac{1}{2}}^n\right)\left(\frac{G_{i+1}^n - G_i^n}{\Delta x}\right) \notag\\[8pt] &\displaystyle\qquad + \Delta t\, \mathcal Q_{B}(g^n_{i+\frac{1}{2}}, g^n_{i+\frac{1}{2}}) + \frac{\Delta t}{\varepsilon} \bigg(\frac{1}{2}\left(\mathcal Q_{B}(M_{i+\frac{1}{2}}^n+g_{i+\frac{1}{2}}^n, M_{i+\frac{1}{2}}^n+g_{i+\frac{1}{2}}^n) - \mathcal Q_{B}(M_{i+\frac{1}{2}}^n-g^n_{i+\frac{1}{2}}, M_{i+\frac{1}{2}}^n-g_{i+\frac{1}{2}}^n)\right)\notag\\[8pt] &\displaystyle \label{full_g2}\qquad\qquad\qquad\qquad\qquad\qquad\quad + \frac{1}{2}(\beta_1^n + \beta_2^n) g_{i+\frac{1}{2}}^n - \left(\mathbb I - \Pi_{i+\frac{1}{2}}^n \right)\left(v\, \frac{M_{i+1}^n-M_{i}^n}{\Delta
x}\right)\bigg)\bigg]\,, \end{align} where \begin{equation} \label{419} G_i^n = v^{+} g_i^{+, n} + v^{-} g_i^{-, n} = v^{+} \left(g_{i-\frac{1}{2}}^n + \frac{\Delta x}{2}\, \delta g_{i-\frac{1}{2}}^n\right) + v^{-}\left(g_{i+\frac{1}{2}}^n - \frac{\Delta x}{2}\, \delta g_{i+\frac{1}{2}}^n\right), \qquad i = 0, \cdots, N, \end{equation} and $\delta g$ is a limited slope, given by \cite{LeVeque} $$\delta g_{j-\frac{1}{2}}^n = \frac{1}{\Delta x}\, \text{minmod}\left\{g_{j+\frac{1}{2}}^n - g_{j-\frac{1}{2}}^n, \, g_{j-\frac{1}{2}}^n - g_{j-\frac{3}{2}}^n\right\}, \qquad j = 0, \cdots, N+1. $$ \end{remark} {\bf II. }\, For the nFPL equation, we first introduce the symmetrized operator from \cite{JinYan}, $$ \widetilde P h = \frac{1}{\sqrt{M}}\nabla_v \cdot\left(M\nabla_v \left(\frac{h}{\sqrt{M}}\right)\right). $$ Thus the penalty operator given in (\ref{FP}) can be rewritten as $$ P_{FP}^{M} f = \sqrt{M}\widetilde P \frac{f}{\sqrt{M}}. $$ Using (\ref{LG2}), (\ref{full_g0}) correspondingly becomes \begin{align} &\displaystyle g_{i+\frac{1}{2}}^{n+1}= \left(\mathbb I - \frac{\Delta t}{2\varepsilon}(\beta_1^{n} + \beta_2^{n}) P^{n+1}\right)^{-1} \bigg[g_{i+\frac{1}{2}}^{n} - \Delta t \left(\mathbb I - \Pi_{i+\frac{1}{2}}^n \right)\left(v^{+}\, \frac{g_{i+\frac{1}{2}}^{n}-g_{i-\frac{1}{2}}^{n}}{\Delta x}+ v^{-}\, \frac{g_{i+\frac{3}{2}}^{n}-g_{i+\frac{1}{2}}^{n}}{\Delta x}\right) \notag\\[8pt] &\displaystyle \qquad + \Delta t\, \mathcal Q_{L}(g^n_{i+\frac{1}{2}}, g^n_{i+\frac{1}{2}}) + \frac{\Delta t}{\varepsilon} \bigg(\frac{1}{2}\left(\mathcal Q_{L}(M_{i+\frac{1}{2}}^n+g_{i+\frac{1}{2}}^n, M_{i+\frac{1}{2}}^n+g_{i+\frac{1}{2}}^n) - \mathcal Q_{L}(M_{i+\frac{1}{2}}^n-g_{i+\frac{1}{2}}^n, M_{i+\frac{1}{2}}^n-g_{i+\frac{1}{2}}^n)\right) \notag\\[8pt] &\displaystyle \label{full_gL}\qquad\qquad\qquad\qquad\qquad\qquad\quad -\frac{1}{2}(\beta_1^n + \beta_2^n)P^n g_{i+\frac{1}{2}}^n - \left(\mathbb I - \Pi_{i+\frac{1}{2}}^n \right)\left(v\,
\frac{M_{i+1}^n-M_{i}^n}{\Delta x}\right)\bigg)\bigg]\,. \end{align} Rewrite the above equation (\ref{full_gL}) as \begin{align} &\displaystyle \left(\frac{g_{i+\frac{1}{2}}}{\sqrt{M}}\right)^{n+1} = \left(\mathbb I - \frac{\Delta t}{2\varepsilon}(\beta_1^{n} + \beta_2^{n}) \widetilde P^{n+1}\right)^{-1}\bigg\{\frac{1}{\sqrt{M^{n+1}}} \bigg[g_{i+\frac{1}{2}}^{n} - \Delta t \left(\mathbb I - \Pi_{i+\frac{1}{2}}^n \right) \notag\\[8pt] &\displaystyle \qquad\qquad\qquad \cdot\left(v^{+}\, \frac{g_{i+\frac{1}{2}}^{n}-g_{i-\frac{1}{2}}^{n}}{\Delta x}+ v^{-}\, \frac{g_{i+\frac{3}{2}}^{n}-g_{i+\frac{1}{2}}^{n}}{\Delta x}\right) + \Delta t\, \mathcal Q_{L}(g^n_{i+\frac{1}{2}}, g^n_{i+\frac{1}{2}}) \notag\\[8pt] &\displaystyle\qquad\qquad\qquad + \frac{\Delta t}{\varepsilon} \bigg(\frac{1}{2}\left(\mathcal Q_{L}(M_{i+\frac{1}{2}}^n+g_{i+\frac{1}{2}}^n, M_{i+\frac{1}{2}}^n+g_{i+\frac{1}{2}}^n) - \mathcal Q_{L}(M_{i+\frac{1}{2}}^n-g_{i+\frac{1}{2}}^n, M_{i+\frac{1}{2}}^n-g_{i+\frac{1}{2}}^n)\right) \notag\\[8pt] &\displaystyle\qquad\qquad\qquad\qquad\label{full_g1} -\frac{1}{2}(\beta_1^n + \beta_2^n)\sqrt{M^n} \widetilde P\, \frac{g_{i+\frac{1}{2}}^n}{\sqrt{M^n}} - \left(\mathbb I - \Pi_{i+\frac{1}{2}}^n \right)\left(v\, \frac{M_{i+1}^n-M_{i}^n}{\Delta x}\right)\bigg)\bigg]\bigg\}\,. \end{align} One can apply the Conjugate Gradient (CG) method to get $\left(\frac{g_{i+\frac{1}{2}}}{\sqrt{M}}\right)^{n+1}$, which is used in \cite{JinYan}. \\[2pt] A second order discretization of $(\mathbb I - \Pi_{M^n})(v\, \partial_{x}g^n)$ can also be used as in (\ref{419}). 
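The slope-limited reconstruction (\ref{419}) can be sketched in a few lines of Python. The sketch below is illustrative only: it assumes a uniform grid with periodic indexing (the actual boundary treatment is discussed in Section \ref{Sec:Num}), and the array `g` holds the staggered interface values $g_{j+\frac{1}{2}}$.

```python
import numpy as np

def minmod(a, b):
    # minmod limiter: 0 when the arguments disagree in sign,
    # otherwise the argument of smaller magnitude
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def muscl_flux(g, v, dx):
    """Second-order upwind flux G_i = v^+ g_i^{+} + v^- g_i^{-} as in (419).
    g[j] holds the staggered value g_{j+1/2}; periodic indexing (assumption)."""
    vp, vm = max(v, 0.0), min(v, 0.0)
    # slope[j] ~ delta g_{j+1/2} = minmod(g_{j+3/2}-g_{j+1/2}, g_{j+1/2}-g_{j-1/2})/dx
    slope = minmod(np.roll(g, -1) - g, g - np.roll(g, 1)) / dx
    g_plus = np.roll(g, 1) + 0.5 * dx * np.roll(slope, 1)  # g_{i-1/2} + (dx/2) dg_{i-1/2}
    g_minus = g - 0.5 * dx * slope                         # g_{i+1/2} - (dx/2) dg_{i+1/2}
    return vp * g_plus + vm * g_minus
```

For constant data the limited slopes vanish and the flux reduces to the first-order upwind value; near discontinuities the minmod limiter suppresses the second-order correction.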
{\bf Velocity discretization of $\widetilde P$ }\, As was done in \cite{JinYan}, the discretization of $\widetilde P$ in one dimension is given by \begin{align} &\displaystyle (\widetilde P h)_{j}= \frac{1}{(\Delta v)^2}\frac{1}{\sqrt{M_j}}\bigg\{\sqrt{M_j M_{j+1}} \left(\left(\frac{h}{\sqrt{M}}\right)_{j+1} - \left(\frac{h}{\sqrt{M}}\right)_{j}\right) - \sqrt{M_j M_{j-1}}\left(\left(\frac{h}{\sqrt{M}}\right)_{j} - \left(\frac{h}{\sqrt{M}}\right)_{j-1}\right)\bigg\} \notag\\[6pt] &\displaystyle\qquad\quad = \frac{1}{(\Delta v)^2}\left(h_{j+1} -\frac{\sqrt{M_{j+1}} + \sqrt{M_{j-1}}}{\sqrt{M_j}}h_j + h_{j-1}\right). \end{align} It is obvious that $\widetilde P$ is symmetric. We discretize dimension-by-dimension in velocity space. \subsection{The Asymptotic-Preserving property of the scheme} In this section, we investigate the formal fluid dynamic behavior (for $\varepsilon \ll 1$) of the discretized numerical scheme given by (\ref{full_g}) and (\ref{full_U}) for the Boltzmann equation, in order to show that the scheme is Asymptotic-Preserving (AP) \cite{jin1999efficient, jin2010asymptotic} in the fluid dynamic regime. For notational simplicity, denote $$ \mathcal L_{M_{i+\frac{1}{2}}^n}(g_{i+\frac{1}{2}}^n) = \frac{1}{2}\left(\mathcal Q(M_{i+\frac{1}{2}}^n+g_{i+\frac{1}{2}}^n, M_{i+\frac{1}{2}}^n+g_{i+\frac{1}{2}}^n) - \mathcal Q(M_{i+\frac{1}{2}}^n-g_{i+\frac{1}{2}}^n, M_{i+\frac{1}{2}}^n-g_{i+\frac{1}{2}}^n)\right). $$ From the right-hand side of (\ref{full_g0}), one can see \begin{align} &\displaystyle \mathcal L_{M_{i+\frac{1}{2}}^n}(g_{i+\frac{1}{2}}^n) + \frac{1}{2}(\beta_1^n + \beta_2^n)g_{i+\frac{1}{2}}^n - \frac{1}{2}(\beta_1^{n+1}+\beta_2^{n+1})g_{i+\frac{1}{2}}^{n+1} \notag \\[6pt] &\displaystyle \label{AP0} \qquad\qquad\qquad - \left(\mathbb I - \Pi_{i+\frac{1}{2}}^n \right)\left(v\, \frac{M_{i+1}^n-M_{i}^n}{\Delta x}\right) = O(\varepsilon).
\end{align} We make the following assumptions \cite{Filbet-Jin}: there exists a constant $C>0$ such that \begin{equation}\label{assump1} ||g^n|| + \left|\left|\frac{g^{n+1}-g^n}{\Delta t}\right|\right|\leq C, \end{equation} and \begin{equation}\label{assump2} ||U^n|| + \left|\left|\frac{U^{n+1}-U^n}{\Delta t}\right|\right| \leq C. \end{equation} Denoting $\beta=\frac{1}{2}(\beta_1 + \beta_2)$, we have in (\ref{AP0}) \begin{align*} &\displaystyle \text{term } I :=\frac{1}{2}(\beta_1^n + \beta_2^n)g_{i+\frac{1}{2}}^n - \frac{1}{2}(\beta_1^{n+1}+\beta_2^{n+1})g_{i+\frac{1}{2}}^{n+1} = \beta^n g_{i+\frac{1}{2}}^n - \beta^{n+1}g_{i+\frac{1}{2}}^{n+1} \notag\\[4pt] &\displaystyle\qquad\quad = \beta^{n+1}(g^n - g^{n+1}) + (\beta^n - \beta^{n+1})g^n. \end{align*} Under the assumptions (\ref{assump1}) and (\ref{assump2}), and since $\beta^n$ only depends on $U^n$, one gets $$ || \text{term } I || = O(\Delta t). $$ From (\ref{AP0}), $g_{i+\frac{1}{2}}^n$ is approximated by $$ g_{i+\frac{1}{2}}^n = \mathcal L_{M_{i+\frac{1}{2}}^n}^{-1}\bigg\{\left(\mathbb I - \Pi_{i+\frac{1}{2}}^n \right)\left(v\, \frac{M_{i+1}^n-M_{i}^n}{\Delta x}\right)\bigg\} + O(\varepsilon) + O(\Delta t), $$ thus $g_{i+\frac{1}{2}}^{n+1}$ is approximated by \begin{equation}\label{g_AP} g_{i+\frac{1}{2}}^{n+1} = \mathcal L_{M_{i+\frac{1}{2}}^n}^{-1}\bigg\{\left(\mathbb I - \Pi_{i+\frac{1}{2}}^n \right)\left(v\, \frac{M_{i+1}^n-M_{i}^n}{\Delta x}\right)\bigg\} + O(\varepsilon) + O(\Delta t).
\end{equation} Plugging (\ref{g_AP}) into (\ref{full_U}) gives \begin{align} &\displaystyle \frac{U_i^{n+1}-U_i^n}{\Delta t} + \frac{F_{i+\frac{1}{2}}(U^n)-F_{i-\frac{1}{2}}(U^n)}{\Delta x} = \frac{\varepsilon}{\Delta x}\bigg\langle v m \bigg\{ \mathcal L_{M_{i+\frac{1}{2}}^n}^{-1}\left[\left(\mathbb I - \Pi_{i+\frac{1}{2}}^n\right)\left(v\, \frac{M_{i+1}^n - M_i^n}{\Delta x}\right)\right] \notag\\[6pt] &\displaystyle \label{U_AP} - \mathcal L_{M_{i -\frac{1}{2}}^n}^{-1}\left[\left(\mathbb I - \Pi_{i -\frac{1}{2}}^n\right)\left(v\, \frac{M_i^n - M_{i-1}^n}{\Delta x}\right)\right]\bigg\} \bigg\rangle + O(\varepsilon\Delta t + \varepsilon^2). \end{align} Following the same calculation as in \cite{MM-Lemou, Filbet-Jin}, one obtains $$ (\mathbb I - \Pi_{M})(v\cdot\nabla_x M) = \left(B: \left(\nabla_x u + (\nabla_x u)^{T}-\frac{2}{d}(\nabla_x \cdot u)\mathbb I\right) + A \cdot \frac{\nabla_x T}{\sqrt{T}}\right) M + O(\varepsilon), $$ where $$ A = \left(\frac{|v-u|^2}{2T}-\frac{d+2}{2}\right)\frac{v-u}{\sqrt{T}}, \qquad B= \frac{1}{2}\left(\frac{(v-u)\otimes(v-u)}{2T}-\frac{|v-u|^2}{dT}\mathbb I\right). $$ Therefore, $$ \mathcal L_{M^n}^{-1}\bigg((\mathbb I - \Pi_{M^n})(v\cdot\nabla_x M^n)\bigg) = \mathcal L_{M^n}^{-1}(BM): \left(\nabla_x u + (\nabla_x u)^{T}-\frac{2}{d}(\nabla_x\cdot u)\mathbb I\right) + \mathcal L_{M^n}^{-1}(AM)\cdot\frac{\nabla_x T}{\sqrt{T}}. $$ Thus (\ref{U_AP}) is a consistent time discretization of the compressible Navier--Stokes system, with the $O(\varepsilon)$ term given by $$\varepsilon \nabla_x\cdot \begin{pmatrix} 0 \\ \mu \sigma(u) \\ \mu \sigma(u)u + \kappa\nabla_x T \end{pmatrix}, $$ with $$ \sigma(u)=\nabla_x u + (\nabla_x u)^{T} - \frac{2}{d}(\nabla_x \cdot u)\mathbb I, $$ where the viscosity $\mu=\mu(T)$ and the thermal conductivity $\kappa=\kappa(T)$ depend only on the temperature; their general expressions can be found in \cite{Bardos}. We summarize the conclusions in the following theorem.
Compared to Proposition 4.3 in \cite{MM-Lemou}, the result here is valid for the full Boltzmann equation instead of the BGK equation. \begin{theorem} Consider the time and space discretizations of the Boltzmann equation given by equations (\ref{full_g}) and (\ref{full_U}). Then: (i) In the limit $\varepsilon\to 0$, the moments $U^n$ satisfy the following discretization of the Euler equations $$ \frac{U_i^{n+1}-U_i^n}{\Delta t} + \frac{F_{i+\frac{1}{2}}(U^n)-F_{i-\frac{1}{2}}(U^n)}{\Delta x} = 0. $$ (ii) The scheme (\ref{full_g}) and (\ref{full_U}) is asymptotically equivalent, with an error of $O(\varepsilon^2)$, to the following scheme, \begin{align*} &\displaystyle \frac{U_i^{n+1}-U_i^n}{\Delta t} + \frac{F_{i+\frac{1}{2}}(U^n)-F_{i-\frac{1}{2}}(U^n)}{\Delta x} = \frac{\varepsilon}{\Delta x}\bigg\langle v m \bigg\{ \mathcal L_{M_{i+\frac{1}{2}}^n}^{-1}\left[\left(\mathbb I - \Pi_{i+\frac{1}{2}}^n\right)\left(v\, \frac{M_{i+1}^n - M_i^n}{\Delta x}\right)\right] \notag\\[6pt] &\displaystyle \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad - \mathcal L_{M_{i -\frac{1}{2}}^n}^{-1}\left[\left(\mathbb I - \Pi_{i -\frac{1}{2}}^n\right)\left(v\, \frac{M_i^n - M_{i-1}^n}{\Delta x}\right)\right]\bigg\} \bigg\rangle, \end{align*} which is a consistent approximation of the compressible Navier-Stokes equations, provided that the viscous terms are resolved numerically. \end{theorem} Part (ii) shows that one needs the mesh size and time step to be $O(\varepsilon)$ in order to capture the Navier-Stokes approximation. This is necessary for {\it any} scheme, since the viscosity and heat conductivity are of $O(\varepsilon)$. \section{Numerical Implementation} \label{Sec:Num} We mention some details of the numerical implementation. Assume we have all the values of $U$ and $g$ at time $t^n$, namely $g_{-\frac{1}{2}}^n, \cdots, g_{N+\frac{1}{2}}^n$, and $U_{-1}^n, \, U_{0}^n, \cdots, U_{N+1}^n, \, U_{N+2}^n$. \\[2pt] (i) {\it Step 1}.
$g$ is calculated at the staggered grid points $x_{\frac{1}{2}}, \cdots, x_{N-\frac{1}{2}}$. We use equation (\ref{full_g}) for the Boltzmann equation or (\ref{full_gL}) for the Landau equation (in the rewritten form (\ref{full_g1})). The projection operator is given in (\ref{Pie}); here the second choice is used. Denote $$M^{\ast} = \frac{M(U_i) + M(U_{i+1})}{2}. $$ Then, by the definition of $\Pi$, one has $$ \Pi_{M^{\ast}}(\psi) = \frac{1}{\rho}\left[ \langle\psi\rangle + \frac{(v-u)\cdot \langle (v-u)\psi \rangle}{T} + \left(\frac{|v-u|^2}{2T} - \frac{d}{2}\right)\frac{2}{d}\bigg\langle\left(\frac{|v-u|^2}{2T} - \frac{d}{2}\right)\psi \bigg\rangle\right] M^{\ast}, $$ where $\rho$, $u$, $T$ are associated with $M^{\ast}$ as in (\ref{U_eqn}). If one assumes a periodic boundary condition in $x$, then \begin{equation}\label{g_BC} g_{-\frac{1}{2}} = g_{N-\frac{1}{2}}, \qquad g_{N+\frac{1}{2}} = g_{\frac{1}{2}}\,. \end{equation} A free-flow boundary condition is used in the shock-tube tests, that is, \begin{equation}\label{g_BC1} g_{-\frac{1}{2}} = g_{\frac{1}{2}}, \qquad g_{N+\frac{1}{2}} = g_{N-\frac{1}{2}}, \end{equation} and similarly for $U$. (ii) {\it Step 2}. $U$ is calculated at $i=1, \cdots, N$ by using (\ref{full_U}), where the values of $g_{\frac{1}{2}}^{n+1}, \cdots, g_{N+\frac{1}{2}}^{n+1}$ are used. The numerical flux $F$ is calculated by a first or second order flux splitting with slope limiters; we apply a second-order TVD method. Following \cite{MM-Lemou}, we use a simple reconstruction of the upwind flux $F_{i+\frac{1}{2}}(U^n)$ ($i=0, \cdots, N$) from the flux splitting that is naturally derived from its kinetic formulation: \begin{equation}\label{flux_split} F(U) = \langle v^{+}m M(U)\rangle + \langle v^{-}m M(U)\rangle =: F^{+}(U) + F^{-}(U). \end{equation} A second order approximation of the positive and negative fluxes is obtained by linear piecewise polynomials $\hat F_i$ for $i=0, \cdots, N+1$.
Then we reconstruct the numerical flux $F_{i+\frac{1}{2}}(U)$ ($i=0, \cdots, N$) in a split form, \begin{equation}\label{FS_2} F_{i+\frac{1}{2}}^n = F^{+}(U_i^n) + s_i^{+, n}\, \frac{\Delta x}{2} + F^{-}(U_{i+1}^n) - s_{i+1}^{-, n}\, \frac{\Delta x}{2}\,, \end{equation} where a slope limiter $s_i^{\pm, n}$ is introduced to suppress possible spurious oscillations near discontinuities. We use a second order TVD minmod slope limiter \cite{LeVeque}, \begin{equation}\label{slope} s_i^{\pm, n} =\frac{1}{\Delta x}\, \text{minmod}\left\{F^{\pm}(U_{i+1}^n)-F^{\pm}(U_i^n), \, F^{\pm}(U_i^n)-F^{\pm}(U_{i-1}^n)\right\}. \end{equation} Note that we need $F(U_{-1}), \, F(U_0), \, F(U_{N+1}),\, F(U_{N+2})$ when computing $s_0^{+}, \, s_{N+1}^{-}$, thus two ghost cells are needed. For periodic BC, we let $$ U_0 = U_N, \qquad U_{-1}=U_{N-1}, \qquad U_{N+1}=U_1, \qquad U_{N+2}=U_2. $$ Implementation details of solving (\ref{U_discrete}) are shown in the Appendix. \section{Numerical Examples} \label{sec:NE} {\bf Test I: The micro-macro scheme for the Boltzmann equation} Consider the spatial variable $x\in[0,1]$. Periodic boundary condition is used except for the shock tube tests. The velocity variable $v\in[-L_v,L_v]^2$ with $L_v=8.4$. $N_x=100$, $\Delta t = \Delta x/20$. Note that the velocity domain should be chosen large enough so that the numerical solution $f$ is essentially zero at its boundary. The fast spectral method in \cite{Lorenzo} is applied to evaluate the collision operator $\mathcal Q$ and $32$ points are used in each velocity dimension. In order to compare different schemes, we denote by `FJ' the Filbet-Jin AP method with penalty proposed in \cite{Filbet-Jin} for the Boltzmann equation; by `JY' the Jin-Yan AP method with penalty in \cite{JinYan} for the Landau equation. `MM' stands for the micro-macro scheme for the full Boltzmann and Landau equations we propose in the current paper. 
`DS' represents a direct solver with an explicit 4th order Runge-Kutta time discretization for the Boltzmann or Landau equations. \\[2pt] {\bf Test I (a) } \\ The initial data is given by \begin{equation}\label{Ia_IC} \rho^{0}(x)=\frac{2+\sin(2\pi x)}{3}, \qquad u^{0} = (0.2, 0), \qquad T^{0}(x)=\frac{3+\cos(2\pi x)}{4}\,. \end{equation} The following non-equilibrium double-peak initial distribution is considered, \begin{equation}\label{dp} f^{0}(x,v)=\frac{\rho^{0}}{4\pi T^{0}}\left(e^{-\frac{|v-u^{0}|^2}{2T^{0}}} + e^{-\frac{|v+u^{0}|^2}{2T^{0}}}\right). \end{equation} {\bf Test I (b) } \\ In this example, we consider a mixed regime with the Knudsen number $\varepsilon$ varying in space, where $x\in[0,1]$, $v\in[-6,6]^2$, \begin{align} &\displaystyle \varepsilon(x) = \begin{cases} 10^{-2} + \frac{1}{2}\left(\tanh(25-20x)+\tanh(-5+20x)\right), \qquad x\leq 0.65, \\[4pt] 10^{-2}, \qquad x>0.65. \end{cases} \end{align} The initial data is given by (\ref{Ia_IC})--(\ref{dp}). \\[2pt] {\bf Test I (c). } We study a Sod shock tube test problem for the Boltzmann equation. The equilibrium initial distribution is given by $$ f^{0}(x,v) = \frac{\rho^{0}}{2\pi T^{0}} e^{-\frac{|v-u^{0}|^2}{2 T^{0}}},$$ where the initial data for $\rho^{0}$, $u^{0}$ and $T^{0}$ are given by \begin{align} \begin{cases} &\displaystyle\rho_{l}=1, \qquad u_{l}=(0,0), \qquad T_{l}=1, \qquad x\leq 0.5, \\[2pt] &\displaystyle\rho_{r}=0.125, \qquad u_{r}=(0,0), \qquad T_{r}=0.25, \qquad x>0.5. \end{cases} \end{align} \\[2pt] There are different choices of the free parameter $\beta$ in the penalty operators; we list them below. {\it Choice 1. } In the BGK penalty operator, $P = \beta(M-f)$, where $\beta$ is a positive constant chosen for stability. One can split the collision operator $\mathcal Q$ into a gain part and a loss part, $\mathcal Q(f,f) = \mathcal Q^{+} - f \mathcal Q^{-}$. In order to obtain positivity, it is sufficient to require $\beta > \mathcal Q^{-}$ \cite{QinJin}.
In our case, $$ \beta_1^n > \mathcal Q_1^{-}, \qquad \beta_2^n > \mathcal Q_2^{-}, $$ where $\mathcal Q_1$, $\mathcal Q_2$ represent the collision operators $\mathcal Q(M^n+g^n,M^n+g^n)$ and $\mathcal Q(M^n-g^n,M^n-g^n)$, respectively. Here $\beta_1^n$, $\beta_2^n$ are space and time dependent. {\it Choice 2. } Another choice, given in \cite{Filbet-Jin} (recall (\ref{penalty1})), is \begin{align} &\displaystyle \beta_1^n =\sup_{v}\left|\frac{\mathcal Q(M^n+g^n, M^n+g^n)}{g^n}\right|, \\[4pt] &\displaystyle \beta_2^n =\sup_{v} \left|\frac{\mathcal Q(M^n-g^n, M^n-g^n)}{g^n}\right|. \end{align} \\[6pt] \indent We now present and compare numerical results using the different schemes. The conservation of moments will also be verified. In the figure titles, $P_0$, $P_1$, $P_2$ represent the mass, the momentum (in the $v_1$ direction) and the total energy, respectively. For Test I (a), Figure \ref{TestI-1} (for $\varepsilon=1$) and Figure \ref{TestI-2} (for $\varepsilon=10^{-4}$) show the time evolution of mass, momentum and energy obtained from $f$ (using $f=M + \varepsilon g$), denoted by `Mf' (see Remark \ref{rmk-cons}), and from solving the macroscopic equations, denoted by `ME' (moment equations) below. Figure \ref{TestI-1} uses `DS' and Figure~\ref{TestI-2} uses `MM' for small $\varepsilon$. One can observe that the moments calculated from `ME' are perfectly conserved, with values unchanged as time evolves, while conservation is not guaranteed if the moments are obtained from $f$ itself; Figure~\ref{VP_Fig} later shows, however, that the error in total energy conservation remains bounded for long times. This verifies the result, shown in (\ref{U_Cons1}), that moments solved from `ME' are conserved. Moments computed from $f$, although not exactly conserved, retain spectral accuracy, since their error stems from the spectral method used for the collision operators.
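For reference, the `Mf' moments are discrete velocity quadratures of the form $\langle m f\rangle \approx \sum_j m(v_j)\, f_j\, \Delta v$. The Python sketch below is an illustration only (the uniform grid, the 1D Maxwellian and its parameters are assumptions for the example, not the actual test data); since the Maxwellian is well resolved and decays below machine precision at the domain boundary, the quadrature recovers the exact moments $\rho$, $\rho u$ and $\frac12\rho(u^2+T)$ essentially to round-off.

```python
import numpy as np

# Illustration (assumption): `Mf'-style moments via uniform-grid quadrature in
# one velocity dimension, applied to a resolved Maxwellian.
Nv, Lv = 32, 8.4
v = np.linspace(-Lv, Lv, Nv, endpoint=False)
dv = v[1] - v[0]
rho0, u0, T0 = 1.0, 0.2, 0.8                       # hypothetical parameters
f = rho0 / np.sqrt(2 * np.pi * T0) * np.exp(-(v - u0) ** 2 / (2 * T0))

rho = np.sum(f) * dv                    # mass
mom = np.sum(v * f) * dv                # momentum, exact value rho0*u0
ekin = np.sum(0.5 * v ** 2 * f) * dv    # energy, exact value 0.5*rho0*(u0**2+T0)
```

Because the grid resolves the Gaussian, the quadrature errors here are exponentially small; in the actual scheme the residual drift of the `Mf' moments therefore comes from the collision discretization, not from the moment quadrature itself.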
One observes that when the conserved quantities are {\it not} obtained from the moment systems, the third moments usually show a larger error than the lower (first and second) moments under the same discretization, a phenomenon that is also observed in several other tests in the following sections. A possible reason is that the error in $f$ is amplified more strongly when multiplied by $|v|^2$ (instead of $1$ or $v$) in the integration yielding the third moments. In Section \ref{sec:7} we will use this idea to obtain conservative solvers for more general kinetic equations and for general numerical schemes, not just the micro-macro decomposition based \cite{MM-Lemou} or penalty based \cite{Filbet-Jin, JinYan} approaches. The density $\rho$, bulk velocity $u_1$ and temperature $T$ are defined as follows: $$ \rho = \int_{\mathbb R^2} f\, dv, \qquad u_i = \frac{1}{\rho}\int_{\mathbb R^2} v_i f\, dv \ \ (i = 1, 2), \qquad T = \frac{1}{2\rho}\int_{\mathbb R^2} |v-u|^2 f\, dv. $$ The numerical solutions for Test I (a) are shown in Figure \ref{TestI-a-sol}. Here $u_2=0$ and we omit plotting it. `MM' uses the penalty parameters of {\it Choice 1}. One can observe that the two different approaches `FJ' and `MM' are consistent and produce the same results. For Test I (b), the function $\varepsilon$ is plotted in Figure \ref{fig_eps1}; its values range from $10^{-2}$ to $1$, and it is discontinuous at $x=0.65$. Figure \ref{TestI-b2} shows, by comparing with the `DS' solution as a reference, that `MM' is able to capture the macroscopic behavior efficiently with coarse mesh sizes and time steps even when $\varepsilon$ is discontinuous, using the penalty parameter $\beta$ of {\it Choice 1}. One can observe from Figure \ref{TestI-c}, for Test I (c), that the macroscopic quantities are well approximated although the mesh size and time steps are larger than $\varepsilon$, using both the `FJ' and `MM' schemes, which give similar numerical results for the Sod problem.
\begin{figure}[H] \begin{subfigure}{1\textwidth} \includegraphics[width=1\textwidth, height=0.59\textwidth]{Test1a_DS_eps1-eps-converted-to.pdf} \end{subfigure} \caption{Test I (a). Time evolution of mass, momentum and energy by DS. `Mf' versus `ME'. $\varepsilon=1$. } \label{TestI-1} \end{figure} \begin{figure}[H] \centering \begin{subfigure}{1\textwidth} \includegraphics[width=1\textwidth, height=0.59\textwidth]{Test1a_MM_eps-4-eps-converted-to.pdf} \end{subfigure} \caption{Test I (a). Time evolution of mass, momentum and energy by MM: `Mf' versus `ME'. $\varepsilon=10^{-4}$, $t=1$. } \label{TestI-2} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.45\linewidth]{rho_ep1-eps-converted-to.pdf} \includegraphics[width=0.45\linewidth]{rho_ep-3-eps-converted-to.pdf} \centering \includegraphics[width=0.45\linewidth]{u1_ep1-eps-converted-to.pdf} \includegraphics[width=0.45\linewidth]{u1_ep-3-eps-converted-to.pdf} \centering \includegraphics[width=0.45\linewidth]{T_ep1-eps-converted-to.pdf} \includegraphics[width=0.45\linewidth]{T_ep-3-eps-converted-to.pdf} \caption{Test I (a). $t=0.2$. First column: $\varepsilon=1$. Second column: $\varepsilon=10^{-3}$. } \label{TestI-a-sol} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.49\linewidth]{eps1-eps-converted-to.pdf} \caption{A spatially varying Knudsen number $\varepsilon(x)$ for Test I (b). } \label{fig_eps1} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.45\linewidth]{Rho_1b_New-eps-converted-to.pdf} \centering \includegraphics[width=0.45\linewidth]{u1_1b_New-eps-converted-to.pdf} \centering \includegraphics[width=0.45\linewidth]{T_1b_New-eps-converted-to.pdf} \caption{Test I (b). Numerical solutions at $t=0.2$ by `DS' and `MM'. 
} \label{TestI-b2} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.45\linewidth]{rho_sh-eps-converted-to.pdf} \centering \includegraphics[width=0.45\linewidth]{u1_sh-eps-converted-to.pdf} \centering \includegraphics[width=0.45\linewidth]{T_sh-eps-converted-to.pdf} \caption{Test I (c). `FJ' versus `MM'. Numerical solutions at $t=0.2$. $\varepsilon=10^{-4}$. } \label{TestI-c} \end{figure} {\bf Test II: The micro-macro scheme for the nFPL equation} {\bf Test II (a). } The initial data is given by $$ \rho^{0}(x)=\frac{2+\sin(\pi x)}{3}, \qquad u^{0} = (0.2, 0), \qquad T^{0}(x)=\frac{3+\cos(\pi x)}{4}\,. $$ Consider the double-peak initial distribution (\ref{dp}). Let $x\in[-1,1]$, $v\in [-6,6]^2$ and $N_x=100$, $N_v=32$, $\Delta t = \Delta x/20$, $\varepsilon=1$. {\bf Test II (b). } \\ We consider a Sod shock tube test for the nFPL equation with an equilibrium initial distribution: $$ f^{0}(x,v) = \frac{\rho^{0}}{2\pi T^{0}} e^{-\frac{|v-u^{0}|^2}{2 T^{0}}},$$ where the initial data for $\rho^{0}$, $u^{0}$ and $T^{0}$ are given by \begin{align*} &\displaystyle (\rho, u_1, T) = (1, 0, 1), \qquad\qquad \text{if } -0.5 \leq x <0, \\[4pt] &\displaystyle (\rho, u_1, T) = (0.125, 0, 0.25), \qquad \text{if } 0 \leq x \leq 0.5. \end{align*} Let $x\in[-0.5, 0.5]$, $v\in [-6,6]^2$ and $N_x=100$, $N_v=32$, $\Delta t = \Delta x/20$, $\varepsilon=10^{-3}$. Figure \ref{TestII-a} shows the numerical solutions of Test II (a) by `MM' compared with `DS', for both $\mathcal O(1)$ and moderately small $\varepsilon$, in good agreement. Figure \ref{TestII-b}, for Test II (b), shows that the macroscopic quantities are well approximated in the Sod shock tube test for the nFPL equation, although the mesh size and time steps are coarse, which verifies the AP property.
\begin{figure}[H] \centering \includegraphics[width=0.42\linewidth]{rho_LDeps1-eps-converted-to.pdf} \centering \includegraphics[width=0.42\linewidth]{u1_LDeps1-eps-converted-to.pdf} \centering \includegraphics[width=0.42\linewidth]{T_LDeps1-eps-converted-to.pdf} \caption{Test II (a). `DS' (circles) versus `MM' (asterisks). $\varepsilon=1$, $t=0.25$. } \label{TestII-a} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.42\linewidth]{rho_LDeps-3-eps-converted-to.pdf} \centering \includegraphics[width=0.42\linewidth]{u1_LDeps-3-eps-converted-to.pdf} \centering \includegraphics[width=0.42\linewidth]{T_LDeps-3-eps-converted-to.pdf} \caption{Test II (b). Numerical solutions by `MM'. $\varepsilon=10^{-3}$, $t=0.2$. } \label{TestII-b} \end{figure} \section{A Conservative Scheme for the Vlasov-Amp\'{e}re-Boltzmann system} \label{sec:7} In order to further elaborate on the issue of numerical conservation of moments in kinetic solvers, in this section we develop a conservative scheme for the Vlasov-Amp\'{e}re-Boltzmann system with or without the collision term, which is not only of interest for the systems under study, but also gives general guidance on how to obtain exact numerical conservation of moments for a general kinetic solver. \subsection{The collisionless Vlasov-Poisson and Vlasov-Amp\'{e}re systems} \label{Sec:VP} First, we consider the Vlasov-Poisson (VP) system without collisions between particles, \begin{align} \begin{cases} &\displaystyle \partial_t f + v\cdot\nabla_x f - E\cdot\nabla_v f = 0\,, \\[4pt] &\displaystyle\label{VP} \nabla_x\cdot E = c(x) - \int_{\mathbb R^d} f\, dv\,. \end{cases} \end{align} Here $E$ is the electric field, while $c(x)$ is the background density. The domain is given by $\Omega_{x,v} = \Omega\times\mathbb R^d$. This system arises in modeling collisionless plasmas \cite{Liboff}. For simplicity, we will always assume a periodic boundary condition in $x$ for $f$.
Denote the moments as $$ \rho = \int_{\mathbb R^d} f\, dv, \qquad \rho u = \int_{\mathbb R^d} v f\, dv, \qquad E_{\text{Kin}} = \int_{\mathbb R^d} \frac{1}{2}|v|^2\, f\, dv\,. $$ The moment equations for (\ref{VP}) are given by \begin{align} \begin{cases} &\displaystyle \partial_t \rho + \nabla_x \cdot(\rho u)=0\,, \\[10pt] &\displaystyle \partial_t (\rho u) + \nabla_x \cdot \int_{\mathbb R^d}\, v\otimes v f\, dv + E\, \rho = 0\,, \\[10pt] &\displaystyle \label{moment2}\partial_t \int_{\mathbb R^d}\, \frac{1}{2} |v|^2 f\, dv +\nabla_x \cdot \int_{\mathbb R^d}\, \frac{|v|^2}{2}\, v f\, dv + E \cdot (\rho u) =0\,. \end{cases} \end{align} It is easy to check that the system (\ref{VP}) conserves the total energy defined by $$E_{\text{Total}} = \frac{1}{2}\int_{\Omega}\int_{\mathbb R^d}\, |v|^2 f\, dv dx + \frac{1}{2}\int_{\Omega}|E|^2\, dx. $$ While there have been previous works developing schemes that conserve this total energy (see, for example, \cite{YD-Cheng1, YD-Cheng}), our strategy is different, and it serves as a generic strategy for developing energy-conserving schemes for collisional systems; see the next section. We also refer to \cite{DG-BP} for discontinuous Galerkin solvers for the Boltzmann-Poisson system. In order to construct a scheme that conserves $E_{\text{Total}}$, instead of solving the Vlasov-Poisson system (\ref{VP}) we solve the following Vlasov-Amp\'{e}re (VA) system, obtained by adopting Amp\'{e}re's law, \begin{align} &\displaystyle \label{VP_f}\partial_t f + v\cdot\nabla_x f - E\cdot\nabla_v f = 0\,, \\[4pt] &\displaystyle \label{Amp}\partial_t E = \rho u. \end{align} Note that the VA and VP systems are equivalent when the charge density solves the continuity equation $$ \partial_t \rho + \nabla_x\cdot (\rho u) =0. $$ {\bf Step 1. }\, Update $f^{n+1}$ by solving (\ref{VP_f}) explicitly, that is, \begin{equation} f^{n+1} = f^{n} - \Delta t\, (v\cdot\nabla_x f^n - E^n\cdot\nabla_v f^n).
\end{equation} Here the transport term $v\cdot\nabla_x f$ is approximated by a non-oscillatory high-resolution shock-capturing method, and a spectral discretization in the velocity space is used for the term $E\cdot\nabla_v f$. \\[2pt] {\bf Step 2. }\, Update $E^{n+1}$ by using a forward Euler discretization of (\ref{Amp}), \begin{equation}\label{E_Amp1} E^{n+1} = E^n + \Delta t\, (\rho u)^n. \end{equation} {\bf Step 3. }\, Update the moments at $t^{n+1}$ by solving equations (\ref{moment2}), using $f^n$: \begin{align} \label{moment3} \begin{cases} &\displaystyle \frac{\rho^{n+1}-\rho^n}{\Delta t} + \nabla_x \cdot \int_{\mathbb R^d} v f^n \, dv =0\,, \\[10pt] &\displaystyle \frac{(\rho u)^{n+1}-(\rho u)^n}{\Delta t} + \nabla_x \cdot \int_{\mathbb R^d} v\otimes v f^n\, dv + E^n\, \rho^{n} = 0\,, \\[10pt] &\displaystyle \frac{E_{\text{Kin}}^{n+1}-E_{\text{Kin}}^n}{\Delta t} +\nabla_x \cdot \int_{\mathbb R^d} \frac{|v|^2}{2}\, v f^n\, dv + \frac{E^n + E^{n+1}}{2}\cdot (\rho u)^{n} =0\,. \end{cases} \end{align} \begin{theorem} Let $(\rho, u, E_{\text{Kin}}, E)_i$ be the numerical approximations of the corresponding quantities at the grid point $x_i$. If one discretizes the divergence terms in (\ref{moment3}) by a conservative spatial discretization, then one has conservation of the total mass and energy: \begin{equation}\label{E_Kin_Dis} \sum_{i=0}^{N_x}\, \rho_i^{n+1}\, = \sum_{i=0}^{N_x}\, \rho_i^{n}, \quad \sum_{i=0}^{N_x}\, \left( (E_{\text{Kin}}^{n+1})_{i}+ \frac{1}{2}(E_i^{n+1})^2\right) = \sum_{i=0}^{N_x}\, \left( (E_{\text{Kin}}^{n})_{i}+ \frac{1}{2} (E_i^{n})^2\right). \end{equation} \end{theorem} \begin{proof} Summing the spatially discretized first equation of (\ref{moment3}) over all $i$ gives \begin{equation}\label{rho-cons}\Delta x \sum_{i=0}^{N_x}\, \rho_i^{n+1}\, = \Delta x \sum_{i=0}^{N_x}\, \rho_i^{n}.
\end{equation} Also, the third equation of (\ref{moment3}) gives \begin{equation}\label{E_Kin} \Delta x \sum_{i=0}^{N_x} \frac{(E_{\text{Kin}})_i^{n+1} - (E_{\text{Kin}})_i^n}{\Delta t} + \frac{\Delta x}{2}\, \sum_{i=0}^{N_x}\left(E_i^{n+1} + E_i^n \right)(\rho u)^n_i = 0. \end{equation} Substituting $(\rho u)^n_i = (E_i^{n+1}-E_i^n)/\Delta t$ from (\ref{E_Amp1}), one obtains (\ref{E_Kin_Dis}). \end{proof} Since the goal of this section is to preserve the total energy in time, we will only conduct numerical examples to check the conservation property, and not consider other discretization issues for the system. \\[2pt] {\bf Test III} Let the initial data be $$ f(t=0, x, v) = (1+\cos(2x))\, \frac{e^{-|v|^2/2}}{\sqrt{2\pi}}. $$ Periodic boundary conditions in space are assumed for $f$, $E$ and $\phi$. The initial condition of the electric field $E$ can be obtained from the Poisson equation $$ -\Delta_x \phi = c(x) - \int_{\mathbb R^d} f\, dv, $$ by using a second-order finite-difference Poisson solver and a central-difference spatial discretization for $E = -\nabla_x \phi$. To make the solution unique, we also set the boundary data for $\phi$, $$ \phi(x_L) = \phi(x_R) = 0. $$ Set $c(x)=1$. Let $x\in[0, \pi]$, $v\in[-2\pi, 2\pi]$, $N_x=200$, $N_v=64$ and $\Delta t = \Delta x/20$ in the following test. In Figure \ref{VP_Fig}, the first figure shows the density $\rho(x)$ at time $t=0.5$, computed either by solving the moment equations (`ME') or from the solution $f$ (`Mf'). In the second figure, the electric field $E(x)$ is compared between using the Poisson equation and using Amp\'ere's law. In the third figure, we plot mass as a function of time and compare it between `ME' and `Mf'. One can see that the two solutions match well in the first three figures. In the fourth figure, we plot the total energy obtained from solving the Vlasov-Poisson system (`Mf-Poiss'), the Vlasov-Amp\'ere system (`Mf-Amp'), and the moment equations with Amp\'ere's law (`ME-Amp'), respectively.
This verifies (\ref{E_Kin_Dis}): the numerical total energy is perfectly conserved for `ME-Amp'. The other two curves, `Mf-Poiss' and `Mf-Amp', though not exactly conserved, have a small numerical error (of the order of the numerical consistency error). It is remarkable, however, that this $O(10^{-3})$ error in the numerical total energy persists over long times and has exactly the same order of magnitude as the total-energy error in the simulation of the Vlasov--Poisson--Landau system in Figure 12 of \cite{zhang2017conservative}, computed there by operator splitting of a DG scheme for the collisionless Vlasov--Poisson advection coupled to the collisional conservative step, under the same boundary conditions as assumed here. \begin{figure}[H] \centering \includegraphics[width=0.49\linewidth]{VP_Rho-eps-converted-to.pdf} \centering \includegraphics[width=0.49\linewidth]{VP_E-eps-converted-to.pdf} \centering \includegraphics[width=0.49\linewidth]{Rho_T-eps-converted-to.pdf} \centering \includegraphics[width=0.49\linewidth]{Energy_T-eps-converted-to.pdf} \caption{Test III. $\rho, E$ at $t=0.5$ in the first row; mass and total energy with respect to time in the second row. } \label{VP_Fig} \end{figure} \subsection{The Vlasov-Amp\'ere-Boltzmann system} We can easily extend the scheme introduced in section \ref{Sec:VP} to collisional problems, for example the Vlasov-Amp\'ere-Boltzmann system studied in this section. This system models collisional plasmas \cite{krall1973principles}. Consider the Vlasov-Amp\'ere-Boltzmann system, \begin{align} \begin{cases} &\displaystyle \partial_t f + v\cdot\nabla_x f - E\cdot\nabla_v f = \frac{1}{\varepsilon}\mathcal Q_{\text{B}}(f,f)\,, \\[4pt] &\displaystyle \label{VA}\partial_t E = \rho u\,. \end{cases} \end{align} The time-discretized scheme for the moment equations of (\ref{VA}) is the same as for the Vlasov-Poisson system and is given in (\ref{moment3}).
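Because the moment update of Steps 2--3 is what delivers the exact conservation stated in the theorem, the mechanism can be illustrated independently of the $f$-solver. The following sketch is our own illustration, not the implementation used for the figures: the grid sizes, the random data, and the central conservative flux are assumptions. It performs one step of the discrete moment system together with the Amp\'ere update on a periodic 1D1V grid, starting from an arbitrary positive $f^n$:

```python
import numpy as np

rng = np.random.default_rng(1)
Nx, Nv = 32, 16
dx, dt = 2 * np.pi / Nx, 0.01
v = np.linspace(-4.0, 4.0, Nv)            # velocity grid
dv = v[1] - v[0]
f = rng.random((Nx, Nv)) + 0.1            # arbitrary positive f^n
E = rng.standard_normal(Nx)               # arbitrary E^n

def div(F):
    # conservative central difference with periodic boundaries:
    # the flux contributions telescope, so sum_i div(F)_i = 0
    return (np.roll(F, -1) - np.roll(F, 1)) / (2 * dx)

rho  = f.sum(axis=1) * dv                 # density
rhou = (f * v).sum(axis=1) * dv           # momentum
ekin = (0.5 * f * v**2).sum(axis=1) * dv  # kinetic energy density

# Step 2: Ampere update, E^{n+1} = E^n + dt * (rho u)^n
E_new = E + dt * rhou

# Step 3: moment update, with the averaged field in the energy equation
rho_new  = rho  - dt * div(rhou)
rhou_new = rhou - dt * div((f * v**2).sum(axis=1) * dv) - dt * E * rho
ekin_new = ekin - dt * div((0.5 * f * v**3).sum(axis=1) * dv) \
                - dt * 0.5 * (E + E_new) * rhou

mass_err   = abs(rho_new.sum() - rho.sum()) * dx
energy_err = abs((ekin_new + 0.5 * E_new**2).sum()
                 - (ekin + 0.5 * E**2).sum()) * dx
print(mass_err, energy_err)               # both at round-off level
```

The cancellation is exact: the work term $\frac{1}{2}(E^n_i+E^{n+1}_i)(\rho u)^n_i$ in the kinetic-energy update matches the change $\frac{1}{2}\big((E^{n+1}_i)^2-(E^n_i)^2\big)/\Delta t$ produced by the Amp\'ere step, while the conservative divergence telescopes under periodic boundaries.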
With $\rho$, $\rho u$ and $E_{\text{Kin}}$, one can get the temperature $T$ using the relation $ E_{\text{Kin}}=\frac{1}{2}\rho\, |u|^2 + \frac{N_d}{2}\, \rho T$ and thus compute the local equilibrium \begin{equation} \label{moment_M} M_{\text{eq}}(x,v) = \frac{\rho(x)}{(2 \pi T(x))^{N_{d}/2}}\, \exp\left( -\frac{|v-u(x)|^2}{2 T(x)} \right). \end{equation} To overcome the stiffness of the collision operator in the fluid regime, we simply use the Filbet-Jin penalty AP scheme here. Step 2 and Step 3, given by (\ref{E_Amp1}) and (\ref{moment3}) to update $E$ and the moment quantities, are exactly the same as in the scheme of section \ref{Sec:VP}. With the collision term in (\ref{VA}), Step 1 correspondingly becomes $$ \frac{f^{n+1}-f^n}{\Delta t} + v\cdot\nabla_x f^n - E^n \cdot\nabla_v f^n = \frac{\mathcal Q(f^n) - P(f^n)}{\varepsilon} + \frac{P(f^{n+1})}{\varepsilon}, $$ which gives $$ f^{n+1} = \frac{\varepsilon}{\varepsilon + \beta \Delta t}\left(f^n - \Delta t\, v\cdot\nabla_x f^n + \Delta t\, E^n \cdot\nabla_v f^n \right) + \Delta t\, \frac{\mathcal Q(f^n) - P(f^n)}{\varepsilon + \beta\Delta t} + \frac{\beta\Delta t}{\varepsilon + \beta\Delta t}\, \mathcal M^{n+1}, $$ with $\mathcal M^{n+1}$ defined through the moment quantities solved from (\ref{moment3}). In the following numerical experiments we use $\Delta x=\pi/200$, $\Delta t=\Delta x/20$. In Figure \ref{TestIII-a}, we show a set of figures similar to Figure \ref{VP_Fig} above. The first row shows the numerical solution at output time $t=0.5$, with $\varepsilon=1$. The numerical solutions for $\rho$ and $E$ match well regardless of whether Amp\'ere's law or the Poisson equation is used. In this test, the moments (mass and total energy) are perfectly conserved if obtained from `ME' or `ME-Amp', as shown in the second row of Figure \ref{TestIII-a}. The red line in the third figure indicates that the mass obtained from $f$ is not perfectly conserved but has a spectrally small error.
The green (`Mf-Amp') and red (`Mf-Poiss') lines in the fourth figure show that the energy, if obtained from $f$ coupled with Amp\'ere's law or the Poisson equation for $E$, is not perfectly conserved but still has a small error. For the last test, we use only the exactly conservative scheme and check the penalty method for the Vlasov-Amp\'ere-Boltzmann equation in the case of small $\varepsilon$. Figure \ref{TestIII-b} shows in the first row the numerical solutions $\rho$, $E$ at output time $t=0.1$, with $\varepsilon=0.05$. In the second row, we show that mass and total energy are perfectly conserved if using the moment equations given by (\ref{moment3}). \begin{figure}[H] \centering \includegraphics[width=0.49\linewidth]{VPB_Rho1-eps-converted-to.pdf} \centering \includegraphics[width=0.49\linewidth]{VPB_E1-eps-converted-to.pdf} \centering \includegraphics[width=0.49\linewidth]{VPB_Mass_T-eps-converted-to.pdf} \centering \includegraphics[width=0.49\linewidth]{VPB_Energy_T-eps-converted-to.pdf} \caption{$\varepsilon=1$. Numerical solutions $\rho, E$ at $t=0.5$ in the first row; mass and total energy with respect to time in the second row. } \label{TestIII-a} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.49\linewidth]{VPB_Rho2-eps-converted-to.pdf} \centering \includegraphics[width=0.49\linewidth]{VPB_E2-eps-converted-to.pdf} \centering \includegraphics[width=0.49\linewidth]{VPB_Mass2_T-eps-converted-to.pdf} \centering \includegraphics[width=0.49\linewidth]{VPB_Energy2_T-eps-converted-to.pdf} \caption{$\varepsilon=0.05$. Numerical solutions $\rho, E$ of the Vlasov-Amp\'ere-Boltzmann equation at $t=0.1$ in the first row (blue circles use $N_x=100$, $\Delta t=\Delta x/20$; the red line is a reference solution using a fine mesh, $N_x=200$, $\Delta t= 10^{-4}$); mass and total energy with respect to time in the second row.
} \label{TestIII-b} \end{figure} \begin{remark} The schemes proposed in this section give the desired conservation property thanks to the use of moment equations. Here we obtain the moment system first (so the right hand side vanishes) and then discretize it. If one obtains the moments from the discretized $f$ equation, due to the non-conservation of the approximate collision operator, the discrete moments are not necessarily conserved. This has already been addressed in \cite{JinYan} for a different purpose, but here it serves as a generic strategy to devise conservative schemes for general collisional kinetic systems. The only price paid is the extra effort to solve the moment system. \end{remark} \section{Conclusions and future work} \label{sec:FW} The micro-macro decomposition based method for multiscale kinetic equations has found many applications as an effective method to derive Asymptotic-Preserving schemes that work efficiently in all regimes, including both the kinetic and fluid regimes. However, so far it has been developed only for the BGK model. In this paper we extend it to general collisional kinetic equations, including the Boltzmann and the Fokker-Planck Landau equations. One of the difficulties in this formulation is the numerically stiff linearized collision operator, which needs to be treated implicitly and thus becomes numerically challenging. Our main idea is to use a relation between the (numerically stiff) linearized collision operator and the nonlinear quadratic one; the latter's stiffness can be overcome using the BGK penalization method of Filbet and Jin for the Boltzmann equation, or the linear Fokker-Planck penalization method of Jin and Yan for the Fokker-Planck Landau equation. Such a scheme allows the computation of multiscale collisional kinetic equations efficiently in all regimes, including the fluid regime, in which the fluid dynamic behavior can be correctly computed even without numerically resolving the small Knudsen number.
It is implicit but can be implemented {\it explicitly}. This scheme preserves the moments (mass, momentum and energy) {\it exactly}, due to the use of the macroscopic system, which is naturally in conservative form. We then utilize this conservation property for more general kinetic equations, using the Vlasov-Amp\'{e}re and Vlasov-Amp\'{e}re-Boltzmann systems as examples. The main idea is to evolve both the kinetic equation for the probability density distribution and the moment system; the latter naturally induces a scheme that conserves the moments exactly at the numerical level whenever they are physically conserved. This recipe is generic and applies to all kinetic equations. Numerical examples demonstrate the conservation properties of our schemes, as well as their robustness in the fluid dynamic and mixed regimes. Notice that the numerical total energy exhibited an $O(10^{-3})$ error, persistent over long times, that coincides with the order of magnitude of the numerical total-energy error in the implementation of the Vlasov--Poisson--Landau system in \cite{zhang2017conservative}, obtained by operator splitting of a DG scheme for the collisionless Vlasov--Poisson advection coupled to the collisional conservative step. This observation raises the interesting problem of understanding how to diminish the computational error in the total energy obtained from the kinetic pdf solving the Vlasov--Poisson system with either Boltzmann or Landau collision operators, perhaps by imposing a conservation constraint in the kinetic step of our proposed scheme, or by improving the operator splitting used in \cite{zhang2017conservative}. Extending the micro-macro method to multi-dimensional problems remains to be pursued. Here one needs to extend the staggered grid to higher dimensions, a task that was investigated for hyperbolic systems of conservation laws \cite{JT} but has yet to be studied for kinetic equations.
\section*{Appendix: Details of Numerical Implementation} Details of solving (\ref{U_discrete}) are shown below. In the case of $x\in\mathbb R$, $v\in\mathbb R^2$ ($d=2$), $$u_1 = \frac{1}{\rho}\int_{\mathbb R^d} f v_1\, dv, \qquad u_2 = \frac{1}{\rho}\int_{\mathbb R^d} f v_2\, dv. $$ We then have \begin{align*} &\displaystyle\frac{\partial\rho}{\partial t} + \partial_x F_1 = - \varepsilon\, \partial_x \langle g \rangle, \\[4pt] &\displaystyle \frac{\partial}{\partial t} (\rho u_1) + \partial_x F_2 = - \varepsilon\, \partial_x \langle v_1^2\, g \rangle, \\[4pt] &\displaystyle \frac{\partial}{\partial t} (\rho u_2) + \partial_x F_3 = - \varepsilon\, \partial_x \langle v_1 v_2\, g \rangle, \\[4pt] &\displaystyle \frac{\partial E}{\partial t} + \partial_x F_4 = - \varepsilon\, \partial_x \langle v_1\, \frac{|v|^2}{2} g \rangle, \end{align*} where \begin{equation*}\label{F12} F_1 = \langle v_1 M \rangle, \qquad F_2 = \langle v_1^2\, M \rangle, \qquad F_3 = \langle v_1 v_2\, M \rangle, \qquad F_4 = \langle v_1\, \frac{|v|^2}{2}M \rangle, \end{equation*} with $M$ associated with $\rho$, $u_1$, $u_2$, $T$ as defined in (\ref{Max}). The kinetic formulation of the flux splitting (\ref{flux_split}) is given by \begin{equation*}\label{FS} F_1^{\pm} = \langle v_1^{\pm} M \rangle, \qquad F_2 ^{\pm}= \langle v_1^{\pm}\, v_1 M \rangle, \qquad F_3^{\pm} = \langle v_1^{\pm}\, v_2 M \rangle, \qquad F_4^{\pm} = \langle v_1^{\pm}\, \frac{|v|^2}{2} M \rangle, \end{equation*} with $v_1^{\pm} = (v_1 \pm |v_1|)/2$, $v_2^{\pm} = (v_2 \pm |v_2|)/2$. \bibliographystyle{siam} \bibliography{MM_Boltzmann.bib} \end{document}
\begin{document} \title{Enumeration of binary trees \\ compatible with a perfect phylogeny} \author{Julia A. Palacios\thanks{Department of Statistics and Department of Biomedical Data Science, Stanford University, Stanford, CA, USA. Corresponding author: juliapr@stanford.edu} \\ Anand Bhaskar\thanks{Department of Genetics, Stanford University, Stanford University, Stanford, CA, USA.} \\ Filippo Disanto\thanks{Department of Mathematics, University of Pisa, Pisa, Italy.} \\ Noah A. Rosenberg\thanks{Department of Biology, Stanford University, Stanford, CA, USA.}} \date{\today} \maketitle \begin{abstract} Evolutionary models used for describing molecular sequence variation suppose that at a non-recombining genomic segment, sequences share ancestry that can be represented as a genealogy---a rooted, binary, timed tree, with tips corresponding to individual sequences. Under the infinitely-many-sites mutation model, mutations are randomly superimposed along the branches of the genealogy, so that every mutation occurs at a chromosomal site that has not previously mutated; if a mutation occurs at an interior branch, then all individuals descending from that branch carry the mutation. The implication is that observed patterns of molecular variation from this model impose combinatorial constraints on the hidden state space of genealogies. In particular, observed molecular variation can be represented in the form of a perfect phylogeny, a tree structure that fully encodes the mutational differences among sequences. For a sample of $n$ sequences, a perfect phylogeny might not possess $n$ distinct leaves, and hence might be compatible with many possible binary tree structures that could describe the evolutionary relationships among the $n$ sequences. 
Here, we investigate enumerative properties of the set of binary ranked and unranked tree shapes that are compatible with a perfect phylogeny, and hence, the binary ranked and unranked tree shapes conditioned on an observed pattern of mutations under the infinitely-many-sites mutation model. We provide a recursive enumeration of these shapes. We consider both perfect phylogenies that can be represented as binary and those that are multifurcating. The results have implications for computational aspects of the statistical inference of evolutionary parameters that underlie sets of molecular sequences. \end{abstract} \section{Introduction} Coalescent and mutation models are used in population genetics to estimate evolutionary parameters from samples of molecular sequences \citep{marjoram2006modern}. The central idea is that observed molecular variation is the result of a process of mutation along the branches of the genealogy of the sample. This genealogy is a timed tree that represents the ancestral relationships of the sample at a chromosomal segment. Consisting of a tree topology and its branch lengths, the genealogy is a nuisance parameter that is modeled as a realization of the coalescent process dictated by evolutionary parameters---which are in turn inferred by integrating over the space of genealogies. For large sample sizes, however, this integration is computationally challenging because the state space of tree topologies increases exponentially with the number of sampled sequences. Recently, a coarser coalescent model known as the \emph{Tajima coalescent} \citep{Tajima1983,veber}, coupled with the infinitely-many-sites mutation model \citep{Kimura69} has been introduced for population-genetic inference problems \citep{Palacios2019}. 
Whereas the standard coalescent model \citep{Kingman:1982uj} induces a probability measure on the space of ranked labeled tree topologies, the Tajima coalescent induces a probability measure on the space of ranked \emph{unlabeled} tree topologies. Removing the labels of the tips from the tree topology, as in the Tajima coalescent, reduces the cardinality of the space of tree topologies substantially, shrinking computation time in inference problems. Under infinitely-many-sites mutation, only a subset of tree topologies (labeled or unlabeled) are compatible with an observed data set, so that the computational complexity of inference varies among different data sets. Hence, \citet{Cappello2019} used importance sampling to approximate cardinalities of the spaces of labeled and unlabeled ranked tree shapes conditioned on a data set of molecular sequences, demonstrating a striking reduction of the cardinality of the space of ranked unlabeled tree shapes versus the labeled counterpart when conditioning on observed data with a sparse number of mutations. Here, we extend beyond the approximate work of \citet{Cappello2019} and obtain exact results. We provide a recursive algorithm for exact computation of the cardinality of the spaces of labeled and unlabeled ranked tree shapes compatible with a sequence data set. We provide a number of other enumerative results relevant for inference of tree topologies in phylogenetics and population genetics. Python code for enumeration is available at \\ {\tt https://colab.research.google.com/drive/1cAx2xyn7OtmG-F-9nxJ3CHRc7e7AjuCj?usp=sharing}. \section{Preliminaries} \subsection{Types of trees} The \textbf{coalescent} is a continuous-time Markov chain with values in the space $\mathcal{P}_{n}$ of partitions of $[n]=\{1,2,\ldots,n\}$ \citep{Kingman:1982uj}. 
The process starts with the trivial partition of $n$ singletons, labeled $\{1\},\{2\},\ldots,\{n\}$, at time 0; at each transition, two blocks are chosen uniformly at random to merge into a single block. The process ends with a single block with label $\{1,2,\ldots,n\}$. In the standard coalescent, the holding times are exponentially distributed with rate $\binom{k}{2}$ when there are $k$ blocks. Transition probabilities for the coalescent can be factored into two independent components, a pure death process and a discrete jump chain. A full realization of the process can be represented by a timed rooted binary tree: a genealogy. The tips of the genealogy are labeled by $\{1,2,\ldots,n\}$. Figure \ref{fig:tree_topologies}A shows a realization of the jump process, a ranked labeled tree shape. A lumping of the standard coalescent process, called the \textbf{Tajima coalescent} \citep{veber}, consists in removing the labels of the tips of the genealogy. The pure death process of the lumped process is the same as the standard coalescent. The discrete jump chain can be described as a simple urn process \citep{janson2011}. Start with an urn of $n$ balls labeled $0$; at the $i$th transition, draw two balls and return one to the urn with label $i$. The process ends when there is a single ball with label $n-1$ in the urn. A full realization of the urn process can be represented as a ranked unlabeled tree shape with internal nodes labeled by the transition index. \begin{figure} \begin{center} \includegraphics[scale=0.46]{tree_mixing3.pdf} \end{center} \caption{\small{\textbf{Different types of trees.} (A) A ranked labeled tree shape. (B) A ranked unlabeled tree shape. (C) An unranked unlabeled tree shape. (D) An unranked labeled tree shape. The ranked unlabeled tree shape in (B) is obtained by discarding leaf labels from the ranked labeled tree shape in (A). The unranked labeled tree shape in (D) is obtained by discarding the sequence of internal node ranks in (A). 
The unranked unlabeled tree shape in (C) is obtained by discarding the sequence of internal node ranks in (B) or the leaf labels in (D).}} \label{fig:tree_topologies} \end{figure} A \textbf{ranked labeled tree shape} of size $n$, denoted by $T^{L}_{n}$, is a rooted binary labeled tree of $n$ leaves with a total ordering for the internal nodes. Without loss of generality, we use label set $[n]$ to label the $n$ leaves. The space of ranked labeled tree shapes with $n$ leaves will be denoted by $\mathcal{T}^{L}_{n}$. Figure \ref{fig:tree_topologies}A shows an example of a ranked labeled tree shape with $n=8$ leaves. Ranked labeled tree shapes are also known as labeled histories. A \textbf{ranked unlabeled tree shape} of size $n$, denoted by $T^{R}_{n}$, is a rooted binary unlabeled tree of $n$ leaves with a total ordering for the internal nodes. The space of ranked unlabeled tree shapes with $n$ leaves will be denoted by $\mathcal{T}^{R}_{n}$. Figure \ref{fig:tree_topologies}B shows an example of a ranked unlabeled tree shape with $n=8$ leaves. We will refer to a ranked unlabeled tree shape simply as a ranked tree shape; these ranked tree shapes are also known as unlabeled histories, or Tajima trees. Figure \ref{fig:trees1} shows all ranked unlabeled tree shapes with $3,4,5,$ and $6$ leaves. \begin{figure} \begin{center} \includegraphics[scale=0.5]{RankedTreeShapes.eps} \end{center} \caption{\small{\textbf{An enumeration of all possible ranked tree shapes with 3, 4, 5, and 6 leaves.} }} \label{fig:trees1} \end{figure} An \textbf{unranked unlabeled tree shape} of size $n$, denoted by $T_{n}$, is a rooted binary unlabeled tree of $n$ leaves with unlabeled internal nodes. The space of unranked (unlabeled) tree shapes with $n$ leaves will be denoted by $\mathcal{T}_{n}$. Figure \ref{fig:tree_topologies}C shows an example of an unranked unlabeled tree shape with $n=8$ leaves. These shapes are also called unlabeled topologies or Otter trees \citep{otter1948number}. 
An \textbf{unranked labeled tree shape} of size $n$, denoted by $T^X_n$, is a rooted binary labeled tree of $n$ leaves with unlabeled internal nodes. The space of unranked labeled tree shapes with $n$ leaves will be denoted by $\mathcal{T}^X_{n}$. Figure \ref{fig:tree_topologies}D shows an example of an unranked labeled tree shape with $n=8$ leaves. These tree shapes are also called labeled topologies. \subsection{Mutations on trees} Many generative models of neutral molecular evolution assume that a process of mutations is superimposed on the genealogy as a continuous-time Markov process. In the \textbf{infinitely-many-sites mutation model}, every mutation along the branches of the tree occurs at a chromosomal site that has not previously mutated \citep{Kimura69}. Therefore, if a mutation occurs at an interior branch along the genealogy, all sequences descended from that branch carry the mutation. Because every site can mutate at most once, the sequence of mutated sites can be encoded as a binary sequence, with 0 denoting the ancestral type and 1 denoting the mutant type at any site. Figure \ref{fig:data0}A shows a realization of the Tajima coalescent together with a realization of mutations from the infinitely-many-sites mutation model with 5 individuals and 4 mutated sites. In what follows, we assume that we observe molecular data only as binary sequences at the tips of the tree. \subsection{Observed binary molecular sequence data as a perfect phylogeny}\label{sec:data} \begin{figure} \begin{center} \includegraphics[scale=0.5]{Tajima_model_simple.eps} \end{center} \caption{\small{\textbf{Tajima coalescent and infinitely-many-sites generative model of binary molecular data.} \textbf{(A)} A Tajima genealogy of 5 individuals, with 4 superimposed mutations depicted as gray squares. The root is labeled by the ancestral type $0000$, and the leaves are labeled by the genetic type at each of four mutated sites.
The first two leaves from left to right are labeled $0001$ because one mutation occurs in their path to the root. The third and fourth individuals have three mutations in their path to the root and are labeled 1110; the last individual is labeled $1000$ because only one mutation occurs along its path to the root. The order and label of the mutations is unimportant; however, it is assumed that the same position, or site, in a sequence of 0s and 1s corresponds across individuals. For ease of exposition, we label the mutations a, b, c and d. The first site corresponds to mutation a, the second to b, the third to c, and the fourth to d. \textbf{(B)} Left, a perfect phylogeny representation of the observed data at the tips of (A). Data consist of 3 unique haplotypes $0001$, $1110$ and $1000$, with frequencies 2, 2, and 1. The corresponding frequencies are the labels of tips of the perfect phylogeny. Right, perfect phylogeny topology obtained by removing the edge labels of the perfect phylogeny. \textbf{(C)} The only three ranked tree shapes compatible with the perfect phylogeny topology in (B).}} \label{fig:data0} \end{figure} The \textbf{perfect phylogeny algorithm}, proposed by \citet{Gusfield1991}, generates a graphical representation of binary molecular sequence data that have been produced according to the infinitely-many-sites mutation model. Label individual sequences $1,2, \ldots, n$, and label mutated or ``segregating'' sites $a,b,\ldots$. The original algorithm generates a tree structure known as a \textbf{perfect phylogeny}, with tips labeled $1,2,\ldots,n$ and with edges labeled $a,b,\ldots$, that is in bijection with the observed ``labeled data.'' An edge can have no labels, one label, or more than one label. 
Perfect phylogenies have been central to coalescent-based inference algorithms, in which maximum likelihood or Bayesian estimation of evolutionary parameters that have given rise to the particular distribution of mutations and clade sizes on the perfect phylogeny are sought by importance sampling or Markov chain Monte Carlo \citep{griffiths_sampling_1994,StephensDonnelly2000,Palacios2019,cappello2020tajima}. In this study, we assume that individual sequences are not uniquely labeled, but instead, are identified by their sequences of 0s and 1s, or \textbf{haplotypes}. Hence, the number of tips in our perfect phylogeny is the number of unique haplotypes, and the labels at the tips correspond to the observed frequencies of the haplotypes. For the genealogy in Figure \ref{fig:data0}A, Figure \ref{fig:data0}B shows the perfect phylogeny of the data observed at its tips. The key assumption of the bijection between sequence data sets and perfect phylogenies is that if a site mutates once, then all descendants of the lineage on which the mutation occurred must also have the mutation---and no other individuals will have the mutation. That is, every unique mutation, or site, partitions the sample of haplotypes into two groups: those with the mutation and those without the mutation. Hence, we group sites that induce the same partition on the haplotypes, and we call each such group of sites a \textbf{mutation group}. In this study, we are not concerned with the mutation labels, and hence, we remove the edge labels of the perfect phylogeny (right side of Figure \ref{fig:data0}B), so that we consider only the topology of the perfect phylogeny. In dropping the edge labels, we treat a perfect phylogeny topology as a perfect phylogeny. Henceforth, a \textbf{perfect phylogeny} is a multifurcating rooted tree with $k$ leaves, representing $k$ distinct haplotypes, each labeled by a positive integer $(n_{i})_{1\leq i \leq k}$, with $\sum^{k}_{i=1}n_{i}=n$. 
We use the symbol $\Pi_{n}$ to denote the space of perfect phylogenies of size $n$ sequences, and we use $\pi \in \Pi_{n}$ to denote a perfect phylogeny with $n$ sequences. A perfect phylogeny $\pi$ is completely specified in a parenthetical notation, in which every leaf is represented by its label, every binary internal node is represented by $(\cdot,\cdot)$ and every multifurcating internal node is represented by $(\cdot,\ldots,\cdot)$. For example, the perfect phylogeny $\pi_{1}$ on the right in Figure \ref{fig:data0}B in parenthetical notation is $((2,1),2)$, indicating that there are two internal nodes, one merging leaves $(2,1)$ and one merging $(2,1)$ with $2$. The most extreme unresolved perfect phylogeny with $n$ tips---the perfect phylogeny that is compatible with all ranked tree shapes with $n$ tips---has two representations. It can be written as a star, in which the root has degree $n$ and is the only internal node, that is, $\pi=(1,1,\ldots,1)$. It can also be written as a single node $\pi=(n)$. For our purposes, with mutations discarded, the star and single-node perfect phylogenies are indistinguishable, and they will be represented as a single-node perfect phylogeny. Details of the algorithm for generating the perfect phylogeny from binary molecular data can be found in \citet{Cappello2019}, which presents a slight modification to Gusfield's algorithm \citep{Gusfield1991}. We say that a binary tree $T$ is \textbf{compatible} with a perfect phylogeny $\pi$ if the tree can be reduced to $\pi$ by collapsing internal edges of $T$. The number of tree shapes, ranked or unranked, that are compatible with a perfect phylogeny gives the cardinality of the corresponding posterior sampling tree space in statistical inference from sequence data sets. Given a perfect phylogeny $\pi \in \Pi_{n}$, we are interested in calculating the number of compatible ranked tree shapes with $n$ leaves and the number of compatible unranked tree shapes with $n$ leaves. 
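The parenthetical notation is also convenient computationally: a perfect phylogeny can be stored as nested tuples, with each leaf given by its integer frequency. A minimal sketch (the function names are ours, for illustration) recovers the sample size $n$ and the number of haplotypes $k$ from this representation:

```python
# A perfect phylogeny in parenthetical notation: a leaf is a positive
# integer (the frequency of a haplotype), and an internal node is a
# tuple of its children.
def sample_size(pi):
    """Total number n of sequences represented by the perfect phylogeny."""
    if isinstance(pi, int):
        return pi
    return sum(sample_size(child) for child in pi)

def num_leaves(pi):
    """Number k of distinct haplotypes (leaves)."""
    if isinstance(pi, int):
        return 1
    return sum(num_leaves(child) for child in pi)

pi1 = ((2, 1), 2)          # the perfect phylogeny discussed above
print(sample_size(pi1))    # 5
print(num_leaves(pi1))     # 3
```

The star perfect phylogeny $(1,1,\ldots,1)$ and the single-node phylogeny $(n)$ give the same sample size, consistent with treating them as indistinguishable.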
\subsection{Known enumerative results} \label{sec:known} In advance of our effort to count tree shapes compatible with a perfect phylogeny, we state some known enumerative results for the unconstrained spaces of ranked labeled tree shapes, unranked labeled tree shapes, ranked unlabeled tree shapes, and unranked unlabeled tree shapes \citep{steel16}. Let $L_{n}=|\mathcal{T}^{L}_{n}|$ denote the cardinality of the space of ranked labeled trees with $n$ leaves. Then \begin{equation} \label{eq:Ln} L_{n}=\prod^{n}_{i=2}\binom{i}{2}=\frac{n!(n-1)!}{2^{n-1}}. \end{equation} The product is obtained by noting that for each decreasing $i$ from $n$ to $2$, there are $\binom{i}{2}$ ways of merging two labeled branches. The sequence of values of $L_n$ begins 1, 1, 3, 18, 180, 2700, 56700. Let $X_n=|\mathcal{T}^{X}_{n}|$ denote the number of unranked labeled trees with $n$ leaves. We have \begin{equation} \label{eq:Xn} X_{n}=(2n-3)!! = \frac{(2n-2)!}{2^{n-1} (n-1)!}. \end{equation} To generate trees in $\mathcal{T}^{X}_{n}$ from trees in $\mathcal{T}^{X}_{n-1}$, a pendant edge connected to the $n$th label can be placed along each of the $2n-3$ edges of a tree with $n-1$ leaves, including an edge above the root. $X_n$ is obtained as the solution to the recursion $X_n = (2n-3)X_{n-1}$, with $X_1=1$. The sequence of values of $X_n$ begins $1, 1, 3, 15, 105, 945, 10395$. The number of ranked tree shapes with $n$ tips is the $(n-1)$-th Euler zigzag number \citep{stanley2011enumerative}. Let $R_{n}=|\mathcal{T}^{R}_{n}|$ denote the number of ranked tree shapes with $n$ leaves. We have the following recursion: \begin{align} R_1 &= 1, \, R_2= 1, \nonumber \\ R_{n+1} &= \frac{1}{2} \sum_{k=0}^{n-1} {n-1 \choose k} R_{k+1} R_{n-k}, \, n \geq 2. \label{eq:res1} \end{align} The sequence of values of $R_n$ begins 1, 1, 1, 2, 5, 16, 61. 
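The closed forms (\ref{eq:Ln}) and (\ref{eq:Xn}) and the recursion (\ref{eq:res1}) can be checked numerically; the short sketch below (ours, for illustration) reproduces the initial values quoted above.

```python
from math import comb, factorial

def L(n):
    # ranked labeled tree shapes: n!(n-1)!/2^(n-1)
    return factorial(n) * factorial(n - 1) // 2 ** (n - 1)

def X(n):
    # unranked labeled tree shapes: (2n-3)!! = (2n-2)!/(2^(n-1) (n-1)!)
    return factorial(2 * n - 2) // (2 ** (n - 1) * factorial(n - 1))

def R(n):
    # ranked (unlabeled) tree shapes via the recursion for R_{n+1}
    Rs = [1, 1]                       # R_1, R_2
    while len(Rs) < n:
        m = len(Rs)                   # next entry computed is R_{m+1}
        Rs.append(sum(comb(m - 1, k) * Rs[k] * Rs[m - 1 - k]
                      for k in range(m)) // 2)
    return Rs[n - 1]

print([L(n) for n in range(1, 8)])    # [1, 1, 3, 18, 180, 2700, 56700]
print([X(n) for n in range(1, 8)])    # [1, 1, 3, 15, 105, 945, 10395]
print([R(n) for n in range(1, 8)])    # [1, 1, 1, 2, 5, 16, 61]
```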
For $n \geq 1$, if the tree has $n+1$ tips, and hence $n$ interior nodes, then the root divides the tree into two ranked subtrees $T^{R}_1$ and $T^{R}_2$, where $T^{R}_1$ has $k$ interior nodes, $0 \leq k \leq n-1$, and $T^{R}_2$ has $n-1 - k$ interior nodes. There are ${n-1 \choose k}$ ways of interleaving the $k$ and $n-1-k$ interior nodes of $T^{R}_1$ and $T^{R}_2$, such that the relative orderings of the interior nodes of $T^{R}_1$ and $T^{R}_2$ are preserved in the interleaving. The number of possible ranked tree shapes with such a configuration is ${n-1 \choose k} R_{k+1} R_{n-k}$. Summing over the possibilities for $k$ from $0$ to $n-1$, and acknowledging that the identity of $T^{R}_1$ and $T^{R}_2$ can be interchanged, we get eq.~\ref{eq:res1}. Let $S_{n}=|\mathcal{T}_{n}|$ denote the number of unranked tree shapes with $n$ leaves. We have the following recursion: \begin{align} S_1 &= 1, \nonumber \\ S_{2n-1} &= \sum^{n-1}_{k=1}S_{k}S_{2n-1-k}, \, n \geq 2, \\ S_{2n} &= \bigg(\sum^{n-1}_{k=1}S_{k}S_{2n-k}\bigg)+\frac{1}{2}S_{n}(S_{n}+1), \, n \geq 1. \end{align} $S_{n}$ is the $n$th Wedderburn-Etherington number \citep{harding71}. The sequence begins 1, 1, 1, 2, 3, 6, 11. When the number of leaves is $2n-1$, the root divides the tree shape into two subtree shapes $T_{1}$ and $T_{2}$ with $k$ and $2n-1-k$ leaves, for $k=1,2,\ldots,n-1$. When the number of leaves is even, the root divides the tree shape into subtree shapes with $k$ and $2n-k$ leaves for $k=1,2,\ldots,n-1$ or two subtree shapes with $n$ leaves; these tree shapes are indistinguishable in $S_{n}$ cases and distinguishable in $\frac{1}{2}S_n(S_{n}-1)$ cases. \section{Enumeration for binary perfect phylogenies} To count ranked and unranked tree shapes compatible with a perfect phylogeny, we first consider binary perfect phylogenies: those perfect phylogenies for which the outdegree of any node, traversing from root to tips, is either 0 (leaves or taxa) or 2 (internal nodes). 
We then consider multifurcating perfect phylogenies in Section \ref{sec:four}. \subsection{Lattice structure of binary perfect phylogenies} The binary perfect phylogenies for a set of $n$ tips possess a structure that will assist in enumerating binary ranked and unranked trees compatible with a set of sequences. In particular, we can make the set $\Pi_{n}$ of all binary perfect phylogenies of $[n]$ into a \textbf{poset} by defining $\pi \leq \sigma$ if either $\sigma$ is the same as $\pi$, or if $\sigma$ can be obtained by sequentially collapsing pairs of pendant edges, or cherries, of $\pi$. We then say $\pi$ is a \textbf{refinement} of $\sigma$. For example, $\pi=(2,3)$ refines $\sigma=(5)$. We say that two binary perfect phylogenies in $\Pi_{n}$ are \textbf{comparable} if they are equal or if one is a refinement of the other. An example of two perfect phylogenies that are not comparable is $\pi=(2,3)$ and $\sigma=(4,1)$. Given two binary perfect phylogenies $\pi_{1}$ and $\pi_{2}$ in $\Pi_{n}$, their \textbf{meet}, denoted $\pi_{1} \wedge \pi_{2}$, is the largest perfect phylogeny that refines both $\pi_{1}$ and $\pi_{2}$. Similarly, the \textbf{join} of two binary perfect phylogenies $\pi_{1} \vee \pi_{2}$ is the smallest perfect phylogeny that is refined by both $\pi_{1}$ and $\pi_{2}$. Formal definitions of these notions appear in Definition \ref{def:binope}. Under the meet and join operations, we will see in Theorem \ref{thm:lattice} that the poset $\Pi_{n} \cup \{\emptyset\}$ is a \textbf{lattice} $\mathcal{L}_n = (\Pi_{n} \cup \{\emptyset\}, \wedge, \vee)$. As a lattice, $\mathcal{L}_n$ possesses a \textbf{Hasse diagram} with a minimal and a maximal element. The \textbf{maximal} element of $\mathcal{L}_n$ is the single node perfect phylogeny $(n)$ and the \textbf{minimal} element is $\emptyset$. Figures \ref{fig:hasse1} and \ref{fig:hasse2} show the Hasse diagrams of $\mathcal{L}_2$, $\mathcal{L}_3$, $\mathcal{L}_4$, $\mathcal{L}_5$. 
\begin{figure} \centering \includegraphics[scale=0.3]{Perfect_Phylo_Hasse_small.eps} \caption{\small{\textbf{Hasse diagrams of the lattices of binary perfect phylogenies with $n=2$, $3$, and $4$ taxa.} }} \label{fig:hasse1} \end{figure} \begin{figure} \centering \includegraphics[scale=0.3]{Perfect_Phylo_HasseE.eps} \caption{\small{\textbf{Hasse diagram of the lattice of binary perfect phylogenies with $n=5$ taxa. }}} \label{fig:hasse2} \end{figure} \begin{defn} \label{def:binope} \textbf{Binary perfect phylogeny operations}. We define the binary perfect phylogeny symmetric operations $\wedge, \vee: (\cup_{n \geq 1} \Pi_{n} \cup \{ \emptyset \}) \times (\cup_{n \geq 1} \Pi_{n} \cup \{ \emptyset \}) \rightarrow (\cup_{n \geq 1} \Pi_{n} \cup \{ \emptyset \})$, where $\Pi_{n}$ is the space of binary perfect phylogenies of $n$ leaves, as follows: \begin{enumerate} \item $\pi \wedge \emptyset =\emptyset$, for all $\pi \in \Pi_{n}$. \item $\pi \vee \emptyset = \pi$, for all $\pi \in \Pi_{n}$. \item $\pi \wedge (n) = \pi$, for all $\pi \in \Pi_{n}$. \item $\pi \vee (n) = (n)$, for all $\pi \in \Pi_{n}$. \item $\pi_1 \wedge \pi_2 = \emptyset$, for all $\pi_1 \in \Pi_{n_1}, \pi_2 \in \Pi_{n_2}$, with $n_1 \neq n_2$. \item $\pi_1 \vee \pi_2 = \emptyset$, for all $\pi_1 \in \Pi_{n_1}, \pi_2 \in \Pi_{n_2}$, with $n_1 \neq n_2$. \item Let $\pi_{1}=(n_{1},n_{2})$ and $\pi_{2}=(n_{3},n_{4})$ be two perfect phylogenies in $\Pi_{n}$ with $n_{1}+n_{2}=n_{3}+n_{4}=n$. Then \begin{equation*} \pi_{1} \vee \pi_{2}=(n_{1},n_{2}) \vee (n_{3},n_{4})=\begin{cases} (n_{1},n_{2}) & \text{ if } n_{1}=n_{3} \text{ or } n_{1}=n_{4}\\ (n) & \text{ otherwise}. 
\\ \end{cases} \end{equation*} \item For all $\pi_{1}$, $\pi_{2}$, $\pi_{3}$, $\pi_{4}$ with $(\pi_{1},\pi_{2}) \in \Pi_{n}$ and $(\pi_{3},\pi_{4}) \in \Pi_{n}$, \begin{equation*} (\pi_{1},\pi_{2}) \wedge (\pi_{3},\pi_{4})= (\pi_{1}\wedge \pi_{3}, \pi_{2} \wedge \pi_{4}) \vee (\pi_{1}\wedge \pi_{4}, \pi_{2} \wedge \pi_{3}), \end{equation*} with the convention that $(\pi,\emptyset)=\emptyset$. That is, the meet of two perfect phylogenies is the join of the two perfect phylogenies formed by merging two subtrees at the root. These four subtrees (two per newly formed perfect phylogeny) correspond to the meets of all pairs of subtrees, one from each of the original perfect phylogenies. \item For all $\pi_{1}$, $\pi_{2}$, $\pi_{3}$, $\pi_{4}$ with $(\pi_{1},\pi_{2}) \in \Pi_{n}$ and $(\pi_{3},\pi_{4}) \in \Pi_{n}$, where $\pi_{i}\in \Pi_{n_{i}}$ for $i=1,2,3,4$, \begin{eqnarray} (\pi_{1},\pi_{2}) \vee (\pi_{3},\pi_{4})=\begin{cases} (n) & \text{ if } n_{1}\neq n_{3} \text{ and } n_{1} \neq n_{4}\\ (\pi_{1}, \pi_{2} \vee \pi_{4}) & \text{ if } \pi_{1}=\pi_{3}\\ (\pi_{1}, \pi_{2} \vee \pi_{3}) & \text{ if } \pi_{1}=\pi_{4}\\ (\pi_{2}, \pi_{1} \vee \pi_{4}) & \text{ if } \pi_{2}=\pi_{3}\\ (\pi_{2}, \pi_{1} \vee \pi_{3}) & \text{ if } \pi_{2}=\pi_{4}\\ (\pi_{1}\vee \pi_{3}, \pi_{2} \vee \pi_{4}) \wedge (\pi_{1}\vee \pi_{4}, \pi_{2} \vee \pi_{3}) & \text{ otherwise}, \end{cases} \nonumber \end{eqnarray} with the convention that $(\pi,\emptyset)=\emptyset$. That is, the join of two perfect phylogenies is the meet of the two perfect phylogenies formed by merging two subtrees at the root. These four subtrees (two per newly formed perfect phylogeny) correspond to the joins of all pairs of subtrees, one from each of the original perfect phylogenies. 
In the particular case that the two original perfect phylogenies share one of the subtrees descending from the root, then the join of the two perfect phylogenies is the perfect phylogeny that merges, at the root, the shared subtree with the join of the two different subtrees, one from each of the original perfect phylogenies. In the case that no two pairs of subtrees, one from each of the original perfect phylogenies, have the same size, the join is the maximal single node perfect phylogeny $(n)$. \item For all $\pi_{1},\pi_{2}, \pi_{3} \in \Pi_{n},$ \begin{equation*} \pi_{1} \wedge (\pi_{2} \vee \pi_{3})=(\pi_{1} \wedge \pi_{2}) \vee (\pi_{1} \wedge \pi_{3}), \end{equation*} and \begin{equation*} \pi_{1} \vee (\pi_{2} \wedge \pi_{3})=(\pi_{1} \vee \pi_{2}) \wedge (\pi_{1} \vee \pi_{3}). \end{equation*} \item Let $\pi, \sigma \in \Pi_{n}$ be two perfect phylogenies that are not comparable. There exist unique $\gamma,\rho \in (\Pi_{n} \cup \{\emptyset\}) \setminus \{\pi,\sigma\} $ such that \begin{equation*}\pi \wedge \sigma = \gamma, \quad \pi \vee \gamma = \pi, \quad \text{ and }\quad \sigma \vee \gamma =\sigma, \end{equation*} and \begin{equation*}\pi \vee \sigma = \rho, \quad \pi \wedge \rho = \pi, \quad \text{ and }\quad \sigma \wedge \rho =\sigma. \end{equation*} \end{enumerate} \end{defn} Note that the meet and join operations are symmetric and that pairs $(\pi_1,\pi_2)$ are unordered; for convenience, we have expanded expressions in parts 7 and 9 of the definition that could potentially be simplified using the symmetry. We illustrate the operations in Definition \ref{def:binope} by considering a series of examples. \begin{examp} Consider $\pi_{1}=((4,2),6)$ and $\pi_{2}=((3,3),6)$ depicted in Figure \ref{fig:refinement}A. 
Their meet and join are given by: \begin{align*} ((4,2),6) \wedge ((3,3),6) &= ((4,2)\wedge (3,3),6\wedge 6) \vee ((4,2)\wedge 6, 6 \wedge (3,3))\text{ by Defn.~\ref{def:binope} (8)}\\ &= (\emptyset,6) \vee ((4,2),(3,3))\text{ by Defn.~\ref{def:binope} (3, 5, 8)}\\ &= \emptyset \vee ((4,2),(3,3)) \text{ by convention}\\ &= ((4,2),(3,3)) \text{ by Defn.~\ref{def:binope} (2)}. \\[1.5ex] ((4,2),6) \vee ((3,3),6) &= (6, (4,2)\vee (3,3)) \text{ by Defn.~\ref{def:binope} (9)}\\ &= (6,6) \text{ by Defn.~\ref{def:binope} (7).} \end{align*} \end{examp} \begin{examp} For a more complex example, consider $\pi_{1}=(((3,1),2),6)$ and $\pi_{2}=((4,2),6)$ depicted in Figure \ref{fig:refinement}B. \begin{align*} (((3,1),2),6) \wedge ((4,2),6) &= (((3,1),2)\wedge (4,2),6\wedge 6) \vee (((3,1),2)\wedge 6, 6 \wedge (4,2))\text{ by Defn.~\ref{def:binope} (8)}\\ & = (((3,1),2)\wedge(4,2),6) \vee (((3,1),2),(4,2)) \text{ by Defn.~\ref{def:binope} (3)} \\ & = ( ((3,1) \wedge 4, 2\wedge 2),6) \vee (((3,1),2),(4,2)) \text{ by Defn.~\ref{def:binope} (2, 5, 8)} \\ & = (((3,1),2),6) \vee (((3,1),2),(4,2)) \text{ by Defn.~\ref{def:binope} (3)} \\ &= (((3,1),2),6) \text{ by Defn.~\ref{def:binope} (4, 9)}. \\[1.5ex] (((3,1),2),6) \vee ((4,2),6) &= (((3,1),2)\vee (4,2),6) \text{ by Defn.~\ref{def:binope} (9)} \\ &= ((4,2),6) \text{ by Defn.~\ref{def:binope} (4, 9)}. \end{align*} \end{examp} \begin{figure} \begin{center} \includegraphics[scale=0.18]{isometry.eps} \caption{\small{\textbf{Examples of perfect phylogeny operations.} \textbf{(A)} For perfect phylogenies $((4,2),6)$ and $((3,3),6)$, their meet is $((4,2),(3,3))$, and their join is $(6,6)$. 
\textbf{(B)} For perfect phylogenies $(((3,1),2),6)$ and $((4,2),6)$, their meet is $(((3,1),2),6)$ and their join is $((4,2),6)$.}} \label{fig:refinement} \end{center} \end{figure} To make use of the operations $\wedge$ and $\vee$ for counting binary ranked and unranked trees compatible with a perfect phylogeny, we need a theorem that shows that the two operations $\wedge$ and $\vee$ induce the same order. That is, we will show that $(\Pi_{n} \cup \{\emptyset\}, \wedge,\vee)$ is a lattice. A \textit{lattice} \citep{nation1998notes} is an algebra $\mathcal{L}=(L,\wedge,\vee)$ satisfying, for all $x,y,z \in L$, \begin{enumerate} \item $x \wedge x =x$ and $x \vee x=x$, \item $x \wedge y =y \wedge x$ and $x \vee y=y \vee x$, \item $x \wedge (y \wedge z) = (x \wedge y) \wedge z$ and $x \vee (y \vee z)=(x\vee y) \vee z$, \item $x \wedge (x \vee y)=x$ and $ x\vee (x \wedge y)=x$. \end{enumerate} In the Appendix, we verify these conditions for $(\Pi_{n} \cup \{\emptyset\},\wedge,\vee)$, giving the following theorem. \begin{theorem}\label{thm:lattice} $(\Pi_{n} \cup \{\emptyset\},\wedge,\vee)$ is a lattice. \end{theorem} \subsection{Unranked unlabeled tree shapes compatible with a binary perfect phylogeny} \label{sec:unranked} With the lattice structure of the binary perfect phylogenies established, we are now equipped to calculate the number of compatible unranked unlabeled tree shapes with $n$ leaves. Notice that an unranked unlabeled tree shape can be transformed into a perfect phylogeny with the same number of tips by assigning the count 1 to all leaves. We use $\mathcal{P}(T_{n})$ to denote the perfect phylogeny with $n$ tips that corresponds to the unranked unlabeled tree shape $T_{n}$. 
\begin{defn} \label{def:treeshape_comp} \textbf{Unranked unlabeled tree shape $T_{n}$ compatible with a perfect phylogeny $\pi \in \Pi_{n}$.} An unranked unlabeled tree shape with $n$ leaves, $T_{n}$, is compatible with a perfect phylogeny $\pi \in \Pi_{n}$ if (1) a one-to-one correspondence exists between the $k$ leaves of $\pi$ with leaf counts $n_{1},n_{2},\ldots,n_{k}$ and $k$ disjoint subtrees of $T_{n}$ containing $n_{1},n_{2},\ldots,n_{k}$ leaves, respectively; and (2) $\mathcal{P}(T_{n})\leq \pi$, that is, $\mathcal{P}(T_{n})$ is a refinement of $\pi$. \end{defn} We use the symbol $\mathcal{G}_{c}(\pi)=\{T_{n}:T_{n} \rightsquigarrow \pi\}$ to denote the set of unranked unlabeled tree shapes compatible with a perfect phylogeny $\pi \in \Pi_{n}$, writing $T_{n} \rightsquigarrow \pi$ to indicate that $T_{n}$ is compatible with $\pi$. For a perfect phylogeny $\pi$ consisting of a single leaf with leaf count $n$, the number of compatible unranked unlabeled tree shapes is simply the number of unranked unlabeled tree shapes with $n$ leaves, or $|\mathcal{G}_{c}(\pi)| = S_n$. Figure \ref{fig:compat} shows an example of an unranked unlabeled tree shape compatible with a perfect phylogeny of sample size 7. \begin{figure} \centering \includegraphics[scale=0.60]{compatible.eps} \caption{\small{\textbf{Example of a tree shape compatible with a perfect phylogeny.} \textbf{(A)} A perfect phylogeny. \textbf{(B)} An unranked unlabeled tree shape that is compatible with the perfect phylogeny in (A). The numbers indicate the one-to-one correspondence described in Definition \ref{def:treeshape_comp}.}} \label{fig:compat} \end{figure} \begin{prop} For $n_1,n_2 \geq 1$, the number of unranked unlabeled tree shapes compatible with a cherry perfect phylogeny $(n_{1},n_{2}) \in \Pi_{n}$ is \begin{eqnarray} |\mathcal{G}_{c}((n_{1},n_{2}))|=\begin{cases} S_{n_{1}}S_{n_{2}} &\text{if } n_{1} \neq n_{2}\\ \frac{1}{2}S_{n_{1}}(S_{n_{1}}+1) & \text{if } n_{1} = n_{2}. 
\end{cases} \end{eqnarray} \label{prop6} \end{prop} \begin{proof} By Definition \ref{def:treeshape_comp}, an unranked unlabeled tree shape is compatible with the perfect phylogeny $\pi = (n_1,n_2)$ if it possesses two subtrees, one with $n_1$ leaf descendants and another with $n_2$ leaf descendants. Decomposing an unranked unlabeled tree shape at its root, the number of shapes with this property is $S_{n_{1}}S_{n_{2}}$ for $n_1 \neq n_2$ and $\frac{1}{2}S_{n_{1}}(S_{n_{1}}+1)$ for $n_1=n_2$. \end{proof} \begin{prop} For $n_1,n_2 \geq 1$ and $\pi_{1} \in \Pi_{n_{1}}$, $\pi_{2} \in \Pi_{n_{2}}$, the number of unranked unlabeled tree shapes compatible with a binary perfect phylogeny $\pi=(\pi_{1},\pi_{2}) \in \Pi_{n}$ is \begin{eqnarray} |\mathcal{G}_{c}((\pi_{1},\pi_{2}))|= \begin{cases} |\mathcal{G}_{c}(\pi_{1})| \, |\mathcal{G}_{c}(\pi_{2})|-\frac{1}{2}|\mathcal{G}_{c}(\pi_{1} \wedge \pi_{2})| \, (|\mathcal{G}_{c}(\pi_{1} \wedge \pi_{2})|-1) & \text{if } \pi_{1} \wedge \pi_{2} \neq \emptyset\\ |\mathcal{G}_{c}(\pi_{1})| \, |\mathcal{G}_{c}(\pi_{2})| & \text{if } \pi_{1} \wedge \pi_{2}= \emptyset. \\ \end{cases} \end{eqnarray} \label{prop7} \end{prop} \begin{proof} If $\pi_{1} \wedge \pi_{2}=\emptyset$, then no tree shapes are compatible with both $\pi_{1}$ and $\pi_{2}$. Hence, the number of tree shapes compatible with $(\pi_{1},\pi_{2})$ is simply the product of the number of tree shapes compatible with $\pi_{1}$ and the number of tree shapes compatible with $\pi_{2}$. If $\pi_{1} \wedge \pi_{2} \neq \emptyset$, then certain tree shapes can be compatible with both $\pi_{1}$ and $\pi_{2}$, i.e., compatible with $\pi_{1} \wedge \pi_{2}$. We sum four quantities. (1) Consider the set of tree shapes compatible with both perfect phylogenies $\pi_{1}$ and $\pi_{2}$. 
The two root subtree shapes can either be identical, in $|\mathcal{G}_{c}(\pi_{1} \wedge\pi_{2})|$ ways, or distinct, in $\frac{1}{2}(|\mathcal{G}_{c}(\pi_{1} \wedge\pi_{2})|^{2}-|\mathcal{G}_{c}(\pi_{1} \wedge\pi_{2})|)$ ways, resulting in $\frac{1}{2}|\mathcal{G}_{c}(\pi_{1} \wedge\pi_{2})|(|\mathcal{G}_{c}(\pi_{1} \wedge\pi_{2})|+1)$ tree shapes. (2) One root subtree shape can be compatible with both $\pi_{1}$ and $\pi_{2}$ while the other is compatible with $\pi_{1}$ but not $\pi_{2}$; there are $|\mathcal{G}_{c}(\pi_{1} \wedge\pi_{2})| \, (|\mathcal{G}_{c}(\pi_{1})|-|\mathcal{G}_{c}(\pi_{1} \wedge\pi_{2})|)$ such tree shapes. (3) Similarly, one root subtree shape can be compatible with both $\pi_{1}$ and $\pi_{2}$ while the other is compatible with $\pi_{2}$ but not $\pi_{1}$, giving $|\mathcal{G}_{c}(\pi_{1} \wedge\pi_{2})| \, (|\mathcal{G}_{c}(\pi_{2})|-|\mathcal{G}_{c}(\pi_{1} \wedge\pi_{2})|)$ tree shapes. (4) Finally, one root subtree shape can be compatible with $\pi_{1}$ but not $\pi_{2}$ while the other is compatible with $\pi_{2}$ but not $\pi_{1}$, giving $(|\mathcal{G}_{c}(\pi_{1})|- |\mathcal{G}_{c}(\pi_{1} \wedge\pi_{2})|) \, (|\mathcal{G}_{c}(\pi_{2})|-|\mathcal{G}_{c}(\pi_{1} \wedge\pi_{2})|)$ tree shapes. The four classes are disjoint and exhaust the possibilities; summing the four quantities gives the result. \end{proof} Propositions \ref{prop6} and \ref{prop7} provide a recursive formula for calculating the number of tree shapes compatible with a binary perfect phylogeny. For example, examining Figure \ref{fig:refinement}A, the number of tree shapes compatible with $(4,2)$ is $S_{4}S_{2}=2$, and the number of tree shapes compatible with $((4,2),6)$ is $|\mathcal{G}_{c}(4,2)| \, |\mathcal{G}_{c}(6)|- \frac{1}{2}|\mathcal{G}_{c}(4,2)| \, (|\mathcal{G}_{c}(4,2)|-1) =(2) (6)-\frac{1}{2}(2)(1)=11.$ Table 1 shows the number of tree shapes compatible with certain perfect phylogenies of sample size $10$. 
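Propositions \ref{prop6} and \ref{prop7} can be run directly as a recursion on perfect phylogenies written as nested tuples. The sketch below is our own illustration: it implements the meet only in the cases needed for the worked example (a size mismatch, giving $\emptyset$, and a meet with a single-leaf perfect phylogeny, per parts 5 and 3 of Definition \ref{def:binope}); the general meet of part 8 is omitted.

```python
from functools import lru_cache


@lru_cache(maxsize=None)
def S(n):
    """Wedderburn-Etherington numbers: unranked tree shapes with n leaves."""
    if n == 1:
        return 1
    total = sum(S(k) * S(n - k) for k in range(1, (n + 1) // 2))
    if n % 2 == 0:
        total += S(n // 2) * (S(n // 2) + 1) // 2
    return total


def size(pi):
    return pi if isinstance(pi, int) else sum(size(c) for c in pi)


def meet(a, b):
    """Partial meet: only the leaf cases (parts 3 and 5 of the definition)."""
    if size(a) != size(b):
        return None                      # empty meet: sizes differ
    if isinstance(b, int):
        return a                         # pi meet (n) = pi
    if isinstance(a, int):
        return b
    raise NotImplementedError("general meet (part 8) not implemented here")


def G(pi):
    """Unranked tree shapes compatible with pi (Props. 6 and 7)."""
    if isinstance(pi, int):
        return S(pi)
    a, b = pi
    ga, gb = G(a), G(b)
    m = meet(a, b)
    if m is None:
        return ga * gb
    gm = G(m)
    return ga * gb - gm * (gm - 1) // 2


print(G((4, 2)))       # 2
print(G(((4, 2), 6)))  # 11, matching the worked example above
```

Note that the cherry cases of Proposition \ref{prop6} emerge automatically: for $(n_{1},n_{1})$, the meet of the two single-leaf subtrees is again a single leaf, and the formula reduces to $\frac{1}{2}S_{n_{1}}(S_{n_{1}}+1)$.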
\subsection{Ranked unlabeled tree shapes compatible with a binary perfect phylogeny} \label{sec:ranked} Next, for a binary perfect phylogeny, we compute the number of compatible ranked unlabeled tree shapes with $n$ leaves. \begin{defn} \label{def:ranked_comp} \textbf{Ranked unlabeled tree shape $T^{R}_{n}$ compatible with a perfect phylogeny $\pi \in \Pi_{n}$}. A ranked unlabeled tree shape with $n$ leaves, $T^{R}_{n}$, is compatible with a perfect phylogeny $\pi \in \Pi_{n}$ if the unranked unlabeled tree shape $T_{n}$ obtained by removing the ranking from ${T}^{R}_{n}$ is compatible with $\pi$. \end{defn} \begin{prop} For $n_1, n_2 \geq 1$, the number of ranked unlabeled tree shapes compatible with a cherry perfect phylogeny $(n_{1},n_{2}) \in \Pi_{n}$ is \begin{eqnarray} |\mathcal{G}^{T}_{c}((n_{1},n_{2}))|=\begin{cases} \binom{n_{1}+n_{2}-2}{n_{1}-1}R_{n_{1}}R_{n_{2}} &\text{if } n_{1} \neq n_{2}\\ \frac{1}{2}\binom{2n_{1}-2}{n_{1}-1}R^{2}_{n_{1}} & \text{if } n_{1} = n_{2}. \end{cases} \end{eqnarray} \label{prop9} \end{prop} \begin{proof} By Definition \ref{def:ranked_comp}, a ranked unlabeled tree shape $T^R$ is compatible with the perfect phylogeny $\pi = (n_1,n_2)$ if the associated unranked unlabeled tree shape $T$ obtained by removing the ranking of $T^R$ is compatible with $\pi$. By Definition \ref{def:treeshape_comp}, the unranked unlabeled tree shape $T$ is compatible with the perfect phylogeny $\pi = (n_1,n_2)$ if it possesses two subtrees, one with $n_1$ leaf descendants and another with $n_2$ leaf descendants. We decompose a ranked unlabeled tree at its root into subtrees of size $n_1$ and $n_2$. If $n_{1} \neq n_{2}$, then the $n_{1}-1$ interior nodes of the subtree with $n_{1}$ leaves and the $n_{2}-1$ interior nodes of the subtree with $n_{2}$ leaves can be interleaved in $\binom{n_{1}+n_{2}-2}{n_{1}-1}$ ways. 
If $n_{1}=n_{2}$, then the two ranked subtrees can be the same in $R_{n_{1}}$ ways, each with $\frac{1}{2}\binom{2n_{1}-2}{n_{1}-1}$ ways of interleaving the two ranked unlabeled subtrees; the two ranked subtrees can differ in $\frac{1}{2}(R^{2}_{n_{1}}-R_{n_{1}})$ ways, each with $\binom{2n_{1}-2}{n_{1}-1}$ ways of interleaving the subtrees. \end{proof} \begin{prop} For $n_1, n_2 \geq 1$ and $\pi_1 \in \Pi_{n_1}, \pi_2 \in \Pi_{n_2}$, the number of ranked unlabeled tree shapes compatible with a binary perfect phylogeny $\pi=(\pi_{1},\pi_{2}) \in \Pi_{n}$ is \begin{equation} |\mathcal{G}^{T}_{c}((\pi_1, \pi_2))|=\begin{cases} \binom{2 n_{1}-2}{n_{1}-1}(|\mathcal{G}^{T}_{c}(\pi_{1})| \, |\mathcal{G}^{T}_{c}(\pi_{2})| -\frac{1}{2}|\mathcal{G}^{T}_{c}(\pi_{1} \wedge \pi_{2})|^{2}) & \text{ if } \pi_{1} \wedge \pi_{2} \neq \emptyset\\ \binom{n_{1}+n_{2}-2}{n_{1}-1}|\mathcal{G}^{T}_{c}(\pi_{1})| \, |\mathcal{G}^{T}_{c}(\pi_{2})| & \text{ if } \pi_{1} \wedge \pi_{2}= \emptyset. \end{cases} \end{equation} \label{prop10} \end{prop} \begin{proof} If $\pi_{1} \wedge \pi_{2} = \emptyset$, then the number of ranked tree shapes compatible with $(\pi_{1},\pi_{2})$ is simply the product of the number of ranked tree shapes compatible with $\pi_{1}$, the number of ranked tree shapes compatible with $\pi_{2}$, and the number of ways of interleaving their rankings. If $\pi_{1} \wedge \pi_{2} \neq \emptyset$, then certain ranked tree shapes can be compatible with both $\pi_{1}$ and $\pi_{2}$, i.e., compatible with $\pi_{1}\wedge \pi_{2}$. We classify the compatible ranked tree shapes by whether each of the two root subtree shapes is compatible with both $\pi_{1}$ and $\pi_{2}$, or with only one of them. 
The cardinalities in these classes are $\frac{1}{2}|\mathcal{G}^{T}_{c}(\pi_{1} \wedge \pi_{2})|^{2}$ when both root subtree shapes are compatible with both perfect phylogenies, $|\mathcal{G}^{T}_{c}(\pi_{1} \wedge \pi_{2})| \, (|\mathcal{G}^{T}_{c}(\pi_{2})|-|\mathcal{G}^{T}_{c}(\pi_{1} \wedge \pi_{2})|)+|\mathcal{G}^{T}_{c}(\pi_{1} \wedge \pi_{2})|(|\mathcal{G}^{T}_{c}(\pi_{1})|-|\mathcal{G}^{T}_{c}(\pi_{1} \wedge \pi_{2})|)$ when exactly one of them is, and $(|\mathcal{G}^{T}_{c}(\pi_{1})|-|\mathcal{G}^{T}_{c}(\pi_{1} \wedge \pi_{2})|)(|\mathcal{G}^{T}_{c}(\pi_{2})|-|\mathcal{G}^{T}_{c}(\pi_{1} \wedge \pi_{2})|)$ when neither is; all are multiplied by the number $\binom{2n_{1}-2}{n_{1}-1}$ of possible interleavings of the rankings, and summing gives the result. \end{proof} Propositions \ref{prop9} and \ref{prop10} provide a recursive formula for calculating the number of ranked tree shapes compatible with a binary perfect phylogeny. For Figure \ref{fig:refinement}A, the number of ranked tree shapes compatible with $(4,2)$ is $\binom{4}{3}R_{4}R_{2}=(4)(2)(1)=8$, and the number of ranked tree shapes compatible with $((4,2),6)$ is $\binom{10}{5} (|\mathcal{G}^{T}_{c}(4,2)| \, |\mathcal{G}^{T}_{c}(6)|- \frac{1}{2}|\mathcal{G}^{T}_{c}(4,2)|^2) =\binom{10}{5}[(8)(16)-\frac{1}{2}(8)^{2}]=24,192$. Table 1 shows the number of ranked unlabeled tree shapes compatible with some of the perfect phylogenies of sample size $10$. We can observe that these numbers exceed corresponding numbers of unranked unlabeled tree shapes compatible with the perfect phylogenies, just as the numbers of ranked unlabeled tree shapes exceed the numbers of unranked unlabeled tree shapes (Section \ref{sec:known}). For the ranked unlabeled tree shapes compatible with a binary perfect phylogeny, we can examine the asymptotic growth of the number of compatible ranked unlabeled tree shapes in particular families of binary perfect phylogenies. For a fixed integer value $x \geq 1$, consider the family of binary perfect phylogenies $B_x(n)=(x,n-x)$ as $n$ increases. These are cherry phylogenies with labels $x$ and $n-x$ at their two leaves. 
Let $b_x(n)$ be the number of ranked unlabeled tree shapes compatible with $B_x(n)$. Among the integer sequences $b_1(n)$, $b_2(n)$, $b_3(n)$, $\ldots$, the next proposition shows that $b_2(n)$ has the fastest asymptotic growth. In other words, as $n$ grows large, the value of $x$ for which the number of ranked unlabeled tree shapes compatible with the perfect phylogeny $B_x(n)$ is asymptotically largest is $x=2$. \begin{prop} \label{prop:asymptotic} Among the integer sequences $b_1(n)$, $b_2(n)$, $b_3(n)$, $\ldots$, the sequence $b_2(n)$ has the fastest asymptotic growth. \end{prop} \begin{proof} \noindent For a fixed integer value $x \geq 0$, let $\beta_x = (x+1,n-x+1)$ be a binary perfect phylogeny with two leaves, labeled by $x+1$ (say to the left of the root) and $n-x+1$ (to the right of the root). The set of ranked unlabeled tree shapes compatible with $\beta_x$ corresponds to the set of ranked unlabeled tree shapes with $n+1$ internal nodes ($n+2$ leaves), $x$ internal nodes for the left root subtree, and $n-x$ internal nodes for the right root subtree. We consider an increasing sequence of values of $n$. Supposing $n > 2x$ so that the root subtrees of $\beta_x$ cannot have the same sample size, we apply Proposition \ref{prop10}, finding that the number of ranked unlabeled tree shapes compatible with $\beta_x$ is \begin{equation}\label{pippo} {{n}\choose{x}} e_x e_{n-x}, \end{equation} where $e_i$ is the number of ranked unlabeled tree shapes with $i$ internal nodes. Following eq.~\ref{eq:res1}, the integer $e_i$ is the $i$th Euler number, $e_i=R_{i+1}$. The exponential generating function of the sequence $(e_i)$ is \citep{brent2013fast} \begin{equation}\label{pino} \sum_{i=0}^{\infty} \frac{e_i z^i}{i!} = \sec(z) + \tan(z). 
\end{equation} We can write the ratio $q_i=\frac{e_i}{i!}$ as (\citealp[p.~269]{flajolet2009analytic}; \citealp{brent2013fast}) \begin{equation}\label{kio} q_i = \left\{ \begin{array}{l l} 2 \left( \frac{2}{\pi} \right)^{i+1} \sum_{k=0}^{\infty}\frac{(-1)^{k}}{(2k+1)^{i+1}} , & \text{if } i \text{ is even} \\ 2 \left[ \left( \frac{2}{\pi} \right)^{i+1} - \left( \frac{1}{\pi} \right)^{i+1} \right] \sum_{k=1}^{\infty} \frac{1}{k^{i+1}} , & \text{if } i \text{ is odd}. \\ \end{array} \right. \end{equation} As $i$ becomes large, by applying singularity analysis to eq.~\ref{pino}, or by computing directly from eq.~\ref{kio}, we have the asymptotic relation \begin{equation}\label{cocco} q_i \sim 2 \left( \frac{2}{\pi} \right)^{i+1}. \end{equation} With $q_x = e_x/x!$, we rewrite eq.~\ref{pippo} as $n! \, q_x q_{n-x}$. Letting $n \rightarrow \infty$ for a fixed $x$, we can use eq.~\ref{kio} to rewrite $q_x$, and because $x$ is constant as $n$ grows, we can use eq.~\ref{cocco} for the asymptotic value of $q_{n-x}$. Hence, for increasing values of $n$, the number of ranked tree shapes compatible with the perfect phylogeny $\beta_x$ behaves asymptotically like the product of $n!$ and \begin{equation}\label{asino} q_x q_{n-x} \sim 4 \left( \frac{2}{\pi} \right)^{n+2} c_x, \end{equation} where \begin{equation}\label{casa} c_x = \left\{ \begin{array}{l l} \sum_{k=0}^{\infty}\frac{(-1)^{k}}{(2k+1)^{x+1}} , & \text{if } x \text{ is even} \\ \left( 1 - \frac{1}{2^{x+1}} \right) \sum_{k=1}^{\infty} \frac{1}{k^{x+1}} , & \text{if } x \text{ is odd}. \\ \end{array} \right. \end{equation} Note that $\zeta(s) = \sum_{k=1}^{\infty} \frac{1}{k^s}$ is the Riemann zeta function. If $x$ is even, then $$c_x = 1 + \left(- \frac{1}{3^{x+1}} + \frac{1}{5^{x+1}} \right) + \left(- \frac{1}{7^{x+1}} + \frac{1}{9^{x+1}} \right) + \cdots \leq 1.$$ Among odd values of $x$, we have $c_1= \frac{3}{4} \, \zeta(2) = \pi^2/8 \approx 1.2337$ for $x=1$. 
For odd $x\geq 3$, we have $x+1 \geq 4$, so $$c_x < \zeta(x+1) \leq \zeta(4) \approx 1.0823 < c_1.$$ Hence $c_1$, which exceeds $1$, is larger than $c_x$ both for even $x$ and for odd $x \geq 3$. Because $c_x$ has its maximum at $x=1$, from eq.~\ref{asino}, we conclude that the product $q_x q_{n-x}$ grows asymptotically fastest for $x=1$. In particular, as $n \rightarrow \infty$, the value of $x$ for which the binary perfect phylogeny $\beta_x$ has the largest number of compatible ranked unlabeled tree shapes is $x=1$---that is, when $\beta_x = \beta_1=(2,n)$. \end{proof} In Table 1, we can observe an example of Proposition \ref{prop:asymptotic}. The value of $b_2(10)$, or 2176, exceeds the values of $b_x(10)$ for all other values of $x$ (with the trivial exception that $b_2(10)=b_8(10)$). The asymptotic approximation from eq.~\ref{asino} gives \begin{equation*} b_2(n) \sim 2 \bigg(\frac{2}{\pi}\bigg)^{n-2} (n-2)!, \end{equation*} which, for $n=10$, yields $b_2(10) \approx 2175.66$. \subsection{Ranked labeled tree shapes compatible with a labeled binary perfect phylogeny} \label{sec:labeled} Propositions \ref{prop6}, \ref{prop7}, \ref{prop9} and \ref{prop10} provide recursive formulas for enumerating unranked unlabeled tree shapes and ranked unlabeled tree shapes compatible with a binary perfect phylogeny. In these cases, a perfect phylogeny representation does not use individual sequence labels; the labels of the tips of the perfect phylogeny are simply counts of numbers of sequences. We now consider \textbf{labeled perfect phylogenies} that partition the set of labeled individual sequences. We still use the parenthetical notation described in Section \ref{sec:data} to denote a labeled perfect phylogeny, for example, $\pi=(2,3)$; however, it must be understood that this labeled perfect phylogeny partitions the sampled sequences into two different sets of labeled sequences. Consider, for example, the sets $\{x_{1},x_{2}\}$ and $\{x_{3},x_{4},x_{5}\}$ in the perfect phylogeny of Figure \ref{fig:labeled_data0}B. 
We are now interested in calculating the number of ranked labeled tree shapes compatible with a labeled binary perfect phylogeny. Figure \ref{fig:labeled_data0}C shows all the ranked labeled tree shapes compatible with the labeled perfect phylogeny. For ranked labeled tree shapes, the enumeration follows a simple recursive expression. \begin{defn} \label{def:labeled_comp} \textbf{Ranked labeled tree shape $T^{L}_{n}$ compatible with a labeled perfect phylogeny $\pi \in \Pi^{L}_{n}$}. A ranked labeled tree shape with $n$ leaves, $T^{L}_{n}$, is compatible with a perfect phylogeny $\pi \in \Pi^{L}_n$ if the unranked unlabeled tree shape $T_{n}$ obtained by removing the ranks and the labels from ${T}^{L}_{n}$ is compatible with $\pi$ and the one-to-one correspondence between the $k$ leaves of $\pi$ and the $k$ disjoint subtrees of $T^{L}_{n}$ corresponds to the same partition of the individual sequences. \end{defn} \begin{prop} For $n_1, n_2 \geq 1$ and $\pi_{1}\in \Pi_{n_{1}}^L, \pi_{2}\in \Pi_{n_{2}}^L$, the number of ranked labeled tree shapes compatible with a labeled binary perfect phylogeny $\pi=(\pi_{1},\pi_{2})$ is \begin{equation} |\mathcal{G}^{L}_{c}(\pi)|= \binom{n_{1}+n_{2}-2}{n_{1}-1}|\mathcal{G}^{L}_{c}(\pi_{1})| \, |\mathcal{G}^{L}_{c}(\pi_{2})|. \end{equation} \label{prop13} \end{prop} \begin{proof} We can count the number of ranked labeled tree shapes by dividing $\pi$ at the root into two subtrees, one with $n_1$ leaves and perfect phylogeny $\pi_1$, and the other with $n_2$ leaves and perfect phylogeny $\pi_2$, both partitioning the sampled sequences. The number of such trees is the product of the numbers of ranked labeled trees for the two subtrees and the number of ways of interleaving the internal nodes of the two subtrees. In this case, the two perfect phylogenies $\pi_{1}$ and $\pi_{2}$ can never be identical because they correspond to different sets of sequences. 
\end{proof} \begin{figure} \begin{center} \includegraphics[scale=0.5]{Kingman_model_simple.eps} \end{center} \caption{\small{\textbf{Coalescent and infinitely-many-sites generative model of binary molecular data.} \textbf{(A)} A genealogy of 5 individuals, with 2 superimposed mutations depicted as gray squares. The root is labeled by the ancestral type $00$, and the leaves are labeled by the genetic type at each of two mutated sites. The first two leaves from left to right are labeled $01$ because one mutation occurs in their path to the root. The third, fourth and fifth individuals have one mutation in their path to the root and are labeled $10$. The order and label of the mutations are unimportant; however, individual labels $x_{1},x_{2},x_{3},x_{4},x_{5}$ are important. For ease of exposition, we label the mutations a, b. The first site corresponds to mutation a, and the second to b. \textbf{(B)} Left, a labeled perfect phylogeny representation of the observed data at the tips of (A). Data consist of 2 unique haplotypes $01$ and $10$, with frequencies 2 and 3, respectively. The corresponding frequencies are the labels of the tips of the perfect phylogeny; however, it is understood that the two leaves correspond to $\{x_{1},x_{2}\}$ and $\{x_{3},x_{4},x_{5}\}$, respectively. Right, perfect phylogeny topology obtained by removing the edge labels of the perfect phylogeny. \textbf{(C)} The nine ranked labeled tree shapes compatible with the labeled perfect phylogeny topology in (B). Note that in (C), if we ignore the branching order and drop the internal node labels, in each row, the three trees are equivalent---so that each row corresponds to one of the three {\it unranked} labeled tree shapes compatible with the labeled perfect phylogeny topology in (B).}} \label{fig:labeled_data0} \end{figure} Counts for the number of ranked labeled tree shapes for some of the perfect phylogenies of 10 samples (with an arbitrary labeling) appear in Table 1. 
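Proposition \ref{prop13} likewise yields a short recursion. In the sketch below (our illustration, with the disjoint leaf label sets abstracted to their sizes, since only sizes enter the count), the base case for a single leaf carrying $m$ labeled sequences is $L_m$:

```python
from math import comb, factorial


def L(n):
    """Ranked labeled trees with n leaves: n!(n-1)!/2^(n-1)."""
    return factorial(n) * factorial(n - 1) // 2 ** (n - 1)


def size(pi):
    return pi if isinstance(pi, int) else sum(size(c) for c in pi)


def GL(pi):
    """Ranked labeled tree shapes compatible with a labeled
    binary perfect phylogeny (Prop. 13)."""
    if isinstance(pi, int):
        return L(pi)
    a, b = pi
    n1, n2 = size(a), size(b)
    return comb(n1 + n2 - 2, n1 - 1) * GL(a) * GL(b)


print(GL((2, 3)))       # 9, the nine ranked labeled tree shapes of the
                        # (2,3) labeled perfect phylogeny example
print(GL(((4, 2), 6)))  # 48988800
```

Because the label sets at distinct leaves are disjoint, no meet correction is needed here, in contrast with the unlabeled recursions.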
Given a perfect phylogeny in the table, we can observe that the number of ranked labeled tree shapes far exceeds the number of ranked unlabeled tree shapes. Continuing with ((4,2),6), the number of ranked labeled tree shapes compatible with this (arbitrarily labeled) perfect phylogeny is ${10 \choose 5} |\mathcal{G}_c^L((4,2))| \, |\mathcal{G}_c^L((6))| = {10 \choose 5} {4 \choose 3} \, |\mathcal{G}_c^L((4))|\, |\mathcal{G}_c^L((2))| \,|\mathcal{G}_c^L((6))| = {10 \choose 5} {4 \choose 3} L_4 L_2 L_6 = 252 \times 4 \times 18 \times 1 \times 2700=48,988,800$. We can obtain a result analogous to Proposition \ref{prop:asymptotic}; we characterize, for binary labeled perfect phylogenies $B_x(n)=(x,n-x)$, the one compatible with the largest number of ranked labeled tree shapes. Let $b_x'(n)$ denote the number of ranked labeled tree shapes compatible with $B_x(n)$. \begin{prop} \label{prop:max} Fix $n \geq 2$. Among the values $b_1'(n), b_2'(n), \ldots, b_{\lfloor \frac{n}{2} \rfloor}'(n)$, the largest is $b_1'(n)$. \end{prop} \begin{proof} Applying Proposition \ref{prop13}, we have $b_x'(n)={n-2 \choose x-1} \, L_x \, L_{n-x}$. Simplifying with eq.~\ref{eq:Ln}, we obtain $b_x'(n) = [n! \, (n-2)! / {2^{n-2}}]{n \choose x}^{-1}$. Because the binomial coefficients ${n \choose x}$ increase monotonically from $x=1$ to $x=\lfloor \frac{n}{2} \rfloor$, $b_x'(n)$ decreases monotonically from $x=1$ to $x=\lfloor \frac{n}{2} \rfloor$. \end{proof} An example of Proposition \ref{prop:max} is visible in Table 1, in which $b_1'(10)=57,153,600$ exceeds $b_2'(10)$, $b_3'(10)$, $b_4'(10)$, and $b_5'(10)$. \subsection{Unranked labeled tree shapes compatible with a labeled binary perfect phylogeny} \label{sec:unrankedlabeled} Continuing with the labeled perfect phylogenies from Section \ref{sec:labeled}, we now count the unranked labeled tree shapes compatible with a labeled binary perfect phylogeny.
Consider $\{x_{1},x_{2}\}$ and $\{x_{3},x_{4},x_{5}\}$ in the perfect phylogeny of Figure \ref{fig:labeled_data0}B. We calculate the number of unranked labeled tree shapes compatible with a labeled binary perfect phylogeny. Each row of Figure \ref{fig:labeled_data0}C corresponds to one of the unranked labeled tree shapes compatible with the labeled perfect phylogeny. \begin{defn} \label{def:unrankedlabeled_comp} \textbf{Unranked labeled tree shape $T^{X}_{n}$ compatible with a labeled perfect phylogeny $\pi \in \Pi^{L}_{n}$}. An unranked labeled tree shape with $n$ leaves, $T^{X}_{n}$, is compatible with a perfect phylogeny $\pi \in \Pi^{L}_n$ if the unranked unlabeled tree shape $T_{n}$ obtained by removing the labels from ${T}^{X}_{n}$ is compatible with $\pi$ and the one-to-one correspondence between the $k$ leaves of $\pi$ and the $k$ disjoint subtrees of $T^{X}_{n}$ corresponds to the same partition of the individual sequences. \end{defn} \begin{prop} For $n_1, n_2 \geq 1$ and $\pi_{1}\in \Pi_{n_{1}}^L, \pi_{2}\in \Pi_{n_{2}}^L$, the number of unranked labeled tree shapes compatible with a labeled binary perfect phylogeny $\pi=(\pi_{1},\pi_{2})$ is \begin{equation} |\mathcal{G}^{X}_{c}(\pi)|= |\mathcal{G}^{X}_{c}(\pi_{1})| \, |\mathcal{G}^{X}_{c}(\pi_{2})|. \end{equation} \label{prop16} \end{prop} \begin{proof} We divide $\pi$ at the root into two subtrees, one with $n_1$ leaves and perfect phylogeny $\pi_1$, and the other with $n_2$ leaves and perfect phylogeny $\pi_2$. The subtrees must partition the sampled sequences in the same way as $\pi$. The number of such trees is simply the product of the numbers of unranked labeled trees for the two subtrees. As in Proposition \ref{prop13}, perfect phylogenies $\pi_{1}$ and $\pi_{2}$ are not identical because they correspond to different sets of sequences; with the ranking dropped, unlike in Proposition \ref{prop13}, we need not consider the number of ways of interleaving the internal nodes of the two subtrees.
\end{proof} For some of the perfect phylogenies of 10 samples (with an arbitrary labeling), counts for the number of unranked labeled tree shapes appear in Table 1. The number of unranked labeled tree shapes far exceeds the number of unranked unlabeled tree shapes, and it generally exceeds the number of ranked unlabeled tree shapes. For the example ((4,2),6), the number of unranked labeled tree shapes compatible with this (arbitrarily labeled) perfect phylogeny is $|\mathcal{G}_c^X((4,2))| \, |\mathcal{G}_c^X((6))| = |\mathcal{G}_c^X((4))| \,|\mathcal{G}_c^X((2))| \,|\mathcal{G}_c^X((6))| = X_4 X_2 X_6 = 15 \times 1 \times 945 =14,175$. For binary labeled perfect phylogenies $B_x(n)=(x,n-x)$, the one compatible with the largest number of unranked labeled tree shapes follows the result of Proposition \ref{prop:max}. Let $b_x''(n)$ denote the number of unranked labeled tree shapes compatible with $B_x(n)$. \begin{prop} \label{prop:max2} Fix $n \geq 2$. Among the values $b_1''(n), b_2''(n), \ldots, b_{\lfloor \frac{n}{2} \rfloor}''(n)$, the largest is $b_1''(n)$. \end{prop} \begin{proof} Applying Proposition \ref{prop16}, we have $b_x''(n)= X_x \, X_{n-x}$ for $1 \leq x \leq \lfloor \frac{n}{2} \rfloor$. Simplifying with eq.~\ref{eq:Xn}, we obtain $$b_x''(n) = \frac{(n-2)!}{2^{n-2}} \frac{{2x-2 \choose x-1}{2n-2x-2 \choose n-x-1}}{{n-2 \choose x-1}}.$$ Then $b_{x+1}''(n)/b_{x}''(n) = \frac{2x-1}{2n-2x-3} \leq 1$ for $1 \leq x \leq \frac{n-1}{2}$, with equality requiring $x=\frac{n-1}{2}$, so that $b_{x}''(n)$ monotonically decreases from $x=1$ to $x=\lfloor \frac{n}{2} \rfloor$. \end{proof} In Table 1, we observe that as in Proposition \ref{prop:max2}, $b_1''(10)=2,027,025$ exceeds $b_2''(10)$, $b_3''(10)$, $b_4''(10)$, and $b_5''(10)$. \section{Enumeration for multifurcating perfect phylogenies} \label{sec:four} \label{sec:multifurcating} Recall that perfect phylogenies need not be strictly binary, and that nodes can have more than two descendants.
To complete the description of the numbers of trees of various types that are compatible with a perfect phylogeny, we must consider multifurcating perfect phylogenies. We proceed by reducing the multifurcating case to the binary case that has already been solved. We now consider a \textbf{multifurcating perfect phylogeny} that consists of a single internal node subtending $k$ leaves with labels $n_{1},n_{2},\ldots,n_{k}$. An example is depicted in Figure \ref{fig:compatBin}. Because multiple leaves can each correspond to groups with the same number of samples, so that the same numerical label can be assigned to many of those leaves, it is convenient to denote the vector of unique labels by $\mathbf{a}=(a_{1},a_{2},\ldots,a_{s})$ and the corresponding vector of their multiplicities by $\mathbf{m}=(m_{1},m_{2},\ldots,m_{s})$, where $m_{j}$ denotes the number of leaves with label $a_{j}$, $1 \leq j \leq s \leq k$. In the example of Figure \ref{fig:compatBin}, $\mathbf{a}=(2,3)$ and $\mathbf{m}=(2,2)$, as two leaves $(m_1=2)$ have label 2 $(a_1=2)$ and two leaves $(m_2=2)$ have label 3 $(a_2=3)$. We extend the notion of the binary perfect phylogeny poset to the multifurcating case. We define $\pi \leq \sigma$ for two multifurcating perfect phylogenies if $\sigma$ can be obtained by sequentially collapsing pairs of pendant edges of $\pi$. Given two multifurcating perfect phylogenies $\pi_{1}$ and $\pi_{2}$, their meet $\pi_{1} \wedge \pi_{2}$ is the largest multifurcating perfect phylogeny that refines both $\pi_{1}$ and $\pi_{2}$. For example, the meet between $\pi_{1}=(1,2,3,(2,2))$ and $\pi_{2}=(1,2,2,(2,3))$ is given by: \begin{align*} (1,2,3,(2,2)) \wedge (1,2,2,(2,3)) &= (1,(2,2),(2,3)). \end{align*} Similarly, their join is the smallest multifurcating perfect phylogeny $\pi_{1} \vee \pi_{2}$ for which both $\pi_{1}$ and $\pi_{2}$ are refinements: \begin{align*} (1,2,3,(2,2)) \vee (1,2,2,(2,3)) &= (1,2,2,2,3).
\end{align*} The lattice structure enables us to count the number of ranked unlabeled tree shapes compatible with a multifurcating perfect phylogeny $\pi=(n_{1},n_{2},\ldots,n_{k})$. We use a recursive inclusion-exclusion principle with label vector $\mathbf{a}$ and multiplicities $\mathbf{m}$. The key idea is to decompose the computation into a sum over all possible binary perfect phylogenies, applying Propositions \ref{prop9} and \ref{prop10} to each binary perfect phylogeny. To recursively generate all possible binary perfect phylogenies from $\pi$, we define the operator $\mathcal{B}_{i,j}(\pi)$ that collapses two leaves with labels $a_{i}$ and $a_{j}$ in $\pi$. For example, with $\mathbf{a}=(2,3,4)$, $\mathcal{B}_{1,2}(2,2,3,4)=((2,3),2,4)$. If $\sum^{s}_{i=1} m_{i}>2$, then \begin{align}\label{eq:first} |\mathcal{G}_{c}(\pi)|&=\underbrace{\sum_{i=1}^s|\mathcal{G}_{c}(\mathcal{B}_{i,i}(\pi))|\,1_{m_{i}>1}}_{\small{\substack{\text{collapsing two pendant edges}\\ \text{with the same leaf values}}}}+\underbrace{\sum_{i=1}^{s-1} \sum_{j=i+1}^s|\mathcal{G}_{c}(\mathcal{B}_{i,j}(\pi))|\,1_{m_{i}>0} \, 1_{m_{j}>0}}_{\small{\substack{\text{collapsing two pendant edges}\\ \text{with different leaf values}}}} \nonumber \\ & \quad - \underbrace{\sum_{i=1}^{s-1} \sum_{j=i+1}^s|\mathcal{G}_{c}(\mathcal{B}_{i,i}(\pi) \wedge \mathcal{B}_{j,j}(\pi))|\,1_{m_{i}>1}\,1_{m_{j}>1}}_{\small{\substack{\text{collapsing two distinct pairs of pendant edges,}\\ \text{each pair with the same leaf values}}}} \nonumber\\ & \quad - \underbrace{\sum_{i=1}^{s-1} \sum_{j=i+1}^s \sum_{k=1 \atop k \neq i, k \neq j}^s |\mathcal{G}_{c}(\mathcal{B}_{i,j}(\pi) \wedge \mathcal{B}_{k,k}(\pi)) | \, 1_{m_{i}>0} \, 1_{m_{j}>0} \, 1_{m_{k}>1}}_{\small{\substack{\text{collapsing a pair of edges with different leaf values}\\ \text{and collapsing a pair of edges with the same leaf values}}}} \nonumber\\ & \quad - \underbrace{\sum_{i=1}^{s-1} \sum_{j=i+1}^s \sum_{k=1 \atop k \neq i, k \neq j}^{s-1} \sum_{\ell=k+1
\atop \ell \neq i, \ell \neq j}^s |\mathcal{G}_{c}(\mathcal{B}_{i,j}(\pi) \wedge \mathcal{B}_{k,\ell}(\pi))|\, 1_{m_{i}>0}\, 1_{m_{j}>0}\, 1_{m_{k}>0}\, 1_{m_{\ell}>0}}_{\small{\substack{\text{collapsing two different pairs of pendant edges,}\\ \text{each pair with different leaf values}}}}. \end{align} To interpret eq.~\ref{eq:first} as an inclusion-exclusion formula, notice that the first two sums that are added on the right-hand side of eq.~\ref{eq:first} correspond to enumerations of single events (so that the sum is analogous to a union $\cup A_{i}$), and the following three sums that are subtracted correspond to intersections of pairs of these events (analogous to intersections $A_{i} \cap A_{j}$). Eq.~\ref{eq:first} provides a recursive approach for counting the number of ranked unlabeled tree shapes compatible with a multifurcating perfect phylogeny by expressing the calculation in terms of binary perfect phylogenies. The recursive application of the equation proceeds until all terms reach $\sum^{s}_{i=1}m_{i}=2$, when the binary perfect phylogenies are reached. \begin{examp} The number of ranked unlabeled tree shapes compatible with $\pi=(2,2,3,3)$ is: \begin{align*} |\mathcal{G}^{T}_{c}(2,2,3,3)| & =|\mathcal{G}^{T}_{c}((2,2),3,3)| +|\mathcal{G}^{T}_{c}(2,2,(3,3))| +|\mathcal{G}^{T}_{c}((2,3),2,3)| -|\mathcal{G}^{T}_{c}((2,2),(3,3))|\\ & = \big[ |\mathcal{G}^{T}_{c}((2,2),(3,3))| +|\mathcal{G}^{T}_{c}(((2,2),3),3)| \big] + \big[ |\mathcal{G}^{T}_{c}((2,2),(3,3))| +|\mathcal{G}^{T}_{c}(((3,3),2),2)| \big]\\ & \quad + \big[ |\mathcal{G}^{T}_{c}(((2,3),2),3)| +|\mathcal{G}^{T}_{c}(((2,3),3),2)| + |\mathcal{G}^{T}_{c}((2,3),(2,3))| \big] -|\mathcal{G}^{T}_{c}((2,2),(3,3))| \\ & =|\mathcal{G}^{T}_{c}((2,2),(3,3))| +|\mathcal{G}^{T}_{c}(((2,2),3),3)| +|\mathcal{G}^{T}_{c}(((3,3),2),2)| +|\mathcal{G}^{T}_{c}(((2,3),2),3)| \\ & \quad +|\mathcal{G}^{T}_{c}(((2,3),3),2)| + |\mathcal{G}^{T}_{c}((2,3),(2,3))| \\ & =168+280+144+420+360+315=1687. 
\end{align*} In obtaining this sum, in intermediate steps, we use the fact that the values of $\mathcal{G}_{c}^T$ for (2), (3), (2,2), (3,3), (2,3), ((2,2),3), ((3,3),2), ((2,3),2), and ((2,3),3) are 1, 1, 1, 3, 3, 10, 18, 15, and 45, respectively. \end{examp} For counting the number of unranked unlabeled tree shapes compatible with $\pi=(n_{1},n_{2},\ldots,n_{k})$, we simply replace $\mathcal{G}^{T}_{c}$ with $\mathcal{G}_{c}$ in eq.~\ref{eq:first}. We use Propositions \ref{prop6} and \ref{prop7} in place of Propositions \ref{prop9} and \ref{prop10}. \begin{examp} The number of unranked unlabeled tree shapes compatible with $\pi=(2,2,3,3)$ is: \begin{align*} |\mathcal{G}_{c}(2,2,3,3)| &= |\mathcal{G}_{c}((2,2),(3,3))|+|\mathcal{G}_{c}(((2,2),3),3)|+|\mathcal{G}_{c}(((3,3),2),2)|\\ & \quad +|\mathcal{G}_{c}(((2,3),2),3)|+|\mathcal{G}_{c}(((2,3),3),2)|+ |\mathcal{G}_{c}((2,3),(2,3))| \\ &=1+1+1+1+1+1=6. \end{align*} This example is quite straightforward; the values of $\mathcal{G}_{c}$ for the perfect phylogenies that appear in intermediate steps---(2), (3), (2,2), (3,3), (2,3), ((2,2),3), ((3,3),2), ((2,3),2), and ((2,3),3)---all equal 1. \end{examp} To count the number of ranked labeled tree shapes compatible with a labeled multifurcating perfect phylogeny $\pi=(n_{1},n_{2},\ldots,n_{k})$, we assume that although any leaf in the perfect phylogeny can have multiplicity larger than one, each leaf is uniquely defined by its associated samples, all of which are assumed to have different labels. Therefore, we take $\mathbf{a}=(n_{1},n_{2},\ldots,n_{k})$ and $\mathbf{m}=(1,1,\ldots,1)$.
Eq.~\ref{eq:first} reduces to \begin{align}\label{eq:first_2} |\mathcal{G}^{L}_{c}(\pi)|&=\underbrace{\sum_{i=1}^{s-1} \sum_{j=i+1}^s|\mathcal{G}^{L}_{c}(\mathcal{B}_{i,j}(\pi))|\, 1_{m_{i}>0}\, 1_{m_{j}>0}}_{\small{\substack{\text{collapsing two pendant edges}}}} \nonumber \\ & \quad - \underbrace{\sum_{i=1}^{s-1} \sum_{j=i+1}^s \sum_{k=1 \atop k \neq i, k \neq j}^{s-1} \sum_{\ell=k+1 \atop \ell\neq i, \ell \neq j}^s|\mathcal{G}^{L}_{c}(\mathcal{B}_{i,j}(\pi) \wedge \mathcal{B}_{k,\ell}(\pi))|\, 1_{m_{i}>0}\, 1_{m_{j}>0}\, 1_{m_{k}>0}\, 1_{m_{\ell}>0}}_{\small{\substack{\text{collapsing two pairs of pendant edges}}}}. \end{align} The enumeration makes use of Proposition \ref{prop13}. \begin{examp} Consider a labeled multifurcating perfect phylogeny that groups $2$, $2$, $3$, and $3$ samples at the root. We assume that $\mathbf{a}=(a_{1},a_{2},a_{3},a_{4})=(2,2,3,3)$. Applying the recursion formula in eq.~\ref{eq:first_2}, we get \begin{align*} |\mathcal{G}^{L}_{c}(a_{1},a_{2},a_{3},a_{4})| &= |\mathcal{G}^{L}_{c}((a_{1},a_{2}),a_{3},a_{4})|+|\mathcal{G}^{L}_{c}((a_{1},a_{3}),a_{2},a_{4})|+|\mathcal{G}^{L}_{c}((a_{1},a_{4}),a_{2},a_{3})|\\ & \quad + |\mathcal{G}^{L}_{c}((a_{2},a_{3}),a_{1},a_{4})|+|\mathcal{G}^{L}_{c}((a_{2},a_{4}),a_{1},a_{3})|+|\mathcal{G}^{L}_{c}((a_{3},a_{4}),a_{1},a_{2})|\\ & \quad - |\mathcal{G}^{L}_{c}((a_{1},a_{2}),(a_{3},a_{4}))|-|\mathcal{G}^{L}_{c}((a_{1},a_{3}),(a_{2},a_{4}))|-|\mathcal{G}^{L}_{c}((a_{1},a_{4}),(a_{2},a_{3}))|\\ &=|\mathcal{G}^{L}_{c}((2,2),3,3)|+4|\mathcal{G}^{L}_{c}((2,3),2,3)|+|\mathcal{G}^{L}_{c}((3,3),2,2)|\\ & \quad -|\mathcal{G}^{L}_{c}((2,2),(3,3))|-2|\mathcal{G}^{L}_{c}((2,3),(2,3))|. 
\end{align*} Now, because \begin{align*} |\mathcal{G}^{L}_{c}(a_{1},a_{2},a_{3})| &= |\mathcal{G}^{L}_{c}((a_{1},a_{2}),a_{3})|+|\mathcal{G}^{L}_{c}((a_{1},a_{3}),a_{2})|+|\mathcal{G}^{L}_{c}((a_{2},a_{3}),a_{1})|, \end{align*} we have \begin{align*} |\mathcal{G}^{L}_{c}((2,2),3,3)| &= 2|\mathcal{G}^{L}_{c}(((2,2),3),3)|+|\mathcal{G}^{L}_{c}((2,2),(3,3))| \\ |\mathcal{G}^{L}_{c}((2,3),2,3)| &= |\mathcal{G}^{L}_{c}(((2,3),2),3)|+|\mathcal{G}^{L}_{c}(((2,3),3),2)|+|\mathcal{G}^{L}_{c}((2,3),(2,3))| \\ |\mathcal{G}^{L}_{c}((3,3),2,2)| &= 2|\mathcal{G}^{L}_{c}(((3,3),2),2)|+|\mathcal{G}^{L}_{c}((2,2),(3,3))|. \end{align*} Summing all terms, we get \begin{align*} |\mathcal{G}^{L}_{c}(a_{1},a_{2},a_{3},a_{4})| &=2|\mathcal{G}^{L}_{c}(((2,2),3),3)|+2|\mathcal{G}^{L}_{c}(((3,3),2),2)|+|\mathcal{G}^{L}_{c}((2,2),(3,3))|\\ & \quad +4|\mathcal{G}^{L}_{c}(((2,3),2),3)|+4|\mathcal{G}^{L}_{c}(((2,3),3),2)|+2|\mathcal{G}^{L}_{c}((2,3),(2,3))|\\ &=2\times 5040 + 2 \times 2592 + 6048 + 4 \times 3780 + 4 \times 3240 + 2 \times 5670= 60,732. \end{align*} In obtaining this sum, we use the fact that the values of $\mathcal{G}_{c}^L$ for (2), (3), (2,2), (3,3), (2,3), ((2,2),3), ((3,3),2), ((2,3),2), and ((2,3),3) are 1, 3, 2, 54, 9, 60, 324, 45, and 405, respectively. \end{examp} The number of unranked labeled tree shapes compatible with $\pi=(n_{1},n_{2},\ldots,n_{k})$ is obtained by replacing $\mathcal{G}^{L}_{c}$ with $\mathcal{G}^{X}_{c}$ in eq.~\ref{eq:first_2}. We use Proposition \ref{prop16} in place of Proposition \ref{prop13}.
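In the same spirit as the ranked labeled case, Proposition \ref{prop16} yields a one-line recursion for unranked labeled tree shapes. Below is a minimal Python sketch (our own naming, not from the paper), assuming the closed form $X_n=(2n-3)!!$ for the number of unranked labeled tree shapes on $n$ leaves, consistent with the values $X_2=1$, $X_4=15$, and $X_6=945$ used earlier.

```python
def unranked_labeled_shapes(n):
    # X_n = (2n-3)!! for n >= 2, with X_1 = 1: the double factorial
    # 3 * 5 * ... * (2n-3).
    result = 1
    for k in range(3, 2 * n - 2, 2):
        result *= k
    return result

def compatible_unranked_labeled(pi):
    # |G^X_c(pi)| via Proposition 16: with no ranks to interleave, the
    # count is just the product over the two root subtrees.
    if isinstance(pi, int):
        return unranked_labeled_shapes(pi)
    pi1, pi2 = pi
    return compatible_unranked_labeled(pi1) * compatible_unranked_labeled(pi2)

# The ((4,2),6) example: X_4 * X_2 * X_6 = 15 * 1 * 945.
print(compatible_unranked_labeled(((4, 2), 6)))  # 14175
```

The absence of the interleaving binomial coefficient is what makes the unranked labeled counts so much smaller than the ranked labeled counts in Table 1.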
\begin{examp} The number of unranked labeled tree shapes compatible with a labeled multifurcating perfect phylogeny that groups 2, 2, 3, and 3 samples at the root, with $\mathbf{a}=(a_{1},a_{2},a_{3},a_{4})=(2,2,3,3)$ is: \begin{align*} |\mathcal{G}^{X}_{c}(a_{1},a_{2},a_{3},a_{4})| &=2|\mathcal{G}^{X}_{c}(((2,2),3),3)|+2|\mathcal{G}^{X}_{c}(((3,3),2),2)|+|\mathcal{G}^{X}_{c}((2,2),(3,3))|\\ & \quad +4|\mathcal{G}^{X}_{c}(((2,3),2),3)|+4|\mathcal{G}^{X}_{c}(((2,3),3),2)|+2|\mathcal{G}^{X}_{c}((2,3),(2,3))|\\ &=2\times 9 + 2 \times 9 + 9 + 4 \times 9 + 4 \times 9 + 2 \times 9= 135. \end{align*} The sum uses values of $\mathcal{G}_{c}^{X}$ for (2), (3), (2,2), (3,3), (2,3), ((2,2),3), ((3,3),2), ((2,3),2), and ((2,3),3), equal to 1, 3, 1, 9, 3, 3, 9, 3, and 9, respectively. \end{examp} \begin{figure} \centering \includegraphics[scale=0.5]{binary_compatible.eps} \caption{\small{\textbf{Example of all possible binary perfect phylogeny topologies for a given multifurcating perfect phylogeny topology.} The binary perfect phylogenies are obtained from a multifurcating perfect phylogeny by resolving multifurcating nodes into sequences of bifurcations.}} \label{fig:compatBin} \end{figure} \begin{table}[ht] \begin{center} \caption{Number of trees compatible with example perfect phylogenies of 10 samples.} \label{table:tree_count} \begin{tabular}{rrrrr} \hline Perfect & Unranked unlabeled & Ranked unlabeled & Ranked labeled & Unranked labeled \\ phylogeny & tree shapes & tree shapes & tree shapes & tree shapes \\ \hline (9,1) & 46 & 1385 & 57,153,600 & 2,027,025 \\ (8,2) & 23 & 2176 & 12,700,800 & 135,135 \\ (7,3) & 11 & 1708 & 4,762,800 & 31,185 \\ (6,4) & 12 & 1792 & 2,721,600 & 14,175 \\ (5,5) & 6 & 875 & 2,268,000 & 11,025 \\ ((8,1),1) & 23 & 272 & 1,587,600 & 135,135 \\ ((7,2),1) & 11 & 427 & 396,900 & 10,395 \\ ((6,3),1) & 6 & 336 & 170,100 & 2835 \\ ((5,4),1) & 6 & 350 & 113,400 & 1575 \\ ((7,1),2) & 11 & 488 & 453,600 & 10,395 \\ ((6,2),2) & 6 & 768 & 129,600 & 945
\\ ((5,3),2) & 3 & 600 & 64,800 & 315 \\ ((4,4),2) & 3 & 320 & 51,840 & 225 \\ ((6,1),3) & 6 & 448 & 226,800 & 2835 \\ ((5,2),3) & 3 & 700 & 75,600 & 315 \\ ((4,3),3) & 2 & 560 & 45,360 & 135 \\ ((5,1),4) & 6 & 560 & 181,440 & 1575 \\ ((4,2),4) & 4 & 896 & 72,576 & 225 \\ ((3,3),4) & 2 & 336 & 54,432 & 135 \\ ((4,1),5) & 5 & 560 & 226,800 & 1575 \\ ((3,2),5) & 3 & 735 & 113,400 & 315 \\ \hline \end{tabular} \end{center} The entries in the table are obtained by repeated use of Propositions \ref{prop6} and \ref{prop7} for unranked unlabeled tree shapes, \ref{prop9} and \ref{prop10} for ranked unlabeled tree shapes, \ref{prop13} for ranked labeled tree shapes, and \ref{prop16} for unranked labeled tree shapes. An arbitrary labeling of the perfect phylogeny is assumed for counting the associated ranked and unranked labeled tree shapes. \end{table} \section{Conclusion} The infinitely-many-sites mutations model is a popular model of molecular variation for problems of population genetics \citep{wakeley_coalescent_2008} and related areas \citep{jones2020inference}, in which constraints are imposed on the space of trees that can explain the observed patterns of molecular variation. A realization of the coalescent model on a genealogy and a superimposed infinitely-many-sites mutation model can be summarized as a perfect phylogeny. Here, we have examined combinatorial properties of the genealogical tree structures that are compatible with a perfect phylogeny, demonstrating that the binary perfect phylogenies possess a lattice structure (Theorem \ref{thm:lattice}). We have used this lattice structure to provide recursive enumerative results counting the trees---unranked unlabeled trees, ranked unlabeled trees, ranked labeled trees, and unranked labeled trees---compatible with binary and multifurcating perfect phylogenies.
In our enumerative results, the count of the number of trees of a specified type that are compatible with a perfect phylogeny is obtained by a decomposition of the perfect phylogeny at its root. The number of associated trees is obtained by counting trees for each subtree immediately descended from the root of the perfect phylogeny---and, where appropriate, counting interleavings of nodes within those trees, taking care to avoid double-counting, or both. This same technique applied to each of the types of trees we considered, appearing in Sections \ref{sec:unranked}, \ref{sec:ranked}, \ref{sec:labeled}, \ref{sec:unrankedlabeled}, and \ref{sec:multifurcating}. Owing to the recursive structure of the computation, the decomposition itself proceeds rapidly from the root through the internal nodes, so that a count can be quickly obtained even if the number itself is large. We obtained results concerning the cherry perfect phylogenies with the largest numbers of ranked unlabeled, ranked labeled, and unranked labeled tree shapes (Propositions \ref{prop:asymptotic}, \ref{prop:max}, and \ref{prop:max2}), and it will be informative to seek a similar result for the unranked unlabeled case. The result in Proposition \ref{prop:asymptotic} on asymptotic growth of the number of ranked unlabeled tree shapes compatible with a binary perfect phylogeny is reminiscent of a result concerning ``lodgepole'' trees. A number of studies have examined another combinatorial structure for evolutionary trees, the number of ``coalescent histories'' associated with a labeled species tree and its matching labeled gene tree. These coalescent histories encode different evolutionary scenarios possible for the coalescence of gene lineages on a species tree.
\citet{disanto2015coalescent} found that the lodgepole trees, a class of trees in which cherry nodes with 2 descendants successively branch from a single species tree edge, possess a particularly large number of coalescent histories. Similarly, in Proposition \ref{prop:asymptotic}, as $n$ increases, the number of ranked unlabeled tree shapes compatible with a cherry perfect phylogeny is largest when the perfect phylogeny has one subtree with sample size 2. Perfect phylogenies have been widely studied in varied estimation problems: the ``perfect phylogeny problem'' of determining whether a perfect phylogeny can be constructed from data given on a set of characters \citep{agarwala1994faster, kannan1997fast, felsenstein2004inferring,gusfield2014recombinatorics, steel16}, statistical inference of evolutionary parameters under the coalescent \citep{griffiths_sampling_1994,StephensDonnelly2000,TavareNotes,Palacios2019,cappello2020tajima}, and algorithmic estimation of haplotype phase from diploid data \citep{gusfield2002haplotyping,bafna2004note,gusfield2014recombinatorics}. However, the literature on perfect phylogenies has largely focused on such applications and on algorithmic problems of obtaining perfect phylogenies from data under various constraints, with little emphasis on the enumerative combinatorics of the perfect phylogenies themselves, and of their associated refinements. In describing a lattice for the binary perfect phylogenies with sample size $n$, this study suggests that the mathematical properties of sets of perfect phylogenies as combinatorial structures {\it per se} can be informative.
The link to coalescent histories suggests possible connections to related concepts such as ``ancestral configurations'' \citep{wu2012coalescent,disanto2017enumeration}, which also can be described in terms of lattices (E.~Alimpiev \& N.A.R., unpublished); it will be useful to consider perfect phylogenies alongside such structures arising in the combinatorics of evolutionary trees. Finally, returning to considerations of coalescent-based inference from sequences, recall that inference of evolutionary parameters from a given perfect phylogeny is performed by integrating over the space of genealogies. A standard approach to inference integrates over the space of ranked labeled tree shapes generated by the Kingman coalescent \citep{drummond2012bayesian}. However, this inference is computationally intractable for large sample sizes. We have observed a striking reduction in the cardinality of the set of ranked unlabeled tree shapes compatible with an observed perfect phylogeny, relative to the number of ranked labeled tree shapes compatible with an observed perfect phylogeny (Table 1). This observation contributes to a growing branch of coalescent-based inference \citep{veber, Palaciosgenetics, Palacios2019, Cappello2019} that makes use of ranked unlabeled trees to estimate evolutionary parameters. \section{Acknowledgments} J.A.P. and N.A.R. acknowledge support from National Institutes of Health Grant R01-GM-131404. J.A.P. acknowledges support from the Alfred P. Sloan Foundation. {\small \bibliography{JP} } \section*{Appendix: Proof of Theorem \ref{thm:lattice}} To prove Theorem \ref{thm:lattice}, we must verify four pairs of conditions concerning perfect phylogenies $\pi \in \Pi_{n} \cup \{\emptyset\}$.
Note that any binary perfect phylogeny $\pi \in \Pi_{n} \cup \{\emptyset\}$ is equal to $\emptyset$, $(n)$, or $(\pi_1,\pi_2)$ for two non-empty binary perfect phylogenies $\pi_1 \in \Pi_{n_1}$ and $\pi_2 \in \Pi_{n_2}$, where $1 \leq n_1,n_2 < n$ and $n_1+n_2=n$. Hence, we must demonstrate the four pairs of conditions for perfect phylogeny pairs that include $\emptyset$, $(n)$, or both, and for perfect phylogeny pairs that include neither $\emptyset$ nor $(n)$. Because perfect phylogenies can be decomposed into smaller perfect phylogenies, we proceed by induction on $n$, with a base case of $n=1$. In the inductive step we assume that $(\Pi_{k} \cup \{\emptyset\}, \wedge, \vee)$ is a lattice for all $k$, $1 \leq k < n$. We then verify that it follows that $(\Pi_{n} \cup \{\emptyset\}, \wedge, \vee)$ is a lattice. We start with Condition 2, which is trivial. \subsection*{Condition 2: $\pi \wedge \sigma = \sigma \wedge \pi$ and $\pi \vee \sigma = \sigma \vee \pi$} For all $n$, condition 2 of the definition of a lattice is trivially satisfied, as the operations $\wedge$ and $\vee$ are symmetric by definition. In subsequent derivations, we frequently apply Condition 2 without always noting its application. \subsection*{The $n=1$ case for Conditions 1, 3, and 4} Consider $n=1$, for which $\Pi_1$ contains only the perfect phylogeny $(1)$, and $\Pi_1 \cup \{\emptyset\}$ contains only $(1)$ and $\emptyset$. For $\Pi_1 \cup \{\emptyset\}$, demonstrating Condition 1 of the requirements for a lattice requires that we show $(1) \wedge (1) = (1)$, $\emptyset \wedge \emptyset = \emptyset$, $(1) \vee (1) = (1)$, and $\emptyset \vee \emptyset = \emptyset$. These four relations are true by parts (3), (1), (4), and (2) of Defn.~\ref{def:binope}, respectively. Demonstrating Condition 3 requires that we verify a pair of conditions for each of the eight choices of $(x,y,z)$ for $x,y,z \in \Pi_1 \cup \{\emptyset\}$. 
Demonstrating Condition 4 requires that we verify a pair of conditions for each of the four choices of $(x,y)$. The 16 verifications for Condition 3 and eight verifications for Condition 4 all quickly follow by Defn.~\ref{def:binope} (1-4). Hence, $(\Pi_{1} \cup \{\emptyset\}, \wedge, \vee)$ is a lattice. \subsection*{\bf Condition 1: $\pi \wedge \pi = \pi$ and $\pi \vee \pi = \pi$} First, we demonstrate the first part of the condition. We see $\emptyset \wedge \emptyset = \emptyset$ by Defn.~\ref{def:binope} (1) and $(n) \wedge (n) = (n)$ by Defn.~\ref{def:binope} (3). Consider $\pi = (\pi_1,\pi_2)$ for $\pi_1 \in \Pi_{n_1}$ and $\pi_2 \in \Pi_{n_2}$, where $1 \leq n_1,n_2 < n$ and $n_1 + n_2 = n$. \begin{align*} \pi \wedge \pi & = (\pi_1,\pi_2) \wedge (\pi_1,\pi_2) \\ & = (\pi_1 \wedge \pi_1, \pi_2 \wedge \pi_2) \vee (\pi_1 \wedge \pi_2, \pi_2 \wedge \pi_1) \text{ by Defn.~\ref{def:binope} (8)}\\ & = (\pi_1, \pi_2) \vee (\pi_1 \wedge \pi_2, \pi_1 \wedge \pi_2) \text{ by the inductive hypothesis}. \end{align*} If $n_1 \neq n_2$, then we apply Defn.~\ref{def:binope} (5), the convention $(\pi, \emptyset) = \emptyset$, and Defn.~\ref{def:binope} (2), and we obtain $\pi \wedge \pi = (\pi_1,\pi_2) \vee (\emptyset, \emptyset) = (\pi_1, \pi_2) \vee \emptyset = (\pi_1, \pi_2) = \pi$. If $n_1 = n_2$, then we have two cases: $\pi_1 \leq \pi_2$ (without loss of generality), and $\pi_1,\pi_2$ are not comparable. If $\pi_1 \leq \pi_2$, then $\pi_1 \wedge \pi_2 = \pi_1$ and $\pi_1 \vee \pi_2 = \pi_2$. By Defn.~\ref{def:binope} (9), $(\pi_1,\pi_2) \vee (\pi_1,\pi_1) = (\pi_1, \pi_2 \vee \pi_1) = (\pi_1,\pi_2) = \pi$, so that $\pi \wedge \pi = \pi$. If $\pi_1$ and $\pi_2$ are not comparable, then by Defn.~1 (11), $\pi_{1}\wedge \pi_{2}=\delta$ for some $\delta \in (\Pi_{n_1} \cup \{\emptyset\}) \setminus \{\pi_{1},\pi_{2}\}$, with $\delta \vee \pi_{1}=\pi_{1}$ and $\delta \vee \pi_{2}=\pi_{2}$.
We then have by Defn.~\ref{def:binope} (9), \begin{align*} (\pi_1, \pi_2) \vee (\pi_1 \wedge \pi_2, \pi_1 \wedge \pi_2) & = (\pi_1, \pi_2) \vee (\delta,\delta). \end{align*} But $(\delta,\delta)$ refines $(\pi_{1},\pi_{2})$, as $\delta$ refines $\pi_{1}$ and $\delta$ refines $\pi_{2}$, so that $(\pi_1, \pi_2)$ can be obtained by collapsing cherries separately in the two subtrees of $(\delta, \delta)$. Hence, $\pi \wedge \pi = (\pi_1, \pi_2) \vee (\delta, \delta) = (\pi_1, \pi_2) = \pi$. For the second part of the condition, we have $\emptyset \vee \emptyset = \emptyset$ by Defn.~\ref{def:binope} (2) and $(n) \vee (n) = (n)$ by Defn.~\ref{def:binope} (4). Consider $\pi = (\pi_1,\pi_2)$ for $\pi_1 \in \Pi_{n_1}$ and $\pi_2 \in \Pi_{n_2}$, where $1 \leq n_1,n_2 < n$ and $n_1 + n_2 = n$. \begin{align*} \pi \vee \pi & = (\pi_1,\pi_2) \vee (\pi_1,\pi_2) \\ & = (\pi_1, \pi_2 \vee \pi_2) \text{ by Defn.~\ref{def:binope} (9)}\\ & = (\pi_1, \pi_2) \text{ by the inductive hypothesis} \\ & = \pi. \end{align*} \subsection*{Condition 4: $\pi \wedge (\pi \vee \sigma)=\pi$ and $\pi \vee (\pi \wedge \sigma)=\pi$} First, we see that both parts of the condition hold if at least one of $\pi, \sigma$ is in $\{ \emptyset,(n)\}$, by Defn.~1 (1-4). Next, we have the following 3 cases: \begin{enumerate} \item[i.] If $\pi \leq \sigma$, then $\pi \wedge \sigma = \pi$ and $\pi \vee \sigma= \sigma$. Hence, $\pi \wedge (\pi \vee \sigma)=\pi \wedge \sigma = \pi$. By Condition 1, $\pi \vee (\pi \wedge \sigma)=\pi \vee \pi =\pi$. \item[ii.] If $\sigma \leq \pi$, then $\pi \wedge \sigma =\sigma$ and $\pi \vee \sigma=\pi$. Hence, by Condition 1, $\pi \wedge (\pi \vee \sigma)=\pi \wedge \pi = \pi$. We also have $\pi \vee (\pi \wedge \sigma)=\pi \vee \sigma =\pi$. \item[iii.] If $\pi$ and $\sigma$ are not comparable, then by Defn.~1 (11), there exists a perfect phylogeny $\gamma$ such that $\pi \vee \sigma = \gamma$, $\pi \wedge \gamma=\pi$, and $\sigma \wedge \gamma=\sigma$. 
Hence $\pi \wedge(\pi \vee \sigma)=\pi \wedge \gamma = \pi$. By Defn.~1 (11), there exists a perfect phylogeny $\rho$ such that $\pi \wedge \sigma=\rho$, $\pi \vee \rho=\pi$, and $\sigma \vee \rho = \sigma$. We have $\pi \vee (\pi \wedge \sigma)=\pi \vee \rho =\pi$. \end{enumerate} \subsection*{Condition 3: $\pi \wedge (\sigma \wedge \rho)=(\pi \wedge \sigma) \wedge \rho$ and $\pi \vee (\sigma \vee \rho)=(\pi \vee \sigma) \vee \rho$} First, we see that both parts of the condition hold if at least one of $\pi, \sigma, \rho$ is in $\{\emptyset,(n)\}$, by Defn.~1 (1-4). Assume now that $\pi=(\pi_{1},\pi_{2})$, $\sigma=(\sigma_{1},\sigma_{2})$, and $\rho=(\rho_{1},\rho_{2})$. Then \begin{align*} \pi \wedge (\sigma \wedge \rho) &=(\pi_{1},\pi_{2}) \wedge \left( (\sigma_{1},\sigma_{2}) \wedge (\rho_{1},\rho_{2}) \right)\\ &=(\pi_{1},\pi_{2}) \wedge [ (\sigma_{1} \wedge \rho_{1}, \sigma_{2}\wedge \rho_{2}) \vee (\sigma_{1} \wedge \rho_{2}, \sigma_{2}\wedge \rho_{1})] \text{ by Defn.~\ref{def:binope} (8)}\\ &=[(\pi_{1},\pi_{2})\wedge (\sigma_{1}\wedge \rho_{1},\sigma_{2}\wedge\rho_{2})]\vee [(\pi_{1},\pi_{2})\wedge (\sigma_{1}\wedge\rho_{2},\sigma_{2}\wedge \rho_{1})] \text{ by Defn.~\ref{def:binope} (10)}\\ &=[(\pi_{1}\wedge (\sigma_{1} \wedge \rho_{1}), \pi_{2} \wedge (\sigma_{2}\wedge \rho_{2}))\vee(\pi_{1}\wedge (\sigma_{2} \wedge \rho_{2}), \pi_{2} \wedge (\sigma_{1}\wedge \rho_{1})) ] \\ & \quad \vee [(\pi_{1}\wedge (\sigma_{1} \wedge \rho_{2}), \pi_{2} \wedge (\sigma_{2}\wedge \rho_{1}))\vee(\pi_{1}\wedge (\sigma_{2} \wedge \rho_{1}), \pi_{2} \wedge (\sigma_{1}\wedge \rho_{2}) )]\text{ by Defn.~\ref{def:binope} (8)} \end{align*} By the inductive hypothesis {\it for both parts of the condition}, $\pi_{i} \wedge (\sigma_{j} \wedge \rho_{k})= (\pi_{i} \wedge \sigma_{j})\wedge \rho_{k}$ and $\pi_{i} \vee (\sigma_{j} \vee \rho_{k})= (\pi_{i} \vee \sigma_{j})\vee \rho_{k}$ for all $i,j,k \in \{1,2 \}$. 
We then get \begin{align*} \pi \wedge (\sigma \wedge \rho) &= [((\pi_{1}\wedge \sigma_{1}) \wedge \rho_{1}, (\pi_{2} \wedge \sigma_{2})\wedge \rho_{2})\vee ((\pi_{1}\wedge \sigma_{2}) \wedge \rho_{2}, (\pi_{2} \wedge \sigma_{1})\wedge \rho_{1})]\\ & \quad \vee [((\pi_{1}\wedge \sigma_{1}) \wedge \rho_{2}, (\pi_{2} \wedge \sigma_{2})\wedge \rho_{1})\vee ((\pi_{1}\wedge \sigma_{2}) \wedge \rho_{1}, (\pi_{2} \wedge \sigma_{1})\wedge \rho_{2})]. \end{align*} By the inductive hypothesis for operator $\vee$ and by Condition 2, we can rearrange parentheses and swap the order of terms to obtain: \begin{align*} \pi \wedge (\sigma \wedge \rho) &=((\pi_{1}\wedge \sigma_{1}) \wedge \rho_{1}, (\pi_{2} \wedge \sigma_{2})\wedge \rho_{2})\vee [((\pi_{1}\wedge \sigma_{1}) \wedge \rho_{2}, (\pi_{2} \wedge \sigma_{2})\wedge \rho_{1})\\ & \quad \vee ((\pi_{1}\wedge \sigma_{2}) \wedge \rho_{2}, (\pi_{2} \wedge \sigma_{1})\wedge \rho_{1})]\vee ((\pi_{1}\wedge \sigma_{2}) \wedge \rho_{1}, (\pi_{2} \wedge \sigma_{1})\wedge \rho_{2}). \end{align*} Dropping the brackets and viewing this expression as having four perfect phylogenies separated by the $\vee$ operator, we group the first two and the last two perfect phylogenies together and apply Defn.~1 (8) to each group. We get \begin{align*} \pi \wedge (\sigma \wedge \rho) &=[(\pi_{1}\wedge \sigma_{1},\pi_{2}\wedge \sigma_{2}) \wedge (\rho_{1},\rho_{2})] \vee [(\pi_{1}\wedge \sigma_{2},\pi_{2}\wedge \sigma_{1}) \wedge (\rho_{1},\rho_{2})] \text{ by Defn.~1 (8)}\\ &=[(\pi_{1}\wedge \sigma_{1},\pi_{2}\wedge \sigma_{2}) \vee (\pi_{1}\wedge \sigma_{2},\pi_{2}\wedge \sigma_{1})] \wedge (\rho_{1},\rho_{2}) \text{ by Defn.~1 (10)}\\ &=(\pi\wedge \sigma)\wedge \rho \text{ by Defn.~1 (8).} \end{align*} For the second part of the condition, suppose $\pi=(\pi_{1},\pi_{2}) \in \Pi_{n}$, $\sigma=(\sigma_{1},\sigma_{2}) \in \Pi_{n}$, and $\rho=(\rho_{1},\rho_{2})\in \Pi_{n}$ are three perfect phylogenies of size $n$. We consider four cases.
First, suppose the three perfect phylogenies have pairwise different subtree sizes---that is, $\{|\pi_1|,|\pi_2|\}$, $\{|\sigma_1|,|\sigma_2|\}$, and $\{|\rho_1|,|\rho_2|\}$ are pairwise distinct. Then $\pi \vee \sigma = \pi \vee \rho = \sigma \vee \rho=(n)$ by Defn.~\ref{def:binope}~(9). We then have $\pi \vee (\sigma \vee \rho)=\pi \vee (n)=(n)= (n) \vee \rho = (\pi\vee \sigma) \vee \rho$ by Defn.~1 (4). The same argument applies under the weaker assumption that only $\sigma$ and $\rho$ have different subtree sizes, $\{|\sigma_1|,|\sigma_2|\} \neq \{|\rho_1|,|\rho_2|\}$. Then $\pi \vee (\sigma \vee \rho)=\pi\vee(n)=(n)=(\pi\vee\sigma)\vee \rho$ by Defn.~1 (4, 9), where we have used the fact that $\sigma \vee \rho = (n)$ and $\sigma \leq \pi \vee \sigma$, so that $(\pi \vee \sigma) \vee \rho = (n)$. If $\{|\sigma_1|,|\sigma_2|\} = \{|\rho_1|,|\rho_2|\}$ but $\{|\pi_1|,|\pi_2|\} \neq \{|\sigma_1|,|\sigma_2|\}$ and $\{|\pi_1|,|\pi_2|\} \neq \{|\rho_1|,|\rho_2|\}$, then $\pi \vee \sigma = (n)$. Because $\sigma \leq \sigma \vee \rho$ and $\pi \vee \sigma = (n)$, $\pi \vee (\sigma \vee \rho) = (n)$. Similarly, $(\pi\vee\sigma)\vee \rho = (n) \vee \rho = (n)$ by Defn.~1 (4, 9). It remains to consider the case in which the subtree sizes all agree, $\{|\pi_1|,|\pi_2|\} = \{|\sigma_1|,|\sigma_2|\} = \{|\rho_1|,|\rho_2|\}$, that is, one subtree from each of $\pi$, $\sigma$, and $\rho$ has the same size.
We have \begin{align} \pi \vee (\sigma \vee \rho) &=(\pi_{1},\pi_{2}) \vee \left( (\sigma_{1},\sigma_{2}) \vee (\rho_{1},\rho_{2}) \right) \nonumber \\ &=(\pi_{1},\pi_{2}) \vee [ (\sigma_{1} \vee \rho_{1}, \sigma_{2}\vee \rho_{2}) \wedge (\sigma_{1} \vee \rho_{2}, \sigma_{2}\vee \rho_{1})] \text{ by Defn.~\ref{def:binope} (9)} \nonumber \\ &=[(\pi_{1},\pi_{2})\vee (\sigma_{1}\vee \rho_{1},\sigma_{2}\vee\rho_{2})]\wedge [(\pi_{1},\pi_{2})\vee (\sigma_{1}\vee \rho_{2},\sigma_{2}\vee \rho_{1})] \text{ by Defn.~\ref{def:binope} (10)} \nonumber \\ &=[(\pi_{1}\vee (\sigma_{1} \vee \rho_{1}), \pi_{2} \vee (\sigma_{2}\vee \rho_{2}))\wedge(\pi_{1}\vee (\sigma_{2} \vee \rho_{2}), \pi_{2} \vee (\sigma_{1}\vee \rho_{1})) ] \nonumber \\ &\quad \wedge [(\pi_{1}\vee (\sigma_{1} \vee \rho_{2}), \pi_{2} \vee (\sigma_{2}\vee \rho_{1}))\wedge(\pi_{1}\vee (\sigma_{2} \vee \rho_{1}), \pi_{2} \vee (\sigma_{1}\vee \rho_{2})) ]\text{ by Defn.~\ref{def:binope} (9)} \nonumber \\ &=((\pi_{1}\vee \sigma_{1}) \vee \rho_{1}, (\pi_{2} \vee \sigma_{2})\vee \rho_{2})\wedge ((\pi_{1}\vee \sigma_{1}) \vee \rho_{2}, (\pi_{2} \vee \sigma_{2})\vee \rho_{1}) \nonumber \\ &\quad \wedge((\pi_{1}\vee \sigma_{2}) \vee \rho_{2}, (\pi_{2} \vee \sigma_{1})\vee \rho_{1})\wedge ((\pi_{1}\vee \sigma_{2}) \vee \rho_{1}, (\pi_{2} \vee \sigma_{1})\vee \rho_{2}) \text{ by ind.~hypothesis} \nonumber \\ &=[(\pi_{1}\vee \sigma_{1},\pi_{2}\vee \sigma_{2}) \vee (\rho_{1},\rho_{2})] \wedge [(\pi_{1}\vee \sigma_{2},\pi_{2}\vee \sigma_{1}) \vee (\rho_{1},\rho_{2})] \text{ by Defn.~1 (9)} \nonumber \\ &=[(\pi_{1}\vee \sigma_{1},\pi_{2}\vee \sigma_{2}) \wedge (\pi_{1}\vee \sigma_{2},\pi_{2}\vee \sigma_{1})] \vee (\rho_{1},\rho_{2}) \text{ by Defn.~1 (10)}\nonumber \\ &=(\pi\vee \sigma)\vee \rho \text{ by Defn.~1 (9).} \label{eq20} \end{align} Note that this derivation includes the case of shared subtrees at the root, in which it is not only the sizes of the subtrees that are the same, but the subtrees themselves. 
For example, suppose $\pi=(\pi_{1},\pi_{2})$ and $\sigma=(\pi_{1},\sigma_{1})$. By Defn.~1 (9), we have \begin{align*} \pi\vee\sigma &= (\pi_{1},\pi_{2})\vee (\pi_{1},\sigma_{1})= (\pi_{1},\pi_{2}\vee \sigma_{1}). \end{align*} However, we will show that we can replace the previous equality by the extended expression: \begin{align} \label{eq21} \pi\vee\sigma &= (\pi_{1}\vee \pi_{1},\pi_{2}\vee\sigma_{1}) \wedge (\pi_{1}\vee \sigma_{1},\pi_{1}\vee \pi_{2}), \end{align} and then the previous derivation remains unchanged. To prove this assertion, we have: \begin{align*} \pi_{2}\vee \sigma_{1}&=(\pi_{2} \vee (\pi_{1}\wedge \pi_{2})) \vee \sigma_{1} \quad \text{ by Condition 4}\\ &=(\pi_{2} \vee \sigma_{1}) \vee (\pi_{1}\wedge \pi_{2}) \quad \text{ by the inductive hypothesis and Condition 2}\\ &=\pi_{2}\vee [(\sigma_{1} \wedge \pi_{1})\vee \sigma_{1} ]\vee (\pi_{1}\wedge \pi_{2}) \quad \text{ by Conditions 2 and 4}\\ &=[\pi_{2}\vee (\sigma_{1} \wedge \pi_{1})]\vee [\sigma_{1}\vee (\pi_{1}\wedge \pi_{2})] \quad \text{ by the inductive hypothesis.} \end{align*} Then \begin{align}\label{eq22} \pi\vee\sigma &= (\pi_{1},\pi_{2}\vee \sigma_{1}) \nonumber \\ &=(\pi_{1},[\pi_{2}\vee (\sigma_{1} \wedge \pi_{1})]\vee [\sigma_{1}\vee (\pi_{1}\wedge \pi_{2})]) \nonumber \\ &=(\pi_{1},\pi_{2}\vee (\sigma_{1}\wedge \pi_{1}))\vee (\pi_{1},\sigma_{1}\vee(\pi_{1}\wedge \pi_{2})) \quad \text{ by Defn.~1 (9)} \nonumber \\ &=(\pi_{1},(\pi_{2}\vee \sigma_{1}) \wedge (\pi_{2}\vee \pi_{1})) \vee (\pi_{1},(\sigma_{1}\vee\pi_{1})\wedge (\sigma_{1}\vee\pi_{2})) \quad \text{ by Defn.~1 (10).} \end{align} By Condition 4 and Defn.~1 (9) we have \begin{align*} \pi_{1}&=\pi_{1} \vee (\pi_{1}\wedge\sigma_{1})=(\pi_{1} \vee \pi_{1}) \wedge ( \pi_{1}\vee\sigma_{1}), \end{align*} and \begin{align*} \pi_{1}&=\pi_{1} \vee (\pi_{1}\wedge \pi_{2})= (\pi_{1} \vee \pi_{1})\wedge (\pi_{1}\vee \pi_{2}). 
\end{align*} Replacing the first $\pi_{1}$ in the first pair of eq.~\ref{eq22} by $(\pi_{1} \vee \pi_{1}) \wedge ( \pi_{1}\vee\sigma_{1})$, and the first $\pi_{1}$ in the second pair of eq.~\ref{eq22} by $(\pi_{1} \vee \pi_{1}) \wedge ( \pi_{1}\vee\pi_{2})$, we get \begin{align*} \pi\vee\sigma &=((\pi_{1} \vee \pi_{1}) \wedge ( \pi_{1}\vee\sigma_{1}),(\pi_{2}\vee \sigma_{1}) \wedge (\pi_{2}\vee \pi_{1})) \vee ((\pi_{1} \vee \pi_{1}) \wedge ( \pi_{1}\vee\pi_{2}),(\sigma_{1}\vee\pi_{1})\wedge (\sigma_{1}\vee\pi_{2}))\\ &=(\pi_{1}\vee \pi_{1},\pi_{2}\vee\sigma_{1}) \wedge (\pi_{1}\vee \sigma_{1},\pi_{1}\vee \pi_{2})\quad \text{ by Defn.~1 (8).} \end{align*} Thus, eq.~\ref{eq21} holds, so that eq.~\ref{eq20} holds for the case in which subtrees are shared at the root. \clearpage \end{document}
TITLE: 1-Factorizations of complete graphs sharing a unique 1-factor QUESTION [2 upvotes]: The complete graph $K_6$ has 15 1-factors (i.e. perfect matchings), and 6 1-factorizations (i.e. partitions of the edges into perfect matchings). As you can see on the actual drawings, they have a nice property: Any two 1-factorizations have a unique common 1-factor. This fact can also be derived by a counting argument using the fact that an edge belongs to 3 perfect matchings in $K_6$, but the picture is self-sufficient. Is this statement still true for $n>6$ ($n$ even)? I didn't think too much about the problem; there might be an obvious reason why this must/cannot happen. As there are already 6240 1-factorisations of $K_8$, I would expect either a counterexample in $K_8$ or maybe $K_{10}$, or an explanation of why this is happening. For information, this fact came in handy when trying to build a projective plane from $K_6$: The points are either vertices or 1-factors. The lines are either edges or 1-factorizations. REPLY [2 votes]: For $n\ge8$ it is possible to construct two $1$-factorisations of $K_n$ that share more than one perfect matching. Take a Hamiltonian cycle; since $n$ is even, it splits into two perfect matchings. Then it is relatively easy to find two different $1$-factorisations of the remaining edges (I leave this as an exercise). These, when combined with the two perfect matchings coming from the Hamiltonian cycle, produce two $1$-factorisations of $K_n$ with at least two common $1$-factors. So the nice property cannot hold for $n>6$. This argument does not work for $n=6$ because there is only one way to make a $1$-factorisation with the remaining edges.
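The counting facts quoted in the question can be checked by brute force. A minimal sketch (my own illustration, not part of the original post) that enumerates the perfect matchings of $K_6$ and verifies both counts:

```python
def perfect_matchings(vs):
    """Yield the perfect matchings of the complete graph on the vertex list vs."""
    if not vs:
        yield frozenset()
        return
    v, rest = vs[0], vs[1:]
    for u in rest:  # pair the first vertex with each remaining vertex in turn
        remaining = [w for w in rest if w != u]
        for m in perfect_matchings(remaining):
            yield m | {(v, u)}

matchings = list(perfect_matchings(list(range(6))))
print(len(matchings))                       # 15 = 5 * 3 * 1, the double factorial 5!!
print(sum((0, 1) in m for m in matchings))  # 3: each edge lies in 3 perfect matchings
```

In general $K_{2n}$ has $(2n-1)!! = (2n-1)(2n-3)\cdots 1$ perfect matchings, which is why the count grows so quickly for $K_8$ and beyond.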
TITLE: Algebraic identities QUESTION [6 upvotes]: Given that $$a+b+c=2$$ and $$ab+bc+ca=1$$ What is the value of $$(a+b)^2+(b+c)^2+(c+a)^2?$$ Attempt: Tried expanding the expression. Thought the expanded version would contain a term from the expression of $a^3+b^3+c^3-3abc$, but it's not the case. REPLY [0 votes]: Consider the monic polynomial with roots $a$, $b$, $c$: $$P(x)=(x-a)(x-b)(x-c)$$ Expanding out: $$P(x)=x^{3}+x^{2}(-a-b-c)+x(ab+bc+ac)-abc$$ (This is essentially Vieta's formula). Plugging in your values, this polynomial becomes $$P(x)=x^{3}-2x^{2}+x-abc$$ Further, since $a$, $b$, $c$ are roots, $$P(a)=a(a^{2}-2a+1-bc)=0$$ $$P(b)=b(b^{2}-2b+1-ac)=0$$ $$P(c)=c(c^{2}-2c+1-ab)=0$$ We assume $a,b,c$ are non-zero. Then dividing each equation by its root gives us: $$a^{2}-2a+1-bc=0$$ $$b^{2}-2b+1-ac=0$$ $$c^{2}-2c+1-ab=0$$ Adding up the above three, and using $a+b+c=2$ and $ab+bc+ac=1$: $$a^{2}+b^{2}+c^{2}=2(a+b+c)-3+(ab+bc+ca)$$ $$=2$$ Now we can tackle your problem. Just expand: $$(a+b)^{2}+(b+c)^{2}+(a+c)^{2}$$ $$=2(a^2+b^2+c^2)+2(ab+bc+ca)=2\cdot 2+2=6$$
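A quick numerical sanity check (my own addition): the triple $a=b=1$, $c=0$ satisfies both constraints, and the shortcut identity $\sum (a+b)^2 = 2(a+b+c)^2 - 2(ab+bc+ca)$ reaches the same value of $6$ without the nonzero assumption used in the answer.

```python
a, b, c = 1, 1, 0          # satisfies a+b+c = 2 and ab+bc+ca = 1
assert a + b + c == 2
assert a*b + b*c + c*a == 1

value = (a + b)**2 + (b + c)**2 + (c + a)**2
print(value)  # 6

# Identity-based shortcut: expanding gives 2*(a^2+b^2+c^2) + 2*(ab+bc+ca),
# and a^2+b^2+c^2 = (a+b+c)^2 - 2*(ab+bc+ca), so the whole expression
# equals 2*(a+b+c)^2 - 2*(ab+bc+ca).
shortcut = 2*(a + b + c)**2 - 2*(a*b + b*c + c*a)
print(shortcut)  # 6
```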
TITLE: proof- can NOT be a linear combination QUESTION [1 upvotes]: How can I prove that $X^2-Y$ and $X-Y^2$ CAN NOT be written as combinations of the generators of $\langle X^3-Y^3,X^2Y-X\rangle$? REPLY [3 votes]: Suppose $X^2-Y=p(X,Y)(X^3-Y^3)+q(X,Y)(X^2Y-X)$. Evaluating at $X=0$ gives $Y=p(0,Y)Y^3$, which is impossible: the right-hand side is either $0$ or has degree at least $3$ in $Y$. The same technique works for $X-Y^2$.
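The substitution step can be replayed symbolically; a small sketch of mine, using SymPy, showing what each generator collapses to at $X=0$:

```python
import sympy as sp

X, Y = sp.symbols('X Y')
g1 = X**3 - Y**3
g2 = X**2*Y - X

# At X = 0 the generators become -Y**3 and 0, so any combination
# p*g1 + q*g2 evaluates at X = 0 to a polynomial multiple of Y**3 (or to 0).
print(g1.subs(X, 0))          # -Y**3
print(g2.subs(X, 0))          # 0

# But X**2 - Y evaluates at X = 0 to -Y, which is not such a multiple.
print((X**2 - Y).subs(X, 0))  # -Y
```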
\begin{document} \maketitle \begin{abstract} {\small \noindent A dynamic coloring of a graph $G$ is a proper coloring such that for every vertex $v\in V(G)$ of degree at least $2$, the neighbors of $v$ receive at least $2$ colors. The smallest integer $k$ such that $G$ has a dynamic coloring with $ k $ colors is called the {\it dynamic chromatic number} of $G$ and denoted by $\chi_2(G)$. In this paper we will show that if $G$ is a regular graph, then $ \chi_{2}(G)- \chi(G) \leq 2\lfloor \log^{\alpha(G)}_{2}\rfloor +3 $; that if $G$ is a graph with $\delta(G)\geq 2$, then $ \chi_{2}(G)- \chi(G) \leq \lceil \sqrt[\delta -1]{4\Delta^{2}} \rceil (\lfloor \log^{\alpha(G)}_{\frac{2\Delta(G)}{2\Delta(G)-\delta(G)}} \rfloor +1)+1 $; and that in the general case, if $G$ is a graph, then $ \chi_{2}(G)- \chi(G) \leq 3+ \min \lbrace \alpha(G),\alpha^{\prime}(G),\frac{\alpha(G)+\omega(G)}{2}\rbrace $. } \begin{flushleft} \noindent {\bf Key words:} Dynamic chromatic number; Independence number. \noindent {\bf Subject classification: 05C15, 05D40.} \end{flushleft} \end{abstract} \section{Introduction} \label{} All graphs in this paper are finite, undirected and simple. We follow the notation and terminology of \cite{MR1367739}. A {\it proper vertex coloring} of $G$ by $k$ colors is a function $c: V(G)\longrightarrow \lbrace 1, \ldots ,k\rbrace$ with this property: if $u,v\in V(G)$ are adjacent, then $c(u)$ and $c(v)$ are different. A {\it vertex $k$-coloring} is a proper vertex coloring by $k$ colors. We denote a bipartite graph $G$ with bipartition $(X,Y)$ by $G[X,Y]$. Let $G$ be a graph with a proper vertex coloring $c$. For every $v\in V (G)$, we denote the degree of $v$ in $G$, the neighbor set of $v$ and the color of $v$ by $d(v)$, $N(v)$, and $c(v)$, respectively. For any $S\subseteq V(G)$, $N(S)$ denotes the set of vertices of $G$ having at least one neighbour in $S$.
There are many ways to color the vertices of graphs; an interesting one was recently introduced by Montgomery et al. in \cite{Mont}. A proper vertex $k$-coloring of a graph $G$ is called {\it dynamic} if for every vertex $v$ with degree at least $2$, the neighbors of $v$ receive at least two different colors. The smallest integer $k$ such that $G$ has a dynamic $k$-coloring is called the {\it dynamic chromatic number} of $G$ and denoted by $\chi_2(G)$. There exists a generalization of the dynamic coloring of graphs \cite{Mont}. For an integer $r > 0$, a conditional $(k, r)-$coloring of a graph $G$ is a proper $k$-coloring of the vertices of $G$ such that every vertex $v$ of degree $d(v)$ in $G$ is adjacent to vertices with at least $\min \lbrace r, d(v) \rbrace$ different colors. The smallest integer $k$ for which a graph $G$ has a conditional $(k, r)-$coloring is called the $r$th order conditional chromatic number, denoted by $\chi_r(G)$. Conditional coloring is a generalization of the traditional vertex coloring, for which $r = 1$. From \cite{MR2251583} we know that if $\Delta(G)\leq 2$, then for any $r$ there is a simple polynomial-time algorithm that produces a $(k, r)-$coloring of $G$, but for any $k \geq 3$ and $r \geq 2$ it is $NP-$complete to decide whether a graph is $(k, r)-$colorable \cite{MR2483491}. Another concept related to dynamic coloring is hypergraph coloring. A hypergraph $ H $ is a pair $ (X,Y) $, where $ X $ is the set of vertices and $ Y $ is a set of non-empty subsets of $ X $, called edges. A proper coloring of $ H $ is a coloring of $ X $ such that for every edge $ e $ with $ \vert e \vert >1 $, there exist $ u,v\in e $ such that $ c(u)\neq c(v) $. For the hypergraph $ H=(X,Y) $, consider the bipartite graph $ \widehat{H} $ with two parts $ X $ and $ Y $, in which $ v\in X $ is adjacent to $ e\in Y $ if and only if $ v\in e $ in $ H $.
Now consider a dynamic coloring $ c $ of $ \widehat{H} $; by restricting $ c $ to $ X $, we clearly obtain a proper coloring of $ H $. The graph $G^{\frac{1}{2}}$ is said to be the $2$-subdivision of a graph $G$ if $G^{\frac{1}{2}}$ is obtained from $G$ by replacing each edge with a path with exactly one inner vertex \cite{MR2519165}. There exists a relationship between $ \chi(G)$ and $ \chi_{2}(G^{\frac{1}{2}}) $. We have $\chi(G) \leq \chi_{2}(G^{\frac{1}{2}}) $ and $ \chi(G^{\frac{1}{2}})=2 $. For example, it was shown in \cite{Mont} that if $ G\cong K_{n} $ then $ \chi_{2}(K_{n}^{\frac{1}{2}}) \geq n $. In the previous example and in Propositions $1$ and $2$, we present some graphs for which the difference between the chromatic number and the dynamic chromatic number is arbitrarily large. It seems that when $ \Delta (G) $ is close to $ \delta(G) $, $ \chi_{2}(G) $ is also close to $ \chi(G) $. Montgomery conjectured that for regular graphs the difference is at most $2$. \begin{conj} \noindent {\bf [Montgomery \rm \cite{Mont}\bf]} For any $r$-regular graph $G$, $\chi_2(G)-\chi(G)\leq 2$. \end{conj} In \cite{strongly} it was proved that if $G$ is a strongly regular graph and $G \neq C_{4},C_{5},K_{r,r}$, then $\chi_2(G)-\chi(G)\leq 1$. Also in \cite{Akbari} it was proved that if $G$ is an $r$-regular graph, then $\chi_2(G)-\chi(G)\leq \chi(G)$. Recently the dynamic coloring of Cartesian products of graphs has been studied in \cite{product}. We will prove some inequalities for the difference between the chromatic number and the dynamic chromatic number of regular graphs. For example, it was shown in \cite{Akbari} that if $G$ is a regular graph, then $\chi_{2}(G)- \chi(G) \leq \lceil\frac{\alpha(G)}{2} \rceil +1 $; we prove that if $G$ is a regular graph, then $ \chi_{2}(G)- \chi(G) \leq 2\lfloor\log^{\alpha(G)}_{2}\rfloor +3 $. In the general case, finding the optimal upper bound for $ \chi_{2}(G)-\chi(G) $ seems to be an intriguing problem.
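To make the dynamic condition concrete, here is a small checker (our illustration, not part of the paper) that tests whether a given coloring of a graph is dynamic; the helper `is_dynamic` and the adjacency encoding are ours, while the fact that $\chi(C_5)=3$ yet $\chi_2(C_5)=5$ is quoted from the paper.

```python
def is_dynamic(adj, coloring):
    """Return True if `coloring` is a proper coloring of the graph `adj`
    (dict: vertex -> list of neighbors) in which every vertex of degree
    at least 2 sees at least two colors among its neighbors."""
    for v, nbrs in adj.items():
        if any(coloring[v] == coloring[u] for u in nbrs):
            return False  # not a proper coloring
        if len(nbrs) >= 2 and len({coloring[u] for u in nbrs}) < 2:
            return False  # the neighborhood of v is monochromatic
    return True

# The 5-cycle C5: chi(C5) = 3, but chi_2(C5) = 5.
c5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(is_dynamic(c5, {0: 1, 1: 2, 2: 1, 3: 2, 4: 3}))  # proper but not dynamic
print(is_dynamic(c5, {i: i for i in range(5)}))         # 5 distinct colors: dynamic
```

The first coloring is proper, yet vertex $1$ sees only one color on its two neighbors, so it fails the dynamic condition.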
In this paper we will prove various inequalities relating it to other graph parameters. At the end we will introduce a generalization of Montgomery's Conjecture for the dynamic coloring of regular graphs. \begin{defi} Let $c$ be a proper vertex coloring of a graph $G$. Define $B_{c}$ as the set of vertices $v$ with $d(v)\geq 2$ whose neighbors all receive the same color, and let $A_{c}=V(G)\backslash B_{c}$. \end{defi} Now we state some lemmas and theorems without proof. \begin{thm} {\rm\cite{MR1991048}} For a connected graph $G$, if $\Delta(G)\leq 3$, then $\chi_{2}(G) \leq 4$ unless $G = C_{5}$, in which case $\chi_{2}(C_{5}) = 5$; and if $\Delta(G) \geq 4$, then $\chi_{2}(G) \leq \Delta(G)+ 1$. \end{thm} So if $\Delta(G)\leq 3$, then $\chi_{2}(G) \leq 5$. \begin{lem} \noindent {\bf [The Lovasz Local Lemma \rm \cite{MR1885388}\bf]} Suppose that $A_{1},\ldots ,A_{n}$ are events such that for each $i$, $Pr(A_{i})\leq p$ and $A_{i}$ is mutually independent of the set of all but at most $d$ other events. If $4pd\leq 1$, then with positive probability, none of the events occur. \end{lem} \begin{lem} \noindent {\rm \cite{Akbari}} Let $r \geq 4$ be a natural number. Suppose that $G [A,B]$ is a bipartite graph such that all vertices of Part $A$ have degree $r$ and all vertices of Part $B$ have degree at most $r$. Then one can color the vertices of Part $B$ with two colors such that the dynamic property holds for all vertices of Part $A$. \end{lem} \begin{lem} \noindent {\rm \cite{MR1367739}} A set of vertices in a graph is an independent dominating set if and only if it is a maximal independent set. \end{lem} So if $G$ is a graph, then $G$ has an independent dominating set $T$. \begin{thm} \noindent {\bf [Dirac \rm \cite{MR1367739}\bf]} If $G$ is a simple graph with at least three vertices and $ \delta(G) \geq \frac{n(G)}{2} $, then $G$ is Hamiltonian.
\end{thm} \section{Main Results} \label{} Before proving the theorems we need to prove some lemmas. \begin{lem} If $G$ is a graph, $G\neq \overline{K_{n}}$ and $T_{1}$ is an independent set of $G$, then there exists $T_{2}$ such that $T_{2}$ is an independent dominating set for $T_{1}$ and $ \vert T_{1} \cap T_{2}\vert \leq \frac{2\Delta(G) -\delta(G) }{2\Delta(G)} \vert T_{1}\vert$. \end{lem} \begin{proof}{ The proof is constructive. For each $u\in N(T_{1})$, define the variable $ f(u) $ as the number of vertices which are adjacent to $u$ and are in $T_{1}$. $\sum_{u\in N(T_{1})} f(u)$ is the number of edges of $G[T_{1},N(T_{1})]$, so $\sum_{u\in N(T_{1})} f(u)\geq \vert T_{1}\vert\delta(G)$. Let $T_{3}=T_{1}$, $T_{4}=\emptyset $, $s=0$, $i=1$ and repeat the following procedure until $\sum_{u\in N(T_{1})} f(u)=0$. \noindent {\bf Step 1.} Select a vertex $u$ such that $f(u)$ is maximum among $ \lbrace f(v) \vert v\in N(T_{1})\rbrace$ and add $u$ to the set $T_{4}$ and let $t_{i}=f(u)$. \noindent {\bf Step 2.} For each $v\in N(T_{1})$ that is adjacent to $u$, change the value of $ f(v) $ to $0$. Change the value of $ f(u) $ to $0$. \noindent {\bf Step 3.} For each $v\in N(T_{1})$ that is adjacent to at least one vertex of $N(u)\cap T_{3}$ and is not adjacent to $u$, decrease $ f(v)$ by the number of common neighbours of $v$ and $u$ in $T_{3}$. \noindent {\bf Step 4.} Remove the elements of $N(u)$ from $T_{3}$. Increase $s$ by $t_{i}$ and $i$ by $1$. When the above procedure terminates, by Steps $1$ and $2$, $ T_{4} $ is an independent set, and by Steps $1$ and $4$, $ T_{4} $ is a dominating set for $ T_{1} \backslash T_{3} $. Now let $T_{2}=T_{4}\cup T_{3}$; by Step $4$, $ T_{2} $ is an independent dominating set for $T_{1}$. Assume that the above procedure has $l$ iterations. By Step $4$, we have $ s=\sum_{i=1}^{l} t_{i} $.
Each vertex in $N(T_{1})$ has at most $\Delta(G)-1 $ neighbours in $N(T_{1}) $, so in Step $2$ of the $i$th iteration, $\sum_{u\in N(T_{1})} f(u)$ is decreased by at most $t_{i} \Delta(G)$, and in Step $3$ of the $i$th iteration, $\sum_{u\in N(T_{1})} f(u)$ is decreased by at most $ t_{i} \Delta(G)$; so in the $i$th iteration, $\sum_{u\in N(T_{1})} f(u)$ is decreased by at most $2 t_{i} \Delta(G)$. When the procedure terminates, $\sum_{ u\in N(T_{1})} f(u)=0 $, so: \begin{center} $ \delta(G)\vert T_{1}\vert - \sum_{ i=1 }^{ l } (2t_{i}\Delta(G)) \leq 0$, $ \delta(G)\vert T_{1}\vert - 2s\Delta(G) \leq 0$, $ s\geq \frac{\delta(G)}{2\Delta(G)} \vert T_{1}\vert$ , $\vert T_{1} \cap T_{2}\vert =\vert T_{3}\vert = \vert T_{1}\vert - s \leq \frac{2\Delta(G) -\delta(G) }{2\Delta(G)} \vert T_{1}\vert$. \end{center} }\end{proof} \begin{lem} If $G$ is a graph, $ \delta \geq 2 $ and $T$ is an independent set of $G$, then we can color the vertices of $T$ with $ \lceil (4\Delta^{2})^{\frac{1}{\delta-1}} \rceil $ colors such that for each $u\in \lbrace v \vert v\in V(G), N(v)\subseteq T\rbrace$, $N(u)$ has at least two different colors. \end{lem} \begin{proof}{ Let $ \eta = \lceil (4\Delta^{2})^{\frac{1}{\delta-1}} \rceil $. Color every vertex of $T$ randomly, independently, and uniformly with one color from $ \lbrace 1,\cdots,\eta \rbrace $. For each $u\in \lbrace v \vert v\in V(G), N(v)\subseteq T\rbrace$, let $A_{u}$ be the event that all of the neighbors of $ u $ have the same color. Each $A_{u}$ is mutually independent of all but at most $\Delta ^{2}$ of the other events $A_{v}$. Clearly, $Pr(A_{u})\leq \frac{1}{\eta^{\delta-1}}$. We have: \begin{center} $4pd= 4(\frac{1}{\eta})^{\delta-1} \Delta ^{2}\leq 1$. \end{center} So by the Local Lemma, with positive probability there exists a coloring of $T$ with our condition.
}\end{proof} \begin{lem} If $c$ is a vertex $k$-coloring of a graph $G$, then there exists a dynamic coloring of $G$ with at most $k+\vert B_{c}\vert$ colors. \end{lem} \begin{proof}{ Suppose that $B_{c}=\lbrace v_{1},\ldots ,v_{\vert B_{c}\vert}\rbrace$. For each $1\leq i\leq \vert B_{c}\vert$ select $u_{i}\in N(v_{i})$ and recolor $u_{i}$ with the color $k+i$. The result is a dynamic coloring of $G$ with at most $k+\vert B_{c} \vert$ colors. }\end{proof} \begin{thm} \label{t1} If $G$ is a graph and $\delta(G) \geq 2$, then $ \chi_{2}(G)- \chi(G) \leq \lceil \sqrt[\delta -1]{4\Delta^{2}} \rceil (\lfloor \log^{\alpha(G)}_{\frac{2\Delta(G)}{2\Delta(G)-\delta(G)}} \rfloor +1)+1 $. \end{thm} \begin{proof}{ Let $ \eta = \lceil \sqrt[\delta -1]{4\Delta^{2}} \rceil $ and $k=\lfloor \log^{\alpha(G)}_{\frac{2\Delta(G)}{2\Delta(G)-\delta(G)}}\rfloor +1$. By Lemma $ 3 $, let $T_{1}$ be an independent dominating set for $G$. Consider a vertex $\chi(G)$-coloring of $G$ and, by Lemma $ 5 $, recolor the vertices of $T_{1}$ with the colors $\chi +1,\ldots,\chi+\eta$ such that for each $u\in \lbrace v \vert v\in V(G), N(v)\subseteq T_{1}\rbrace$, $N(u)$ has at least two different colors. Therefore we obtain a coloring $c_{1}$ such that $B_{c_{1}}\subseteq T_{1}$. Let $T'_{1}=T_{1}$. For $i=2$ to $i=k$ repeat the following procedure: \noindent {\bf Step 1.} By Lemma $ 4 $, find an independent set $T_{i}$ such that $T_{i}$ is an independent dominating set for $T'_{i-1}$ and $\vert T_{i} \cap T'_{i-1} \vert\leq \frac{2\Delta(G) -\delta(G)} {2\Delta(G) } \vert T'_{i-1}\vert$. \noindent {\bf Step 2.} By Lemma $ 5 $, recolor the vertices of $T_{i}$ with the colors $\chi+\eta i-(\eta -1),\ldots, \chi+\eta i $ such that for each $u\in \lbrace v \vert v\in V(G), N(v)\subseteq T_{i}\rbrace$, $N(u)$ has at least two different colors. \noindent {\bf Step 3.} Let $ T'_{i}=T_{i} \cap T'_{i-1} $.
After each iteration of the above procedure we obtain a proper coloring $c_{i}$ such that $B_{c_{i}}\subseteq T'_{i}$; so when the procedure terminates, we have a coloring $c_{k}$ with at most $\chi(G) +\eta k $ colors such that $B_{c_{k}}\subseteq T'_{k}$ and $\vert T'_{k}\vert \leq 1$, so by Lemma $ 6 $ we have a dynamic coloring with at most $\chi(G) +\lceil \sqrt[\delta -1]{4\Delta^{2}} \rceil (\lfloor \log^{\alpha(G)}_{\frac{2\Delta(G)}{2\Delta(G)-\delta(G)}} \rfloor +1)+1 $ colors. }\end{proof} \begin{cor} If $G$ is a graph and $\Delta(G)\leq 2^{\frac{\delta(G) -3}{2}}$, then $ \chi_{2}(G)- \chi(G) \leq 2\lfloor \log^{\alpha(G)}_{\frac{2\Delta(G)}{2\Delta(G)-\delta(G)}} \rfloor +3 $. \end{cor} \begin{thm} If $G$ is an $r$-regular graph, then $ \chi_{2}(G)- \chi(G) \leq 2\lfloor \log^{\alpha(G)}_{2} \rfloor +3$. \end{thm} \begin{proof}{ If $r=0$, then the theorem is obvious. For $1\leq r\leq 3$, we have $\chi(G)\geq 2$ and, by Theorem $1$, $ \chi_{2}(G)\leq 5 $, so $\chi_{2}(G)\leq \chi(G) +3$. So assume that $ r \geq 4 $. We use a proof similar to that of Theorem $3$. In the proof of Theorem $3$, for each $i$, $ 1\leq i \leq k $, we used Lemma $ 5 $ to recolor the vertices of $T_{i}$ with the colors $\chi+\eta i-(\eta -1),\ldots, \chi+\eta i $ such that for each $u\in \lbrace v \vert v\in V(G), N(v)\subseteq T_{i}\rbrace$, $N(u)$ has at least two different colors. In the new proof, for each $i$, $ 1\leq i \leq k $, let $ A= \lbrace v \vert v\in V(G), N(v)\subseteq T_{i}\rbrace$ and $B=T_{i}$ and, by Lemma $2$, recolor the vertices of $T_{i}$ with the colors $ \chi+2i-1 $ and $ \chi+2i $ such that for each $u\in \lbrace v \vert v\in V(G), N(v)\subseteq T_{i}\rbrace$, $N(u)$ has at least two different colors. The other parts of the proof are similar. This completes the proof.
}\end{proof} \begin{defi} Suppose that $c$ is a proper vertex coloring of $G$. Let $H_{c}=G [B_{c}] $, define $X_{c}$ as the set of isolated vertices in $ H_{c} $, and let $ Y_{c}=G[B_{c}\backslash X_{c}] $. \end{defi} \begin{lem} If $c$ is a proper vertex coloring of $G$, then $ H_{c} $ is a bipartite graph. \end{lem} \begin{proof}{ Since the vertices of $ H_{c} $ are in $ B_{c} $, $ H_{c} $ has at most $2$ different colors in each of its connected components, so $ H_{c} $ is a bipartite graph. }\end{proof} \begin{lem} If $G$ is a simple graph, then there exists a vertex $(\chi(G)+3 )$-coloring $ c^{\prime} $ of $G$ such that $ B_{c^{\prime}} $ is an independent set. \end{lem} \begin{proof}{ If $ \chi(G) \leq 1 $, then the theorem is obvious, so suppose that $ \chi(G) \geq 2 $. Let $c$ be a vertex $(\chi(G)+1)$-coloring of $G$ such that $\vert B_{c}\vert$ is minimum. Every vertex in $B_{c}$ has a neighbour in $A_{c}$; otherwise there would exist $v\in B_{c}$ such that $ N(v)\subseteq B_{c} $, and since $\chi(G)+1\geq 3$, we could change the color of $v$ so that the size of $B_{c}$ decreases, a contradiction. Let $T_{c}=\lbrace v\in A_{c} \mid N(v)\subseteq Y_{c} \rbrace$. Clearly $ T_{c}\cup X_{c} $ is an independent set. Since $ Y_{c}\subseteq H_{c} $, by Lemma $7$, $ Y_{c} $ is a bipartite graph. Properly recolor the vertices of $ Y_{c} $ with the colors $ \chi+2 $, $ \chi+3 $, retain the color of the other vertices, and name this coloring $c'$. Clearly $ Y_{c'}\subseteq Y_{c} $. Since every vertex of $ Y_{c} $ has at least one neighbour in $ A_{c} $ and one neighbour in $ Y_{c} $, $ Y_{c} $ is a subset of $A_{c'}$. So $Y_{c'}=\emptyset$ and $X_{c'}\subseteq T_{c}\cup X_{c}$. Therefore $ c^{\prime} $ is a vertex $(\chi(G)+3 )$-coloring of $G$ such that $ B_{c^{\prime}} $ is an independent set. }\end{proof} \begin{cor} If $G$ is a simple graph, then $ \chi_{2}(G)-\chi(G)\leq \alpha(G)+3 $.
\end{cor} \begin{proof}{ By Lemma $ 8 $ and Lemma $ 6 $ the proof is easy. }\end{proof} \begin{thm} If $G$ is a simple graph, then $ \chi_{2}(G)-\chi(G)\leq \alpha(G)+1 $. \end{thm} \begin{proof}{ For $\chi(G)=1 $ the theorem is obvious, and if $\chi(G)=2 $, then $ \Delta(G)\leq\alpha(G) $, so by Theorem $1$, $ \chi_{2}(G)\leq \Delta(G) +2 \leq \alpha(G)+ \chi(G)$. If $ \alpha(G)=1 $, then $ \chi_{2}(G)= \chi(G) $. Suppose that $G$ is connected, otherwise we apply the following proof to each of its connected components; also suppose $\chi(G)\geq 3$ and $ \alpha(G)\geq 2 $. Let $c$ be a vertex $\chi(G)$-coloring of $G$ such that $\vert B_{c}\vert$ is minimum. Every vertex in $B_{c}$ has a neighbour in $A_{c}$; otherwise there would exist $v\in B_{c}$ such that $ N(v)\subseteq B_{c} $, and since $\chi(G)\geq 3$, changing the color of $v$ would decrease the size of $B_{c}$. Now, two cases can be considered: Case 1. $Y_{c}=\emptyset$. In this case we have $B_{c}=X_{c}$. Since $X_{c}$ is an independent set, $\vert X_{c} \vert \leq \alpha(G)$. By Lemma $ 6 $, there exists a dynamic coloring with $\chi(G)+\alpha(G) $ colors. Case 2. $Y_{c}\neq\emptyset$. Let $T_{c}=\lbrace v\in A_{c} \mid N(v)\subseteq Y_{c} \rbrace$. Clearly $ T_{c}\cup X_{c} $ is an independent set. Since $ Y_{c}\subseteq H_{c} $, by Lemma $7$, $ Y_{c} $ is a bipartite graph. Properly color the vertices of $ Y_{c} $ with the colors $ \chi+1 $, $ \chi+2 $, retain the color of the other vertices, and name this coloring $c'$. Clearly $ Y_{c'}\subseteq Y_{c} $. Since every vertex of $ Y_{c} $ has at least one neighbour in $ A_{c} $ and one neighbour in $ Y_{c} $, $ Y_{c} $ is a subset of $A_{c'}$. So $Y_{c'}=\emptyset$ and $X_{c'}\subseteq T_{c}\cup X_{c}$. If $\vert X_{c'}\vert \leq 1 $, then the theorem is obvious, so assume $ \vert X_{c'}\vert \geq 2 $. Let $ v\in X_{c'} $. Now we have two situations: Case 2A. $ N(v) \cap N(X_{c'}\setminus\lbrace v\rbrace) =\emptyset$.
So $ N(v) \cup X_{c'}\setminus \lbrace v \rbrace $ is an independent set, so $ \vert N(v) \cup X_{c'}\setminus \lbrace v \rbrace \vert \leq \alpha(G)$. Since $ \vert N(v) \vert \geq 2 $, $ \vert X_{c'} \vert \leq \alpha(G)-1$. By Lemma $ 6 $, there exists a dynamic coloring with ${\displaystyle{\chi(G)+2+(\alpha(G)-1)}} $ colors. Case 2B. $ N(v) \cap N(X_{c'}\setminus\lbrace v\rbrace) \neq\emptyset$. Therefore there exists a vertex $ u\in X_{c'} $ such that $ N(v) \cap N(u)\neq \emptyset $. Color one of the common neighbours of $ v $ and $ u $ with the color $ \chi+3 $, retain the color of the other vertices, and name this coloring $c''$. Clearly $ B_{c''}\subseteq X_{c'} \setminus \lbrace v,u\rbrace $, so $ \vert B_{c''} \vert \leq \vert X_{c'}\setminus \lbrace v,u\rbrace \vert \leq \alpha(G)-2 $. By Lemma $ 6 $, there exists a dynamic coloring with $\chi(G)+3+(\alpha(G)-2) $ colors. }\end{proof} For $G=C_{4},C_{5}$, we have $ \chi_{2}(G)= \chi(G)+\alpha(G) $. \begin{prop} For any two numbers $a$, $b$ $ (a\geq b \geq 3) $, there exists a graph $G$ such that $ \chi(G)=a $, $ \alpha(G) =b $ and $ \chi_{2}(G)-\chi(G)=\alpha(G) -1 $. \end{prop} \begin{proof}{ For any two numbers $a$, $b$ $ (a\geq b \geq 3) $, consider the graph $G$ derived from $K_{a+b-1}$ by replacing the edges of a matching of size $ b-1 $ with $ P_{3} $. It is easy to see that $ G $ satisfies the conditions of the proposition. }\end{proof} \begin{thm} If $G$ is a simple graph, then $ \chi_{2}(G)-\chi(G) \leq \frac{\alpha(G)+\omega(G) }{2} +3 $. \end{thm} \begin{proof}{ For $\chi(G)=1 $ the theorem is obvious. Suppose that $G$ is connected with $ \chi(G) \geq 2 $; otherwise we apply the following proof to each of its connected components. By Lemma $8$, suppose that $ c $ is a vertex $(\chi(G)+3 )$-coloring of $G$ such that $ B_{c} $ is an independent set. Also suppose that $ T_{1} $ is a maximal independent set that contains $ B_{c} $.
Consider a partition $ \lbrace \lbrace v_{1},v_{2} \rbrace ,\ldots, \lbrace v_{2s-1},v_{2s}\rbrace ,T_{2}=\lbrace v_{2s+1},\ldots ,v_{l} \rbrace \rbrace $ of the vertices of $T_{1} $ such that for $ 1\leq i \leq s $, $ N(v_{2i-1})\cap N(v_{2i})\neq \emptyset$, and for $ 2s < i <j \leq l $, $ N(v_{i})\cap N(v_{j})=\emptyset $. For $ 1\leq i \leq s $, choose $ w_{i}\in N(v_{2i-1})\cap N(v_{2i})$ and recolor $ w_{i} $ with the color $ \chi+3+i $; call the resulting coloring $c^{\prime}$. Consider a partition $ \lbrace\lbrace v_{2s+1},v_{2s+2} \rbrace , \ldots ,\lbrace v_{2t-1},v_{2t} \rbrace , T_{3}=\lbrace v_{2t+1},\ldots ,v_{l} \rbrace \rbrace $ of the vertices of $ T_{2} $ such that for $ s < i \leq t $ there exist nonadjacent vertices $ u_{2i-1}\in N(v_{2i-1}) $ and $ u_{2i}\in N(v_{2i}) $, and for $ 2t < i <j \leq l $, every neighbour of $v_{i} $ is adjacent to every neighbour of $v_{j} $. For $ s < i \leq t $, fix such nonadjacent vertices $ u_{2i-1}\in N(v_{2i-1}) $ and $ u_{2i}\in N(v_{2i}) $. Now if $ c(u_{2i-1}) \neq c^{\prime}(u_{2i-1}) $, then recolor $u_{2i-1}$ with the color $ \chi+3+i $, and likewise if $ c(u_{2i}) \neq c^{\prime}(u_{2i}) $, then recolor $u_{2i}$ with the color $ \chi+3+i $. After this procedure we obtain a coloring; call it $ c^{\prime\prime} $. We claim that $c^{\prime\prime}$ satisfies the condition of a dynamic coloring: if $ z $ is a vertex with $ N(z)=\lbrace u_{2i-1},u_{2i} \rbrace $ for some $ s < i \leq t $ and $ c^{\prime\prime}(u_{2i-1})=c^{\prime\prime}(u_{2i} )$, then $ c^{\prime\prime}(u_{2i-1})=c^{\prime\prime}(u_{2i} )=\chi+3+i$. This means that $ c(u_{2i})=c^{\prime}(u_{2i} )$, so $u_{2i} $ is a common neighbour of $ v_{2i} $ and $ z $, and hence $ z\in T_{1} $. Therefore $\lbrace z , v_{2i-1} \rbrace \subseteq T_{1}\setminus T_{2}$, a contradiction. For $ v_{i}\in T_{3} $ let $ x_{i}\in N(v_{i}) $, and set $X= \lbrace x_{i} \mid v_{i}\in T_{3} \rbrace $. 
The vertices of $X $ form a clique; recolor each of them with a distinct new color. We have $ \vert X \vert =l-2t\leq \omega(G) $. Therefore: \begin{center} $\chi_{2}(G)-\chi(G) \leq s+(t-s)+(l-2t)+3 \leq \frac{\alpha(G) +\omega(G)}{2}+3$. \end{center} }\end{proof} \begin{cor} If $G$ is a triangle-free graph, then $ \chi_{2}(G)-\chi(G) \leq \frac{\alpha(G) }{2} +4 $. \end{cor} If $G$ is an $r$-regular graph and $ r > \frac{n}{2} $, then every vertex $ v\in V(G) $ lies in a triangle, and therefore $ \chi_{2}(G)=\chi(G)$. In the next theorem, we present an upper bound for the dynamic chromatic number of an $r$-regular graph $G$ in terms of $ n $ and $ r $. \begin{thm} If $G$ is an $r$-regular graph with $ n $ vertices, then $ \chi_{2}(G)-\chi(G) \leq 2 \lceil \frac{n}{r} \rceil -2$. \end{thm} \begin{proof}{ If $ r\leq 2 $, then the theorem is obvious. For $ r=3 $ and $ n\geq 8 $, every vertex $v$ satisfies $ d_{\overline{G}}(v) \geq \frac{n}{2}$, so by Theorem $ 2 $, $\overline{G} $ is Hamiltonian and hence has a perfect matching. Therefore $G$ has a vertex $ ( \frac{n}{2} )$-coloring $ c $ in which every color is used on exactly two vertices. Then $ c $ is a dynamic coloring and we have $ \chi_{2}(G) \leq \frac{n}{2} \leq \chi(G) + 2 \lceil \frac{n}{r} \rceil -2 $. For graphs with $ r=3 $ and $ n \leq 7 $, the theorem is again obvious. Therefore suppose that $ r\geq 4 $ and let $ c $ be a vertex $ \chi(G) $-coloring of $G$. For every $ 1 \leq k \leq \lceil \frac{n}{r} \rceil -1 $, let $ T_{k} $ be a maximum independent set of $ G \setminus \cup _{i=1}^{k-1}T_{i}$. By Lemma $ 2 $, recolor the vertices of $T_{1}$ with two new colors such that every $u\in \lbrace v \mid v\in V(G), N(v)\subseteq T_{1}\rbrace$ sees two different colors in $N(u)$. Therefore $G$ has a coloring $ c^{\prime} $ with $ \chi(G) +2 $ colors such that $ B_{c^{\prime}}\subseteq T_{1} $. 
Similarly, by Lemma $ 3 $, recoloring every $ T_{k} $ $( 2 \leq k \leq \lceil \frac{n}{r} \rceil -1 ) $ with two new colors yields a coloring $ c^{\prime\prime} $ of $ G $ such that every vertex $ v\in V(G) $ with $ N(v)\subseteq T_{k} $ for some $ k $ sees at least two different colors in its neighbourhood. We claim that $ c^{\prime\prime} $ is a dynamic coloring. Otherwise, suppose that $ u\in B_{c^{\prime\prime}} $; then $ u\in T_{1} $, so $ N(u) $ is an independent set and $ N(u) \cap ( \cup_{k=1}^{\lceil \frac{n}{r} \rceil -1} T_{k})= \emptyset$. Considering the definitions of the $ T_{k} $ $ ( 1 \leq k \leq \lceil \frac{n}{r} \rceil -1) $, we have: \begin{center} $ r= \vert N(u) \vert \leq \vert T_{\lceil \frac{n}{r} \rceil-1}\vert \leq \vert T_{\lceil \frac{n}{r} \rceil-2}\vert \leq \ldots \leq \vert T_{2}\vert $, \end{center} and $ \vert T_{2}\vert < \vert T_{1}\vert $, since otherwise $ T_{2} \cup \lbrace u\rbrace $ would be an independent set with $ \vert T_{2} \cup \lbrace u \rbrace \vert > \vert T_{1} \vert$, contradicting the maximality of $ T_{1} $. Therefore $ n \geq r \lceil \frac{n}{r} \rceil +1 $, a contradiction. This completes the proof. }\end{proof} For any number $a \geq 3$, there is a graph $G$ with $ \alpha^{\prime}(G) =a $ such that $\chi_2(G)-\chi(G) \geq \alpha^{\prime}(G)-2$. To see this, for a given number $a$ consider the bipartite graph $G$ with parts $A$ and $B$, where the vertices of $A$ correspond to the $2$-subsets of $\{1,\ldots ,a\}$ and $B=\{1, \ldots, a\}$; the vertex of $A$ corresponding to $\{i,j\}$ is joined to the vertices $i$ and $j$ in $B$ \cite{akbari2}. \begin{thm} If $G$ is a simple graph, then $ \chi_{2}(G)-\chi(G) \leq \alpha^{\prime}(G) +3 $. \end{thm} \begin{proof}{ Let $G$ be a simple graph. By Lemma $8$, there is a vertex $ (\chi(G)+3 )$-coloring $c$ of $G$ such that $ B_{c} $ is an independent set. 
Let $ M=\lbrace v_{1}u_{1},\ldots,v_{\alpha^{\prime}}u_{\alpha^{\prime}} \rbrace $ be a maximum matching of $G$ and $ W=\lbrace v_{1},u_{1},\ldots,v_{\alpha^{\prime}},u_{\alpha^{\prime}} \rbrace $. Let $ X=B_{c} \cap W $ and $ Y= \lbrace v_{i}\mid u_{i}\in X \rbrace \cup \lbrace u_{i}\mid v_{i}\in X \rbrace$. Recolor each vertex of $Y$ with a distinct new color. Also recolor each vertex in $ N(B_{c}\setminus X) \cap W $ with a distinct new color. Call this coloring $ c^{\prime} $. We claim that $ c^{\prime} $ is a dynamic coloring of $G$. In order to complete the proof, it is enough to show that at most $\alpha^{\prime}(G)$ new colors are used in $ c^{\prime} $. If $ e=v_{i}u_{i} $ $ (1 \leq i \leq \alpha^{\prime}(G) ) $ is an edge of $ M $ such that $ c(v_{i})\neq c^{\prime}(v_{i}) $ and $ c(u_{i})\neq c^{\prime}(u_{i}) $, then three cases can be considered: Case A: $ v_{i},u_{i}\in Y $. This means that $ \lbrace v_{i},u_{i} \rbrace \subseteq B_{c} $, so $ v_{i} $ and $ u_{i} $ are adjacent vertices of $ B_{c} $, contradicting the fact that $ B_{c} $ is an independent set. Case B: $( v_{i}\in Y$ and $u_{i}\notin Y )$ or $( u_{i}\in Y$ and $v_{i}\notin Y )$. Without loss of generality suppose that $v_{i}\in Y$ and $u_{i}\notin Y$. Then $ u_{i}\in X $ and $ v_{i}\notin X $, so there exists $ u^{\prime} \in B_{c} $ such that $u^{\prime} u_{i} \in E(G) $, again contradicting the fact that $ B_{c} $ is an independent set. Case C: $ v_{i},u_{i}\notin Y $. This means that $ v_{i},u_{i}\notin X $ and there exist $ v^{\prime},u^{\prime}\in B_{c} $ such that $ v^{\prime}v_{i},u^{\prime}u_{i}\in E(G)$. Then $ M^{\prime}= (M \setminus \lbrace v_{i}u_{i}\rbrace ) \cup \lbrace v^{\prime}v_{i},u^{\prime}u_{i}\rbrace$ is a matching larger than $ M $, a contradiction. Therefore, for each $ 1\leq i \leq \alpha^{\prime}(G) $, at most one of $ v_{i}$ and $u_{i} $ is recolored, and this completes the proof. 
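The small exact values used in this section, such as $\chi_{2}(C_{4})=\chi(C_{4})+\alpha(C_{4})=4$ and $\chi_{2}(C_{5})=\chi(C_{5})+\alpha(C_{5})=5$, can be verified by exhaustive search directly from the definition of a dynamic coloring (a proper coloring in which every vertex of degree at least $2$ sees at least two colors in its neighbourhood). A minimal brute-force sketch, feasible only for tiny graphs; the function names below are ours, not from the paper:

```python
from itertools import product

def is_dynamic(adj, coloring):
    """True iff `coloring` is proper and every vertex of degree >= 2
    sees at least two colors in its neighbourhood."""
    for v, nbrs in adj.items():
        if any(coloring[v] == coloring[u] for u in nbrs):
            return False  # not a proper coloring
        if len(nbrs) >= 2 and len({coloring[u] for u in nbrs}) < 2:
            return False  # neighbourhood of v is monochromatic
    return True

def dynamic_chromatic_number(adj):
    """Smallest k admitting a dynamic k-coloring (exponential search)."""
    verts = sorted(adj)
    for k in range(1, len(verts) + 1):
        for colors in product(range(k), repeat=len(verts)):
            if is_dynamic(adj, dict(zip(verts, colors))):
                return k

def cycle(n):
    """Adjacency lists of the cycle C_n on vertices 0..n-1."""
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

print(dynamic_chromatic_number(cycle(4)))  # -> 4 = chi(C_4) + alpha(C_4)
print(dynamic_chromatic_number(cycle(5)))  # -> 5 = chi(C_5) + alpha(C_5)
```

For $C_5$ the search confirms that four colors never suffice: the dynamic condition forces $c_{i-1}\neq c_{i+1}$ in addition to properness, so all five colors must be distinct.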
}\end{proof} \section{A Generalization of Montgomery's Conjecture} \label{} \begin{prop} For every two numbers $ a $ and $ b $ $ (a\geq b \geq 2) $, there exists a graph $ G_{a,b} $ such that $ \delta(G_{a,b})=b $, $ \Delta(G_{a,b})=a+1 $ and $ \chi_{2}(G_{a,b} ) - \chi(G_{a,b}) = \lceil \frac{\Delta(G_{a,b})}{\delta(G_{a,b})} \rceil $. \end{prop} \begin{proof}{ Given two numbers $ a $ and $ b $ $ (a\geq b \geq 2) $, define the graph $ G_{a,b} $ as follows. Write $ a=bc+d $ with $ 0\leq d < b $. \begin{center} $ V( G_{a,b} )=\lbrace v_{i,j} \mid 1\leq i \leq b, 1 \leq j\leq c+1 \rbrace$ $\cup \lbrace v_{i,c+2} \mid 1 \leq i\leq d \rbrace \cup \lbrace w_{k} \mid 1\leq k \leq c+1\rbrace $, $ E(G_{a,b})=\lbrace v_{i_{1},j_{1}}v_{i_{2},j_{2}}\mid j_{1}\neq j_{2},\lbrace j_{1},j_{2}\rbrace \neq \lbrace c+1 ,c+2 \rbrace \rbrace \cup \lbrace v_{i,j}w_{k} \mid j=k \rbrace $. \end{center} We have $ \chi(G_{a,b})=c+1$ and $ \chi_{2}(G_{a,b} )=2(c+1)$; it is easy to see that $G_{a,b}$ satisfies the conditions of the proposition. 
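As a sanity check, the construction can be generated and its degrees verified by brute force. The sketch below (the helper names \texttt{build\_G} and \texttt{degrees} are ours, not from the paper) builds $G_{a,b}$ directly from the definition above and confirms $\delta(G_{3,2})=b=2$ and $\Delta(G_{3,2})=a+1=4$ for the graph drawn in Figure 1.

```python
from itertools import combinations

def build_G(a, b):
    """Vertex and edge sets of G_{a,b} as defined above, where a = b*c + d, 0 <= d < b."""
    c, d = divmod(a, b)
    V = [('v', i, j) for j in range(1, c + 2) for i in range(1, b + 1)]
    V += [('v', i, c + 2) for i in range(1, d + 1)]
    V += [('w', k) for k in range(1, c + 2)]
    E = set()
    for x, y in combinations(V, 2):
        if x[0] == 'v' and y[0] == 'v':
            # v-vertices in different columns, except columns c+1 and c+2
            if x[2] != y[2] and {x[2], y[2]} != {c + 1, c + 2}:
                E.add((x, y))
        else:
            # join v_{i,j} to w_k iff j = k (w-w pairs fall through harmlessly)
            v, w = (x, y) if x[0] == 'v' else (y, x)
            if v[0] == 'v' and w[0] == 'w' and v[2] == w[1]:
                E.add((x, y))
    return V, E

def degrees(V, E):
    deg = {v: 0 for v in V}
    for x, y in E:
        deg[x] += 1
        deg[y] += 1
    return deg

V, E = build_G(3, 2)          # the graph G_{3,2} of Figure 1: 7 vertices
deg = degrees(V, E)
print(min(deg.values()), max(deg.values()))  # -> 2 4, i.e. delta = b, Delta = a+1
```

The same check applies to any admissible pair, e.g. $G_{4,2}$ has $\delta=2$ and $\Delta=5$.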
}\end{proof} \begin{figure} \begin{center} \begin{picture}(154.95,152.6)(0,0) \linethickness{0.3mm} \qbezier(48,92)(47.98,92.85)(47.41,93.41) \qbezier(47.41,93.41)(46.85,93.98)(46,94) \qbezier(46,94)(45.15,93.98)(44.59,93.41) \qbezier(44.59,93.41)(44.02,92.85)(44,92) \qbezier(44,92)(44.02,91.15)(44.59,90.59) \qbezier(44.59,90.59)(45.15,90.02)(46,90) \qbezier(46,90)(46.85,90.02)(47.41,90.59) \qbezier(47.41,90.59)(47.98,91.15)(48,92) \linethickness{0.3mm} \qbezier(48,20)(47.98,20.85)(47.41,21.41) \qbezier(47.41,21.41)(46.85,21.98)(46,22) \qbezier(46,22)(45.15,21.98)(44.59,21.41) \qbezier(44.59,21.41)(44.02,20.85)(44,20) \qbezier(44,20)(44.02,19.15)(44.59,18.59) \qbezier(44.59,18.59)(45.15,18.02)(46,18) \qbezier(46,18)(46.85,18.02)(47.41,18.59) \qbezier(47.41,18.59)(47.98,19.15)(48,20) \linethickness{0.3mm} \qbezier(38,58)(37.98,58.85)(37.41,59.41) \qbezier(37.41,59.41)(36.85,59.98)(36,60) \qbezier(36,60)(35.15,59.98)(34.59,59.41) \qbezier(34.59,59.41)(34.02,58.85)(34,58) \qbezier(34,58)(34.02,57.15)(34.59,56.59) \qbezier(34.59,56.59)(35.15,56.02)(36,56) \qbezier(36,56)(36.85,56.02)(37.41,56.59) \qbezier(37.41,56.59)(37.98,57.15)(38,58) \linethickness{0.3mm} \qbezier(102,92)(101.98,92.85)(101.41,93.41) \qbezier(101.41,93.41)(100.85,93.98)(100,94) \qbezier(100,94)(99.15,93.98)(98.59,93.41) \qbezier(98.59,93.41)(98.02,92.85)(98,92) \qbezier(98,92)(98.02,91.15)(98.59,90.59) \qbezier(98.59,90.59)(99.15,90.02)(100,90) \qbezier(100,90)(100.85,90.02)(101.41,90.59) \qbezier(101.41,90.59)(101.98,91.15)(102,92) \linethickness{0.3mm} \qbezier(102,20)(101.98,20.85)(101.41,21.41) \qbezier(101.41,21.41)(100.85,21.98)(100,22) \qbezier(100,22)(99.15,21.98)(98.59,21.41) \qbezier(98.59,21.41)(98.02,20.85)(98,20) \qbezier(98,20)(98.02,19.15)(98.59,18.59) \qbezier(98.59,18.59)(99.15,18.02)(100,18) \qbezier(100,18)(100.85,18.02)(101.41,18.59) \qbezier(101.41,18.59)(101.98,19.15)(102,20) \linethickness{0.3mm} \qbezier(92,58)(91.98,58.85)(91.41,59.41) 
\qbezier(91.41,59.41)(90.85,59.98)(90,60) \qbezier(90,60)(89.15,59.98)(88.59,59.41) \qbezier(88.59,59.41)(88.02,58.85)(88,58) \qbezier(88,58)(88.02,57.15)(88.59,56.59) \qbezier(88.59,56.59)(89.15,56.02)(90,56) \qbezier(90,56)(90.85,56.02)(91.41,56.59) \qbezier(91.41,56.59)(91.98,57.15)(92,58) \linethickness{0.3mm} \qbezier(154,92)(153.98,92.85)(153.41,93.41) \qbezier(153.41,93.41)(152.85,93.98)(152,94) \qbezier(152,94)(151.15,93.98)(150.59,93.41) \qbezier(150.59,93.41)(150.02,92.85)(150,92) \qbezier(150,92)(150.02,91.15)(150.59,90.59) \qbezier(150.59,90.59)(151.15,90.02)(152,90) \qbezier(152,90)(152.85,90.02)(153.41,90.59) \qbezier(153.41,90.59)(153.98,91.15)(154,92) \linethickness{0.3mm} \put(48,92){\line(1,0){50}} \linethickness{0.3mm} \put(48,20){\line(1,0){50}} \linethickness{0.3mm} \multiput(46,22)(0.12,0.15){450}{\line(0,1){0.15}} \linethickness{0.3mm} \multiput(46,90)(0.12,-0.15){450}{\line(0,-1){0.15}} \linethickness{0.3mm} \multiput(90,56)(0.12,-0.41){83}{\line(0,-1){0.41}} \linethickness{0.3mm} \multiput(90,60)(0.12,0.36){83}{\line(0,1){0.36}} \linethickness{0.3mm} \put(38,58){\line(1,0){50}} \linethickness{0.3mm} \multiput(36,56)(0.12,-0.41){83}{\line(0,-1){0.41}} \linethickness{0.3mm} \multiput(36,60)(0.12,0.45){67}{\line(0,1){0.45}} \linethickness{0.3mm} \qbezier(152,94)(127.34,108.67)(99.71,109.01) \qbezier(99.71,109.01)(72.09,109.36)(48,94) \linethickness{0.3mm} \multiput(46,18)(0.22,-0.13){2}{\line(1,0){0.22}} \multiput(46.43,17.75)(0.22,-0.12){2}{\line(1,0){0.22}} \multiput(46.87,17.5)(0.22,-0.12){2}{\line(1,0){0.22}} \multiput(47.3,17.25)(0.22,-0.12){2}{\line(1,0){0.22}} \multiput(47.74,17.01)(0.22,-0.12){2}{\line(1,0){0.22}} \multiput(48.18,16.77)(0.22,-0.12){2}{\line(1,0){0.22}} \multiput(48.62,16.53)(0.22,-0.12){2}{\line(1,0){0.22}} \multiput(49.06,16.3)(0.22,-0.12){2}{\line(1,0){0.22}} \multiput(49.51,16.07)(0.22,-0.11){2}{\line(1,0){0.22}} \multiput(49.95,15.84)(0.22,-0.11){2}{\line(1,0){0.22}} 
\multiput(50.4,15.61)(0.22,-0.11){2}{\line(1,0){0.22}} \multiput(50.85,15.39)(0.23,-0.11){2}{\line(1,0){0.23}} \multiput(51.3,15.17)(0.23,-0.11){2}{\line(1,0){0.23}} \multiput(51.75,14.96)(0.23,-0.11){2}{\line(1,0){0.23}} \multiput(52.21,14.74)(0.23,-0.1){2}{\line(1,0){0.23}} \multiput(52.66,14.53)(0.23,-0.1){2}{\line(1,0){0.23}} \multiput(53.12,14.33)(0.23,-0.1){2}{\line(1,0){0.23}} \multiput(53.57,14.12)(0.23,-0.1){2}{\line(1,0){0.23}} \multiput(54.03,13.92)(0.23,-0.1){2}{\line(1,0){0.23}} \multiput(54.49,13.73)(0.23,-0.1){2}{\line(1,0){0.23}} \multiput(54.96,13.53)(0.23,-0.1){2}{\line(1,0){0.23}} \multiput(55.42,13.34)(0.23,-0.09){2}{\line(1,0){0.23}} \multiput(55.88,13.16)(0.23,-0.09){2}{\line(1,0){0.23}} \multiput(56.35,12.97)(0.23,-0.09){2}{\line(1,0){0.23}} \multiput(56.82,12.79)(0.47,-0.18){1}{\line(1,0){0.47}} \multiput(57.28,12.61)(0.47,-0.17){1}{\line(1,0){0.47}} \multiput(57.75,12.44)(0.47,-0.17){1}{\line(1,0){0.47}} \multiput(58.22,12.27)(0.47,-0.17){1}{\line(1,0){0.47}} \multiput(58.7,12.1)(0.47,-0.16){1}{\line(1,0){0.47}} \multiput(59.17,11.94)(0.47,-0.16){1}{\line(1,0){0.47}} \multiput(59.64,11.78)(0.48,-0.16){1}{\line(1,0){0.48}} \multiput(60.12,11.62)(0.48,-0.15){1}{\line(1,0){0.48}} \multiput(60.6,11.46)(0.48,-0.15){1}{\line(1,0){0.48}} \multiput(61.07,11.31)(0.48,-0.15){1}{\line(1,0){0.48}} \multiput(61.55,11.17)(0.48,-0.14){1}{\line(1,0){0.48}} \multiput(62.03,11.02)(0.48,-0.14){1}{\line(1,0){0.48}} \multiput(62.51,10.88)(0.48,-0.14){1}{\line(1,0){0.48}} \multiput(62.99,10.74)(0.48,-0.13){1}{\line(1,0){0.48}} \multiput(63.47,10.61)(0.48,-0.13){1}{\line(1,0){0.48}} \multiput(63.96,10.48)(0.48,-0.13){1}{\line(1,0){0.48}} \multiput(64.44,10.35)(0.49,-0.12){1}{\line(1,0){0.49}} \multiput(64.93,10.22)(0.49,-0.12){1}{\line(1,0){0.49}} \multiput(65.41,10.1)(0.49,-0.12){1}{\line(1,0){0.49}} \multiput(65.9,9.98)(0.49,-0.11){1}{\line(1,0){0.49}} \multiput(66.39,9.87)(0.49,-0.11){1}{\line(1,0){0.49}} \multiput(66.88,9.76)(0.49,-0.11){1}{\line(1,0){0.49}} 
\multiput(67.36,9.65)(0.49,-0.1){1}{\line(1,0){0.49}} \multiput(67.85,9.55)(0.49,-0.1){1}{\line(1,0){0.49}} \multiput(68.34,9.45)(0.49,-0.1){1}{\line(1,0){0.49}} \multiput(68.84,9.35)(0.49,-0.09){1}{\line(1,0){0.49}} \multiput(69.33,9.25)(0.49,-0.09){1}{\line(1,0){0.49}} \multiput(69.82,9.16)(0.49,-0.09){1}{\line(1,0){0.49}} \multiput(70.31,9.08)(0.49,-0.08){1}{\line(1,0){0.49}} \multiput(70.81,8.99)(0.49,-0.08){1}{\line(1,0){0.49}} \multiput(71.3,8.91)(0.49,-0.08){1}{\line(1,0){0.49}} \multiput(71.8,8.84)(0.5,-0.07){1}{\line(1,0){0.5}} \multiput(72.29,8.76)(0.5,-0.07){1}{\line(1,0){0.5}} \multiput(72.79,8.69)(0.5,-0.07){1}{\line(1,0){0.5}} \multiput(73.28,8.63)(0.5,-0.06){1}{\line(1,0){0.5}} \multiput(73.78,8.56)(0.5,-0.06){1}{\line(1,0){0.5}} \multiput(74.28,8.5)(0.5,-0.06){1}{\line(1,0){0.5}} \multiput(74.77,8.45)(0.5,-0.05){1}{\line(1,0){0.5}} \multiput(75.27,8.39)(0.5,-0.05){1}{\line(1,0){0.5}} \multiput(75.77,8.34)(0.5,-0.05){1}{\line(1,0){0.5}} \multiput(76.27,8.3)(0.5,-0.04){1}{\line(1,0){0.5}} \multiput(76.77,8.26)(0.5,-0.04){1}{\line(1,0){0.5}} \multiput(77.27,8.22)(0.5,-0.04){1}{\line(1,0){0.5}} \multiput(77.77,8.18)(0.5,-0.03){1}{\line(1,0){0.5}} \multiput(78.27,8.15)(0.5,-0.03){1}{\line(1,0){0.5}} \multiput(78.77,8.12)(0.5,-0.03){1}{\line(1,0){0.5}} \multiput(79.27,8.09)(0.5,-0.02){1}{\line(1,0){0.5}} \multiput(79.77,8.07)(0.5,-0.02){1}{\line(1,0){0.5}} \multiput(80.27,8.05)(0.5,-0.01){1}{\line(1,0){0.5}} \multiput(80.77,8.04)(0.5,-0.01){1}{\line(1,0){0.5}} \multiput(81.27,8.03)(0.5,-0.01){1}{\line(1,0){0.5}} \multiput(81.77,8.02)(0.5,-0){1}{\line(1,0){0.5}} \multiput(82.27,8.02)(0.5,-0){1}{\line(1,0){0.5}} \multiput(82.77,8.01)(0.5,0){1}{\line(1,0){0.5}} \multiput(83.27,8.02)(0.5,0.01){1}{\line(1,0){0.5}} \multiput(83.77,8.02)(0.5,0.01){1}{\line(1,0){0.5}} \multiput(84.27,8.03)(0.5,0.01){1}{\line(1,0){0.5}} \multiput(84.77,8.05)(0.5,0.02){1}{\line(1,0){0.5}} \multiput(85.27,8.06)(0.5,0.02){1}{\line(1,0){0.5}} 
\multiput(85.77,8.08)(0.5,0.02){1}{\line(1,0){0.5}} \multiput(86.28,8.11)(0.5,0.03){1}{\line(1,0){0.5}} \multiput(86.78,8.13)(0.5,0.03){1}{\line(1,0){0.5}} \multiput(87.28,8.16)(0.5,0.03){1}{\line(1,0){0.5}} \multiput(87.77,8.2)(0.5,0.04){1}{\line(1,0){0.5}} \multiput(88.27,8.23)(0.5,0.04){1}{\line(1,0){0.5}} \multiput(88.77,8.27)(0.5,0.04){1}{\line(1,0){0.5}} \multiput(89.27,8.32)(0.5,0.05){1}{\line(1,0){0.5}} \multiput(89.77,8.37)(0.5,0.05){1}{\line(1,0){0.5}} \multiput(90.27,8.42)(0.5,0.05){1}{\line(1,0){0.5}} \multiput(90.77,8.47)(0.5,0.06){1}{\line(1,0){0.5}} \multiput(91.26,8.53)(0.5,0.06){1}{\line(1,0){0.5}} \multiput(91.76,8.59)(0.5,0.06){1}{\line(1,0){0.5}} \multiput(92.26,8.65)(0.5,0.07){1}{\line(1,0){0.5}} \multiput(92.75,8.72)(0.5,0.07){1}{\line(1,0){0.5}} \multiput(93.25,8.79)(0.5,0.08){1}{\line(1,0){0.5}} \multiput(93.74,8.87)(0.49,0.08){1}{\line(1,0){0.49}} \multiput(94.24,8.95)(0.49,0.08){1}{\line(1,0){0.49}} \multiput(94.73,9.03)(0.49,0.09){1}{\line(1,0){0.49}} \multiput(95.23,9.12)(0.49,0.09){1}{\line(1,0){0.49}} \multiput(95.72,9.2)(0.49,0.09){1}{\line(1,0){0.49}} \multiput(96.21,9.3)(0.49,0.1){1}{\line(1,0){0.49}} \multiput(96.7,9.39)(0.49,0.1){1}{\line(1,0){0.49}} \multiput(97.19,9.49)(0.49,0.1){1}{\line(1,0){0.49}} \multiput(97.68,9.59)(0.49,0.11){1}{\line(1,0){0.49}} \multiput(98.17,9.7)(0.49,0.11){1}{\line(1,0){0.49}} \multiput(98.66,9.81)(0.49,0.11){1}{\line(1,0){0.49}} \multiput(99.15,9.92)(0.49,0.12){1}{\line(1,0){0.49}} \multiput(99.64,10.04)(0.49,0.12){1}{\line(1,0){0.49}} \multiput(100.12,10.16)(0.49,0.12){1}{\line(1,0){0.49}} \multiput(100.61,10.28)(0.48,0.13){1}{\line(1,0){0.48}} \multiput(101.09,10.4)(0.48,0.13){1}{\line(1,0){0.48}} \multiput(101.58,10.53)(0.48,0.13){1}{\line(1,0){0.48}} \multiput(102.06,10.67)(0.48,0.14){1}{\line(1,0){0.48}} \multiput(102.54,10.8)(0.48,0.14){1}{\line(1,0){0.48}} \multiput(103.02,10.94)(0.48,0.14){1}{\line(1,0){0.48}} \multiput(103.5,11.08)(0.48,0.15){1}{\line(1,0){0.48}} 
\multiput(103.98,11.23)(0.48,0.15){1}{\line(1,0){0.48}} \multiput(104.46,11.38)(0.48,0.15){1}{\line(1,0){0.48}} \multiput(104.94,11.53)(0.48,0.16){1}{\line(1,0){0.48}} \multiput(105.41,11.69)(0.47,0.16){1}{\line(1,0){0.47}} \multiput(105.89,11.85)(0.47,0.16){1}{\line(1,0){0.47}} \multiput(106.36,12.01)(0.47,0.17){1}{\line(1,0){0.47}} \multiput(106.83,12.18)(0.47,0.17){1}{\line(1,0){0.47}} \multiput(107.31,12.35)(0.47,0.17){1}{\line(1,0){0.47}} \multiput(107.78,12.52)(0.47,0.18){1}{\line(1,0){0.47}} \multiput(108.24,12.69)(0.47,0.18){1}{\line(1,0){0.47}} \multiput(108.71,12.87)(0.23,0.09){2}{\line(1,0){0.23}} \multiput(109.18,13.05)(0.23,0.09){2}{\line(1,0){0.23}} \multiput(109.64,13.24)(0.23,0.09){2}{\line(1,0){0.23}} \multiput(110.11,13.43)(0.23,0.1){2}{\line(1,0){0.23}} \multiput(110.57,13.62)(0.23,0.1){2}{\line(1,0){0.23}} \multiput(111.03,13.81)(0.23,0.1){2}{\line(1,0){0.23}} \multiput(111.49,14.01)(0.23,0.1){2}{\line(1,0){0.23}} \multiput(111.95,14.21)(0.23,0.1){2}{\line(1,0){0.23}} \multiput(112.41,14.42)(0.23,0.1){2}{\line(1,0){0.23}} \multiput(112.86,14.63)(0.23,0.11){2}{\line(1,0){0.23}} \multiput(113.32,14.84)(0.23,0.11){2}{\line(1,0){0.23}} \multiput(113.77,15.05)(0.23,0.11){2}{\line(1,0){0.23}} \multiput(114.22,15.27)(0.22,0.11){2}{\line(1,0){0.22}} \multiput(114.67,15.49)(0.22,0.11){2}{\line(1,0){0.22}} \multiput(115.12,15.71)(0.22,0.11){2}{\line(1,0){0.22}} \multiput(115.57,15.94)(0.22,0.11){2}{\line(1,0){0.22}} \multiput(116.01,16.17)(0.22,0.12){2}{\line(1,0){0.22}} \multiput(116.45,16.4)(0.22,0.12){2}{\line(1,0){0.22}} \multiput(116.9,16.64)(0.22,0.12){2}{\line(1,0){0.22}} \multiput(117.34,16.87)(0.22,0.12){2}{\line(1,0){0.22}} \multiput(117.77,17.12)(0.22,0.12){2}{\line(1,0){0.22}} \multiput(118.21,17.36)(0.22,0.12){2}{\line(1,0){0.22}} \multiput(118.65,17.61)(0.22,0.13){2}{\line(1,0){0.22}} \multiput(119.08,17.86)(0.22,0.13){2}{\line(1,0){0.22}} \multiput(119.51,18.11)(0.21,0.13){2}{\line(1,0){0.21}} 
\multiput(119.94,18.37)(0.21,0.13){2}{\line(1,0){0.21}} \multiput(120.37,18.63)(0.21,0.13){2}{\line(1,0){0.21}} \multiput(120.8,18.89)(0.21,0.13){2}{\line(1,0){0.21}} \multiput(121.22,19.16)(0.21,0.13){2}{\line(1,0){0.21}} \multiput(121.64,19.43)(0.21,0.14){2}{\line(1,0){0.21}} \multiput(122.06,19.7)(0.21,0.14){2}{\line(1,0){0.21}} \multiput(122.48,19.97)(0.21,0.14){2}{\line(1,0){0.21}} \multiput(122.9,20.25)(0.21,0.14){2}{\line(1,0){0.21}} \multiput(123.32,20.53)(0.21,0.14){2}{\line(1,0){0.21}} \multiput(123.73,20.81)(0.21,0.14){2}{\line(1,0){0.21}} \multiput(124.14,21.1)(0.2,0.14){2}{\line(1,0){0.2}} \multiput(124.55,21.39)(0.2,0.15){2}{\line(1,0){0.2}} \multiput(124.96,21.68)(0.2,0.15){2}{\line(1,0){0.2}} \multiput(125.36,21.97)(0.2,0.15){2}{\line(1,0){0.2}} \multiput(125.76,22.27)(0.2,0.15){2}{\line(1,0){0.2}} \multiput(126.16,22.57)(0.13,0.1){3}{\line(1,0){0.13}} \multiput(126.56,22.87)(0.13,0.1){3}{\line(1,0){0.13}} \multiput(126.96,23.18)(0.13,0.1){3}{\line(1,0){0.13}} \multiput(127.36,23.49)(0.13,0.1){3}{\line(1,0){0.13}} \multiput(127.75,23.8)(0.13,0.1){3}{\line(1,0){0.13}} \multiput(128.14,24.11)(0.13,0.11){3}{\line(1,0){0.13}} \multiput(128.53,24.43)(0.13,0.11){3}{\line(1,0){0.13}} \multiput(128.91,24.75)(0.13,0.11){3}{\line(1,0){0.13}} \multiput(129.3,25.07)(0.13,0.11){3}{\line(1,0){0.13}} \multiput(129.68,25.39)(0.13,0.11){3}{\line(1,0){0.13}} \multiput(130.06,25.72)(0.13,0.11){3}{\line(1,0){0.13}} \multiput(130.43,26.05)(0.12,0.11){3}{\line(1,0){0.12}} \multiput(130.81,26.38)(0.12,0.11){3}{\line(1,0){0.12}} \multiput(131.18,26.72)(0.12,0.11){3}{\line(1,0){0.12}} \multiput(131.55,27.05)(0.12,0.11){3}{\line(1,0){0.12}} \multiput(131.92,27.39)(0.12,0.11){3}{\line(1,0){0.12}} \multiput(132.28,27.74)(0.12,0.11){3}{\line(1,0){0.12}} \multiput(132.65,28.08)(0.12,0.12){3}{\line(1,0){0.12}} \multiput(133.01,28.43)(0.12,0.12){3}{\line(1,0){0.12}} \multiput(133.37,28.78)(0.12,0.12){3}{\line(1,0){0.12}} \multiput(133.72,29.13)(0.12,0.12){3}{\line(0,1){0.12}} 
\multiput(134.07,29.49)(0.12,0.12){3}{\line(0,1){0.12}} \multiput(134.43,29.84)(0.12,0.12){3}{\line(0,1){0.12}} \multiput(134.77,30.2)(0.12,0.12){3}{\line(0,1){0.12}} \multiput(135.12,30.57)(0.11,0.12){3}{\line(0,1){0.12}} \multiput(135.46,30.93)(0.11,0.12){3}{\line(0,1){0.12}} \multiput(135.8,31.3)(0.11,0.12){3}{\line(0,1){0.12}} \multiput(136.14,31.67)(0.11,0.12){3}{\line(0,1){0.12}} \multiput(136.48,32.04)(0.11,0.12){3}{\line(0,1){0.12}} \multiput(136.81,32.41)(0.11,0.13){3}{\line(0,1){0.13}} \multiput(137.14,32.79)(0.11,0.13){3}{\line(0,1){0.13}} \multiput(137.47,33.17)(0.11,0.13){3}{\line(0,1){0.13}} \multiput(137.79,33.55)(0.11,0.13){3}{\line(0,1){0.13}} \multiput(138.12,33.93)(0.11,0.13){3}{\line(0,1){0.13}} \multiput(138.44,34.32)(0.11,0.13){3}{\line(0,1){0.13}} \multiput(138.75,34.7)(0.1,0.13){3}{\line(0,1){0.13}} \multiput(139.07,35.09)(0.1,0.13){3}{\line(0,1){0.13}} \multiput(139.38,35.48)(0.1,0.13){3}{\line(0,1){0.13}} \multiput(139.69,35.88)(0.1,0.13){3}{\line(0,1){0.13}} \multiput(140,36.27)(0.1,0.13){3}{\line(0,1){0.13}} \multiput(140.3,36.67)(0.1,0.13){3}{\line(0,1){0.13}} \multiput(140.6,37.07)(0.15,0.2){2}{\line(0,1){0.2}} \multiput(140.9,37.48)(0.15,0.2){2}{\line(0,1){0.2}} \multiput(141.19,37.88)(0.15,0.2){2}{\line(0,1){0.2}} \multiput(141.49,38.29)(0.14,0.2){2}{\line(0,1){0.2}} \multiput(141.78,38.69)(0.14,0.21){2}{\line(0,1){0.21}} \multiput(142.06,39.11)(0.14,0.21){2}{\line(0,1){0.21}} \multiput(142.35,39.52)(0.14,0.21){2}{\line(0,1){0.21}} \multiput(142.63,39.93)(0.14,0.21){2}{\line(0,1){0.21}} \multiput(142.91,40.35)(0.14,0.21){2}{\line(0,1){0.21}} \multiput(143.18,40.77)(0.14,0.21){2}{\line(0,1){0.21}} \multiput(143.45,41.19)(0.13,0.21){2}{\line(0,1){0.21}} \multiput(143.72,41.61)(0.13,0.21){2}{\line(0,1){0.21}} \multiput(143.99,42.03)(0.13,0.21){2}{\line(0,1){0.21}} \multiput(144.25,42.46)(0.13,0.21){2}{\line(0,1){0.21}} \multiput(144.51,42.89)(0.13,0.21){2}{\line(0,1){0.21}} \multiput(144.77,43.32)(0.13,0.22){2}{\line(0,1){0.22}} 
\multiput(145.03,43.75)(0.13,0.22){2}{\line(0,1){0.22}} \multiput(145.28,44.18)(0.12,0.22){2}{\line(0,1){0.22}} \multiput(145.53,44.61)(0.12,0.22){2}{\line(0,1){0.22}} \multiput(145.77,45.05)(0.12,0.22){2}{\line(0,1){0.22}} \multiput(146.01,45.49)(0.12,0.22){2}{\line(0,1){0.22}} \multiput(146.25,45.93)(0.12,0.22){2}{\line(0,1){0.22}} \multiput(146.49,46.37)(0.12,0.22){2}{\line(0,1){0.22}} \multiput(146.72,46.81)(0.12,0.22){2}{\line(0,1){0.22}} \multiput(146.96,47.26)(0.11,0.22){2}{\line(0,1){0.22}} \multiput(147.18,47.7)(0.11,0.22){2}{\line(0,1){0.22}} \multiput(147.41,48.15)(0.11,0.22){2}{\line(0,1){0.22}} \multiput(147.63,48.6)(0.11,0.23){2}{\line(0,1){0.23}} \multiput(147.85,49.05)(0.11,0.23){2}{\line(0,1){0.23}} \multiput(148.06,49.5)(0.11,0.23){2}{\line(0,1){0.23}} \multiput(148.27,49.96)(0.1,0.23){2}{\line(0,1){0.23}} \multiput(148.48,50.41)(0.1,0.23){2}{\line(0,1){0.23}} \multiput(148.69,50.87)(0.1,0.23){2}{\line(0,1){0.23}} \multiput(148.89,51.33)(0.1,0.23){2}{\line(0,1){0.23}} \multiput(149.09,51.79)(0.1,0.23){2}{\line(0,1){0.23}} \multiput(149.28,52.25)(0.1,0.23){2}{\line(0,1){0.23}} \multiput(149.48,52.71)(0.09,0.23){2}{\line(0,1){0.23}} \multiput(149.67,53.17)(0.09,0.23){2}{\line(0,1){0.23}} \multiput(149.85,53.64)(0.09,0.23){2}{\line(0,1){0.23}} \multiput(150.04,54.1)(0.18,0.47){1}{\line(0,1){0.47}} \multiput(150.22,54.57)(0.18,0.47){1}{\line(0,1){0.47}} \multiput(150.39,55.04)(0.17,0.47){1}{\line(0,1){0.47}} \multiput(150.57,55.51)(0.17,0.47){1}{\line(0,1){0.47}} \multiput(150.74,55.98)(0.17,0.47){1}{\line(0,1){0.47}} \multiput(150.9,56.45)(0.16,0.47){1}{\line(0,1){0.47}} \multiput(151.07,56.93)(0.16,0.47){1}{\line(0,1){0.47}} \multiput(151.23,57.4)(0.16,0.48){1}{\line(0,1){0.48}} \multiput(151.38,57.88)(0.15,0.48){1}{\line(0,1){0.48}} \multiput(151.54,58.35)(0.15,0.48){1}{\line(0,1){0.48}} \multiput(151.69,58.83)(0.15,0.48){1}{\line(0,1){0.48}} \multiput(151.84,59.31)(0.14,0.48){1}{\line(0,1){0.48}} 
\multiput(151.98,59.79)(0.14,0.48){1}{\line(0,1){0.48}} \multiput(152.12,60.27)(0.14,0.48){1}{\line(0,1){0.48}} \multiput(152.26,60.75)(0.13,0.48){1}{\line(0,1){0.48}} \multiput(152.39,61.23)(0.13,0.48){1}{\line(0,1){0.48}} \multiput(152.52,61.72)(0.13,0.48){1}{\line(0,1){0.48}} \multiput(152.65,62.2)(0.12,0.49){1}{\line(0,1){0.49}} \multiput(152.77,62.69)(0.12,0.49){1}{\line(0,1){0.49}} \multiput(152.89,63.17)(0.12,0.49){1}{\line(0,1){0.49}} \multiput(153.01,63.66)(0.11,0.49){1}{\line(0,1){0.49}} \multiput(153.12,64.15)(0.11,0.49){1}{\line(0,1){0.49}} \multiput(153.23,64.64)(0.11,0.49){1}{\line(0,1){0.49}} \multiput(153.34,65.12)(0.1,0.49){1}{\line(0,1){0.49}} \multiput(153.44,65.61)(0.1,0.49){1}{\line(0,1){0.49}} \multiput(153.54,66.11)(0.1,0.49){1}{\line(0,1){0.49}} \multiput(153.64,66.6)(0.09,0.49){1}{\line(0,1){0.49}} \multiput(153.73,67.09)(0.09,0.49){1}{\line(0,1){0.49}} \multiput(153.82,67.58)(0.09,0.49){1}{\line(0,1){0.49}} \multiput(153.91,68.07)(0.08,0.49){1}{\line(0,1){0.49}} \multiput(153.99,68.57)(0.08,0.49){1}{\line(0,1){0.49}} \multiput(154.07,69.06)(0.08,0.49){1}{\line(0,1){0.49}} \multiput(154.15,69.56)(0.07,0.5){1}{\line(0,1){0.5}} \multiput(154.22,70.05)(0.07,0.5){1}{\line(0,1){0.5}} \multiput(154.29,70.55)(0.07,0.5){1}{\line(0,1){0.5}} \multiput(154.36,71.05)(0.06,0.5){1}{\line(0,1){0.5}} \multiput(154.42,71.54)(0.06,0.5){1}{\line(0,1){0.5}} \multiput(154.48,72.04)(0.06,0.5){1}{\line(0,1){0.5}} \multiput(154.53,72.54)(0.05,0.5){1}{\line(0,1){0.5}} \multiput(154.59,73.04)(0.05,0.5){1}{\line(0,1){0.5}} \multiput(154.63,73.53)(0.05,0.5){1}{\line(0,1){0.5}} \multiput(154.68,74.03)(0.04,0.5){1}{\line(0,1){0.5}} \multiput(154.72,74.53)(0.04,0.5){1}{\line(0,1){0.5}} \multiput(154.76,75.03)(0.03,0.5){1}{\line(0,1){0.5}} \multiput(154.79,75.53)(0.03,0.5){1}{\line(0,1){0.5}} \multiput(154.83,76.03)(0.03,0.5){1}{\line(0,1){0.5}} \multiput(154.85,76.53)(0.02,0.5){1}{\line(0,1){0.5}} \multiput(154.88,77.03)(0.02,0.5){1}{\line(0,1){0.5}} 
\multiput(154.9,77.53)(0.02,0.5){1}{\line(0,1){0.5}} \multiput(154.92,78.03)(0.01,0.5){1}{\line(0,1){0.5}} \multiput(154.93,78.53)(0.01,0.5){1}{\line(0,1){0.5}} \multiput(154.94,79.03)(0.01,0.5){1}{\line(0,1){0.5}} \multiput(154.95,79.53)(0,0.5){1}{\line(0,1){0.5}} \put(154.95,80.03){\line(0,1){0.5}} \multiput(154.95,81.03)(0,-0.5){1}{\line(0,-1){0.5}} \multiput(154.94,81.54)(0.01,-0.5){1}{\line(0,-1){0.5}} \multiput(154.93,82.04)(0.01,-0.5){1}{\line(0,-1){0.5}} \multiput(154.92,82.54)(0.01,-0.5){1}{\line(0,-1){0.5}} \multiput(154.9,83.04)(0.02,-0.5){1}{\line(0,-1){0.5}} \multiput(154.88,83.54)(0.02,-0.5){1}{\line(0,-1){0.5}} \multiput(154.86,84.04)(0.02,-0.5){1}{\line(0,-1){0.5}} \multiput(154.83,84.54)(0.03,-0.5){1}{\line(0,-1){0.5}} \multiput(154.8,85.04)(0.03,-0.5){1}{\line(0,-1){0.5}} \multiput(154.76,85.54)(0.03,-0.5){1}{\line(0,-1){0.5}} \multiput(154.72,86.04)(0.04,-0.5){1}{\line(0,-1){0.5}} \multiput(154.68,86.54)(0.04,-0.5){1}{\line(0,-1){0.5}} \multiput(154.64,87.03)(0.04,-0.5){1}{\line(0,-1){0.5}} \multiput(154.59,87.53)(0.05,-0.5){1}{\line(0,-1){0.5}} \multiput(154.54,88.03)(0.05,-0.5){1}{\line(0,-1){0.5}} \multiput(154.48,88.53)(0.06,-0.5){1}{\line(0,-1){0.5}} \multiput(154.42,89.03)(0.06,-0.5){1}{\line(0,-1){0.5}} \multiput(154.36,89.52)(0.06,-0.5){1}{\line(0,-1){0.5}} \multiput(154.3,90.02)(0.07,-0.5){1}{\line(0,-1){0.5}} \multiput(154.23,90.52)(0.07,-0.5){1}{\line(0,-1){0.5}} \multiput(154.16,91.01)(0.07,-0.5){1}{\line(0,-1){0.5}} \multiput(154.08,91.51)(0.08,-0.49){1}{\line(0,-1){0.49}} \multiput(154,92)(0.08,-0.49){1}{\line(0,-1){0.49}} \put(42,100){\makebox(0,0)[cc]{$v_{1,1}$}} \put(102,100){\makebox(0,0)[cc]{$v_{1,2}$}} \put(100,58){\makebox(0,0)[cc]{$w_{2}$}} \put(108,22){\makebox(0,0)[cc]{$v_{2,2}$}} \put(34,22){\makebox(0,0)[cc]{$v_{2,1}$}} \put(26,58){\makebox(0,0)[cc]{$w_{1}$}} \put(154,100){\makebox(0,0)[cc]{$v_{1,3}$}} \end{picture} \end{center} \caption{$ G_{3,2} $} \end{figure} Now, we state a conjecture that is a 
generalization of Montgomery's Conjecture. \begin{conj} For any nontrivial connected graph $G$, $\chi_2(G)-\chi(G)\leq 2\lceil \frac{\Delta(G)}{\delta(G)} \rceil$. \end{conj} By Theorem $1$, if $G$ is a graph with $ 1 \leq \delta(G) \leq 2$, then Conjecture $2$ holds. \begin{lem} If $G$ is a graph with $ \delta\geq 2 $, then $ \chi_{2}(G)\leq \lceil(4\Delta^{2})^{\frac{1}{\delta-1}} \rceil\chi(G) $. \end{lem} \begin{proof}{ Suppose that $G$ is a graph with $ \delta\geq 2 $, and let $ k = \lceil(4\Delta^{2})^{\frac{1}{\delta-1}} \rceil $. Consider a vertex $\chi(G)$-coloring of $G$ and, for each $1\leq i \leq \chi(G)$, recolor every vertex colored $i$ randomly and independently with one of the colors $\lbrace ki-(k-1),ki-(k-2),\ldots,ki \rbrace$, chosen uniformly. Obviously no two adjacent vertices receive the same color. For each vertex $v$ let $E_{v}$ be the event that all of the neighbors of $v$ have the same color. We have $P(E_{v})\leq (\frac{1}{k})^{\delta-1}$. The event $E_{v}$ depends only on the colors of the vertices in $N[v]\cup N[N[v]]$, so it depends on at most $\Delta^{2}$ other events. We have $4pd\leq 4\cdot\frac{1}{4\Delta^{2}}\cdot\Delta^{2} \leq 1$, so by the Lov\'asz Local Lemma, with positive probability the resulting coloring is dynamic; hence there exists a dynamic coloring with $\lceil(4\Delta^{2})^{\frac{1}{\delta-1}} \rceil\chi(G)$ colors. }\end{proof} \begin{thm} There exists $ \delta_{0} $ such that every bipartite graph $G$ with $ \delta(G)\geq \delta_{0} $ satisfies $\chi_2(G)-\chi(G)\leq 2\lceil\frac{\Delta(G)}{\delta(G)}\rceil$. \end{thm} \begin{proof}{ If $G$ is an $ r $-regular graph with $ r\geq 4 $, then it was proved in \cite{akbari2} that $\chi_2(G)-\chi(G)\leq 2$, so suppose that $ \delta(G) \neq \Delta(G) $. Now two cases can be considered: Case A. $ \frac{\Delta }{\delta } \geq \delta^{\frac{3}{\delta -4} } $. 
We have: \begin{center} $\Delta \geq \delta^{\frac{\delta-1}{\delta-4}} $, hence $ \Delta^{\delta-4} \geq \delta^{\delta-1}$, hence $ (\frac{\Delta}{\delta})^{\delta-1}=\frac{\Delta^{\delta-1}}{\delta^{\delta-1}} \geq \frac{\Delta^{\delta-1}}{\Delta^{\delta-4}}=\Delta^{3} \geq 4\Delta^{2} $ (since $\Delta\geq 4$), and therefore $ \frac{\Delta}{\delta} \geq (4\Delta^{2})^{\frac{1}{\delta-1}} $. \end{center} So by Lemma $ 9$ and since $\chi(G)=2$, we get $\chi_2(G)\leq 2\lceil\frac{\Delta(G)}{\delta(G)}\rceil$, and hence $\chi_2(G)-\chi(G)\leq 2\lceil\frac{\Delta(G)}{\delta(G)}\rceil$. Case B. $ \frac{\Delta }{\delta } \leq \delta^{\frac{3}{\delta -4} } $. In this case $\Delta\leq\delta^{\frac{\delta-1}{\delta-4}}$, so $(4\Delta^{2})^{\frac{1}{\delta-1}} \leq (4\delta^{\frac{2\delta-2}{\delta-4}})^{\frac{1}{\delta-1}} = 4^{\frac{1}{\delta-1}}\delta^{\frac{2}{\delta-4}}$, which tends to $1$ as $\delta\to\infty$; hence there exists $ \delta_{0} $ such that $(4\Delta^{2})^{\frac{1}{\delta-1}} \leq 3$ whenever $\delta\geq\delta_{0}$. So by Lemma $ 9 $ we get $\chi_2(G)\leq 3\chi(G)=6$, while $ \delta \neq \Delta $ gives $2\lceil\frac{\Delta(G)}{\delta(G)}\rceil\geq 4$; hence $\chi_2(G)-\chi(G)\leq 4\leq 2\lceil\frac{\Delta(G)}{\delta(G)}\rceil$. }\end{proof} \section{Concluding Remarks About Montgomery's Conjecture} \label{} In Lemma $ 4 $ we proved that if $T_{1}$ is an independent set of a graph $G$, then there exists $T_{2}$ such that $T_{2}$ is an independent dominating set for $T_{1}$ and $ \vert T_{1} \cap T_{2}\vert \leq \frac{2\Delta(G) -\delta(G) }{2\Delta(G)} \vert T_{1}\vert$. Finding the optimal upper bound for $ \vert T_{1} \cap T_{2}\vert$ seems to be an intriguing open problem; we conjecture the following: \begin{conj} There is a constant $C$ such that for every $r$-regular graph with $ r\neq 0 $, if $T_{1}$ is an independent set, then there exists an independent dominating set $T_{2}$ for $T_{1}$ such that $ \vert T_{1} \cap T_{2}\vert \leq C $. \end{conj} \begin{prop} If Conjecture $ 3 $ is true, then there is a constant $C_{1}$ such that for every $r$-regular graph $G$, $\chi_{2}(G)-\chi(G) \leq C_{1}$. \end{prop} \begin{proof} { If $r\leq 3$, then by Theorem $1$, $\chi_{2}(G)-\chi(G) \leq \chi_{2}(G)-1 \leq 4$. So, we can assume that $ r\geq 4 $. Consider a proper vertex coloring of $G$ with $\chi(G)$ colors and, by Lemma $ 3 $, let $T_{1}$ be an independent dominating set for $G$.
Suppose that $ A= \lbrace v \,\vert\, v\in V(G),\ N(v)\subseteq T_{1}\rbrace$ and $B=T_{1}$, and, by Lemma $2$, recolor the vertices of $T_{1}$ with the colors $ \chi+1 $ and $ \chi+2 $ in such a way that for each $u\in A$, $N(u)$ contains at least two different colors; retain the colors of the other vertices and call this coloring $c$. The set $B_{c}$ is independent; by Conjecture $ 3 $, there are a constant $C$ and an independent dominating set $T_{2}$ for $T_{1}$ with $\vert T_{1} \cap T_{2}\vert \leq C$. Now suppose that $ A= \lbrace v \,\vert\, v\in V(G),\ N(v)\subseteq T_{2}\rbrace$ and $B=T_{2}$, and, by Lemma $2$, recolor the vertices of $T_{2}$ with the colors $ \chi+3 $ and $ \chi+4 $ in such a way that for each $u\in A$, $N(u)$ contains at least two different colors; retain the colors of the other vertices and call the resulting coloring $c'$. Since $ \vert B_{c'}\vert \leq C $, by Lemma $ 6 $ there exists a dynamic coloring with at most $ \chi(G)+4 +C $ colors. Letting $C_{1}=C+4$ completes the proof. }\end{proof} \section{Acknowledgment} \label{} The authors would like to thank Professor Saieed Akbari for his invaluable comments. \bibliographystyle{plain} \bibliography{Refff}
\begin{document} \begin{abstract} We show that there is no positive loop inside the component of a fiber in the space of Legendrian embeddings in the contact manifold $ST^*M$, provided that the universal cover of $M$ is $\RM^n$. We consider some related results in the space of one-jets of functions on a compact manifold. We give an application to the positive isotopies in homogeneous neighborhoods of surfaces in a tight contact 3-manifold. \end{abstract} \maketitle \section{Introduction and formulation of the results} \subsection{} \label{intro}On the Euclidean unit $2$-sphere, the set of points which are at a given distance from the north pole is in general a circle. When the distance is $\pi$, this circle becomes trivial: it is reduced to the south pole. Such a focusing phenomenon cannot appear on a surface of constant non-positive curvature. In this case the image by the exponential map of a unit circle of vectors tangent to the surface at a given point is never reduced to one point. \medskip In this paper, we generalize this remark in the context of contact topology\footnote{in particular, no Riemannian structure is involved}. Our motivation comes from the theory of the orderability of the group of contactomorphisms of Eliashberg, Kim and Polterovich \cite{EKP}. \medskip \subsection{Positive isotopies} Consider a $(2n+1)$-dimensional manifold $V$ endowed with a {\em cooriented} contact structure $\xi$. At each point of $V$, the contact hyperplane then separates the tangent space into {\em a positive and a negative side}. \begin{defi} \label{isoPos} A smooth path $L_t=\varphi_t(L), t\in [0,1]$ in the space of Legendrian embeddings (resp. immersions) of an $n$-dimensional compact manifold $L$ in $(V,\xi)$ is called {\em a Legendrian isotopy (resp. homotopy)}. If, in addition, for every $x \in L$ and every $t \in [0,1]$, the velocity vector $\dot{\varphi_t}(x)$ lies in the {\em positive} side of $\xi$ at $\varphi_t(x)$, then this Legendrian isotopy (resp.
homotopy) will be called {\em positive}. \end{defi} \medskip \begin{rem} This notion of positivity does not depend on the parametrization of the $L_t$'s. \end{rem} If the cooriented contact structure $\xi$ is induced by a globally defined contact form $\alpha$, the above condition can be rephrased as $\alpha(\dot{\varphi_t}(x))>0$. In particular, a positive contact Hamiltonian induces positive isotopies. \medskip A positive isotopy (resp. homotopy) will also be called a {\em positive path} in the space of Legendrian embeddings (resp. immersions). \begin{example} \label{example1} The space $J^1(N)=T^*N \times \RM $ of one-jets of functions on an $n$-dimensional manifold $N$ has a natural contact one-form $\alpha=du-\lambda$, where $\lambda$ is the Liouville one-form of $T^*N$ and $u$ is the $\RM$-coordinate. The corresponding contact structure will be denoted by $\zeta$. Given a smooth function $f \colon N \to \RM$, its one-jet extension $j^1f$ is a Legendrian submanifold. A path between two functions gives rise to an isotopy of Legendrian embeddings between their one-jet extensions. \medskip A path $f_{t, t \in [0,1]}$ of functions on $N$ such that, for any fixed $q \in N$, $f_t(q)$ is an increasing function of $t$, gives rise to a positive Legendrian isotopy $j^1f_{t, t \in [0,1]}$ in $J^1(N)$. \medskip Conversely, one can check that a positive isotopy consisting only of one-jet extensions of functions is always of the above type. {\em In particular there are no positive loops consisting only of one-jet extensions of functions}. \end{example} \begin{example} \label{ex1} Consider a Riemannian manifold $(N,g)$. Its unit tangent bundle $\pi\colon S_1N \to N$ has a natural contact one-form: if $u$ is a unit tangent vector to $N$, and $v$ a vector tangent to $S_1N$ at $u$, then $$\alpha (u) \cdot v=g(u, D\pi (u) \cdot v).$$ The corresponding contact structure will be denoted by $\zeta_1$. The constant contact Hamiltonian $h=1$ induces the geodesic flow.
\medskip Any fiber of $\pi\colon S_1N \to N$ is Legendrian. Moving a fiber by the geodesic flow is a typical example of a positive path. \end{example} \subsection{Formulation of results} \label{results} Let $N$ be a closed manifold. \begin{thm} \label{thm1} There is no closed positive path in the component of the space of Legendrian embeddings in $(J^1(N), \zeta)$ containing the one-jet extensions of functions. \end{thm} The Liouville one-form of $T^*N$ induces a contact distribution on the fiber-wise spherization $ST^*N$. This contact structure is contactomorphic to the $\zeta_1$ of Example \ref{ex1}. Our generalization of the introductory remark of \ref{intro} is as follows. \begin{thm} \label{thm2} There is no positive path of Legendrian embeddings between two distinct fibers of $\pi \colon ST^*N \to N$, provided that the universal cover of $N$ is $\RM^n$. \end{thm} \begin{thm} \label{thm3} {\bf 0).} Any compact Legendrian submanifold of $J^1(\RM^n)$ belongs to a closed positive path of Legendrian embeddings. {\bf i).} There exists a component of the space of Legendrian embeddings in $(J^1(S^1), \zeta)$ whose elements are homotopic to $j^10$ and which contains a closed positive path. {\bf ii).} There exists a closed positive path in the component of the space of Legendrian immersions in $(J^1(S^1), \zeta)$ which contains the one-jet extensions of functions. {\bf iii).} Given any connected surface $N$, there exists a positive path of Legendrian immersions between any two fibers of $\pi\colon ST^*N \to N$. \end{thm} \medskip F. Laudenbach \cite{La} recently proved the following generalization of Theorem \ref{thm3} {\bf ii)}: for any closed $N$, there exists a closed positive path in the component of the space of Legendrian immersions in $(J^1(N), \zeta)$ which contains the one-jet extensions of functions.
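The positivity criterion of Example \ref{example1} is easy to check symbolically. The following sketch (our illustration, not part of the paper; the increasing family $f_t(q)=t(2+\cos q)$ on $N=S^1$ is a hypothetical choice) verifies that the contact form $\alpha=du-p\,dq$ is positive on the velocity of the isotopy $j^1f_t$:

```python
import sympy as sp

q, t = sp.symbols('q t', real=True)

# Hypothetical family on N = S^1 (our choice, not from the paper):
# f_t(q) = t*(2 + cos q), so that df_t/dt = 2 + cos q > 0 for each fixed q.
f = t*(2 + sp.cos(q))

# One-jet extension j^1 f_t is the point (q, p, u) with p = df/dq, u = f.
p = sp.diff(f, q)
u = f

# Velocity of the isotopy at fixed q is (0, dp/dt, du/dt); the contact
# form alpha = du - p dq evaluates on it as du/dt (its dq-component is 0).
alpha_on_velocity = sp.diff(u, t)

# alpha_on_velocity equals 2 + cos(q), which is positive everywhere.
```

The $dq$-component of the velocity vanishes because $q$ is held fixed, so $\alpha$ on the velocity reduces to $\partial f_t/\partial t$, which is exactly the monotonicity condition stated in Example \ref{example1}.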
\medskip Theorem \ref{thm3} {\bf 0)} implies that for any contact manifold $(V,\xi)$, there exists a closed positive path of Legendrian embeddings (just consider a Darboux ball and embed the example of Theorem \ref{thm3} {\bf 0)}). \medskip A Legendrian manifold $L \subset (J^1(N),\zeta)$ will be called {\em positive} if it is connected by a positive path to the one-jet extension of the zero function. The one-jet extension of a positive function is a positive Legendrian manifold. But, in general, the value of the $u$ coordinate can be negative at some points of a positive Legendrian manifold. \medskip Consider a closed manifold $N$ and fix a function $f\colon N \to {\RM}$. Assume that $0$ is a regular value of $f$. Denote by $\Lambda$ the union for $\lambda \in {\RM}$ of the $j^1(\lambda f)$. It is a {\em smooth embedding} of ${\RM} \times N$ in $J^1(N)$, foliated by the $j^1(\lambda f)$. We denote by $\Lambda_+$ the subset $\bigcup_{\lambda>0}(j^1(\lambda f)) \subset \Lambda$. \medskip Consider the manifold $M=f^{-1}([0,+\infty[) \subset N$. Its boundary $\partial M$ is the set $f^{-1}(0)$. Fix some field $\KM$ and denote by $b(f)$ the total dimension of the homology of $M$ with coefficients in that field ($b(f)=\dim_{\KM} {H_*(\{f\ge 0\},\KM)}$). We say that a point $x \in J^1(N)$ is {\em above} some subset of the manifold $N$ if its image under the natural projection $J^1(N) \rightarrow N$ belongs to this subset. \begin{thm} \label{thm4} For any positive Legendrian manifold $L \subset (J^1(N), \zeta)$ in general position with respect to $\Lambda$, there exist at least $b(f)$ points of intersection of $L$ with $\Lambda_+$ lying above $M\setminus \partial M$. More precisely, for a generic positive Legendrian manifold $L$, there exist at least $b(f)$ different {\em positive} numbers $\lambda_1,\dots,\lambda_{b(f)}$ such that $L$ intersects each manifold $j^1(\lambda_i f)$ above $M\setminus \partial M$.
\end{thm} \begin{rem} Theorem \ref{thm4} implies the Morse estimate for the number of critical points of a Morse function $F$ on $N$. This can be seen as follows. By adding a sufficiently large constant to $F$, one can assume that $L=j^1(F)$ is a positive Legendrian manifold. If $f$ is a constant positive function, then $M=N$, and intersections of $L=j^1F$ with $\Lambda$ are in one-to-one correspondence with the critical points of $F$. Furthermore, $F$ is Morse if and only if $L$ is transversal to $\Lambda$. \medskip In fact, one can prove that Theorem \ref{thm4} implies (a weak form of) Arnold's conjecture for Lagrangian intersections in cotangent bundles, proved by Chekanov \cite{Ch} in its Legendrian version. This is no accident: our proof relies on the main ingredient of Chekanov's proof, the technique of generating families (see Theorem \ref{Chgen}). \medskip One can also prove that Theorem \ref{thm4} implies Theorem \ref{thm1}. Theorem \ref{thm5} below, which in turn implies Theorem \ref{thm2}, is also a direct consequence of Theorem \ref{thm4}. \end{rem} \begin{thm}\label{thm5} Consider a line in $\RM^n$. Denote by $\Lambda$ the union of all the fibers of $\pi\colon ST^*{\RM}^n \to {\RM}^n $ above this line. Consider one of these fibers and a positive path starting from this fiber. The end of this positive path is a Legendrian sphere. This sphere must intersect $\Lambda$ in at least $2$ points. \end{thm} \subsection{An application to positive isotopies in homogeneous neighborhoods of a surface in a tight contact 3-manifold} In Theorems \ref{thm4} and \ref{thm5}, we observe the following feature: the submanifold $\Lambda$ is foliated by Legendrian submanifolds. We pick one of them, and we conclude that we cannot disconnect it from $\Lambda$ by a positive contact isotopy. In dimension 3, our $\Lambda$ is a surface foliated by Legendrian curves (in a non-generic way).
\medskip Recall that generically, a closed oriented surface $S$ contained in a contact $3$-manifold $(M,\xi )$ is {\it convex}: there exists a vector field transversal to $S$ and whose flow preserves $\xi$. Equivalently, a convex surface admits a {\it homogeneous neighborhood} $U\simeq S\times \RM$, $S\simeq S\times \{ 0\}$, where the restriction of $\xi$ is $\RM$-invariant. Given such a homogeneous neighborhood, we obtain a smooth, canonically oriented, multicurve $\Gamma_U \subset S$, called the {\it dividing curve} of $S$, made of the points of $S$ where $\xi$ is tangent to the $\RM$-direction. It is automatically transversal to $\xi$. According to Giroux \cite{Gi}, the dividing curve $\Gamma_U$ does not depend on the choice of $U$ up to an isotopy amongst the multicurves transversal to $\xi$ in $S$. The {\it characteristic foliation} $\xi S$ of $S\subset (M,\xi )$ is the integral foliation of the singular line field $TS\cap \xi$. \medskip Let $S$ be a closed oriented surface of genus $g(S)\geq 1$ and $(U,\xi )$, $U\simeq S\times \RM$, be a homogeneous neighborhood of $S\simeq S\times \{0\}$. The surface $S$ is $\xi$-convex, and we denote by $\Gamma_U$ its dividing multicurve. We assume that $\xi$ is tight on $U$, which, after Giroux, is the same as saying that no component of $\Gamma_U$ is contractible in $S$. \begin{thm} \label{thm6} Assume $L$ is a Legendrian curve in $S$ having minimal geometric intersection $2k>0$ with $\Gamma_U$. If $(L_s)_{s\in [0,1]}$ is a positive Legendrian isotopy of $L=L_0$ then $\sharp (L_1 \cap S )\geq 2k$. \end{thm} \begin{rem} The positivity assumption is essential: if we push $L$ in the homogeneous direction, we get an isotopy of Legendrian curves which becomes instantaneously disjoint from $S$. If $k=0$, this is a positive isotopy of $L$ that disjoints $L$ from $S$. \end{rem} \begin{rem} For a small positive isotopy, the result is obvious.
Indeed, $L$ is an integral curve of the characteristic foliation $\xi S$ of $S$, which contains at least one singularity in each component of $L \setminus \Gamma_U$. For two consecutive components, the singularities have opposite signs. Moreover, when one moves $L$ by a small positive isotopy, the positive singularities are pushed in $S \times \RM^+$ and the negative ones in $S \times \RM^-$. Between two singularities of opposite signs, we will get one intersection with $S$. \end{rem} \medskip The relationship with the preceding results is given by the following corollary of Theorem \ref{thm4}, applied with $N=S^1$ and $f(\theta)= \cos (k\theta)$, for some fixed $k \in \NM $. In this situation, the surface $\Lambda$ of Theorem \ref{thm4} will be called $\Lambda_k$. It is an infinite cylinder foliated by Legendrian circles. Its characteristic foliation $\xi \Lambda_k$ has $2k$ infinite lines of singularities. The standard contact space $(J^1(S^1), \zeta)$ is itself a homogeneous neighborhood of $\Lambda_k$, and the corresponding dividing curve consists of $2k$ infinite lines, alternating with the lines of singularities. \medskip Let $L_0=j^10 \subset \Lambda_k$. Theorem \ref{thm4} gives: \begin{cor} \label{ex:fund} Let $L_1$ be a generic positive deformation of $L_0$. Then $\sharp\{L_1 \cap \Lambda_k\}\geq 2k$. \end{cor} Indeed, there are $k$ intersections with $\Lambda_{k,+}$, and $k$ other intersections which are obtained in a similar way with the function $-f$. \qed \medskip This corollary will be the building block to prove Theorem \ref{thm6}. \subsection{Organization of the paper} This paper is organized as follows. The proof of Theorem \ref{thm3}, which in a sense shows that the hypotheses of Theorems \ref{thm1} and \ref{thm2} are optimal, consists essentially in a collection of explicit constructions. It is done in the next section (\ref{proof3}) and it might serve as an introduction to the main notions and objects discussed in this paper.
The rest of the paper is essentially devoted to the proof of Theorems 1, 2, 4, 5 and 6, but contains a few statements which are more general than the theorems mentioned in this introduction. \subsection{Acknowledgements} This work was motivated by a question of Yasha Eliashberg. This paper is based on an unpublished preprint of 2006 \cite{CFP}. Since then, Chernov and Nemirovski proved a statement which generalises our Theorems 1 and 2 \cite{CN1,CN2}, and they found new applications of this to causality problems in space-time. All this is also related to the work of Bhupal \cite{Bh} and to the work of Sheila Sandon \cite{Sa}, who reproved some results of \cite{EKP} using the generating families techniques. \medskip Vincent Colin is partially supported by the ANR Symplexe, the ANR Floer power and the Institut universitaire de France. Petya Pushkar is partially supported by RFBR grant 08-01-00388. \section{Proof of theorem \ref{thm3} }\label{proof3} \subsection{A positive loop}\label{posLoop} In order to prove statement { \bf i)} of Theorem \ref{thm3}, we begin with the description of a positive loop in the space of Legendrian embeddings in $J^1(S^1)$. Due to Theorem \ref{thm1}, this cannot happen in the component of the zero section $j^10$. \medskip Take $\epsilon >0$ and consider a Legendrian submanifold $L$ homotopic to $j^10$ and embedded in the half-space $\{p>2\epsilon \}\subset J^1(S^1)$. Consider the contact flow $\varphi_t\colon (q,p,u) \to (q-t,p,u-t\epsilon), t\in \RM$. The corresponding contact Hamiltonian $h(q,p,u)=p-\epsilon$ is positive near $\varphi_t(L)$, for all $t \in \RM$, and hence $\varphi_t(L)$ is a positive path. \medskip On the other hand, one can go from $\varphi_{2\pi}(L)$ back to $L$ just by increasing the $u$ coordinate, which is also a positive path. This proves statement {\bf i)} of Theorem \ref{thm3}.
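As a sanity check on the flow used above (our sketch, not part of the paper), one can verify symbolically that $\varphi_t$ pulls $\alpha=du-p\,dq$ back to itself and that its contact Hamiltonian is $p-\epsilon$:

```python
import sympy as sp

q, p, u, t, eps = sp.symbols('q p u t epsilon', real=True)

# The flow from the proof above: phi_t(q, p, u) = (q - t, p, u - eps*t).
Q, P, U = q - t, p, u - eps*t

# Pull back alpha = du - p dq under phi_t, i.e. expand dU - P dQ in the
# coordinate basis (dq, dp, du):
c_dq = sp.diff(U, q) - P*sp.diff(Q, q)
c_dp = sp.diff(U, p) - P*sp.diff(Q, p)
c_du = sp.diff(U, u) - P*sp.diff(Q, u)
assert (c_dq, c_dp, c_du) == (-p, 0, 1)   # phi_t^* alpha = alpha exactly

# Contact Hamiltonian: alpha evaluated on the velocity (-1, 0, -eps),
# i.e. dU/dt - P * dQ/dt.
h = sp.diff(U, t) - P*sp.diff(Q, t)
assert sp.simplify(h - (p - eps)) == 0    # h = p - eps, positive on {p > 2*eps}
```

Since $\varphi_t$ preserves $\alpha$ and the region $\{p>2\epsilon\}$, the Hamiltonian stays bounded below by $\epsilon$ along $\varphi_t(L)$, which is the positivity used in the proof.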
\begin{figure} \label{fig2} \begin{center} \includegraphics{positiveLeg.eps} \caption{The front projection of an $L \subset \{p>2 \epsilon\}$ which is homotopic to $j^10$ through Legendrian immersions.} \end{center} \end{figure} \subsection{} \label{thm3A} We now consider statement {\bf ii)}. Take $L \subset \{p>2\epsilon \}$ as above, but assume in addition that $L$ is homotopic to $j^10$ through Legendrian immersions. Such an $L$ exists (one can show that the $L$ whose front projection is depicted in fig. \ref{fig2} is such an example), but cannot be Legendrian isotopic to $j^10$, since, by \cite{Ch}, it would intersect $\{p=0\}$. \medskip \paragraph{\bf Step 1.} The homotopy between $j^10$ and $L$ can be transformed into a positive path of Legendrian immersions between $j^10$ and a vertical translate $L'$ of $L$, by combining it with an upwards translation with respect to the $u$ coordinate. \medskip \paragraph{\bf Step 2.} Then, using the flow $\varphi_t$ (defined in \ref{posLoop}) for $t \in [0, 2k\pi]$ with $k$ big enough, one can reach another translate $L''$ of $L$, on which the $u$ coordinate can be arbitrarily low. \medskip \paragraph{\bf Step 3.} Consider now a path of Legendrian immersions from $L$ to $j^10$. It can be modified into a positive path between $L''$ and $j^10$, as in Step~1. \medskip This proves statement {\bf ii)} of Theorem \ref{thm3}. \subsection{} The proof of statement {\bf 0)} uses again the same idea. Given any compact Legendrian submanifold $L \subset J^1(\RM^n)$, there exists $L'$, which is Legendrian isotopic to $L$ and which is contained in the half-space $p_1>\epsilon>0$, for some system $(p_1,\dots,p_n,q_1,\dots, q_n)$ of canonical coordinates on $T^*(\RM^n)$. It is possible to find a positive path between $L$ and a sufficiently high vertical translate $L''$ of $L'$. Because $p_1>\epsilon$, one can now slide down $L''$ by a positive path as low as we want with respect to the $u$ coordinate, as above.
Hence we can assume that $L$ is connected by a positive path of embeddings to some $L'''$, which is a vertical translate of $L'$, on which the $u$ coordinate is very negative. So we can close this path back to $L$ in a positive way. \subsection{} We now prove statement {\bf iii)}. Consider two points $x$ and $y$ on the surface $N$, an embedded path from $x$ to $y$, and an open neighborhood $U$ of this path, diffeomorphic to $\RM^2$. Hence it is enough to consider the particular case $N=\RM^2$. We consider this case below. \medskip \subsubsection{The hodograph transform} \label{hodograph} We now recall the classical ``hodograph'' contactomorphism \cite{Acusps} which identifies $(ST^*\RM^2, \zeta_1)$ and $(J^1(S^1), \zeta)$, and more generally $(ST^*\RM^n, \zeta_1)$ and $(J^1(S^{n-1}), \zeta)$. The same trick will be used later to prove Theorems 2 and 5 (sections \ref{proof2} and \ref{proof4}). \medskip Fix a scalar product $\langle.,.\rangle$ on $\RM^n$ and identify the sphere $S^{n-1}$ with the standard unit sphere in $\RM^n$. Identify a covector at a point $q \in S^{n-1}$ with a vector in the hyperplane tangent to the sphere at $q$ (perpendicular to $q$). Then to a point $(p,q,u)\in J^1(S^{n-1})=T^*S^{n-1}\times\RM$ we associate the cooriented contact element at the point $uq+p\in\RM^n$ which is parallel to $T_qS^{n-1}$, and cooriented by $q$. \medskip One can check that the fiber of $\pi \colon ST^*\RM^n \rightarrow \RM^n$ over some point $x \in \RM^n$ is the image by this contactomorphism of $j^1l_x$, where $l_x: S^{n-1} \rightarrow \RM$, $q \mapsto \langle x,q\rangle$. \subsubsection{End of the proof of Theorem \ref{thm3} {\bf iii)}} One can assume that $x=0 \subset \RM^2$. The case when $x=y$ follows directly from Theorem \ref{thm3} {\bf ii)} via the contactomorphism described above. The fiber $\pi^{-1}(x)$ corresponds to $j^10$. \medskip Suppose now that $x \neq y$.
We need to find a positive path of Legendrian immersions in $(J^1(S^1), \zeta)$ between $j^10$ and $j^1l_y$. \medskip To achieve this, it is enough to construct a positive path of Legendrian immersions between $j^10$ and a translate of $j^10$ that would be entirely below $j^1l_y$ with respect to the $u$ coordinate. This can be done as in \ref{thm3A}, just by decreasing even more the $u$ coordinate as in Step 2. This finishes the proof of Theorem \ref{thm3}. \qed \bigskip \section{Morse theory for generating families quadratic at infinity} \subsection{Generating families} We briefly recall the construction of a generating family for a Legendrian manifold (the details can be found in~\cite{AG}). Let $\rho \colon E \to N$ be a smooth fibration over a smooth manifold $N$, with fiber $W$. Let $F\colon E \to \RM$ be a smooth function. For a point $q$ in $N$ we consider the set $B_q \subset \rho^{-1}(q)$ whose points are the critical points of the restriction of $F$ to the fiber $\rho^{-1}(q)$. Denote by $B_{F}$ the set $B_F=\bigcup_{q\in N} B_q \subset E$. Assume that the rank of the matrix $ (F_{wq},F_{ww})$ ($w$, $q$ are local coordinates on the fiber and base respectively) formed by second derivatives is maximal (that is, equal to the dimension of the fiber~$W$) at each point of $B_{F}$. This condition holds for a generic~$F$ and does not depend on the choice of the local coordinates $w,q$. \medskip The set $B_{F}\subset E$ is then a smooth submanifold of the same dimension as~$N$, and the restriction of the map $$(q,w)\stackrel {l_{F}}\longmapsto (q,d_{N}(F(q,w)), F(q,w)),$$ where $d_N$ denotes the differential along $N$, to $B_{F}$ defines a Legendrian immersion of $B_{F}$ into~$(J^1(N), \zeta)$. If this is an embedding (this is generically the case), then $F$ is called a {\it generating family} of the Legendrian submanifold $L_F = l_{F}(B_{F})$.
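To make the construction concrete, here is a one-dimensional sketch (our illustration; the family $F(q,w)=w^3+qw$ over $N=\RM$ with one fiber variable $w$ is the standard cusp example, not taken from the text) checking that $l_F$ restricted to $B_F$ is indeed Legendrian, i.e. that $du=p\,dq$ holds along the image:

```python
import sympy as sp

q, w = sp.symbols('q w', real=True)

# Hypothetical generating family over N = R with one fiber variable w
# (the standard cusp example; our illustration, not taken from the text):
F = w**3 + q*w

# Fiberwise critical set B_F: solve dF/dw = 0 for q, parametrized by w.
q_w = sp.solve(sp.diff(F, w), q)[0]     # q = -3*w**2

# Restriction of l_F to B_F: (q, p, u) = (q, dF/dq, F) along B_F.
p_w = sp.diff(F, q).subs(q, q_w)        # p = w
u_w = F.subs(q, q_w)                    # u = -2*w**3

# Legendrian condition du = p dq along the image curve:
assert sp.simplify(sp.diff(u_w, w) - p_w*sp.diff(q_w, w)) == 0
```

The image curve $(q,p,u)=(-3w^2,\,w,\,-2w^3)$ has the familiar semicubical-cusp front projection $(q,u)$, while the rank condition above guarantees that $B_F$ is a smooth curve.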
\medskip A point $x \in J^1(N)$ is by definition a triple consisting of a point $q(x)$ in the manifold $N$, a covector $p(x)\in T^*_{q(x)}N$ and a real number $u(x)$. A point $x\in L$ will be called a {\it critical point} of the Legendrian submanifold $L\subset (J^1(N), \zeta)$ if $p(x)=0$. The value of the $u$ coordinate at a critical point of a Legendrian manifold $L$ will be called a {\em critical value} of $L$. The set of all critical values will be denoted by $Crit(L)$. \medskip Observe that, for a manifold $L=L_F$ given by a generating family $F$, the set $Crit(L_F)$ coincides with the set of critical values of the generating family~$F$. \medskip We call a critical point $x\in L$ {\it nondegenerate} if $L$ intersects the manifold given by the equation $p=0$ transversally at $x$. If an embedded Legendrian submanifold $L_F$ is given by a generating family $F$, then the non-degenerate critical points of $F$ are in one-to-one correspondence with the non-degenerate critical points of $L_F$. \medskip We describe now the class of generating families we will be working with. Pick a closed manifold $E$ which is a fibration over some closed manifold $N$. A function $F\colon E\times \RM^K \to \RM$ is called {\it $E$-quadratic at infinity\/} if it is a sum of a non-degenerate quadratic form $Q$ on $\RM^K$ and a function on $E\times\RM^K$ with bounded differential (i.e. the norm of the differential is uniformly bounded for some Riemannian metric which is a product of a Riemannian metric on $E$ and the Euclidean metric on $\RM^K$). This definition does not depend on the choice of the metrics. If a function which is $E$-quadratic at infinity is a generating family (with respect to the fibration $E\times \RM^K \to N$), then we call it a {\em generating family $E$-quadratic at infinity}. \subsection{Morse theory for generating families $E$-quadratic at infinity} We gather here some results from Morse theory which will be needed later.
Let $E\to N$ be a fibration, where $E$ is a closed manifold. Consider a function $F$, $E$-quadratic at infinity. Denote by $F^a$ the set $\{F\le a\}$. For sufficiently big positive numbers $C_1<C_2$, the set $F^{-C_2}$ is a deformation retract of $F^{-C_1}$. Hence the homology groups $H_*(F^a, F^{-C}, \KM)$ depend only on $a$. We will denote them by $H_*(F,a)$. It is known (see \cite{CZ}) that for sufficiently big $a$, $H_*(F,a)$ is isomorphic to $H_*(E, \KM)$. \medskip For any function $F$ which is $E$-quadratic at infinity, and any integer $k\in \{1,\dots,\dim H_*(E,\KM)\}$, we define a {\em Viterbo number} $c_k(F)$ by $$ c_k(F)=\inf\{c\mid\dim i_*(H_*(F,c))\ge k\}, $$ where $i_*$ is the map induced by the natural inclusion $F^c\to F^a$, when $a$ is a sufficiently big number. Our definition is similar to Viterbo's construction \cite{Vi} in the symplectic setting. The following proposition is an adaptation of \cite{Vi}: \begin{prop} \label{Morseprop} {\bf i.}~Each number $c_k(F)$, $k\in\{1,\dots,\dim H_*(E, \KM)\}$, is a critical value of $F$, and if $F$ is an excellent Morse function (i.e.\ all its critical points are non-degenerate and all critical values are different) then the numbers $c_k(F)$ are different. \medskip {\bf ii.}~Consider a family $F_{t, t\in [a,b]}$ of functions which are all $E$-quadratic at infinity. For any $k\in\{1,\dots,\dim H_*(E, \KM)\}$ the number $c_k(F_t)$ depends continuously on $t$. If the family $F_{t, t\in [a,b]}$ is generic (i.e.\ it intersects the discriminant consisting of non-excellent Morse functions transversally at its smooth points) then $c_k(F_t)$ is a continuous piecewise smooth function with a finite number of singular points. \qed \end{prop} \begin{rem} At this moment, it is unknown whether $c_i(L_F)$ depends on $F$ for a given Legendrian manifold $L=L_F$. Conjecturally there should be a definition of some analogue of $c_i$ in terms of augmentations on relative contact homology.
\end{rem} \section{Proof of Theorem \ref{thm1}} We will in fact prove Theorem \ref{thm7} below, which is more general than Theorem \ref{thm1}. Fix a closed (compact, without boundary) manifold $N$ and a smooth fibration $E\to N$ such that $E$ is compact. A Legendrian manifold $L\subset (J^1(N), \zeta)$ will be called an {\em $E$-quasifunction} if it is Legendrian isotopic to a manifold given by some generating family $E$-quadratic at infinity. We say that a connected component $\mathcal{L}$ of the space of Legendrian submanifolds in $(J^1(N), \zeta)$ is {\em $E$-quasifunctional} if $\mathcal{L}$ contains an $E$-quasifunction. For example, the component $\mathcal{L}$ containing the one-jet extensions of the smooth functions on $N$ is $E$-quasifunctional, with $E$ coinciding with $N$ (the fiber is just a point). \begin{thm} \label{thm7} \label{noqloops} An $E$-qua\-si\-func\-ti\-onal component contains no closed positive path. \end{thm} \subsection{} The proof of Theorem \ref{thm7} will be given in \ref{pfthm7}. It will use the following generalization of Chekanov's theorem (see \cite{P1}), and Proposition \ref{prop} below. \begin{thm} \label{Chgen} Consider a Legendrian isotopy $L_{t, t\in [0,1]}$ such that $L_0$ is an $E$-quasifunction. Then there exist a number $K$ and a smooth family of functions $E$-quadratic at infinity $F_t\colon E\times\RM^K \to \RM$, such that for any $t\in[0,1]$, $F_t$ is a generating family of $L_t$. \qed \end{thm} \medskip Note that it follows from Theorem~\ref{Chgen} that {\em any} Legendrian manifold in some $E$-quasifunctional component is in fact an $E$-quasifunction. \medskip Consider a positive path $L_{t,t\in [0,1]}$ given by a family $F_{t,t\in[0,1]}$ of generating families $E$-quadratic at infinity. We are going to prove the following inequality: \medskip \begin{prop}\label{prop} The Viterbo numbers of the family $F_t$ are monotone increasing functions of $t$: $c_{i}(F_0)<c_{i}(F_1)$ for any $i\in\{1,...,\dim H_*(E)\}$.
\end{prop} \subsection{Proof of Proposition \ref{prop}} Assume that the inequality is proved for a generic family. This, together with the continuity of Viterbo numbers, gives us a weak inequality $c_{i}(F_0)\le c_{i}(F_1)$ for any family. But positivity is a $C^{\infty}$-open condition, so we can perturb the initial family $F_t$ into some family $\widetilde{F}_t$ coinciding with $F_t$ when $t$ is sufficiently close to $0,1$, such that $\widetilde{F}_t$ still generates a positive path of Legendrian manifolds and such that the family $\widetilde{F}_{t, t \in [1/3,2/3]}$ is generic. We have $$ c_{i}(F_0)=c_{i}(\widetilde{F}_0)\le c_{i}(\widetilde{F}_{1/3}) < c_{i}(\widetilde{F}_{2/3})\le c_{i}(\widetilde{F}_1)=c_{i}(F_1), $$ and hence the strict inequality holds for all families. \medskip We now prove the inequality for generic families. Excellent Morse functions form an open dense set in the space of all $E$-quadratic at infinity functions on $E\times \RM^K$. The complement of the set of excellent Morse functions forms a discriminant, which is a singular hypersurface. A generic one-parameter family of $E$-quadratic at infinity functions $F_t$ on $E \times \RM^K$ has only a finite number of transverse intersections with the discriminant at its smooth points, and for every $t$ except possibly finitely many, the Hessian $d_{ww}F_{t}$ is non-degenerate at every critical point of the function $F_t$. \medskip We will use the notion of {\em Cerf diagram} of a family of functions $g_{t,t\in[a,b]}$ on a smooth manifold. The Cerf diagram is a subset of $[a,b]\times \RM$ consisting of all the pairs of the type $(t,z)$, where $z$ is a critical value of $g_t$. In the case of a generic family of functions on a closed manifold, the Cerf diagram is a curve with non-vertical tangents everywhere, with a finite number of transversal self-intersections and cuspidal points as singularities. \medskip The graph of the Viterbo number $c_{i}(F_t)$ is a subset of the Cerf diagram of the family $F_t$.
To prove the monotonicity of the Viterbo numbers, it is sufficient to show that the Cerf diagram of $F_t$ has a positive slope at every point except for a finite set. The rest of the proof of Proposition \ref{prop} is devoted to that. \medskip We say that a point $x$ on a Legendrian manifold $L\subset J^1(N)$ is non-vertical if the differential of the natural projection $L \to N$ is non-degenerate at $x$. Let $L_t$ be a smooth family of Legendrian manifolds and $x(t_0)=(p(t_0), q(t_0),u(t_0))$ a non-vertical point. By the implicit function theorem, there exists a unique family $x(t)=(p(t),q(t),u(t))$, defined for $t$ sufficiently close to $t_0$, such that $x(t)\in L_t$ and $q(t)=q(t_0)$. We call the number $\frac {d}{dt}\big|_{t=t_0}u(t)$ the {\em vertical speed} of the point $x(t_0)$. \begin{lem} \label{l1} For a positive path of Legendrian manifolds, the vertical speed of every non-vertical point is positive. \qed \end{lem} Consider a path $L_t$ in the space of Legendrian manifolds given by a generating family $F_t$. Consider the point $x(t_0)\in L_{t_0}$ and the point $(q,w)\in N\times \RM^{K}$ such that $$d_wF_{t_0}(q,w)=0,\quad x(t_0)=(p,q,u),\quad p=d_qF_{t_0}(q,w),\quad u=F_{t_0}(q,w).$$ Then $x(t_0)$ is non-vertical if and only if the Hessian $d_{ww}F_{t_0}(q,w)$ is non-degenerate. For such a point $x(t_0)$, the following lemma holds: \begin{lem} \label{l2} The vertical speed at $x(t_0)$ is equal to $\frac {d}{dt}\big|_{t=t_0}F_t(q,w)$. \qed \end{lem} Let $G_t$ be a family of smooth functions and assume that the point $z(t_0)$ is a Morse critical point of $G_{t_0}$. By the implicit function theorem, for each $t$ sufficiently close to $t_0$, the function $G_t$ has a unique critical point $z(t)$ close to $z(t_0)$, and $z(t)$ is a smooth path. \begin{lem} \label{l3} The speed of the critical value $\frac {d}{dt}\big|_{t=t_0}G_t(z(t))$ is equal to $\frac {d}{dt}\big|_{t=t_0}G_t(z(t_0))$.
\end{lem} Indeed, $\frac {d}{dt}\big|_{t=t_0}G_t(z(t)) = \frac{d}{dt}\big|_{t=t_0}G_t(z(t_0))+\frac {\partial G_{t_0}}{\partial z}(z(t_0)) \cdot \frac {dz}{dt}(t_0)$, and the second term vanishes because $z(t_0)$ is a critical point of $G_{t_0}$. \qed \medskip At almost every point on the Cerf diagram, the slope of the Cerf diagram at this point is the speed of a critical value of the function $F_t$. By Lemmas \ref{l3} and \ref{l2}, it is the vertical speed at some non-vertical point. By Lemma \ref{l1} it is positive. This finishes the proof of Proposition \ref{prop}. \qed \medskip \subsection{Proof of Theorem \ref{thm7}} \label{pfthm7} Suppose now that there is a closed positive loop $L_{t, t\in [0,1]}$ in some $E$-qua\-si\-func\-ti\-onal component $\mathcal{L}$. The condition of positivity is open. We slightly perturb the loop $L_{t, t\in [0,1]}$ so that $Crit(L_0)$ is a finite set of cardinality $A$. Note that $A>0$, since $L_0$ is an $E$-quasifunction. Consider the $A$-th multiple of the loop $L_{t, t\in [0,1]}$. By Theorem~\ref{Chgen}, $L_t$ has a generating family $F_t$, for all $t\in [0,A]$. By Proposition~\ref{prop}, we have that $$c_1(F_0)<c_1(F_1)<\dots<c_1(F_A).$$ All these $A+1$ numbers belong to the set $Crit(L_0)$. This is impossible, since this set has cardinality $A$. This finishes the proof of Theorem~\ref{thm7} and hence of Theorem \ref{thm1}. \qed \medskip \subsection{Proof of Theorem \ref{thm2}} \label{proof2} Theorem \ref{thm2} is a corollary of Theorem \ref{thm1}, via the contactomorphism between $(ST^*(\RM^n), \zeta_1)$ and $(J^1(S^{n-1}), \zeta)$ we have seen in \ref{hodograph}. \medskip Consider the fiber $\pi^{-1}(x)$ of the fibration $\pi\colon ST^*\RM^n \to \RM^n$. It corresponds to a Legendrian manifold $j^1l_x\subset J^1(S^{n-1})$, where $l_x$ is the function $l_x=\langle q,x\rangle$. For $x\ne 0$ it is a Morse function with exactly two non-degenerate critical points and two critical values $\pm||x||$.
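A quick numerical sanity check of this last claim (an illustration only, not part of the argument): by Lagrange multipliers, a critical point $q$ of $l_x$ on the unit sphere satisfies $x=2\lambda q$, so the only critical points are $q=\pm x/||x||$, with values $\pm||x||$. For $n=3$:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=3)                  # a nonzero x in R^3, so l_x lives on S^2
norm_x = np.linalg.norm(x)

# Lagrange multipliers for l_x(q) = <q, x> subject to |q| = 1 give x = 2*lam*q,
# so the two critical points are q = ±x/|x|, with critical values ±|x|.
q_plus, q_minus = x / norm_x, -x / norm_x
print(q_plus @ x, q_minus @ x)          # approximately +|x| and -|x|

# Every other value of l_x on the sphere lies between these two extremes.
samples = rng.normal(size=(10000, 3))
samples /= np.linalg.norm(samples, axis=1, keepdims=True)
vals = samples @ x
assert vals.min() >= -norm_x - 1e-9 and vals.max() <= norm_x + 1e-9
```

This is exactly the picture used below: the fiber $j^1 l_x$ has one critical value near $-||x||$ and one near $+||x||$, which pins down the two Viterbo numbers.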
It follows from Proposition~\ref{Morseprop} that $c_1(F)=-||x||$, $c_2(F)=||x||$. Indeed, any small generic Morse perturbation of $F$ has two critical points with critical values close to $\pm||x||$. The Viterbo numbers of this perturbation must be different. Hence by continuity $c_1(F)=-||x||$, $c_2(F)=||x||$. The Viterbo numbers for $j^1l_0$ are equal to zero, because $Crit(S(0))=\{0\}$. The existence of a positive path would contradict the monotonicity (Proposition \ref{prop}) of Viterbo numbers. \qed \section{Morse theory for positive Legendrian submanifolds} \label{proof4} In this section we prove Theorem \ref{thm4} and deduce Theorem \ref{thm5} from it. We first need to generalize some of the previous constructions and results to the case of manifolds with boundary. \medskip Let $N$ be a compact closed manifold. Fix a function $f\colon N\to\RM$ such that $0$ is a regular value of $f$. Denote by $M$ the set $f^{-1}([0,+\infty[)=\{f\ge0\}$. Denote by $b(f)=\dim_{\KM} {H_*(M)}$ the dimension of $H_*(M)$ (all the homologies here and below are taken with coefficients in a fixed field $\KM$). \subsection{Viterbo numbers for manifolds with boundary} The definition of the Viterbo numbers for a function quadratic at infinity on a manifold with boundary is the same as in the case of a closed manifold. We repeat it briefly. Given a function $F$ which is quadratic at infinity, we define the Viterbo numbers $c_{1,M}(F),...,c_{b(f),M}(F)$ as follows. \medskip A {\em generalized critical value} of $F$ is a real number which is a critical value of $F$ or of the restriction $F|_{\partial M\times \RM^K}$. Denote by $F^a$ the set $\{(q,w)|F(q,w) \leq a \}$. The homotopy type of the set $F^a$ changes only when $a$ passes through a generalized critical value. One can show that, for sufficiently large $K_1,K_2>0$, the homology of the pair $(F^{K_1},F^{-K_2})$ is independent of $K_1,K_2$, and naturally isomorphic, by the Thom isomorphism, to $H_{*-\ind Q}(M)$.
So, for any $a \in \RM$ and sufficiently large $K_2$ the projection $H_*(F^a,F^{-K_2}) \to H_{*-\ind Q}(M)$ is well defined and independent of $K_2$. Denote the image of this projection by~$I(a)$. \begin{defi} The Viterbo numbers are $$c_{k,M} (F) = \inf\{c \mid \dim I(c) \ge k \}, \quad k \in \{1, \dots, b(f)\}.$$ \end{defi} Any Viterbo number $c_{k,M}(F)$ is a generalized critical value of the function $F$. Obviously, $c_{1,M}(F)\le ...\le c_{b(f),M}(F)$. For any continuous family $F_t$ of quadratic at infinity functions, $c_{i,M}(F_t)$ depends continuously on $t$. \subsection{Proof of Theorem \ref{thm4}} Consider a $1$-parameter family of quadratic at infinity functions $F_{t, t\in[a,b]}\colon N\times\RM^{K}\to \RM$, such that $F_t$ is a generating family for the Legendrian manifold $L_t$ and such that the path $L_{t,t\in [a,b]}$ is positive. We will consider the restriction of the function $F_t$ to $M\times\RM^{K}$ and denote it also by $F_t$. The following proposition generalizes Proposition~\ref{prop}. \begin{prop} \label{prop1} The Viterbo numbers of the family $F_t$ are strictly increasing: $c_{i,M}(F_a)<c_{i,M}(F_b)$ for any $i\in\{1,...,b(f)\}$. \end{prop} The difference with Proposition \ref{prop} is that the Cerf diagram of a generic family has one more possible singularity. This singularity corresponds to the case when a Morse critical point meets the boundary of the manifold. In this case, the Cerf diagram is locally diffeomorphic to a parabola with a tangent half-line. \qed \medskip We now prove Theorem \ref{thm4}. Consider the 1-parameter family of functions $H_\lambda$, $\lambda\ge 0$, $$H_\lambda(q,w)=F_1(q,w)-\lambda f(q)$$ on $M\times \RM^K$. The manifold $L$ intersects $j^1\lambda_0f$ at some point above $M$ if and only if the function $H_{\lambda_0}$ has $0$ as an ordinary critical value (not a critical value of the restriction to the boundary). \medskip Consider the numbers $c_{k,M}(H_\lambda)$. By Proposition \ref{prop1}, $c_{k,M}(H_0)>0$.
For a sufficiently large value of $\lambda$, each of them is negative. To show this, consider a sufficiently small $\varepsilon >0$ belonging to the connected component of the set of regular values of $f$ which contains $0$. Denote by $M_1\subset M$ the set $\{f\ge\varepsilon\}$. The manifold $M_1$ is diffeomorphic to the manifold $M$, and the inclusion map is a homotopy equivalence. Denote by $G_\lambda$ the restriction of $H_\lambda$ to $M_1\times\RM^K$. Consider the following commutative diagram: $$ \begin{CD} H_*(G_\lambda^a,G_\lambda^{-K_2})@>i_1>>H_*(G_\lambda^{K_1},G_\lambda^{-K_2})@>Th_1>>H_{*-\ind(Q)}(M_1)\\ @Vj_1VV @. @Vj_2VV\\ H_*(H_\lambda^a,H_\lambda^{-K_2})@>i_2>>H_*(H_\lambda^{K_1},H_\lambda^{-K_2})@>Th_2>>H_{*-\ind(Q)}(M) \end{CD} $$ where $K_1,K_2$ are sufficiently large numbers, $Th_1,Th_2$ denote the Thom isomorphisms and $i_1,i_2,j_1,j_2$ are the maps induced by the natural inclusions. It follows from the commutativity of the diagram and from the fact that $j_2$ is an isomorphism that $c_{k,M_1}(G_\lambda)\ge c_{k,M}(H_\lambda)$ for every~$k$. \medskip For sufficiently large $\lambda$ and for every $q\in M_1$, the critical values of the function $G_\lambda$ restricted to $q\times \RM^K$ are negative. Hence all generalized critical values of $G_\lambda$ are negative. It follows that all the numbers $c_{k,M_1}(G_\lambda)$ are negative, and the same holds for $c_{k,M}(H_\lambda)$. We fix $\lambda_0$ such that $c_{k,M}(H_{\lambda_0})<0$ for every $k \in \{1,\dots, b(f) \}$. \medskip Consider now $c_{k,M}(H_\lambda)$ as a function of $\lambda\in[0,\lambda_0]$. We are going to show that its zeroes correspond to intersections above $M\setminus \partial M$. For a manifold $L_1$ in general position, all the generalized critical values of $F_1$ are non-zero. In particular all the critical values of the function $F_1|_{\partial M\times \RM^K}$ are non-zero.
The function $F_1|_{\partial M\times \RM^K}$ coincides with $H_\lambda|_{\partial M\times \RM^K}$ since $f=0$ on $\partial M$. Hence, if zero is a critical value for $H_\lambda$, then it is an ordinary critical value at some interior point. This finishes the proof of Theorem \ref{thm4}. \qed \medskip \begin{rem}The function $c_{i,M}(H_{\lambda})$ can be constant on some sub-intervals in $]0,\lambda_0[$, even for a generic function $F_1$. Indeed, the critical values of the restriction of $H_\lambda$ to $\partial M\times\RM^K$ do not depend on $\lambda$. It is possible that $c_{i,M}(H_\lambda)$ is equal to such a critical value for some $\lambda$'s. \end{rem} The following proposition concerns the case of a general (not necessarily generic) positive Legendrian manifold. We suppose again that $f$ is a function having $0$ as a regular value and that $L$ is a positive manifold. \begin{prop} \label{common} For any connected component of the set $M=\{f\ge 0\}$ there exists a positive $\lambda$ such that $L$ intersects $j^1\lambda f$ above this component. \end{prop} Consider a connected component $M_0$ of the manifold $M$. It is possible to replace $f$ by some function $\tilde{f}$ such that $0$ is a regular value for $\tilde{f}$, $\tilde{f}$ coincides with $f$ on $M_0$ and $\tilde{f}$ is negative on $N\setminus M_0$. We consider $c_{1,M_0}(F_1-\lambda \tilde{f})$ as a function of $\lambda$. It is a continuous function, positive in some neighborhood of zero, and negative for large values of $\lambda$. \medskip Fix some $\alpha$ and $\beta$ such that $c_{1,M_0}(F_1-\alpha \tilde{f})>0$ and $c_{1,M_0}(F_1-\beta \tilde{f})<0$. Assume that for any $\lambda \in [\alpha, \beta]$, $L$ does not intersect $j^1\lambda\tilde{f}$ above $M_0$. Then this is also true for any small enough generic perturbation $L'$ of $L$. Denote by $F'$ a generating family for $L'$.
Each zero $\lambda_0$ of the function $\lambda\mapsto c_{1,M_0}(F'-\lambda \tilde{f})$ corresponds to an intersection of $L'$ with $j^1\lambda_0\tilde{f}$ above $M_0$. Such a $\lambda_0$ exists by Theorem \ref{thm4}. This is a contradiction. \qed \subsection{Proof of Theorem \ref{thm5}} We can suppose that the origin of $\RM^n$ belongs to the line considered in the statement of Theorem~\ref{thm5}. Consider now again the contactomorphism of \ref{hodograph}, $(J^1(S^{n-1}), \zeta)=(ST^*(\RM^n), \zeta_1)$. \medskip For such a choice of the origin, the union of all the fibers above the points on the line forms a manifold of type $\Lambda(f)$, where $f$ is the restriction of a linear function to the sphere $S^{n-1}$. \medskip The manifold $M=\{f\ge 0\}$ has one connected component (it is a hemisphere). By Proposition \ref{common} there is at least one intersection of the considered positive Legendrian sphere with $\Lambda_+(f)$. Another point of intersection comes from $\Lambda_+(-f)$. These two points are different because $\Lambda_+(-f)$ does not intersect $\Lambda_+(f)$. \qed \section{Positive isotopies in homogeneous neighborhoods} The strategy for proving Theorem~\ref{thm6} is to link the general case to the case of $\Lambda_k \subset (J^1(S^1) ,\zeta)$. \medskip Let $d=\sharp (L_1 \cap S)$. We first consider the infinite cyclic cover $\overline{S}$ of $S$ associated with $[L]\in \pi_1 (S)$. The surface $\overline{S}$ is an infinite cylinder. We call $\overline{U}$ the corresponding cover of $U$ endowed with the pullback $\overline{\xi}$ of $\xi$. By construction, $\overline{U}$ is $\overline{\xi}$-homogeneous. We also call $\overline{L}_s$ a continuous compact lift of $L_s$ in $\overline{U}$. \medskip By compactness of the family $(\overline{L}_s )_{s\in [0,1]}$, we can find a large compact cylinder $\overline{C} \subset \overline{S}$ such that for all $s\in [0,1]$, $\overline{L}_s \subset int (\overline{C} \times \RM )$.
We also assume that $\partial \overline{C} \pitchfork \Gamma_{\overline{U}}$. \medskip The following lemma shows that in addition we can assume that the boundary of $\overline{C}$ is Legendrian. \begin{lem} \label{lemmaA} If we denote by $\pi :\overline{S} \times \RM \rightarrow \overline{S}$ the projection forgetting the $\RM$-factor, we can find a lift $\overline{C}_0$ of a $C^0$-small deformation of $\overline{C}$ in $\overline{S}$ which contains $\overline{L}_0$, whose geometric intersection with $\overline{L}_1$ is $d$ and whose boundary is Legendrian. \end{lem} To prove this, we only have to find a Legendrian lift $\gamma$ of a small deformation of $\partial \overline{C}$, and to make a suitable slide of $\overline{C}$ near its boundary along the $\RM$-factor to connect $\gamma$ to a small retraction of $\overline{C} \times \{ 0\}$. The plane field $\overline{\xi}$ defines a connection for the fibration $\pi :\overline{S} \times \RM \rightarrow \overline{S}$ outside any small neighborhood $N(\Gamma_{\overline{S}} )$ of $\Gamma_{\overline{S}}$. We can thus pick any $\overline{\xi }$-horizontal lift of $\partial \overline{C} -N(\Gamma_{\overline{S}} )$. \medskip We still have to connect the endpoints of these Legendrian arcs in $N(\Gamma_{\overline{S}} )\times \RM$. These endpoints lie at different $\RM$-coordinates; however, it is possible to adjust this since $\overline{\xi}$ is almost vertical in $N(\Gamma_{\overline{S}} )\times \RM$ (and vertical along $\Gamma_{\overline{S}} \times \RM$). To make this more precise, we first slightly modify $\overline{C}$ so that $\partial \overline{C}$ is tangent to $\overline{\xi} \overline{S}$ near $\Gamma_{\overline{S}}$. Let $\delta$ be the metric closure of a component of $\partial \overline{C} \setminus \Gamma_{\overline{S}}$ contained in the metric closure $R$ of a component of $\overline{S} \setminus \Gamma_{\overline{S}}$.
On $int(R) \times \RM$, the contact structure $\overline{\xi}$ is given by an equation of the form $dz +\beta$ where $z$ denotes the $\RM$-coordinate and $\beta$ is a $1$-form on $int(R)$, such that $d\beta$ is an area form that goes to $+ \infty$ as we approach $\partial R$. Now, let $\delta'$ be another arc properly embedded in $R$ which coincides with $\delta$ near its endpoints. If we take two lifts of $\delta$ and $\delta'$ by $\pi$ starting at the same point (these two lifts are compact curves, since they coincide with the characteristic foliation near their endpoints, and thus lift to horizontal curves near $\Gamma_{\overline{S}}$ where $\beta$ goes to infinity), the difference of altitude between the lifts of the two terminal points is given by the area enclosed between $\delta$ and $\delta'$, measured with $d\beta$. As $d\beta$ goes to infinity near $\partial \delta =\partial \delta'$, taking $\delta'$ to be a small deformation of $\delta$ sufficiently close to $\partial \delta$, we can give this difference any value we want. This proves Lemma \ref{lemmaA}. \qed \medskip Let $\overline{U}_0 =\overline{C}_0 \times \RM$. \begin{lem}\label{lemma: embedd} There exists an embedding of $(\overline{U}_0 ,\overline{\xi} ,\overline{L}_0 )$ in $(J^1(S^1) ,\zeta ,\Lambda_k )$ such that the image of $\overline{L}_1$ intersects $A$ in $p$ points. \end{lem} The surface $\overline{C}_0$ is $\overline{\xi}$-convex and its dividing set has exactly $2k$ components going from one boundary curve to the other. All the other components of $\Gamma_{\overline{C}_0}$ are boundary parallel. Moreover, the curve $\overline{L}_0$ intersects by assumption exactly once every non boundary parallel component and avoids the others.
Then one can easily embed $\overline{C}_0$ in a larger annulus $\overline{C}_1$ and extend the system of arcs $\Gamma_{\overline{C}_0} (\overline{\xi} )$ outside of $\overline{C}_0$ by gluing small arcs, in order to obtain a system $\Gamma$ of $2k$ non boundary parallel arcs on $\overline{C}_1$. Simultaneously, we extend the contact structure $\overline{\xi}$ from $\overline{U}_0$, considered as a homogeneous neighborhood of $\overline{C}_0$, to a neighborhood $\overline{U}_1 \simeq \overline{C}_1 \times \RM$ of $\overline{C}_1$. To achieve this one only has to extend the characteristic foliation, in a way compatible with $\Gamma$, and such that the boundary of $\overline{C}_1$ is also Legendrian. Note that the $\RM$-factor is not changed above $\overline{C}_0$. \medskip To summarize, $\overline{U}_1$ is a homogeneous neighborhood of $\overline{C}_1$ for the extension $\overline{\xi}_1$, and $\overline{C}_1$ has Legendrian boundary with dividing curve $\Gamma_{\overline{C}_1} (\overline{\xi}_1 )=\Gamma$. By genericity, we can assume that the characteristic foliation of $\overline{C}_1$ is Morse-Smale. Then, using Giroux's realization lemma \cite{Gi}, one can perform a $C^0$-small modification of $\overline{C}_1$ relative to $\overline{L}_0 \cup \partial \overline{C}_1$, leading to a surface $\overline{C}_2$, through annuli transversal to the $\RM$-direction, and whose support is contained in an arbitrarily small neighborhood of the saddle separatrices of $\overline{\xi}_1 \overline{C}_1$, so that the characteristic foliation of $\overline{C}_2$ for $\overline{\xi}_1$ is conjugated to $\zeta \Lambda_k$. If this support is small enough and if we are in the generic case (which can always be achieved) where $\overline{L}_1$ does not meet the separatrices of the singularities of $\overline{\xi}_1 \overline{C}_1$, we get that $\sharp (\overline{L}_1 \cap \overline{C}_2)=\sharp (\overline{L}_1 \cap \overline{C}_1 )=d$.
As we are dealing with homogeneous neighborhoods, we see that $(\overline{U}_1 ,\overline{\xi}_1 , \overline{L}_0 )$ is conjugated with $(J^1(S^1) ,\zeta ,\Lambda_k )$. This proves Lemma \ref{lemma: embedd}. \qed \medskip The combination of Lemma~\ref{lemma: embedd} and Corollary~\ref{ex:fund} ends the proof of Theorem~\ref{thm6} by showing that $d \geq 2k$. \qed \medskip When $S$ is a sphere the conclusion of Theorem~\ref{thm6} also holds, since we are in the situation where $k=0$. However, in this case we have a more precise disjunction result. \begin{thm} Let $(U,\xi )$ be a $\xi$-homogeneous neighborhood of a sphere $S$. If $\xi$ is tight (i.e. $\Gamma_U$ is connected), then any Legendrian curve $L\subset S$ can be made disjoint from $S$ by a positive isotopy. \end{thm} Consider $\RM^3$ with coordinates $(x,y,z)$ endowed with the contact structure $\zeta =\ker (dz+xdy)$. The radial vector field $$R=2z\frac{\partial}{\partial z} +x\frac{\partial}{\partial x}+y\frac{\partial}{\partial y}$$ is contact. Due to Giroux's realization lemma, the germ of $\xi$ near $S$ is isomorphic to the germ given by $\zeta$ near a sphere $S_0$ transversal to $R$. Let $L_0$ be the image of $L$ in $S_0$ by this map. By genericity, we can assume that $L_0$ avoids the vertical axis $\{ x=0,z=0\}$. Now, if we push $L_0$ enough by the flow of $\frac{\partial}{\partial z}$, we obtain a positive isotopy of $L_0$ whose endpoint $L_1$ avoids $S_0$. This isotopy takes place in a $\zeta$-homogeneous collar containing $S_0$ and obtained by flowing $S_0$ back and forth by the flow of $R$. This collar embeds in $U$ by an embedding sending $S_0$ to $S$ and the $R$-direction to the $\RM$-direction. \qed \bigskip \bigskip \bigskip {\small Vincent Colin, Universit\'e de Nantes, Laboratoire de math\'ematiques Jean Leray, UMR 6629 du CNRS. email: Vincent.Colin@univ-nantes.fr \medskip Emmanuel Ferrand, Universit\'e Pierre et Marie Curie, Institut Math\'ematique de Jussieu, UMR 7586 du CNRS.
email: emmanuel.ferrand@upmc.fr \medskip Petya Pushkar, D\'epartement de Math\'ematiques, Universit\'e Libre de Bruxelles. email: ppushkar@ulb.ac.be }
\begin{document} \maketitle \abstract{We study initial-boundary value problems for linear evolution equations of arbitrary spatial order, subject to arbitrary linear boundary conditions and posed on a rectangular 1-space, 1-time domain. We give a new characterisation of the boundary conditions that specify well-posed problems using Fokas' transform method. We also give a sufficient condition guaranteeing that the solution can be represented using a series. The relevant condition, the analyticity at infinity of certain meromorphic functions within particular sectors, is significantly more concrete and easier to test than the previous criterion, based on the existence of admissible functions.} \section{Introduction} \label{sec:P1:Intro} In this work, we consider \smallskip \noindent{\bfseries The initial-boundary value problem $\Pi(n,A,a,h,q_0)$:} Find $q\in C^\infty([0,1]\times[0,T])$ which satisfies the linear, evolution, constant-coefficient partial differential equation \BE \label{eqn:P1:Intro:PDE} \partial_tq(x,t) + a(-i\partial_x)^nq(x,t) = 0 \EE subject to the initial condition \BE \label{eqn:P1:Intro:IC} q(x,0) = q_0(x) \EE and the boundary conditions \BE \label{eqn:P1:Intro:BC} A\left(\partial_x^{n-1}q(0,t),\partial_x^{n-1}q(1,t),\partial_x^{n-2}q(0,t),\partial_x^{n-2}q(1,t),\dots,q(0,t),q(1,t)\right)^\T = h(t), \EE \noindent where the quintuple $(n,A,a,h,q_0)\in\mathbb{N}\times\mathbb{R}^{n\times2n}\times\mathbb{C}\times(C^\infty[0,T])^n \times C^\infty[0,1]$ is such that \begin{description} \item[$(\Pi1)$]{the \emph{order} $n\geq 2$,} \item[$(\Pi2)$]{the \emph{boundary coefficient matrix} $A$ is in reduced row-echelon form,} \item[$(\Pi3)$]{if $n$ is odd then the \emph{direction coefficient} $a=\pm i$, if $n$ is even then $a=e^{i\theta}$ for some $\theta\in[-\pi/2,\pi/2]$,} \item[$(\Pi4)$]{the \emph{boundary data} $h$ and the \emph{initial datum} $q_0$ are compatible in the sense that \BE \label{eqn:P1:Intro:Compatibility}
A\left(q_0^{(n-1)}(0),q_0^{(n-1)}(1),q_0^{(n-2)}(0),q_0^{(n-2)}(1),\dots,q_0(0),q_0(1)\right)^\T = h(0). \EE} \end{description} Provided $\Pi$ is well-posed, in the sense of admitting a unique, smooth solution, its solution may be found using Fokas' unified transform method~\cite{Fok2008a,FP2001a}. The representation thus obtained is a contour integral of transforms of the initial and boundary data. Certain problems, for example those with periodic boundary conditions, may be solved using classical methods such as Fourier's separation of variables~\cite{Fou1822a}, to yield a representation of the solution as a discrete Fourier series. By the well-posedness of $\Pi$, these are two different representations of the same solution. For individual examples, Pelloni~\cite{Pel2005a} and Chilton~\cite{Chi2006a} discuss a method of recovering a series representation from the integral representation through a contour deformation and a residue calculation. Particular examples have been identified of well-posed problems for which this deformation fails, but there is no systematic method of determining its applicability. Pelloni~\cite{Pel2004a} uses Fokas' method to decide the well-posedness of a class of problems with uncoupled, non-Robin boundary conditions, giving an explicit condition (the number of boundary conditions that must be specified at each end of the space interval) whose validity may be ascertained immediately. However, there exist no criteria for well-posedness that are at once more general than Pelloni's and simpler to check than the technical `admissible set' characterisation of~\cite{FP2001a}. The principal result of this work is a new characterisation of well-posedness. The condition is the decay of particular integrands within certain sectors of the complex plane. Indeed, let $D=\{\rho\in\mathbb{C}:\Re(a\rho^n)<0\}$.
Then \begin{thm} \label{thm:P1:Intro:WellPosed} The problem $\Pi(n,A,a,h,q_0)$ is well-posed if and only if $\eta_j(\rho)$ is entire and the ratio \BE \label{eqn:P1:Intro:thm.WellPosed:Decay} \frac{\eta_j(\rho)}{\DeltaP(\rho)}\to0 \mbox{ as }\rho\to\infty \mbox{ from within } D, \mbox{ away from the zeros of } \DeltaP. \EE for each $j$. \end{thm} We provide a small contribution to Fokas' method, making it fully algorithmic. We express the solution in terms of the PDE characteristic determinant, $\DeltaP$, the determinant of the matrix \BE \label{eqn:P1:Intro:PDE.Characteristic.Matrix} \M{\mathcal{A}}{k}{j}(\rho) = \begin{cases}\begin{array}{l}c_{(J_j-1)/2}(\rho)\left(\omega^{(n-1-[J_j-1]/2)(k-1)}\phantom{\displaystyle\sum_{r\in\widehat{J}^+}}\right. \\ \hspace{2.5mm} - \displaystyle\sum_{r\in\widehat{J}^+}\alpha_{\widehat{J}^+_r\hspace{0.5mm}(J_j-1)/2}\omega^{(n-1-r)(k-1)}(i\rho)^{(J_j-1)/2-r} \\ \hspace{2.5mm}\left.+ e^{-i\omega^{k-1}\rho}\displaystyle\sum_{r\in\widehat{J}^-}\alpha_{\widehat{J}^-_r\hspace{0.5mm}(J_j-1)/2}\omega^{(n-1-r)(k-1)}(i\rho)^{(J_j-1)/2-r}\right)\end{array} & J_j\mbox{ odd,} \\ \begin{array}{l}c_{J_j/2}(\rho)\left(-\omega^{(n-1-J_j/2)(k-1)}e^{-i\omega^{k-1}\rho}\phantom{\displaystyle\sum_{r\in\widehat{J}^+}}\right. \\ \hspace{2.5mm} - \displaystyle\sum_{r\in\widehat{J}^+}\beta_{\widehat{J}^+_r\hspace{0.5mm}J_j/2}\omega^{(n-1-r)(k-1)}(i\rho)^{J_j/2-r} \\ \hspace{2.5mm}\left.+ e^{-i\omega^{k-1}\rho}\displaystyle\sum_{r\in\widehat{J}^-}\beta_{\widehat{J}^-_r\hspace{0.5mm}J_j/2}\omega^{(n-1-r)(k-1)}(i\rho)^{J_j/2-r}\right)\end{array} & J_j\mbox{ even.}\end{cases} \EE The matrix $\mathcal{A}$ appears in the generalised spectral Dirichlet to Neumann map derived in Section~\ref{sec:P1:Implicit}. The application of the map to the formal result Theorem~\ref{thm:P1:Implicit:Formal} yields the following implicit equation for $q$, the solution of $\Pi$. \begin{thm} \label{thm:P1:Intro:Implicit} Let $\Pi(n,A,a,h,q_0)$ be well-posed with solution $q$. 
Then $q(x,t)$ may be expressed in terms of contour integrals of transforms of the boundary data, initial datum and solution at final time as follows: \begin{multline} \label{thm:P1:Intro:thm.Implicit:q} 2\pi q(x,t)=\int_\mathbb{R}e^{i\rho x-a\rho^nt}\hat{q}_0(\rho)\d\rho - \int_{\partial D^+}e^{i\rho x-a\rho^nt}\sum_{j\in J^+}\frac{\zeta_j(\rho) - e^{a\rho^nT}\eta_j(\rho)}{\DeltaP(\rho)}\d\rho \\ - \int_{\partial D^-}e^{i\rho(x-1)-a\rho^nt}\sum_{j\in J^-}\frac{\zeta_j(\rho) - e^{a\rho^nT}\eta_j(\rho)}{\DeltaP(\rho)}\d\rho, \end{multline} where the sectors $D^\pm = D\cap\mathbb{C}^\pm$ and $D = \{\rho\in\mathbb{C}:\Re(a\rho^n)<0\}$. \end{thm} Equation~\eqref{thm:P1:Intro:thm.Implicit:q} gives only an implicit representation of the solution as the functions $\eta_j$ are defined in terms of the Fourier transform of the solution evaluated at final time, which is not a datum of the problem. Nevertheless the importance of the PDE characteristic determinant is clear. The integrands are meromorphic functions so $q$ depends upon their behaviour as $\rho\to\infty$ from within $D^\pm$ and upon their poles, which can only arise at zeros of $\DeltaP$. It is the behaviour at infinity that is used to characterise well-posedness in Theorem~\ref{thm:P1:Intro:WellPosed}, the proof of which is given in Section~\ref{sec:P1:WellPosed}. In Section~\ref{sec:P1:Reps} we derive two representations of the solution of an initial-boundary value problem. Let $(\sigma_k)_{k\in\mathbb{N}}$ be a sequence containing each nonzero zero of $\DeltaP$ precisely once and define the index sets \begin{gather*} K^{\mathbb{R}}=\{k\in\mathbb{N}:\sigma_k\in\mathbb{R}\},\\ K^+=\{k\in\mathbb{N}:\Im\sigma_k\geq0\},\\ K^-=\{k\in\mathbb{N}:\Im\sigma_k<0\}. \end{gather*} Then the following theorems give representations of the solution to the problem $\Pi$. \begin{thm} \label{thm:P1:Intro:Reps.Int} Let the problem $\Pi(n,A,a,h,q_0)$ be well-posed. 
Then the solution $q$ may be expressed using contour integrals of transforms of the initial and boundary data by \begin{multline} \label{eqn:P1:Intro:thm.Reps.Int:q} q(x,t) = \frac{i}{2}\sum_{k\in K^+}\res_{\rho=\sigma_k}\frac{e^{i\rho x-a\rho^nt}}{\DeltaP(\rho)} \sum_{j\in J^+} \zeta_j(\rho) + \int_{\partial\widetilde{E}^+}e^{i\rho x-a\rho^nt}\sum_{j\in J^+}\frac{\zeta_j(\rho)}{\DeltaP(\rho)}\d\rho \\ + \frac{i}{2}\sum_{k\in K^-}\res_{\rho=\sigma_k}\frac{e^{i\rho(x-1)-a\rho^nt}}{\DeltaP(\rho)} \sum_{j\in J^-} \zeta_j(\rho) + \int_{\partial\widetilde{E}^-}e^{i\rho(x-1)-a\rho^nt}\sum_{j\in J^-}\frac{\zeta_j(\rho)}{\DeltaP(\rho)}\d\rho \\ - \frac{1}{2\pi}\left\{\sum_{k\in K^\mathbb{R}} \int_{\Gamma_k} + \int_{\mathbb{R}} \right\} e^{i\rho x-a\rho^nt} \left( \frac{1}{\DeltaP(\rho)}-1 \right)H(\rho) \d\rho, \end{multline} \end{thm} \begin{thm} \label{thm:P1:Intro:Reps.Ser} Let $a=\pm i$ and let the problems $\Pi=\Pi(n,A,a,h,q_0)$ and $\Pi'=\Pi(n,A,-a,h,q_0)$ be well-posed. Then the solution $q$ of $\Pi$ may be expressed as a discrete series of transforms of the initial and boundary data by \begin{multline} \label{eqn:P1:Intro:thm.Reps.Ser:q} q(x,t) = \frac{i}{2}\sum_{k\in K^+}\res_{\rho=\sigma_k}\frac{e^{i\rho x-a\rho^nt}}{\DeltaP(\rho)} \sum_{j\in J^+} \zeta_j(\rho) \\ + \frac{i}{2}\sum_{k\in K^-}\res_{\rho=\sigma_k}\frac{e^{i\rho(x-1)-a\rho^nt}}{\DeltaP(\rho)} \sum_{j\in J^-} \zeta_j(\rho) \\ - \frac{1}{2\pi}\left\{\sum_{k\in K^\mathbb{R}} \int_{\Gamma_k} + \int_{\mathbb{R}} \right\} e^{i\rho x-a\rho^nt} \left( \frac{1}{\DeltaP(\rho)}-1 \right)H(\rho) \d\rho. \end{multline} \end{thm} The final integral term in both equations~\eqref{eqn:P1:Intro:thm.Reps.Int:q} and~\eqref{eqn:P1:Intro:thm.Reps.Ser:q} depends upon $H$, a linear combination of $t$-transforms of the boundary data which evaluates to $0$ if $h=0$. 
Hence if $\Pi$ is a homogeneous initial-boundary value problem then the final term makes no contribution to equations~\eqref{eqn:P1:Intro:thm.Reps.Int:q} and~\eqref{eqn:P1:Intro:thm.Reps.Ser:q}. Special cases of Theorem~\ref{thm:P1:Intro:Reps.Int} have appeared before, but the representations differ from equation~\eqref{eqn:P1:Intro:thm.Reps.Int:q}. The result is shown for several specific examples in~\cite{FP2005a,Pel2005a}, including a second order problem with Robin boundary conditions. For simple boundary conditions, the result is mentioned in Remark~6 of~\cite{FP2001a} and Lemma~4.2 of~\cite{Pel2004a} contains the essence of the proof. Unlike earlier forms, equation~\eqref{eqn:P1:Intro:thm.Reps.Int:q} represents $q$ using discrete series as far as possible; only the parts of the integral terms that cannot be represented as series remain. This may not have any advantage for computation but is done to highlight the contrast with equation~\eqref{eqn:P1:Intro:thm.Reps.Ser:q}. In Theorem~\ref{thm:P1:Intro:Reps.Ser} the well-posedness of $\Pi'$ is used to show that the first two integral terms of equation~\eqref{eqn:P1:Intro:thm.Reps.Int:q} evaluate to zero. Under the map $a\mapsto-a$, $D$ maps to $E$, the interior of its complement; we exploit this fact together with Theorem~\ref{thm:P1:Intro:WellPosed} to show the decay of \BES \frac{\zeta_j(\rho)}{\DeltaP(\rho)} \mbox{ as } \rho\to\infty \mbox{ from within } \widetilde{E}. \EES This maximally generalises the arguments of Pelloni and Chilton, in the sense that the deformation of contours cannot yield a series representation of the solution to $\Pi$ if $\Pi'$ is ill-posed. Theorem~\ref{thm:P1:Intro:WellPosed} is useful because it reduces the complexity of the analysis necessary to prove that a particular initial-boundary value problem is well-posed but its use still requires some asymptotic analysis.
It would be preferable to give a condition that may be validated by inspection of the boundary coefficient matrix and is sufficient for well-posedness. We discuss such criteria in Section~\ref{sec:P1:Alt}. Section~\ref{sec:P1:Alt} also contains a proof of the following result, complementing Theorem~\ref{thm:P1:Intro:Reps.Ser}. This theorem highlights the essential difference between odd order problems, whose well-posedness depends upon the direction coefficient, and even order problems, whose well-posedness is determined by the boundary coefficient matrix only. \begin{thm} \label{thm:P1:Intro:Even.Can.Deform.If.Well-posed} Let $n$ be even and $a=\pm i$. Using the notation of Theorem~\ref{thm:P1:Intro:Reps.Ser}, the problem $\Pi'$ is well-posed if and only if $\Pi$ is well-posed. \end{thm} In Section~\ref{sec:P1:Spectrum} we investigate the \emph{PDE discrete spectrum}, the set of zeros of the PDE characteristic determinant. We prove a technical lemma describing the distribution of the $\sigma_k$, which is used in the earlier sections. Under certain conditions we are able to exploit symmetry arguments to improve upon the general results Langer presents~\cite{Lan1931a} for the particular exponential polynomials of interest. \section{Implicit solution of IBVP} \label{sec:P1:Implicit} In Section~\ref{ssec:P1:Implicit:Fokas} we give the standard results of Fokas' unified transform method in the notation of this work. In Section~\ref{ssec:P1:Implicit:DtoN} we state and prove Lemma~\ref{lem:P1:Implicit:DtoNMap}, the generalised spectral Dirichlet to Neumann map. In Section~\ref{ssec:P1:Implicit:ApplyMap} we apply the map to the formal results of Section~\ref{ssec:P1:Implicit:Fokas}, concluding the proof of Theorem~\ref{thm:P1:Intro:Implicit}. The latter two sections contain formal definitions of many of the terms and much of the notation used throughout this work. 
\subsection{Fokas' method} \label{ssec:P1:Implicit:Fokas} The first steps of Fokas' transform method yield a formal representation for the solution of the initial-boundary value problem, given in the following \begin{thm} \label{thm:P1:Implicit:Formal} Let the initial-boundary value problem $\Pi(n,A,a,h,q_0)$ be well-posed. Then its solution $q$ may be expressed formally as the sum of three contour integrals, \begin{multline} \label{eqn:P1:Implicit:thm.Formal:q} q(x,t)=\frac{1}{2\pi}\left(\int_\mathbb{R}e^{i\rho x-a\rho^nt}\hat{q}_0(\rho)\d\rho - \int_{\partial D^+}e^{i\rho x-a\rho^nt}\sum_{j=0}^{n-1}c_j(\rho)\widetilde{f}_j(\rho)\d\rho\right. \\ \left.- \int_{\partial D^-}e^{i\rho(x-1)-a\rho^nt}\sum_{j=0}^{n-1}c_j(\rho)\widetilde{g}_j(\rho)\d\rho\right), \end{multline} where \BE \label{eqn:P1:Implicit:thm.Formal:Definitions} \begin{aligned} \widetilde{f}_j(\rho)&=\int_0^Te^{a\rho^ns}f_j(s)\d s, & \widetilde{g}_j(\rho)&=\int_0^Te^{a\rho^ns}g_j(s)\d s, \\ f_j(t) &= \partial_x^jq(0,t), & g_j(t) &= \partial_x^jq(1,t), \\ \hat{q}_0(\rho)&=\int_0^1e^{-i\rho y}q_0(y)\d y, & c_j(\rho) &=-a\rho^n(i\rho)^{-(j+1)}. \end{aligned} \EE \end{thm} The above theorem is well established and its proof, via Lax pair and Riemann-Hilbert formalism, appears in~\cite{Fok2001a,Fok2008a,FP2001a}. We state it here without proof to highlight the difference in notation to previous publications. We use $\rho$ to denote the spectral parameter, in place of $k$ in the earlier work. We use $f_j$ and $g_j$ exclusively to denote the boundary functions; even for simple boundary conditions in which some of the boundary functions are equal to boundary data we denote the boundary data separately by $h_k$. The transformed boundary functions are the $2n$ unknowns in equation~\eqref{eqn:P1:Implicit:thm.Formal:q}, of which at most $n$ may be explicitly specified by the boundary conditions~\eqref{eqn:P1:Intro:BC}. 
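Since the transforms in equations~\eqref{eqn:P1:Implicit:thm.Formal:Definitions} depend on $\rho$ only through $\rho^n$, they are invariant under $\rho\mapsto e^{2\pi i/n}\rho$. A minimal numerical check of this invariance (illustrative only; the boundary function, $n$, and the sample point are arbitrary choices of ours):

```python
import cmath
import math

def t_transform(f, rho, n, a=1j, T=1.0, samples=2000):
    # Midpoint-rule approximation of the t-transform
    # integral_0^T exp(a * rho^n * s) f(s) ds.
    ds = T / samples
    return sum(cmath.exp(a * rho ** n * (k + 0.5) * ds) * f((k + 0.5) * ds)
               for k in range(samples)) * ds

n = 3
omega = cmath.exp(2j * math.pi / n)   # the rotation omega = exp(2*pi*i/n)
f = lambda s: math.sin(5 * s)         # an arbitrary smooth boundary function
rho = 1.3 - 0.7j
# (omega * rho)^n = rho^n, so the transform is unchanged under rho -> omega*rho.
assert abs(t_transform(f, rho, n) - t_transform(f, omega * rho, n)) < 1e-9
```

This is exactly the invariance that lets the global relation below be evaluated at $\rho,\omega\rho,\dots,\omega^{n-1}\rho$ to produce $n$ equations from one.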
To determine the remaining $n$ or more we require a generalised Dirichlet to Neumann map in the form of Lemma~\ref{lem:P1:Implicit:DtoNMap}. This is derived from the boundary conditions and the global relation. \begin{lem}[Global relation] \label{lem:P1:Implicit:GR} Let $\Pi(n,A,a,h,q_0)$ be well-posed with solution $q$. Let \BES \hat{q}_T(\rho)=\int_0^1e^{-i\rho y}q(y,T)\d y \EES be the usual spatial Fourier transform of the solution evaluated at final time. Then the transformed functions $\hat{q}_0$, $\hat{q}_T$, $\widetilde{f}_j$ and $\widetilde{g}_j$ satisfy \BE \label{lem:P1:Implicit:lem.GR:GR} \sum_{j=0}^{n-1}c_j(\rho)\left( \widetilde{f}_j(\rho)-e^{-i\rho}\widetilde{g}_j(\rho) \right) = \hat{q}_0(\rho) - e^{a\rho^nT}\hat{q}_T(\rho),\qquad \rho\in\mathbb{C}. \EE \end{lem} The global relation is derived using an application of Green's Theorem to the domain $[0,1]\times[0,T]$ in the aforementioned publications. As the $t$-transform, \BE \label{eqn:P1:Implicit:tTransform} \widetilde{X}(\rho)=\int_0^Te^{a\rho^nt}X(t)\d t, \EE is invariant under the map $\rho\mapsto \exp{(2j\pi i/n)}\rho$ for any integer $j$, the global relation provides a system of $n$ equations in the transformed functions to complement the boundary conditions. \subsection{Generalised spectral Dirichlet to Neumann map} \label{ssec:P1:Implicit:DtoN} We give a classification of boundary conditions and formally state the generalised spectral Dirichlet to Neumann map. \begin{ntn} \label{ntn:P1:Implicit:Index.Sets} Consider the problem $\Pi(n,A,a,h,q_0)$, which need not be well-posed. Define $\omega=\exp{(2\pi i/n)}$. 
Define the \emph{boundary coefficients} $\M{\alpha}{k}{j}$, $\M{\beta}{k}{j}$ to be the entries of $A$ such that \BE \label{ntn:P1:Implicit:ntn.Index.Sets:Boundary.Coefficients} \begin{pmatrix}\alpha_{1\hspace{0.5mm}n-1} & \beta_{1\hspace{0.5mm}n-1} & \alpha_{1\hspace{0.5mm}n-2} & \beta_{1\hspace{0.5mm}n-2}& \dots & \alpha_{1\hspace{0.5mm}0} & \beta_{1\hspace{0.5mm}0} \\ \alpha_{2\hspace{0.5mm}n-1} & \beta_{2\hspace{0.5mm}n-1} & \alpha_{2\hspace{0.5mm}n-2} & \beta_{2\hspace{0.5mm}n-2}& \dots & \alpha_{2\hspace{0.5mm}0} & \beta_{2\hspace{0.5mm}0} \\ \vdots & \vdots & \vdots & \vdots & & \vdots & \vdots \\ \alpha_{n\hspace{0.5mm}n-1} & \beta_{n\hspace{0.5mm}n-1} & \alpha_{n\hspace{0.5mm}n-2} & \beta_{n\hspace{0.5mm}n-2}& \dots & \alpha_{n\hspace{0.5mm}0} & \beta_{n\hspace{0.5mm}0}\end{pmatrix}=A. \EE We define the following index sets and functions. $\widehat{J}^+ = \{j\in\{0,1,\dots,n-1\}$ such that $\alpha_{k\hspace{0.5mm}j}$ is a pivot in $A$ for some $k\}$, the set of columns of $A$ relating to the left of the space interval which contain a pivot. $\widehat{J}^- = \{j\in\{0,1,\dots,n-1\}$ such that $\beta_{k\hspace{0.5mm}j}$ is a pivot in $A$ for some $k\}$, the set of columns of $A$ relating to the right of the space interval which contain a pivot. $\widetilde{J}^+ = \{0,1,\dots,n-1\}\setminus \widehat{J}^+$, the set of columns of $A$ relating to the left of the space interval which do not contain a pivot. $\widetilde{J}^- = \{0,1,\dots,n-1\}\setminus \widehat{J}^-$, the set of columns of $A$ relating to the right of the space interval which do not contain a pivot. $J = \{2j+1$ such that $j\in\widetilde{J}^+\}\cup\{2j$ such that $j\in\widetilde{J}^-\}$, an index set for the boundary functions whose corresponding columns in $A$ do not contain a pivot. Also, the decreasing sequence $(J_j)_{j=1}^n$ of elements of $J$. 
$J' = \{2j+1$ such that $j\in\widehat{J}^+\}\cup\{2j$ such that $j\in\widehat{J}^-\} = \{0,1,\dots,2n-1\} \setminus J$, an index set for the boundary functions whose corresponding columns in $A$ contain a pivot. Also, the decreasing sequence $(J'_j)_{j=1}^n$ of elements of $J'$. The functions $$V(\rho)=(V_1(\rho),V_2(\rho),\dots,V_n(\rho))^\T, \qquad V_j(\rho)=\begin{cases} \widetilde{f}_{(J_j-1)/2}(\rho) & J_j \mbox{ odd,} \\ \widetilde{g}_{J_j/2}(\rho) & J_j \mbox{ even,}\end{cases}$$ the boundary functions whose corresponding columns in $A$ do not contain a pivot. The functions $$W(\rho)=(W_1(\rho),W_2(\rho),\dots,W_n(\rho))^\T, \qquad W_j(\rho)=\begin{cases} \widetilde{f}_{(J'_j-1)/2}(\rho) & J'_j \mbox{ odd,} \\ \widetilde{g}_{J'_j/2}(\rho) & J'_j \mbox{ even,}\end{cases}$$ the boundary functions whose corresponding columns in $A$ contain a pivot. $(\widehat{J}^+_j)_{j\in\widehat{J}^+}$, a sequence such that $\alpha_{\widehat{J}^+_j\hspace{0.5mm}j}$ is a pivot in $A$ when $j\in\widehat{J}^+$. $(\widehat{J}^-_j)_{j\in\widehat{J}^-}$, a sequence such that $\beta_{\widehat{J}^-_j\hspace{0.5mm}j}$ is a pivot in $A$ when $j\in\widehat{J}^-$. \end{ntn} \begin{defn}[Classification of boundary conditions] \label{defn:P1:Implicit:BC.Classification} The boundary conditions of the problem $\Pi(n,A,a,h,q_0)$ are said to be \begin{enumerate} \item{\emph{homogeneous} if $h=0$. Otherwise the boundary conditions are \emph{inhomogeneous}.} \item{\emph{uncoupled} if \begin{align*} &\mbox{if } \M{\alpha}{k}{j} \mbox{ is a pivot in } A \mbox{ then } \M{\beta}{k}{r}=0 \hsforall r \mbox{ and} \\ &\mbox{if } \M{\beta}{k}{j} \mbox{ is a pivot in } A \mbox{ then } \M{\alpha}{k}{r}=0 \hsforall r. 
\end{align*} Otherwise we say that the boundary conditions are \emph{coupled}.} \item{\emph{non-Robin} if \BES \hsforall k\in\{1,2,\dots,n\}, \mbox{ if } \M{\alpha}{k}{j}\neq0 \mbox{ or } \M{\beta}{k}{j}\neq0 \mbox{ then } \M{\alpha}{k}{r}=\M{\beta}{k}{r}=0\hsforall r\neq j, \EES that is, each boundary condition contains only one order of partial derivative. Otherwise we say that the boundary conditions are \emph{of Robin type}. Note that whether boundary conditions are of Robin type or not is independent of whether they are coupled, unlike Duff's definition~\cite{Duf1956a}.} \item{\emph{simple} if they are uncoupled and non-Robin.} \end{enumerate} \end{defn} The terms `generalised' and `spectral' are prefixed to the name `Dirichlet to Neumann map' of the lemma below to avoid confusion regarding its function. {\bfseries Generalised:} The boundary conditions we study are considerably more complex than those considered in~\cite{Chi2006a,Fok2001a,FP2001a,FP2005a,Pel2004a,Pel2005a}. Indeed, as $A$ may specify any linear boundary conditions, the known boundary functions may not be `Dirichlet' (zero order) and the unknown boundary functions need not be `Neumann' (first order). Further, if $A$ has more than $n$ non-zero entries then the lemma must be capable of expressing more than $n$ unknown boundary functions in terms of fewer than $n$ known boundary data. {\bfseries Spectral:} Owing to the form of equation~\eqref{eqn:P1:Implicit:thm.Formal:q} we are interested not in the boundary functions themselves but in their $t$-transforms, as defined in equations~\eqref{eqn:P1:Implicit:thm.Formal:Definitions}. It is possible, though unnecessarily complicated, to perform a generalised Dirichlet to Neumann map in real time and subsequently transform to spectral time but, as the global relation is in spectral time, to do so requires the use of an inverse spectral transform. 
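The index sets of Notation~\ref{ntn:P1:Implicit:Index.Sets} amount to simple bookkeeping on the pivot columns of the reduced row-echelon matrix $A$. A sketch of that bookkeeping (our own illustrative encoding, not code from this work):

```python
def index_sets(A, n):
    # Columns of A are ordered alpha_{n-1}, beta_{n-1}, ..., alpha_0, beta_0,
    # as in the boundary coefficient matrix; A is in reduced row-echelon form,
    # so each row's first nonzero entry is a pivot.
    pivot_cols = []
    for row in A:
        for c, entry in enumerate(row):
            if entry != 0:
                pivot_cols.append(c)
                break
    Jhat_plus, Jhat_minus = set(), set()
    for c in pivot_cols:
        order = n - 1 - c // 2          # derivative order j for this column
        (Jhat_plus if c % 2 == 0 else Jhat_minus).add(order)
    Jtilde_plus = set(range(n)) - Jhat_plus
    Jtilde_minus = set(range(n)) - Jhat_minus
    # J indexes the unknown boundary functions: 2j+1 <-> f_j, 2j <-> g_j,
    # listed as a decreasing sequence.
    J = sorted({2 * j + 1 for j in Jtilde_plus} | {2 * j for j in Jtilde_minus},
               reverse=True)
    return Jhat_plus, Jhat_minus, J

# n = 2 with Dirichlet conditions q(0,t) = h_1, q(1,t) = h_2:
# the pivots sit in the alpha_0 and beta_0 columns.
A = [[0, 0, 1, 0],
     [0, 0, 0, 1]]
print(index_sets(A, 2))  # → ({0}, {0}, [3, 2])
```

So for second order Dirichlet conditions the unknowns indexed by $J$ are the two first order boundary functions $f_1$ and $g_1$, as expected.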
Instead, we exploit the linearity of the $t$-transform~\eqref{eqn:P1:Implicit:tTransform}, applying it to the boundary conditions, and derive the map in spectral time. The crucial component of the lemma is given in the following \begin{defn} \label{defn:P1:Implicit:DeltaP} Let $\Pi(n,A,a,h,q_0)$ be an initial-boundary value problem having the properties $(\Pi1)$--$(\Pi4)$ but not necessarily well-posed. We define the \emph{PDE characteristic matrix} $\mathcal{A}(\rho)$ by equation~\eqref{eqn:P1:Intro:PDE.Characteristic.Matrix} and the \emph{PDE characteristic determinant} to be the entire function \BE \label{eqn:P1:Implicit:defn.DeltaP:DeltaP} \DeltaP(\rho)=\det\mathcal{A}(\rho). \EE \end{defn} \begin{lem}[Generalised spectral Dirichlet to Neumann map] \label{lem:P1:Implicit:DtoNMap} Let $\Pi(n,A,a,h,q_0)$ be well-posed with solution $q$. Then \begin{enumerate} \item{The vector $V$ of transformed boundary functions satisfies the \emph{reduced global relation} \BE \label{eqn:P1:Implicit:lem.DtoNMap:RGR} \mathcal{A}(\rho)V(\rho) = U(\rho) - e^{a\rho^nT}\begin{pmatrix}\hat{q}_T(\rho)\\\vdots\\\hat{q}_T(\omega^{n-1}\rho)\end{pmatrix}, \EE where \begin{align} \label{eqn:P1:Implicit:lem.DtoNMap:U} U(\rho) &= (u(\rho,1),u(\rho,2),\dots,u(\rho,n))^\T, \\ \label{eqn:P1:Implicit:lem.DtoNMap:u} u(\rho,k) &= \hat{q}_0(\omega^{k-1}\rho) - \sum_{l\in\widehat{J}^+} c_l(\omega^{k-1}\rho)\widetilde{h}_{\widehat{J}^+_l}(\rho) + e^{-i\omega^{k-1}\rho}\sum_{l\in\widehat{J}^-} c_l(\omega^{k-1}\rho)\widetilde{h}_{\widehat{J}^-_l}(\rho) \end{align} and $\widetilde{h}_j$ is the function obtained by applying the $t$-transform~\eqref{eqn:P1:Implicit:tTransform} to the boundary datum $h_j$.} \item{The PDE characteristic matrix has full rank and is independent of $h$ and $q_0$; differing values of $a$ only scale $\mathcal{A}$ by a nonzero constant factor.} \item{The vectors $V$ and $W$ of transformed boundary functions satisfy the \emph{reduced boundary conditions} \BE 
\label{eqn:P1:Implicit:lem.DtoNMap:RBC} W(\rho) = \left(\widetilde{h}_1(\rho),\widetilde{h}_2(\rho),\dots,\widetilde{h}_n(\rho)\right)^\T - \widehat{A} V(\rho), \EE where the \emph{reduced boundary coefficient matrix} is given by \BE \label{eqn:P1:Implicit:lem.DtoNMap:RBC.Matrix} \widehat{A}_{k\hspace{0.5mm}j} = \begin{cases} \M{\alpha}{k}{(J_j-1)/2} & J_j \mbox{ odd,} \\ \M{\beta}{k}{J_j/2} & J_j \mbox{ even.}\end{cases} \EE} \end{enumerate} \end{lem} \begin{proof} Applying the $t$-transform~\eqref{eqn:P1:Implicit:tTransform} to each line of the boundary conditions~\eqref{eqn:P1:Intro:BC} yields a system of $n$ equations in the transformed boundary functions. As $A$ is in reduced row-echelon form it is possible to split the vector containing all of the transformed boundary functions into the two vectors $V$ and $W$, justifying the reduced boundary conditions. The reduced boundary conditions may also be written \begin{align} \label{eqn:GettingA:General:Main.Lemma.Proof.Reduced.GR1} \widetilde{f}_j(\rho) &= \widetilde{h}_{\widehat{J}^+_j}(\rho) - \sum_{r\in\widetilde{J}^+}\alpha_{\widehat{J}^+_j\hspace{0.5mm}r}\widetilde{f}_r(\rho) - \sum_{r\in\widetilde{J}^-}\beta_{\widehat{J}^+_j\hspace{0.5mm}r}\widetilde{g}_r(\rho), & \mbox{for } j&\in\widehat{J}^+ \mbox{ and} \\ \label{eqn:GettingA:General:Main.Lemma.Proof.Reduced.GR2} \widetilde{g}_j(\rho) &= \widetilde{h}_{\widehat{J}^-_j}(\rho) - \sum_{r\in\widetilde{J}^+}\alpha_{\widehat{J}^-_j\hspace{0.5mm}r}\widetilde{f}_r(\rho) - \sum_{r\in\widetilde{J}^-}\beta_{\widehat{J}^-_j\hspace{0.5mm}r}\widetilde{g}_r(\rho), & \mbox{for } j&\in\widehat{J}^-. 
\end{align} As the $t$-transform is invariant under the map $\rho\mapsto\omega^j\rho$ for any integer $j$, the global relation (Lemma~\ref{lem:P1:Implicit:GR}) yields the system \BES \sum_{j=0}^{n-1}c_{j}(\rho)\omega^{(n-1-j)r}\widetilde{f}_{j}(\rho) - \sum_{j=0}^{n-1}e^{-i\omega^r\rho}c_{j}(\rho)\omega^{(n-1-j)r}\widetilde{g}_{j}(\rho) = \hat{q}_0(\omega^r\rho) - e^{a\rho^nT}\hat{q}_T(\omega^r\rho), \EES for $r\in\{0,1,\dots,n-1\}$. Using the fact that $\widehat{J}^+\cup\widetilde{J}^+=\widehat{J}^-\cup\widetilde{J}^-=\{0,1,\dots,n-1\}$ we split the sums on the left hand side to give \begin{multline*} \sum_{j\in\widehat{J}^+}c_{j}(\rho)\omega^{(n-1-j)r}\widetilde{f}_{j}(\rho) + \sum_{j\in\widetilde{J}^+}c_{j}(\rho)\omega^{(n-1-j)r}\widetilde{f}_{j}(\rho) \\ - \sum_{j\in\widehat{J}^-}e^{-i\omega^r\rho}c_{j}(\rho)\omega^{(n-1-j)r}\widetilde{g}_{j}(\rho) - \sum_{j\in\widetilde{J}^-}e^{-i\omega^r\rho}c_{j}(\rho)\omega^{(n-1-j)r}\widetilde{g}_{j}(\rho) \\ = \hat{q}_0(\omega^r\rho) - e^{a\rho^nT}\hat{q}_T(\omega^r\rho), \end{multline*} for $r\in\{0,1,\dots,n-1\}$. Substituting equations~\eqref{eqn:GettingA:General:Main.Lemma.Proof.Reduced.GR1} and~\eqref{eqn:GettingA:General:Main.Lemma.Proof.Reduced.GR2} and interchanging the summations we obtain the reduced global relation. The latter statement of (ii) follows immediately from the form of the PDE characteristic matrix. A full proof that $\mathcal{A}$ has full rank is given in the proof of Lemma~2.17 of~\cite{Smi2011a}. \end{proof} \subsection{Applying the map} \label{ssec:P1:Implicit:ApplyMap} We solve the system of linear equations~\eqref{eqn:P1:Implicit:lem.DtoNMap:RGR} for $V$ using Cramer's rule; equation~\eqref{eqn:P1:Implicit:lem.DtoNMap:RBC} then determines $W$ also. 
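Cramer's rule recovers each entry of $V$ as a ratio of determinants, the numerator being the PDE characteristic matrix with one column replaced by the right hand side; these numerators are the determinants $\widehat{\zeta}_j$, $\widehat{\eta}_j$ of the next notation. A generic sketch for a small dense system (our own illustration, adequate for the small $n\times n$ systems arising here, not an optimised solver):

```python
def det(M):
    # Laplace expansion along the first row; fine for small matrices.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** c * M[0][c] * det([row[:c] + row[c + 1:] for row in M[1:]])
               for c in range(len(M)))

def cramer_solve(M, b):
    # Solve M v = b: v_j = det(M with column j replaced by b) / det(M).
    d = det(M)
    return [det([row[:j] + [b[i]] + row[j + 1:] for i, row in enumerate(M)]) / d
            for j in range(len(M))]

# Toy 2x2 system standing in for A(rho) V(rho) = U(rho) at one fixed rho.
M = [[2, 1], [1, 3]]
b = [3, 5]
v = cramer_solve(M, b)
assert all(abs(sum(M[i][j] * v[j] for j in range(2)) - b[i]) < 1e-12
           for i in range(2))
```

The same formula applies verbatim with complex entries, which is how it is used at each fixed value of the spectral parameter.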
\begin{ntn} \label{ntn:P1:Implicit:DeltaP} Denote by $\widehat{\zeta}_j(\rho)$ the determinant of the matrix obtained by replacing the $j$\Th column of the PDE characteristic matrix with the vector $U(\rho)$ and denote by $\widehat{\eta}_j(\rho)$ the determinant of the matrix obtained by replacing the $j$\Th column of the PDE characteristic matrix with the vector $(\hat{q}_T(\rho),\hat{q}_T(\omega\rho),\dots,\hat{q}_T(\omega^{n-1}\rho))^\T$ for $j\in\{1,2,\dots,n\}$ and $\rho\in\mathbb{C}$. Define \BE \label{eqn:P1:Implicit:defn.DeltaP:zetajhat.etajhat} \begin{split} \widehat{\zeta}_j(\rho) &= \widetilde{h}_{j-n}(\rho) - \sum_{k=1}^{n}\M{\widehat{A}}{j-n}{k}\widehat{\zeta}_{k}(\rho), \\ \widehat{\eta}_j(\rho) &= \widetilde{h}_{j-n}(\rho) - \sum_{k=1}^{n}\M{\widehat{A}}{j-n}{k}\widehat{\eta}_{k}(\rho), \end{split}\EE for $j\in\{n+1,n+2,\dots,2n\}$ and $\rho\in\mathbb{C}$. Define \BE \label{eqn:P1:Implicit:defn.DeltaP:zetaj.etaj} \zeta_j(\rho) = \begin{cases}c_{(J_j-1)/2}(\rho)\widehat{\zeta}_j(\rho) & J_j \mbox{ odd,} \\ c_{J_j/2}(\rho)\widehat{\zeta}_j(\rho) & J_j \mbox{ even,} \\ c_{(J'_{j-n}-1)/2}(\rho)\widehat{\zeta}_j(\rho) & J'_{j-n} \mbox{ odd,} \\ c_{J'_{j-n}/2}(\rho)\widehat{\zeta}_j(\rho) & J'_{j-n} \mbox{ even,}\end{cases} \qquad \eta_j(\rho) = \begin{cases}c_{(J_j-1)/2}(\rho)\widehat{\eta}_j(\rho) & J_j \mbox{ odd,} \\ c_{J_j/2}(\rho)\widehat{\eta}_j(\rho) & J_j \mbox{ even,} \\ c_{(J'_{j-n}-1)/2}(\rho)\widehat{\eta}_j(\rho) & J'_{j-n} \mbox{ odd,} \\ c_{J'_{j-n}/2}(\rho)\widehat{\eta}_j(\rho) & J'_{j-n} \mbox{ even,}\end{cases} \EE for $\rho\in\mathbb{C}$ and define the index sets \begin{align*} J^+ &= \{j:J_j \mbox{ odd}\} \cup\{n+j:J'_j \mbox{ odd} \}, \\ J^- &= \{j:J_j \mbox{ even}\}\cup\{n+j:J'_j \mbox{ even}\}. 
\end{align*} \end{ntn} The generalised spectral Dirichlet to Neumann map Lemma~\ref{lem:P1:Implicit:DtoNMap} and Cramer's rule yield expressions for the transformed boundary functions: \BE \label{eqn:P1:Implicit:Zeta.Eta.f.g} \frac{\zeta_j(\rho) - e^{a\rho^nT}\eta_j(\rho)}{\DeltaP(\rho)} = \begin{cases}c_{(J_j-1)/2}(\rho)\widetilde{f}_{(J_j-1)/2}(\rho) & J_j \mbox{ odd,} \\ c_{J_j/2}(\rho)\widetilde{g}_{J_j/2}(\rho) & J_j \mbox{ even,} \\ c_{(J'_{j-n}-1)/2}(\rho)\widetilde{f}_{(J'_{j-n}-1)/2}(\rho) & J'_{j-n} \mbox{ odd,} \\ c_{J'_{j-n}/2}(\rho)\widetilde{g}_{J'_{j-n}/2}(\rho) & J'_{j-n} \mbox{ even,}\end{cases} \EE hence \begin{align*} \sum_{j=0}^{n-1}c_j(\rho)\widetilde{f}_j(\rho) &= \sum_{j\in J^+}\frac{\zeta_j(\rho) - e^{a\rho^nT}\eta_j(\rho)}{\DeltaP(\rho)}, \\ \sum_{j=0}^{n-1}c_j(\rho)\widetilde{g}_j(\rho) &= \sum_{j\in J^-}\frac{\zeta_j(\rho) - e^{a\rho^nT}\eta_j(\rho)}{\DeltaP(\rho)}. \end{align*} Substituting these equations into Theorem~\ref{thm:P1:Implicit:Formal} completes the proof of Theorem~\ref{thm:P1:Intro:Implicit}. \begin{rmk} There are several simplifications of the above definitions for specific types of boundary conditions. If the boundary conditions are simple, as studied in~\cite{Pel2004a}, then $\widehat{A}=0$. Hence, if the boundary conditions are simple and homogeneous then $\zeta_j=\eta_j=0$ for each $j>n$. Non-Robin boundary conditions admit a significantly simplified form of the PDE characteristic matrix; see equation~(2.2.5) of~\cite{Smi2011a}. For homogeneous boundary conditions, $\eta_j$ is $\zeta_j$ with $\hat{q}_T$ replacing $\hat{q}_0$. \end{rmk} \begin{rmk} \label{rmk:P1:Implicit:General.PDE} It is possible to extend the results above to initial-boundary value problems for a more general linear, constant-coefficient evolution equation, \BE \label{eqn:P1:Implicit:rmk.General.PDE:PDE} \partial_tq(x,t) + \sum_{j=0}^na_j(-i\partial_x)^jq(x,t) = 0, \EE with leading coefficient $a_n$ having the properties of $a$. 
In this case the spectral transforms must be redefined with $\sum_{j=0}^na_j\rho^j$ replacing $a\rho^n$ and the form of the boundary coefficient matrix also changes. The $\omega^X$ appearing in equation~\eqref{eqn:P1:Intro:PDE.Characteristic.Matrix} represent a rotation by $2X\pi/n$, corresponding to a map between simply connected components of $D$. The partial differential equation~\eqref{eqn:P1:Implicit:rmk.General.PDE:PDE} has dispersion relation $\sum_{j=0}^na_j\rho^j$ so $D$ is not simply a union of sectors but a union of sets that are asymptotically sectors; see Lemma~1.1 of~\cite{FS1999a}. Hence we replace $\omega^X$ with a biholomorphic map between the components of $D$. \end{rmk} \section{New characterisation of well-posedness} \label{sec:P1:WellPosed} This section provides a proof of Theorem~\ref{thm:P1:Intro:WellPosed}. The first subsection justifies that the decay condition is satisfied by all well-posed problems. The second subsection proves that the decay condition is sufficient for well-posedness. We clarify the definitions of $\widetilde{D}$ and $\widetilde{E}$ from Section~\ref{sec:P1:Intro}. By Lemma~\ref{lem:P1:Spectrum:Properties}, there exists some $\varepsilon>0$ such that the pairwise intersection of closed discs of radius $\varepsilon$ centred at zeros of $\DeltaP$ is empty. We define \BES \widetilde{D}=D\setminus\bigcup_{k\in\mathbb{N}}\overline{B}(\sigma_k,\varepsilon), \quad \widetilde{E}=E\setminus\bigcup_{k\in\mathbb{N}}\overline{B}(\sigma_k,\varepsilon). \EES \subsection{Well-posedness $\Rightarrow$ decay} \label{ssec:P1:WellPosed:WP.Implies.Decay} As the problem is well-posed, the solution evaluated at final time satisfies $q_T\in C^\infty[0,1]$, hence $\hat{q}_T$ and $\eta_j$ are entire. Similarly, $f_k,g_k\in C^\infty[0,T]$, hence $\widetilde{f}_k,\widetilde{g}_k$ are entire and decay as $\rho\to\infty$ from within $D$. 
Hence, by equation~\eqref{eqn:P1:Implicit:Zeta.Eta.f.g}, \BE \label{eqn:P1:WellPosed:WP.Implies.Decay.a} \frac{\zeta_j(\rho) - e^{a\rho^nT}\eta_j(\rho)}{\DeltaP(\rho)c_k(\rho)} \EE is entire and decays as $\rho\to\infty$ from within $D$ for each $j\in\{1,2,\dots,2n\}$, where $k$ depends upon $j$. We define the new complex set \BES \mathcal{D} = \{\rho\in D \mbox{ such that }-\Re(a\rho^nT)>2n|\rho|\}. \EES As $\mathcal{D}\subset D$, the ratio~\eqref{eqn:P1:WellPosed:WP.Implies.Decay.a} is analytic on $\mathcal{D}$ and decays as $\rho\to\infty$ from within $\mathcal{D}$. For $p\in\{1,2,\dots,n\}$, let $D_p$ be the $p$\Th simply connected component of $D$ encountered when moving anticlockwise from the positive real axis and let $\widetilde{D}_p=\widetilde{D}\cap D_p$. Then for each $p\in\{1,2,\dots,n\}$ there exists $R>0$ such that the set \BES \mathcal{D}_p = \left(\widetilde{D}_p\cap \mathcal{D}\right)\setminus\overline{B}(0,R) \EES is simply connected, open and unbounded. By definition, $\DeltaP(\rho)$ is an exponential polynomial whose terms are each \BES W(\rho)e^{-i\sum_{y\in Y}\omega^y\rho} \EES where $W$ is a monomial of degree at least $1$ and $Y\subset\{0,1,2,\dots,n-1\}$ is an index set. Hence \BES \frac{1}{\DeltaP(\rho)} = o(e^{n|\rho|}\rho^{-1}) \mbox{ as } \rho\to\infty \mbox{ or as } \rho\to0. \EES As $\zeta_j$ and $\eta_j$ also grow no faster than $o(e^{n|\rho|})$, the ratios \BES \frac{\zeta_j(\rho)}{\DeltaP(\rho)c_k(\rho)}, \quad \frac{\eta_j(\rho)}{\DeltaP(\rho)c_k(\rho)} = o(e^{2n|\rho|}\rho^{-1}) \mbox{ as } \rho\to\infty. \EES Hence the ratio \BE \label{eqn:P1:WellPosed:WP.Implies.Decay.d} \frac{e^{a\rho^nT}\eta_j(\rho)}{\DeltaP(\rho)c_k(\rho)} \EE decays as $\rho\to\infty$ from within $\mathcal{D}$ and away from the zeros of $\DeltaP$. 
However, the ratio \BE \label{eqn:P1:WellPosed:WP.Implies.Decay.b} \frac{\zeta_j(\rho)}{\DeltaP(\rho)c_k(\rho)} \EE is the sum of ratios~\eqref{eqn:P1:WellPosed:WP.Implies.Decay.a} and~\eqref{eqn:P1:WellPosed:WP.Implies.Decay.d} hence it also decays as $\rho\to\infty$ from within $\mathcal{D}$ and away from the zeros of $\DeltaP$. The terms in each of $\zeta_j(\rho)$ and $\DeltaP(\rho)$ are exponentials, each of which either decays or grows as $\rho\to\infty$ from within one of the simply connected components $\widetilde{D}_p$ of $\widetilde{D}$. Hence as $\rho\to\infty$ from within a particular component $\widetilde{D}_p$ the ratio~\eqref{eqn:P1:WellPosed:WP.Implies.Decay.b} either decays or grows. But, as observed above, these ratios all decay as $\rho\to\infty$ from within each $\mathcal{D}_p$. Hence the ratio~\eqref{eqn:P1:WellPosed:WP.Implies.Decay.b} decays as $\rho\to\infty$ from within $\widetilde{D}_p$. Now it is a simple observation that the ratio \BE \label{eqn:P1:WellPosed:WP.Implies.Decay.c} \frac{\eta_j(\rho)}{\DeltaP(\rho)c_k(\rho)} \EE must also decay as $\rho\to\infty$. Indeed ratio~\eqref{eqn:P1:WellPosed:WP.Implies.Decay.c} is the same as ratio~\eqref{eqn:P1:WellPosed:WP.Implies.Decay.b} but with $\hat{q}_T(\omega^{k-1}\rho)$ replacing $u(\rho,k)$ and, as observed above, $q_T\in C^\infty[0,1]$ also. Finally, the exponentials in $\eta_j$ and $\DeltaP$ ensure that the ratio \BE \label{eqn:P1:WellPosed:WP.Implies.Decay.e} \frac{\eta_j(\rho)}{\DeltaP(\rho)} \EE also decays as $\rho\to\infty$ from within $\widetilde{D}_p$. Indeed the transforms that multiply each term in $\eta_j$ ensure that the decay of ratio~\eqref{eqn:P1:WellPosed:WP.Implies.Decay.c} must come from the decay of ratio~\eqref{eqn:P1:WellPosed:WP.Implies.Decay.e}, not from $1/c_k(\rho)$. \subsection{Decay $\Rightarrow$ well-posedness} \label{ssec:P1:WellPosed:Decay.Implies.WP} Many of the definitions of Section~\ref{sec:P1:Implicit} require the problem $\Pi(n,A,a,h,q_0)$ to be well-posed. 
The statement of the following lemma clarifies what is meant by $\eta_j$ when $\Pi$ is not known to be well-posed a priori and the result is the principal tool in the proof of Theorem~\ref{thm:P1:Intro:WellPosed}. \begin{lem} \label{lem:P1:WellPosed:Ass.D.Implies.Admissible} Consider the problem $\Pi(n,A,a,h,q_0)$ with associated PDE characteristic matrix $\mathcal{A}$ whose determinant is $\DeltaP$. Let the polynomials $c_j$ be defined by $c_j(\rho) = -a\rho^n(i\rho)^{-(j+1)}$. Let $U:\mathbb{C}\to\mathbb{C}^n$ be defined by equation~\eqref{eqn:P1:Implicit:lem.DtoNMap:U} and let $\widehat{A}\in\mathbb{R}^{n\times n}$ be defined by equation~\eqref{eqn:P1:Implicit:lem.DtoNMap:RBC.Matrix}. Let $\zeta_j,\eta_j:\mathbb{C}\to\mathbb{C}$ be defined by Notation~\ref{ntn:P1:Implicit:DeltaP}, where $q_T:[0,1]\to\mathbb{C}$ is some function such that $\eta_j$ is entire and the decay condition~\eqref{eqn:P1:Intro:thm.WellPosed:Decay} is satisfied. Let the functions $\widetilde{f}_j,\widetilde{g}_j:\mathbb{C}\to\mathbb{C}$ be defined by equation~\eqref{eqn:P1:Implicit:Zeta.Eta.f.g}. Let $f_j,g_j:[0,T]\to\mathbb{C}$ be the functions for which \BE \label{eqn:P1:WellPosed:lem.Ass.D.Implies.Admissible:Defn.fj} \widetilde{f}_j(\rho) = \int_0^Te^{a\rho^nt}f_j(t)\d t,\quad \widetilde{g}_j(\rho) = \int_0^Te^{a\rho^nt}g_j(t)\d t, \quad \rho\in\mathbb{C}. \EE Then $\{f_j,g_j:j\in\{0,1,\dots,n-1\}\}$ is an admissible set in the sense of Definition~1.3 of~\cite{FP2001a}. 
\end{lem} \begin{proof} By equation~\eqref{eqn:P1:Implicit:Zeta.Eta.f.g} and the definition of the index sets $J^\pm$ in Notation~\ref{ntn:P1:Implicit:DeltaP} we may write equations~(1.13) and~(1.14) of~\cite{FP2001a} as \begin{align} \label{eqn:App:Ass.D.Implies.Admissible.Lem:Alt.F} \widetilde{F}(\rho) &= \sum_{j\in J^+}{\frac{\zeta_j(\rho)-e^{a\rho^nT}\eta_j(\rho)}{\DeltaP(\rho)}}, \\ \label{eqn:App:Ass.D.Implies.Admissible.Lem:Alt.G} \widetilde{G}(\rho) &= \sum_{j\in J^-}{\frac{\zeta_j(\rho)-e^{a\rho^nT}\eta_j(\rho)}{\DeltaP(\rho)}}. \end{align} By Cramer's rule and the calculations in the proof of Lemma~\ref{lem:P1:Implicit:DtoNMap}, equation~(1.17) of~\cite{FP2001a} is satisfied. As $\eta_j$ is entire, $\hat{q}_T$ is entire so, by the standard results on the inverse Fourier transform, $q_T:[0,1]\to\mathbb{C}$, defined by \BES q_T(x) = \frac{1}{2\pi}\int_{\mathbb{R}}e^{i\rho x}\hat{q}_T(\rho)\d \rho, \EES is a $C^\infty$ smooth function. We know $\zeta_j$ is entire by construction and $\eta_j$ is entire by assumption, hence $\widetilde{F}$ and $\widetilde{G}$ are meromorphic on $\mathbb{C}$ and analytic on $\widetilde{D}$. By the definition of $D$ and the decay assumption \BES \frac{e^{a\rho^nT}\eta_j(\rho)}{\DeltaP(\rho)}\to0 \mbox{ as } \rho\to\infty \mbox{ from within } \widetilde{D}. \EES As $\hat{q}_0$ and $\widetilde{h}_j$ are entire, so is $U$. As $\hat{q}_T$ is also entire and the definitions of $\zeta_j$ and $\eta_j$ differ only by which of these functions appears, the ratio $\zeta_j(\rho)/\DeltaP(\rho)\to0$ as $\rho\to\infty$ from within $\widetilde{D}$ also. This establishes that \BES \frac{\zeta_j(\rho)-e^{a\rho^nT}\eta_j(\rho)}{\DeltaP(\rho)}\to0 \mbox{ as } \rho\to\infty \mbox{ from within } \widetilde{D}. \EES Hence, by equations~\eqref{eqn:App:Ass.D.Implies.Admissible.Lem:Alt.F} and~\eqref{eqn:App:Ass.D.Implies.Admissible.Lem:Alt.G}, $\widetilde{F}(\rho),\widetilde{G}(\rho)\to0$ as $\rho\to\infty$ within $\widetilde{D}$. 
An argument similar to that in Example~7.4.6 of~\cite{AF1997a} yields \begin{align*} f_j(t) &= -\frac{i^j}{2\pi}\int_{\partial D}\rho^je^{-a\rho^nt}\widetilde{F}(\rho)\d \rho, \\ g_j(t) &= -\frac{i^j}{2\pi}\int_{\partial D}\rho^je^{-a\rho^nt}\widetilde{G}(\rho)\d \rho. \end{align*} Because $\widetilde{F}(\rho),\widetilde{G}(\rho)\to0$ as $\rho\to\infty$ within $\widetilde{D}$, these definitions guarantee that $f_j$ and $g_j$ are $C^\infty$ smooth. The compatibility of the $f_j$ and $g_j$ with $q_0$ is ensured by the compatibility condition $(\Pi4)$. \end{proof} The desired result is now a restatement of Theorems~1.1 and~1.2 of~\cite{FP2001a}. For this reason we refer the reader to the proof presented in Section~4 of that publication. The only difference is that we make use of Lemma~\ref{lem:P1:WellPosed:Ass.D.Implies.Admissible} in place of Proposition~4.1. \section{Representations of the solution} \label{sec:P1:Reps} The proofs of Theorems~\ref{thm:P1:Intro:Reps.Int} and~\ref{thm:P1:Intro:Reps.Ser} are similar calculations. In Section~\ref{ssec:P1:Reps:Ser} we present the derivation of the series representation and, in Section~\ref{ssec:P1:Reps:Int}, note the way this argument may be adapted to yield the integral representation. We derive the result in the case $n$ odd, $a=i$; the other cases are almost identical. \subsection{Series Representation} \label{ssec:P1:Reps:Ser} As $\Pi$ is well-posed, Theorem~\ref{thm:P1:Intro:Implicit} holds. We split the latter two integrals of equation~\eqref{eqn:P1:Intro:thm.Implicit:q} into parts whose integrands contain the data, that is $\zeta_j$, and parts whose integrands contain the solution evaluated at final time, that is $\eta_j$. 
\begin{multline} \label{eqn:P1:Reps:Ser:q.Implicit} 2\pi q(x,t) = \int_\mathbb{R}e^{i\rho x-i\rho^nt}\hat{q}_0(\rho)\d\rho + \left\{\int_{\partial E^+}-\int_{\mathbb{R}}\right\}e^{i\rho x-i\rho^nt}\sum_{j\in J^+}\frac{\zeta_j(\rho)}{\DeltaP(\rho)}\d\rho \\ + \int_{\partial D^+}e^{i\rho x+i\rho^n(T-t)}\sum_{j\in J^+}\frac{\eta_j(\rho)}{\DeltaP(\rho)}\d\rho + \left\{\int_{\partial E^-}+\int_{\mathbb{R}}\right\}e^{i\rho(x-1)-i\rho^nt}\sum_{j\in J^-}\frac{\zeta_j(\rho)}{\DeltaP(\rho)}\d\rho \\ + \int_{\partial D^-}e^{i\rho(x-1)+i\rho^n(T-t)}\sum_{j\in J^-}\frac{\eta_j(\rho)}{\DeltaP(\rho)}\d\rho. \end{multline} As $\Pi'$ is well-posed, Theorem~\ref{thm:P1:Intro:WellPosed} ensures that the ratios \BES \frac{\eta'_j(\rho)}{\DeltaPprime(\rho)}\to0 \mbox{ as }\rho\to\infty \mbox{ from within } \widetilde{D}', \EES for each $j$. By definition $E=D'$ and, by statement (ii) of Lemma~\ref{lem:P1:Implicit:DtoNMap}, the zeros of $\det\mathcal{A}'$ are precisely the zeros of $\det\mathcal{A}$, hence $\widetilde{E}=\widetilde{D}'$. Define $\xi_j(\rho)$ to be the function obtained by replacing $\hat{q}'_T(\omega^{k-1}\rho)$ with $u(\rho,k)$ in the definition of $\eta'_j(\rho)$. As $q'_T$, $q_0$ and $h_j$ are all smooth functions, $\xi_j$ has precisely the same decay properties as $\eta'_j$. But $\xi_j=\zeta_j$ by definition. Hence the well-posedness of $\Pi'$ is equivalent to \BE \label{eqn:P1:Reps:Ser:zetaj.Decay} \frac{\zeta_j(\rho)}{\DeltaP(\rho)}\to0 \mbox{ as }\rho\to\infty \mbox{ from within } \widetilde{E}, \EE for each $j$. 
The decay property obtained by applying Theorem~\ref{thm:P1:Intro:WellPosed} directly to $\Pi$ together with the decay property~\eqref{eqn:P1:Reps:Ser:zetaj.Decay} permits the use of Jordan's Lemma to deform the contours of integration over $\widetilde{D}^\pm$ and $\widetilde{E}^\pm$ in equation~\eqref{eqn:P1:Reps:Ser:q.Implicit} to obtain \begin{multline} \label{eqn:P1:Reps:Ser:q.Integrals.at.zeros} 2\pi q(x,t) = \int_\mathbb{R}e^{i\rho x-i\rho^nt}\hat{q}_0(\rho)\d\rho + \left\{\int_{\partial (E^+\setminus\widetilde{E}^+)}-\int_{\mathbb{R}}\right\}e^{i\rho x-i\rho^nt}\sum_{j\in J^+}\frac{\zeta_j(\rho)}{\DeltaP(\rho)}\d\rho \\ + \int_{\partial (D^+\setminus\widetilde{D}^+)}e^{i\rho x+i\rho^n(T-t)}\sum_{j\in J^+}\frac{\eta_j(\rho)}{\DeltaP(\rho)}\d\rho \\ + \left\{\int_{\partial (E^-\setminus\widetilde{E}^-)}+\int_{\mathbb{R}}\right\}e^{i\rho(x-1)-i\rho^nt}\sum_{j\in J^-}\frac{\zeta_j(\rho)}{\DeltaP(\rho)}\d\rho \\ + \int_{\partial (D^-\setminus\widetilde{D}^-)}e^{i\rho(x-1)+i\rho^n(T-t)}\sum_{j\in J^-}\frac{\eta_j(\rho)}{\DeltaP(\rho)}\d\rho. \end{multline} Indeed, $\zeta_j$, $\eta_j$ and $\DeltaP$ are entire functions hence the ratios can have poles only at the zeros of $\DeltaP$, neighbourhoods of which are excluded from $\widetilde{D}^\pm$ and $\widetilde{E}^\pm$ by definition. Finally, the exponential functions in the integrands each decay as $\rho\to\infty$ from within the sectors enclosed by their respective contour of integration. The right hand side of equation~\eqref{eqn:P1:Reps:Ser:q.Integrals.at.zeros} is the sum of three integrals over $\mathbb{R}$ and four others. The former may be combined into a single integral using the following lemma, whose proof appears at the end of this section. \begin{lem} \label{lem:P1:Reps:q0} Let $\Pi(n,A,a,q_0,h)$ be well-posed. 
Then \BE \label{eqn:P1:Repd:lem.q0:Statement} \sum_{j\in J^+}\zeta_j(\rho) - e^{-i\rho}\sum_{j\in J^-}\zeta_j(\rho) = \DeltaP(\rho)\left[ \hat{q}_0(\rho) + \left( \frac{1}{\DeltaP(\rho)}-1 \right)H(\rho) \right], \EE where \BES H(\rho) = \sum_{j\in\widehat{J}^+}c_j(\rho)\tilde{h}_{\widehat{J}^+_j}(\rho) - e^{-i\rho}\sum_{j\in\widehat{J}^-}c_j(\rho)\tilde{h}_{\widehat{J}^-_j}(\rho). \EES \end{lem} The other integrals in equation~\eqref{eqn:P1:Reps:Ser:q.Integrals.at.zeros} are around the boundaries of discs and circular sectors centred at each zero of $\DeltaP$. Over the next paragraphs we combine and simplify these integrals to the desired form. Consider $\sigma\in D^+$ such that $\DeltaP(\sigma)=0$. Then the fourth integral on the right hand side of equation~\eqref{eqn:P1:Reps:Ser:q.Integrals.at.zeros} includes \BES \int_{C(\sigma,\varepsilon)}e^{i\rho x+i\rho^n(T-t)}\sum_{j\in J^+}\frac{\eta_j(\rho)}{\DeltaP(\rho)}\d\rho = \int_{C(\sigma,\varepsilon)}\frac{e^{i\rho x-i\rho^nt}}{\DeltaP(\rho)}\sum_{j\in J^+}{\zeta_j(\rho)}\d\rho, \EES the equality being justified by the following lemma, whose proof appears at the end of the section. \begin{lem} \label{lem:P1:Reps:zeta.eta.relation} Let $\Pi(n,A,a,q_0,h)$ be well-posed. Then the functions \BE\label{eqn:P1:Reps:lem.zeta.eta.relation:entire.function} \sum_{j\in J^+}\frac{\zeta_j(\rho) - e^{a\rho^nT}\eta_j(\rho)}{\DeltaP(\rho)},\qquad \sum_{j\in J^-}\frac{\zeta_j(\rho) - e^{a\rho^nT}\eta_j(\rho)}{\DeltaP(\rho)} \EE are entire. \end{lem} Consider $\sigma\in(\partial D)\cap\mathbb{C}^+$ such that $\DeltaP(\sigma)=0$. Define $\Gamma^D=\partial(B(\sigma,\varepsilon)\cap D)$ and $\Gamma^E=\partial(B(\sigma,\varepsilon)\cap E)$.
Then the second and fourth integrals on the right hand side of equation~\eqref{eqn:P1:Reps:Ser:q.Integrals.at.zeros} include \begin{gather*} \int_{\Gamma^E}\frac{e^{i\rho x-i\rho^nt}}{\DeltaP(\rho)}\sum_{j\in J^+}{\zeta_j(\rho)}\d\rho \quad \mbox{and} \\ \int_{\Gamma^D}e^{i\rho x+i\rho^n(T-t)}\sum_{j\in J^+}\frac{\eta_j(\rho)}{\DeltaP(\rho)}\d\rho = \int_{\Gamma^D}\frac{e^{i\rho x-i\rho^nt}}{\DeltaP(\rho)}\sum_{j\in J^+}{\zeta_j(\rho)}\d\rho, \end{gather*} respectively, by Lemma~\ref{lem:P1:Reps:zeta.eta.relation}. The sum of the above expressions is \BES \int_{C(\sigma,\varepsilon)}\frac{e^{i\rho x-i\rho^nt}}{\DeltaP(\rho)}\sum_{j\in J^+}{\zeta_j(\rho)}\d\rho. \EES Consider $0\neq\sigma\in\mathbb{R}$ such that $\DeltaP(\sigma)=0$. Define $\Gamma^D=\partial(B(\sigma,\varepsilon)\cap D)$ and $\Gamma^E=\partial(B(\sigma,\varepsilon)\cap E)$. Then the fourth and fifth integrals on the right hand side of equation~\eqref{eqn:P1:Reps:Ser:q.Integrals.at.zeros} include \begin{gather*} \int_{\Gamma^D}e^{i\rho x+i\rho^n(T-t)}\sum_{j\in J^+}\frac{\eta_j(\rho)}{\DeltaP(\rho)}\d\rho = \int_{\Gamma^D}\frac{e^{i\rho x-i\rho^nt}}{\DeltaP(\rho)}\sum_{j\in J^+}{\zeta_j(\rho)}\d\rho \quad \mbox{and} \\ \int_{\Gamma^E}\frac{e^{i\rho (x-1)-i\rho^nt}}{\DeltaP(\rho)}\sum_{j\in J^-}{\zeta_j(\rho)}\d\rho, \end{gather*} respectively, by analyticity and Lemma~\ref{lem:P1:Reps:zeta.eta.relation}. The sum of the above expressions is \BES \int_{C(\sigma,\varepsilon)}\frac{e^{i\rho x-i\rho^nt}}{\DeltaP(\rho)}\sum_{j\in J^+}{\zeta_j(\rho)}\d\rho - \int_{\Gamma^E}\frac{e^{i\rho x-i\rho^nt}}{\DeltaP(\rho)}\left(\sum_{j\in J^+}{\zeta_j(\rho)} - e^{-i\rho}\sum_{j\in J^-}{\zeta_j(\rho)} \right)\d\rho. \EES Similar calculations may be performed for $\sigma\in E^-, D^-, (\partial D)\cap\mathbb{C}^-,\{0\}$. Define the index set $K^{\mathbb{R}}\subset\mathbb{N}$ by $k\in K^{\mathbb{R}}$ if and only if $\sigma_k\in\mathbb{R}$.
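If one assumes in addition that the relevant zero $\sigma$ of $\DeltaP$ is simple (an assumption made here purely for illustration; the argument in the text does not require it), the full-circle integrals obtained above evaluate explicitly by the residue theorem:

```latex
% Illustrative only: assumes \sigma is a *simple* zero of \DeltaP, so the
% integrand has at most a simple pole inside C(\sigma,\varepsilon).
\BES
\int_{C(\sigma,\varepsilon)}\frac{e^{i\rho x-i\rho^nt}}{\DeltaP(\rho)}
\sum_{j\in J^+}\zeta_j(\rho)\,\d\rho
= 2\pi i\,\frac{e^{i\sigma x-i\sigma^nt}}{\frac{\d}{\d\rho}\DeltaP(\sigma)}
\sum_{j\in J^+}\zeta_j(\sigma).
\EES
```

This is the form taken by the terms of the series representation; a zero of higher order contributes the corresponding higher-order residue instead.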
For each $k\in K^\mathbb{R}$ define $\Gamma_k=\partial(B(\sigma_k,\varepsilon)\cap\mathbb{C}^-)$. Then, substituting the calculations above and applying Lemma~\ref{lem:P1:Reps:q0}, equation~\eqref{eqn:P1:Reps:Ser:q.Integrals.at.zeros} yields \begin{multline*} 2\pi q(x,t) = \sum_{k\in K^+}\int_{C(\sigma_k,\varepsilon)}\frac{e^{i\rho x-i\rho^nt}}{\DeltaP(\rho)} \sum_{j\in J^+} \zeta_j(\rho)\d\rho \\ + \sum_{k\in K^-}\int_{C(\sigma_k,\varepsilon)}\frac{e^{i\rho(x-1)-i\rho^nt}}{\DeltaP(\rho)} \sum_{j\in J^-} \zeta_j(\rho)\d\rho \\ - \left\{\sum_{k\in K^\mathbb{R}} \int_{\Gamma_k} + \int_{\mathbb{R}} \right\} e^{i\rho x-i\rho^nt} \left( \frac{1}{\DeltaP(\rho)}-1 \right)H(\rho) \d\rho. \end{multline*} A residue calculation at each $\sigma_k$ completes the proof. \subsection{Integral Representation} \label{ssec:P1:Reps:Int} As $\Pi$ is well-posed, equation~\eqref{eqn:P1:Reps:Ser:q.Implicit} holds but, as $\Pi(n,A,-a,h,q_0)$ may not be well-posed, it is not possible to use Jordan's Lemma to deform the second and fifth integrals on the right hand side over $\widetilde{E}$. However it is still possible to deform the fourth and seventh integrals over $\widetilde{D}$. Hence two additional terms appear in equation~\eqref{eqn:P1:Reps:Ser:q.Integrals.at.zeros}, \BES \int_{\partial\widetilde{E}^+}e^{i\rho x-i\rho^nt}\sum_{j\in J^+}\frac{\zeta_j(\rho)}{\DeltaP(\rho)}\d\rho + \int_{\partial\widetilde{E}^-}e^{i\rho(x-1)-i\rho^nt}\sum_{j\in J^-}\frac{\zeta_j(\rho)}{\DeltaP(\rho)}\d\rho. \EES The remainder of the derivation is unchanged from that presented in Section~\ref{ssec:P1:Reps:Ser}. \subsection{Proofs of technical lemmata} \begin{proof}[Proof of Lemma~\textup{\ref{lem:P1:Reps:q0}.}] We expand the left hand side of equation~\eqref{eqn:P1:Repd:lem.q0:Statement} in terms of $u(\rho,l)$ and rearrange the result.
To this end we define the matrix-valued function $X^{l\hspace{0.5mm}j}:\mathbb{C}\to\mathbb{C}^{(n-1)\times(n-1)}$ to be the $(n-1)\times(n-1)$ submatrix of \BES \BP\mathcal{A}&\mathcal{A}\\\mathcal{A}&\mathcal{A}\end{pmatrix} \EES whose $(1,1)$ entry is the $(l+1,r+j)$ entry. Then \BE \label{eqn:P1:Repd:lem.q0:zetaj} \widehat{\zeta}_j(\rho)=\sum_{l=1}^n u(\rho,l)\det X^{l\hspace{0.5mm}j}(\rho). \EE By Notation~\ref{ntn:P1:Implicit:DeltaP} and equation~\eqref{eqn:P1:Repd:lem.q0:zetaj}, the left hand side of equation~\eqref{eqn:P1:Repd:lem.q0:Statement} is equal to \begin{multline*} \sum_{l=1}^n u(\rho,l)\left[\left( \sum_{j:J_j\text{ odd}} c_{(J_j-1)/2}(\rho)\det X^{l\hspace{0.5mm}j} - \sum_{j:J'_j\text{ odd}} c_{(J'_j-1)/2}(\rho)\sum_{k=1}^n\M{\widehat{A}}{j}{k}\det X^{l\hspace{0.5mm}j} \right)\right. \\ \left.-e^{-i\rho}\left( \sum_{j:J_j\text{ even}} c_{J_j/2}(\rho)\det X^{l\hspace{0.5mm}j} - \sum_{j:J'_j\text{ even}} c_{J'_j/2}(\rho)\sum_{k=1}^n\M{\widehat{A}}{j}{k}\det X^{l\hspace{0.5mm}j} \right)\right] + H(\rho). \end{multline*} Splitting the sums over $k$ into $k:J_k$ is odd and $k:J_k$ is even and rearranging inside the parentheses, we evaluate the square bracket to \BE \label{eqn:P1:Repd:lem.q0:Square.Bracket} \begin{split} &\left[\sum_{j:J_j\text{ odd}} \left( c_{(J_j-1)/2}(\rho) - \sum_{\mathclap{k:J'_k\text{ odd}}} c_{(J'_k-1)/2}(\rho)\M{\widehat{A}}{k}{j} + e^{-i\rho}\sum_{\mathclap{k:J'_k\text{ even}}} c_{J'_k/2}(\rho)\M{\widehat{A}}{k}{j} \right)\det X^{l\hspace{0.5mm}j}\right. 
\\ &\hspace{3mm} \left.+ \sum_{j:J_j\text{ even}} \left( -c_{J_j/2}(\rho)e^{-i\rho} - \sum_{\mathclap{k:J'_k\text{ odd}}} c_{(J'_k-1)/2}(\rho)\M{\widehat{A}}{k}{j} + e^{-i\rho}\sum_{\mathclap{k:J'_k\text{ even}}} c_{J'_k/2}(\rho)\M{\widehat{A}}{k}{j} \right)\det X^{l\hspace{0.5mm}j}\right]\hspace{-1mm}.\end{split} \EE Making the change of variables $k\mapsto r$ defined by \begin{gather*} J'_k \mbox{ is odd if and only if } \widehat{J}^+\ni r = (J'_k-1)/2 \mbox{, in which case } k=\widehat{J}^+_r, \\ J'_k \mbox{ is even if and only if } \widehat{J}^-\ni r = J'_k/2 \mbox{, in which case } k=\widehat{J}^-_r, \end{gather*} it is clear that each of the parentheses in expression~\eqref{eqn:P1:Repd:lem.q0:Square.Bracket} evaluates to $\M{\mathcal{A}}{1}{j}$. Hence \BES \sum_{j\in J^+}\zeta_j(\rho) - e^{-i\rho}\sum_{j\in J^-}\zeta_j(\rho) = \sum_{l=1}^n u(\rho,l) \sum_{j=1}^n \M{\mathcal{A}}{1}{j}(\rho)\det X^{l\hspace{0.5mm}j}(\rho) + H(\rho). \qedhere \EES \end{proof} \begin{proof}[Proof of Lemma~\textup{\ref{lem:P1:Reps:zeta.eta.relation}.}] The $t$-transforms of the boundary functions are entire, as are the monomials $c_j$, hence the sum of products of a $t$-transform and monomials $c_j$ is also entire. By equation~\eqref{eqn:P1:Implicit:Zeta.Eta.f.g} this establishes that expressions~\eqref{eqn:P1:Reps:lem.zeta.eta.relation:entire.function} are entire functions of $\rho$. \end{proof} \section{Alternative characterisations} \label{sec:P1:Alt} In this section we discuss sufficient conditions for well-posedness of initial-boundary value problems and present a proof of Theorem~\ref{thm:P1:Intro:Even.Can.Deform.If.Well-posed}. These topics are unified by the arguments and notation used. \subsection{Sufficient conditions for well-posedness} \label{ssec:P1:Alt:Non-Robin} Throughout Section~\ref{ssec:P1:Alt:Non-Robin} we assume the boundary conditions are non-Robin. 
This simplifies the PDE characteristic matrix greatly, leading to corresponding simplifications in the arguments presented below. Nevertheless, we identify surprising counterexamples to the qualitative hypothesis `highly coupled boundary conditions lead to well-posed problems whose solutions may be expressed using series.' We give the condition whose effects are of interest. \begin{cond} \label{cond:P1:Alt:Non-Robin:First.Condition} For $A$, a boundary coefficient matrix specifying non-Robin boundary conditions, we define $C=|\{j:\M{\alpha}{k}{j},\M{\beta}{k}{j}\neq0$ for some $k\}|$, the number of boundary conditions that couple the ends of the space interval, and $R=|\{j:\M{\beta}{k}{j}=0$ for all $k\}|$, the number of right-handed boundary functions, whose corresponding column in $A$ is $0$. Let $a=\pm i$ and let $A$ be such that \BES R \leq \left\{ \begin{matrix} \frac{n}{2} & \mbox{if } n \mbox{ is even and } a=\pm i \\ \frac{n+1}{2} & \mbox{if } n \mbox{ is odd and } a=i \\ \frac{n-1}{2} & \mbox{if } n \mbox{ is odd and } a=-i \end{matrix} \right\} \leq R+C. \EES \end{cond} We investigate the effect of Condition~\ref{cond:P1:Alt:Non-Robin:First.Condition} upon the behaviour of the ratio \BE \label{eqn:P1:Alt:Non-Robin:ratio} \frac{\eta_m(\rho)}{\DeltaP(\rho)} \EE in the limit $\rho\to\infty$ from within $\widetilde{D}$. The PDE characteristic determinant is an exponential polynomial, a sum of terms of the form \BES Z(\rho)e^{-i\rho\sum_{y\in Y}\omega^{y}} \EES where $Z$ is some monomial and $Y\subset\{0,1,\dots,n-1\}$. As the problem may be ill-posed $\eta_m$ is defined as in Lemma~\ref{lem:P1:WellPosed:Ass.D.Implies.Admissible}, a sum of terms of the form \BE \label{eqn:P1:Alt:Non-Robin:Term.of.etam} X(\rho)e^{-i\rho\sum_{y\in Y}\omega^{y}}\int_0^1e^{-i\rho x\omega^{z}}q_T(x)\d x \EE where $X$ is some monomial, $q_T\in C^\infty[0,1]$, $Y\subset\{0,1,\dots,n-1\}$ and $z\in\{0,1,\dots,n-1\}\setminus Y$.
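Which term of the exponential polynomial dominates is a finite subset-selection problem: $|e^{-i\rho\sum_{y\in Y}\omega^y}| = e^{\Im(\rho\sum_{y\in Y}\omega^y)}$, so the dominant index set collects exactly those $y$ with $\Im(\rho\,\omega^y)>0$, and for $\rho$ in a fixed sector these $y$ form a contiguous run. The following brute-force sketch (illustrative only; the sample $\rho$ is merely assumed, not verified, to lie in the first sector $\widetilde{D}_1$) checks this for $n=4$:

```python
import cmath
from itertools import combinations

def dominant_index_set(n, rho):
    """Brute-force the subset Y of {0,...,n-1} maximising
    |exp(-1j * rho * sum(omega**y for y in Y))|, i.e. maximising
    Im(rho * sum(omega**y for y in Y))."""
    omega = cmath.exp(2j * cmath.pi / n)
    best, best_val = frozenset(), 0.0
    for r in range(1, n + 1):
        for Y in combinations(range(n), r):
            val = (rho * sum(omega ** y for y in Y)).imag
            if val > best_val:
                best, best_val = frozenset(Y), val
    return best

# Sample point, assumed (not proved here) to lie in the sector for j = 1, n = 4.
rho = cmath.exp(1j * cmath.pi / 8)
print(sorted(dominant_index_set(4, rho)))  # [0, 1]
```

For $\rho=e^{i\pi/8}$ the maximiser returned is $\{0,1\}$, the contiguous set $\{j-1,\dots,j-2+\tfrac{n}{2}\}$ with $j=1$, in agreement with the case $n$ even below.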
Fix $j\in\{1,2,\dots,n\}$ and let $\rho\in \widetilde{D}_j$. Then the modulus of \BE \label{eqn:P1:Alt:Non-Robin:ExponentialY} e^{-i\sum_{y\in Y}\omega^y\rho} \EE is uniquely maximised for the index set \BES Y=\begin{cases} \{j-1,j,\dots,j-2+\tfrac{n}{2}\} & n \mbox{ even,} \\ \{j-1,j,\dots,j-2+\tfrac{1}{2}(n+\Im(a))\} & n \mbox{ odd.} \end{cases} \EES By Condition~\ref{cond:P1:Alt:Non-Robin:First.Condition}, $\DeltaP(\rho)$ has a term given by that exponential multiplied by some monomial coefficient, $Z_j(\rho)$. That term dominates all other terms in $\DeltaP(\rho)$, and it also dominates all terms in $\eta_m(\rho)$. Hence the ratio~\eqref{eqn:P1:Alt:Non-Robin:ratio} is bounded in $\widetilde{D}_j$ for each $j\in\{1,2,\dots,n\}$ and decays as $\rho\to\infty$ from within $\widetilde{D}_j$. If it were possible to guarantee that $Z_j\neq0$ then it would be proven that Condition~\ref{cond:P1:Alt:Non-Robin:First.Condition} is sufficient for well-posedness. Unfortunately this is not the case, as the following example shows. \begin{eg} \label{eg:P1:Alt:Non-Robin:3Pseudo} Let \BES A=\BP1&-1&0&0&0&0\\0&0&1&-1&0&0\\0&0&0&0&1&2\EP \EES and consider the problem $\Pi(3,A,i,0,q_0)$. Then \BES \widetilde{D}_1\subseteq\left\{\rho\in\mathbb{C}:0<\arg\rho<\frac{\pi}{3}\right\} \EES and \BES \mathcal{A}(\rho) = \BP -c_2(\rho)(e^{-i\rho}-1) & -c_1(\rho)(e^{-i\rho}-1) & -c_0(\rho)(e^{-i\rho}+2) \\ -c_2(\rho)(e^{-i\omega\rho}-1) & -\omega c_1(\rho)(e^{-i\omega\rho}-1) & -\omega^2 c_0(\rho)(e^{-i\omega\rho}+2) \\ -c_2(\rho)(e^{-i\omega^2\rho}-1) & -\omega^2 c_1(\rho)(e^{-i\omega^2\rho}-1) & -\omega c_0(\rho)(e^{-i\omega^2\rho}+2) \EP. \EES We calculate \begin{align*} \DeltaP(\rho) &= (\omega-\omega^2)c_2(\rho)c_1(\rho)c_0(\rho)\left[9+(2-2)(e^{i\rho}+e^{i\omega\rho}+e^{i\omega^2\rho})\right.
\\ &\hspace{60mm} \left.+ (1-4)(e^{-i\rho}+e^{-i\omega\rho}+e^{-i\omega^2\rho})\right], \\ \intertext{in this case, as $\M{\beta}{1}{2} + \M{\beta}{2}{1} + \M{\beta}{3}{0}=0$, the coefficients of $e^{i\omega^j\rho}$ cancel for each $j$, } &= 3(\omega-\omega^2)c_2(\rho)c_1(\rho)c_0(\rho)\left[3-(e^{-i\rho}+e^{-i\omega\rho}+e^{-i\omega^2\rho})\right], \\ \eta_3(\rho) &= (\omega^2-\omega)c_2(\rho)c_1(\rho)c_0(\rho)\sum_{j=0}^2{\omega^j\hat{q}_T(\omega^j\rho)(e^{i\omega^j\rho} - e^{-i\omega^{j+1}\rho} - e^{-i\omega^{j+2}\rho} + 1)}. \end{align*} Fix $\delta>0$. Consider a sequence $(\rho_j)_{j\in\mathbb{N}}$ defined by $\rho_j=R_je^{i\pi/12}$, where $(R_j)_{j\in\mathbb{N}}$ is a strictly increasing sequence of positive real numbers with limit $\infty$, chosen such that $\rho_j\in\widetilde{D}_1$ and \BE \label{eqn:P1:Alt:Non-Robin:3Pseudo.eg:Bounds.On.Rj} R_j\not\in\bigcup_{m=0}^\infty{\left(\left(1-\tfrac{\sqrt{3}}{3}\right)\sqrt{2}\pi m - \delta , \left(1-\tfrac{\sqrt{3}}{3}\right)\sqrt{2}\pi m + \delta\right)}. \EE The ratio $\eta_3(\rho_j)/\DeltaP(\rho_j)$ evaluates to \BES \frac{ -\hat{q}_T(\rho_j) - \omega\hat{q}_T(\omega\rho_j)e^{-i(1-\omega)\rho_j} + \omega^2\hat{q}_T(\omega^2\rho_j)e^{i(\omega^2+\omega)\rho_j} + O(1)}{3(e^{-i(1-\omega)\rho_j} + 1) + O(e^{-R_j(\sqrt{3}-1)/2\sqrt{2}})}. \EES The denominator is $O(1)$ but, by condition~\eqref{eqn:P1:Alt:Non-Robin:3Pseudo.eg:Bounds.On.Rj}, is bounded away from $0$. The terms in the numerator all approach infinity at different rates, depending upon $\hat{q}_T$. Hence the ratio is unbounded and, by Theorem~\ref{thm:P1:Intro:WellPosed}, the problem is ill-posed. \end{eg} Indeed, third order initial-boundary value problems with pseudo-periodic boundary conditions are ill-posed if and only if \begin{alignat*}{4} a &= i & &\mbox{ and } & \M{\beta}{1}{2} + \M{\beta}{2}{1} + \M{\beta}{3}{0} &= 0 & &\mbox{ or} \\ a &= -i & &\mbox{ and } & \frac{1}{\M{\beta}{1}{2}} + \frac{1}{\M{\beta}{2}{1}} + \frac{1}{\M{\beta}{3}{0}} &= 0.
& & \end{alignat*} A combinatorial necessary and sufficient condition for $Z_j\neq0$ in odd order problems is presented as Condition~3.22 of~\cite{Smi2011a} but is omitted here due to its technicality; however, we do improve upon that condition (see Remark~\ref{rmk:P1:Spectrum:Symmetry.In.Coefficients}). No further third order examples are known that obey Condition~\ref{cond:P1:Alt:Non-Robin:First.Condition} but are ill-posed. Condition~3.22 of~\cite{Smi2011a} may be adapted to even problems by setting $k=n/2-R$. The pseudo-periodic problems of second and fourth order are ill-posed if and only if \begin{alignat*}{3} n &= 2 & &\mbox{ and } & 0 &= \M{\beta}{1}{1} + \M{\beta}{2}{0}, \\ n &= 4 & &\mbox{ and } & 0 &= \M{\beta}{1}{3}\M{\beta}{2}{2} + \M{\beta}{2}{2}\M{\beta}{3}{1} + \M{\beta}{3}{1}\M{\beta}{4}{0} + \M{\beta}{4}{0}\M{\beta}{1}{3} + 2(\M{\beta}{1}{3}\M{\beta}{3}{1} + \M{\beta}{2}{2}\M{\beta}{4}{0}). \end{alignat*} For example, the problem $\Pi(4,A,\pm i,h,q_0)$ with boundary coefficient matrix \BES A=\BP1&1&0&0&0&0&0&0\\0&0&1&-1&0&0&0&0\\0&0&0&0&1&1&0&0\\0&0&0&0&0&0&1&-1\EP \EES is ill-posed. \begin{rmk} \label{rmk:P1:Alt:Well-Posed.No.Series} The essential difference between the odd and even cases presented above is that for odd order problems the well-posedness criteria depend upon the direction coefficient whereas for even order problems they do not. This means it is possible to construct examples of odd order problems that are well-posed but whose solutions cannot be represented by a series using Theorem~\ref{thm:P1:Intro:Reps.Ser}. Indeed the problem $\Pi(3,A,i,h,q_0)$, with boundary coefficient matrix given by \BES A=\BP1&-1&0&0&0&0\\0&0&1&-1&0&0\\0&0&0&0&1&\frac{1}{2}\EP, \EES is well-posed but is ill-posed in the opposite direction. This is the issue mentioned in Remark~3.3 of~\cite{FP2000a}. \end{rmk} \begin{rmk} \label{rmk:P1:Alt:Simple} There are classes of examples for which $Z_j\neq0$ is guaranteed.
Indeed, Condition~\ref{cond:P1:Alt:Non-Robin:First.Condition} is precisely the necessary and sufficient condition for well-posedness of problems with simple boundary conditions proved in~\cite{Pel2004a}. \end{rmk} \begin{rmk} \label{rmk:P1:Alt:Deformation.In.Upper.Half-Plane.Only} There exist problems $\Pi$ for which $\Pi'$ is ill-posed but for which \BES \frac{\zeta_j(\rho)}{\DeltaP(\rho)}\to0 \mbox{ as } \rho\to\infty \mbox{ from within } \widetilde{E}^+, \EES for all $j\in J^+$ or from within $\widetilde{E}^-$ for all $j\in J^-$. This is a property of the $\zeta_j$, dependent upon which column of $\mathcal{A}$ is replaced with the transformed data, not of the sectors in which the decay or blow-up occurs. In this case it is possible to deform contours over the corresponding $\widetilde{E}^\pm$ hence one of the terms \BES \int_{\partial\widetilde{E}^+}e^{i\rho x-i\rho^nt}\sum_{j\in J^+}\frac{\zeta_j(\rho)}{\DeltaP(\rho)}\d\rho, \qquad \int_{\partial\widetilde{E}^-}e^{i\rho(x-1)-i\rho^nt}\sum_{j\in J^-}\frac{\zeta_j(\rho)}{\DeltaP(\rho)}\d\rho \EES evaluates to zero in equation~\eqref{eqn:P1:Intro:thm.Reps.Int:q} but the other does not. \end{rmk} \begin{rmk} \label{rmk:P1:Alt:Conjecture.Conditions.Necessary} It is a conjecture that Condition~\ref{cond:P1:Alt:Non-Robin:First.Condition} together with Condition~3.22 of~\cite{Smi2011a} (as modified above to include $n$ even) are necessary as well as sufficient for well-posedness of problems with non-Robin boundary conditions. Any counterexample must satisfy several strong symmetry conditions that appear to be mutually exclusive. Indeed for a problem, which fails Condition~\ref{cond:P1:Alt:Non-Robin:First.Condition} or which satisfies Condition~\ref{cond:P1:Alt:Non-Robin:First.Condition} but for which $Z_j=0$, to be well-posed several monomial coefficients $X$ from equation~\eqref{eqn:P1:Alt:Non-Robin:Term.of.etam} must be identically zero. 
\end{rmk} \begin{rmk} \label{rmk:P1:Alt:Condition.Robin} We give a condition equivalent to Condition~\ref{cond:P1:Alt:Non-Robin:First.Condition} for Robin type boundary conditions. Indeed, we define $B_1=|\{j\in\widetilde{J}^-:\hsexists k,r$ for which $\M{\beta}{k}{j}\neq0$ and $\M{\alpha}{k}{r}$ is a pivot$\}|$, $B_2=|\widetilde{J}^-|$ and $B_3=|\{j\in\widetilde{J}^+:\hsexists k,r$ for which $\M{\alpha}{k}{j}\neq0$ and $\M{\beta}{k}{r}$ is a pivot$\}|$. Then the condition is \BES B_2-B_1 \leq \left\{ \begin{matrix} \frac{n}{2} & \mbox{if } n \mbox{ is even and } a=\pm i \\ \frac{n+1}{2} & \mbox{if } n \mbox{ is odd and } a=i \\ \frac{n-1}{2} & \mbox{if } n \mbox{ is odd and } a=-i \end{matrix} \right\} \leq B_2+B_3. \EES \end{rmk} \subsection{Series representations for $n$ even} \label{ssec:P1:Alt:Series.If.Even} \begin{proof}[Proof of Theorem~\textup{\ref{thm:P1:Intro:Even.Can.Deform.If.Well-posed}.}] By Theorem~\ref{thm:P1:Intro:WellPosed}, the well-posedness of $\Pi(n,A,i,h,q_0)$, and the arguments of Section~\ref{ssec:P1:Alt:Non-Robin}, for each $j\in\{1,2,\dots,n\}$ there exists some $\Ymax\subset\{0,1,\dots,n-1\}$ such that \begin{enumerate} \item{The term \BES Z_{\Ymax}(\rho)e^{-i\rho\sum_{y\in\Ymax}\omega^y} \EES appears in $\DeltaP$ with $Z_{\Ymax}\neq0$ a polynomial, and \BES \DeltaP(\rho)=O\left(|Z_{\Ymax}(\rho)|e^{\Im\left(\rho\sum_{y\in\Ymax}\omega^y\right)}\right) \mbox{ as } \rho\to\infty \mbox{ from within } \widetilde{D}_j. \EES} \item{For all $Y\subset\{0,1,\dots,n-1\}$ and $z\in\{0,1,\dots,n-1\}\setminus Y$ for which \BES \M{X}{Y}{z}(\rho)e^{-i\rho\sum_{y\in Y}\omega^y}\hat{q}_T(\omega^z\rho) \EES is a term in $\eta_k$ for some $k$, with $\M{X}{Y}{z}\neq0$ a polynomial, we have \BES \M{X}{Y}{z}(\rho)e^{-i\rho\sum_{y\in Y}\omega^y}\hat{q}_T(\omega^z\rho) = o\left(|Z_{\Ymax}(\rho)|e^{\Im\left(\rho\sum_{y\in\Ymax}\omega^y\right)}\right) \EES as $\rho\to\infty$ from within $\widetilde{D}_j$.} \end{enumerate} Hence, for all such $Y$, $z$, \BE
\label{eqn:P1:Alt:Series.If.Even:Well-posed.Condition} \Im\left[e^{i\phi}\left(\sum_{y\in Y}\omega^y+x\omega^z-\sum_{y\in\Ymax}\omega^y\right)\right] < 0 \EE for all $x\in(0,1)$ and for all $\phi\in(0,\pi/n)$. If $\Pi(n,A,-i,h,q_0)$ is ill-posed then there exist $Y$, $z$ satisfying the conditions above, $x\in(0,1)$ and $\phi\in(\pi/n,2\pi/n)$ such that \BES N = \Im\left[e^{i\phi}\left(\sum_{y\in Y}\omega^y+x\omega^z-\sum_{y\in\Ymax}\omega^y\right)\right] > 0. \EES Define $\overline{Y}_{\mathrm{max}}=\{n-y:y\in\Ymax\}$ and \BES \overline{Y} = \begin{cases}Y\cup\{z\} & \Im(e^{i\phi}\omega^z)\geq0, \\ Y & \Im(e^{i\phi}\omega^z)<0.\end{cases} \EES Then, as $n$ is even, \BES \Im\left[e^{i\phi}\left(\sum_{y\in \overline{Y}}\omega^y+\sum_{y\in\overline{Y}_{\mathrm{max}}}\omega^y\right)\right] \geq N > 0, \EES hence there exists some $\bar{x}\in(0,1)$ such that \BES \Im\left[e^{i(\phi-\frac{\pi}{n})}\left(\sum_{y\in Y}\omega^y+\bar{x}\omega^z-\sum_{y\in\Ymax}\omega^y\right)\right] > 0, \EES which contradicts inequality~\eqref{eqn:P1:Alt:Series.If.Even:Well-posed.Condition}. The argument is identical in the other direction, switching the intervals in which $\phi$ lies. \end{proof} \section{PDE discrete spectrum} \label{sec:P1:Spectrum} In this section we investigate the PDE discrete spectrum, the set of zeros of an exponential polynomial. We use the definitions, results and arguments presented in~\cite{Lan1931a}. \begin{lem} \label{lem:P1:Spectrum:Properties} The PDE characteristic determinant and PDE discrete spectrum have the following properties: \begin{enumerate} \item{$\DeltaP(\rho)=(-1)^{n-1}\DeltaP(\omega\rho)$.} \item{Let $Y\subset\{0,1,\dots,n-1\}$, $Y'=\{y+1\mod n:y\in Y\}$. Let $Z_Y$ and $Z_{Y'}$ be the polynomial coefficients of $\exp{(-i\rho\sum_{y\in Y}\omega^y)}$ and $\exp{(-i\rho\sum_{y\in Y'}\omega^y)}$, respectively, in $\DeltaP$.
Then $Z_Y(\rho) = (-1)^{n-1}Z_{Y'}(\omega\rho)$.} \item{Either $\DeltaP$ is a polynomial or the PDE discrete spectrum is asymptotically distributed in finite-width semi-strips each parallel to the outward normal to a side of a polygon with order of rotational symmetry a multiple of $n$. Further, the radial distribution of the zeros within each strip is asymptotically inversely proportional to the length of the corresponding side.} \end{enumerate} \end{lem} \begin{proof} (i) The identity \BES \M{\mathcal{A}}{k}{j}(\omega\rho) = \M{\mathcal{A}}{k+1}{j}(\rho) \EES follows directly from the definition~\eqref{eqn:P1:Intro:PDE.Characteristic.Matrix} of the PDE characteristic matrix. A composition with the cyclic permutation of order $n$ in the definition of the determinant yields the result. (ii) By definition there exists a collection of index sets $\mathcal{Y}\subset\mathcal{P}\{0,1,\dots,n-1\}$ and polynomial coefficients $Z_Y(\rho)$ such that \BES \DeltaP(\rho) = \sum_{Y\in\mathcal{Y}}Z_Y(\rho)e^{-i\rho\sum_{y\in Y}\omega^y}. \EES By part (i), \BES \sum_{Y\in\mathcal{Y}}Z_Y(\rho)e^{-i\rho\sum_{y\in Y}\omega^y} = (-1)^{n-1}\sum_{Y\in\mathcal{Y}}Z_Y(\omega\rho)e^{-i\rho\sum_{y\in Y}\omega^{y+1}}. \EES Define the collection $\mathcal{Y}'=\{\{y+1\mod n:y\in Y\}:Y\in\mathcal{Y}\}$. Then \BES \sum_{Y\in\mathcal{Y}}Z_Y(\rho)e^{-i\rho\sum_{y\in Y}\omega^y} = (-1)^{n-1}\sum_{Y'\in\mathcal{Y}'}Z_{Y'}(\rho)e^{-i\rho\sum_{y\in Y'}\omega^y}. \EES Equating coefficients of $\exp{(-i\rho\sum_{y\in Y}\omega^y)}$ yields $\mathcal{Y}=\mathcal{Y}'$ and the result follows. (iii) The result follows from part (ii) and Theorem~8 of~\cite{Lan1931a}. \end{proof} An immediate corollary of Lemma~\ref{lem:P1:Spectrum:Properties} is that the PDE discrete spectrum has no finite accumulation point and that its points are separated by some $\varepsilon>0$. \begin{rmk} \label{rmk:P1:Spectrum:Symmetry.In.Coefficients} A corollary of (ii) is that $Z_Y=0$ if and only if $Z_{Y'}=0$.
This means it is only necessary to check $Z_j\neq0$ for a particular $j$ in conjunction with Condition~\ref{cond:P1:Alt:Non-Robin:First.Condition} to ensure well-posedness. This permits a simplification of the general Condition~3.22 of~\cite{Smi2011a}. \end{rmk} It is possible to strengthen part (iii) of Lemma~\ref{lem:P1:Spectrum:Properties} in certain cases. \begin{thm} \label{thm:P1:Spectrum:Odd.Rays} Let $n\geq3$ be odd and let $A$ be such that $\DeltaP$ is not a polynomial. If $n\geq 7$ we additionally require that Condition~\ref{cond:P1:Alt:Non-Robin:First.Condition} holds and the relevant coefficients, $Z_j$, are all nonzero. Then the PDE discrete spectrum must lie asymptotically on rays instead of semi-strips. \end{thm} \begin{proof} Assume $n\geq7$ and the additional conditions hold. If \BES Y=\left\{1,2,\dots,\frac{n-1}{2}\right\}\in\mathcal{Y} \EES then, by part (ii) of Lemma~\ref{lem:P1:Spectrum:Properties}, $\{0,1,\dots,(n-3)/2\}\in\mathcal{Y}$, hence the indicator diagram of $\DeltaP$ contains the convex hull of \BES S=\left\{\overline{\omega^r\sum_{y\in Y}\omega^y}:r\in\{0,1,\dots,n-1\}\right\}. \EES If $\overline{Y}=\{1,2,\dots,(n+1)/2\}\in\mathcal{Y}$ then the indicator diagram contains the regular $2n$-gon that forms the convex hull of $S\cup\{-s:s\in S\}$. We show that the indicator diagram is precisely the convex hull of $S$ or of $S\cup\{-s:s\in S\}$ and that there are no points $\sum_{y\in Y'}\omega^y$, for $Y'\in\mathcal{Y}$, on the boundary of the indicator diagram other than at the vertices. Excepting rotations of $Y$ and $\overline{Y}$, which all correspond to vertices, the sets $Y'\in\mathcal{Y}$ whose corresponding exponent has greatest modulus \BES s'=\left|\sum_{y\in Y'}\omega^y\right| \EES are rotations and reflections of \begin{align*} Y_1 &= \{1,2,\dots,(n-3)/2\} \mbox{ or } \\ Y_2 &= \{1,2,\dots,(n-3)/2,(n+1)/2\}.
\end{align*} However, the minimum modulus of the boundary of the indicator diagram is greater than or equal to \begin{multline*} \frac{1}{2} \left| \sum_{y=0}^{(n-3)/2}\omega^y + \sum_{y=1}^{(n-1)/2}\omega^y \right| = \left| \frac{1}{2}(1+\omega^{(n-1)/2}) + \sum_{y=1}^{(n-3)/2}\omega^y \right| \\ > \left| \sum_{y=1}^{(n-3)/2}\omega^y \right| = s_1 > \left| \omega^{(n+1)/2} + \sum_{y=1}^{(n-3)/2}\omega^y \right| = s_2, \end{multline*} hence any point corresponding to $Y'$ is interior to the indicator diagram. It is easy to check that this also holds if $n=3$ or $n=5$. As there can only be two collinear exponents lying on any side of the indicator diagram, the argument in Sections~1--7 of~\cite{Lan1931a} may be simplified considerably to yield the stronger condition that the zeros of the exponential polynomial lie asymptotically on a ray, a semi-strip of zero width. The arguments of Sections~8--9 applied to this result complete the proof. \end{proof} \begin{rmk} \label{rmk:P1:Spectrum:Rays.2.4.7} Theorem~\ref{thm:P1:Spectrum:Odd.Rays} does not hold for $n$ even. Indeed, \BES \frac{1}{2}\left(\sum_{j=0}^{(n-2)/2}\omega^j + \sum_{j=1}^{n/2}\omega^j \right) = \sum_{j=1}^{(n-2)/2}\omega^j, \EES hence if \BE \label{eqn:P1:Spectrum:rmk.Rays.2.4.7:Even} \{0,1,\dots,(n-2)/2\},\{1,2,\dots,n/2\},\{1,2,\dots,(n-2)/2\} \in \mathcal{Y} \EE then, by part (ii) of Lemma~\ref{lem:P1:Spectrum:Properties}, there are three collinear exponents on each side of the indicator diagram. Condition~\eqref{eqn:P1:Spectrum:rmk.Rays.2.4.7:Even} does not represent a pathological counterexample; it is satisfied by most pseudo-periodic, including all quasi-periodic, boundary conditions. \end{rmk} \bigskip The author is sincerely grateful to B. Pelloni for her continued support and encouragement. He is funded by EPSRC. \bibliographystyle{amsplain} \bibliography{dbrefs} \end{document}
TITLE: Number of undirected trees with unlabeled vertices and labeled edges QUESTION [0 upvotes]: I would appreciate some help coming up with an expression for the number of spanning trees of an undirected graph with m labeled edges but m+1 unlabeled vertices. The answer is supposed to be $(m+1)^{m-2}$, but the best I came up with is $(m+1)^{m-1}$, with help from this discussion. What I did was use the Cayley formula for undirected labeled trees with m+1 vertices, choosing the m+1 as my root, and from there I "moved" the number on each vertex onto the edge before it. In this way I managed to get a tree with m labeled edges, as I was looking for, but given the stated answer I guess I missed some double counting which I'm not able to detect. REPLY [0 votes]: Let $T$ be a tree with $m$ edges labelled $1$ through $m$ and $m+1$ vertices. Pick any vertex, and label it $0$. Root $T$ at $0$. Label each vertex of height $1$ with the label of the edge joining it to $0$. Label each vertex of height $2$ with the label of the unique edge joining it to a vertex of height $1$. Continue in this fashion until every vertex has been given the label of the edge joining it to its parent in the rooted tree. Each pair $\langle T,v\rangle$ of an edge-labelled tree with $m$ edges labelled $1$ through $m$ and a vertex $v$ of $T$ gives rise in this way to a distinct vertex-labelled tree on $m+1$ vertices labelled $0$ through $m$. If we start with such a vertex-labelled tree and take the vertex $v$ with label $0$ as the root, we can transfer the label on each of the $m$ remaining vertices to the edge joining it to its parent to reverse the process to recover $\langle T,v\rangle$. This shows that if there are $t_m$ edge-labelled trees with $m$ edges labelled $1$ through $m$, then $t_m(m+1)=(m+1)^{m-1}$, and hence $t_m=(m+1)^{m-2}$.
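For small $m$ the identity can be checked by brute force: enumerate every tree on $m+1$ vertices, attach all $m!$ edge labelings, and identify two labeled trees whenever some vertex permutation maps one onto the other. This sketch (the function name is mine, and the search is exponential, so only tiny $m$ are feasible) confirms the count $(m+1)^{m-2}$:

```python
from itertools import combinations, permutations

def count_edge_labeled_trees(m):
    """Trees with m labeled edges and m+1 unlabeled vertices,
    counted via canonical forms under vertex relabeling."""
    n = m + 1
    vertices = tuple(range(n))
    all_edges = list(combinations(vertices, 2))

    def is_tree(edges):
        # n vertices and n-1 edges: connected <=> tree.
        adj = {v: [] for v in vertices}
        for u, v in edges:
            adj[u].append(v)
            adj[v].append(u)
        seen, stack = {0}, [0]
        while stack:
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return len(seen) == n

    def canonical(labeled_edges):
        # Lexicographically smallest representative over all vertex relabelings.
        return min(
            tuple(sorted((min(p[u], p[v]), max(p[u], p[v]), lab)
                         for (u, v), lab in labeled_edges))
            for p in permutations(vertices)
        )

    classes = set()
    for edges in combinations(all_edges, m):
        if is_tree(edges):
            for labels in permutations(range(1, m + 1)):
                classes.add(canonical(list(zip(edges, labels))))
    return len(classes)

print(count_edge_labeled_trees(3))  # 4 == (3+1)**(3-2)
```

For $m=3$ there are $16$ vertex-labeled trees and $6$ labelings, i.e. $96$ objects; the vertex-permutation action (order $24$) is free here, giving the $4$ classes the formula predicts.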
TITLE: Euler–Lagrange equation (changing variable) QUESTION [0 upvotes]: Derive the Euler–Lagrange equation for the following functionals (changing variables if necessary). $$\tag{1}\int_{y_1}^{y_2}\dfrac{x'^2}{\sqrt{x'^2+x^2}}\,\mathrm{d}y$$ $$\tag{2}\int_{x_1}^{x_2}y^{3/2}\,\mathrm{d}s$$ $$\tag{3}\int\dfrac{y\cdot{y'}}{1+yy'}\,\mathrm{d}x$$ I don't have an idea about $(1)$ and $(3)$. But here is what I have tried for $(2)$. 2) $$\int_{x_1}^{x_2}y^{3/2}\,\mathrm{d}s = \int_{x_1}^{x_2}y^{3/2}(1+y'^2)^{1/2}\,\mathrm{d}x = \int_{y_1}^{y_2}(1+x'^2)^{1/2}y^{3/2}\,\mathrm{d}y$$ So our Euler equation is: $$\dfrac{\mathrm{d}}{\mathrm{d}y}\left(\dfrac{\partial F}{\partial x'}\right) - \dfrac{\partial F}{\partial x} = 0$$ Then I have to find $y'$ or $x'$. But I have not taken a differential equations course yet; we use the Beltrami identity to find the extremals. REPLY [2 votes]: We know that the functional $$I[y]=\int_{x_1}^{x_2} f(x,y(x),y'(x))\,\mathrm{d}x$$ is extremized only if $y$ satisfies the Euler–Lagrange equation $$\frac{\partial{f}}{\partial{y}}-\frac{\mathrm{d}}{\mathrm{d}x}\frac{\partial{f}}{\partial{y'}}=0.$$ Now, if $f$ is independent of $x$, then the Euler–Lagrange equation reduces to the Beltrami identity $$f-y'\frac{\partial{f}}{\partial{y'}}=C=\text{constant}.$$ Note that the names of the variables are immaterial. Henceforth, we use the same notation as above. In the first case, we have $f(x,y,y')=\dfrac{y'^2}{\sqrt{y^2+y'^2}}$, which is independent of $x$. Hence we can use the Beltrami identity. Differentiation gives $$\frac{\partial{f}}{\partial{y'}}=\frac{2y^2y'+y'^3}{\left(y^2+y'^2\right)^{3/2}}\implies\frac{y'^2}{\sqrt{y^2+y'^2}}-\frac{2y^2y'^2+y'^4}{\left(y^2+y'^2\right)^{3/2}}=C.$$ Rewriting the first term gives $$\frac{y^2y'^2+y'^4}{\left(y^2+y'^2\right)^{3/2}}-\frac{2y^2y'^2+y'^4}{\left(y^2+y'^2\right)^{3/2}}=C\implies-\frac{y^2y'^2}{\left(y^2+y'^2\right)^{3/2}}=C,$$ which is a first-order non-linear ordinary differential equation.
In total, two constants of integration will be obtained. Of course, we could use the Euler–Lagrange equation instead, but that would be much more complicated; just consider the derivatives: $$\frac{\partial{f}}{\partial{y}}=-\frac{yy'^2}{\left(y^2+y'^2\right)^{3/2}},\,\frac{\partial{f}}{\partial{y'}}=\frac{2y^2y'+y'^3}{\left(y^2+y'^2\right)^{3/2}},\,\frac{\mathrm{d}}{\mathrm{d}x}\frac{\partial{f}}{\partial{y'}}=\frac{y(2y^2-y'^2)(yy''-y'^2)}{\left(y^2+y'^2\right)^{5/2}}.$$ The second and third cases can be treated similarly; both integrands are independent of $x$. However, in the second case, note that $\mathrm{d}s=\sqrt{1+y'^2}\,\mathrm{d}x$, so $f(x,y,y')=y^{3/2}\sqrt{1+y'^2}$; this expression is independent of $x$, as it should be.
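As a sanity check on the algebra (a sketch of my own using sympy, not part of the original answer), one can verify symbolically that the Beltrami identity for the integrand of case $(1)$ reduces to the displayed first-order equation:

```python
import sympy as sp

# Treat y and y' as independent symbols, as one does in the Beltrami identity.
y, yp = sp.symbols('y yp', positive=True)

# Integrand of case (1), with variables renamed as in the answer.
f = yp**2 / sp.sqrt(y**2 + yp**2)

# Beltrami identity: f - y' * df/dy' = C (a constant).
beltrami = sp.simplify(f - yp * sp.diff(f, yp))

# The answer's claimed reduction: -y^2 y'^2 / (y^2 + y'^2)^(3/2).
claimed = -y**2 * yp**2 / (y**2 + yp**2)**sp.Rational(3, 2)

assert sp.simplify(beltrami - claimed) == 0
```

The same pattern (differentiate, subtract, simplify) checks the Euler–Lagrange derivatives displayed above as well.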
{"set_name": "stack_exchange", "score": 0, "question_id": 407791}
TITLE: Is this family of functions equicontinuous in $\mathbb{R}$? QUESTION [3 upvotes]: Let $\{ \varphi_{\lambda}(x,y): \lambda \in \mathbb{R} \}$ be the family of functions defined by $$\varphi_{\lambda}(x,y):= \frac{1}{1 + x^4 + \frac{1}{2}\arctan(\lambda)\sin(y^6)}$$ for all $\lambda \in \mathbb{R}$. Is this family of functions equicontinuous in $\mathbb{R}$? It is easy to see that this family of functions is uniformly bounded by $1$, but I don't know how to see if it is equicontinuous. I have thought about using subsequences and maybe the Arzela-Ascoli theorem, but I do not see how to apply it... For example, it is not equi-Lipschitz, as its partial derivatives are not bounded on a convex open set, so this doesn't prove anything either... Any hint will be appreciated. Thanks in advance REPLY [1 votes]: Take $\lambda,\mu$. Since $\frac1a-\frac1b = \frac{b-a}{ab}$ we have $$ |\phi_\lambda(x,y)-\phi_\mu(x,y)|\le \frac1{(1-\frac\pi4)^2} \frac12 |\arctan(\lambda)-\arctan(\mu)|. $$ This implies $\phi_\mu\to \phi_\lambda$ uniformly as $\arctan(\mu) \to \arctan(\lambda)$. Since the set $\{\arctan \lambda: \ \lambda\in \mathbb R\}$ is bounded, we can find for each sequence $(\lambda_n)$ a subsequence such that $(\arctan(\lambda_{n_k}))$ converges, and so $(\phi_{\lambda_{n_k}})$ converges uniformly. Then by Arzela-Ascoli, the family is equicontinuous. REPLY [1 votes]: A direct calculation/estimate works: The denominator is bounded below by $1 - \pi/4 > 0$, so that $$ | \varphi_{\lambda}(x_1,y_1) - \varphi_{\lambda}(x_2,y_2)| \le \frac{|x_2^4 + \frac{1}{2}\arctan(\lambda)\sin(y_2^6) - x_1^4 - \frac{1}{2}\arctan(\lambda)\sin(y_1^6)|}{(1-\pi/4)^2} \\ \le \frac{|x_2^4 -x_1^4|+ \frac{\pi}{4}|\sin(y_2^6)-\sin(y_1^6)|}{(1-\pi/4)^2} = C |f(x_2)-f(x_1)| + D|g(y_2)-g(y_1)| $$ with some constants $C, D$ and the continuous functions $f(x) = x^4$ and $g(y) = \sin(y^6)$.
For given $(x_1, y_1) \in \Bbb R^2$ this becomes arbitrarily small if $(x_2, y_2)$ is sufficiently close to $(x_1, y_1)$, independently of $\lambda$.
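To make the estimate concrete (a numerical sketch of my own, not from either answer): since $|\arctan\lambda| < \pi/2$ keeps the denominator above $1-\pi/4$, the oscillation of $\varphi_\lambda$ near a fixed point is small uniformly in $\lambda$, and it stays below the analytic bound of the second answer:

```python
import math

def phi(lam, x, y):
    # The family from the question.
    return 1.0 / (1.0 + x**4 + 0.5 * math.atan(lam) * math.sin(y**6))

# A base point and a nearby point.
x1, y1 = 0.5, 0.8
x2, y2 = 0.5001, 0.8001

# Sweep lambda over many orders of magnitude, including values where
# arctan(lambda) is essentially +/- pi/2.
lams = [0.0, 1.0, -1.0, 1e3, -1e3, 1e9, -1e9]
worst = max(abs(phi(l, x1, y1) - phi(l, x2, y2)) for l in lams)

# Analytic bound: C|x2^4 - x1^4| + D|sin(y2^6) - sin(y1^6)| with
# C = 1/(1 - pi/4)^2 and D = (pi/4) * C, as derived in the answer.
C = 1.0 / (1.0 - math.pi / 4) ** 2
bound = C * abs(x2**4 - x1**4) \
    + (math.pi / 4) * C * abs(math.sin(y2**6) - math.sin(y1**6))
assert worst <= bound
```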
{"set_name": "stack_exchange", "score": 3, "question_id": 3989974}
TITLE: $f,g$ continuous maps. $f = g$ a.e. equal, is $f = g$ on a locally compact group? QUESTION [2 upvotes]: In this question (functions $f=g$ $\lambda$-a.e. for continuous real-valued functions are then $f=g$ everywhere) it is stated that if $ f,g : \mathbb{R} \to \mathbb{R} $ are continuous and $ f = g $ a.e., then in fact $ f(x) = g(x) $ on all of $ \mathbb{R} $. What if $ f,g : G \to \mathbb{C} $ (continuous maps) where $ G $ is a locally compact group? Does it still hold that $ f = g $ a.e. implies $ f(x) = g(x) $ for all $ x\in G $? REPLY [2 votes]: If you are talking about the Haar measure on $G$, this is true. For if there were a nonempty open set $\emptyset \neq U \subset G$ with $\mu(U) = 0$, we can assume (by translation) w.l.o.g. that $e \in U$ (the identity of $G$). Then for every compact $K \subset G$, we could cover $K$ by finitely(!) many of the translates $xU$ for $x \in K$ (here, we use that $K$ is compact). This implies $\mu(K) = 0$ for every(!) compact $K \subset G$. By inner regularity of the Haar measure, i.e. by $$ \mu(V) = \sup\{\mu(K) \mid K \subset V \text{ compact} \}, $$ we conclude $\mu(V) = 0$ for all open subsets $V \subset G$. By outer regularity, we get $\mu \equiv 0$, a contradiction. This shows that every nonempty open set has positive measure. By continuity of $f,g$ we know that $$ U := \{ x \in G \mid f(x) \neq g(x) \} $$ is open. Thus it is either empty (i.e. $f \equiv g$), or has positive measure (i.e. NOT $f=g$ a.e.).
{"set_name": "stack_exchange", "score": 2, "question_id": 830304}
TITLE: Can a gauge anomaly be *removed* by quantum corrections? QUESTION [4 upvotes]: Consider a classical gauge field coupled to a vector field $j^\mu$. Gauge invariance requires that $\mathcal A_\mathrm{cl}:=\partial_\mu j^\mu$ vanishes: $$ \mathcal A_\mathrm{cl}\equiv 0 $$ In other words, the source of a classical gauge field must be conserved, for otherwise the theory is inconsistent. Let us now move on to the quantum theory, with boldface denoting operators. Even if the classical theory is gauge invariant, $\boldsymbol{\mathcal A_\mathrm{cl}}\equiv \boldsymbol 0$, we may still have a quantum anomaly, $\boldsymbol{\mathcal A_\mathrm{qm}}\neq \boldsymbol 0$, which would render the theory inconsistent. A situation I have never seen discussed is a gauge theory coupled to a non-conserved classical source, $\boldsymbol{\mathcal A_\mathrm{cl}}\neq \boldsymbol 0$, but with a quantum anomaly that satisfies $\boldsymbol{\mathcal A_\mathrm{qm}}\equiv -\boldsymbol{\mathcal A_\mathrm{cl}}$. In such a case, the quantum source would be conserved, $$ \partial_\mu \boldsymbol j^\mu=\boldsymbol{\mathcal A_\mathrm{qm}}+\boldsymbol{\mathcal A_\mathrm{cl}}\equiv \boldsymbol 0 $$ which would mean that the quantum theory is consistent after all. If this picture is consistent, it would open up the door to a very weird but interesting phenomenology. For one thing, the theory probably lacks a classical limit, or at least the limit is highly non-trivial. A model like this would certainly require some meticulous tuning to ensure that the quantum anomaly precisely matches the classical one, but it seems to me that it is in principle conceivable. Or is it? Is there any obstruction to this mechanism? Is there any way to argue that this just cannot happen? Conversely, if this mechanism works, has it ever been used in the literature? REPLY [2 votes]: The suggested mechanism is one of the basic mechanisms of anomaly cancellations. 
Let me emphasize first that the symmetry in the classical theory of your question is required to be anomalous and not just broken or non-existent. This requires the current divergence to satisfy the Wess-Zumino consistency condition, or equivalently, the integrated anomaly to be a one-cocycle on the gauge group. Anomalies in gauge theories with fermions manifest themselves at the one-loop level, while in bosonic theories, the anomaly exists already at the classical level due to Wess-Zumino-Witten terms (these terms depend explicitly on $\hbar$, since their integrals over closed surfaces should be multiples of $2 \pi$). Since the net anomaly should vanish, an anomalous theory can exist only if its anomaly is compensated by another theory with exactly the opposite anomaly. This is basically what happens with a Dirac fermion composed of two Weyl fermions of opposite chiralities. An anomaly compensation mechanism of the kind described in the question happens, for example, in the quantum Hall effect. Here, the system is made of a bulk and an edge. The bulk theory is a Chern-Simons theory in 2+1 D; it is anomalous at the classical level. The edge theory can be described as a theory of chiral fermions in 1+1 D. Its anomaly occurs at the one-loop level and exactly compensates the anomaly of the bulk theory. Please see the following article by Jiusi and Nair where this point is clearly explained on page 11 (the article is new, but this anomaly compensation mechanism has been known for a long time). (In the Abelian case, the gauge group is electromagnetism and clearly we should have this anomaly cancellation.) Now, you could choose for the edge theory not a chiral fermion but a chiral boson. Still, the anomaly gets compensated, but this time it is manifested for both theories at the classical level. This example shows that an anomaly is a real property of a system; the level at which it manifests is a matter of how the system is described.
All descriptions are incomplete: for example, the fermionic description of QCD is by means of confined quarks, while the bosonic (low energy sigma model) description is not renormalizable. However, the anomaly can be exactly computed in both descriptions. Thus the main point is that the description of an anomaly as classical or quantum is not absolute; it relies on our description of the system, which is not unique. In addition, the tuning is not very complex, because the coefficient of the Wess-Zumino-Witten term (hence the anomaly) is fixed by a quantization condition (a generalization of Dirac's quantization condition for the monopole), while in the fermionic case the anomaly coefficient depends on the fermion representation, so only a discrete set of cases needs to be matched.
{"set_name": "stack_exchange", "score": 4, "question_id": 403987}
TITLE: Inconsistency when solving IVP using Laplace Transform with Dirac Delta QUESTION [3 upvotes]: Solving $\dot{x}(t) + x(t) = \delta (t) $ using the Laplace transform with $x(0) = 1$, we get: $sX(s)-1 + X(s) = 1$ $X(s) = \frac{2}{s+1}$ so $x(t) = 2e^{-t}$. However, evaluating at $t=0$, $x(0) = 2 \neq 1$. This disagrees with the initial condition. What went wrong here? REPLY [1 votes]: The reason that you get $x(0)=2$ is that you both set $x(0^-)=1$ and apply a Dirac $\delta$ that adds a unit step making $x(0^+)-x(0^-)=1$. Thus, $x(0^+)=x(0^-)+\left(x(0^+)-x(0^-)\right) = 1 + 1 = 2.$ This will be clearer if you displace the $\delta$ somewhat and take the differential equation as $\dot{x}(t) + x(t) = \delta(t-\epsilon)$ with $x(0^-)=x_0.$ When $\epsilon>0$ this gives $\left(sX(s)-x_0\right)+X(s)=e^{-\epsilon s}$ i.e. $$ X(s) = \frac{x_0+e^{-\epsilon s}}{s+1} = \frac{x_0}{s+1} + \frac{e^{-\epsilon s}}{s+1} $$ so $$ x(t) = x_0 e^{-t} H(t) + e^{-(t-\epsilon)} H(t-\epsilon) . $$ This function first has a step of size $x_0$ at $t=0$ and later another step of size $1$ (from the $\delta$ term) at $t=\epsilon$. If you take $x_0=1$ and let $\epsilon\to 0$ then you will get a total step of size $2$ at $t=0$ which is what you have seen.
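A quick numeric check of the displaced-delta formula (a sketch of my own built on the answer's solution): $x(t)=x_0e^{-t}H(t)+e^{-(t-\epsilon)}H(t-\epsilon)$ satisfies the homogeneous ODE away from the impulses, carries a unit jump at $t=\epsilon$, and gives $x(0^+)\to 2$ as $\epsilon\to 0$.

```python
import math

def x(t, x0=1.0, eps=1e-3):
    """The answer's solution x(t) = x0 e^{-t} H(t) + e^{-(t-eps)} H(t-eps)."""
    val = 0.0
    if t >= 0:
        val += x0 * math.exp(-t)
    if t >= eps:
        val += math.exp(-(t - eps))
    return val

eps = 1e-3

# The delta at t = eps produces a unit jump in x.
jump = x(eps, eps=eps) - x(eps - 1e-9, eps=eps)
assert abs(jump - 1.0) < 1e-6

# Away from the impulses, x' + x = 0 holds (central finite difference).
t0, h = 0.5, 1e-6
deriv = (x(t0 + h, eps=eps) - x(t0 - h, eps=eps)) / (2 * h)
assert abs(deriv + x(t0, eps=eps)) < 1e-5

# As eps -> 0, the value just after the impulse tends to x0 + 1 = 2.
assert abs(x(2e-9, eps=1e-9) - 2.0) < 1e-6
```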
{"set_name": "stack_exchange", "score": 3, "question_id": 4281658}
TITLE: Find the number of functions $f(x)$ with $f(f(n)) = n+2022$ for every nonnegative integer n. QUESTION [1 upvotes]: Find the number of functions $f(x)$ from nonnegative integers to nonnegative integers so that $f(f(n)) = n+2022$ for every nonnegative integer n. Let $a_0 = f(0)$ and let $a_n = f(a_{n-1})$ for $n\ge 1$. Then $a_{n} - a_{n-2} = 2022$ for every $n\ge 2$. The characteristic equation of the corresponding homogeneous recurrence is $x^2 - 1 = 0,$ which has roots $\pm 1.$ Also, $b_n = 1011 n + b$ satisfies the inhomogeneous recurrence $b_n - b_{n-2} = 2022$ for any integer $b$. So $a_n = A(-1)^n + 1011 n + B$ for some integers $A,B$ and all $n$. We must ensure that $a_n$ is always nonnegative. We must have $a_0\ge 0\Rightarrow A \ge -B.$ Also, $a_1\ge 0\Rightarrow -A +1011 + B \ge 0, a_2\ge 0\Rightarrow A +2022 + B\ge 0.$ In general, for all $n=2k+1> 0$, $B\ge A-1011 n$ and for all $n=2k\ge 0, A\ge -B-1011n$. To satisfy both of these inequalities, it suffices to have $B\ge A-1011$ and $A\ge -B\Rightarrow B\ge -B-1011\Rightarrow 2B\ge -1011.$ So $B$ is at least $-505.$ Though I'm not sure how to determine what $f(n)$ can be. Clearly $f(n) = n+1011$ solves the functional equation and $f(0)$ cannot be zero, so it must be positive. If there exists $n$ so that $f(n) = n$, then we get $n=n+2022$ by plugging this $n$ into the given equation, which is a contradiction. Hence $f$ has no fixed points. REPLY [0 votes]: $$f(f(n))=n+2022\tag0$$ $$f(f(f(n)))=f(n+2022)$$ $$f(n+2022)=f(n)+2022\tag1$$ Thus specifying $f(n)$ for $n\in[0,2021]$ fixes $f$. Now write the nonnegative integers in a zero-indexed infinite matrix with $2022$ columns so that $n$ has – and is treated as equal to – the coordinates $(\lfloor n/2022\rfloor,n\bmod2022)$. Then if $f$ sends $(q+d,r)$ to $(q,s)$ where $d\ge1$ (a backwards jump), by $(1)$ it must also send $(d-1,r)$ to $(-1,s)$ which is absurd.
As a corollary $f$ cannot send $(q,r)$ to $(q+d,s)$ with $d\ge2$ (a long forwards jump), since by $(0)$ $(q+d,s)$ must be sent to $(q+1,r)$, a backwards jump. Therefore a number $(0,r)=r\in[0,2021]$ can only be sent by $f$ to $(0,s)$ or $(1,s)$; $s\ne r$ since $f$ would then have a fixed point by $(0)$, which is inconsistent with $(1)$. If $f((0,r))=(0,s)$, columns $r$ and $s$ of the infinite matrix are completely defined; if $f((0,r))=(1,s)$, $(1)$ implies $f((0,s))=(0,r)$ with the same end result as before. The final result is that all admissible functions $f$ are specified by a pairing of residue classes modulo $2022$ and then, for each pair $(r,s)$, whether $f((0,r))=(0,s)$ or the other way round. The number of such $f$ is thus $$\frac{2022!}{1011!}$$ A similar argument shows that for $f^{(p)}(n)=n+pq$ the number of such functions is $$\frac{(pq)!}{q!}$$
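The closed form can be sanity-checked by exhaustive search for small parameters (a sketch of my own, not from the answer). By $(1)$, $f$ is determined by its values on $[0, pq-1]$, and the no-backwards-jump/no-long-forward-jump argument keeps those values below $2pq$, so searching values up to $3pq$ is safely exhaustive:

```python
from itertools import product
from math import factorial

def count_solutions(p, q):
    """Count f: nonneg ints -> nonneg ints with f applied p times sending
    n to n + p*q, using the periodicity f(n + pq) = f(n) + pq."""
    m = p * q
    count = 0
    for vals in product(range(3 * m), repeat=m):
        # Extend f from [0, m-1] to all n via f(n + m) = f(n) + m.
        def f(n):
            return vals[n % m] + (n - n % m)
        ok = True
        for n in range(m):
            z = n
            for _ in range(p):
                z = f(z)
            if z != n + m:
                ok = False
                break
        if ok:
            count += 1
    return count

# Matches (pq)!/q! in the small cases that are feasible to enumerate.
assert count_solutions(2, 1) == factorial(2) // factorial(1)   # 2
assert count_solutions(3, 1) == factorial(3) // factorial(1)   # 6
assert count_solutions(2, 2) == factorial(4) // factorial(2)   # 12
```

Checking only $n\in[0,m-1]$ suffices, since the condition for all other $n$ follows from the periodicity used to extend $f$.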
{"set_name": "stack_exchange", "score": 1, "question_id": 4538449}
TITLE: Finding the maxima without second derivative test QUESTION [0 upvotes]: How can I verify that a critical point is a maximum without using the second derivative test? Here is the specific situation: there is a function $f(x)\ge 0$ with $x\ge 0$, as both are distances. Now I found that $$ \frac {df}{dx}=\frac {2}{(1+\frac{2^2}{x^2})x^2}-\frac{8}{(1+\frac{8^2}{x^2}) x^2} $$ and $\frac {df}{dx}=0$ when $x=\pm 4$, but clearly $x$ cannot be $-4$. Now I must show that $x=4$ maximizes $f(x)$ without using the second derivative test. REPLY [0 votes]: Simplifying the derivative you get $$ f'(x)=\frac{6(16-x^2)}{(x^2+4)(x^2+64)}=\frac{6(4-x)(4+x)}{(x^2+4)(x^2+64)}. $$ Now you see that for $0\le x<4$: $f'(x)>0$ $\Rightarrow$ the function increases; for $x>4$: $f'(x)<0$ $\Rightarrow$ the function decreases. Hence, $x=4$ is the maximum point.
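The simplification of $f'$ and the first-derivative sign test can be double-checked symbolically (a sympy sketch of my own, not part of the answer):

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# f'(x) exactly as written in the question.
fprime = 2/((1 + 2**2/x**2)*x**2) - 8/((1 + 8**2/x**2)*x**2)

# The answer's simplified form.
simplified = 6*(16 - x**2)/((x**2 + 4)*(x**2 + 64))

# Same rational function ...
assert sp.simplify(fprime - simplified) == 0

# ... and the sign change across x = 4 confirms a maximum there.
assert fprime.subs(x, 2) > 0   # increasing on [0, 4)
assert fprime.subs(x, 6) < 0   # decreasing on (4, oo)
```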
{"set_name": "stack_exchange", "score": 0, "question_id": 1441890}
\begin{document} \title[Cartan geometry and nef tangent bundle]{Holomorphic Cartan geometry on manifolds\\[5pt] with numerically effective tangent bundle} \author[I. Biswas]{Indranil Biswas} \address{School of Mathematics, Tata Institute of Fundamental Research, Homi Bhabha Road, Bombay 400005, India} \email{indranil@math.tifr.res.in} \author[U. Bruzzo]{Ugo Bruzzo} \address{Scuola Internazionale Superiore di Studi Avanzati, Via Beirut 2--4, 34013, Trieste, and Istituto Nazionale di Fisica Nucleare, Sezione di Trieste, Italy} \email{bruzzo@sissa.it} \subjclass[2000]{32M10, 14M17, 53C15} \keywords{Cartan geometry, numerical effectiveness, rational homogeneous space} \date{} \begin{abstract} Let $X$ be a compact connected K\"ahler manifold such that the holomorphic tangent bundle $TX$ is numerically effective. A theorem of \cite{DPS} says that there is a finite unramified Galois covering $M\, \longrightarrow\, X$, a complex torus $T$, and a holomorphic surjective submersion $f\, :\, M\,\longrightarrow\,T$, such that the fibers of $f$ are Fano manifolds with numerically effective tangent bundle. A conjecture of Campana and Peternell says that the fibers of $f$ are rational and homogeneous. Assume that $X$ admits a holomorphic Cartan geometry. We prove that the fibers of $f$ are rational homogeneous varieties. We also prove that the holomorphic principal ${\mathcal G}$--bundle over $T$ given by $f$, where $\mathcal G$ is the group of all holomorphic automorphisms of a fiber, admits a flat holomorphic connection. \end{abstract} \maketitle \section{Introduction}\label{intro.} Let $X$ be a compact connected K\"ahler manifold such that the holomorphic tangent bundle $TX$ is numerically effective. (The notions of numerically effective vector bundle and numerically flat vector bundle over a compact K\"ahler manifold were introduced in \cite{DPS}.)
From a theorem of Demailly, Peternell and Schneider we know that there is a finite unramified Galois covering $$ \gamma\, :\, M\, \longrightarrow\, X\, , $$ a complex torus $T$, and a holomorphic surjective submersion $$ f\, :\, M\,\longrightarrow\,T\, , $$ such that the fibers of $f$ are Fano manifolds with numerically effective tangent bundle (see \cite[p. 296, Main~Theorem]{DPS}). It is conjectured by Campana and Peternell that the fibers of $f$ are rational homogeneous varieties (i.e., varieties of the form ${\mathcal G}/P$, where $P$ is a parabolic subgroup of a complex semisimple group ${\mathcal G}$) \cite[p. 170]{CP}, \cite[p. 296]{DPS}. Our aim here is to verify this conjecture under the extra assumption that $X$ admits a holomorphic Cartan geometry. Let $(E'_H\, ,\theta')$ be a holomorphic Cartan geometry on $X$ of type $G/H$, where $H$ is a complex Lie subgroup of a complex Lie group $G$. (The definition of Cartan geometry is recalled in Section \ref{sec2}.) Consider the pullback $\theta$ of $\theta'$ to the holomorphic principal $H$--bundle $E_H\,:=\, \gamma^*E'_H$, where $\gamma$ is the above covering map. The pair $(E_H\, ,\theta)$ is a holomorphic Cartan geometry on $M$. Using $(E_H\, ,\theta)$ we prove the following theorem (see Theorem \ref{thm1}): \begin{theorem}\label{thm0} There is a semisimple linear algebraic group $\mathcal G$ over $\mathbb C$, a parabolic subgroup $P\, \subset\, \mathcal G$, and a holomorphic principal $\mathcal G$--bundle $$ {\mathcal E}_{\mathcal G}\, \longrightarrow\, T\, , $$ such that the fiber bundle ${\mathcal E}_{\mathcal G}/P\, \longrightarrow\, T$ is holomorphically isomorphic to the fiber bundle $f\, :\, M\, \longrightarrow\, T$. \end{theorem} The group $\mathcal G$ in Theorem \ref{thm0} is the group of all holomorphic automorphisms of a fiber of $f$. 
Let ${\rm ad}({\mathcal E}_{\mathcal G}) \,\longrightarrow\, T$ be the adjoint vector bundle of the principal $\mathcal G$--bundle ${\mathcal E}_{\mathcal G}$ in Theorem \ref{thm0}. Let $K^{-1}_f\,\longrightarrow\, M$ be the relative anti--canonical line bundle for the projection $f$. We prove the following (see Proposition \ref{lem1} and Proposition \ref{lem3}): \begin{proposition}\label{lem0} Let $X$ be a compact connected K\"ahler manifold such that $TX$ is numerically effective, and let $(E'_H\, ,\theta')$ be a holomorphic Cartan geometry on $X$ of type $G/H$. Then the following two statements hold: \begin{enumerate} \item The adjoint vector bundle ${\rm ad}({\mathcal E}_{\mathcal G})$ is numerically flat. \item The principal $\mathcal G$--bundle ${\mathcal E}_{\mathcal G}$ admits a flat holomorphic connection. \end{enumerate} \end{proposition} \section{Cartan geometry and numerical effectiveness}\label{sec2} Let $G$ be a connected complex Lie group. Let $ H\, \subset\, G $ be a connected complex Lie subgroup. The Lie algebra of $G$ (respectively, $H$) will be denoted by $\mathfrak g$ (respectively, $\mathfrak h$). Let $Y$ be a connected complex manifold. The holomorphic tangent bundle of $Y$ will be denoted by $TY$. Let $E_H\, \longrightarrow\, Y$ be a holomorphic principal $H$--bundle. For any $g\,\in\, H$, let \begin{equation}\label{betah} \beta_g\,:\, E_H\, \longrightarrow\, E_H \end{equation} be the biholomorphism defined by $z\, \longmapsto\, zg$. For any $v\, \in\, {\mathfrak h}$, let \begin{equation}\label{z} \zeta_v\, \in\, H^0(E_H,\, TE_H) \end{equation} be the holomorphic vector field on $E_H$ associated to the one--parameter family of biholomorphisms $t\,\longmapsto \, \beta_{\exp(tv)}$. Let $$ \text{ad}(E_H)\, :=\, E_H\times^H{\mathfrak h}\,\longrightarrow\,Y $$ be the adjoint vector bundle associated to $E_H$ for the adjoint action of $H$ on $\mathfrak h$. The adjoint vector bundle of a principal $G$--bundle is defined similarly.
A \textit{holomorphic Cartan geometry} of type $G/H$ on $Y$ is a holomorphic principal $H$--bundle \begin{equation}\label{e0} p\, :\, E_H\, \longrightarrow\, Y \end{equation} together with a $\mathfrak g$--valued holomorphic one--form \begin{equation}\label{e00} \theta\, \in\, H^0(E_H,\, \Omega^1_{E_H}\otimes_{\mathbb C} {\mathfrak g}) \end{equation} satisfying the following three conditions: \begin{enumerate} \item $\beta^*_g\theta\, =\, \text{Ad}(g^{-1})\circ\theta$ for all $g\, \in\,H$, where $\beta_g$ is defined in \eqref{betah}, \item $\theta(z)(\zeta_v(z)) \, =\, v$ for all $v\, \in\, {\mathfrak h}$ and $z\, \in\, E_H$ (see \eqref{z} for $\zeta_v$), and \item for each point $z\, \in\, E_H$, the homomorphism from the holomorphic tangent space \begin{equation}\label{e-1} \theta(z) \,:\, T_zE_H\, \longrightarrow\, {\mathfrak g} \end{equation} is an isomorphism of vector spaces. \end{enumerate} (See \cite{S}.) A holomorphic line bundle $L\,\longrightarrow\, Y$ is called \textit{numerically effective} if $L$ admits Hermitian structures whose curvatures have arbitrarily small negative part \cite[p. 299, Definition 1.2]{DPS}. If $Y$ is a projective manifold, then $L$ is numerically effective if and only if its restriction to every complete curve has nonnegative degree. A holomorphic vector bundle $E\,\longrightarrow \, Y$ is called \textit{numerically effective} if the tautological line bundle ${\mathcal O}_{{\mathbb P}(E)}(1) \,\longrightarrow\, {\mathbb P}(E)$ is numerically effective. Let $X$ be a compact connected K\"ahler manifold such that the holomorphic tangent bundle $TX$ is numerically effective.
Then there is a finite \'etale Galois covering \begin{equation}\label{e1} \gamma\, :\, M\, \longrightarrow\, X\, , \end{equation} a complex torus $T$ and a holomorphic surjective submersion \begin{equation}\label{e2} f\, :\, M\, \longrightarrow\, T \end{equation} such that the fibers of $f$ are connected Fano manifolds with numerically effective tangent bundle \cite[p. 296, Main~Theorem]{DPS}. \begin{theorem}\label{thm1} Let $(E'_H\, ,\theta')$ be a holomorphic Cartan geometry on $X$ of type $G/H$, where $X$ is a compact connected K\"ahler manifold such that the holomorphic tangent bundle $TX$ is numerically effective. Then there is \begin{enumerate} \item a semisimple linear algebraic group $\mathcal G$ over $\mathbb C$, \item a parabolic subgroup $P\, \subset\, \mathcal G$, and \item a holomorphic principal $\mathcal G$--bundle ${\mathcal E}_{\mathcal G}\, \longrightarrow\, T$, \end{enumerate} such that the fiber bundle ${\mathcal E}_{\mathcal G}/P\, \longrightarrow\, T$ is holomorphically isomorphic to the fiber bundle $f$ in \eqref{e2}. \end{theorem} \begin{proof} Let \begin{equation}\label{f1} (E_H\, ,\theta) \end{equation} be the holomorphic Cartan geometry on $M$ obtained by pulling back the holomorphic Cartan geometry $(E'_H\, ,\theta')$ on $X$ using the projection $\gamma$ in \eqref{e1}. Let \begin{equation}\label{eg} E_G\, :=\, E_H\times^H G \, \longrightarrow\, M \end{equation} be the holomorphic principal $G$--bundle obtained by extending the structure group of $E_H$ using the inclusion of $H$ in $G$. So $E_G$ is a quotient of $E_H\times G$, and two points $(z_1\, ,g_1)$ and $(z_2\, ,g_2)$ of $E_H\times G$ are identified in $E_G$ if there is an element $h\, \in\, H$ such that $z_2\,=\, z_1h$ and $g_2\,=\, h^{-1}g_1$. Let $$ \theta_{\text{MC}}\, :\, TG\, \longrightarrow\, G\times\mathfrak g $$ be the $\mathfrak g$--valued Maurer--Cartan one--form on $G$ constructed using the left invariant vector fields. 
Consider the $\mathfrak g$--valued holomorphic one--form $$ \widetilde{\theta}\, :=\, p^*_1 \theta + p^*_2\theta_{\text{MC}} $$ on $E_H\times G$, where $p_1$ (respectively, $p_2$) is the projection of $E_H\times G$ to $E_H$ (respectively, $G$), and $\theta$ is the one--form in \eqref{f1}. This form $\widetilde{\theta}$ descends to a $\mathfrak g$--valued holomorphic one--form on the quotient space $E_G$ in \eqref{eg}, and the descended form defines a holomorphic connection on $E_G$; see \cite{At2} for holomorphic connection. Therefore, the principal $G$--bundle $E_G$ in \eqref{eg} is equipped with a holomorphic connection. This holomorphic connection on $E_G$ will be denoted by $\nabla^G$. The inclusion map ${\mathfrak h}\,\,\hookrightarrow\,\mathfrak g$ produces an inclusion $$ \text{ad}(E_H)\,\hookrightarrow\, \text{ad}(E_G) $$ of holomorphic vector bundles. Using the form $\theta$, the quotient bundle $\text{ad}(E_G)/\text{ad}(E_H)$ gets identified with the holomorphic tangent bundle $TM$. Therefore, we get a short exact sequence of holomorphic vector bundles on $M$ \begin{equation}\label{eq1} 0\,\longrightarrow\, \text{ad}(E_H)\,\longrightarrow\,\text{ad}(E_G) \,\longrightarrow\, TM \,\longrightarrow\, 0\, . \end{equation} The holomorphic connection $\nabla^G$ on $E_G$ induces a holomorphic connection on the adjoint vector bundle $\text{ad}(E_G)$. This induced connection on $\text{ad}(E_G)$ will be denoted by $\nabla^{\rm ad}$. For any point $x\, \in\, T$, consider the holomorphic vector bundle \begin{equation}\label{e3} \text{ad}(E_G)^x\, :=\, \text{ad}(E_G)\vert_{f^{-1}(x)} \, \longrightarrow\, f^{-1}(x) \end{equation} (see \eqref{e2} for $f$). Let $\nabla^x$ be the holomorphic connection on $\text{ad}(E_G)^x$ obtained by restricting the above connection $\nabla^{\rm ad}$. Any complex Fano manifold is rationally connected \cite[p. 766, Theorem 0.1]{KMM}. In particular, $f^{-1}(x)$ is a rationally connected smooth complex projective variety. 
Since $f^{-1}(x)$ is rationally connected, the curvature of the connection $\nabla^x$ vanishes identically (see \cite[p. 160, Theorem 3.1]{Bi}). From the fact that $f^{-1}(x)$ is rationally connected it also follows that $f^{-1}(x)$ is simply connected \cite[p. 545, Theorem 3.5]{Ca}, \cite[p. 362, Proposition 2.3]{Ko}. Since $\nabla^x$ is flat, and $f^{-1}(x)$ is simply connected, we conclude that the vector bundle $\text{ad}(E_G)^x$ in \eqref{e3} is holomorphically trivial. Let \begin{equation}\label{e4} 0\,\longrightarrow\, \text{ad}(E_H)\vert_{f^{-1}(x)} \, \,\longrightarrow\,\text{ad}(E_G)^x \,\stackrel{\alpha}{\longrightarrow}\, (TM)\vert_{f^{-1}(x)} \,\longrightarrow\, 0 \end{equation} be the restriction to $f^{-1}(x)\, \subset\, M$ of the short exact sequence in \eqref{eq1}. Let $T_xT$ be the tangent space to $T$ at the point $x$. The trivial vector bundle over $f^{-1}(x)$ with fiber $T_xT$ will be denoted by $f^{-1}(x)\times T_xT$. Let $$ (df)\vert_{f^{-1}(x)}\, :\, (TM)\vert_{f^{-1}(x)} \,\longrightarrow\, f^{-1}(x)\times T_xT $$ be the differential of $f$ restricted to $f^{-1}(x)$. The kernel of the composition homomorphism $$ \text{ad}(E_G)^x\,\stackrel{\alpha}{\longrightarrow}\, (TM)\vert_{f^{-1}(x)} \, \stackrel{(df)\vert_{f^{-1}(x)}}{\longrightarrow}\, f^{-1}(x)\times T_xT $$ (see \eqref{e4} for $\alpha$) will be denoted by ${\mathcal K}^x$. So, from \eqref{e4} we get the short exact sequence of vector bundles \begin{equation}\label{e5} 0\,\longrightarrow\, {\mathcal K}^x \, \,\longrightarrow\,\text{ad}(E_G)^x \,\longrightarrow\,f^{-1}(x)\times T_xT \,\longrightarrow\, 0 \end{equation} over $f^{-1}(x)$. Since both $\text{ad}(E_G)^x$ and $f^{-1}(x)\times T_xT$ are holomorphically trivial, using \eqref{e5} it can be shown that the vector bundle ${\mathcal K}^x$ is also holomorphically trivial.
To prove that ${\mathcal K}^x$ is also holomorphically trivial, fix a point $z_0\, \in\, f^{-1}(x)$, and fix a subspace \begin{equation}\label{e6} V_{z_0}\, \subset\, \text{ad}(E_G)^x_{z_0} \end{equation} that projects isomorphically to the fiber of $f^{-1}(x)\times T_xT$ over the point $z_0$. Since $\text{ad}(E_G)^x$ is holomorphically trivial, there is a unique holomorphically trivial subbundle $$ V\, \subset\, \text{ad}(E_G)^x $$ whose fiber over $z_0$ coincides with the subspace $V_{z_0}$ in \eqref{e6}. Consider the homomorphism $$ V\, \longrightarrow\, f^{-1}(x)\times T_xT $$ obtained by restricting the projection in \eqref{e5}. Since this homomorphism is an isomorphism over $z_0$, and both $V$ and $f^{-1}(x)\times T_xT$ are holomorphically trivial, we conclude that this homomorphism is an isomorphism over $f^{-1}(x)$. Therefore, $V$ gives a holomorphic splitting of the short exact sequence in \eqref{e5}. Consequently, the vector bundle $\text{ad}(E_G)^x$ decomposes as \begin{equation}\label{de} \text{ad}(E_G)^x\,=\, {\mathcal K}^x\oplus V\, . \end{equation} Since $\text{ad}(E_G)^x$ is trivial, from a theorem of Atiyah on uniqueness of decomposition, \cite[p. 315, Theorem 2]{At1}, it follows that the vector bundle ${\mathcal K}^x$ is trivial; decompose all the three vector bundles in \eqref{de} as direct sums of indecomposable vector bundles, and apply Atiyah's result. {}From \eqref{e4} we get a short exact sequence of holomorphic vector bundles \begin{equation}\label{e7} 0\,\longrightarrow\, \text{ad}(E_H)\vert_{f^{-1}(x)} \, \,\longrightarrow\,{\mathcal K}^x \,\longrightarrow\, T(f^{-1}(x)) \,\longrightarrow\, 0\, , \end{equation} where $T(f^{-1}(x))$ is the holomorphic tangent bundle of $f^{-1}(x)$. Since ${\mathcal K}^x$ is trivial, from \eqref{e7} it follows that the tangent bundle $T(f^{-1}(x))$ is generated by its global sections. This immediately implies that $f^{-1}(x)$ is a homogeneous manifold. 
Since $f^{-1}(x)$ is a Fano homogeneous manifold, we conclude that there is a semisimple linear algebraic group ${\mathcal G}'$ over $\mathbb C$, and a parabolic subgroup $P'\, \subset\, {\mathcal G}'$, such that $f^{-1}(x)\,=\, {\mathcal G}'/P'$. Since a quotient space of the type ${\mathcal G}'/P'$ is rigid \cite[p. 131, Corollary]{Ak}, it follows that any two fibers of $f$ are holomorphically isomorphic. Let \begin{equation}\label{e8} {\mathcal G}\, :=\, \text{Aut}^0(f^{-1}(x)) \end{equation} be the group of all holomorphic automorphisms of $f^{-1}(x)$. It is known that ${\mathcal G}$ is a connected semisimple complex linear algebraic group \cite[p. 131, Theorem 2]{Ak}. Since $f^{-1}(x)$ is isomorphic to ${\mathcal G}'/P'$, it follows that ${\mathcal G}$ is a semisimple linear algebraic group over $\mathbb C$ of adjoint type (this means that the center of $\mathcal G$ is trivial). As before, let \begin{equation}\label{e8a} z_0\, \in\, f^{-1}(x) \end{equation} be a fixed point. Let \begin{equation}\label{e9} {\mathcal P}\, \subset\, {\mathcal G} \end{equation} be the subgroup that fixes the point $z_0$. Note that $\mathcal P$ is a parabolic subgroup of $\mathcal G$, and the quotient ${\mathcal G}/{\mathcal P}$ is identified with $f^{-1}(x)$. Consider the trivial holomorphic fiber bundle $$ T\times f^{-1}(x)\, \longrightarrow\, T $$ with fiber $f^{-1}(x)$. Let ${\mathcal E}\, \longrightarrow\, T$ be the holomorphic fiber bundle given by the sheaf of holomorphic isomorphisms from $T\times f^{-1}(x)$ to $M$, where $M$ is the fiber bundle in \eqref{e2}; recall that all the fibers of $f$ are holomorphically isomorphic. It is straightforward to check that $\mathcal E$ is a holomorphic principal ${\mathcal G}$--bundle, where $\mathcal G$ is the group defined in \eqref{e8}. Let \begin{equation}\label{e10} \varphi\,:\, {\mathcal E}_{\mathcal G}\, :=\, {\mathcal E}\,\longrightarrow\, T \end{equation} be this holomorphic principal ${\mathcal G}$--bundle.
The fiber of ${\mathcal E}_{\mathcal G}$ over any point $y\,\in\, T$ is the space of all holomorphic isomorphisms from $f^{-1}(x)$ to $f^{-1}(y)$. So there is a natural projection \begin{equation}\label{np} {\mathcal E}_{\mathcal G}\, \longrightarrow\, M \end{equation} that sends any $\xi\, \in\, \varphi^{-1}(y)$ to the image of the point $z_0$ in \eqref{e8a} by the map $$ \xi\,:\, f^{-1}(x) \,\longrightarrow\, f^{-1}(y)\, . $$ This projection identifies the fiber bundle $$ {\mathcal E}_{\mathcal G}/{\mathcal P}\,\longrightarrow\, T $$ with the fiber bundle $M\, \longrightarrow\, T$, where $\mathcal P$ is the subgroup in \eqref{e9}. This completes the proof of the theorem. \end{proof} \section{Principal bundles over a torus} Let $G_0$ be a reductive linear algebraic group defined over $\mathbb C$. Fix a maximal compact subgroup $ K_0\, \subset\, G_0\, . $ Let $Y$ be a complex manifold and $E_{G_0}\, \longrightarrow\, Y$ a holomorphic principal $G_0$--bundle over $Y$. A \textit{unitary flat connection} on $E_{G_0}$ is a flat holomorphic connection $\nabla^0$ on $E_{G_0}$ which has the following property: there is a $C^\infty$ reduction of structure group $ E_{K_0}\,\subset\, E_{G_0} $ of $E_{G_0}$ to the subgroup $K_0$ such that $\nabla^0$ is induced by a connection on $E_{K_0}$ (equivalently, the connection $\nabla^0$ preserves $E_{K_0}$). Note that $E_{G_0}$ admits a unitary flat connection if and only if $E_{G_0}$ is given by a homomorphism $\pi_1(Y)\, \longrightarrow\, K_0$. Let $P\, \subset\, G_0$ be a parabolic subgroup. Let $R_u(P)\, \subset\, P$ be the unipotent radical. The quotient group $L(P)\, :=\, P/R_u(P)$, which is called the Levi quotient of $P$, is reductive (see \cite[p. 158, \S~11.22]{Bo}).
Given a holomorphic principal $P$--bundle $E_{P}\, \longrightarrow\, Y$, let $$ E_{L(P)}\, :=\, E_{P}\times^P L(P)\, \longrightarrow\, Y $$ be the principal $L(P)$--bundle obtained by extending the structure group of $E_{P}$ using the quotient map $P\, \longrightarrow\, L(P)$. Note that $E_{L(P)}$ is identified with the quotient $E_{P}/R_u(P)$. By a \textit{unitary flat connection} on $E_{P}$ we will mean a unitary flat connection on the principal $L(P)$--bundle $E_{L(P)}$ (recall that $L(P)$ is reductive). A vector bundle $E\,\longrightarrow \, Y$ is called \textit{numerically flat} if both $E$ and its dual $E^*$ are numerically effective \cite[p. 311, Definition 1.17]{DPS}. \begin{proposition}\label{prop2} Let $E_{G_0}$ be a holomorphic principal $G_0$--bundle over a compact connected K\"ahler manifold $Y$. Then the following four statements are equivalent: \begin{enumerate} \item There is a proper parabolic subgroup $P\, \subset\, G_0$ and a strictly anti--dominant character $\chi$ of $P$ such that the associated line bundle $$ E_{G_0}(\chi)\, :=\, E_{G_0}\times^{P} {\mathbb C} \,\longrightarrow \, E_{G_0}/P $$ is numerically effective. \item The adjoint vector bundle ${\rm ad}(E_{G_0})$ is numerically flat. \item The principal $G_0$--bundle $E_{G_0}$ is pseudostable, and $c_2({\rm ad}(E_{G_0}))\,=\, 0$ (see \cite[p. 26, Definition 2.3]{BG} for the definition of pseudostability). \item There is a parabolic subgroup $P_0\, \subset\, G_0$ and a holomorphic reduction of structure group $ E_{P_0}\, \subset\, E_{G_0} $ of $E_{G_0}$ such that $E_{P_0}$ admits a unitary flat connection. \end{enumerate} \end{proposition} \begin{proof} This proposition follows from \cite[p. 154, Theorem 1.1]{BS} and \cite[Theorem 1.2]{BB}. \end{proof} \begin{lemma}\label{lem2} Let $T_0$ be a complex torus, and let $E_{G_0}\,\longrightarrow\, T_0$ be a holomorphic principal $G_0$--bundle. Let $P\, \subset\, G_0$ be a parabolic subgroup.
If the four equivalent statements in Proposition \ref{prop2} hold, then the holomorphic tangent bundle of $E_{G_0}/P$ is numerically effective. \end{lemma} \begin{proof} Assume that the four equivalent statements in Proposition \ref{prop2} hold. Let $\delta\, :\, E_{G_0}/P\, \longrightarrow\, T_0$ be the natural projection. Let $$ T_\delta\, :=\, {\rm kernel}(d\delta)\, \subset\, T(E_{G_0}/P) $$ be the relative tangent bundle for the projection $\delta$. The vector bundle $T_\delta\,\longrightarrow\, E_{G_0}/P$ is a quotient of the adjoint vector bundle ${\rm ad}(E_{G_0})$. Since ${\rm ad}(E_{G_0})$ is numerically flat (second statement in Proposition \ref{prop2}), and hence numerically effective, it follows that $T_\delta$ is numerically effective \cite[p. 308, Proposition 1.15(i)]{DPS}. Consider the short exact sequence of vector bundles on $E_{G_0}/P$ $$ 0\,\longrightarrow\, T_\delta \,\longrightarrow\, T(E_{G_0}/P) \,\stackrel{d\delta}{\longrightarrow}\, \delta^*TT_0 \,\longrightarrow\, 0\, . $$ Since $\delta^*TT_0$ and $T_\delta$ are numerically effective ($TT_0$ is trivial), it follows that $T(E_{G_0}/P)$ is numerically effective \cite[p. 308, Proposition 1.15(ii)]{DPS}. This completes the proof of the lemma. \end{proof} As before, $X$ is a compact connected K\"ahler manifold such that $TX$ is numerically effective, and $(E'_H\, ,\theta')$ is a holomorphic Cartan geometry on $X$ of type $G/H$. Also, $\gamma$ and $f$ are the maps constructed in \eqref{e1} and \eqref{e2} respectively. Let \begin{equation}\label{g1} K^{-1}_f\, \longrightarrow\, M \end{equation} be the relative anti--canonical line bundle for the projection $f$. Let $\mathcal G$ be the group in \eqref{e8}, and let ${\mathcal E}_{\mathcal G}\, \longrightarrow\, T$ be the principal $\mathcal G$--bundle constructed in \eqref{e10}. Let $ \text{ad}({\mathcal E}_{\mathcal G})\,\longrightarrow\, T $ be the adjoint vector bundle.
\begin{proposition}\label{lem1} Let $X$ be a compact connected K\"ahler manifold such that $TX$ is numerically effective, and let $(E'_H\, ,\theta')$ be a holomorphic Cartan geometry on $X$ of type $G/H$. Then the relative anti--canonical line bundle $K^{-1}_f$ in \eqref{g1} is numerically effective. Also, the following three statements hold: \begin{enumerate} \item The adjoint vector bundle ${\rm ad}({\mathcal E}_{\mathcal G})$ is numerically flat. \item The principal $\mathcal G$--bundle ${\mathcal E}_{\mathcal G}$ is pseudostable, and $c_2({\rm ad}({\mathcal E}_{\mathcal G}))\,=\, 0$. \item There is a parabolic subgroup ${\mathcal P}\, \subset\, {\mathcal G}$ and a holomorphic reduction of structure group $ {\mathcal E}_{\mathcal P}\, \subset\, {\mathcal E}_{\mathcal G} $ of ${\mathcal E}_{\mathcal G}$ such that ${\mathcal E}_{\mathcal P}$ admits a unitary flat connection. \end{enumerate} \end{proposition} \begin{proof} Let $\gamma\,:\, M\,\longrightarrow\, X$ be the covering in \eqref{e1}, and let $ f\, :\, M\, \longrightarrow\, T $ be the projection in \eqref{e2}. There is a semisimple complex linear algebraic group $\mathcal G$, a parabolic subgroup $P\, \subset\, \mathcal G$, and a holomorphic principal $\mathcal G$--bundle ${\mathcal E}_{\mathcal G}\, \longrightarrow\, T$ such that the fiber bundle ${\mathcal E}_{\mathcal G}/P\, \longrightarrow\, T$ is holomorphically isomorphic to the one given by $f$ (see Theorem \ref{thm1}). Since the canonical line bundle $K_T\, \longrightarrow\, T$ is trivial, the line bundle $K^{-1}_f$ is isomorphic to $K^{-1}_M$. The anti--canonical line bundle $K^{-1}_M$ is numerically effective because $TM$ is numerically effective. Hence $K^{-1}_f$ is numerically effective. Recall that ${\mathcal E}_{\mathcal G}/{\mathcal P}\,=\, M$ using the projection in \eqref{np}. The line bundle $K^{-1}_f$ corresponds to a strictly anti--dominant character of $\mathcal P$ because $K^{-1}_f$ is relatively ample.
Hence the first of the four statements in Proposition \ref{prop2} holds. Now Proposition \ref{prop2} completes the proof of the proposition. \end{proof} \begin{proposition}\label{lem3} Let $X$ and $(E'_H\, ,\theta')$ be as in Lemma \ref{lem1}. The principal $\mathcal G$--bundle ${\mathcal E}_{\mathcal G}$ constructed in Theorem \ref{thm1} admits a flat holomorphic connection. \end{proposition} \begin{proof} We know that principal $\mathcal G$--bundle ${\mathcal E}_{\mathcal G}$ is pseudostable, and $c_2({\rm ad}({\mathcal E}_{\mathcal G}))\,=\, 0$ (see the second statement in Proposition \ref{lem1}). Hence the proposition follows from \cite[p. 20, Theorem 1.1]{BG}. \end{proof}
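For orientation, the conclusion of Proposition \ref{lem3} can be unwound in classical terms; the following remark is an illustration added here, not part of the argument.

```latex
% Illustration: flat principal bundles on a complex torus via monodromy.
A flat holomorphic connection on ${\mathcal E}_{\mathcal G}\,\longrightarrow\, T$
is determined by its monodromy representation
\[
\rho\,:\, \pi_1(T)\,\cong\, {\mathbb Z}^{2d}\, \longrightarrow\, {\mathcal G}\, ,
\qquad d\,=\,\dim_{\mathbb C} T\, ,
\]
equivalently by a $2d$--tuple of commuting elements of $\mathcal G$; the bundle
is recovered as ${\mathcal E}_{\mathcal G}\,=\,
\widetilde{T}\times_{\rho}{\mathcal G}$, where $\widetilde{T}\,\cong\,
{\mathbb C}^d$ is the universal cover of $T$.
```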
\begin{document} \pagestyle{headings} \title{The Noncommutative Geometry of Graph $C^*$-Algebras I: The Index Theorem}\footnote{\noindent This research was supported by the Australian Research Council and a University of Newcastle Project Grant} \author{David Pask} \email{david.pask@newcastle.edu.au} \author{Adam Rennie} \email{adam.rennie@newcastle.edu.au} \address{School of Mathematical and Physical Sciences\\ University of Newcastle, Callaghan\\ NSW Australia, 2308} \begin{abstract} We investigate conditions on a graph $C^*$-algebra for the existence of a faithful semifinite trace. Using such a trace and the natural gauge action of the circle on the graph algebra, we construct a smooth $(1,\infty)$-summable semifinite spectral triple. The local index theorem allows us to compute the pairing with $K$-theory. This produces invariants in the $K$-theory of the fixed point algebra, and these are invariants for a finer structure than the isomorphism class of $C^*(E)$. \vspace{3mm} \noindent {\bf Keywords:} Graph $C^*$-algebra, spectral triple, index theorem, semifinite von Neumann algebra, trace, $K$-theory, $KK$-theory. \vspace{3mm} \noindent {\bf MSC (2000)} primary: 46L80, 58B34; secondary 46L51, 46L08 \end{abstract} \maketitle \section{Introduction} \label{intro} The aim of this paper, and the sequel \cite{PRen}, is to investigate the noncommutative geometry of graph $C^*$-algebras. In particular we construct finitely summable spectral triples to which we can apply the local index theorem. The motivation for this is the need for new examples in noncommutative geometry. Graph $C^*$-algebras allow us to treat a large family of algebras in a uniform manner. Graph $C^*$-algebras have been widely studied, see \cite{BPRS,kpr,KPRR,H,PR,RSz,T} and the references therein. The freedom to use both graphical and analytical tools makes them particularly tractable.
In addition, there are many natural generalisations of this family to which our methods will apply, such as Cuntz-Krieger algebras, Cuntz-Pimsner algebras, Exel-Laca algebras, $k$-graph algebras and so on; for more information on these classes of algebras see the above references and \cite{R}. We expect these classes to yield similar examples. One of the key features of this work is that the natural construction of a spectral triple $(\A,\HH,\D)$ for a graph $C^*$-algebra is almost never a spectral triple in the original sense, \cite[Chapter VI]{C}. That is, the key requirement that for all $a\in \A$ the operator $a(1+\D^2)^{-1/2}$ be a compact operator on the Hilbert space $\HH$ is almost never true. However, if we broaden our point of view to consider semifinite spectral triples, where we require $a(1+\D^2)^{-1/2}$ to be in the ideal of compact operators in a semifinite von Neumann algebra, we obtain many $(1,\infty)$-summable examples. The only connected $(1,\infty)$-summable example arising from our construction which satisfies the original definition of spectral triples is the Dirac triple for the circle. {\bf The way we arrive at the correct notion of compactness is to regard the fixed point subalgebra $F$ for the $S^1$ gauge action on a graph algebra as the scalars.} This provides a unifying point of view that should help the reader understand the motivation for the various constructions, and interpret the results. For instance the $C^*$-bimodule we employ is a $C^*$-module over $F$, the range of the ($C^*$-) index pairing lies in $K_0(F)$, the `differential' operator $\D$ is linear over $F$ and it is the `size' of $F$ that forces us to use a general semifinite trace. The single $(1,\infty)$-summable example where the operator trace arises as the natural trace is the circle, and in this case $F=\C$. The algebras which arise from our construction, despite naturally falling into the semifinite picture of spectral triples, are all type I algebras, \cite{DHS}.
Thus even when dealing with type I algebras there is a natural and important role for general semifinite traces. Many of our examples arise from nonunital algebras. Fortunately, graph $C^*$-algebras (and their smooth subalgebras) are quasi-local in the sense of \cite{GGISV}, and many of the results for smooth local algebras presented in \cite{R1,R2} are valid for smooth quasi-local algebras. Here `local' refers to the possibility of using a notion of `compact support' to deal with analytical problems. After some background material, we begin in Section \ref{triplesI} by constructing an odd Kasparov module $(X,V)$ for $C^*(E)$-$F$, where $F$ is the fixed point algebra. This part of the construction applies to any locally finite directed graph with no sources. The class $(X,V)$ can be paired with $K_1(C^*(E))$ to obtain an index class in $K_0(F)$. This pairing is described in the Appendix, and it is given in terms of the index of Toeplitz operators on the underlying $C^*$-module. We conjecture that this pairing is the Kasparov product. When our graph $C^*$-algebra has a faithful (semifinite, lower-semicontinuous) gauge invariant trace $\tau$, we can define a canonical faithful (semifinite, lower semicontinuous) trace $\tilde\tau$ on the endomorphism algebra of the $C^*$-$F$-module $X$. Using $\tilde\tau$, in Section \ref{triplesII} we construct a semifinite spectral triple $(\A,\HH,\D)$ for a smooth subalgebra $\A\subset C^*(E)$. The numerical index pairing of $(\A,\HH,\D)$ with $K_1(C^*(E))$ can be computed using the semifinite local index theorem, \cite{CPRS2}, and we prove that $$ \la K_1(C^*(E)),(\A,\HH,\D)\ra=\tilde\tau_*\la K_1(C^*(E)),(X,V)\ra,$$ where $\la K_1(C^*(E)),(X,V)\ra\subset K_0(F)$ denotes the $K_0(F)$-valued index and $\tilde\tau_*$ is the map induced on $K$-theory by $\tilde\tau$. We show by an example that this pairing is an invariant of a finer structure than the isomorphism class of $C^*(E)$. 
To ensure that readers without a background in graph $C^*$-algebras or in spectral triples can access the results in this paper, we have tried to make it self-contained. The organisation of the paper is as follows. Section \ref{background} describes graph $C^*$-algebras and semifinite spectral triples, as well as quasilocal algebras and the local index theorem. Section \ref{traces} investigates which graph $C^*$-algebras have a faithful positive trace, and we provide some necessary and some sufficient conditions. In Section \ref{triplesI} we construct a $C^*$-module for any locally finite graph $C^*$-algebra. Using the generator of the gauge action on this $C^*$-module, we obtain a Kasparov module whenever the graph has no sources, and so a $KK$-class. In Section \ref{triplesII}, we restrict to those graph $C^*$-algebras with a faithful gauge invariant trace, and construct a spectral triple from our Kasparov module. Section \ref{index} describes our results pertaining to the index theorem. In the sequel to this paper, \cite{PRen}, we identify a large subclass of our graph $C^*$-algebras with faithful trace which satisfy a natural semifinite and nonunital generalisation of Connes' axioms for noncommutative manifolds. These examples are all one dimensional. {\bf Acknowledgements} We would like to thank Iain Raeburn and Alan Carey for many useful comments and support. We also thank the referee for many useful comments that have improved the work. In addition, we thank Nigel Higson for showing us a proof that the pairing in the Appendix does indeed represent the Kasparov product. \section{Graph $C^*$-Algebras and Semifinite Spectral Triples}\label{background} \vspace{-7pt} \subsection{The $C^*$-algebras of Graphs}\label{graphalg} \vspace{-7pt} For a more detailed introduction to graph $C^*$-algebras we refer the reader to \cite{BPRS,kpr} and the references therein.
A directed graph $E=(E^0,E^1,r,s)$ consists of countable sets $E^0$ of vertices and $E^1$ of edges, and maps $r,s:E^1\to E^0$ identifying the range and source of each edge. {\bf We will always assume that the graph is row-finite}, which means that each vertex emits at most finitely many edges. Later we will also assume that the graph is \emph{locally finite}, which means it is row-finite and each vertex receives at most finitely many edges. We write $E^n$ for the set of paths $\mu=\mu_1\mu_2\cdots\mu_n$ of length $|\mu|:=n$; that is, sequences of edges $\mu_i$ such that $r(\mu_i)=s(\mu_{i+1})$ for $1\leq i<n$. The maps $r,s$ extend to $E^*:=\bigcup_{n\ge 0} E^n$ in an obvious way. A \emph{loop} in $E$ is a path $L \in E^*$ with $s ( L ) = r ( L )$; we say that a loop $L$ has an exit if there is a vertex $v = s ( L_i )$, for some $i$, which emits more than one edge. If $V \subseteq E^0$ then we write $V \ge w$ if there is a path $\mu \in E^*$ with $s ( \mu ) \in V$ and $r ( \mu ) = w$ (we also sometimes say that $w$ is downstream from $V$). A \emph{sink} is a vertex $v \in E^0$ with $s^{-1} (v) = \emptyset$; a \emph{source} is a vertex $w \in E^0$ with $r^{-1} (w) = \emptyset$. A \emph{Cuntz-Krieger $E$-family} in a $C^*$-algebra $B$ consists of mutually orthogonal projections $\{p_v:v\in E^0\}$ and partial isometries $\{S_e:e\in E^1\}$ satisfying the \emph{Cuntz-Krieger relations} \begin{equation*} S_e^* S_e=p_{r(e)} \mbox{ for $e\in E^1$} \ \mbox{ and }\ p_v=\sum_{\{ e : s(e)=v\}} S_e S_e^* \mbox{ whenever $v$ is not a sink.} \end{equation*} It is proved in \cite[Theorem 1.2]{kpr} that there is a universal $C^*$-algebra $C^*(E)$ generated by a non-zero Cuntz-Krieger $E$-family $\{S_e,p_v\}$. A product $S_\mu:=S_{\mu_1}S_{\mu_2}\dots S_{\mu_n}$ is non-zero precisely when $\mu=\mu_1\mu_2\cdots\mu_n$ is a path in $E^n$.
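Two standard examples may help fix the relations; they are classical and are not needed later.

```latex
% Standard examples of graph $C^*$-algebras (illustrative).
If $E$ has a single vertex $v$ and $n\geq 2$ edges $e_1,\dots,e_n$
(necessarily loops at $v$), the Cuntz-Krieger relations become
\[
S_{e_i}^* S_{e_i}=p_v \quad\mbox{and}\quad
p_v=\sum_{i=1}^{n}S_{e_i}S_{e_i}^*\, ,
\]
so the $S_{e_i}$ are isometries with mutually orthogonal ranges summing to
the identity $p_v$, and $C^*(E)$ is the Cuntz algebra $\mathcal{O}_n$. If $E$
has a single vertex and a single loop $e$, the relations force $S_e$ to be
unitary, and $C^*(E)\cong C(S^1)$.
```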
Since the Cuntz-Krieger relations imply that the projections $S_eS_e^*$ are also mutually orthogonal, we have $S_e^*S_f=0$ unless $e=f$, and words in $\{S_e,S_f^*\}$ collapse to products of the form $S_\mu S_\nu^*$ for $\mu,\nu\in E^*$ satisfying $r(\mu)=r(\nu)$ (cf.\ \cite[Lemma 1.1]{kpr}). Indeed, because the family $\{S_\mu S_\nu^*\}$ is closed under multiplication and involution, we have \begin{equation} C^*(E)=\clsp\{S_\mu S_\nu^*:\mu,\nu\in E^*\mbox{ and }r(\mu)=r(\nu)\}.\label{spanningset} \end{equation} The algebraic relations and the density of $\mbox{span}\{S_\mu S_\nu^*\}$ in $C^*(E)$ play a critical role throughout the paper. We adopt the conventions that vertices are paths of length 0, that $S_v:=p_v$ for $v\in E^0$, and that all paths $\mu,\nu$ appearing in (\ref{spanningset}) are non-empty; we recover $S_\mu$, for example, by taking $\nu=r(\mu)$, so that $S_\mu S_\nu^*=S_\mu p_{r(\mu)}=S_\mu$. If $z\in S^1$, then the family $\{zS_e,p_v\}$ is another Cuntz-Krieger $E$-family which generates $C^*(E)$, and the universal property gives a homomorphism $\gamma_z:C^*(E)\to C^*(E)$ such that $\gamma_z(S_e)=zS_e$ and $\gamma_z(p_v)=p_v$. The homomorphism $\gamma_{\overline z}$ is an inverse for $\gamma_z$, so $\gamma_z\in\Aut C^*(E)$, and a routine $\epsilon/3$ argument using (\ref{spanningset}) shows that $\gamma$ is a strongly continuous action of $S^1$ on $C^*(E)$. It is called the \emph{gauge action}. Because $S^1$ is compact, averaging over $\gamma$ with respect to normalised Haar measure gives an expectation $\Phi$ of $C^*(E)$ onto the fixed-point algebra $C^*(E)^\gamma$: \[ \Phi(a):=\frac{1}{2\pi}\int_{S^1} \gamma_z(a)\,d\theta\ \mbox{ for }\ a\in C^*(E),\ \ z=e^{i\theta}. \] The map $\Phi$ is positive, has norm $1$, and is faithful in the sense that $\Phi(a^*a)=0$ implies $a=0$. From Equation (\ref{spanningset}), it is easy to see that a graph $C^*$-algebra is unital if and only if the underlying graph is finite. 
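Returning to the gauge action, the expectation $\Phi$ can be computed explicitly on the spanning elements of (\ref{spanningset}); the following standard calculation is recorded for convenience.

```latex
% The expectation $\Phi$ on spanning elements.
Since $\gamma_z(S_\mu S_\nu^*)=z^{|\mu|-|\nu|}S_\mu S_\nu^*$ for
$z=e^{i\theta}$, we have
\[
\Phi(S_\mu S_\nu^*)
=\Big(\frac{1}{2\pi}\int_0^{2\pi}e^{i(|\mu|-|\nu|)\theta}\,d\theta\Big)
\,S_\mu S_\nu^*
=\begin{cases} S_\mu S_\nu^* & \mbox{if } |\mu|=|\nu|\\
0 & \mbox{otherwise,}\end{cases}
\]
so the fixed-point algebra is
$C^*(E)^\gamma=\clsp\{S_\mu S_\nu^*:\mu,\nu\in E^*,\
|\mu|=|\nu|,\ r(\mu)=r(\nu)\}$.
```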
When we consider infinite graphs, formulas which involve sums of projections may contain infinite sums. To interpret these, we use strict convergence in the multiplier algebra of $C^*(E)$: \begin{lemma}\label{strict} Let $E$ be a row-finite graph, let $A$ be a $C^*$-algebra generated by a Cuntz-Krieger $E$-family $\{T_e,q_v\}$, and let $\{p_n\}$ be a sequence of projections in $A$. If $p_nT_\mu T_\nu^*$ converges for every $\mu,\nu\in E^*$, then $\{p_n\}$ converges strictly to a projection $p\in M(A)$. \end{lemma} \begin{proof} Since we can approximate any $a\in A=\pi_{T,q}(C^*(E))$ by a linear combination of $T_\mu T_\nu^*$, an $\epsilon/3$-argument shows that $\{p_na\}$ is Cauchy for every $a\in A$. We define $p:A\to A$ by $p(a):=\lim_{n\to\infty}p_na$. Since \[ b^*p(a)=\lim_{n\to\infty}b^*p_na=\lim_{n\to\infty}(p_nb)^*a=p(b)^*a, \] the map $p$ is an adjointable operator on the Hilbert $C^*$-module $A_A$, and hence defines (left multiplication by) a multiplier $p$ of $A$ \cite[Theorem 2.47]{RW}. Taking adjoints shows that $ap_n\to ap$ for all $a$, so $p_n\to p$ strictly. It is easy to check that $p^2=p=p^*$. \end{proof} \vspace{-10pt} \subsection{Semifinite Spectral Triples} \vspace{-10pt} We begin with some semifinite versions of standard definitions and results. Let $\tau$ be a fixed faithful, normal, semifinite trace on the von Neumann algebra ${\mathcal N}$. Let ${\mathcal K}_{\mathcal N }$ be the $\tau$-compact operators in ${\mathcal N}$ (that is the norm closed ideal generated by the projections $E\in\mathcal N$ with $\tau(E)<\infty$). 
\begin{defn} A semifinite spectral triple $(\A,\HH,\D)$ is given by a Hilbert space $\HH$, a $*$-algebra $\A\subset \cn$ where $\cn$ is a semifinite von Neumann algebra acting on $\HH$, and a densely defined unbounded self-adjoint operator $\D$ affiliated to $\cn$ such that 1) $[\D,a]$ is densely defined and extends to a bounded operator for all $a\in\A$ 2) $a(\lambda-\D)^{-1}\in\K_\cn$ for all $\lambda\not\in{\R}\ \mbox{and all}\ a\in\A.$ 3) The triple is said to be even if there is $\Gamma\in\cn$ such that $\Gamma^*=\Gamma$, $\Gamma^2=1$, $a\Gamma=\Gamma a$ for all $a\in\A$ and $\D\Gamma+\Gamma\D=0$. Otherwise it is odd. \end{defn} \begin{defn}\label{qck} A semifinite spectral triple $(\A,\HH,\D)$ is $QC^k$ for $k\geq 1$ ($Q$ for quantum) if for all $a\in\A$ the operators $a$ and $[\D,a]$ are in the domain of $\delta^k$, where $\delta(T)=[\dd,T]$ is the partial derivation on $\cn$ defined by $\dd$. We say that $(\A,\HH,\D)$ is $QC^\infty$ if it is $QC^k$ for all $k\geq 1$. \end{defn} {\bf Note}. The notation is meant to be analogous to the classical case, but we introduce the $Q$ so that there is no confusion between quantum differentiability of $a\in\A$ and classical differentiability of functions. \noindent{\bf Remarks concerning derivations and commutators}. By partial derivation we mean that $\delta$ is defined on some subalgebra of $\cn$ which need not be (weakly) dense in $\cn$. More precisely, $\mbox{dom}\ \delta=\{T\in\cn:\delta(T)\mbox{ is bounded}\}$. We also note that if $T\in{\mathcal N}$, one can show that $[\dd,T]$ is bounded if and only if $[(1+\D^2)^{1/2},T]$ is bounded, by using the functional calculus to show that $\dd-(1+\D^2)^{1/2}$ extends to a bounded operator in $\cn$. In fact, writing $\dd_1=(1+\D^2)^{1/2}$ and $\delta_1(T)=[\dd_1,T]$ we have \ben \mbox{dom}\ \delta^n=\mbox{dom}\ \delta_1^n\ \ \ \ \mbox{for all}\ n.\een We also observe that if $T\in\cn$ and $[\D,T]$ is bounded, then $[\D,T]\in\cn$. 
Similar comments apply to $[\dd,T]$, $[(1+\D^2)^{1/2},T]$. The proofs can be found in \cite{CPRS2}. The $QC^\infty$ condition places some restrictions on the algebras we consider. Recall that a topological algebra is Fr\'{e}chet if it is locally convex, metrizable and complete, and that a subalgebra of a $C^*$-algebra is a pre-$C^*$-algebra if it is stable under the holomorphic functional calculus. For nonunital algebras, we consider only functions $f$ with $f(0)=0$. \begin{defn} A $*$-algebra $\A$ is smooth if it is Fr\'{e}chet and $*$-isomorphic to a proper dense subalgebra $i(\A)$ of a $C^*$-algebra $A$ which is a pre-$C^*$-algebra. \end{defn} Asking for $i(\A)$ to be a {\it proper} dense subalgebra of $A$ immediately implies that the Fr\'{e}chet topology of $\A$ is finer than the $C^*$-topology of $A$. We will write $A$ for the norm closure $\overline{\A}$ when this is unambiguous. If $\A$ is smooth in $A$ then $M_n(\A)$ is smooth in $M_n(A)$, \cite{GVF,LBS}, so $K_*(\A)\cong K_*(A)$, the isomorphism being induced by the inclusion map $i$. A smooth algebra has a sensible spectral theory which agrees with that defined using the $C^*$-closure, and the group of invertibles is open. The point of contact between smooth algebras and $QC^\infty$ spectral triples is the following Lemma, proved in \cite{R1}. \begin{lemma}\label{smo} If $(\A,\HH,\D)$ is a $QC^\infty$ spectral triple, then $(\A_\delta,\HH,\D)$ is also a $QC^\infty$ spectral triple, where $\A_\delta$ is the completion of $\A$ in the locally convex topology determined by the seminorms \ben q_{n,i}(a)=\n\delta^nd^i(a)\n,\ \ n\geq 0,\ i=0,1,\een where $d(a)=[\D,a]$. Moreover, $\A_\delta$ is a smooth algebra. \end{lemma} We call the topology on $\A$ determined by the seminorms $q_{n,i}$ of Lemma \ref{smo} the $\delta$-topology. Whilst smoothness does not depend on whether $\A$ is unital or not, many analytical problems arise because of the lack of a unit.
As in \cite{GGISV,R1,R2}, we make two definitions to address these issues. \begin{defn} An algebra $\A$ has local units if for every finite subset of elements $\{a_i\}_{i=1}^n\subset\A$, there exists $\phi\in\A$ such that for each $i$ \ben \phi a_i= a_i\phi=a_i.\een \end{defn} \begin{defn} Let $\A$ be a Fr\'{e}chet algebra and $\A_c\subseteq\A$ be a dense subalgebra with local units. Then we call $\A$ a quasi-local algebra (when $\A_c$ is understood). If $\A_c$ is a dense ideal with local units, we call $\A_c\subset\A$ local. \end{defn} Quasi-local algebras have an approximate unit $\{\phi_n\}_{n\geq 1}\subset\A_c$ such that for all $n$, $\phi_{n+1}\phi_n=\phi_n$, \cite{R1}; we call this a local approximate unit. {\bf Example} For a graph $C^*$-algebra $A=C^*(E)$, Equation (\ref{spanningset}) shows that $$ A_c=\mbox{span}\{S_\mu S_\nu^*:\mu,\nu\in E^*\ \mbox{and}\ r(\mu)=r(\nu)\}$$ is a dense subalgebra. It has local units because $$ p_{v}S_\mu S_\nu^*=\left\{\begin{array}{lr} S_\mu S_\nu^* & v=s(\mu)\\ 0 & \mbox{otherwise}\end{array}\right..$$ Similar comments apply to right multiplication by $p_{s(\nu)}$. By summing the source and range projections (without repetitions) of all $S_{\mu_i}S_{\nu_i}^*$ appearing in a finite sum $$ a=\sum_ic_{\mu_i,\nu_i}S_{\mu_i}S_{\nu_i}^*$$ we obtain a local unit for $a\in A_c$. By repeating this process for any finite collection of such $a\in A_c$ we see that $A_c$ has local units. We also require that when we have a spectral triple the operator $\D$ is compatible with the quasi-local structure of the algebra, in the following sense. \begin{defn} If $(\A,\HH,\D)$ is a spectral triple, then we define $\Omega^*_\D(\A)$ to be the algebra generated by $\A$ and $[\D,\A]$.
\end{defn} \begin{defn}\label{lst} A local spectral triple $(\A,\HH,\D)$ is a spectral triple with $\A$ quasi-local such that there exists an approximate unit $\{\phi_n\}\subset\A_c$ for $\A$ satisfying \ben \Omega^*_\D(\A_c)=\bigcup_n\Omega^*_\D(\A)_n,\ \ {\rm where}\een \ben \Omega^*_\D(\A)_n=\{\omega\in\Omega^*_\D(\A):\phi_n\omega=\omega\phi_n=\omega\}.\een \end{defn} {\bf Remark} A local spectral triple has a local approximate unit $\{\phi_n\}_{n\geq 1}\subset\A_c$ such that $\phi_{n+1}\phi_n=\phi_n\phi_{n+1}=\phi_n$ and $\phi_{n+1}[\D,\phi_n]=[\D,\phi_n]\phi_{n+1}=[\D,\phi_n]$, see \cite{R1,R2}. We require this property to prove the summability results we need. \vspace{-5pt} \subsection{Summability and the Local Index Theorem} In the following, let $\mathcal N$ be a semifinite von Neumann algebra with faithful normal trace $\tau$. Recall from \cite{FK} that if $S\in\mathcal N$, the \emph{$t$-th generalized singular value} of $S$ for each real $t>0$ is given by $$\mu_t(S)=\inf\{||SE||\ : \ E \mbox{ is a projection in } {\mathcal N} \mbox { with } \tau(1-E)\leq t\}.$$ The ideal $\LL^1({\mathcal N})$ consists of those operators $T\in {\mathcal N}$ such that $\n T\n_1:=\tau( |T|)<\infty$ where $|T|=\sqrt{T^*T}$. In the Type I setting this is the usual trace class ideal. We will simply write $\LL^1$ for this ideal in order to simplify the notation, and denote the norm on $\LL^1$ by $\n\cdot\n_1$. An alternative definition in terms of singular values is that $T\in\LL^1$ if $\|T\|_1:=\int_0^\infty \mu_t(T) dt <\infty.$ Note that in the case where ${\mathcal N}\neq{\mathcal B}({\mathcal H})$, $\LL^1$ is not complete in this norm, but it is complete in the norm $\n\cdot\n_1 + \n\cdot\n_\infty$ (where $\n\cdot\n_\infty$ is the uniform norm).
Another important ideal for us is the domain of the Dixmier trace: $${\mathcal L}^{(1,\infty)}({\mathcal N})= \left\{T\in{\mathcal N}\ : \Vert T\Vert_{_{{\mathcal L}^{(1,\infty)}}} := \sup_{t> 0} \frac{1}{\log(1+t)}\int_0^t\mu_s(T)ds<\infty\right\}.$$ We will suppress the $({\mathcal N})$ in our notation for these ideals, as $\cn$ will always be clear from context. The reader should note that ${\mathcal L}^{(1,\infty)}$ is often taken to mean an ideal in the algebra $\widetilde{\mathcal N}$ of $\tau$-measurable operators affiliated to ${\mathcal N}$, \cite{FK}. Our notation is however consistent with that of \cite{C} in the special case ${\mathcal N}={\mathcal B}({\mathcal H})$. With this convention the ideal of $\tau$-compact operators, ${\mathcal K}({\mathcal N})$, consists of those $T\in{\mathcal N}$ (as opposed to $\widetilde{\mathcal N}$) such that \ben \mu_\infty(T):=\lim _{t\to \infty}\mu_t(T) = 0.\een \begin{defn}\label{summable} A semifinite local spectral triple is $(1,\infty)$-summable if \ben a(\D-\lambda)^{-1}\in\LL^{(1,\infty)}\ \ \ \mbox{for all}\ a\in\A_c,\ \ \lambda\in\C\setminus\R.\een Equivalently, $a(1+\D^2)^{-1/2}\in\LL^{(1,\infty)}$ for all $a\in \A_c$. \end{defn} {\bf Remark} If $\A$ is unital, $\ker\D$ is $\tau$-finite dimensional. Note that the summability requirements are only for $a\in\A_c$. We do not assume that elements of the algebra $\A$ are all integrable in the nonunital case. We need to briefly discuss the Dixmier trace, but fortunately we will usually be applying it in reasonably simple situations. For more information on semifinite Dixmier traces, see \cite{CPS2}. For $T\in\LL^{(1,\infty)}$, $T\geq 0$, the function \ben F_T:t\to\frac{1}{\log(1+t)}\int_0^t\mu_s(T)ds \een is bounded. 
For certain generalised limits $\omega\in L^\infty(\R_*^+)^*$, we obtain a positive functional on $\LL^{(1,\infty)}$ by setting $$ \tau_\omega(T)=\omega(F_T).$$ This is the Dixmier trace associated to the semifinite normal trace $\tau$, denoted $\tau_\omega$, and we extend it to all of $\LL^{(1,\infty)}$ by linearity, where of course it is a trace. The Dixmier trace $\tau_\omega$ is defined on the ideal $\LL^{(1,\infty)}$, and vanishes on the ideal of trace class operators. Whenever the function $F_T$ has a limit at infinity, all Dixmier traces return the value of the limit. We denote the common value of all Dixmier traces on measurable operators by $\bigintcross$. So if $T\in\LL^{(1,\infty)}$ is measurable, for any allowed functional $\omega\in L^\infty(\R_*^+)^*$ we have $$\tau_\omega(T)=\omega(F_T)=\bigintcross T.$$ {\bf Example} Let $\D=\frac{1}{i}\frac{d}{d\theta}$ act on $L^2(S^1)$. Then it is well known that the spectrum of $\D$ consists of eigenvalues $\{n\in\Z\}$, each with multiplicity one. So, using the standard operator trace, the function $F_{(1+\D^2)^{-1/2}}$ is $$ N\to\frac{1}{\log (2N+1)}\sum_{n=-N}^N(1+n^2)^{-1/2}$$ which is bounded. Indeed, $\sum_{n=-N}^{N}(1+n^2)^{-1/2}=1+2\sum_{n=1}^{N}(1+n^2)^{-1/2}=2\log N+O(1)$, while $\log(2N+1)=\log N+O(1)$, so $F_{(1+\D^2)^{-1/2}}(N)\to 2$ as $N\to\infty$. So $(1+\D^2)^{-1/2}\in\LL^{(1,\infty)}$ and for any Dixmier trace $\mbox{Trace}_\omega$ $$\mbox{Trace}_\omega((1+\D^2)^{-1/2})=\bigintcross(1+\D^2)^{-1/2}=2.$$ In \cite{R1,R2} we proved numerous properties of local algebras. The introduction of quasi-local algebras in \cite{GGISV} led us to review the validity of many of these results for quasi-local algebras. Most of the summability results of \cite{R2} are valid in the quasi-local setting. In addition, the summability results of \cite{R2} are also valid for general semifinite spectral triples since they rely only on properties of the ideals $\LL^{(p,\infty)}$, $p\geq 1$, \cite{C,CPS2}, and the trace property. We quote the version of the summability results from \cite{R2} that we require below.
\begin{prop}[\cite{R2}]\label{wellbehaved} Let $(\A,\HH,\D)$ be a $QC^\infty$, local $(1,\infty)$-summable semifinite spectral triple relative to $(\cn,\tau)$. Let $T\in\cn$ satisfy $T\phi=\phi T=T$ for some $\phi\in\A_c$. Then \ben T(1+\D^2)^{-1/2}\in\LL^{(1,\infty)}.\een For $Re(s)>1$, $T(1+\D^2)^{-s/2}$ is trace class. If the limit \be \lim_{s\to 1/2^+}(s-1/2)\tau(T(1+\D^2)^{-s})\label{mumbo}\ee exists, then it is equal to \ben \frac{1}{2}\bigintcross T(1+\D^2)^{-1/2}.\een In addition, for any Dixmier trace $\tau_\omega$, the function \ben a\mapsto \tau_\omega(a(1+\D^2)^{-1/2})\een defines a trace on $\A_c\subset\A$. \end{prop} In \cite{CPRS2}, the noncommutative geometry local index theorem of \cite{CM} was extended to semifinite spectral triples. In the simplest terms, the local index theorem provides a formula for the pairing of a finitely summable spectral triple $(\A,\HH,\D)$ with the $K$-theory of $\overline{\A}$. The precise statement that we require is \begin{thm}[\cite{CPRS2}] Let $(\A,\HH,\D)$ be an odd $QC^\infty$ $(1,\infty)$-summable local semifinite spectral triple, relative to $(\cn,\tau)$. Then for $u\in\A$ unitary the pairing of $[u]\in K_1(\overline{\A})$ with $(\A,\HH,\D)$ is given by $$ \la [u],(\A,\HH,\D)\ra={\rm res}_{s=0}\tau(u[\D,u^*](1+\D^2)^{-1/2-s}).$$ In particular, the residue on the right exists. \end{thm} For more information on this result see \cite{CPS2,CPRS2,CPRS3,CM}. \vspace{-12pt} \section{Graph $C^*$-Algebras with Semifinite Graph Traces}\label{traces} \vspace{-12pt} This section considers the existence of (unbounded) traces on graph algebras. We denote by $A^+$ the positive cone in a $C^*$-algebra $A$, and we use extended arithmetic on $[0,\infty]$ so that $0\times \infty=0$. 
From \cite{PhR} we take the basic definition: \begin{defn} A trace on a $C^*$-algebra $A$ is a map $\tau:A^+\to[0,\infty]$ satisfying 1) $\tau(a+b)=\tau(a)+\tau(b)$ for all $a,b\in A^+$; 2) $\tau(\lambda a)=\lambda\tau(a)$ for all $a\in A^+$ and $\lambda\geq 0$; 3) $\tau(a^*a)=\tau(aa^*)$ for all $a\in A$. We say: that $\tau$ is faithful if $\tau(a^*a)=0\Rightarrow a=0$; that $\tau$ is semifinite if $\{a\in A^+:\tau(a)<\infty\}$ is norm dense in $A^+$ (or that $\tau$ is densely defined); that $\tau$ is lower semicontinuous if whenever $a=\lim_{n\to\infty}a_n$ in norm in $A^+$ we have $\tau(a)\leq\liminf_{n\to\infty}\tau(a_n)$. \end{defn} We may extend a (semifinite) trace $\tau$ by linearity to a linear functional on (a dense subspace of) $A$. Observe that the domain of definition of a densely defined trace is a two-sided ideal $I_\tau\subset A$. \begin{lemma}\label{finiteonfinite} Let $E$ be a row-finite directed graph and let $\tau:C^*(E)\to\C$ be a semifinite trace. Then the dense subalgebra $$ A_c:={\rm span}\{S_\mu S_\nu^*:\mu,\nu\in E^*\}$$ is contained in the domain $I_\tau$ of $\tau$. \end{lemma} \begin{proof} Let $v\in E^0$ be a vertex, and let $p_v\in A_c$ be the corresponding projection. We claim that $p_v\in I_\tau$. Choose $a\in I_\tau$ positive, so $\tau(a)<\infty$, and with $\Vert p_v-a\Vert<1$. Since $p_v$ is a projection, we also have $\Vert p_v-p_vap_v\Vert<1$ and $p_vap_v\in I_\tau$, so we have $\tau(p_vap_v)<\infty$. The subalgebra $p_vC^*(E)p_v$ has unit $p_v$, and as $\Vert p_v-p_vap_v\Vert<1$, $p_vap_v$ is invertible in $p_vC^*(E)p_v$. Thus there is some $b\in p_v C^*(E)p_v$ such that $bp_vap_v=p_v$. Then, since the domain $I_\tau$ is a two-sided ideal, $p_v=bp_vap_v\in I_\tau$, and so $\tau(p_v)<\infty$. Now since $S_\mu S_\nu^*=p_{s(\mu)}S_\mu S_\nu^*$, it is easy to see that every element of $A_c$ has finite trace.
\end{proof} It is convenient to write $A=C^*(E)$ and $A_c=\mbox{span}\{S_\mu S_\nu^*:\mu,\nu\in E^*\}.$ \begin{lemma}\label{necessary} Let $E$ be a row-finite directed graph. \par\noindent {\bf (i)} If $C^*(E)$ has a faithful semifinite trace then no loop can have an exit. \par\noindent {\bf (ii)} If $C^* (E)$ has a gauge-invariant, semifinite, lower semicontinuous trace $\tau$ then $\tau \circ \Phi = \tau$ and $$ \tau(S_\mu S_\nu^*)=\delta_{\mu,\nu}\tau(p_{r(\mu)}). $$ \noindent In particular, $\tau$ is supported on $C^* ( \{ S_\mu S_\mu^* : \mu \in E^* \} )$. \end{lemma} \begin{proof} Suppose $E$ has a loop $L = e_1 \ldots e_n$ which has an exit. Let $v_i = s( e_i )$ for $i=1 , \ldots , n$ so that $ r ( e_n ) = v_1$. Without loss of generality suppose that $v_1$ emits an edge $f$ which is not part of $L$. If $w = r(f)$ then we have $$ \tau (p_{v_1} ) \ge \tau ( S_{e_1} S_{e_1}^* + S_f S_f^* ) = \tau ( S_{e_1}^* S_{e_1} ) + \tau ( S_f^* S_f ) = \tau ( p_{v_2} ) + \tau ( p_w ) . $$ \noindent Similarly we may show that $\tau ( p_{v_i} ) \ge \tau ( p_{v_{i+1}} )$ for $i = 1 , \ldots , n-1$ and so $\tau ( p_{v_1} ) \geq \tau( p_{v_1} ) + \tau ( p_w )$. Since $\tau(p_{v_1})<\infty$ by Lemma \ref{finiteonfinite}, we must have $\tau(p_w) =0$. Since $p_w$ is positive and nonzero, this implies that $\tau$ is not faithful. Now suppose the trace $\tau$ is gauge-invariant. Then $$ \tau ( S_\mu S_\nu^* ) = \tau ( \gamma_z ( S_\mu S_\nu^* ) ) = \tau ( z^{\vert \mu \vert - \vert \nu \vert} S_\mu S_\nu^* ) = z^{\vert \mu \vert - \vert \nu \vert} \tau ( S_\mu S_\nu^* ) $$ \noindent for all $z \in S^1$, and so $\tau ( S_\mu S_\nu^* )$ is zero unless $\vert \mu \vert = \vert \nu \vert$. Hence $\tau \circ \Phi = \tau$ on $A_c$.
Moreover, if $\vert \mu \vert = \vert \nu \vert$ then $$ \tau ( S_\mu S_\nu^* ) = \tau ( S_\nu^* S_\mu ) = \tau ( \delta_{\mu , \nu} p_{r ( \mu )} ) = \delta_{\mu , \nu} \tau ( p_{r ( \mu )} ) , $$ \noindent so the restriction of $\tau$ to $A_c$ is supported on $\mbox{span} \{ S_\mu S_\mu^* : \mu \in E^* \}$. To extend these conclusions to the $C^*$ completions, let $\{\phi_n\}\subset \Phi(A)$ be an approximate unit for $A$ consisting of an increasing sequence of projections. Then for each $n$, the restriction of $\tau$ to $A_n:=\phi_nA\phi_n$ is a finite trace, and so norm continuous. Observe also that $\phi_nA_c\phi_n$ is dense in $A_n$ and $\phi_nA_c\phi_n\subseteq A_c$. We claim that \begin{equation} \mbox{when restricted to}\ A_n,\ \tau\ \mbox{satisfies}\ \tau\circ\Phi=\tau.\label{nthoftheway}\end{equation} To see this we make two observations, namely that $$\Phi(A_n)=\Phi(\phi_nA\phi_n)=\phi_n\Phi(A)\phi_n\subseteq\phi_nA\phi_n=A_n$$ and that on $\phi_nA_c\phi_n\subseteq A_c$ we have $\tau\circ\Phi=\tau$. The norm continuity of $\tau$ on $A_n$ now completes the proof of the claim. Now let $a\in A^+$, and let $a_n=a^{1/2}\phi_na^{1/2}$ so that $a_n\leq a_{n+1} \leq\cdots\leq a$ and $\Vert a_n-a\Vert\to 0$. Then $$\tau(a)\geq \limsup_{n\to\infty}\tau(a_n)\geq\liminf_{n\to\infty}\tau(a_n)\geq\tau(a),$$ the first inequality coming from the positivity of $\tau$, and the last inequality from lower semicontinuity. Since $\tau$ is a trace and $\phi_n^2=\phi_n$ we have \begin{equation} \tau(a)=\lim_{n\to\infty}\tau(a_n)= \lim_{n\to\infty}\tau(\phi_na\phi_n).\label{first}\end{equation} Similarly, let $b_n=\Phi(a)^{1/2}\phi_n\Phi(a)^{1/2}$ so that $b_n\leq b_{n+1}\leq \cdots\leq \Phi(a)$ and $\Vert b_n-\Phi(a)\Vert\to 0$.
Then \begin{equation}\tau(\Phi(a))=\lim_{n\to\infty}\tau(b_n)= \lim_{n\to\infty}\tau(\phi_n\Phi(a)\phi_n)= \lim_{n\to\infty}\tau(\Phi(\phi_na\phi_n)).\label{second}\end{equation} However $\phi_na\phi_n\in A_n$ so by (\ref{nthoftheway}) we have $(\tau\circ\Phi)(\phi_na\phi_n)=\tau(\phi_na\phi_n)$. Then by Equations (\ref{first}) and (\ref{second}) we have $\tau(a)=(\tau\circ\Phi)(a)$ for all $a\in A^+$. By linearity this is true for all $a\in A$, so $\tau=\tau\circ\Phi$ on all of $A$. Finally, $$\phi_n\mbox{span}\{S_\mu S_\mu^*:\mu\in E^*\}\phi_n\subseteq \mbox{span}\{S_\mu S_\mu^*:\mu\in E^*\},$$ so by the arguments above $\tau$ is supported on $C^*(\{S_\mu S_\mu^*:\mu\in E^*\})$. \end{proof} Whilst the condition that no loop has an exit is necessary for the existence of a faithful semifinite trace, it is not sufficient. One of the advantages of graph $C^*$-algebras is the ability to use both graphical and analytical techniques. There is an analogue of the above discussion of traces in terms of the graph. \begin{defn}[cf.\ \cite{T}] If $E$ is a row-finite directed graph, then a graph trace on $E$ is a function $g:E^0\to{\R}^+$ such that for any $v\in E^0$ we have \begin{equation} \label{tracecond} g(v)=\sum_{s(e)=v}g(r(e)). \end{equation} \noindent If $g(v)\neq 0$ for all $v\in E^0$ we say that $g$ is faithful. \end{defn} {\bf Remark} One can show by induction that if $g$ is a graph trace on a directed graph with no sinks, then for all $n\geq 1$ \begin{equation} g(v)=\sum_{s(\mu)=v,\ |\mu|=n}g(r(\mu)). \label{nosinksum}\end{equation} For graphs with sinks, we must also count paths of length at most $n$ which end on sinks. To deal with this more general case we write \begin{equation} g(v)=\sum_{s(\mu)=v,\ |\mu|\preceq n}g(r(\mu))\geq\sum_{s(\mu)=v,\ |\mu|=n}g(r(\mu)),\label{sinksum}\end{equation} where $|\mu|\preceq n$ means that $\mu$ is of length $n$ or is of length less than $n$ and terminates on a sink.
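These summation identities are easy to sanity-check on a concrete example. The following Python sketch is our illustration (the four-vertex graph and the chosen values of $g$ are assumptions, not taken from the text): it verifies the defining condition (\ref{tracecond}) at every non-sink, and the identity (\ref{sinksum}) by enumerating the paths with $|\mu|\preceq n$.

```python
# A hypothetical four-vertex graph: u -> v, u -> w, v -> s, w -> s, with s a sink.
edges = {'u': ['v', 'w'], 'v': ['s'], 'w': ['s'], 's': []}

# A graph trace: the value at the sink s is free, the others are then forced
# by g(v) = sum over edges e with s(e) = v of g(r(e)).
g = {'s': 1.0, 'v': 1.0, 'w': 1.0, 'u': 2.0}

for v, targets in edges.items():
    if targets:  # the summation condition is imposed at non-sinks
        assert abs(g[v] - sum(g[r] for r in targets)) < 1e-12

def endpoints(v, n):
    """r(mu) for every path mu with s(mu) = v and |mu| "preceq" n, i.e.
    |mu| = n, or |mu| < n with r(mu) a sink."""
    if n == 0 or not edges[v]:
        return [v]
    return [r for w in edges[v] for r in endpoints(w, n - 1)]

# The identity g(v) = sum over such paths of g(r(mu)), for several n.
for v in edges:
    for n in range(5):
        assert abs(g[v] - sum(g[r] for r in endpoints(v, n))) < 1e-12
print("graph trace identities verified")
```

The recursion mirrors the induction in the remark: a path either has length $n$ or stops early on a sink, and in both cases contributes its endpoint once.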
As with traces on $C^*(E)$, it is easy to see that a necessary condition for $E$ to have a faithful graph trace is that no loop has an exit. \begin{lemma}\label{infpaths} Suppose that $E$ is a row-finite directed graph and there exist vertices $v,w\in E^0$ with an infinite number of paths from $v$ to $w$. Then there is no faithful graph trace on $E^0$. \end{lemma} \begin{proof} First suppose that there are an infinite number of paths from $v$ to $w$ of the same length, $k$ say. Then for any $N\in\N$ and any graph trace $g:E^0\to\R^+$ $$g(v)=\sum_{s(\mu)=v,\ |\mu|\preceq k}g(r(\mu))\geq Ng(w),$$ since at least $N$ of the paths counted on the right run from $v$ to $w$. So to assign a finite value to $g(v)$ we require $g(w)=0$. Thus we may suppose that the paths from $v$ to $w$ have infinitely many different lengths and, passing to a subfamily, that they all have different lengths. Choose the shortest path $\mu_1$ of length $k_1$, say. Then, with $E^m(v)=\{\mu\in E^*:s(\mu)=v,\ |\mu|\preceq m\}$, we have \be g(v)=\sum_{\mu\in E^{k_1}(v)}g(r(\mu))=g(w)+ \sum_{\mu\in E^{k_1}(v),\ \mu\neq\mu_1}g(r(\mu)).\label{stepone}\ee Observe that at least one of the paths, call it $\mu_2$, in the rightmost sum can be extended until it reaches $w$. Choose the shortest such extension from $r(\mu_2)$ to $w$, and denote the length by $k_2$.
So \begin{align} &\sum_{\mu\in E^{k_1}(v),\ \mu\neq\mu_1}g(r(\mu))=g(r(\mu_2))+\sum_{\mu\in E^{k_1}(v),\ \mu\neq\mu_1,\mu_2}g(r(\mu))\nno &=\sum_{\mu\in E^{k_2}(r(\mu_2))}g(r(\mu))+\sum_{\mu\in E^{k_1}(v),\ \mu\neq\mu_1,\mu_2}g(r(\mu))\nno &=g(w)+\sum_{\mu\in E^{k_2}(r(\mu_2)),\ \mu\neq\mu_2}g(r(\mu))+\sum_{\mu\in E^{k_1}(v),\ \mu\neq\mu_1,\mu_2}g(r(\mu)).\end{align} So by equation (\ref{stepone}) we have $$g(v)=2g(w)+\mbox{sum}_1+\mbox{sum}_2.$$ The two sums on the right contain at least one path which can be extended to $w$, and so choosing the shortest such extension, $$g(v)=3g(w)+\mbox{sum}_1+\mbox{sum}_2+\mbox{sum}_3.$$ It is now clear how to proceed, and we deduce as before that for all $N\in\N$, $g(v)\geq Ng(w)$. \end{proof} \begin{defn}\label{ends} Let $E$ be a row-finite directed graph. An {\em end} will mean a sink, a loop without an exit, or an infinite path with no exits. \end{defn} {\bf Remark} We shall identify an end with the vertices which comprise it. Once on an end (of any sort) the graph trace remains constant. \begin{cor} Suppose that $E$ is a row-finite directed graph and there exists a vertex $v\in E^0$ with an infinite number of paths from $v$ to an end. Then there is no faithful graph trace on $E^0$. \end{cor} \begin{proof} Because the value of the graph trace is constant on an end $\Omega$, say $g_\Omega$, we have, as in Lemma \ref{infpaths}, $$ g(v)\geq Ng_\Omega$$ for all $N\in\N$. Hence there can be no faithful graph trace. \end{proof} Thus if a row-finite directed graph $E$ is to have a faithful graph trace, it is necessary that no vertex connects infinitely often to any other vertex or to an end, and that no loop has an exit. \begin{prop}\label{Eendsinends} Let $E$ be a row-finite directed graph and suppose there exists $N\in\N$ such that for all vertices $v$ and $w$ and for all ends $\Omega$, 1) the number of paths from $v$ to $w$ and 2) the number of paths from $v$ to $\Omega$ are each at most $N$.
If in addition the only infinite paths in $E$ are eventually in ends, then $E$ has a faithful graph trace. \end{prop} \begin{proof} First observe that our hypotheses on $E$ rule out loops with exit, since we can define infinite paths using such loops, but they are not ends. Label the set of ends by $i=1,2,...$. Assign a positive number $g_i$ to each end, and define $g(v)=g_i$ for all $v$ in the $i$-th end. If there are infinitely many ends, choose the $g_i$ so that $\sum_ig_i<\infty$. For each end, choose a vertex $v_i$ on the end. For $v\in E^0$ not on an end, define \begin{equation}g(v)=\sum_i\sum_{s(\mu)=v,\ r(\mu)=v_i}g_i. \label{backwarddefn}\end{equation} Then the conditions on the graph ensure this sum is finite. Using Equation (\ref{sinksum}), one can check that $g:E^0\to\R^+$ is a faithful graph trace. \end{proof} There are many directed graphs with much more complicated structure than those described in Proposition \ref{Eendsinends} which possess faithful graph traces. The difficulty in defining a graph trace is going `forward', and this is what prevents us giving a concise sufficiency condition. Extending a graph trace `backward' from a given set of values can always be handled as in Equation (\ref{backwarddefn}). \begin{prop}\label{trace=graphtrace} Let $E$ be a row-finite directed graph. Then there is a one-to-one correspondence between faithful graph traces on $E$ and faithful, semifinite, lower semicontinuous, gauge invariant traces on $C^*(E)$. \end{prop} \begin{proof} Given a faithful graph trace $g$ on $E$ we define $\tau_g$ on $A_c$ by \begin{equation} \label{taudef} \tau_g( S_\mu S_\nu^* ) :=\delta_{\mu , \nu} g (r( \mu ) ). 
\end{equation} One checks that $\tau_g$ is a gauge invariant trace on $A_c$, and is faithful because for $a = \sum_{i=1}^n c_{\mu_i , \nu_i } S_{\mu_i} S_{\nu_i}^* \in A_c$ we have $a^* a \ge \sum_{i=1}^n \vert c_{\mu_i , \nu_i} \vert^2 S_{\nu_i} S_{\nu_i}^*$ and then \begin{align} \langle a , a \rangle_g &:= \tau_g ( a^* a ) \ge \tau_g ( \sum_{i=1}^n \vert c_{\mu_i , \nu_i} \vert^2 S_{\nu_i} S_{\nu_i}^* ) \nno &= \sum_{i=1}^n \vert c_{\mu_i , \nu_i} \vert^2 \tau_g ( S_{\nu_i} S_{\nu_i}^* ) = \sum_{i=1}^n \vert c_{\mu_i , \nu_i } \vert^2 g ( r ( \nu_i ) ) > 0 . \end{align} Then $\la a,b\ra_g=\tau_g(b^*a)$ defines a positive definite inner product on $A_c$ which makes it a Hilbert algebra (that the left regular representation of $A_c$ is nondegenerate follows from $A_c^2=A_c$). Let $\HH_g$ be the Hilbert space completion of $A_c$. Then defining $\pi:A_c\to\B(\HH_g)$ by $\pi(a)b=ab$ for $a,b\in A_c$ yields a faithful $*$-representation. Thus $\{\pi(S_e),\pi(p_v):e\in E^1,\ v\in E^0\}$ is a Cuntz-Krieger $E$ family in $\B(\HH_g)$. The gauge invariance of $\tau_g$ shows that for each $z\in S^1$ the map $\gamma_z:A_c\to A_c$ extends to a unitary $U_z:\HH_g\to\HH_g$. Then for $a,b\in A_c$ we compute $$ (U_z\pi(a)U_{\bar{z}})(b)=U_za\gamma_{\bar{z}}(b)= \gamma_z(a\gamma_{\bar{z}}(b))=\gamma_z(a)b=\pi(\gamma_z(a))(b).$$ Hence $U_z\pi(a)U_{\bar{z}}=\pi(\gamma_z(a))$ and defining $\al_z(\pi(a)):=U_z\pi(a)U_{\bar{z}}$ gives a point norm continuous action of $S^1$ on $\pi(A_c)$ implementing the gauge action. Since for all $v\in E^0$, $\pi(p_v)p_v=p_v$, $\pi(p_v)\neq 0$. Thus we can invoke the gauge invariant uniqueness theorem, \cite[Theorem 2.1]{BPRS}, and the map $\pi:A_c\to\B(\HH_g)$ extends by continuity to $\pi:C^*(E)\to\B(\HH_g)$ and $\pi(C^*(E))=\overline{\pi(A_c)}^{\Vert\cdot\Vert}$ in $\B(\HH_g)$. In particular the representation is faithful on $C^*(E)$. Now, $\pi(C^*(E))\subseteq\pi(A_c)''=\overline{\pi(A_c)}^{u.w.}$, where $u.w.$ denotes the ultra-weak closure. 
The general theory of Hilbert algebras, see for example \cite[Thm 1, Sec 2, Chap 6, Part I]{Dix}, now shows that the trace $\tau_g$ extends to an ultra weakly lower semicontinuous, faithful, (ultra weakly) semifinite trace $\bar{\tau}_g$ on $\pi(A_c)''$. Trivially, the restriction of this extension to $\pi(C^*(E))$ is faithful. It is semifinite in the norm sense on $C^*(E)$ since $\pi(A_c)$ is norm dense in $\pi(C^*(E))$ and $\tau_g$ is finite on $\pi(A_c)$. To see that this last statement is true, let $a\in A_c$, choose any local unit $\phi\in A_c$ for $a$ and then $$ \infty>\tau_g(a)=\tau_g(\phi a)=\la a,\phi\ra_g=:\bar{\tau}_g(\phi a) =\bar{\tau}_g(a).$$ It is norm lower semicontinuous on $\pi(C^*(E))$ because if $\pi(a)\in C^*(E)^+$ and $\pi(a_n)\in C^*(E)^+$ with $\pi(a_n)\to\pi(a)$ in norm, then $\pi(a_n)\to\pi(a)$ ultra weakly and so $\bar{\tau}_g(\pi(a))\leq\lim\inf\bar{\tau}_g(\pi(a_n))$. We have seen that the gauge action of $S^1$ on $C^*(E)$ is implemented in the representation $\pi$ by the unitary representation $S^1\ni z\to U_z\in\B(\HH_g)$. We wish to show that $\bar{\tau}_g$ is invariant under this action, but since the $U_z$ do not lie in $\pi(A_c)''$, we can not use the tracial property directly. Now $T\in\pi(A_c)''$ is in the domain of definition of $\bar{\tau}_g$ if and only if $T=\pi(\xi)\pi(\eta)^*$ for left bounded elements $\xi,\eta\in\HH_g$. Then $\bar{\tau}_g(T)=\bar{\tau}_g(\pi(\xi)\pi(\eta)^*):=\la\xi,\eta\ra_g.$ Since $U_z(\xi)$ and $U_z(\eta)$ are also left bounded elements of $\HH_g$ we have \bean \bar{\tau}_g(U_zTU_{\bar{z}})&=& \bar{\tau}_g(U_z\pi(\xi)\pi(\eta)^*U_{\bar{z}}) =\bar{\tau}_g(U_z\pi(\xi)[U_z\pi(\eta)]^*)\nno &=&\bar{\tau}_g(\pi(\gamma_z(\xi))[\pi(\gamma_z(\eta))]^*) =\la U_z(\xi),U_z(\eta)\ra_g\nno&=&\la \xi,\eta\ra_g=\bar{\tau}_g(T).\eean That is, $\bar{\tau}_g(\al_z(T))=\bar{\tau}_g(T)$, and $\bar{\tau}_g$ is $\al_z$-invariant. 
Thus $a\mapsto \bar{\tau}_g(\pi(a))$ defines a faithful, semifinite, lower semicontinuous, gauge invariant trace on $C^*(E)$. Conversely, given a faithful, semifinite, lower semicontinuous and gauge invariant trace $\tau$ on $C^*(E)$, we know by Lemma \ref{finiteonfinite} that $\tau$ is finite on $A_c$ and so we define $g(v):=\tau(p_v)$. It is easy to check that this is a faithful graph trace. \end{proof} \vspace{-24pt} \section{Constructing a $C^*$- and Kasparov Module}\label{triplesI} \vspace{-8pt} There are several steps in the construction of a spectral triple. We begin in Subsection \ref{Cstarmodule} by constructing a $C^*$-module. We define an unbounded operator $\D$ on this $C^*$-module as the generator of the gauge action of $S^1$ on the graph algebra. We show in Subsection \ref{CstarDeeee} that $\D$ is a regular self-adjoint operator on the $C^*$-module. We use the phase of $\D$ to construct a Kasparov module. \subsection{Building a $C^*$-module}\label{Cstarmodule} \vspace{-7pt} The constructions of this subsection work for any locally finite graph. Let $A=C^*(E)$ where $E$ is any locally finite directed graph. Let $F=C^*(E)^\gamma$ be the fixed point subalgebra for the gauge action. Finally, let $A_c,F_c$ be the dense subalgebras of $A,F$ given by the (finite) linear span of the generators. We make $A$ a right inner product $F$-module. The right action of $F$ on $A$ is by right multiplication. The inner product is defined by $$ (x|y)_R:=\Phi(x^*y)\in F.$$ Here $\Phi$ is the canonical expectation. It is simple to check that $(\cdot|\cdot)_R$ satisfies the requirements of an $F$-valued inner product on $A$. The requirement $(x|x)_R=0\Rightarrow x=0$ follows from the faithfulness of $\Phi$.
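To see the expectation and the inner product in a concrete case, consider (our simplifying illustration, not a construction from the paper) the graph with one vertex and one loop, where $C^*(E)$ is generated by a single unitary $S$ with $\gamma_z(S)=zS$; elements of $A_c$ are then finite Laurent polynomials in $S$, and $\Phi$ keeps the degree-zero part. A Python sketch of $(x|y)_R=\Phi(x^*y)$:

```python
# Elements of A_c in this model: finite Laurent polynomials, stored as
# {degree: complex coefficient}.

def star(x):                       # the adjoint: S^k -> S^{-k}, conjugate coefficients
    return {-k: c.conjugate() for k, c in x.items()}

def mul(x, y):                     # polynomial multiplication (convolution of coefficients)
    out = {}
    for k, c in x.items():
        for l, d in y.items():
            out[k + l] = out.get(k + l, 0) + c * d
    return out

def Phi(x):                        # canonical expectation: keep the degree-zero part
    return x.get(0, 0)

def inner(x, y):                   # (x|y)_R = Phi(x^* y), valued in F (scalars here)
    return Phi(mul(star(x), y))

x = {0: 1.0, 1: 2.0, -3: 1j}       # x = 1 + 2S + i S^{-3}
# (x|x)_R is the sum of the squared moduli of the coefficients, so it is
# positive and vanishes only for x = 0: the faithfulness of Phi in this model.
print(inner(x, x))
```

Here $F$ reduces to the scalars, so the $F$-valued inner product is an ordinary one; for a general graph the same formula $\Phi(x^*y)$ lands in the fixed point algebra.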
\begin{defn}\label{Fmod} Define $X$ to be the $C^*$-$F$-module completion of $A$ for the $C^*$-module norm $$\Vert x\Vert_X^2:=\Vert(x|x)_R\Vert_A=\Vert(x|x)_R\Vert_F= \Vert \Phi(x^*x)\Vert_F.$$ Define $X_c$ to be the pre-$C^*$-$F_c$-module with linear space $A_c$ and the inner product $(\cdot|\cdot)_R$. \end{defn} {\bf Remark} Typically, the action of $F$ does not map $X_c$ to itself, so we may only consider $X_c$ as an $F_c$ module. This is a reflection of the fact that $F_c$ and $A_c$ are quasi-local, not local. The inclusion map $\iota:A\to X$ is continuous since $$\Vert a\Vert_X^2=\Vert\Phi(a^*a)\Vert_F\leq\Vert a^*a\Vert_A=\Vert a\Vert^2_A.$$ We can also define the gauge action $\gamma$ on $A\subset X$, and as \bean\Vert\gamma_z(a)\Vert^2_X&=&\Vert\Phi((\gamma_z(a))^*(\gamma_z(a)))\Vert_F =\Vert\Phi(\gamma_z(a^*)\gamma_z(a))\Vert_F\nno&=& \Vert\Phi(\gamma_z(a^*a))\Vert_F =\Vert\Phi(a^*a)\Vert_F=\Vert a\Vert^2_X,\eean for each $z\in S^1$, the action of $\gamma_z$ is isometric on $A\subset X$ and so extends to a unitary $U_z$ on $X$. This unitary is $F$ linear, adjointable, and we obtain a strongly continuous action of $S^1$ on $X$, which we still denote by $\gamma$. For each $k\in\Z$, the projection onto the $k$-th spectral subspace for the gauge action defines an operator $\Phi_k$ on $X$ by $$\Phi_k(x)= \frac{1}{2\pi}\int_{S^1}z^{-k}\gamma_z(x)d\theta,\ \ z=e^{i\theta},\ \ x\in X.$$ Observe that on generators we have $\Phi_k(S_\al S_\beta^*)=S_\al S_\beta^*$ when $|\al|-|\beta|=k$, and $\Phi_k(S_\al S_\beta^*)=0$ when $|\al|-|\beta|\neq k$. The range of $\Phi_k$ is \begin{equation} \mbox{Range}\ \Phi_k=\{x\in X:\gamma_z(x)=z^kx\ \ \mbox{for all}\ z\in S^1\}. \label{kthproj}\end{equation} These ranges give us a natural $\Z$-grading of $X$. {\bf Remark} If $E$ is a finite graph with no loops, then for $k$ sufficiently large there are no paths of length $k$ and so $\Phi_k=0$. This will obviously simplify many of the convergence issues below.
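The averaging formula for $\Phi_k$ can be tested in a toy model (our illustration, not from the paper): take the graph with one vertex and one loop, so that elements of $A_c$ are finite Laurent polynomials in the generating unitary, stored as dictionaries mapping degrees to coefficients, and $\gamma_z$ scales the degree-$d$ part by $z^d$. Discretising the integral over $M$ equally spaced points on $S^1$ recovers the $k$-th spectral component exactly once $M$ exceeds the spread of degrees:

```python
import cmath

def gauge(z, x):
    # The gauge action on a finite Laurent polynomial: degree d picks up z^d.
    return {d: (z ** d) * c for d, c in x.items()}

def Phi_k(k, x, M=64):
    # Discretisation of (1/2 pi) * integral of z^{-k} gamma_z(x) dtheta,
    # evaluated over the M-th roots of unity.
    out = {}
    for j in range(M):
        z = cmath.exp(2j * cmath.pi * j / M)
        for d, c in gauge(z, x).items():
            out[d] = out.get(d, 0) + (z ** -k) * c / M
    return {d: c for d, c in out.items() if abs(c) > 1e-9}

x = {2: 1.0, 0: 3.0, -1: 2.0}
print(Phi_k(2, x))   # only the degree-2 part survives
print(Phi_k(1, x))   # empty: x has no degree-1 component
```

The projection identities $\Phi_k\Phi_l=\delta_{k,l}\Phi_k$ can be checked the same way on any finite sum of monomials.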
\begin{lemma}\label{phiendo} The operators $\Phi_k$ are adjointable endomorphisms of the $F$-module $X$ such that $\Phi_k^*=\Phi_k=\Phi_k^2$ and $\Phi_k\Phi_l=\delta_{k,l}\Phi_k$. If $K\subset\Z$ then the sum $\sum_{k\in K}\Phi_k$ converges strictly to a projection in the endomorphism algebra. The sum $\sum_{k\in\Z}\Phi_k$ converges to the identity operator on $X$. \end{lemma} \begin{proof} It is clear from the definition that each $\Phi_k$ defines an $F$-linear map on $X$. First, we show that $\Phi_k$ is bounded: $$\Vert\Phi_k(x)\Vert_X\leq\frac{1}{2\pi} \int_{S^1}\Vert\gamma_z(x)\Vert_Xd\theta\leq\frac{1}{2\pi}\int_{S^1}\Vert x\Vert_Xd\theta=\Vert x\Vert_X.$$ So $\Vert \Phi_k\Vert\leq 1$. Since $\Phi_k S_\mu=S_\mu$ whenever $\mu$ is a path of length $k$, $\Vert\Phi_k\Vert=1$. On the subspace $X_c$ of finite linear combinations of generators, one can use Equation (\ref{kthproj}) to see that $\Phi_k\Phi_l=\delta_{k,l}\Phi_k$ since $$\Phi_k\Phi_lS_\al S_\beta^*=\Phi_k\delta_{|\al|-|\beta|,l}S_\al S_\beta^*=\delta_{|\al|-|\beta|,k}\delta_{|\al|-|\beta|,l}S_\al S_\beta^*.$$ For general $x\in X$, we approximate $x$ by a sequence $\{x_m\}\subset X_c$, and the continuity of the $\Phi_k$ then shows that the relation $\Phi_k\Phi_l=\delta_{k,l}\Phi_k$ holds on all of $X$. Again using the continuity of $\Phi_k$, the following computation allows us to show that for all $k$, $\Phi_k$ is adjointable with adjoint $\Phi_k$: \bean (\Phi_kS_\al S_\beta^*|S_\rho S_\s^*)_R&=& \Phi\left(\delta_{|\al|-|\beta|,k}S_\beta S_\al^*S_\rho S_\s^*\right)\nno &=&\delta_{|\al|-|\beta|,k}\delta_{|\beta|-|\al|+|\rho|-|\s|,0}S_\beta S_\al^*S_\rho S_\s^*\nno &=&\Phi\left(\delta_{|\rho|-|\s|,k}S_\beta S_\al^*S_\rho S_\s^*\right) =(S_\al S_\beta^*|\Phi_{k}S_\rho S_\s^*)_R.\eean To address the last two statements of the Lemma, we observe that the set $\{\Phi_k\}_{k\in\Z}$ is norm bounded in $End_F(X)$, so the strict topology on this set coincides with the $*$-strong topology, \cite[Lemma C.6]{RW}. 
First, if $K\subset\Z$ is a finite set, the sum $$\sum_{k\in K}\Phi_k$$ is finite, and defines a projection in $End_F(X)$ by the results above. So assume $K$ is infinite and let $\{K_i\}$ be an increasing sequence of finite subsets of $K$ with $K=\cup_iK_i$. For $x\in X$, let $$T_ix=\sum_{k\in K_i}\Phi_kx.$$ Choose a sequence $\{x_m\}\subset X_c$ with $x_m\to x$. Let $\epsilon>0$ and choose $m$ so that $\Vert x_m-x\Vert_X<\epsilon/2$. Since $x_m$ has finite support, for $i,j$ sufficiently large we have $T_ix_m-T_jx_m=0$, and so for sufficiently large $i,j$ \bean \Vert T_ix-T_jx\Vert_X&=&\Vert T_ix-T_ix_m+T_ix_m-T_jx_m+T_jx_m-T_jx\Vert_X\nno &\leq&\Vert T_i(x-x_m)\Vert_X+\Vert T_j(x-x_m)\Vert_X+\Vert T_ix_m-T_jx_m\Vert_X\nno &<&\epsilon.\eean This proves the strict convergence, since the $\Phi_k$ are all self-adjoint. To prove the final statement, let $x,\{x_m\}$ be as above, $\epsilon>0$, and choose $m$ so that $\Vert x-x_m\Vert_X<\epsilon/2$. Then \bean \Vert x-\sum_{k\in\Z}\Phi_kx\Vert_X&=&\Vert x-\sum\Phi_k x_m+\sum\Phi_kx_m-\sum\Phi_kx\Vert_X\nno &\leq&\Vert x-x_m\Vert_X+\Vert\sum\Phi_k(x-x_m)\Vert_X<\epsilon.\qed\eean \hideqed \end{proof} \begin{cor}\label{gradedsum} Let $x\in X$. Then with $x_k=\Phi_kx$ the sum $\sum_{k\in \Z}x_k$ converges in $X$ to $x$. \end{cor} \vspace{-7pt} \subsection{The Kasparov Module}\label{CstarDeeee} \vspace{-7pt} {\bf In this subsection we assume that $E$ is locally finite and furthermore has no sources. That is, every vertex receives at least one edge.} Since we have the gauge action defined on $X$, we may use the generator of this action to define an unbounded operator $\D$. We will not define or study $\D$ from the generator point of view, rather taking a more bare-hands approach. It is easy to check that $\D$ as defined below is the generator of the $S^1$ action. The theory of unbounded operators on $C^*$-modules that we require is all contained in Lance's book, \cite[Chapters 9,10]{L}.
We quote the following definitions (adapted to our situation). \begin{defn} Let $Y$ be a right $C^*$-$B$-module. A densely defined unbounded operator $\D:{\rm dom}\ \D\subset Y\to Y$ is a $B$-linear operator defined on a dense $B$-submodule ${\rm dom}\ \D\subset Y$. The operator $\D$ is closed if the graph $$ G(\D)=\{(x,\D x):x\in{\rm dom}\ \D\}$$ is a closed submodule of $Y\oplus Y$. \end{defn} If $\D:\mbox{dom}\ \D\subset Y\to Y$ is densely defined and unbounded, define a submodule $$\mbox{dom}\ \D^*:=\{y\in Y:\exists z\in Y\ \mbox{such that}\ \forall x\in\mbox{dom}\ \D, (\D x|y)_R=(x|z)_R\}.$$ Then for $y\in \mbox{dom}\ \D^*$ define $\D^*y=z$. Given $y\in\mbox{dom}\ \D^*$, the element $z$ is unique, so $\D^*:\mbox{dom}\D^*\to Y$, $\D^*y=z$ is well-defined, and moreover $\D^*$ is closed. \begin{defn} Let $Y$ be a right $C^*$-$B$-module. A densely defined unbounded operator $\D:{\rm dom}\ \D\subset Y\to Y$ is symmetric if for all $x,y\in{\rm dom}\ \D$ $$ (\D x|y)_R=(x|\D y)_R.$$ A symmetric operator $\D$ is self-adjoint if ${\rm dom}\ \D={\rm dom}\ \D^*$ (and so $\D$ is necessarily closed). A densely defined unbounded operator $\D$ is regular if $\D$ is closed, $\D^*$ is densely defined, and $(1+\D^*\D)$ has dense range. \end{defn} The extra requirement of regularity is necessary in the $C^*$-module context for the continuous functional calculus, and is not automatic, \cite[Chapter 9]{L}. With these definitions in hand, we return to our $C^*$-module $X$. \begin{prop}\label{CstarDee} Let $X$ be the right $C^*$-$F$-module of Definition \ref{Fmod}. Define $X_\D\subset X$ to be the linear space $$ X_\D= \{x=\sum_{k\in\Z}x_k\in X:\Vert\sum_{k\in\Z}k^2(x_k|x_k)_R\Vert<\infty\}.$$ For $x=\sum_{k\in\Z}x_k\in X_\D$ define $$ \D x=\sum_{k\in\Z}kx_k.$$ Then $\D:X_\D\to X$ is a self-adjoint regular operator on $X$.
\end{prop} {\bf Remark} Any $S_\al S_\beta^*\in A_c$ is in $X_\D$ and $$\D S_\al S_\beta^*=(|\al|-|\beta|)S_\al S_\beta^*.$$ \begin{proof} First we show that $X_\D$ is a submodule. If $x\in X_\D$ and $f\in F$, in the $C^*$-algebra $F$ we have \bean\sum_{k\in\Z}k^2(x_kf|x_kf)_R&=&\sum_{k\in\Z}k^2f^*(x_k|x_k)_Rf =f^*\sum_{k\in\Z}k^2(x_k|x_k)_Rf\nno &\leq& f^*f\Vert\sum_{k\in\Z}k^2 (x_k|x_k)_R\Vert.\eean So $$\Vert \sum_{k\in\Z}k^2(x_kf|x_kf)_R\Vert \leq\Vert f^*f\Vert\ \Vert\sum_{k\in\Z}k^2(x_k|x_k)_R\Vert<\infty.$$ Observe that if $x\in X$ is a finite sum of graded components, $$ x=\sum_{k=-N}^Mx_k,$$ then $x\in X_\D$. In particular if $P=\sum_{finite}\Phi_k$ is a finite sum of the projections $\Phi_k$, $Px\in X_\D$ for any $x\in X$. The following calculation shows that $\D$ is symmetric on its domain, so that the adjoint is densely defined. Let $x,y\in\mbox{dom}\D$ and use Corollary \ref{gradedsum} to write $x=\sum_kx_k$ and $y=\sum_ky_k$. Then \bean (\D x|y)_R&=&(\sum_kkx_k|\sum_my_m)_R=\Phi((\sum_kkx_k)^*(\sum_my_m)) =\Phi(\sum_{k,m}kx_k^*y_m)\nno&=&\sum_kkx_k^*y_k =\Phi(\sum_{k,m}x_m^*ky_k)=\Phi((\sum_mx_m)^*(\sum_kky_k))\nno &=&(x|\D y)_R.\eean Thus $\mbox{dom}\D\subseteq\mbox{dom}\D^*$, and so $\D^*$ is densely defined, and of course closed. Now choose any $x\in X$ and any $y\in \mbox{dom}\D^*$. Let $P_{N,M}=\sum_{k=-N}^M\Phi_k$, and recall that $P_{N,M}x\in\mbox{dom}\D$ for all $x\in X$. Then \bean (x|P_{N,M}\D^*y)_R=(P_{N,M}x|\D^*y)_R&=&(\D P_{N,M}x|y)_R\nno &=&(\sum_{k=-N}^Mkx_k|y)_R=(x|\sum_{k=-N}^Mky_k)_R.\eean Since this is true for all $x\in X$ we have $$P_{N,M}\D^*y=\sum_{k=-N}^Mky_k.$$ Letting $N,M\to\infty$, the limit on the left hand side exists by Corollary \ref{gradedsum}, and so the limit on the right exists, and so $y\in\mbox{dom}\D$. Hence $\D$ is self-adjoint. Finally, we need to show that $\D$ is regular. By \cite[Lemma 9.8]{L}, $\D$ is regular if and only if the operators $\D\pm iId_X$ are surjective. 
This is straightforward though, for if $x=\sum_kx_k$ we have $$ x=\sum_{k\in\Z} \frac{(k\pm i)}{(k\pm i)}x_k= (\D\pm iId_X)\sum_{k\in\Z}\frac{1}{(k\pm i)}x_k.$$ The convergence of $\sum_kx_k$ ensures the convergence of $\sum_k(k\pm i)^{-1}x_k$. \end{proof} There is a continuous functional calculus for self-adjoint regular operators, \cite[Theorem 10.9]{L}, and we use this to obtain spectral projections for $\D$ at the $C^*$-module level. Let $f_k\in C_c({\R})$ be $1$ in a small neighbourhood of $k\in{\Z}$ and zero on $(-\infty,k-1/2]\cup[k+1/2,\infty)$. Then it is clear that $$ \Phi_k=f_k(\D).$$ That is, the spectral projections of $\D$ are the same as the projections onto the spectral subspaces of the gauge action. The next Lemma is the first place where we need our graph to be locally finite and have no sources. \begin{lemma}\label{finrank} Assume that the directed graph $E$ is locally finite and has no sources. For all $a\in A$ and $k\in\Z$, $a\Phi_k\in End^0_F(X)$, the compact endomorphisms of the right $F$-module $X$. If $a\in A_c$ then $a\Phi_k$ is finite rank. \end{lemma} {\bf Remark} The proof actually shows that for $k>0$ $$\Phi_k=\sum_{|\rho|=k}\Theta^R_{S_\rho,S_\rho}$$ where the sum converges in the strict topology. \begin{proof} We will prove the Lemma by first showing that for each $v\in E^0$ and $k\geq 0$ $$p_v\Phi_k=\sum_{s(\rho)=v,\ |\rho|=k}\Theta^R_{S_\rho,S_\rho}.$$ This is a finite sum, by the row-finiteness of $E$. For $k<0$ the situation is more complicated, but a similar formula holds in that case also. First suppose that $k\geq 0$ and $a=p_v\in A_c$ is the projection corresponding to a vertex $v\in E^0$. For $\al$ with $|\al|\geq k$ write $\underline{\al}=\al_1\cdots\al_{k}$ and $\overline{\al}=\al_{k+1}\cdots\al_{|\al|}$. With this notation we compute the action of $p_v$ times the rank one endomorphism $\Theta^R_{S_\rho,S_\rho}$, $|\rho|=k$, on $S_\al S_\beta^*$.
We find \bean p_v\Theta^R_{S_\rho,S_\rho}S_\al S_\beta^*&=&p_vS_\rho(S_\rho|S_\al S_\beta^*)_R=\delta_{v,s(\rho)}p_vS_\rho\Phi(S_\rho^*S_\al S_\beta^*)\nno &=&\delta_{v,s(\rho)}p_vS_\rho\delta_{|\al|-|\beta|,k} \delta_{\rho,\underline{\al}} S_{\overline{\al}}S_\beta^*= \delta_{|\al|-|\beta|,k}\delta_{\rho,\underline{\al}}\delta_{v,s(\rho)}S_\al S_\beta^*.\eean Of course if $|\al|<|\rho|$ we have $$p_v\Theta^R_{S_\rho,S_\rho}S_\al S_\beta^*= p_vS_\rho\Phi(S_\rho^*S_\al S_\beta^*)=0.$$ This too is $\delta_{|\al|-|\beta|,k}p_vS_\al S_\beta^*$. Thus for any $\al$ we have $$ \sum_{|\rho|=k}p_v\Theta^R_{S_\rho,S_\rho}S_\al S_\beta^*= \sum_{|\rho|=k, s(\rho)=v}\delta_{v,s(\rho)}\delta_{|\al|-|\beta|,k} \delta_{\rho,\underline{\al}}p_vS_\al S_\beta^*=\delta_{v,s(\al)}\delta_{|\al|-|\beta|,k}S_\al S_\beta^*.$$ This is of course the action of $p_v\Phi_k$ on $S_\al S_\beta^*$, and if $v$ is a sink, $p_v\Phi_k=0$, as it must. Since $E$ is locally finite, the number of paths of length $k$ starting at $v$ is finite, and we have a finite sum. For general $a\in A_c$ we may write $$a=\sum_{i=1}^nc_{\mu_i,\nu_i}S_{\mu_i}S^*_{\nu_i}$$ for some paths $\mu_i,\nu_i$. Then $S_{\mu_i}S^*_{\nu_i}=S_{\mu_i}S^*_{\nu_i}p_{s(\nu_i)}$, and we may apply the above reasoning to each term in the sum defining $a$ to get a finite sum again. Thus $a\Phi_k$ is finite rank. Now we consider $k<0$. Given $v\in E^0$, let $|v|_k$ denote the number of paths $\rho$ of length $|k|$ ending at $v$, i.e. $r(\rho)=v$. Since we assume that $E$ is locally finite and has no sources, $\infty>|v|_k>0$ for each $v\in E^0$. 
We consider the action of the finite rank operator $$ \frac{1}{|v|_k}\sum_{|\rho|=|k|,r(\rho)=v}p_v\Theta^R_{S^*_\rho,S^*_\rho}.$$ For $S_\al S_\beta^*\in X$ we find \bean \frac{1}{|v|_k}\sum_{|\rho|=|k|,r(\rho)=v}p_v\Theta^R_{S^*_\rho,S^*_\rho}S_\al S_\beta^*&=&\frac{1}{|v|_k}\sum_{|\rho|=|k|,r(\rho)=v}p_vS_\rho^*\Phi(S_\rho S_\al S_\beta^*)\nno &=&\frac{1}{|v|_k}\sum_{|\rho|=|k|,r(\rho)=v} \delta_{|\al|-|\beta|,-|k|}p_vS_\rho^*S_\rho S_\al S_\beta^*\nno &=&\delta_{|\al|-|\beta|,-|k|}\delta_{v,s(\al)}p_v S_\al S_\beta^*=p_v\Phi_{k}S_\al S_\beta^*.\eean Thus $p_v\Phi_{-|k|}$ is a finite rank endomorphism, and by the argument above, we have $a\Phi_{-|k|}$ finite rank for all $a\in A_c$. To see that $a\Phi_k$ is compact for all $a\in A$, recall that every $a\in A$ is a norm limit of a sequence $\{a_i\}_{i\geq 0}\subset A_c$. Thus for any $k\in\Z$ $a\Phi_k=\lim_{i\to\infty}a_i\Phi_k$ and so is compact. \end{proof} \begin{lemma}\label{compactendo} Let $E$ be a locally finite directed graph with no sources. For all $a\in A$, $a(1+\D^2)^{-1/2}$ is a compact endomorphism of the $F$-module $X$. \end{lemma} \begin{proof} First let $a=p_v$ for $v\in E^0$. Then the sum $$ R_{v,N}:=p_v\sum_{k=-N}^N\Phi_k(1+k^2)^{-1/2}$$ is finite rank, by Lemma \ref{finrank}. We will show that the sequence $\{R_{v,N}\}_{N\geq 0}$ is convergent with respect to the operator norm $\Vert\cdot\Vert_{End}$ of endomorphisms of $X$. Indeed, assuming that $M>N$, \begin{align} \Vert R_{v,N}-R_{v,M}\Vert_{End}&=\Vert p_v\sum_{k=-M}^{-N-1}\Phi_k(1+k^2)^{-1/2}+p_v\sum_{k=N+1}^M\Phi_k(1+k^2)^{-1/2}\Vert_{End}\nno &\leq2(1+N^2)^{-1/2}\to 0,\end{align} since the ranges of the $p_v\Phi_k$ are orthogonal for different $k$. Thus, using the argument from Lemma \ref{finrank}, $a(1+\D^2)^{-1/2}\in End^0_F(X)$.
Letting $\{a_i\}$ be a Cauchy sequence from $A_c$, we have $$\Vert a_i(1+\D^2)^{-1/2}-a_j(1+\D^2)^{-1/2}\Vert_{End}\leq\Vert a_i-a_j\Vert_{End}=\Vert a_i-a_j\Vert_A\to 0,$$ since $\Vert(1+\D^2)^{-1/2}\Vert\leq 1$. Thus the sequence $a_i(1+\D^2)^{-1/2}$ is Cauchy in norm and we see that $a(1+\D^2)^{-1/2}$ is compact for all $a\in A$. \end{proof} \begin{prop}\label{Kasmodule} Assume that the directed graph $E$ is locally finite and has no sources. Let $V=\D(1+\D^2)^{-1/2}$. Then $(X,V)$ defines a class in $KK^1(A,F)$. \end{prop} \begin{proof} We will use the approach of \cite[Section 4]{K}. We need to show that various operators belong to $End^0_F(X)$. First, $V-V^*=0$, so $a(V-V^*)$ is compact for all $a\in A$. Also $a(1-V^2)=a(1+\D^2)^{-1}$ which is compact from Lemma \ref{compactendo} and the boundedness of $(1+\D^2)^{-1/2}$. Finally, we need to show that $[V,a]$ is compact for all $a\in A$. First we suppose that $a\in A_c$. Then \bean [V,a]&=&[\D,a](1+\D^2)^{-1/2}-\D(1+\D^2)^{-1/2}[(1+\D^2)^{1/2},a](1+\D^2)^{-1/2}\nno &=&b_1(1+\D^2)^{-1/2}+Vb_2(1+\D^2)^{-1/2},\eean where $b_1=[\D,a]\in A_c$ and $b_2=[(1+\D^2)^{1/2},a]$. Provided that $b_2(1+\D^2)^{-1/2}$ is a compact endomorphism, Lemma \ref{compactendo} will show that $[V,a]$ is compact for all $a\in A_c$. So consider the action of $[(1+\D^2)^{1/2},S_\mu S_\nu^*](1+\D^2)^{-1/2}$ on $x=\sum_{k\in\Z}x_k$. We find \bea &&\sum_{k\in\Z}[(1+\D^2)^{1/2},S_\mu S_\nu^*](1+\D^2)^{-1/2}x_k\nno &=& \sum_{k\in\Z} \left((1+(|\mu|-|\nu|+k)^2)^{1/2}-(1+k^2)^{1/2}\right)(1+k^2)^{-1/2}S_\mu S_\nu^*x_k\nno &=&\sum_{k\in\Z}f_{\mu,\nu}(k)S_\mu S_\nu^* \Phi_kx.\label{limit}\eea The function \ben f_{\mu,\nu}(k)=\left((1+(|\mu|-|\nu|+k)^2)^{1/2}-(1+k^2)^{1/2}\right) (1+k^2)^{-1/2}\een goes to $0$ as $k\to\pm\infty$, and as the $S_\mu S_\nu^*\Phi_k$ are finite rank with orthogonal ranges, the sum in (\ref{limit}) converges in the endomorphism norm, and so converges to a compact endomorphism. 
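The decay of $f_{\mu,\nu}$ is elementary: with $d=|\mu|-|\nu|$ one has $\sqrt{1+(d+k)^2}-\sqrt{1+k^2}\to\pm d$ as $k\to\pm\infty$, so the quotient by $\sqrt{1+k^2}$ is $O(1/|k|)$. A quick numerical sanity check of this claim (illustrative only, not part of the proof):

```python
import math

def f(d, k):
    """The function f_{mu,nu}(k), where d = |mu| - |nu|."""
    return (math.sqrt(1 + (d + k) ** 2) - math.sqrt(1 + k * k)) / math.sqrt(1 + k * k)

# |f(d, k)| behaves like |d| / |k| for large |k|, hence tends to 0 in both directions
```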
For $a\in A_c$ we write $a$ as a finite linear combination of generators $S_\mu S_\nu^*$, and apply the above reasoning to each term in the sum to find that $[(1+\D^2)^{1/2},a](1+\D^2)^{-1/2}$ is a compact endomorphism. Now let $a\in A$ be the norm limit of a Cauchy sequence $\{a_i\}_{i\geq 0}\subset A_c$. Then $$\Vert[V,a_i-a_j]\Vert_{End}\leq 2\Vert a_i-a_j\Vert_{End}\to 0,$$ so the sequence $[V,a_i]$ is also Cauchy in norm, and so the limit is compact. \end{proof} \section{The Gauge Spectral Triple of a Graph Algebra}\label{triplesII} In this section we will construct a semifinite spectral triple for those graph $C^*$-algebras which possess a faithful gauge invariant trace, $\tau$. Recall from Proposition \ref{trace=graphtrace} that such traces arise from faithful graph traces. We will begin with the right $F_c$ module $X_c$. In order to deal with the spectral projections of $\D$ we will also assume throughout this section that $E$ is locally finite and has no sources. This ensures, by Lemma \ref{finrank} that for all $a\in A$ the endomorphisms $a\Phi_k$ of $X$ are compact endomorphisms. As in the proof of Proposition \ref{trace=graphtrace}, we define a ${\C}$-valued inner product on $X_c$: $$ \la x,y\ra:=\tau((x|y)_R)=\tau(\Phi(x^*y))=\tau(x^*y).$$ This inner product is linear in the second variable. We define the Hilbert space $\HH=L^2(X,\tau)$ to be the completion of $X_c$ for $\la\cdot,\cdot\ra$. We need a few lemmas in order to obtain the ingredients of our spectral triple. \begin{lemma}\label{endoproof} The $C^*$-algebra $A=C^*(E)$ acts on $\HH$ by an extension of left multiplication. This defines a faithful nondegenerate $*$-representation of $A$. Moreover, any endomorphism of $X$ leaving $X_c$ invariant extends uniquely to a bounded linear operator on $\HH$. \end{lemma} \begin{proof} The first statement follows from the proof of Proposition \ref{trace=graphtrace}. Now let $T$ be an endomorphism of $X$ leaving $X_c$ invariant. 
Then by \cite[Cor 2.22]{RW}, $$(Tx|Ty)_R\leq \| T\|_{End}^2(x|y)_R$$ in the algebra $F$. Now the norm of $T$ as an operator on $\HH$, denoted $\Vert T\Vert_\infty$, can be computed in terms of the endomorphism norm of $T$ by \begin{align} \|T\|_\infty^2&:=\sup_{\|x\|_\HH\leq 1}\la Tx,Tx\ra=\sup_{\|x\|_\HH\leq 1}\tau((Tx|Tx)_R)\nno &\leq \sup_{\|x\|_\HH\leq 1}\n T\n_{End}^2\tau((x|x)_R)=\n T\n_{End}^2.\qed\end{align} \hideqed \end{proof} \begin{cor} The endomorphisms $\{\Phi_k\}_{k\in\Z}$ define mutually orthogonal projections on $\HH$. For any $K\subset \Z$ the sum $\sum_{k\in K}\Phi_k$ converges strongly to a projection in $\B(\HH)$. In particular, $\sum_{k\in\Z}\Phi_k=Id_{\HH}$, and for all $x\in \HH$ the sum $\sum_k\Phi_kx$ converges in norm to $x$. \end{cor} \begin{proof} As in Lemma \ref{phiendo}, we can use the continuity of the $\Phi_k$ on $\HH$, which follows from Lemma \ref{endoproof}, to see that the relation $\Phi_k\Phi_l=\delta_{k,l}\Phi_k$ extends from $X_c\subset\HH$ to $\HH$. The strong convergence of sums of $\Phi_k$'s is just as in Lemma \ref{phiendo} after replacing the $C^*$-module norm with the Hilbert space norm. \end{proof} \begin{lemma} The operator $\D$ restricted to $X_c$ extends to a closed self-adjoint operator on $\HH$. \end{lemma} \begin{proof} The proof is essentially the same as that of Proposition \ref{CstarDee}. \end{proof} \begin{lemma}\label{deltacomms} Let $\HH,\D$ be as above and let $\dd=\sqrt{\D^*\D}=\sqrt{\D^2}$ be the absolute value of $\D$. Then for $S_\al S_\beta^*\in A_c$, the operator $[\dd,S_\al S_\beta^*]$ is well-defined on $X_c$, and extends to a bounded operator on $\HH$ with $$\Vert[\dd,S_\al S_\beta^*]\Vert_{\infty}\leq \Bigl||\al|-|\beta|\Bigr|.$$ Similarly, $\Vert[\D,S_\al S_\beta^*]\Vert_\infty= \Bigl||\al|-|\beta|\Bigr|$. \end{lemma} \begin{proof} It is clear that $S_\al S_\beta^*X_c\subset X_c$, so we may define the action of the commutator on elements of $X_c$.
Now let $x=\sum_kx_k\in\HH$ and consider the action of $[\dd,S_\al S_\beta^*]$ on $x_k$. We have $$[\dd,S_\al S_\beta^*]x_k=\Bigl(\Bigl||\al|-|\beta|+k\Bigr|-\Bigl|k\Bigr|\Bigr)S_\al S_\beta^*x_k,$$ and so, by the triangle inequality, $$\Vert[\dd,S_\al S_\beta^*]x_k\Vert_{\HH}\leq\Bigl||\al|-|\beta|\Bigr|\Vert x_k\Vert_\HH,$$ since $\Vert S_\al S_\beta^*\Vert_\infty=1.$ As the $x_k$ are mutually orthogonal, $\Vert[\dd,S_\al S_\beta^*]\Vert_\infty\leq \Bigl||\al|-|\beta|\Bigr|$. The statements about $[\D,S_\al S_\beta^*]=(|\al|-|\beta|)S_\al S_\beta^*$ are easier. \end{proof} \begin{cor}\label{smodense} The algebra $A_c$ is contained in the smooth domain of the derivation $\delta$ where for $T\in\B(\HH)$, $\delta(T)=[\dd,T]$. That is, $$ A_c\subseteq\bigcap_{n\geq 0}{\rm dom}\ \delta^n.$$ \end{cor} \begin{defn} Define the $*$-algebra $\A\subset A$ to be the completion of $A_c$ in the $\delta$-topology. By Lemma \ref{smo}, $\A$ is Fr\'{e}chet and stable under the holomorphic functional calculus. \end{defn} \begin{lemma}\label{smoalg} If $a\in\A$ then $[\D,a]\in\A$ and the operators $\delta^k(a)$, $\delta^k([\D,a])$ are bounded for all $k\geq 0$. If $\phi\in F\subset\A$ and $a\in\A$ satisfy $\phi a=a=a\phi$, then $\phi[\D,a]=[\D,a]=[\D,a]\phi$. The norm closed algebra generated by $\A$ and $[\D,\A]$ is $A$. In particular, $\A$ is quasi-local. \end{lemma} We leave the straightforward proofs of these statements to the reader. \vspace{-7pt} \subsection{Traces and Compactness Criteria} \vspace{-9pt} We still assume that $E$ is a locally finite graph with no sources and that $\tau$ is a faithful semifinite lower semicontinuous gauge invariant trace on $C^*(E)$. We will define a von Neumann algebra $\cn$ with a faithful semifinite normal trace $\tilde\tau$ so that $\A\subset\cn\subset\B(\HH)$, where $\A$ and $\HH$ are as defined in the last subsection. Moreover the operator $\D$ will be affiliated to $\cn$.
The aim of this subsection will then be to prove the following result. \begin{thm}\label{mainthm} Let $E$ be a locally finite graph with no sources, and let $\tau$ be a faithful, semifinite, gauge invariant, lower semicontinuous trace on $C^*(E)$. Then $(\A,\HH,\D)$ is a $QC^\infty$, $(1,\infty)$-summable, odd, local, semifinite spectral triple (relative to $(\cn,\tilde\tau)$). For all $a\in \A$, the operator $a(1+\D^2)^{-1/2}$ is not trace class. If $v\in E^0$ has no sinks downstream, then $$\tilde\tau_\omega(p_v(1+\D^2)^{-1/2})=2\tau(p_v),$$ where $\tilde\tau_\omega$ is any Dixmier trace associated to $\tilde\tau$. \end{thm} We require the definitions of $\cn$ and $\tilde\tau$, along with some preliminary results. \begin{defn} Let $End^{00}_F(X_c)$ denote the algebra of finite rank operators on $X_c$ acting on $\HH$. Define $\cn=(End^{00}_F(X_c))''$, and let $\cn_+$ denote the positive cone in $\cn$. \end{defn} \begin{defn} Let $T\in\cn$ and $\mu\in E^*$. For $v\in E^0$ let $|v|_k$ denote the number of paths of length $k$ with range $v$, and define for $|\mu|\neq 0$ $$\omega_\mu(T)= \la S_\mu,TS_\mu\ra+\frac{1}{|r(\mu)|_{|\mu|}}\la S_\mu^*,TS_\mu^*\ra.$$ For $|\mu|=0$, $S_\mu=p_v$ for some $v\in E^0$, and we set $\omega_\mu(T)=\la S_\mu,TS_\mu\ra.$ Define $$\tilde\tau:\cn_+\to[0,\infty],\ \ \mbox{by}\ \ \ \ \tilde\tau(T)= \lim_{L\uparrow}\sum_{\mu\in L\subset E^*}\omega_\mu(T)$$ where $L$ runs over the net of finite subsets of $E^*$. \end{defn} {\bf Remark} For $T,S\in\cn_+$ and $\lambda\geq 0$ we have $$\tilde\tau(T+S)=\tilde\tau(T)+\tilde\tau(S)\ \ \ \mbox{and}\ \ \ \tilde\tau(\lambda T)=\lambda\tilde\tau(T)\ \ \mbox{where}\ \ 0\times\infty=0.$$ \begin{prop}\label{tildetau} The function $\tilde\tau:\cn_+\to[0,\infty]$ defines a faithful normal semifinite trace on $\cn$.
Moreover, $$End_F^{00}(X_c)\subset\cn_{\tilde\tau}:= {\rm span}\{T\in\cn_+:\tilde\tau(T)<\infty\},$$ the domain of definition of $\tilde\tau$, and $$\tilde\tau(\Theta^R_{x,y})=\la y,x\ra=\tau(y^*x),\ \ \ x,y\in X_c.$$ \end{prop} \begin{proof} First, since $\tilde\tau$ is defined as the limit of an increasing net of sums of positive vector functionals, $\tilde\tau$ is a positive ultra-weakly lower semicontinuous weight on $\cn_+$, \cite{KR}, that is, a normal weight. Now observe (using the fact that $p_v\Phi_k$ is a projection for all $k\in\Z$ and $v\in E^0$) that for any vertex $v\in E^0$, $k\in\Z$ and $T\in\cn_+$ \bean\tilde\tau(p_v\Phi_kTp_v\Phi_k)&=&\la \Phi_kp_v,T\Phi_kp_v\ra+ \sum_{s(\mu)=v}\la \Phi_kS_\mu,T\Phi_kS_\mu\ra\nno&+& \sum_{r(\mu)=v}\frac{1}{|r(\mu)|_{|\mu|}}\la \Phi_kS_\mu^*,T\Phi_kS_\mu^*\ra. \eean If $k=0$ this is equal to $\la p_v,Tp_v\ra<\infty$. If $k>0$ we find \bean\tilde\tau(p_v\Phi_kTp_v\Phi_k) &=&\sum_{s(\mu)=v,|\mu|=k}\la S_\mu, TS_\mu\ra \leq\Vert T\Vert\sum_{s(\mu)=v,|\mu|=k}\tau(S_\mu^*S_\mu)\nno &=&\Vert T\Vert\sum_{s(\mu)=v,|\mu|=k}\tau(p_{r(\mu)}) \leq\Vert T\Vert \tau(p_v)<\infty,\eean the last inequality following from the fact that $\tau$ arises from a graph trace, by Proposition \ref{trace=graphtrace}, and Equations (\ref{nosinksum}) and (\ref{sinksum}). Similarly, if $k<0$ \hspace{-7pt}\begin{align*}\tilde\tau(p_v\Phi_kTp_v\Phi_k) &=\sum_{r(\mu)=v,|\mu|=|k|} \frac{1}{|v|_{|k|}}\la S_\mu^*, TS_\mu^*\ra \leq\Vert T\Vert\sum_{r(\mu)=v,|\mu|=|k|} \frac{1}{|v|_{|k|}}\tau(S_\mu^*S_\mu)\nno &=\Vert T\Vert\sum_{r(\mu)=v,|\mu|=|k|} \frac{1}{|v|_{|k|}}\tau(p_{r(\mu)}) =\Vert T\Vert \tau(p_v)<\infty.\end{align*} Hence $\tilde\tau$ is a finite positive function on each $p_v\Phi_k\cn p_v\Phi_k$.
Taking limits over finite sums of vertex projections, $p=p_{v_1}+\cdots+p_{v_n}$, converging to the identity, and finite sums $P=\Phi_{k_1}+\cdots+\Phi_{k_m}$, we have for $T\in\cn_+$ $$\limsup_{pP\nearrow 1}\tilde\tau(pPTpP)\leq\tilde\tau(T)\leq \liminf_{pP\nearrow1}\tilde\tau(pPTpP),$$ the first inequality following from the definition of $\tilde\tau$, and the latter from the ultra-weak lower semicontinuity of $\tilde\tau$, so for $T\in\cn_+$ \be\lim_{pP\nearrow1}\tilde\tau(pPTpP)=\tilde\tau(T).\label{tildetaulimit}\ee For $x\in X_c\subset\HH$, $\Theta^R_{x,x}\geq 0$ and so we compute \bean \tilde\tau(\Theta^R_{x,x})&=& \sup_L\sum_{\mu\in L}\la S_\mu,x(x|S_\mu)_R\ra+ \frac{1}{|r(\mu)|_{|\mu|}}\la S_\mu^*,x(x|S_\mu^*)_R\ra\nno &=&\sup_L\sum_{\mu\in L}\tau(\Phi(S_\mu^*x\Phi(x^*S_\mu)))+ \frac{1}{|r(\mu)|_{|\mu|}}\tau(\Phi(S_\mu x\Phi(x^*S_\mu^*))).\eean Now since $x\in X_c$, there are only finitely many $\omega_\mu$ which are nonzero on $\Theta^R_{x,x}$, so this is always a finite sum, and $\tilde\tau(\Theta^R_{x,x})<\infty$. To compute $\tilde\tau(\Theta^R_{x,y})$, suppose that $x=S_\al S_\beta^*$ and $y=S_\s S_\rho^*$. Then $(y|S_\mu)_R=\Phi(S_\rho S_\s^*S_\mu)$ and this is zero unless $|\s|=|\mu|+|\rho|$. In this case, $|\s|\geq |\mu|$ and we write $\s=\underline{\s}\overline{\s}$ where $|\underline{\s}|=|\mu|$. Similarly, $(y|S^*_\mu)_R=\Phi(S_\rho S_\s^*S_\mu^*)$ is zero unless $|\rho|=|\s|+|\mu|$.
We also require the computation $$ S_\al S_\beta^* S_\rho S_\s^*S_\mu S_\mu^*= S_\al S_\beta^* S_\rho S_\s^*\delta_{\underline{\s},\mu},\qquad |\s|\geq |\mu|$$ $$ S_\al S_\beta^*S_\rho S_\s^*S_\mu^*S_\mu= S_\al S_\beta^*S_\rho S_\s^* \delta_{r(\mu),s(\s)} \qquad|\mu|\geq|\s|.$$ Now we can compute for $|\rho|\neq|\s|$, so that only one of the sums over $|\mu|=\pm(|\s|-|\rho|)$ in the next calculation is nonempty: \bean\tilde\tau(\Theta^R_{x,y}) &=&\sum_{\mu}\tau(S_\mu^* x\Phi(y^*S_\mu))+ \sum_{\mu}\frac{1}{|r(\mu)|_{|\mu|}} \tau(S_\mu x\Phi(y^*S_\mu^*))\nno &=&\sum_{|\mu|=|\s|-|\rho|}\tau(xy^*S_\mu S_\mu^*) +\sum_{|\mu|=|\rho|-|\s|}\frac{1}{|r(\mu)|_{|\mu|}}\tau(xy^*S_\mu^* S_\mu)\nno &=&\sum_{|\mu|=|\s|-|\rho|}\tau(xy^*\delta_{\underline{\s},\mu}) +\sum_{|\mu|=|\rho|-|\s|,r(\mu)=s(\s)}\frac{1}{|r(\mu)|_{|\mu|}}\tau(xy^*)\nno &=&\tau(xy^*)=\tau(y^*x)=\tau((y|x)_R)=\la y,x\ra. \eean When $|\s|=|\rho|$, we have $$\tilde\tau(\Theta^R_{x,y})=\sum_{v\in E^0}\tau(\Phi(p_vxy^*p_v))=\sum_{v\in E^0}\tau(y^*p_vx)$$ and the same conclusion is obtained as above. By linearity, whenever $x,y\in X_c$, $\tilde\tau(\Theta^R_{x,y})=\tau((y|x)_R)$. For any two $\Theta^R_{x,y}$, $\Theta^R_{w,z}\in End_F^{00}(X_c)$ we find \bean\tilde\tau(\Theta^R_{w,z}\Theta^R_{x,y})&=&\tilde\tau(\Theta^R_{w(z|x)_R,y}) =\tau((y|w(z|x)_R)_R) =\tau((y|w)_R(z|x)_R)\nno&=&\tau((z|x)_R(y|w)_R) =\tilde\tau(\Theta^R_{x(y|w)_R,z})=\tilde\tau(\Theta^R_{x,y}\Theta^R_{w,z}).\eean Hence by linearity, $\tilde\tau$ is a trace on $End_F^{00}(X_c)\subset \cn$. We saw previously that $\tilde\tau$ is finite on $pP\cn pP$ whenever $p$ is a finite sum of vertex projections $p_v$ and $P$ is a finite sum of the spectral projections $\Phi_k$. Since $\tilde\tau$ is ultra-weakly lower semicontinuous on $pP\cn_+ pP$, it is completely additive in the sense of \cite[Definition 7.1.1]{KR}, and therefore is normal by \cite[Theorem 7.1.12]{KR}, which is to say, ultra-weakly continuous.
The algebra $End^{00}_F(X_c)$ is strongly dense in $\cn$, so $pPEnd^{00}_F(X_c) pP$ is strongly dense in $pP\cn pP$. Let $T\in pP\cn pP$, and choose a bounded net $T_i$, converging $*$-strongly to $T$, with $T_i\in pP End_F^{00}(X_c) pP$. Then, since multiplication is jointly continuous on bounded sets in the $*$-strong topology, $$\tilde\tau(TT^*)=\lim_{i}\tilde\tau(T_iT_i^*)=\lim_{i}\tilde\tau(T_i^*T_i) =\tilde\tau(T^*T).$$ Hence $\tilde\tau$ is a trace on each $pP\cn pP$ and so on $\cup_{pP} pP\cn pP$, where the union is over all finite sums $p$ of vertex projections and finite sums $P$ of the $\Phi_k$. Next we want to show that $\tilde\tau$ is semifinite, so for all $T\in\cn$ we want to find a net $R_i\geq0$ with $R_i\leq T^*T$ and $\tilde\tau(R_i)<\infty$. Now $$\lim_{pP\nearrow1}T^*pPT=T^*T,\ \ \ \ T^*pPT\leq T^*T$$ and we just need to show that $\tilde\tau(T^*pPT)<\infty$. It suffices to show this for $pP=p_v\Phi_k$, $v\in E^0,\ k\in\Z$. In this case we have (with $q$ a finite sum of vertex projections and $Q$ a finite sum of $\Phi_k$) \bean \tilde\tau(T^*p_v\Phi_kT)&=& \lim_{qQ\nearrow 1}\tilde\tau(qQT^*p_v\Phi_kTqQ) \qquad\mbox{by equation}\ (\ref{tildetaulimit})\nno &=&\lim_{qQ\nearrow 1}\tilde\tau(qQT^*qQp_v\Phi_kTqQ) \qquad\mbox{eventually}\ qQp_v\Phi_k=p_v\Phi_k\nno &=&\lim_{qQ\nearrow1}\tilde\tau(qQp_v\Phi_kT^*qQTqQp_v\Phi_k) \quad\tilde\tau\ \mbox{is a trace on}\ qQ\cn qQ\nno &=&\lim_{qQ\nearrow1}\tilde\tau(p_v\Phi_kT^*qQTp_v\Phi_k) =\tilde\tau(p_v\Phi_kTp_v\Phi_k)<\infty. \eean Thus $\tilde\tau$ is a semifinite normal weight on $\cn_+$, and is a trace on a dense subalgebra. Now let $T\in\cn$. By the above \be\tilde\tau(T^*pPT)=\tilde\tau(pPTT^*pP).\label{almost}\ee By lower semicontinuity and the fact that $T^*pPT\leq T^*T$, the limit of the left hand side of Equation (\ref{almost}) as $pP\to 1$ is $\tilde\tau(T^*T)$. By Equation (\ref{tildetaulimit}), the limit of the right hand side is $\tilde\tau(TT^*)$.
Hence $\tilde\tau(T^*T)=\tilde\tau(TT^*)$ for all $T\in\cn$, and $\tilde\tau$ is a normal, semifinite trace on $\cn$. \end{proof} {\bf Notation} If $g:E^0\to\R_+$ is a faithful graph trace, we shall write $\tau_g$ for the associated semifinite trace on $C^*(E)$, and $\tilde\tau_g$ for the associated faithful, semifinite, normal trace on $\cn$ constructed above. \begin{lemma}\label{tracepvphi} Let $E$ be a locally finite graph with no sources and a faithful graph trace $g$. Let $v\in E^0$ and $k\in\Z$. Then $$ \tilde\tau_g(p_v\Phi_k)\leq \tau_g(p_v)$$ with equality when $k\leq 0$ or when $k>0$ and there are no sinks within $k$ vertices of $v$. \end{lemma} \begin{proof} Let $k\geq 0$. Then, by Lemma \ref{finrank} we have \bean \tilde\tau_g\left(p_v\Phi_k\right)&=& \tilde\tau_g\left(p_v\sum_{|\rho|=k}\Theta^R_{S_\rho,S_\rho}\right) =\tilde\tau_g\left(\sum_{|\rho|=k}\Theta^R_{p_vS_\rho,S_\rho}\right)\nno &=&\tau_g\left(\sum_{|\rho|=k}(S_\rho|p_vS_\rho)_R\right)= \tau_g\left(\sum_{|\rho|=k}\Phi(S_\rho^*p_vS_\rho)\right)\nno&=& \tau_g\left(\sum_{|\rho|=k,s(\rho)=v}S_\rho^*S_\rho\right)= \tau_g\left(\sum_{|\rho|=k,s(\rho)=v}p_{r(\rho)}\right).\eean Now $\tau_g(p_v)=g(v)$ where $g$ is the graph trace associated to $\tau_g$, Proposition \ref{trace=graphtrace}, and Equation (\ref{sinksum}) shows that\be g(v)=\sum_{|\rho|\preceq k,\ s(\rho)=v}g(r(\rho))\geq\sum_{|\rho|=k,s(\rho)=v}g(r(\rho)),\label{dblestar}\ee with equality provided there are no sinks within $k$ vertices of $v$ (always true for $k=0$). Hence for $k\geq 0$ we have $\tilde\tau_g(p_v\Phi_k)\leq\tau_g(p_v),$ with equality when there are no sinks within $k$ vertices of $v$. For $k<0$ we proceed as above and observe that there is at least one path of length $|k|$ ending at $v$ since $E$ has no sources. 
Then \begin{align} \tilde\tau_g(p_v\Phi_{k})&=\frac{1}{|v|_k}\sum_{|\rho|=|k|,\ r(\rho)=v}\tau_g(S_\rho p_vS_\rho^*)=\frac{1}{|v|_k}\sum_{|\rho|=|k|,\ r(\rho)=v}\tau_g(S_\rho^*S_\rho p_v)\nno &=\frac{1}{|v|_k}\sum_{|\rho|=|k|,\ r(\rho)=v}\tau_g(p_v)=\tau_g(p_v).\qed\end{align} \hideqed \end{proof} \begin{prop}\label{Dixytilde=tau} Assume that the directed graph $E$ is locally finite, has no sources and has a faithful graph trace $g$. For all $a\in A_c$ the operator $a(1+\D^2)^{-1/2}$ is in the ideal $\LL^{(1,\infty)}(\cn,\tilde\tau_g)$. \end{prop} \begin{proof} It suffices to show that $a(1+\D^2)^{-1/2}\in\LL^{(1,\infty)}(\cn,\tilde\tau_g)$ for a vertex projection $a=p_v$, $v\in E^0$; the extension to more general $a\in A_c$ follows using the arguments of Lemma \ref{finrank}. Since $p_v\Phi_k$ is a projection for all $v\in E^0$ and $k\in\Z$, we may compute the Dixmier trace using the partial sums (over $k\in\Z$) defining the trace of $p_v(1+\D^2)^{-1/2}$. For the partial sums with $k\geq 0$, Lemma \ref{tracepvphi} gives us \begin{equation} \tilde\tau_g\left(p_v\sum_{0}^N(1+k^2)^{-1/2}\Phi_k\right) \leq\sum_{k=0}^N(1+k^2)^{-1/2}\tau_g(p_v).\label{dblestarry}\end{equation} We have equality when there are no sinks within $N$ vertices of $v$. For the partial sums with $k<0$, Lemma \ref{tracepvphi} gives $$\sum_{k=-N}^{-1}(1+k^2)^{-1/2}\tilde{\tau}_g(p_v\Phi_{k})= \sum_{k=-N}^{-1}(1+k^2)^{-1/2}\tau_g(p_v),$$ and the sequence $$\frac{1}{\log(2N+1)}\sum_{k=-N}^N(1+k^2)^{-1/2}\tilde\tau_g(p_v\Phi_k)$$ is bounded.
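The boundedness of this sequence, and the value $2$ obtained below when there are no sinks downstream, reflect the elementary asymptotics $\sum_{k=-N}^{N}(1+k^2)^{-1/2}\sim 2\log N$. A numerical check of the logarithmic average (illustrative only, not part of the proof):

```python
import math

def dixmier_ratio(N):
    """(1 / log(2N+1)) * sum_{k=-N}^{N} (1 + k^2)^(-1/2)."""
    s = sum(1.0 / math.sqrt(1.0 + k * k) for k in range(-N, N + 1))
    return s / math.log(2 * N + 1)

# the ratio tends to 2, matching the Dixmier trace value 2*tau_g(p_v)
```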
Hence $p_v(1+\D^2)^{-1/2}\in\LL^{(1,\infty)}$ and for any $\omega$-limit we have $$\tilde\tau_{g\omega}(p_v(1+\D^2)^{-1/2})= \omega\mbox{-}\!\lim\frac{1}{\log(2N+1)}\sum_{k=-N}^N(1+k^2)^{-1/2}\tilde\tau_g(p_v\Phi_k).$$ When there are no sinks downstream from $v$, we have equality in Equation (\ref{dblestarry}) and so $$\tilde\tau_{g\omega}(p_v(1+\D^2)^{-1/2})=2\tau_g(p_v).\qed$$ \hideqed \end{proof} {\bf Remark} Using Proposition \ref{wellbehaved}, one can check that \be res_{s=0}\tilde\tau_g(p_v(1+\D^2)^{-1/2-s})= \frac{1}{2}\tilde\tau_{g\omega}(p_v(1+\D^2)^{-1/2}).\label{res}\ee We will require this formula when we apply the local index theorem. \begin{cor}\label{compactresolvent} Assume $E$ is locally finite, has no sources and has a faithful graph trace $g$. Then for all $a\in A$, $a(1+\D^2)^{-1/2}\in\K_\cn$. \end{cor} \begin{proof} (of Theorem \ref{mainthm}). That we have a $QC^\infty$ spectral triple follows from Corollary \ref{smodense}, Lemma \ref{smoalg} and Corollary \ref{compactresolvent}. The properties of the von Neumann algebra $\cn$ and the trace $\tilde\tau$ follow from Proposition \ref{tildetau}. The $(1,\infty)$-summability and the value of the Dixmier trace come from Proposition \ref{Dixytilde=tau}. The locality of the spectral triple follows from Lemma \ref{smoalg}. \end{proof} \section{The Index Pairing}\label{index} Having constructed semifinite spectral triples for graph $C^*$-algebras arising from locally finite graphs with no sources and a faithful graph trace, we can apply the semifinite local index theorem described in \cite{CPRS2}. See also \cite{CPRS3,CM,Hig}. There is a $C^*$-module index, described in the Appendix, which takes its values in the $K$-theory of the core. The numerical index is obtained by applying the trace $\tilde\tau$ to the difference of projections representing the $K$-theory class.
Thus for any unitary $u$ in a matrix algebra over the graph algebra $A$ $$\la [u],[(\A,\HH,\D)]\ra\in \tilde\tau_*(K_0(F)).$$ We compute this pairing for unitaries arising from loops (with no exit), which provide a set of generators of $K_1(\A)$. To describe the $K$-theory of the graphs we are considering, recall the notion of ends introduced in Definition \ref{ends}. \begin{lemma}\label{Kofgraph} Let $C^*(E)$ be a graph $C^*$-algebra such that no loop in the locally finite graph $E$ has an exit. Then, $$K_0(C^*(E))=\Z^{\#ends},\ \ \ \ K_1(C^*(E))= \Z^{\#loops}.$$ \end{lemma} \begin{proof} This follows from the continuity of $K_*$ and \cite[Corollary 5.3]{RSz}. \end{proof} If $A=C^*(E)$ is nonunital, we will denote by $A^+$ the algebra obtained by adjoining a unit to $A$; otherwise we let $A^+$ denote $A$. \begin{defn} Let $E$ be a locally finite graph such that $C^*(E)$ has a faithful graph trace $g$. Let $L$ be a loop in $E$, and denote by $p_1,\dots,p_n$ the projections associated to the vertices of $L$ and $S_1,\dots, S_n$ the partial isometries associated to the edges of $L$, labelled so that $S^*_nS_n=p_1$ and $$ S^*_iS_i=p_{i+1},\ i=1,\dots,n-1,\ \ S_iS_i^*=p_i,\ i=1,\dots ,n.$$ \end{defn} \begin{lemma}\label{loops} Let $A=C^*(E)$ be a graph $C^*$-algebra with faithful graph trace $g$. For each loop $L$ in $E$ we obtain a unitary in $A^+$, $$u=1+S_{1}+S_{2}+\cdots+S_{n} -(p_1+p_2+\cdots+p_n),$$ whose $K_1$ class does not vanish. Moreover, distinct loops give rise to distinct $K_1$ classes, and we obtain a complete set of generators of $K_1$ in this way. \end{lemma} \begin{proof}The proof that $u$ is unitary is a simple computation. The $K_1$ class of $u$ is the generator of a copy of $K_1(S^1)$ in $K_1(C^*(E))$, as follows from \cite{RSz}. Distinct loops give rise to distinct copies of $K_1(S^1)$, since no loop has an exit. 
\end{proof} \begin{prop}\label{specflow} Let $E$ be a locally finite graph with no sources and a faithful graph trace $g$ and $A=C^*(E)$. The pairing of the spectral triple $(\A,\HH,\D)$ of Theorem \ref{mainthm} with $K_1(A)$ is given on the generators of Lemma \ref{loops} by $$\la [u],[(\A,\HH,\D)]\ra=-\sum_{i=1}^n\tau_g(p_i)=-n\tau_g(p_1).$$ \end{prop} \begin{proof} The semifinite local index theorem \cite{CPRS2} provides a general formula for the Chern character of $(\A,\HH,\D)$. In our setting it is given by a one-cochain $$\phi_1(a_0,a_1)=res_{s=0}\sqrt{2\pi i}\tilde\tau_g(a_0[\D,a_1](1+\D^2)^{-1/2-s}),$$ and the pairing (spectral flow) is given by $$sf(\D,u\D u^*)=\la [u],[(\A,\HH,\D)]\ra=\frac{1}{\sqrt{2\pi i}}\phi_1(u,u^*).$$ Now $[\D,u^*]=-\sum S_{i}^*$ and $u[\D,u^*]=-\sum_{i=1}^n p_{i}$. Using Equation (\ref{res}) and Proposition \ref{Dixytilde=tau}, $$ sf(\D,u\D u^*)=-res_{s=0}\tilde\tau_g(\sum_{i=1}^n p_{i}(1+\D^2)^{-1/2-s})=-\sum_{i=1}^n\tau_g(p_{i})=-n\tau_g(p_{1}),$$ the last equalities following since all the $p_{i}$ have equal trace and there are no sinks `downstream' from any $p_i$, since no loop has an exit. \end{proof} {\bf Remark} The $C^*$-algebra of the graph consisting of a single edge and single vertex is $C(S^1)$ (we choose Lebesgue measure as our trace, normalised so that $\tau(1)=1$). For this example, the spectral triple we have constructed is the Dirac triple of the circle, $(C^\infty(S^1),L^2(S^1),\frac{1}{i}\frac{d}{d\theta})$ (as can be seen from Corollary \ref{fryingpan}). The index theorem above gives the correct normalisation for the index pairing on the circle. That is, if we denote by $z$ the unitary coming from the construction of Lemma \ref{loops} applied to this graph, then $\la[\bar z],[(\A,\HH,\D)]\ra=1$. \begin{prop}\label{C*specflow} Let $E$ be a locally finite graph with no sources and a faithful graph trace $g$, and $A=C^*(E)$.
The pairing of the spectral triple $(\A,\HH,\D)$ of Theorem \ref{mainthm} with $K_1(A)$ can be computed as follows. Let $P$ be the positive spectral projection for $\D$, and perform the $C^*$ index pairing of Proposition \ref{themapH}: $$K_1(A)\times KK^1(A,F)\to K_0(F),\ \ \ \ [u]\times[(X,P)]\to [\ker PuP]-[{\rm coker}PuP].$$ Then we have $$sf(\D,u\D u^*)=\tilde\tau_g(\ker PuP)-\tilde\tau_g({\rm coker}PuP)=\tilde\tau_{g*}([\ker PuP]-[{\rm coker}PuP]).$$ \end{prop} \begin{proof} It suffices to prove this on the generators of $K_1$ arising from loops $L$ in $E$. Let $u=1+\sum_iS_i-\sum_ip_i$ be the corresponding unitary in $A^+$ defined in Lemma \ref{loops}. We will show that $\ker PuP=\{0\}$ and that $\mbox{coker}PuP=\sum_{i=1}^n p_{i}\Phi_1X$. For $a\in PX$ write $a=\sum_{m\geq 1}a_m$. For each $m\geq 1$ write $a_m=\sum_{i=1}^n p_{i}a_m+(1-\sum_{i=1}^n p_{i})a_m$. Then \bean &&PuPa_m=P(1-\sum_{i=1}^n p_{i}+\sum_{i=1}^n S_{i})a_m\nno &=&P(1-\sum^n p_{i}+\sum^n S_{i})(\sum^n p_{i}a_m)+ P(1-\sum^n p_{i}+\sum^n S_{i})(1-\sum^n p_{i})a_m\nno &=&P\sum^n S_{i}a_m +P(1-\sum^n p_{i})a_m\nno &=&\sum^n S_{i}a_m+(1-\sum^n p_{i})a_m.\eean It is clear from this computation that $PuPa_m\neq 0$ for $a_m\neq 0$. Now suppose $m\geq 2$. If $\sum_{i=1}^n p_{i}a_m=a_m$ then $a_m=\lim_N\sum^N_{k=1} S_{\mu_k}S_{\nu_k}^*$ with $|\mu_k|-|\nu_k|=m\geq 2$ and $S_{{\mu_k}_1}=S_{i}$ for some $i$. So we can construct $b_{m-1}$ from $a_m$ by removing the initial $S_{i}$'s. Then $a_m=\sum_{i=1}^n S_{i}b_{m-1}$, and $\sum_{i=1}^np_{i}b_{m-1}=b_{m-1}$.
For arbitrary $a_m$, $m\geq 2$, we can write $a_m=\sum_ip_ia_m+(1-\sum_ip_i)a_m$, and so \bean a_m&=&\sum^n p_{i}a_m+(1-\sum^n p_{i})a_m\nno &=&\sum^n S_{i}b_{m-1}+(1-\sum^n p_{i})a_m\ \ \ \mbox{and by adding zero}\nno &=&\sum^n S_{i}b_{m-1}+(1-\sum^n p_{i})b_{m-1}+\bigl(\sum^n S_{i}+(1-\sum^n p_{i})\bigr)(1-\sum^n p_{i})a_m\nno &=&ub_{m-1}+u(1-\sum^np_i)a_m\nno&=&PuPb_{m-1}+PuP(1-\sum^n p_{i})a_m.\eean Thus $PuP$ maps onto $\sum_{m\geq2}\Phi_mX$. For $m=1$, if we try to construct $b_0$ from $\sum_{i=1}^n p_{i}a_1$ as above, we find $PuPb_0=0$ since $Pb_0=0$. Thus $\mbox{coker}PuP=\sum^n p_{i}\Phi_1X$. By Proposition \ref{specflow}, the pairing is then \begin{align} sf(\D,u\D u^*)&=-\sum^n\tau_g(p_{i})=-\tilde\tau_g(\sum^n p_{i}\Phi_1)\nno &=-\tilde\tau_{g*}([\mbox{coker}PuP])=-\tilde\tau_g(\mbox{coker}PuP).\end{align} Thus we can recover the numerical index using $\tilde\tau_g$ and the $C^*$-index. \end{proof} The following example shows that the semifinite index provides finer invariants of directed graphs than those obtained from the ordinary index. The ordinary index computes the pairing between the $K$-theory and $K$-homology of $C^*(E)$, while the semifinite index also depends on the core and the gauge action. \begin{cor}[Example]\label{fryingpan} Let $C^*(E_n)$ be the algebra determined by the graph \vspace{-2pt} \[ \beginpicture \setcoordinatesystem units <1cm,1cm> \setplotarea x from 0 to 12, y from -0.5 to 0.5 \put{$\cdots$} at 0.5 0 \put{$\bullet$} at 3 0 \put{$\bullet$} at 5 0 \put{$\bullet$} at 7 0 \put{$\bullet$} at 9 0 \put{$L$} at 10.5 0 \circulararc -325 degrees from 9 0.2 center at 9.6 0 \arrow <0.25cm> [0.2,0.5] from 1.2 0 to 2.8 0 \arrow <0.25cm> [0.2,0.5] from 3.2 0 to 4.8 0 \arrow <0.25cm> [0.2,0.5] from 5.2 0 to 6.8 0 \arrow <0.25cm> [0.2,0.5] from 7.2 0 to 8.8 0 \arrow <0.25cm> [0.2,0.5] from 10.228 0.1 to 10.226 -0.1 \endpicture \] \smallskip where the loop $L$ has $n$ edges. 
Then $C^*(E_n)\cong C(S^1)\otimes\K$ for all $n$, but $n$ is an invariant of the pair of algebras $(C^*(E_n),F_n)$ where $F_n$ is the core of $C^*(E_n)$. \end{cor} \begin{proof} Observe that the graph $E_n$ has a one parameter family of faithful graph traces, specified by $g(v)=r\in \R_+$ for all $v\in E^0$. First consider the case where the graph consists only of the loop $L$. The $C^*$-algebra $A$ of this graph is isomorphic to $M_n(C(S^1))$, via $$ S_i\to e_{i,i+1},\ i=1,\dots,n-1,\ \ S_n\to id_{S^1}e_{n,1},$$ where the $e_{i,j}$ are the standard matrix units for $M_n(\C)$, \cite{aH}. The unitary $$S_1S_2\cdots S_n+S_2S_3\cdots S_1+\cdots+S_nS_1\cdots S_{n-1}$$ is mapped to the orthogonal sum $id_{S^1}e_{1,1}\oplus id_{S^1}e_{2,2}\oplus\cdots\oplus id_{S^1}e_{n,n}$. The core $F$ of $A$ is $\C^n=\C[p_1,\dots,p_n]$. Since $KK^1(A,F)$ is equal to $$\oplus^n KK^1(A,\C)=\oplus^nKK^1(M_n(C(S^1)),\C)=\oplus^nK^1(C(S^1))=\Z^n$$ we see that $n$ is the rank of $KK^1(A,F)$ and so an invariant, but let us link this to the index computed in Propositions \ref{specflow} and \ref{C*specflow} more explicitly. Let $\phi:C(S^1)\to A$ be given by $\phi(id_{S^1})=S_1S_2\cdots S_n\oplus \sum_{i=2}^ne_{i,i}$. We observe that $\D=\sum_{i=1}^np_i\D p_i$ because the `off-diagonal' terms vanish: $p_i\D p_j=\D p_ip_j=0$ for $i\neq j$. Since $S_1S_1^*=S^*_nS_n=p_1$, we find (with $P$ the positive spectral projection of $\D$) $$\phi^*(X,P)=(p_1X,p_1Pp_1)\oplus\mbox{degenerate\ module}\in KK^1(C(S^1),F).$$ Now let $\psi:F\to\C^n$ be given by $\psi(\sum_jz_jp_j)=(z_1,z_2,\dots,z_n)$. Then $$\psi_*\phi^*(X,P)=\oplus_{j=1}^n(p_1Xp_j,p_1Pp_1)\in\oplus^nK^1(C(S^1)).$$ Now $X\cong M_n(C(S^1))$, so $p_1Xp_j\cong C(S^1)$ for each $j=1,\dots,n$.
It is easy to check that $p_1\D p_1$ acts by $\frac{1}{i}\frac{d}{d\theta}$ on each $p_1Xp_j$, and so our Kasparov module maps to $$\psi_*\phi^*(X,P)=\oplus^n(C(S^1),P_{\frac{1}{i}\frac{d}{d\theta}})\in \oplus^nK^1(C(S^1)),$$ where $P_{\frac{1}{i}\frac{d}{d\theta}}$ is the positive spectral projection of $\frac{1}{i}\frac{d}{d\theta}$. The pairing with $id_{S^1}$ is nontrivial on each summand, since $\phi(id_{S^1})=S_1\cdots S_n\oplus \sum_{i=2}^ne_{i,i}$ is a unitary mapping $p_1Xp_j$ to itself for each $j$. So we have, \cite{HR}, \begin{align}id_{S^1}\times\psi_*\phi^*(X,P)&=\sum^n_{j=1}Index(Pid_{S^1}P:p_1PXp_j\to p_1PXp_j)\nno &=-\sum_{j=1}^n[p_j]\in K_0(\C^n).\end{align} By Proposition \ref{C*specflow}, applying the trace to this index gives $-n\tau_g(p_1)$. Of course in Proposition \ref{C*specflow} we used the unitary $S_1+S_2+\cdots+S_n$, however in $K_1(A)$ $$[S_1S_2\cdots S_n]=[S_1+S_2+\cdots+S_n]=[id_{S^1}].$$ To see this, observe that $$(S_1+\cdots+S_n)^n=S_1S_2\cdots S_n+S_2S_3\cdots S_1+\cdots+S_nS_1\cdots S_{n-1}.$$ This is the orthogonal sum of $n$ copies of $id_{S^1}$, which is equivalent in $K_1$ to $n[id_{S^1}]$. On the other hand, $$[(S_1+\cdots+S_n)^n]=n[S_1+\cdots+S_n],$$ so $n[S_1+\cdots+S_n]=n[id_{S^1}]$. Since we have cancellation in $K_1$, this implies that the class of $S_1+\cdots+S_n$ coincides with the class of $id_{S^1}$, and hence with the class of $S_1S_2\cdots S_n$. Having seen what is involved, we now add the infinite path on the left. The core becomes $\K\oplus\K\oplus\cdots\oplus\K$ ($n$ copies). Since $A=C(S^1)\otimes\K= M_n(C(S^1))\otimes\K$, the intrepid reader can go through the details of an argument like the one above, with entirely analogous results. \end{proof} Since the invariants obtained from the semifinite index are finer than the isomorphism class of $C^*(E)$, depending as they do on $C^*(E)$ and the gauge action, they can be regarded as invariants of the differential structure.
That is, the core $F$ can be recovered from the gauge action, and we regard these invariants as arising from the differential structure defined by $\D$. Thus in this case, the semifinite index produces invariants of the differential topology of the noncommutative space $C^*(E)$. \appendix \vspace{-10pt} \section{Toeplitz Operators on $C^*$-modules} \vspace{-7pt} In this Appendix we define a bilinear product $$ K_1(A)\times KK^1(A,B)\to K_0(B).$$ Here we suppose that $A, B$ are ungraded $C^*$-algebras. This product should be the Kasparov product, though it is difficult to compare the two (see the footnote to Proposition \ref{themapH} below). We denote by $A^+$ the minimal (one-point) unitization if $A$ is nonunital. Otherwise $A^+$ will mean $A$. To deal with unitaries in matrix algebras over $A$, we recall that $K_1(A)$ may be defined by considering unitaries in matrix algebras over $A^+$ which are equal to $1_n$ mod $A$ (for some $n$), \cite[p 107]{HR}. We consider odd Kasparov $A$-$B$-modules. So let $E$ be a fixed countably generated ungraded $B$-$C^*$-module, with $\phi:A\to End_B(E)$ a $*$-homomorphism, and let $P\in End_B(E)$ be such that $a(P-P^*), a(P^2-P), [P,a]$ are all compact endomorphisms. Then by \cite[Lemma 2, Section 7]{K}, the pair $(\phi,P)$ determines a $KK^1(A,B)$ class, and every class has such a representative. The equivalence relations on pairs $(\phi,P)$ that give $KK^1$ classes are unitary equivalence $(\phi,P)\sim (U\phi U^*,UPU^*)$ and homology, $P_1\sim P_2$ if $P_1\phi_1(a)-P_2\phi_2(a)$ is a compact endomorphism for all $a\in A$. Now let $u\in M_m(A^+)$ be a unitary, and $(\phi,P)$ a representative of a $KK^1(A,B)$ class. Observe that $(P\otimes 1_m)E\otimes\C^m$ is a $B$-module, and so can be extended to a $B^+$ module. Writing $P_m=P\otimes 1_m$, the operator $P_m\phi(u)P_m$ is Fredholm, since (dropping the $\phi$ for now) $$ P_muP_m P_mu^*P_m=P_m[u,P_m]u^*P_m+P_m,$$ and this is $P_m$ modulo compact endomorphisms. 
To ensure that $\ker P_muP_m$ and $\ker P_mu^*P_m$ are closed submodules, we need to know that $P_muP_m$ is regular, but by \cite[Lemma 4.10]{GVF}, we can always replace $P_muP_m$ by a regular operator on a larger module. Then the index of $P_muP_m$ is defined as the index of this regular operator, so there is no loss of generality in supposing that $P_muP_m$ is regular. Then we can define $$Index(P_muP_m)=[\ker P_muP_m]-[\mbox{coker} P_muP_m]\in K_0(B).$$ This index lies in $K_0(B)$ rather than $K_0(B^+)$ by \cite[Proposition 4.11]{GVF}. So given $u$ and $(\phi,P)$ we define a $K_0(B)$ class by setting $$ u\times (\phi,P)\to [\ker P_muP_m]-[\mbox{coker} P_muP_m].$$ Observe the following. If $u=1_m$ then $1_m\times (\phi,P)\to Index(P_m)=0$ so for any $(\phi,P)$ the map defined on unitaries sends the identity to zero. Given the unitary $u\oplus v\in M_{2m}(A^+)$ (say) then $$ u\oplus v\times (\phi,P)\to Index(P_{2m}(u\oplus v)P_{2m})=Index(P_muP_m)+Index(P_mvP_m),$$ so for each $(\phi,P)$ the map respects direct sums. 
Finally, if $u$ is homotopic through unitaries to $v$, then $P_muP_m$ is norm homotopic to $P_mvP_m$, so $$ Index(P_muP_m)=Index(P_mvP_m).$$ By the universal property of $K_1$, \cite[Proposition 8.1.5]{RLL}, for each $(\phi,P)$ as above there exists a unique homomorphism $H_P:K_1(A)\to K_0(B)$ such that $$ H_P([u])=Index(P_muP_m).$$ Now observe that $H_{UPU^*,U\phi(\cdot)U^*}=H_{P,\phi}$ since $$Index(UPU^*(U\phi(u)U^*)UPU^*)=Index(UPuPU^*)=Index(PuP).$$ The homomorphisms $H_P$ are bilinear, since \bean H_{P\oplus Q}([u])&=&Index((P\oplus Q)(\phi(u)\oplus\psi(u))(P\oplus Q))\nno &=&Index(P\phi(u)P)+Index(Q\psi(u)Q)=H_P([u])+H_Q([u]).\eean Finally, if $(\phi_1,P_1)$ and $(\phi_2,P_2)$ are homological, the classes defined by $(\phi_1\oplus\phi_2,P_1\oplus 0)$ and $(\phi_1\oplus\phi_2,0\oplus P_2)$ are operator homotopic, \cite[p 562]{K}, so \bean Index(P_1\phi_1(u)P_1)&=&Index((P_1\oplus 0)(\phi_1(u)\oplus\phi_2(u))(P_1\oplus 0))\nno &=&Index((0\oplus P_2)(\phi_1(u)\oplus\phi_2(u))(0\oplus P_2))\nno &=&Index(P_2\phi_2(u)P_2).\eean So $H_P$ depends only on the $KK$-equivalence class of $(\phi,P)$. Thus \begin{prop}\label{themapH} With the notation above, the map\footnote{As noted at the end of the introduction, Nigel Higson has shown us a proof that the map $H$ is equal to the Kasparov product. The Kasparov module defined by $PuP$ in $KK^0(\C,B)=K_0(B)$ is not a product Kasparov module, but the class of the product of representatives $u,P$ coincides with the class of $PuP$. } $$H:K_1(A)\times KK^1(A,B)\to K_0(B)$$ $$H([u],[(\phi,P)]):=[\ker(PuP)]-[{\rm coker}PuP]$$ is bilinear. \end{prop} This is a kind of spectral flow, where we are counting the net number of eigen-$B$-modules which cross zero along any path from $P$ to $uPu^*$. \vspace{-10pt}
TITLE: Deriving Trapezoid Rule via Newton-Cotes formula QUESTION [0 upvotes]: Newton-Cotes is given by $$ \int_{a}^{b} f(x) d x \approx \sum_{i=0}^{n} f\left(x_{i}\right) A_{i}, \: A_{i}=\int_{a}^{b} \ell_{i}(x) d x, \quad i=0,1, \ldots, n $$ For $n=1$, $a=x_0$ and $b=x_1$ I get: $$ \int_{a}^{b} f(x) d x = \int_{x_0}^{x_1} f(x) d x \approx \sum_{i=0}^{1} f\left(x_{i}\right) A_{i} = f(x_0)A_0+f(x_1)A_1 $$ So my problem is that I do not know how to integrate the indicator functions $\ell_0(x)$ and $\ell_1(x)$ over $[a,b]$ ? REPLY [0 votes]: For $n=1$, the Lagrange basis polynomials $\ell_0$ and $\ell_1$ are linear functions. Moreover, $\ell_0(a) = \ell_1(b) = 1$ and $\ell_0(b) = \ell_1(a) = 0$. It follows that $$A_i = \int_a^b \ell_i(x) dx = \frac{1}{2}(b-a)$$ for $i=0,1$.
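The weights can also be computed symbolically as a cross-check; a sketch assuming SymPy is available:

```python
import sympy as sp

x, a, b = sp.symbols('x a b')

# Lagrange basis polynomials for the two nodes x0 = a, x1 = b:
l0 = (x - b) / (a - b)    # l0(a) = 1, l0(b) = 0
l1 = (x - a) / (b - a)    # l1(a) = 0, l1(b) = 1

A0 = sp.simplify(sp.integrate(l0, (x, a, b)))
A1 = sp.simplify(sp.integrate(l1, (x, a, b)))

# Both weights equal (b - a)/2, giving the trapezoid rule
#   int_a^b f(x) dx ~ (b - a)/2 * (f(a) + f(b)).
assert sp.simplify(A0 - (b - a) / 2) == 0
assert sp.simplify(A1 - (b - a) / 2) == 0
```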
TITLE: Derivation of transformer equations QUESTION [3 upvotes]: I've learnt in high school that in an ideal transformer, $$\frac{V_s}{N_s} = \frac{V_p}{N_p}$$ I looked for derivation for this formula, and in every source I look, the argument goes thus: $$|V_p| = N_p \frac{d\Phi}{dt}$$ $$|V_s| = N_s \frac{d\Phi}{dt}$$ rearrange the equations, and we have the identity. What bothers me is that doesn't Faraday's Law describe voltage induced by a changing magnetic flux? Since we have a voltage source in the primary circuit supplying Vp, how can we say $$|V_p| = N_p \frac{d\Phi}{dt}$$ then? None of the sites I came across explains this, not even Hyperphysics. I feel like I'm missing something obvious and fundamental here since I haven't had to do any physics for years. Please, enlighten me. REPLY [2 votes]: What bothers me is that doesn't Faraday's Law describe voltage induced by a changing magnetic flux? ... isn't Vp coming from the voltage source, while Faraday's Law applies for induced voltage? It's a simple application of KVL. Assuming ideal circuit elements, if there is a voltage source $V_{AC}$connected to the primary, KVL yields $$V_p = V_{AC}$$ But it is also the case that $$V_p = N_p \frac{\mathrm d\Phi}{\mathrm dt}$$ Thus, it must be that $$\frac{\mathrm d\Phi}{\mathrm dt} = \frac{V_{AC}}{N_p}$$ Whence $$V_s = N_s \frac{\mathrm d\Phi}{\mathrm dt} = V_{AC}\frac{N_s}{N_p}$$ Doesn't KVL just say that the voltage across the coil is equal to the supply voltage? Vp still is coming from the voltage source, not induction, and I wonder why Vp/Np = d(phi)/dt holds Both equations must hold. Since the voltage source fixes the voltage across the primary, by Faraday's law, the (rate of change) of flux is fixed by the voltage source. This is no different, in principle, from the case of a voltage source $V_S$ across a resistor. 
By KVL we have $$V_S = V_R$$ But, by Ohm's law, it is also the case that $$V_R = R I$$ Thus, it must be that $$I = \frac{V_S}{R}$$ Just as the resistor current is not an independent variable when the voltage across the resistor is fixed by the voltage source, the (rate of change) of transformer flux is not an independent variable when the voltage across the primary is fixed by the (AC) voltage source.
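The chain of equalities in the answer can be mirrored in a few lines of code; a trivial sketch (the function name and example numbers are mine, for illustration only):

```python
def secondary_voltage(v_ac: float, n_p: int, n_s: int) -> float:
    """Ideal transformer: the source fixes V_p = V_AC, hence dPhi/dt, hence V_s."""
    dphi_dt = v_ac / n_p      # Faraday's law on the primary:   V_p = N_p * dPhi/dt
    return n_s * dphi_dt      # Faraday's law on the secondary: V_s = N_s * dPhi/dt

# 240 V across a 60-turn primary with a 6-turn secondary steps down to 24 V.
assert secondary_voltage(240.0, 60, 6) == 24.0
```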
\begin{document} \begin{center} {\bf \Large Lebesgue measure and integration theory on non-archimedean real closed fields with archimedean value group} \end{center} \centerline{Tobias Kaiser} \vspace{0.7cm}\noi \footnotesize {{\bf Abstract.} Given a non-archimedean real closed field with archimedean value group which contains the reals, we establish for the category of semialgebraic sets and functions a full Lebesgue measure and integration theory such that the main results from the classical setting hold. The construction involves methods from model theory, o-minimal geometry and valuation theory. We set up the construction in such a way that it is determined by a section of the valuation. If the value group is isomorphic to the group of rational numbers the construction is uniquely determined up to isomorphisms. The range of the measure and integration is obtained in a controlled and tame way from the real closed field we start with. The main example is given by the case of the field of Puiseux series where the range is the polynomial ring in one variable over this field.} \normalsize \section*{Introduction} In the last two decades an abstract integration theory in algebraic and non-archimedean geometry, named motivic integration, has been set up to solve deep geometric problems. It has been introduced by Kontsevich in 1995 and was further developed by Denef and Loeser [17], by Cluckers and Loeser [7, 8] and others to obtain, using geometry, valuation theory and model theory, a measure and integration theory on Henselian discretely valued fields (for example $p$-adic integrals). Next, Hrushovski and Kazhdan [27] have defined motivic integration for algebraically closed valued fields and Yimu Yin [40] has developed an analogue for o-minimal valued fields. The ranges of motivic integration are abstract spaces such as Grothendieck rings of definable sets. We refer to Cluckers et al. 
[12, 13] for an overview on motivic integration, especially to the introduction of their Volume I. \vs{0.2cm} We develop a measure and integration theory in Lebesgue's style for the setting of ordered fields. Since every ordered field distinct from the field of reals is totally disconnected and not locally compact one has to restrict to a tame setting to have a chance to develop reasonable analysis. There exists a nice differential calculus in semialgebraic geometry over arbitrary real closed fields (see Bochnak et al. [2, 2.9]) or, more generally, in o-minimal structures over (necessarily real closed) fields (see Van den Dries [18, Chapter 7]). For semialgebraic or o-minimal geometry over the reals the classical Lebesgue measure and Lebesgue integration are available and have been successfully used for geometric questions (see for example Yomdin and Comte [41]). One can also easily establish a Lebesgue measure and integration theory for an archimedean real closed field by embedding the latter into the field of reals (the construction can be also performed via the classical limit process, see [29]). But for general, i.e. non-archimedean, real closed fields such an integration theory does not exist so far. Real valued measures have been defined (see Hrushovski et al. [28]), as has integration of piecewise constant definable functions with respect to the Euler characteristic in a motivic style (see Br\"ocker [5] and Cluckers and Edmundo [6]), but these measures lack several analytic and geometric properties of the Lebesgue measure. Ma\v{r}\'{i}kov\'{a} and Shiota [36, 37] have introduced via a limit process a restricted measure for definable sets in o-minimal fields. It is defined for $\IR$-bounded sets and the range is just a semiring. They obtain a partial transformation formula. Recently, Costin, Ehrlich and Friedman [15] have developed an integration theory on the surreals in the univariate case. 
They also show that a reasonable integration theory in the non-archimedean setting requires in general a tame setting. \vs{0.2cm} Given a non-archimedean real closed field with archimedean value group which contains the reals, we are able to construct a full Lebesgue measure and Lebesgue integration theory for the category of semialgebraic sets and functions with good control over the range such that the main properties of the real Lebesgue measure and integration hold: The Lebesgue measure for semialgebraic sets on such a field is finitely additive, monotone, translation invariant and reflects elementary geometry. The latter means, for instance, that the measure of an interval is, as it should be, the length of the interval. The Lebesgue integral in this setting is linear, and the transformation formula, Lebesgue's theorem on dominated convergence and the fundamental theorem of calculus hold with the necessary adjustments. (For example, for Lebesgue's theorem we have to work, as usual in semialgebraic geometry, with one-parameter families instead of sequences.) Moreover, a version of Fubini's theorem can be established. Our construction relies on results and methods from model theory, o-minimal geometry and valuation theory. \vs{0.2cm} First we want to mention that such a Lebesgue measure and integration theory cannot in general be performed inside a given real closed field; the integrals necessarily take values outside the given field. This can be seen by looking at the function $x\mapsto \int_1^x dt/t$ for $x>0$. A very basic version of the transformation formula gives that this function defines a logarithm. But on certain real closed fields there are no reasonable logarithms with values in the field (see Kuhlmann et al. [33]). 
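To make the classical computation behind this remark explicit: for $x,y>0$ the substitution $t=ys$ yields $$\int_1^{xy}\frac{dt}{t}=\int_1^{y}\frac{dt}{t}+\int_y^{xy}\frac{dt}{t}=\int_1^{y}\frac{dt}{t}+\int_1^{x}\frac{ds}{s},$$ so any integral obeying a basic transformation formula forces $x\mapsto\int_1^x dt/t$ to satisfy the functional equation $L(xy)=L(x)+L(y)$ of a logarithm.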
Moreover, assuming very basic facts for the measure such as the above property that the measure of an interval is its length, it cannot be $\sigma$-additive if the range is an ordered ring extension (consider the cut of the given field obtained by the natural numbers and the infinitely large elements). So a construction via a limit process is not possible. We show that, given a non-archimedean real closed field $R$ with archimedean value group which contains the reals, it is enough to work in the immediate maximal extension (see for example Kaplansky [32]) with respect to the standard real valuation and then to add a logarithm to obtain the range for a reasonable measure and integration theory. The basic idea of the construction of the measure (and similarly of the integral) is the following: Let $A$ be a semialgebraic subset of some $R^n$. Then $A$ can be expressed by a formula in the language of ordered rings augmented by symbols for the elements of $R$ (by Tarski even by finitely many equalities and inequalities of polynomials over $R$). Replacing the tuple $a$ of elements from $R$ involved in this formula by variables, we get a parameterized semialgebraic family of subsets of $\IR^n$. For this family we can apply the results of Comte, Lion and Rolin [14, 35] (see also the related work of Cluckers and D. Miller [9, 10, 11]) and obtain that the function $F$ given by computing the Lebesgue measure of the members of the family is given by a polynomial in globally subanalytic functions depending on the parameter and their logarithms (see Van den Dries and Miller [23] for the notion of globally subanalytic sets and functions). 
\vs{0.2cm} \hrule \vs{0.1cm} {\footnotesize{\itshape 2010 Mathematics Subject Classification:} 03C64, 03H05, 06F20, 12J25, 14P10, 28B15, 28E05, 32B20} \newline {\footnotesize{\itshape Keywords and phrases:} non-archimedean real closed fields, semialgebraic sets and functions, measure and integration, power series fields, logarithmic-exponential series, o-minimal theories} \newline {\footnotesize{\itshape Acknowledgements:} The author was supported in part by DFG KA 3297/1-2.} \newpage Now we want to plug the above tuple $a$ into this function and to declare this as the Lebesgue measure of the given set $A$. For that purpose we need a suitable extension of $R$ where this can be done in a reasonable way. One could think of some ultrapower of $\IR$ containing $R$. Here we would enter non-standard measure and integration theory (see for example Robinson [39] and Cutland [16]). But by this abstract choice we would lose control over the values obtained by measuring and integrating. Instead, we use the work of Van den Dries, Macintyre and Marker [20, 21, 22] and choose the following embedding, carried out in two steps. Let $\Gamma$ be the value group of $R$ with respect to the standard real valuation. Then we embed $R$ into the power series field $\IR((t^\Gamma))$. This is a model of the theory $T_\an$ of the o-minimal structure $\IR_{\an}$ (see [23]; the sets definable in $\IR_\an$ are exactly the globally subanalytic sets), so that we can evaluate the globally subanalytic functions at the tuple $a$. Since $\Gamma$ is archimedean there is an order preserving embedding of $\Gamma$ in the additive group of the reals. Hence we can view $\IR((t^\Gamma))$ as a subfield of $\IR((t^\IR))$. The latter is finally embedded in the field $\IR((t))^{\mathrm{LE}}$ of logarithmic-exponential series (in short LE-series). From the description of $F$ and the definition of the logarithm on power series we see that $F(a)$ lies in the ring $\IR((t^\Gamma))\big[\log(t^\Gamma)\big]$. 
With $\Gamma$ viewed as a subgroup of $\IR$, the properties of the logarithm on the field of LE-series give that the latter ring equals $\IR((t^\Gamma))\big[X\big]$ where $X:=\log(t^{-1})$ is independent over $\IR((t^\Gamma))$. We call this polynomial algebra over $\IR((t^\Gamma))$ the Lebesgue algebra of $R$. Every such choice of an embedding of the given real closed field into the field $\IR((t))^{\mathrm{LE}}$ gives a Lebesgue measure and integral (with all the desirable properties). To compare the constructions obtained by different choices of embeddings, we introduce a natural notion of equivalence. We call the Lebesgue measure respectively integral with respect to different embeddings isomorphic if they are the same up to an isomorphism of the range, i.e. of the Lebesgue algebra. We prove that the construction is determined up to isomorphism by the choice of a section for $R$, i.e. of a group homomorphism from the value group of $R$ into the multiplicative group of its positive elements that is compatible with the valuation. Hence we can describe the Lebesgue measure and integral in internal terms of the given real closed field. Moreover, if the value group is given by the rationals then the construction is, up to isomorphisms, uniquely determined. In the case that the given field is the field $\IP$ of real Puiseux series or convergent real Puiseux series, we obtain a nice restriction of the range of the measure and the integral; it is given by the polynomial ring $\IP[X]$ in one variable over $\IP$. \vs{0.2cm} We have explained how we construct the measure (and similarly the integral). Now we explain why the above mentioned results from classical Lebesgue measure theory and integration theory hold in our setting, independently of the chosen section. The above function $F$ is definable in the o-minimal structure $\IR_{\an,\exp}$ (see [20, 23] for this important structure). 
We have embedded $R$ into the non-standard model $\IR((t))^{\mathrm{LE}}$ of the theory $T_{\an,\exp}$ of $\IR_{\an,\exp}$ to plug in the tuple $a$. We formulate the properties of classical Lebesgue measure and integration theory as statements in the natural language of $\IR_{\an,\exp}$ and can then transfer these results to this non-standard model. But we want to obtain the results for raw data from $R$. This reduction requires some work using the set-up of the construction and o-minimality. \vs{0.2cm} The condition that the given real closed field $R$ contains the reals was introduced to keep the technical details at a reasonable level. Note that the real numbers can be adjoined in a unique way to an arbitrary real closed field and that the value group stays the same (see Prie\ss-Crampe [38, III \S 1]). \vs{0.2cm} The paper is organized as follows. After introducing the notations and terminology used throughout the work, we present and develop in Section 1 the setting and background for our constructions and results. We cover archimedean ordered abelian groups, the standard real valuation on real closed fields, the theory of power series fields over the reals including the construction of the partial logarithm, the o-minimal structures $\IR_\an$ and $\IR_{\an,\exp}$, and, finally, the results on integration of parameterized definable functions. Section 2 is devoted to the construction of the measure and the integral, and there elementary properties are shown. The construction is performed with respect to a certain tuple of raw data called a Lebesgue datum. We define in Section 3 a natural notion of isomorphism between the results obtained from different Lebesgue data. We show that our construction depends only on the choice of a section for the given real closed field. In the case that the value group is the group of rationals we actually obtain that the constructions are isomorphic. 
We present the main example given by the geometrically significant field of Puiseux series. Moreover, we consider extensions of real closed fields and the behaviour of the constructed measure with respect to the standard part map. For the rest of the paper we develop our theory for the field $\IP$ of real Puiseux series to keep the notations at a reasonable level. The functions obtained by integrating are given by so-called constructible functions (compare with Cluckers and D. Miller [9, 10, 11]). We introduce in Section 4 these functions on $\IP$ which take values in the polynomial ring $\IP[X]$ and develop the necessary analysis. This requires some work since we deal with a kind of hybrid of the $T_\an$-model $\IP$ and the $T_{\an,\exp}$-model $\IR((t))^{\mathrm{LE}}$. We use these results in Section 5 to present the main theorems of integration, namely the transformation formula, Lebesgue's theorem on dominated convergence, the fundamental theorem of calculus and Fubini's theorem. The final Section 6 is devoted to an application. We exploit the smoothing properties of convolution to show an approximation result for unary functions definable in the $T_\an$-model $\IP$. \vs{0.2cm} In upcoming papers we deal with the case of arbitrary non-archimedean real closed fields including the surreals, develop integration on semialgebraic manifolds including Stokes' theorem and apply the measure and integration theory on the field of Puiseux series to geometric questions on semialgebraic sets over the reals. \section*{Notations and terminology} Throughout the paper we assume basic knowledge of the theory of real closed fields and semialgebraic geometry (see [2]), of o-minimal structures (see Van den Dries [18]), of model theory (see Hodges [26]) and of measure and integration theory (see Bauer [1] and Bourbaki [3, 4]). 
\vs{0.1cm} By $\IN=\big\{1,2,3,\ldots\big\}$ we denote the set of natural numbers and by $\IN_0=\big\{0,1,2,\ldots\big\}$ the set of natural numbers with $0$. Let $R$ be a real closed field. We set $R^*:=\{x\in R\mid x\neq 0\}, R_{>0}:=\{x\in R\mid x>0\}$ and $R_{\geq 0}:=\{x\in R\mid x\geq 0\}$. For $a,b\in R$ with $a<b$ let $]a,b[_R=]a,b[:=\{x\in R\mid a<x<b\}$ be the open and $[a,b]_R=[a,b]:=\{x\in R\mid a\leq x\leq b\}$ be the closed interval with endpoints $a$ and $b$. By $|x|$ we denote the euclidean norm of $x\in R^n$. Given a subset $A$ of $R^n$ we denote by $\mathbbm{1}_A$ the characteristic function of $A$. For a function $f:R^n\to R$ we set $f_+:=\max(f,0)$ and $f_-:=\max(-f,0)$. By $\mathrm{graph}(f)$ we denote the graph of $f$. If $f$ is non-negative we call $\mathrm{subgraph}(f):=\big\{(x,s)\in R^{n+1}\mid 0\leq s\leq f(x)\big\}$ the subgraph of $f$. For $A\subset R^{q+n}$ and $t\in R^q$ we set $A_t:=\{x\in R^n\mid (t,x)\in A\}$. For $f:R^{q+n}\to R$ and $t\in R^q$ we define $f_t:R^n\to R, f_t(x)=f(t,x)$. By $D_\varphi$ we denote the Jacobian of a partially differentiable function $\varphi:U\to R^n$ where $U\subset R^m$ is open (in the euclidean topology). Let $R[X]$ be the polynomial ring over $R$ in one variable. We equip the polynomial ring with the standard degree $\mathrm{deg}$ and set $R[X]_{\leq n}:=\big\{f\in R[X]\mid \mathrm{deg}(f)\leq n\big\}$ for $n\in\IN_0$. Let $\ma{L}_{\mathrm{or}}=\big\{+,-,\cdot,<,0,1\big\}$ be the language of ordered rings. By Tarski, the theory of real closed fields has quantifier elimination in this language. Given a formula $\varphi$ in the language, the notation $\varphi(x_1,\ldots,x_n)$ indicates that the free variables of $\varphi$ are among the variables $x_1,\ldots,x_n$. Given an extension $R\subset S$ of real closed fields and a semialgebraic subset $A$ of $R^n$ or a semialgebraic function $f:R^n\to R$ we denote by $A_S$ and $f_S$ the canonical liftings of $A$ and $f$ to $S$, respectively (see [2, Chapter 5]). 
The above notations are used analogously in other situations if applicable. Finally, by $\infty$ we denote an element that is bigger than every element of a given ordered set. \section*{1. Preparations} In this section we present and develop the necessary background for the constructions and results of the paper. \subsection*{1.1. Ordered abelian groups} An {\bf ordered abelian group} is an additively written abelian group with total ordering $\leq$ which is compatible with the addition. We refer to Prie\ss-Crampe [38, Kapitel I] and Fuchs [25, Kapitel IV] for more information and the proofs of the results below. \vs{0.3cm} Let $\Gamma$ be an ordered abelian group. The group $\Gamma$ is {\bf divisible} if for every $\gamma\in\Gamma$ and $n\in\IN$ there is some $\delta\in\Gamma$ with $n\delta=\gamma$. Divisibility holds if and only if $\Gamma$ is a $\IQ$-vector space. \vs{0.3cm} The absolute value of $\gamma\in\Gamma$ is defined by $|\gamma|:=\max\{\gamma,-\gamma\}$. The group $\Gamma$ is called {\bf archimedean} if for every $\gamma,\delta\in\Gamma\setminus\{0\}$ there is some $n\in\IN$ with $|\gamma|\leq n|\delta|$. \vs{0.5cm} {\bf 1.1 Fact} (H\"older) \vs{0.1cm} {\it Assume that $\Gamma$ is archimedean. Then there is an embedding $\Gamma\hookrightarrow (\IR,+)$ of ordered groups.} \vs{0.5cm} So the archimedean ordered abelian groups are, up to isomorphisms of ordered groups, precisely the subgroups of the additive group of $\IR$.\\ Moreover, given $\gamma\in \Gamma_{>0}$ there is a uniquely determined embedding $\varphi:\Gamma\hookrightarrow (\IR,+)$ of ordered groups with $\varphi(\gamma)=1$. \subsection*{1.2. Standard real valuation} Let $R$ be a real closed field. It is called {\bf archimedean} if the ordered group $(R,+)$ is archimedean. We refer to [25, Kapitel VII] for the following. \vs{0.5cm} {\bf 1.2 Fact} (H\"older) \vs{0.1cm} {\it Assume that $R$ is archimedean. 
Then there is a unique field embedding $R\hookrightarrow \IR$.} \vs{0.5cm} Let $R$ be an arbitrary real closed field. The set $$\ma{O}_{R}:=\big\{f\in R\mid -n\leq f\leq n\mbox{ for some }n\in \IN\big\}$$ of {\bf bounded} elements of $R$ is a valuation ring of $R$ with maximal ideal $$\mathfrak{m}_{R}:=\big\{f\in R\mid -1/n<f<1/n\mbox{ for all }n\in \IN\big\}$$ consisting of the {\bf infinitesimal} elements of $R$. Note that $\ma{O}_{R}$ is a convex subring of $R$. Let $v_{R}:R^*\to R^*/\ma{O}_{R}^*$ be the corresponding valuation with value group $\Gamma_{R}:= R^*/\ma{O}_{R}^*$. It makes $R$ an ordered valued field, meaning that $0<f\leq g$ implies $v_{R}(f)\geq v_{R}(g)$. Note that the value group $\Gamma_{R}$ is divisible and that $\Gamma_R=\{0\}$ if and only if $R$ is archimedean. We denote the residue field $\ma{O}_{R}/\mathfrak{m}_{R}$ by $\kappa_{R}$. The residue field is an archimedean real closed field. If $R$ contains the reals then $\kappa_R=\IR$. \vs{0.5cm} A {\bf section} for $R$ is a homomorphism $s$ from the value group $\Gamma_R$ to the multiplicative group $R_{>0}$ such that $v_R(s(\gamma))=\gamma$ for all $\gamma\in \Gamma_R$. Since $\Gamma_R$ is divisible there is always a section for $R$. \subsection*{1.3. Power series fields over the reals} We refer to [38, Kap. II \S 5] and Van den Dries et al. [20, Section 1.2] for the following. \vs{0.5cm} Let $\Gamma=(\Gamma,+)$ be an additively written ordered abelian group. We consider the {\bf power series field} $\ma{R}:=\IR((t^\Gamma))$. The elements of $\ma{R}$ are the formal power series $f=\sum_{\gamma\in \Gamma}a_\gamma t^\gamma$ with exponents $\gamma\in \Gamma$ and coefficients $a_\gamma\in \IR$ such that the support of $f$, $\mathrm{supp}(f):=\{\gamma\in\Gamma\mid a_\gamma\neq 0\}$, is a well-ordered subset of $\Gamma$. 
\vs{0.3cm} {\bf Field structure:} The addition given by $$\big(\sum_{\gamma\in\Gamma}a_\gamma t^\gamma\big)+\big(\sum_{\gamma\in\Gamma}b_\gamma t^\gamma\big)=\sum_{\gamma\in\Gamma}(a_\gamma+b_\gamma)t^\gamma$$ and the multiplication given by $$\big(\sum_{\gamma\in\Gamma}a_\gamma t^\gamma\big)\cdot\big(\sum_{\gamma\in\Gamma}b_\gamma t^\gamma\big)=\sum_{\gamma\in\Gamma}(\sum_{\alpha+\beta=\gamma}a_\alpha b_\beta)t^\gamma$$ establish a field structure on $\ma{R}$ (note that for every $\gamma\in \Gamma$ the sum $\sum_{\alpha+\beta=\gamma}a_\alpha b_\beta$ is finite since the supports are well-ordered). In particular, $t^\gamma\cdot t^\delta=t^{\gamma+\delta}$ for all $\gamma,\delta\in \Gamma$. We identify $\IR$ with a subfield of $\ma{R}$ by the field embedding $\IR\hookrightarrow \ma{R}, a\mapsto at^0$. \vs{0.3cm} {\bf Ordering:} By setting $\sum_{\gamma\in\Gamma}a_\gamma t^\gamma<\sum_{\gamma\in\Gamma}b_\gamma t^\gamma$ if $a_\delta<b_\delta$ where $\delta=\min\big\{\gamma\in\Gamma\mid a_\gamma\neq b_\gamma\big\}$, the field $\ma{R}$ becomes an ordered field. \vs{0.3cm} The ordered field $\ma{R}$ is real closed if and only if $\Gamma$ is divisible. From now on we assume this. \vs{0.3cm} {\bf Valuation:} The valuation $v_{\ma{R}}$ is given by $\mathrm{ord}:\ma{R}^*\to \Gamma, f\mapsto \min \big(\mathrm{supp}(f)\big)$. We have $\Gamma_{\ma{R}}=\Gamma$, $$\ma{O}_{\ma{R}}=\IR((t^{\Gamma_{\geq 0}})):=\big\{f\in \ma{R}\mid \mathrm{supp}(f)\subset \Gamma_{\geq 0}\big\}$$ and $$\mathfrak{m}_{\ma{R}}=\IR((t^{\Gamma_{>0}})):=\big\{f\in \ma{R}\mid \mathrm{supp}(f)\subset \Gamma_{> 0}\big\}.$$ Note that $\kappa_{\ma{R}}=\IR$. \vs{0.5cm} {\bf 1.3 Fact} \vs{0.1cm} {\it Let $R$ be a real closed field that contains the reals and let $\ma{R}:=\IR((t^{\Gamma_R}))$. Let $s:\Gamma_R\to R_{>0}$ be a section for $R$. Then there is a field embedding $\sigma:R\hookrightarrow \ma{R}$ such that $\sigma(s(\gamma))=t^\gamma$ for all $\gamma\in\Gamma_R$. 
Such an embedding is valuation and order preserving.} \vs{0.5cm} We say that $\sigma$ is an embedding with respect to the section $s$. Note that it is in general not uniquely determined. Note also that such an embedding is the same as having a valuation preserving embedding with $t^\Gamma$ in its image. \vs{0.5cm} The power series field $\ma{R}=\IR((t^{\Gamma}))$ carries a {\bf partial logarithm} $$\log_\ma{R}=\log: \big(\IR_{>0}+\mathfrak{m}_\ma{R},\cdot\big)\stackrel{\cong}{\longrightarrow} \big(\ma{O}_\ma{R},+\big)$$ extending the logarithm on the reals (compare with [20, Section 1.2] and S. Kuhlmann [34]) which is defined as follows: \vs{0.3cm} Let $f\in \IR_{>0}+\mathfrak{m}_\ma{R}$. Then there are unique $a\in\IR_{>0}$ and $h\in\mathfrak{m}_\ma{R}$ such that $f=a(1+h)$. Then $\log(f)=\log(a)+L(h)$ where $$L(x)=\sum_{j=1}^\infty \frac{(-1)^{j+1}}{j}\, x^j$$ is the {\bf logarithmic series}. The partial logarithm gives an order isomorphism between the multiplicative group of positive units of the ordered valuation ring $\ma{O}_\ma{R}$ and the additive group of the latter. Its inverse is given by the partial exponential function. \subsection*{1.4. The o-minimal structures $\IR_\an$ and $\IR_{\an,\exp}$} For $n\in\IN_0$ let $\IR\{x_1,\ldots,x_n\}$ denote the ring of convergent real power series in $n$ variables and $$\IR\langle x_1,\ldots,x_n\rangle:=\Big\{f\in \IR\{x_1,\ldots,x_n\}\,\Big\vert\, f\mbox{ converges on a neighbourhood of }[-1,1]^n\Big\}.$$ Note that we obtain in the case $n=0$ the real field $\IR$. Given $f\in \IR\langle x_1,\ldots,x_n\rangle$, the function $$\widetilde{f}:\IR^n\to\IR, x\mapsto \left\{\begin{array}{lll} f(x),&&x\in [-1,1]^n,\\ &\mbox{if}&\\ 0,&&x\notin [-1,1]^n,\\ \end{array}\right.$$ is called a {\bf restricted analytic function}. (Note that $\IR^0=\{0\}$.) 
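Returning for a moment to the power series fields of 1.3: the field operations, the valuation $\mathrm{ord}$ and the partial logarithm can be sketched concretely for finitely supported series with nonnegative rational exponents. This is only a toy model (the dict-of-coefficients representation and the truncation order $N$ are choices made here, not part of the text; finite supports make well-orderedness automatic). It checks coefficientwise that the partial logarithm turns products into sums:

```python
from fractions import Fraction
import math

q = Fraction
N = q(4)  # truncation: discard all exponents >= N

# Toy model of finitely supported elements of R((t^Q)) with exponents >= 0:
# a series is a dict {rational exponent: real coefficient}.
def mul(f, g):
    h = {}
    for a, ca in f.items():
        for b, cb in g.items():
            if a + b < N:
                h[a + b] = h.get(a + b, 0.0) + ca * cb
    return {e: c for e, c in h.items() if c != 0.0}

def ord_(f):
    """The valuation ord: the minimum of the support."""
    return min(f)

def log_unit(f):
    """log(a0(1+h)) = log(a0) + L(h), L(x) = sum_j (-1)^(j+1) x^j / j.

    Requires f = a0(1+h) with a0 > 0 real and h infinitesimal (ord(h) > 0);
    then ord(h^j) >= j*ord(h), so the sum is finite modulo exponents >= N.
    """
    a0 = f[q(0)]
    h = {e: c / a0 for e, c in f.items() if e > 0}
    out = {q(0): math.log(a0)}
    p, j = {q(0): 1.0}, 1
    while p:
        p = mul(p, h)                       # running power h^j
        for e, c in p.items():
            out[e] = out.get(e, 0.0) + (-1) ** (j + 1) * c / j
        j += 1
    return out

f = {q(0): 2.0, q(1, 2): 2.0}               # 2(1 + t^(1/2))
g = {q(0): 3.0, q(1): 3.0}                  # 3(1 + t)
fg = mul(f, g)
assert ord_(fg) == 0                        # fg is a unit of the valuation ring

# The partial logarithm is a homomorphism: log(fg) = log(f) + log(g).
lhs, rx, ry = log_unit(fg), log_unit(f), log_unit(g)
rhs = {e: rx.get(e, 0.0) + ry.get(e, 0.0) for e in set(rx) | set(ry)}
assert all(abs(lhs.get(e, 0.0) - rhs.get(e, 0.0)) < 1e-9
           for e in set(lhs) | set(rhs))
```

The truncation plays the role of working modulo the set of series with support in $\Gamma_{\geq N}$, which is an ideal of the subring with nonnegative supports.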
The language $\ma{L}_\an$ is obtained by augmenting the language $\ma{L}_{\mathrm{or}}=\big\{+,-,\cdot,<,0,1\big\}$ of ordered rings by a function symbol for every restricted analytic function. Let $\IR_\an$ be the natural $\ma{L}_\an$-structure on the real field. The sets resp. functions definable in $\IR_\an$ are precisely the globally subanalytic sets resp. functions. These are the sets resp. functions that are subanalytic in the ambient projective space (see Van den Dries and Miller [23, p. 505]). The $\ma{L}_\an$-theory $\mbox{Th}(\IR_\an)$ is denoted by $T_\an$. \vs{0.5cm} Let $R$ be a model of $T_\an$. We again call the subsets and functions that are $\ma{L}_\an$-definable in $R$ {\bf globally subanalytic}. Note that a model $R$ of $T_\an$ is real closed and contains the reals. Semialgebraic sets and functions are globally subanalytic. \vs{0.5cm} Let $\ma{L}_\an^\dagger$ be the extension by definitions of $\ma{L}_\an$ by a unary function symbol $^{-1}$ for multiplicative inverse and let $\ma{L}_\an^\ddagger$ be the extension by definitions of $\ma{L}_\an^\dagger$ by unary function symbols $\sqrt[n]{}$ for $n$-th root where $n\in \IN$ with $n\geq 2$. These functions are interpreted in $\IR$ in the obvious way. By [20], the theory $T_\an$ has quantifier elimination in the language $\ma{L}_\an^\dagger$ and is universally axiomatizable in the language $\ma{L}_\an^\ddagger$. From this one obtains the following. \vs{0.5cm} {\bf 1.4 Fact} \vs{0.1cm} {\it Let $R$ be a model of $T_\an$ and let $A\subset R$. Then the $\ma{L}_\an^\ddagger$-substructure $\langle A\rangle_R$ of $R$ generated by $A$ is a model of $T_\an$.} \vs{0.5cm} Examples of models of $T_\an$ are given by fields of power series. \vs{0.5cm} {\bf 1.5 Fact} \vs{0.1cm} {\it Let $\Gamma$ be an ordered abelian group that is divisible. The real closed field $\ma{R}:=\IR((t^{\Gamma}))$ has a natural expansion to a model of $T_\an$.
The restricted logarithm is given by $\log_{\ma{R}}|_{]1/2,3/2[_\ma{R}}$.} \vs{0.5cm} We write $\ma{R}_\an=\IR((t^\Gamma))_\an$ if we view the power series field over $\IR$ as a model of $T_\an$. \vs{0.5cm} {\bf 1.6 Proposition} {\it \begin{itemize} \item[(1)] Let $R$ be a real closed field containing the reals. Assume that $R$ has archimedean value group. Then there is at most one $\ma{L}_\an$-structure on $R$ making it into a model of $T_\an$. \item[(2)] Let $R,S$ be models of $T_\an$ with archimedean value groups. Then every field embedding $\varphi:R\hookrightarrow S$ is an $\ma{L}_\an$-embedding. Assuming that $\Gamma_R\neq \{0\}$ if $\Gamma_S\neq\{0\}$, we obtain that $\varphi$ is continuous. \item[(3)] Let $\Gamma$ be an archimedean ordered group and let $R$ be a real closed subfield of $\ma{R}:=\IR((t^\Gamma))$. Then $R$ is dense in $\langle R\rangle_\ma{R}$. \end{itemize}} \vs{0.1cm} {\bf Proof:} \vs{0.1cm} (1): Let $n\in\IN$, let $f\in\IR\langle x_1,\ldots,x_n\rangle$ and let $a=(a_1,\ldots,a_n)\in R^n$. We have to show that $\widetilde{f}(a)$ can be defined in only one way once we want to establish an $\ma{L}_\an$-structure on $R$ making it into a $T_\an$-model. By the axiomatization of $T_\an$ (see [20, Section 2]), in particular by Axiom AC4) there, it is enough to show this for $a\in\mathfrak{m}_R^n$. Let $f=\sum_{\alpha\in\IN_0^n}c_\alpha x^\alpha$. For $N\in\IN$ let $f_N:=\sum_{||\alpha||\leq N}c_\alpha x^\alpha\in \IR[x_1,\ldots,x_n]$ and $g_N:=f-f_N$. Then there is some $C_N\in\IR_{>0}$ such that $|g_N(x)|\leq C_N|x|^N$ for all $x\in [-1,1]^n$. We have, again by the axiomatization of $T_\an$, that $\widetilde{f}(a)-\widetilde{f_N}(a)=\widetilde{g_N}(a)$ for $N\in\IN$. Let $\gamma:=v_R(|a|)\in \Gamma_{>0}\cup\{\infty\}$ where $\Gamma:=\Gamma_R$. Then $v_R(\widetilde{g_N}(a))\geq N\gamma$. Since $\Gamma$ is archimedean we see that $\lim_{N\to \infty}\widetilde{g_N}(a)=0$ (in the order topology). So $\widetilde{f}(a)=\lim_{N\to\infty}\widetilde{f_N}(a)$.
The values $\widetilde{f_N}(a)=f_N(a)$ are uniquely determined in $R$ since these functions are polynomials. So there is only one choice for $\widetilde{f}(a)$. \vs{0.2cm} (2): Since $\varphi$ is clearly an embedding of real closed fields we have to show the following. Let $n\in\IN$ and let $f\in \IR\langle x_1,\ldots,x_n\rangle$. Then $\varphi\big(\widetilde{f}(a_1,\ldots,a_n)\big)=\widetilde{f}\big(\varphi(a_1),\ldots,\varphi(a_n)\big)$ for $a=(a_1,\ldots,a_n)\in R^n$. Again, it is enough to show this for $a\in\mathfrak{m}_R^n$. For $N\in\IN$ let $f_N$ and $g_N$ be defined as above. We obtain that for $N\in\IN$ \begin{eqnarray*} \varphi\big(\widetilde{f}(a)\big)-\widetilde{f}\big(\varphi(a)\big)&=& \varphi\big(\widetilde{f_N}(a)+\widetilde{g_N}(a)\big)-\Big(\widetilde{f_N}\big(\varphi(a)\big)+\widetilde{g_N}\big(\varphi(a)\big)\Big)\\ &=&\Big(\varphi\big(\widetilde{f_N}(a)\big)+\varphi\big(\widetilde{g_N}(a)\big)\Big)-\Big(\widetilde{f_N}\big(\varphi(a)\big)+\widetilde{g_N}\big(\varphi(a)\big)\Big)\\ &=&\varphi\big(\widetilde{g_N}(a)\big)-\widetilde{g_N}\big(\varphi(a)\big)=:b_N. \end{eqnarray*} Since $|a|$ is infinitesimal we obtain that $\varphi(|a|)=|\varphi(a)|$ is infinitesimal and hence $\delta:=v_S(\varphi(|a|))\in\Gamma_S\cup\{\infty\}$ is positive. From $|\widetilde{g_N}(\varphi(a))|\leq C_N|\varphi(a)|^N$ we obtain that $v_S\big(\widetilde{g_N}(\varphi(a))\big)\geq N\delta.$ From $|\widetilde{g_N}(a)|\leq C_N|a|^N$ we obtain $|\varphi(\widetilde{g_N}(a))|\leq C_N\big(\varphi(|a|)\big)^N$ and therefore $v_S\big(\varphi\big(\widetilde{g_N}(a)\big)\big)\geq N\delta.$ Since $\Gamma_S$ is archimedean and $\delta>0$ we see that $\lim_{N\to\infty}b_N=0$. This shows that $\varphi\big(\widetilde{f}(a)\big)=\widetilde{f}\big(\varphi(a)\big)$. For the second statement we assume that $\Gamma_R\neq \{0\}$. It is enough to show that $\varphi$ is continuous at $0$. Choose an infinitesimal element $x_0$ of $R_{>0}$. Then $y_0:=\varphi(x_0)\in S_{>0}$ is also infinitesimal.
Let $\varepsilon\in S_{>0}$. Since $\Gamma_S$ is archimedean we find some $n\in\IN$ such that $y_0^n<\varepsilon$. Setting $\delta:=x_0^n$ we obtain $|\varphi(x)|<\varepsilon$ for $x\in R$ with $|x|<\delta$. \vs{0.2cm} (3): We define the sequence $(R_k)_{k\in\IN_0}$ recursively by $R_0:=R$ and letting $R_{k+1}$ be the field generated over $R_k$ by the elements $h(a)$ where $h$ is an $n$-ary function symbol in $\ma{L}_\an^\ddagger$, $a\in R_k^n$ and $n\in\IN$. In the proof of (1) we have seen that we find a sequence $(c_N)_{N\in \IN}$ in $R_k$ such that $\lim_{N\to\infty}c_N=h(a)$ if $h$ corresponds to a restricted analytic function. Dealing then with sums, products, inverses and arbitrary roots one sees that $R_k$ is dense in $R_{k+1}$ and that the value group does not enlarge. Since $\langle R\rangle_\ma{R}=\bigcup_{k\in\IN_0}R_k$ we obtain the claim. \hfill$\Box$ \vs{0.5cm} Let $\ma{L}_{\an,\exp}$ be the extension of $\ma{L}_\an$ by a unary function symbol $\exp$ for the exponential function and let $\IR_{\an,\exp}$ be the natural $\ma{L}_{\an,\exp}$-structure on the real field. Its theory $\mathrm{Th}(\IR_{\an,\exp})$ is denoted by $T_{\an,\exp}$. Let $\ma{L}_{\an,\exp,\log}$ be the extension by definitions of $\ma{L}_{\an,\exp}$ by a unary function symbol $\log$ with the natural interpretation on $\IR$. By [20], the theory $T_{\an,\exp}$ has quantifier elimination and is universally axiomatizable in the language $\ma{L}_{\an,\exp,\log}$. The theory $T_{\an,\exp}$ extends the theory $T_\an$. \vs{0.5cm} An important non-standard model of $T_{\an,\exp}$ is the {\bf field of logarithmic-exponential series} (or field of $LE$-series) $\IR((t))^{\mathrm{LE}}$ introduced by Van den Dries et al. [21].
It contains series of the form $$t^{-1}e^{1/t}+2e^{1/t}+t^{-1/2}-\log t+6+t+2t^2+\ldots+e^{-1/t^2}-te^{-1/t^2}+\ldots+e^{-e^{1/t}}.$$ We need the following: \vs{0.5cm} {\bf 1.7 Fact} \vs{0.1cm} {\it The power series field $\IR((t^\IR))_\an$ is an $\ma{L}_\an$-substructure of $\IR((t))^{\mathrm{LE}}$. The logarithm on $\IR((t))^{\mathrm{LE}}$ extends the partial logarithm on $\IR((t^\IR))$. For $r\in\IR$ we have that $\log(t^r)=r\log(t)$ with $\log(t)\in \IR((t))^{\mathrm{LE}}$ transcendental over $\IR((t^\IR))$.} \subsection*{1.5. Integration of parameterized definable functions} Comte, Lion and Rolin [14] (see also Lion and Rolin [35]) have shown the following seminal theorem: \vs{0.5cm} {\bf 1.8 Fact} \vs{0.1cm} {\it Let $n,k\in\IN$ with $k\leq n$ and let $q\in\IN_0$. Let $A\subset \IR^{q+n}$ be globally subanalytic such that $\dim(A_t)\leq k$ for all $t\in\IR^q$. The following holds: \begin{itemize} \item[(1)] The set $$\mathrm{Fin}_k(A):=\big\{t\in \IR^q\mid \mathrm{vol}_k(A_t)<\infty\big\}$$ is globally subanalytic. \item[(2)] There are $r\in\IN$, a real polynomial $P$ in $2r$ variables and globally subanalytic functions $\varphi_1,\ldots,\varphi_r:\mathrm{Fin}_k(A)\to \IR_{>0}$ such that $$\mathrm{vol}_k(A_t)=P\big(\varphi_1(t),\ldots,\varphi_r(t),\log(\varphi_1(t)),\ldots,\log(\varphi_r(t))\big)$$ for all $t\in \mathrm{Fin}_k(A)$. \end{itemize}} \vs{0.2cm} Here $\mathrm{vol}_k$ denotes the $k$-dimensional Hausdorff measure on $\IR^n$. It is also shown that the set $\mathrm{Fin}_k(A)$ is semialgebraic if $A$ is semialgebraic. A function is called {\bf constructible} if it is a finite sum of finite products of globally subanalytic functions and logarithms of positive globally subanalytic functions (see Cluckers and D. Miller [9, 10, 11]). We need the following version of the above theorem (where $\lambda_n$ denotes the Lebesgue measure on $\IR^n$): \vs{0.5cm} {\bf 1.9 Corollary} \vs{0.1cm} {\it Let $n\in\IN$ and let $q\in\IN_0$.
\begin{itemize} \item[(A)] Let $A\subset\IR^{q+n}$ be globally subanalytic. The following holds: \begin{itemize} \item[(1)] The set $$\mathrm{Fin}(A):=\big\{t\in \IR^q\mid \lambda_n(A_t)<\infty\big\}$$ is globally subanalytic. \item[(2)] There is a constructible function $g:\IR^q\to \IR$ such that $\lambda_n(A_t)=g(t)$ for all $t\in \mathrm{Fin}(A)$. \end{itemize} \item[(B)] Let $f:\IR^{q+n}\to \IR$ be globally subanalytic. The following holds: \begin{itemize} \item[(1)] The set $$\mathrm{Fin}(f):=\Big\{t\in \IR^q\;\Big\vert\; \int_{\IR^n}|f_t(x)|\,d\lambda_n(x)<\infty\Big\}$$ is globally subanalytic. \item[(2)] There is a constructible function $h:\IR^q\to\IR$ such that $$\int_{\IR^n}f_t(x)\,d\lambda_n(x)=h(t)$$ for all $t\in \mathrm{Fin}(f)$. \end{itemize} \end{itemize}} {\bf Proof:} \vs{0.1cm} (A) is Fact 1.8 in the case $k=n$. \vs{0.2cm} (B) We use the following fact from Lebesgue integration theory. Let $g:\IR^n\to\IR_{\geq 0}$ be measurable. Then $\int_{\IR^n}g(x)\,d\lambda_n(x)=\lambda_{n+1}\big(\mathrm{subgraph}(g)\big)$. Applying this to $f_+$ and $f_-$ we are done by (A). \hfill$\Box$ \vs{0.5cm} We have that $\mathrm{Fin}(f)$ is semialgebraic if $f$ is semialgebraic (see also [30, Theorem 2.2]). Moreover, in this case one has more detailed information on the constructible functions obtained by integrating (see [31]). But we will not need this. \vs{0.5cm} Cluckers and D. Miller [9, 10, 11] have extended the work of Comte et al.: \vs{0.5cm} {\bf 1.10 Fact} \vs{0.1cm} {\it Let $n\in\IN$ and let $q\in\IN_0$. Let $f:\IR^{q+n}\to \IR$ be constructible. The following holds: \begin{itemize} \item[(1)] There is a constructible function $g:\IR^q\to\IR$ such that $$\mathrm{Fin}(f):=\Big\{t\in \IR^q\;\Big\vert\; \int_{\IR^n}|f_t(x)|\,d\lambda_n(x)<\infty\Big\}$$ equals the zero set of $g$. \item[(2)] There is a constructible function $h:\IR^q\to\IR$ such that $$\int_{\IR^n}f_t(x)\,d\lambda_n(x)=h(t)$$ for all $t\in \mathrm{Fin}(f)$. \end{itemize}} \section*{2.
Construction of the measure and the integral} Let $R$ be a real closed field with value group $\Gamma:=\Gamma_R$. We assume that $R$ contains the reals and that $\Gamma$ is {\bf archimedean}. \subsection*{2.1 Lebesgue data} {\bf 2.1 Definition} \vs{0.1cm} A {\bf Lebesgue datum} for $R$ is a tuple $\alpha=\big(s,\sigma,\tau\big)$ where $s:\Gamma\to R_{>0}$ is a section for $R$, $\sigma:R\hookrightarrow \IR((t^\Gamma))$ is a field embedding with respect to $s$ and $\tau:\Gamma\hookrightarrow (\IR,+)$ is an embedding of ordered groups. The field embedding $$\Theta_\alpha:R\hookrightarrow \IR((t^\Gamma))\hookrightarrow \IR((t^\IR))\hookrightarrow \IR((t))^{\mathrm{LE}}$$ where \begin{itemize} \item[(a)] $R\hookrightarrow \IR((t^\Gamma))$ is given by $\sigma$, \item[(b)] $\IR((t^\Gamma))\hookrightarrow \IR((t^\IR))$ is given by $\sum_{\gamma\in \Gamma}a_\gamma t^\gamma\mapsto \sum_{\delta\in\tau(\Gamma)} a_{\tau^{-1}(\delta)} t^{\delta}$ and \item[(c)] $\IR((t^\IR))\hookrightarrow \IR((t))^{\mathrm{LE}}$ is the inclusion \end{itemize} is called the {\bf associated Lebesgue embedding}. \vs{0.5cm} Note that by Fact 1.1 and Fact 1.3 the field $R$ has a Lebesgue datum. In the case that $R=\IR$ there is only one Lebesgue datum, which is trivial. \vs{0.5cm} For the rest of the section we fix a Lebesgue datum $\alpha=(s,\sigma,\tau)$ for $R$ with the associated Lebesgue embedding $\Theta:=\Theta_\alpha:R\to \IR((t))^{\mathrm{LE}}$. Via the embedding $\Theta$ we view $R$ as a subfield of $\IR((t))^{\mathrm{LE}}$. The latter field is abbreviated by $\ma{S}$ in what follows. \subsection*{2.2 Construction of the measure and elementary properties} \vs{0.1cm} {\bf 2.2 Construction} \vs{0.1cm} Let $A\subset R^n$ be semialgebraic. We define its measure $\lambda_{R,n}(A)=\lambda_{R,n}^\alpha(A)\in \ma{S}_{\geq 0}\cup\{\infty\}$ as follows.
Take a formula $\phi(x,y)$ in the language of ordered rings, $x=(x_1,\ldots,x_n), y=(y_1,\ldots,y_q)$, and a point $a\in R^q$ such that $A=\phi(R^n,a)$. Then the graph of the function $F:\IR^q\to \IR$ given by $$F(c):=\lambda_n\big(\phi(\IR^n,c)\big)\mbox{ if }\lambda_n\big(\phi(\IR^n,c)\big)<\infty$$ and $F(c)=-1$ otherwise, is, by the results mentioned in Section 1.5, defined in $\IR_{\an,\exp}$ by an $\ma{L}_{\an,\exp}$-formula $\psi(y,z)$. Then the formula $\psi(y,z)$ defines in $\ma{S}$ the graph of a function $F_\ma{S}:\ma{S}^q\to \ma{S}$. A routine model theoretic argument shows that $F_\ma{S}(a)$ does not depend on the choices of $\phi, a$ and $\psi$. This allows us to define $\lambda_{R,n}(A):=F_{\ma{S}}(a)$ if $F_\ma{S}(a)\geq 0$, and $\lambda_{R,n}(A):=\infty$ otherwise (that is, if $F_\ma{S}(a)=-1$). \vs{0.5cm} With a common model theoretic transfer argument we obtain the usual elementary properties of the Lebesgue measure. \vs{0.5cm} {\bf 2.3 Elementary properties} {\it \begin{itemize} \item[(1)] {\bf Additivity:}\\ Let $A,B\subset R^n$ be semialgebraic and disjoint. Then $\lambda_{R,n}(A\cup B)=\lambda_{R,n}(A)+\lambda_{R,n}(B)$. \item[(2)] {\bf Monotonicity:}\\ Let $A,B\subset R^n$ be semialgebraic such that $A\subset B$. Then $\lambda_{R,n}(A)\leq \lambda_{R,n}(B)$. \item[(3)] {\bf Translation invariance:}\\ Let $A\subset R^n$ be semialgebraic and let $c\in R^n$. Then $\lambda_{R,n}(A+c)=\lambda_{R,n}(A)$. \item[(4)] {\bf Product formula:}\\ Let $A_1\subset R^m$ and $A_2\subset R^n$ be semialgebraic. Then $$\lambda_{R,m+n}(A_1\times A_2)=\lambda_{R,m}(A_1)\lambda_{R,n}(A_2).$$ \item[(5)] {\bf Volume of cubes:}\\ Let $c_j,d_j\in R$ with $c_j\leq d_j$ for $j\in\{1,\ldots,n\}$. Then $$\lambda_{R,n}\big(\prod_{j=1}^n[c_j,d_j]\big)=\prod_{j=1}^n(d_j-c_j).$$ \item[(6)] {\bf Infinity:}\\ We have $\lambda_{R,n}(R^n)=\infty$ for all $n\in\IN$.
\end{itemize}} {\bf Proof:} \vs{0.1cm} For the readers' convenience we present the proof of (2): \vs{0.1cm} We find formulas $\phi(x,y),\widetilde{\phi}(x,y)$ in the language of ordered rings, $x=(x_1,\ldots,x_n), y=(y_1,\ldots,y_q)$, and a point $a\in R^q$ such that $A=\phi(R^n,a)$ and $B=\widetilde{\phi}(R^n,a)$. Consider the functions $F:\IR^q\to \IR$ given by $F(c):=\lambda_n\big(\phi(\IR^n,c)\big)$ if $\lambda_n\big(\phi(\IR^n,c)\big)<\infty$ and $F(c)=-1$ otherwise, and $\widetilde{F}:\IR^q\to \IR$ given by $\widetilde{F}(c):=\lambda_n\big(\widetilde{\phi}(\IR^n,c)\big)$ if $\lambda_n\big(\widetilde{\phi}(\IR^n,c)\big)<\infty$ and $\widetilde{F}(c)=-1$ otherwise. Let $\psi(y,z)$ be an $\ma{L}_{\an,\exp}$-formula such that $\psi(\IR^{q+1})=\mathrm{graph}(F)$, and let $\widetilde{\psi}(y,z)$ be an $\ma{L}_{\an,\exp}$-formula such that $\widetilde{\psi}(\IR^{q+1})=\mathrm{graph}(\widetilde{F})$. Let $\Sigma(y)$ be the $\ma{L}_{\an,\exp}$-formula $$\forall t\;\forall\widetilde{t}\;\Big[\Big(\forall x\;\big(\phi(x,y)\rightarrow \widetilde{\phi}(x,y)\big)\wedge \psi(y,t)\wedge \widetilde{\psi}(y,\widetilde{t})\Big) \longrightarrow \Big(\widetilde{t}\neq -1\rightarrow \big(t\neq -1\wedge t\leq \widetilde{t}\,\big)\Big)\Big].$$ Then $\IR_{\an,\exp}\models \forall y\;\Sigma(y)$. So $\ma{S}\models \forall y\;\Sigma(y)$ and therefore $\Sigma(a)$ holds in $\ma{S}$. By construction of the measure, we get that $\lambda_{R,n}(A)\leq \lambda_{R,n}(B)$. \hfill$\Box$ \vs{0.5cm} Property (5) gives that the measure of a cube is the naive volume. The same holds for other basic geometric objects. \vs{0.5cm} {\bf 2.4 Example} \vs{0.1cm} {\it Let $B$ be a ball in $R^n$ with radius $r\in R_{>0}$. Then $\lambda_{R,n}(B)=\omega_n r^n$ where $\omega_n$ is the volume of the unit ball in $\IR^n$.
In particular, we obtain in the case $n=2$ that $\lambda_{R,2}(B)=\pi r^2$.} \vs{0.5cm} From semialgebraic geometry on the reals (see [30, Remark 2.1]) we obtain the following: \vs{0.5cm} {\bf 2.5 Proposition} \vs{0.1cm} {\it Let $A\subset R^n$ be semialgebraic. The following holds: \begin{itemize} \item[(1)] $\lambda_{R,n}(A)>0$ if and only if $\dim(A)=n$. \item[(2)] $\lambda_{R,n}(\overline{A})=\lambda_{R,n}(A)$. \end{itemize}} \vs{0.2cm} A semialgebraic subset $A$ of $R^n$ is called {\bf integrable} if $\lambda_{R,n}(A)<\infty$. By $\chi_{R,n}=\chi_{R,n}^\alpha$ we denote the collection of all semialgebraic subsets of $R^n$ that are integrable. \subsection*{2.3. Construction of the integral and elementary properties} For defining integration we perform a similar construction. \vs{0.5cm} {\bf 2.6 Construction} \vs{0.1cm} Let $f:R^n\to R_{\geq 0}$ be semialgebraic. We define its integral $\int_{R^n}f(x)\,dx=\int_{R^n}^\alpha f(x)\,dx\in \ma{S}_{\geq 0}\cup\{\infty\}$ as follows. Take a formula $\phi(x,s,y)$ in the language of ordered rings, $x=(x_1,\ldots,x_n), y=(y_1,\ldots,y_q)$, and a point $a\in R^q$ such that $\mathrm{graph}(f)=\phi(R^{n+1},a)$. Thereby we choose $\phi(x,s,y)$ in such a way that $\phi(\IR^{n+1},c)$ is the graph of a non-negative function $g_c:\IR^n\to \IR$ for every $c\in \IR^q$. The graph of the function $F:\IR^q\to \IR$ given by $$F(c):=\int_{\IR^n} g_c(x)\, dx \mbox{ if }\int_{\IR^n} g_c(x)\, dx<\infty$$ and $F(c)=-1$ otherwise, is, by the results mentioned in Section 1.5, defined in $\IR_{\an,\exp}$ by an $\ma{L}_{\an,\exp}$-formula $\psi(y,z)$. Then the formula $\psi(y,z)$ defines in $\ma{S}$ the graph of a function $F_\ma{S}:\ma{S}^q\to \ma{S}$. The model theoretic argument used in Construction 2.2 shows that $F_\ma{S}(a)$ does not depend on the choices of $\phi, a$ and $\psi$. This allows us to define $\int_{R^n}f(x)\,dx:=F_{\ma{S}}(a)$ if $F_\ma{S}(a)\geq 0$, and $\int_{R^n}f(x)\,dx:=\infty$ otherwise (that is, if $F_\ma{S}(a)=-1$).
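To make Construction 2.6 concrete, here is a small worked example of our own (it is not taken from the text): integrating the identity function over an interval whose right endpoint may be an infinite element of $R$.

```latex
% Worked example (ours). Fix c in R_{>0} and let f: R -> R_{>=0} be
%   f(x) = x  for 0 <= x <= c,   f(x) = 0  otherwise,
% defined by a formula phi(x,s,y) with a single parameter y interpreted
% as c. Over the reals, the parameterized integral is semialgebraic:
$$F(c')=\int_{\IR}g_{c'}(x)\,dx=\frac{(c')^2}{2}\ \mbox{ for }c'\geq 0,
\qquad F(c')=0\ \mbox{ for }c'<0.$$
% Transferring the formula defining the graph of F to S and evaluating
% at Theta(c) yields, identifying c with Theta(c),
$$\int_{R}f(x)\,dx=F_{\ma{S}}\big(\Theta(c)\big)=\frac{c^2}{2}.$$
```

In this example the integral lands in $R$ itself; logarithmic contributions, which in general produce values outside $R$, only arise for integrands such as $1/x$.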
\vs{0.5cm} Applying the common model theoretic transfer argument we obtain the well-known connections to the Lebesgue measure. \vs{0.5cm} {\bf 2.7 Proposition} {\it \begin{itemize} \item[(1)] Let $f:R^n\to R_{\geq 0}$ be semialgebraic. Then $\int_{R^n}f(x)\,dx=\lambda_{R,n+1}\big(\mathrm{subgraph}(f)\big)$. \item[(2)] Let $A\subset R^n$ be semialgebraic. Then $\lambda_{R,n}(A)=\int_{R^n} \mathbbm{1}_A(x)\,dx$. \end{itemize}} \vs{0.5cm} In view of the above we often write $\int f\,d\lambda_{R,n}$ or $\int f(x)\,d\lambda_{R,n}(x)$ for $\int_{R^n}f(x)\,dx$. \vs{0.5cm} We extend the integral to semialgebraic functions that are not necessarily non-negative in the usual way. Given a semialgebraic function $f:R^n\to R$ we have with $f_+:=\max(f,0)$ and $f_-:=\max(-f,0)$ that $f_+,f_-:R^n\to R_{\geq 0}$ are semialgebraic and that $f=f_+-f_-$. We call $f$ {\bf integrable} if $\int f_+\,d\lambda_{R,n}<\infty$ and $\int f_-\,d\lambda_{R,n}<\infty$. We then set $$\int f\,d\lambda_{R,n}:=\int f_+\,d\lambda_{R,n}- \int f_-\,d\lambda_{R,n}\in\ma{S}.$$ By $\ma{L}^1_{R,n}=\ma{L}_{R,n}^{1,\alpha}$ we denote the set of integrable functions. Finally, we set $$\mathrm{Int}_{R,n}=\mathrm{Int}^\alpha_{R,n}:\ma{L}^1_{R,n}\to \ma{S}, f\mapsto \int f\,d\lambda_{R,n}.$$ \newpage {\bf 2.8 Elementary properties} \vs{0.1cm} {\it $\ma{L}^1_{R,n}$ is an $R$-vector space and the functional $$\mathrm{Int}_{R,n}:\ma{L}^1_{R,n}\to \ma{S}, f\mapsto \int f\,d\lambda_{R,n},$$ is a monotone $R$-linear map.} \vs{0.5cm} Let $A\subset R^n$ be semialgebraic and let $f:A\to R_{\geq 0}$ be semialgebraic. As usual one defines $\int_A f\,d\lambda_{R,n}$ as $\int \hat{f}\,d\lambda_{R,n}$ where $\hat{f}$ is the extension of $f$ by $0$ to $R^n$.
Similar to above, one obtains the $R$-vector space $\ma{L}^1_{R,n}(A)=\ma{L}_{R,n}^{1,\alpha}(A)$ of semialgebraic functions integrable {\bf over} $A$ and the linear and monotone functional $\ma{L}^1_{R,n}(A)\to \ma{S}, f\mapsto \int_A f\,d\lambda_{R,n}.$ \vs{0.5cm} From the construction of the measure and integral it is obvious that in the case $R=\IR$ we obtain the usual Lebesgue measure and integral (restricted to the semialgebraic setting). \section*{3. Canonicity and functoriality of the construction} \subsection*{3.1 Canonicity of the construction} Let $R$ be a non-archimedean real closed field that properly contains the reals and has archimedean value group $\Gamma:=\Gamma_R$. We start by discussing which values are obtained as integrals. \vs{0.5cm} Let $\alpha=(s,\sigma,\tau)$ be a Lebesgue datum for $R$ with the associated Lebesgue embedding $\Theta:=\Theta_\alpha:R\to \IR((t))^{\mathrm{LE}}$. As above, let $\ma{R}:=\IR((t^\Gamma))$ and $\ma{S}:=\IR((t))^{\mathrm{LE}}$. Let $\rho_\alpha$ be the embedding $$\IR((t^\Gamma))\hookrightarrow \IR((t^\IR)), \sum_{\gamma\in \Gamma}a_\gamma t^\gamma\mapsto \sum_{\delta\in\tau(\Gamma)} a_{\tau^{-1}(\delta)} t^{\delta}.$$ We set $\ma{R}_\alpha:=\rho_\alpha(\ma{R})\subset \IR((t^\IR))$. Note that $\ma{R}_\alpha=\ma{R}_{\alpha,\an}$ is a $T_\an$-submodel of $\IR((t^\IR))$ and that $\rho_\alpha:\ma{R}\to \ma{R}_\alpha$ is an $\ma{L}_\an$-isomorphism. Moreover, the definition of $\ma{R}_\alpha$ depends only on the choice of the embedding $\tau:\Gamma\hookrightarrow \IR$. We have that $\Theta_\alpha(R)\subset \ma{R}_\alpha$. We finally set $\langle R\rangle_\alpha:=\langle\Theta_\alpha(R)\rangle_{\ma{R}_{\alpha,\an}}$. The definition of $\langle R\rangle_\alpha$ depends on all components of $\alpha$. \vs{0.5cm} Let $X:=\log(t^{-1})\in \ma{S}$. From the construction of the field of LE-series it follows that $X$ is transcendental over $\IR((t^\IR))$ (see Fact 1.7). We have that $\IR<X<t^{\IR_{<0}}$.
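The element $X$ arises naturally as a semialgebraic integral. The following computation is our own illustration, using the partial logarithm and Fact 1.7; it assumes, after rescaling $\tau$ if necessary, that $-1\in\tau(\Gamma)$ (possible since $\Gamma\neq\{0\}$).

```latex
% Illustration (ours): choose gamma in Gamma_{<0} with tau(gamma) = -1
% and set c := s(gamma), so that Theta(c) = t^{-1}. Over the reals,
% int_1^{c'} dx/x = log(c') for c' >= 1; transferring to S gives
$$\int_{[1,c]}\frac{1}{x}\,d\lambda_{R,1}(x)=\log\big(\Theta(c)\big)
=\log(t^{-1})=X.$$
% Thus the area under the hyperbola from 1 to the infinite element c is
% precisely the transcendental element X, which lies outside R((t^R)).
```

This is the prototype of the general phenomenon made precise in Lemma 3.2 and used in the proof of Remark 3.6.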
\vs{0.5cm} {\bf 3.1 Definition} \vs{0.1cm} We call $\ma{R}_\alpha[X]$ the {\bf Lebesgue algebra of $R$} and $\langle R\rangle_\alpha[X]$ the {\bf volume algebra of $R$ with respect to $\alpha$}. \vs{0.5cm} The Lebesgue algebra and the volume algebra are $R$-algebras via the homomorphism $\Theta_\alpha$. The ordering on $\ma{S}$ equips the $R$-subalgebras $\ma{R}_\alpha[X]$ and $\langle R\rangle_\alpha[X]$ with an ordering. Note that these orderings are induced by the cut of $X$ described above. Moreover, we can identify $\ma{R}_\alpha[X]$ and $\langle R\rangle_\alpha[X]$ with the polynomial ring over $\ma{R}_\alpha$ and $\langle R\rangle_\alpha$, respectively. Note also that $\langle R\rangle_\alpha[X]$ is an $R$-subalgebra of $\ma{R}_\alpha[X]$. \vs{0.5cm} {\bf 3.2 Lemma} {\it \begin{itemize} \item[(1)] Let $f\in \ma{R}_{>0}$. Then $\log(\rho_\alpha(f))\in -\tau(\mathrm{ord}(f)) X+\ma{O}_{\ma{R}_\alpha}$. \item[(2)] Let $f\in R_{>0}$. Then $\log(\Theta_\alpha(f))\in -\tau(v_R(f))X+\ma{O}_{\langle R\rangle_\alpha}$. \end{itemize}} \vs{0.1cm} {\bf Proof:} \vs{0.1cm} (1): Let $\gamma:=\mathrm{ord}(f)\in\Gamma$ and let $g:=t^{-\gamma}f\in \ma{R}$. Then $\rho_\alpha(g)\in \IR_{>0}+\mathfrak{m}_{\ma{R}_\alpha}$. Let $r:=\tau(\gamma)\in\IR$. By Section 1.3 and Fact 1.7 we obtain $$\log(\rho_\alpha(f))=\log(t^r)+\log(\rho_\alpha(g))=-r\log(t^{-1})+\log(\rho_\alpha(g))\in -\tau(\mathrm{ord}(f)) X+\ma{O}_{\ma{R}_\alpha}.$$ \vs{0.2cm} (2): Let $f\in R_{>0}$ and let $\gamma:=v_R(f)\in\Gamma$. Since $t^\gamma\in\sigma(R)$ we have that $h:=t^{-r}\rho_\alpha(\sigma(f))\in\rho_\alpha(\sigma(R))\subset \langle R\rangle_\alpha$ where $r:=\tau(\gamma)$. Since $h\in \IR_{>0}+\mathfrak{m}_{\langle R\rangle_\alpha}$ and $\langle R\rangle_\alpha$ is a model of $T_\an$ we get that $\log(h)\in \ma{O}_{\langle R\rangle_\alpha}$. We are done since $\log(\Theta_\alpha(f))=-rX+\log(h)$. \hfill$\Box$ \vs{0.5cm} {\bf 3.3 Proposition} \vs{0.1cm} {\it Let $A$ be a semialgebraic subset of $R^n$ that is integrable.
Then $\lambda^\alpha_{R,n}(A)\in \langle R\rangle_\alpha[X]$ with $\deg\big(\lambda^\alpha_{R,n}(A)\big)<n$.} \vs{0.1cm} {\bf Proof:} \vs{0.1cm} Let $F:\IR^q\to \IR$ be a function obtained according to Construction 2.2. By Corollary 1.9 there are $r\in\IN$, a polynomial $Q$ in $r$ variables over the ring of globally subanalytic functions on $\IR^q$ and positive globally subanalytic functions $\varphi_1,\ldots,\varphi_r$ on $\IR^q$ such that $F(y)=Q\big(\log(\varphi_1(y)),\ldots, \log(\varphi_r(y))\big)$ for all $y\in \IR^q$. \vs{0.2cm} {\bf Claim:} One can assume that the total degree of $Q$ is bounded by $n-1$. \vs{0.1cm} {\bf Proof of the claim:} The case $n=1$ is an easy consequence of o-minimality (see [30, Theorem 2.3]). The higher dimensional cases follow inductively from Fubini's theorem by the inductive integration procedure in [9]. \hfill$\Box_{\mathrm{Claim}}$ \vs{0.2cm} So we assume that the total degree of $Q$ is bounded by $n-1$. Since $\langle R\rangle_\alpha$ is a model of $T_\an$ we have that $\varphi(a)\in\langle R\rangle_\alpha$ for every globally subanalytic function $\varphi:\IR^q\to \IR$ and every $a\in \langle R\rangle_\alpha^q$. The assertion now follows with Lemma 3.2. \hfill$\Box$ \vs{0.5cm} We discuss whether and how our construction of the semialgebraic Lebesgue measure and Lebesgue integral depends on the choice of the Lebesgue datum. The definition of integrable sets and functions does not, as we show below. \vs{0.5cm} {\bf 3.4 Proposition} \vs{0.1cm} {\it Let $\alpha,\beta$ be Lebesgue data for $R$. The following holds: \begin{itemize} \item[(1)] Let $A$ be a semialgebraic subset of $R^n$. Then $A$ is integrable with respect to $\alpha$ if and only if it is integrable with respect to $\beta$. \item[(2)] Let $f:R^n\to R$ be semialgebraic. Then $f$ is integrable with respect to $\alpha$ if and only if it is integrable with respect to $\beta$.
\end{itemize}} {\bf Proof:} \vs{0.1cm} By Proposition 2.7 it suffices to show (1). Let $F:\IR^q\to\IR$ and $a\in R^q$ be as in Construction 2.2. Let $B:=F^{-1}(-1)$. Then $B$ is a semialgebraic subset of $\IR^q$ by Section 1.5. Since $\Theta_\alpha,\Theta_\beta:R\hookrightarrow \ma{S}$ are field embeddings over the reals we get that $\Theta_\alpha(a)\in B_\ma{S}$ if and only if $\Theta_\beta(a)\in B_\ma{S}$. This gives (1). \hfill$\Box$ \vs{0.5cm} From now on we write $\chi_{R,n}$ and $\ma{L}^1_{R,n}$ for the set of integrable semialgebraic subsets of $R^n$ and of integrable semialgebraic functions $R^n\to R$, respectively. For the following we restrict a semialgebraic measure $\lambda_{R,n}^\alpha$ to $\chi_{R,n}$. \vs{0.5cm} We introduce a natural notion of equivalence between the constructions obtained from choosing different Lebesgue data. \vs{0.5cm} {\bf 3.5 Definition} \vs{0.1cm} \begin{itemize} \item[(a)] Let $\alpha,\beta$ be two Lebesgue data for $R$. We say that the semialgebraic Lebesgue measure with respect to $\alpha$ and the semialgebraic Lebesgue measure with respect to $\beta$ are {\bf isomorphic} if there is a ring isomorphism $\Phi:\ma{R}_\alpha[X]\stackrel{\cong}{\longrightarrow} \ma{R}_\beta[X]$ of the Lebesgue algebras such that $\lambda^\beta_{R,n}=\Phi\circ\lambda_{R,n}^\alpha$ for every $n\in\IN$. We call such an isomorphism a {\bf Lebesgue isomorphism} between the Lebesgue data $\alpha$ and $\beta$. \item[(b)] We say that the semialgebraic Lebesgue measure for $R$ is {\bf unique up to isomorphisms} if the semialgebraic Lebesgue measures with respect to any Lebesgue data for $R$ are isomorphic. \end{itemize} \vs{0.5cm} {\bf 3.6 Remark} \vs{0.1cm} {\it A Lebesgue isomorphism $\Phi:\ma{R}_\alpha[X]\stackrel{\cong}{\longrightarrow}\ma{R}_\beta[X]$ is an $R$-algebra isomorphism which is order preserving.} \vs{0.1cm} {\bf Proof:} \vs{0.1cm} Let $\alpha=(s_\alpha,\sigma_\alpha,\tau_\alpha)$ and $\beta=(s_\beta,\sigma_\beta,\tau_\beta)$.
\vs{0.2cm} a) Let $a\in R_{>0}$. We obtain by 2.3(5) and Construction 2.2 that $$\Phi\big(\Theta_\alpha(a)\big)=\Phi\big(\lambda_{R,1}^\alpha([0,a])\big) =\lambda_{R,1}^\beta([0,a])=\Theta_\beta(a).$$ This shows that $\Phi$ is an $R$-algebra isomorphism. \vs{0.2cm} b) Looking at the invertible elements of the polynomial ring we get that $\Phi(\ma{R}_\alpha)=\ma{R}_\beta$. Since these fields are real closed we obtain that $\Phi|_{\ma{R}_\alpha}$ is order preserving. Let $\gamma\in \Gamma_{<0}$ and let $c\in R_{>0}$ with $v_R(c)=\gamma$. Let $$A:=\big\{(x,y)\in R^2\mid 1\leq x\leq c, 0\leq y\leq 1/x\big\}.$$ By Proposition 2.7(1), Construction 2.6 and Lemma 3.2(2) we obtain that $$\lambda_{R,2}^\alpha(A)=\int_{[1,c]}^\alpha dx/x=\log\big(\Theta_\alpha(c)\big) = -\tau_\alpha(\gamma)X+h_\alpha$$ and that $$\lambda_{R,2}^\beta(A)=\int_{[1,c]}^\beta dx/x=\log\big(\Theta_\beta(c)\big) = -\tau_\beta(\gamma)X+h_\beta$$ where $h_\alpha\in \ma{O}_{\langle R\rangle_\alpha}$ and $h_\beta\in \ma{O}_{\langle R\rangle_\beta}$. This shows that there are $r\in \IR_{>0}$ and $g\in \ma{O}_{\ma{R}_\beta}$ such that $\Phi(X)=rX+g$. By the orderings given on $\ma{R}_\alpha[X]$ and $\ma{R}_\beta[X]$ we conclude that $\Phi$ is order preserving. \hfill$\Box$ \vs{0.5cm} {\bf 3.7 Proposition} \vs{0.1cm} {\it Assume that two Lebesgue data $\alpha$ and $\beta$ differ only by the embedding of the value group $\Gamma$ into the group of reals. Then the semialgebraic Lebesgue measure with respect to $\alpha$ is isomorphic to the semialgebraic Lebesgue measure with respect to $\beta$.} \vs{0.5cm} {\bf Proof:} \vs{0.1cm} Let $\alpha=(s,\sigma,\tau_\alpha)$ and $\beta=(s,\sigma,\tau_\beta)$. We have to find an isomorphism $\Phi:\ma{R}_\alpha[X]\to \ma{R}_\beta[X]$ such that $\Phi\big(F_\ma{S}(\Theta_\alpha(a))\big)=F_\ma{S}\big(\Theta_\beta(a)\big)$ for every function $F:\IR^q\to \IR$ and every tuple $a\in R^q$ obtained by applying Construction 2.2 to a semialgebraic subset of some $R^n$.
By Corollary 1.9(A) we know that such an $F$ is constructible. Via $\sigma$ we can identify $R$ with a subfield of $\ma{R}$. By [38, Satz 5 in I \S 3] we find some $r\in\IR_{>0}$ such that $\tau_\beta=r\tau_\alpha$. \vs{0.2cm} {\bf Claim 1:} The field automorphism $$G:\IR((t^\IR))\to\IR((t^\IR)), \sum_{u\in\IR}a_{u}t^{u}\mapsto \sum_{u\in\IR}a_{u}t^{ru}$$ is an automorphism of the $\ma{L}_\an$-structure $\IR((t^\IR))$. \vs{0.1cm} {\bf Proof of Claim 1:} \vs{0.1cm} This follows from the definition of the $\ma{L}_\an$-structure on power series fields (see [20, Section 2]) or from Proposition 1.6(2). \hfill$\Box_{\mathrm{Claim\, 1}}$ \vs{0.2cm} We have that $\rho_\beta=G\circ \rho_\alpha$. So $G$ restricts to an isomorphism $H:\ma{R}_{\alpha,\an}\to \ma{R}_{\beta,\an}$ with $\rho_\beta=H\circ \rho_\alpha$. The map $\Phi:\ma{R}_\alpha[X]\to \ma{R}_\beta[X]$ extending $H$ and sending $X$ to $rX$ (note that $\log(t^{-r})=r\log(t^{-1})$) is an isomorphism of the Lebesgue algebras. By the definition of a constructible function we are done by Claim 1 once the following Claim 2 is established: \vs{0.2cm} {\bf Claim 2:} Let $x\in \ma{R}_{\alpha}$ be positive. Then $\log\big(\Phi(x)\big)=\Phi\big(\log(x)\big)$. \vs{0.1cm} {\bf Proof of Claim 2:} Let $\delta\in\IR,a\in \IR_{>0}$ and $g\in\mathfrak{m}_{\ma{R}_\alpha}$ such that $x=at^\delta(1+g)$. Since $H(g)\in\mathfrak{m}_{\ma{R}_\beta}$ we obtain \begin{eqnarray*} \log\big(\Phi(x)\big)&=& \log\big(H(at^\delta(1+g))\big)\\ &=&\log\big(at^{r\delta}(1+H(g))\big)\\ &=&\log(a)-r\delta X+L(H(g))\\ &\stackrel{\mathrm{Claim\, 1}}{=}& \log(a)-r\delta X+ H(L(g))\\ &=&\Phi\big(\log(a)-\delta X+ L(g)\big)\\ &=&\Phi\big(\log(x)\big), \end{eqnarray*} where $L$ denotes the logarithmic series. This shows Claim 2.
\hfill$\Box_{\mathrm{Claim\, 2}}$ \hfill$\Box$ \vs{0.5cm} {\bf 3.8 Proposition} \vs{0.1cm} {\it Let $s,s'$ be sections for $R$ and let $\sigma,\sigma':R\to \ma{R}$ be embeddings with respect to $s$ and $s'$, respectively. Then there is a valuation preserving $\ma{L}_\an$-automorphism $K:\ma{R}_\an\to \ma{R}_\an$ such that $\sigma'=K\circ\sigma$.} \vs{0.1cm} {\bf Proof:} \vs{0.1cm} We have that $\ma{R}$ is the maximal immediate extension of $R$ (see Kaplansky [32] and [38, III \S 3]). Since $\sigma,\sigma'$ are valuation preserving by Fact 1.3, we get by [32, Theorem 5] that there is a valuation preserving field automorphism $K:\ma{R}\to \ma{R}$ such that $\sigma'=K\circ\sigma$. By Proposition 1.6(2) we obtain that $K$ is an $\ma{L}_\an$-automorphism of $\ma{R}_\an$. \hfill$\Box$ \vs{0.5cm} In general, the construction of the Lebesgue measure depends only on the choice of a section for the given real closed field. \vs{0.5cm} {\bf 3.9 Theorem} \vs{0.1cm} {\it Let $\alpha,\beta$ be Lebesgue data having the same section. Then the semialgebraic Lebesgue measure with respect to $\alpha$ is isomorphic to the semialgebraic Lebesgue measure with respect to $\beta$.} \vs{0.1cm} {\bf Proof:} \vs{0.1cm} Let $\alpha=(s,\sigma_\alpha,\tau_\alpha)$ and let $\beta=(s,\sigma_\beta,\tau_\beta)$. By Proposition 3.7 we can assume that $\tau_\alpha=\tau_\beta=:\tau$. Via $\tau$ we can identify $\Gamma$ with a subgroup of $(\IR,+)$ and obtain that $\ma{R}=\ma{R}_\alpha=\ma{R}_\beta$. By Proposition 3.8 there is a valuation preserving $\ma{L}_\an$-automorphism $K:\ma{R}_\an\to \ma{R}_\an$ such that $\sigma_\beta=K\circ\sigma_\alpha$. \vs{0.2cm} {\bf Claim 1:} We have that $K(t^\gamma)=t^\gamma$ for all $\gamma\in\Gamma$. \vs{0.1cm} {\bf Proof of Claim 1:} Since $\sigma_\alpha$ and $\sigma_\beta$ are embeddings with respect to $s$ we have that $$K(t^\gamma)=K\big(\sigma_\alpha(s(\gamma))\big)=\sigma_\beta(s(\gamma))=t^\gamma$$ for all $\gamma\in\Gamma$.
\hfill$\Box_{\mathrm{Claim\, 1}}$ \vs{0.2cm} Mapping $X\mapsto X$ we extend $K$ to an isomorphism $\Phi:\ma{R}[X]\to \ma{R}[X]$. As in the proof of Proposition 3.7 we are done once the following Claim 2 is established: \vs{0.2cm} {\bf Claim 2:} Let $x\in \ma{R}_{>0}$. Then $\log\big(\Phi(x)\big)=\Phi\big(\log(x)\big)$. \vs{0.1cm} {\bf Proof of Claim 2:} Let $\gamma\in\Gamma,a\in \IR_{>0}$ and $h\in\mathfrak{m}_{\ma{R}}$ such that $x=at^\gamma(1+h)$. We obtain \begin{eqnarray*} \log\big(\Phi(x)\big)&=& \log\Big(K\big(at^\gamma(1+h)\big)\Big)\\ &=&\log\big(K(t^\gamma)\big)+\log(a)+L\big(K(h)\big)\\ &\stackrel{\mathrm{Claim\,1}}{=}& \log\big(t^\gamma\big)+\log(a)+L\big(K(h)\big)\\ &=&-\gamma X+\log(a)+K\big(L(h)\big)\\ &=&\Phi\big(\log(a)-\gamma X+ L(h)\big)\\ &=&\Phi\big(\log(x)\big) \end{eqnarray*} where $L$ denotes the logarithmic series. \hfill$\Box_{\mathrm{Claim\, 2}}$ \hfill$\Box$ \vs{0.5cm} Theorem 3.9 is the best we can hope for in the case that the value group is not isomorphic to the rationals. \vs{0.5cm} {\bf 3.10 Theorem} \vs{0.1cm} {\it Assume that $\mathrm{rank}(\Gamma)>1$. Then the semialgebraic Lebesgue measure is not unique up to isomorphisms.} \vs{0.1cm} {\bf Proof:} \vs{0.1cm} Let $\gamma\in \Gamma_{<0}$ and let $\tau:\Gamma\hookrightarrow \IR$ be an embedding with $\tau(\gamma)=-1$. Choose an element $\delta\in\Gamma_{<0}$ that is linearly independent from $\gamma$ and let $\zeta:=-\tau(\delta)$. Then $\zeta$ is irrational. We choose distinct elements $a,b,b'\in R_{>0}$ such that $v_R(a)=\gamma$ and $v_R(b)=v_R(b')=\delta$. Let $s$ be a section for $R$ that maps $\gamma$ to $a$ and $\delta$ to $b$. Let $s'$ be a section for $R$ that maps $\gamma$ to $a$ and $\delta$ to $b'$. This is possible since $\gamma$ and $\delta$ are linearly independent. Let $\sigma, \sigma':R\to\ma{R}$ be field embeddings with respect to $s$ and $s'$, respectively.
Then the semialgebraic measure with respect to the Lebesgue datum $\alpha=(s,\sigma,\tau)$ and the semialgebraic measure with respect to the Lebesgue datum $\beta=(s',\sigma',\tau)$ are not isomorphic. To show this, let $$A:=\big\{(x,y)\in R^2\mid 1\leq x\leq a, 0\leq y\leq 1/x\big\}$$ and $$B:=\big\{(x,y)\in R^2\mid 1\leq x\leq b, 0\leq y\leq 1/x\big\}.$$ We have that $\Theta_\alpha(a)=\Theta_\beta(a)=t^{-1}$. We also have that $\Theta_\alpha(b)=t^{-\zeta}$. Let $h:=b/b'$. Then $h$ is a unit in the valuation ring $\ma{O}_R$; i.e., $v_R(h)=0$. Since $b\neq b'$ we have that $h\neq 1$. We obtain $$\Theta_\beta(b)=\Theta_\beta(b'h)=t^{-\zeta}\Theta_\beta(h)$$ where $\mathrm{ord}\big(\Theta_\beta(h)\big)=0$. Computing the Lebesgue measures of $A$ and $B$ with respect to $\alpha$ we obtain $$\lambda_{R,2}^\alpha(A)=\int_{[1,a]}^\alpha dx/x=\log\big(\Theta_\alpha(a)\big)=\log(t^{-1})=X$$ and $$\lambda_{R,2}^\alpha(B)=\int_{[1,b]}^\alpha dx/x=\log\big(\Theta_\alpha(b)\big)=\log(t^{-\zeta})=\zeta X.$$ Doing the computation with respect to $\beta$ we get $$\lambda_{R,2}^\beta(A)=\int_{[1,a]}^\beta dx/x=\log\big(\Theta_\beta(a)\big)=\log(t^{-1})=X$$ and $$\lambda_{R,2}^\beta(B)=\int_{[1,b]}^\beta dx/x=\log\big(\Theta_\beta(b)\big)=\log\big(t^{-\zeta}\Theta_\beta(h)\big)=\zeta X+g$$ where $g:=\log(\Theta_\beta(h))\neq 0$. Hence there can be no homomorphism $\Phi:\ma{R}_\alpha[X]\to \ma{R}_\beta[X]$ with $\lambda_{R,2}^\beta=\Phi\circ\lambda_{R,2}^\alpha$. \hfill$\Box$ \vs{0.5cm} In the case that the value group is isomorphic to the group of rationals we obtain the best possible result. \vs{0.5cm} {\bf 3.11 Theorem} \vs{0.1cm} {\it Assume that $\Gamma\cong \IQ$. Then the semialgebraic Lebesgue measure is unique up to isomorphisms.} \vs{0.1cm} {\bf Proof:} \vs{0.1cm} Let $\alpha=(s_\alpha,\sigma_\alpha,\tau_\alpha)$ and $\beta=(s_\beta,\sigma_\beta,\tau_\beta)$ be two Lebesgue data for $R$. By Proposition 3.7 we may assume that $\tau_\alpha=\tau_\beta=:\tau$.
Via $\tau$ we can identify $\Gamma$ with a subgroup of $(\IR,+)$ and obtain that $\ma{R}=\ma{R}_\alpha=\ma{R}_\beta$. By Proposition 3.8 there is a valuation preserving $\ma{L}_\an$-automorphism $K:\ma{R}_\an\to \ma{R}_\an$ such that $\sigma_\beta=K\circ\sigma_\alpha$. Since $K$ is an automorphism of the real closed field $\ma{R}$ we have that $K(x^q)=\big(K(x)\big)^q$ for all $x\in\ma{R}_{>0}$ and all $q\in\IQ$. Since $\Gamma$ is isomorphic to $\IQ$ there is an $r\in \IR_{>0}$ such that $\Gamma=r\IQ$. Note that $r$ is determined only up to a positive rational factor. We have that $-r\in \Gamma$. Since $t^{-r}\in \sigma_\alpha(R)$ we find a (uniquely determined) $x^*\in R$ such that $\sigma_\alpha(x^*)=t^{-r}$. Then $\mathrm{ord}(\sigma_\beta(x^*))=-r$. Since $t^{-r}\in \sigma_\beta(R)$ we find $a^*\in\IR_{>0}$ and $h^*\in \mathfrak{m}_R$ such that $\sigma_\beta(x^*)=a^* t^{-r}(1+\sigma_\beta(h^*))$. Then $f^*:=\log\big(a^*(1+\sigma_\beta(h^*))\big)\in \ma{O}_{\langle R\rangle_\beta}$ by Lemma 3.2(2). Mapping $X\mapsto X+f^*/r$ we extend $K$ to an isomorphism $\Phi:\ma{R}[X]\to \ma{R}[X]$. As above, we are done once the following claim is established. \vs{0.2cm} {\bf Claim:} Let $x\in \ma{R}_{>0}$. Then $\log\big(\Phi(x)\big)=\Phi\big(\log(x)\big)$. \vs{0.1cm} {\bf Proof of the claim:} Let $\gamma\in\Gamma,a\in \IR_{>0}$ and $h\in\mathfrak{m}_{\ma{R}}$ such that $x=at^\gamma(1+h)$. Let $q_\gamma:=\gamma/r\in\IQ$. We obtain \begin{eqnarray*} \log\big(\Phi(x)\big)&=& \log\Big(K\big((t^{-r})^{-q_\gamma}a(1+h)\big)\Big)\\ &=& \log\Big(\big(K(t^{-r})\big)^{-q_\gamma}K\big(a(1+h)\big)\Big)\\ &=&-q_\gamma\log\big(\sigma_\beta(x^*)\big)+\log\Big(K\big(a(1+h)\big)\Big)\\ &=&-q_\gamma(rX+f^*)+\log\Big(a\big(1+K(h)\big)\Big)\\ &=&-\gamma(X+f^*/r)+\log(a)+L\big(K(h)\big)\\ &=& -\gamma(X+f^*/r)+\log(a)+K\big(L(h)\big)\\ &=&\Phi\big(\log(a)-\gamma X+ L(h)\big)\\ &=&\Phi\big(\log(x)\big) \end{eqnarray*} where $L$ denotes the logarithmic series.
\hfill$\Box_{\mathrm{Claim}}$ \hfill$\Box$ \vs{0.5cm} One could also define a {\bf volume isomorphism} $\langle R\rangle_\alpha[X]\stackrel{\cong}{\longrightarrow}\langle R\rangle_\beta[X]$ between two Lebesgue data $\alpha$ and $\beta$. But this does not lead to a new notion of isomorphism: \vs{0.5cm} {\bf 3.12 Proposition} \vs{0.1cm} {\it Let $\alpha,\beta$ be Lebesgue data for $R$. The following holds: \begin{itemize} \item[(1)] A Lebesgue isomorphism $\ma{R}_\alpha[X]\stackrel{\cong}{\longrightarrow}\ma{R}_\beta[X]$ restricts to a volume isomorphism $\langle R\rangle_\alpha[X]\stackrel{\cong}{\longrightarrow}\langle R\rangle_\beta[X]$. \item[(2)] A volume isomorphism $\langle R\rangle_\alpha[X]\stackrel{\cong}{\longrightarrow}\langle R\rangle_\beta[X]$ can be extended to a Lebesgue isomorphism $\ma{R}_\alpha[X]\stackrel{\cong}{\longrightarrow}\ma{R}_\beta[X]$. \end{itemize}} \vs{0.1cm} {\bf Proof:} \vs{0.1cm} Let $\alpha=(s_\alpha,\sigma_\alpha,\tau_\alpha)$ and $\beta=(s_\beta,\sigma_\beta,\tau_\beta)$. By [38, Satz 5 in I \S 5] we find some $r\in \IR_{>0}$ such that $\tau_\beta=r\tau_\alpha$. Let $\eta:\tau_\alpha(\Gamma)\to \tau_\beta(\Gamma), \delta\mapsto r\delta$. \vs{0.2cm} (1): We denote the given Lebesgue isomorphism by $\Phi$. We have that $\Phi(\ma{R}_\alpha)=\ma{R}_\beta$. Let $\varphi:=\Phi|_{\ma{R}_\alpha}$. By Remark 3.6 we have that $\varphi\big(\Theta_\alpha(R)\big)\subset \Theta_\beta(R)$. By Proposition 1.6(2) we know that $\varphi$ is an $\ma{L}_\an$-isomorphism. Hence $\varphi\big(\langle R\rangle_\alpha\big)=\big\langle \Phi\big(\Theta_\alpha(R)\big)\big\rangle_{\ma{R}_\beta}$. So we obtain $\varphi\big(\langle R\rangle_\alpha\big)\subset \langle R\rangle_\beta$. Using this and the observation b) in the proof of Remark 3.6 we get that there are $\widetilde{r}\in\IR_{>0}$ and $g\in \ma{O}_{\langle R\rangle_\beta}$ such that $\Phi(X)=\widetilde{r}X+g$. So $\Phi(X)\in \langle R\rangle_\beta[X]$. 
We obtain that $\Phi\big(\langle R\rangle_\alpha[X]\big)\subset\langle R\rangle_\beta[X]$ and by symmetry we obtain equality. \vs{0.2cm} (2): We denote the given volume isomorphism by $\Psi$. We have that $\Psi(\langle R\rangle_\alpha)=\langle R\rangle_\beta$ and we see, as in the proof of Proposition 3.7, that $\Psi(X)=rX$. Let $\psi:=\Psi|_{\langle R\rangle_\alpha}$. \vs{0.2cm} {\bf Special case:} $\sigma_\alpha=\sigma_\beta$. \vs{0.1cm} Then also $s_\alpha=s_\beta$. Via $\sigma:=\sigma_\alpha$ we can identify $R$ with a subfield of $\ma{R}$. Let $$G:\IR((t^\IR))\to\IR((t^\IR)), \sum_{\alpha\in\IR}a_\alpha t^\alpha\mapsto \sum_{\alpha\in\IR}a_\alpha t^{r\alpha}.$$ By Remark 3.6 we have that $\psi|_R=G|_R$. By Proposition 1.6(3) we have that $R$ is dense in $\langle R\rangle_\alpha$ and by Proposition 1.6(2) that $\psi$ and $G$ are continuous. Hence we obtain that $\psi=G|_{\langle R\rangle_\alpha}$. The isomorphism $\Phi:\ma{R}_\alpha[X]\to \ma{R}_\beta[X]$ with $\Phi|_{\ma{R}_\alpha}=G|_{\ma{R}_\alpha}$ and sending $X\mapsto rX=\Psi(X)$ is clearly a Lebesgue isomorphism between $\alpha$ and $\beta$ and extends $\Psi$. \vs{0.2cm} {\bf General case:} By the special case we can assume that $\tau_\alpha=\tau_\beta$. Therefore we identify $\Gamma$ with a subgroup of $\IR$ and obtain $\ma{R}=\ma{R}_\alpha=\ma{R}_\beta$. The maps $\Theta_\alpha$ and $\Theta_\beta$ are valuation preserving. Since $\psi\circ\Theta_\alpha=\Theta_\beta$ by Remark 3.6 we get that $\psi|_{\Theta_\alpha(R)}$ is valuation preserving. Since $\Theta_\alpha(R)$ and $\langle R\rangle_\alpha$ have the same value group we see that $\psi$ is valuation preserving. Since $\ma{R}$ is the maximal immediate extension of both $\langle R\rangle_\alpha$ and $\langle R\rangle_\beta$, we get by [32, Theorem 5] that there is a valuation preserving field automorphism $K:\ma{R}\to\ma{R}$ extending $\psi$. We finish as in the special case.
\hfill$\Box$ \vs{0.5cm} To obtain a kind of weak uniqueness we introduce the following notation. \vs{0.5cm} {\bf 3.13 Definition} \begin{itemize} \item[(a)] Let $\alpha$ be a Lebesgue datum for $R$. Then the ordered group $\ma{R}_\alpha[X]/\ma{O}_{\ma{R}_\alpha}$ is called the {\bf reduced Lebesgue group} of $R$ with respect to $\alpha$. For $n\in\IN$, we set $\overline{\lambda}^\alpha_{R,n}:=\pi\circ\lambda_{R,n}^\alpha$ where $\pi:\ma{R}_\alpha[X]\to \ma{R}_\alpha[X]/\ma{O}_{\ma{R}_\alpha}$ is the canonical group epimorphism. \item[(b)] Let $\alpha,\beta$ be two Lebesgue data for $R$. We say that the reduced semialgebraic Lebesgue measure with respect to $\alpha$ and the reduced semialgebraic Lebesgue measure with respect to $\beta$ are {\bf isomorphic} if there is a group isomorphism $\Psi:\ma{R}_\alpha[X]/\ma{O}_{\ma{R}_\alpha}\stackrel{\cong}{\longrightarrow} \ma{R}_\beta[X]/\ma{O}_{\ma{R}_\beta}$ of the reduced Lebesgue groups such that $\overline{\lambda}^\beta_{R,n}=\Psi\circ\overline{\lambda}_{R,n}^\alpha$ for every $n\in\IN$. We call such an isomorphism a {\bf reduced Lebesgue isomorphism} between the Lebesgue data $\alpha$ and $\beta$. \item[(c)] We say that the reduced semialgebraic Lebesgue measure for $R$ is {\bf unique up to isomorphisms} if the reduced semialgebraic Lebesgue measures with respect to any Lebesgue data for $R$ are isomorphic. \end{itemize} \vs{0.2cm} {\bf 3.14 Theorem} \vs{0.1cm} {\it The reduced semialgebraic Lebesgue measure is unique up to isomorphisms.} \vs{0.1cm} {\bf Proof:} \vs{0.1cm} Let $\alpha=(s_\alpha,\sigma_\alpha,\tau_\alpha)$ and $\beta=(s_\beta,\sigma_\beta,\tau_\beta)$ be two Lebesgue data for $R$. As above, we may assume that $\tau_\alpha=\tau_\beta=:\tau$ and identify $\Gamma$ via $\tau$ with a subgroup of $(\IR,+)$, obtaining thereby that $\ma{R}=\ma{R}_\alpha=\ma{R}_\beta$.
By Proposition 3.8 there is a valuation preserving $\ma{L}_\an$-automorphism $K:\ma{R}_\an\to \ma{R}_\an$ such that $\sigma_\beta=K\circ\sigma_\alpha$. Mapping $X\mapsto X$ we extend $K$ to an algebra isomorphism $\Phi:\ma{R}[X]\to \ma{R}[X]$. Since $\Phi(\ma{O}_{\ma{R}})=\ma{O}_{\ma{R}}$ we get that $\Phi$ induces a group isomorphism $\Psi:\ma{R}[X]/\ma{O}_{\ma{R}}\to \ma{R}[X]/\ma{O}_{\ma{R}}$. Since $K$ is an $\ma{L}_\an$-automorphism and since $\Phi$ is a ring isomorphism we are done, as above, once the following claim is established, where $\pi:\ma{R}[X]\to \ma{R}[X]/\ma{O}_{\ma{R}}$ denotes the canonical epimorphism. \vs{0.2cm} {\bf Claim:} Let $x\in \ma{R}_{>0}$. Then $\pi\Big(\log\big(K(x)\big)\Big)=\Psi\Big(\pi\big(\log(x)\big)\Big)$. \vs{0.1cm} {\bf Proof of the claim:} Let $\gamma\in\Gamma,a\in \IR_{>0}$ and $h\in\mathfrak{m}_{\ma{R}}$ such that $x=at^\gamma(1+h)$. Since $K$ is valuation preserving we find $b\in \IR_{>0}$ and $g\in\mathfrak{m}_{\ma{R}}$ such that $K(t^\gamma)=bt^\gamma(1+g)$. We obtain \begin{eqnarray*} \pi\Big(\log\big(K(x)\big)\Big)&=& \pi\Big(\log\big(bt^\gamma(1+g)\big)+\log(a)+L\big(K(h)\big)\Big)\\ &=&\pi\Big(-\gamma X+\log(ab)+L(g)+L\big(K(h)\big)\Big)\\ &=&-\gamma X=\Psi(-\gamma X)\\ &=&\Psi\Big(\pi\big(-\gamma X+\log(a)+L(h)\big)\Big)\\ &=&\Psi\Big(\pi\big(\log(x)\big)\Big) \end{eqnarray*} where $L$ denotes the logarithmic series. \hfill$\Box_{\mathrm{Claim}}$ \hfill$\Box$ \vs{0.5cm} The above statements hold analogously for the semialgebraic integral. \subsection*{3.2 The Lebesgue algebra and the volume algebra} As in Section 3.1, let $R$ be a non-archimedean real closed field that properly contains the reals and whose value group $\Gamma:=\Gamma_R$ is archimedean. As above we set $\ma{R}:=\IR((t^\Gamma))$.
We have seen in Theorem 3.9 that the construction of the Lebesgue measure and the Lebesgue integral depends only on the choice of the section, not on the choice of the embedding with respect to the section, and not on the choice of the embedding of the value group into the reals. Since the definition of $\ma{R}_\alpha$ depends only on the latter, the following definitions are justified: \vs{0.5cm} {\bf 3.15 Definition} \begin{itemize} \item[(1)] We call the polynomial algebra $\ma{R}[X]$ over $\ma{R}$ in one variable the {\bf Lebesgue algebra} of $R$. \item[(2)] Let $s$ be a section for $R$. For $n\in\IN$ we call $$\lambda_{R,n}=\lambda_{R,n}^s:\big\{\mbox{semialgebraic subsets of }R^n\big\}\to \ma{R}[X]\cup\{\infty\},A\mapsto \lambda_{R,n}(A),$$ and $$\mathrm{Int}_{R,n}=\mathrm{Int}^s_{R,n}:\ma{L}^1_{R,n}\to \ma{R}[X],f\mapsto \int_{R^n}f\,d\lambda_{R,n},$$ the {\bf semialgebraic measure and integral on $R^n$ with respect to $s$}. \end{itemize} \vs{0.5cm} {\bf 3.16 Remark} {\it \begin{itemize} \item[(1)] Let $A$ be a semialgebraic subset of $R^n$ with finite measure. Then $\mathrm{deg}\big(\lambda_{R,n}(A)\big)<n$. \item[(2)] Let $f:R^n\to R$ be a semialgebraic function that is integrable. Then $\mathrm{deg}\big(\int_{R^n}f\,d\lambda_{R,n}\big)\leq n$. \end{itemize} Moreover, the degree does not depend on the choice of the section.} \vs{0.1cm} {\bf Proof:} \vs{0.1cm} (1) has been shown in Proposition 3.3. (2) then follows from Proposition 2.7(1). That the degree does not depend on the choice of the section follows from Constructions 2.2 and 2.5. \hfill$\Box$ \vs{0.5cm} If the value group is isomorphic to the rationals, we have seen in Theorem 3.11 that the semialgebraic Lebesgue measure and integral are unique up to isomorphisms. Embedding $R$ into $\ma{R}$ and setting $\langle R\rangle:=\langle R\rangle_\ma{R}$, the following definitions are justified: \vs{0.5cm} {\bf 3.17 Definition} \vs{0.1cm} Assume that $\Gamma\cong \IQ$.
\begin{itemize} \item[(1)] We call the polynomial algebra $\langle R\rangle[X]$ over $\langle R\rangle$ in one variable the {\bf volume algebra} of $R$. \item[(2)] For $n\in\IN$ we call $$\lambda_{R,n}:\big\{\mbox{semialgebraic subsets of }R^n\big\}\to \langle R\rangle[X]\cup\{\infty\},A\mapsto \lambda_{R,n}(A),$$ and $$\mathrm{Int}_{R,n}:\ma{L}^1_{R,n}\to \langle R\rangle[X],f\mapsto \int_{R^n}f\,d\lambda_{R,n},$$ {\bf the semialgebraic measure and integral on $R^n$}. \end{itemize} \vs{0.2cm} {\bf 3.18 Theorem} \vs{0.1cm} {\it Assume that $\Gamma\cong \IQ$ and that $R$ can be made into a model of $T_\an$. Then the volume algebra of $R$ is the polynomial algebra $R[X]$ over $R$.} \vs{0.1cm} {\bf Proof:} \vs{0.1cm} By Proposition 1.6(1) we see that $\langle R\rangle=R$. \hfill$\Box$ \vs{0.5cm} {\bf 3.19 Main Example} \vs{0.1cm} {\it Let $$\mathbb{P}=\big\{t^{-k/p}\sum_{j=0}^\infty a_j t^{j/p}\mid \sum_{j=0}^\infty a_jt^j\in \IR\{t\}, k\in\IN_0\mbox{ and }p\in\IN\big\}$$ be the field of Puiseux series over $\IR$. \begin{itemize} \item[(1)] The volume algebra of $\mathbb{P}$ is the polynomial ring $\mathbb{P}[X]$ over $\mathbb{P}$. \item[(2)] Let $n\in\IN$. The maps $$\lambda_{\mathbb{P},n}:\chi^1_{\mathbb{P},n}\to \mathbb{P}[X]_{< n}, A\mapsto\lambda_{\mathbb{P},n}(A),$$ and $$\mathrm{Int}_{\mathbb{P},n}:\ma{L}^1_{\mathbb{P},n}\to \mathbb{P}[X]_{\leq n}, f\mapsto \int_{\mathbb{P}^n}f\,d\lambda_{\mathbb{P},n},$$ are surjective. \end{itemize}} {\bf Proof:} \vs{0.1cm} (1): The field $\IP$ of Puiseux series can be made canonically into a model of $T_\an$ (compare with Section 1.5). So Theorem 3.18 gives (1). \vs{0.2cm} (2): By Proposition 2.7 it is enough to deal with the integral. Let $n\in\IN$. We have that $\mathrm{Int}_{\IP,n}\big(\ma{L}^1_{\IP,n}\big)\subset \IP[X]_{\leq n}$ by Remark 3.16(2).
That equality holds follows from Properties 2.3, Proposition 2.7 and the observation that $$\int_1^{t^{-1}}\cdots\int_1^{t^{-1}}\frac{d\lambda_{\IP,n}(x)}{x_1\cdot\ldots\cdot x_n}=\big(\log(t^{-1})\big)^n=X^n$$ for all $n\in\IN$. \hfill$\Box$ \subsection*{3.3 Functoriality of the construction} Let $R\subset S$ be an extension of real closed fields with archimedean value groups. We assume that $R$ contains the reals. For a semialgebraic subset $A$ of $R^n$ or a semialgebraic function $f:R^n\to R$ let $A_S$ and $f_S:S^n\to S$ be the canonical liftings of $A$ and $f$, respectively, to $S$ (via quantifier elimination, see for example [5, Chapter 5]). \vs{0.5cm} {\bf 3.20 Remark} \vs{0.1cm} {\it Let $(s,\sigma,\tau)$ be a Lebesgue datum for $R$. Then there is a Lebesgue datum $(s^*,\sigma^*,\tau^*)$ for $S$ extending $(s,\sigma,\tau)$.} \vs{0.5cm} {\bf 3.21 Proposition} \vs{0.1cm} {\it Let $s$ be a section for $R$ and let $s^*$ be a section for $S$ extending $s$. \begin{itemize} \item[(1)] Let $A\subset R^n$ be semialgebraic. Then $\lambda^{s^*}_{S,n}(A_S)=\lambda^s_{R,n}(A)$. \item[(2)] Let $f:R^n\to R$ be semialgebraic. Then $f$ is integrable over $R^n$ if and only if $f_S$ is integrable over $S^n$. If this holds then $\int f_S\,d\lambda^{s^*}_{S,n}=\int f\,d\lambda^s_{R,n}$. \end{itemize}} {\bf Proof:} \vs{0.1cm} This is evident from the construction of the measure and the integral. \hfill$\Box$ \vs{0.5cm} In particular, if the semialgebraic set or the semialgebraic function is defined over the reals, then the measure and the integral take the usual values obtained by measuring and integrating over the reals. We generalize this. \vs{0.5cm} Let $R$ be a real closed field containing the reals, with archimedean value group. The {\bf standard part map} $\mathrm{\bf st}:R\to \IR\cup\{\infty\}$ is defined as follows (see for example Van den Dries [19] or Ma\v{r}\'{i}kov\'{a} [36]): Let $a\in R$.
If $a$ is bounded (see Section 1.2) then $\mathrm{\bf st}(a)$ is the unique real number $x$ such that $x-a$ is infinitesimal. If $a$ is not bounded then $\mathrm{\bf st}(a)=\infty$. The standard part map for tuples is defined componentwise. We are interested in the behaviour of the Lebesgue measure with respect to the standard part map. The standard part map $\mathrm{\bf st}:\ma{R}[X]\to \IR\cup\{\infty\}$ is defined similarly. We choose an arbitrary section for $R$. \vs{0.5cm} {\bf 3.22 Remark} \vs{0.1cm} {\it Assume that $R\neq \IR$. For $n\geq 2$ there is a semialgebraic subset $A$ of $R^n$ of finite measure such that $\mathrm{\bf st}\big(\lambda_{R,n}(A)\big)\neq \lambda_n\big(\mathrm{\bf st}(A)\cap \IR^n\big)$.} \vs{0.1cm} {\bf Proof:} \vs{0.1cm} We deal with the case $n=2$; the higher dimensional examples are constructed in a completely analogous way. Let $\varepsilon\in R_{>0}$ be infinitesimal. Let $A:=[-\varepsilon,\varepsilon]\times [-1/\varepsilon,1/\varepsilon]$. Then $\lambda_{R,2}(A)=4$ by Property 2.3(5) and so $\mathrm{\bf st}\big(\lambda_{R,2}(A)\big)=4$. We have $\mathrm{\bf st}(A)\cap\IR^2=\{0\}\times \IR$, hence $\lambda_2\big(\mathrm{\bf st}(A)\cap\IR^2\big)=0$. \hfill$\Box$ \vs{0.5cm} {\bf 3.23 Definition} A subset $A$ of $R^n$ is called {\bf $\IR$-bounded} if there is some $C\in\IR_{>0}$ such that $|x|\leq C$ for all $x\in A$. \vs{0.5cm} Note that a subset $A$ of $R^n$ is $\IR$-bounded if and only if $\mathrm{\bf st}(A)\subset \IR^n$. Note also that, given $a\in R^n$, the set $\{a\}$ is $\IR$-bounded if and only if $a$ is bounded. \vs{0.5cm} {\bf 3.24 Remark} \vs{0.1cm} {\it Let $A\subset R^n$ be semialgebraic and $\IR$-bounded. Then $\mathrm{\bf st}(A)$ is a semialgebraic subset of $\IR^n$ and $\dim\big(\mathrm{\bf st}(A)\big)\leq \dim(A)$.} \vs{0.1cm} {\bf Proof:} \vs{0.1cm} The first assertion follows from the Marker-Steinhorn theorem, see for example [19, Section 8]. The second one follows from [19, Proposition 9.3].
\hfill$\Box$ \vs{0.5cm} {\bf 3.25 Proposition} \vs{0.1cm} {\it Let $A$ be a semialgebraic subset of $R^n$ that is $\IR$-bounded. Then $\mathrm{\bf st}\big(\lambda_{R,n}(A)\big)=\lambda_n\big(\mathrm{\bf st}(A)\big)$.} \vs{0.1cm} {\bf Proof:} \vs{0.1cm} It holds that $\mathrm{\bf st}\big(A\triangle \mathrm{\bf st}(A)_R\big)\subset \mathrm{\bf st}(\partial A)$, where $\triangle$ denotes the symmetric difference; see [19, Lemma 10.1]. \vs{0.2cm} {\bf Claim:} $\mathrm{\bf st}\big(\lambda_{R,n}(A\triangle \mathrm{\bf st}(A)_R)\big)=0$. \vs{0.1cm} {\bf Proof of the claim:} We argue as in the proof of [19, Lemma 10.2]. Let $B:=\mathrm{\bf st}(\partial A)$. We have that $B$ is compact with $\dim(B)<n$ by Remark 3.24. Hence $\lambda_n(B)=0$. For $k\in\IN$ let $B_k:=\{x\in\IR^n\mid \mathrm{dist}(x,B)\leq 1/k\}$. We have that $B_k\searrow B$. Since $B_1$ is compact, and hence $\lambda_n(B_1)<\infty$, we get by continuity from above that $\lim_{k\to\infty}\lambda_n(B_k)=0$. Since $\mathrm{\bf st}\big(A\triangle \mathrm{\bf st}(A)_R\big)\subset B$ we get that $A\triangle \mathrm{\bf st}(A)_R\subset (B_k)_R$ for all $k\in\IN$. We obtain by Property 2.3(2) and Proposition 3.21 that $$\lambda_{R,n}\big(A\triangle \mathrm{\bf st}(A)_R\big)\leq\lambda_{R,n}\big((B_k)_R\big)=\lambda_n(B_k).$$ This shows that $\lambda_{R,n}\big(A\triangle \mathrm{\bf st}(A)_R\big)$ is infinitesimal. \hfill$\Box_{\mathrm{Claim}}$ \vs{0.2cm} Applying Proposition 3.21 again, we obtain from the claim that \begin{eqnarray*} \big\vert\lambda_{R,n}\big(A\big)-\lambda_n\big(\mathrm{\bf st}(A)\big)\big\vert&=&\big\vert\lambda_{R,n}\big(A\big)-\lambda_{R,n}\big(\mathrm{\bf st}(A)_R\big)\big\vert\\ &\leq&\lambda_{R,n}\big(A\triangle \mathrm{\bf st}(A)_R\big) \end{eqnarray*} is infinitesimal. This shows the assertion. \hfill$\Box$ \subsection*{3.4 Extension to models of $T_\an$} Let $R$ be a real closed field with archimedean value group. We assume that $R$ is a model of $T_\an$ (hence, it contains the reals).
We write $R_\an$ when we want to stress that $R$ is considered as a model of $T_\an$ and not just as a real closed field. \vs{0.5cm} Note that the results of Comte et al. (see Section 1.5) are formulated for globally subanalytic sets. Using Proposition 1.6, one can translate the constructions of the previous sections verbatim to obtain, in the general case, given a section $s$ for $R$, the {\bf analytic Lebesgue measure} $$\lambda_{R_\an,n}=\lambda^s_{R_\an,n}:\big\{\mbox{globally subanalytic subsets of } R^n\big\}\to \ma{R}[X]\cup\{\infty\}$$ and the {\bf analytic Lebesgue integral} $$\mathrm{Int}_{R_\an,n}:\ma{L}^1_{R_\an,n}\to \ma{R}[X]$$ {\bf on $R^n$ with respect to the section}, and in the case that $\Gamma\cong \IQ$, {\bf the analytic Lebesgue measure} $$\lambda_{R_\an,n}:\big\{\mbox{globally subanalytic subsets of } R^n\big\}\to R[X]\cup\{\infty\}$$ and {\bf the analytic Lebesgue integral} $$\mathrm{Int}_{R_\an,n}:\ma{L}^1_{R_\an,n}\to R[X]$$ {\bf on $R^n$} such that the above results hold, replacing semialgebraic by globally subanalytic. \vs{0.5cm} {\bf 3.26 Remark} \vs{0.1cm} {\it The analytic Lebesgue measure and the analytic Lebesgue integral extend the semialgebraic Lebesgue measure and the semialgebraic Lebesgue integral (with respect to a section).} \section*{4. Constructible functions} The definition of constructible functions on $\IR_{\an,\exp}$ from the end of Section 1.5 can be naturally generalized to an arbitrary model of $T_{\an,\exp}$, in particular to the field $\ma{S}:=\IR((t))^{\mathrm{LE}}$ of LE-series. We define constructible functions on the field $\IP$ of Puiseux series. These functions take values in the volume algebra $\IP[X]$. Throughout the section we use that $\IP$ carries a canonical $\ma{L}_\an$-structure which makes it a model of $T_\an$ (compare with Section 1.5).
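To fix ideas, here is a worked instance of the logarithm on $\IP$; the particular element is a hypothetical illustration, and the computation uses only the identity $\log\big(at^\gamma(1+h)\big)=\log(a)-\gamma X+L(h)$ from the proofs in Section 3.

```latex
% Hypothetical example: the logarithm of x = 2 t^{-3/2} (1 + t^{1/2}),
% an element of P with ord(x) = -3/2, computed via
% log(a t^gamma (1+h)) = log(a) - gamma X + L(h):
\[
  \log x \;=\; \log 2 \;+\; \tfrac{3}{2}\,X \;+\; L\big(t^{1/2}\big),
\]
% where L is the logarithmic series; the part log 2 + L(t^{1/2}) lies
% in the valuation ring O_P, so log x is congruent to (3/2) X modulo O_P.
```

The bounded part of $\log x$ thus depends on the series coefficients of $x$, while the $X$-coefficient is determined by the order of $x$ alone.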
\vs{0.5cm} {\bf 4.1 Remark} \vs{0.1cm} {\it We have that $\log x\in-\mathrm{ord}(x) X+\ma{O}_\IP\subset\IP[X]$ for all $x\in\IP_{>0}$.} \vs{0.1cm} {\bf Proof:} \vs{0.1cm} See Lemma 3.2(2). \hfill$\Box$ \vs{0.5cm} {\bf 4.2 Definition} \vs{0.1cm} A function $\IP^n\to \IP[X]$ is called {\bf constructible} if it is a finite sum of finite products of globally subanalytic functions and logarithms of positive globally subanalytic functions on $\IP^n$. Similarly, we define a {\bf constructible function on $A$} where $A$ is a globally subanalytic subset of some $\IP^n$. \vs{0.5cm} The relevance of constructible functions for integration is shown by the following. \vs{0.5cm} {\bf 4.3 Proposition} \vs{0.1cm} {\it Let $f:\IP^{q+n}\to \IP$ be globally subanalytic. Then the following holds: \begin{itemize} \item[(1)] The set $$\mathrm{Fin}(f):=\Big\{t\in \IP^q\;\big\vert\, f_t\mbox{ is integrable}\Big\}$$ is globally subanalytic. \item[(2)] There is a constructible function $h:\IP^q\to\IP[X]$ such that $$\int_{\IP^n}f_t\,d\lambda_{\IP,n}=h(t)$$ for all $t\in \mathrm{Fin}(f)$. \end{itemize}} {\bf Proof:} \vs{0.1cm} This follows by doing Construction 2.6 with parameters. \hfill$\Box$ \vs{0.5cm} By $\Omega_n$ we denote the ring of globally subanalytic functions $\IP^n\to\IP$. We write $\Omega:=\Omega_1$ and denote by $\Omega^*$ the subset consisting of the globally subanalytic functions $\IP\to \IP$ which have a limit in $\IP^*$ at $\infty$. We now deal with unary constructible functions. \vs{0.5cm} {\bf 4.4 Definition} \vs{0.1cm} A constructible function $f:\IP\to \IP[X]$ is called {\bf simple at $\infty$} if there is a finite subset $\ma{E}$ of $\IQ\times\IN_0\times\IN_0$ and for every $\sigma=(\sigma_1,\sigma_2,\sigma_3)\in\ma{E}$ a function $h_\sigma\in\Omega^*$ such that $$f(x)=\sum_{\sigma\in\ma{E}}h_\sigma(x) x^{\sigma_1}\big(\log x\big)^{\sigma_2}X^{\sigma_3}$$ for all sufficiently large $x\in\IP$.
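To illustrate this definition, here is a hypothetical example of a simple description at $\infty$; it assumes only the identities $\log(uv)=\log u+\log v$ and $\log(1+u)=L(u)$ for infinitesimal $u$, which are implicit in the computations of Section 3.

```latex
% Hypothetical example: f(x) = log(x^3 + x) is simple at infinity.
% For sufficiently large x one has x^3 + x = x^3 (1 + x^{-2}), hence
\[
  f(x) \;=\; 3\log x + L\big(x^{-2}\big)
       \;=\; 3\cdot x^{0}(\log x)^{1}X^{0}
        \;+\; \big(x^{2}L(x^{-2})\big)\cdot x^{-2}(\log x)^{0}X^{0},
\]
% a simple description with set of exponents E = {(0,1,0), (-2,0,0)}:
% the coefficient functions are the constant h_{(0,1,0)} = 3 and
% h_{(-2,0,0)}(x) = x^2 L(x^{-2}), both in Omega^*, the second one
% tending to 1 at infinity.
```

Note that grouping the two terms differently would yield another simple description of the same function.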
\vs{0.5cm} We call such a presentation a {\bf simple description of $f$ at $\infty$} and the set $\ma{E}$ a {\bf set of exponents for $f$ at $\infty$}. Note that the simple description is in general not unique. \vs{0.5cm} {\bf 4.5 Proposition} \vs{0.1cm} {\it Every constructible function $\IP\to\IP[X]$ is simple at $\infty$.} \vs{0.1cm} {\bf Proof:} \vs{0.1cm} Let $g:\IP\to\IP$ be globally subanalytic. Then it follows by the preparation theorem for polynomially bounded o-minimal structures (see van den Dries and Speissegger [24, Lemma 2.2]) that we find $\lambda\in\IQ$ and a globally subanalytic function $h:\IP\to \IP$ with $a:=\lim_{x\to \infty}h(x)\in \IP^*$ such that $g(x)=x^\lambda h(x)$ for all sufficiently large $x$. Let $\widetilde{h}:=h/a$. Assume that $a>0$. Applying the logarithm we get that $\log(g(x))=\log(a)+\log(\widetilde{h}(x))+\lambda \log x$ for all sufficiently large $x$. We have that $\log(a)\in \IR X+\ma{O}_\IP$ and that $\log(\widetilde{h}(x))$ is globally subanalytic on $]c,\infty[$ for some large $c$. Taking the definition of a constructible function into account and applying once more the preparation theorem to the functions of the latter type, we obtain the assertion. \hfill$\Box$ \vs{0.5cm} From Remark 4.1 one sees that $\log x$ and all of its positive powers do not have a limit in $\IP[X]$ as $x$ tends to $\infty$ in $\IP$. \vs{0.5cm} An easy calculation gives the following: \vs{0.5cm} {\bf 4.6 Lemma} \vs{0.1cm} {\it Let $(q,n)\in \IQ\times\IZ$. Then, in $\IP(X)$, we have that $$\lim_{x\to\infty} x^q\big(\log x\big)^n= \left\{\begin{array}{cl} 0&\mbox{if } q<0,\\ 1&\mbox{if } q=n=0,\\ \infty&\mbox{if } q>0. \end{array} \right.$$} \vs{0.5cm} Let $\pi:\IQ\times\IN_0\times\IN_0\to \IQ\times\IN_0$ be the projection onto the first two components. We equip $\IQ\times\IN_0$ with the lexicographical ordering. \vs{0.5cm} {\bf 4.7 Proposition} \vs{0.1cm} {\it Let $f:\IP\to \IP[X]$ be a constructible function and let $\ma{E}$ be a set of exponents for $f$ at $\infty$.
The following holds: \begin{itemize} \item[(1)] $f$ is ultimately $0$ if and only if $\ma{E}=\emptyset$. \item[(2)] Assume that $f$ is not ultimately $0$. Let $\mu_\ma{E}=(q_\ma{E},n_\ma{E}):=\max\pi(\ma{E})$. Then $f(x)$ has a limit in $\IP[X]$ as $x$ tends to $\infty$ if and only if either $q_\ma{E}<0$ or $q_\ma{E}=n_\ma{E}=0$. If this holds then $\mu_\ma{E}$ does not depend on the choice of $\ma{E}$ and we write $\mu_f=(q_f,n_f)$. \end{itemize}} \vs{0.1cm} {\bf Proof:} \vs{0.1cm} This follows from Remark 4.1, Lemma 4.6 and the transcendence of $X$ over $\IP$. \hfill$\Box$ \vs{0.5cm} We want to lift constructible functions on $\IP$ to constructible functions on $\ma{S}$. Certainly, this cannot be done in a unique way. But we want to define a canonical lifting. The idea is to use the definition of constructible functions. Let $g:\IP^n\to \IP$ be globally subanalytic. By $g_\ma{S}:\ma{S}^n\to \ma{S}$ we denote the canonical lifting of $g$ from $\IP$ to $\ma{S}$ (as models of the theory $T_\an=\mathrm{Th}(\IR_\an)$). \vs{0.5cm} {\bf 4.8 Definition} \vs{0.1cm} Let $f:\IP^n\to \IP[X]$ be constructible. \begin{itemize} \item[(a)] There are $r\in\IN$, a polynomial $Q\in \Omega_n[T_1,\ldots,T_r]$ and positive $\varphi_1,\ldots,\varphi_r\in\Omega_n$ such that $f(x)=Q\big(\log(\varphi_1(x)),\ldots,\log(\varphi_r(x))\big)$ for all $x\in \IP^n$. We call $\Delta:=(r,Q,\varphi)$, where $\varphi=(\varphi_1,\ldots,\varphi_r)$, a {\bf representation tuple for $f$}. \item[(b)] Let $\Delta=(r,Q,\varphi)$ be a representation tuple for $f$. We write $f_{\ma{S},\Delta}$ for the constructible function $Q_\ma{S}\big(\log((\varphi_1)_\ma{S}),\ldots,\log((\varphi_r)_\ma{S})\big)$ on $\ma{S}^n$. \end{itemize} \vs{0.2cm} Clearly $f_{\ma{S},\Delta}$ lifts $f$ to $\ma{S}$. \vs{0.5cm} {\bf 4.9 Theorem} \vs{0.1cm} {\it Let $f:\IP^n\to \IP[X]$ be constructible. Let $\Delta,\widehat{\Delta}$ be representation tuples for $f$.
Then $f_{\ma{S},\Delta}=f_{\ma{S},\widehat{\Delta}}$.} \vs{0.1cm} {\bf Proof:} \vs{0.1cm} We do induction on $n$. \vs{0.2cm} $n=1$: Let $\Delta=(r,Q,\varphi)$ and $\widehat{\Delta}=(\widehat{r},\widehat{Q},\widehat{\varphi})$. Let $$\Sigma:=\big\{x\in\ma{S}\mid f_{\ma{S},\Delta}(x)\neq f_{\ma{S},\widehat{\Delta}}(x)\big\}.$$ By o-minimality, $\Sigma$ is a finite union of open intervals and points. Note that $\Sigma\cap \IP=\emptyset$. We set $$A:=\big\{x\in \IP\mid \mbox{a component of }\varphi\mbox{ or }\widehat{\varphi}\mbox{ is not }C^\infty\mbox{ at }x\big\}\subset\IP.$$ Since the $T_\an$-model $\IP$ allows $C^\infty$-cell decomposition we get that $A$ is a finite subset of $\IP$. We have that $\varphi_\ma{S}$ and $\widehat{\varphi}_\ma{S}$ are $C^\infty$ on $\ma{S}\setminus A$. Since the logarithm is $C^\infty$ on $\ma{S}_{>0}$ we get that $f_{\ma{S},\Delta},f_{\ma{S},\widehat{\Delta}}$ are $C^\infty$ on $\ma{S}\setminus A$. Since $\Sigma\cap\IP=\emptyset$ we have that $\Sigma\subset \ma{S}\setminus A$. Let $a_1<\ldots<a_N$ be the elements of $A$. Set $a_0:=-\infty$ and $a_{N+1}:=\infty$. \vs{0.2cm} {\bf Claim:} $\Sigma$ is finite. \vs{0.1cm} {\bf Proof of the claim:} Assume that $\Sigma$ is not finite. Let $J$ be a maximal nonempty interval that is contained in $\Sigma$. Since $\Sigma\cap A=\emptyset$ there is a unique $j\in \{0,\ldots,N\}$ such that $J\subset I:=]a_j,a_{j+1}[$. Since $\Sigma\cap \IP=\emptyset$ and $A\subset \IP$ we get that $J\neq I$. We deal with the case that $\inf(J)\neq \inf(I)$; the case $\sup(J)\neq \sup(I)$ is treated completely similarly. Then $b:=\inf(J)\in \ma{S}$ and $b> \inf(I)$. Let $h:=f_{\ma{S},\Delta}-f_{\ma{S},\widehat{\Delta}}$. Then $h(x)=0$ for all $x\leq b$ close to $b$. So $h$ is flat at $b$ (i.e., all the derivatives of $h$ vanish at $b$). Since $h$ is a constructible function in the $T_{\an,\exp}$-model $\ma{S}$ we get that $h$ is identically $0$ on a neighbourhood of $b$. In particular $h$ vanishes on an interval to the right of $b$; these points then do not belong to $\Sigma$, contradicting $J\subset\Sigma$ and $b=\inf(J)$.
\hfill$\Box_{\mathrm{Claim}}$ \vs{0.2cm} By the claim we have that $\Sigma$ is finite. Since $f_{\ma{S},\Delta}$ and $f_{\ma{S},\widehat{\Delta}}$ are continuous on $\ma{S}\setminus A$ and $\Sigma\subset \ma{S}\setminus A$ we obtain that $\Sigma=\emptyset$. \vs{0.2cm} $n\to n+1:$ For $t\in\IP$ we have that $\Delta_t:=(r,Q_t,\varphi_t)$ and $\widehat{\Delta}_t:=(\widehat{r},\widehat{Q}_t,\widehat{\varphi}_t)$ are representation tuples for $f_t$. Applying the inductive hypothesis we obtain that $$(f_{\ma{S},\Delta})_t=(f_t)_{\ma{S},\Delta_t}=(f_t)_{\ma{S},\widehat{\Delta}_t}=(f_{\ma{S},\widehat{\Delta}})_t$$ for all $t\in\IP$. This proves the theorem. \hfill$\Box$ \vs{0.5cm} Consequently, we write $f_\ma{S}$ for $f_{\ma{S},\Delta}$ where $\Delta$ is a representation tuple for $f$ and call it the {\bf canonical lifting of $f$}. This, of course, generalizes the case that $f$ is globally subanalytic. Note that in the case $n=1$, by o-minimality, $\lim_{x\to\infty,x\in\ma{S}}f_{\ma{S}}(x)$ exists in $\ma{S}\cup\{\pm\infty\}$. \vs{0.5cm} {\bf 4.10 Theorem} \vs{0.1cm} {\it Let $f:\IP\to \IP[X]$ be constructible. The following are equivalent: \begin{itemize} \item[(i)] $\lim_{x\to \infty,x\in \IP}f(x)$ exists in $\IP[X]$, \item[(ii)] $\lim_{x\to \infty,x\in\ma{S}}f_{\ma{S}}(x)\in\ma{S}$. \end{itemize} If this holds then $\lim_{x\to \infty,x\in\ma{S}}f_{\ma{S}}(x)=\lim_{x\to \infty,x\in \IP}f(x)$.} \vs{0.1cm} {\bf Proof:} \vs{0.1cm} If $f$ is ultimately $0$ then also $f_\ma{S}$ is ultimately $0$ and we are done. So we assume that $f$ is not ultimately $0$. Let $$\sum_{\sigma\in\ma{E}}h_\sigma(x) x^{\sigma_1}\big(\log x\big)^{\sigma_2}X^{\sigma_3}$$ be a simple description of $f$ at $\infty$. Then $$f_\ma{S}(x)=\sum_{\sigma\in\ma{E}}(h_\sigma)_\ma{S}(x) x^{\sigma_1}\big(\log x\big)^{\sigma_2}X^{\sigma_3}$$ for all sufficiently large $x\in\ma{S}$.
We have that $$A_\sigma:=\lim_{x\to\infty,x\in\IP}h_\sigma(x)=\lim_{x\to \infty,x\in\ma{S}}(h_\sigma)_\ma{S}(x)\in\IP^*$$ for all $\sigma\in \ma{E}$. Since $\ma{S}$ is a model of $\IR_{\an,\exp}$ we have, as in the reals, that for $q\in\IQ$ and $n\in\IZ$ $$\lim_{x\to \infty,x\in\ma{S}}x^q(\log(x))^n= \left\{\begin{array}{ll} \infty,& q>0,\\ \infty,& q=0\mbox{ and }n>0,\\ 1,& q=0\mbox{ and }n=0,\\ 0,& q=0\mbox{ and }n<0,\\ 0,& q<0.\\ \end{array}\right.$$ \vs{0.2cm} (ii)$\Rightarrow$(i): Since $\lim_{x\to \infty,x\in\ma{S}}f_{\ma{S}}(x)\in\ma{S}$ we see by the last observation that $\mu_f=\mu_\ma{E}\leq 0$. Since $\mu_f\in \IQ\times\IN_0$ we get that either $q_f<0$ or $q_f=n_f=0$. Moreover, $$\lim_{x\to \infty,x\in\ma{S}}f_{\ma{S}}(x)= \left\{ \begin{array}{ll} 0,& q_f<0,\\ \sum_{\pi(\sigma)=\mu_f}A_{\sigma}X^{\sigma_3},& q_f=n_f=0.\\ \end{array}\right.$$ This shows that $\lim_{x\to \infty,x\in\ma{S}}f_{\ma{S}}(x)\in \IP[X]$. By Lemma 4.6 we see that also $$\lim_{x\to \infty,x\in\IP}f(x)= \left\{ \begin{array}{ll} 0,& q_f<0,\\ \sum_{\pi(\sigma)=\mu_f}A_{\sigma}X^{\sigma_3},& q_f=n_f=0.\\ \end{array}\right.$$ \vs{0.2cm} (i)$\Rightarrow$(ii): By Proposition 4.7 we have that either $q_f<0$ or $q_f=n_f=0$. Now we can argue similarly to (ii)$\Rightarrow$(i). \hfill$\Box$ \vs{0.5cm} A function $f:\IP\to \IP[X]$ is, as usual, called differentiable at $x\in \IP$ if the limit $$f'(x):=\lim_{y\to x}\frac{f(y)-f(x)}{y-x}$$ exists in $\IP[X]$. \vs{0.5cm} {\bf 4.11 Theorem} \vs{0.1cm} {\it Let $f:\IP\to\IP[X]$ be constructible and let $x\in \IP$. The following are equivalent: \begin{itemize} \item[(i)] $f$ is differentiable at $x$. \item[(ii)] $f_\ma{S}$ is differentiable at $x$.
\end{itemize} If this holds then $f'(x)=(f_\ma{S})'(x)$.} \vs{0.1cm} {\bf Proof:} \vs{0.1cm} Apply Theorem 4.10 to the functions $$\IP_{>0}\to \IP[X], t\mapsto t\big(f(x+1/t)-f(x)\big)$$ and $$\IP_{>0}\to \IP[X], t\mapsto -t\big(f(x-1/t)-f(x)\big).$$ \hfill$\Box$ \vs{0.5cm} {\bf 4.12 Example} \vs{0.1cm} {\it The logarithm $\log:\IP_{>0}\to \IP[X]$ is differentiable with $(\log x)'=1/x$ for all $x\in \IP_{>0}$.} \vs{0.5cm} {\bf 4.13 Corollary} \vs{0.1cm} {\it Let $f:\IP\to \IP[X]$ be constructible. Then outside a finite set $f$ is infinitely often differentiable.} \vs{0.1cm} {\bf Proof:} \vs{0.1cm} Since the $T_{\an,\exp}$-model $\ma{S}$ has $C^\infty$-cell decomposition we get that $f_\ma{S}$ is $C^\infty$ outside a finite set. We get the claim by Theorem 4.11. \hfill$\Box$ \vs{0.5cm} {\bf 4.14 Corollary} \vs{0.1cm} {\it Let $f:\IP\to \IP[X]$ be a constructible function that is differentiable. Then $f':\IP\to\IP[X]$ is constructible.} \vs{0.1cm} {\bf Proof:} \vs{0.1cm} By o-minimality, a globally subanalytic function $\IP\to\IP$ is differentiable outside a finite set and its derivative is globally subanalytic. Applying this and Example 4.12 to a representation of $f$ we obtain, by the usual product rule and chain rule, the claim. \hfill$\Box$ \vs{0.5cm} {\bf 4.15 Theorem} \vs{0.1cm} {\it Let $I$ be an open interval in $\IP$ and let $f:I\to \IP[X]$ be a constructible function that is infinitely often differentiable. Let $K$ be a closed and bounded subinterval of $I$. Then there are $N\in\IN_0$ and uniquely determined globally subanalytic functions $h_0,\ldots,h_N$ that are infinitely often differentiable on an open neighbourhood of $K$ such that $f|_K=\sum_{j=0}^Nh_jX^j$.} \vs{0.1cm} {\bf Proof:} \vs{0.1cm} Let $\widetilde{K}$ be a closed and bounded subinterval of $I$ such that $K$ is contained in the interior of $\widetilde{K}$. Applying the preparation theorem for $\IR_\an$ (see for example [35, p.
760]) and arguing as in the proof of Proposition 4.5 we find a finite set $\ma{J}$ of open subintervals of $\widetilde{K}$ such that $\widetilde{K}\setminus\bigcup_{J\in\ma{J}}J$ is finite and for each $J\in\ma{J}$ we find \begin{itemize} \item[(1)] $\lambda\in \partial J$, \item[(2)] some $p\in\IN$, an open interval $U$ of $\IP$ with $|x-\lambda|^{1/p}\in U$ for $x$ in an open interval $V$ containing $\overline{J}$, \item[(3)] a finite subset $\ma{E}$ of $\IQ\times \IN_0\times\IN_0$ and \item[(4)] for every $\sigma\in\ma{E}$ a globally subanalytic function $g_\sigma:U\to \IP^*$ which is $C^\infty$ \end{itemize} such that $$f(x)=\sum_{\sigma\in\ma{E}}g_\sigma(|x-\lambda|^{1/p})|x-\lambda|^{-\sigma_1/p}\big(\log|x-\lambda|\big)^{\sigma_2}X^{\sigma_3}$$ for all $x\in J$. Fixing $J$, we may assume without restriction that $\lambda=\inf(J)=0$. Consider $$h:\widetilde{J}\to \IP[X], h(x)=f(x^p)=\sum_{\sigma\in\ma{E}}\widetilde{g}_\sigma(x)x^{-\sigma_1} \big(\log x\big)^{\sigma_2}X^{\sigma_3},$$ where $\widetilde{J}:=\{x\in \IP\mid x^p\in J\}$ and $\widetilde{g}_\sigma:=p^{\sigma_2}g_\sigma$ for $\sigma\in\ma{E}$. Since $h$ has a $C^\infty$-extension to $0$ we obtain that $\max\pi(\ma{E})\in -\IN_0\times\{0\}$. Subtracting $\sum_{\pi(\sigma)=\max\pi(\ma{E})}\widetilde{g}_\sigma(x) x^{-\sigma_1}X^{\sigma_3}$ from $h$ and repeating this argument for the new function, which again has a $C^\infty$-extension to $0$, we get that $\pi(\ma{E})\subset -\IN_0\times\{0\}$. Since $\widetilde{K}\setminus \bigcup_{J\in \ma{J}}J$ is finite we find $N\in\IN_0$ and globally subanalytic functions $h_0,\ldots,h_N:\widetilde{K}\to \IP$ such that $f|_{\widetilde{K}}=\sum_{j=0}^Nh_jX^j$. Since $X$ is transcendental over $\IP$ we get that $h_0,\ldots,h_N$ are $C^\infty$ on the interior of $\widetilde{K}$ and that these functions are uniquely determined. \hfill$\Box$ \vs{0.5cm} Similarly to above, we define partial derivatives of constructible functions $\IP^n\to \IP[X]$.
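The formal rule behind Corollary 4.14 is easy to make explicit: by the product and chain rules together with Example 4.12, each term $c\,x^q(\log x)^n$ differentiates to $cq\,x^{q-1}(\log x)^n+cn\,x^{q-1}(\log x)^{n-1}$, so a finite sum of such terms is mapped to another finite sum of the same shape. The following sketch carries this bookkeeping out over the reals (plain Python; the data layout and helper names are ours, purely illustrative) and checks the result against a difference quotient.

```python
from fractions import Fraction
from math import log

# A sum of terms c * x**q * (log x)**n is stored as a list of triples
# (c, q, n) with c a float, q a rational exponent and n a nonnegative integer.

def differentiate(terms):
    """Differentiate a sum of terms c*x**q*(log x)**n using (log x)' = 1/x.

    Each term contributes c*q*x**(q-1)*(log x)**n + c*n*x**(q-1)*(log x)**(n-1),
    so the derivative is again a finite sum of terms of the same shape (the
    real-field analogue of the argument for Corollary 4.14)."""
    out = []
    for c, q, n in terms:
        if q != 0:
            out.append((c * q, q - 1, n))
        if n > 0:
            out.append((c * n, q - 1, n - 1))
    return out

def evaluate(terms, x):
    """Evaluate a sum of terms at a real x > 0."""
    return sum(c * x ** float(q) * log(x) ** n for c, q, n in terms)

# f(x) = 3*sqrt(x)*(log x)**2 + 5*log x
f = [(3.0, Fraction(1, 2), 2), (5.0, Fraction(0), 1)]
df = differentiate(f)
print(df)
```

Iterating `differentiate` yields all higher derivatives, matching the fact (Corollary 4.13) that constructible functions are infinitely often differentiable outside a finite set.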
\vs{0.5cm} We fix an open interval $I$ in $\IP$. \newpage {\bf 4.16 Definition} \vs{0.1cm} Let $f:I\to \IP$ be globally subanalytic. A constructible function $F:I\to \IP[X]$ is called an {\bf antiderivative of $f$} if $F$ is differentiable with $F'=f$. \vs{0.5cm} {\bf 4.17 Proposition} \vs{0.1cm} {\it Let $f:I\to \IP$ be a globally subanalytic function that is continuous. Let $F:I\to \IP[X]$ be an antiderivative of $f$. Then $F_{\ma{S}}$ is differentiable on $I_\ma{S}$ with $F_{\ma{S}}'=f_{\ma{S}}$.} \vs{0.1cm} {\bf Proof:} \vs{0.1cm} As in the proof of Theorem 4.9 we see that there is a finite subset $A$ of $\IP$ that is contained in $I$ such that $F_{\ma{S}}$ is $C^\infty$ on $I_\ma{S}\setminus A$. \vs{0.2cm} {\bf Claim:} $F'_\ma{S}=f_\ma{S}$ on $I_\ma{S}\setminus A$. \vs{0.1cm} {\bf Proof of the claim:} The set $I_\ma{S}\setminus A$ is the union of finitely many open intervals $J_1<\ldots<J_k$ with endpoints from $\IP\cup\{\pm\infty\}$. Fix $j\in\{1,\ldots,k\}$ and let $J:=J_j$. Let $u:=(F_\ma{S}|_J)'$. Then $u$ is $C^\infty$ and $u(x)=f_\ma{S}(x)$ for all $x\in J\cap \IP$ by Theorem 4.11. Moreover, $u$ is constructible (compare with Corollary 4.14). Arguing as in the proof of Theorem 4.9 we get that $u=f_\ma{S}|_J$. This shows the claim. \hfill$\Box_{\mathrm{Claim}}$ \vs{0.2cm} By Theorem 4.11 applied to $F$ we obtain that $F_\ma{S}$ is differentiable with $F'_\ma{S}(x)=f_\ma{S}(x)$ at every $x\in A$. This and the claim give the proposition. \hfill$\Box$ \vs{0.5cm} {\bf 4.18 Proposition} \vs{0.1cm} {\it Let $f:I\to \IP$ be a globally subanalytic function that is continuous. Let $F,\widehat{F}:I\to \IP[X]$ be antiderivatives of $f$. Then there is some $c\in \IP[X]$ such that $\widehat{F}=F+c$.} \vs{0.1cm} {\bf Proof:} \vs{0.1cm} Let $G:=F_{\ma{S}}$ and let $\widehat{G}:=\widehat{F}_{\ma{S}}$. By Proposition 4.17, $G$ and $\widehat{G}$ are continuously differentiable on $I_\ma{S}$ with $G'=\widehat{G}'=f_\ma{S}$.
Since $G$ and $\widehat{G}$ are definable in the o-minimal structure $\ma{S}_{\an,\exp}$ we can apply the mean value property and get that there is a constant $c\in\ma{S}$ such that $\widehat{G}=G+c$. Hence $\widehat{F}=F+c$ and we see that $c\in \IP[X]$. \hfill$\Box$ \section*{5. Main theorems of integration} In this section we establish the transformation formula, Lebesgue's theorem on dominated convergence and the fundamental theorem of calculus for semialgebraic and analytic integration. As explained in the introduction, we deal with the field of Puiseux series. The results can be immediately translated to a model of $T_\an$ with archimedean value group, independently of the choice of the section. For the general situation, i.e. for a real closed field with archimedean value group that contains the reals, the results, again independently of the chosen section, can also be obtained by a more technical setup. We have refrained from this to concentrate on the key ideas. One can naturally realize the analytic respectively the semialgebraic Lebesgue measure and integral on the field of Puiseux series by the Lebesgue datum $(s,\sigma,\tau)$, where $s$ maps $q\in\IQ$ to $t^q$ and $\sigma:\IP\hookrightarrow \ma{P}=\IR((t^\IQ))$ and $\tau: \IQ\hookrightarrow\IR$ are the inclusions, respectively. We formulate the results below for the globally subanalytic setting. The corresponding statements for the semialgebraic one are then automatically included. \subsection*{5.1 The transformation formula} {\bf 5.1 Theorem} (Transformation formula) \vs{0.1cm} {\it Let $U,V\subset \IP^n$ be globally subanalytic sets that are open and let $\varphi:U\to V$ be a globally subanalytic $C^1$-diffeomorphism. Let $f:V\to \IP$ be globally subanalytic.
Then $f$ is integrable over $V$ if and only if $(f\circ\varphi)\big\vert \det(D_\varphi)\big\vert$ is integrable over $U$, and in this case $$\int_V f\,d\lambda_{\IP,n}=\int_U(f\circ\varphi)\big\vert\det(D_\varphi)\big\vert\,d\lambda_{\IP,n}.$$} \vs{0.1cm} {\bf Proof:} \vs{0.1cm} The usual transfer argument of Section 2 does the job. \hfill$\Box$ \subsection*{5.2 Lebesgue's theorem on dominated convergence and the fundamental theorem of calculus} Lebesgue's theorem on dominated convergence and the fundamental theorem of calculus involve limits with respect to the raw data. By simple transfer, we would obtain limits with respect to the lifting of the raw data to the big structure $\ma{S}=\IR((t))^{\mathrm{LE}}$ of LE-series. But we want to have a formulation where the limits are taken with respect to the globally subanalytic functions we start with. For this purpose we use the results of the previous Section 4. \vs{0.5cm} {\bf 5.2 Theorem} (Lebesgue's theorem on dominated convergence) \vs{0.1cm} {\it Let $f:\IP^{n+1}\to \IP, (s,x)\mapsto f(s,x)=f_s(x),$ be globally subanalytic. Assume that there is some integrable globally subanalytic function $h:\IP^n\to \IP$ such that $|f_s|\leq |h|$ for all sufficiently large $s\in \IP$. Then the globally subanalytic function $\lim_{s\to \infty,s\in \IP}f_s$ is integrable and $$\int \lim_{s\to \infty,s\in \IP}f_s\,d\lambda_{\IP,n}=\lim_{s\to \infty,s\in \IP}\int f_s\, d\lambda_{\IP,n}.$$} \vs{0.1cm} {\bf Proof:} \vs{0.1cm} Note that, by o-minimality and by the assumption that $|f_s|\leq|h|$ for all sufficiently large $s$, $\lim_{s\to \infty,s\in \IP}f_s$ exists and is an integrable globally subanalytic function on $\IP^n$. Note that $$\big(\lim_{s\to \infty,s\in\IP} f_s\big)_\ma{S}=\lim_{s\to \infty,s\in\ma{S}} (f_s)_\ma{S}.$$ Let $$F:\IP\to \IP[X], s\mapsto \int f_s\,d\lambda_{\IP,n}.$$ Then $F$ is a constructible function by Proposition 4.3.
By the parametric version of Construction 2.6 and by applying the familiar transfer argument, we get that $$\int \lim_{s\to \infty,s\in \IP}f_s\,d\lambda_{\IP,n}=\lim_{s\to \infty,s\in\ma{S}}F_{\ma{S}}(s).$$ Theorem 4.10 gives the claim. \hfill$\Box$ \vs{0.5cm} {\bf 5.3 Corollary} \vs{0.1cm} {\it Let $A$ be a globally subanalytic subset of $\IP_{\geq 0}\times \IP^{n}$. \begin{itemize} \item[(1)] (Continuity from below) Assume that $A_{s_1}\subset A_{s_2}$ for all $0\leq s_1\leq s_2$ and that there is some globally subanalytic subset $B$ of $\IP^n$ with $\lambda_{\IP,n}(B)<\infty$ such that $A_s\subset B$ for all $s\geq 0$. Then $$\lim_{s\to \infty,s\in \IP}\lambda_{\IP,n}\big(A_s\big)=\lambda_{\IP,n}\big(\bigcup_{s\geq 0}A_s\big).$$ \item[(2)] (Continuity from above) Assume that $A_{s_1}\supset A_{s_2}$ for all $0\leq s_1\leq s_2$ and that $\lambda_{\IP,n}(A_0)<\infty$. Then $$\lim_{s\to \infty,s\in \IP}\lambda_{\IP,n}\big(A_s\big)=\lambda_{\IP,n}\big(\bigcap_{s\geq 0}A_s\big).$$ \end{itemize}} {\bf Proof:} \vs{0.1cm} Apply Theorem 5.2 and Proposition 2.7(2). \hfill$\Box$ \vs{0.5cm} The continuity from above can be viewed as a substitute for $\sigma$-continuity from above in the usual Lebesgue theory. The continuity from below can be viewed as a partial substitute for $\sigma$-continuity from below, which is equivalent to $\sigma$-additivity. We need here the additional assumption that the union is contained in a set of finite measure. This assumption is necessary as the following example shows. \vs{0.5cm} {\bf 5.4 Example} \vs{0.1cm} {\it For $s\geq 1$ let $$A_s:=\big\{(x,y)\in \IP^2\mid 1\leq x\leq s, 0\leq y\leq \frac{1}{x}\big\}.$$ Then $\lambda_{\IP,2}\big(\bigcup_{s\geq 1}A_s\big)=\infty$ but $\lambda_{\IP,2}(A_s)=\log s$ does not have a limit in $\IP[X]$ as $s$ tends to $\infty$.} \vs{0.5cm} The following example shows that, even in the finite case, the measure is not $\sigma$-additive (cf. the introduction).
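Over the reals, Example 5.4 is the familiar divergence of the area under the hyperbola: $\lambda(A_s)=\int_1^s dx/x=\log s$ grows beyond every bound, which is why continuity from below needs the finite-measure envelope $B$. A quick numeric sketch of this real-field analogue (plain Python; the function name is ours, purely illustrative):

```python
from math import log

def area_under_hyperbola(s, steps=100_000):
    """Midpoint Riemann sum for the area of the real analogue of A_s in
    Example 5.4, i.e. {(x, y) : 1 <= x <= s, 0 <= y <= 1/x}."""
    dx = (s - 1) / steps
    return sum(dx / (1 + (i + 0.5) * dx) for i in range(steps))

# The measure log(s) of A_s grows without bound as s increases:
for s in (10, 100, 1000):
    print(s, area_under_hyperbola(s), log(s))
```

Since no set of finite measure contains every $A_s$, the hypothesis of Corollary 5.3(1) fails, matching the divergence seen here.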
\vs{0.5cm} {\bf 5.5 Example} \vs{0.1cm} {\it For $j\in\IN$ let $A_j:=[j,t^{-1/j}]$. Then $A_j\searrow \emptyset$ but $\lambda_{\IP,1}(A_j)$ does not tend to $0$ as $j$ tends to $\infty$ since $\lambda_{\IP,1}(A_j)=t^{-1/j}-j$ for all $j$ by 2.3(5).} \vs{0.5cm} {\bf 5.6 Theorem} (Differentiation) \vs{0.1cm} {\it Let $k\in\IN$, let $U$ be an open globally subanalytic subset of $\IP^k$ and let $f:U\times \IP^n\to \IP, (s,x)\mapsto f(s,x),$ be globally subanalytic. Assume that \begin{itemize} \item[(a)] for all $s\in U$, $f_s:\IP^n\to \IP, x\mapsto f(s,x),$ is integrable, \item[(b)] for all $x\in \IP^n$, the partial derivatives of the function $f_x:U\to \IP, s\mapsto f(s,x),$ exist, \item[(c)] there is an integrable globally subanalytic function $g:\IP^n\to \IP$ such that $|\big(\partial f/\partial s_j\big)(s,x)|\leq |g(x)|$ for all $j\in\{1,\ldots,k\}$ and all $(s,x)\in U\times \IP^n$. \end{itemize} Then the partial derivatives of the constructible function $$\varphi:U\to \IP[X], s\mapsto \int_{\IP^n} f(s,x)\,d\lambda_{\IP,n}(x),$$ exist and $$\frac{\partial \varphi}{\partial s_j}(s)=\int_{\IP^n}\frac{\partial f}{\partial s_j}(s,x)\,d\lambda_{\IP,n}(x)$$ for all $j\in\{1,\ldots,k\}$ and all $s\in U$.} \vs{0.1cm} {\bf Proof:} \vs{0.1cm} Using Theorem 5.2 on dominated convergence, the usual proof (see for example [1, \S 16 I]) can be adjusted to obtain that the partial derivatives of the constructible function $\varphi:U\to \IP[X]$ exist. Note that, by o-minimality, the mean value theorem holds for globally subanalytic functions that are differentiable. \hfill$\Box$ \vs{0.5cm} {\bf 5.7 Theorem} (Fundamental theorem of calculus) \vs{0.1cm} {\it Let $I$ be an open subinterval of $\IP$ and let $f:I\to \IP$ be a globally subanalytic function that is continuous. \begin{itemize} \item[(1)] Let $a\in I$. The function $$F:I\to \IP[X], x\mapsto \int_a^xf(s)\,d\lambda_{\IP,1}(s),$$ is an antiderivative of $f$. \item[(2)] Let $G$ be an antiderivative of $f$ on $I$.
For $a,b\in I$ we have $$\int_a^bf(x)\,d\lambda_{\IP,1}=G(b)-G(a).$$ \end{itemize}} {\bf Proof:} \vs{0.1cm} (1): By Proposition 4.3, $F$ is constructible. The usual proof of the fundamental theorem of calculus gives that $F$ is differentiable with $F'=f$. \vs{0.2cm} (2): This follows from (1) and Proposition 4.18. \hfill$\Box$ \vs{0.5cm} {\bf 5.8 Corollary} \vs{0.1cm} {\it Let $f:I\to \IP$ be a globally subanalytic function that is continuous. Then $f$ has an antiderivative.} \vs{0.5cm} {\bf 5.9 Example} \vs{0.1cm} {\it Let $n\in\IZ$. Up to an additive constant, the antiderivative of $x^n$ on $\IP_{>0}$ is given by $x^{n+1}/(n+1)$ if $n\neq -1$ and by $\log x$ if $n=-1$.} \subsection*{5.3 Fubini's theorem} So far we cannot establish Fubini's theorem in the above setting. The reason is that one obtains, when integrating with parameters, constructible functions which are not necessarily globally subanalytic. We generalize the construction of Section 2.3, using the results of Cluckers and D. Miller [9, 10, 11], to obtain a version of Fubini's theorem. \vs{0.5cm} {\bf 5.10 Construction} \vs{0.1cm} Let $f:\IP^n\to \IP[X]$ be constructible. We define when $f$ is integrable and, in this case, its integral $\int_{\IP^n}f(x)\,dx\in \ma{S}$ as follows. Take a formula $\phi(x,s,y)$ in the language $\ma{L}_{\an,\log}$, $x=(x_1,\ldots,x_n), y=(y_1,\ldots,y_q)$, and a point $a\in \IP^q$ such that $\mathrm{graph}(f_\ma{S})=\phi(\ma{S}^{n+1},a)$. We choose thereby $\phi(x,s,y)$ in such a way that $\phi(\IR^{n+1+q})$ is the graph of a constructible function $g:\IR^{n+q}\to \IR$. By Fact 1.10 there are constructible functions $h:\IR^q\to \IR$ and $F:\IR^q\to \IR$ such that, for every $c\in \IR^q$, the following holds: \begin{itemize} \item[(1)] $g_c:\IR^n\to \IR$ is integrable over $\IR^n$ if and only if $h(c)=0$, \item[(2)] if $g_c$ is integrable then $\int_{\IR^n}g_c(x)\,dx=F(c)$.
\end{itemize} The graphs of the functions $F$ and $h$ are defined in $\IR_{\an,\exp}$ by $\ma{L}_{\an,\exp}$-formulas $\psi(y,z)$ and $\chi(y,z)$, respectively. These formulas define in $\ma{S}$ the graph of a function $F_\ma{S}:\ma{S}^q\to \ma{S}$ and of a function $h_\ma{S}:\ma{S}^q\to\ma{S}$. The values $F_\ma{S}(a)$ and $h_\ma{S}(a)$ do not depend on the choices of $\phi$, $a$ and $\psi,\chi$. We say that $f$ is integrable if $h_\ma{S}(a)=0$ and in this case we set $\int_{\IP^n}f(x)\,dx:=F_{\ma{S}}(a)$. \vs{0.5cm} Note that Theorem 3.11 can be generalized to this setting. We write again $\int_{\IP^n}f\,d\lambda_{\IP,n}$. Doing the same construction with parameters we obtain the following (compare with Proposition 4.3): \vs{0.5cm} {\bf 5.11 Proposition} \vs{0.1cm} {\it Let $f:\IP^{q+n}\to \IP[X]$ be constructible. The following holds: \begin{itemize} \item[(1)] There is a constructible function $g:\IP^q\to\IP[X]$ such that $$\mathrm{Fin}(f):=\big\{s\in \IP^q\mid f_s\mbox{ is integrable}\big\}$$ equals the zero set of $g$. \item[(2)] There is a constructible function $F:\IP^q\to\IP[X]$ such that $$\int_{\IP^n}f_s(x)\,d\lambda_{\IP,n}(x)=F(s)$$ for all $s\in \mathrm{Fin}(f)$. \end{itemize}} \vs{0.2cm} Using the usual transfer argument, we obtain Fubini's theorem. \newpage {\bf 5.12 Theorem} (Fubini's theorem) \vs{0.1cm} {\it Let $f:\IP^{m+n}\to \IP[X]$ be a constructible function that is integrable. Let $g:\IP^m\to \IP[X]$ be a constructible function with $$g(x)=\int_{\IP^n}f(x,y)\,d\lambda_{\IP,n}(y)$$ for all $x\in \IP^m$ such that $f_x:\IP^n\to\IP[X]$ is integrable. Then $g$ is integrable and $$\int_{\IP^{m+n}}f(x,y)\,d\lambda_{\IP,m+n}(x,y)=\int_{\IP^m} g(x)\,d\lambda_{\IP,m}(x).$$} \section*{6.
An application} The Stone-Weierstra\ss{} theorem on uniform approximation of continuous functions on bounded and closed intervals by polynomials does not hold for semialgebraic functions on non-archimedean real closed fields (see [2, Example 8.8.6] for the field of Puiseux series). By the approximation theorem of Efroymson ([2, Theorem 8.8.4]) it follows that continuous semialgebraic functions can be uniformly approximated by Nash functions. These are functions that are semialgebraic and $C^\infty$. By integration techniques (smoothing by convolution), we can extend the latter to the case of $T_\an$-models with archimedean value group. As in the previous sections, we stick to the case of the field of Puiseux series. \vs{0.5cm} We consider the semialgebraic function $\Phi:\IP\to\IP, s\mapsto 1/\big(\pi(1+s^2)\big)$. \vs{0.5cm} {\bf 6.1 Remark} {\it \begin{itemize} \item[(1)] The function $\Phi$ is integrable and $\int_\IP\Phi(s)\,d\lambda_{\IP,1}(s)=1$. \item[(2)] Let $r\in \IP_{>0}$. Then $$\lim_{h\to 0,h\in\IP}\frac{1}{h}\int_{|s|>r}\Phi\big(s/h\big)\,d\lambda_{\IP,1}(s)=0.$$ \end{itemize}} {\bf Proof:} \vs{0.1cm} (1): This holds in the real case. We get the claim by Proposition 3.21. \vs{0.2cm} (2): The constructible antiderivative of $\Phi$ is given by $\frac{1}{\pi}\arctan_\IP$, where $\arctan_\IP$ is the globally subanalytic lifting of the real arctangent to $\IP$. Applying the transformation formula Theorem 5.1 and the fundamental theorem of calculus Theorem 5.7 we have that $$\frac{1}{h}\int_{|s|>r}\Phi\big(s/h\big)\,d\lambda_{\IP,1}(s)=\int_{|s|>r/h}\Phi(s)\,d\lambda_{\IP,1}(s)=1-\frac{2}{\pi}\arctan_\IP\big(r/h\big).$$ Since $\lim_{x\to \infty,x\in \IP}\arctan_\IP(x)=\lim_{x\to \infty,x\in \IR}\arctan(x)=\pi/2$ we obtain the claim. \hfill$\Box$ \vs{0.5cm} By Remark 6.1 one can call $\Big(\Phi_h(s)\Big)_{h\in \IP_{>0}}:=\Big(\frac{1}{h}\Phi\big(s/h\big)\Big)_{h\in \IP_{>0}}$ a {\bf Dirac family} on $\IP$.
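Over the reals, both parts of Remark 6.1 reduce to elementary arctangent identities: the Cauchy kernel $\Phi$ has total mass $1$, and the mass that $\Phi_h$ places outside $[-r,r]$ equals the $\Phi$-mass outside $[-r/h,r/h]$, which vanishes as $h\to 0$. A small numeric check of these real identities (plain Python; the helper name is ours, purely illustrative):

```python
from math import atan, pi

def cauchy_mass(a, b):
    """Integral of Phi(s) = 1/(pi*(1+s**2)) over [a, b], computed from the
    antiderivative arctan(s)/pi."""
    return (atan(b) - atan(a)) / pi

# (1) Total mass: the integral over the whole line is 1 (arctan -> +-pi/2).
print(2 * cauchy_mass(0, 1e12))  # numerically exhausts the line

# (2) The Phi_h-mass outside [-r, r] is the Phi-mass outside [-r/h, r/h];
# it shrinks as h does.
r = 1.0
for h in (1.0, 0.1, 0.01):
    print(h, 1 - 2 * cauchy_mass(0, r / h))
```

The slow (only polynomial) decay of the Cauchy tails is what keeps $\Phi$ semialgebraic, in contrast to the Gaussian kernel of classical smoothing arguments.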
\newpage {\bf 6.2 Remark} \vs{0.1cm} {\it Let $g:\IP\to \IP$ be a globally subanalytic function that is bounded. Then $g(s)\Phi_h(s-x)$ is integrable for all $x\in \IP$ and all $h\in \IP_{>0}$.} \vs{0.1cm} {\bf Proof:} \vs{0.1cm} This follows from Remark 6.1(1). \hfill$\Box$ \vs{0.5cm} {\bf 6.3 Definition} \vs{0.1cm} Let $g:\IP\to\IP$ be a globally subanalytic function that is bounded. For $h\in \IP_{>0}$ let $$S_hg:\IP\to \IP[X], x\mapsto \int_\IP g(s)\Phi_h(s-x)\,d\lambda_{\IP,1}(s).$$ The function $S_hg$ is the {\bf convolution} of $g$ with $\Phi_h$. \vs{0.5cm} We obtain the usual smoothing property: \vs{0.5cm} {\bf 6.4 Proposition} \vs{0.1cm} {\it Let $g:\IP\to\IP$ be a globally subanalytic function that is continuous. Assume that the support of $g$ is bounded. Then the following holds: \begin{itemize} \item[(1)] $S_hg$ is $C^\infty$ for all $h\in \IP_{>0}$. \item[(2)] For every $\varepsilon\in \IP_{>0}$ there is $h\in \IP_{>0}$ such that $|g(x)-S_hg(x)|<\varepsilon$ for all $x\in \IP$. \end{itemize}} {\bf Proof:} \vs{0.1cm} (1): $\Phi$ is $C^\infty$ and all derivatives of $\Phi$ are bounded on $\IP$. We get the claim by applying Theorem 5.6 repeatedly. \vs{0.2cm} (2): The classical proof (see Bourbaki [4, VIII \S 4]) works in this setting. \hfill$\Box$ \vs{0.5cm} {\bf 6.5 Theorem} \vs{0.1cm} {\it Let $a,b\in \IP$ with $a<b$ and let $f:[a,b]\to \IP$ be globally subanalytic and continuous. Let $\varepsilon\in\IP_{>0}$. Then there is some open interval $I$ in $\IP$ containing $[a,b]$ and some globally subanalytic function $u:I\to \IP$ that is $C^\infty$ such that $|f(x)-u(x)|<\varepsilon$ for all $x\in [a,b]$.} \vs{0.1cm} {\bf Proof:} \vs{0.1cm} We may assume that $\varepsilon\in \mathfrak{m}_\IP$. We choose a continuous globally subanalytic function $g:\IP\to\IP$ that extends $f$ and has a bounded support. By Proposition 6.4(2) there is some $h\in \IP_{>0}$ such that $|g(x)-S_hg(x)|<\varepsilon$ for all $x\in \IP$. Let $v:=S_hg$.
Then $v$ is constructible by Proposition 4.3. By Proposition 6.4(1) we have that $v$ is $C^\infty$. By Theorem 4.15 there are $N\in\IN_0$ and globally subanalytic functions $h_0,\ldots,h_N$ that are infinitely often differentiable on an open interval $I$ containing $[a,b]$ such that $v|_{[a,b]}=\sum_{j=0}^Nh_jX^j$. Since $\varepsilon\in\mathfrak{m}_\IP$ we obtain from the transcendence of $X$ that $|h_0(x)-f(x)|<\varepsilon$ for all $x\in[a,b]$. So $u:=h_0$ is as required. \hfill$\Box$ \newpage \noi \footnotesize{\centerline{\bf References} \begin{itemize} \item[1.] H. Bauer: Measure and Integration Theory. De Gruyter, 2001. \item[2.] J. Bochnak, M. Coste, M.-F. Roy: Real Algebraic Geometry. Springer, 1998. \item[3.] N. Bourbaki: Integration. I. Chapters 1-6. Springer, 2004. \item[4.] N. Bourbaki: Integration. II. Chapters 7-9. Springer, 2004. \item[5.] L. Br\"ocker: Euler integration and Euler multiplication. {\it Adv. Geom.} {\bf 5} (2005), no. 1, 145-169. \item[6.] R. Cluckers, M. Edmundo: Integration of positive constructible functions against Euler characteristic and dimension. {\it J. Pure Appl. Algebra} {\bf 208} (2007), no. 2, 691-698. \item[7.] R. Cluckers, F. Loeser: Constructible motivic functions and motivic integration. {\it Invent. Math.} {\bf 173} (2008), no. 1, 23-121. \item[8.] R. Cluckers, F. Loeser: Constructible exponential functions, motivic Fourier transform and transfer principle. {\it Ann. of Math. (2)} {\bf 171} (2010), no. 2, 1011-1065. \item[9.] R. Cluckers, D. Miller: Stability under integration of sums of products of real globally subanalytic functions and their logarithms. {\it Duke Math. J.} {\bf 156} (2011), no. 2, 311-348. \item[10.] R. Cluckers, D. Miller: Loci of integrability, zero loci, and stability under integration for constructible functions on Euclidean space with Lebesgue measure. {\it Int. Math. Res. Not.} {\bf 2012}, no. 14, 3182-3191. \item[11.] R. Cluckers, D. Miller: Lebesgue classes and preparation of real constructible functions. {\it J. Funct.
Anal.} {\bf 264} (2013), no. 7, 1599-1642. \item[12.] R. Cluckers, J. Nicaise, J. Sebag: Motivic integration and its interactions with model theory and Non-Archimedean geometry, Volume I. {\it London Math. Soc. Lecture Note Ser.} {\bf 383}, Cambridge Univ. Press, 2011. \item[13.] R. Cluckers, J. Nicaise, J. Sebag: Motivic integration and its interactions with model theory and Non-Archimedean geometry, Volume II. {\it London Math. Soc. Lecture Note Ser.} {\bf 384}, Cambridge Univ. Press, 2011. \item[14.] G. Comte, J.-M. Lion, J.-P. Rolin: Nature log-analytique du volume des sous-analytiques. {\it Illinois J. Math.} {\bf 44} (2000), no. 4, 884-888. \item[15.] O. Costin, P. Ehrlich, H. M. Friedman: Integration on the surreals: A conjecture of Conway, Kruskal and Norton. arXiv:1505.02478. \item[16.] N. Cutland: Nonstandard measure theory and its applications. {\it Bull. London Math. Soc.} {\bf 15} (1983), no. 6, 529-589. \item[17.] J. Denef, F. Loeser: Germs of arcs on singular algebraic varieties and motivic integration. {\it Invent. Math.} {\bf 135} (1999), no. 1, 201-232. \item[18.] L. van den Dries: Tame Topology and O-minimal Structures. {\it London Math. Soc. Lecture Notes Series} {\bf 248}, Cambridge University Press, 1998. \item[19.] L. van den Dries: Limit sets in o-minimal structures. In: Proceedings of the RAAG Summer School Lisbon 2003 O-minimal structures. Cuvillier, 2005, 172-215. \item[20.] L. van den Dries, A. Macintyre, D. Marker: The elementary theory of restricted analytic fields with exponentiation. {\it Annals of Mathematics} {\bf 140} (1994), 183-205. \item[21.] L. van den Dries, A. Macintyre, D. Marker: Logarithmic-exponential power series. {\it J. London Math. Soc.} (2) {\bf 56} (1997), no. 3, 417-434. \item[22.] L. van den Dries, A. Macintyre, D. Marker: Logarithmic-exponential series. {\it Ann. Pure Appl. Logic} {\bf 111} (2001), no. 1-2, 61-113. \item[23.] L. van den Dries, C. Miller: Geometric categories and o-minimal structures. 
{\it Duke Math. J.} {\bf 84} (1996), no. 2, 497-540. \item[24.] L. van den Dries, P. Speissegger: O-minimal preparation theorems. Model theory and applications, 87-116, Quad. Mat., 11, Aracne, Rome, 2002. \item[25.] L. Fuchs: Teilweise geordnete algebraische Strukturen. Vandenhoeck \& Ruprecht, 1966. \item[26.] W. Hodges: A shorter model theory. Cambridge University Press, 1997. \item[27.] E. Hrushovski, D. Kazhdan: Integration in valued fields. Algebraic geometry and number theory, 261-405, Progr. Math., {\bf 253}, Birkh\"auser, 2006. \item[28.] E. Hrushovski, Y. Peterzil, A. Pillay: Groups, measures, and the NIP. {\it J. Amer. Math. Soc.} {\bf 21} (2008), no. 2, 563-596. \item[29.] T. Kaiser: On convergence of integrals in o-minimal structures on archimedean real closed fields. {\it Ann. Polon. Math.} {\bf 87} (2005), 175-192. \item[30.] T. Kaiser: First order tameness of measures. {\it Ann. Pure Appl. Logic} {\bf 163} (2012), no. 12, 1903-1927. \item[31.] T. Kaiser: Integration of semialgebraic functions and integrated Nash functions. {\it Math. Z.} {\bf 275} (2013), no. 1-2, 349-366. \item[32.] I. Kaplansky: Maximal fields with valuations. {\it Duke Math. J.} {\bf 9} (1942), 303-321. \item[33.] F.-V. Kuhlmann, S. Kuhlmann, S. Shelah: Exponentiation in power series fields. {\it Proc. Amer. Math. Soc.} {\bf 125} (1997), no. 11, 3177-3183. \item[34.] S. Kuhlmann: Ordered exponential fields. {\it Fields Institute Monographs}, American Mathematical Society, 2000. \item[35.] J.-M. Lion, J.-P. Rolin: Int\'{e}gration des fonctions sous-analytiques et volumes des sous-ensembles sous-analytiques. {\it Ann. Inst. Fourier (Grenoble)} {\bf 48} (1998), no. 3, 755-767. \item[36.] J. Ma\v{r}\'{i}kov\'{a}: The structure on the real field generated by the standard part map on an o-minimal expansion of a real closed field. {\it Israel J. Math.} {\bf 171} (2009), 175-195. \item[37.] J. Ma\v{r}\'{i}kov\'{a}, M. Shiota: Measuring definable sets in o-minimal fields. {\it Israel J. 
Math.} {\bf 209} (2015), 687-714. \item[38.] S. Prie\ss-Crampe: Angeordnete Strukturen: Gruppen, K\"orper, projektive Ebenen. Springer, 1983. \item[39.] A. Robinson: Non-standard Analysis. North-Holland Publishing Company, 1966. \item[40.] Y. Yin: Additive invariants in o-minimal valued fields. arXiv:1307.0224. \item[41.] Y. Yomdin, G. Comte: Tame geometry with application in smooth analysis. Lecture Notes in Mathematics, {\bf 1834}. Springer, 2004. \end{itemize}} \vs{1cm} Tobias Kaiser\\ University of Passau\\ Faculty of Computer Science and Mathematics\\ tobias.kaiser@uni-passau.de\\ D-94030 Germany \end{document}
TITLE: Nim game variant QUESTION [2 upvotes]: The statement is as follows. Given a number of piles in which each pile contains some number of stones/coins. In each turn, a player can choose only one pile and remove any number of stones (at least one) from that pile. The player who cannot move is considered to lose the game (i.e., the one who takes the last stone is the winner). We can find the solution by XOR-ing all the pile values. But we have a constraint: in each turn, a player can choose only one pile and remove any number of stones between 1 and H. How do we solve this modified version? From my point of view it will remain unchanged; we only have to compute the XOR of the values. REPLY [2 votes]: Lord Shark the Unknown gave the correct answer, but I believe it's helpful to describe the strategy. For a set of piles of sizes $p_1, p_2, \ldots, p_n$ respectively, compute the residues $r_i = p_i \pmod{H+1}$ and pretend you're playing a regular Nim game with piles of size $r_i$. Now you are allowed to make any move you want, since $r_i \leqslant H$. Suppose $r_1 \oplus r_2 \oplus \ldots \oplus r_n = 0$ and it's your opponent's turn. If the opponent takes $k$ stones from pile $i$ with $k \leqslant r_i$, just play as you would in a regular Nim, so after your move $r_1 \oplus r_2 \oplus \ldots \oplus r_n = 0$ is still true. If the opponent takes $r_i < k \leqslant p_i$ stones from pile $i$, take $H+1-k$ stones from the same pile so the imaginary situation is unchanged, and therefore $r_1 \oplus r_2 \oplus \ldots \oplus r_n = 0$ again. Thus you win.
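A short script makes the residue reduction concrete (`first_player_wins` is a hypothetical helper name, not from the thread):

```python
def first_player_wins(piles, H):
    """Bounded Nim: each move removes between 1 and H stones from one pile.

    Reduce each pile modulo H+1 and play ordinary Nim on the residues:
    the first player wins iff the XOR of the residues is nonzero.
    """
    xor = 0
    for p in piles:
        xor ^= p % (H + 1)
    return xor != 0

# Ordinary Nim is the special case where H is at least the largest pile.
print(first_player_wins([3, 4, 5], H=10))  # 3 ^ 4 ^ 5 = 2, so True
print(first_player_wins([2, 2], H=1))      # residues 0, 0, so False
```

Note that a single pile of size $p$ with cap $H$ is exactly the classical subtraction game, lost for the player to move iff $p \equiv 0 \pmod{H+1}$, which the function reproduces.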
\subsection{Bounds for DRLS \label{sec-all-bounds}} We derive a new additive-multiplicative spectral approximation bound (Eqn. \ref{bound}) for the square of the submatrix $\mathbf{C}$ selected with DRLS. \begin{theorem}\label{theorem-DRLS} \textit{Additive-Multiplicative Spectral Bound:} Let $\mathbf{A} \in \mathbb{R}^{n \times d}$ be a matrix of at least rank $k$ and $\bar{\tau}_i(\mathbf{A})$ be defined as in Eqn. \ref{eqn-tau}. Construct $\mathbf{C}$ following the DRLS algorithm described in Sec. \ref{sec-DLS}. Then $\mathbf{C}$ satisfies, \begin{eqnarray} (1-\epsilon ) \mathbf{A} \mathbf{A}^T -\frac{ \epsilon}{k} || \mathbf{A}_{\backslash k}||_F^2 \mathbf{I} & \preceq & \mathbf{CC}^T \preceq \mathbf{A} \mathbf{A}^T. \label{bound} \end{eqnarray} The symbol $\preceq$ denotes the Loewner partial ordering which is reviewed in Sec \ref{sec-review} (see \cite{horn_matrix_2013} for a thorough discussion). \end{theorem} Conceptually, the Loewner ordering in Eqn. \ref{bound} is the generalization of the ordering of real numbers (e.g. $1<1.5$) to Hermitian matrices. Statements of Loewner ordering are quite powerful; important consequences include inequalities for the eigenvalues. We will use Eqn. \ref{bound} to prove Theorems \ref{theorem-css}, \ref{theorem-pcp}, and \ref{theorem-regression}. Note that our additive-multiplicative bound holds for an un-weighted column subset of $\mathbf{A}$. \begin{theorem}\label{theorem-css} \textit{Column Subset Selection:} Let $\mathbf{A} \in \mathbb{R}^{n \times d}$ be a matrix of at least rank $k$ and $\bar{\tau}_i(\mathbf{A})$ be defined as in Eqn. \ref{eqn-tau}. Construct $\mathbf{C}$ following the DRLS algorithm described in Sec. \ref{sec-DLS}. 
Then $\mathbf{C}$ satisfies, \begin{eqnarray} || \mathbf{A} - \mathbf{C C}^+\mathbf{A} ||_F^2 \le ||\mathbf{A} - \boldsymbol{\Pi}^F_{\mathbf{C}, k} (\mathbf{A}) ||_F^2 \le (1 + 4 \epsilon) || \mathbf{A}_{\backslash k} ||_F^2, \label{bound-css} \end{eqnarray} with $0<\epsilon < \tfrac14$ and where $\boldsymbol{\Pi}^F_{\mathbf{C}, k}(\mathbf{A}) =\left(\mathbf{C}\mathbf{C}^+ \mathbf{A}\right)_k$ is the best rank-$k$ approximation to $\mathbf{A}$ in the column space of $\mathbf{C}$ with respect to the Frobenius norm. \end{theorem} Column subset selection algorithms are widely used for feature selection in high-dimensional data, since the aim of the column subset selection problem is to find a small number of columns of $\mathbf{A}$ that approximate the column space nearly as well as the top $k$ singular vectors. \begin{theorem}\label{theorem-pcp} \textit{Rank-$k$ Projection-Cost Preservation:} Let $\mathbf{A} \in \mathbb{R}^{n \times d}$ be a matrix of at least rank $k$ and $\bar{\tau}_i(\mathbf{A})$ be defined as in Eqn. \ref{eqn-tau}. Construct $\mathbf{C}$ following the DRLS algorithm described in Sec. \ref{sec-DLS}. Then $\mathbf{C}$ satisfies, for any rank-$k$ orthogonal projection $\mathbf{X} \in \mathbb{R}^{n\times n}$, \begin{eqnarray} (1-\epsilon) || \mathbf{A} - \mathbf{X} \mathbf{A} ||_F^2 \le || \mathbf{C} - \mathbf{X} \mathbf{C} ||_F^2 \le || \mathbf{A} - \mathbf{X} \mathbf{A} ||_F^2. \label{bound-pcp} \end{eqnarray} To simplify the bookkeeping, we prove the lower bound of Theorem \ref{theorem-pcp} with $(1-\alpha \epsilon)$ error ($\alpha =2(2 + \sqrt{2})$), and assume $0<\epsilon< \tfrac12$. \end{theorem} Projection-cost preservation bounds were formalized recently in \citet{feldman_turning_2013, cohen_dimensionality_2015}. Bounds of this type are important because they mean that low-rank projection problems can be solved with $\mathbf{C}$ instead of $\mathbf{A}$ while maintaining the projection cost.
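The upper bound $\mathbf{CC}^T \preceq \mathbf{A}\mathbf{A}^T$ in Eqn. \ref{bound} in fact holds for \emph{any} un-weighted column subset, not only the DRLS selection, since $\mathbf{A}\mathbf{A}^T - \mathbf{CC}^T$ is the sum of the outer products $\mathbf{a}_j \mathbf{a}_j^T$ over the discarded columns $j$ and hence positive semidefinite. A minimal numerical sketch of this fact (toy matrix and an arbitrary subset; quadratic forms $\mathbf{x}^T(\cdot)\mathbf{x}$ stand in for the Loewner order):

```python
# Check that C C^T <= A A^T in the Loewner order for an un-weighted column
# subset C of A: A A^T - C C^T is the sum of a_j a_j^T over the discarded
# columns j, so x^T (A A^T - C C^T) x = sum_j (a_j . x)^2 >= 0 for all x.
import random

random.seed(0)
n, d = 4, 7
A = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]  # n x d
keep = [0, 2, 5]            # an arbitrary (non-DRLS) column subset Theta

def quad(cols, x):
    """x^T (M M^T) x, where M consists of the listed columns of A."""
    return sum(sum(A[r][j] * x[r] for r in range(n)) ** 2 for j in cols)

for _ in range(100):
    x = [random.gauss(0, 1) for _ in range(n)]
    assert quad(range(d), x) + 1e-12 >= quad(keep, x)
```

The nontrivial content of Theorem \ref{theorem-DRLS} is therefore the lower bound, which does depend on the DRLS selection rule.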
Furthermore, the projection-cost preservation bound has implications for $k$-means clustering, because the $k$-means objective function can be written in terms of the orthogonal rank-$k$ cluster indicator matrix \citep{boutsidis_unsupervised_2009}.\footnote{Thanks to Michael Mahoney for this point.} Note that our rank-$k$ projection-cost preservation bound holds for an un-weighted column subset of $\mathbf{A}$. A useful lemma on an approximate ridge leverage score kernel comes from combining Theorems \ref{theorem-DRLS} and \ref{theorem-pcp}. \begin{lemma}\label{lemma-kernel} \textit{Approximate Ridge Leverage Score Kernel:} Let $\mathbf{A} \in \mathbb{R}^{n \times d}$ be a matrix of at least rank $k$ and $\bar{\tau}_i(\mathbf{A})$ be defined as in Eqn. \ref{eqn-tau}. Construct $\mathbf{C}$ following the DRLS algorithm described in Sec. \ref{sec-DLS}. Let $\alpha$ be the coefficient in the lower bound of Theorem \ref{theorem-pcp} and assume $0 < \epsilon < \frac12$. Let $\mathbf{K}(\mathbf{M}) = \left( \mathbf{M}\mathbf{M}^T + \frac{ 1}{k} || \mathbf{M}_{\backslash k}||_F^2 \mathbf{I} \right)^+$ for matrix $\mathbf{M} \in \mathbb{R}^{n \times l}$. Then $\mathbf{K}(\mathbf{C})$ and $\mathbf{K}(\mathbf{A})$ satisfy, \begin{eqnarray} \mathbf{K}(\mathbf{A}) \preceq \mathbf{K}(\mathbf{C}) \preceq \frac{1}{1-(\alpha +1) \epsilon} \mathbf{K}(\mathbf{A}) . \label{bound-kernel} \end{eqnarray} \end{lemma} \begin{theorem}\label{theorem-regression} \textit{Approximate Ridge Regression with DRLS:} Let $\mathbf{A} \in \mathbb{R}^{n \times d}$ be a matrix of at least rank $k$ and $\bar{\tau}_i(\mathbf{A})$ be defined as in Eqn. \ref{eqn-tau}. Construct $\mathbf{C}$ following the DRLS algorithm described in Sec. \ref{sec-DLS}, let $\alpha$ be the coefficient in the lower bound of Theorem \ref{theorem-pcp}, and assume $0 < \epsilon < \frac{1}{2\alpha} < \frac12$.
Choose the regularization parameter $\lambda_2=\frac{|| \mathbf{M}_{\backslash k}||_F^2}{k}$ for ridge regression with a matrix $\mathbf{M}$ (Eqn. \ref{eqn-ridge-min}). Under these conditions, the statistical risk $\mathcal{R}(\hat{\mathbf{y}}_{\mathbf{C}})$ of the ridge regression estimator $\hat{\mathbf{y}}_{\mathbf{C}}$ is bounded by the statistical risk $\mathcal{R}(\hat{\mathbf{y}}_{\mathbf{A}})$ of the ridge regression estimator $\hat{\mathbf{y}}_{\mathbf{A}}$: \begin{equation} \mathcal{R}(\hat{\mathbf{y}}_{\mathbf{C}}) \le (1 + \beta \epsilon) \mathcal{R}(\hat{\mathbf{y}}_{\mathbf{A}}), \end{equation} where $\beta = \frac{2 \alpha(-1 + 2 \alpha + 3 \alpha^2)}{(1-\alpha)^2}$. \end{theorem} Theorem \ref{theorem-regression} means that there are bounds on the statistical risk for substituting the DRLS selected column subset matrix for the complete matrix when performing ridge regression with the appropriate regularization parameter. Performing ridge regression with the column subset $\mathbf{C}$ effectively forces coefficients to be zero and adds the benefits of automatic feature selection to the $L_2$ regularization problem. We also note that the proof of Theorem \ref{theorem-regression} relies only on Theorem \ref{theorem-DRLS} and Theorem \ref{theorem-pcp} and facts from linear algebra, so a randomized selection of weighted column subsets that obey similar bounds to Theorem \ref{theorem-DRLS} and Theorem \ref{theorem-pcp} (e.g. \citet{cohen_input_2017}) will also have bounded statistical risk, albeit with a different coefficient $\beta$. As a point of comparison, consider the elastic net minimization with our ridge regression regularization parameter: \begin{eqnarray} \mathbf{\hat{x}}^E=\underset{\mathbf{x}}{\text{argmin}} \left(|| \mathbf{y}- \mathbf{A} \mathbf{x}||_2^2 + \frac{1}{k} || \mathbf{A}_{\backslash k} ||_F^2 || \mathbf{x} ||_2^2 + \lambda_1 \sum_{j=1}^d |\mathbf{x}_j |\right). 
\end{eqnarray} The risk of the elastic net estimator $\mathbf{\hat{y}}^E = \mathbf{A} \mathbf{\hat{x}}^E$ has the following bound in terms of the risk of ridge regression $\mathcal{R}(\mathbf{\hat{y}}_{\mathbf{A}})$: \begin{eqnarray} \mathcal{R}(\mathbf{\hat{y}}^E )= \mathcal{R}(\mathbf{\hat{y}}_{\mathbf{A}}) +\lambda_1^2\frac{ 4 d ||\mathbf{A}||^2_2 }{\frac{1}{k^2} || \mathbf{A}_{\backslash k} ||_F^4} . \label{eqn-risk-net} \end{eqnarray} This comes from a slight re-working of Theorem 3.1 of \citet{zou_adaptive_2009}. The bounds for the elastic net risk and $\mathcal{R}(\hat{\mathbf{y}}_{\mathbf{C}}) $ are comparable when $ \lambda_1^2 \approx \frac{\beta \epsilon}{k^2} || \mathbf{A}_{\backslash k} ||_F^4 \frac{ \mathcal{R}(\mathbf{\hat{y}}_{\mathbf{A}})}{ 4 d ||\mathbf{A}||^2_2 }$. Ridge regression is a special case of kernel ridge regression with a linear kernel. While previous work in kernel ridge regression has considered the use of ridge leverage scores to approximate the symmetric kernel matrix by selecting a subset of $n$ informative samples \citep{alaoui_fast_2015, rudi_less_2015}, to our knowledge, no previous work has used ridge leverage scores to approximate the symmetric kernel matrix by selecting a subset of the $f$ informative features (after the feature mapping of the $d$-dimensional data points). The latter case would be the natural generalization of Theorem \ref{theorem-regression} to non-linear kernels, and it remains an interesting open question. Lastly, we note that placing statistical assumptions on $\mathbf{A}$ in the spirit of \citet{rudi_less_2015} may lead to an improved bound for random designs for $\mathbf{A}$. \begin{theorem}\label{theorem-power-decay} \textit{Ridge Leverage Power-law Decay:} Let $\mathbf{A} \in \mathbb{R}^{n \times d}$ be a matrix of at least rank $k$ and $\bar{\tau}_i(\mathbf{A})$ be defined as in Eqn. \ref{eqn-tau}.
Furthermore, let the ridge leverage scores exhibit power-law decay in the sorted column index $\pi_i$, \begin{equation} \bar{\tau}_{\pi_i} (\mathbf{A})= \pi_i^{-a} \bar{\tau}_{\pi_0} (\mathbf{A}) \quad \quad a> 1. \end{equation} Construct $\mathbf{C}$ following the DRLS algorithm described in Sec. \ref{sec-DLS}. The number of columns selected by DRLS is, \begin{equation} |\Theta| \le \max \left( \left(\tfrac{4 k}{\epsilon}\right)^{\frac{1}{a}} -1, \left(\tfrac{4 k }{(a-1)\epsilon}\right)^{\frac{1}{a-1}} -1 , k \right). \label{eqn-decay} \end{equation} \end{theorem} Theorem $3$ of \citet{papailiopoulos_provable_2014} introduces the concept of power-law decay for rank-$k$ subspace leverage scores. Our Theorem \ref{theorem-power-decay} is an adaptation of \citet{papailiopoulos_provable_2014}'s Theorem $3$ to ridge leverage scores. An obvious extension of Eqn. \ref{bound} is the following bound, \begin{eqnarray} (1-\epsilon ) \mathbf{A} \mathbf{A}^T -\frac{ \epsilon}{k} || \mathbf{A}_{\backslash k}||_F^2 \mathbf{I} \preceq \mathbf{CC}^T \preceq (1+\epsilon ) \mathbf{A} \mathbf{A}^T +\frac{ \epsilon}{k} || \mathbf{A}_{\backslash k}||_F^2 \mathbf{I} , \label{bound-proj} \end{eqnarray} which also holds for $\mathbf{C}$ selected by ridge leverage random sampling methods with $O(\tfrac{k}{\epsilon^2} \ln{ \left( \tfrac{k}{\delta} \right)})$ weighted columns and failure probability $\delta$ \cite{cohen_input_2017}. Thus, DRLS selects fewer columns with the same accuracy $\epsilon$ in Eqn. \ref{bound-proj} for power-law decay in the ridge leverage scores when, \begin{eqnarray} \max \left( \left(\tfrac{4 k }{\epsilon}\right)^{\frac{1}{a}} -1, \left(\tfrac{4 k}{(a-1)\epsilon}\right)^{\frac{1}{a-1}} -1, k \right) < C \tfrac{k}{\epsilon^2} \ln{ \left( \tfrac{k}{\delta} \right)}, \end{eqnarray} where $C$ is an absolute constant.
In particular, when $a\ge 2$, the number of columns deterministically sampled is $\mathcal{O}(k)$.\footnote{Thanks to Ahmed El Alaoui for this point.}
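Eqn. \ref{eqn-tau} is not reproduced in this excerpt; assuming the standard rank-$k$ ridge leverage score definition consistent with the kernel $\mathbf{K}(\cdot)$ of Lemma \ref{lemma-kernel}, namely $\bar{\tau}_i(\mathbf{A}) = \mathbf{a}_i^T \mathbf{K}(\mathbf{A}) \mathbf{a}_i$, the scores become explicit for a diagonal toy matrix: column $i$ receives $\sigma_i^2/(\sigma_i^2 + \lambda)$ with $\lambda = \frac{1}{k}\sum_{j>k} \sigma_j^2$, and the total mass is at most $2k$ (at most $k$ from the top-$k$ terms, at most $k$ from the tail since $\sum_{j>k}\sigma_j^2 = k\lambda$):

```python
# Ridge leverage scores for a diagonal matrix A = diag(sigma), under the
# assumed definition tau_i = a_i^T (A A^T + lam I)^+ a_i: each score
# reduces to sigma_i^2 / (sigma_i^2 + lam), where lam is the tail energy
# beyond the top k singular values divided by k.
sigma = [4.0, 2.0, 1.0, 0.5, 0.25]   # singular values, sorted descending
k = 2

lam = sum(s * s for s in sigma[k:]) / k          # (1/k) ||A_{\k}||_F^2
tau = [s * s / (s * s + lam) for s in sigma]     # ridge leverage scores

assert all(0.0 < t < 1.0 for t in tau)
assert sum(tau) <= 2 * k                         # total mass bound
print([round(t, 3) for t in tau])
```

The decay of `tau` down the sorted index is exactly the quantity governed by the power-law hypothesis of Theorem \ref{theorem-power-decay}.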