TITLE: How to intuitively recreate a geometric series when you know the sum it converges to? QUESTION [0 upvotes]: Given $x=ax+b$ with $0<a<1$, solving for $x$ yields the sum $$\frac{b}{1-a},$$ but how can one intuitively come up with the geometric series $$b+ba+ba^2+\cdots+ba^n+\cdots\,?$$ REPLY [3 votes]: Is this what you are looking for? Substitute the equation into itself repeatedly: $$\begin{align}x&=ax+b\\&=a(ax+b)+b=a^2x+ab+b\\&\;\;\vdots\\&=a(a(a(\ldots)+b)+b)+b=b+ba+ba^2+ba^3+\ldots\end{align}$$
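A quick numerical illustration of this answer (Python, with arbitrarily chosen values of $a$ and $b$; not part of the original answer): iterating $x\mapsto ax+b$ unrolls into exactly this geometric series, and both approach $b/(1-a)$.

```python
a, b = 0.6, 2.0  # arbitrary example values with 0 < a < 1
closed_form = b / (1 - a)

# Partial sum of the geometric series b + b*a + b*a^2 + ...
partial = sum(b * a**n for n in range(200))

# Repeated substitution x -> a*x + b, starting from an arbitrary point
x = 123.0
for _ in range(200):
    x = a * x + b

# Both agree with b / (1 - a) up to floating-point error
assert abs(partial - closed_form) < 1e-9
assert abs(x - closed_form) < 1e-9
```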
{"set_name": "stack_exchange", "score": 0, "question_id": 4463503}
TITLE: Probability household cars problem QUESTION [1 upvotes]: A survey consists of recording the number of cars presently owned by a household among six major manufacturers. Order does not matter. i. Suppose that no household has more than four cars. In how many different ways can the survey sheet be filled out? I was thinking maybe it would just be 4! + 6!, but I have a feeling that is wrong; I'm not sure if that would account for multiple cars being from the same brand. REPLY [2 votes]: Let $x_1$ be the number of GMs, let $x_2$ be the number of Fords, and so on up to $x_6$ being the number of Nissans. We want to find the number of solutions of $$x_1+x_2+\cdots+x_6\le 4$$ in non-negative integers. This is the same as the number of solutions of $$x_1+x_2+\cdots+x_6+x_7= 4$$ (the variable $x_7$ counts the number of empty slots in the four-car garage). Now we have a standard Stars and Bars problem (please see Wikipedia). The number of solutions is $\binom{4+7-1}{7-1}$, that is, $\binom{10}{6}$, or equivalently $\binom{10}{4}$.
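The stars-and-bars count can be confirmed by brute force (a Python sketch, not part of the original answer): enumerate all tuples $(x_1,\dots,x_6)$ with entries in $\{0,\dots,4\}$ and count those summing to at most $4$.

```python
from itertools import product
from math import comb

# Count solutions of x1 + ... + x6 <= 4 in non-negative integers
# by exhaustive enumeration (each xi can be at most 4).
count = sum(1 for xs in product(range(5), repeat=6) if sum(xs) <= 4)

print(count, comb(10, 4))  # both equal 210
```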
{"set_name": "stack_exchange", "score": 1, "question_id": 1189979}
\section{Introduction}\label{sec:Introduction} In \cite{math.QA/9909027}, Jones introduced the notion of a {\it planar algebra} as an axiomatization of the standard invariant of a finite index subfactor. A planar algebra (in vector spaces) is a sequence of vector spaces $\cP[0]$, $\cP[1]$, $\cP[2], \ldots$ called the {\it box spaces} of the planar algebra, along with an action of planar tangles\footnote{While Jones worked with shaded planar tangles in his study of subfactors, in this article we work with unshaded planar tangles.}, i.e., for every planar tangle $T$ we have a linear map $Z(T): \cP[k_1] \otimes \cdots \otimes \cP[k_r] \to \cP[k_0]$. For example: $$ Z\left( \begin{tikzpicture}[baseline =-.1cm] \coordinate (a) at (0,0); \coordinate (b) at ($ (a) + (1.4,1) $); \coordinate (c) at ($ (a) + (.6,-.6) $); \coordinate (d) at ($ (a) + (-.6,.6) $); \coordinate (e) at ($ (a) + (-.8,-.6) $); \ncircle{}{(a)}{1.6}{85}{} \draw (60:1.6cm) arc (150:300:.4cm); \draw ($ (c) + (0,.4) $) arc (0:90:.8cm); \draw ($ (c) + (-.4,0) $) circle (.25cm); \draw ($ (d) + (0,.88) $) -- (d) -- ($ (d) + (-.88,0) $); \draw ($ (c) + (0,-.88) $) -- (c) -- ($ (c) + (.88,0) $); \ncircle{unshaded}{(d)}{.4}{235}{} \ncircle{unshaded}{(c)}{.4}{235}{} \node[blue] at (c) {\small 2}; \node[blue] at (d) {\small 1}; \end{tikzpicture} \right)\,\,:\,\,\,\cP[3] \otimes \cP[5] \to \cP[6] $$ The operation of sticking a tangle inside another is required, by the axioms of a planar algebra, to correspond to the composition of multilinear maps. In this paper, we internalize the notion of planar algebra to a braided pivotal tensor category $\cC$. The resulting notion is called an \emph{anchored planar algebra}. The box spaces $\cP[k]$ are now objects of $\cC$, and the planar tangles are replaced by {\it anchored planar tangles} (see Figure~\pageref{That's where Figure 1 is} for an example). 
An anchored planar algebra associates to each anchored planar tangle $T$ a morphism in $\cC$: \begin{equation*} Z\left( \begin{tikzpicture}[baseline =-.1cm] \coordinate (a) at (0,0); \coordinate (b) at ($ (a) + (1.4,1) $); \coordinate (c) at ($ (a) + (.6,-.6) $); \coordinate (d) at ($ (a) + (-.6,.6) $); \coordinate (e) at ($ (a) + (-.8,-.6) $); \ncircle{}{(a)}{1.6}{85}{} \draw[thick, red] (c) arc (0:-180:.4cm) arc (180:0:.7cm) .. controls ++(270:.15cm) and ++(45:.15cm) .. ($ (a) + (-45:1.45cm) $) arc (315:90:1.45cm) arc (-90:0:.125cm); \draw[thick, red] (d) .. controls ++(225:1.2cm) and ++(270:2.6cm) .. ($ (a) + (85:1.6) $); \draw (60:1.6cm) arc (150:300:.4cm); \draw ($ (c) + (0,.4) $) arc (0:90:.8cm); \draw ($ (c) + (-.4,0) $) circle (.25cm); \draw ($ (d) + (0,.88) $) -- (d) -- ($ (d) + (-.88,0) $); \draw ($ (c) + (0,-.88) $) -- (c) -- ($ (c) + (.88,0) $); \ncircle{unshaded}{(d)}{.4}{235}{} \ncircle{unshaded}{(c)}{.4}{235}{} \end{tikzpicture} \right) \,\,:\,\,\,\mathcal{P}[3]\otimes \mathcal{P}[5] \,\to\, \mathcal{P}[6]. \end{equation*} In general, this is a morphism $Z(T) \in \Hom_\cC(\cP[k_1] \otimes \cdots \otimes \cP[k_r], \cP[k_0])$, where the order of the tensor factors $\cP[k_1], \ldots, \cP[k_r]$ is determined by the anchor lines (the red lines in the picture). We recall that a tensor category is called \emph{pivotal} if every object is equipped with an isomorphism to its double dual, satisfying certain axioms. There is a well-known algebraic classification of planar algebras \cite{MR2559686,1207.1923,MR3405915} which goes as follows: \begin{equation} \label{eq:PlanarAlgebraClassification} \left\{\,\text{\rm Planar algebras}\left\} \,\,\,\,\longleftrightarrow\,\, \left\{\,\parbox{8.3cm}{\rm Pairs $(\mathcal{D},X)$, where $\cD$ is a pivotal category and $X\in\cD$ is a symmetrically self-dual generator}\,\right\}\right.\right.\!\!. 
\end{equation} Here, an object $X$ is called symmetrically self-dual if it is equipped with an isomorphism to its dual, subject to a certain symmetry condition. For a pair $(\cD, X)$ as above, the $k$-th box space of the associated planar algebra $\cP$ is given by the invariants in $X^{\otimes k}$: \begin{equation*} \cP[k]:=\Hom_\cD(1_\cD, X^{\otimes k}). \end{equation*} The action of planar tangles is then given by the graphical calculus of $\cD$. The main goal of our paper is to generalise the correspondence \eqref{eq:PlanarAlgebraClassification} to the case of planar algebras internal to $\cC$, i.e., anchored planar algebras. Our classification result is formulated as an equivalence of categories. It establishes a correspondence between anchored planar algebras in $\cC$, and a certain type of {\it module tensor categories} for $\cC$ (Definition~\ref{def: def of ModTC}). Here, a module tensor category is a simultaneous generalisation of the notion of $\cC$-module category and of the notion of tensor category (a monoidal category which is linear over some field). First and foremost, a module tensor category $\cM$ is a tensor category. In addition to being a tensor category, it comes equipped with a tensor functor $\Phi:\cC\to \cM$ which endows it with the structure of a $\cC$-module category: $c \cdot m = \Phi(c) \otimes m$. Now, because $\cC$ is braided, there is another left action of $\cC$ on $\cM$, given by $c \cdot m = m \otimes \Phi(c)$. In a module tensor category, those two actions are isomorphic, i.e., we are provided with isomorphisms $e_{\Phi(c), m}:\Phi(c) \otimes m\to m \otimes \Phi(c)$.
The latter are subject to the following three axioms: \begin{align*} e_{\Phi(c), x \otimes y} &= (\id_x \otimes e_{\Phi(c),y}) \circ (e_{\Phi(c),x} \otimes \id_y) \\ e_{\Phi(c \otimes d),x} &= (e_{\Phi(c),x} \otimes \id_{\Phi(d)}) \circ (\id_{\Phi(c)} \otimes e_{\Phi(d),x}) \\ e_{\Phi(c),\Phi(d)} &= \Phi(\beta_{c,d}) \end{align*} where $\beta$ is the braiding in $\cC$. The above conditions can be repackaged into the single datum of a braided functor $\Phi^{\scriptscriptstyle \cZ}: \cC \to \cZ(\cM)$ from the category $\cC$ into the Drinfel'd center of $\cM$. We say that a module tensor category is \emph{pivotal} if both $\cM$ and $\Phi^{\scriptscriptstyle \cZ}$ are pivotal, and \emph{pointed} if it comes equipped with a symmetrically self-dual object $m \in \cM$ which generates it as a module tensor category. Our main theorem is: \begin{thmalpha}\label{thm:EquivalenceOfCategories} There is an equivalence of categories \[ \left\{\,\text{\rm Anchored planar algebras in $\cC$}\left\} \,\,\,\,\cong\,\, \left\{\,\parbox{4.7cm}{\rm \centerline{Pointed pivotal module} \centerline{tensor categories over $\cC$}}\,\right\}\right.\right.\!\! \] \end{thmalpha} We warn the reader that equipping the collection of all pointed pivotal module tensor categories with the structure of a category is not totally obvious (they are more naturally a $2$-category). The precise version of our theorem is stated as Theorem~\ref{thm:EquivalenceOfCategories2} in the body of this paper. Given a pointed pivotal module tensor category $(\cM,m)$, the $n$-th box object of the associated anchored planar algebra is given by the formula \[ \cP[n]=\Tr_\cC(m^{\otimes n}), \] where $\Tr_\cC: \cM \to \cC$ is the right adjoint of $\Phi$. If one removes the condition that the generator $m\in\cM$ is symmetrically self-dual, then one obtains a classification of {\it oriented} planar algebras (i.e., planar algebras where the strands are oriented); for simplicity we only consider the unoriented case.
We note that, even when $\cC=\Vec$, our theorem yields a version of the equivalence \eqref{eq:PlanarAlgebraClassification} which is more precise than has previously appeared in the literature. In our previous article \cite{1509.02937}, we showed that the functor $\Tr_\cC$ admits a `calculus of strings on tubes' (see Section~\ref{sec:TubeRelations} for an overview). As a corollary of our main theorem, we can now prove that this calculus of strings on tubes is invariant under all 3-dimensional isotopies (Appendix~\ref{sec:TubeCalculus}). Examples of anchored planar algebras have already appeared in the literature, in the work of Jaffe-Liu on parafermions and reflection positivity \cite{1602.02662,1602.02671}. In their work, a notion of `planar para algebra' is presented, which is equivalent to that of an anchored planar algebra in the braided tensor category $\Vec(\bbZ/N\bbZ)$ of $\bbZ/N\bbZ$-graded vector spaces (Example \ref{ex:Planar para algebras}). By our main theorem (Theorem~\ref{thm:EquivalenceOfCategories}), the notion of a planar para algebra is equivalent to that of a module tensor category over $\Vec(\bbZ/N\bbZ)$. The parafermion planar para algebras constructed in \cite{1602.02662} then correspond, under the equivalence, to Tambara-Yamagami categories associated to $\bbZ/N\bbZ$. They lie in the larger family of Tambara-Yamagami module tensor categories over $\Vec(A)$, where $A$ is an abelian group (Example \ref{ex:TambaraYamagami}). In the other direction, the algebraic classification given in Theorem \ref{thm:EquivalenceOfCategories} allows us to construct many examples of anchored planar algebras, including examples from near group categories (Section \ref{sec: Near group examples}), and from $ADE$ module tensor categories over Temperley-Lieb-Jones categories (Section \ref{sec: TLJ examples}). We explicitly compute the box objects $\cP[k]$ in all our examples. Our paper is structured as follows. 
In Section~\ref{sec: Anchored planar algebras}, we review material on planar algebras and introduce the notion of anchored planar algebra. In Section~\ref{sec: The main theorem}, we review the notion of module tensor category, and state our main theorem (Theorem~\ref{thm:EquivalenceOfCategories2}). Using our theorem, we then provide a number of examples of anchored planar algebras. In Section~\ref{sec:Constructing anchored planar algebras}, we explain how to construct anchored planar algebras via generators and relations. In Section~\ref{sec:APAfromMTC}, we use the categorified trace associated to a module tensor category \cite{1509.02937} to construct a functor $\Lambda:\Mod_* \to \APA$ from the category of pointed $\cC$-module tensor categories to the category of anchored planar algebras in $\cC$. In Section~\ref{sec:MTCfromAPA}, we construct a functor $\Delta: \APA \to \Mod_*$ going the other way. Finally, in Section~\ref{sec:Equivalence} we complete the proof of our main theorem and show that the two functors $\Lambda$ and $\Delta$ witness an equivalence of categories. \paragraph{Acknowledgements.} The authors would like to thank Bruce Bartlett, Vaughan Jones, Zhengwei Liu, Scott Morrison, Mathew Pugh, Noah Snyder, and Kevin Walker for helpful discussions. Andr\'e Henriques gratefully acknowledges the Leverhulme trust and the EPSRC grant ``Quantum Mathematics and Computation'' for financing his visiting position in Oxford. Andr\'e Henriques has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 674978). David Penneys was supported by the NSF DMS grant 1500387. James Tener would like to thank the Max Planck Institute for Mathematics for support during the course of this work. David Penneys and James Tener were partially supported by NSF DMS grant 0856316.
{"config": "arxiv", "file": "1607.06041/Chapters/Sec_1_IntroductionPAinBTC.tex"}
TITLE: On the equivalence of representations of Fourier series QUESTION [5 upvotes]: Let $f : \Bbb{R} \to \Bbb{C}$ be a $2\pi$-periodic function such that $$ \int_0^{2\pi} |f(t)| \,dt < \infty $$ Define $$ \hat{f}(k) := \frac{1}{2\pi} \int_0^{2\pi} f(t) e^{-i k t} \,dt $$ The Fourier series of $f$ is then $$ \sum_{k=-\infty}^{\infty}\hat{f}(k)e^{ikt} \tag{1} $$ If we define $$ a_k := \frac{1}{\pi} \int_0^{2\pi} f(t) \cos(kt) \,dt \quad (k \geq 1) \\ b_k := \frac{1}{\pi} \int_0^{2\pi} f(t) \sin(kt) \,dt \quad (k \geq 1) $$ then the Fourier series of $f$ takes the form $$ \hat{f}(0) + \sum_{k=1}^{\infty} a_k \cos(kt) + \sum_{k=1}^{\infty} b_k \sin(kt) \tag{2} $$ In passing from $(1)$ to $(2)$, I have the following question (I'm led to think the answer is yes but haven't succeeded in proving it): If $$ \sum_{n=1}^{\infty}\left\{\hat{f}(n)[\cos(nt)+i\sin(nt)] + \hat{f}(-n)[\cos(nt)-i\sin(nt)]\right\} = \sum_{k=1}^{\infty} [a_k \cos(kt) + b_k \sin(kt)] $$ converges, then do all four series $$ \sum_{n=1}^{\infty}\hat{f}(n)[\cos(nt)+i\sin(nt)] \\ \sum_{n=1}^{\infty}\hat{f}(-n)[\cos(nt)-i\sin(nt)] \\ \sum_{k=1}^{\infty} a_k \cos(kt) \\ \sum_{k=1}^{\infty} b_k \sin(kt) $$ converge? REPLY [3 votes]: Write the Fourier series as $\mathfrak{F}$; it splits as follows: $$ \begin{aligned} \mathfrak{F}(\hat{f}(k))&= \sum_{k=-\infty}^{\infty} \hat{f}(k)e^{ikt} \\&= \sum_{k=-\infty}^{-1} \hat{f}(k)e^{ikt} +\hat{f}(0)+ \sum_{k=1}^{\infty} \hat{f}(k)e^{ikt} \\&= \mathfrak{F}(\hat{f}(k))_{-} +\hat{f}(0)+ \mathfrak{F}(\hat{f}(k))_{+} \end{aligned} $$ where $\mathfrak{F}(\hat{f}(k))_{+}$ denotes the sum over positive indices and $\mathfrak{F}(\hat{f}(k))_{-}$ the sum over negative indices.
Define also the reflected series $\mathfrak{F}(\hat{f}(-k)):=\sum_{k=-\infty}^{\infty}\hat{f}(-k)e^{ikt}$, split into $\mathfrak{F}(\hat{f}(-k))_{\pm}$ and the constant term $\hat{f}(0)$ in the same way. Reindexing $k\mapsto -k$ gives the relations $$ \mathfrak{F}(\hat{f}(k))_{-}=\sum_{k=1}^{\infty}\hat{f}(-k)e^{-ikt}, \qquad \mathfrak{F}(\hat{f}(-k))_{-}=\sum_{k=1}^{\infty}\hat{f}(k)e^{-ikt}. $$ Hence the four series can be expressed as follows: $$ \begin{aligned} \sum_{k=1}^{\infty}\hat{f}(k)[\cos(kt)+i\sin(kt)] =& \mathfrak{F}(\hat{f}(k))_{+} \\ \sum_{k=1}^{\infty}\hat{f}(-k)[\cos(kt)-i\sin(kt)] =& \mathfrak{F}(\hat{f}(k))_{-} \\ \sum_{k=1}^{\infty} a_k \cos(kt) =& \cfrac{1}{2}\biggl[ \mathfrak{F}(\hat{f}(k)) + \mathfrak{F}(\hat{f}(-k)) \biggr]-\hat{f}(0)\\ \sum_{k=1}^{\infty} b_k \sin(kt) =& \cfrac{1}{2}\biggl[ \mathfrak{F}(\hat{f}(k)) - \mathfrak{F}(\hat{f}(-k)) \biggr] \end{aligned} $$ For example, the third identity is derived as follows: $$ \begin{aligned} \sum_{k=1}^{\infty} a_k \cos(kt) =& \sum_{k=1}^{\infty} \biggl[ \cfrac{1}{\pi}\int_{0}^{2\pi}f(t)\cos(kt)dt \biggr] \cos(kt) \\=& \sum_{k=1}^{\infty} \biggl[ \cfrac{1}{2\pi} \int_{0}^{2\pi}f(t)(e^{ikt}+e^{-ikt})dt \biggr] \frac{e^{ikt}+e^{-ikt}}{2}\\=& \cfrac{1}{2} \sum_{k=1}^{\infty} \bigl( \hat{f}(-k)+\hat{f}(k) \bigr) (e^{ikt}+e^{-ikt}) \\=& \cfrac{1}{2} \Biggl[ \sum_{k=1}^{\infty} \hat{f}(-k)e^{ikt} + \sum_{k=1}^{\infty} \hat{f}(-k)e^{-ikt} + \sum_{k=1}^{\infty} \hat{f}(k)e^{ikt} + \sum_{k=1}^{\infty} \hat{f}(k)e^{-ikt} \Biggr] \\=& \cfrac{1}{2}\biggl[ \mathfrak{F}(\hat{f}(-k))_{+} + \mathfrak{F}(\hat{f}(k))_{-} + \mathfrak{F}(\hat{f}(k))_{+} + \mathfrak{F}(\hat{f}(-k))_{-} \biggr] \\=& \cfrac{1}{2}\biggl[ \mathfrak{F}(\hat{f}(k)) + \mathfrak{F}(\hat{f}(-k)) \biggr]-\hat{f}(0). \end{aligned} $$ The problem only assumes $\int_0^{2\pi}|f(t)|\,dt<\infty$; if in addition $f$ is square-integrable, then Parseval's identity identifies the $L^2$ norm of the Fourier series with that of $f(t)$.
$$ \begin{aligned} \|\mathfrak{F}(\hat{f}(k))\|_{2}^{2}=& \|\mathfrak{F}(\hat{f}(-k))\|_{2}^{2}\\=& \frac{1}{2\pi}\int_{0}^{2\pi}|f(t)|^2dt < \infty \end{aligned} $$ In particular $\|\mathfrak{F}(\hat{f}(-k))_{+}\|_{2}^2$ and $\|\mathfrak{F}(\hat{f}(-k))_{-}\|_{2}^2$ (and likewise with $\hat{f}(k)$) are finite, and consequently all four series above converge in the $L^2$ sense.
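The coefficient identities underlying the regrouping above, namely $a_k=\hat f(k)+\hat f(-k)$ and $b_k=i\bigl(\hat f(k)-\hat f(-k)\bigr)$, can be checked numerically (a Python/NumPy sketch with an arbitrary smooth test function; not part of the original answer):

```python
import numpy as np

# Check a_k = fhat(k) + fhat(-k) and b_k = i*(fhat(k) - fhat(-k))
# for an arbitrary smooth 2π-periodic test function.
N = 4096
t = np.arange(N) * 2 * np.pi / N
f = np.exp(np.cos(t)) + 0.5 * np.sin(3 * t)

def fhat(k):
    # (1/2π) ∫₀^{2π} f(t) e^{-ikt} dt, approximated by the uniform-grid mean
    # (spectrally accurate for smooth periodic functions)
    return np.mean(f * np.exp(-1j * k * t))

err = 0.0
for k in range(1, 6):
    a_k = 2 * np.mean(f * np.cos(k * t))   # (1/π) ∫ f(t) cos(kt) dt
    b_k = 2 * np.mean(f * np.sin(k * t))   # (1/π) ∫ f(t) sin(kt) dt
    err = max(err,
              abs(a_k - (fhat(k) + fhat(-k))),
              abs(b_k - 1j * (fhat(k) - fhat(-k))))

assert err < 1e-10
```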
{"set_name": "stack_exchange", "score": 5, "question_id": 2182785}
\begin{document} \title{Quantum Markov Processes \\(Correspondences and Dilations)} \author{Paul S. Muhly\thanks{Supported by grants from the U.S. National Science Foundation and from the U.S.-Israel Binational Science Foundation.}\\Department of Mathematics\\University of Iowa\\Iowa City, IA 52242\\\texttt{muhly@math.uiowa.edu} \and Baruch Solel\thanks{Supported by the U.S.-Israel Binational Science Foundation and by the Fund for the Promotion of Research at the Technion}\\Department of Mathematics\\Technion\\32000 Haifa\\Israel\\\texttt{mabaruch@techunix.technion.ac.il}} \maketitle \begin{abstract} We study the structure of quantum Markov processes from the point of view of product systems and their representations. \textbf{2000 Subject Classification }Primary: 46L53, 46L55, 46L57, 46L60, 81S25. Secondary: 46L07, 46L08, 47L90. \end{abstract} \section{Introduction} A \emph{quantum Markov process} is a pair, $(\mathcal{M},\{P_{t}\}_{t\geq0})$, consisting of a von Neumann algebra $\mathcal{M}$ and a semigroup $\{P_{t}\}_{t\geq0}$ of unital, completely positive, normal linear maps on $\mathcal{M}$ such that $P_{0}$ is the identity mapping on $\mathcal{M}$ and such that the map $t\rightarrow P_{t}(a)$ from $[0,\infty)$ to $\mathcal{M}$ is continuous with respect to the $\sigma$-weak topology on $\mathcal{M}$ for each $a\in\mathcal{M}$. Over the years, there have been numerous studies wherein the authors ``dilate'' the Markov semigroup $\{P_{t}\}_{t\geq0}$ to an $E_{0}$-semigroup, in the sense of Arveson \cite{wA89} and Powers \cite{rP88}, of \emph{endomorphisms} $\{\alpha_{t}\}_{t\geq0}$ of a larger von Neumann algebra $\mathcal{R}$. Depending on context, the process of dilation has taken different meanings.
Here we mean the following: Suppose $\mathcal{M}$ acts on a Hilbert space $H$. Then a quadruple $(K,\mathcal{R},\{\alpha_{t}\}_{t\geq 0},u_{0})$, consisting of a Hilbert space $K$, a von Neumann algebra $\mathcal{R}$, an $E_{0}$-semigroup $\{\alpha_{t}\}_{t\geq0}$ of $\ast $-endomorphisms of $\mathcal{R}$ and an isometric embedding $u_{0} :H\rightarrow K$, will be called an \emph{E}$_{0}$-\emph{dilation }of the quantum Markov process $(\mathcal{M},\{P_{t}\}_{t\geq0})$ (or, simply, of $\{P_{t}\}_{t\geq0}$) in case for all $T\in\mathcal{M}$, all $S\in\mathcal{R}$ and all $t\geq0$ the following equations hold \[ P_{t}(T)=u_{0}^{\ast}\alpha_{t}(u_{0}Tu_{0}^{\ast})u_{0} \] and \[ P_{t}(u_{0}^{\ast}Su_{0})=u_{0}^{\ast}\alpha_{t}(S)u_{0}\text{.} \] Our objective in this paper is to prove that if the Hilbert space $H$ on which $\mathcal{M}$ acts is separable, then such a dilation always exists. What is novel in our approach is that we recognize the space of the Stinespring dilation of each $P_{t}$ as a correspondence $\mathcal{E}_{t}$ over the \emph{commutant} of $\mathcal{M}$, $\mathcal{M}^{\prime}$. (All relevant terms will be defined below.) These correspondences are then assembled and ``dilated'' to a product system $\{E(t)\}_{t\geq0}$ of correspondences over $\mathcal{M}^{\prime}$, very similar to the product systems that Arveson defined in \cite{wA89}. Then we describe $\{P_{t} \}_{t\geq0}$ explicitly in terms of what we call a ``fully coisometric, completely contractive covariant representation'' of $\{E(t)\}_{t\geq0}$, denoted $\{T_{t}\}_{t\geq0}$, in a fashion that derives immediately from our work in \cite{MS98}.
A bit more explicitly, but still incompletely, we find that $\{P_{t}\}_{t\geq0}$ may be expressed in terms of $\{T_{t}\}_{t\geq0}$ via the formula \[ P_{t}(a)=\widetilde{T}_{t}(I_{E(t)}\otimes a)\widetilde{T}_{t}^{\ast}\text{,} \] $a\in\mathcal{M}$ and $t\geq0$, where $\widetilde{T}_{t}$ is the operator from $E(t)\otimes H$ to $H$ defined by the equation $\widetilde{T}_{t}(\xi\otimes h)=T_{t}(\xi)h$. Then we dilate $\{T_{t}\}_{t\geq0}$ to what is called an isometric representation $\{V_{t}\}_{t\geq0}$ of $\{E(t)\}_{t\geq0}$ on a Hilbert space $K$. If $u_{0}:H\rightarrow K$ is the embedding that goes along with $\{V_{t}\}_{t\geq0}$, then we find that $T_{t}(\xi)=u_{0}^{\ast}V_{t} (\xi)u_{0}$ for all $\xi\in E(t)$ and that the $E_{0}$-semigroup of endomorphisms $\{\alpha_{t}\}_{t\geq0}$ that we want is given by the formula \[ \alpha_{t}(R)=\widetilde{V}_{t}(I_{E(t)}\otimes R)\widetilde{V}_{t}^{\ast }\text{,} \] where $R$ runs over the von Neumann algebra $\mathcal{R}$ generated by $\{\alpha_{t}(u_{0}\mathcal{M}u_{0}^{\ast})\}_{t\geq0}$. That is, $(K,\mathcal{R},\{\alpha_{t}\}_{t\geq0},u_{0})$ is the dilation of $(\mathcal{M},\{P_{t}\}_{t\geq0})$. An important part of our analysis was inspired by Bhat's paper \cite{bB96}. Recently, Bhat and Skeide \cite{BS00} have dilated a quantum Markov process $(\mathcal{M},\{P_{t}\}_{t\geq0})$ using a product system over the von Neumann algebra $\mathcal{M}$ (ours is over $\mathcal{M}^{\prime}$). The precise connection between their work and ours has still to be determined. However, what we find attractive about our approach is the close explicit connection between dilations of quantum Markov processes and the classical dilation theory of contraction operators on Hilbert space pioneered by B. Sz-Nagy (see \cite{szNF70}).
In the next section we develop the theory of correspondences over von Neumann algebras sufficiently so that we can link up with the theory developed in \cite{MS98} in which representations and dilations of $C^{\ast}$ -correspondences are considered. We also show how what we call the Arveson correspondence $\mathcal{E}_{P}$ associated with the Stinespring dilation of a completely positive map $P$ can be dilated to a bigger correspondence $E$ in such a way that a certain completely contractive covariant representation of $E$ that gives $P$ is dilated to a fully coisometric, isometric representation of $E$. This representation of $E$ gives a ``power'' dilation of $P$. Then, in Section 3, we construct a ``discrete'' dilation $(K,\mathcal{R} ,\{\alpha_{t}\}_{t\geq0},u_{0})$ of the quantum Markov process $(\mathcal{M} ,\{P_{t}\}_{t\geq0})$. It is here, following ideas developed in Section 2, that we dilate the family $\{\mathcal{E}_{P_{t}}\}_{t\geq0}$ to a product system of correspondences $\{E(t)\}_{t\geq0}$ over $\mathcal{M}^{\prime}$. In Section 4, we show that if the Hilbert space on which $\mathcal{M}$ acts is separable, then the dilation $(K,\mathcal{R},\{\alpha_{t}\}_{t\geq0},u_{0})$ we construct in Section 3 is, in fact, an $E_{0}$-dilation. We adopt the standard notation that if $A$ is a subset of a Hilbert space $H$, then $[A]$ will denote the closed linear span of $A$. \section{Dilations of Completely Positive Maps\label{section1}} Throughout, $\mathcal{M}$ will denote a von Neumann algebra. While much of what we will have to say about von Neumann algebras can be formulated in a space-free fashion, it will be convenient to view $\mathcal{M}$ as acting on a fixed Hilbert space $H$. Thus, we will work inside $\mathcal{B}(H)$, the bounded operators on $H$. Also, throughout, $P$ will denote a fixed completely positive, unital and normal map from $\mathcal{M}$ to $\mathcal{M}$.
We need to call attention to specific features of the minimal Stinespring dilation of $P$ \cite{fS55, wA69, wA97}. Form the algebraic tensor product, $\mathcal{M}\otimes H$ and define the sesquilinear form $\langle\cdot,\cdot\rangle$ on this space by the formula \[ \langle T_{1}\otimes h_{1},T_{2}\otimes h_{2}\rangle=\langle h_{1} ,P(T_{2}^{\ast}T_{1})h_{2}\rangle\text{,} \] $T_{i}\otimes h_{i}\in\mathcal{M}\otimes H$. The complete positivity of $P$ guarantees that this form is positive semidefinite. Therefore, the Hausdorff completion of $\mathcal{M}\otimes H$ is a Hilbert space, which we shall denote by $\mathcal{M}\otimes_{P}H$. We shall not distinguish between an element in $\mathcal{M}\otimes H$ and its image in $\mathcal{M}\otimes_{P}H$. The formula \[ \pi_{P}(S)(T\otimes h):=ST\otimes h\text{,} \] $S\in\mathcal{M}$, $T\otimes h\in\mathcal{M}\otimes_{P}H$ defines a representation of $\mathcal{M}$ on $\mathcal{M}\otimes_{P}H$ that is normal because $P$ is normal. Also, the formula \[ W_{P}(h):=I\otimes h, \] $h\in H$, defines an isometric imbedding of $H$ in $\mathcal{M}\otimes_{P}H$, and there results the fundamental equation \[ P(T)=W_{P}^{\ast}\pi_{P}(T)W_{P}\text{,} \] $T\in\mathcal{M}$. It is an easy matter to check that $\mathcal{M}\otimes _{P}H$ is minimal in the sense that the smallest subspace of $\mathcal{M} \otimes_{P}H$ containing $W_{P}H$ and reducing $\pi_{P}$ is all of $\mathcal{M}\otimes_{P}H$. Consequently, the triple $(\pi_{P},\mathcal{M} \otimes_{P}H,W_{P})$ is the unique minimal triple, $(\pi,K,W)$, up to unitary equivalence, such that \[ P(T)=W^{\ast}\pi(T)W\text{,} \] $T\in\mathcal{M}$. We therefore refer to $(\pi_{P},\mathcal{M}\otimes _{P}H,W_{P})$ as \emph{the} \emph{Stinespring dilation }of $P$. 
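The following short verification, not spelled out in the text above, records why $W_{P}$ is isometric; it uses only the defining sesquilinear form and the unitality of $P$: \[ \langle W_{P}h,W_{P}k\rangle=\langle I\otimes h,I\otimes k\rangle=\langle h,P(I^{\ast}I)k\rangle=\langle h,P(I)k\rangle=\langle h,k\rangle\text{,} \] $h,k\in H$.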
The adjoint $W_{P}^{\ast}$ of the isometric embedding $W_{P}$ of $H$ in $\mathcal{M}\otimes_{P}H$ has an explicit form that we will need throughout our analysis: \begin{equation} W_{P}^{\ast}(X\otimes h)=P(X)h\text{, }X\otimes h\in\mathcal{M}\otimes H\text{.} \label{Wpstar} \end{equation} This is easy to see because \[ \langle W_{P}^{\ast}(X\otimes h),k\rangle=\langle X\otimes h,W_{P} k\rangle=\langle X\otimes h,I\otimes k\rangle=\langle h,P(X^{\ast} )k\rangle=\langle P(X)h,k\rangle. \] A space of critical importance for us will be the intertwining space, \[ \mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{P}H):=\{X:H\rightarrow \mathcal{M}\otimes_{P}H\mid XT=\pi_{P}(T)X,T\in \mathcal{M}\}\text{.} \] That is, $\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{P}H)$ is the space of operators that intertwine the identity representation of $\mathcal{M}$ on $H$ and $\pi_{P}$. This space turns out to be a $W^{\ast}$-correspondence over the \emph{commutant} $\mathcal{M}^{\prime}$ of $\mathcal{M}$. The notion of a $W^{\ast}$-correspondence is fundamental in this study, and therefore we pause to develop the terminology and to cite some important facts. We follow Lance \cite{cL94} for the general theory of Hilbert $C^{\ast} $-modules that we shall use. In particular, unless indicated to the contrary, a Hilbert module $\mathcal{X}$ over a $C^{\ast}$-algebra $A$, will be a \emph{right }Hilbert $C^{\ast}$-module. We write $\mathcal{L}(\mathcal{X})$ for the space of continuous, adjointable $A$-module maps on $\mathcal{X}$ (which we shall write on the left of $\mathcal{X}$) and we shall write $\mathcal{K}(\mathcal{X})$ for the space of (generalized) compact operators on $\mathcal{X}$, i.e., $\mathcal{K}(\mathcal{X})$ is the span of the rank one operators $\xi\otimes\eta^{\ast}$, $\xi,\eta\in\mathcal{X}$, where $\xi\otimes\eta^{\ast}(\zeta)=\xi\langle\eta,\zeta\rangle$. \begin{definition} Let $A$ and $B$ be $C^{\ast}$-algebras.
A $C^{\ast}$-\emph{correspondence} \emph{from }$A$\emph{\ to }$B$ is a Hilbert $C^{\ast}$-module $\mathcal{X}$ \emph{over} $B$ endowed with the structure of a left module over $A$ via a $\ast$-homomorphism $\varphi:A\rightarrow\mathcal{L}(\mathcal{X})$. A $C^{\ast}$-\emph{correspondence over }$A$ is simply a $C^{\ast}$ -correspondence from $A$ to $A$. \end{definition} When dealing with specific $C^{\ast}$-correspondences, $\mathcal{X}$ from a $C^{\ast}$-algebra $A$ to a $C^{\ast}$-algebra $B$, it will be convenient to suppress the $\varphi$ in formulas involving the left action and simply write $a\xi$ or $a\cdot\xi$ for $\varphi(a)\xi$. \ This should cause no confusion in context. $C^{\ast}$-correspondences should be viewed as generalized $C^{\ast} $-homomorphisms. Indeed, the collection of $C^{\ast}$-algebras together with (isomorphism classes of) $C^{\ast}$-correspondences is a category that contains (contravariantly) the category of $C^{\ast}$-algebras and (conjugacy classes of) $C^{\ast}$-homomorphisms. Of course, for this to make sense, one has to have a notion of composition of correspondences and a precise notion of isomorphism. The notion of isomorphism is the obvious one: a bijective, bimodule map that preserves inner products. Composition is ``tensoring'': If $\mathcal{X}$ is a $C^{\ast}$-correspondence from $A$ to $B$ and if $\mathcal{Y}$ is a correspondence from $B$ to $C$, then the balanced tensor product, $\mathcal{X}\otimes_{B}\mathcal{Y}$ is an $A,C$-bimodule that carries the inner product defined by the formula \[ \langle\xi_{1}\otimes\eta_{1},\xi_{2}\otimes\eta_{2}\rangle_{\mathcal{X} \otimes_{B}\mathcal{Y}}:=\langle\eta_{1},\varphi(\langle\xi_{1},\xi_{2} \rangle_{\mathcal{X}})\eta_{2}\rangle_{\mathcal{Y}}\text{.} \] The Hausdorff completion of this bimodule is again denoted by $\mathcal{X} \otimes_{B}\mathcal{Y}$ and is called the \emph{composition} of $\mathcal{X}$ and $\mathcal{Y}$. 
At the level of correspondences, composition is not associative. However, if we pass to isomorphism classes, it is. That is, we only have an isomorphism $(\mathcal{X}\otimes\mathcal{Y})\otimes \mathcal{Z}\simeq\mathcal{X}\otimes(\mathcal{Y}\otimes\mathcal{Z})$. It is worthwhile to emphasize here that while it often is safe to ignore the distinction between correspondences and their isomorphism classes, at times, as we shall see, the distinction is of critical importance. If $\mathcal{N}$ is a von Neumann algebra and if $\mathcal{X}$ is a Hilbert $C^{\ast}$-module over $\mathcal{N}$, then $\mathcal{X}$ is called \emph{self-dual} in case every continuous $\mathcal{N}$-module map $\Phi$ from $\mathcal{X}$ to $\mathcal{N}$ is implemented by an element of $\mathcal{X}$, i.e., in case there is a $\xi_{\Phi}\in\mathcal{X}$ so that $\Phi (\xi)=\langle\xi_{\Phi},\xi\rangle$, $\xi\in\mathcal{X}$. There is a topological characterization of self-dual Hilbert $C^{\ast}$-modules over von Neumann algebras given in \cite{BDH88} that will be useful for us. To state it, recall that the $\sigma$-topology on a Hilbert $C^{\ast}$-module $\mathcal{X}$ over a von Neumann algebra $\mathcal{N}$ is the topology defined by the functionals \[ f(\cdot):=\sum_{n=1}^{\infty}w_{n}(\langle\eta_{n},\cdot\rangle) \] where the $\eta_{n}$ lie in $\mathcal{X}$, the $w_{n}$ lie in $\mathcal{N} _{\ast}$, and $\sum\left\| w_{n}\right\| \left\| \eta_{n}\right\| <\infty $. Baillet, Denizeau, and Havet proved that a Hilbert $C^{\ast}$-module $\mathcal{X}$ over a von Neumann algebra $\mathcal{N}$ is self-dual if and only if the unit ball in $\mathcal{X}$ is compact in the $\sigma$-topology \cite[Proposition 1.7]{BDH88}.
In \cite{wP73}, Paschke proved that if $\mathcal{X}$ is a self-dual Hilbert $C^{\ast}$-module over a von Neumann algebra $\mathcal{N}$, then $\mathcal{L}(\mathcal{X})$ is a von Neumann algebra, i.e., $\mathcal{L}(\mathcal{X})$ is a $C^{\ast}$-algebra which is also a dual space and which, therefore, may be represented faithfully on Hilbert space in such a way that the weak-$\ast$ topology on $\mathcal{L} (\mathcal{X})$ coincides with the $\sigma$-weak topology on the image. \begin{definition} Let $\mathcal{M}$ and $\mathcal{N}$ be von Neumann algebras and let $\mathcal{X}$ be a Hilbert $C^{\ast}$-module over $\mathcal{N}$. Then $\mathcal{X}$ is called a \emph{Hilbert }$W^{\ast}$\emph{-module} over $\mathcal{N}$ in case $\mathcal{X}$ is self-dual. The module $\mathcal{X}$ is called a $W^{\ast}$\emph{-correspondence from }$\mathcal{M}$\emph{\ to }$\mathcal{N}$\emph{\ }in case $\mathcal{X}$ is a self-dual $C^{\ast} $-correspondence from $\mathcal{M}$ to $\mathcal{N}$ such that the $\ast $-homomorphism $\varphi:\mathcal{M}\rightarrow\mathcal{L}(\mathcal{X})$ giving the left module structure on $\mathcal{X}$ is normal. \end{definition} It is evident that the composition of $W^{\ast}$-correspondences is again a $W^{\ast}$-correspondence. The following proposition shows one way to construct $W^{\ast}$-correspondences. \begin{proposition} \label{Lemma 1.2}Let $H$ and $K$ be Hilbert spaces. Let $\mathcal{M}$ be a von Neumann algebra on $K$, let $\mathcal{N}$ be a von Neumann algebra on $H$, and let $\mathcal{Y}\subseteq B(H,K)$ be a $\sigma$-weakly closed linear space of operators. Suppose that $\mathcal{MYN}\subseteq\mathcal{Y}$ and that $\mathcal{Y}^{\ast}\mathcal{Y}:=\{Y^{\ast}Y\mid Y\in\mathcal{Y}\}$ is contained in $\mathcal{N}$. Then $\mathcal{Y}$ is a self-dual Hilbert $W^{\ast}$-module over $\mathcal{N}$ that has the structure of a $W^{\ast} $-correspondence from $\mathcal{M}$ to $\mathcal{N}$. 
Further, the $\sigma $-topology on $\mathcal{Y}$ coincides with the $\sigma$-weak topology on $\mathcal{Y}$ as a subspace of $B(H,K)$. \end{proposition} \begin{proof} It is evident that $\mathcal{Y}$ has the structure of a $C^{\ast} $-correspondence from $\mathcal{M}$ to $\mathcal{N}$. The main point of the proposition is the assertion about self-duality and the topologies. The functionals $f$ defining the $\sigma$-topology are of the form $f(\cdot ):=\sum_{n=1}^{\infty}w_{n}(\langle\eta_{n},\cdot\rangle)$ where the $\eta _{n}$ lie in $\mathcal{Y}$, the $w_{n}$ lie in $\mathcal{N}_{\ast}$, and $\sum\left\| w_{n}\right\| \left\| \eta_{n}\right\| <\infty$. Evidently, each of these is $\sigma$-weakly continuous. Conversely, given a functional on $\mathcal{Y}$ of the form $g(Y)=\langle Yh,k\rangle$, we may assume that $k$ is in the closed span of $\{Yh\mid Y\in\mathcal{Y}$, $h\in H\}$ and approximate $g$ in norm by functionals of the form \[ \tilde{g}(Y)=\sum_{m=1}^{r}\langle Yh,Y_{m}h_{m}^{\prime}\rangle=\sum _{m=1}^{r}\langle Y_{m}^{\ast}Yh,h_{m}^{\prime}\rangle. \] Each of these functionals is continuous in the $\sigma$-topology, since $Y_{m}^{\ast}Y=\langle Y_{m},Y\rangle$. Since the space of $\sigma$-continuous functionals is a Banach space \cite[1.2]{BDH88}, the functional $Y\rightarrow\langle Yh,k\rangle$ is in this space, and so is every $\sigma$-weakly continuous functional on $\mathcal{Y}$. It follows that the two topologies coincide on $\mathcal{Y}$. Since the closed unit ball in $\mathcal{Y}$ is $\sigma$-weakly compact, it must be compact in the $\sigma $-topology. By \cite[Proposition 1.7]{BDH88}, $\mathcal{Y}$ is self-dual. 
\end{proof} \begin{remark} (i) The theory developed in \cite{dB97} can be used to prove a converse to this result: given a $W^{\ast}$-correspondence $\mathcal{Y}$ from $\mathcal{M}$ to $\mathcal{N}$, there are faithful normal representations $\pi:\mathcal{M}\rightarrow B(K)$ and $\rho:\mathcal{N}\rightarrow B(H)$ and there is a linear map $\Phi:\mathcal{Y}\rightarrow B(H,K)$ such that $\Phi(\varphi(T)YS)=\pi(T)\Phi(Y)\rho(S)$ and $\rho(\langle X,Y\rangle _{\mathcal{Y}})=\Phi(X)^{\ast}\Phi(Y)$ for all $X,Y\in\mathcal{Y}$, $T\in\mathcal{M}$, and $S\in\mathcal{N}$, and such that $\Phi$ is a homeomorphism with respect to the $\sigma$-topology on $\mathcal{Y}$ and the $\sigma$-weak topology on $\Phi(\mathcal{Y})$. Thus, in a sense, the construction in Proposition \ref{Lemma 1.2} is universal. (ii) Suppose $\mathcal{X}$ is a self-dual Hilbert $W^{\ast}$-module over a von Neumann algebra $\mathcal{N}$ and that $\pi:\mathcal{M}\rightarrow \mathcal{L}(\mathcal{X})$ is a $C^{\ast}$-homomorphism on the von Neumann algebra $\mathcal{M}$. Then $\pi$ is normal if for every bounded net $\{A_{\alpha}\}\subseteq\mathcal{M}$, with $A_{\alpha}\rightarrow A$ weakly, every $g\in\mathcal{N}_{\ast}$, and every $X,Y\in\mathcal{X}$, we have $g(\langle\pi(A_{\alpha})X,Y\rangle)\rightarrow g(\langle\pi(A)X,Y\rangle)$. This follows from the fact that $\mathcal{L}(\mathcal{X})$ is the dual space of the tensor product $\mathcal{X}\otimes\mathcal{X}^{\ast}\otimes \mathcal{N}_{\ast}$ equipped with the greatest cross norm \cite[Proposition 3.10]{wP73}. \end{remark} \begin{proposition} \label{Lemma1.3}Let $(\pi_{P},\mathcal{M}\otimes_{P}H,W_{P})$ be the Stinespring dilation of a completely positive map $P$ on the von Neumann algebra $\mathcal{M}$. 
Then $\mathcal{L}_{\mathcal{M}}(H,\mathcal{M} \otimes_{P}H)$ is a $\sigma$-weakly closed subspace of $B(H,\mathcal{M} \otimes_{P}H)$ that is closed under left multiplication by $I\otimes \mathcal{M}^{\prime}$ and right multiplication by $\mathcal{M}^{\prime}$ and has the property that $\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes _{P}H)^{\ast}\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{P}H)\subseteq \mathcal{M}^{\prime}$. Thus $\mathcal{L}_{\mathcal{M}}(H,\mathcal{M} \otimes_{P}H)$ has the structure of a $W^{\ast}$-correspondence over $\mathcal{M}^{\prime}$. \end{proposition} \begin{proof} Evidently, if $X\in\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{P}H)$ and $T\in\mathcal{M}^{\prime}$, then $XT\in\mathcal{L}_{\mathcal{M}} (H,\mathcal{M}\otimes_{P}H)$. Indeed, if $S\in\mathcal{M}$, then $XTS=XST=\pi_{P}(S)XT$, showing that $XT\in\mathcal{L}_{\mathcal{M} }(H,\mathcal{M}\otimes_{P}H)$. Also, if $X,Y\in\mathcal{L}_{\mathcal{M} }(H,\mathcal{M}\otimes_{P}H)$, then $X^{\ast}Y\in\mathcal{M}^{\prime}$ because for all $T\in\mathcal{M}$, $X^{\ast}YT=X^{\ast}\pi_{P}(T)Y=TX^{\ast}Y$. Thus, by Proposition \ref{Lemma 1.2}, it remains to give the left action of $\mathcal{M}^{\prime}$. On the face of it, this is given by the formula $\varphi(T)=I\otimes T$, $T\in\mathcal{M}^{\prime}$. However, the meaning of $I\otimes T$, $T\in\mathcal{M}^{\prime}$, and the expression $I\otimes\mathcal{M}^{\prime}$, need a little development. 
For $T\in \mathcal{M}^{\prime}$, we prove that the algebraic tensor product $I\otimes T$ is bounded as follows: Observe that for $\sum_{i=1}^{n}S_{i}\otimes h_{i}\in \mathcal{M}\otimes H$, we have \begin{align*} \left\| (I\otimes T)(\sum_{i=1}^{n}S_{i}\otimes h_{i})\right\| ^{2} & =\langle(I\otimes T)(\sum_{i=1}^{n}S_{i}\otimes h_{i}),(I\otimes T)(\sum _{i=1}^{n}S_{i}\otimes h_{i})\rangle\\ & =\langle(\sum_{i=1}^{n}S_{i}\otimes Th_{i}),(\sum_{i=1}^{n}S_{i}\otimes Th_{i})\rangle\\ & =\sum_{i,j=1}^{n}\langle Th_{i},P(S_{i}^{\ast}S_{j})Th_{j}\rangle\\ & =\sum_{i,j=1}^{n}\langle h_{i},T^{\ast}P(S_{i}^{\ast}S_{j})Th_{j} \rangle\text{.} \end{align*} However, since $P$ is completely positive the operator matrix $\left( P(S_{i}^{\ast}S_{j})\right) $ is a positive element in $M_{n}(\mathcal{M})$ and so can be written as $C^{\ast}C$, for an element $C\in M_{n}(\mathcal{M})$. Therefore, $\left( T^{\ast }P(S_{i}^{\ast}S_{j})T\right) =\hat{T}^{\ast}C^{\ast}C\hat{T}=C^{\ast}\hat {T}^{\ast}\hat{T}C\leq\left\| T\right\| ^{2}C^{\ast}C$, where $\hat{T}$ is the $n$-fold inflation of $T$ (which commutes with $C$, since $T\in\mathcal{M}^{\prime}$ and the entries of $C$ lie in $\mathcal{M}$). Consequently, the last term in the displayed equation is dominated by \[ \left\| T\right\| ^{2}\sum_{i,j=1}^{n}\langle h_{i},P(S_{i}^{\ast }S_{j})h_{j}\rangle=\left\| T\right\| ^{2}\left\| (\sum_{i=1}^{n} S_{i}\otimes h_{i})\right\| ^{2}\text{.} \] Thus $I\otimes T$ extends to an element in $\pi_{P}(\mathcal{M})^{\prime}$, which we continue to denote by $I\otimes T$. Evidently, the map $T\rightarrow I\otimes T$ is a (not-necessarily-injective) normal $\ast$-homomorphism of $\mathcal{M}^{\prime}$ onto its range. We denote the range, the collection of all these operators on $\mathcal{M}\otimes_{P}H$, by $I\otimes\mathcal{M}^{\prime}$ and note that $\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{P}H)$ is a left $\mathcal{M}^{\prime}$ module through this homomorphism. 
Since $\mathcal{L} _{\mathcal{M}}(H,\mathcal{M}\otimes_{P}H)$ is manifestly $\sigma$-weakly closed, the proof is completed by appeal to Proposition \ref{Lemma 1.2}. \end{proof} For our purposes, a drawback of $\mathcal{L}_{\mathcal{M}}(H,\mathcal{M} \otimes_{P}H)$ is that it is a space of operators acting between two \emph{different} Hilbert spaces. We want to ``pull $\mathcal{L}_{\mathcal{M} }(H,\mathcal{M}\otimes_{P}H)$ back'' to $H$ using $W_{P}$ and the following device that is due to Arveson \cite{wA97}. Given $Y\in B(H)$, and $S\otimes h$ in the algebraic tensor product $\mathcal{M}\otimes H$, we set \[ \Phi_{Y}(S\otimes h):=SY^{\ast}h\text{,} \] and extend $\Phi_{Y}$ by linearity to a densely defined linear map from $\mathcal{M}\otimes_{P}H$ to $H$ with domain $\mathcal{M}\otimes H$. We write $\mathcal{E}_{P}$ for the space of all operators $Y\in B(H)$ such that $\Phi_{Y}$ is continuous. (In this case, of course, we continue to write $\Phi_{Y}$ for the unique continuous extension to all of $\mathcal{M} \otimes_{P}H$.) \begin{proposition} \label{Lemma1.4}The space $\mathcal{E}_{P}$ is a linear space that is stable under left and right multiplication by elements from $\mathcal{M}^{\prime}$, and the pairing $\langle Y,Z\rangle:=\Phi_{Y}\Phi_{Z}^{\ast}$ converts $\mathcal{E}_{P}$ into a $W^{\ast}$-correspondence over $\mathcal{M}^{\prime}$ that is isomorphic to $\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{P}H)$ under the map $Y\mapsto\Phi_{Y}^{\ast}$. 
\end{proposition} The advantage of $\mathcal{E}_{P}$ over $\mathcal{L}_{\mathcal{M} }(H,\mathcal{M}\otimes_{P}H)$ is not only that $\mathcal{E}_{P}\subseteq B(H)$, but also, as we shall see shortly, given two completely positive maps $P$ and $Q$ on $\mathcal{M}$, the relations among $\mathcal{E}_{P}$, $\mathcal{E}_{Q}$, and $\mathcal{E}_{PQ}$ are easier to work with than those among $\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{P}H)$, $\mathcal{L} _{\mathcal{M}}(H,\mathcal{M}\otimes_{Q}H)$, and $\mathcal{L}_{\mathcal{M} }(H,\mathcal{M}\otimes_{PQ}H)$.\medskip \begin{proof} Evidently, $\mathcal{E}_{P}$ is a linear space. For $R\in\mathcal{M}^{\prime}$ and $Y\in\mathcal{E}_{P}$, $\Phi_{YR}=R^{\ast}\Phi_{Y}$, and so $\mathcal{E} _{P}\mathcal{M}^{\prime}\subseteq\mathcal{E}_{P}$. For the other side, fix $Y\in\mathcal{E}_{P}$ and $T\in\mathcal{M}^{\prime}$. Then for $(\sum _{i=1}^{n}S_{i}\otimes h_{i})\in\mathcal{M}\otimes H$, $\Phi_{TY}(\sum _{i=1}^{n}S_{i}\otimes h_{i})=\sum_{i=1}^{n}S_{i}Y^{\ast}T^{\ast}h_{i} =\Phi_{Y}(\sum_{i=1}^{n}S_{i}\otimes T^{\ast}h_{i})$. Therefore $\Phi _{TY}=\Phi_{Y}(I\otimes T^{\ast})$ on $\mathcal{M}\otimes H$, showing that $\Phi_{TY}$ is bounded; i.e., $\mathcal{M}^{\prime}\mathcal{E}_{P} \subseteq\mathcal{E}_{P}$. Next note that for $Y\in\mathcal{E}_{P}$, $S,T\in\mathcal{M}$, and $h\in H$, \[ \Phi_{Y}(\pi_{P}(T)(S\otimes h))=\Phi_{Y}(TS\otimes h)=TSY^{\ast}h=T\Phi _{Y}(S\otimes h)\text{; } \] i.e., $\Phi_{Y}\pi_{P}(T)=T\Phi_{Y}$. Taking adjoints, we conclude that $\Phi_{Y}^{\ast}\in\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{P}H)$. Thus, from the properties of $\mathcal{L}_{\mathcal{M}}(H,\mathcal{M} \otimes_{P}H)$, we know that the formula $\langle Z,Y\rangle:=\Phi_{Z}\Phi _{Y}^{\ast}$, defines an $\mathcal{M}^{\prime}$-valued function on $\mathcal{E}_{P}$. 
In fact, it is clearly an $\mathcal{M}^{\prime}$-valued sesquilinear form, since $\Phi_{YR}=R^{\ast}\Phi_{Y}$ and $\Phi_{TY}=\Phi _{Y}(I\otimes T^{\ast})$, $R,T\in\mathcal{M}^{\prime}$, and it is clearly positive semidefinite. It is definite because if $\langle Y,Y\rangle=\Phi _{Y}\Phi_{Y}^{\ast}=0$, then $\Phi_{Y}=0$, and so $TY^{\ast}h=0$ for all $T\in\mathcal{M}$ and $h\in H$. Taking $T=I$ we conclude that $Y=0$. The map $Y\rightarrow\Phi_{Y}^{\ast}$ preserves inner products by definition. Further, it is a bimodule map since $\left( \Phi_{YR}\right) ^{\ast }=(R^{\ast}\Phi_{Y})^{\ast}=\Phi_{Y}^{\ast}R$ and $\left( \Phi_{RY}\right) ^{\ast}=\left( \Phi_{Y}(I\otimes R^{\ast})\right) ^{\ast}=(I\otimes R)\Phi_{Y}^{\ast}$, for all $R\in\mathcal{M}^{\prime}$. Thus, to show that $\mathcal{E}_{P}$ is isomorphic to $\mathcal{L}_{\mathcal{M}}(H,\mathcal{M} \otimes_{P}H)$ under this map, we need only show that it is onto. However, we assert that for all $X\in\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{P} H)$, $X=\Phi_{(W_{P}^{\ast}X)}^{\ast}$. Indeed, for $h,h^{\prime}\in H$, and $S\in\mathcal{M}$, the fact that $(S\otimes I)X=XS$ implies that \begin{align*} \Phi_{(W_{P}^{\ast}X)}(S\otimes h) & =SX^{\ast}W_{P}h=SX^{\ast}(I\otimes h)\\ & =X^{\ast}(S\otimes I)(I\otimes h)=X^{\ast}(S\otimes h)\text{.} \end{align*} This shows that $(W_{P}^{\ast}X)$ is in $\mathcal{E}_{P}$ and that $X=\Phi_{(W_{P}^{\ast}X)}^{\ast}$. The facts that $\mathcal{E}_{P}$ and $\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{P}H)$ are isomorphic and that $\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{P}H)$ is self-dual imply that $\mathcal{E}_{P}$ is self-dual. \end{proof} \begin{definition} \label{arvesoncorresp}The $W^{\ast}$-correspondence $\mathcal{E}_{P}$ over $\mathcal{M}^{\prime}$ associated with a normal, unital completely positive map $P$ on a von Neumann algebra $\mathcal{M}$ will be called the \emph{Arveson correspondence} associated with $P$. 
\end{definition} The following corollary is immediate from the self-duality of the spaces involved. We call attention to it because it will be used several times in the sequel. \begin{corollary} \label{corollary1.5}If a subspace $\mathcal{Y}$ of $\mathcal{E}_{P}$ or of $\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{P}H)$ has zero annihilator, i.e., if $\mathcal{Y}^{\perp}=0$, then $\mathcal{Y}$ is dense. \end{corollary} A special case of our analysis so far needs to be singled out. Suppose that $P=\alpha$ is a unital, normal, $\ast$-\emph{endomorphism} of $\mathcal{M}$. (All endomorphisms will be unital, normal, and preserve adjoints.) Then $\mathcal{M}\otimes_{\alpha}H$ is isomorphic to $H$ under the map $T\otimes h\rightarrow\alpha(T)h$, which is $W_{\alpha}^{-1}=W_{\alpha}^{\ast}$. Further, $\pi_{\alpha}$ is unitarily equivalent to $\alpha$. Thus, we may identify $\mathcal{E}_{\alpha}$ directly with $\mathcal{L} _{\mathcal{M}}(H,\mathcal{M}\otimes_{\alpha}H)$ as in the following corollary of Proposition \ref{Lemma1.4}. \begin{corollary} \label{corollary1.6}If $\alpha$ is a unital, normal, $\ast$-endomorphism of $\mathcal{M}$, then $\mathcal{E}_{\alpha}=\{X\in B(H)\mid XT=\alpha (T)X,\;T\in\mathcal{M}\}$ with the inner product $\langle X_{1},X_{2} \rangle=X_{1}^{\ast}X_{2}$. \end{corollary} The next lemma may seem like a technicality, but among other things, it establishes that the modules $\mathcal{L}_{\mathcal{M}}(H,\mathcal{M} \otimes_{P}H)$ and $\mathcal{E}_{P}$ are nonzero. It plays other useful roles in the sequel. 
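Before turning to it, we record an elementary illustration of Corollary \ref{corollary1.6}, which will not be needed in the sequel: if $\alpha=\operatorname{Ad}u$ is the inner automorphism implemented by a unitary $u\in\mathcal{M}$, i.e., $\alpha(T)=uTu^{\ast}$, then the intertwining relation $XT=\alpha(T)X$ holds precisely when $u^{\ast}X\in\mathcal{M}^{\prime}$, so that
\[
\mathcal{E}_{\alpha}=u\mathcal{M}^{\prime}\text{,}\qquad\langle X_{1},X_{2}\rangle=X_{1}^{\ast}X_{2}=(u^{\ast}X_{1})^{\ast}(u^{\ast}X_{2})\text{,}
\]
and the map $X\rightarrow u^{\ast}X$ exhibits $\mathcal{E}_{\alpha}$ as a copy of the identity correspondence over $\mathcal{M}^{\prime}$.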
\begin{lemma} \label{lemma1.7}In the setting of a normal, unital completely positive map $P$ on $\mathcal{M}$ that we have been studying, \[ \mathcal{M}\otimes_{P}H=\bigvee\{\Phi_{Y}^{\ast}(H)\mid Y\in\mathcal{E} _{P}\}=\bigvee\{X(H)\mid X\in\mathcal{L}_{\mathcal{M}}(H,\mathcal{M} \otimes_{P}H)\}\text{.} \] \end{lemma} \begin{proof} Since $\pi_{P}$ is a normal $\ast$-representation, its kernel is of the form $\mathcal{M}q$ for a central projection $q$. Write $\pi^{\prime}$ for the representation of $\mathcal{M}$ that is reduction by $I-q$, i.e., $\pi ^{\prime}(S)=S(I-q)$. Then for $S\in\mathcal{M}$, $\left\| \pi^{\prime }(S)\right\| =\left\| \pi_{P}(S)\right\| $, so that $\pi^{\prime}$ and $\pi_{P}$ are quasiequivalent. If $Q$ is the projection of $\mathcal{M} \otimes_{P}H$ onto $\bigvee\{X(H)\mid X\in\mathcal{L}_{\mathcal{M} }(H,\mathcal{M}\otimes_{P}H)\}$, then for every $L\in\mathcal{L}_{\mathcal{M} }(H,\mathcal{M}\otimes_{P}H)$, $L(H)$ is $\pi_{P}(\mathcal{M})$-invariant. Hence $Q\in\pi_{P}(\mathcal{M})^{\prime}$. If $\pi_{0}$ is the reduction of $\pi_{P}$ to the range of $I-Q$, then, on the one hand, $\pi_{0}\leq\pi_{P}$ and on the other, $\pi_{0}$ is disjoint from $\pi^{\prime}$. Since $\pi^{\prime}$ is quasiequivalent to $\pi_{P}$ we conclude that $\pi_{0}=0$, i.e., that $Q=I$. \end{proof} We next want to illuminate the relation between the composition of two completely positive maps on $\mathcal{M}$ and the composition of their Arveson correspondences. This was worked out in the case when $\mathcal{M}=B(H)$ by Arveson in \cite[Theorem 1.12]{wA97}. Given two normal, unital completely positive maps $P,Q:\mathcal{M}\rightarrow\mathcal{M}$, we shall write $m$ for the multiplication map from $\mathcal{E}_{P}\otimes_{\mathcal{M}^{\prime} }\mathcal{E}_{Q}$ to $B(H)$. That is, $m(Y\otimes Z)=YZ$. \begin{lemma} \label{lemma1.8}The range of $m$ is contained in $\mathcal{E}_{PQ}$. 
\end{lemma} \begin{proof} First observe that if $Y\in\mathcal{E}_{P}$, if $A=(a_{ij})$ is a positive semidefinite element in $M_{n}(\mathcal{M})$, and if $\mathbf{h} =(h_{1},h_{2},\ldots,h_{n})$ is an $n$-tuple of elements from $H$, then \begin{equation} \left\| A^{1/2}(Y^{\ast}\otimes I)\mathbf{h}\right\| ^{2}\leq\left\| \Phi_{Y}\right\| ^{2}\sum_{i,j=1}^{n}\langle h_{i},P(a_{ij})h_{j}\rangle\text{.} \label{inflation} \end{equation} To see this, note that if $A$ is a diad, i.e., if $A$ has the form \[ A=(S_{1},S_{2},\ldots,S_{n})^{\ast}(S_{1},S_{2},\ldots,S_{n}),\;\;S_{i} \in\mathcal{M}\text{,} \] then the left hand side of the inequality is simply $\left\| \sum S_{i}Y^{\ast}h_{i}\right\| ^{2}$, while the right hand side is $\left\| \Phi_{Y}\right\| ^{2}\left\| \sum S_{i}\otimes h_{i}\right\| ^{2}$. So, the inequality is valid by definition. However, every non-negative $A\in M_{n}(\mathcal{M})$ is a sum of diads. (See \cite[Lemma 3.11]{vP86}.) So the inequality is valid as claimed. Now fix $Y\in\mathcal{E}_{P}$, $Z\in\mathcal{E}_{Q}$, and $\sum S_{i}\otimes h_{i}\in\mathcal{M}\otimes H$. Then \begin{align*} \left\| \sum S_{i}(YZ)^{\ast}h_{i}\right\| ^{2} & =\left\| \sum S_{i}Z^{\ast}(Y^{\ast}h_{i})\right\| ^{2}\leq\left\| \Phi_{Z}\right\| ^{2}\left\| \sum S_{i}\otimes_{Q}Y^{\ast}h_{i}\right\| ^{2}\\ & =\left\| \Phi_{Z}\right\| ^{2}\sum_{i,j}\langle Y^{\ast}h_{i},Q(S_{i}^{\ast }S_{j})Y^{\ast}h_{j}\rangle\\ & =\left\| \Phi_{Z}\right\| ^{2}\left\| (Q(S_{i}^{\ast}S_{j} ))^{1/2}(Y^{\ast}\otimes I)\mathbf{h}\right\| ^{2}\\ & \leq\left\| \Phi_{Z}\right\| ^{2}\left\| \Phi_{Y}\right\| ^{2} \sum_{i,j}\langle h_{i},P(Q(S_{i}^{\ast}S_{j}))h_{j}\rangle\\ & =\left\| \Phi_{Z}\right\| ^{2}\left\| \Phi_{Y}\right\| ^{2}\left\| \sum S_{i}\otimes_{PQ}h_{i}\right\| ^{2}\text{.} \end{align*} This shows that $\Phi_{YZ}$ is bounded, and that $\left\| \Phi_{YZ}\right\| \leq\left\| \Phi_{Z}\right\| \left\| \Phi_{Y}\right\| $. Thus $YZ\in\mathcal{E}_{PQ}$. 
\end{proof} We also want to express $m$ in terms of the space $\mathcal{L}_{\mathcal{M} }(H,\mathcal{M}\otimes_{P}H)$. For this purpose, fix two normal, unital, completely positive maps $P,Q:\mathcal{M}\rightarrow\mathcal{M}$. We define a map $\Psi:\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{P}H)\otimes _{\mathcal{M}^{\prime}}\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes _{Q}H)\rightarrow\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{Q} \mathcal{M}\otimes_{P}H)$ by the formula $\Psi(X\otimes Y)=(I\otimes X)Y$, where $I\otimes X$ is the map from $\mathcal{M}\otimes_{Q}H$ to $\mathcal{M} \otimes_{Q}\mathcal{M}\otimes_{P}H$ given by the equation $I\otimes X(S\otimes h)=S\otimes Xh$. We also define a map $V_{0}:\mathcal{M}\otimes_{PQ} H\rightarrow\mathcal{M}\otimes_{Q}\mathcal{M}\otimes_{P}H$ via the equation $V_{0}(S\otimes h)=S\otimes I\otimes h$, and we define $V:\mathcal{L} _{\mathcal{M}}(H,\mathcal{M}\otimes_{P}H)\rightarrow\mathcal{L}_{\mathcal{M} }(H,\mathcal{M}\otimes_{Q}\mathcal{M}\otimes_{P}H)$ by the formula $V(X)=V_{0}X$. \begin{proposition} \label{Lemma1.9}In the notation just established, $\Psi$ is an isomorphism of correspondences and $V$ is an isometry whose range is $\{X\in\mathcal{L} _{\mathcal{M}}(H,\mathcal{M}\otimes_{Q}\mathcal{M}\otimes_{P}H)\mid X(H)\subseteq\mathcal{M}\otimes_{Q}I\otimes_{P}H\}$. Further, if we write $U_{P}:\mathcal{E}_{P}\rightarrow\mathcal{L}_{\mathcal{M}}(H,\mathcal{M} \otimes_{P}H)$ for the isomorphism defined above, and similarly write $U_{Q}$ and $U_{PQ}$, then \begin{equation} U_{PQ}m(U_{P}^{-1}\otimes U_{Q}^{-1})=V^{\ast}\Psi\text{,} \label{comult} \end{equation} showing that $m$ is coisometric and $m^{\ast}$ is isometric. 
\end{proposition} \begin{proof} On the one hand, $\Psi(X\otimes Y)^{\ast}\Psi(X^{\prime}\otimes Y^{\prime })=Y^{\ast}(I\otimes X^{\ast})(I\otimes X^{\prime})Y^{\prime}=Y^{\ast }(I\otimes X^{\ast}X^{\prime})Y^{\prime}$, which is the inner product in $\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{Q}\mathcal{M}\otimes_{P}H)$. On the other hand, recall that the left action of $Z\in\mathcal{M}^{\prime}$ on $\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{Q}H)$ is given by $Y\mapsto(I\otimes Z)Y$, $Y\in\mathcal{L}_{\mathcal{M}}(H,\mathcal{M} \otimes_{Q}H)$. Consequently, in $\mathcal{L}_{\mathcal{M}}(H,\mathcal{M} \otimes_{P}H)\otimes_{\mathcal{M}^{\prime}}\mathcal{L}_{\mathcal{M} }(H,\mathcal{M}\otimes_{Q}H)$, \begin{align*} \langle X\otimes Y,X^{\prime}\otimes Y^{\prime}\rangle & =\langle Y,\langle X,X^{\prime}\rangle_{\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{P} H)}Y^{\prime}\rangle_{\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{Q}H)}\\ & =\langle Y,(I\otimes X^{\ast}X^{\prime})Y^{\prime}\rangle_{\mathcal{L} _{\mathcal{M}}(H,\mathcal{M}\otimes_{Q}H)}=Y^{\ast}(I\otimes X^{\ast} X^{\prime})Y^{\prime}\text{.} \end{align*} Thus $\Psi$ preserves the inner products. To see that $\Psi$ is a bimodule map, let $S\in\mathcal{M}^{\prime}$, $X\in\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{P}H)$, and $Y\in \mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{Q}H)$. Then \begin{align*} \Psi(S(X\otimes Y)) & =\Psi((I\otimes S)X\otimes Y)\\ & =(I\otimes(I\otimes S)X)Y\\ & =(I\otimes I\otimes S)(I\otimes X)Y\\ & =S\Psi(X\otimes Y)\text{,} \end{align*} while \[ \Psi(X\otimes YS)=(I\otimes X)(YS)=((I\otimes X)Y)S=\Psi(X\otimes Y)S\text{.} \] To see that $\Psi$ is surjective, we use the fact that $\mathcal{L} _{\mathcal{M}}(H,\mathcal{M}\otimes_{Q}\mathcal{M}\otimes_{P}H)$ is self-dual (see Proposition \ref{Lemma 1.2}) and show that $(\operatorname{Im} \Psi)^{\perp}=\{0\}$. Corollary \ref{corollary1.5}, then, will yield the result. 
If $Z$ annihilates $\operatorname{Im}\Psi$, then for every $X\in\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{P}H)$ and for every $Y\in\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{Q}H)$, $Y^{\ast}(I\otimes X^{\ast})Z=0$. Observe that $(I\otimes X^{\ast})Z$ is a map from $H$ to $\mathcal{M}\otimes_{Q}H$. By Lemma \ref{lemma1.7}, $\mathcal{M}\otimes_{Q}H$ is the span of $Y(H)$, $Y\in\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes _{Q}H)$. Consequently, $\cap\{\ker Y^{\ast}\mid Y\in\mathcal{L}_{\mathcal{M} }(H,\mathcal{M}\otimes_{Q}H)\}=\{0\}$. Since $Y^{\ast}(I\otimes X^{\ast})Z=0$ for all such $Y$, we conclude that $(I\otimes X^{\ast})Z=0$ for all $X\in\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{P}H)$. By Lemma \ref{lemma1.7} again, $\cap\{\ker X^{\ast}\mid X\in\mathcal{L}_{\mathcal{M} }(H,\mathcal{M}\otimes_{P}H)\}=\{0\}$, and so $Z=0$. Turning now to $V$, note that it is an easy matter to check that $V_{0}$ is an isometry. Consequently, for $X\in\mathcal{L}_{\mathcal{M}}(H,\mathcal{M} \otimes_{PQ}H)$, $V(X)^{\ast}V(X)=X^{\ast}V_{0}^{\ast}V_{0}X=X^{\ast}X$. So $V$ is isometric. Also, it is evident from the definitions that $V$ is a bimodule map and that its image is $\{X\in\mathcal{L}_{\mathcal{M} }(H,\mathcal{M}\otimes_{Q}\mathcal{M}\otimes_{P}H)\mid X(H)\subseteq \mathcal{M}\otimes_{Q}I\otimes_{P}H\}$. Thus, we are left to prove equation (\ref{comult}). To this end, let $Y\in\mathcal{E}_{P}$ and let $Z\in \mathcal{E}_{Q}$. As an equation between maps from $\mathcal{M}\otimes_{PQ}H$ to $H$, the relation $\Phi_{Z}(I\otimes\Phi_{Y})V_{0}=\Phi_{YZ}$ is immediate. Therefore, $\Phi_{YZ}^{\ast}=V_{0}^{\ast}(I\otimes\Phi_{Y}^{\ast})\Phi _{Z}^{\ast}$. 
Since $U_{P}(Y)=\Phi_{Y}^{\ast},\;U_{Q}(Z)=\Phi_{Z}^{\ast}\;$and $m(Y\otimes Z)=YZ$, we find that \begin{align*} U_{PQ}m(U_{P}^{-1}\otimes U_{Q}^{-1})(\Phi_{Y}^{\ast}\otimes\Phi_{Z}^{\ast}) & =U_{PQ}(YZ)=\Phi_{YZ}^{\ast}\\ & =V_{0}^{\ast}\Psi(\Phi_{Y}^{\ast}\otimes\Phi_{Z}^{\ast})=V^{\ast}(\Psi (\Phi_{Y}^{\ast}\otimes\Phi_{Z}^{\ast}))\text{.} \end{align*} \end{proof} \begin{corollary} \label{Cor 1.10}Under the hypotheses of Proposition \ref{Lemma1.9}, \[ \bigvee\{T(H)\mid T\in\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes _{Q}\mathcal{M}\otimes_{P}H)\}=\mathcal{M}\otimes_{Q}\mathcal{M}\otimes_{P}H. \] \end{corollary} \begin{proof} The span $\bigvee\{T(H)\mid T\in\mathcal{L}_{\mathcal{M}}(H,\mathcal{M} \otimes_{Q}\mathcal{M}\otimes_{P}H)\}$ contains the span $\bigvee\{(I\otimes X)Y(H)\mid X\in\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{P}H)$, $Y\in\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{Q}H)\}$. But using Lemma \ref{lemma1.7} twice, we see that this space is $\bigvee\{(I\otimes X)(\mathcal{M}\otimes_{Q}H)\mid X\in\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes _{P}H)\}=\mathcal{M}\otimes_{Q}\mathcal{M}\otimes_{P}H$. \end{proof} Although the product of two correspondences associated with completely positive maps does not coincide with the correspondence of the product of the maps, there are important special situations in which it does. This, and more, is spelled out in the following proposition. (For a related result, see \cite[Theorem 2.12]{BDH88}.) \begin{proposition} \label{Proposition1.11}(1) If $\alpha\in Aut(\mathcal{M})$, then $\mathcal{E}_{\alpha^{-1}}=\mathcal{E}_{\alpha}^{\ast}$, and $\mathcal{E} _{\alpha}\mathcal{E}_{\alpha}^{\ast}$ and $\mathcal{E}_{\alpha}^{\ast }\mathcal{E}_{\alpha}$ are $\sigma$-weakly dense in $\mathcal{M}^{\prime}$. In particular, $\mathcal{E}_{\alpha}$ is an $\mathcal{M}^{\prime}-\mathcal{M} ^{\prime}$ equivalence bimodule. 
(2) If $\alpha$ is an \emph{endomorphism} of $\mathcal{M}$ and if $Q$ is a normal, unital, completely positive map of $\mathcal{M}$, then the multiplication map $m:\mathcal{E}_{\alpha}\otimes_{\mathcal{M}^{\prime} }\mathcal{E}_{Q}\rightarrow\mathcal{E}_{\alpha\circ Q}$, defined above, is an isomorphism. (3) If $\alpha\in Aut(\mathcal{M})$ and if $P$ is a normal, unital completely positive map, then the map $m:\mathcal{E}_{P}\otimes_{\mathcal{M}^{\prime} }\mathcal{E}_{\alpha}\rightarrow\mathcal{E}_{P\circ\alpha}$ is an isomorphism. (4) If $P$ and $Q$ are conjugate normal, unital completely positive maps (i.e., if there is an automorphism $\alpha$ of $\mathcal{M}$ such that $P=\alpha\circ Q\circ\alpha^{-1}$), then \begin{equation} \mathcal{E}_{P}\simeq\mathcal{E}_{\alpha}\otimes_{\mathcal{M}^{\prime} }\mathcal{E}_{Q}\otimes_{\mathcal{M}^{\prime}}\mathcal{E}_{\alpha}^{\ast }\text{.}\label{abstracttensor} \end{equation} In particular, $(\mathcal{M}^{\prime},\mathcal{E}_{P})$ and $(\mathcal{M} ^{\prime},\mathcal{E}_{Q})$ are strongly Morita equivalent in the sense of \cite{MS00}. \end{proposition} We note, for the sake of emphasis, that the tensor products described in equation (\ref{abstracttensor}) are realized through operator multiplication and adjunction. That is, if $R$ and $T\in\mathcal{E}_{\alpha}$ and $S\in\mathcal{E}_{Q}$, then $T^{\ast}\in\mathcal{E}_{\alpha}^{\ast}$ and $R\otimes S\otimes T^{\ast}=RST^{\ast}$.\medskip \begin{proof} (1) By Corollary \ref{corollary1.6}, $\mathcal{E}_{\alpha}=\{X\in B(H)\mid XT=\alpha(T)X,\;T\in\mathcal{M}\}$. The inner product is given by the formula $\langle X_{1},X_{2}\rangle=X_{1}^{\ast}X_{2}$. It follows easily that $\mathcal{E}_{\alpha^{-1}}=\mathcal{E}_{\alpha}^{\ast}$ and that $\mathcal{E}_{\alpha}^{\ast}\mathcal{E}_{\alpha}\subseteq\mathcal{M}^{\prime} $. 
In fact, the $\sigma$-weak closure of $\mathcal{E}_{\alpha}^{\ast }\mathcal{E}_{\alpha}$ is a $2$-sided ideal in $\mathcal{M}^{\prime}$, and therefore is of the form $q\mathcal{M}^{\prime}$, for some central projection $q$ in $\mathcal{M}^{\prime}$. However, given $X\in\mathcal{E}_{\alpha^{-1}}$, with polar decomposition $X=V|X|$, we see that $V\in\mathcal{E}_{\alpha^{-1}}$ and $VV^{\ast}$ is the projection onto the closure of the range of $X$. Since $VV^{\ast}\in\mathcal{E}_{\alpha^{-1}}\mathcal{E}_{\alpha^{-1}}^{\ast }=\mathcal{E}_{\alpha}^{\ast}\mathcal{E}_{\alpha}\subseteq q\mathcal{M} ^{\prime}$, the range of $X$ is contained in the range of $q$. However, Lemma \ref{lemma1.7} implies, now, that $q=I$; i.e., that $\mathcal{E} _{\alpha}^{\ast}\mathcal{E}_{\alpha}$ is $\sigma$-weakly dense in $\mathcal{M}^{\prime}$. Hence, $\mathcal{E}_{\alpha}$ is a normal equivalence bimodule. (2) From Proposition \ref{Lemma1.9}, we know in general that $m$ is an isomorphism if $\mathcal{M}\otimes_{Q}I\otimes_{P}H=\mathcal{M}\otimes _{Q}\mathcal{M}\otimes_{P}H$. If $P=\alpha$ is an endomorphism of $\mathcal{M}$, then for every $T,S\in\mathcal{M}$ and $h,k\in H$, we have \begin{align*} \langle T\otimes h-I\otimes\alpha(T)h,S\otimes k\rangle_{\mathcal{M} \otimes_{\alpha}H} & =\langle T\otimes h,S\otimes k\rangle-\langle I\otimes\alpha(T)h,S\otimes k\rangle\\ & =\langle h,\alpha(T^{\ast}S)k\rangle-\langle\alpha(T)h,\alpha(S)k\rangle=0. \end{align*} Hence $T\otimes h=I\otimes\alpha(T)h$ and so $\mathcal{M}\otimes_{\alpha }H=I\otimes H=H$. Thus, $\mathcal{M}\otimes_{Q}\mathcal{M}\otimes_{\alpha }H=\mathcal{M}\otimes_{Q}I\otimes_{\alpha}H$, as required. (3) As in (2), we need to show that $\mathcal{M}\otimes_{\alpha}I\otimes_{P}H$ coincides with $\mathcal{M}\otimes_{\alpha}\mathcal{M}\otimes_{P}H$. This is obvious, since for all $S,T\in\mathcal{M}$, and $h\in H$, $S\otimes T\otimes h=S\alpha^{-1}(T)\otimes I\otimes h$. 
(4) From (2) and (3), we know that $\mathcal{E}_{P}\simeq\mathcal{E}_{\alpha }\otimes\mathcal{E}_{Q}\otimes\mathcal{E}_{\alpha}^{\ast}$, and (1) implies then that $\mathcal{E}_{P}\otimes\mathcal{E}_{\alpha}\simeq\mathcal{E} _{\alpha}\otimes\mathcal{E}_{Q}$. Part (1) also asserts that $\mathcal{E}_{\alpha}$ is an $\mathcal{M}^{\prime}$-$\mathcal{M}^{\prime}$-equivalence bimodule, and so $(\mathcal{M}^{\prime},\mathcal{E}_{P})$ and $(\mathcal{M}^{\prime},\mathcal{E}_{Q})$ are strongly Morita equivalent in the sense of \cite{MS00}. \end{proof} Although $\mathcal{E}_{P}\otimes\mathcal{E}_{Q}\ncong\mathcal{E}_{PQ}$, and especially, $\mathcal{E}_{P}^{\otimes n}\ncong\mathcal{E}_{P^{n}}$ in general, we shall soon see that it is possible to ``dilate'' $\mathcal{E}_{P}$ to a correspondence $\mathcal{F}_{\alpha}$ where $\alpha$ is an endomorphism of the commutant of an isomorphic copy of $\mathcal{M}^{\prime}$. We then have $\mathcal{F}_{\alpha}^{\otimes n}\simeq\mathcal{F}_{\alpha^{n}}$, by part (2) of Proposition \ref{Proposition1.11}. This $\alpha$, then, will turn out to be a ``dilation'' of $P$. To effect this program, we require some of the technology from \cite{MS98}. We generally adopt the terminology and notation of \cite{MS98}, but with some minor modifications because we are working in the category of von Neumann algebras and \emph{normal} maps (representations and completely positive maps). \begin{definition} \label{Definition1.12}Let $\mathcal{E}$ be a $W^{\ast}$-correspondence over a von Neumann algebra $\mathcal{N}$ and let $H_{0}$ be a Hilbert space. \begin{enumerate} \item A \emph{completely contractive covariant representation }of $\mathcal{E}$ in $B(H_{0})$ is a pair $(T,\sigma)$, where \begin{enumerate} \item $\sigma$ is a normal $\ast$-representation of $\mathcal{N}$ in $B(H_{0})$. 
\item $T$ is a linear, completely contractive map from $\mathcal{E}$ to $B(H_{0})$ that is continuous in the $\sigma$-topology of \cite{BDH88} on $\mathcal{E}$ and the $\sigma$-weak topology on $B(H_{0}).$ \item $T$ is a bimodule map in the sense that $T(S\xi R)=\sigma(S)T(\xi )\sigma(R)$, $\xi\in\mathcal{E}$, and $S,R\in\mathcal{N}$. \end{enumerate} \item A completely contractive covariant representation $(T,\sigma)$ of $\mathcal{E}$ in $B(H_{0})$ is called \emph{isometric }in case \begin{equation} T(\xi)^{\ast}T(\eta)=\sigma(\langle\xi,\eta\rangle)\text{,} \label{isometric} \end{equation} for all $\xi,\eta\in\mathcal{E}$. \end{enumerate} \end{definition} To lighten the terminology, we shall refer to an isometric, completely contractive, covariant representation simply as an isometric covariant representation. There is no problem doing this because it is easy to see that if one has a pair $(T,\sigma)$ satisfying all the conditions of part 1 of Definition \ref{Definition1.12}, except possibly the complete contractivity assumption, but which is isometric in the sense of equation (\ref{isometric}), then necessarily $T$ is completely contractive. The theory developed in \cite{MS98} applies here to prove that if a completely contractive covariant representation, $(T,\sigma)$, of $\mathcal{E}$ in $B(H)$ is given, then it determines a contraction $\tilde{T}:\mathcal{E} \otimes_{\sigma}H\rightarrow H$ defined by the formula $\tilde{T}(\eta\otimes h):=T(\eta)h$, $\eta\otimes h\in\mathcal{E}\otimes_{\sigma}H$. Here, $\mathcal{E}\otimes_{\sigma}H$ denotes the Hausdorff completion of the algebraic tensor product $\mathcal{E}\otimes H$ in the pre-inner product given by the formula $\langle\xi\otimes h,\eta\otimes k\rangle:=\langle h,\sigma(\langle\xi,\eta\rangle)k\rangle$. (See \cite[Lemma 3.5]{MS98}.) 
Also, there is an \emph{induced} representation $\sigma^{\mathcal{E}}:\mathcal{L} (\mathcal{E})\rightarrow B(\mathcal{E}\otimes_{\sigma}H)$ defined by the formula $\sigma^{\mathcal{E}}(S):=S\otimes I$ \cite[Lemma 3.4]{MS98}. Recalling that $\mathcal{L}(\mathcal{E})$ is a von Neumann algebra, it is not hard to see that $\sigma^{\mathcal{E}}$ is a normal representation. The operator $\tilde{T}$ and $\sigma^{\mathcal{E}}$ are related by the equation \begin{equation} \tilde{T}(\sigma^{\mathcal{E}}\circ\varphi)(a)=\sigma(a)\tilde{T}\text{,}\qquad a\in\mathcal{N}\text{.} \label{covariance} \end{equation} In fact we have the following lemma that is immediate from \cite{MS98} and \cite{MS99}. See, in particular, \cite[Lemmas 3.4-3.6]{MS98} and \cite[Lemma 2.1]{MS99}. \begin{lemma} \label{CovRep}The map $(T,\sigma)\rightarrow\tilde{T}$ is a bijection between all completely contractive covariant representations $(T,\sigma)$ of $\mathcal{E}$ on the Hilbert space $H$ and contractive operators $\tilde {T}:\mathcal{E}\otimes_{\sigma}H\rightarrow H$ that satisfy equation (\ref{covariance}). Given such a $\tilde{T}$ satisfying this equation, $T$, defined by the formula $T(\xi)h:=\tilde{T}(\xi\otimes h)$, together with $\sigma$ is a completely contractive covariant representation of $\mathcal{E}$ on $H$. Further, $(T,\sigma)$ is isometric if and only if $\tilde{T}$ is an isometry. \end{lemma} We note in passing that this lemma shows that the $\sigma$-weak continuity of $T$ really depends only on the fact that $\sigma$ is normal. The map $\Psi:\mathcal{L}(\mathcal{E})\rightarrow B(H)$ defined, then, by the formula \[ \Psi(S):=\tilde{T}\sigma^{\mathcal{E}}(S)\tilde{T}^{\ast}\text{,} \] $S\in\mathcal{L}(\mathcal{E})$, evidently is completely positive, normal, and contractive. 
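For orientation, it may be useful to record the scalar case, which is a routine specialization: take $\mathcal{N}=\mathbb{C}$ and $\mathcal{E}=\mathbb{C}$, the identity correspondence over $\mathbb{C}$. Then $\sigma$ is the representation $\lambda\rightarrow\lambda I_{H}$, a completely contractive covariant representation $(T,\sigma)$ is determined by the single contraction $T_{0}:=T(1)\in B(H)$, and the canonical identification $\mathcal{E}\otimes_{\sigma}H\cong H$ carries $\tilde{T}$ to $T_{0}$. In this case
\[
\Psi(\lambda)=\tilde{T}\sigma^{\mathcal{E}}(\lambda)\tilde{T}^{\ast}=\lambda T_{0}T_{0}^{\ast}\text{,}
\]
$(T,\sigma)$ is isometric precisely when $T_{0}$ is an isometry, and the dilation theory developed below reduces to the classical theory of a single contraction.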
\begin{definition}
\label{Definition1.12bis}Given a completely contractive covariant
representation $(T,\sigma)$ of $\mathcal{E}$ in $B(H),$ the map $\Psi$ is
called the \emph{completely positive extension} of $(T,\sigma)$, and the
representation $(T,\sigma)$ is called \emph{fully coisometric} in case
$\Psi(I_{\mathcal{E}})=I_{H}$.
\end{definition}

The terminology is reminiscent of the theory of a single contraction. A
completely contractive covariant representation $(T,\sigma)$ is isometric
precisely when $\tilde{T}$ is an isometry. Likewise, it is fully coisometric
precisely when $\tilde{T}$ is a coisometry. The map $\Psi$ is a normal $\ast
$-representation precisely when $(T,\sigma)$ is isometric, and it is a unital
$\ast$-representation precisely when $(T,\sigma)$ is both isometric and fully
coisometric. (We have, however, resisted the temptation to call $(T,\sigma)$
unitary in this case.)

Our next result, which is a variant of \cite[Corollary 5.21]{MS98}, shows
that a completely contractive covariant representation $(T,\sigma)$ can be
dilated to an isometric covariant representation in the following sense.

\begin{theorem and definition}
\label{Theorem 1.13}Let $\mathcal{E}$ be a $W^{\ast}$-correspondence over a
von Neumann algebra $\mathcal{N}$ and let $(T,\sigma)$ be a completely
contractive covariant representation of $\mathcal{E}$ on the Hilbert space
$H$. Then there is a Hilbert space $K$ containing $H$ and an isometric
covariant representation $(V,\rho)$ of $\mathcal{E}$ on $K$ such that if $P$
is the projection of $K$ onto $H$, then

\begin{enumerate}
\item $P$ commutes with $\rho(\mathcal{N})$ and $\rho(A)P=\sigma(A)P,$
$A\in\mathcal{N}$; and

\item for all $\eta\in\mathcal{E}$, $V(\eta)^{\ast}$ leaves $H$ invariant and
$PV(\eta)P=T(\eta)P$.
\end{enumerate}

The representation $(V,\rho)$ may be chosen so that the smallest subspace of
$K$ containing $H$ that reduces $(V,\rho)$ is $K$ itself.
When this is done, $(V,\rho)$ is unique up to unitary equivalence and is called \emph{the minimal isometric dilation of }$(T,\sigma)$. Further, if $(T,\sigma)$ is fully coisometric, the (unique minimal) isometric dilation $(V,\rho)$ is fully coisometric, too. \end{theorem and definition} \begin{proof} One can construct a proof following the steps leading to Theorem 3.3 and Corollary 5.21 in \cite{MS98}. However, continuity issues must be dealt with along the way and one needs to observe that the ideal $J$ discussed there plays no role here. Rather than doing this, it is easier and it may be more revealing to appeal to Lemma \ref{CovRep} and simply write down the operator $\tilde{V}$ and representation $\rho$ that lead to the dilation $(V,\rho)$ of $(T,\sigma)$. The remaining details will be very easy to verify. To this end, let $\Delta=(I-\tilde{T}^{\ast}\tilde{T})^{1/2}$ and let $\mathcal{D}$ be its range. Then $\Delta$ is an operator on $\mathcal{E} \otimes_{\sigma}H$ and commutes with the representation $\sigma^{\mathcal{E} }\circ\varphi$ of $\mathcal{N}$, by equation (\ref{covariance}). Write $\sigma_{1}$ for the restriction of $\sigma^{\mathcal{E}}\circ\varphi$ to $\mathcal{D}$. Let $\sigma_{2}=\sigma_{1}^{\mathcal{E}}\circ\varphi$ on $\mathcal{E}\otimes_{\sigma_{1}}\mathcal{D}$, and let $\sigma_{3}=\sigma _{2}^{\mathcal{E}}\circ\varphi$ on $\mathcal{E}\otimes_{\sigma_{2} }(\mathcal{E}\otimes_{\sigma_{1}}\mathcal{D})$. It is easy to see that $\sigma_{3}$ is naturally unitarily equivalent to $\sigma_{1}^{\mathcal{E} ^{\otimes2}}\circ\varphi_{2}$ on $\mathcal{E}^{\otimes2}\otimes_{\sigma_{1} }\mathcal{D}$, where $\varphi_{2}$ is the representation of $\mathcal{N}$ in $\mathcal{L}(\mathcal{E}^{\otimes2})$ defined by the formula $\varphi _{2}(a)(\xi\otimes\eta)=(\varphi(a)\xi)\otimes\eta$. 
We shall identify them henceforth. In general, we write $\sigma_{n+1}$ for
$\sigma_{1}^{\mathcal{E}^{\otimes n}}\circ\varphi_{n}$ on $\mathcal{E}^{\otimes
n}\otimes_{\sigma_{1}}\mathcal{D}$, where $\varphi_{n}$ has its obvious
meaning. It is evident that all the $\sigma_{n}$ are normal. We let
\[
K=H\oplus\mathcal{D}\oplus\sum_{n=1}^{\infty}\oplus\mathcal{E}^{\otimes
n}\otimes_{\sigma_{1}}\mathcal{D}
\]
and we let $\rho=\sigma\oplus\sigma_{1}\oplus\bigoplus_{n=1}^{\infty}
\sigma_{n+1}$, i.e., thinking matricially, $\rho=\operatorname{diag}
(\sigma,\sigma_{1},\sigma_{2},\ldots)$. Then a moment's reflection reveals
that $\rho$ is a normal representation of $\mathcal{N}$ on $K$ whose
restriction to $H$ is $\sigma$, of course. Form $\mathcal{E}\otimes_{\rho}K$
and define $\tilde{V}:\mathcal{E}\otimes_{\rho}K\rightarrow K$ matricially as
\[
\left[
\begin{array}
[c]{cccccc}
\tilde{T} & 0 & 0 & \cdots &  & \\
\Delta & 0 & 0 &  & \ddots & \\
0 & I & 0 & \ddots &  & \\
0 & 0 & I & 0 & \ddots & \\
\vdots & 0 & 0 & I & \ddots & \\
&  &  &  & \ddots & \ddots
\end{array}
\right]  .
\]
Of course the identity operators in this matrix really must be interpreted as
the operators that identify $\mathcal{E}\otimes_{\sigma_{n+1}}(\mathcal{E}
^{\otimes n}\otimes_{\sigma_{1}}\mathcal{D})$ with $\mathcal{E}^{\otimes
(n+1)}\otimes_{\sigma_{1}}\mathcal{D}$. It is easily checked that $\tilde{V}$
is an isometry and that the associated covariant representation $(V,\rho)$ is
an isometric dilation of $(T,\sigma)$. Moreover, it is easily checked that
$(V,\rho)$ is minimal, i.e., that the smallest subspace of $K$ containing $H$
and reducing $(V,\rho)$ is $K$ itself. Further, if $(T,\sigma)$ is fully
coisometric, so that $\tilde{T}$ is a coisometry, then $\tilde{V}$ is a
coisometry as well and $(V,\rho)$ is fully coisometric. The proof of the
uniqueness of $(V,\rho)$ is the same as in the $C^{\ast}$-setting and is
given in \cite[Proposition 3.2]{MS98}.
Finally, to see that $V$ is fully coisometric if $T$ is, observe that if $T$ is fully coisometric, then $\widetilde{T}$ is a coisometry as we noted earlier. Thus $\widetilde{T}\Delta^{2}=0$. This implies that $\widetilde {T}\Delta=0$. Therefore, from the form of $\widetilde{V}$, we see that $\widetilde{V}\widetilde{V}^{\ast}=I$, which proves that $V$ is fully coisometric. \end{proof} We shall use Theorem \ref{Theorem 1.13} only for the module $\mathcal{L} _{\mathcal{M}}(H,\mathcal{M}\otimes_{P}H)$ associated to a completely positive map $P$ on a von Neumann algebra $\mathcal{M}$ and only for the special covariant representation $(T,\sigma)$ which identifies $\mathcal{L} _{\mathcal{M}}(H,\mathcal{M}\otimes_{P}H)$ with $\mathcal{E}_{P}$. However, we shall employ a picture of the dilation $(V,\rho)$ that is different from the one constructed in Theorem \ref{Theorem 1.13}. It will play a critical role in our analysis of semigroups of completely positive maps. The definition of $(T,\sigma)$ is simple: $T$ maps $\mathcal{L}_{\mathcal{M} }(H,\mathcal{M}\otimes_{P}H)$ to $B(H)$ via the formula: \begin{equation} T(X):=W_{P}^{\ast}X\text{, \ \ \ }X\in\mathcal{L}_{\mathcal{M}}(H,\mathcal{M} \otimes_{P}H)\text{,} \label{covT} \end{equation} and $\sigma$ is the identity representation, \[ \sigma(S)=S\text{,\ \ \ \ }S\in\mathcal{M}^{\prime}\text{.} \] Of course, $\sigma$ is $\sigma$-weakly continuous. Also, a straightforward calculation shows that $T$ is a bimodule map. To see that $T$ is completely contractive, we appeal to \cite[Lemma 3.5]{MS98} and show that the linear transformation $\tilde{T}:\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes _{P}H)\otimes_{\sigma}H\rightarrow H$ defined by the formula \[ \tilde{T}(\sum X_{j}\otimes h_{j})=\sum W_{P}^{\ast}X_{j}h_{j} \] is contractive. 
However, this is immediate:
\begin{align*}
\left\|  \sum W_{P}^{\ast}X_{j}h_{j}\right\|  ^{2}  & \leq\left\|  \sum
X_{j}h_{j}\right\|  ^{2}\\
& =\sum_{j,k}\langle h_{k},X_{k}^{\ast}X_{j}h_{j}\rangle=\left\|  \sum
X_{j}\otimes h_{j}\right\|  ^{2}\text{.}
\end{align*}
As we remarked after Lemma \ref{CovRep}, $T$ is continuous with respect to
the $\sigma$-topology on $\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes
_{P}H)$ and the $\sigma$-weak topology on $B(H),$ and so $(T,\sigma)$ is a
completely contractive representation of $\mathcal{L}_{\mathcal{M}
}(H,\mathcal{M}\otimes_{P}H)$ on $H$.

Evidently, $T$ is really the inverse of the map $X\rightarrow\Phi_{X}^{\ast}$
that we used to identify $\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes
_{P}H)$ with $\mathcal{E}_{P}$ in Proposition \ref{Lemma1.4}. Indeed, using
the notation of that proposition, we see that for $Y\in\mathcal{E}_{P}$,
$T(\Phi_{Y}^{\ast})=W_{P}^{\ast}\Phi_{Y}^{\ast}=(\Phi_{Y}W_{P})^{\ast}=Y$.

Now all this may look trivial. It appears that after identifying $\mathcal{L}
_{\mathcal{M}}(H,\mathcal{M}\otimes_{P}H)$ with $\mathcal{E}_{P}$ we are
simply studying the identity covariant representation of $\mathcal{E}_{P}$.
However, we need to emphasize that the heart of the matter lies in the fact
that the inner product on $\mathcal{E}_{P}$ is \emph{not} the one coming from
operator multiplication in $B(H)$ (unless $P$ is an endomorphism; see
Corollary \ref{corollary1.6}). Rather, it is defined through the map
$X\rightarrow\Phi_{X}^{\ast}$ (or through its inverse $T$) which identifies
$\mathcal{E}_{P}$ with $\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{P}H)$.
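To illustrate this last point in the simplest case (a side remark, not used in the sequel), suppose $P=\alpha$ is a normal, unital $\ast$-automorphism of $\mathcal{M}$. Then the map determined on elementary tensors by $u_{\alpha}(T\otimes h):=\alpha(T)h$ (notation introduced only for this remark) extends to a unitary from $\mathcal{M}\otimes_{\alpha}H$ onto $H$, since
\[
\langle u_{\alpha}(T\otimes h),u_{\alpha}(S\otimes k)\rangle=\langle
h,\alpha(T)^{\ast}\alpha(S)k\rangle=\langle h,\alpha(T^{\ast}S)k\rangle
=\langle T\otimes h,S\otimes k\rangle\text{.}
\]
Under $u_{\alpha}$, the vector $W_{P}h=I\otimes h$ corresponds to $\alpha(I)h=h$, so $W_{P}$ is unitary and $T(X)^{\ast}T(Y)=X^{\ast}W_{P}W_{P}^{\ast}Y=X^{\ast}Y$. Thus, in this special case, the identity representation is already isometric and the inner product on $\mathcal{E}_{P}$ does come from operator multiplication, in line with Corollary \ref{corollary1.6}.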
\begin{definition} \label{identityrep}The completely contractive covariant representation $(T,\sigma)$ of $\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{P}H)$, where $T$ is defined by (\ref{covT}) and where $\sigma$ is the identity representation, will be called the \emph{identity covariant representation }of $\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{P}H)$. \end{definition} As we noted above, and as we shall use to good effect, $(T,\sigma)$ really identifies $\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{P}H)$ with $\mathcal{E}_{P}$ and when this identification is made, the maps $T$ and $\sigma$ are both the identity maps. To present the model for the minimal isometric dilation $(V,\rho)$ of $(T,\sigma)$ with which we will work, we define, for $0\leq k<\infty$, maps \[ \iota_{k}:\underset{k\;\text{times}}{\underbrace{\mathcal{M}\otimes _{P}\mathcal{M}\otimes_{P}\cdots\otimes_{P}\mathcal{M}}}\otimes_{P} H\rightarrow\underset{k+1\;\text{times}}{\underbrace{\mathcal{M}\otimes _{P}\mathcal{M}\otimes_{P}\cdots\otimes_{P}\mathcal{M}}}\otimes_{P}H \] by the formula $\iota_{k}(T_{1}\otimes T_{2}\otimes\cdots T_{k}\otimes h)=I\otimes T_{1}\otimes T_{2}\otimes\cdots T_{k}\otimes h$. Of course, $\iota_{0}=W_{P}$. Since $P$ is unital, this map is a well defined isometry of $H_{k}:=\underset{k\;\text{times}}{\underbrace{\mathcal{M}\otimes _{P}\mathcal{M}\otimes_{P}\cdots\otimes_{P}\mathcal{M}}}\otimes_{P}H$ into $H_{k+1}:=\underset{k+1\;\text{times}}{\underbrace{\mathcal{M}\otimes _{P}\mathcal{M}\otimes_{P}\cdots\otimes_{P}\mathcal{M}}}\otimes_{P}H$. We write $H_{\infty}$ for the Hilbert space inductive limit, $\underrightarrow {\lim}(H_{k},\iota_{k})$, and we write $W_{k}$ for the canonical (isometric) embeddings of $H_{k}$ into $H_{\infty}$. 
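The construction of $H_{\infty}$ can be tested against a degenerate example (an illustrative aside, not needed for the argument). If $P=\alpha$ is a normal, unital $\ast$-automorphism of $\mathcal{M}$, then each $H_{k}$ is identified with $H$ by the unitary determined by
\[
T_{1}\otimes T_{2}\otimes\cdots\otimes T_{k}\otimes h\longmapsto\alpha
^{k}(T_{1})\alpha^{k-1}(T_{2})\cdots\alpha(T_{k})h\text{,}
\]
and under these identifications each $\iota_{k}$ becomes the identity operator on $H$, because the extra tensor factor contributes $\alpha^{k+1}(I)=I$. Hence $H_{\infty}=H$ in this case, reflecting the fact that no dilation is actually needed when $P$ is an automorphism.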
Given $X\in\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{P}H)$, we define
$X_{k}:H_{k}\rightarrow H_{k+1}$ by the formula $X_{k}(T_{1}\otimes
T_{2}\otimes\cdots T_{k}\otimes h)=T_{1}\otimes T_{2}\otimes\cdots
T_{k}\otimes Xh$. A straightforward calculation using the fact that $X$
intertwines the actions of $\mathcal{M}$ on $H$ and on $\mathcal{M}
\otimes_{P}H$ shows that $X_{k}$ is bounded with $\left\|  X_{k}\right\|
\leq\left\|  X\right\|  $. Further, the diagram
\begin{equation}
\begin{array}
[c]{ccccccccccccc}
H & \overset{\iota_{0}}{\rightarrow} & H_{1} & \overset{\iota_{1}}
{\rightarrow} & \cdots & \overset{\iota_{k-1}}{\rightarrow} & H_{k} &
\overset{\iota_{k}}{\rightarrow} & H_{k+1} & \rightarrow & \cdots &
\rightarrow & H_{\infty}\\
& \underset{X}{\searrow} &  & \underset{X_{1}}{\searrow} &  & \underset
{X_{k-1}}{\searrow} &  & \underset{X_{k}}{\searrow} &  & \underset{X_{k+1}
}{\searrow} &  &  & \\
H & \overset{\iota_{0}}{\rightarrow} & H_{1} & \overset{\iota_{1}}
{\rightarrow} & \cdots & \overset{\iota_{k-1}}{\rightarrow} & H_{k} &
\overset{\iota_{k}}{\rightarrow} & H_{k+1} & \rightarrow & \cdots &
\rightarrow & H_{\infty}
\end{array}
\label{Vinfinity}
\end{equation}
commutes and so defines an operator $X_{\infty}\in B(H_{\infty})$. We shall
see in a moment that the map $X\rightarrow X_{\infty}$, which we shall call
$V$, is part of an isometric covariant representation $(V,\rho)$ of
$\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{P}H)$. To this end, we must
first define $\rho$ through the following diagram, where $S\in\mathcal{M}^{\prime}$.
The diagram
\begin{equation}
\begin{array}
[c]{ccccccccc}
H & \overset{\iota_{0}}{\rightarrow} & H_{1} & \overset{\iota_{1}}
{\rightarrow} & \cdots & \overset{\iota_{k-1}}{\rightarrow} & H_{k} &
\overset{\iota_{k}}{\rightarrow} & \cdots\\
\downarrow{\scriptstyle S} &  & \downarrow{\scriptstyle I\otimes S} &  &  &
& \downarrow{\scriptstyle I\otimes\cdots\otimes I\otimes S} &  & \\
H & \overset{\iota_{0}}{\rightarrow} & H_{1} & \overset{\iota_{1}}
{\rightarrow} & \cdots & \overset{\iota_{k-1}}{\rightarrow} & H_{k} &
\overset{\iota_{k}}{\rightarrow} & \cdots
\end{array}
\label{rho}
\end{equation}
commutes and, therefore, defines an operator $\rho(S)$ on $H_{\infty}$. Note
that
\[
W_{k}^{\ast}\rho(S)W_{k}=\underset{k\;\text{times}}{\underbrace{I\otimes
\cdots\otimes I}}\otimes S,
\]
where, recall, $W_{k}$ is the canonical embedding of $H_{k}$ in $H_{\infty}$.
From this it is obvious that $\rho$ is a normal representation of
$\mathcal{M}^{\prime}$ on $H_{\infty}$ that is reduced by each of the spaces
$W_{k}H_{k}$. In particular, note that $W_{0}^{\ast}\rho(\cdot)W_{0}=\sigma$.

If the diagrams that define $V$ and $\rho$, (\ref{Vinfinity}) and
(\ref{rho}), resp., are combined in the obvious way, it becomes clear that
for $X\in\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{P}H)$ and $S\in
\mathcal{M}^{\prime}$,
\[
V(XS)=(XS)_{\infty}=X_{\infty}\rho(S)=V(X)\rho(S)
\]
while
\[
V(S\cdot X)=V((I\otimes S)\circ X)=\rho(S)V(X)
\]
so that $(V,\rho)$ is covariant.

Next we show that $(V,\rho)$ is isometric. To this end, fix $X$ and $Y$ in
$\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{P}H)$ and recall that
$X_{k}:H_{k}\rightarrow H_{k+1}$ is defined by the formula $X_{k}(T_{1}\otimes
T_{2}\otimes\cdots T_{k}\otimes h)=T_{1}\otimes T_{2}\otimes\cdots
T_{k}\otimes Xh$ and similarly for $Y_{k}$.
Consequently, we find that
\[
X_{k}^{\ast}(T_{1}\otimes T_{2}\otimes\cdots T_{k+1}\otimes h)=T_{1}\otimes
T_{2}\otimes\cdots\otimes T_{k}\otimes X^{\ast}(T_{k+1}\otimes h)
\]
because
\begin{align*}
& \langle T_{1}\otimes T_{2}\otimes\cdots\otimes T_{k}\otimes X^{\ast
}(T_{k+1}\otimes h),S_{1}\otimes S_{2}\otimes\cdots\otimes S_{k}\otimes
k\rangle\\
& =\langle X^{\ast}(T_{k+1}\otimes h),P(T_{k}^{\ast}P(\cdots)S_{k})k\rangle\\
& =\langle T_{k+1}\otimes h,XP(T_{k}^{\ast}P(\cdots)S_{k})k\rangle\\
& =\langle T_{k+1}\otimes h,(P(T_{k}^{\ast}P(\cdots)S_{k})\otimes
I)Xk\rangle\;\;\text{(because }X\in\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}
\otimes_{P}H)\text{)}\\
& =\langle T_{1}\otimes T_{2}\otimes\cdots\otimes T_{k+1}\otimes
h,S_{1}\otimes S_{2}\otimes\cdots\otimes S_{k}\otimes Xk\rangle\\
& =\langle T_{1}\otimes T_{2}\otimes\cdots\otimes T_{k+1}\otimes
h,X_{k}(S_{1}\otimes S_{2}\otimes\cdots\otimes S_{k}\otimes k)\rangle\text{.}
\end{align*}
Therefore,
\begin{align*}
X_{k}^{\ast}Y_{k}(T_{1}\otimes T_{2}\otimes\cdots\otimes T_{k}\otimes h)  &
=X_{k}^{\ast}(T_{1}\otimes T_{2}\otimes\cdots\otimes T_{k}\otimes Yh)\\
& =T_{1}\otimes T_{2}\otimes\cdots\otimes T_{k}\otimes X^{\ast}Yh\\
& =W_{k}^{\ast}\rho(X^{\ast}Y)W_{k}(T_{1}\otimes T_{2}\otimes\cdots\otimes
T_{k}\otimes h).
\end{align*}
Thus $W_{k}^{\ast}\rho(X^{\ast}Y)W_{k}=X_{k}^{\ast}Y_{k}=W_{k}^{\ast
}V(X)^{\ast}W_{k+1}W_{k+1}^{\ast}V(Y)W_{k}$ for all $k$, from which it
follows that $V(X)^{\ast}V(Y)=\rho(\langle X,Y\rangle)$, i.e., that
$(V,\rho)$ is isometric.

We now show that $(V,\rho)$ dilates $(T,\sigma)$ in the sense described in
Theorem \ref{Theorem 1.13}. Of course to do this, we must, strictly speaking,
identify $H$ with the subspace $W_{0}H$ of $H_{\infty}$. When this is done,
the projection $P$ of $H_{\infty}$ on $H$ is $W_{0}W_{0}^{\ast}$. We already
have seen that $H=W_{0}H$ reduces $\rho$ and that $\rho|H=\sigma$ as is
required in part 1 of Theorem \ref{Theorem 1.13}.
Also note that for $X\in\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{P}H)$,
\[
W_{0}^{\ast}X_{\infty}W_{0}=\iota_{0}^{\ast}X=T(X)\text{,}
\]
which is an evident consequence of the properties of inductive limits: For
$h\in H$, $X_{\infty}W_{0}h=W_{1}Xh$, and $W_{0}^{\ast}W_{1}=\iota_{0}^{\ast
}$. This of course means that $T(X)=W_{0}^{\ast}V(X)W_{0}$, so that after
identifying $H$ with $W_{0}H$, through $W_{0}$, we see that $T(X)=PV(X)|H$,
$X\in\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{P}H)$, as required in
part 2 of Theorem \ref{Theorem 1.13}. But we also need to check that
$V(X)^{\ast}$ maps $H$ into itself. Equivalently, we need to show that $V(X)$
maps $H_{\infty}\ominus H$ into itself. For this purpose, it suffices to show
that for each $k\geq1$, $V(X)$ maps $W_{k}H_{k}\ominus H$ into $H_{\infty
}\ominus H$. To show how this is done, but to keep matters simple, we show
that $V(X)$ maps $W_{2}H_{2}\ominus H$ into $H_{\infty}\ominus H$. So let
$W_{2}\sum T_{i}\otimes S_{i}\otimes h_{i}$ be an element of $W_{2}H_{2}.$ To
say this is orthogonal to $H$ means that for all $h\in H$,
\begin{align*}
0  & =\langle W_{2}\sum T_{i}\otimes S_{i}\otimes h_{i},W_{0}h\rangle\\
& =\langle\sum T_{i}\otimes S_{i}\otimes h_{i},I\otimes I\otimes h\rangle\\
& =\langle\sum P(P(T_{i})S_{i})h_{i},h\rangle\text{;}
\end{align*}
i.e., $W_{2}\sum T_{i}\otimes S_{i}\otimes h_{i}\in W_{2}H_{2}\ominus H$ if
and only if $\sum P(P(T_{i})S_{i})h_{i}=0$.
Now assume that $W_{2}\sum T_{i}\otimes S_{i}\otimes h_{i}\in W_{2}
H_{2}\ominus H$, let $h$ be an element in $H$, and compute:
\begin{align*}
& \langle V(X)W_{2}\sum T_{i}\otimes S_{i}\otimes h_{i},W_{0}h\rangle\\
& =\langle\sum T_{i}\otimes S_{i}\otimes Xh_{i},I\otimes I\otimes I\otimes
h\rangle_{H_{3}}\\
& =\sum\langle Xh_{i},P(S_{i}^{\ast}P(T_{i}^{\ast}))\otimes h\rangle_{H_{1}
}\\
& =\sum\langle(P(P(T_{i})S_{i})\otimes I)Xh_{i},I\otimes h\rangle_{H_{1}}\\
& =\sum\langle X(P(P(T_{i})S_{i})h_{i}),I\otimes h\rangle_{H_{1}}\\
& =\langle X(\sum P(P(T_{i})S_{i})h_{i}),I\otimes h\rangle_{H_{1}}\\
& =0\text{,}
\end{align*}
since $\sum P(P(T_{i})S_{i})h_{i}=0$. Thus $(V,\rho)$ satisfies condition 2
of Theorem \ref{Theorem 1.13}.

To show that this $(V,\rho)$ is unitarily equivalent to the dilation of
$(T,\sigma)$ that is provided by Theorem \ref{Theorem 1.13}, we appeal to
Proposition 3.2 of \cite{MS98} (which we stated as part of Theorem
\ref{Theorem 1.13}) and show that $(V,\rho)$ is minimal; i.e., that there are
no closed subspaces $K$ properly contained between $H$ and $H_{\infty}$ that
are invariant under the images of $V$ and $\rho$. So, suppose $K$ is such a
subspace. Then for every $X\in\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}
\otimes_{P}H)$, $X_{\infty}(H)=V(X)(H)$ is contained in $K$. Hence, in
particular, $X_{\infty}(W_{0}H)=W_{1}(X(H))\subseteq K$. However, the span of
$\{X(H)\mid X\in\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes
_{P}H)\}$ is $\mathcal{M}\otimes_{P}H$, by Lemma \ref{lemma1.7}, and so we
conclude that $W_{1}H_{1}=W_{1}(\mathcal{M}\otimes_{P}H)\subseteq K.$ This,
in turn, implies that $X_{\infty}(W_{1}H_{1})\subseteq K$; i.e., that
$W_{2}((I\otimes X)(\mathcal{M}\otimes_{P}H))\subseteq K$. Applying Lemma
\ref{lemma1.7} again, we see that $W_{2}H_{2}=W_{2}(\mathcal{M}\otimes
_{P}\mathcal{M}\otimes_{P}H)$ is contained in $K$. Continuing in this manner,
we find that $W_{k}H_{k}$ is contained in $K$ for every $k$. Hence
$K=H_{\infty}$.
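For convenient reference, we record how the dilation acts on the dense subspaces $W_{k}H_{k}$; both formulas are immediate from the diagrams (\ref{Vinfinity}) and (\ref{rho}):
\[
V(X)W_{k}(T_{1}\otimes\cdots\otimes T_{k}\otimes h)=W_{k+1}(T_{1}
\otimes\cdots\otimes T_{k}\otimes Xh)
\]
and
\[
\rho(S)W_{k}(T_{1}\otimes\cdots\otimes T_{k}\otimes h)=W_{k}(T_{1}
\otimes\cdots\otimes T_{k}\otimes Sh)\text{,}
\]
for $X\in\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{P}H)$ and
$S\in\mathcal{M}^{\prime}$, with the obvious interpretation $V(X)W_{0}
h=W_{1}(Xh)$ when $k=0$.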
Since our special $(V,\rho)$ is unitarily equivalent to the one provided by
Theorem \ref{Theorem 1.13}, we may infer that $V$ is continuous with respect
to the $\sigma$-topology on $\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}
\otimes_{P}H)$ and the $\sigma$-weak topology on $B(H_{\infty})$.

We summarize our discussion of the identity representation of $\mathcal{L}
_{\mathcal{M}}(H,\mathcal{M}\otimes_{P}H)$ in the following theorem.

\begin{theorem}
\label{idDilation}The maps $V$ and $\rho$ defined by the diagrams
(\ref{Vinfinity}) and (\ref{rho}), respectively, together form an isometric
covariant representation of $\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}
\otimes_{P}H)$ that dilates the identity representation $(T,\sigma)$ of
$\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{P}H)$. Moreover,
$(T,\sigma)$ and $(V,\rho)$ are fully coisometric.
\end{theorem}

\begin{proof}
The only thing that remains to be proved is the last statement about
$(T,\sigma)$ and $(V,\rho)$ being fully coisometric. However, for this
purpose, it suffices to show that $(T,\sigma)$ is fully coisometric, by
Theorem \ref{Theorem 1.13}. Recall that $\tilde{T}$ maps $\mathcal{L}
_{\mathcal{M}}(H,\mathcal{M}\otimes_{P}H)\otimes_{\sigma}H$ to $H$ by the
formula $\tilde{T}(X\otimes h)=W_{P}^{\ast}Xh$. To calculate $\tilde{T}^{\ast
}$, simply observe that for $X\in\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}
\otimes_{P}H)$ and $h\in H$, $\langle\tilde{T}^{\ast}k,X\otimes h\rangle
=\langle k,\tilde{T}(X\otimes h)\rangle=\langle k,W_{P}^{\ast}Xh\rangle
=\langle W_{P}k,Xh\rangle.$ However, by Lemma \ref{lemma1.7}, $\{Xh\mid
X\in\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{P}H),\;h\in H\}$ spans
$\mathcal{M}\otimes_{P}H$.
So, if we let $u:\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{P}
H)\otimes_{\sigma}H\rightarrow\mathcal{M}\otimes_{P}H$ be defined by the
formula $u(X\otimes h)=Xh$, then $u$ is a Hilbert space isomorphism such that
$\langle\widetilde{T}^{\ast}k,X\otimes h\rangle=\langle W_{P}k,Xh\rangle
=\langle W_{P}k,u(X\otimes h)\rangle=\langle u^{\ast}W_{P}k,X\otimes
h\rangle$ for all $k$ and all $X\otimes h$. Thus $\tilde{T}^{\ast}$ is the
\emph{isometry }$u^{\ast}W_{P}$, proving that $\tilde{T}$ is a coisometry
and, therefore, that $(T,\sigma)$ is fully coisometric.
\end{proof}

If $\mathcal{N}$ is a von Neumann algebra and if $\mathcal{E}$ is a $W^{\ast
}$-correspondence over $\mathcal{N}$, then we have seen how a completely
contractive covariant representation $(T,\sigma)$ of $\mathcal{E}$ on a
Hilbert space $H$ gives rise to a completely positive map $\Psi=\Psi_{T}$
from $\mathcal{L}(\mathcal{E})$ into $B(H)$. (See Definition
\ref{Definition1.12bis}.) However, equally important for our purposes is the
related completely positive map $\Theta=\Theta_{T}$ on the \emph{commutant
}of $\sigma(\mathcal{N})$, $\sigma(\mathcal{N})^{\prime}$, that is described
in the next proposition.

\begin{proposition}
\label{Proposition 1.15}Let $\mathcal{N}$ be a von Neumann algebra, let
$\mathcal{E}$ be a $W^{\ast}$-correspondence over $\mathcal{N}$, and let
$(T,\sigma)$ be a completely contractive covariant representation of
$\mathcal{E}$ on a Hilbert space $H$. For $S\in\sigma(\mathcal{N})^{\prime}$,
set
\begin{equation}
\Theta(S)=\Theta_{T}(S):=\tilde{T}(1_{\mathcal{E}}\otimes S)\tilde{T}^{\ast
}\text{.} \label{thetaT}
\end{equation}
Then $\Theta$ is a normal completely positive map from $\sigma(\mathcal{N}
)^{\prime}$ into itself that is unital if and only if $(T,\sigma)$ is fully
coisometric.
Further, if $(T,\sigma)$ is isometric, then $\Theta$ is multiplicative, i.e., $\Theta$ is an endomorphism of $\sigma(\mathcal{N} )^{\prime}$, and, conversely, if $\Theta$ is multiplicative, then the correspondence $\mathcal{E}$ decomposes as the direct sum of two subcorrespondences, $\mathcal{E}=\mathcal{E}_{1}\oplus\mathcal{E}_{2}$, so that $(T|\mathcal{E}_{1},\sigma)$ is isometric and $T|\mathcal{E}_{2}=0$. \end{proposition} \begin{proof} Much of the proof may be dug out of \cite{MS99}. See Lemma 2.3 there, in particular. Here are the particulars. First, recall the induced representation $\sigma^{\mathcal{E}}:\mathcal{L}(\mathcal{E})\rightarrow B(\mathcal{E} \otimes_{\sigma}H)$, $\sigma^{\mathcal{E}}(X)=X\otimes I_{H}$. As Rieffel shows in Theorem 6.23 of \cite{mR74}, the commutant of $\sigma^{\mathcal{E} }(\mathcal{L}(\mathcal{E}))$ is $\mathbb{C}1_{\mathcal{E}}\otimes \sigma(\mathcal{N})^{\prime}$, and of course the map $S\rightarrow 1_{\mathcal{E}}\otimes S$ is a normal representation of $\sigma(\mathcal{N} )^{\prime}$ onto $\mathbb{C}1_{\mathcal{E}}\otimes\sigma(\mathcal{N})^{\prime }$. Thus $\Theta$ is a normal completely positive map from $\sigma (\mathcal{N})^{\prime}$ into $B(H)$. The problem is to locate its range. This, however, is easy on the basis of equation (\ref{covariance}): Given $R\in\mathcal{N}$ and $S\in\sigma(\mathcal{N})^{\prime}$, that equation implies that \begin{align*} \sigma(R)\Theta(S)=\sigma(R)\tilde{T}(1_{\mathcal{E}}\otimes S)\tilde{T} ^{\ast}=\tilde{T}\sigma^{\mathcal{E}}\circ\varphi(R)(1_{\mathcal{E}}\otimes S)\tilde{T}^{\ast}\\ =\tilde{T}(1_{\mathcal{E}}\otimes S)\sigma^{\mathcal{E}}\circ\varphi (R)\tilde{T}^{\ast}=\tilde{T}(1_{\mathcal{E}}\otimes S)\tilde{T}^{\ast} \sigma(R)\\ =\Theta(S)\sigma(R), \end{align*} so $\Theta(S)\in\sigma(\mathcal{N})^{\prime}$. Of course $\Theta$ is unital if and only if $(T,\sigma)$ is fully coisometric. As for the last assertion, the direct statement is proved as Lemma 2.3 of \cite{MS99}. 
For the converse, suppose that $\Theta$ is multiplicative. Then $\tilde
{T}\tilde{T}^{\ast}=\Theta(I)$ is a projection. Therefore, $\tilde{T}^{\ast
}\tilde{T}$ is a projection on $\mathcal{E}\otimes_{\sigma}H$, call it $q$.
Since $\Theta$ is multiplicative, we infer that $q(1_{\mathcal{E}}\otimes
S_{1})q(1_{\mathcal{E}}\otimes S_{2})q=q(1_{\mathcal{E}}\otimes S_{1}S_{2})q$
for all $S_{1},S_{2}\in\sigma(\mathcal{N})^{\prime}$. This implies that
$q\in(\mathbb{C}1_{\mathcal{E}}\otimes\sigma(\mathcal{N})^{\prime})^{\prime
}=\sigma^{\mathcal{E}}(\mathcal{L}(\mathcal{E}))$, by Rieffel's theorem
\cite[Theorem 6.23]{mR74} and the fact that $\sigma^{\mathcal{E}}$ is a
normal representation of the von Neumann algebra $\mathcal{L}(\mathcal{E})$.
Thus, $q=\sigma^{\mathcal{E}}(Q)$ for a projection $Q\in\mathcal{L}
(\mathcal{E})$. If $\mathcal{E}_{1}:=Q\mathcal{E}$ and $\mathcal{E}
_{2}:=(1_{\mathcal{E}}-Q)\mathcal{E}$, then it is easy to see that
$(T|\mathcal{E}_{1},\sigma)$ is isometric, while $T|\mathcal{E}_{2}=0$. We
omit the details.
\end{proof}

\begin{definition}
\label{inducedCPmap}Let $\mathcal{E}$ be a $W^{\ast}$-correspondence over a
von Neumann algebra $\mathcal{N}$ and let $(T,\sigma)$ be a completely
contractive covariant representation of $\mathcal{E}$ on the Hilbert space
$H$. The normal, completely positive map
\[
\Theta_{T}:\sigma(\mathcal{N})^{\prime}\rightarrow\sigma(\mathcal{N})^{\prime}
\]
defined by equation (\ref{thetaT}) will be called the \emph{induced
(completely positive) map }on $\sigma(\mathcal{N})^{\prime}$. If $T$ is
isometric, then $\Theta_{T}$ will be called the \emph{induced endomorphism}
of $\sigma(\mathcal{N})^{\prime}$.
\end{definition}

If we apply Proposition \ref{Proposition 1.15} to the identity representation
$(T,\sigma)$ of $\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{P}H)$ or of
$\mathcal{E}_{P}$, for a completely positive map $P$ on a von Neumann algebra
$\mathcal{M}$, we recapture $P$.
Specifically, we have

\begin{corollary}
\label{lemma 1.16}Let $P$ be a normal, unital, completely positive map on the
von Neumann algebra $\mathcal{M}$, and let $(T,\sigma)$ be the identity
representation of the Arveson correspondence $\mathcal{E}_{P}\simeq
\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{P}H)$ on $H$. Then $\Theta
_{T}=P$.
\end{corollary}

\begin{proof}
We will apply Proposition \ref{Proposition 1.15}, with the von Neumann
algebra $\mathcal{N}$ identified with $\mathcal{M}^{\prime}$. So, for
$S\in\mathcal{M}$ and $h\in H$, we have from the computations in the proof of
Theorem \ref{idDilation} (specifically, the fact that $\tilde{T}^{\ast
}=u^{\ast}W_{P}$, so that $\tilde{T}=W_{P}^{\ast}u$) that
\begin{align*}
\Theta(S)h  & =\tilde{T}(1_{\mathcal{E}}\otimes S)\tilde{T}^{\ast}h\\
& =\tilde{T}(1_{\mathcal{E}}\otimes S)u^{\ast}W_{P}h\\
& =W_{P}^{\ast}u(1_{\mathcal{E}}\otimes S)u^{\ast}W_{P}h=P(S)h\text{,}
\end{align*}
where the last equality holds because $u(1_{\mathcal{E}}\otimes S)u^{\ast
}W_{P}h=S\otimes h$ and $W_{P}^{\ast}(S\otimes h)=P(S)h$.
\end{proof}

We conclude this section with our principal dilation result for single
completely positive maps. It is the key to our analysis of semigroups.

\begin{theorem}
\label{Theorem1.17}Let $\mathcal{M}$ be a von Neumann algebra acting on a
Hilbert space $H$ and let $P:\mathcal{M}\rightarrow\mathcal{M}$ be a normal,
unital, completely positive map of $\mathcal{M}$. Let $(T,\sigma)$ be the
identity representation on $H$ of the Arveson correspondence $\mathcal{E}_{P}
$, let $(V,\rho)$ be the minimal isometric dilation of $(T,\sigma)$ on the
Hilbert space $K$, and let $W:H\rightarrow K$ be the associated imbedding.
If $\mathcal{R}:=\rho(\mathcal{M}^{\prime})^{\prime}$, then

\begin{enumerate}
\item $W^{\ast}\mathcal{R}W=\mathcal{M}$, so that $\mathcal{M}$ is a
\emph{corner} of $\mathcal{R}$ (and $\mathcal{R}^{\prime}$ is a normal
homomorphic image of $\mathcal{M}^{\prime}$),

\item $\Theta_{V}$ is a unital, normal $\ast$-endomorphism of $\mathcal{R}$, and

\item for every non-negative integer $n$,
\[
P^{n}(T)=W^{\ast}\Theta_{V}^{n}(WTW^{\ast})W
\]
and
\[
P^{n}(W^{\ast}SW)=W^{\ast}\Theta_{V}^{n}(S)W
\]
for all $S\in\mathcal{R}$, and $T\in\mathcal{M}$.
\end{enumerate}
\end{theorem}

Thus, the induced endomorphism $\Theta_{V}$ of $\mathcal{R}$ is a \emph{power
dilation} of $P$.

\begin{proof}
From Corollary \ref{lemma 1.16}, we know that $P$ is the induced completely
positive map $\Theta_{T}$ on $\mathcal{M}$. Also, since $(V,\rho)$ is the
minimal isometric dilation of $(T,\sigma)$ and $W$ is the embedding map, we
know that $WH$ is invariant under $V(Y)^{\ast}$ for all $Y\in\mathcal{E}$ and
$W^{\ast}V(Y)W=T(Y)$. Since $W^{\ast}\rho(S)W=\sigma(S)$ for all
$S\in\mathcal{M}^{\prime}$ by definition of $(V,\rho)$, and since
$\sigma(S)=S$, $S\in\mathcal{M}^{\prime}$, by definition of the identity
representation, we see that
\begin{equation}
W^{\ast}\mathcal{R}W=W^{\ast}\rho(\mathcal{M}^{\prime})^{\prime}W=(W^{\ast
}\rho(\mathcal{M}^{\prime})W)^{\prime}=(\mathcal{M}^{\prime})^{\prime
}=\mathcal{M}\text{.}\label{compress}
\end{equation}
By Theorem \ref{idDilation}, $(T,\sigma)$ and $(V,\rho)$ are fully
coisometric, and so, by Proposition \ref{Proposition 1.15}, $\Theta_{V}$ is a
normal, unital, $\ast$-endomorphism of $\mathcal{R}=\rho(\mathcal{M}^{\prime
})^{\prime}$. Since $WH$ is invariant under $V(Y)^{\ast}$, $Y\in\mathcal{E}$,
we see that for $Y\in\mathcal{E}$ and $k\in K$,
\[
WW^{\ast}\tilde{V}(Y\otimes(I-WW^{\ast})k)=WW^{\ast}V(Y)(I-WW^{\ast})k=0
\]
so that $WW^{\ast}\tilde{V}(I\otimes(I-WW^{\ast}))=0$.
Therefore, $WW^{\ast}\Theta_{V}(I-WW^{\ast})=WW^{\ast}\tilde{V}(I\otimes
(I-WW^{\ast}))\tilde{V}^{\ast}=0$; i.e., $WW^{\ast}\Theta_{V}(WW^{\ast
})=WW^{\ast}$. Multiplying this equation on the left by $W^{\ast}$, we see
that
\begin{equation}
W^{\ast}\Theta_{V}(WW^{\ast})=W^{\ast}.\label{key1}
\end{equation}
Since $T(\cdot)=W^{\ast}V(\cdot)W$, it follows that $\tilde{T}=W^{\ast}
\tilde{V}(I\otimes W)$. Consequently, for $L\in\mathcal{M}$,
\begin{align}
P(L)  & =\tilde{T}(I\otimes L)\tilde{T}^{\ast}\label{key2}\\
& =W^{\ast}\tilde{V}(I\otimes W)(I\otimes L)(I\otimes W^{\ast})\tilde
{V}^{\ast}W=W^{\ast}\Theta_{V}(WLW^{\ast})W\text{.}
\end{align}
On the other hand, for $S\in\mathcal{R}$, we find from this equation and the
fact that $W^{\ast}SW\in\mathcal{M}$ (by (\ref{compress})) that
\begin{align*}
P(W^{\ast}SW)  & =W^{\ast}\Theta_{V}(WW^{\ast}SWW^{\ast})W\\
& =W^{\ast}\Theta_{V}(WW^{\ast})\Theta_{V}(S)\Theta_{V}(WW^{\ast})W\\
& =W^{\ast}\Theta_{V}(S)W\text{,}
\end{align*}
using equation (\ref{key1}). To relate $P^{2}$ to $\Theta_{V}^{2}$, let
$L\in\mathcal{M}$. Then, using equation (\ref{key1}) again, we find that
\begin{align*}
P^{2}(L)  & =P(P(L))=P(W^{\ast}\Theta_{V}(WLW^{\ast})W)\\
& =W^{\ast}\Theta_{V}(WW^{\ast}\Theta_{V}(WLW^{\ast})WW^{\ast})W\\
& =W^{\ast}\Theta_{V}(WW^{\ast})\Theta_{V}^{2}(WLW^{\ast})\Theta_{V}
(WW^{\ast})W\\
& =W^{\ast}\Theta_{V}^{2}(WLW^{\ast})W\text{.}
\end{align*}
Continuing in this manner, we find that $P^{n}(L)=W^{\ast}\Theta_{V}
^{n}(WLW^{\ast})W$ for all $L\in\mathcal{M}$.

To show that $P^{n}(W^{\ast}SW)=W^{\ast}\Theta_{V}^{n}(S)W$ for all
$S\in\mathcal{R}$, and all $n$, we need to generalize equation (\ref{key1})
to $W^{\ast}\Theta_{V}^{n}(WW^{\ast})=W^{\ast}$, for all $n$.
However, this is an easy induction, the general step of which is: \begin{align*} WW^{\ast}\Theta_{V}^{n+1}(WW^{\ast}) & =WW^{\ast}\Theta_{V}(\Theta_{V}^{n}(WW^{\ast}))\\ & =WW^{\ast}\Theta_{V}(WW^{\ast}\Theta_{V}^{n}(WW^{\ast}))\\ & +WW^{\ast}\Theta_{V}((I-WW^{\ast})\Theta_{V}^{n}(WW^{\ast}))\\ & =WW^{\ast}\Theta_{V}(WW^{\ast})+WW^{\ast}\Theta_{V}(I-WW^{\ast})\Theta_{V}^{n+1}(WW^{\ast})\\ & =WW^{\ast}\text{,} \end{align*} where the inductive hypothesis $WW^{\ast}\Theta_{V}^{n}(WW^{\ast})=WW^{\ast}$ is used in the first summand and the multiplicativity of $\Theta_{V}$ in the second; the first summand equals $WW^{\ast}$ because $WW^{\ast}\Theta_{V}(WW^{\ast})=WW^{\ast}$, and the second vanishes because $WW^{\ast}\Theta_{V}(I-WW^{\ast})=0$. Thus $WW^{\ast}\Theta_{V}^{n}(WW^{\ast})=WW^{\ast}$ for all $n$. Multiplying through on the left by $W^{\ast}$ gives the desired formula. Using this, we see that since $W^{\ast}SW\in\mathcal{M}$ for all $S\in\mathcal{R}$, our earlier calculation gives \begin{align*} P^{n}(W^{\ast}SW) & =W^{\ast}\Theta_{V}^{n}(WW^{\ast}SWW^{\ast})W\\ & =W^{\ast}\Theta_{V}^{n}(WW^{\ast})\Theta_{V}^{n}(S)\Theta_{V}^{n}(WW^{\ast})W\\ & =W^{\ast}\Theta_{V}^{n}(S)W. \end{align*} \end{proof} \section{Semigroups of Completely Positive Maps} In this section, we focus on semigroups $\{P_{t}\}_{t\geq0}$ of unital, normal, completely positive maps on our basic von Neumann algebra $\mathcal{M}$ acting on a Hilbert space $H$. That is, we assume that $P_{t+s}=P_{t}P_{s}$, $s,t\geq0$, and $P_{0}$ is the identity map on $\mathcal{M}$. We call $\{P_{t}\}_{t\geq0}$ a \emph{completely positive semigroup} on $\mathcal{M}$, or simply a \emph{cp semigroup}, for short. We make no continuity assumptions on $\{P_{t}\}_{t\geq0}$ in this section and, in fact, everything we say is true if the additive semigroup of non-negative real numbers is replaced by any totally ordered semigroup. Our goal is to dilate $\{P_{t}\}_{t\geq0}$ to a semigroup of endomorphisms in much the same fashion that we did for a single completely positive map in Section \ref{section1}. However, there is a complication that must be addressed. Let $\mathcal{E}_{t}$ be the Arveson correspondence over $\mathcal{M}^{\prime}$ associated with $P_{t}$, $t\geq0$.
As in Section \ref{section1}, we shall view $\mathcal{E}_{t}$ as either a space of operators on $H$ or as the space $\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{P_{t}}H)$. As we noted, the spaces $\mathcal{E}_{t}$ need not ``multiply'', i.e., $\mathcal{E}_{t}\otimes\mathcal{E}_{s}$ need not be isomorphic to $\mathcal{E}_{t+s}$. So, we will have to ``dilate'' these to a family $\{E(t)\}_{t\geq0}$ of $\mathcal{M}^{\prime}$-correspondences such that $E(t)\otimes E(s)\simeq E(t+s)$. That is, we need to dilate these to a (discrete) product system over $\mathcal{M}^{\prime}$ - a notion that is inspired by Arveson's product systems in \cite{wA89}. This we do following, in outline, arguments of Bhat in \cite{bB96}. There are similarities also between our arguments and arguments in \cite{BS00}, but our correspondences are over $\mathcal{M}^{\prime}$ as opposed to being over $\mathcal{M}$, and we cannot tap directly into their arguments. Once $\{E(t)\}_{t\geq0}$ is constructed, we promote the identity representations of the $\mathcal{E}_{t}$'s to completely contractive representations of the $E(t)$'s and then dilate these to isometric representations of the $E(t)$'s. These last representations will implement a semigroup of endomorphisms of a bigger von Neumann algebra in which $\mathcal{M}$ sits as a corner. The semigroup of endomorphisms will be the desired dilation of $\{P_{t}\}_{t\geq0}$. Let $\mathfrak{P}(t)$ denote the collection of partitions of the closed interval $[0,t]$, and order these by refinement. For a $\mathfrak{p}\in\mathfrak{P}(t)$, we shall write $\mathfrak{p}=\{0=t_{0}<t_{1}<t_{2}<\cdots<t_{n-1}<t_{n}=t\}$.
For such a $\mathfrak{p}$, we shall write \[ H_{\mathfrak{p},t}:=\mathcal{M}\otimes_{P_{t_{1}}}\mathcal{M}\otimes_{P_{t_{2}-t_{1}}}\cdots\mathcal{M}\otimes_{P_{t-t_{n-1}}}H\text{.} \] Then it is easy to see that $H_{\mathfrak{p},t}$ is a left $\mathcal{M}$-module via the formula $S\cdot(T_{1}\otimes T_{2}\otimes\cdots\otimes T_{n}\otimes h):=(ST_{1})\otimes T_{2}\otimes\cdots\otimes T_{n}\otimes h$, $S\in\mathcal{M}$, $(T_{1}\otimes T_{2}\otimes\cdots\otimes T_{n}\otimes h)\in H_{\mathfrak{p},t}$. Also, it is easy to see that $\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p},t})$ becomes an $\mathcal{M}^{\prime}$-correspondence via the actions \[ (XR)h:=X(Rh) \] and \[ (RX)h:=(I\otimes R)Xh\text{,} \] $R\in\mathcal{M}^{\prime}$, $X\in\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p},t})$ and $h\in H$, where $I\otimes R$ denotes the operator on $H_{\mathfrak{p},t}$ that sends $T_{1}\otimes\cdots\otimes T_{n}\otimes h$ to $T_{1}\otimes\cdots\otimes T_{n}\otimes Rh$. The inner product is given by the formula $\langle X_{1},X_{2}\rangle:=X_{1}^{\ast}X_{2}$. Note that the map $R\mapsto\langle X_{1},RX_{2}\rangle=X_{1}^{\ast}(I\otimes R)X_{2}$ is $\sigma$-weakly continuous, so that $\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p},t})$ is, indeed, an $\mathcal{M}^{\prime}$-correspondence. We shall write $\mathcal{L}_{t}$ for the $\mathcal{M}^{\prime}$-correspondence $\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{P_{t}}H)$. Then Proposition \ref{Lemma1.9} shows that $\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p},t})$ is isomorphic to $\mathcal{L}_{t-t_{n-1}}\otimes_{\mathcal{M}^{\prime}}\mathcal{L}_{t_{n-1}-t_{n-2}}\otimes\cdots\otimes_{\mathcal{M}^{\prime}}\mathcal{L}_{t_{1}}$ as $\mathcal{M}^{\prime}$-correspondences. We next want to show that the Hilbert spaces $H_{\mathfrak{p},t}$ and the $\mathcal{M}^{\prime}$-correspondences $\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p},t})$ form inductive systems, so that we can take their direct limits.
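For orientation, it may help to record the smallest nontrivial case of these definitions (a direct instantiation with $n=2$; nothing here goes beyond the formulas above). For the two-point partition $\mathfrak{p}=\{0<t_{1}<t\}$, \[ H_{\mathfrak{p},t}=\mathcal{M}\otimes_{P_{t_{1}}}\mathcal{M}\otimes_{P_{t-t_{1}}}H \qquad\text{and}\qquad \mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p},t})\simeq\mathcal{L}_{t-t_{1}}\otimes_{\mathcal{M}^{\prime}}\mathcal{L}_{t_{1}}\text{,} \] the isomorphism sending $X\otimes Y$, with $X\in\mathcal{L}_{t-t_{1}}$ and $Y\in\mathcal{L}_{t_{1}}$, to the composite operator \[ (I\otimes X)Y:H\overset{Y}{\longrightarrow}\mathcal{M}\otimes_{P_{t_{1}}}H\overset{I\otimes X}{\longrightarrow}\mathcal{M}\otimes_{P_{t_{1}}}\mathcal{M}\otimes_{P_{t-t_{1}}}H\text{.} \]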
For this purpose, consider first the case when $\mathfrak{p}^{\prime}:=\{0=t_{0}<t_{1}<t_{2}<\cdots<t_{k}<\tau<t_{k+1}<\cdots<t_{n-1}<t_{n}=t\}$, a one-point refinement of $\mathfrak{p}$. Then we obtain a Hilbert space isometry $v_{0}:H_{\mathfrak{p},t}\longrightarrow H_{\mathfrak{p}^{\prime},t}$ defined by the formula $v_{0}(T_{1}\otimes T_{2}\otimes\cdots\otimes T_{n}\otimes h)=T_{1}\otimes\cdots\otimes T_{k}\otimes I\otimes T_{k+1}\otimes\cdots\otimes T_{n}\otimes h$ and an $\mathcal{M}^{\prime}$-correspondence isometry $v:\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p},t})\longrightarrow\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p}^{\prime},t})$ defined by the formula $v(X):=v_{0}\circ X.$ The proof of these facts is a minor modification of the proof of Proposition \ref{Lemma1.9} and so will be omitted. Since every refinement of a partition can be obtained by a sequence of one-point refinements, it is clear that for every pair of partitions $(\mathfrak{p},\mathfrak{p}^{\prime})$, with $\mathfrak{p}^{\prime}$ refining $\mathfrak{p}$, we have Hilbert space isometries $v_{0,\mathfrak{p,p}^{\prime}}:H_{\mathfrak{p},t}\rightarrow H_{\mathfrak{p}^{\prime},t}$ and $\mathcal{M}^{\prime}$-correspondence isometries $v_{\mathfrak{p,p}^{\prime}}:\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p},t})\longrightarrow\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p}^{\prime},t})$ so that $v_{0,\mathfrak{p}^{\prime}\mathfrak{,p}^{\prime\prime}}\circ v_{0,\mathfrak{p,p}^{\prime}}=v_{0,\mathfrak{p,p}^{\prime\prime}}$ and $v_{\mathfrak{p}^{\prime}\mathfrak{,p}^{\prime\prime}}\circ v_{\mathfrak{p,p}^{\prime}}=v_{\mathfrak{p,p}^{\prime\prime}}$, when $\mathfrak{p}^{\prime\prime}$ refines $\mathfrak{p}^{\prime}$ and $\mathfrak{p}^{\prime}$ refines $\mathfrak{p}$.
The Hilbert space isometry $v_{0,\mathfrak{p,p}^{\prime}}$ simply sends a decomposable tensor $T_{1}\otimes T_{2}\otimes\cdots\otimes T_{n}\otimes h\in H_{\mathfrak{p},t}$ to the decomposable tensor in $H_{\mathfrak{p}^{\prime},t}$ obtained from $T_{1}\otimes T_{2}\otimes\cdots\otimes T_{n}\otimes h$ by inserting identity operators in those positions where new indices have been added to $\mathfrak{p}$ to obtain $\mathfrak{p}^{\prime}$. The $\mathcal{M}^{\prime}$-correspondence isometry $v_{\mathfrak{p,p}^{\prime}}$ is defined by the formula $v_{\mathfrak{p,p}^{\prime}}(X):=v_{0,\mathfrak{p,p}^{\prime}}\circ X$, $X\in\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p},t})$. We may thus form the direct limits \[ H_{t}:=\underrightarrow{\lim}(H_{\mathfrak{p},t},v_{0,\mathfrak{p,p}^{\prime}}) \] and \[ E(t):=\underrightarrow{\lim}(\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p},t}),v_{\mathfrak{p,p}^{\prime}})\text{.} \] Note that $H_{t}$ is a left $\mathcal{M}$-module since each $H_{\mathfrak{p},t}$ is and the maps $v_{0,\mathfrak{p,p}^{\prime}}$ respect the action of $\mathcal{M}$. It is also a left $\mathcal{M}^{\prime}$-module, since $\mathcal{M}^{\prime}$ acts on each $H_{\mathfrak{p},t}$ via the formula $R(T_{1}\otimes T_{2}\otimes\cdots\otimes T_{n}\otimes h)=(I\otimes R)(T_{1}\otimes T_{2}\otimes\cdots\otimes T_{n}\otimes h)=T_{1}\otimes T_{2}\otimes\cdots\otimes T_{n}\otimes Rh$, $T_{1}\otimes T_{2}\otimes\cdots\otimes T_{n}\otimes h\in H_{\mathfrak{p},t}$, $R\in\mathcal{M}^{\prime}$, and the maps $v_{0,\mathfrak{p,p}^{\prime}}$ respect this action. It is now easy to see that $\mathcal{L}_{\mathcal{M}}(H,H_{t})$ has the structure of an $\mathcal{M}^{\prime}$-correspondence. Indeed, the bimodule structure has just been indicated, and one passes to the limit in the defining system $H_{t}=\underrightarrow{\lim}(H_{\mathfrak{p},t},v_{0,\mathfrak{p,p}^{\prime}})$ to equip $\mathcal{L}_{\mathcal{M}}(H,H_{t})$ with its inner product.
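For orientation, note that the coarsest partition $\mathfrak{p}_{0}=\{0<t\}$ already contributes to these limits: $H_{\mathfrak{p}_{0},t}=\mathcal{M}\otimes_{P_{t}}H$, so the canonical map into the direct limit provides an isometric embedding of $\mathcal{M}^{\prime}$-correspondences \[ \mathcal{L}_{t}=\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{P_{t}}H)=\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p}_{0},t})\longrightarrow E(t)\text{.} \] This is the precise sense in which $E(t)$ ``dilates'' the Arveson correspondence $\mathcal{E}_{t}\simeq\mathcal{L}_{t}$.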
Since each $\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p},t})$ is an $\mathcal{M}^{\prime} $-correspondence in an obvious way, so is $\mathcal{L}_{\mathcal{M}}(H,H_{t})$ via the limit of the inner products on the $\mathcal{L}_{\mathcal{M} }(H,H_{\mathfrak{p},t})$. \begin{lemma} \label{Lemma2.1}Each $E(t)$ is isomorphic, as an $\mathcal{M}^{\prime} $-correspondence, to $\mathcal{L}_{\mathcal{M}}(H,H_{t})$. \end{lemma} \begin{proof} For each $\mathfrak{p\in P}(t)$, we write $v_{0,\mathfrak{p,\infty}}$ for the canonical isometric embedding of $H_{\mathfrak{p},t}$ in $H_{t}$. Since the $v_{0,\mathfrak{p,p}^{\prime}}$ are $\mathcal{M}$-module maps, so is $v_{0,\mathfrak{p,\infty}}$. Hence we obtain $\mathcal{M}^{\prime} $-correspondence isometries $v_{\mathfrak{p,\infty}}:\mathcal{L}_{\mathcal{M} }(H,H_{\mathfrak{p},t})\longrightarrow\mathcal{L}_{\mathcal{M}}(H,H_{t})$ by setting $v_{\mathfrak{p,\infty}}(X)=v_{0,\mathfrak{p,\infty}}\circ X.$ However, for $\mathfrak{p}^{\prime}$ finer than $\mathfrak{p}$, we have $v_{0,\mathfrak{p}^{\prime}\mathfrak{,\infty}}\circ v_{0,\mathfrak{p,p} ^{\prime}}=v_{0,\mathfrak{p,\infty}}$. Hence $v_{\mathfrak{p}^{\prime }\mathfrak{,\infty}}\circ v_{\mathfrak{p,p}^{\prime}}=v_{\mathfrak{p,\infty}} $. Thus, by the universal properties of inductive limits, we obtain an $\mathcal{M}^{\prime}$-correspondence isometry $v:E(t)\longrightarrow \mathcal{L}_{\mathcal{M}}(H,H_{t})$. We need to show that $v$ is surjective. 
To this end, observe that if $P$ and $Q$ are two normal, unital, completely positive maps on $\mathcal{M}$, then using Proposition \ref{Lemma1.9} (and applying Lemma \ref{lemma1.7}), we find that $\bigvee\{X(H)\mid X\in\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{Q}\mathcal{M}\otimes_{P}H)\}\supseteq\bigvee\{(I\otimes X)Y(H)\mid X\in\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{P}H)$, $Y\in\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{Q}H)\}=\bigvee\{(I\otimes X)(\mathcal{M}\otimes_{Q}H)\mid X\in\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{P}H)\}=\mathcal{M}\otimes_{Q}\mathcal{M}\otimes_{P}H$. The same argument, applied to more than two maps, shows that for any partition $\mathfrak{p}$ in $\mathfrak{P}(t)$, $\bigvee\{X(H)\mid X\in\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p},t})\}=H_{\mathfrak{p},t}$ and $\bigvee\{v_{\mathfrak{p,\infty}}(X)(H)\mid X\in\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p},t})\}=v_{0,\mathfrak{p,\infty}}(H_{\mathfrak{p},t})\subseteq H_{t}$. Hence, $\bigvee\{Y(H)\mid Y\in v(E(t))\}=H_{t}$. Consequently, given any $X\in\mathcal{L}_{\mathcal{M}}(H,H_{t})$ satisfying $X^{\ast}Y=0$ for all $Y\in v(E(t)),$ we have $X=0$. That is, the orthogonal complement of $v(E(t))$ in $\mathcal{L}_{\mathcal{M}}(H,H_{t})$ is zero. Since $\mathcal{L}_{\mathcal{M}}(H,H_{t})$ is self-dual, by Proposition \ref{Lemma 1.2}, we conclude that $v(E(t))=\mathcal{L}_{\mathcal{M}}(H,H_{t})$. \end{proof} If $\mathfrak{p}_{1}\in\mathfrak{P}(t)$ and $\mathfrak{p}_{2}\in\mathfrak{P}(s)$, then we shall write $\mathfrak{p}_{2}\vee\mathfrak{p}_{1}+s$ for the following partition in $\mathfrak{P}(t+s)$: \begin{multline*} \{0=s_{0}<s_{1}<\cdots<s_{m-1}<s_{m}(=s=t_{0}+s)<t_{1}+s<\\ t_{2}+s<\cdots<t_{n-1}+s<t_{n}+s=t+s\}\text{,} \end{multline*} where $\mathfrak{p}_{1}=\{0=t_{0}<t_{1}<t_{2}<\cdots<t_{n-1}<t_{n}=t\}$ and $\mathfrak{p}_{2}=\{0=s_{0}<s_{1}<s_{2}<\cdots<s_{m-1}<s_{m}=s\}$. Note the order in the definition of $\mathfrak{p}_{2}\vee\mathfrak{p}_{1}+s$.
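To make the order convention concrete, consider a small instance (a direct computation from the definition): with $t=2$, $s=1$, $\mathfrak{p}_{1}=\{0<1<2\}\in\mathfrak{P}(2)$ and $\mathfrak{p}_{2}=\{0<\frac{1}{2}<1\}\in\mathfrak{P}(1)$, we get \[ \mathfrak{p}_{2}\vee\mathfrak{p}_{1}+s=\{0<\tfrac{1}{2}<1<2<3\}\in\mathfrak{P}(3)\text{,} \] the points of $\mathfrak{p}_{2}$ occupying $[0,s]$ and the points of $\mathfrak{p}_{1}$, shifted by $s$, occupying $[s,t+s]$.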
The ``concatenation'' of partitions is \emph{not} commutative. It is designed to support the isomorphism of $E(t)\otimes E(s)$ with $E(t+s)$ that we are about to describe. \begin{lemma} \label{Lemma2.2}Let $\mathfrak{p}_{1}\in\mathfrak{P}(t)$ and $\mathfrak{p}_{2}\in\mathfrak{P}(s)$ and write $\mathfrak{p}$ for $\mathfrak{p}_{2}\vee\mathfrak{p}_{1}+s$. Then the map that sends $X\otimes Y\in\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p}_{1},t})\otimes\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p}_{2},s})$ to $(I_{s}\otimes X)Y$ in $\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p},t+s})$ extends to an isomorphism of $\mathcal{M}^{\prime}$-correspondences, where $I_{s}$ denotes the identity map on $\mathcal{M}\otimes_{P_{s_{1}}}\mathcal{M}\otimes_{P_{s_{2}-s_{1}}}\mathcal{M}\otimes\cdots\otimes_{P_{s-s_{m-1}}}\mathcal{M}$ and where $\mathfrak{p}_{2}=\{0=s_{0}<s_{1}<s_{2}<\cdots<s_{m-1}<s_{m}=s\}$. Further, this isomorphism induces a natural isomorphism of $\mathcal{M}^{\prime}$-correspondences from $E(t)\otimes E(s)$ onto $E(t+s)$. \end{lemma} \begin{proof} That the map $X\otimes Y\rightarrow(I_{s}\otimes X)Y$ induces an isomorphism of $\mathcal{M}^{\prime}$-correspondences from $\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p}_{1},t})\otimes\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p}_{2},s})$ into $\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p},t+s})$ is essentially proved in Proposition \ref{Lemma1.9}. To see that the isomorphism is surjective, simply apply Corollary \ref{Cor 1.10} (several times). To get the isomorphism from $E(t)\otimes E(s)$ onto $E(t+s)$, we appeal to the universal properties of inductive limits. Let $\mathfrak{p}_{1}$ and $\mathfrak{p}_{1}^{\prime}$ be partitions in $\mathfrak{P}(t)$, with $\mathfrak{p}_{1}^{\prime}$ finer than $\mathfrak{p}_{1}$, and let $\mathfrak{p}_{2}$ be a partition in $\mathfrak{P}(s)$. Write $\mathfrak{p}=\mathfrak{p}_{2}\vee\mathfrak{p}_{1}+s$ and $\mathfrak{p}^{\prime}=\mathfrak{p}_{2}\vee\mathfrak{p}_{1}^{\prime}+s$.
Also let $\alpha _{\mathfrak{p}_{1},\mathfrak{p}_{2}}$ be the isomorphism from $\mathcal{L} _{\mathcal{M}}(H,H_{\mathfrak{p}_{1},t})\otimes\mathcal{L}_{\mathcal{M} }(H,H_{\mathfrak{p}_{2},s})$ onto $\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p} ,t+s})$ that sends $X\otimes Y$ to $(I_{s}\otimes X)Y$, and let $\alpha _{\mathfrak{p}_{1}^{\prime},\mathfrak{p}_{2}}$ be the similarly defined isomorphism from $\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p}_{1}^{\prime} ,t})\otimes\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p}_{2},s})$ onto $\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p}^{\prime},t+s})$. Then we have the following diagram, which is easily seen to be commutative: \[ \begin{array} [c]{ccc} \mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p}_{1},t})\otimes\mathcal{L} _{\mathcal{M}}(H,H_{\mathfrak{p}_{2},s}) & \overset{\alpha_{\mathfrak{p} _{1},\mathfrak{p}_{2}}}{\longrightarrow} & \mathcal{L}_{\mathcal{M} }(H,H_{\mathfrak{p},t+s})\\% \begin{array} [c]{ccc} v_{\mathfrak{p}_{1}\mathfrak{,p}_{1}^{\prime}}\otimes I & \downarrow & \;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \end{array} & & \begin{array} [c]{ccc} \;\;\;\;\;\;\; & \downarrow & v_{\mathfrak{p,p}^{\prime}} \end{array} \\ \mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p}_{1}^{\prime},t})\otimes \mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p}_{2},s}) & \overset{\alpha _{\mathfrak{p}_{1}^{\prime},\mathfrak{p}_{2}}}{\longrightarrow} & \mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p}^{\prime},t+s}) \end{array} \] In the limit, we obtain an isometry from $E(t)\otimes\mathcal{L}_{\mathcal{M} }(H,H_{\mathfrak{p}_{2},s})$ into $E(t+s)$. A similar argument yields an isometry from $E(t)\otimes E(s)$ into $E(t+s)$. It is clear from the definition of this map that its image contains all the $\mathcal{L} _{\mathcal{M}}(H,H_{\mathfrak{p},t+s})$, where $\mathfrak{p}$ is constructed as $\mathfrak{p}_{2}\vee\mathfrak{p}_{1}+s$. (We shall view these spaces as contained in $E(t+s)$ without reference to the isomorphic embeddings.) 
For a given partition $\mathfrak{p\in P}(t+s)$, we can refine it by adding $s$ to get $\mathfrak{p}^{\prime}$, say. Then $\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p}^{\prime},t+s})$ is contained in $E(t+s)$ and contains (a copy of) $\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p},t+s})$. Hence the image contains all the $\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p},t+s})$ and so must be all of $E(t+s)$. \end{proof} \begin{remark} \label{Associativity} Given $t,\;s,\;r\in(0,\infty)$ and partitions $\mathfrak{p}_{1}\in\mathfrak{P}(t)$, $\mathfrak{p}_{2}\in\mathfrak{P}(s)$, and $\mathfrak{p}_{3}\in\mathfrak{P}(r)$, one can define an isomorphism of $\mathcal{M}^{\prime}$-correspondences between $\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p}_{1},t})\otimes\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p}_{2},s})\otimes\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p}_{3},r})$ and $\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p},t+s+r})$ in two different, but natural, ways, where $\mathfrak{p}=\mathfrak{p}_{3}\vee(\mathfrak{p}_{2}+r)\vee(\mathfrak{p}_{1}+s+r)$: In the first, we map the left hand side, $\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p}_{1},t})\otimes\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p}_{2},s})\otimes\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p}_{3},r})$, to $\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p}_{1},t})\otimes\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p}^{\prime},s+r})$, where $\mathfrak{p}^{\prime}=\mathfrak{p}_{3}\vee(\mathfrak{p}_{2}+r)$, and then to $\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p},t+s+r})$, while in the second, we map $\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p}_{1},t})\otimes\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p}_{2},s})\otimes\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p}_{3},r})$ to $\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p}^{\prime\prime},t+s})\otimes\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p}_{3},r})$, where $\mathfrak{p}^{\prime\prime}=\mathfrak{p}_{2}\vee(\mathfrak{p}_{1}+s)$ and then to $\mathcal{L}
_{\mathcal{M}}(H,H_{\mathfrak{p},t+s+r})$. These two ways of identifying $\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p}_{1},t})\otimes\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p}_{2},s})\otimes\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p}_{3},r})$ and $\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p},t+s+r})$ amount to nothing more than identifying $X_{1}\otimes X_{2}\otimes X_{3}$ with $(I_{s+r}\otimes X_{1})\circ(I_{r}\otimes X_{2})\circ X_{3}$, as we may. Passing to the limit yields the natural isomorphisms \[ (E(t)\otimes E(s))\otimes E(r)\simeq E(t)\otimes(E(s)\otimes E(r))\simeq E(t+s+r)\text{.} \] \end{remark} Our analysis to this point shows that if we set $E(0)=\mathcal{M}^{\prime}$, then $\{E(t)\}_{t\geq0}$ is a discrete product system in the sense of the following definition. \begin{definition} \label{Definition2.3a}Let $\mathcal{N}$ be a von Neumann algebra. A \emph{discrete product system} over $\mathcal{N}$ is simply a family $\{E(t)\}_{t\geq0}$ of $W^{\ast}$-correspondences over $\mathcal{N}$ such that $E(0)=\mathcal{N}$ and such that $E(t+s)\simeq E(t)\otimes E(s)$ for all $t,s\in\lbrack0,\infty)$. The particular product system that we associated with the semigroup $\{P_{t}\}_{t\geq0}$ in the preceding paragraphs will be called \emph{the} \emph{(discrete) product system of }$\mathcal{M}^{\prime}$\emph{-correspondences} associated with $\{P_{t}\}_{t\geq0}$. A (completely contractive) \emph{covariant representation} of a discrete product system $\{E(t)\}_{t\geq0}$ on a Hilbert space $H$ is simply a family $\{T_{t}\}_{t\geq0}$ of completely contractive linear maps, where $T_{t}$ maps $E(t)$ into $\mathcal{B}(H)$, such that each $T_{t}$ is continuous with respect to the $\sigma$-topology on $E(t)$ and the $\sigma$-weak topology on $\mathcal{B}(H)$, $T_{0}$ is a $\ast$-representation of $E(0)=\mathcal{N}$ on $H$, and such that $T_{t}\otimes T_{s}=T_{t+s}$ (after identifying $E(t+s)\simeq E(t)\otimes E(s)$).
\end{definition} \begin{remark} It is useful to think of product systems as semigroups and then to view covariant representations as representations of such a semigroup. However, when working with any particular product system and representation, it frequently becomes necessary to make explicit the isomorphisms between $E(t)\otimes E(s)$ and $E(t+s)$, and then, of course, the formulas involving $\{T_{t}\}_{t\geq0}$ become correspondingly more complicated. Note, too, that the definition of a covariant representation implies that $T_{t}(a\xi b)\allowbreak=T_{0}(a)T_{t}(\xi)T_{0}(b)$ for all $t\geq0$, $\xi\in E(t)$, $a,b\in\mathcal{N}$. Thus if $\{T_{t}\}_{t\geq0}$ is a covariant representation of the product system, then for each $t$, $(T_{t},T_{0})$ is a completely contractive covariant representation of $E(t)$ in the sense of Definition \ref{Definition1.12}. \end{remark} \begin{definition} \label{Definition2.3b} A covariant representation $\{T_{t}\}_{t\geq0}$ of a product system $\{E(t)\}_{t\geq0}$ is called \emph{isometric} in case for each $t$, $(T_{t},T_{0})$ is isometric in the sense of Definition \ref{Definition1.12}. It is called \emph{fully coisometric} in case for each $t$, $(T_{t},T_{0})$ is fully coisometric in the sense of Definition \ref{Definition1.12bis}. \end{definition} Our next objective is to show how a fully coisometric covariant representation of a product system $\{E(t)\}_{t\geq0}$ can be dilated to a fully coisometric and isometric representation of $\{E(t)\}_{t\geq0}$. \begin{theorem and definition} \label{Theorem2.6}Let $\{E(t)\}_{t\geq0}$ be a discrete product system over a von Neumann algebra $\mathcal{N}$ and let $\{T_{t}\}_{t\geq0}$ be a \emph{fully coisometric} covariant representation of $\{E(t)\}_{t\geq0}$ on a Hilbert space $H$.
Then there is another Hilbert space $K$, an isometry $u_{0}$ mapping $H$ into $K$, and a fully coisometric, isometric covariant representation $\{V_{t}\}_{t\geq0}$ of $\{E(t)\}_{t\geq0}$ on $K$ so that \begin{enumerate} \item $u_{0}^{\ast}V_{t}(\xi)u_{0}=T_{t}(\xi)$ for all $\xi\in E(t)$, $t\geq0;$ and \item For $\xi\in E(t)$, $t\geq0$, $V_{t}(\xi)^{\ast}$ leaves $u_{0}(H)$ invariant. \end{enumerate} The smallest subspace of $K$ containing $u_{0}(H)$ and reducing $V_{t}(\xi)$ for every $\xi\in E(t)$, $t\geq0$, is all of $K$. If $(\{V_{t}^{\prime}\}_{t\geq0},u_{0}^{\prime},K^{\prime})$ is another triple with the same properties as $(\{V_{t}\}_{t\geq0},u_{0},K)$, then there is a Hilbert space isomorphism $W$ from $K$ to $K^{\prime}$ such that $WV_{t}(\xi)W^{-1}=V_{t}^{\prime}(\xi)$ for all $\xi\in E(t)$ and $t\geq0$, and $W\circ u_{0}=u_{0}^{\prime}$. We therefore call the triple, $(\{V_{t}\}_{t\geq0},u_{0},K)$, \emph{the} \emph{minimal isometric dilation of }$\{T_{t}\}_{t\geq0}$. \end{theorem and definition} \begin{proof} For $0\leq t<s$, we write $U_{t,s}$ for the isomorphism from $E(t)\otimes E(s-t)$ to $E(s)$. Then the associativity of tensor products implemented through these isomorphisms, coupled with the identification of $E(t)\otimes E(s)\otimes E(r)$ with $E(t+s+r)$, implies that $U_{s,r}(U_{t,s}\otimes I_{r-s})=U_{t,r}$. Further, for any $t$, we write $\widetilde{T}_{t}$ for the operator from $E(t)\otimes_{T_{0}}H$ to $H$ defined by the formula $\widetilde{T}_{t}(\xi\otimes h)=T_{t}(\xi)h$. (See Lemma \ref{CovRep} and the discussion surrounding it.) For $0\leq t<s$, we define $u_{t,s}$ from $E(t)\otimes_{T_{0}}H$ to $E(s)\otimes_{T_{0}}H$ by the formula \[ u_{t,s}:=(U_{t,s}\otimes I_{H})(I_{E(t)}\otimes\widetilde{T}_{s-t}^{\ast})\text{.} \] Observe that each space $E(t)\otimes_{T_{0}}H$ is a left $\mathcal{N}$-module and that the $u_{t,s}$ are $\mathcal{N}$-module maps. We claim that each $u_{t,s}$ is an isometry.
Indeed, since $U_{t,s}$ is a Hilbert module isomorphism, $U_{t,s}\otimes I_{H}$ is a Hilbert space isomorphism, i.e., a unitary, and so \begin{align*} u_{t,s}^{\ast}u_{t,s} & =(I_{E(t)}\otimes\widetilde{T}_{s-t})(U_{t,s}\otimes I_{H})^{\ast}(U_{t,s}\otimes I_{H})(I_{E(t)}\otimes\widetilde{T}_{s-t}^{\ast})\\ & =(I_{E(t)}\otimes\widetilde{T}_{s-t})(I_{E(t)}\otimes\widetilde{T}_{s-t}^{\ast})\\ & =I_{E(t)}\otimes\widetilde{T}_{s-t}\widetilde{T}_{s-t}^{\ast}\text{.} \end{align*} However, this last term is the identity on $E(t)\otimes_{T_{0}}H$ because $\{T_{t}\}_{t\geq0}$ is assumed to be fully coisometric. Further observe that the composition properties of the $U_{t,s}$ coupled with the fact that $\{T_{t}\}_{t\geq0}$ is a covariant representation imply that for $0\leq t<s<r$, $u_{s,r}u_{t,s}=u_{t,r}$. Hence, if we agree to set $u_{t,t}$ equal to the identity on $E(t)\otimes_{T_{0}}H$ for each $t$, then $\{\{E(t)\otimes_{T_{0}}H\}_{t\geq0},\{u_{t,s}\}_{0\leq t\leq s}\}$ is an inductive system of Hilbert spaces - in fact, it is an inductive system of $\mathcal{N}$-modules and module maps. We set $K=\underrightarrow{\lim}(E(t)\otimes_{T_{0}}H,u_{t,s})$ and we write $u_{t}:E(t)\otimes_{T_{0}}H\longrightarrow K$ for the canonical embeddings. Note that the $u_{t}$'s are isometries and $\mathcal{N}$-module maps. To construct the dilation $\{V_{t}\}_{t\geq0}$, we begin by defining $V_{t}$ on the range of each $u_{s}$ by the formula \[ V_{t}(\xi)\cdot u_{s}(\eta\otimes h):=u_{t+s}(U_{t,t+s}(\xi\otimes\eta)\otimes h)\text{,} \] where $t,s\geq0$, $\xi\in E(t)$, $\eta\in E(s)$, and $h\in H$. To see that $V_{t}(\xi)$ is well-defined on the union of the ranges of the $u_{s}$, simply note that for $s_{1}>s_{2}\geq0$, $t\geq0$, $\eta\in E(s_{2})$, $\xi\in E(t)$, and $h\in H$, we have \[ V_{t}(\xi)u_{s_{1}}u_{s_{2},s_{1}}(\eta\otimes h)=V_{t}(\xi)u_{s_{2}}(\eta\otimes h)\text{.} \] The $\mathcal{N}$-module structure on $K$ is just that afforded by $V_{0}$.
That is, for $a\in\mathcal{N}$, $V_{0}(a)\cdot u_{s}(\eta\otimes h)=u_{s}(a\eta\otimes h)$, $\eta\otimes h\in E(s)\otimes_{T_{0}}H$. So, to show that every other $V_{t}$ extends to all of $K$, and yields an isometric representation of the $\mathcal{N}$-correspondence $E(t)$, we first simply compute to see that for $\xi_{1},\xi_{2}\in E(t)$, and $\eta\otimes h,\zeta\otimes k\in E(s)\otimes_{T_{0}}H$, we have \begin{multline*} \langle V_{t}(\xi_{1})^{\ast}V_{t}(\xi_{2})u_{s}(\eta\otimes h),u_{s}(\zeta\otimes k)\rangle=\langle u_{t+s}(\xi_{2}\otimes\eta\otimes h),u_{t+s}(\xi_{1}\otimes\zeta\otimes k)\rangle\\ =\langle\xi_{2}\otimes\eta\otimes h,\xi_{1}\otimes\zeta\otimes k\rangle=\langle\eta\otimes h,\langle\xi_{2},\xi_{1}\rangle\zeta\otimes k\rangle=\langle u_{s}(\eta\otimes h),V_{0}(\langle\xi_{2},\xi_{1}\rangle)u_{s}(\zeta\otimes k)\rangle\text{.} \end{multline*} This shows that, on the range of each $u_{s}$, $(V_{t},V_{0})$ is an isometric covariant representation of $E(t)$. Thus on the range of each $u_{s}$, $V_{t}(\xi)$ is a bounded operator with norm bounded by $\left\| \xi\right\| $. Hence, $V_{t}(\xi)$ extends to all of $K$ as a bounded operator. Further, this equation shows that if we denote the projection of $K$ onto the range of $u_{s}$ by $Q_{s}$, i.e., if we let $Q_{s}=u_{s}u_{s}^{\ast}$, then \[ Q_{s}(V_{t}(\xi_{2})^{\ast}V_{t}(\xi_{1}))Q_{s}=Q_{s}V_{0}(\langle\xi_{2},\xi_{1}\rangle)Q_{s}\text{.} \] Since this is so for all $s$, it follows that $V_{t}(\xi_{2})^{\ast}V_{t}(\xi_{1})=V_{0}(\langle\xi_{2},\xi_{1}\rangle)$ on all of $K$. Thus, for each $t$, $(V_{t},V_{0})$ is an isometric covariant representation of $E(t)$. To show that $\{V_{t}\}_{t\geq0}$ satisfies the semigroup property, let $t=t_{1}+t_{2}$, let $\xi_{1}\in E(t_{1})$, $\xi_{2}\in E(t_{2})$, and let $\eta\otimes h\in E(s)\otimes_{T_{0}}H$.
Then on the one hand we have \[ V_{t}(U_{t_{1},t}(\xi_{1}\otimes\xi_{2}))u_{s}(\eta\otimes h)=u_{t+s}(U_{t,t+s}(U_{t_{1},t}(\xi_{1}\otimes\xi_{2})\otimes\eta)\otimes h)\text{,} \] while on the other we have \begin{align*} V_{t_{1}}(\xi_{1})V_{t_{2}}(\xi_{2})u_{s}(\eta\otimes h) & =V_{t_{1}}(\xi_{1})u_{t_{2}+s}(U_{t_{2},t_{2}+s}(\xi_{2}\otimes\eta)\otimes h)\\ & =u_{t+s}(U_{t_{1},t+s}(\xi_{1}\otimes U_{t_{2},t_{2}+s}(\xi_{2}\otimes\eta))\otimes h)\text{.} \end{align*} By Remark \ref{Associativity}, we conclude that $V_{t}(U_{t_{1},t}(\xi_{1}\otimes\xi_{2}))=V_{t_{1}}(\xi_{1})V_{t_{2}}(\xi_{2})$. Ignoring the $U_{t,t+s}$ when identifying $E(t)\otimes E(s)$ with $E(t+s)$, we obtain the desired result: $V_{t}(\xi_{1}\otimes\xi_{2})=V_{t_{1}}(\xi_{1})V_{t_{2}}(\xi_{2})$. Next, we show that each $V_{t}$ is continuous with respect to the $\sigma$-topology on $E(t)$ and the $\sigma$-weak topology on $\mathcal{B}(K)$. For this, observe that for $\xi,\xi_{1}\in E(t)$, $\eta,\xi_{2}\in E(s)$, and $h,k\in H$, we have $\langle V_{t}(\xi)u_{s}(\eta\otimes h),u_{t+s}(\xi_{1}\otimes\xi_{2}\otimes k)\rangle=\langle u_{t+s}(\xi\otimes\eta\otimes h),u_{t+s}(\xi_{1}\otimes\xi_{2}\otimes k)\rangle=\langle h,T_{0}(\langle\xi\otimes\eta,\xi_{1}\otimes\xi_{2}\rangle)k\rangle=\langle h,T_{0}(\langle\eta,\langle\xi,\xi_{1}\rangle\xi_{2}\rangle)k\rangle$. Thus, for each $s\geq0$, the map $\xi\mapsto V_{t}(\xi)|u_{s}(E(s)\otimes_{T_{0}}H)$ has the desired continuity properties. Since the union of the ranges of the $u_{s}$ is dense and since $\left\| V_{t}(\xi)\right\| \leq\left\| \xi\right\| $, we conclude that $V_{t}$ is continuous with respect to the $\sigma$-topology on $E(t)$ and the $\sigma$-weak topology on $\mathcal{B}(K)$.
To see that $u_{0}^{\ast}V_{t}(\xi)u_{0}=T_{t}(\xi)$, i.e., to see that $\{V_{t}\}_{t\geq0}$ dilates $\{T_{t}\}_{t\geq0}$, simply note that for $h,h^{\prime}\in H$, $t>0$ and $\xi\in E(t)$, we have $\langle u_{0}^{\ast}V_{t}(\xi)u_{0}(h),h^{\prime}\rangle=\langle u_{t}(\xi\otimes h),u_{0}(h^{\prime})\rangle=\langle u_{t}(\xi\otimes h),u_{t}(u_{0,t}(h^{\prime}))\rangle=\langle\xi\otimes h,\widetilde{T}_{t}^{\ast}h^{\prime}\rangle_{E(t)\otimes H}=\langle\widetilde{T}_{t}(\xi\otimes h),h^{\prime}\rangle=\langle T_{t}(\xi)h,h^{\prime}\rangle$. To check that $V_{t}(\xi)^{\ast}$, $\xi\in E(t)$, leaves $u_{0}(H)$ invariant, first note that the computation just completed shows that for $\zeta\in E(r)$, $r\geq0$, and $h\in H$, $u_{0}^{\ast}u_{r}(\zeta\otimes h)=T_{r}(\zeta)h$. Hence, for $\xi\in E(t)$, $\eta\in E(s)$, and $h\in H$, $u_{0}^{\ast}V_{t}(\xi)u_{s}(\eta\otimes h)=u_{0}^{\ast}u_{t+s}(\xi\otimes\eta\otimes h)=T_{t+s}(\xi\otimes\eta)h=T_{t}(\xi)T_{s}(\eta)h=u_{0}^{\ast}u_{t}(\xi\otimes T_{s}(\eta)h)=u_{0}^{\ast}V_{t}(\xi)u_{0}u_{0}^{\ast}u_{s}(\eta\otimes h).$ Since this holds for all $s\geq0,$ we see that $u_{0}^{\ast}V_{t}(\xi)=u_{0}^{\ast}V_{t}(\xi)u_{0}u_{0}^{\ast}.$ Taking adjoints and multiplying the resulting equation on the right by $u_{0}^{\ast}$, we conclude that $V_{t}(\xi)^{\ast}u_{0}u_{0}^{\ast}=u_{0}u_{0}^{\ast}V_{t}(\xi)^{\ast}u_{0}u_{0}^{\ast}$ for all $\xi\in E(t)$, and $t\geq0$, which shows that $V_{t}(\xi)^{\ast}$, $\xi\in E(t)$, leaves $u_{0}(H)$ invariant. To see that $\{V_{t}\}_{t\geq0}$ is fully coisometric because $\{T_{t}\}_{t\geq0}$ is, we need to show that $\widetilde{V}_{t}$ is a coisometry for each $t$. Since $\{V_{t}\}_{t\geq0}$ is isometric, each $\widetilde{V}_{t}$ is an isometry. Hence, all we need to do is to show that the range of each $\widetilde{V}_{t}$ is dense.
For this, it suffices to show that for every $s\geq0$ the span of $\{V_{t}(\xi)u_{s}(\eta\otimes h)\mid\eta\in E(s)$, $\xi\in E(t)$, $h\in H\}$ equals $u_{t+s}(E(t+s)\otimes H)$. However, $V_{t}(\xi)u_{s}(\eta\otimes h)=u_{t+s}(\xi\otimes\eta\otimes h)$ and, since $E(t)\otimes E(s)$ is isomorphic to $E(t+s)$, we see that the range of $\widetilde{V}_{t}$ is, indeed, dense. From what we have shown so far, it is clear that the smallest subspace of $K$ that contains $u_{0}(H)$ and reduces every $V_{t}(\xi)$ is all of $K$. The uniqueness of $(\{V_{t}\}_{t\geq0},u_{0},K)$ up to unitary equivalence is proved just as in Proposition 3.2 of \cite{MS98}, and so will be omitted here. \end{proof} \begin{remark} \label{alternate}It is worthwhile pointing out that the relation $u_{0}^{\ast}V_{t}(\xi)u_{0}=T_{t}(\xi)$ in the preceding theorem is equivalent to the relation \[ u_{0}^{\ast}\widetilde{V}_{t}(I\otimes u_{0})=\widetilde{T}_{t}\text{.} \] Further, the invariance of $u_{0}(H)$ under $V_{t}(\xi)^{\ast}$ is equivalent to the equation $u_{0}u_{0}^{\ast}\widetilde{V}_{t}(I\otimes u_{0}u_{0}^{\ast})=u_{0}u_{0}^{\ast}\widetilde{V}_{t}$. These assertions are immediate from the proof. \end{remark} We return to our semigroup, $\{P_{t}\}_{t\geq0}$, of completely positive maps on the von Neumann algebra $\mathcal{M}$ and to the associated product system of $\mathcal{M}^{\prime}$-correspondences $\{E(t)\}_{t\geq0}$ that we constructed at the outset of this section. Our next objective, Theorem \ref{Lemma2.8}, is to show that there is a fully coisometric, completely contractive covariant representation $\{T_{t}\}_{t\geq0}$ of $\{E(t)\}_{t\geq0}$ on $H$ (the Hilbert space of $\mathcal{M}$) so that $\{P_{t}\}_{t\geq0}$ can be represented by the formula \[ P_{t}(S)=\widetilde{T}_{t}(I_{E(t)}\otimes S)\widetilde{T}_{t}^{\ast}\text{,} \] $S\in\mathcal{M}$, $t\geq0$.
For this purpose, recall that for a partition $\mathfrak{p}\in\mathfrak{P}(t)$, the Hilbert space $H_{\mathfrak{p},t}$ is $H_{\mathfrak{p},t}:=\mathcal{M}\otimes_{P_{t_{1}}}\mathcal{M}\otimes_{P_{t_{2}-t_{1}}}\cdots\otimes_{P_{t_{n-1}-t_{n-2}}}\mathcal{M}\otimes_{P_{t-t_{n-1}}}H$ where $\mathfrak{p}=\{0=t_{0}<t_{1}<t_{2}<\cdots<t_{n-1}<t_{n}=t\}$. The map $\iota_{\mathfrak{p}}:H\longrightarrow H_{\mathfrak{p},t}$, defined by the formula $\iota_{\mathfrak{p}}(h)=I\otimes I\otimes\cdots\otimes I\otimes h$, is easily seen to be an isometry, with adjoint $\iota_{\mathfrak{p}}^{\ast}$ given by the formula
\[
\iota_{\mathfrak{p}}^{\ast}(X_{1}\otimes X_{2}\otimes\cdots\otimes X_{n}\otimes h)=P_{t-t_{n-1}}(P_{t_{n-1}-t_{n-2}}(\cdots(P_{t_{1}}(X_{1})X_{2})\cdots X_{n-1})X_{n})h\text{.}
\]
Indeed, $\iota_{\mathfrak{p}}$ is just a generalization of the Stinespring embedding $W_{P}$ for a single completely positive map, and the formula for $\iota_{\mathfrak{p}}^{\ast}$ is an obvious extension of formula (\ref{Wpstar}). Further, it is easy to check that if $\mathfrak{p}^{\prime}$ is a refinement of $\mathfrak{p}$ in $\mathfrak{P}(t)$, then $\iota_{\mathfrak{p}}^{\ast}=\iota_{\mathfrak{p}^{\prime}}^{\ast}\circ v_{0,\mathfrak{p},\mathfrak{p}^{\prime}}$. Hence, by the universal properties of inductive limits, there is a (unique) map $\iota_{t}^{\ast}:H_{t}\;(=\underrightarrow{\lim}(H_{\mathfrak{p},t},v_{0,\mathfrak{p,p}^{\prime}}))\longrightarrow H$ so that $\iota_{t}^{\ast}v_{0,\mathfrak{p,\infty}}=\iota_{\mathfrak{p}}^{\ast}$, where, recall, $v_{0,\mathfrak{p,\infty}}:H_{\mathfrak{p},t}\longrightarrow H_{t}$ is the canonical isometric embedding associated with the directed system $(H_{\mathfrak{p},t},v_{0,\mathfrak{p,p}^{\prime}})$ and its limit, $H_{t}$. It is easy to check that $\iota_{t}^{\ast}$ is a coisometry.
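For example, when $\mathfrak{p}=\{0=t_{0}<t_{1}<t_{2}=t\}$ has a single interior point, the formula for $\iota_{\mathfrak{p}}^{\ast}$ amounts to the computation
\[
\langle\iota_{\mathfrak{p}}(h),X_{1}\otimes X_{2}\otimes k\rangle=\langle I\otimes I\otimes h,X_{1}\otimes X_{2}\otimes k\rangle=\langle h,P_{t-t_{1}}(P_{t_{1}}(X_{1})X_{2})k\rangle\text{,}
\]
which is formula (\ref{Wpstar}) iterated once; the general case follows by induction on the number of intervals of $\mathfrak{p}$.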
To define the covariant representation $\{T_{t}\}_{t\geq0}$ of $\{E(t)\}_{t\geq0}$ that we want, we recall that $E(t)$ is isomorphic to $\mathcal{L}_{\mathcal{M}}(H,H_{t})$ and we set
\begin{equation}
T_{t}(X)=\iota_{t}^{\ast}\circ X\text{,} \label{idcovrep}
\end{equation}
for $X\in\mathcal{L}_{\mathcal{M}}(H,H_{t})$.

\begin{theorem and definition}
\label{Lemma2.8}Let $\{E(t)\}_{t\geq0}$ be the discrete product system of $\mathcal{M}^{\prime}$-correspondences constructed from $\{P_{t}\}_{t\geq0}$ as above, and let $\{T_{t}\}_{t\geq0}$ be defined by equation (\ref{idcovrep}). Then $\{T_{t}\}_{t\geq0}$ is a fully coisometric, completely contractive covariant representation of $\{E(t)\}_{t\geq0}$ such that
\begin{equation}
P_{t}(S)=\widetilde{T}_{t}(I_{E(t)}\otimes S)\widetilde{T}_{t}^{\ast}\text{,}\label{implement}
\end{equation}
for all $t\geq0$ and all $S\in\mathcal{M}$. We call $\{T_{t}\}_{t\geq0}$ \emph{the identity representation of} $\{E(t)\}_{t\geq0}$.
\end{theorem and definition}

\begin{proof}
Since $T_{t}$ is given by left multiplication by an operator between Hilbert spaces of norm at most one, viz. $\iota_{t}^{\ast}$, $T_{t}$ is completely contractive. To check that $\{T_{t}\}_{t\geq0}$ is multiplicative, we identify $E(t+s)$ with $E(t)\otimes E(s)$ as above and proceed to show that under this identification, $T_{t+s}=T_{t}\otimes T_{s}$. For this purpose, let $\mathfrak{p}_{1}$ be a partition in $\mathfrak{P}(t)$ and let $\mathfrak{p}_{2}$ be a partition in $\mathfrak{P}(s)$. Also, fix $X_{1}\in\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p}_{1},t})$ and $X_{2}\in\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p}_{2},s})$.
Then the map that sends $X_{1}\otimes X_{2}$ to $(I_{s}\otimes X_{1})X_{2}$ (where $I_{s}$ denotes the identity map on $\mathcal{M}\otimes_{P_{s_{1}}}\mathcal{M}\otimes\cdots\otimes_{P_{s_{j-1}-s_{j-2}}}\mathcal{M}$, with $\mathfrak{p}_{2}=\{0=s_{0}<s_{1}<s_{2}<\cdots<s_{j-1}<s_{j}=s\}$) carries $\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p}_{1},t})\otimes\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p}_{2},s})$ to $\mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p},t+s})$, where $\mathfrak{p}=\mathfrak{p}_{2}\vee(\mathfrak{p}_{1}+s)$. To show that $T_{t+s}=T_{t}\otimes T_{s}$, it follows from equation (\ref{idcovrep}) and the properties of direct limits that we need only check that $\iota_{\mathfrak{p}}^{\ast}\circ(I_{s}\otimes X_{1})X_{2}=\iota_{\mathfrak{p}_{1}}^{\ast}X_{1}\circ\iota_{\mathfrak{p}_{2}}^{\ast}X_{2}$. For this purpose, consider the element $S_{1}\otimes S_{2}\otimes\cdots\otimes S_{j}\otimes T_{1}\otimes T_{2}\otimes\cdots\otimes T_{n}\otimes k$ in $H_{\mathfrak{p},t+s}$ (where $\mathfrak{p}$ and $\mathfrak{p}_{2}$ have just been defined and $\mathfrak{p}_{1}=\{0=t_{0}<t_{1}<t_{2}<\cdots<t_{n-1}<t_{n}=t\}$). Then
\begin{multline*}
\iota_{\mathfrak{p}}^{\ast}(S_{1}\otimes S_{2}\otimes\cdots\otimes T_{n}\otimes k)\\
=P_{t-t_{n-1}}(P_{t_{n-1}-t_{n-2}}(\cdots P_{s-s_{j-1}}(\cdots(P_{s_{1}}(S_{1})S_{2})\cdots S_{j})T_{1}\cdots T_{n-1})T_{n})k\\
=\iota_{\mathfrak{p}_{1}}^{\ast}(P_{s-s_{j-1}}(\cdots(P_{s_{1}}(S_{1})S_{2})\cdots S_{j})T_{1}\otimes T_{2}\otimes\cdots\otimes T_{n}\otimes k).
\end{multline*}
Hence, for $X_{1}(h)\in H_{\mathfrak{p}_{1},t}$, and $S_{1}$, $S_{2}$, \ldots, $S_{j}$ in $\mathcal{M}$,
\[
\iota_{\mathfrak{p}}^{\ast}(S_{1}\otimes S_{2}\otimes\cdots\otimes S_{j}\otimes X_{1}(h))=\iota_{\mathfrak{p}_{1}}^{\ast}(P_{s-s_{j-1}}(\cdots(P_{s_{1}}(S_{1})S_{2})\cdots S_{j})X_{1}(h)).
\]
Since $X_{1}$ is an $\mathcal{M}$-module map, this equation can be rewritten as
\begin{align*}
\iota_{\mathfrak{p}}^{\ast}(S_{1}\otimes S_{2}\otimes\cdots\otimes S_{j}\otimes X_{1}(h)) & =\iota_{\mathfrak{p}_{1}}^{\ast}X_{1}(P_{s-s_{j-1}}(\cdots(P_{s_{1}}(S_{1})S_{2})\cdots S_{j})h)\\
& =\iota_{\mathfrak{p}_{1}}^{\ast}X_{1}\iota_{\mathfrak{p}_{2}}^{\ast}(S_{1}\otimes S_{2}\otimes\cdots\otimes S_{j}\otimes h),
\end{align*}
as was required.

To see that each $T_{t}$ is continuous with respect to the $\sigma$-topology on $E(t)$ and the $\sigma$-weak topology on $\mathcal{B}(H)$, take $X\in\mathcal{L}_{\mathcal{M}}(H,H_{t})\;(\simeq E(t))$ and $h,h^{\prime}\in H$ and note that $\langle T_{t}(X)h,h^{\prime}\rangle=\langle\iota_{t}^{\ast}(X(h)),h^{\prime}\rangle=\langle X(h),\iota_{t}(h^{\prime})\rangle$. Since $\iota_{t}(h^{\prime})\in H_{t}=\bigvee\{Y(h)\mid Y\in\mathcal{L}_{\mathcal{M}}(H,H_{t}),\;h\in H\}$, and since for $Y\in\mathcal{L}_{\mathcal{M}}(H,H_{t})$ and $k\in H$, $\langle X(h),Y(k)\rangle=\langle h,\langle X,Y\rangle k\rangle$, the desired continuity is evident.

Finally, we must verify equation (\ref{implement}). For $t=0$, the equation is clear, so we always work with a fixed $t>0$. For $k\in H$, $\widetilde{T}_{t}^{\ast}k\in\mathcal{L}_{\mathcal{M}}(H,H_{t})\otimes_{\mathcal{M}^{\prime}}H$ and so we may write $\widetilde{T}_{t}^{\ast}k=\sum X_{i}\otimes h_{i}$, where $X_{i}\in\mathcal{L}_{\mathcal{M}}(H,H_{t})$ and $h_{i}\in H$. Then, for $Y\in\mathcal{L}_{\mathcal{M}}(H,H_{t})$ and $h\in H$, $\langle\sum X_{i}(h_{i}),Y(h)\rangle=\langle\sum Y^{\ast}X_{i}h_{i},h\rangle=\langle\sum X_{i}\otimes h_{i},Y\otimes h\rangle=\langle\widetilde{T}_{t}^{\ast}k,Y\otimes h\rangle=\langle k,\iota_{t}^{\ast}(Y(h))\rangle=\langle\iota_{t}(k),Y(h)\rangle$. Since $\bigvee\{Y(h)\mid Y\in\mathcal{L}_{\mathcal{M}}(H,H_{t}),\;h\in H\}=H_{t}$, this equation implies that $\sum X_{i}(h_{i})=\iota_{t}(k)$.
Consequently, $\widetilde{T}_{t}(I\otimes S)\widetilde{T}_{t}^{\ast}k=\widetilde{T}_{t}(I\otimes S)\sum X_{i}\otimes h_{i}=\widetilde{T}_{t}(\sum X_{i}\otimes Sh_{i})=\iota_{t}^{\ast}(\sum X_{i}(Sh_{i}))$. Since the $X_{i}$ are $\mathcal{M}$-module maps, this last expression equals $\iota_{t}^{\ast}(S\sum X_{i}(h_{i}))=\iota_{t}^{\ast}S\iota_{t}(k)$. To evaluate $\iota_{t}^{\ast}S\iota_{t}(k)$, recall that $\iota_{t}$ is the natural embedding of $H$ into $H_{t}$, when $H_{t}$ is viewed as the inductive limit of the $H_{\mathfrak{p},t}$. Using the relations among $\iota_{t}$, the $v_{0,\mathfrak{p,p}^{\prime}}$ and the $v_{0,\mathfrak{p,\infty}}$ established above, it suffices to choose a partition $\mathfrak{p}\in\mathfrak{P}(t)$, and evaluate $\iota_{\mathfrak{p}}^{\ast}S\iota_{\mathfrak{p}}(k)$. Suppose, then, that $\mathfrak{p}=\{0=t_{0}<t_{1}<t_{2}<\cdots<t_{n-1}<t_{n}=t\}$. Then $S\iota_{\mathfrak{p}}(k)=S(I\otimes I\otimes\cdots\otimes I\otimes k)=S\otimes I\otimes\cdots\otimes I\otimes k$ and $\iota_{\mathfrak{p}}^{\ast}S\iota_{\mathfrak{p}}(k)=P_{t-t_{n-1}}(P_{t_{n-1}-t_{n-2}}(\cdots(P_{t_{1}}(S)I)\cdots)I)k=P_{t}(S)k$, by the semigroup property of $\{P_{t}\}_{t\geq0}$. Thus $\iota_{\mathfrak{p}}^{\ast}S\iota_{\mathfrak{p}}(k)=P_{t}(S)k$ for all $\mathfrak{p}\in\mathfrak{P}(t)$, and so, in the limit, $\iota_{t}^{\ast}S\iota_{t}(k)=P_{t}(S)k$. Of course, setting $S=I$ in equation (\ref{implement}) shows that $\{T_{t}\}_{t\geq0}$ is fully coisometric.
\end{proof}

This result is really not special to our given Markov semigroup $\{P_{t}\}_{t\geq0}$; the next result is a converse which shows that completely contractive representations of product systems of correspondences over a von Neumann algebra $\mathcal{N}$ always define completely positive semigroups on $\mathcal{N}^{\prime}$.
\begin{theorem} \label{Converse}Let $\mathcal{N}$ be a von Neumann algebra acting on a Hilbert space $H$, let $\{E(t)\}_{t\geq0}$ be a discrete product system of $\mathcal{N}$-correspondences, and let $\{T_{t}\}_{t\geq0}$ be a completely contractive representation of $\{E(t)\}_{t\geq0}$ on $H$. For $S\in \mathcal{N}^{\prime}$ and $t\geq0$, define \[ \Theta_{t}(S)=\widetilde{T}_{t}(1_{E(t)}\otimes S)\widetilde{T}_{t}^{\ast }\text{.} \] Then $\{\Theta_{t}\}_{t\geq0}$ is a semigroup of normal, contractive, completely positive maps on $\mathcal{N}^{\prime}$. Further, $\{\Theta _{t}\}_{t\geq0}$ is unital if and only if $\{T_{t}\}_{t\geq0}$ is fully coisometric, and $\{\Theta_{t}\}_{t\geq0}$ is a semigroup of $\ast $-endomorphisms if $\{T_{t}\}_{t\geq0}$ is isometric. \end{theorem} \begin{proof} Most of the result is proved in Proposition \ref{Proposition 1.15}. We simply need to note that $T_{0}$ is a normal $\ast$-representation of $\mathcal{N}$ on $H$ and that for each $t\geq0$, $(T_{t},T_{0})$ is a completely contractive covariant representation of $E(t)$ on $H$. All that really requires attention is the fact that $\{\Theta_{t}\}_{t\geq0}$ is a semigroup, i.e., that $\Theta_{t+s}=\Theta_{t}\circ\Theta_{s}$. However, the multiplicativity of $\{T_{t}\}_{t\geq0}$ implies that for $s,t\geq0$, $\widetilde{T} _{t+s}=\widetilde{T}_{t}(I_{E(t)}\otimes\widetilde{T}_{s})$ and from this we see immediately that $\Theta_{t+s}=\Theta_{t}\circ\Theta_{s}$. \end{proof} We note in passing that if the $\Theta_{t}$ are multiplicative, then by Proposition \ref{Proposition 1.15}, the $E(t)$ decompose into the direct sum $E(t)=E(t)^{\prime}\oplus E(t)^{\prime\prime}$ so that $T_{t}|E(t)^{\prime}$ is isometric, while $T_{t}|E(t)^{\prime\prime}$ is zero. The multiplicativity of the $\Theta_{t}$ forces relations among the $q_{t}$, where $q_{t}$ is the projection of $E(t)$ onto $E(t)^{\prime}$, but we shall not dwell on these here. 
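For the record, the semigroup computation indicated in the proof of Theorem \ref{Converse}, written out, is
\begin{align*}
\Theta_{t}(\Theta_{s}(S)) & =\widetilde{T}_{t}(I_{E(t)}\otimes\widetilde{T}_{s}(I_{E(s)}\otimes S)\widetilde{T}_{s}^{\ast})\widetilde{T}_{t}^{\ast}\\
& =\widetilde{T}_{t}(I_{E(t)}\otimes\widetilde{T}_{s})(I_{E(t)}\otimes I_{E(s)}\otimes S)(I_{E(t)}\otimes\widetilde{T}_{s}^{\ast})\widetilde{T}_{t}^{\ast}\\
& =\widetilde{T}_{t+s}(I_{E(t+s)}\otimes S)\widetilde{T}_{t+s}^{\ast}=\Theta_{t+s}(S)\text{,}
\end{align*}
where the last step uses the relation $\widetilde{T}_{t+s}=\widetilde{T}_{t}(I_{E(t)}\otimes\widetilde{T}_{s})$ and the identification of $E(t)\otimes E(s)$ with $E(t+s)$.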
In the presence of Theorems \ref{Theorem2.6} and \ref{Lemma2.8}, we are able to state and prove our dilation result, which is a semigroup analogue of Theorem \ref{Theorem1.17}. \begin{theorem} \label{DiscDilat}Let $\mathcal{M}$ be a von Neumann algebra acting on the Hilbert space $H$ and let $\{P_{t}\}_{t\geq0}$ be a semigroup of normal, unital, completely positive maps on $\mathcal{M}$ such that $P_{0}$ is the identity mapping on $\mathcal{M}$. Further, let $\{E(t)\}_{t\geq0}$ be the product system of $\mathcal{M}^{\prime}$-correspondences associated with $\{P_{t}\}_{t\geq0}$, let $\{T_{t}\}_{t\geq0}$ be the identity representation of $\{E(t)\}_{t\geq0}$ on $H$ and let $(\{V_{t}\}_{t\geq0},u_{0},K)$ be the minimal isometric dilation of $\{T_{t}\}_{t\geq0}$. We write $\rho$ for $V_{0}$, thereby obtaining a normal $\ast$-homomorphism of $\mathcal{M} ^{\prime}$ into $\mathcal{B}(K)$ and we set $\mathcal{R}$ equal to $\rho(\mathcal{M}^{\prime})^{\prime}$. Then $u_{0}^{\ast}\mathcal{R} u_{0}=\mathcal{M}$, and if we define $\{\alpha_{t}\}_{t\geq0}$ by the formula \[ \alpha_{t}(S)=\widetilde{V}_{t}(I_{E(t)}\otimes S)\widetilde{V}_{t}^{\ast }\text{,} \] $S\in\mathcal{R}$, $t\geq0$, then $\{\alpha_{t}\}_{t\geq0}$ is a semigroup of unital, normal, $\ast$-endomorphisms of $\mathcal{R}$ such that for $t\geq0$, $S\in\mathcal{R}$, and $T\in\mathcal{M}$, \begin{equation} P_{t}(u_{0}^{\ast}Su_{0})=u_{0}^{\ast}\alpha_{t}(S)u_{0} \label{Pt1} \end{equation} and \begin{equation} P_{t}(T)=u_{0}^{\ast}\alpha_{t}(u_{0}Tu_{0}^{\ast})u_{0}\text{.} \label{Pt2} \end{equation} \end{theorem} \begin{proof} Since $\{V_{t}\}_{t\geq0}$ is a completely contractive covariant representation of $\{E(t)\}_{t\geq0}$ on $K$, we know that $V_{0}=\rho$ is a normal $\ast$-homomorphism of $\mathcal{M}^{\prime}$ on $K$. 
Further, from Theorem \ref{Converse}, with $\mathcal{N}=\mathcal{M}^{\prime}$, we see that $\{\alpha_{t}\}_{t\geq0}$ is a semigroup of normal $\ast$-endomorphisms of $\mathcal{R}$ ($=\rho(\mathcal{M}^{\prime})^{\prime}$) that are unital because $\{V_{t}\}_{t\geq0}$ is fully coisometric. By the definition of $V_{0}$ in the proof of Theorem \ref{Theorem2.6} we see that $\rho(a)u_{s}(\eta\otimes h)=V_{0}(a)u_{s}(\eta\otimes h)=u_{s} (a\eta\otimes h)$, for all $a\in\mathcal{M}^{\prime}$, $s\geq0$, and $\eta\otimes h\in E(s)\otimes H$. This implies that the range of each $u_{s}$ reduces $\rho(\mathcal{M}^{\prime})$ and, in particular that the restriction of $\rho(\mathcal{M}^{\prime})$ to $u_{0}(H)$ is unitarily equivalent to the identity representation of $\mathcal{M}^{\prime}$. Specifically, since $\rho(a)u_{0}(h)=u_{0}(ah)$, for all $a\in\mathcal{M}^{\prime}$, $h\in H$, $a=u_{0}^{\ast}\rho(a)u_{0}$, $a\in\mathcal{M}^{\prime}$. Thus, $\mathcal{M} ^{\prime}=u_{0}^{\ast}\rho(\mathcal{M}^{\prime})u_{0}$, so by the double commutant theorem, $\mathcal{M}=u_{0}^{\ast}\rho(\mathcal{M}^{\prime} )^{\prime}u_{0}=u_{0}^{\ast}\mathcal{R}u_{0}$. 
Moreover, from Theorem \ref{Lemma2.8} and equation (\ref{implement}), we see that for all $S\in\mathcal{R}$, \begin{align*} P_{t}(u_{0}^{\ast}Su_{0}) & =\widetilde{T}_{t}(I_{E(t)}\otimes u_{0}^{\ast }Su_{0})\widetilde{T}_{t}^{\ast}\\ & =u_{0}^{\ast}\widetilde{V}_{t}(I_{E(t)}\otimes u_{0})(I_{E(t)}\otimes u_{0}^{\ast}Su_{0})(I_{E(t)}\otimes u_{0}^{\ast})\widetilde{V}_{t}^{\ast} u_{0}\\ & =u_{0}^{\ast}\widetilde{V}_{t}(I_{E(t)}\otimes u_{0}u_{0}^{\ast}Su_{0} u_{0}^{\ast})\widetilde{V}_{t}^{\ast}u_{0}\\ & =u_{0}^{\ast}u_{0}u_{0}^{\ast}\widetilde{V}_{t}(I_{E(t)}\otimes u_{0} u_{0}^{\ast})(I\otimes S)(I_{E(t)}\otimes u_{0}u_{0}^{\ast})\widetilde{V} _{t}^{\ast}u_{0}u_{0}^{\ast}u_{0}\\ & =u_{0}^{\ast}u_{0}u_{0}^{\ast}\widetilde{V}_{t}(I\otimes S)\widetilde {V}_{t}^{\ast}u_{0}u_{0}^{\ast}u_{0}\\ & =u_{0}^{\ast}u_{0}u_{0}^{\ast}\alpha_{t}(S)u_{0}u_{0}^{\ast}u_{0}\\ & =u_{0}^{\ast}\alpha_{t}(S)u_{0} \end{align*} where the second and fifth equations are justified by Remark \ref{alternate} and where the fourth and sixth equations are justified by the fact that the final projection of $u_{0}$ is $u_{0}u_{0}^{\ast}$. Equation (\ref{Pt2}) can be verified similarly, or directly from equation (\ref{Pt1}). \end{proof} \section{Minimality and Continuity} Our goal in this section is to show that under the hypothesis of separability on the Hilbert space $H$ and the hypothesis of weak continuity on $\{P_{t}\}_{t\geq0}$ in Theorem \ref{DiscDilat}, the Hilbert space $K$ that is produced there is separable and the semigroup $\{\alpha_{t}\}_{t\geq0}$ is weakly continuous. That is, $\{\alpha_{t}\}_{t\geq0}$ will be an $E_{0} $-semigroup. 
Therefore, throughout this section, we make the blanket assumption that our underlying Hilbert space $H$ is \emph{separable} and that our semigroup of normal, unital, completely positive maps $\{P_{t}\}_{t\geq0}$ on $\mathcal{M}$ is (weakly) continuous in the sense that for all $T\in\mathcal{M}$ and vectors $h_{1},h_{2}\in H$, the function $t\rightarrow \langle P_{t}(T)h_{1},h_{2}\rangle$ is continuous. Note that since the $\sigma$-weak topology coincides with the weak topology on bounded subsets of a von Neumann algebra, our continuity assumption on $\{P_{t}\}_{t\geq0}$ is tantamount to assuming that $(\mathcal{M},\{P_{t}\}_{t\geq0})$ is a quantum Markov process. Our arguments will be broken into a series of (somewhat technical) lemmas and propositions. Basically, we will distill for our use salient features of \cite{wA97, wA97a} and \cite{dSL}. The first proposition is a generalization of some observations due to D. SeLegue in \cite[Section 2.7]{dSL} when $\mathcal{M}=\mathcal{B}(H)$. We offer somewhat different proofs. \begin{proposition} \label{Proposition3.1}\ (\cite{dSL})Under our standing assumptions on $\{P_{t}\}_{t\geq0}$, the following assertions are valid: \begin{enumerate} \item The map $t\rightarrow P_{t}(X)$ is strongly continuous for all $X\in\mathcal{M}$; i.e., for all $h\in H$, $t\rightarrow P_{t}(X)h$ is continuous from $[0,\infty)$ to $H$. \item Given a sequence $\{X_{n}\}\subseteq\mathcal{M}$ that converges in the weak operator topology to $X$, and given a sequence $\{t_{n}\}_{n=1}^{\infty }\subseteq\lbrack0,\infty)$ converging to $t$, the sequence of operators $\{P_{t_{n}}(X_{n})\}_{n=1}^{\infty}$ converges to $P_{t}(X)$ in the weak operator topology, i.e., $\{P_{t}\}_{t\geq0}$ is jointly continuous in the weak operator topology. 
\end{enumerate}
\end{proposition}

\begin{proof}
For 1., first note that it suffices to prove the assertion when $X=U$ is unitary (every element of $\mathcal{M}$ is a linear combination of four unitaries) and it suffices to show that $P_{t}(U)h\rightarrow Uh$ for every $h\in H$ as $t\rightarrow0$. Then observe that for any vector $h\in H$, $\lim_{t\rightarrow0}\left\| P_{t}(U)h\right\| =\left\| h\right\| $. For if not, then $\liminf_{t\rightarrow0}\left\| P_{t}(U)h\right\| $ is strictly less than $\left\| h\right\| $. Since $|\langle P_{t}(U)h,Uh\rangle|\leq\left\| P_{t}(U)h\right\| \left\| h\right\| $, $\liminf_{t\rightarrow0}|\langle P_{t}(U)h,Uh\rangle|$ is strictly less than $\left\| h\right\| ^{2}$, also. However, by our hypothesis on $\{P_{t}\}_{t\geq0}$, $\langle P_{t}(U)h,Uh\rangle\rightarrow\langle Uh,Uh\rangle=\left\| h\right\| ^{2}$. Thus $\lim_{t\rightarrow0}\left\| P_{t}(U)h\right\| $ must be $\left\| h\right\| $. But then we see that for all $h\in H$, $\left\| P_{t}(U)h-Uh\right\| ^{2}=\langle P_{t}(U)h-Uh,P_{t}(U)h-Uh\rangle=\left\| P_{t}(U)h\right\| ^{2}-2\operatorname{Re}\langle Uh,P_{t}(U)h\rangle+\left\| Uh\right\| ^{2}$ tends to zero, as $t\rightarrow0$, as required.

For 2., observe that the normality of $P_{t}$ means that there is a unique bounded map $\Psi_{t}$ on the predual $\mathcal{M}_{\ast}$ such that $P_{t}=\Psi_{t}^{\ast}$. The uniqueness and the fact that $\{P_{t}\}_{t\geq0}$ is a semigroup imply the same is true for $\{\Psi_{t}\}_{t\geq0}$, i.e., $\Psi_{t+s}=\Psi_{t}\Psi_{s}$. The continuity of $\{P_{t}\}_{t\geq0}$ in the weak operator topology and the fact that $\{P_{t}\}_{t\geq0}$ is uniformly bounded imply that $\omega\circ P_{t}(X)$ is continuous in $t$ for all $X\in\mathcal{M}$ and all $\omega\in\mathcal{M}_{\ast}$. If we write the pairing between $\mathcal{M}_{\ast}$ and $\mathcal{M}$ by $\langle\cdot,\cdot\rangle$, as we shall, then this means that $\langle\Psi_{t}(\omega),X\rangle$ is continuous in $t$ for all $X$; i.e., $\{\Psi_{t}\}_{t\geq0}$ is weakly continuous on $\mathcal{M}_{\ast}$.
But $\mathcal{M}_{\ast}$ is separable and so by \cite[Corollary 3.1.8]{BR79}, the semigroup $\{\Psi_{t}\}_{t\geq0}$ is strongly continuous on $\mathcal{M} _{\ast}$, i.e., for all $\omega\in\mathcal{M}_{\ast}$, $\left\| \Psi _{t}(\omega)-\Psi_{s}(\omega)\right\| \rightarrow0$, as $t\rightarrow s$. This means, in particular, that if $\omega_{h}$ is the vector state associated with the vector $h\in H$, then $\left\| \omega_{h}\circ P_{t}-\omega_{h}\circ P_{s}\right\| \rightarrow0$ as $t\rightarrow s$. So, if $\{X_{n} \}_{n=1}^{\infty}$ is a sequence in $\mathcal{M}$ that converges weakly to $X\in\mathcal{M}$, and if $t_{n}\rightarrow t$, then \begin{multline*} |\langle P_{t_{n}}(X_{n})h,h\rangle-\langle P_{t}(X)h,h\rangle|=|\omega _{h}\circ P_{t_{n}}(X_{n})-\omega_{h}\circ P_{t}(X)|\\ \leq|\omega_{h}\circ P_{t_{n}}(X_{n})-\omega_{h}\circ P_{t}(X_{n} )|+|\omega_{h}\circ P_{t}(X_{n})-\omega_{h}\circ P_{t}(X)|\\ \leq\left\| \omega_{h}\circ P_{t_{n}}-\omega_{h}\circ P_{t}\right\| \left\| X_{n}\right\| +|\langle P_{t}(X_{n}-X)h,h\rangle|\text{.} \end{multline*} Since the norms of the $X_{n}$ are uniformly bounded by the uniform boundedness principle, this inequality shows that $P_{t_{n}}(X_{n})\rightarrow P_{t}(X)$ in the weak operator topology, as required. \end{proof} \begin{proposition} \label{Theorem3.2}Under our standing separability and continuity assumptions, the Hilbert space $K$ in Theorem \ref{DiscDilat} is separable. \end{proposition} \begin{proof} Recall from the proof of Theorem \ref{Theorem2.6} that $K$ is defined to be the inductive limit $\underrightarrow{\lim}(E(t)\otimes_{T_{0}}H,u_{t,s})$. Since the sequence of spaces, $\{E(n)\otimes_{T_{0}}H\}_{n\geq0}$, is cofinal in $\{E(t)\otimes_{T_{0}}H\}_{t\geq0}$, it suffices to show each space $E(t)\otimes_{T_{0}}H$ is separable. However, each space $E(t)$ is isomorphic to $\mathcal{L}_{\mathcal{M}}(H,H_{t})$ by Lemma \ref{Lemma2.1}. 
So, if we can show that $H_{t}$ is separable, then $E(t)$ will be separable in the $\sigma$-topology (which is the same as the $\sigma$-weak topology by Proposition \ref{Lemma 1.2}). But then, of course, $E(t)\otimes_{T_{0}}H$ will be spanned by a sequence $\{X_{n}\otimes h_{m}\}_{m,n\geq0}$, where the $X_{n}$ run through a countable set that is dense in $E(t)$ in the $\sigma$-topology and the $h_{m}$ run through a countable dense set of $H$, and so $E(t)\otimes_{T_{0}}H$ will be separable. Thus we need to show $H_{t}$ is separable.

Now $H_{t}$ is, itself, an inductive limit $\underrightarrow{\lim}(H_{\mathfrak{p},t},v_{0,\mathfrak{p,p}^{\prime}})$ where $\mathfrak{p}$ and $\mathfrak{p}^{\prime}$ range over $\mathfrak{P}(t)$, and $\mathfrak{p}^{\prime}$ refines $\mathfrak{p}$. The normality of the $P_{t}$ enables one to see that each $H_{\mathfrak{p},t}$ is separable and the weak continuity of $\{P_{t}\}_{t\geq0}$ enables one to replace $\mathfrak{P}(t)$ with a countable (but not, strictly speaking, cofinal) subset. From these two observations the separability of $H_{t}$ follows. Here are the details.

To see that $H_{\mathfrak{p},t}$ is separable, first observe that $\mathcal{M}\otimes_{P_{t}}H$ is separable for any $t$. For this, it suffices to show that if $\{T_{n}\}_{n\geq0}$ is any sequence that is strongly dense in the unit ball of $\mathcal{M}$ and if $\{h_{n}\}_{n\geq0}$ is any dense sequence of vectors in $H$, then any decomposable tensor, $T\otimes h$, with $T$ in the unit ball of $\mathcal{M}$, is in the closure of $\{T_{n}\otimes h_{n}\}_{n\geq0}$. So, passing to subsequences, if necessary, assume that $T_{n}\rightarrow T$ strongly and that $h_{n}\rightarrow h$. Then $T_{n}\otimes h_{n}-T\otimes h=(T_{n}-T)\otimes h+T\otimes(h_{n}-h)+(T_{n}-T)\otimes(h_{n}-h)$. However, $\left\| (T_{n}-T)\otimes h\right\| ^{2}=\langle h,P_{t}((T_{n}-T)^{\ast}(T_{n}-T))h\rangle\rightarrow0$ because $T_{n}\rightarrow T$ strongly and $P_{t}$ is normal.
On the other hand, $\left\| T\otimes(h_{n}-h)\right\| ^{2}=\langle h_{n}-h,P_{t}(T^{\ast}T)(h_{n}-h)\rangle\rightarrow0$ because $h_{n}\rightarrow h$ in $H$. Finally, since $h_{n}\rightarrow h$ in $H$ and since $T_{n}$, $T$, and their images under $P_{t}$ are all bounded in norm by $1$, we see that $\left\| (T-T_{n})\otimes(h-h_{n})\right\| ^{2}=\langle h-h_{n},P_{t}((T_{n}-T)^{\ast}(T_{n}-T))(h-h_{n})\rangle\leq4\left\| h-h_{n}\right\| ^{2}\rightarrow0$. This shows that $T_{n}\otimes h_{n}\rightarrow T\otimes h$ as required. The separability of each $H_{\mathfrak{p},t}$ now follows by iterating this argument.

Let $\mathfrak{P}_{0}(t)$ be the collection of those partitions $\mathfrak{p}\in\mathfrak{P}(t)$ whose points lie in $t\mathbb{Q}\cap\lbrack0,t]$. Observe that $\mathfrak{P}_{0}(t)$ is countable and write $\widetilde{H}_{t}$ for the union $\cup\{v_{0,\mathfrak{p,\infty}}(H_{\mathfrak{p},t})\mid\mathfrak{p}\in\mathfrak{P}_{0}(t)\}$. Then $\widetilde{H}_{t}$ is the countable union of separable Hilbert spaces and so its closure is separable. We will show that its closure is all of $H_{t}$. For this purpose, it suffices to show that if $\mathfrak{p}$ is an arbitrary partition in $\mathfrak{P}(t)$, then $v_{0,\mathfrak{p,\infty}}(H_{\mathfrak{p},t})$ is in the closure of $\widetilde{H}_{t}$.
This, in turn, will be clear if we can show that if $\mathfrak{p}=\{0=t_{0}<t_{1}<t_{2}<\cdots<t_{n-1}<t_{n}=t\}$ and if $\{\mathfrak{p}_{m}\}_{m\geq0}=\{\{0=s(m)_{0}<s(m)_{1}<s(m)_{2}<\cdots<s(m)_{n-1}<s(m)_{n}=t\}\}_{m\geq0}$ is a sequence of partitions in $\mathfrak{P}_{0}(t)$ such that $\lim_{m\rightarrow\infty}s(m)_{k}=t_{k}$ for every $k$, then for every $n$-tuple $T_{1},T_{2},\cdots,T_{n}\in\mathcal{M}$ and every $h\in H$,
\begin{gather*}
\lim_{m\rightarrow\infty}v_{0,\mathfrak{p}_{m}\mathfrak{,\infty}}T_{1}\otimes_{P_{s(m)_{1}}}T_{2}\otimes_{P_{s(m)_{2}-s(m)_{1}}}\cdots\otimes_{P_{t-s(m)_{n-1}}}h=\\
v_{0,\mathfrak{p,\infty}}T_{1}\otimes_{P_{t_{1}}}T_{2}\otimes_{P_{t_{2}-t_{1}}}\cdots\otimes_{P_{t-t_{n-1}}}h\text{.}
\end{gather*}
To verify this equation, it suffices to assume that $s(m)_{k}=t_{k}$ for all $m$ and for all $k$ but one. So, in fact, it is enough to verify the desired limit when $\mathfrak{p}=\{0=t_{0}<t_{1}<t_{2}=t\}$ and when each $\mathfrak{p}_{m}$ is of the form $\{0=s(m)_{0}<s(m)_{1}<s(m)_{2}=t\}$, where $s(m)_{1}<t_{1}$. In this event, we have
\begin{align*}
& \left\| v_{0,\mathfrak{p}_{m}\mathfrak{,\infty}}T_{1}\otimes_{P_{s(m)_{1}}}T_{2}\otimes_{P_{t-s(m)_{1}}}h-v_{0,\mathfrak{p,\infty}}T_{1}\otimes_{P_{t_{1}}}T_{2}\otimes_{P_{t-t_{1}}}h\right\| ^{2}\\
& =\langle h,P_{t-s(m)_{1}}(T_{2}^{\ast}P_{s(m)_{1}}(T_{1}^{\ast}T_{1})T_{2})h\rangle\\
& -\langle h,P_{t-t_{1}}(P_{t_{1}-s(m)_{1}}(T_{2}^{\ast}P_{s(m)_{1}}(T_{1}^{\ast}T_{1}))T_{2})h\rangle\\
& -\langle h,P_{t-t_{1}}(T_{2}^{\ast}P_{t_{1}-s(m)_{1}}(P_{s(m)_{1}}(T_{1}^{\ast}T_{1})T_{2}))h\rangle\\
& +\langle h,P_{t-t_{1}}(T_{2}^{\ast}P_{t_{1}}(T_{1}^{\ast}T_{1})T_{2})h\rangle\text{.}
\end{align*}
A moment's reflection reveals that assertion (2) in Proposition \ref{Proposition3.1} shows that this expression tends to zero as $m\rightarrow\infty$.
\end{proof}

Our next goal is to show that $\{\alpha_{t}\}_{t\geq0}$ is a minimal dilation of $\{P_{t}\}_{t\geq0}$ in the sense of \cite{wA97a}. To explain this, recall that a projection $q\in\mathcal{R}$ is \emph{increasing} relative to $\{\alpha_{t}\}_{t\geq0}$ in case $\alpha_{t}(q)\geq q$ for all $t\geq0$, i.e., the family $\{\alpha_{t}(q)\}_{t\geq0}$ is an increasing family of projections. We will show that $u_{0}u_{0}^{\ast}$ is increasing. (Recall that from the proof of Theorem \ref{DiscDilat}, $u_{0}u_{0}^{\ast}\in\mathcal{R}$.) A projection $q\in\mathcal{R}$ is called \emph{multiplicative} in case the map $X\rightarrow q\alpha_{t}(X)q$ is multiplicative on $\mathcal{R}$ for each $t$, i.e., $q\alpha_{t}(\cdot)q$ is a (non-unital) endomorphism of $\mathcal{R}$. To say, then, that $\{\alpha_{t}\}_{t\geq0}$ is \emph{minimal} is to say that there is no multiplicative, increasing projection $q\in\mathcal{R}$, other than the identity, that dominates $u_{0}u_{0}^{\ast}$, i.e., such that $q\geq u_{0}u_{0}^{\ast}$. We note in passing that in \cite{wA97, wA97a} Arveson assumes that $\{\alpha_{t}\}_{t\geq0}$ is weakly continuous. However, this is not necessary for the definition of minimality. That is, minimality makes sense without assuming that $\{\alpha_{t}\}_{t\geq0}$ is weakly continuous. Here, minimality will be used to show that the $\{\alpha_{t}\}_{t\geq0}$ we constructed in Theorem \ref{DiscDilat} is weakly continuous.

\begin{lemma}
\label{Lemma3.3}With the notation of Section 2, we have, for all $t\geq0$,

\begin{enumerate}
\item $\widetilde{V}_{t}^{\ast}u_{0}=(I_{t}\otimes u_{0})\widetilde{T}_{t}^{\ast}$, where $I_{t}$ denotes the identity operator on $E(t)$.

\item $\bigvee\{(I_{t}\otimes X)\widetilde{T}_{t}^{\ast}h\mid X\in\mathcal{M}$, $h\in H\}=\mathcal{E}_{t}\otimes_{\mathcal{M}^{\prime}}H$.

\item $\bigvee\{(I_{t}\otimes Y)\widetilde{V}_{t}^{\ast}u_{0}h\mid Y\in\mathcal{R}$, $h\in H\}=\mathcal{E}_{t}\otimes_{\mathcal{M}^{\prime}}K$.
\item $\bigvee\{\alpha_{t}(Y)u_{0}h\mid Y\in\mathcal{R}$, $h\in H\}=\bigvee\{V_{t}(X)k\mid X\in\mathcal{E}_{t}$, $k\in K\}$.
\end{enumerate}
\end{lemma}

\begin{proof}
(1) This is an easy consequence of Remark \ref{alternate}. Indeed, from the first part of that remark, we know that $u_{0}^{\ast}\widetilde{V}_{t}(I\otimes u_{0})=\widetilde{T}_{t}$. So, $(I\otimes u_{0})^{\ast}\widetilde{V}_{t}^{\ast}u_{0}=\widetilde{T}_{t}^{\ast}$. Therefore, $(I\otimes u_{0}u_{0}^{\ast})\widetilde{V}_{t}^{\ast}u_{0}=(I\otimes u_{0})\widetilde{T}_{t}^{\ast}$. On the other hand, the second part of the remark shows that $(I\otimes u_{0}u_{0}^{\ast})\widetilde{V}_{t}^{\ast}u_{0}=(I\otimes u_{0}u_{0}^{\ast})\widetilde{V}_{t}^{\ast}(u_{0}u_{0}^{\ast})u_{0}=\widetilde{V}_{t}^{\ast}(u_{0}u_{0}^{\ast})u_{0}=\widetilde{V}_{t}^{\ast}u_{0}$, so that $\widetilde{V}_{t}^{\ast}u_{0}=(I_{t}\otimes u_{0})\widetilde{T}_{t}^{\ast}$.

(2) Here, $\mathcal{E}_{t}$ is embedded as a subspace of $E(t)$ and so $\mathcal{E}_{t}\otimes H$ is contained in $E(t)\otimes H$. To show the desired equality, first note that if $h\in H$ and if $\mathfrak{p}\in\mathfrak{P}(t)$, then $\iota_{\mathfrak{p}}(h)=I\otimes I\otimes\cdots\otimes I\otimes h$. (See the discussion after Remark \ref{alternate}.) If we let $\mathfrak{p}_{0}=\{0=t_{0}<t_{1}=t\}$, then in the notation developed between Remark \ref{alternate} and Theorem \ref{Lemma2.8}, we have $v_{0,\mathfrak{p}_{0},\mathfrak{p}}(I\otimes h)=\iota_{\mathfrak{p}}(h)$. So, for $S\in\mathcal{M}$, $S\iota_{\mathfrak{p}}(h)=S\otimes I\otimes\cdots\otimes I\otimes h=v_{0,\mathfrak{p}_{0},\mathfrak{p}}(S\otimes h)$. By identifying $\mathcal{M}\otimes_{P_{t}}H$ with a subspace of $H_{\mathfrak{p},t}$, we write $S\iota_{\mathfrak{p}}(h)\in\mathcal{M}\otimes_{P_{t}}H$, $S\in\mathcal{M}$.
Since this holds for all partitions $\mathfrak{p}\in\mathfrak{P}(t)$, we have
\begin{equation}
S\iota_{t}(h)\in\mathcal{M}\otimes_{P_{t}}H,\label{(i)}
\end{equation}
for all $S\in\mathcal{M}$ and $t\geq0$. Now fix an element of $E(t)\otimes H$ that is orthogonal to $\mathcal{E}_{t}\otimes H$ and write it as $\sum X_{i}\otimes h_{i}$, $X_{i}\in E(t)$, $h_{i}\in H$. Then for every $X\in\mathcal{E}_{t}$ and $k\in H$, $0=\langle X\otimes k,\sum X_{i}\otimes h_{i}\rangle=\sum\langle k,X^{\ast}X_{i}h_{i}\rangle=\langle X(k),\sum X_{i}(h_{i})\rangle$. Since $\bigvee\{X(h)\mid X\in\mathcal{E}_{t}=\mathcal{L}_{\mathcal{M}}(H,\mathcal{M}\otimes_{P_{t}}H),\;h\in H\}=\mathcal{M}\otimes_{P_{t}}H$, by Lemma \ref{lemma1.7}, we see that
\begin{equation}
\sum X_{i}(h_{i})\in(\mathcal{M}\otimes_{P_{t}}H)^{\perp}\text{.}\label{(ii)}
\end{equation}
However, we have just shown above that $\mathcal{M}\iota_{t}(H)\subseteq\mathcal{M}\otimes_{P_{t}}H$. Hence, for $S\in\mathcal{M}$ and $h\in H$,
\begin{multline*}
\langle(I\otimes S)\widetilde{T}_{t}^{\ast}h,\sum X_{i}\otimes h_{i}\rangle=\langle\widetilde{T}_{t}^{\ast}h,\sum X_{i}\otimes S^{\ast}h_{i}\rangle\\
=\langle h,\iota_{t}^{\ast}(\sum X_{i}(S^{\ast}h_{i}))\rangle=\langle h,\iota_{t}^{\ast}(S^{\ast}\sum X_{i}(h_{i}))\rangle\\
=\langle S\iota_{t}(h),\sum X_{i}(h_{i})\rangle=0\text{,}
\end{multline*}
where the last equation follows from (\ref{(i)}) and (\ref{(ii)}), and the preceding one follows from the fact that the $X_{i}$ are $\mathcal{M}$-module maps, i.e., they commute with elements of $\mathcal{M}$. This equation thus shows that $[(I\otimes\mathcal{M})\widetilde{T}_{t}^{\ast}H]$ is contained in $\mathcal{E}_{t}\otimes H$. For the reverse containment, fix an element $\sum X_{i}\otimes h_{i}$, $X_{i}\in\mathcal{E}_{t}$, $h_{i}\in H$, that is in $\mathcal{E}_{t}\otimes H$ but orthogonal to $[(I\otimes\mathcal{M})\widetilde{T}_{t}^{\ast}H]$.
Then for every $S\in\mathcal{M}$ and $h\in H$, the last equation shows that $0=\langle(I\otimes S)\widetilde{T}_{t}^{\ast}h,\sum X_{i}\otimes h_{i}\rangle=\langle S\iota_{t}(h),\sum X_{i}(h_{i})\rangle$. This shows that $\sum X_{i}(h_{i})$ is orthogonal to $\mathcal{M}\otimes_{P_{t}}H$. Since $X_{i}\in\mathcal{E}_{t}$, $X_{i}(h_{i})\in\mathcal{M}\otimes_{P_{t}}H$ for all $i$ and, so, $\sum X_{i}(h_{i})=0$. However, $\langle\sum X_{i}\otimes h_{i},\sum X_{j}\otimes h_{j}\rangle=\sum_{i,j}\langle h_{i},X_{i}^{\ast}X_{j}h_{j}\rangle=\sum_{i,j}\langle X_{i}(h_{i}),X_{j}(h_{j})\rangle=\left\| \sum X_{i}(h_{i})\right\| ^{2}=0$, and so $\sum X_{i}\otimes h_{i}=0$ as was to be proved.

(3) From (1) we may write
\[
(I\otimes\mathcal{R})\widetilde{V}_{t}^{\ast}u_{0}H=(I\otimes\mathcal{R})(I\otimes u_{0})\widetilde{T}_{t}^{\ast}H=(I\otimes\mathcal{R}u_{0})\widetilde{T}_{t}^{\ast}H\text{.}
\]
As we noted in the proof of Theorem \ref{DiscDilat}, $u_{0}u_{0}^{\ast}$ lies in $\mathcal{R}$. Consequently, $\mathcal{R}u_{0}=\mathcal{R}u_{0}u_{0}^{\ast}\mathcal{R}u_{0}=\mathcal{R}u_{0}\mathcal{M}$. So, using (2), we conclude that
\begin{align*}
\lbrack(I\otimes\mathcal{R})\widetilde{V}_{t}^{\ast}u_{0}H]  & =[(I\otimes\mathcal{R}u_{0})(I\otimes\mathcal{M})\widetilde{T}_{t}^{\ast}H]\\
& =[(I\otimes\mathcal{R}u_{0})(\mathcal{E}_{t}\otimes H)]=\mathcal{E}_{t}\otimes\lbrack\mathcal{R}u_{0}H]\text{.}
\end{align*}
Now let $p$ be the projection of $K$ onto $[\mathcal{R}u_{0}H]$. Then $p\in\mathcal{R}^{\prime}=\rho(\mathcal{M}^{\prime})^{\prime\prime}=\rho(\mathcal{M}^{\prime})$; i.e., $p=\rho(p_{0})$ for some projection $p_{0}\in\mathcal{M}^{\prime}$. However, $pK$ contains $u_{0}H$ and $\rho(p_{0})$ acts on $u_{0}H$ by $\rho(p_{0})u_{0}h=u_{0}p_{0}h$. Hence $p_{0}=I$ and so $p=I$. Thus $[\mathcal{R}u_{0}H]=K$ and so $[(I\otimes\mathcal{R})\widetilde{V}_{t}^{\ast}u_{0}H]=\mathcal{E}_{t}\otimes K$.
(4) The last assertion follows from the previous one and the definition of $\alpha_{t}$:
\[
\lbrack\alpha_{t}(\mathcal{R})u_{0}H]=[\widetilde{V}_{t}(I\otimes\mathcal{R})\widetilde{V}_{t}^{\ast}u_{0}H]=[\widetilde{V}_{t}(\mathcal{E}_{t}\otimes K)]=[V_{t}(\mathcal{E}_{t})K]\text{.}
\]
\end{proof}

\begin{lemma}
\label{Lemma3.4}Let $q_{t}$ be the projection from $K$ onto $[\alpha_{t}(\mathcal{R})u_{0}H]$. Then $q_{t}$ lies in $\mathcal{R}$ and $q_{t}\alpha_{t}(q_{s})$ is the projection onto $[V_{t+s}(\mathcal{E}_{t}\otimes\mathcal{E}_{s})K]$.
\end{lemma}

\begin{proof}
The previous lemma shows that $q_{t}$ is the projection of $K$ onto $[V_{t}(\mathcal{E}_{t})K]$. Thus $q_{t}$ lies in $\mathcal{R}=\rho(\mathcal{M}^{\prime})^{\prime}$. Also, the range of $q_{t}$, which is $[\alpha_{t}(\mathcal{R})u_{0}H]$, is clearly invariant under $\alpha_{t}(\mathcal{R})$; i.e., $q_{t}\in\alpha_{t}(\mathcal{R})^{\prime}$. Thus, in particular, $q_{t}$ commutes with $\alpha_{t}(q_{s})$; so we see that $q_{t}\alpha_{t}(q_{s})$ is a projection. We need to show that $q_{t}\alpha_{t}(q_{s})$ has range $[V_{t+s}(\mathcal{E}_{t}\otimes\mathcal{E}_{s})K]$. For this purpose, observe that the range of $\alpha_{t}(q_{s})$ is $\widetilde{V}_{t}(I\otimes q_{s})\widetilde{V}_{t}^{\ast}K=\widetilde{V}_{t}(I\otimes q_{s})(E(t)\otimes K)=\widetilde{V}_{t}(E(t)\otimes q_{s}(K))=\widetilde{V}_{t}(E(t)\otimes\lbrack V_{s}(\mathcal{E}_{s})K])=[V_{t}(E(t))V_{s}(\mathcal{E}_{s})K]$. Clearly, $[V_{t}(\mathcal{E}_{t})V_{s}(\mathcal{E}_{s})K]\subseteq\lbrack V_{t}(E(t))V_{s}(\mathcal{E}_{s})K]\cap\lbrack V_{t}(\mathcal{E}_{t})K]$. We claim that, in fact, the two subspaces coincide. To see this, let $w$ be the isometric embedding of $\mathcal{M}\otimes_{P_{t}}H$ into $H_{t}$, view $\mathcal{M}\otimes_{P_{t}}H$ as a subspace of $H_{t}$ and view $\mathcal{E}_{t}$ as a subspace of $E(t)$ (i.e., omit reference to the canonical embeddings).
Also, identify $\mathcal{L}_{\mathcal{M}}(H,H_{t})$ with $E(t)$ and $\mathcal{L} _{\mathcal{M}}(H,\mathcal{M}\otimes_{P_{t}}H)$ with $\mathcal{E}_{t}$, as we have throughout this paper. Then the map $p$ on $E(t)=\mathcal{L} _{\mathcal{M}}(H,H_{t})$ defined by the formula $p(X)=ww^{\ast}\circ X$, $X\in E(t)$, is a projection in $\mathcal{L}(E(t))$ with range $\mathcal{E}_{t}$. For elements $V_{t}(X_{i})V_{s}(Y_{i})k_{i}$, $i=1,2$, in $[V_{t} (E(t))V_{s}(\mathcal{E}_{s})K]$, we have \begin{align*} \langle V_{t}(X_{1})V_{s}(Y_{1})k_{1},V_{t}(pX_{2})V_{s}(Y_{2})k_{2}\rangle & =\langle\widetilde{V}_{t}(X_{1}\otimes V_{s}(Y_{1})k_{1}),\widetilde{V} _{t}(p(X_{2})\otimes V_{s}(Y_{2})k_{2})\rangle\\ & =\langle X_{1}\otimes V_{s}(Y_{1})k_{1},p(X_{2})\otimes V_{s}(Y_{2} )k_{2}\rangle\\ & =\langle V_{s}(Y_{1})k_{1},\rho(X_{1}^{\ast}\circ ww^{\ast}\circ X_{2})V_{s}(Y_{2})k_{2}\rangle\\ & =\langle V_{s}(Y_{1})k_{1},\rho(p(X_{1})^{\ast}X_{2})V_{s}(Y_{2} )k_{2}\rangle\\ & =\langle p(X_{1})\otimes V_{s}(Y_{1})k_{1},X_{2}\otimes V_{s}(Y_{2} )k_{2}\rangle\\ & =\langle V_{t}(p(X_{1}))V_{s}(Y_{1})k_{1},V_{t}(X_{2})V_{s}(Y_{2} )k_{2}\rangle\text{.} \end{align*} Thus, the map $V_{t}(X)V_{s}(Y)k\rightarrow V_{t}(p(X))V_{s}(Y)k$ is selfadjoint and, therefore, is the orthogonal projection from $[V_{t} (E(t))V_{s}(\mathcal{E}_{s})K]$ onto $[V_{t}(\mathcal{E}_{t})V_{s} (\mathcal{E}_{s})K]$. Write $q$ for this projection. A similar computation shows that the projection from $[V_{t}(E(t))K]$ onto $[V_{t}(\mathcal{E} _{t})K]$ is given by the formula $V_{t}(X)k\rightarrow V_{t}(p(X))k$. However, this projection is just the restriction of $q_{t}$ to $[V_{t}(E(t))K]$. We can then restrict $q_{t}$ further to $[V_{t}(E(t))V_{s}(\mathcal{E}_{s})K]$ and then the restricted image will clearly be $[V_{t}(E(t))V_{s}(\mathcal{E} _{s})K]\cap\lbrack V_{t}(\mathcal{E}_{t})K]$ (because $\alpha_{t}(q_{s})$ commutes with $q_{t}$). 
Also, by the definition of $q$, this restriction of $q_{t}$ is just $q$ and so its image is $[V_{t}(\mathcal{E}_{t})V_{s} (\mathcal{E}_{s})K]$. Thus the range of $q_{t}\alpha_{t}(q_{s})$ is $[V_{t}(\mathcal{E}_{t})V_{s}(\mathcal{E}_{s})K]=[V_{t+s}(\mathcal{E} _{t}\otimes\mathcal{E}_{s})K]$. \end{proof} Now let $\mathfrak{p}=\{0=t_{0}<t_{1}<t_{2}<\cdots<t_{n-1}<t_{n}=t\}$ be a partition in $\mathfrak{P}(t)$, write $q_{s}$ for the projection onto $[\alpha_{s}(\mathcal{R})u_{0}H]$, as in the last lemma, and set \[ q_{\mathfrak{p},t}:=q_{t-t_{n-1}}\alpha_{t-t_{n-1}}(q_{t_{n-1}-t_{n-2}} )\cdots\alpha_{t_{2}-t_{1}}(q_{t_{1}})\text{.} \] Repeated use of the last lemma shows that $q_{\mathfrak{p},t}(K)=[V_{t} (\mathcal{E}_{t-t_{n-1}}\otimes\mathcal{E}_{t_{n-1}-t_{n-2}}\otimes \cdots\otimes\mathcal{E}_{t_{1}})K]=[V_{t}(\mathcal{L}_{\mathcal{M} }(H,H_{\mathfrak{p},t}))K]$. Thus, it is clear that the $q_{\mathfrak{p},t}$ increase as the partitions $\mathfrak{p}$ are refined and since $E(t)=\lim \mathcal{L}_{\mathcal{M}}(H,H_{\mathfrak{p},t})$, we see that they converge strongly to the projection onto $[V_{t}(E(t))K]$; call it $\overline{q}_{t}$. However, since $\widetilde{V}_{t}$ is a coisometry, we see that $K=[\widetilde {V}_{t}(E(t)\otimes K)]=[V_{t}(E(t))K]$. Thus, $\overline{q}_{t}=I$, $t\geq0$. Observe that $\overline{q}_{t}$ is the same projection defined by Arveson in Section 3 of \cite{wA97}. He uses a slightly different indexing scheme for the partitions that enter into his $q_{\mathfrak{p},t}$, but a moment's reflection reveals that his $q_{\mathfrak{p},t}$ are the same as ours. \begin{proposition} \label{Proposition3.4}The semigroup of endomorphisms, $\{\alpha_{t}\}_{t\geq 0}$, of $\mathcal{R}$ is minimal. 
\end{proposition}

\begin{proof}
As Arveson indicates in \S3 of \cite{wA97} (see page 575, in particular), since we have shown that the projections $\overline{q}_{t}$ are all equal to $I$, it remains to show that $\alpha_{t}(u_{0}u_{0}^{\ast})\rightarrow I$, as $t\rightarrow\infty$. However, for each $t\geq0$, $\alpha_{t}(u_{0}u_{0}^{\ast})$ is the projection onto $[\widetilde{V}_{t}(I\otimes u_{0}u_{0}^{\ast})\widetilde{V}_{t}^{\ast}K]=[\widetilde{V}_{t}((I\otimes u_{0}u_{0}^{\ast})(E(t)\otimes K))]=[\widetilde{V}_{t}(E(t)\otimes u_{0}u_{0}^{\ast}K)]=[V_{t}(E(t))u_{0}H]=u_{t}(E(t)\otimes H)$, where, recall, $u_{t}$ is the embedding of $E(t)\otimes H$ into $K$. Since the spaces $u_{t}(E(t)\otimes H)$ are nested and have span equal to $K$, we conclude that the projections $\alpha_{t}(u_{0}u_{0}^{\ast})$ increase to $I$.
\end{proof}

Let $p_{+}$ be the projection of $K$ onto the span
\[
\bigvee\{\alpha_{t_{1}}(u_{0}a_{1}u_{0}^{\ast})\alpha_{t_{2}}(u_{0}a_{2}u_{0}^{\ast})\cdots\alpha_{t_{n}}(u_{0}a_{n}u_{0}^{\ast})u_{0}h\mid a_{i}\in\mathcal{M},h\in H,\text{and }t_{i}\geq0\}
\]
and let $\mathcal{R}_{+}$ be the von Neumann algebra generated by $\{\alpha_{t}(u_{0}u_{0}^{\ast}\mathcal{R}u_{0}u_{0}^{\ast})\mid t\geq0\}$. Then, as Arveson shows in Proposition 3.14 of \cite{wA97a}, $p_{+}$ is the largest projection in the center of $\mathcal{R}_{+}$ that dominates $u_{0}u_{0}^{\ast}$ and, as he shows in Theorem B of \cite{wA97a}, because $\{\alpha_{t}\}_{t\geq0}$ is minimal, $p_{+}=I$. (Note: In the proof of \cite[Theorem B]{wA97a}, Arveson assumes that $\mathcal{R}$ is a factor. However, this assumption is not necessary for the implications spelled out there that we have used.)
Thus we have \begin{corollary} \label{Corollary3.5}The von Neumann algebra $\mathcal{R}$ is generated by $\{\alpha_{t}(u_{0}u_{0}^{\ast}\mathcal{R}u_{0}u_{0}^{\ast})\mid t\geq0\}$ and $K$ is the span $\bigvee\{\alpha_{t_{1}}(u_{0}a_{1}u_{0}^{\ast})\alpha_{t_{2} }(u_{0}a_{2}u_{0}^{\ast})\cdots\alpha_{t_{n}}(u_{0}a_{n}u_{0}^{\ast} )u_{0}h\mid a_{i}\in\mathcal{M}$, $h\in H$, and $t_{i}\geq0\}$. \end{corollary} We let $\mathcal{A}$ denote the $C^{\ast}$-algebra generated by $\{\alpha _{t}(u_{0}u_{0}^{\ast}\mathcal{R}u_{0}u_{0}^{\ast})\mid t\geq0\}$. Then $\mathcal{A}$ is a translation invariant $C^{\ast}$-subalgebra of $\mathcal{R}$ that generates $\mathcal{R}$ as a von Neumann algebra. To show that $\{\alpha_{t}\}_{t\geq0}$ is weakly continuous on $\mathcal{R}$ we show first that it is weakly continuous on $\mathcal{A}$ and then promote the weak continuity there to all of $\mathcal{R}$. For this purpose, we begin with the following result proved by SeLegue \cite{dSL} in the context when $\mathcal{M}=\mathcal{B}(H)$. Our proof is somewhat different. \begin{proposition} \label{Proposition3.6}(\cite[Proposition 2.27]{dSL}) For every $T\in \mathcal{M}$, $\alpha_{t}(u_{0}Tu_{0}^{\ast})\rightarrow u_{0}Tu_{0}^{\ast}$ in the strong operator topology as $t\rightarrow0+$. 
\end{proposition}

\begin{proof}
Fix $T\in\mathcal{M}$ and $k\in K$ and then
\begin{align*}
\left\| (\alpha_{t}(u_{0}Tu_{0}^{\ast})-u_{0}Tu_{0}^{\ast})k\right\| ^{2}  & =\langle(\alpha_{t}(u_{0}T^{\ast}u_{0}^{\ast})-u_{0}T^{\ast}u_{0}^{\ast})(\alpha_{t}(u_{0}Tu_{0}^{\ast})-u_{0}Tu_{0}^{\ast})k,k\rangle\\
& =\langle\alpha_{t}(u_{0}T^{\ast}Tu_{0}^{\ast})k,k\rangle-\langle\alpha_{t}(u_{0}T^{\ast}u_{0}^{\ast})u_{0}Tu_{0}^{\ast}k,k\rangle\\
& \quad-\langle u_{0}T^{\ast}u_{0}^{\ast}\alpha_{t}(u_{0}Tu_{0}^{\ast})k,k\rangle+\langle u_{0}T^{\ast}Tu_{0}^{\ast}k,k\rangle
\end{align*}
to realize that it suffices to show that $\alpha_{t}(u_{0}Tu_{0}^{\ast})\rightarrow u_{0}Tu_{0}^{\ast}$ in the weak operator topology as $t\rightarrow0+$ for every $T\in\mathcal{M}$. However, since we have shown that $\{\alpha_{t}\}_{t\geq0}$ is minimal and since $\{\alpha_{t}\}_{t\geq0}$ is uniformly bounded, we may apply Corollary \ref{Corollary3.5} to assert that it suffices to show that
\[
\langle\alpha_{t}(u_{0}Tu_{0}^{\ast})k_{1},k_{2}\rangle\rightarrow\langle u_{0}Tu_{0}^{\ast}k_{1},k_{2}\rangle
\]
for all $T\in\mathcal{M}$ and all vectors $k_{i}$ of the form $\alpha_{t_{1}}(u_{0}a_{1}u_{0}^{\ast})\alpha_{t_{2}}(u_{0}a_{2}u_{0}^{\ast})\cdots\allowbreak\alpha_{t_{n}}(u_{0}a_{n}u_{0}^{\ast})\allowbreak u_{0}h$, $h\in H$, $a_{i}\in\mathcal{M}$, and $t_{i}\geq0$. Note, too, that if any $t_{j}=0$, then $\alpha_{t_{j}}(u_{0}a_{j}u_{0}^{\ast})\alpha_{t_{j+1}}(u_{0}a_{j+1}u_{0}^{\ast})\cdots\allowbreak\alpha_{t_{n}}(u_{0}a_{n}u_{0}^{\ast})u_{0}h$ lies in $H$ and so we may assume for the discussion that every $t_{i}>0$. Also, let $t$ be the minimal number among the $t_{i}$'s. Then we may write $\alpha_{t_{1}}(u_{0}a_{1}u_{0}^{\ast})\alpha_{t_{2}}(u_{0}a_{2}u_{0}^{\ast})\cdots\alpha_{t_{n}}(u_{0}a_{n}u_{0}^{\ast})u_{0}h$ as $\alpha_{t}(\cdots)u_{0}h$.
That is, $\alpha_{t_{1}}(u_{0}a_{1}u_{0}^{\ast})\alpha_{t_{2}}(u_{0}a_{2}u_{0}^{\ast})\cdots\alpha_{t_{n}}(u_{0}a_{n}u_{0}^{\ast})u_{0}h$ is in the cyclic subspace $[\alpha_{t}(\mathcal{R})u_{0}H]$. Thus, we may assume that $k_{1}=\alpha_{r}(R)u_{0}h_{1}$ and that $k_{2}=\alpha_{s}(L)u_{0}h_{2}$, where $R$ and $L$ are in $\mathcal{R}$, the $h_{i}$ are in $H$ and $r,s>0$. We need to show that for $T\in\mathcal{M}$,
\[
\langle\alpha_{t}(u_{0}Tu_{0}^{\ast})\alpha_{r}(R)u_{0}h_{1},\alpha_{s}(L)u_{0}h_{2}\rangle\rightarrow\langle u_{0}Tu_{0}^{\ast}\alpha_{r}(R)u_{0}h_{1},\alpha_{s}(L)u_{0}h_{2}\rangle
\]
as $t\rightarrow0+$. For this, we may assume at the outset that the $t$'s under consideration are all less than $r$ and $s$. Further, since $\alpha_{t}(u_{0}u_{0}^{\ast})\geq u_{0}u_{0}^{\ast}$ as we saw in the proof of Theorem \ref{DiscDilat}, we find that
\begin{multline*}
\langle\alpha_{t}(u_{0}Tu_{0}^{\ast})\alpha_{r}(R)u_{0}h_{1},\alpha_{s}(L)u_{0}h_{2}\rangle=\langle\alpha_{t}(\alpha_{s-t}(L^{\ast})u_{0}Tu_{0}^{\ast}\alpha_{r-t}(R))u_{0}h_{1},u_{0}h_{2}\rangle\\
=\langle\alpha_{t}(\alpha_{s-t}(L^{\ast})u_{0}Tu_{0}^{\ast}\alpha_{r-t}(R))\alpha_{t}(u_{0}u_{0}^{\ast})u_{0}h_{1},\alpha_{t}(u_{0}u_{0}^{\ast})u_{0}h_{2}\rangle\\
=\langle\alpha_{t}(u_{0}u_{0}^{\ast}\alpha_{s-t}(L^{\ast})u_{0}Tu_{0}^{\ast}\alpha_{r-t}(R)u_{0}u_{0}^{\ast})u_{0}h_{1},u_{0}h_{2}\rangle\\
=\langle\alpha_{t}(u_{0}P_{s-t}(u_{0}^{\ast}L^{\ast}u_{0})u_{0}^{\ast}u_{0}Tu_{0}^{\ast}u_{0}P_{r-t}(u_{0}^{\ast}Ru_{0})u_{0}^{\ast})u_{0}h_{1},u_{0}h_{2}\rangle\\
=\langle P_{t}(P_{s-t}(u_{0}^{\ast}L^{\ast}u_{0})TP_{r-t}(u_{0}^{\ast}Ru_{0}))h_{1},h_{2}\rangle\text{.}
\end{multline*}
By the first assertion in Proposition \ref{Proposition3.1}, the functions $t\rightarrow P_{s-t}(u_{0}^{\ast}L^{\ast}u_{0})$ and $t\rightarrow P_{r-t}(u_{0}^{\ast}Ru_{0})$ are strongly continuous.
Consequently, the function $t\rightarrow P_{s-t}(u_{0}^{\ast}L^{\ast}u_{0})TP_{r-t}(u_{0}^{\ast }Ru_{0})$ is weakly continuous. Therefore, applying the second assertion in Proposition \ref{Proposition3.1}, we see that \[ \langle P_{t}(P_{s-t}(u_{0}^{\ast}L^{\ast}u_{0})TP_{r-t}(u_{0}^{\ast} Ru_{0}))h_{1},h_{2}\rangle\rightarrow\langle u_{0}Tu_{0}^{\ast}\alpha _{r}(R)u_{0}h_{1},\alpha_{s}(L)u_{0}h_{2}\rangle, \] completing the proof. \end{proof} Let $\mathcal{R}_{0}:=\{R\in\mathcal{R}\mid\lim_{t\rightarrow0+}\alpha _{t}(R)=R$ in the strong operator topology$\}$. Then, since $\{\alpha _{t}\}_{t\geq0}$ is a semigroup of $\ast$-endomorphisms of $\mathcal{R}$, $\mathcal{R}_{0}$ is easily seen to be a $\ast$-subalgebra of $\mathcal{R}$. In fact, since $\left\| \alpha_{t}(R)k-Rk\right\| \leq\left\| \alpha _{t}(S)k-Sk\right\| +\left\| \alpha_{t}(R-S)k-(R-S)k\right\| \leq2\left\| R-S\right\| \left\| k\right\| +\left\| \alpha_{t}(S)k-Sk\right\| $, we see that any $R$ in the norm closure of $\mathcal{R}_{0}$ already is in $\mathcal{R}_{0}$. Thus, $\mathcal{R}_{0}$ is a $C^{\ast}$-algebra. This $C^{\ast}$-algebra contains $u_{0}\mathcal{M}u_{0}^{\ast}$ by the preceding proposition. But also, since each $\alpha_{r}$ is a normal $\ast$-endomorphism of $\mathcal{R}$, we see that $\mathcal{R}_{0}$ contains $\alpha_{r} (u_{0}\mathcal{M}u_{0}^{\ast})$ for all $r\geq0$. Indeed, the proposition shows that for each $T\in\mathcal{M}$, $\alpha_{t}(u_{0}Tu_{0}^{\ast })\rightarrow u_{0}Tu_{0}^{\ast}$ strongly as $t\rightarrow0+$. Therefore, since $\alpha_{r}$ is normal, $\alpha_{r}(\alpha_{t}(u_{0}Tu_{0}^{\ast }))\rightarrow\alpha_{r}(u_{0}Tu_{0}^{\ast})$ strongly, as $t\rightarrow0+$. Since $\alpha_{r}(\alpha_{t}(u_{0}Tu_{0}^{\ast}))=\alpha_{t}(\alpha_{r} (u_{0}Tu_{0}^{\ast}))$, we see that $\alpha_{t}(\alpha_{r}(u_{0}Tu_{0}^{\ast }))\rightarrow\alpha_{r}(u_{0}Tu_{0}^{\ast})$ strongly, as $t\rightarrow0+$. 
Thus, $\mathcal{R}_{0}\supseteq\mathcal{A}$ and so $\mathcal{R}_{0}$ is weakly dense in $\mathcal{R}$ by Corollary \ref{Corollary3.5}. We are therefore well on our way to showing that $\mathcal{R}_{0}=\mathcal{R}$. For this, we need the following lemma. \begin{lemma} \label{lemma3.7}The $\alpha_{t}$, for \emph{strictly positive} $t$, are jointly faithful on $\mathcal{R}$, i.e., \[ \cap\{\ker(\alpha_{t})\mid t>0\}=\{0\}. \] \end{lemma} \begin{proof} The kernel of each $\alpha_{t}$ is a $2$-sided, $\sigma$-weakly closed ideal in $\mathcal{R}$. Thus so is $\cap\{\ker(\alpha_{t})\mid t>0\}$. Hence, we may write $\cap\{\ker(\alpha_{t})\mid t>0\}=q\mathcal{R}$ for some central projection $q$ in $\mathcal{R}$. Since for $R\in\mathcal{A}$ we have $\alpha_{t}(R)\rightarrow R$ strongly as $t\rightarrow0+$, we conclude that $\mathcal{A}\cap q\mathcal{R}=\{0\}$. Since $\mathcal{A}$ generates $\mathcal{R}$ as a von Neumann algebra by Corollary \ref{Corollary3.5}, we conclude that $q=0$. \end{proof} We have arrived at the main theorem of the paper. \begin{theorem} \label{theorem3.8}Let $(\mathcal{M},\{P_{t}\}_{t\geq0})$ be a quantum Markov process and assume that $\mathcal{M}$ acts on a separable Hilbert space $H$. Then the discrete dilation $(K,\mathcal{R},\{\alpha_{t}\}_{t\geq0},\allowbreak u_{0})$ constructed from $\{P_{t}\}_{t\geq0}$ in Theorem \ref{DiscDilat} is an $E_{0}$-dilation; i.e., $\{\alpha_{t}\}_{t\geq0}$ is weakly continuous. \end{theorem} \begin{proof} By Proposition \ref{Theorem3.2}, $K$ is separable and so the predual of $\mathcal{R}$, $\mathcal{R}_{\ast}$, is separable as a Banach space. We will write the pairing between $\mathcal{R}$ and $\mathcal{R}_{\ast}$ as $\langle\cdot,\cdot\rangle$, i.e., $\langle\omega,R\rangle=\omega(R)$, $\omega\in\mathcal{R}_{\ast}$, $R\in\mathcal{R}$, as we did in Proposition \ref{Proposition3.1}. 
However, here we write $\Psi_{t}$ for the pre-adjoint of $\alpha_{t}$, i.e., $\Psi_{t}(\omega)=\omega\circ\alpha_{t}$ for all $\omega\in\mathcal{R}_{\ast}$; so $\langle\Psi_{t}(\omega),R\rangle=\langle\omega,\alpha_{t}(R)\rangle$. Since the $\sigma$-weak topology on $\mathcal{B}(K)$ agrees with the weak operator topology on bounded subsets, we see from the discussion following Proposition \ref{Proposition3.6} that for all $\omega\in\mathcal{R}_{\ast}$ and all $R\in\mathcal{R}_{0}$ the function $t\rightarrow\langle\omega,\alpha_{t}(R)\rangle$ is continuous. However, if $R\in\mathcal{R}$ we may find a sequence $\{R_{n}\}_{n=1}^{\infty}$ in $\mathcal{R}_{0}$ that converges weakly to $R$. Consequently, the function $t\rightarrow\langle\omega,\alpha_{t}(R)\rangle$ is the pointwise limit of the sequence of continuous functions $t\rightarrow\langle\omega,\alpha_{t}(R_{n})\rangle$. Therefore $t\rightarrow\langle\omega,\alpha_{t}(R)\rangle$ is measurable for each $\omega\in\mathcal{R}_{\ast}$ and each $R\in\mathcal{R}$. That is, the semigroup of linear maps $\{\Psi_{t}\}_{t\geq0}$ on $\mathcal{R}_{\ast}$ is weakly measurable with respect to the duality between $\mathcal{R}_{\ast}$ and $\mathcal{R}$ (see \cite[Definition 3.5.4]{HP74}). Since $\mathcal{R}_{\ast}$ is separable, Theorem 3.5.3 of \cite{HP74} implies that $t\rightarrow\Psi_{t}(\omega)$ is strongly measurable as an $\mathcal{R}_{\ast}$-valued function. Thus, in the terminology of \cite[Chapter 10]{HP74}, $\{\Psi_{t}\}_{t\geq0}$ is a strongly measurable semigroup of linear maps on $\mathcal{R}_{\ast}$. But then, Theorem 10.2.3 of \cite{HP74} shows that at least for $t$ strictly larger than zero, the function $t\rightarrow\Psi_{t}$ is strongly continuous; i.e., for each $\omega\in\mathcal{R}_{\ast}$ the $\mathcal{R}_{\ast}$-valued function on $(0,\infty)$, $t\rightarrow\Psi_{t}(\omega)$, is continuous with respect to the norm topology on $\mathcal{R}_{\ast}$.
To extend the continuity to all of $[0,\infty)$, let $\widetilde{\mathcal{R}}_{\ast}$ be the closed linear span $\bigvee\{\Psi_{t}(\mathcal{R}_{\ast})\mid t>0\}$. If $\widetilde{\mathcal{R}}_{\ast}$ is not all of $\mathcal{R}_{\ast}$, then there is a non-zero $R\in\mathcal{R}$ such that $\langle\omega,R\rangle=0$ for all $\omega\in\widetilde{\mathcal{R}}_{\ast}$. This means that for all $t>0$ and all $\omega\in\mathcal{R}_{\ast}$, $\langle\omega,\alpha_{t}(R)\rangle=\langle\Psi_{t}(\omega),R\rangle=0$. Thus, $R$ is in the kernels of all the $\alpha_{t}$. However, Lemma \ref{lemma3.7} implies that $R=0$. Thus, $\widetilde{\mathcal{R}}_{\ast}$ is all of $\mathcal{R}_{\ast}$. Now we can appeal to Theorem 10.5.5 of \cite{HP74} to conclude that $\lim_{t\rightarrow0+}\left\| \Psi_{t}(\omega)-\omega\right\| =0$. Consequently, for all $\omega\in\mathcal{R}_{\ast}$ and $R\in\mathcal{R}$ we see that $\langle\omega,\alpha_{t}(R)\rangle=\langle\Psi_{t}(\omega),R\rangle\rightarrow\langle\omega,R\rangle$ as $t\rightarrow0+$, which is what we wanted to prove.
\end{proof}
TITLE: Can an implication be a tautology?
QUESTION [6 upvotes]: I have a problem understanding this proof: $F \to G \equiv \top \implies F \models G$. My textbook proceeds as follows: $\implies$: we assume $F \to G \equiv \top$. This means that the implication $F \to G$ holds for every possible truth assignment. Looking at the definition of the implication, we see that this is the case if either both propositions are true, both are false, or the first one is false and the second one is true, but never when $F = 1$ and $G = 0$, because that case would render the implication false. Looking at those three cases, we see that whenever $F = 1$, $G$ is also $1$. This is the definition of logical consequence, so we have $F \models G$.
Now my question is: Isn't the definition of a tautology that it is always true? Which means no matter what truth assignments we give the propositions involved in a formula, the formula is always true. But now we said that $G$ can never be false while $F$ is true, so isn't this a contradiction? How can an implication be a tautology?
REPLY [6 votes]: It looks like you're missing the fact that $F$ and $G$ must be placeholders for entire formulas. This means that the same propositional variable might occur in both $F$ and $G$, which can restrict what the possible truth values of $F$ and $G$ in combination are. An example where the theorem makes a claim is if we let $F$ be the formula $A\land B$ and $G$ be the formula $A\lor B$. Then this instance of the theorem says, If $(A\land B)\to(A\lor B)\equiv \top$, then $A\land B \vDash A\lor B$. It is actually the case that $(A\land B)\to(A\lor B)$ is a tautology -- you can see by a truth table that it is impossible for $A\land B$ to evaluate to $1$ and at the same time $A\lor B$ evaluates to $0$. So in this case the premise of the theorem holds, so it does actually claim that $A\land B\vDash A\lor B$. And this conclusion is indeed true too.
On the other hand, if we swap the formulas around and let $F$ be $A\lor B$ and $G$ be $A\land B$, then we get the claim If $(A\lor B)\to(A\land B)\equiv \top$ then $A\lor B\vDash A\land B$. In this case it is possible for $F$ to be true and at the same time $G$ be false, such as in the truth assignment $A=0$, $B=1$. But this means that $F\to G\equiv \top$ is false for this choice of $F$ and $G$. So the theorem doesn't actually promise us anything in this case, because the premise doesn't hold.
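The two cases in this answer can be checked mechanically by brute force over all truth assignments. A short Python sketch (the helper names here are mine, not standard) confirms that $(A\land B)\to(A\lor B)$ is a tautology while $(A\lor B)\to(A\land B)$ is not:

```python
from itertools import product

def implies(p, q):
    # Material implication: false only when p is true and q is false.
    return (not p) or q

def is_tautology(formula, num_vars):
    """Check a Boolean formula under all 2**num_vars truth assignments."""
    return all(formula(*vals) for vals in product([False, True], repeat=num_vars))

# F = A and B, G = A or B: here F -> G holds under every assignment,
taut = is_tautology(lambda a, b: implies(a and b, a or b), 2)
# but with F and G swapped it fails (e.g. A = False, B = True).
not_taut = is_tautology(lambda a, b: implies(a or b, a and b), 2)

print(taut, not_taut)  # True False
```

The failing assignment found in the second case is exactly the $A=0$, $B=1$ counterexample from the answer.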
TITLE: Can someone explain to me the sentence about ideals?
QUESTION [1 upvotes]: Can someone explain the sentence: "If $R=K[x]$, the prime ideals are $\langle f(x)\rangle$, where $f(x)$ is an irreducible polynomial in $K[x]$, and $\langle 0\rangle$; and again each $\langle f(x)\rangle$, for $f(x)$ irreducible, is also maximal."?
REPLY [1 votes]: The notation $\langle a_1,\dots,a_n \rangle$ for $a_i$ in a ring $R$ sometimes means the ideal generated by the $a_i$. So $\langle f(x) \rangle$ is the same as $(f(x))$ in other notation, or simply the set $\{f(x)g(x): g(x) \in K[x]\}$. The statement is saying that a prime ideal of $K[x]$ is either the zero ideal, or a maximal one, which is of the form $\langle f(x) \rangle$ for $f(x)$ irreducible.
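Since $\langle f(x)\rangle$ is the set of multiples of $f(x)$, membership in it can be tested by polynomial division: $p(x)\in\langle f(x)\rangle$ exactly when the remainder of $p$ on division by $f$ is zero. Here is an illustrative sketch over $K=\mathbb{Q}$ (the helper is hypothetical, written for this answer; polynomials are coefficient lists, constant term first):

```python
from fractions import Fraction

def poly_mod(p, f):
    """Remainder of p on division by f; polynomials are lists of
    coefficients, lowest degree first, over the rationals."""
    p = [Fraction(c) for c in p]
    f = [Fraction(c) for c in f]
    while len(p) >= len(f) and any(p):
        if p[-1] == 0:          # drop a zero leading coefficient
            p.pop()
            continue
        shift = len(p) - len(f)
        factor = p[-1] / f[-1]  # cancel the leading term of p
        for i, c in enumerate(f):
            p[i + shift] -= factor * c
        p.pop()
    return p

# x^4 - 1 = (x^2 - 1)(x^2 + 1) lies in the ideal <x^2 + 1> of Q[x] ...
r1 = poly_mod([-1, 0, 0, 0, 1], [1, 0, 1])  # remainder 0 -> member
# ... while x^3 does not: its remainder is -x.
r2 = poly_mod([0, 0, 0, 1], [1, 0, 1])
```

Here `r1` comes out as the zero polynomial, while `r2` is the nonzero remainder $-x$, so $x^3\notin\langle x^2+1\rangle$.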
TITLE: Proving set of linear functionals is a basis
QUESTION [0 upvotes]: Let $\mathbb{R}[X]_{\leq 2}$ be the vector space of polynomials (with real coefficients) of degree at most $2$. Consider the functionals $l_1, l_2, l_3$ on this space:
$$ l_1: f \mapsto \int_{0}^1 f(t) dt, \quad l_2: f \mapsto f'(1), \quad l_3: f \mapsto f(0). $$
I need to prove that $\mathcal{L} = \left\{l_1, l_2, l_3\right\}$ is a basis for the dual space $(\mathbb{R}[X]_{\leq 2} )^*$. I need to prove linear independence and span. For the first, let
$$ \lambda_1 l_1 + \lambda_2 l_2 + \lambda_3 l_3 = 0 $$
be a linear combination. I need to prove that all the $\lambda_i$ are zero. Let $f = a + bx + cx^2$ be some arbitrary polynomial. Then I evaluated both sides of the equation above with this polynomial to get
$$ 0 = 0(f) = \lambda_1 (a + b/2 + c/3) + \lambda_2 (b+ 2c) + \lambda_3 a. $$
By solving the system of equations that arises for the coefficients (which must hold for every choice of $a, b, c$), I showed that each $\lambda_i$ must be zero. But I don't know how to prove the span. I need to show an arbitrary functional $l$ can be written as a combination of the base vectors. How can I do this?
Edit: I'm also interested in finding a basis $V = \left\{v_1, v_2, v_3\right\}$ for $\mathbb{R}[X]_{\leq 2}$ such that $V^* = \mathcal{L}$, i.e. such that $v_i^* = l_i$ for every $i$. Here $v_i^*$ means the projection on the $i$th coordinate, that is
$$ v_i^*: \mathbb{R}[X]_{\leq 2} \rightarrow \mathbb{R}: f = \alpha_1 v_1 + \alpha_2 v_2 + \alpha_3 v_3 \mapsto \alpha_i. $$
I'm not sure what such a basis $V$ looks like, or whether it is unique. For the first basis vector $v_1$, and for an arbitrary polynomial $f \in \mathbb{R}[X]_{\leq 2}$ we would need $$v_1^* (f) = l_1(f) = \int_0^1 f(t) dt. $$
REPLY [1 votes]: Let $V = \mathbb{R}[X]_{\leq 2}$.
You want to show that for an arbitrary $\phi \in V^*$, there exist scalars $\alpha_1, \alpha_2, \alpha_3$ such that $$ \phi = \alpha_1 l_1 + \alpha_2 l_2 + \alpha_3 l_3, $$ meaning that for every $p(x) = a + bx + cx^2 \in V$, $$ \phi (p(x)) = \phi ( a + bx + cx^2) = \alpha_1 l_1(a + bx + cx^2) + \alpha_2 l_2(a + bx + cx^2) + \alpha_3 l_3(a + bx + cx^2), $$ which, as you have already computed is equal to $$ \alpha_1 (a + \frac b2 + \frac c3) + \alpha_2(b + 2c) + \alpha_3 a. $$ If we can find a basis $\{v_1, v_2, v_3\}$ for $V$ such that $$ p(x) = a + bx + cx^2 = (a + \frac b2 + \frac c3)v_1 + (b + 2c)v_2 + a v_3, $$ then by linearity of $\phi$, we would have $$ \phi(p(x)) = (a + \frac b2 + \frac c3)\phi(v_1) + (b + 2c)\phi(v_2) + a \phi(v_3), $$ and by setting $\alpha_1 = \phi(v_1)$, $\alpha_2 = \phi(v_2)$, and $\alpha_3 = \phi(v_3)$ we would be done. Take $v_1 = 3x - \frac32 x^2, v_2 = -\frac 12x + \frac 34 x^2,$ and $v_3 = 1 - 3x + \frac 32 x^2$. You can check that these three polynomials form the desired basis.
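One can verify that this candidate basis is indeed dual to $\mathcal{L}$, i.e., that $l_i(v_j)=\delta_{ij}$, with exact rational arithmetic. A quick Python check (the variable names are mine, each polynomial represented by its coefficient triple $(a,b,c)$):

```python
from fractions import Fraction as F

# The three functionals applied to p(x) = a + b x + c x^2:
l1 = lambda a, b, c: a + F(b, 2) + F(c, 3)   # integral of p over [0, 1]
l2 = lambda a, b, c: b + 2 * c               # p'(1)
l3 = lambda a, b, c: a                       # p(0)

# Candidate dual basis, as coefficient triples (a, b, c):
v1 = (F(0), F(3), F(-3, 2))      # 3x - (3/2)x^2
v2 = (F(0), F(-1, 2), F(3, 4))   # -(1/2)x + (3/4)x^2
v3 = (F(1), F(-3), F(3, 2))      # 1 - 3x + (3/2)x^2

for i, l in enumerate((l1, l2, l3)):
    for j, v in enumerate((v1, v2, v3)):
        assert l(*v) == (1 if i == j else 0), (i, j)
print("l_i(v_j) = delta_ij verified")
```

The nine identities $l_i(v_j)=\delta_{ij}$ are exactly what makes the span argument in the answer work.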
TITLE: Coloring Complete Graph
QUESTION [3 upvotes]: Let $n$ be a positive odd integer. There are $n$ computers and exactly one cable joining each pair of computers. You are to colour the computers and cables such that no two computers have the same colour, no two cables joined to a common computer have the same colour, and no computer is assigned the same colour as any cable joined to it. Prove that this can be done using $n$ colours.
Here is how I translated it into a graph: Call the graph $G$. Note that $G$ is a complete graph. We would like to prove that using $n$ colors, we can color every vertex a different color so that no two edges incident to a single vertex have the same color, and no vertex has the same color as any edge incident to it. I wanted to prove this by construction, but I'm not sure how to. I know that you color every vertex a different color, and you have to prove that the $n-1$ edges incident to each vertex can be colored so that their colors, together with the vertex's own color, make up the set of all $n$ colors, but I'm lost. Thank you!
REPLY [2 votes]: Arrange the computers in a circle and rotate the configuration.
REPLY [2 votes]: Identify the vertices of $K_n$ with the vertices of a regular $n$-gon. Give each vertex a different colour. For any two vertices $x\ne y$, since $n$ is odd, the perpendicular bisector of the line segment $xy$ passes through a unique vertex $z$. Give the edge $xy$ the same colour as the vertex $z$. Alternatively label the vertices $0,1,\dots,n-1$. Give each vertex a different colour. For any two vertices $x\ne y$, since $n$ is odd, there is a unique $z\in\{0,1,\dots,n-1\}$ such that $x+y\equiv2z\pmod n$; give the edge $xy$ the same colour as the vertex $z$. We have shown that, for odd $n$, the total chromatic number of $K_n$ is $n$. This is equivalent to the fact that, for even $n$, the edge chromatic number of $K_n$ is $n-1$.
Suppose $n$ is odd, and suppose we have a (proper) edge colouring of $K_{n+1}$ with $n$ colours. To get a total colouring of $K_n$ with the same colours, delete any vertex $v$, and give each remaining vertex $x$ the same colour that was given to the edge $vx$.
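The arithmetic construction in this answer is easy to implement and check by machine. This sketch (the function names are mine) builds the total coloring of $K_n$ for odd $n$ and verifies all three properness conditions:

```python
def total_coloring(n):
    """Total coloring of K_n with n colors, n odd: vertex x gets color x,
    and edge {x, y} gets the unique z with x + y == 2z (mod n)."""
    assert n % 2 == 1
    half = (n + 1) // 2  # inverse of 2 modulo odd n, since 2*(n+1)//2 == n+1 == 1 (mod n)
    vertex = {x: x for x in range(n)}
    edge = {frozenset((x, y)): ((x + y) * half) % n
            for x in range(n) for y in range(x + 1, n)}
    return vertex, edge

def check(n):
    vertex, edge = total_coloring(n)
    for e, c in edge.items():
        x, y = sorted(e)
        # No edge shares a color with either of its endpoints.
        assert c != vertex[x] and c != vertex[y]
    for x in range(n):
        # All edges at a common vertex get distinct colors.
        incident = [c for e, c in edge.items() if x in e]
        assert len(incident) == len(set(incident))
    return True

print(all(check(n) for n in (3, 5, 7, 9)))  # True
```

Each check mirrors a step of the proof: $z\ne x$ because $2z\equiv x+y$ with $y\ne x$, and two edges at $x$ with the same color would force $y\equiv w\pmod n$.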
TITLE: Movies about mathematics/mathematicians
QUESTION [11 upvotes]: I would like to watch a movie about mathematics/mathematicians (English/French language is OK, Italian would be the best! Both real and invented stories are OK; maybe I would prefer something based on a real story). Well, I know maybe just the most famous ones:

A Beautiful Mind (who doesn't know it?!)
Proof (by the way, do you know whether or not it is based on a real story?)
Good Will Hunting
"Morte di un matematico napoletano" (I am sorry, I don't know the English name. It's the story of the Italian mathematician Renato Caccioppoli)
"Pi"

Are there any others? Any suggestions? Thanks in advance
REPLY [1 votes]: Agora (2009) starring Rachel Weisz as Hypatia. Directed by Alejandro Amenabar, this film contains the best depiction of what it is like to do mathematics that I have seen in a movie. It's also beautifully filmed. Spoiler alert! Stop reading now if you plan to see the movie. -- The writers chose to (very likely!) bend facts by suggesting that Hypatia deduced that planets orbit in ellipses just before she was stoned to death. Her struggle, however, to solve this problem is given in vignettes stretching over a significant period of time, culminating in a great Aha! moment.
\begin{document} \title[Small covers and realization of cycles]{Small covers of graph-associahedra and realization of cycles} \author{Alexander A.~Gaifullin} \thanks{The work is supported by the Russian Science Foundation under grant 14-11-00414.} \address{Steklov Mathematical Institute of the Russian Academy of Sciences} \keywords{Realization of cycles, domination relation, URC-manifold, small cover, graph-associahedron} \email{agaif@mi.ras.ru} \date{} \subjclass[2010]{57N65, 52B20, 52B70, 05E45, 20F55} \sloppy \begin{abstract} An oriented connected closed manifold~$M^n$ is called a \textit{URC-manifold} if for any oriented connected closed manifold~$N^n$ of the same dimension there exists a non-zero degree mapping of a finite-fold covering~$\widehat{M}^n$ of~$M^n$ onto~$N^n$. This condition is equivalent to the following: For any $n$-dimensional integral homology class of any topological space~$X$, a multiple of it can be realized as the image of the fundamental class of a finite-fold covering~$\widehat{M}^n$ of~$M^n$ under a continuous mapping $f\colon \widehat{M}^n\to X$. In 2007 the author gave a constructive proof of the classical result by Thom that a multiple of any integral homology class can be realized as an image of the fundamental class of an oriented smooth manifold. This construction yields the existence of URC-manifolds of all dimensions. For an important class of manifolds, the so-called small covers of graph-associahedra corresponding to connected graphs, we prove that either they or their two-fold orientation coverings are URC-manifolds. In particular, we obtain that the two-fold covering of the small cover of the usual Stasheff associahedron is a URC-manifold. In dimensions~4 and higher, this manifold is simpler than all previously known URC-manifolds. 
\end{abstract} \maketitle \section{Introduction} The following question is called Steenrod's problem on realization of cycles: Given an integral homology class~$z\in H_n(X;\Z)$ of a topological space~$X$, does there exist an oriented closed smooth manifold~$N^n$ and a continuous mapping~$f$ of~$N^n$ to~$X$ such that $f_*[N^n]=z$? If such a pair $(N^n,f)$ exists, the class~$z$ is said to be \textit{realizable by Steenrod}. A classical theorem of Thom~\cite{Tho58} asserts that: \begin{itemize} \item All classes of dimensions less than or equal to~$6$ are realizable by Steenrod. \item In every dimension greater than or equal to~$7$, there exist non-realizable classes. \item Any integral homology class~$z$ of any topological space is realizable by Steenrod with some multiplicity, that is, there exists an integer $k>0$ such that the class~$kz$ is realizable by Steenrod. (In fact, the number~$k$ depends only on the dimension~$n$.) \end{itemize} Novikov~\cite{Nov62} showed that a class~$z\in H_n(X;\Z)$ is realizable by Steenrod if for all $j\ge 1$ and all primes~$p>2$ the $(n-2j(p-1)-1)$-dimensional homology of~$X$ does not contain $p$-torsion. The best known bound for the multiplicity~$k$ was obtained by Buchstaber~\cite{Buc69},~\cite{Buc70}: $$ k\le\prod_{p>2}p^{\left[\frac{n-2}{2(p-1)}\right]}\,, $$ where the product is taken over all odd primes~$p$. Notice that the question about manifolds that realize homology classes was not discussed in~\cite{Tho58}, \cite{Nov62}, \cite{Buc69}, \cite{Buc70}, since the methods of these papers did not give any approach to it. The author~\cite{Gai07} gave a constructive solution of Steenrod's problem, i.\,e., presented an explicit combinatorial construction of a pair~$(N^n,f)$ that realizes a multiple of the given homology class. This construction does not allow one to obtain effective estimates for the multiplicity~$k$.
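For illustration, Buchstaber's bound above is easy to evaluate for small~$n$. If $n=7$, then only the prime $p=3$ contributes, giving
$$
k\le 3^{\left[\frac{5}{4}\right]}=3\,,
$$
while for $n=10$ the primes $p=3$ and~$p=5$ contribute, giving
$$
k\le 3^{\left[\frac{8}{4}\right]}\cdot 5^{\left[\frac{8}{8}\right]}=9\cdot 5=45\,.
$$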
But, unlike the classical algebraic topological approach, it allows us to control the topology of the obtained manifold~$N^n$. Namely, it was shown in~\cite{Gai08a},~\cite{Gai08b} that in every dimension~$n$ there exists a single manifold~$M_0^n$ \ --- \ the so-called Tomei manifold --- such that for any homology class~$z\in H_n(X;\Z)$ of any topological space~$X$, the manifold~$N^n$ that realizes a multiple~$kz$ of~$z$ can be chosen to be a (non-ramified) finite-fold covering of~$M_0^n$. The Tomei manifold~$M^n_0$ first appeared in the paper~\cite{Tom84} by Tomei as the isospectral manifold of symmetric tridiagonal real matrices. The problem arises: Which other manifolds~$M^n$, besides the Tomei manifold~$M^n_0$, satisfy the same universal property with respect to the problem on realization of cycles? The author~\cite{Gai13},~\cite{Gai13-b} has introduced the following class of URC-manifolds (from the words ``Universal Realizator of Cycles''). \begin{defin} An oriented connected closed manifold~$M^n$ is called a \textit{URC-manifold} if, for any topological space~$X$ and any homology class $z\in H_n(X;\Z)$, there exists a finite-fold covering~$\hM^n$ of~$M^n$ and a continuous mapping $f\colon\hM^n\to X$ such that $f_*[\hM^n]=kz$ for a non-zero integer~$k$. \end{defin} \begin{remark} We always endow the covering~$\hM^n$ of~$M^n$ with the orientation induced by the orientation of~$M^n$, i.\,e., such that the Jacobian of the projection $\hM^n\to M^n$ is positive at all points of~$\hM^n$. We do not require~$\hM^n$ to be connected. If we restrict ourselves to considering pathwise connected spaces~$X$ only and require~$\hM^n$ to be connected, then we obtain an equivalent definition of a URC-manifold. \end{remark} Another motivation for the class of URC-manifolds is the problem of the domination relation on the set of homotopy types of oriented closed manifolds. Let~$M^n$ and~$N^n$ be connected oriented closed manifolds of the same dimension.
We say that $M^n$ \textit{dominates\/}~$N^n$ and write $M^n\ge N^n$ if there exists a non-zero degree mapping of~$M^n$ onto~$N^n$. We say that~$M^n$ \textit{virtually dominates\/}~$N^n$ if a finite-fold covering of~$M^n$ dominates~$N^n$. The study of the domination relation goes back to the works of Milnor and Thurston~\cite{MiTh77} and Gromov~\cite{Gro82}. Carlson and Toledo~\cite{CaTo89} posed the problem of finding a reasonable \textit{maximal class\/} of $n$-dimensional manifolds with respect to domination, that is, a class of $n$-dimensional manifolds such that any $n$-dimensional manifold can be dominated by a manifold in this class. Each URC-manifold~$M^n$ provides a solution to this problem: For the required class one can take the class of all finite-fold coverings of~$M^n$. Equivalently, URC-manifolds are exactly the manifolds that are greatest with respect to virtual domination. In other words, a manifold~$M^n$ is a URC-manifold if and only if it virtually dominates every oriented connected closed manifold of the same dimension. A good survey of works on the domination relation can be found in~\cite{KoLo09}. The first example of a URC-manifold, the Tomei manifold~$M_0^n$ mentioned above, is a \textit{small cover\/} of a certain simple polytope, namely, of the \textit{permutohedron\/}~$\Pe^n$. (Precise definitions of a small cover and of the permutohedron will be given in Section~\ref{section_defin}.) In~\cite{Gai13} the author has found many other examples of URC-manifolds among small covers of other simple polytopes. However, all these simple polytopes, except for the dodecahedron in dimension~$3$, are more complicated than the permutohedron. Here `more complicated' can be understood, for instance, in the sense that they have greater numbers of facets. Hence, in dimensions~$4$ and higher, the Tomei manifolds~$M_0^n$ have remained the simplest known URC-manifolds.
The main result of the present paper is the construction of new examples of URC-manifolds that are simpler than the Tomei manifolds. Carr and Devadoss~\cite{CaDe05} suggested a construction that assigns a simple polytope~$P_{\Gamma}$, which they called a \textit{graph-associahedron\/}, to every finite graph~$\Gamma$, see Section~\ref{subsection_as} for details. Our main result is as follows: For any connected graph~$\Gamma$, any orientable small cover~$M_{P_{\Gamma},\lambda}$ of~$P_{\Gamma}$ is a URC-manifold, and for any non-orientable small cover~$M_{P_{\Gamma},\lambda}$ of~$P_{\Gamma}$, the two-fold orientation covering of~$M_{P_{\Gamma},\lambda}$ is a URC-manifold. This result will be formulated in detail in Section~\ref{subsection_result} after we give all necessary definitions. We shall present an explicit construction that, for a singular cycle representing the given homology class, builds a finite-fold covering of~$M_{P_{\Gamma},\lambda}$ that realizes a multiple of the given homology class. This construction is a further modification of the construction suggested by the author in~\cite{Gai07} and developed in~\cite{Gai08a}, \cite{Gai08b}, and~\cite{Gai13}. Notice that another modification of this construction was applied to solve the problem of constructing an oriented simplicial manifold with the prescribed set of links of vertices, see~\cite{Gai08c}. \begin{remark} Unfortunately, we shall always face the following inconvenience concerning the terminology. We constantly work with coverings of manifolds, and the word `covering' is always understood in the sense of classical topology, i.\,e., as a non-ramified covering. On the other hand, the main source of examples of manifolds for us is the construction of small covers of simple polytopes. Here the expression `small cover' should be understood as a single whole, and a small cover of a polytope is by no means a covering of this polytope in the sense of classical topology.
Nevertheless, the term `small cover' is established and it is impossible to change it. Hence, we shall often deal with objects like a `two-fold covering of a small cover of a simple polytope'. To avoid confusion, we agree once and for all that the word `covering' is always understood in the sense of classical topology unless it enters the phrase `small cover'. \end{remark} The `smallest' of the graph-associahedra corresponding to connected graphs is the well-known \textit{Stasheff associahedron}~$\As^n$. Here the word `smallest' can be understood in several senses. For instance, among all graph-associahedra corresponding to connected graphs, the Stasheff associahedron has the smallest numbers of faces, as well as the smallest $h$-numbers and $\gamma$-numbers, see~\cite{BuVo11}. For example, the number of facets of~$\Pe^n$ is equal to $2^{n+1}-2$, while the number of facets of~$\As^n$ is equal to $n(n+3)/2$. One of the URC-manifolds that will be constructed in this paper is the two-fold orientation covering~$\overline{M}_{\As^n,\clambda}$ of the small cover~$M_{\As^n,\clambda}$ of the Stasheff associahedron~$\As^n$ that corresponds to the canonical Delzant characteristic function~$\clambda$, see Section~\ref{section_defin} for details. This manifold is much `smaller' than the Tomei manifold, for instance, in the sense that it has smaller Betti numbers. Notice that the Betti numbers with coefficients in~$\Z_2$ of a small cover~$M_{P,\lambda}$ are independent of the choice of the characteristic function~$\lambda$ and are equal to the $h$-numbers of~$P$, see~\cite{DaJa91}. The Betti numbers with coefficients in~$\Q$ of small covers are harder to compute, see Section~\ref{subsection_rational}. At present, in every dimension~$n\ge 4$, the manifold~$\overline{M}_{\As^n,\clambda}$ is the simplest known URC-manifold. This paper is organized as follows.
In Section~\ref{section_defin} we give all necessary definitions and then formulate our main results, Theorem~\ref{theorem_main} and Corollary~\ref{cor_main}, which claim that for any graph-associahedron~$P_{\Gamma}$ corresponding to a connected graph~$\Gamma$ the following manifolds are URC-manifolds: \begin{itemize} \item the real moment-angle manifold over~$P_{\Gamma}$, \item all orientable small covers of~$P_{\Gamma}$, \item the two-fold orientation coverings of all non-orientable small covers of~$P_{\Gamma}$. \end{itemize} An explicit construction for realization of cycles providing the proofs of these results is contained in Sections~\ref{section_simplex}--\ref{section_proof}. Finally, in Section~\ref{section_small} we discuss how one can compare different URC-manifolds, and in what senses the newly constructed URC-manifolds are smaller than URC-manifolds known before. \section{Definitions and main result}\label{section_defin} \subsection{Small covers and real moment-angle manifolds} Recall that an $n$-dimensional convex polytope $P\subset\R^n$ is said to be \textit{simple} if each of its vertices is contained in exactly $n$ facets. An important branch of modern algebraic geometry is the theory of \textit{toric varieties}. Recall that a toric variety is a normal algebraic variety that contains a Zariski open subset isomorphic to the algebraic torus~$(\C^{\times})^n$ such that the action of this torus on itself extends to its action on the whole variety. It is well known that, for a smooth projective toric variety~$X$, the quotient of~$X$ by the action of the compact torus~$T^n\subset(\C^{\times})^n$ is a simple polytope~$P^n$. Davis and Januszkiewicz~\cite{DaJa91} suggested a construction of topological analogs of toric varieties, i.\,e., of smooth topological manifolds~$M^{2n}$ with locally standard actions of the half-dimensional compact torus~$T^n$ such that $M^{2n}/T^n=P^n$ is a simple polytope. Today such manifolds are called \textit{quasi-toric manifolds}.
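For example, the complex projective space~$\C P^n$ is a smooth projective toric variety, and the quotient mapping by the action of the compact torus~$T^n$ can be realized explicitly as
$$
[z_0:\cdots:z_n]\;\mapsto\;\frac{\bigl(|z_0|^2,\ldots,|z_n|^2\bigr)}{|z_0|^2+\cdots+|z_n|^2}\,,
$$
whose image is the standard $n$-dimensional simplex $\{x_i\ge 0,\ \sum_{i=0}^n x_i=1\}\subset\R^{n+1}$.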
Alongside this construction, in~\cite{DaJa91} they also introduced a real version of it with the torus~$T^n$ replaced by its real analog, that is, by the group~$\Z_2^n$. The obtained manifolds are called \textit{small covers} of simple polytopes. (Here and in what follows, we denote by~$\Z_2$ the cyclic group of order~$2$ and, unlike in~\cite{DaJa91}, use the additive notation for it.) In the construction of quasi-toric manifolds and small covers, an auxiliary role was played by certain special manifolds~$\mathcal{Z}_P$ and~$\CR_P$ with the actions of the groups~$T^m$ and~$\Z_2^m$, respectively, such that $\mathcal{Z}_P/T^m=P^n$ and $\CR_P/\Z_2^m=P^n$, where $m$ is the number of facets of~$P^n$. The theory of these manifolds was developed in the works of Buchstaber and Panov, see~\cite{BuPa15}. It turned out that they are of significant independent interest. Today they are called \textit{moment-angle manifolds} and \textit{real moment-angle manifolds}, respectively. Below, among all constructions mentioned above, we shall need only the constructions of real moment-angle manifolds and small covers. Hence, we shall present only these two constructions. Let $P$ be an $n$-dimensional simple convex polytope with $m$ facets $F_1,\ldots,F_m$. Consider the group~$\Z_2^m$ with the standard basis $a_1,\ldots,a_m$. We put $$ \CR_{P}=(P\times\Z_2^m)/\sim\,, $$ where $\sim$ is the equivalence relation on~$P\times\Z_2^m$ such that $(x,g)\sim (x',g')$ if and only if $x=x'$ and the element $g-g'$ belongs to the subgroup of~$\Z_2^m$ generated by all~$a_i$ such that $x\in F_i$. We denote by~$[x,g]$ the point of~$\CR_P$ corresponding to the equivalence class of~$(x,g)$. Now, consider the group~$\Z_2^n$. A homomorphism $\lambda\colon\Z_2^m\to\Z_2^n$ is called a \textit{characteristic function} if, for each vertex~$v$ of~$P$, the elements $\lambda(a_{i_1}),\ldots,\lambda(a_{i_n})$ corresponding to the facets $F_{i_1},\ldots,F_{i_n}$ containing~$v$ form a basis of~$\Z_2^n$.
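For example, let $P=I^2$ be the square with the facets $F_1,F_2,F_3,F_4$ numbered so that $F_1,F_3$ and $F_2,F_4$ are the two pairs of opposite sides, and let $b_1,b_2$ be the standard basis of~$\Z_2^2$. Then the homomorphism $\lambda\colon\Z_2^4\to\Z_2^2$ given by
$$
\lambda(a_1)=\lambda(a_3)=b_1\,,\qquad \lambda(a_2)=\lambda(a_4)=b_2
$$
is a characteristic function: each vertex of the square is contained in one facet from each pair, so the corresponding values of~$\lambda$ form the basis $b_1,b_2$. The homomorphism with $\lambda(a_3)=b_1+b_2$ instead of~$b_1$ (and the other values unchanged) is a characteristic function as well.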
Recall that not every simple polytope admits a characteristic function. To a pair $(P,\lambda)$ such that $P$ is a simple polytope and $\lambda$ is a characteristic function, one can assign the manifold $$ M_{P,\lambda}=(P\times\Z_2^n)/\sim_{\lambda}, $$ where $\sim_{\lambda}$ is the equivalence relation on~$P\times\Z_2^n$ such that $(x,g)\sim_{\lambda} (x',g')$ if and only if $x=x'$ and the element $g-g'$ belongs to the subgroup of~$\Z_2^n$ generated by all~$\lambda(a_i)$ such that $x\in F_i$. We denote by $[x,g]_{\lambda}$ the point of~$M_{P,\lambda}$ corresponding to the equivalence class of~$(x,g)$. The manifold $\CR_{P}$ is called the \textit{real moment-angle manifold\/} over the simple polytope~$P$, and the manifolds~$M_{P,\lambda}$ are called \textit{small covers\/} of~$P$. The manifold~$\CR_P$ is glued out of $2^m$ copies of~$P$ indexed by the elements of~$\Z_2^m$. Each manifold~$M_{P,\lambda}$ is glued out of $2^n$ copies of~$P$ indexed by the elements of~$\Z_2^n$. The copies of~$P$ corresponding to the elements~$g$ and~$g'$ are glued to each other along a facet~$F_i$ if and only if $g-g'=a_i$ for~$\CR_P$ and $g-g'=\lambda(a_i)$ for~$M_{P,\lambda}$. Moreover, the gluing of the simple polytopes at every point of either~$\CR_P$ or~$M_{P,\lambda}$ is locally modeled on the gluing of the coordinate orthants at a point of~$\R^n$. Therefore, $\CR_P$ and $M_{P,\lambda}$ are indeed topological manifolds. Any simple polytope~$P$ has the standard structure of a smooth manifold with corners, which induces smooth structures on the manifolds~$\CR_P$ and~$M_{P,\lambda}$. It is easy to see that, for any characteristic function~$\lambda$, the mapping $p_{\lambda}\colon \CR_P\to M_{P,\lambda}$ given by $[x,g]\mapsto [x,\lambda(g)]_{\lambda}$ is a $2^{m-n}$-fold regular covering. In other words, $M_{P,\lambda}=\CR_P/\ker\lambda$, where the group~$\Z_2^m$ and, hence, its subgroup~$\ker\lambda$ act on~$\CR_P$ by $h\cdot [x,g]=[x,g+h]$.
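The simplest example is provided by the simplex, see~\cite{DaJa91}. If $P=\Delta^n$ is an $n$-simplex with the facets $F_0,\ldots,F_n$, then $\CR_{\Delta^n}$ is homeomorphic to the sphere~$S^n$ glued out of the $2^{n+1}$ intersections of the unit sphere in~$\R^{n+1}$ with the coordinate orthants. For the characteristic function $\lambda\colon\Z_2^{n+1}\to\Z_2^n$ given by
$$
\lambda(a_i)=b_i\,,\quad i=1,\ldots,n\,,\qquad \lambda(a_0)=b_1+\cdots+b_n\,,
$$
where $b_1,\ldots,b_n$ is the standard basis of~$\Z_2^n$, the small cover $M_{\Delta^n,\lambda}$ is the real projective space~$\R P^n$, and $p_{\lambda}\colon S^n\to\R P^n$ is the usual two-fold covering (here $m=n+1$, so $2^{m-n}=2$).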
A simple polytope~$P$ is called a \textit{flag polytope} if every set $F_1,\ldots,F_k$ of its pairwise intersecting facets has a non-empty intersection~$F_1\cap\cdots\cap F_k$. It follows from the results of Davis~\cite{Dav83} that small covers~$M_{P,\lambda}$ and the real moment-angle manifold~$\CR_P$ over a flag polytope~$P$ are aspherical manifolds, that is, $\pi_i(M_{P,\lambda})=0$ and $\pi_i(\CR_P)=0$ whenever $i>1$. (The manifolds~$M_{P,\lambda}$ and~$\CR_P$ had not been introduced in~\cite{Dav83}, but the manifold~$\CU_P$, which is their universal covering, was studied and was proved to be contractible.) \begin{remark} If one endows a simple polytope~$P$ with a natural structure of an orbifold, and properly defines the concept of the fundamental group of an orbifold (cf.~\cite[Section~13.2]{Thu02}), then the manifold~$\CR_P$ turns out to be the \textit{universal Abelian covering} of the orbifold~$P$, i.\,e., the covering corresponding to the commutant of its fundamental group. Hence the manifold~$\CR_P$ is sometimes called the universal Abelian covering of~$P$. We prefer not to use this terminology, since in the present paper the word `covering' is already overloaded. The fundamental group of~$P$ treated as an orbifold is a right-angled Coxeter group. The commutants of such groups have been studied by Panov and Veryovkin~\cite{PaVe16}. \end{remark} \subsection{Nestohedra and graph-associahedra}\label{subsection_as} We shall need to consider an important family of simple polytopes, which are called \textit{graph-associahedra}. These polytopes were introduced by Carr and Devadoss~\cite{CaDe05}. Their definition is based on the fundamental concept of a \textit{building set}, which goes back to the paper~\cite{DCPr95} by De Concini and Procesi on the models of the complements of subspace arrangements in a vector space. The same polytopes were studied by Toledano Laredo~\cite{TL08} under the name \textit{De Concini--Procesi associahedra}.
Graph-associahedra are representatives of an important wider class of simple polytopes called \textit{nestohedra}, which were introduced in~\cite{FeSt05},~\cite{Pos09}. (The term `nestohedron' was first used in~\cite{PRW08}.) Let $V$ be a finite set. A \textit{building set} on~$V$ is a set~$\CB$ of non-empty subsets $S\subseteq V$ satisfying the following conditions: \begin{enumerate} \item If $S_1, S_2\in \CB$ and $S_1\cap S_2\ne\varnothing$, then $S_1\cup S_2\in\CB$. \item All one-element subsets $\{i\}$, $i\in V$, belong to~$\CB$. \end{enumerate} A building set~$\CB$ is called \textit{connected\/} if $V\in\CB$. In the present paper, by a \textit{graph\/} we always mean a finite simple graph, i.\,e., a finite graph without loops and multiple edges. Let $\Gamma$ be a graph on the vertex set~$V$. For each subset $S\subseteq V$, we denote by~$\Gamma|_S$ the \textit{restriction\/} of~$\Gamma$ to~$S$, that is, the graph on the vertex set~$S$ such that every pair of vertices $s_1,s_2\in S$ is connected by an edge in~$\Gamma|_S$ if and only if it is connected by an edge in~$\Gamma$. The \textit{graph building set\/} corresponding to~$\Gamma$ is the set~$\CB(\Gamma)$ consisting of all non-empty subsets $S\subseteq V$ such that the graph~$\Gamma|_{S}$ is connected. It is easy to see that conditions~1 and~2 are satisfied, that is, $\CB(\Gamma)$ is indeed a building set. Besides, the building set~$\CB(\Gamma)$ is connected if and only if the graph~$\Gamma$ is connected. Recall that the \textit{Minkowski sum\/} of subsets $P$ and~$Q$ of~$\R^k$ is the subset $P+Q$ of~$\R^k$ consisting of all vectors $p+q$ such that $p\in P$ and $q\in Q$. Consider the space~$\R^{n+1}$ with the basis $e_0,\ldots,e_n$ and with the coordinates $x_0,\ldots,x_n$ in this basis. The \textit{standard $n$-dimensional simplex\/} is the simplex $\Delta^n\subset\R^{n+1}$ with vertices $e_0,\ldots,e_n$.
Equivalently, $\Delta^n$ is the set of all points $(x_0,\ldots,x_n)$ satisfying $\sum_{i=0}^nx_i=1$ and $x_i\ge0$ for all~$i$. For each non-empty subset $S\subseteq\{0,\ldots,n\}$, we denote by~$\Delta_S$ the face of~$\Delta^n$ spanned by all vertices~$e_i$ such that $i\in S$. Let $\CB$ be a building set on the set $V=\{0,\ldots,n\}$. The \textit{nestohedron} corresponding to~$\CB$ is the polytope $$ P_{\CB}=\sum_{S\in \CB}\Delta_S, $$ where the sum is the Minkowski sum. If $\CB=\CB(\Gamma)$ is the graph building set corresponding to a graph~$\Gamma$, then the nestohedron $P_{\Gamma}=P_{\CB(\Gamma)}$ is called a \textit{graph-associahedron}. In the following proposition we collect basic results of the papers~\cite{FeSt05}, \cite{Pos09}, \cite{PRW08} on nestohedra in the special case of interest to us, that of graph-associahedra corresponding to connected graphs. \begin{propos} Let $\Gamma$ be a connected graph on the vertex set $V=\{0,\ldots,n\}$. Then the graph-associahedron~$P_{\Gamma}$ satisfies the following: \begin{enumerate} \item $P_{\Gamma}$ is an $n$-dimensional flag simple polytope lying in the hyperplane $H\subset\R^{n+1}$ given by $\sum_{i=0}^nx_i=|\CB(\Gamma)|$. \item $P_{\Gamma}$ has $|\CB(\Gamma)|-1$ facets, which can be naturally indexed by the subsets $S\in\CB(\Gamma)\setminus\{V\}$. The facet~$F_S$ corresponding to a subset~$S$ lies in the intersection of~$H$ and the hyperplane given by $\sum_{i\in S}x_i=k_S$, where $k_S$ is the number of subsets $T\in\CB(\Gamma)$ such that $T\subseteq S$. \item Facets~$F_S$ and~$F_T$ intersect if and only if $S\subseteq T$, or $T\subseteq S$, or $S\cap T=\varnothing$ and~$S\cup T\notin \CB(\Gamma)$. \end{enumerate} \end{propos} It will be convenient to put~$\CB'(\Gamma)=\CB(\Gamma)\setminus\{V\}$. Then the facets of the graph-associahedron~$P_{\Gamma}$ are indexed by the elements of~$\CB'(\Gamma)$.
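For example, let $\Gamma=L_3$ be the path graph on the vertex set $\{0,1,2\}$ with the edges $\{0,1\}$ and $\{1,2\}$. Then
$$
\CB(L_3)=\bigl\{\{0\},\{1\},\{2\},\{0,1\},\{1,2\},\{0,1,2\}\bigr\}
$$
(the subset $\{0,2\}$ is missing, since $L_3|_{\{0,2\}}$ is disconnected), and the graph-associahedron
$$
P_{L_3}=\Delta_{\{0\}}+\Delta_{\{1\}}+\Delta_{\{2\}}+\Delta_{\{0,1\}}+\Delta_{\{1,2\}}+\Delta_{\{0,1,2\}}
$$
is a pentagon with the $|\CB(L_3)|-1=5$ facets $F_{\{0\}}$, $F_{\{1\}}$, $F_{\{2\}}$, $F_{\{0,1\}}$, $F_{\{1,2\}}$, in agreement with the proposition above.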
It is not hard to see that, for a disconnected graph~$\Gamma$ with connected components $\Gamma_1,\ldots,\Gamma_q$, the graph-associahedron~$P_{\Gamma}$ is combinatorially equivalent (and even isometric) to the direct product $P_{\Gamma_1}\times\cdots\times P_{\Gamma_q}$. The most important are the following three series of graph-associahedra: 1. If $L_{n+1}$ is the path graph on the vertex set $\{0,\ldots,n\}$, i.\,e., the graph with the $n$ edges $\{0,1\}$, $\{1,2\},\ldots,$ $\{n-1,n\}$, then $P_{L_{n+1}}$ is the usual \textit{associahedron\/}~$\As^n$, which is also called the \textit{Stasheff polytope}. 2. If $C_{n+1}$ is the cycle graph on the vertex set $\{0,\ldots,n\}$, i.\,e., the graph with the $n+1$ edges $\{0,1\}$, $\{1,2\},\ldots,$ $\{n-1,n\}$, $\{n,0\}$, then $P_{C_{n+1}}$ is the \textit{cyclohedron\/}~$\Cy^n$, which is also called the \textit{Bott--Taubes polytope}. 3. If $K_{n+1}$ is the complete graph on the vertex set $\{0,\ldots,n\}$, then $P_{K_{n+1}}$ is the \textit{permutohedron\/}~$\Pe^n$. \begin{remark} Usually, by the standard permutohedron one means the convex hull of the $(n+1)!$ points obtained by permutations of coordinates of the point $(1,2,3,\ldots,n+1)$. Nevertheless, the permutohedron~$\Pe^n=P_{K_{n+1}}$ defined above is the convex hull of the $(n+1)!$ points obtained by permutations of coordinates of the point $(1,2,4,\ldots,2^n)$. These two polytopes are combinatorially equivalent. \end{remark} For a connected graph~$\Gamma$, we construct a mapping $\pi_{\Gamma}\colon P_{\Gamma}\to\Delta^n$ in the following way. Let $K_{\Gamma}$ be the barycentric subdivision of~$P_{\Gamma}$. We map the barycentre of every face $F=F_{S_1}\cap\cdots\cap F_{S_k}$ of~$P_{\Gamma}$ to the barycentre of the face~$\Delta_{V\setminus(S_1\cup\cdots\cup S_k)}$ of~$\Delta^n$. In particular, we map the barycentre of~$P_{\Gamma}$ to the barycentre of~$\Delta^n$.
(Notice that if $F_{S_1}\cap\cdots \cap F_{S_k}\ne\varnothing$, then the set $S_1\cup\cdots\cup S_k$ either coincides with one of the subsets~$S_i$ or does not belong to~$\CB(\Gamma)$, hence, never coincides with the whole set~$V$.) Further, we extend the mapping linearly to every simplex of~$K_{\Gamma}$, and denote the obtained mapping by~$\pi_{\Gamma}$. \begin{propos}\label{propos_pi} The mapping~$\pi_{\Gamma}$ satisfies the following: \begin{enumerate} \item $\pi_{\Gamma}(F_S)\subseteq\Delta_{V\setminus S}$ for all $S\in\CB'(\Gamma)$. \item $\pi_{\Gamma}(\partial P_{\Gamma})=\partial\Delta^n$. \item The mapping $\pi_{\Gamma}$ has degree~$1$, that is, takes the fundamental homology class of the pair~$(P_{\Gamma},\partial P_{\Gamma})$ to the fundamental homology class of the pair~$(\Delta^n,\partial\Delta^n)$. \end{enumerate} \end{propos} \begin{proof} Property~1 follows immediately from the construction of~$\pi_{\Gamma}$. The inclusion~$\pi_{\Gamma}(\partial P_{\Gamma})\subseteq\partial\Delta^n$ follows from property~1. Let us prove property~3. It follows immediately from the construction that $\pi_{\Gamma}$ is a simplicial mapping of the barycentric subdivision of~$P_{\Gamma}$ to the barycentric subdivision of~$\Delta^n$, that is, $\pi_{\Gamma}$ maps every simplex of~$K_{\Gamma}$ linearly onto a simplex of the barycentric subdivision of~$\Delta^n$. Obviously, there exist subsets $S_1\subset \cdots\subset S_n\subset V$ such that $|S_i|=i$ and the graphs~$\Gamma|_{S_i}$ are connected, i.\,e., $S_i\in\CB'(\Gamma)$. Denote by~$u_0$ the barycentre of the graph-associahedron~$P_{\Gamma}$, and denote by $u_1,\ldots,u_n$ the barycentres of its faces $F_{S_1}$, $F_{S_1}\cap F_{S_2},\ldots$, $F_{S_1}\cap\cdots\cap F_{S_n}$, respectively. Denote by~$v_0$ the barycentre of the simplex~$\Delta^n$, and denote by $v_1,\ldots,v_n$ the barycentres of its faces $\Delta_{V\setminus S_1},\ldots,\Delta_{V\setminus S_n}$, respectively.
Then $\pi_{\Gamma}(u_i)=v_i$, $i=0,\ldots,n$, hence, $\pi_{\Gamma}$ maps the simplex~$\sigma$ with vertices $u_0,\ldots,u_n$ isomorphically onto the simplex~$\tau$ with vertices $v_0,\ldots,v_n$. Besides, it can be checked immediately that this affine isomorphism preserves the orientation. (The simplex~$\Delta^n$ and the graph-associahedron~$P_{\Gamma}$ lie in parallel hyperplanes, hence, the standard orientation of~$\Delta^n$ induces the orientation of~$P_{\Gamma}$.) Therefore, to prove that~$\pi_{\Gamma}$ has degree~$1$, it suffices to show that no other $n$-dimensional simplex~$\sigma'$ of~$K_{\Gamma}$ is mapped isomorphically onto~$\tau$. Assume the contrary, that is, assume that $K_{\Gamma}$ contains a simplex~$\sigma'\ne\sigma$ with vertices $u_0',\ldots,u_n'$ such that $\pi_{\Gamma}(u_i')=v_i$ for all~$i$. Then $u_0'=u_0$ and there exist facets $F_{S_1'},\ldots,F_{S_n'}$ of~$P_{\Gamma}$ such that $u_i'$ is the barycentre of~$F_{S_1'}\cap\cdots\cap F_{S_i'}$ for $i=1,\ldots,n$. Since $\pi_{\Gamma}(u_i')=v_i$, we see that $S_1'\cup\cdots\cup S_i'=S_i$ for all~$i$. In particular, $S_1'=S_1$. Take the smallest~$i$ such that $S_i'\ne S_i$. Then $S_{i-1}'\cup S_i'=S_i\in\CB(\Gamma)$ and neither of the sets~$S_{i-1}'=S_{i-1}$ and~$S_i'$ is contained in the other. Hence, the facets~$F_{S_{i-1}'}$ and~$F_{S_i'}$ do not intersect, which is impossible. Therefore, $S_i'=S_i$ for all~$i$, that is, $\sigma'=\sigma$, which completes the proof of property~3. Since the degree of~$\pi_{\Gamma}$ is non-zero, we conclude that the inclusion $\pi_{\Gamma}(\partial P_{\Gamma})\subseteq\partial\Delta^n$ cannot be proper, which proves property~2. \end{proof} Recall that an $n$-dimensional simple polytope $P\subset\R^n$ is called a \textit{Delzant polytope} if, for each of its vertices~$p$, there exist integral normal vectors to the facets of~$P$ containing~$p$ that form a $\Z$-basis of the standard lattice $\Z^n\subset\R^n$.
For each $n$-dimensional Delzant polytope~$P$ with $m$ facets $F_1,\ldots,F_m$, there is a canonical characteristic function $\clambda\colon\Z_2^m\to\Z_2^n$ such that, for each $i$, the value~$\clambda(a_i)$ is the primitive integral normal vector to~$F_i$ reduced modulo~$2$. To each Delzant polytope~$P$ is assigned a smooth projective toric variety, and the small cover~$M_{P,\clambda}$ corresponding to the canonical characteristic function described above is the set of real points of this projective variety, see~\cite{BuPa15}. For any connected graph~$\Gamma$ on the vertex set $V=\{0,\ldots,n\}$, the corresponding graph-associahedron~$P_{\Gamma}$ becomes Delzant if one identifies the hyperplane~$H$ containing~$P_{\Gamma}$ with the space~$\R^n$ spanned by the vectors $e_1,\ldots,e_n$ by means of the coordinate projection $\R^{n+1}\to\R^n$. Since facets of~$P_{\Gamma}$ are indexed by the elements~$S\in\CB'(\Gamma)$, we shall conveniently denote the basis element of~$\Z_2^m=\Z_2^{|\CB'(\Gamma)|}$ corresponding to the facet~$F_S$ by~$a_S$. Then the canonical characteristic function $\clambda\colon\Z_2^m\to\Z_2^n$ yielded by the Delzant structure on~$P_{\Gamma}$ is given by $$ \clambda(a_S)=\left\{ \begin{aligned} &\sum_{i\in S}b_i&&\text{if $0\notin S$,}\\ &\sum_{i\in V\setminus S}b_i&&\text{if $0\in S$,} \end{aligned} \right. $$ where $b_1,\ldots,b_n$ is the standard basis of~$\Z_2^n$. Notice that graph-associahedra may admit other characteristic functions. For instance, the above mentioned Tomei manifold~$M^n_0$ is the small cover of the permutohedron~$\Pe^n=P_{K_{n+1}}$ corresponding to the characteristic function~$\lambda_0$ given by $\lambda_0(a_S)=b_{|S|}$. \subsection{Main result} \label{subsection_result} \begin{theorem}\label{theorem_main} For any connected graph~$\Gamma$, the real moment-angle manifold~$\CR_{P_{\Gamma}}$ is a URC-manifold. 
\end{theorem} We shall prove this theorem in Section~\ref{section_proof} after we describe two auxiliary constructions necessary for it in Sections~\ref{section_simplex} and~\ref{section_constr}. If a graph $\Gamma$ is the disjoint union of two graphs~$\Gamma_1$ and~$\Gamma_2$, then $P_{\Gamma}=P_{\Gamma_1}\times P_{\Gamma_2}$ and $\CR_{P_{\Gamma}}=\CR_{P_{\Gamma_1}}\times \CR_{P_{\Gamma_2}}$. In particular, adding an isolated vertex to a graph does not change the polytope~$P_{\Gamma}$. Kotschick and L\"oh~\cite{KoLo09} proved that in every dimension $n\ge 2$ there exist many examples of oriented closed manifolds, e.\,g., all manifolds of strictly negative curvature, that cannot be dominated by a product of two manifolds of positive dimensions. Hence, if $\Gamma$ is a disconnected graph containing at least two connected components that are not isolated vertices, then $\CR_{P_{\Gamma}}$ is not a URC-manifold. It follows immediately from the definition of a URC-manifold that if $M_1^n$ and~$M_2^n$ are oriented connected closed manifolds and $M_1^n$ is a finite-fold covering of~$M_2^n$, then $M_1^n$ is a URC-manifold if and only if $M^n_2$ is a URC-manifold. \begin{cor}\label{cor_main} Suppose that $\Gamma$ is a connected graph and $\lambda$ is a characteristic function for~$P_{\Gamma}$. Then the small cover~$M_{P_{\Gamma},\lambda}$ is a URC-manifold whenever it is orientable, and if $M_{P_{\Gamma},\lambda}$ is non-orientable, then its two-fold orientation covering~$\overline{M}_{P_{\Gamma},\lambda}$ is a URC-manifold. \end{cor} It is easy to check that the small cover~$M_{P,\lambda}$ is orientable if and only if $\lambda(a_{i_1})+\cdots+\lambda(a_{i_{2k+1}})\ne 0$ for every set of an odd number of indices $i_1,\ldots,i_{2k+1}$. This implies immediately that, for any connected graph~$\Gamma$ with at least three vertices, the manifold~$M_{P_{\Gamma},\clambda}$ is non-orientable.
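Indeed, this can be seen directly from the formula for~$\clambda$. Choose an edge $\{i,j\}$ of~$\Gamma$. If $0\notin\{i,j\}$, then the sets $\{i\}$, $\{j\}$, $\{i,j\}$ belong to~$\CB'(\Gamma)$ (here we use that $\Gamma$ has at least three vertices, so that $\{i,j\}\ne V$), and
$$
\clambda(a_{\{i\}})+\clambda(a_{\{j\}})+\clambda(a_{\{i,j\}})=b_i+b_j+(b_i+b_j)=0\,.
$$
If $0\in\{i,j\}$, say $i=0$, then similarly
$$
\clambda(a_{\{0\}})+\clambda(a_{\{j\}})+\clambda(a_{\{0,j\}})=\sum_{l\in V\setminus\{0\}}b_l+b_j+\sum_{l\in V\setminus\{0,j\}}b_l=0\,.
$$
In either case the orientability criterion is violated.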
\begin{cor} Suppose that $\Gamma$ is a connected graph with at least three vertices. Then the manifold~$\overline{M}_{P_{\Gamma},\clambda}$ is a URC-manifold. \end{cor} \section{A subdivision of a simplicial cell pseudo-manifold corresponding to a finite graph}\label{section_simplex} In this section we give a construction of a special subdivision of a simplicial cell pseudo-manifold. This subdivision will be used in Section~\ref{section_proof} in the proof of Theorem~\ref{theorem_main}. \begin{defin} A (\textit{finite}) \textit{simplicial cell complex\/} is a quotient of the disjoint union of a finite set of simplices~$\Delta_1,\ldots,\Delta_N$ (possibly, of different dimensions) by an equivalence relation~$\sim$ such that \begin{enumerate} \item The relation~$\sim$ identifies no two distinct points of the same simplex~$\Delta_i$. \item If $x\in \Delta_i$, $x'\in \Delta_j$, and $x\sim x'$, then the relation~$\sim$ identifies a face $F\subseteq \Delta_i$ containing~$x$ with a face $F'\subseteq \Delta_j$ containing~$x'$ by an affine isomorphism. (The simplex is also supposed to be a face of itself.) \end{enumerate} The images of faces of the simplices~$\Delta_i$ under the quotient mapping are called {\it simplices} or {\it faces} of the obtained simplicial cell complex. \end{defin} The difference between a simplicial cell complex and a simplicial complex is that in a simplicial complex the intersection of any two simplices is either empty or is a face of either simplex, while in a simplicial cell complex two simplices can have several common faces. For instance, a decomposition of a circle into two arcs is a simplicial cell complex but not a simplicial complex. However, notice that the standard cell decomposition of a circle with a single one-dimensional cell is not a simplicial cell complex, since condition 1 is violated. All necessary facts about simplicial cell complexes can be found in~\cite{BuPa04}.
The set of vertices of a simplicial cell complex~$Z$ will be denoted by~$V(Z)$. The subcomplex of~$Z$ consisting of all its simplices of dimensions less than or equal to~$k$ is called the $k$-\textit{skeleton} of~$Z$ and is denoted by~$\Sk^k(Z)$. A \textit{regular colouring\/} of vertices of a simplicial cell complex~$Z$ in colours in a set~$A$ is a mapping $C\colon V(Z)\to A$ such that $C(u)\ne C(v)$ for every two vertices~$u$ and~$v$ connected by an edge. For a simplex~$\rho$ of~$Z$ we denote by~$C(\rho)$ the set of colours of vertices of~$\rho$. For~$A$ we shall often take the vertex set~$V(\Gamma)$ of a finite graph~$\Gamma$. Hence colours of vertices of the complex will be vertices of the graph. This should not lead to confusion. A simplicial cell complex~$Z$ is called an $n$-dimensional \textit{simplicial cell pseudo-manifold} if every simplex of~$Z$ is contained in an $n$-dimensional simplex of~$Z$, and any $(n-1)$-dimensional simplex of~$Z$ is contained in exactly two $n$-dimensional simplices of~$Z$. Equivalently, an $n$-dimensional simplicial cell complex~$Z$ is a pseudo-manifold if and only if $Z\setminus\Sk^{n-2}(Z)$ is a (non-compact) manifold without boundary. A pseudo-manifold~$Z$ is said to be \textit{oriented} if its $n$-dimensional simplices are endowed with compatible orientations. By definition, a \textit{topological subdivision\/} of a simplicial cell complex~$Z$ is a pair~$(Y,h)$ such that $Y$ is a simplicial cell complex and $h\colon Y\to Z$ is a piecewise linear homeomorphism such that the pre-image~$h^{-1}(\rho)$ of every simplex~$\rho$ of~$Z$ is a subcomplex of~$Y$. Similarly, a \textit{topological subdivision\/} of a convex polytope~$Q$ is a pair~$(Y,h)$ such that $Y$ is a simplicial cell complex and $h\colon Y\to Q$ is a piecewise linear homeomorphism such that the pre-image~$h^{-1}(F)$ of every face~$F$ of~$Q$ is a subcomplex of~$Y$.
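Both combinatorial conditions just introduced, the pseudo-manifold condition and the regularity of a colouring, are finite checks on a finite complex. The following Python sketch (an illustration only, not from the text) encodes each top-dimensional cell by its vertex tuple; this encoding is faithful for simplicial complexes, though not for general simplicial cell complexes, where distinct cells may share the same vertex set.

```python
from itertools import combinations
from collections import Counter

def is_pseudo_manifold(top_simplices):
    """Every (n-1)-face must lie in exactly two top-dimensional simplices.
    Cells are encoded by their sorted vertex tuples (a simplification
    valid for simplicial complexes)."""
    n = len(top_simplices[0]) - 1
    faces = Counter()
    for s in top_simplices:
        for f in combinations(sorted(s), n):
            faces[f] += 1
    return all(c == 2 for c in faces.values())

def is_regular_colouring(top_simplices, colour):
    """Vertices of a common simplex must get pairwise distinct colours."""
    return all(len({colour[v] for v in s}) == len(s) for s in top_simplices)

# Boundary of the tetrahedron: a 2-dimensional pseudo-manifold,
# regularly coloured by the identity map on vertices.
tetra = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
print(is_pseudo_manifold(tetra))                              # True
print(is_regular_colouring(tetra, {v: v for v in range(4)}))  # True
```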
The introduced concept of a topological subdivision should be distinguished from a more common and more restrictive concept of a \textit{geometric subdivision}. In the definition of a geometric subdivision simplices of~$Z$ (or the polytope~$Q$) are required to be decomposed into true convex simplices rather than into their images under a piecewise linear homeomorphism. For a polytope~$Q$, we denote by~$\Int Q$ the \textit{relative interior\/} of it, that is, the interior of~$Q$ in the affine hull of it. If $Q$ is a point, then $\Int Q=Q$. \begin{propos}\label{propos_subd} Suppose that $\Gamma$ is a connected graph on the vertex set $\{0,\ldots,n\}$. Then any $n$-dimensional simplicial cell pseudo-manifold~$Z$ has a topological subdivision~$(Y,h)$ with a regular colouring of vertices $C\colon V(Y)\to\{0,\ldots,n\}$ satisfying the following condition: \begin{itemize} \item[$(*)$] Every $(n-2)$-dimensional simplex~$\rho$ of~$Y$ such that the two elements of the two-element set $\{0,\ldots,n\}\setminus C(\rho)$ are not connected by an edge in~$\Gamma$ is contained in exactly four $n$-dimensional simplices of~$Y$. \end{itemize} \end{propos} Notice that if $\Gamma=L_{n+1}$ is the path graph, then this proposition is obvious. Indeed, for the required subdivision one can take the barycentric subdivision of~$Z$ with the barycentre of every $i$-dimensional simplex of~$Z$ coloured in colour~$i$. The condition~$(*)$ can be checked easily. The proof of Proposition~\ref{propos_subd} in the general case will be based on the following auxiliary lemma. \begin{lem}\label{lem_subd} Suppose that\/ $\Gamma$ is a connected graph, $|V(\Gamma)|=n+1$, and $a$ is a vertex of\/~$\Gamma$. 
Then there exists a topological subdivision~$(K,f)$ of the standard $n$-dimensional simplex~$\Delta^n$ with a regular colouring of vertices $C\colon V(K)\to V(\Gamma)$ satisfying the following two conditions: \begin{enumerate} \item $C(v)\ne a$ for all vertices $v$ on the boundary of the disk~$K$; \item If $\rho$ is an $(n-2)$-dimensional simplex of~$K$ such that the two elements of the two-element set $V(\Gamma)\setminus C(\rho)$ are not connected by an edge in~$\Gamma$, then either $f(\Int\rho)\subset\Int\Delta^n$ and $\rho$ is contained in exactly four $n$-dimensional simplices of~$K$ or $f(\Int\rho)$ is contained in the relative interior of a facet of~$\Delta^n$ and $\rho$ is contained in exactly two $n$-dimensional simplices of~$K$. \end{enumerate} \end{lem} \begin{proof} Let us prove the assertion of the lemma by induction on~$n$. The base of induction for $n=0$ is trivial, since the conditions~1 and~2 are void. Assume that the assertion of the lemma is true for graphs with not more than $n$ vertices, and prove it for a graph~$\Gamma$ with $n+1$ vertices. Consider the graph~$\Gamma-a$ obtained from~$\Gamma$ by removing the vertex~$a$ and all (open) edges connecting to it. Denote by $\Gamma_1,\ldots,\Gamma_s$ the connected components of $\Gamma-a$. Let $n_1+1,\ldots,n_s+1$ be the numbers of vertices of the graphs $\Gamma_1,\ldots,\Gamma_s$, respectively; then $n=n_1+\cdots+n_s+s$. Since the graph~$\Gamma$ is connected, every connected component~$\Gamma_i$ contains at least one vertex that is connected by an edge with~$a$; we choose an arbitrary vertex satisfying this condition and denote it by~$b_i$. By the induction hypothesis, the assertion of the lemma is true for all pairs~$(\Gamma_i,b_i)$. This means that, for every~$i=1,\ldots,s$, there exists a topological subdivision~$(K_i,f_i)$ of the standard simplex~$\Delta^{n_i}$ with a regular colouring of vertices $C_i\colon V(K_i)\to V(\Gamma_i)$ that satisfies conditions~1 (with $a$ replaced by~$b_i$) and~2. 
For every~$i$, consider the Euclidean space~$\R^{n_i+1}$ with the standard orthonormal basis $e_0,\ldots,e_{n_i}$ and the standard simplex~$\Delta^{n_i}\subset\R^{n_i+1}$ with vertices $e_0,\ldots,e_{n_i}$. Consider the group $G_i\cong\Z_2^{n_i+1}$ of isometries of~$\R^{n_i+1}$ generated by the orthogonal reflections in the coordinate hyperplanes $\{x_0=0\},\ldots,\{x_{n_i}=0\}$, where $x_0,\ldots,x_{n_i}$ are the coordinates in the basis $e_0,\ldots,e_{n_i}$. The simplices~$g(\Delta^{n_i})$, $g\in G_i$, constitute the boundary of the \textit{cross-polytope}~$Q^{n_i+1}$, which is the regular convex polytope in~$\R^{n_i+1}$ with vertices $\pm e_0,\ldots,\pm e_{n_i}$. Subdivide every simplex~$g(\Delta^{n_i})$ by means of the topological subdivision~$(K_i,g\circ f_i)$. Then we obtain a topological subdivision of the boundary of~$Q^{n_i+1}$; we denote this subdivision by~$(\bK_i,\bar f_i)$. The regular colouring~$C_i$ of vertices of~$K_i$ induces a regular colouring~$\overline{C}_i$ of vertices of~$\bK_i$ in colours in the same set~$V(\Gamma_i)$. Condition~2 for~$K_i$ ensures that any $(n_i-2)$-dimensional simplex~$\rho$ of~$\bK_i$ is contained in exactly four $n_i$-dimensional simplices of~$\bK_i$ whenever the two elements of the two-element set~$V(\Gamma_i)\setminus \overline{C}_i(\rho)$ are not connected by an edge. We naturally identify the boundary~$\partial Q^n$ of the cross-polytope $Q^{n}\subset\R^n$ with the join $\partial Q^{n_1+1} * \cdots *\partial Q^{n_s+1}$, and consider the topological subdivision~$(J,q)$ of~$\partial Q^n$ that is obtained by taking the join of the topological subdivisions $(\bK_1,\bar f_1),\ldots,(\bK_s,\bar f_s)$. Then vertices of~$J$ are regularly coloured in colours in the set $V(\Gamma_1)\cup\cdots\cup V(\Gamma_s)=V(\Gamma)\setminus\{a\}$. The cross-polytope~$Q^n$ is naturally identified with the cone over~$\partial Q^n$ with apex at the origin of~$\R^n$. We put $K=\cone(J)$, $\tilde q=\cone(q)$.
Then $(K,\tilde q)$ is a topological subdivision of~$Q^n$. We denote the apex of the cone~$K=\cone(J)$ by~$p$ and colour it in the colour~$a$. Then we obtain a regular colouring~$C$ of vertices of~$K$ in colours in the set~$V(\Gamma)$. Now, consider an arbitrary piecewise linear homeomorphism $\varphi\colon Q^n\to\Delta^n$ such that the pre-image of every face of~$\Delta^n$ is the union of several closed faces of~$Q^n$. For instance, such a homeomorphism can be constructed in the following way. Denote the vertices of~$Q^n$ by $ \pm \varepsilon_1,\ldots,\pm \varepsilon_n$ and the centre of~$Q^n$ by~$o$. (We use the notation~$\pm\varepsilon_i$ instead of~$\pm e_i$ to avoid confusion between vertices of the cross-polytope~$Q^n$ and vertices of the standard simplex~$\Delta^n$.) For a triangulation of the cross-polytope~$Q^n$, we take the cone with apex~$o$ over the natural triangulation of~$\partial Q^n$. Define a mapping~$\varphi$ on the vertices of this triangulation by \begin{gather*} \varphi(\varepsilon_i)=e_i,\qquad \varphi(-\varepsilon_i)=\frac{1}{i}(e_0+\cdots+e_{i-1}),\qquad i=1,\ldots,n,\\ \varphi(o)=\frac{1}{n+1}(e_0+\cdots+e_n), \end{gather*} and extend it linearly to every simplex of the triangulation. It is easy to check that the obtained mapping is a piecewise linear homeomorphism and satisfies the required condition that the pre-images of faces of~$\Delta^n$ are unions of faces of the cross-polytope. Then the mapping $ f=\varphi\circ\tilde q\colon K\to\Delta^n $ provides a topological subdivision of the simplex~$\Delta^n$. Obviously, condition~1 holds for the subdivision~$(K,f)$. Let us prove condition~2. Consider an arbitrary $(n-2)$-dimensional simplex~$\rho$ of~$K$. Suppose that $V(\Gamma)\setminus C(\rho)=\{c_1,c_2\}$, and $c_1$ and~$c_2$ are not connected by an edge in~$\Gamma$. Consider three cases: \textsl{1.
The vertices $c_1$ and $c_2$ lie in distinct components\/~$\Gamma_{j_1}$ and\/~$\Gamma_{j_2}$ of\/~$\Gamma-a$.} Then $a\in C(\rho)$, hence, $p\in \rho$. Therefore, $f(\Int\rho)\subset\Int\Delta^n$. Denote by~$\tau$ the facet of~$\rho$ opposite to the vertex~$p$. We have, $\tau=\tau_1*\cdots*\tau_s$, where $\tau_1,\ldots,\tau_s$ are simplices of the complexes $\bK_1,\ldots,\bK_s$, respectively, such that $\dim\tau_i=n_i$ whenever $i\ne j_1,j_2$, $\dim\tau_{j_1}=n_{j_1}-1$, and $\dim\tau_{j_2}=n_{j_2}-1$. Since every complex~$\bK_i$ is an $n_i$-dimensional simplicial cell sphere, hence, an $n_i$-dimensional pseudo-manifold, it follows that~$\tau$ is contained in exactly four $(n-1)$-dimensional simplices of the complex $J=\bK_1*\cdots*\bK_s$. Therefore, the simplex~$\rho=p*\tau$ is contained in exactly four $n$-dimensional simplices of the complex~$K=\cone(J)$. \textsl{2. The vertices $c_1$ and $c_2$ lie in the same component~$\Gamma_j$ of\/~$\Gamma-a$.} As in the previous case, we have $p\in \rho$, hence, $f(\Int\rho)\subset\Int\Delta^n$. The facet $\tau\subset\rho$ opposite to~$p$ has the decomposition $\tau=\tau_1*\cdots*\tau_s$, where $\tau_1,\ldots,\tau_s$ are simplices of the complexes $\bK_1,\ldots,\bK_s$, respectively, such that $\dim\tau_i=n_i$ unless $i= j$, and $\dim\tau_{j}=n_{j}-2$. As we have already mentioned, condition~2 for~$K_j$ implies that $\tau_j$ is contained in exactly four $n_j$-dimensional simplices of~$\bK_j$. Hence, the simplex~$\rho=p*\tau_1*\cdots*\tau_s$ is contained in exactly four $n$-dimensional simplices of the complex $K=p*\bK_1*\cdots*\bK_s$. \textsl{3. One of the vertices~$c_1$ and~$c_2$ \textnormal{(}say,~$c_1$\textnormal{)} coincides with~$a$.} Then $a\notin C(\rho)$, hence, $\rho$ is contained in $\partial K=J$. Since~$J$ is homeomorphic to the $(n-1)$-dimensional sphere and, in particular, is an $(n-1)$-dimensional pseudo-manifold, we obtain that $\rho$ is contained in exactly two $(n-1)$-dimensional simplices of~$J$. 
Therefore, $\rho$ is contained in exactly two $n$-dimensional simplices of~$K=\cone(J)$. Let us prove that the set $f(\Int\rho)$ is contained in the relative interior of a facet of~$\Delta^n$. Since we already know that $f(\rho)\subset\partial\Delta^n$, it suffices to prove that $f(\rho)$ is contained in no $(n-2)$-dimensional face of~$\Delta^n$. Further, to prove this it suffices to prove that the set $\tilde q(\rho)$ is contained in no $(n-2)$-dimensional face of~$Q^n$. Suppose that $c_2\in V(\Gamma_j)$. Then $\rho=\rho_1*\cdots*\rho_s$, where $\rho_1,\ldots,\rho_s$ are simplices of the complexes $\bK_1,\ldots,\bK_s$, respectively, such that $\dim\rho_i=n_i$ unless $i= j$, and $\dim\rho_j=n_j-1$. Then $q(\rho)=\bar f_1(\rho_1)*\cdots*\bar f_s(\rho_s)$. Since the subdivision~$(J,q)$ of~$\partial Q^n$ is invariant under the action of the group $G=G_1\times\cdots\times G_s$, we may assume without loss of generality that the simplices $\rho_1,\ldots,\rho_s$ are contained in the subcomplexes $K_1,\ldots,K_s$ of the complexes $\bK_1,\ldots,\bK_s$, respectively. Then, for every $i\ne j$, the set~$f_i(\rho_i)$ contains a point in the relative interior of~$\Delta^{n_i}$. Since the vertices $c_1=a$ and~$c_2$ are not connected by an edge, we have $c_2\ne b_j$. Hence, $b_j\in C_j(\rho_j)$. Therefore, condition~1 for the topological subdivision~$(K_j,f_j)$ implies that the simplex~$\rho_j$ contains a vertex that does not lie on the boundary of the disk~$K_j$. Consequently, the set~$f_j(\rho_j)$ also contains a point in the relative interior of~$\Delta^{n_j}$. Thus, the set~$q(\rho)$ contains a point in the relative interior of the facet~$\Delta^{n-1}=\Delta^{n_1}*\cdots *\Delta^{n_s}$ of~$Q^n$. \end{proof} \begin{proof}[Proof of Proposition~\ref{propos_subd}] Replacing, if necessary, the pseudo-manifold~$Z$ with its barycentric subdivision, we may assume that the vertices of~$Z$ admit a regular colouring in colours in the set~$\{0,\ldots,n\}$.
For each $n$-dimensional simplex~$\sigma$ of~$Z$, consider the affine isomorphism $\psi_{\sigma}\colon\Delta^n\to\sigma$ taking the vertices $e_0,\ldots,e_n$ of~$\Delta^n$ to the vertices of~$\sigma$ of colours~$0,\ldots,n$, respectively. Let $f\colon K\to \Delta^n$ be the topological subdivision in Lemma~\ref{lem_subd} for the graph~$\Gamma$ and a vertex~$a$ of it. Subdivide every $n$-dimensional simplex~$\sigma$ of~$Z$ by means of the subdivision~$(K,\psi_{\sigma}\circ f)$. Denote the obtained subdivision of~$Z$ by~$(Y,h)$. Condition~$(*)$ for the subdivision~$(Y,h)$ follows immediately from condition~2 in Lemma~\ref{lem_subd} for the subdivision~$(K,f)$. \end{proof} \section{A combinatorial construction}\label{section_constr} In this section we describe an auxiliary combinatorial construction that will be used in the next section in the proof of Theorem~\ref{theorem_main}. This construction generalizes the construction suggested by the author in~\cite{Gai07} and developed in~\cite{Gai08a}, \cite{Gai08b}, \cite{Gai08c}, and~\cite{Gai13}. Let $\Gamma$ be a connected graph on the vertex set $\{0,\ldots,n\}$. Put $m=|\CB'(\Gamma)|$. Then $m$ is the number of facets of the graph-associahedron~$P_{\Gamma}$. As above, we conveniently index elements of the standard basis of~$\Z_2^m$ by subsets~$S\in\CB'(\Gamma)$, and denote the basis element corresponding to a subset~$S$ by~$a_S$. Let $\Sigma=\Sigma_+\sqcup \Sigma_-$ be a finite set with involutions $\xi_0,\ldots,\xi_n$ such that every~$\xi_i$ exchanges the subsets~$\Sigma_+$ and~$\Sigma_-$. Assume that the involutions~$\xi_i$ and~$\xi_j$ commute for any vertices~$i$ and~$j$ that are not connected by an edge in~$\Gamma$. Suppose that $S\in\CB'(\Gamma)$. Denote by~$\min(S)$ the least element of the set~$S$. 
Let~$\CI_{S}$ be the set consisting of all involutions $\mu\colon \Sigma\to \Sigma$ such that $$ \mu=\xi_{i_1}\circ\cdots\circ\xi_{i_q}\circ\xi_{\min(S)}\circ\xi_{i_q}\circ\cdots\circ\xi_{i_1} $$ for some elements $i_1,\ldots,i_q\in S$, which are not required to be distinct. Since every involution~$\xi_i$ exchanges the subsets~$\Sigma_+$ and~$\Sigma_-$, the same holds true for any involution~$\mu\in \CI_{S}$. It is easy to see that if $S_1,S_2\in\CB'(\Gamma)$ and $S_1\subseteq S_2$, then the involution $\mu_1\circ\mu_2\circ\mu_1$ belongs to~$\CI_{S_2}$ for any~$\mu_1\in \CI_{S_1}$ and~$\mu_2\in\CI_{S_2}$. Now, suppose that $S_1,S_2\in\CB'(\Gamma)$ and $S_1\cup S_2\notin\CB(\Gamma)$. Then $S_1\cap S_2=\varnothing$ and no element of~$S_1$ is connected by an edge in~$\Gamma$ with any element of~$S_2$. Hence the involutions~$\xi_{i_1}$ and~$\xi_{i_2}$ commute for any $i_1\in S_1$ and~$i_2\in S_2$. Therefore, any involutions~$\mu_1\in\CI_{S_1}$ and~$\mu_2\in\CI_{S_2}$ commute. We put, \begin{equation}\label{eq_Omega} \Omega_{\pm}=\Sigma_{\pm}\times\left(\prod_{T\in\CB'(\Gamma)}\CI_{T}\right)\times\Z_2^{m},\qquad \Omega=\Omega_+\sqcup\Omega_-\,. \end{equation} An arbitrary element of~$\Omega$ has the form $\bigl(\sigma,(\mu_{T})_{T\in\CB'(\Gamma)},g\bigr)$, where $\sigma\in \Sigma$, $\mu_{T}\in\CI_{T}$ for all~$T\in\CB'(\Gamma)$, and $g\in \Z_2^{m}$. We define the mappings $\varphi_{S}\colon \Omega\to \Omega$, $S\in\CB'(\Gamma)$, by \begin{gather}\label{eq_phi} \varphi_{S}\bigl(\sigma,(\mu_{T})_{T\in\CB'(\Gamma)},g\bigr)= \bigl(\mu_{S}(\sigma),(\widetilde\mu_{T})_{T\in\CB'(\Gamma)},g+a_{S}\bigr),\\ \widetilde\mu_{T}=\left\{ \begin{aligned} &\mu_{S}\circ\mu_{T}\circ\mu_{S}&&\text{if $S\subseteq T$,}\\ &\mu_{T}&&\text{if $S\not\subseteq T$.} \end{aligned} \right.\nonumber \end{gather} It has already been mentioned above that if $S\subseteq T$ then the involution $\mu_{S}\circ\mu_{T}\circ\mu_{S}$ belongs to~$\CI_{T}$. 
Hence, $\widetilde\mu_{T}\in\CI_{T}$ for all~$T\in\CB'(\Gamma)$. Since $\widetilde\mu_S=\mu_S$, we obtain that $\varphi_S^2=\mathrm{id}_{\Omega}$, that is, $\varphi_S$ is an involution. The involution~$\mu_S$ exchanges the subsets~$\Sigma_+$ and~$\Sigma_-$. Therefore, the involution~$\varphi_{S}$ exchanges the subsets~$\Omega_+$ and~$\Omega_-$. \begin{propos}\label{propos_phi_commute} The involutions~$\varphi_{S_1}$ and~$\varphi_{S_2}$ commute for any $S_1,S_2\in \CB'(\Gamma)$ such that the facets~$F_{S_1}$ and~$F_{S_2}$ of~$P_{\Gamma}$ intersect each other. \end{propos} \begin{proof} The facets~$F_{S_1}$ and~$F_{S_2}$ intersect each other if and only if either one of the two subsets~$S_1$ and~$S_2$ is contained in the other or $S_1\cup S_2\notin\CB(\Gamma)$. In the first case, we may assume that $S_1\subseteq S_2$. A direct computation using~\eqref{eq_phi} yields \begin{gather*} \begin{split} (\varphi_{S_1}\circ\varphi_{S_2})\bigl(\sigma,(\mu_{T})_{T\in\CB'(\Gamma)},g\bigr)=(\varphi_{S_2}\circ\varphi_{S_1})\bigl(\sigma,(\mu_{T})_{T\in\CB'(\Gamma)},g\bigr)={} \\ {}=\bigl((\mu_{S_1}\circ\mu_{S_2})(\sigma),(\widehat\mu_{T})_{T\in\CB'(\Gamma)},g+a_{S_1}+a_{S_2}\bigr), \end{split}\\ \widehat\mu_{T}=\left\{ \begin{aligned} &\mu_{S_1}\circ\mu_{S_2}\circ\mu_{T}\circ\mu_{S_2}\circ\mu_{S_1}&&\text{if $S_2\subseteq T$,}\\ &\mu_{S_1}\circ\mu_{T}\circ\mu_{S_1}&&\text{if $S_1\subseteq T$ and $S_2\not\subseteq T$,}\\ &\mu_{T},&&\text{if $S_1\not\subseteq T$.} \end{aligned} \right.\nonumber \end{gather*} In the second case, we have $S_1\cup S_2\notin\CB(\Gamma)$. Hence any involution in~$\CI_{S_1}$ commutes with any involution in~$\CI_{S_2}$, which immediately implies that $\varphi_{S_1}\circ\varphi_{S_2}=\varphi_{S_2}\circ\varphi_{S_1}$. 
\end{proof} \section{Proof of Theorem~\ref{theorem_main}}\label{section_proof} It follows easily from the definition of singular homology that any homology class~$z\in H_n(X;\Z)$ of any topological space~$X$ can be realized as a continuous image of the fundamental homology class of an oriented $n$-dimensional simplicial cell pseudo-manifold~$Z$, i.\,e., there exists a continuous mapping $\alpha\colon Z\to X$ such that $\alpha_*[Z]=z$. By Proposition~\ref{propos_subd}, there exists a topological subdivision~$(Y,h)$ of~$Z$ with a regular colouring of vertices $C\colon V(Y)\to\{0,\ldots,n\}$ satisfying condition~$(*)$. Then $(\alpha\circ h)_*[Y]=z$. We denote by~$\Sigma$ the set of $n$-dimensional simplices of~$Y$. For each simplex~$\sigma\in\Sigma$, consider the affine isomorphism $\psi_{\sigma}\colon\Delta^n\to\sigma$ taking the vertices $e_0,\ldots,e_n$ of the standard simplex~$\Delta^n$ to the vertices of~$\sigma$ of colours~$0,\ldots,n$, respectively. We denote by~$\Sigma_+$ the set of all simplices $\sigma\in\Sigma$ such that the isomorphism~$\psi_{\sigma}$ preserves the orientation, and we denote by~$\Sigma_-$ the set of all simplices~$\sigma\in\Sigma$ such that the isomorphism~$\psi_{\sigma}$ reverses the orientation. Obviously, if two different simplices $\sigma_1,\sigma_2\in\Sigma$ have a common $(n-1)$-dimensional face, then one of these two simplices belongs to~$\Sigma_+$, and the other belongs to~$\Sigma_-$. For each $i\in\{0,\ldots,n\}$ and each~$\sigma\in\Sigma$, we denote by~$\xi_i(\sigma)$ the unique simplex in~$\Sigma$ such that $\xi_i(\sigma)\ne\sigma$ and the simplices~$\sigma$ and~$\xi_i(\sigma)$ have a common facet~$\tau$ whose set of colours of vertices is $$C(\tau)=\{0,\ldots,n\}\setminus\{i\}.$$ (Since $Y$ is not necessarily a simplicial complex but only a simplicial cell complex, it is possible that $\tau$ is not the unique common facet of the simplices~$\sigma$ and~$\xi_i(\sigma)$.)
Then $\xi_i\colon\Sigma\to\Sigma$ are involutions satisfying $\xi_i(\Sigma_+)=\Sigma_-$ and $\xi_i(\Sigma_-)=\Sigma_+$. By condition~$(*)$ in Proposition~\ref{propos_subd}, each $(n-2)$-dimensional simplex~$\rho$ of~$Y$ such that the two elements of the two-element set $\{i,j\}=\{0,\ldots,n\}\setminus C(\rho)$ are not connected by an edge in~$\Gamma$ is contained in exactly four $n$-dimensional simplices. It is easy to see that if we denote by~$\sigma$ one of these four simplices, then the three others will be $\xi_i(\sigma)$, $\xi_j(\sigma)$, and $\xi_i(\xi_j(\sigma))=\xi_j(\xi_i(\sigma))$. Thus, the involutions~$\xi_i$ and~$\xi_j$ commute whenever $i$ and~$j$ are not connected by an edge in~$\Gamma$. We apply the construction in the previous section to the set~$\Sigma$ with the involutions~$\xi_0,\ldots,\xi_n$. This means that we introduce the set $\Omega=\Omega_+\sqcup\Omega_-$ and the involutions $\varphi_S\colon\Omega\to\Omega$, $S\in\CB'(\Gamma)$, by formulas~\eqref{eq_Omega} and~\eqref{eq_phi}. We denote by~$\Phi$ the subgroup of the symmetric group on~$\Omega$ generated by the involutions~$\varphi_S$, $S\in\CB'(\Gamma)$. The action of the involutions~$\varphi_S$ on the factor~$\Z_2^m$ in decomposition~\eqref{eq_Omega} yields a well-defined epimorphism $\varkappa\colon\Phi\to\Z_2^m$ such that $\varkappa(\varphi_S)=a_S$ for all~$S$. For each point $x\in P_{\Gamma}$, we denote by~$\Phi(x)$ the subgroup of~$\Phi$ generated by all~$\varphi_S$ such that~$x\in F_S$. Suppose that~$x$ lies in the relative interior of an $(n-k)$-dimensional face $F_{S_1}\cap\cdots\cap F_{S_k}$ of~$P_{\Gamma}$. By Proposition~\ref{propos_phi_commute}, the involutions $\varphi_{S_1},\ldots,\varphi_{S_k}$ pairwise commute. Hence the restriction of~$\varkappa$ to~$\Phi(x)$ is an isomorphism onto the subgroup $\Z_2^k(x)\subset\Z_2^m$ generated by the elements~$a_{S_1},\ldots,a_{S_k}$.
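The involutions~$\xi_i$ introduced above admit a direct combinatorial implementation: $\xi_i(\sigma)$ is found by deleting from~$\sigma$ its vertex of colour~$i$ and locating the other top-dimensional simplex containing the resulting facet. The Python sketch below runs this on an assumed toy example, the boundary of the octahedron coloured by coordinate axes; this complex does not arise from the constructions of the paper, and in it all the flips happen to commute, whereas in general $\xi_i$ and~$\xi_j$ are only guaranteed to commute when $i$ and~$j$ are not adjacent in~$\Gamma$.

```python
from itertools import product

def xi(i, sigma, top_simplices, colour):
    """xi_i(sigma): the unique other top simplex sharing with `sigma`
    the facet whose vertex colours are all colours except i."""
    facet = frozenset(v for v in sigma if colour[v] != i)
    [other] = [s for s in top_simplices if s != sigma and facet <= s]
    return other

# Boundary of the octahedron: vertices (axis, sign), coloured by axis.
octa = [frozenset({(0, a), (1, b), (2, c)})
        for a, b, c in product((1, -1), repeat=3)]
colour = {v: v[0] for s in octa for v in s}

sigma = octa[0]                  # the simplex {(0,1), (1,1), (2,1)}
s1 = xi(0, sigma, octa, colour)  # flips the sign of the colour-0 vertex
# In this toy example all flips commute:
assert (xi(1, xi(0, sigma, octa, colour), octa, colour)
        == xi(0, xi(1, sigma, octa, colour), octa, colour))
print(sorted(s1))                # [(0, -1), (1, 1), (2, 1)]
```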
We put \begin{equation*} N=(P_{\Gamma}\times\Omega)/\sim\,, \end{equation*} where $\sim$ is the equivalence relation on $P_{\Gamma}\times\Omega$ such that $(x,\omega)\sim (x',\omega')$ if and only if $x=x'$ and $\theta(\omega)=\omega'$ for some~$\theta\in\Phi(x)$. We denote by~$[x,\omega]$ the point in~$N$ corresponding to the equivalence class of the pair~$(x,\omega)$. Consider the mapping $p\colon N\to\CR_{P_{\Gamma}}$ given by \begin{equation*} p([x,\omega])=[x,\pr_3(\omega)], \end{equation*} where $\pr_3$ is the projection onto the third factor in decomposition~\eqref{eq_Omega}. \begin{propos} The mapping $p$ is an $r$-fold covering, where $$r=|\Sigma|\prod_{T\in\CB'(\Gamma)}|\CI_T|.$$ \end{propos} \begin{proof} Since the spaces~$N$ and~$\CR_{P_{\Gamma}}$ are compact and~$\CR_{P_{\Gamma}}$ is connected, to prove that $p$ is a covering, it is sufficient to prove that $p$ is a local homeomorphism, that is, to prove that for each point~$[x,\omega]\in N$ the restriction of $p$ to a neighborhood of~$[x,\omega]$ is a homeomorphism onto a neighborhood of~$p([x,\omega])$. Suppose that the point~$x$ lies in the relative interior of an $(n-k)$-dimensional face $F_{S_1}\cap\cdots\cap F_{S_k}$ of~$P_{\Gamma}$, and $\omega=(\sigma,(\mu_{T})_{T\in\CB'(\Gamma)},g)$. Then $p([x,\omega])=[x,g]$. Every point in a neighborhood of~$[x,g]$ has the form $[y,g+h]$, where $y$ is close to~$x$ and $h\in\Z_2^k(x)$. A local inverse~$q$ of~$p$ is given by $q([y,g+h])=[y,\nu(h)(\omega)]$, where $\nu\colon\Z_2^k(x)\to\Phi(x)$ is the isomorphism inverse to~$\varkappa|_{\Phi(x)}$. It can be immediately checked that the mapping~$q$ is well defined and continuous in a neighborhood of~$[x,g]$, and is both a left and a right inverse of~$p$. Therefore, $p$ is a local homeomorphism, hence, a covering. The manifold~$\CR_{P_{\Gamma}}$ is glued out of $2^{m}$ copies of~$P_{\Gamma}$, and the manifold~$N$ is glued out of~$2^{m}r$ copies of~$P_{\Gamma}$.
The mapping~$p$ maps every copy of~$P_{\Gamma}$ in the decomposition of~$N$ isomorphically onto a copy of~$P_{\Gamma}$ in the decomposition of~$\CR_{P_{\Gamma}}$. Hence, the number of sheets of the covering~$p$ is equal to~$r$. \end{proof} For $\omega=(\sigma,(\mu_T)_{T\in\CB'(\Gamma)},g)$, we put~$\varepsilon'(\omega)=1$ if $\sigma\in\Sigma_+$ and $\varepsilon'(\omega)=-1$ if $\sigma\in\Sigma_-$. Consider the homomorphism $\eta\colon\Z_2^{m}\to\Z_2$ that takes every~$a_S$ to the generator~$1$ of the group~$\Z_2=\Z/2\Z$, and put $\varepsilon''(\omega)=(-1)^{\eta(g)}$. Further, we put $\varepsilon(\omega)=\varepsilon'(\omega)\varepsilon''(\omega)$. It follows from~\eqref{eq_phi} that $\varepsilon(\varphi_S(\omega))=\varepsilon(\omega)$ for all~$S$ and~$\omega$. Hence, $\varepsilon(\omega_1)=\varepsilon(\omega_2)$ whenever $[x,\omega_1]=[x,\omega_2]$. Therefore, the manifold~$N$ is the disjoint union of the manifold~$N_+$ consisting of all points~$[x,\omega]$ such that $\varepsilon(\omega)=1$ and the manifold~$N_-$ consisting of all points~$[x,\omega]$ such that $\varepsilon(\omega)=-1$. Each of the manifolds~$N_+$ and~$N_-$ is an $(r/2)$-fold covering of~$\CR_{P_{\Gamma}}$. Consider the mapping $\gamma\colon N_+\to Y$ given by \begin{equation}\label{eq_gamma} \gamma([x,(\sigma,(\mu_T)_{T\in\CB'(\Gamma)},g)])=\psi_{\sigma}(\pi_{\Gamma}(x)), \end{equation} where $\pi_{\Gamma}\colon P_{\Gamma}\to\Delta^n$ is the mapping constructed in Section~\ref{subsection_as}. \begin{propos} The mapping $\gamma$ is well defined, continuous, and \begin{equation*} \gamma_*[N_+]=s[Y],\qquad s=2^{m-1}\prod_{T\in\CB'(\Gamma)}|\CI_T|. \end{equation*} \end{propos} \begin{proof} To prove that~$\gamma$ is well defined and continuous, we need to show that the values~$\gamma([x,\omega])$ and~$\gamma([x,\omega'])$ computed by~\eqref{eq_gamma} are equal to each other whenever $(x,\omega)\sim (x,\omega')$.
To this end, it is sufficient to show that the values~$\gamma([x,\omega])$ and~$\gamma([x,\varphi_S(\omega)])$ are equal to each other whenever $x\in F_S$. Suppose that $\omega=(\sigma,(\mu_T)_{T\in\CB'(\Gamma)},g)$; then $\varphi_S(\omega)=(\mu_S(\sigma),(\widetilde{\mu}_T)_{T\in\CB'(\Gamma)},g+a_S)$. It follows from assertion~1 of Proposition~\ref{propos_pi} that $\pi_{\Gamma}(x)\in\Delta_{V\setminus S}$ whenever $x\in F_S$. Hence, $\psi_{\xi_i(\tau)}(\pi_{\Gamma}(x))=\psi_{\tau}(\pi_{\Gamma}(x))$ for all~$\tau\in\Sigma$ and all~$i\in S$. Therefore, $\psi_{\mu_S(\sigma)}(\pi_{\Gamma}(x))=\psi_{\sigma}(\pi_{\Gamma}(x))$, which is exactly what we need to prove. We choose the orientation of~$\CR_{P_{\Gamma}}$ so that the embedding of~$P_{\Gamma}$ into~$\CR_{P_{\Gamma}}$ given by~$x\mapsto[x,g]$ preserves the orientation if and only if $\eta(g)=0$. As before, we endow the covering~$N_+$ of~$\CR_{P_{\Gamma}}$ with the induced orientation. The embedding of~$P_{\Gamma}$ into~$N_+$ given by~$x\mapsto[x,\omega]$ preserves the orientation if and only if $\varepsilon''(\omega)=1$. Since $\varepsilon(\omega)=1$, the latter is equivalent to~$\varepsilon'(\omega)=1$. The embedding $\psi_{\sigma}\colon\Delta^n\to Y$ also preserves the orientation if and only if~$\varepsilon'(\omega)=1$. By assertion~3 of Proposition~\ref{propos_pi}, the mapping~$\pi_{\Gamma}$ has degree~$1$. Therefore, $\gamma$ maps every cell of~$N_+$ isomorphic to~$P_{\Gamma}$ onto a simplex of~$Y$ with degree~$1$. Besides, for each simplex of~$Y$, there are exactly~$s$ cells of~$N_+$ that are mapped onto it. Consequently, $\gamma_*[N_+]=s[Y]$. \end{proof} Thus, starting from an arbitrary homology class~$z\in H_n(X;\Z)$, we have constructed an $(r/2)$-fold covering~$N_+$ of~$\CR_{P_{\Gamma}}$ and a mapping $f=\alpha\circ h \circ\gamma$ of~$N_+$ to~$X$ such that $f_*[N_+]=sz$ for certain~$s>0$. Hence, $\CR_{P_{\Gamma}}$ is a URC-manifold. 
\begin{remark} We could define the mapping~$\gamma$ of the whole manifold~$N$ onto~$Y$ again by~\eqref{eq_gamma}. Nevertheless, this mapping would have zero degree, since the $n$-dimensional cells of~$N_+$ would be mapped onto simplices of~$Y$ preserving the orientation, and the $n$-dimensional cells of~$N_-$ would be mapped onto simplices of~$Y$ reversing the orientation. \end{remark} \section{Looking for the smallest URC-manifold}\label{section_small} Since for $n\ge 3$ the class of $n$-dimensional URC-manifolds is rather extensive, an interesting problem is to find an $n$-dimensional URC-manifold that is in some sense the smallest in this class. In the two-dimensional case, URC-manifolds are exactly the oriented surfaces of genera $g\ge 2$. Hence, in any reasonable sense the smallest of them is the surface of genus~$2$. In the high-dimensional case, the situation is more complicated, and the answer to the question on the smallest URC-manifold depends on how different manifolds are compared. The problem on realization of cycles by images of spheres is the classical problem on the image of the Hurewicz homomorphism. It is well known that the Hurewicz homomorphism $\pi_n(X)\otimes \Q\to H_n(X;\Q)$ for $n\ge 2$ is not always surjective. Hence, the sphere~$S^n$ is by no means a URC-manifold. However, notice that the classical results by Serre~\cite{Ser51} imply that after taking a multiple suspension the Hurewicz homomorphism $\pi_{n+N}(\Sigma^NX)\otimes \Q\to H_{n+N}(\Sigma^NX;\Q)\cong H_n(X;\Q)$ becomes an isomorphism. Therefore, any homology class can be stably realized with some multiplicity by an image of the fundamental class of the sphere. The construction of the Chern--Dold character in an arbitrary extraordinary cohomology theory due to Buchstaber~\cite{Buc70-2} is based upon this fact. In the present paper we always consider the question on unstable realization of cycles.
It is easy to see that if a mapping $f\colon M^n\to N^n$ has nonzero degree, then the image of the homomorphism $f_*\colon \pi_1(M^n)\to \pi_1(N^n)$ has finite index in~$\pi_1(N^n)$, and the homomorphism $f_*\colon H_*(M^n;\Q)\to H_*(N^n;\Q)$ is surjective. It follows easily that no manifold with finite fundamental group is a URC-manifold. Moreover, any URC-manifold must have connected finite-fold coverings with arbitrarily large Betti numbers. It follows from a result by Kotschick and L\"oh~\cite{KoLo09} that no direct product of two manifolds of positive dimensions is a URC-manifold. In dimension~$3$, more is known about the domination relation than in higher dimensions. In particular, Sun~\cite{Sun15} has recently shown that any oriented hyperbolic manifold is a URC-manifold. (A manifold is called \textit{hyperbolic\/} if it admits a Riemannian metric of constant negative curvature.) There are many examples of three-dimensional homology spheres that are hyperbolic manifolds. By Sun's result, they are all URC-manifolds. \begin{quest}\label{quest_homsphere} For which $n\ge 4$ do there exist $n$-dimensional homology spheres \textnormal{(}or at least rational homology spheres\textnormal{)} that are URC-manifolds? \end{quest} \begin{remark} It has already been mentioned above that any nonzero degree mapping $f\colon M^n\to N^n$ induces a surjective homomorphism in homology with rational coefficients. Hence no rational homology sphere can dominate a manifold that is not a rational homology sphere. However, this does not prevent a homology sphere from being a URC-manifold, since coverings of homology spheres can have nonzero (and arbitrarily large) Betti numbers. \end{remark} Let $\K$ be a field.
Recall that the \textit{Betti numbers with coefficients in~$\K$} of a space~$X$ are the numbers $$\beta_i^{\K}(X)=\dim_{\K}H_i(X;\K),$$ and the \textit{total Betti number with coefficients in~$\K$} of~$X$ is the sum $$\beta^{\K}(X)=\sum\beta_i^{\K}(X).$$ If $\K=\Q$ is the field of rational numbers, then for spaces with finitely generated homology groups, the numbers~$\beta_i^{\Q}$ are the usual Betti numbers, i.\,e., the ranks of the free parts of the groups~$H_i(X;\Z)$, respectively. Since the answer to Question~\ref{quest_homsphere} is unknown, it is reasonable to consider the following more general problem. \begin{problem} For every dimension $n\ge 4$, find an $n$-dimensional URC-\allowbreak manifold~$M^n$ with the smallest total Betti number~$\beta^{\K}(M^n)$. Does there exist a URC-manifold $M^n$ such that for any other URC-manifold~$N^n$ of the same dimension, the inequalities $\beta_i^{\K}(N^n)\ge \beta_i^{\K}(M^n)$ hold true for all~$i$? \end{problem} In light of this problem it would be interesting to find out which of the already constructed URC-manifolds has the smallest total Betti number. Thus, the following natural question arises. \begin{quest}\label{quest_sc} Let $\CC_n$ be the class consisting of all orientable manifolds~$M_{P_{\Gamma},\lambda}$ and all two-fold orientation coverings~$\overline{M}_{P_{\Gamma},\lambda}$ of non-orientable manifolds~$M_{P_{\Gamma},\lambda}$, where $\Gamma$ runs over connected graphs on the vertex set $\{0,\ldots,n\}$. Which of the manifolds in~$\CC_n$ has the smallest total Betti number with coefficients in~$\K$? \end{quest} We shall be interested in the cases $\K=\Q$ and $\K=\Z_2$. In light of the problem on realization of cycles with multiplicities studied in the present paper, the most natural characteristics of URC-manifolds are the Betti numbers with coefficients in~$\Q$. On the other hand, Betti numbers with coefficients in~$\Z_2$ are easier to compute for small covers.
The author does not know the answer to Question~\ref{quest_sc}. Nevertheless, we shall at least show that among the URC-manifolds constructed in the present paper there are manifolds whose total Betti numbers are much smaller than the total Betti numbers of small covers of permutohedra, which have been known to be URC-manifolds before. In particular, for such manifolds we can take the manifolds~$\overline{M}_{\As^n,\clambda}$. This supports our intuition that the small covers of~$\As^n$ must be the `smallest' among the small covers of graph-associahedra corresponding to connected graphs, and must be `much smaller' than the small covers of~$\Pe^n$, at least for~$n$ large enough. This intuition originates from a theorem of Buchstaber and Volodin~\cite{BuVo11} claiming that the numbers of faces of graph-associahedra corresponding to connected graphs are always greater than or equal to the corresponding numbers of faces of Stasheff associahedra~$\As^n$, and the same is true for other important characteristics such as $h$-numbers and $\gamma$-numbers. Moreover, for large~$n$, the numbers of faces of~$\As^n$ are much smaller than the numbers of faces of~$\Pe^n$. For instance, the number of facets of~$\As^n$ is equal to $n(n+3)/2$ while the number of facets of~$\Pe^n$ is equal to~$2^{n+1}-2$. In Section~\ref{subsection_simp_vol} we shall also consider simplicial volume as another important characteristic that allows us to compare URC-manifolds. \subsection{Betti numbers with coefficients in~$\Q$} \label{subsection_rational} Notice that if $\overline{M}^n$ is a two-fold orientation covering of a non-orientable closed manifold~$M^n$, then by a classical result due to Eckmann~\cite{Eck49} we have $\beta^{\Q}(\overline{M}^n)=2\beta^{\Q}(M^n)$. Moreover, \begin{equation}\label{eq_Brash} \beta_i^{\Q}(\overline{M}^n)=\beta_i^{\Q}(M^n)+\beta_{n-i}^{\Q}(M^n) \end{equation} for all~$i$, see~\cite{Bra69}.
Computation of the Betti numbers with coefficients in~$\Q$ of a small cover~$M_{P,\lambda}$ is generally a hard problem. There is an unpublished formula by Suciu and Trevisan that reduces this computation to the computation of the homology groups for certain unions of faces of~$P$. For small covers of graph-associahedra~$M_{P_{\Gamma},\clambda}$ corresponding to the canonical characteristic function~$\clambda$ coming from the Delzant structure, Choi and Park~\cite{ChPa15} obtained an explicit formula for the Betti numbers with coefficients in~$\Q$ in terms of special invariants of the graph~$\Gamma$. However, even in this case the problem of finding a graph with the smallest total Betti number of~$M_{P_{\Gamma},\clambda}$ remains unsolved, though it is very likely that the minimum is attained for the path graph~$L_{n+1}$. We shall restrict ourselves to considering three examples for which the Betti numbers with coefficients in~$\Q$ can be computed explicitly: 1. The Tomei manifold~$M^n_0=M_{\Pe^n,\lambda_0}$. This manifold is orientable. Its Betti numbers with coefficients in~$\Q$ were computed by Fried~\cite{Fri86}. They are equal to the \textit{Eulerian numbers of the first kind}: $\beta_{i}^{\Q}(M^n_0)=A(n+1,i)$. Recall that the Eulerian number of the first kind~$A(m,k)$ is the number of permutations of the numbers from~$1$ to~$m$ with exactly $k$ ascents, that is, permutations $(\nu_1,\ldots,\nu_m)$ such that there are exactly $k$ indices~$s>1$ for which $\nu_s>\nu_{s-1}$. The total Betti number of the Tomei manifold is equal to $(n+1)!$\,. Notice that the $h$-numbers of the permutohedron~$\Pe^n$ are also equal to the Eulerian numbers of the first kind, $h_i(\Pe^n)=A(n+1,i)$, see~\cite{PRW08}, \cite{Buc08}. As we have already mentioned, the Betti numbers with coefficients in~$\Z_2$ of a small cover~$M_{P,\lambda}$ are independent of~$\lambda$ and equal to the $h$-numbers of~$P$. In particular, $\beta_{i}^{\Z_2}(M^n_0)=\beta_{i}^{\Q}(M^n_0)=A(n+1,i)$.
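These combinatorial values are easy to verify by brute force. The following short script (illustrative only, not part of the paper) counts permutations by ascents and checks, for $n=4$, that the Betti numbers of the Tomei manifold are symmetric and sum to $(n+1)!$:

```python
from itertools import permutations
from math import factorial

def eulerian(m, k):
    """A(m, k): number of permutations of 1..m with exactly k ascents,
    i.e., indices s > 1 with nu_s > nu_{s-1}."""
    return sum(
        1
        for p in permutations(range(1, m + 1))
        if sum(p[s] > p[s - 1] for s in range(1, m)) == k
    )

n = 4  # dimension of the Tomei manifold M^n_0
betti = [eulerian(n + 1, i) for i in range(n + 1)]
assert betti == [1, 26, 66, 26, 1]
assert sum(betti) == factorial(n + 1)  # total Betti number is (n+1)!
assert betti == betti[::-1]            # symmetry (Poincare duality)
```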
The coincidence of the Betti numbers with coefficients in~$\Q$ and in~$\Z_2$ is a specific property of the Tomei manifolds. Generally, the integral homology of a small cover often contains $2$-torsion, which implies that their Betti numbers with coefficients in~$\Q$ are smaller than their Betti numbers with coefficients in~$\Z_2$, that is, the $h$-numbers of~$P$. 2. The second example is also a small cover of the permutohedron but corresponding to the canonical Delzant characteristic function~$\clambda$. This manifold $M_{\Pe^n,\clambda}$ is the set of real points of a smooth projective toric variety, which is called the \textit{Hessenberg variety}. For $n\ge 2$, the manifold~$M_{\Pe^n,\clambda}$ is non-orientable. Its Betti numbers with coefficients in~$\Q$ were computed by Henderson~\cite{Hen12} (see also~\cite{ChPa15}): $$ \beta_{i}^{\Q}(M_{\Pe^n,\clambda})=\binom{n+1}{2i}E_{2i}, $$ where $E_{m}$ is the \textit{Euler zigzag number}, that is, the number of permutations $(\nu_1,\ldots,\nu_m)$ such that $\nu_1<\nu_2>\nu_3<\nu_4>\cdots$ (the signs~$<$ and~$>$ alternate). Equivalently, $E_m$ are the coefficients in the decomposition $$ \sec t+\tan t=\sum_{m=0}^{\infty}\frac{E_m}{m!}\,t^m. $$ Hence, the Betti numbers of the two-fold orientation covering~$\overline{M}_{\Pe^n,\clambda}$ of~$M_{\Pe^n,\clambda}$ are given by \begin{gather*} \beta_{i}^{\Q}(\overline{M}_{\Pe^n,\clambda})=\binom{n+1}{2i}E_{2i}+\binom{n+1}{2n-2i}E_{2n-2i},\\ \beta^{\Q}(\overline{M}_{\Pe^n,\clambda})=2\sum_{i=0}^{\left[\frac{n+1}{2}\right]}\binom{n+1}{2i}E_{2i}. \end{gather*} 3. Now, consider the small cover~$M_{\As^n,\clambda}$. For $n\ge 2$, it is also non-orientable. Its Betti numbers with coefficients in~$\Q$ were computed by Choi and Park~\cite{ChPa15}: $$ \beta_{i}^{\Q}(M_{\As^n,\clambda})=\left\{ \begin{aligned} &\binom{n+1}{i}-\binom{n+1}{i-1},&&&0\le i\le \left[\frac{n+1}{2}\right]&,\\ &0,&&&i>\left[\frac{n+1}{2}\right]&. \end{aligned} \right. 
$$ Hence, the Betti numbers of the two-fold orientation covering~$\overline{M}_{\As^n,\clambda}$ of~$M_{\As^n,\clambda}$ are given by \begin{gather*} \beta_{i}^{\Q}(\overline{M}_{\As^n,\clambda})=\beta_{n-i}^{\Q}(\overline{M}_{\As^n,\clambda})=\binom{n+1}{i}-\binom{n+1}{i-1},\qquad{}\\ {}\hspace{8cm} 0\le i< \left[\frac{n+1}{2}\right],\\ \beta_k^{\Q}(\overline{M}_{\As^{2k},\clambda})=2\binom{2k+1}{k}-2\binom{2k+1}{k-1},\\ \beta_{k-1}^{\Q}(\overline{M}_{\As^{2k-1},\clambda})=\beta_{k}^{\Q}(\overline{M}_{\As^{2k-1},\clambda})=\binom{2k}{k}-\binom{2k}{k-2},\\ \beta^{\Q}(\overline{M}_{\As^{n},\clambda})=2\binom{n+1}{\left[\frac{n+1}{2}\right]}. \end{gather*} It is not hard to show that for $n\ge 4$, $$ 2\binom{n+1}{\left[\frac{n+1}{2}\right]}<2\sum_{i=0}^{\left[\frac{n+1}{2}\right]}\binom{n+1}{2i}E_{2i}<(n+1)!\,, $$ that is, $$ \beta^{\Q}(\overline{M}_{\As^{n},\clambda})<\beta^{\Q}(\overline{M}_{\Pe^{n},\clambda})<\beta^{\Q}(M_{\Pe^{n},\lambda_0}). $$ (For $n=3$, the second inequality becomes an equality: both sides are equal to~$24$.) \begin{remark} In fact, it is easy to see that even a single middle Betti number $\beta'(n)=\beta_{[n/2]}^{\Q}(\overline{M}_{\Pe^{n},\clambda})$ grows much faster as $n\to\infty$ than the total Betti number $\beta''(n)=\beta^{\Q}(\overline{M}_{\As^{n},\clambda})$. Indeed, $\beta'(n)= 2(n+1)E_n$ if $n$ is even, $\beta'(n)> E_{n+1}$ if $n$ is odd, $\beta''(n)<2^{n+2}$, and it is well known that $$ E_{2k}\sim 8\sqrt{\frac{k}{\pi}}\left(\frac{4k}{\pi e}\right)^{2k},\qquad k\to\infty. $$ \end{remark} \subsection{Betti numbers with coefficients in~$\Z_2$} Let $P$ be an $n$-dimensional simple polytope. Then the \textit{$f$-vector\/} of~$P$ is the integral vector $(f_{-1}(P),f_0(P),\ldots,f_{n-1}(P))$ such that $f_k(P)$ is the number of $(n-k-1)$-dimensional faces of~$P$. (In particular, $f_{-1}=1$, since the polytope is considered as the only $n$-dimensional face of itself.)
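The zigzag numbers and this chain of inequalities can be checked numerically for small~$n$ by brute force (an illustrative script, not part of the paper):

```python
from itertools import permutations
from math import comb, factorial

def zigzag(m):
    """Euler zigzag number E_m: permutations with nu1 < nu2 > nu3 < ..."""
    if m < 2:
        return 1
    return sum(
        1
        for p in permutations(range(1, m + 1))
        if all((p[i] < p[i + 1]) == (i % 2 == 0) for i in range(m - 1))
    )

# coefficients of sec t + tan t
assert [zigzag(m) for m in range(8)] == [1, 1, 1, 2, 5, 16, 61, 272]

for n in range(4, 7):
    b_ass = 2 * comb(n + 1, (n + 1) // 2)              # covering of M_{As^n}
    b_hes = 2 * sum(comb(n + 1, 2 * i) * zigzag(2 * i)
                    for i in range((n + 1) // 2 + 1))  # covering of M_{Pe^n}
    assert b_ass < b_hes < factorial(n + 1)            # Tomei manifold
```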
The \textit{$h$-vector\/} of~$P$ is the integral vector $(h_0(P),\ldots,h_n(P))$ such that \begin{multline*} h_0(P)x^n+h_1(P)x^{n-1}+\cdots+h_n(P)={}\\f_{-1}(P)(x-1)^n+f_0(P)(x-1)^{n-1}+\cdots+f_{n-1}(P). \end{multline*} Davis and Januszkiewicz~\cite{DaJa91} computed the cohomology ring with coefficients in~$\Z_2$ of an arbitrary small cover~$M_{P,\lambda}$, and showed that \begin{equation}\label{eq_DJ} \beta_i^{\Z_2}(M_{P,\lambda})=h_i(P). \end{equation} Postnikov, Reiner, and Williams~\cite{PRW08} computed the $h$-vectors for many important classes of graph-associahedra. Buchstaber and Volodin~\cite{BuVo11} proved that for any connected graph~$\Gamma$ on the vertex set~$\{0,\ldots,n\}$, there are inequalities \begin{equation}\label{eq_h_ineq} h_i(P_{\Gamma})\ge h_i(\As^n)=\frac{1}{n+1}\binom{n+1}{i}\binom{n+1}{i+1}, \quad 1\le i\le n-1. \end{equation} Moreover, each of the inequalities~\eqref{eq_h_ineq} becomes an equality only if $\Gamma\cong L_{n+1}$, i.\,e., $P_{\Gamma}\cong\As^n$. Thus, if the associahedron~$\As^n$ admitted a characteristic function~$\lambda^*$ such that the manifold~$M_{\As^n,\lambda^*}$ were orientable, then the number $$ \beta^{\Z_2}(M_{\As^n,\lambda^*})=\frac{1}{n+1}\sum_{i=0}^n\binom{n+1}{i}\binom{n+1}{i+1} $$ would be the smallest among the total Betti numbers with coefficients in~$\Z_2$ of all orientable small covers~$M_{P_{\Gamma},\lambda}$ corresponding to connected graphs~$\Gamma$ on the vertex set $\{0,\ldots,n\}$. However, even in this case it would not be clear whether this number would be the smallest among the total Betti numbers with coefficients in~$\Z_2$ of all manifolds in the class~$\CC_n$, since the computation of the Betti numbers with coefficients in~$\Z_2$ of the two-fold coverings~$\overline{M}_{P_{\Gamma},\lambda}$ is a harder problem. The author does not know for which~$n$ there exist orientable small covers of~$\As^n$.
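The passage from the $f$-vector to the $h$-vector is mechanical; e.g., for the hexagon $\Pe^2$ one recovers the Eulerian numbers $A(3,i)$, and for the three-dimensional associahedron $\As^3$ (with $9$ facets, $21$ edges, and $14$ vertices) one recovers the Narayana numbers appearing in~\eqref{eq_h_ineq}. A small illustrative script, not part of the paper:

```python
from math import comb

def h_vector(f):
    """h-vector of a simple n-polytope from f = (f_{-1}, f_0, ..., f_{n-1}),
    defined by sum_i h_i x^{n-i} = sum_k f_{k-1} (x-1)^{n-k}."""
    n = len(f) - 1
    h = [0] * (n + 1)
    for k, fk in enumerate(f):            # contribution of f_{k-1} (x-1)^{n-k}
        for j in range(n - k + 1):        # coefficient of x^j goes to h_{n-j}
            h[n - j] += fk * comb(n - k, j) * (-1) ** (n - k - j)
    return h

assert h_vector([1, 6, 6]) == [1, 4, 1]           # hexagon Pe^2: A(3, i)
assert h_vector([1, 9, 21, 14]) == [1, 6, 6, 1]   # associahedron As^3
# Narayana numbers (1/(n+1)) C(n+1, i) C(n+1, i+1) for n = 3
assert [comb(4, i) * comb(4, i + 1) // 4 for i in range(4)] == [1, 6, 6, 1]
```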
As we have mentioned above, the small covers~$M_{\As^n,\clambda}$ are non-orientable for all $n\ge 2$. It is easy to check that all small covers of the pentagon~$\As^2$ are non-orientable. On the other hand, there are orientable small covers of the three-dimensional associahedron~$\As^3$. For instance, one can take the characteristic function~$\lambda^*$ given by \begin{gather*} \lambda^*(a_{\{0\}})=\lambda^*(a_{\{1\}})=b_1,\qquad \lambda^*(a_{\{2\}})=\lambda^*(a_{\{3\}})=b_2,\\ \lambda^*(a_{\{0,1\}})=\lambda^*(a_{\{1,2\}})=\lambda^*(a_{\{2,3\}})=b_3,\\ \lambda^*(a_{\{0,1,2\}})=\lambda^*(a_{\{1,2,3\}})=b_1+b_2+b_3. \end{gather*} For a non-orientable $n$-dimensional manifold~$M$, it is easy to obtain the following estimates for the Betti numbers with coefficients in~$\Z_2$ of its two-fold orientation covering~$\overline{M}$: $$ \beta_i^{\Q}(M)+\beta_{n-i}^{\Q}(M)\le \beta_i^{\Z_2}(\overline{M})\le 2\beta_i^{\Z_2}(M). $$ The first inequality follows from~\eqref{eq_Brash} and the inequality $\beta^{\Q}_i(\overline{M})\le\beta_i^{\Z_2}(\overline{M})$; the second inequality is obtained from the Gysin exact sequence of the covering $p\colon\overline{M}\to M$: $$ \ldots\to H^i(M;\Z_2)\xrightarrow{p^*} H^{i}(\overline{M};\Z_2)\xrightarrow{p_!} H^i(M;\Z_2)\xrightarrow{\psi} H^{i+1}(M;\Z_2)\to\ldots $$ In this exact sequence, $\psi$ is multiplication by the first Stiefel--Whitney class~$w_1(M)$. For small covers, Davis and Januszkiewicz~\cite{DaJa91} described explicitly the cohomology ring~$H^*(M_{P,\lambda};\Z_2)$ and the first Stiefel--Whitney class~$w_1(M_{P,\lambda})$. Nevertheless, it is still an unsolved problem to compute the dimensions of the kernel and the cokernel of~$\psi$, to which the computation of the Betti numbers with coefficients in~$\Z_2$ of~$\overline{M}_{P,\lambda}$ reduces.
However, the inequalities $2\beta^{\Q}(M)\le \beta^{\Z_2}(\overline{M})\le 2\beta^{\Z_2}(M)$ imply at least that the total Betti number with coefficients in~$\Z_2$ of~$\overline{M}_{\As^n,\clambda}$ grows much slower than the total Betti numbers with coefficients in~$\Z_2$ of the Tomei manifold~$M^n_0=M_{\Pe^n,\lambda_0}$ and of the two-fold covering of the real Hessenberg manifold~$\overline{M}_{\Pe^n,\clambda}$. (For the Tomei manifold, formula~\eqref{eq_DJ} easily implies that $\beta_i^{\Z_2}(M_0^n)=\beta_i^{\Q}(M_0^n)=A(n+1,i)$ and $\beta^{\Z_2}(M_0^n)=\beta^{\Q}(M_0^n)=(n+1)!$\,.) \subsection{Simplicial volume}\label{subsection_simp_vol} For any topological space~$X$, the vector spaces~$C_n(X;\R)$ of its singular chains with real coefficients can be endowed with the $L^1$-norms~$\|{\cdot}\|_1$ such that $\|\xi\|_1=\sum_{i=1}^q|\alpha_i|$ if $\xi=\sum_{i=1}^q\alpha_i\sigma_i$, where $\alpha_i$ are real numbers and $\sigma_i$ are pairwise distinct singular simplices. By definition, the \textit{simplicial volume\/} of an $n$-dimensional oriented closed manifold~$M$ is the number $$ \|M\|=\inf_{\xi\in[M]}\|\xi\|_1, $$ where the infimum is taken over all singular cycles $\xi\in C_n(M;\R)$ representing the fundamental homology class $[M]\in H_n(M;\R)$. If~$M^n$ dominates~$N^n$, then $\|M^n\|\ge \|N^n\|$. It is well known that $\|\hM^n\|=r\|M^n\|$ whenever $\hM^n$ is an $r$-fold covering of~$M^n$. Hence, for $n\ge 2$, any URC-manifold has nonzero simplicial volume. \begin{problem}\label{problem_sv} In every dimension $n\ge 3$, find the infimum of the simplicial volumes of $n$-dimensional URC-manifolds. If this infimum is achieved, find a URC-manifold~$M^n$ with the smallest simplicial volume~$\|M^n\|$.
\end{problem} By a well-known theorem of Gromov (see~\cite{Gro82}), for an $n$-dimensional hyperbolic manifold, we have $\|M^n\|=\mvol(M^n)/v_n$, where $\mvol(M^n)$ is the hyperbolic volume of~$M^n$, and $v_n$ is the supremum of volumes of convex simplices in the $n$-dimensional Lobachevskii space~$\Lambda^n$. In particular, $v_3=1.0149\ldots$ is the volume of a regular ideal tetrahedron in~$\Lambda^3$. It is well known that the closed three-dimensional hyperbolic manifold with the smallest hyperbolic volume is the so-called Fomenko--Matveev--Weeks manifold~$Q_1$. Its volume $\mvol(Q_1)=0.9427\ldots$ was computed by Matveev and Fomenko~\cite{MaFo88}. The minimality of its volume was conjectured in~\cite{MaFo88} and was proved by Gabai, Meyerhoff, and Milley~\cite{GMM09}. As we have mentioned above, all three-dimensional hyperbolic manifolds are URC-manifolds. In particular, $Q_1$ is a URC-manifold. Its simplicial volume is $\|Q_1\|=\mvol(Q_1)/v_3=0.9288\ldots$ \begin{quest}\label{quest_sv} Is it true that the Fomenko--Matveev--Weeks manifold has the minimal simplicial volume among all three-dimensional URC-manifolds? \end{quest} Unfortunately, it is very hard to compute or even to estimate the simplicial volumes of manifolds. The author knows no approaches to Question~\ref{quest_sv} and Problem~\ref{problem_sv}. Other results on the relationship between URC-manifolds and simplicial volume can be found in the author's paper~\cite{Gai13-b}. \smallskip The author is grateful to V.\,M.~Buchstaber for multiple fruitful discussions.
{"config": "arxiv", "file": "1611.01816.tex"}
\begin{document} \title{Machine Learning for CSI Recreation Based on Prior Knowledge} \author{ \IEEEauthorblockN{\textit{Brenda Vilas Boas\IEEEauthorrefmark{1}$^,$\IEEEauthorrefmark{2}, Wolfgang Zirwas\IEEEauthorrefmark{1}}, \textit{Martin Haardt\IEEEauthorrefmark{2}}} \IEEEauthorblockA{\IEEEauthorrefmark{1}Nokia, Germany \\ \IEEEauthorrefmark{2}Ilmenau University of Technology, Germany \\ {brenda.vilas\_boas@nokia.com, wolfgang.zirwas@nokia-bell-labs.com, martin.haardt@tu-ilmenau.de} } } \maketitle \begin{abstract} Knowledge of channel state information (CSI) is fundamental to many functionalities within mobile wireless communications systems. With the advance of machine learning (ML) and digital maps, i.e., digital twins, we have a big opportunity to learn the propagation environment and design novel methods to derive and report CSI. In this work, we propose to combine untrained neural networks (UNNs) and conditional generative adversarial networks (cGANs) for MIMO channel recreation based on prior knowledge. The UNNs learn the prior-CSI for a set of locations, which is used to build the input to a cGAN. Based on the prior-CSIs, their locations, and the location of the desired channel, the cGAN is trained to output the channel expected at the desired location. This combined approach can be used for low overhead CSI reporting since, after training, we only need to report the desired location. Our results show that our method is successful in modelling the wireless channel and robust to location quantization errors in line of sight conditions. \end{abstract} \begin{IEEEkeywords} Channel estimation, channel prediction, UNN, cGAN, digital twin. \end{IEEEkeywords} \section{Introduction} Machine learning (ML) for physical layer applications is gaining momentum in standardization bodies, such as 3GPP and O-RAN~\cite{213gppRel18}.
Combining ML capabilities with virtual representations of the real world, i.e., a digital twin environment, enables a variety of possibilities for wireless network planning, deployment, and management. In order to leverage the potential of a digital twin of the environment, full knowledge of the channel state information (CSI) is desired such that most of the real propagation effects can be represented. Here, we propose to combine two ML methods for channel recreation/estimation which minimize the overall complexity of the neural networks (NNs), reduce the training time, and enable low CSI reporting overhead. In contrast to the state of the art, our solution does not rely on multi-modal data, such as lidar~\cite{19DiasLidar} or environment images~\cite{21RatnanFadeNet}, which allows us to reduce the complexity of our NN architectures. Untrained neural networks (UNNs) were first proposed in~\cite{18LempitskyDIP} to solve inverse problems, such as denoising. The term `untrained' refers to the method's characteristic of avoiding a huge data collection phase, as the gradient descent updates are performed for a single image measurement. The \textit{deep decoder} architecture proposed in~\cite{18HeckelDeep} simplifies the structure of a UNN, making it underparameterized. For wireless communications, this means we can fit a UNN to directly estimate the wireless channel based on a small noisy measurement campaign, i.e., a few time snapshots, without the need for noiseless labels. The work in~\cite{20BaleviUNN} proposed the use of UNNs for MIMO channel estimation under pilot contamination. Despite the limitation to statistical channel models, UNNs could reduce the noise level of the measured signal. The simplicity of UNNs comes at the cost of a lack of generalization. Since there is no dataset collection for the weight updates, iterating the gradient descent is always needed when a new set of channel measurements is acquired.
In our recent work~\cite{21BoasTwo}, we proposed to use a cGAN for channel estimation in MIMO arrays with mixed radio frequency chains, where part of the array had antenna elements turned off. Our results demonstrated the good generalization capability of cGANs. Motivated by the generalization capabilities of cGANs and the underparameterization of UNNs, we propose to combine them for MIMO channel estimation/prediction within a propagation area. Hence, the UNNs are used to generate prior-CSI for a set of locations. Then, the cGAN uses the prior-CSIs together with their locations to recreate the CSI in a desired location. After deriving the weights for all the ML models, only the target location needs to be reported. Therefore, our solution enables low CSI reporting overhead between the user equipment (UE) and the base station (BS). Moreover, our approach can be used to add the small-scale fading characteristics of the wireless channels to the digital twin. In this paper, Section~\ref{sec:scenario} presents our geometrical propagation environment and the channels considered, Section~\ref{sec:method} introduces our proposed method, Section~\ref{sec:UNNs} presents details about our UNN for prior knowledge CSI estimation, Section~\ref{sec:cGAN} shows the processing performed at the cGAN for CSI recreation using location and prior-CSI, Section~\ref{sec:results} presents our results, and Section~\ref{sec:conclusion} concludes our paper. Regarding the notation, $a$, $\mathbf{a}$, $\mathbf{A}$, and $\mathbfcal{A}$ represent, respectively, scalars, column vectors, matrices, and $D$-dimensional tensors. The superscript $^T$ denotes transposition. For a tensor $\mathbfcal{A} \in \mathbb{C}^{M_1 \times M_2 \times \dots M_D}$, $M_d$ refers to the tensor dimension on the $d^\mathrm{th}$ mode.
A $d$-mode unfolding of a tensor is written as $[\mathbfcal{A}]_{(d)} \in \mathbb{C}^{M_d \times M_{d+1} \dots M_D M_1 \dots M_{d-1}}$, where all $d$-mode vectors are aligned as columns of a matrix. The $d$-mode vectors of $\mathbfcal{A}$ are obtained by varying the $d^\mathrm{th}$ index from $1$ to $M_d$ and keeping all other indices fixed. Moreover, $\mathbfcal{A} \times_d \mathbf{U}$ is the $d$-mode product between a $D$-way tensor $\mathbfcal{A} \in \mathbb{C}^{M_1 \times M_{2} \dots \times M_D}$ and a matrix $\mathbf{U} \in \mathbb{C}^{J \times M_d}$. The $d$-mode product is computed by multiplying $\mathbf{U}$ with all $d$-mode vectors of $\mathbfcal{A}$. In addition, $\mathbfcal{A} \sqcup_d \mathbfcal{B}$ denotes the concatenation of $\mathbfcal{A}$ and $\mathbfcal{B}$ along the $d^\mathrm{th}$ mode. The concatenation operation $\sqcup_d$ also applies to matrices. \section{Propagation Environment} \label{sec:scenario} In this work, we consider an urban environment with a fixed base station (BS) equipped with a uniform rectangular array (URA) containing $N_\mathrm{ant}$ antenna elements, moving user equipments (UEs) with single antennas, operating with $N_\mathrm{sub}$ OFDM subcarriers, and collecting $N_\mathrm{sp}$ time snapshots. This scenario was modeled with IlmProp, a geometry-based channel simulator developed at Ilmenau University of Technology~\cite{05GaldoGeometry}. Figure~\ref{fig:IlmProp} presents the urban environment with the BS represented by a red circle, buildings as blue squares, scatterers as green circles, and three UEs moving in a linear trajectory towards the BS. From IlmProp, we collect the channels used as ground truth values $\mathbfcal{H}_\mathrm{sim}^C \in \mathbb{C}^{N_\mathrm{sp} \times N_\mathrm{sub} \times N_\mathrm{ant}}$ and their relative locations $\mathbf{\Gamma} = \{ \mathbf{x}, \mathbf{y}, \mathbf{z} \} \in \mathbb{R}^{N_\mathrm{sp} \times 3}$ to the BS.
The three UEs shown in Figure~\ref{fig:IlmProp} are used to derive the noisy channel measurements $\mathbfcal{H}_\mathrm{mes}^C \in \mathbb{C}^{N_\mathrm{sp} \times N_\mathrm{sub} \times N_\mathrm{ant}}$ as \begin{equation} \mathbfcal{H}_\mathrm{mes}^C = \mathbfcal{H}_\mathrm{sim}^C + \mathbfcal{N}, \label{eq:mes} \end{equation} where $\mathbfcal{N} \in \mathbb{C}^{N_\mathrm{sp} \times N_\mathrm{sub} \times N_\mathrm{ant}}$ is a zero mean circularly symmetric complex Gaussian noise process. The $\mathbfcal{H}_\mathrm{mes}^C$ are further used to estimate the prior-CSIs. Moreover, the red square indicates the study area where the $\mathbfcal{H}_\mathrm{sim}^C$ are collected for CSI recreation. \setlength{\textfloatsep}{1\baselineskip plus 0.2\baselineskip minus 0.2\baselineskip} \begin{figure}[!tb] \centering \includegraphics[width=0.7\columnwidth]{figs/bvb12a-viewZoom.png} \caption{Propagation scenario simulated in IlmProp, a geometry-based channel simulator developed at Ilmenau University of Technology. The BS is fixed and represented by a red circle. The displayed UEs and trajectories are used to compute the prior-CSIs. The red square represents the study area ($100~\mathrm{m}^2$), in which we aim to reconstruct the CSIs.} \label{fig:IlmProp} \end{figure} \section{CSI recreation with prior knowledge} \label{sec:method} Here, we propose an ML framework to recreate the CSI in a desired location based on prior knowledge of the CSI at neighboring UEs. Figure~\ref{fig:system} summarizes our proposed method for CSI recreation, which we divide into two parts, the first in blue and the second in purple. The first ML instance aims to find the prior-CSI $\mathbfcal{H}_\mathrm{p} \in \mathbb{R}^{N_\mathrm{sp} \times N_\mathrm{sub} \times 2N_\mathrm{ant}}$ based on the measured channels $\mathbfcal{H}_\mathrm{mes}^C$. We employ a UNN for this purpose, where each UNN estimates $\mathbfcal{H}_\mathrm{p}$, the channel of a single UE over multiple time snapshots.
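The measurement model in~\eqref{eq:mes} can be sketched in a few lines of numpy; all sizes and the target SNR below are illustrative placeholders, not the simulation parameters of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N_sp, N_sub, N_ant = 10, 64, 16            # illustrative sizes only

def crandn(shape):
    """Zero-mean circularly symmetric complex Gaussian samples."""
    return (rng.standard_normal(shape)
            + 1j * rng.standard_normal(shape)) / np.sqrt(2)

H_sim = crandn((N_sp, N_sub, N_ant))       # placeholder for the IlmProp channel

snr_db = 20.0                              # illustrative noise level
sigma = np.sqrt(np.mean(np.abs(H_sim) ** 2) / 10 ** (snr_db / 10))
N = sigma * crandn(H_sim.shape)            # complex Gaussian noise tensor
H_mes = H_sim + N                          # noisy measurements, Eq. (1)

emp_snr = 10 * np.log10(np.mean(np.abs(H_sim) ** 2) / np.mean(np.abs(N) ** 2))
assert H_mes.shape == (N_sp, N_sub, N_ant)
assert abs(emp_snr - snr_db) < 1.0         # empirical SNR close to the target
```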
Even though UNNs are low-complexity structures, deriving one UNN model for each possible location in a propagation environment is infeasible. Therefore, we propose to use a second ML instance based on a cGAN due to its generalization capabilities. The second ML instance is trained to compute the recreated-CSI $\mathbfcal{H}_\mathrm{r} \in \mathbb{R}^{(S+1) \times N_\mathrm{sub} \times (2N_\mathrm{ant}+1)}$ in the targeted location $\mathbf{\Gamma}_\mathrm{r} \in \mathbb{R}^{1 \times 3}$ based on the knowledge of a sub-set of $S$ selected prior-CSIs and their respective locations $\mathbf{\Gamma}_\mathrm{p} \in \mathbb{R}^{S \times 3}$. Since UNNs do not need `labels' to find their best weights, we can perform a small measurement campaign and use the UNN-estimated channels as conditional input to the cGAN. In a `day-zero' operation where not many CSI measurements are available, the cGAN can be trained with target channels derived from simulations or a digital twin, and the prior-CSIs from the UNNs are responsible for adjusting the model to real propagation conditions. In the long run, we could update the cGAN model based on collected real world measurements. In this scenario, the availability of priors at the conditional input reduces the complexity of the NN structure and its training time. In the following sections, we explain in detail how each part of the algorithm is trained. \begin{figure} \centering \includegraphics[width=0.85\columnwidth]{figs/system.png} \caption{Schematic of our proposed ML solution for CSI recreation combining channel measurements and simulations. In blue, we present the first ML part where the UNNs are iterated to perform channel estimation. Each UE measurement campaign derives one UNN that represents $N_\mathrm{sp}$ locations. In purple, we show the second ML part where a cGAN is trained with knowledge of the prior-CSIs, the simulated desired channels, and their locations.
The cGAN output is the recreated desired channel.} \label{fig:system} \end{figure} Due to the low computational complexity of UNNs, the UE can derive the UNN weights and send them to the BS. Then, the BS is able to reconstruct the prior-CSIs and can train a cGAN to recreate the CSI at a desired location, which is different from the prior-CSI locations. After deriving the optimal cGAN weights, the BS can send the cGAN model to the UE for reliability purposes. Hence, the UE is able to identify when the BS will fail in its CSI recreation and may trigger a correction procedure. \section{Estimation of prior-CSI with UNNs} \label{sec:UNNs} The underparametrization of the deep decoder~\cite{18HeckelDeep} and its capability to optimize noisy measurements have motivated us to employ UNNs as our channel estimator for prior-CSIs. Since we do not need the true labels, the channel measurements can be acquired during measurement campaigns, which will then represent the real world propagation environment for specific locations. The following subsections present the data pre-processing, the UNN architecture, and how the gradient descent is used to update the UNN weights. \begin{figure}[!tb] \centering \includegraphics[width=0.75\columnwidth]{figs/untrained.PNG} \caption{General layer structure of a UNN $P$ used to estimate the prior-CSIs $\mathbfcal{H}_\mathrm{p} = P(\mathbfcal{K}^*, \mathbfcal{Z}_0)$. There are $L$ layers, $L-2$ inner layers in orange, one pre-output layer in yellow, and one output layer in olive. In blue, we represent the random input tensor~$\mathbfcal{Z}_0$.} \label{fig:unn} \end{figure} \subsection{Data pre-processing for UNN} The input signal to a UNN is a random noise seed $\mathbfcal{Z}_0 \in \mathbb{R}^{b \times c \times k_1}$, where $b = N_\mathrm{sp}/2^{L-2}$, $c = N_\mathrm{sub}/2^{L-2}$,~$k_1$ is the number of filters in the first layer, and $L$ is the number of layers.
The input tensor $\mathbfcal{Z}_0$ is drawn from a uniform distribution $U(-a,+a)$ on the interval $[-a, +a]$ and kept fixed during the gradient descent iterations. The measured channel $\mathbfcal{H}_\mathrm{mes}^C$ from Equation~\ref{eq:mes} is preprocessed as follows: \begin{itemize} \item Each time snapshot within $\mathbfcal{H}_\mathrm{mes}^C$ is normalized by its Frobenius norm, and then multiplied by a scaling factor to ease convergence. \item $\mathbfcal{H}_\mathrm{mes}^C \in \mathbb{C}^{N_\mathrm{sp} \times N_\mathrm{sub} \times N_\mathrm{ant}}$ is rearranged by concatenating $\mathfrak{Re} \{ \mathbfcal{H}_\mathrm{mes}^C \}$ and $\mathfrak{Im} \{ \mathbfcal{H}_\mathrm{mes}^C \}$ in the dimension corresponding to the antenna elements. \end{itemize} After those operations, $\mathbfcal{H}_\mathrm{mes} \in \mathbb{R}^{N_\mathrm{sp} \times N_\mathrm{sub} \times 2N_\mathrm{ant}}$ is directly used to compute the cost function. \subsection{UNN architecture} A UNN is a composition of $L$ layers, where there are $(L-2)$ inner layers, one pre-output layer $(L-1)$, and one output layer $(L)$, according to the \textit{deep decoder} architecture~\cite{18HeckelDeep}. Figure~\ref{fig:unn} shows a generic organization of those layers: the random noise seed $\mathbfcal{Z}_0$ in blue, the inner layers in orange, the pre-output layer in yellow, and the output layer in olive. All the layer types contain convolutional filters $\mathbfcal{W}_l \in \mathbb{R}^{1 \times 1 \times k_{l-1} \times k_{l}}$, where $l \in \{1, 2, \ldots, L \}$, and $k_{l-1}$ and $k_{l}$ are hyper-parameters which define the number of filters on the respective $(l-1)^\mathrm{th}$ and $l^\mathrm{th}$ layers. However, the types of layers differ with respect to the upsampling computation and the batch normalization ($\mathrm{BatchNorm}$) operation~\cite{15IoffeBatch}. The inner layers contain linear and non-linear operations.
First, there is a convolutional filter $\mathbfcal{W}_l$ whose weights are updated by the gradient descent. Second, there is a fixed bilinear upsampling operation, where $\mathbf{A}_l \in \mathbb{R}^{{2^l}b \times 2^{l-1}b}$ and $\mathbf{C}_l \in \mathbb{R}^{2^{l}c \times 2^{l-1}c}$ are the linear upsampling matrices in the time snapshot and subcarrier dimensions, respectively. Third, the rectified linear unit (ReLu) activation function is applied, and a batch normalization is computed with trainable parameters $\mathbf{R}_l = [ \boldsymbol{\gamma}_l, \boldsymbol{\beta}_l ] \in \mathbb{R}^{k_l \times 2}$, where $\boldsymbol{\gamma}_l \in \mathbb{R}^{k_l \times 1}$ and $\boldsymbol{\beta}_l \in \mathbb{R}^{k_l \times 1}$ are the mean and variance correction factors, respectively, for the coefficients in the $l^\mathrm{th}$ layer. For example, the output of the first inner layer $\mathbfcal{Z}_1$ can be written as \begin{equation} \mathbfcal{Z}_1 = \mathrm{BatchNorm} (\mathrm{ReLu}(\mathbfcal{Z}_0 \times_3 [\mathbfcal{W}_1]_{(4)} \times_1 \mathbf{A}_1 \times_2 \mathbf{C}_1^T)), \end{equation} where $[\mathbfcal{W}_1]_{(4)}$ is the $4$-mode unfolding of the convolutional filters operating at the antenna elements dimension. The pre-output layer differs from the inner layers because it does not apply upsampling. Hence, it can be written as \begin{equation} \mathbfcal{Z}_{L-1} = \mathrm{BatchNorm} (\mathrm{ReLu}(\mathbfcal{Z}_{L-2} \times_3 [\mathbfcal{W}_{L-1}]_{(4)})). \end{equation} Next, the output layer is used to adjust the number of filters of the pre-output layer to the size expected at the output $k_L = 2N_\mathrm{ant}$ as \begin{equation} \mathbfcal{Z}_{L} = \mathrm{TanH}(\mathbfcal{Z}_{L-1} \times_3 [\mathbfcal{W}_{L}]_{(4)}), \end{equation} where $\mathbfcal{W}_{L} \in \mathbb{R}^{1 \times 1 \times k_{L-1} \times 2N_\mathrm{ant}}$, and $\mathrm{TanH}$ is the hyperbolic tangent activation function.
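As a toy illustration of these layer operations, the following numpy snippet runs one inner layer forward: the $1\times 1$ convolution as a $3$-mode product, fixed $\times 2$ linear-interpolation upsampling on the first two modes, and ReLu. Batch normalization is omitted for brevity, and all names and sizes are illustrative assumptions rather than the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
b, c, k0, k1 = 4, 8, 6, 6                      # illustrative sizes

def upsample_matrix(m):
    """Fixed x2 linear-interpolation upsampling matrix in R^{2m x m}."""
    U = np.zeros((2 * m, m))
    for i in range(2 * m):
        t = i / 2.0
        lo = min(int(t), m - 1)
        hi = min(lo + 1, m - 1)
        w = t - lo
        U[i, lo] += 1.0 - w
        U[i, hi] += w
    return U

Z0 = rng.uniform(-0.1, 0.1, size=(b, c, k0))   # random seed, kept fixed
W1 = rng.standard_normal((k0, k1))             # 1x1 convolution weights
A1, C1 = upsample_matrix(b), upsample_matrix(c)

Z1 = np.einsum('bck,kj->bcj', Z0, W1)          # 3-mode product (1x1 conv)
Z1 = np.einsum('pb,bcj->pcj', A1, Z1)          # upsample time snapshots
Z1 = np.einsum('qc,pcj->pqj', C1, Z1)          # upsample subcarriers
Z1 = np.maximum(Z1, 0.0)                       # ReLu

assert Z1.shape == (2 * b, 2 * c, k1)
assert (Z1 >= 0).all()
```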
Since the upsampling operations are pre-defined, the trainable parameters relate to the convolutional filters $\mathbfcal{W}_{l}$ and the regularization parameters $\mathbf{R}_l$ of the batch normalization operation. Therefore, $\mathbfcal{K}_l = \{\mathbfcal{W}_{l}, \mathbf{R}_l\}$ is the set of trainable parameters of the $l^\mathrm{th}$ layer, and $\mathbfcal{K}$ refers to all trainable parameters of the $L$ layers. \subsection{Updating the weights of a UNN} Here, we refer to the UNN as a model $P:~\mathbb{R}^N~ \rightarrow~\mathbb{R}^{N_\mathrm{sub}N_\mathrm{sp}2N_\mathrm{ant}}$, where $N<N_\mathrm{sub}N_\mathrm{sp}2N_\mathrm{ant}$ is the total number of parameters. The UNN $P$ performs the mapping operation $\mathbfcal{Z}_L = P(\mathbfcal{K},\mathbfcal{Z}_0)$, where $\mathbfcal{Z}_0$ is the random noise seed, and $\mathbfcal{K}$ is the tensor of weights that represents all the UNN trainable parameters. The cost function is the mean square error (MSE), calculated as \begin{equation} \mathcal{L}(\mathbfcal{K}) = \mathbb{E} \{\parallel P(\mathbfcal{K}, \mathbfcal{Z}_0) - \mathbfcal{H}_\mathrm{mes} \parallel^2_F \}. \end{equation} The weights are updated by gradient descent as in supervised learning, performing $I$ gradient iterations until the optimal parameters are found, such that \begin{equation} \mathbfcal{K}^* = \underset{\mathbfcal{K}}{\argmin}~ \mathcal{L}(\mathbfcal{K}), ~ \mathrm{and} ~ \mathbfcal{H}_\mathrm{p} = P(\mathbfcal{K}^*, \mathbfcal{Z}_0) \end{equation} is the channel estimate of the prior-CSI. From the loss function, we observe that the prior-CSI $\mathbfcal{H}_\mathrm{p}$ derived by the UNN $P$ is specific to $\mathbfcal{H}_\mathrm{mes}$. Hence, the model $P$ does not directly generalize to other channels; it is specific to the $\mathbfcal{H}_\mathrm{mes}$ considered during the gradient updates. 
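The fitting procedure above can be illustrated with a toy stand-in for $P$: a purely linear map instead of the actual UNN, so that the gradient can be written by hand. All sizes, the learning rate, and the iteration count below are placeholders; what the sketch preserves is the structure of the method, a fixed random seed, trainable weights only, and an MSE fit to a single measured channel:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear stand-in for the UNN: P(K, Z0) = Z0 @ K.
# The noise seed Z0 is drawn once and kept FIXED; only K is trained,
# and the target is the single measured channel H_mes.
Z0 = rng.uniform(-0.15, 0.15, size=(8, 32))   # fixed random seed
K = 0.1 * rng.standard_normal((32, 6))        # trainable parameters
H_mes = rng.standard_normal((8, 6))           # the one channel being fitted

mse0 = np.mean((Z0 @ K - H_mes) ** 2)         # initial cost
lr = 0.05
for _ in range(2000):                         # I gradient iterations
    R = Z0 @ K - H_mes                        # residual
    grad = 2.0 * Z0.T @ R                     # gradient of ||Z0 K - H_mes||_F^2
    K -= lr * grad                            # gradient-descent update

H_p = Z0 @ K                                  # the resulting "prior-CSI"
```

As in the text, the fitted weights reproduce only the specific target they were trained on; evaluating $P$ at a different channel gives no generalization guarantee.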
\section{CSI-recreation with cGAN} \label{sec:cGAN} \begin{figure}[!bt] \centering \includegraphics[width=0.85\columnwidth]{figs/draws-cgan.png} \caption{Conditional GAN: two NNs play a min-max game where the generator tries to fool the discriminator. The discriminator should classify $\mathbfcal{\tilde{H}}_\mathrm{r}$ as a fake sample, while $\mathbfcal{H}_\mathrm{r}$ is classified as a real sample. The generator fools the discriminator when $\mathbfcal{\tilde{H}}_\mathrm{r}$ is classified as real.} \label{fig:cgan} \end{figure} In our previous work~\cite{21BoasTwo}, we applied a cGAN for the purpose of channel estimation within mixed-resolution radio-frequency chains. Here, we use the same operational principle of image-to-image translation~\cite{17IsolaPix}. However, we aim to reconstruct the CSI based on the knowledge of its location and a set of prior-CSIs. Different from UNNs, the cGAN requires data collection and training. Nonetheless, it has great generalization capabilities~\cite{21BoasTwo}. In the following subsections, we present the dataset preprocessing, our cGAN architecture, and the adversarial training. \subsection{Dataset preprocessing for cGAN} \label{sub:datagan} In this section, we present how we construct the signals to train the cGAN: the conditional input $\mathbfcal{H}_\mathrm{c}$, the label $\mathbfcal{H}_\mathrm{r}$, and the generator output $\tilde{\mathbfcal{H}}_\mathrm{r}$. The conditional input to our cGAN is derived from the prior-CSIs $\mathbfcal{H}_\mathrm{p}$, their locations $\mathbf{\Gamma}_\mathrm{p}$, and the target location $\mathbf{\Gamma}_\mathrm{r}$ where we aim to recover the CSI of a certain UE. Each $\mathbfcal{H}_\mathrm{p} \in \mathbb{R}^{N_\mathrm{sp} \times N_\mathrm{sub} \times 2N_\mathrm{ant}}$ estimated by a UNN has CSI for $N_\mathrm{sp}$ different locations. 
Therefore, if $N_\mathrm{UE}$ UNNs are used to estimate the prior-CSIs $\mathbfcal{H}_\mathrm{p}^{N_\mathrm{UE}} \in \mathbb{R}^{N_\mathrm{UE}N_\mathrm{sp} \times N_\mathrm{sub} \times 2N_\mathrm{ant}}$, there are $N_\mathrm{UE}N_\mathrm{sp}$ CSI-location pairs $\{ \mathbf{H}_{\mathrm{p}j}, \mathbf{\Gamma}_{\mathrm{p}j} \}$ available, where $\mathbf{H}_{\mathrm{p}j} = \mathbfcal{H}_\mathrm{p}^{N_\mathrm{UE}}(j,:,:)$ and $j \in \{1, 2, \ldots, N_\mathrm{UE} N_\mathrm{sp} \}$. From the available CSI-location pairs, a subset of $S$ CSI-location pairs is selected according to their minimum Euclidean distance to $\mathbf{\Gamma}_\mathrm{r}$. The $S$ selected prior-CSIs $\mathbfcal{H}_\mathrm{p}^S \in \mathbb{R}^{S \times N_\mathrm{sub} \times 2N_\mathrm{ant}}$ are concatenated in the first dimension and ordered by increasing Euclidean distance to the target location $\mathbf{\Gamma}_\mathrm{r}$. The target location vector $\mathbf{\Gamma}_\mathrm{r} \in \mathbb{R}^{1 \times 3}$ and the prior location matrix $\mathbf{\Gamma}_\mathrm{p}^S \in \mathbb{R}^{S \times 3}$ are extended by repeating their coordinates until $\mathbf{\Gamma}_\mathrm{p}^S \in \mathbb{R}^{S \times N_\mathrm{sub}}$ and $\mathbf{\Gamma}_\mathrm{r} \in \mathbb{R}^{1 \times N_\mathrm{sub}}$. Hence, the complete location matrix is formed as $\mathbf{H}_\mathrm{LOC} = [\mathbf{\Gamma}_\mathrm{r} \sqcup_1 \mathbf{\Gamma}_\mathrm{p}^S] \in \mathbb{R}^{(S+1) \times N_\mathrm{sub}}$. Finally, the conditional input to the cGAN is constructed as \begin{equation} \mathbfcal{H}_c = [ \mathbf{H}_N \sqcup_1 \mathbfcal{H}_\mathrm{p}^S \sqcup_3 \mathbf{H}_\mathrm{LOC}] \in \mathbb{R}^{(S+1) \times N_\mathrm{sub} \times (2N_\mathrm{ant}+1)}, \end{equation} where $\mathbf{H}_N \in \mathbb{R}^{N_\mathrm{sub} \times 2N_\mathrm{ant}}$ is a matrix of random values drawn from a Gaussian distribution. The desired channel $\mathbf{H}_r$ is recreated in the $\mathbf{H}_N$ position at the generator output. 
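The selection-and-assembly step described above can be sketched with a hypothetical NumPy helper (function and variable names are our own; `np.tile` plays the role of the coordinate repetition, and the Gaussian slot $\mathbf{H}_N$ marks where $\mathbf{H}_r$ will be recreated):

```python
import numpy as np

def build_conditional_input(H_p_all, Gamma_p, Gamma_r, N_sub, S=3, rng=None):
    """Assemble the cGAN conditional input H_c (sketch).

    H_p_all : (N_UE*N_sp, N_sub, 2*N_ant) prior-CSIs from the UNNs
    Gamma_p : (N_UE*N_sp, 3)              prior-CSI locations
    Gamma_r : (3,)                        target location
    returns : (S+1, N_sub, 2*N_ant + 1)
    """
    rng = np.random.default_rng() if rng is None else rng
    # S prior-CSIs closest (Euclidean distance) to the target location,
    # ordered by increasing distance
    idx = np.argsort(np.linalg.norm(Gamma_p - Gamma_r, axis=1))[:S]
    H_p_S = H_p_all[idx]
    # repeat the 3 coordinates until the rows reach width N_sub
    reps = -(-N_sub // 3)  # ceil(N_sub / 3)
    Gp = np.tile(Gamma_p[idx], (1, reps))[:, :N_sub]
    Gr = np.tile(Gamma_r, reps)[:N_sub][None, :]
    H_loc = np.concatenate([Gr, Gp], axis=0)              # (S+1, N_sub)
    # Gaussian placeholder in the slot where H_r will be recreated
    H_N = rng.standard_normal((1, N_sub, H_p_all.shape[2]))
    H_stack = np.concatenate([H_N, H_p_S], axis=0)        # (S+1, N_sub, 2*N_ant)
    return np.concatenate([H_stack, H_loc[..., None]], axis=2)
```

The location matrix occupies the extra last channel, so the generator sees the prior-CSIs and their geometry in a single tensor.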
Ideally, the true recreated CSI $\mathbfcal{H}_\mathrm{r}~\in~ \mathbb{R}^{(S+1) \times N_\mathrm{sub} \times (2N_\mathrm{ant}+1)}$ is found at the output of the cGAN. Hence, each time snapshot $d$ in $\mathbf{H}_\mathrm{sim}(d) = \mathbfcal{H}_\mathrm{sim}^C(d,:,:)$ is used as the ground-truth value for $\mathbf{H}_r$, the CSI reconstructed at the target location $\mathbf{\Gamma}_\mathrm{r}$. The preprocessing of the labels for the cGAN includes \begin{itemize} \item Normalization of $\mathbf{H}_\mathrm{sim}(d) \in \mathbb{C}^{N_\mathrm{sub} \times N_\mathrm{ant}}$ by its Frobenius norm, and multiplication by a scaling factor. \item $\mathbf{H}_r = [ \mathfrak{Re}\{\mathbf{H}_\mathrm{sim}(d)\} \sqcup_2 \mathfrak{Im}\{\mathbf{H}_\mathrm{sim}(d)\} ] \in \mathbb{R}^{N_\mathrm{sub} \times 2N_\mathrm{ant}}$ is the target real-valued CSI at location $\mathbf{\Gamma}_\mathrm{r}$. \end{itemize} Finally, the label is constructed as \begin{equation} \mathbfcal{H}_\mathrm{r} = [ \mathbf{H}_r \sqcup_1 \mathbfcal{H}_\mathrm{p}^S \sqcup_3 \mathbf{H}_\mathrm{LOC}] \in \mathbb{R}^{(S+1) \times N_\mathrm{sub} \times (2N_\mathrm{ant}+1)}, \end{equation} where the prior-CSIs $\mathbfcal{H}_\mathrm{p}^S$, the location matrix $\mathbf{H}_\mathrm{LOC}$ and the recreated CSI $\mathbf{H}_r$ form the desired output. We refer to the generator output as $\tilde{\mathbfcal{H}}_\mathrm{r} = G(\mathbfcal{H}_\mathrm{c}) \in \mathbb{R}^{(S+1) \times N_\mathrm{sub} \times (2N_\mathrm{ant}+1)}$, where $G$ is the generator mapping function that tries to approximate the label $\mathbfcal{H}_\mathrm{r}$. The discriminator $D$ is a classifier for which the inputs and labels are, respectively, $\mathbfcal{H}_\mathrm{r} \rightarrow \{ \mathrm{true} \}$ and $\tilde{\mathbfcal{H}}_\mathrm{r} \rightarrow \{ \mathrm{fake} \}$. \subsection{Adversarial Network Architecture} Figure~\ref{fig:cgan} shows the interconnection between the generator and discriminator NNs for the adversarial training. 
Here, the generator NN consists of a U-shaped deep NN (U-Net) which has two paths for the flow of information between blocks: the encoder-decoder path and the skip connections path, see Figure~\ref{fig:u-net}. The discriminator NN consists of a Patch-NN~\cite{17IsolaPix} where the input is reduced to a patch of arbitrary size; then, each coefficient of the patch is classified as real or fake. \begin{figure}[tb!] \centering \includegraphics[width=\columnwidth]{figs/draws-generator.png} \caption{U-Net architecture deployed as the generator, including the encoder-decoder pipeline and numbering for skip connections.} \label{fig:u-net} \end{figure} Figure~\ref{fig:u-net} shows the U-Net architecture employed for the generator, with $N_g/2$ downsample blocks for the encoder and $N_g/2$ upsample blocks for the decoder, where $N_g$ is the total number of processing blocks. Each downsample block consists of one convolutional 2-dimensional layer (Conv2D), one batch normalization layer (BatchNorm), and a leaky rectified linear unit (LeakyReLU) activation function, where $y = x$ for $x > 0$, and $y = 0.3x$ for $x<0$. Each upsample block consists of one transposed convolutional 2-dimensional layer (Conv2D$^T$), followed by BatchNorm and ReLU as activation function. A skip connection links the output of the $n^\mathrm{th}$ downsample block to the output of the $(N_g-n)^\mathrm{th}$ upsample block, where $n=[1, 2, \ldots, (N_g/2-1)]$. Those skip connections provide more information to the decoder blocks~\cite{17IsolaPix}, since the input to each upsample block is the concatenation $\mathbfcal{X}_{(N_g-n+1)} = [\mathbfcal{Y}_n \sqcup_3 \mathbfcal{Y}_{(N_g-n)}]$, where $\mathbfcal{Y}_n$ is the output of the $n^\mathrm{th}$ block. For the discriminator NN, we employ a Patch-NN~\cite{17IsolaPix}. First, downsampling blocks are used to reduce the dimensionality of the input signal to a patch of arbitrary size. 
Second, the patch is processed by a sequence of convolutional layers (Conv2D + BatchNorm + LeakyReLU and Conv2D + Linear). Then, the discriminator is trained to classify each patch coefficient as real or fake. Implementation details are provided in Section~\ref{sec:results}. \subsection{Optimization with cGAN} As shown in Figure~\ref{fig:cgan}, in a cGAN there are two NNs playing a min-max game where the generator $G:\{ \mathbfcal{H}_\mathrm{c}\} \rightarrow \mathbfcal{H}_\mathrm{r}$ tries to fool the discriminator $D: \{ \tilde{\mathbfcal{H}}_\mathrm{r}\} \rightarrow \{\mathrm{true}\}$; it is conditional because some prior knowledge is provided. Mathematically, the optimization objective of a cGAN has two terms \begin{equation} G^* = \arg \min_{G} \max_{D} \mathcal{L}_{\mathrm{cGAN}}(G,D) + \alpha \mathcal{L}_{\mathrm{L_2}}, \label{eq:final-loss} \end{equation} where $\mathcal{L}_{\mathrm{cGAN}}(G,D)$ is the adversarial loss, $\mathcal{L}_{\mathrm{L_2}}$ is the $\mathrm{L_2}$ loss, and $\alpha$ is the weighting factor~\cite{17IsolaPix}. The adversarial loss is computed as \begin{equation} \begin{aligned} \mathcal{L}_{\mathrm{cGAN}}(G,D) = & \mathbb{E}[\log D(\mathbfcal{H}_\mathrm{r})] + \\ & \mathbb{E}[\log (1-D(\tilde{\mathbfcal{H}}_\mathrm{r}))], \label{eq:cgan-loss} \end{aligned} \end{equation} where the generator $G$ learns to map the input data $\mathbfcal{H}_\mathrm{c}$ to the output data $\mathbfcal{H}_\mathrm{r}$ such that $\tilde{\mathbfcal{H}}_\mathrm{r} = G^*(\mathbfcal{H}_\mathrm{c})$, and the discriminator $D$ tries to recognize the channels generated by $G$. In order to have the generated output wireless channels $\tilde{\mathbfcal{H}}_\mathrm{r}$ close to the wireless channel labels $\mathbfcal{H}_\mathrm{r}$, a weighted $L_2$ loss \begin{equation} \mathcal{L}_{\mathrm{L_2}}(G) = \mathbb{E}[\| \mathbfcal{H}_\mathrm{r}-G(\mathbfcal{H}_\mathrm{c}) \|_F] \label{eq:generator-l1} \end{equation} is included as a regularization term. 
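The two-term objective above can be sketched numerically for one batch of discriminator outputs (an illustrative helper; `alpha`, the tensor shapes, and the function name are placeholders, and in practice these terms are computed inside the training framework):

```python
import numpy as np

def cgan_objective(d_real, d_fake, H_r, G_out, alpha=100.0, eps=1e-12):
    """Two-term cGAN objective (sketch): the adversarial loss
    E[log D(H_r)] + E[log(1 - D(G(H_c)))] plus the weighted L2
    regularizer alpha * ||H_r - G(H_c)||_F.  d_real and d_fake are
    discriminator outputs in (0, 1); eps guards the logarithms."""
    l_adv = np.mean(np.log(d_real + eps)) + np.mean(np.log(1.0 - d_fake + eps))
    l_l2 = np.linalg.norm(H_r - G_out)
    return l_adv + alpha * l_l2
```

An undecided discriminator (outputs at $0.5$) contributes $2\log 0.5$ to the adversarial term, while any mismatch between label and generated channel is penalized through the Frobenius norm.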
In \cite{16Pathakcontext}, an $L_2$ reconstruction loss is also proposed, but $0/1$ masks are used to restrict the $L_2$ loss to only the signal we aim to estimate. Note that, at the cGAN output, we have the conditional information as well as the desired signal. Hence, we study the feasibility of a reconstruction loss $\mathcal{L}_{\mathrm{rec}}$ defined as \begin{equation} \mathcal{L}_{\mathrm{rec}} = \mathbb{E}[\| \mathbf{H}_\mathrm{r}-\tilde{\mathbf{H}}_\mathrm{r} \|_F] + \mathbb{E}[\| \mathbf{H}_\mathrm{LOC}-\tilde{\mathbf{H}}_\mathrm{LOC} \|_F], \label{eq:generator-l2} \end{equation} where the reconstruction errors of the target channel and the location matrix are considered. Then, $\mathcal{L}_{\mathrm{rec}}$ replaces $\mathcal{L}_{\mathrm{L_2}}$ in Equation~\ref{eq:final-loss}. The generator and discriminator NNs are trained together in each epoch. For testing, or inference, only the generator architecture is used. Therefore, only knowledge of $\mathbfcal{H}_\mathrm{c}$ is needed. In practice, at inference time, we are able to estimate/predict a channel based on its location and the prior knowledge provided by the UNNs. \section{Simulations and Results} \label{sec:results} Figure~\ref{fig:IlmProp} presents our propagation environment simulated in IlmProp, where there are $3$ UEs used to derive $\mathbfcal{H}_\mathrm{mes}^C$ for prior-CSI estimation by the UNNs, and the red square indicates our study area where $\mathbfcal{H}_\mathrm{sim}^C$ is collected for training the cGAN to recreate the CSIs. Table~\ref{tab:SymIlm} presents the simulation parameters set in IlmProp. A total of seven simulation campaigns were performed, changing the number of UEs as well as their trajectories, and including or removing scatterers. The dataset for prior-CSI estimation with UNNs has 384 channel samples, and the dataset for the cGAN has 4492 channel samples. 
We use the normalized squared error (NSE) $\mathrm{NSE} = \frac{\|\mathbf{B} -\mathbf{\tilde{B}} \|_F^2}{\|\mathbf{B}\|_F^2}$ as our performance metric for CSI recreation. \begin{table}[tb!] \centering \caption{Simulation parameters for IlmProp.} \label{tab:SymIlm} \resizebox{0.4\linewidth}{!}{ \begin{tabular}{|c|c|c|} \hline Parameter & $\mathbfcal{H}_\mathrm{mes}^C$ & $\mathbfcal{H}_\mathrm{sim}^C$ \\ \hline Carrier frequency & \multicolumn{2}{c|}{2.6~GHz} \\ \hline Bandwidth & \multicolumn{2}{c|}{20~MHz} \\ \hline $N_\mathrm{sub}$ & \multicolumn{2}{c|}{64} \\ \hline UE velocity & $1$ m/s & $0.1$ to $5$ m/s \\ \hline $N_\mathrm{sp}$ & 128 & 74 to 174 \\ \hline Total of UEs & 3 & many \\ \hline $N_\mathrm{ant}$ & \multicolumn{2}{c|}{36} \\ \hline \end{tabular}} \end{table} \begin{table}[bt!] \centering \caption{Description of the U-Net deployed as generator NN.} \label{tab:gen-desc} \resizebox{0.8\linewidth}{!}{ \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline & Block & $N_\mathrm{filter}$ & Filter size & Stride & Padding & BatchNorm & Dropout & Activation\\ \hline 1 & downsample & 64 & [3,3] & [1,1] & Yes & No & No & LeakyReLU \\ \hline 2 & downsample & 64 & [2,4] & [1,2] & Yes & Yes & No & LeakyReLU \\ \hline 3 & downsample & 64 & [2,4] & [1,2] & Yes & Yes & No & LeakyReLU \\ \hline 4 & downsample & 64 & [2,4] & [1,2] & Yes & Yes & No & LeakyReLU \\ \hline 5 & downsample & 64 & [2,4] & [1,2] & Yes & Yes & No & LeakyReLU \\ \hline 6 & downsample & 128 & [4,4] & [1,1] & No & Yes & No & LeakyReLU \\ \hline 7 & upsample & 64 & [4,4] & [2,2] & No & Yes & Yes & ReLU \\ \hline 8 & upsample & 64 & [2,4] & [1,2] & Yes & Yes & Yes & ReLU \\ \hline 9 & upsample & 64 & [2,4] & [1,2] & Yes & Yes & Yes & ReLU\\ \hline 10 & upsample & 64 & [2,4] & [1,2] & Yes & Yes & No & ReLU\\ \hline 11 & upsample & 64 & [2,4] & [1,2] & Yes & Yes & No & ReLU \\ \hline 12 & upsample & 64 & [3,3] & [1,1] & Yes & Yes & No & ReLU \\ \hline 13 & output & 73 & [3,3] & [1,1] & Yes & No & No & TanH \\ 
\hline \end{tabular}} \end{table} \begin{table}[bt!] \centering \caption{Description of the Patch-Net deployed as discriminator NN.} \label{tab:dis-desc} \resizebox{0.8\linewidth}{!}{ \begin{tabular}{|c|c|c|c|c|c|c|} \hline Block & $N_\mathrm{filter}$ & Filter size & Stride & Padding & BatchNorm & Activation \\ \hline downsample & 128 & [3,3] & [1,1] & Yes & Yes & LeakyReLU \\ \hline downsample & 128 & [2,4] & [1,2] & Yes & Yes & LeakyReLU \\ \hline zero padding 2D & - & - & - & Yes & - & - \\ \hline Conv2D & 256 & [3,3] & [1,1] & No & Yes & LeakyReLU \\ \hline zero padding 2D & - & - & - & Yes & - & - \\ \hline Conv2D & 1 & [3,3] & [1,1] & No & No & Linear \\ \hline \end{tabular}} \end{table} First, we define a UNN structure $P$ that is used to derive the optimal weights for all three UEs in Figure~\ref{fig:IlmProp}. The $\mathbfcal{H}_\mathrm{mes}^C$ for each UE has $N_\mathrm{sp} = 128$ time snapshots, and we design a UNN capable of estimating chunks of $64$ time snapshots. Therefore, we need to derive $2$ sets of trainable parameters $\mathbfcal{K}^*$ for each UE. In total, there are $6$ different UNN sets of weights $\mathbfcal{K}^*$ to estimate all the prior-CSIs. For the UNN mapping structure $P$, we choose four inner layers, each with $k_{1:L-2}=64$ filters and with both upsampling matrices, $\mathbf{A}_l$ and $\mathbf{C}_l$, active. In addition, we use one pre-output layer with $k_{L-1} = 64$ filters and one output layer with $k_L = 72$ convolutional filters. Therefore, there are $L=6$ layers, and the random noise seed $\mathbfcal{Z}_0~\in~\mathbb{R}^{4 \times 4 \times 64}$ is drawn from the uniform distribution $U(-0.15, +0.15)$, where $k_0=64$. After setting the UNN structure, the trainable parameters $\mathbfcal{K}$ are initialized from random values, and $I=25000$ gradient updates are performed to find the best $\mathbfcal{K}^*$ for each UE, separately. 
Our design choices for $k$ and $I$, the number of filters in each layer and the number of gradient iterations, respectively, were made according to the estimated SNR at the UNN output. The UNN output SNR had to be at least the same as the SNR of the measured channel. Figure~\ref{fig:results} shows the cumulative distribution function (CDF) of the NSE for the UNN-estimator in blue, red and green curves for the $3$ UEs with $128$ time snapshots each, where $\mathrm{SNR}=20$~dB for the measured channel. The presented UNN architecture is capable of recreating a sequence of $64$ time snapshots collected from a single UE. This structure contains $25,728$ trainable parameters, which correspond to $17.45\% $ of the coefficients in a channel measurement $\mathbfcal{H}_\mathrm{mes}^C~\in~\mathbb{C}^{64 \times 64 \times 36}$. Therefore, UNNs provide a means to compress the CSI, with a probability of about $90\%$ of recovering it with a better SNR than the measured one. \begin{figure}[tb!] \centering \includegraphics[width=\columnwidth]{figs/cGANCDFquantizComparev2.png} \caption{Cumulative distribution function of the normalized squared error for UNN and cGAN results for CSI recreation. The cGAN trained with $\mathcal{L}_{\mathrm{rec}}$ ($\mathrm{L}_\mathrm{rec}$) has best performance.} \label{fig:results} \end{figure} After estimating the channels for the three UEs, we have a pool of $384$ prior-CSIs together with their locations $\mathbf{\Gamma}_\mathrm{p}$. Here, we use the location coordinates provided by the IlmProp simulator. In a practical implementation, we could derive the locations from the estimated prior-CSIs by Unitary Tensor ESPRIT~\cite{08HaardtTensor}, for instance. A subset of $S=3$ CSI-location pairs is selected according to their minimum Euclidean distance to the location $\mathbf{\Gamma}_\mathrm{r}$ where we aim to recreate the CSI. Hence, the input to our cGAN is $\mathbfcal{H}_c \in \mathbb{R}^{4 \times 64 \times 73}$, where $64$ is the number of subcarriers. 
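The NSE metric reported in Figure~\ref{fig:results} can be computed in dB with a small helper (illustrative; the function name is our own):

```python
import numpy as np

def nse_db(B_true, B_est):
    """Normalized squared error ||B - B_est||_F^2 / ||B||_F^2, in dB."""
    nse = np.linalg.norm(B_true - B_est) ** 2 / np.linalg.norm(B_true) ** 2
    return 10.0 * np.log10(nse)
```

For instance, a uniform $10\%$ amplitude error yields an NSE of $-20$~dB.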
The architecture details of our generator and discriminator are presented in Table~\ref{tab:gen-desc} and Table~\ref{tab:dis-desc}, respectively. The adversarial training runs for $150$ epochs, with $60\%$ of the simulated channels used for training and $40\%$ for testing. Figure~\ref{fig:results} presents the CDF curves of the NSE of the channels estimated by the cGAN with prior knowledge and the desired location as input. The black curves are the NSE for the cGANs trained with $\mathbf{H}_\mathrm{LOC}$ at infinite precision, where the full line and dashed line are for the cGANs trained with $\mathcal{L}_{\mathrm{rec}}$ and $\mathcal{L}_{\mathrm{L_2}}$ as regularization term, respectively. As we can see in Figure~\ref{fig:results}, the cGAN constrained by $\mathcal{L}_{\mathrm{rec}}$ achieves the best performance, with a $90\%$ NSE of about $-12.5$~dB. Regarding the state of the art, our approach performs considerably better than the $6$~dB reported in~\cite{21RatnanFadeNet}. Moreover, our cGAN architecture is less complex, since we use just $13$ layers for the U-Net at the generator while \cite{21RatnanFadeNet} reports $28$ layers to process images of the environment map and output the wireless channel. The generator described in Table~\ref{tab:gen-desc} has a total of $789,786$ trainable parameters. Our cGAN takes about $6$~hours to train on a computer with $16$~GB of RAM and a GPU with $2$~GB of dedicated memory. At inference time, we test the sensitivity of the trained cGAN to a shut-down of $\mathbf{H}_\mathrm{LOC}$. We set $\mathbf{H}_\mathrm{LOC}=0$ and compute the NSE at the cGAN output. This sensitivity curve is plotted in purple in Figure~\ref{fig:results}. The cGAN performance decays to nearly $0$~dB, which informs us that the architecture is not neglecting the location matrix. Next, we test the cGAN sensitivity to location quantization errors. 
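The location quantization used in this sensitivity test can be mimicked with a simple mid-rise uniform quantizer (an illustrative sketch; the dynamic range `[lo, hi]` is a placeholder for the extent of the study area):

```python
import numpy as np

def uniform_quantize(x, n_bits, lo, hi):
    """Mid-rise uniform quantizer on [lo, hi] with 2**n_bits levels;
    values are reconstructed at the cell centers."""
    levels = 2 ** n_bits
    step = (hi - lo) / levels
    q = np.floor((np.clip(x, lo, hi) - lo) / step)
    q = np.minimum(q, levels - 1)     # keep x == hi inside the top cell
    return lo + (q + 0.5) * step
```

As a rough check of scale, $4$ bits over a $10$~m range gives a cell width of $0.625$~m, i.e., a worst-case error of about $0.31$~m, which is a few carrier wavelengths at $2.6$~GHz ($\lambda \approx 11.5$~cm).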
The locations are quantized using a uniform quantizer with $8$ and $4$ bits; their NSEs are plotted in pink and cyan in Figure~\ref{fig:results}. There is very little performance loss, even for $4$-bit location quantization, which introduces errors in the range of $1$ to $4$ times the channel wavelength. The cGAN is robust to those errors because it closely models the channels' statistical distribution. Our analysis of the dataset has shown that the channels are mostly line of sight (LOS) and frequency-flat for the defined study area. Therefore, the good statistical behavior of the dataset makes the CSI-recreation task quite easy for the cGAN. Such robustness against location errors may not be experienced in non-line-of-sight scenarios. \section{Conclusion} \label{sec:conclusion} In this paper, we propose combining UNNs with a cGAN to reconstruct wireless channels within an area of the propagation environment. The channel is reconstructed based on prior-CSIs from UNNs and the location where we aim to reconstruct the channel. Our method performs better than state-of-the-art solutions, is much less complex, and requires only a few hours of training. Our results show that the cGAN performs well in a propagation scenario with mostly LOS channels. Future work relates to further evaluation of quantization effects, as well as advanced methods to improve the reconstructed CSI beyond this first ML inference. \section*{Acknowledgement} \small{This research was partly funded by the German Ministry of Education and Research (BMBF) under grant 16KIS1184 (FunKI).} \bibliographystyle{IEEEtran} \bibliography{myREFfile} \end{document}
\begin{document} \title{CaSCADE: Compressed Carrier and DOA Estimation} \author{\thanks{ This project has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No. 646804-ERC-COG-BNYQ, and from the Israel Science Foundation under Grant no. 335/14. Deborah Cohen is grateful to the Azrieli Foundation for the award of an Azrieli Fellowship.} Shahar Stein, Or Yair, Deborah Cohen, \emph{Student IEEE} and Yonina C. Eldar, \emph{Fellow IEEE} } \maketitle \IEEEpeerreviewmaketitle{} \begin{abstract} Spectrum sensing and direction of arrival (DOA) estimation have been thoroughly investigated, both separately and as a joint task. Estimating the support of a set of signals and their DOAs is crucial to many signal processing applications, such as Cognitive Radio (CR). A challenging scenario, faced by CRs, is that of multiband signals, composed of several narrowband transmissions spread over a wide spectrum each with unknown carrier frequencies and DOAs. The Nyquist rate of such signals is high and constitutes a bottleneck both in the analog and digital domains. To alleviate the sampling rate issue, several sub-Nyquist sampling methods, such as multicoset sampling or the modulated wideband converter (MWC), have been proposed in the context of spectrum sensing. In this work, we first suggest an alternative sub-Nyquist sampling and signal reconstruction method to the MWC, based on a uniform linear array (ULA). We then extend our approach to joint spectrum sensing and DOA estimation and propose the CompreSsed CArrier and DOA Estimation (CaSCADE) system, composed of an L-shaped array with two ULAs. In both cases, we derive perfect recovery conditions of the signal parameters (carrier frequencies and DOAs if relevant) and the signal itself and provide two reconstruction algorithms, one based on the ESPRIT method and the second on compressed sensing techniques. 
Both our joint carrier and DOA recovery algorithms overcome the well-known pairing issue between the two parameters. Simulations demonstrate that our alternative spectrum sensing system outperforms the MWC in terms of recovery error and design complexity, and show joint recovery of carrier frequencies and DOAs from our CaSCADE system's sub-Nyquist samples. \end{abstract} \section{Introduction} Both traditional tasks of spectrum sensing and direction of arrival (DOA) estimation have been thoroughly investigated in the literature. For the former, several sensing schemes have been proposed, such as energy detection \cite{Urkowitz_energy}, matched filtering \cite{MF1, MF2} and cyclostationary detection \cite{Gardner_review, Napo_review}, assuming known or identical DOAs. Well-known techniques for DOA estimation include MUSIC \cite{pisarenko73, schmidt86} and ESPRIT \cite{PAUL1986}. Here, the signal frequency support is typically known. However, many signal processing applications may require, or at least benefit from, the two combined, namely joint spectrum sensing and DOA estimation. Cognitive Radio (CR) \cite{Mitola, MitolaMag} is one such application, which aims at solving the spectrum scarcity issue by exploiting its sparsity. Spectral resources, traditionally allocated to licensed or primary users (PUs) by governmental organizations, are becoming critically scant but at the same time have been shown to be underutilized \cite{Study1, Study2}. These observations led to the idea of CR, which allows secondary users to opportunistically access the licensed frequency bands left vacant by their primary owners, thereby increasing spectral efficiency \cite{Mitola, Haykin}. Spectrum sensing is an essential task in the CR cycle \cite{cog, MagazineMishali}. Indeed, a CR should be able to constantly monitor the spectrum and detect the PUs' activity, reliably and fast \cite{cognitive1,cognitive2}. 
DOA recovery can enhance CR performance by allowing exploitation of vacant bands in space in addition to the frequency domain. The 2D-DOA problem, which requires finding two unknown angles for each transmission and pairing them, is considered in \cite{Jgu07, Jgu15}. The authors suggest a modification to the traditional ESPRIT \cite{PAUL1986}, which is used to estimate a single angle. However, this approach only allows the recovery of two angles and solves a separable problem. This cannot be directly extended to joint angle and frequency estimation, which is not separable. Joint DOA and carrier frequency estimation has been considered in \cite{vanderveen1,vanderveen2}, where the authors developed a joint angle-frequency estimation (JAFE) algorithm. JAFE is based on an extension of ESPRIT which allows for multiple parameters to be recovered. However, this method requires additional joint diagonalization of two matrices using iterative algorithms to pair between the carrier frequencies and the DOAs of the different transmissions. In \cite{bai}, the authors consider multiple interleaved sampling channels, with a fixed delay between consecutive channels. They propose a two-stage reconstruction method, where first the frequencies are recovered and then the DOAs are computed from the corresponding estimated carriers. The works described above all assume that the signal is sampled at least at its Nyquist rate, and do not consider signal reconstruction. Many modern applications deal with signals with high bandwidth and consequently high Nyquist rate. For instance, to increase the chance of finding unoccupied spectral bands, CRs have to sense a wide spectrum, leading to prohibitively high Nyquist rates. Moreover, such high sampling rates generate a large number of samples to process, affecting speed and power consumption. 
To overcome the rate bottleneck, several new sampling methods have recently been proposed \cite{Mishali09, Mishali10, MagazineMishali} that reduce the sampling rate in multiband settings below the Nyquist rate. The multicoset or interleaved approach adopted in \cite{Mishali09} suffers from practical issues, as described in \cite{Mishali10}. Specifically, the signal bandwidth can exceed the analog bandwidth of the low rate analog-to-digital converter (ADC) by orders of magnitude. Another practical issue stems from the time shift elements since it can be difficult to maintain accurate time delays between the ADCs at such high rates. The modulated wideband converter (MWC) \cite{Mishali10} was designed to overcome these issues. It consists of an analog front-end composed of several channels. In each channel, the analog wideband signal is mixed by a periodic function, low-pass filtered and sampled at a low rate. The MWC solves carrier frequency estimation and spectrum sensing from sub-Nyquist samples, but does not address DOA recovery. A few works have recently considered joint DOA and spectrum sensing of multiband signals from sub-Nyquist samples. In \cite{leus_DOA}, the authors consider both time and spatial compression by selecting receivers from a uniform linear array (ULA) and samples from the Nyquist grid. They exploit a mathematical relation between sub-Nyquist and Nyquist samples over a certain sensing time and recover the signal's power spectrum from the compressed samples. The frequency support and DOAs are then estimated by identifying peaks of the power spectrum, corresponding to each one of the uncorrelated transmissions. Since the power spectrum is computed over a finite sensing time, the frequency supports and angles are obtained on a grid defined by the number of samples. In \cite{kumar_doa}, an L-shaped array with two interleaved (or multicoset) channels, with a fixed delay between the two, samples the signal below the Nyquist rate. 
Then, the carrier frequencies and the DOAs are recovered from the samples. However, the pairing issue between the two is not discussed. Moreover, this delay-based approach suffers from the same drawbacks as the multicoset sampling scheme when it comes to practical implementation. In this work, we first consider spectrum sensing of a multiband signal whose transmissions are assumed to have known or identical DOAs, as in \cite{Mishali10}. For this scenario, we present an alternative sub-Nyquist sampling scheme based on a ULA of sensors. We then extend this scheme to the scenario where both the carrier frequencies and DOAs of the transmissions composing the input signal are unknown. In this case, we propose the CompreSsed CArrier and DOA Estimation (CaSCADE) system, composed of an L-shaped array, and perform joint DOA and carrier recovery from sub-Nyquist samples. In the first scenario, we consider a ULA where each sensor implements one channel of the MWC. This configuration has two main advantages over the MWC. First, it allows for a simpler design of the mixing functions which can be identical in all sensors. Second, the ULA based system outperforms the MWC in low signal to noise ratio (SNR) regimes. Since all the MWC channels belong to the same sensor, they are all affected by the same additive sensor noise. In the ULA based system, each channel has a different sensor with uncorrelated sensor noise between channels. This allows for noise averaging which increases the SNR. We present two approaches to recover the carrier frequencies of the transmissions composing the input signal. The first method is based on compressed sensing (CS) \cite{CSBook} algorithms and assumes that the carriers lie on a predefined grid. In the second technique, we drop the grid assumption and use the ESPRIT algorithm \cite{PAUL1986} to estimate the frequencies. Once these are recovered, we show how the signal itself can be reconstructed. 
We demonstrate that the minimal number of sensors required for perfect reconstruction in noiseless settings is identical for both recovery approaches and that our system achieves the minimal sampling rate derived in \cite{Mishali09}. Next, we extend our approach to joint spectrum sensing and DOA estimation from sub-Nyquist samples, using CaSCADE, which implements the modified MWC over an L-shaped array. Specifically, we consider several narrowband transmissions spread over a wide spectrum, impinging on an L-shaped ULA, each from a different direction. The array sensors are composed of an analog mixing front-end, implementing one channel of the MWC \cite{Mishali10}, as before. We then propose two approaches to jointly recover the carrier frequencies and DOAs of the transmissions. The first is based on CS techniques and allows recovery of both parameters assuming they lie on a predefined grid. The CS problem is formulated in such a way that no pairing issue arises between the carrier frequencies and their corresponding DOAs. The second approach, inspired by \cite{Jgu07, Jgu15}, extends the ESPRIT algorithm to the joint estimation of carriers and DOAs, while overcoming the pairing issue. Our 2D-ESPRIT algorithm can be applied to sub-Nyquist samples, as opposed to previous work, which only considered the Nyquist regime. Once the carriers and DOAs are recovered, the signal itself is reconstructed, similarly to the previous scenario. We provide sufficient conditions on our sampling system for perfect reconstruction of the carriers and DOAs, and of the signal itself. We compare our reconstruction algorithms to the Parallel Factor (PARAFAC) analysis method \cite{Hars94}, previously proposed for the 2D-DOA problem \cite{Zhang2011,Liu2010}. This approach solves the pairing issue between two estimated angles. However, it has only been applied in the Nyquist regime so far. 
In \cite{Stein2015}, we applied it on sub-Nyquist samples and extended it to the case where the second variable is a frequency rather than an additional angle. Last, for each scenario, we derive the minimal sampling rate allowing for perfect reconstruction of the signal parameters and the signal itself in noiseless settings. This paper is organized as follows. In Section~\ref{sec:model}, we formulate the signal model and spectrum sensing goal. Section~\ref{sec:samp_rec} presents the ULA-based sub-Nyquist sampling and reconstruction schemes. Numerical experiments for the spectrum sensing scenario, including comparison with the MWC system, are shown in Section~\ref{sec:exp}. The joint spectrum sensing and DOA estimation problem is considered in Section~\ref{sec:joint}. We present the CaSCADE system along with its sampling scheme and reconstruction techniques, and illustrate its performance in simulations. \section{Spectrum Sensing Problem Formulation} \label{sec:model} \subsection{Signal Model \label{sub:Signal-Model}} Let $u\left(t\right)$ be a complex-valued continuous-time signal, bandlimited to $\mathcal{F}=\left[-\frac{f_{\text{Nyq}}}{2},\frac{f_{\text{Nyq}}}{2}\right]$ and composed of up to $M$ uncorrelated transmissions $s_{i}\left(t\right),\,i\in\left\{ 1,2,...,M\right\} $. Each transmission $s_{i}\left(t\right)$ is modulated by a carrier frequency $f_{i}\in\mathbb{R}$, such that \begin{equation} u\left(t\right) = \sum_{i=1}^{M}s_{i}\left(t\right)e^{j2\pi f_{i}t}. \end{equation} Assume that $s_{i}\left(t\right)$ are bandlimited to $\mathcal{B}=\left[-\nicefrac{1}{2T},\nicefrac{1}{2T}\right]$ and disjoint, namely $\min_{i\neq j}\left\{ \left|f_{i}-f_{j}\right|\right\} >B$, where $B=|\mathcal{B}|$. 
Formally, the Fourier transform of $u(t)$, defined by \begin{equation} U(f)=\intop_{-\infty}^{\infty}u(t)e^{-j2\pi ft}dt=\sum_{i=1}^{M}S_{i}(f-f_{i}), \end{equation} where $S_{i}\left(f\right)$ is the Fourier transform of $s_{i}\left(t\right)$, is zero for every $f\notin\mathcal{F}$. All source signals are assumed to have an identical and known angle of arrival (AOA) $\theta\neq90^{\circ}$. A typical signal $u(t)$ is depicted in the frequency domain in Fig.~\ref{signals figure}(a). \begin{definition} The set $\mathcal{M}_{1}$ contains all signals $u\left(t\right)$ such that the support of the Fourier transform $U\left(f\right)$ is contained within a union of $M$ disjoint intervals in $\mathcal{F}$. Each of the bandwidths does not exceed $B$ and all the transmissions composing $u\left(t\right)$ have an identical and known AOA $\theta\neq90^{\circ}$. \end{definition} We wish to design a sampling and reconstruction system for signals from the model $\mathcal{M}_{1}$ which satisfies the following properties: \begin{enumerate} \item The system has no prior knowledge of the carrier frequencies. \item The sampling rate should be as low as possible. \end{enumerate} Let $\mathbf{s}(t)=\left[s_{1}(t),s_{2}(t),\cdots,s_{M}(t)\right]^{T}$ be the source signals vector, $\mathbf{S}(f)=\left[S_{1}(f),S_{2}(f),\cdots,S_{M}(f)\right]^{T}$ the signal Fourier transform vector, and $\boldsymbol{f}=\left[f_{1},f_{2},\cdots,f_{M}\right]^{T}$ the carrier frequencies vector. Our goal is to design a sampling and reconstruction system in order to recover $\boldsymbol{f}$ and $\mathbf{s}(t)$ from sub-Nyquist samples of $u(t)$. In the reconstruction phase, we address two separate objectives: \begin{enumerate} \item Frequency recovery, i.e., recovering only the carrier frequencies $\boldsymbol{f}$. \item Full spectrum recovery, i.e., recovering both the carrier frequencies $\boldsymbol{f}$ and the source signals $\mathbf{s}(t)$. 
\end{enumerate} \subsection{Multicoset Sampling and the MWC} It was previously shown in \cite{Mishali09} that if $MB<\frac{f_{\text{Nyq}}}{2}$, then the minimal sampling rate allowing blind reconstruction of $u\left(t\right)$ is $2MB$, namely twice the Landau rate \cite{Landau67}. Concrete algorithms for blind recovery achieving the minimal rate were developed in \cite{Mishali09} based on multicoset sampling and in \cite{Mishali10} based on the MWC. Unfortunately, the implementation of multicoset sampling is problematic due to the inherent analog bandwidth of the ADCs and the required synchronization between time shift elements \cite{Mishali10}. The MWC achieves the minimal sampling rate and can be implemented in practice \cite{Mishali10}. This system is composed of $N$ parallel channels. Each channel consists of an analog mixing front-end in which $u(t)$ is multiplied by a periodic mixing function $p_{n}(t)$, $1 \leq n \leq N$. This multiplication aliases the spectrum, such that each spectral band appears in baseband. We denote by $T_{p}$ the period of $p_n(t)$ and require $f_{p}=1/T_{p}\ge B$. The signal then goes through a low-pass filter (LPF) with cut-off frequency $f_{s}/2$ and is sampled at rate $f_{s} \geq f_p$. Finally, $u(t)$ is reconstructed from the low rate samples using CS techniques. An illustration of the MWC is shown in Fig.~\ref{fig:mwc}. A known difficulty of the MWC is choosing appropriate periodic functions $p_{n}(t)$ so that their Fourier coefficients fulfill CS requirements. In this work, we suggest an alternative implementation of the MWC, based on a ULA, which overcomes this difficulty and satisfies the properties described above. Moreover, our ULA-based system, shown in Fig.~\ref{ULA fig}, is more robust to noise, as we explain in Section~\ref{sec:exp} and demonstrate via simulations. In Section \ref{sec:joint}, we show how to use this system for DOA recovery. 
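As a sanity check of the per-channel processing just described, the following Python sketch simulates a single mixing channel on a pure tone. All numerical values are illustrative assumptions, not parameters from the paper: mixing with a periodic $\pm1$ sequence folds the tone into baseband, where the LPF retains it.

```python
import numpy as np

# One MWC-style channel on a synthetic tone (illustrative values).
f_nyq = 64.0      # Nyquist rate of the wideband input
f_p = 4.0         # mixing-sequence repetition rate, f_p = 1/T_p
f_s = f_p         # output rate (f_s = f_p for simplicity)
f_c = 13.0        # carrier of a narrowband "transmission"

n_fft = 4096
t = np.arange(n_fft) / f_nyq                   # Nyquist-rate time grid
u = np.exp(2j * np.pi * f_c * t)

# Deterministic +/-1 sequence with period T_p; for this particular
# choice all Fourier coefficients c_l are provably nonzero.
chips = np.ones(int(f_nyq / f_p))
chips[-1] = -1.0
p = np.tile(chips, n_fft // chips.size)

# Mixing superimposes f_p-shifted copies of U(f); the ideal LPF
# keeps only |f| <= f_s/2.
Y = np.fft.fftshift(np.fft.fft(u * p))
freqs = np.fft.fftshift(np.fft.fftfreq(n_fft, d=1.0 / f_nyq))
Y[np.abs(freqs) > f_s / 2] = 0.0

# The tone reappears at f_c folded into [-f_p/2, f_p/2].
f_peak = freqs[np.argmax(np.abs(Y))]
f_expected = f_c - np.round(f_c / f_p) * f_p   # 13 - 3*4 = 1
print(f_peak, f_expected)
```

The grid sizes are chosen so that both the tone and the mixing sequence are exactly periodic on the FFT window, which keeps the check free of spectral leakage.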
\begin{figure} \begin{centering} \fbox{ \includegraphics[width = 0.48\textwidth]{mwc_schema.png} } \end{centering} \protect\caption{MWC system. \label{fig:mwc}} \end{figure} \begin{figure*} \begin{centering} \fbox{ \begin{minipage}[t]{1.5\columnwidth} \begin{center} \includegraphics[width = 0.8\textwidth]{mixing.png} \par\end{center} \end{minipage}} \par\end{centering} \protect\caption{The different stages of the analog mixing front-end at the $n$th sensor. $\left(a\right)$ The input signal in the frequency domain $U\left(f\right)$ with $M=3$ different source signals. $\left(b\right)$ Each replicated source signal (after mixing). $\left(c\right)$ Replicated input signal $\tilde{Y}_n\left(f\right)$ (after mixing). $\left(d\right)$ Baseband signal $Y_n\left(f\right)$ after LPF.\label{signals figure}} \end{figure*} \begin{table} \centering \protect\caption{Notation} \begin{tabular}{|c|c|} \hline {\tiny Symbol} & {\tiny Interpretation} \tabularnewline \hline \hline {\tiny $A\left(f\right)$} & {\tiny Fourier transform of $a(t)$} \tabularnewline \hline {\tiny $A\left(e^{j2\pi fT}\right)$} & {\tiny DTFT of $a[n]$} \tabularnewline \hline {\tiny $\mathbf{a}$, $\mathbf{A}$} & {\tiny vector, matrix (capital letter)} \tabularnewline \hline {\tiny $c$} & {\tiny the speed of light} \tabularnewline \hline {\tiny $\angle\left(\cdot\right)$} & {\tiny the angle of $\left(\cdot\right)$, $\angle\left(\cdot\right)\in(-\pi,\pi]$} \tabularnewline \hline {\tiny $\mathbf{A}^{H}$} & {\tiny the conjugate-transpose (Hermitian) of $\mathbf{A}$} \tabularnewline \hline {\tiny $\mathbf{A}^{\dagger}$} & {\tiny the (Moore-Penrose) pseudoinverse of $\mathbf{A}$, i.e. 
$\mathbf{A}^{\dagger}=\left(\mathbf{A}^{H}\mathbf{A}\right)^{-1}\mathbf{A}^{H}$} \tabularnewline \hline \end{tabular} \end{table} \section{ULA Based MWC} \label{sec:samp_rec} \subsection{System Description \label{sec:Sampling-Scheme}} Our sensing system consists of a ULA composed of $N$ sensors, with two adjacent sensors separated by a distance $d$, such that $d<\frac{c}{|\cos(\theta)|f_{\text{Nyq}}}$, where $c$ is the speed of light. All sensors have the same sampling pattern implementing a single channel of the MWC; the received signal is multiplied by a periodic function $p(t)$ with period $T_{p}=1/f_{p}$, low-pass filtered with a filter that has cut-off frequency $f_{s}/2$ and sampled at the low rate $f_{s}$. For simplicity, we choose $f_{s}=f_{p}$. The system is illustrated in Fig.~\ref{ULA fig}. The only requirement on $p(t)$ is that none of its Fourier series coefficients within the signal's Nyquist bandwidth are zero. In the next section, we show how we can recover both the carrier frequencies $\boldsymbol{f}$ and $\mathbf{s}(t)$, or alternatively the signal itself, from the samples at the output of Fig.~\ref{ULA fig}. We demonstrate that the minimal number of sensors required by both our reconstruction methods is $N=2M$, with each sensor sampling at the minimal rate of $f_{s}=B$ to allow for perfect signal recovery. This leads to a minimal sampling rate of $2MB$, as shown in \cite{Mishali09}, which is assumed to be less than $f_{\text{Nyq}}$. With high probability, the minimal number of sensors reduces to $M+1$. 
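To make these design constraints concrete, here is a small numeric illustration of the spacing bound on $d$ and of the resulting aggregate sampling rates. The figures are hypothetical, chosen only for this example, not taken from the paper's experiments.

```python
import math

# Hypothetical design point (illustrative numbers only).
c = 3e8            # speed of light [m/s]
f_nyq = 10e9       # Nyquist rate of the wideband input [Hz]
theta_deg = 45.0   # known AOA [degrees]
M, B = 3, 50e6     # number of transmissions, max bandwidth [Hz]

# Spacing must satisfy d < c / (|cos(theta)| * f_nyq).
d_max = c / (abs(math.cos(math.radians(theta_deg))) * f_nyq)
d = 0.9 * d_max    # any spacing strictly below the bound is valid

# Worst case: N = 2M sensors, each sampling at f_s = B; with high
# probability N = M + 1 sensors suffice.
rate_worst = 2 * M * B
rate_typical = (M + 1) * B
print(d_max, rate_worst / f_nyq, rate_typical / f_nyq)
```

For these numbers the total rate is a few percent of the Nyquist rate, which is the regime the system targets.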
\begin{figure*} \centering{} \fbox{ \begin{minipage}[t]{1.5\columnwidth} \begin{center} $$\xymatrix@C=2pc@R=0.7pc{ & \ar@<1ex>[dr]\ar@<-1ex>@{.>}[dr]^<<[@!-41]{s_{1,...,M}}\ar@{-->}[dr] \\ \ar@<1ex>[ddrr]\ar@<-1ex>@{.>}[ddrr]^<[@!+47]{\overset{\overset{\text{wave}}{\text{front}} }{........................}}\ar@{-->}[ddrr] & & *+[o][F]{\sum}\ar[r]\ar@{<.>}[dd]|{d} & \overset{\underset{\downarrow}{p(t)}}{\bigotimes}\ar[r] & *+[F]{\underset{f_s}{LPF}}\ar[r]_{f_s}|{\underset{}{\nswfilledspoon}} & y_1[k] & \text{ sensor 1} \\ \\ & & *+[o][F]{\sum}\ar[r] & \overset{\underset{\downarrow}{p(t)}}{\bigotimes}\ar[r] & *+[F]{\underset{f_s}{LPF}}\ar[r]_{f_s}|{\underset{}{\nswfilledspoon}} & y_2[k] & \text{ sensor 2} \\ & \ar@<1ex>[dr]\ar@<-1ex>@{.>}[dr]^<<[@!-43]{s_{1,...,M}}\ar@{-->}[dr] & \vdots & & & \vdots\\ & & *+[o][F]{\sum}\ar[r] & \overset{\underset{\downarrow}{p(t)}}{\bigotimes}\ar[r] & *+[F]{\underset{f_s}{LPF}}\ar[r]_{f_s}|{\underset{}{\nswfilledspoon}} & y_N[k] & \text{ sensor N} \\ }$$ \par\end{center} \end{minipage}}\protect\caption{ULA configuration with $N$ sensors, with distance $d$ between two adjacent sensors. Each sensor includes an analog front-end composed of a mixer with the same periodic function $p\left(t\right)$, a LPF and a sampler, at rate $f_s$.\label{ULA fig} } \end{figure*} In the remainder of this section, we describe our ULA based sampling scheme and derive conditions for perfect recovery of the carrier frequencies $\boldsymbol{f}$ and the transmissions $\mathbf{s}(t)$. We then provide concrete recovery algorithms. \subsection{Frequency Domain Analysis} \label{sec:der1} We start by deriving the relation between the sample sequences from the $n$th sensor and the unknown transmissions $s_{i}(t)$ and corresponding carrier frequencies $\boldsymbol{f}$. 
To this end, we introduce the following definitions: \begin{equation} \mathcal{F}_{p}\triangleq\left[-\nicefrac{f_{p}}{2},\nicefrac{f_{p}}{2}\right],\quad\mathcal{F}_{s}\triangleq\left[-\nicefrac{f_{s}}{2},\nicefrac{f_{s}}{2}\right]. \end{equation} Consider the received signal $u_{n}\left(t\right)$ at the $n$th sensor of the ULA \begin{equation} u_{n}(t)=\sum_{i=1}^{M}s_{i}(t+\tau_{n})e^{j2\pi f_{i}(t+\tau_{n})}\approx\sum_{i=1}^{M}s_{i}(t)e^{j2\pi f_{i}(t+\tau_{n})},\label{eq:u_n(t)} \end{equation} where \begin{equation} \label{eq:delay} \tau_{n}=\frac{dn}{c}\cos\left(\theta\right) \end{equation} is the accumulated time delay at the $n$th sensor with respect to the first sensor. The approximation in (\ref{eq:u_n(t)}) stems from the narrowband assumption on the transmissions $s_{i}\left(t\right)$. The Fourier transform of the received signal $u_{n}\left(t\right)$ is then given by \begin{equation} U_{n}\left(f\right)=\sum_{i=1}^{M}S_{i}\left(f-f_{i}\right)e^{j2\pi f_{i}\tau_{n}}.\label{eq:Xi Fourier} \end{equation} In each sensor, the received signal is first mixed with the periodic function $p(t)$ prior to filtering and sampling. 
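The narrowband approximation in (\ref{eq:u_n(t)}) is easy to check numerically. The sketch below, with illustrative hypothetical parameters, confirms that the envelope $s_{i}(t)$ barely changes over the array delay $\tau_{n}$, while the carrier phase $2\pi f_{i}\tau_{n}$ is far from negligible.

```python
import numpy as np

# Numerical check of the narrowband approximation (illustrative values).
c = 3e8
B = 50e6                     # transmission bandwidth [Hz]
f_i = 2.4e9                  # carrier [Hz]
d, theta = 0.01, np.deg2rad(45.0)
tau = d * np.cos(theta) / c  # delay at the second sensor [s]

t = np.linspace(-2e-7, 2e-7, 20001)
s = np.sinc(B * t)           # a B-bandlimited envelope

# Relative change of the envelope over the delay tau: O(B*tau) << 1.
env_shift = np.max(np.abs(np.sinc(B * (t + tau)) - s)) / np.max(np.abs(s))
# Carrier phase advance over the same delay: O(f_i*tau), not small.
phase = 2 * np.pi * f_i * tau
print(env_shift, phase)
```

Because $B\tau_{n}\ll1$ while $f_{i}\tau_{n}$ is not small, dropping the envelope delay but keeping the carrier phase term is justified.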
Since $p(t)$ is periodic with period $T_{p}=1/f_{p}$, it can be represented by its Fourier series \begin{equation} p(t)=\sum_{l=-\infty}^{\infty}c_{l}e^{j2\pi lf_{p}t},\label{eq:p(t)} \end{equation} where \begin{equation} c_{l}=\frac{1}{T_{p}}\intop_{0}^{T_{p}}p(t)e^{-j2\pi lf_{p}t}\mathrm{d}t.\label{eq:c_coeff} \end{equation} The Fourier transform of the analog multiplication $\tilde{y}_{n}(t)=u_{n}(t)p(t)$ is evaluated as \begin{eqnarray} \tilde{Y}_{n}\left(f\right) & = & \intop_{-\infty}^{\infty}u_{n}\left(t\right)p\left(t\right)e^{-j2\pi ft}\mathrm{d}t\nonumber \\ & = & \intop_{-\infty}^{\infty}u_{n}\left(t\right)\sum_{l=-\infty}^{\infty}c_{l}e^{j2\pi f_{p}lt}e^{-j2\pi ft}\mathrm{d}t\nonumber \\ & = & \sum_{l=-\infty}^{\infty}c_{l}\intop_{-\infty}^{\infty}u_{n}\left(t\right)e^{-j2\pi t(f-l\cdot f_{p})}\mathrm{d}t\nonumber \\ & = & \sum_{l=-\infty}^{\infty}c_{l}U_{n}\left(f-lf_{p}\right).\label{eq:mixed signal} \end{eqnarray} The mixed signal $\tilde{Y}_{n}\left(f\right)$ is thus a linear combination of $f_{p}$-shifted and $c_{l}$-scaled copies of $U_{n}\left(f\right)$. Since $U\left(f\right)=0,\,\forall f\notin\mathcal{F}$, the sum in (\ref{eq:mixed signal}) contains at most $\left\lceil \frac{f_{\text{Nyq}}}{f_{p}}\right\rceil $ nonzero terms for each $f$. Fig. \ref{signals figure}(b)-(c) depicts each transmission and the resulting signal after mixing, respectively. Substituting (\ref{eq:Xi Fourier}) into (\ref{eq:mixed signal}), we have \begin{eqnarray*} \tilde{Y}_{n}\left(f\right) & = & \sum_{l=-\infty}^{\infty}c_{l}\sum_{i=1}^{M}S_{i}\left(f-f_{i}-lf_{p}\right)e^{j2\pi f_{i}\tau_{n}}. \end{eqnarray*} Denote by $h(t)$ and $H(f)$ the impulse and frequency responses of an ideal LPF with cut-off frequency $f_{s}/2$, respectively. 
After filtering $\tilde{y}_{n}(t)$ with $h(t)$, we have \begin{eqnarray*} Y_{n}\left(f\right) & = & \tilde{Y}_{n}\left(f\right) H\left(f\right)\\ & = & \begin{cases} \sum_{l=-\infty}^{\infty}c_{l}\sum_{i=1}^{M}S_{i}\left(f-f_{i}-lf_{p}\right)e^{j2\pi f_{i}\tau_{n}}, & f\in\mathcal{F}_{s}\\ 0, & f\notin\mathcal{F}_{s}. \end{cases} \end{eqnarray*} Note that $Y_{n}\left(f\right)$ only contains frequencies in the interval $\mathcal{F}_{s}$, due to the lowpass operation. Therefore, it is composed of a finite number of aliases of $U_{n}\left(f\right)$. Consequently, we can write \begin{eqnarray*} Y_{n}\left(f\right) & = & \sum_{l=-L_{0}}^{L_{0}}c_{l} \sum_{i=1}^{M}S_{i}\left(f-f_{i}-lf_{p}\right)e^{j2\pi f_{i}\tau_{n}}\\ & = & \sum_{i=1}^{M}e^{j2\pi f_{i}\tau_{n}}\sum_{l=-L_{0}}^{L_{0}}c_{l}S_{i}\left(f-f_{i}-lf_{p}\right) \\ & = & \sum_{i=1}^{M}\tilde{S}_{i}\left(f\right)e^{j2\pi f_{i}\tau_{n}}, \end{eqnarray*} where $L_{0}$ is the smallest integer such that the sum contains all nonzero contributions, i.e., $L_{0}=\left\lceil \frac{f_{\text{Nyq}}}{2f_{p}}\right\rceil $, and \begin{equation} \tilde{S}_{i} (f) \triangleq \sum_{l=-L_{0}}^{L_{0}}c_{l}S_{i} (f-f_{i}-lf_{p}).\label{eq:S_tidle} \end{equation} The corresponding $Y_n\left(f\right)$ after filtering is depicted in Fig.~\ref{signals figure}(d). Note that in the interval $\mathcal{F}_{p}$, $\tilde{S}_{i}\left(f\right)$ is a cyclically shifted and scaled (by known factors $\left\{ c_{l}\right\} $) version of $S_{i}\left(f\right)$, as shown in Fig.~\ref{s_F and omega f}. \begin{figure} \begin{centering} \fbox{ \begin{minipage}[t]{1\columnwidth} \begin{center} \includegraphics[width = 0.5\textwidth]{S_tllde2.png} \par\end{center} \end{minipage}} \par\end{centering} \protect\caption{The left pane shows the original source signals at baseband (before modulation). 
The right pane presents the output signals at baseband $\tilde{S}\left(f\right)$ after modulation, mixing and filtering.\label{s_F and omega f}} \end{figure} After sampling, the discrete-time Fourier transform (DTFT) of the $n$th sequence $x_{n}\left[k\right]\triangleq y_{n}\left(kT_{s}\right)$ is expressed as \begin{equation} X_{n}\left(e^{j2\pi fT_{s}}\right)=\sum_{i=1}^{M}W_{i}\left(e^{j2\pi fT_{s}}\right)e^{j2\pi f_{i}\tau_{n}},\quad f\in\mathcal{F}_{s},\label{eq:dtft_rel} \end{equation} where we define $w_{i}\left[k\right]\triangleq\tilde{s}_{i}\left(kT_{s}\right)$ and $W_{i}\left(e^{j2\pi fT_{s}}\right)=\text{DTFT}\left\{ w_{i}\left[k\right]\right\} $. It is convenient to write (\ref{eq:dtft_rel}) in matrix form as \begin{equation} \label{eq:The Equation} \mathbf{X}\left(f\right)=\mathbf{A} \mathbf{W}\left(f\right), \quad f \in \mathcal{F}_s. \end{equation} Here, $\mathbf{X}\left(f\right)$ is of length $N$ with $n$th element $X_{n}\left(f\right)=X_{n}\left(e^{j2\pi fT_{s}}\right)$, the unknown vector $\mathbf{W}\left(f\right)$ is of length $M$, with its $i$th entry $W_{i}\left(f\right)=W_{i}\left(e^{j2\pi fT_{s}}\right)$ and the matrix $\mathbf{A}$ depends on the unknown carrier frequencies vector $\boldsymbol{f}$, and is defined by \begin{equation} \label{eq:A} \mathbf{A}= \left(\begin{matrix}e^{j2\pi f_{1}\tau_{1}} & \cdots & e^{j2\pi f_{M}\tau_{1}}\\ \vdots & & \vdots\\ e^{j2\pi f_{1}\tau_{N}} & \cdots & e^{j2\pi f_{M}\tau_{N}} \end{matrix}\right). \end{equation} In the time domain, we have \begin{equation} \mathbf{x}[k]=\mathbf{A}\mathbf{w}[k],\quad k\in\mathbb{Z}, \end{equation} where $\mathbf{x}[k]$ has $n$th element $x_{n}[k]$ and $\mathbf{w}[k]$ is a vector of length $M$ with $i$th element $w_{i}[k]$. 
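For concreteness, the steering matrix $\mathbf{A}$ of (\ref{eq:A}) can be formed directly from candidate carriers. The following sketch, with hypothetical parameters, builds $\mathbf{A}$ and confirms that its Vandermonde structure gives full column rank when the carriers are distinct and the spacing bound on $d$ holds.

```python
import numpy as np

# Build the steering matrix A for hypothetical carriers and check
# that it has full column rank (Vandermonde structure).
c = 3e8
f_nyq = 10e9
theta = np.deg2rad(60.0)
d = 0.9 * c / (abs(np.cos(theta)) * f_nyq)       # spacing below the bound
N = 5                                            # number of sensors
freqs = np.array([1.1e9, 2.7e9, 4.2e9])          # M = 3 distinct carriers

tau = d * np.arange(1, N + 1) * np.cos(theta) / c  # delays tau_n
A = np.exp(2j * np.pi * np.outer(tau, freqs))      # A[n, i] = e^{j2*pi*f_i*tau_n}

print(np.linalg.matrix_rank(A))
```

Each column is a geometric progression in $e^{j2\pi f_{i}d\cos(\theta)/c}$; the spacing bound guarantees these ratios are distinct, hence rank $M$.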
In the next section, we derive sufficient conditions for (\ref{eq:The Equation}) to have a unique solution, namely for perfect recovery of the carrier frequencies $\boldsymbol{f}$ and the transmissions $\mathbf{s}\left(t\right)$ from the low rate samples $\mathbf{x}\left[k\right]$. \subsection{Choice of Parameters} In order to enable perfect blind reconstruction of both the carrier frequencies $\boldsymbol{f}$ and transmissions $\mathbf{s}\left(t\right)$ in noiseless settings, we first require (\ref{eq:The Equation}) to have a unique solution. In addition, we need to ensure that $\mathbf{s}\left(t\right)$ can be uniquely recovered from $\mathbf{w}[k], k \in \mathbb{Z}$. Theorem \ref{thm: the equation uniquness} presents sufficient conditions for (\ref{eq:The Equation}) to have a unique solution. Then, Theorem \ref{prop:signals-recovery:-given} specifies sufficient conditions for perfect recovery of $\mathbf{s}\left(t\right)$. \subsubsection{Carrier Frequency Recovery} We first consider sufficient conditions on the ULA configuration that allow for perfect reconstruction of the carrier frequencies $\boldsymbol{f}$. \begin{thm} \label{thm: the equation uniquness} Let $u\left(t\right)$ be an arbitrary signal in $\mathcal{M}_{1}$ and consider a ULA with spacing $d<\frac{c}{|\cos\left(\theta\right)| f_{\text{Nyq}}}$ and steering matrix $\mathbf{A}$. If: \begin{itemize} \item (c1) $N>2M-\dim\left(\mbox{span}\left(\mathbf{w}\right)\right)$ \item (c2) $\dim\left(\mbox{span}\left(\mathbf{w}\right)\right)\geq1$, \end{itemize} then (\ref{eq:The Equation}) has a unique solution $\left(\boldsymbol{f},\mathbf{w}\right)$. \end{thm} \begin{IEEEproof} From the assumption of disjoint transmissions, we have $f_{i}\neq f_{j}$, for $i\neq j$. 
Thus, if $d<\frac{c}{|\cos\left(\theta\right)| f_{\text{Nyq}}}$, then it holds that $d\neq\frac{ck}{\left| \cos\left(\theta\right) \right|}\cdot\frac{1}{\left|f_{i}-f_{j}\right|},\,\forall k\in\mathbb{Z},\,\forall i\neq j$, and $e^{j2\pi f_{i} \tau_n} \neq e^{j2\pi f_{j} \tau_n}$ for $1 \leq n \leq N-1$, with $\tau_n$ defined in (\ref{eq:delay}). It follows that $\mathbf{A}$ is a Vandermonde matrix with $M \leq N$, and thus $\mbox{rank}\left(\mathbf{A}\right)=M$. Since $d<\frac{c}{|\cos\left(\theta\right)| f_{\text{Nyq}}}$, we have that $2\pi f_{i} \tau_1 \in(-\pi,\pi]$. The proof then follows directly from Proposition 2 in \cite{Kfir2010}. \begin{prop} [Proposition 2, \cite{Kfir2010}] \label{prop:kfir} If $\left(\boldsymbol{f},\mathbf{w}\right)$ is a solution to (\ref{eq:The Equation}) satisfying \begin{equation*} N>2M-\dim\left(\mbox{span}\left(\mathbf{w}\right)\right), \quad \dim\left(\mbox{span}\left(\mathbf{w}\right)\right)\geq1, \end{equation*} then $\left(\boldsymbol{f},\mathbf{w}\right)$ is the unique solution of (\ref{eq:The Equation}).\end{prop} \vspace{-1.75em} \end{IEEEproof} Note that $\dim\left(\mbox{span}\left(\mathbf{w}\right)\right) < 1$ iff $u(t) \equiv 0$, that is, the received signal does not contain any transmission. \subsubsection{Signal Recovery} While Theorem \ref{thm: the equation uniquness} guarantees the uniqueness of $\left(\boldsymbol{f},\mathbf{w}\right)$, some additional conditions need to be imposed in order to uniquely derive $\mathbf{s}\left(t\right)$ from $\mathbf{w}$, as $\mathbf{w}$ is a sampled permutation of $\mathbf{s}\left(t\right)$. Clearly, to achieve perfect reconstruction of $\mathbf{s}\left(t\right)$, the preprocessing of the signal (i.e., mixing with $p(t)$ and filtering with $h\left(t\right)$) must not cause any loss of information. 
The following lemma presents conditions on $p(t)$ and $H(f)$ so that each entry of the processed signal vector $\tilde{\mathbf{S}}(f)$ is a cyclic shift (up to scaling by known factors $\{c_{l}\}$) of the matching entry of the original source signal vector $\mathbf{S}(f)$, as shown in Fig. \ref{s_F and omega f}. In particular, the transformation between $\mathbf{S}(f)$ and $\tilde{\mathbf{S}}(f)$ should be invertible so that the former can be recovered from the latter. \begin{lem} \label{lem:singal equality} If $f_s \geq f_{p}\geq B$ and $c_{l}\neq0$ for all $l\in\left\{ -L_{0},...,L_{0}\right\} $, where $c_{l}$ is defined in (\ref{eq:c_coeff}), then \begin{equation} \forall f'\in\mathcal{F}_{p},\exists k:\,\tilde{S}_{i}\left(f'\right)=c_{k}S_{i}\left(f'-f_{i}-kf_{p}\right).\label{eq:uni1} \end{equation} \end{lem} \begin{IEEEproof} Consider the $i$th transmission. The output of the LPF $H(f)$, namely $\tilde{S}_{i}\left(f\right)$, is given by \begin{equation} \tilde{S}_{i}\left(f\right)=\begin{cases} \sum_{l=-L_{0}}^{L_{0}}c_{l}S_{i}\left(f-f_{i}-lf_{p}\right), & f\in\mathcal{F}_{p}\\ 0, & f\notin\mathcal{F}_{p}. \end{cases}\label{eq:sum_lpf} \end{equation} Since $f_{p}\geq B$, the sum in (\ref{eq:sum_lpf}) is over disjoint bands and only one of its elements is nonzero for each $f$. Equation (\ref{eq:uni1}) is true for the $k$ that satisfies $f'-f_{i}-kf_{p}\in\mathcal{F}_{p}$, since for any other $k'\neq k$, $f'-f_{i}-k'f_{p}\notin\mathcal{F}_{p}$ and $S_{i}\left(f'-f_{i}-k'f_{p}\right)=0$. \end{IEEEproof} Moreover, if $f_s \geq f_p \geq B$, then the system sampling rate satisfies the Nyquist rate of $\tilde{S}_{i}\left(f\right)$, which means that $\mathbf{s}\left(t\right)$ can be perfectly recovered from $\mathbf{w}\left[k\right]$ and it holds that \begin{equation} W_{i}\left(e^{j2\pi fT_{s}}\right)=\tilde{S}_{i}\left(f\right),\qquad f\in\mathcal{F}_{s}. 
\end{equation} Theorem \ref{prop:signals-recovery:-given} summarizes sufficient conditions for perfect blind reconstruction of $\mathbf{s}(t)$ from the low rate samples $\mathbf{x}[k]$. \begin{thm} \label{prop:signals-recovery:-given} Let $u(t)$ and the ULA be as in Theorem \ref{thm: the equation uniquness} and let $\left(\boldsymbol{f},\mathbf{w}\right)$ be the unique solution of (\ref{eq:The Equation}). If: \label{enu: signals recovery cond1} \begin{itemize} \item {(c1)} $c_{l}\neq0$ for all $l\in\left\{ -L_{0},...,L_{0}\right\} $, where $c_{l}$ is defined in (\ref{eq:c_coeff}) \item {(c2)} $f_{s} \geq f_{p}\geq B$, \label{enu:signals recovery cond2} \end{itemize} then $\left\{ \hat{s}_{i}\left(t\right)\right\} _{i=1}^{M}$ can be uniquely recovered from $\mathbf{x}\left[k\right]$. \end{thm} \begin{IEEEproof} Consider the $i$th transmission and let $f'\in\mathcal{B}\subseteq\mathcal{F}_{p}$. Since $s_{i}(t)$ is bandlimited to $\mathcal{B}$, it holds that \begin{equation} W_{i}\left(e^{j2\pi f'T_{s}}\right)=\tilde{S}_{i}\left(f'\right)=c_{l_{a}} S_{i}\left(f'-f_{i}-l_{a}\cdot f_{p}\right), \end{equation} where the last equality follows from Lemma \ref{lem:singal equality}. Since $c_{l_{a}}\neq0$, we have \begin{equation} S_{i}\left(f'-f_{i}-l_{a}\cdot f_{p}\right)=\frac{1}{c_{l_{a}}}W_{i}\left(e^{j2\pi f'T_{s}}\right), \end{equation} or, after a change of variables, \begin{equation} S_{i}\left(f'\right)=\frac{1}{c_{l_{a}}}W_{i}\left(e^{j2\pi\left(f'+f_{i}+l_{a}\cdot f_{p}\right)T_{s}}\right),\label{eq:sVsw} \end{equation} where $l_{a}$ is given by \begin{equation} \label{eq:la} l_{a}=\left\lfloor \frac{f_{i}+f'+f_p/2}{f_{p}}\right\rfloor, \end{equation} completing the proof. \end{IEEEproof} Note that $l_a$, defined in (\ref{eq:la}), can only be the index of one of the two $f_p$-bins that may overlap with the $i$th transmission's support. 
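The index $l_{a}$ in (\ref{eq:la}) admits a quick numerical check with arbitrary illustrative frequencies: it is exactly the integer that folds $f'+f_{i}$ back into $\mathcal{F}_{p}$.

```python
import numpy as np

# Verify that l_a = floor((f_i + f' + f_p/2) / f_p) folds f' + f_i
# into F_p = [-f_p/2, f_p/2) (hypothetical frequencies).
f_p = 100e6
rng = np.random.default_rng(1)
f_i = rng.uniform(-4e9, 4e9, 100)                # random carriers
f_prime = rng.uniform(-f_p / 2, f_p / 2, 100)    # baseband frequencies

l_a = np.floor((f_i + f_prime + f_p / 2) / f_p)
folded = f_i + f_prime - l_a * f_p
print(folded.min(), folded.max())
```

This is the standard round-to-nearest-bin identity: $l_{a}\leq(f_{i}+f'+f_{p}/2)/f_{p}<l_{a}+1$ implies $f_{i}+f'-l_{a}f_{p}\in[-f_{p}/2,f_{p}/2)$.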
\subsubsection{Minimal Sampling Rate} It was previously proved in \cite{Mishali09} that the minimal sampling rate for perfect blind reconstruction of a signal from the model \emph{$\mathcal{M}_{1}$} is $2MB$. The sampling rate in our ULA-based scheme is governed by $B$ and $\dim\left(\mbox{span}\left(\mathbf{w}\right)\right)$, where $1 \leq \dim\left(\mbox{span}\left(\mathbf{w}\right)\right) \leq M$. Therefore, in the worst case, the minimal sampling rate that can be achieved is $2MB$, in accordance with \cite{Mishali09}. With high probability, $\dim\left(\mbox{span}\left(\mathbf{w}\right)\right) = M$ and the minimal rate becomes as low as $\left(M+1\right)B$. If our sole objective is carrier frequency recovery, then we can further reduce the sampling rate of each channel $f_s$ below $B$. However, in this case, the signal $W_{i}\left(e^{j2\pi fT_{s}}\right)$ is an aliased version of $\tilde{S}_{i}\left(f\right)$. A possible, though unlikely, consequence of the aliasing is that for some transmission, the folded versions of $\tilde{S}_{i}\left(f\right)$ cancel each other and result in $W_{i}\left(e^{j2\pi fT_{s}}\right)\equiv0$. In such a case, $W_{i}\left(e^{j2\pi fT_{s}}\right)$ and the corresponding $i$th column of the steering matrix will not appear in (\ref{eq:The Equation}). Nevertheless, this unlikely scenario will not affect the recovery of the other signals' carrier frequencies. Carrier frequency recovery is possible for each $s_{i}(t)$ such that $W_{i}\left(e^{j2\pi fT_{s}}\right)\neq0$, even if $W_{i}\left(e^{j2\pi fT_{s}}\right)$ has suffered a loss of information due to folding. \subsection{Reconstruction Methods} \label{sec:rec1} In this section, we propose two carrier frequency reconstruction methods that solve (\ref{eq:The Equation}). The first follows from the ESPRIT algorithm \cite{PAUL1986}, while the second is based on CS \cite{CSBook}. 
Once the carriers are estimated, one can recover the transmissions $\mathbf{s}(t)$ by inverting (\ref{eq:The Equation}) and substituting the recovered $W_i(e^{j2 \pi fT_s})$ into (\ref{eq:sVsw}). \subsubsection{ESPRIT Approach} \label{sec:prob1_rec} One practical method to obtain a solution $\left(\hat{\boldsymbol{f}},\hat{\mathbf{w}}\right)$ is to apply the ESPRIT algorithm \cite{PAUL1986} to the measurement set $\mathbf{x}[k]$, as in \cite{Kfir2010} (Section C). We can either assume that the number of source signals $M$ is known or first estimate it, for example using the minimum description length (MDL) algorithm \cite{PAUL1986}. One of the conditions needed to use ESPRIT is that the correlation matrix $\mathbf{R}_{w} = \sum_{k \in \mathbb{Z}} \mathbf{w}[k]\mathbf{w}^{H}[k]$ is positive definite. From \cite{Kfir2010} (Proposition 3), if $\dim\left(\mbox{span}\left(\mathbf{w}\right)\right)=M$, then $\mathbf{R}_{w}\succ0$. Therefore, the authors in \cite{Kfir2010} distinguish between two cases. The first, where $\mathbf{R}_{w}\succ0$, is referred to as the uncorrelated case. Here, ESPRIT can be directly applied to $\mathbf{R} = \sum_{k \in \mathbb{Z}} \mathbf{x}[k]\mathbf{x}^{H}[k]$. The main steps of ESPRIT are summarized in Algorithm \ref{algo:esprit}. In the algorithm description, $\mbox{eig}\left(\mathbf{\Psi}\right)$ is a vector of the eigenvalues of $\mathbf{\Psi}$ and the correlation matrix $\mathbf{R}$ is estimated as \begin{equation} \label{eq:Rest} \mathbf{R}=\sum_{k=1}^{Q}\mathbf{x}[k] \mathbf{x}^H[k], \end{equation} where $Q$ is the number of snapshots used for the averaging and $\mathbf{x}[k]$ is the vector of samples from the $k$th snapshot. 
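Before the formal listing, a minimal noiseless sketch of the row-shifted (textbook) ESPRIT steps on synthetic low-rate samples $\mathbf{x}[k]=\mathbf{A}\mathbf{w}[k]$ may be helpful. All parameters here are hypothetical and chosen only for illustration.

```python
import numpy as np

# Noiseless ESPRIT sketch on synthetic samples (illustrative values).
rng = np.random.default_rng(0)
c, f_nyq = 3e8, 10e9
theta = np.deg2rad(60.0)
d = 0.9 * c / (abs(np.cos(theta)) * f_nyq)
true_f = np.array([1.1e9, 2.7e9, 4.2e9])       # unknown carriers
N, M, Q = 6, 3, 200                            # sensors, sources, snapshots

tau = d * np.arange(1, N + 1) * np.cos(theta) / c
A = np.exp(2j * np.pi * np.outer(tau, true_f))
W = rng.standard_normal((M, Q)) + 1j * rng.standard_normal((M, Q))
X = A @ W                                      # x[k] = A w[k], stacked

R = X @ X.conj().T / Q                         # sample covariance
eigval, eigvec = np.linalg.eigh(R)
Us = eigvec[:, np.argsort(eigval)[-M:]]        # signal subspace (top M)
U1, U2 = Us[:-1], Us[1:]                       # row-shifted submatrices
Psi = np.linalg.pinv(U1) @ U2                  # rotational invariance
phases = np.angle(np.linalg.eigvals(Psi))      # 2*pi*f_i*d*cos(theta)/c
f_hat = np.sort(phases * c / (2 * np.pi * d * np.cos(theta)))
print(f_hat / 1e9)
```

In the noiseless case the recovered carriers match the true ones to machine precision; with noise, the same steps apply to the estimated covariance.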
\begin{algorithm}[H] \textbf{\uline{Input:}}\textbf{ } \begin{itemize} \item $Q$ snapshots of the sensor measurements $\mathbf{x}[k]$ \end{itemize} \textbf{\uline{Output:}} \begin{itemize} \item $\hat{\boldsymbol{f}}$ - estimated carrier frequencies \end{itemize} \textbf{\uline{Algorithm:}} \begin{enumerate} \item Estimate the sample covariance $\mathbf{R}$ from (\ref{eq:Rest}) \item Decompose $\mathbf{R}$ using the singular value decomposition: $\mathbf{U},\mathbf{S},\mathbf{V}=\mbox{svd}\left(\mathbf{R}\right)$ \item Extract the signal subspace: $\mathbf{U}_{s}=\left[\mathbf{U}^{1},...,\mathbf{U}^{M}\right]$ \item Define $\mathbf{U}_{1}$ and $\mathbf{U}_{2}$ as the sub-matrices of $\mathbf{U}_{s}$ obtained by removing its last and first row, respectively \item Least squares recovery: \begin{enumerate} \item $\mathbf{\Psi}=\mathbf{U}_{1}^{\dagger}\mathbf{U}_{2}$ \item $\hat{\boldsymbol{f}}=\angle\left(\mbox{eig}\left(\mathbf{\Psi}\right)\right)\cdot\frac{c}{2\pi d\cos\left(\theta\right)}$ \end{enumerate} \end{enumerate} \protect\caption{ESPRIT} \label{algo:esprit} \end{algorithm} If $\dim\left(\mbox{span}\left(\mathbf{w}\right)\right)<M$, then the rank of the correlation matrix $\mathbf{R}$ is less than $M$. In this case, an additional step is implemented to construct a smoothed correlation matrix of rank $M$ before applying ESPRIT. This case is referred to as the correlated case \cite{Kfir2010}. The smoothed correlation matrix is given by \begin{equation} \bar{\mathbf{R}}=\frac{1}{V}\sum_{l=1}^{V} \sum_{k \in \mathbb{Z}} \mathbf{x}_{l}\left[k\right] \mathbf{x}_{l}^{H}\left[k\right],\label{eq:smoothedR} \end{equation} where $V\triangleq N-M$ and \begin{equation} \mathbf{x}_{l}\left[k\right]\triangleq\left[\begin{matrix}x_{l}\left[k\right] & x_{l+1}\left[k\right] & \cdots & x_{l+M}\left[k\right]\end{matrix}\right]^{T},\quad1\leq l\leq V. 
\end{equation} Note that in order to be able to construct the smoothed correlation matrix, one should require $N>2M-\dim\left(\mbox{span}\left(\mathbf{w}\right)\right)$, which is exactly condition (c1) in Theorem \ref{thm: the equation uniquness}. Once the carrier frequencies $f_{i}$ are recovered, the steering matrix $\mathbf{A}$, defined in (\ref{eq:A}), can be constructed. The vector $\mathbf{W}(f)$ is then obtained by inverting the steering matrix, \begin{equation} \mathbf{W}(f)=\mathbf{A}^{\dagger}\mathbf{X}(f),\label{eq:rec_sig} \end{equation} and the source signal vector is computed using (\ref{eq:sVsw}). \subsubsection{CS Approach} Suppose that the carrier frequencies $f_{i}$ lie on a grid $\{\delta l\}_{l=-L}^{L}$, with $L=\frac{f_{\text{Nyq}}}{2\delta}$. Here, $\delta$ is a parameter of the recovery algorithm that defines the grid resolution. Equation (\ref{eq:The Equation}) then becomes \begin{equation} \mathbf{x}[k]=\mathbf{G}\mathbf{w}[k],\quad k\in\mathbb{Z},\label{eq:ongrid} \end{equation} where ${\bf G}$ is an $N\times(2L+1)$ matrix with $(n,l)$ element $G_{nl}=e^{j2\pi\tau_{n}l\delta}$. The nonzero elements of the sparse $(2L+1)\times1$ vector $\mathbf{w}[k]$ have unknown indices $l_{i}=\frac{f_{i}}{\delta}$ for $1\leq i\leq M$. The set of equations (\ref{eq:ongrid}) represents an infinite number of linear systems with joint sparsity. Such systems are known as infinite measurement vectors (IMV) in the CS literature \cite{rembo}. We use the support recovery paradigm from \cite{Mishali09}, which produces a finite system of equations, called multiple measurement vectors (MMV), from the infinite number of linear systems. This reduction is performed by what is referred to as the continuous to finite (CTF) block \cite{rembo,CSBook}. 
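The CTF reduction described next can be sketched compactly in Python. The dictionary, grid size, and the greedy MMV solver (a basic SOMP variant) below are illustrative assumptions for the sketch, not the paper's exact configuration.

```python
import numpy as np

# CTF sketch: build a frame V with R = V V^H, then recover the joint
# support of the on-grid system x[k] = G w[k] with a simple SOMP.
rng = np.random.default_rng(3)
N, L2, M, Q = 8, 21, 3, 100           # sensors, grid size 2L+1, sparsity, snapshots
support = np.array([3, 10, 17])       # true (well-separated) grid indices l_i

# DFT-like on-grid steering dictionary (an assumption for this sketch).
n, l = np.arange(N)[:, None], np.arange(L2)[None, :]
G = np.exp(2j * np.pi * n * l / L2)

W = np.zeros((L2, Q), dtype=complex)
W[support] = rng.standard_normal((M, Q)) + 1j * rng.standard_normal((M, Q))
X = G @ W                             # jointly sparse measurement vectors

R = X @ X.conj().T                    # R = sum_k x[k] x[k]^H
eigval, eigvec = np.linalg.eigh(R)
keep = eigval > 1e-8 * eigval.max()
V = eigvec[:, keep] * np.sqrt(eigval[keep])   # frame with R = V V^H

# SOMP: greedily pick the atom best correlated with the residual,
# then re-project V onto the selected atoms.
S, resid = [], V.copy()
for _ in range(M):
    S.append(int(np.argmax(np.linalg.norm(G.conj().T @ resid, axis=1))))
    Gs = G[:, S]
    resid = V - Gs @ np.linalg.lstsq(Gs, V, rcond=None)[0]
print(sorted(S))
```

The recovered support of the finite system coincides with the joint support of the infinite set of equations, which is the point of the CTF step.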
From (\ref{eq:ongrid}), we have
\begin{equation}
\mathbf{R=GR}_w^g \mathbf{G}^{H}
\end{equation}
where $\mathbf{R}=\sum_{k \in \mathbb{Z}}\mathbf{x}[k]\mathbf{x}^{H}[k]=\int_{f\in\mathcal{F}_{s}}\mathbf{X}(f)\mathbf{X}^{H}(f)\mathrm{d}f$ is an $N\times N$ matrix and $\mathbf{R}_w^g=\sum_{k \in \mathbb{Z}}\mathbf{w}[k]\mathbf{w}^{H}[k]$ is a $(2L+1)\times(2L+1)$ matrix. We then construct a frame ${\bf V}$ such that $\mathbf{R=VV}^{H}$. Clearly, there are many possible ways to select ${\bf V}$. We construct it by performing an eigendecomposition of ${\bf R}$ and choosing ${\bf V}$ as the matrix of eigenvectors corresponding to the nonzero eigenvalues. We can then define the following linear system
\begin{equation}
{\bf V=GU.}\label{eq:CTF}
\end{equation}
From \cite{Mishali09} (Propositions 2-3), the support of the unique sparsest solution of (\ref{eq:CTF}) is the same as the support of the original set of equations (\ref{eq:ongrid}). Equation (\ref{eq:CTF}) can be solved using any MMV CS algorithm, such as simultaneous orthogonal matching pursuit (SOMP) \cite{CSBook}. Once the support $S$ of ${\bf U}$, namely the support of $\mathbf{W}(f)$, is recovered, the carrier frequencies $f_{i}$ are computed using $f_{i}=l_{i}\delta$, with $l_{i}\in S$, and the steering matrix $\mathbf{A}$, defined in (\ref{eq:A}), is constructed. The vectors ${\mathbf{w}[k]}$ and $\mathbf{s}(t)$ are then obtained using (\ref{eq:rec_sig}) and (\ref{eq:sVsw}), respectively. Theorem \ref{thm:cs} shows that the conditions for perfect recovery of $\mathbf{w}[k]$ from (\ref{eq:ongrid}) are identical to those derived in the previous section.
\begin{thm}
\label{thm:cs} Let $u(t)$ be an arbitrary signal within $\mathcal{M}_{1}$ and consider a ULA with spacing $d<\frac{c}{|\cos(\theta)|f_{\text{Nyq}}}$.
The minimal number of sensors required for perfect recovery of $\mathbf{w}[k]$ in (\ref{eq:ongrid}) in a noiseless environment is $N> 2M-\text{dim}(\text{span}(\mathbf{w}))$.
\end{thm}
Theorem \ref{thm:cs} follows directly from the fact that if $d<\frac{c}{|\cos(\theta)| f_{\text{Nyq}}}$, then ${\bf G}$ is a Vandermonde matrix, and therefore has full spark, namely $\text{spark}(\mathbf{G})=N+1$. Then, we use the MMV recovery condition from \cite{rankCS}, given by
\begin{equation}
M < \frac{\text{spark}(\mathbf{G}) -1 + \text{rank}(\mathbf{V})}{2},
\end{equation}
where $1 \leq \text{rank}(\mathbf{V}) \leq M$. Finally, it holds that
\begin{equation}
\text{rank}(\mathbf{V}) = \text{dim}(\text{span}(\mathbf{x})) = \text{dim}(\text{span}(\mathbf{w})),
\end{equation}
where the last equality follows from the fact that $\bf G$ is full spark. In the worst case, it holds that $\text{rank}(\mathbf{V}) = \text{dim}(\text{span}(\mathbf{w})) = 1$ and the MMV processing does not improve the recovery ability over the single measurement vector (SMV) case. The required number of sensors is then $2M$, leading to a minimal sampling rate of $2MB$. With high probability, $\text{rank}(\mathbf{V})=\text{dim}(\text{span}(\mathbf{w}))=M$ and the number of sensors required is thus reduced to $M+1$.
\subsection{Comparison with the MWC}
Both our ULA based system and the MWC \cite{Mishali10} allow for reconstruction of multiband signals from samples obtained below the Nyquist rate. The ULA approach adopts the same sampling principle as the MWC but differs in some essential ways. First, the MWC uses one sensor composed of $N$ analog processing channels, whereas the ULA scheme uses $N$ sensors, each composed of one channel. While both systems use the same number of mixers, LPFs and samplers, namely $N$ of each, this difference in configuration leads to essential distinctions between the systems. Since all the MWC channels belong to the same sensor, they are all affected by the same additive sensor noise, i.e.
$\tilde{u}(t)=u(t)+\eta(t)$ in all channels. In the ULA, each channel belongs to a different sensor and, as a consequence, is corrupted by a different additive sensor noise, namely $\tilde{u}_{n}(t)=u_{n}(t)+\eta_{n}(t)$, where $\eta_{n}(t)$ can be assumed to be uncorrelated between channels. This is an advantage of the ULA based method since the noise is averaged. Moreover, a known difficulty of the MWC is choosing appropriate mixing functions $\left\{ p_{n}(t)\right\} $ so that the original signal can be reconstructed. The ULA scheme allows all sensors to use the same function $p\left(t\right)$, and this function does not have any limitation other than $f_{p}>B$ and $c_{l}\neq0$ for all $l\in\left\{ -L_{0},...,L_{0}\right\} $. Finally, the ULA configuration can be extended to allow for joint carrier and DOA recovery, as shown in Section \ref{sec:joint}. Table \ref{table:comp} summarizes the main properties of each system.
\begin{table}
\centering
\begin{tabular}{|c|c|c|}
\hline 
 & {\tiny ULA based system} & {\tiny MWC}\tabularnewline
\hline 
\hline 
{\tiny Periodic functions} & {\tiny one function for all sensors} & {\tiny one function per channel}\tabularnewline
\hline 
{\tiny Number of samplers} & {\tiny $N$ - number of sensors} & {\tiny $N$ - number of channels}\tabularnewline
\hline 
{\tiny Minimal sampling rate (average)} & {\tiny $\left(M+1\right)B$} & {\tiny $(M+1)B$}\tabularnewline
\hline 
{\tiny Minimal sampling rate (worst)} & {\tiny $2MB$} & {\tiny $2MB$}\tabularnewline
\hline 
{\tiny Practical sampling rate} & {\tiny $Nf_{s}$} & {\tiny $Nf_{s}$}\tabularnewline
\hline 
\end{tabular}
\protect\caption{Main properties of the ULA based and MWC systems.}
\label{table:comp}
\end{table}
\section{Numerical Experiments}
\label{sec:exp} \label{sec:sim} We now numerically investigate different aspects of our system shown in Fig.~{\ref{ULA fig}} and show that it outperforms the MWC system
\cite{Mishali10} of Fig.~\ref{fig:mwc} at low SNRs in terms of recovery error. We first explore the impact of SNR, sampling rate, sensor spacing $d$, number of sensors/channels $N$ and number of snapshots $Q$ on the signal reconstruction performance. We then consider carrier frequency recovery only and demonstrate that the sampling rate can be made lower than the Landau rate \cite{Landau67} in this case. For the ULA based system, we show both the MMV CS and ESPRIT approaches described in Section \ref{sec:rec1}. For the MMV method, we use SOMP \cite{CSBook} for recovering the support $S$.
\subsection{Simulation Setup}
The setup described hereafter is used as a basis for all simulations. Consider signals of the model $\mathcal{M}_{1}$ with $M=3$, $f_{\text{Nyq}}=10$~GHz, $\theta=0^{\circ}$ and $B=50$~MHz. The carrier frequencies $f_{i}$ are drawn uniformly at random from $[-\frac{f_{\text{Nyq}}-B}{2},\frac{f_{\text{Nyq}}-B}{2}]$. In our ULA based system, the received signal at each sensor is given by (\ref{eq:u_n(t)}). In each sensor, the received signal is corrupted by uncorrelated additive white Gaussian noise (AWGN) $\eta_{n}(t)$, such that the signal at the $n$th sensor is given by $\tilde{u}_{n}(t)=u_{n}(t)+\eta_{n}(t)$. For the MWC system, the received signal is the sum of the transmissions with AWGN, namely $u(t)=\sum_{i=1}^{M}s_{i}(t)e^{j2\pi f_{i}t}+ \eta(t)$. Here, all channels are corrupted by the same noise $\eta(t)$ since they all belong to a single sensor. The noises $\eta(t)$ and $\eta_n(t), 0 \leq n \leq N-1$ are assumed to have the same variance. In all the simulations, we use $f_{s}=f_{p}=1.3B$ (if not mentioned otherwise). For the ULA based system, we use a periodic function $p\left(t\right)$ such that $P\left(f\right)=\sum_{l=-\infty}^{\infty}\delta\left(f-lf_{p}\right)$. In the MWC, $p_{i}(t)$ are chosen as piecewise constant functions alternating between the levels $\pm1$ with sequences generated uniformly at random.
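Since the recovery algorithms operate on the effective low-rate model $\mathbf{x}[k]=\mathbf{A}\mathbf{w}[k]$ plus noise, the setup above can be emulated directly at that level. The following sketch (our own helper names; the analog mixing and filtering chain is abstracted away, and $\theta=0^{\circ}$ as in the simulations) generates noisy snapshots and runs an ESPRIT carrier recovery:

```python
import numpy as np

def ula_snapshots(freqs, N, Q, d, snr_db, c=3e8, seed=0):
    """Noisy low-rate ULA snapshots x[k] = A w[k] + noise, broadside sources."""
    rng = np.random.default_rng(seed)
    n = np.arange(N).reshape(-1, 1)
    A = np.exp(2j * np.pi * n * d * np.asarray(freqs) / c)   # steering matrix, theta = 0
    W = (rng.standard_normal((len(freqs), Q))
         + 1j * rng.standard_normal((len(freqs), Q))) / np.sqrt(2)
    X = A @ W
    sigma = np.sqrt(np.mean(np.abs(X) ** 2) / 10 ** (snr_db / 10))
    E = sigma * (rng.standard_normal(X.shape)
                 + 1j * rng.standard_normal(X.shape)) / np.sqrt(2)
    return X + E

def esprit_carriers(X, M, d, c=3e8):
    """Carrier recovery from the sample covariance via ESPRIT (theta = 0)."""
    N, Q = X.shape
    R = X @ X.conj().T / Q                     # sample covariance
    U, _, _ = np.linalg.svd(R)
    Us = U[:, :M]                              # signal subspace
    Psi = np.linalg.pinv(Us[:-1]) @ Us[1:]     # rotation between the two sub-arrays
    return np.sort(np.angle(np.linalg.eigvals(Psi)) * c / (2 * np.pi * d))
```

At high SNR and with enough snapshots, the recovered frequencies match the true carriers up to a small estimation error.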
The system performance is measured by computing the MSE between the original and reconstructed signals, i.e. $\text{MSE}=||u-\hat{u}||^{2}$ normalized to the length of $u$. For the simulations, we estimate $\mathbf{R}$ as in (\ref{eq:Rest}). To set similar conditions for both systems (MWC and ULA based), we use the same parameters, i.e.\ the number of source signals $M$, the number of snapshots $Q$, the SNR, $f_{p}=f_{s}$, and $N$, which denotes the number of sensors in the ULA and the number of channels in the MWC. The same signal is fed to both systems at each realization of the simulations. The results are averaged over $2000$ realizations, where in each realization, the signals, carriers and noises are generated at random.
\subsection{Signal Reconstruction}
Figure \ref{d simulation} presents the performance of the ULA based system as a function of $d$. As shown in Theorem \ref{thm: the equation uniquness}, we require $d\leq\frac{c}{|\cos\left(\theta\right)|\cdot f_{\text{Nyq}}}$, which in our setting translates to $d\leq\frac{3\cdot10^{8}}{10^{10}}=0.03$~m. This property of the system geometry is clearly demonstrated in Fig. \ref{d simulation}, where we observe a monotonic degradation in performance starting from $d=0.03$, for both reconstruction methods, MMV and ESPRIT. The decrease in performance below $d=0.03$ stems from the fact that the closer the sensors, the more correlated their samples. In the following simulations, $d$ is set to $d=0.03$.
\begin{figure}
\includegraphics[width = 0.5\textwidth]{d.png}
\protect\caption{Influence of the distance $d$ between adjacent sensors with $M=3$, $N=10$, $Q=400$, and $\text{SNR}=10$dB.\label{d simulation}}
\end{figure}
We next examine the effect of $f_{p}$. From Theorem \ref{prop:signals-recovery:-given}, $f_{p}$ must be greater than the transmission bandwidth $B$.
When $f_{s}=f_{p}<B$, mixing the signal $u\left(t\right)$ with $p\left(t\right)$ results in aliasing of $u\left(t\right)$, as adjacent shifted copies of the source signal overlap. Each spectral bin overlaps with two others over a bandwidth $b=B-f_{p}$ each. It follows that we reconstruct the aliased version of each signal, that is, only $B-2b$ of each source signal's support is perfectly recovered, while the remaining $2b$ are corrupted. Therefore, the reconstruction performance depends on $\frac{f_{p}}{B}$. In particular, if $\frac{f_{p}}{B}\leq\frac{1}{2}$, no reconstruction at all is possible. This phenomenon is demonstrated in Fig. \ref{Fp simulation}.
\begin{figure}
\includegraphics[width = 0.5\textwidth]{Fp_to_B.png}
\protect\caption{Influence of the ratio $\nicefrac{f_{p}}{B}$, with $M=3$, $N=10$, $Q=400$, and $\text{SNR}=10$dB. \label{Fp simulation}}
\end{figure}
The third experiment examines the influence of the number of sensors $N$. A large number of sensors increases the system's robustness to noise and allows it to handle a greater number of source signals. This parameter is equivalent to the number of channels in the MWC system. From Fig. \ref{N simulation}, it can be seen that the reconstruction error decreases with more sensors. In this setting, the minimal number of sensors is $N=2M=6$.
\begin{figure}
\includegraphics[width = 0.5\textwidth]{Sensors.png}
\protect\caption{Influence of the number of sensors $N$, with $M=3$, $Q=400$, $\text{SNR}=10$dB. \label{N simulation}}
\end{figure}
The influence of the number of snapshots $Q$ is investigated in the next experiment. As shown in Fig. \ref{Q simulation}, the performance of ESPRIT improves with the number of snapshots. A small number of snapshots can yield $\dim\left(\mbox{span}\left(\mathbf{w}\right)\right)<M$. In this case, referred to as the correlated case, we need to construct a smoothed correlation matrix (\ref{eq:smoothedR}) on which we can apply ESPRIT, as discussed in Section \ref{sec:rec1}.
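The smoothing step (\ref{eq:smoothedR}) is straightforward to implement on a finite batch of snapshots. The sketch below (our own helper name) averages the covariances of the $V=N-M$ overlapping sub-arrays of length $M+1$, restoring rank $M$ when $\dim\left(\mbox{span}\left(\mathbf{w}\right)\right)<M$:

```python
import numpy as np

def smoothed_covariance(X, M):
    """Smoothed correlation matrix for the correlated case.

    X is the (N, Q) snapshot matrix.  The covariances of the V = N - M
    overlapping sub-arrays of length M + 1 are averaged, which restores
    rank M when the sources are fully correlated.
    """
    N, _ = X.shape
    V = N - M
    Rbar = np.zeros((M + 1, M + 1), dtype=complex)
    for l in range(V):
        Xl = X[l:l + M + 1]              # sub-array [x_l, ..., x_{l+M}]
        Rbar += Xl @ Xl.conj().T
    return Rbar / V
```

With fully correlated sources the plain covariance is rank one, while the smoothed matrix recovers rank $M$ and can be fed to ESPRIT.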
This setting can be useful when only carrier frequency recovery is needed, as it enables good frequency recovery with few samples. In this experiment, we set $M=8$ and use a low number of snapshots. In Fig. \ref{few Q simulation}, we observe that for $Q\leq M=8$ the smoothing algorithm yields better performance than the traditional ESPRIT.
\begin{figure}
\includegraphics[width = 0.5\textwidth]{Q.png}
\protect\caption{Signal reconstruction performance vs. $Q$, with $M=3$, $\text{SNR}=10$dB, $N=8$. \label{Q simulation}}
\end{figure}
\begin{figure}
\includegraphics[width = 0.5\textwidth]{Corelated.png}
\protect\caption{Correlated vs. uncorrelated case with $M=8$, $N=25$, $\text{SNR}=20$dB. \label{few Q simulation}}
\end{figure}
The next simulation tests the reconstruction performance under different SNR conditions. When dealing with low SNR scenarios, grid search algorithms (such as MMV) are known to outperform analytic algorithms such as ESPRIT, as illustrated in Fig. \ref{SNR simulation}.
\begin{figure}
\includegraphics[width = 0.5\textwidth]{SNR.png}
\protect\caption{Influence of SNR on complex-valued signal reconstruction performance, with $M=3$, $N=10$, $Q=400$. \label{SNR simulation}}
\end{figure}
The simulations demonstrate that our system outperforms the MWC, in particular in low SNR regimes. Besides, in such settings, the reconstruction error of the CS approach is typically lower than that of ESPRIT, which is an analytic method. In the presence of enough samples, obtained by increasing the sampling rate $f_s$, the number of sensors $N$ or snapshots $Q$, ESPRIT achieves better results.
\subsection{Carrier Frequency Recovery}
We now consider the case where only the carrier frequencies are recovered, which can be relevant for various applications such as CR. Here, we use the following performance measure, $\frac{1}{Mf_{\text{Nyq}}}\sum_{i=1}^{M}\left|f_{i}-\hat{f}_{i}\right|$. We demonstrate that, for this purpose, a lower sampling rate can be used.
We sample the data at the cut-off frequency of the LPF $f_{s}<f_{p}$, which causes loss of information. Since lower sampling rate yields fewer samples for a given sensing time, we use the correlated case or smoothing approach. In the first experiment, we examine different sampling ratios $\frac{f_{s}}{f_{p}}$. Figure \ref{fs simulation} demonstrates that even for very low ratios, carrier frequency recovery yields low error, which decreases as the ratio grows, as expected. The second simulation, presented in Fig. \ref{fs2 simulation}, shows the impact of SNR for a fixed sampling ratio $\frac{f_{s}}{f_{p}}=0.2$. \begin{figure} \includegraphics[width = 0.5\textwidth]{Fs_simulation.png} \protect\caption{Influence of the ratio $\nicefrac{f_{s}}{f_{p}}$ on carrier frequency reconstruction performance, with $M=3$, $N=8$, $Q=400$, $\text{SNR}=10$dB. \label{fs simulation}} \end{figure} \begin{figure} \includegraphics[width = 0.5\textwidth]{Fs2_simulation.png} \protect\caption{Influence of SNR on carrier frequency reconstruction performance, $M=3$, $N=8$, $Q=400$, $\nicefrac{f_{s}}{f_{p}}=0.2$. \label{fs2 simulation}} \end{figure} \section{Joint Spectrum Sensing and DOA Recovery} \label{sec:joint} We now show how our ULA based system can be expanded to allow for joint recovery of the carrier frequencies and the AOAs. This is the main advantage of our system with respect to the MWC. We present the compressed carrier and DOA estimation (CaSCADE) system, consisting of an L-shaped array composed of two orthogonal ULAs with an identical sampling scheme. Specifically, we consider the problem where the source signals $s_{i}(t), 1 \leq i \leq M$ have both unknown and different carrier frequencies $f_i$ and AOAs $\theta_i$. The main difference between this scenario and the one that has been discussed in the previous sections is the additional unknown AOA vector $\boldsymbol{\theta}=\left[\theta_{1}, \theta_{2},\cdots, \theta_{M}\right]^{T}$. 
This problem can be treated as a 2D-DOA recovery problem, where two angles are traditionally recovered. In our case, the second variable is the signal's carrier frequency instead of an additional angle. The 2D-DOA problem requires both finding the two unknown angles and pairing them. Previous work \cite{Jgu07, Jgu15} suggests a modification to the ESPRIT algorithm that achieves automatic pairing between the two estimated factors, by a simultaneous singular value decomposition (SVD) of two cross-correlation matrices. We further develop this approach, derived in the Nyquist regime, to perform recovery from sub-Nyquist samples.
\subsection{Signal Model}
In this scenario, for the sake of simplicity, we consider a statistical model. Let $u\left(t\right)$ and $s_{i}\left(t\right)$ be defined as in the previous section, with Fourier transforms $U\left(f\right)$ and $S_{i}\left(f\right)$, respectively. The signals $s_{i}\left(t\right)$ are considered to be within the $xz$ plane and associated with an AOA $\theta_{i}$, where $\theta_{i}$ is measured from the positive side of the $x$ axis. All signals are assumed to be far-field, noncoherent, wide-sense stationary with zero mean and uncorrelated, i.e. $\forall t,\, \mathbb{E}\left[s_{i}\left(t\right)\bar{s}_{j}\left(t\right)\right]=0$ for $i\neq j$, with $\sigma^2_i = \mathbb{E}\left[\left|s_{i}(t)\right|^{2}\right] \neq 0$. Fig. \ref{fig:XZ Plane} illustrates our signal model. To ensure an array structure free of ambiguity, we assume that the electronic angles, namely $f_i \cos (\theta_i)$ and $f_i \sin (\theta_i)$, are distinct \cite{multiESPRIT, kikuchi}, that is,
\begin{eqnarray}
f_i \cos (\theta_i) \neq f_j \cos (\theta_j), \nonumber \\
f_i \sin (\theta_i) \neq f_j \sin(\theta_j), \label{eq:distinct_angles}
\end{eqnarray}
for $i \neq j$.
\begin{figure}
\begin{centering}
\fbox{
\begin{minipage}[t]{1\columnwidth}
\begin{center}
\includegraphics[width = 0.75\textwidth]{xz_plane.png}
\par\end{center}
\end{minipage}}
\par\end{centering}
\protect\caption{Example of $M=3$ source signals in the $xz$ plane. Each transmission is associated with a carrier frequency $f_{i}$ and AOA $\theta_{i}$.\label{fig:XZ Plane}}
\end{figure}
\begin{definition}
The set $\mathcal{M}_{2}$ contains all signals $u(t)$, such that the support of the Fourier transform $U(f)$ is contained within a union of $M$ disjoint intervals in $\mathcal{F}$. Each of the bandwidths does not exceed $B$ and the transmissions composing $u(t)$ are wide-sense stationary, zero mean and uncorrelated and have unknown and distinct AOAs $|\theta_{i}|<90^{\circ}$, such that (\ref{eq:distinct_angles}) holds.
\end{definition}
In this section, we wish to design a sampling and reconstruction system which allows for perfect blind signal reconstruction, i.e. recovery of $\boldsymbol{\theta}, \boldsymbol{f}, \mathbf{s}(t)$, where $\boldsymbol{\theta}$ denotes the AOA vector defined above and $\boldsymbol{f}$, $\mathbf{s}(t)$ are defined in Section \ref{sec:model}, without any prior knowledge of the carrier frequencies or the AOAs.
\subsection{CaSCADE System Description \label{sub:ULA description}}
Each transmission $s_{i}\left(t\right)$ impinges on an L-shaped array with $2N-1$ sensors ($N$ sensors along the $x$ axis and $N$ sensors along the $z$ axis including a common sensor at the origin) in the $xz$ plane with its corresponding AOA $\theta_{i}$, as shown in Fig.~\ref{fig:L-Shape}. All the sensors have the same sampling pattern as described in Section \ref{sec:Sampling-Scheme}. In the following sections, we demonstrate that in this case the minimal number of sensors required is $2M$. This leads to a minimal sampling rate of $2MB$ which is assumed to be less than $f_{\text{Nyq}}$.
\begin{figure}
\begin{centering}
\fbox{
\begin{minipage}[t]{1\columnwidth}
\begin{center}
\includegraphics[width = 0.6\textwidth]{L_shape.png}
\par\end{center}
\end{minipage}}
\par\end{centering}
\protect\caption{CaSCADE system: L-shaped array with $N$ sensors along the $x$ axis and $N$ sensors along the $z$ axis including a common sensor at the origin. \label{fig:L-Shape}}
\end{figure}
By treating the L-shaped array as two orthogonal ULAs, one along the $x$ axis and the other along the $z$ axis, we form two systems of equations, following the derivations of Section \ref{sec:der1}. For the ULA along the $x$ axis, we obtain
\begin{equation}
\label{eq:Xeq}
\mathbf{X}(f)=\mathbf{A}_{x} \mathbf{W}(f), \quad f \in \mathcal{F}_s,
\end{equation}
where
\begin{equation}
\mathbf{A}_{x} = \left[\begin{matrix}e^{j2\pi f_{1}\tau_{1}^{x}(\theta_{1})} & \cdots & e^{j2\pi f_{M}\tau_{1}^{x}(\theta_{M})}\\
\vdots &  & \vdots\\
e^{j2\pi f_{1}\tau_{N}^{x}(\theta_{1})} & \cdots & e^{j2\pi f_{M}\tau_{N}^{x}(\theta_{M})}
\end{matrix}\right].
\end{equation}
Similarly, along the $z$ axis, we get
\begin{equation}
\label{eq:Zeq}
\mathbf{Z}(f)=\mathbf{A}_{z} \mathbf{W}(f), \quad f \in \mathcal{F}_s,
\end{equation}
where $\mathbf{A}_{z}$ is defined accordingly. Here, $\tau_{n}^{x}\left(\theta\right)=\frac{dn}{c}\cos\left(\theta\right)$, $\tau_{n}^{z}\left(\theta\right)=\frac{dn}{c}\sin\left(\theta\right)$ and the matrices $\mathbf{A}_x$ and $\mathbf{A}_z$ thus depend on both the unknown carrier frequencies $\boldsymbol{f}$ and AOAs $\boldsymbol{\theta}$, namely $\mathbf{A}_x= \mathbf{A}_x\left(\boldsymbol{f},\boldsymbol{\theta}\right)$ and $\mathbf{A}_z= \mathbf{A}_z\left(\boldsymbol{f},\boldsymbol{\theta}\right)$.
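For concreteness, the two steering matrices can be formed directly from the delay definitions above (a sketch; the function name is ours):

```python
import numpy as np

def steering_matrices(freqs, thetas, N, d, c=3e8):
    """Steering matrices A_x and A_z of the two orthogonal ULAs.

    Column i holds the phase factors exp(j 2 pi f_i tau_n(theta_i)) with
    tau_n^x = d n cos(theta) / c and tau_n^z = d n sin(theta) / c.
    """
    n = np.arange(1, N + 1).reshape(-1, 1)       # sensor index n = 1..N
    f = np.asarray(freqs)
    tau_x = d * n * np.cos(thetas) / c           # tau_n^x(theta_i)
    tau_z = d * n * np.sin(thetas) / c           # tau_n^z(theta_i)
    return np.exp(2j * np.pi * f * tau_x), np.exp(2j * np.pi * f * tau_z)
```

Each column is a geometric (Vandermonde) progression in the sensor index, which is what the sub-array shift invariance below relies on.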
In the time domain, \begin{eqnarray} \mathbf{x}[k] & = & \mathbf{A}_{x}\mathbf{w}[k], \quad k \in \mathbb{Z} \label{eq:xVsw}\\ \mathbf{z}[k] & = & \mathbf{A}_{z}\mathbf{w}[k], \quad k \in \mathbb{Z}, \label{eq:zVsw} \end{eqnarray} where \textbf{$\mathbf{x}[k]$} and $\mathbf{z}[k]$ are the samples for the $x$ and $z$ axis, respectively, and $\mathbf{w}[k]$ is a vector of length $M$ with $i$th element $w_{i}[k]$. In the following sections, we discuss two possible methods to recover $\boldsymbol{f}$ and $\boldsymbol{\theta}$, present sufficient conditions to recover the transmissions $\mathbf{s}\left(t\right)$ from $\mathbf{w}[k]$, and provide concrete reconstruction algorithms. \subsection{Joint ESPRIT Recovery \label{sub:Joint-SVD-ESPRIT}} We now extend the ESPRIT approach to a 2D setting, in order to jointly recover two parameters, $f_i$ and $\theta_i$, for each transmission. Once these are estimated, the transmissions $s_i(t)$ can be recovered from (\ref{eq:rec_sig}) and (\ref{eq:sVsw}) with the observation matrix $\mathbf{A}= \left[ \begin{array}{l} \mathbf{A}_x \\ \mathbf{A}_z \end{array} \right]$ and the concatenated vector of measurements $\left[ \begin{array}{l} \mathbf{X}(f) \\ \mathbf{Z}(f) \end{array} \right]$. Consider two sub-arrays of size $N-1$ along each of the $x$ and $z$ axis. The first sub-array along the $x$ axis consists of sensors $\left\{ 1,...,N-1\right\} $. The second sub-array is composed of the last $N-1$ sensors along the same axis, i.e. sensors $\left\{ 2,...,N\right\} $. The sub-arrays along the $z$ axis are similarly defined. 
Dropping the time variable $k$ for clarity, we can then write:
\begin{eqnarray}
\mathbf{x}_{1} = \mathbf{A}_{x_{1}}\mathbf{w}, & \quad & \mathbf{x}_{2} = \mathbf{A}_{x_{2}}\mathbf{w} \nonumber \\
\mathbf{z}_{1} = \mathbf{A}_{z_{1}}\mathbf{w}, & \quad & \mathbf{z}_{2} = \mathbf{A}_{z_{2}}\mathbf{w},
\end{eqnarray}
where $\mathbf{x}_{1}$ and $\mathbf{A}_{x_{1}}$ are the first $N-1$ rows of $\mathbf{x}$ and $\mathbf{A}_{x}$, respectively, and $\mathbf{x}_{2}$ and $\mathbf{A}_{x_{2}}$ are the last $N-1$ rows of $\mathbf{x}$ and $\mathbf{A}_{x}$, respectively. The vectors $\mathbf{z}_1$, $\mathbf{z}_2$ and matrices $\mathbf{A}_{z_1}$, $\mathbf{A}_{z_2}$ are similarly defined. Each pair of sub-array matrices along the same axis is related as follows:
\begin{eqnarray}
\mathbf{A}_{x_{2}} & = & \mathbf{A}_{x_{1}}\mathbf{\Phi} \nonumber \\
\mathbf{A}_{z_{2}} & = & \mathbf{A}_{z_{1}}\mathbf{\Psi},
\end{eqnarray}
where
\begin{eqnarray}
\label{eq:phipsi}
\mathbf{\Phi} & \triangleq & \mbox{diag}\left[\begin{matrix}e^{j2\pi f_{1}\tau_{1}^{x}(\theta_{1})} & \cdots & e^{j2\pi f_{M}\tau_{1}^{x}(\theta_{M})}\end{matrix}\right] \nonumber \\
\mathbf{\Psi} & \triangleq & \mbox{diag}\left[\begin{matrix}e^{j2\pi f_{1}\tau_{1}^{z}(\theta_{1})} & \cdots & e^{j2\pi f_{M}\tau_{1}^{z}(\theta_{M})}\end{matrix}\right].
\end{eqnarray}
We can see from (\ref{eq:phipsi}) that the carrier frequencies $f_i$ and AOAs $\theta_i$ are embedded in the diagonal matrices $\bf \Phi$ and $\bf \Psi$. Our goal is thus to jointly recover these matrices in order to be able to pair the corresponding elements $f_{i}\tau_{1}^{x}(\theta_{i})$ and $f_{i}\tau_{1}^{z}(\theta_{i})$. We then show how $f_i$ and $\theta_i$ can be estimated from $\mathbf{\Phi}$ and $\mathbf{\Psi}$ for all $1 \leq i \leq M$. To this end, we apply the ESPRIT framework to cross-correlation matrices between the sub-arrays of both axes.
Consider the following correlation matrices:
\begin{eqnarray}
\label{eq:cross}
\mathbf{R}_{1} & \triangleq & \mathbb{E}\left[\mathbf{x}_{1}\mathbf{z}_{1}^{H}\right]=\mathbf{A}_{x_{1}}\mathbf{R}_{w}\mathbf{A}_{z_{1}}^{H}, \nonumber \\
\mathbf{R}_{2} & \triangleq & \mathbb{E}\left[\mathbf{x}_{2}\mathbf{z}_{1}^{H}\right]=\mathbf{A}_{x_{2}}\mathbf{R}_{w}\mathbf{A}_{z_{1}}^{H}=\mathbf{A}_{x_{1}}\mathbf{\Phi}\mathbf{R}_{w}\mathbf{A}_{z_{1}}^{H},\nonumber \\
\mathbf{R}_{3} & \triangleq & \mathbb{E}\left[\mathbf{x}_{1}\mathbf{z}_{2}^{H}\right]=\mathbf{A}_{x_{1}}\mathbf{R}_{w}\mathbf{A}_{z_{2}}^{H}=\mathbf{A}_{x_{1}}\mathbf{R}_{w}\mathbf{\Psi}^{H}\mathbf{A}_{z_{1}}^{H}.
\end{eqnarray}
Since the transmissions $s_i(t)$ are assumed to be uncorrelated, $\mathbf{R}_{w}$ is diagonal. In addition, since $\sigma_i^2 \neq 0$, $(\mathbf{R}_{w})_{ii} \triangleq \mathbb{E} \left[\left|w_i[k]\right|^2 \right] \neq 0$ and $\mathbf{R}_{w}$ is invertible. Using the fact that $\mathbf{\mathbf{\Psi}}^{H}$ is diagonal as well, we can write
\begin{eqnarray}
\mathbf{R}_{3} & = & \mathbf{A}_{x_{1}}\mathbf{R}_{w}\mathbf{\Psi}^{H}\mathbf{A}_{z_{1}}^{H}=\mathbf{A}_{x_{1}}\mathbf{\Psi}^{H}\mathbf{R}_{w}\mathbf{A}_{z_{1}}^{H}.
\end{eqnarray}
Define the concatenated covariance matrix
\begin{equation}
\mathbf{R}=\left[\begin{matrix}\mathbf{R}_{1}\\
\mathbf{R}_{2}\\
\mathbf{R}_{3}
\end{matrix}\right]=\left[\begin{matrix}\mathbf{A}_{x_{1}}\\
\mathbf{A}_{x_{1}}\mathbf{\Phi}\\
\mathbf{A}_{x_{1}}\mathbf{\Psi}^{H}
\end{matrix}\right]\mathbf{R}_{w}\mathbf{A}_{z_{1}}^{H}.\label{eq:R Matrix}
\end{equation}
The SVD of $\mathbf{R}$ yields
\begin{equation}
\mathbf{R}=\left[\mathbf{U}_{1}\mathbf{U}_{2}\right]\left[\begin{matrix}\boldsymbol{\Lambda} & \boldsymbol{0}\\
\boldsymbol{0} & \boldsymbol{0}
\end{matrix}\right]\mathbf{V}^{H}, \label{eq:R-SVD}
\end{equation}
where $\bf R$ is full column rank, as we show in Lemma \ref{lem:trilinear kruskal rank}.
Then, the columns of the matrix $[\mathbf{U}_{1}\mathbf{U}_{2}]$ are the left singular vectors of $\mathbf{R}$, where $\mathbf{U}_{1}$ contains the vectors corresponding to the first $M$ singular values, $\mathbf{\Lambda}$ is an $M\times M$ diagonal matrix containing the $M$ nonzero singular values of $\mathbf{R}$, and $\mathbf{V}$ contains the right singular vectors of $\mathbf{R}$. We now derive sufficient conditions for perfect recovery of $\bf \Phi$ and $\bf \Psi$, up to some joint permutation, from $\mathbf{U}_1$. We then show how $\bf \Phi$ and $\bf \Psi$, and as a consequence $\boldsymbol{f}$ and $\boldsymbol{\theta}$, can be recovered from $\mathbf{U}_1$. First, Lemma \ref{lem:trilinear kruskal rank} provides sufficient conditions so that there exists an invertible $M \times M$ matrix $\bf T$ such that
\begin{equation}
\mathbf{U}_{1}=\left[\begin{matrix}\mathbf{U}_{11}\\
\mathbf{U}_{12}\\
\mathbf{U}_{13}
\end{matrix}\right]=\left[\begin{matrix}\mathbf{A}_{x_{1}}\\
\mathbf{A}_{x_{1}}\mathbf{\Phi}\\
\mathbf{A}_{x_{1}}\mathbf{\Psi}^{H}
\end{matrix}\right]\mathbf{T}, \label{eq:Signal Sub Space}
\end{equation}
where $\mathbf{U}_{1i}$ are $(N-1)\times M$ matrices.
\begin{lem}
\label{lem:trilinear kruskal rank} Let $u\left(t\right)$ be an arbitrary signal within $\mathcal{M}_{2}$ and consider an L-shaped ULA with $N$ sensors and distance $d$ between two adjacent sensors. If:
\begin{itemize}
\item (c1) $d \leq \frac{c}{f_{\text{Nyq}}}$
\item (c2) $N>M$
\end{itemize}
then (\ref{eq:Signal Sub Space}) holds.\end{lem}
\begin{IEEEproof}
We begin by showing that under conditions (c1)-(c2), $\bf R$ is full rank. From (\ref{eq:distinct_angles}), both the $(N-1) \times M$ matrices $\mathbf{A}_{x_1}$ and $\mathbf{A}_{z_1}$ are Vandermonde with distinct columns, and are thus full column rank. The $M \times M$ matrix $\mathbf{R}_{w}$ is diagonal and invertible. It follows that $\mathbf{R}_{1}$ and $\mathbf{R}$ are full column rank.
The SVD of $\mathbf{R}$ yields (\ref{eq:R-SVD}). In particular, it holds that $\mathbf{R}^{H}\mathbf{U}_{2}=\boldsymbol{0}$, where $\bf R$ is of size $3(N-1) \times (N-1)$ with $\text{rank}(\mathbf{R})=M$, and the columns of the $3(N-1) \times (3(N-1)-M)$ matrix $\mathbf{U}_2$ span the null space of $\mathbf{R}^{H}$. That is
\[
\mathbf{A}_{z_{1}}\mathbf{R}_{w} \mathbf{B} \mathbf{U}_{2}=\boldsymbol{0},
\]
where
\[
\mathbf{B}=\left[\begin{matrix}\mathbf{A}^H_{x_{1}} & \mathbf{\Phi}^{H}\mathbf{A}_{x_{1}}^{H} & \mathbf{\Psi}\mathbf{A}_{x_{1}}^{H}
\end{matrix}\right].
\]
Since $\mathbf{A}_{z_{1}} \mathbf{R}_{w}$ is full rank, it follows that $\mathbf{B} \mathbf{U}_{2}=\boldsymbol{0}$. Moreover, $\text{rank}(\mathbf{B}) = \text{rank}(\mathbf{U}_1)=M$, which implies that $\mbox{span}\left(\mathbf{B}^H\right)=\mbox{span}\left(\mathbf{U}_{1}\right)$. Therefore, there exists an $M\times M$ invertible matrix $\mathbf{T}$ such that (\ref{eq:Signal Sub Space}) holds.
\end{IEEEproof}
If the conditions of Lemma \ref{lem:trilinear kruskal rank} hold, then we can write
\begin{eqnarray}
\mathbf{A}_{x_{1}} & = & \mathbf{U}_{11}\mathbf{T}^{-1} \nonumber \\
\mathbf{U}_{12} & = & \mathbf{A}_{x_{1}}\mathbf{\Phi}\mathbf{T}=\mathbf{U}_{11}\mathbf{T}^{-1}\boldsymbol{\Phi T}\nonumber \\
\mathbf{U}_{13} & = & \mathbf{A}_{x_{1}}\mathbf{\Psi}^{H}\mathbf{T}=\mathbf{U}_{11}\mathbf{T}^{-1}\mathbf{\Psi}^{H}\mathbf{T}. \nonumber
\end{eqnarray}
Besides, since $\mathbf{U}_{1i}$ is of size $(N-1) \times M$, with $N>M$, the number of its rows is greater than or equal to the number of its columns. In addition, from (\ref{eq:Signal Sub Space}), $\mathbf{U}_{11}=\mathbf{A}_{x_1}\mathbf{T}$, where $\bf T$ is invertible and $\text{rank}(\mathbf{A}_{x_1})=M$. Therefore, it holds that $\text{rank}(\mathbf{U}_{11})=M$ and $\mathbf{U}_{11}^{\dagger} \mathbf{U}_{11}=\mathbf{I}$.
We can then derive a relation between $\bf \Phi$, $\bf \Psi$ and the blocks that compose $\mathbf{U}_1$ as
\begin{eqnarray}
\mathbf{U}_{11}^{\dagger}\mathbf{U}_{12} & = & \mathbf{T}^{-1}\boldsymbol{\Phi T} \nonumber \\
\mathbf{U}_{11}^{\dagger}\mathbf{U}_{13} & = & \mathbf{T}^{-1}\mathbf{\Psi}^{H}\mathbf{T}, \label{eq:Same Permutation}
\end{eqnarray}
where the matrix $\mathbf{T}$ is identical in both equations. We can now obtain $\mathbf{\Phi}$ and $\mathbf{T}$ using an eigenvalue decomposition up to permutation. Denote by $\bf \hat{\Phi}$ and $\bf \hat{T}$ the obtained matrices. Once these are recovered, we compute $\bf \hat{\Psi}$ with the same permutation, as
\begin{equation}
\hat{\mathbf{\Psi}}^H = \hat{\mathbf{T}}\left(\mathbf{U}_{11}^{\dagger}\mathbf{U}_{13}\right)\hat{\mathbf{T}}^{-1}.
\end{equation}
Since the electronic angles $f_i \cos (\theta_i)$ and $f_i \sin (\theta_i)$ are distinct, the eigenvalues of $\hat{\mathbf{\Phi}}$ and $\hat{\mathbf{\Psi}}$ are distinct as well and it follows that both matrices have the same permutation. We thus obtain proper pairing between the diagonal elements, and the AOAs $\theta_i$ and carrier frequencies $f_i$ are given by
\begin{eqnarray}
\theta_{i} & = & \tan^{-1}\left(\frac{\angle\Psi_{ii}}{\angle\Phi_{ii}}\right) \nonumber \\
f_{i} & = & \frac{\angle\Phi_{ii}}{2\pi\frac{d}{c}\cos\left(\theta_{i}\right)}. \label{eq:DOA solution}
\end{eqnarray}
Algorithm \ref{algo:2desprit} summarizes the main steps of the joint 2D ESPRIT described above. In the algorithm description, we exploit four cross-correlation matrices between the sub-arrays, instead of only the three defined in (\ref{eq:cross}), to increase robustness to noise. Here, we assume perfect knowledge of $\mathbf{R}$. In practice, it can be estimated as shown in Algorithm \ref{algo:2desprit}.
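A noiseless sketch of this joint recovery (using the three cross-correlations in (\ref{eq:cross}); the function and variable names are ours) may read:

```python
import numpy as np

def joint_esprit(X, Z, M, d, c=3e8):
    """Joint carrier/AOA recovery from the sub-array cross-correlations."""
    Q = X.shape[1]
    x1, x2, z1, z2 = X[:-1], X[1:], Z[:-1], Z[1:]
    R = np.vstack([x1 @ z1.conj().T,          # R1
                   x2 @ z1.conj().T,          # R2
                   x1 @ z2.conj().T]) / Q     # R3
    U, _, _ = np.linalg.svd(R)
    U1 = U[:, :M]                             # signal subspace of the stack
    N1 = x1.shape[0]
    U11, U12, U13 = U1[:N1], U1[N1:2 * N1], U1[2 * N1:]
    V1 = np.linalg.pinv(U11) @ U12            # = T^{-1} Phi T
    V2 = np.linalg.pinv(U11) @ U13            # = T^{-1} Psi^H T
    _, T = np.linalg.eig(V1 + V2)             # joint diagonalizer: pairs Phi, Psi
    Ti = np.linalg.inv(T)
    a_phi = np.angle(np.diag(Ti @ V1 @ T))            # 2 pi f_i d cos(theta_i) / c
    a_psi = np.angle(np.diag((Ti @ V2 @ T).conj().T)) # 2 pi f_i d sin(theta_i) / c
    theta = np.arctan2(a_psi, a_phi)
    f = a_phi * c / (2 * np.pi * d * np.cos(theta))
    order = np.argsort(f)
    return f[order], theta[order]
```

The pairs $(\hat{f}_i, \hat{\theta}_i)$ come out automatically matched, since the same eigenvector matrix diagonalizes both products.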
\begin{algorithm} \textbf{\uline{Input:}}\textbf{ } \begin{itemize} \item $Q$ snapshots of the sensor measurements $\mathbf{x}$ along the $x$ axis \item $Q$ snapshots of the sensor measurements $\mathbf{z}$ along the $z$ axis \end{itemize} \textbf{\uline{Output:}} \begin{itemize} \item $\hat{\boldsymbol{f}}$ - estimated carrier frequencies \item $\hat{\boldsymbol{\theta}} $ - estimated AOAs \end{itemize} \textbf{\uline{Algorithm:}} \begin{enumerate} \item Define $\mathbf{x}_{1}$ and $\mathbf{x}_{2}$ as the first and last $N-1$ rows of $\mathbf{x}$\\ Define $\mathbf{z}_{1}$ and $\mathbf{z}_{2}$ as the first and last $N-1$ rows of $\mathbf{z}$ \item Estimate the cross-covariance matrices: \begin{enumerate} \item $\mathbf{R}_{1}=\sum_{k=1}^Q \mathbf{x}_{1}[k]\mathbf{z}_{1}^H[k]$ \item $\mathbf{R}_{2}=\sum_{k=1}^Q \mathbf{x}_{2}[k]\mathbf{z}_{1}^H[k]$ \item $\mathbf{R}_{3}=\sum_{k=1}^Q \mathbf{x}_{1}[k]\mathbf{z}_{2}^H[k]$ \item $\mathbf{R}_{4}=\sum_{k=1}^Q \mathbf{x}_{2}[k]\mathbf{z}_{2}^H[k]$ \end{enumerate} \item Decompose $\mathbf{R}=\left[\begin{matrix}\mathbf{R}_{1}^T & \mathbf{R}_{2}^T & \mathbf{R}_{3}^T & \mathbf{R}_{4}^T \end{matrix}\right]^T$ using the SVD: $\mathbf{U,S,V}=\text{svd}(\mathbf{R})$ \item Set $\mathbf{U}_1$ to be the ${(4N-4)\times M}$ matrix that contains the $M$ left singular vectors corresponding to the $M$ largest singular values of $\mathbf{R}$ \item Define: \begin{enumerate} \item $\mathbf{U}_{11}$ as the first $N-1$ rows of $\mathbf{U}_1$ \item $\mathbf{U}_{12}$ as the next $N-1$ rows of $\mathbf{U}_1$ \item $\mathbf{U}_{13}$ and $\mathbf{U}_{14}$ similarly \end{enumerate} \item Compute: \begin{enumerate} \item $\mathbf{V}_{1}=\mathbf{U}_{11}^{\dagger}\mathbf{U}_{12}$ \item $\mathbf{V}_{2}=\mathbf{U}_{11}^{\dagger}\mathbf{U}_{13}$ \item $\mathbf{V}_{3}=\mathbf{U}_{11}^{\dagger}\mathbf{U}_{14}$ \end{enumerate} \item Perform an eigenvalue decomposition of
$\left(\mathbf{V}_{1}+\mathbf{V}_{2}+\mathbf{V}_{3}\right)=\mathbf{T}\boldsymbol{\Lambda}\mathbf{T}^{-1}$, where $\boldsymbol{\Lambda}$ is a diagonal matrix \item Compute $\mathbf{\hat{\Phi}}=\mathbf{T}^{-1}\mathbf{V}_{1}\mathbf{T}$ and $\mathbf{\hat{\Psi}}=\left(\mathbf{T}^{-1}\mathbf{V}_{2}\mathbf{T}\right)^{H}$ \item Compute the carrier frequencies and AOAs using (\ref{eq:DOA solution}) \end{enumerate} \protect\caption{Joint ESPRIT} \label{algo:2desprit} \end{algorithm} Finally, Theorem \ref{thm:DOA unique solution} summarizes sufficient conditions for perfect blind reconstruction of $\boldsymbol{f}$ and $\boldsymbol{\theta}$ from the low rate samples $\mathbf{x}[k]$ and $\mathbf{z}[k]$. \begin{thm} \label{thm:DOA unique solution}Let $u(t)$ be an arbitrary signal within $\mathcal{M}_{2}$ and consider an L-shaped ULA with $2N-1$ sensors, such that there are $N$ sensors along each axis with a common sensor at the origin, and the distance between two adjacent sensors is denoted by $d$. If: \begin{itemize} \item (c1) $d<\frac{c}{f_{\text{Nyq}}}$ \item (c2) $N>M$, \end{itemize} then (\ref{eq:Xeq})-(\ref{eq:Zeq}) has a unique solution $\left(\boldsymbol{f},\boldsymbol{\theta},\mathbf{w}\right)$.\end{thm} \begin{IEEEproof} From Lemma \ref{lem:trilinear kruskal rank}, it follows that, under conditions (c1)-(c2), $\mathbf{U}_{11}$ is full column rank and thus left invertible. Therefore, $\mathbf{\Phi}$ and $\mathbf{\Psi}$ can be uniquely derived from (\ref{eq:Same Permutation}), with the same matrix $\mathbf{T}$, and hence the same permutation, for both. This follows from the assumption that the electronic angles and, as a consequence, the eigenvalues of $\mathbf{\Phi}$ and $\mathbf{\Psi}$, are distinct.
Condition (c1) implies that both $2\pi\hat{f}_{i}\frac{d}{c}\cos\left(\hat{\theta}_{i}\right)\in(-\pi,\pi]$ and $2\pi\hat{f}_{i}\frac{d}{c}\sin\left(\hat{\theta}_{i}\right)\in(-\pi,\pi]$, namely $\angle\Phi_{i,i}$ and $\angle\Psi_{i,i}$, for $1 \leq i \leq M$, are unique. It follows that $\boldsymbol{f}$ and $\boldsymbol{\theta}$ are unique as well and are given by (\ref{eq:DOA solution}). \end{IEEEproof} In addition, if conditions (c1)-(c2) from Theorem \ref{prop:signals-recovery:-given} hold, then $s_i(t)$ is uniquely recovered from $\mathbf{x}[k]$ and $\mathbf{z}[k]$ via (\ref{eq:rec_sig}) and (\ref{eq:sVsw}) with the observation matrix $\mathbf{A}= \left[ \begin{array}{l} \mathbf{A}_x \\ \mathbf{A}_z \end{array} \right]$. From Theorem \ref{thm:DOA unique solution}, the minimal number of sensors along each axis that allows perfect blind reconstruction is $N\geq M+1$, leading to a total of $2N-1 \geq 2M +1$ sensors, including the common sensor at the origin. In addition, for perfect reconstruction we require $f_{s}\geq B$ as in Theorem \ref{prop:signals-recovery:-given}. Thus, the minimal sampling rate is bounded by $(2M+1)B$. \subsection{CS Approach \label{sub:Compressed-Sensing-Approach}} In this section, we derive a second joint carrier frequency and AOA recovery approach based on CS methods. Denote by \begin{equation} \mathbf{v}[k] = \left[ \begin{matrix} \mathbf{x}[k] \\ \mathbf{z}[k] \end{matrix} \right] \end{equation} the vector that stacks the samples from the sensors of both axes. Consider the correlation matrix \begin{equation} \mathbf{R}=\mathbb{E}\left[\mathbf{v}[k] \mathbf{v}^{H}[k]\right]=\mathbf{A}\mathbf{R}_{w}\mathbf{A}^{H},\label{eq:rvsp} \end{equation} where $\mathbf{A} = \left[ \begin{matrix} \mathbf{A}_x \\ \mathbf{A}_z \end{matrix} \right]$. In the following, we assume perfect knowledge of $\mathbf{R}$.
In practice, it can be estimated as \begin{equation} \mathbf{R}=\sum_{k=1}^{Q}\mathbf{v}[k]\mathbf{v}[k]^{H},\label{eq:r_est} \end{equation} where $Q$ is the number of snapshots. Denote $\alpha_{i}=f_{i}\cos\theta_{i}$ and $\beta_{i}=f_{i}\sin\theta_{i}$ and suppose that $\alpha_{i}$ and $\beta_{i}$ lie on a grid $\{\delta l\}_{l=-L_0}^{L_0}$, with $L_0=\frac{f_{\text{Nyq}}}{2\delta}$. Here, $\delta$ is a parameter of the recovery algorithm that defines the grid resolution. Denote $L=2L_0+1$. We can then write \begin{equation} \mathbf{R}=\mathbf{G}\mathbf{R}_w^g\mathbf{G}^{H},\label{eq:rvsp2} \end{equation} where ${\bf G}$ is a $(2N-1)\times L^2$ matrix with $(n,l)$th element $G_{nl}=e^{j2\pi\frac{dn}{c}\alpha_{l_1}}$, for $0 \leq n \leq N-1$, and $G_{nl}=e^{j2\pi\frac{d (n-N+1)}{c} \beta_{l_2} }$, for $N \leq n \leq 2N-2$. Here, $l_1=(l \mod L)-L_0$ and $l_2=\lfloor \frac{l}{L} \rfloor -L_0$. With high probability, the discretization preserves the lack of ambiguity, namely $\text{spark}(\mathbf{G})=N+1$. Formulating concrete conditions that ensure this property is very involved, and thus it is traditionally assumed without justification \cite{multiESPRIT}. The nonzero elements of the $L^2 \times L^2$ matrix $\mathbf{R}_w^g$ are the $M$ diagonal elements of $\mathbf{R}_{w}$, located at the $M$ indices corresponding to $\{\alpha_{i},\beta_{i}\}$. Since $\mathbf{R}_w^g$ is diagonal, the observation model (\ref{eq:rvsp2}) can be equivalently written in vector form as \begin{equation} \label{eq:vecCS2} \text{vec}(\mathbf{R})= \left( \bar{\mathbf{G}} \odot \mathbf{G} \right) \mathbf{r}_w^g. \end{equation} Here, $\text{vec}(\mathbf{R})$ is the column vector that vectorizes the matrix $\bf R$ by stacking its columns, $\mathbf{r}_w^g$ is the $L^2 \times 1$ vector that contains the diagonal of $\mathbf{R}_w^g$, and $\odot$ denotes the Khatri-Rao product.
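The construction of the on-grid dictionary and the vectorized model (\ref{eq:vecCS2}) can be sketched as follows. This is a toy numpy example on a normalized grid: the phases below stand in for $2\pi\frac{d}{c}\alpha_{l_1}$ and $2\pi\frac{d}{c}\beta_{l_2}$, and the greedy recovery routine is a minimal hand-rolled OMP, not the implementation used in the paper:

```python
import numpy as np

# Toy on-grid model mirroring (eq:rvsp2)-(eq:vecCS2).  Normalized grid
# phases stand in for 2*pi*(d/c)*alpha and 2*pi*(d/c)*beta (assumption).
N, K = 5, 5                        # N sensors per axis, K grid points per angle
m = 2 * N - 1                      # total number of sensors
omega = 2 * np.pi * np.arange(K) / K

# Dictionary G: rows 0..N-1 steer in omega1, rows N..2N-2 steer in omega2
G = np.empty((m, K * K), dtype=complex)
for l2 in range(K):
    for l1 in range(K):
        col = l1 + K * l2
        G[:N, col] = np.exp(1j * np.arange(N) * omega[l1])
        G[N:, col] = np.exp(1j * np.arange(1, N) * omega[l2])

# Khatri-Rao product: column l of conj(G) (.) G is conj(g_l) kron g_l
KR = np.einsum('jl,il->jil', G.conj(), G).reshape(m * m, K * K)

# M-sparse source powers on the grid and the induced correlation matrix
r = np.zeros(K * K)
r[3], r[17] = 1.0, 2.0
R = G @ np.diag(r) @ G.conj().T
vecR = R.flatten(order='F')        # column-stacking; equals KR @ r

def omp(A, y, k):
    """Minimal orthogonal matching pursuit: greedily select k atoms of A."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    support = sorted(support)
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    return support, coef

support, coef = omp(KR, vecR, 2)   # noiseless case: recovers the true support
```

In this toy setup the dictionary coherence is low enough that OMP provably recovers the 2-sparse support exactly in the noiseless case; in general, recovery is governed by the spark condition discussed above.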
The goal is thus to recover the $M$-sparse vector $\mathbf{r}_w^g$ from the $(2N-1)^2$ measurements in $\text{vec}(\mathbf{R})$. The following theorem derives a sufficient condition on the number of sensors $2N-1$ for perfect recovery of $\alpha_{i},\beta_{i}$, $i\in\{1,\dots,M\}$, in a noiseless environment. \begin{thm} Let $u(t)$ be an arbitrary signal within $\mathcal{M}_{2}$ and consider an L-shaped ULA with $2N-1$ sensors, such that there are $N$ sensors along each axis with a common sensor at the origin, and the distance between two adjacent sensors is denoted by $d$. If: \begin{itemize} \item (c1) $d<\frac{c}{f_{\text{Nyq}}}$ \item (c2) $N>M$ \item (c3) $\text{spark}(\mathbf{G})=N+1$, \end{itemize} then (\ref{eq:vecCS2}) has a unique $M$-sparse solution $\mathbf{r}_w^g$. \end{thm} \begin{proof} In order to recover the $M$-sparse vector $\mathbf{r}_w^g$ from $\text{vec}(\mathbf{R})$, we require \cite{CSBook, SamplingBook} \begin{equation} \label{eq:spark_kr} \text{spark} \left( \bar{\mathbf{G}} \odot \mathbf{G} \right) > 2M. \end{equation} From \cite{KR_spark}, it holds that \begin{equation} \text{spark} \left( \bar{\mathbf{G}} \odot \mathbf{G} \right) \geq \min \{ 2(\text{spark} (\mathbf{G}) -1), L^2+1 \}. \end{equation} Combining (c2) and (c3), we have \begin{equation} 2(\text{spark} (\mathbf{G}) -1) > 2M. \end{equation} Finally, since $L^2 \gg M$, we also have $L^{2}+1>2M$, so that (\ref{eq:spark_kr}) holds. \end{proof} To recover the sparse vector $\mathbf{r}_w^g$, we can use any CS recovery algorithm, such as orthogonal matching pursuit (OMP) \cite{CSBook}. Once the indices $\alpha_{i},\beta_{i}$, $i\in\{1,\dots,M\}$, are recovered, the corresponding $f_{i}$ and $\theta_{i}$ are given by \begin{eqnarray} \hat{\theta}_{i} & = & \tan^{-1}\left(\frac{\beta_{i}}{\alpha_{i}}\right),\nonumber \\ \hat{f}_{i} & = & \frac{\alpha_{i}}{\cos(\hat{\theta}_{i})}.
\end{eqnarray} \subsection{Numerical Results} In this section, we demonstrate the effect of different system parameters on the reconstruction performance. Consider a complex-valued signal $u\left(t\right)$ from $\mathcal{M}_{2}$, which is the sum of $M=3$ narrowband source signals $s_{i}\left(t\right)$, each of width $B=50$MHz and with $f_{\text{Nyq}}=10$GHz. The carrier frequencies $f_{i}$ are drawn uniformly at random from $[-\frac{f_{\text{Nyq}}-B}{2},\frac{f_{\text{Nyq}}-B}{2}]$, and the AOAs $\theta_{i}$ are drawn uniformly at random from $[-85^{\circ},85^{\circ}]$. The L-shaped array is composed of $2N-1$ sensors, $N$ along each axis with a common sensor at the origin. The received signal at each sensor is corrupted by AWGN. The mixing and sampling rates are set to $f_{s}=f_{p}=1.4B$. In this section, we compare three reconstruction methods: 1) the joint ESPRIT method summarized in Algorithm \ref{algo:2desprit}; 2) the CS approach presented in Section \ref{sub:Compressed-Sensing-Approach}; 3) a PARAFAC analysis \cite{Hars94} based approach, as presented in \cite{Stein2015}. PARAFAC \cite{Hars94} extends the bilinear model of factor analysis to a trilinear model using the alternating least squares (ALS) method. In \cite{Stein2015}, PARAFAC is used to decompose the cross-correlation matrices defined in (\ref{eq:cross}) into three matrices, isolating $\bf \Phi$ and $\bf \Psi$. To apply the PARAFAC algorithm, we use the COMFAC MATLAB function implemented by \cite{Sidi98}. In these simulations, we focus on the recovery of the carrier frequencies $f_{i}$ and AOAs $\theta_{i}$. Once these are recovered, full signal reconstruction can be performed as shown in the first part of this work (see (\ref{eq:rec_sig}) and (\ref{eq:sVsw})). The reconstruction performance is measured by the following criteria: $\frac{1}{Mf_{\text{Nyq}}}\sum_{i=1}^{M}\left|f_{i}-\hat{f}_{i}\right|$ for the frequencies, and $\frac{1}{M180^{\circ}}\sum_{i=1}^{M}\left|\theta_{i}-\hat{\theta}_{i}\right|$ for the AOAs.
The first simulation examines the recovery performance with respect to the number of sensors $2N-1$. Figure \ref{DOA sensors} presents the carrier frequency and AOA reconstruction performance for different numbers of sensors, which affect both the noise averaging and the total amount of samples available. \begin{figure}[H] \includegraphics[width = 0.5\textwidth]{2D_DOA.png} \protect\caption{Influence of the number of sensors $2N-1$ on the carrier frequency and AOA reconstruction performance, with $M=3$, $Q=300$, $\text{SNR}=10$dB. \label{DOA sensors}} \end{figure} The second simulation, presented in Fig. \ref{DOA SNR}, illustrates the impact of SNR on the recovery performance. \begin{figure}[H] \includegraphics[width = 0.5\textwidth]{DOA_SNR.png} \protect\caption{Influence of SNR on the carrier frequency and AOA reconstruction performance, with $M=3$, $2N-1=11$, $Q=300$. \label{DOA SNR}} \end{figure} \section{Conclusion} In this paper, we considered two scenarios: spectrum sensing, and joint spectrum sensing and DOA estimation, of multiband signals from sub-Nyquist samples. For the first scenario, we proposed a receiver composed of a ULA, where each sensor contains an analog front-end equivalent to one channel of the MWC. This system constitutes an alternative sub-Nyquist sampling scheme that outperforms the MWC either in low-SNR performance or in implementation complexity. For the joint spectrum sensing and DOA scenario, we extended our ULA configuration and presented the CasCADE system, an L-shaped array composed of two ULAs with the same sampling scheme as above. In both cases, we derived sufficient conditions for the recovery of the transmissions' carrier frequencies and AOAs, where relevant. We showed that the minimal number of sensors for the first scenario is twice the number of transmissions, namely $2M$, in the worst case, and $M+1$ with high probability, whereas in the second scenario it is $2M+1$ in the average case.
Last, we provided two reconstruction schemes for both scenarios: one based on the analytic ESPRIT method, and the second based on CS techniques. Simulations demonstrated the performance of the above algorithms in comparison with existing methods. \bibliographystyle{IEEEtran} \bibliography{IEEEabrv,IEEEfull} \end{document}
TITLE: Open balls appearance QUESTION [0 upvotes]: We know that with the Euclidean metric the open balls in $\mathbb{R}^2$ are disks without their boundary, of course. My question is whether there exists a known metric on $\mathbb{R}^2$ such that the open balls look like triangles or hexagons? Thank you for the support. REPLY [1 votes]: I guess that you want a metric which induces the standard topology on $\mathbb{R}^2$. If you want the metric to be induced by a norm, then you can find an answer here: Norm induced by convex, open, symmetric, bounded set in $\Bbb R^n$. Note that any open norm-ball centered at $0$ must be bounded, open, convex and centrally symmetric (for boundedness, recall that all norms on $\mathbb{R}^n$ are equivalent). This excludes triangles, but allows hexagons. Edited: Let $H$ be an open hexagon with centre $0$ and vertices lying on the unit circle. It is bounded, open, convex and centrally symmetric. As in the above link, define $$\|x\|_{hex} = \inf \{k>0 : x/k \in H \} .$$ This is a norm such that $H = \{ x \in \mathbb{R}^2 \mid \|x\|_{hex} < 1 \}$. Note that $$\{ x \in \mathbb{R}^2 \mid \|x\|_{hex} < r \} = r H ,$$ the latter being defined as $r H = \{r h \mid h \in H \}$. This is again an open hexagon, stretched by the factor $r$. Our norm induces the metric $$d_{hex}(x,y) = \|x - y\|_{hex} .$$ Then $$B_{hex}(x;r) = \{ y \in \mathbb{R}^2 \mid d_{hex}(x,y) < r \} = x + B_{hex}(0;r) = x +r H$$ which is an open hexagon with centre $x$. The same construction can be performed for any bounded, open, convex and centrally symmetric set $A$. Then all open balls with respect to the metric $d_A$ are copies of some $rA$ (obtained by stretching $A$).
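To make the construction concrete, here is a small Python sketch of $\|\cdot\|_{hex}$. It relies on the standard fact (assumed here, not spelled out in the answer above) that the Minkowski functional of a convex polygon containing the origin equals the maximum of its normalized edge functionals:

```python
import numpy as np

# Vertices of the regular hexagon H, lying on the unit circle
verts = np.array([[np.cos(k * np.pi / 3), np.sin(k * np.pi / 3)]
                  for k in range(6)])

def hex_norm(x):
    """Minkowski functional ||x|| = inf{k > 0 : x/k in H} of the hexagon H.

    For a convex polygon containing 0, with edges {y : <n_e, y> = c_e},
    c_e > 0, the functional equals max_e <n_e, x> / c_e.
    """
    x = np.asarray(x, dtype=float)
    vals = []
    for k in range(6):
        a, b = verts[k], verts[(k + 1) % 6]
        n = np.array([b[1] - a[1], a[0] - b[0]])  # outward normal of edge a->b
        vals.append((n @ x) / (n @ a))            # n . a = c_e > 0
    return max(vals)

def d_hex(x, y):
    """The induced metric d_hex(x, y) = ||x - y||_hex."""
    return hex_norm(np.asarray(x, dtype=float) - np.asarray(y, dtype=float))
```

The unit ball of `hex_norm` is exactly $H$, and balls of radius $r$ centered at $x$ are translated, stretched hexagons $x + rH$, as in the answer.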
TITLE: Does this matrix identity hold? QUESTION [4 upvotes]: For invertible matrices A and B does the identity: $$ (A^{-1} + B^{-1})^{-1} = A - A(A+B)^{-1}A $$ hold? My supervisor suggested that they are equal but I haven't been able to prove this and in the matrix cookbook (http://www.math.uwaterloo.ca/~hwolkowi/matrixcookbook.pdf) there are separate identities for both sides of this equation, but they are not given as equal to each other. REPLY [20 votes]: \begin{eqnarray*} A-A(A+B)^{-1}A &=&A-(A+B-B)(A+B)^{-1}A \\ &=&B(A+B)^{-1}A=[A^{-1}(A+B)B^{-1}]^{-1} \\ &=&(A^{-1}+B^{-1})^{-1} \end{eqnarray*}
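A quick numerical sanity check of the identity (a sketch: the random matrices $A$, $B$, and $A+B$ are assumed invertible here, which holds with probability 1):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
# A, B drawn at random; A, B and A + B are invertible with probability 1
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

inv = np.linalg.inv
lhs = inv(inv(A) + inv(B))          # (A^-1 + B^-1)^-1
rhs = A - A @ inv(A + B) @ A        # A - A (A+B)^-1 A
# lhs and rhs agree to machine precision
```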
\begin{document} \begin{titlepage} \title{\vspace{-3cm}{\huge On Factorizable S-matrices, Generalized TTbar, \break and the Hagedorn Transition}} \author{Giancarlo Camilo$^{1,2\musDoubleFlat}$, Thiago Fleury$^{1\musFlat}$, M\'{a}t\'{e} Lencs\'{e}s$^{3\musNatural}$,\\[0.1cm] Stefano Negro$^{4\musSharp}$ and Alexander Zamolodchikov$^{5,6\musDoubleSharp}$\\[0.3cm]} \date{\footnotesize{$^1$ International Institute of Physics, Federal University of Rio Grande do Norte\\ Campus Universit\'ario, Lagoa Nova, 59078-970, Natal, RN, Brazil\\[0.1cm]$^2$ Quantum Research Centre, Technology Innovation Institute, Abu Dhabi, UAE\\[0.1cm]$^3$ BME-MTA Statistical Field Theory `Lend\"ulet’ Research Group \\Department of Theoretical Physics, Budapest University of Technology and Economics \\1111 Budapest, Budafoki {\'u}t 8, Hungary\\[0.1cm]$^4$ Center for Cosmology and Particle Physics, New York University, New York,\\ NY 10003, U.S.A. \\[0.1cm]$^5$ C.N. Yang Institute for Theoretical Physics, State University of New York, Stony Brook,\\ NY 11794-3840, USA.\\[0.1cm]$^6$ Kharkevich Institute for Information Transmission Problems, Moscow, Russia\\[0.3cm]$^{\musDoubleFlat}$\texttt{\href{mailto:gcamilo@iip.ufrn.br}{gian.fis@gmail.com},} $^{\musFlat}$\texttt{\href{mailto:tsi.fleury@gmail.com}{tsi.fleury@gmail.com},} $^{\musNatural}$\texttt{\href{mailto:mate.lencses@gmail.com}{mate.lencses@gmail.com},}\\ $^{\musSharp}$\texttt{\href{mailto:stefano.negro@nyu.edu}{stefano.negro@nyu.edu},} $^{\musDoubleSharp}$\texttt{\href{mailto:alexander.zamolodchikov@stonybrook.edu}{alexander.zamolodchikov@stonybrook.edu},}\\}} \maketitle \begin{abstract} We study solutions of the Thermodynamic Bethe Ansatz equations for relativistic theories defined by the factorizable $S$-matrix of an integrable QFT deformed by CDD factors. Such $S$-matrices appear under generalized TTbar deformations of integrable QFT by special irrelevant operators. 
The TBA equations, of course, determine the ground state energy $E(R)$ of the finite-size system, with the spatial coordinate compactified on a circle of circumference $R$. We limit attention to theories involving just one kind of stable particles, and consider deformations of the trivial (free fermion or boson) $S$-matrix by CDD factors with two elementary poles and regular high energy asymptotics -- the ``2CDD model''. We find that for all values of the parameters (positions of the CDD poles) the TBA equations exhibit two real solutions at $R$ greater than a certain parameter-dependent value $R_*$, which we refer to as the primary and secondary branches. The primary branch is identified with the standard iterative solution, while the secondary one is unstable against iterations and needs to be accessed through an alternative numerical method known as pseudo-arc-length continuation. The two branches merge at the ``turning point'' $R_*$ (a square-root branching point). The singularity signals a Hagedorn behavior of the density of high energy states of the deformed theories, a feature incompatible with the Wilsonian notion of a local QFT originating from a UV fixed point, but typical for string theories. This behavior of $E(R)$ is qualitatively the same as the one for standard TTbar deformations of local QFT. \end{abstract} \end{titlepage} \newpage \tableofcontents \newpage \section{Introduction}\label{sec:intro} The so-called TTbar deformations \cite{Smirnov:2016lqw,Cavaglia:2016oda} of two-dimensional quantum field theories (QFTs) have brought about a renewed interest in UV properties of Renormalization Group (RG) flows generated by higher dimensional (a.k.a. ``irrelevant'') operators.
The TTbar deformation is defined as the one-parameter family of formal ``actions'' $\mathcal{A}_\alpha$, determined by the flow \begin{eqnarray}\label{Aalpha} \frac{d}{d\alpha}\mathcal{A}_\alpha = \int\,(T\Tb)_\alpha (x)\,d^2 x\;, \end{eqnarray} where $(T\Tb)_\alpha (x)$ is a special composite operator built from the components of the energy-momentum tensor associated with the theory $\mathcal{A}_\alpha$ \cite{Zamolodchikov:2004ce}. The deformation \eqref{Aalpha} has a number of notable properties. The theory $\mathcal{A}_\alpha$ is ``solvable'', in the sense that certain characteristics can be found exactly in terms of the corresponding ones in the undeformed theory $\mathcal{A}_{\alpha=0}$. This is remarkable, because the deformation operator $(T\Tb)_\alpha$ has exact dimension $4$, meaning the perturbation in \eqref{Aalpha} is ``irrelevant'' in the RG sense. Normally, such deformations are expected to break the short-distance structure of the quantum field theory, generally rendering the theory UV incomplete, and possibly violating causality at short scales. The abnormal UV properties of the theory $\mathcal{A}_\alpha$ are manifest already in the short-scale behavior of its finite-size ground-state energy. If the spatial coordinate of the 2D space-time is compactified on a circle of circumference $R$, its ground-state energy $E_\alpha(R)$ is determined exactly, via the equation \cite{Smirnov:2016lqw,Cavaglia:2016oda} \begin{eqnarray}\label{burgers2} E_\alpha(R)=E_0(R-\alpha E_\alpha(R))\;, \end{eqnarray} in terms of the ground state energy $E_0(R)$ of the undeformed theory, at $\alpha=0$. The equation \eqref{burgers2} shows that, depending on the sign of the deformation parameter $\alpha$, the ground state energy either develops a square root singularity at some $R_*\sim 1/\sqrt{|\alpha|}$, or has no short-distance singularity at all. Neither of these types of behavior is compatible with the idea of QFT as the RG flow stemming out of a UV fixed point. 
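As a concrete illustration (not part of the original argument), take the undeformed theory to be a CFT with ground-state energy $E_0(R)=-\pi c/(6R)$. Then \eqref{burgers2} reduces to the quadratic equation $\alpha E^2 - R E - \pi c/6 = 0$, and for $\alpha<0$ the physical root develops a square-root singularity at $R_*=\sqrt{2\pi|\alpha| c/3}$, which can be checked numerically:

```python
import numpy as np

def E0(R, c=1.0):
    """Undeformed CFT ground-state energy on a circle (assumed seed theory)."""
    return -np.pi * c / (6.0 * R)

def E_deformed(R, alpha, c=1.0):
    """Physical root of alpha*E^2 - R*E - pi*c/6 = 0, i.e. E = E0(R - alpha*E).

    Chosen so that E -> E0(R) as alpha -> 0.
    """
    disc = R**2 + 2.0 * np.pi * alpha * c / 3.0
    return (R - np.sqrt(disc)) / (2.0 * alpha)

alpha, c = -0.1, 1.0
R_star = np.sqrt(2.0 * np.pi * (-alpha) * c / 3.0)  # square-root branch point
E = E_deformed(1.0, alpha, c)                       # satisfies E = E0(1 - alpha*E)
```

Below $R_*$ the discriminant turns negative and no real solution exists, which is the finite-$R$ singularity discussed in the text.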
The theory defined by \eqref{Aalpha} therefore is not a local QFT in the Wilsonian sense \cite{Wilson:1973jj}. Moreover, at negative $\alpha$, the singularity at finite $R$ signals a very fast growth of the density of states at high energies, a common hallmark of string theories, leading to the Hagedorn transition \cite{Polchinski:1998rq}. The behavior of $E_\alpha(R)$ at positive $\alpha$ is possibly even more puzzling, as it suggests a finite number of states per unit volume, an unlikely feature if one thinks of a QFT as a system of continuously many interacting degrees of freedom, unless quantum gravity is involved\footnote{A relation of the TTbar deformation to the Jackiw-Teitelboim gravity was indeed proposed in \cite{Dubovsky:2017cnj,Dubovsky:2018bmo}.}. Therefore, the deformed theory determined by \eqref{Aalpha} cannot be considered a conventional UV complete local QFT. At the same time, however, the TTbar deformation has a number of robust features which makes one reluctant to simply dismiss it as ``pathological''. It is instead tempting to think that the deformation \eqref{Aalpha} exemplifies some meaningful extension of the notion of local QFT. In particular, an interesting interpretation of the theory $\mathcal{A}_\alpha$ in terms of its gravitational dual was proposed in \cite{McGough:2016lol}, where a relation to the state of the bulk gravity in the dual theory was suggested. Several questions about 2D physics of the deformed theory need to be elucidated in order to put such suggestions on a solid ground. For example, does the deformation preserve any part of the local structure of QFT? Notice how the very definition \eqref{Aalpha} depends on the notion of the energy-momentum tensor, conventionally a part of such a local structure. Another important question concerns the macro-causality in 2D space-time. 
While the deformation \eqref{Aalpha} with positive $\alpha$ is suspected to display super-luminal propagation \cite{Dubovsky:2017cnj,McGough:2016lol}, the case of negative $\alpha$ is most likely free from this problem. We will not dwell on this question, as it is the negative-$\alpha$ deformation which will be of interest to the present discussion. In any case, we believe it is important to understand the physical origin of the above abnormal short-distance properties. Another exact result about the theory $\mathcal{A}_\alpha$ concerns the deformation of its $S$-matrix, whose elements differ from the corresponding undeformed ones by a universal phase factor, available in closed form \cite{Dubovsky:2017cnj}. In particular, the $2 \to 2$ elastic scattering amplitude has the form \begin{eqnarray}\label{sdef} S_\alpha(\theta) = S_0(\theta)\,\exp\left(-i \alpha M^2 \sinh\theta\right)\;, \end{eqnarray} where $S_0(\theta)=S_{\alpha=0}(\theta)$ is the $2\to 2$ scattering amplitude of the undeformed theory. Here $\theta=\theta_1 - \theta_2$ is the difference between rapidities of the two particles involved -- assumed for simplicity to be identical -- and $M$ denotes their mass; in what follows we set the units so that $M=1$. A notable feature of the additional phases acquired under the deformation is their abnormally fast high-energy growth, which is evident already in the form \eqref{sdef}\footnote{A similar behavior of the scattering phase was previously found in non-commutative field theories \cite{Douglas:2001ba}.}. The scattering phase in \eqref{sdef} determines the density of two-particle states, suppressing it when $\alpha>0$ but greatly enhancing it at negative $\alpha$. In the latter case, one might be led to believe that the Hagedorn behavior is directly related to this rapid growth of the $2\to 2$ scattering phase. 
One of the results of the present work is to show that the situation is more subtle: the growth of the two-particle scattering phase in \eqref{sdef} is not a necessary condition for the formation of the singularity of the finite-size energy at finite real $R$. We will study certain generalizations of the TTbar deformation which can be defined whenever the original QFT is integrable \cite{Smirnov:2016lqw}. In most of such deformations, the scattering phases present a less exotic high-energy behavior -- i.e., they have finite limit at $\theta\to\infty$ -- while, at the same time, the overall density of states grows nonetheless exponentially with the energy, leading to the Hagedorn singularity. The generalizations of the TTbar deformations we will be interested in are based on the integrability of the original QFT. This assumes that the theory possesses infinitely many conserved local currents of higher Lorentz spins $s+1$, with $s$ taking values in the set $\{s\}$ of odd natural numbers: $s=1,3,5,7,...$\footnote{Generally, the set of spins $\{s\}$ of local Integrals of motion may be different in different integrable theories. Here we assume, again for simplicity, the most common situation -- represented e.g. by sinh-Gordon or sigma models -- where $\{s\}$ involves all odd natural numbers. In different models the CDD factor discussed below may be constrained by additional conditions, which however do not change the overall conclusions below.}. The deforming operators $T\Tb^{(s)}(x)$ are constructed from these currents in the exact same way as the operator $T\Tb (x)$ is built from the energy-momentum tensor, see \cite{Smirnov:2016lqw} for details. It can be then shown that the theory deformed by adding such operators retains its integrability, preserving the same set of conserved local currents. 
Therefore the deformations of an Integrable QFT (IQFT) by the operators $T\Tb^{(s)}$ generate an infinite-dimensional family of flows generalizing \eqref{Aalpha}, \begin{eqnarray}\label{Aalphas} \frac{\partial\mathcal{A}_{\{\alpha\}}}{\partial\alpha_s} = \int\,T\Tb^{(s)}_{\{\alpha\}}(x)\,d^2 x\;. \end{eqnarray} Here $\{\alpha\}$ denotes the infinite set of the deformation parameters $\{\alpha\}:=\{\alpha_s\}$, and the subscript $\{\alpha\}$ under the operator $T\Tb^{(s)}(x)$ is added to emphasize that it is constructed in terms of the conserved currents of the deformed theory $\mathcal{A}_{\{\alpha\}}$. In what follows we refer to \eqref{Aalphas} as the \emph{generalized TTbar flow}\footnote{In \cite{Conti:2019dxg}, a different family of generalizations of the TTbar flow, in which the deforming operators $T\Tb_{s}$ are asymmetrically constructed from the energy-momentum tensor and a higher-conserved current, was explored.}. For integrable theories, the infinite-parameter flow \eqref{Aalphas} generalizes the one-parameter deformation \eqref{Aalpha}. The latter corresponds to the special case $\alpha_s =0$ for $s>1$, and $\alpha_1=\alpha$. To distinguish them, below we often refer to \eqref{Aalpha} as the ``TTbar proper'', or simply TTbar, reserving the term ``generalized TTbar'' for the generic deformation \eqref{Aalphas}.
It was argued that the deformation \eqref{Aalphas} leads to the following deformation of the elastic two-particle $S$-matrix \begin{eqnarray}\label{sdefs} S_{\{\alpha\}}(\theta)=S_{\{0\}}(\theta)\,\Phi_{\{\alpha\}}(\theta)\,, \qquad \Phi_{\{\alpha\}}(\theta)=\exp\left\{-i \,\sum_{s\in2\mathbb{Z}+1}\,\alpha_s\,\sinh\left(s\,\theta\right)\right\}\;, \end{eqnarray} with the same notations as in \eqref{sdef} and \eqref{Aalphas}\footnote{The parameters $\alpha_s$ in \eqref{sdef} coincide with the flow parameters defined in \eqref{Aalphas} provided a specific normalization of the fields $T\Tb^{(s)}_{\{\alpha\}}(x)$ is chosen, otherwise the terms in the sum in \eqref{sdef} would have additional normalization-dependent numerical coefficients. The form \eqref{sdef} was explicitly derived in \cite{Smirnov:2016lqw} for the deformed sine-Gordon model, to leading order in the deformation parameters. However, this form of the $S$-matrix deformation under the flow \eqref{Aalphas} can be proven in the general case, using the methods of \cite{Cardy:2018sdv} or the approach developed in \cite{Kruthoff:2020hsi}. We will elaborate on this point elsewhere.}. The phase factor $\Phi_{\{\alpha\}}(\theta)$ is known as the \emph{CDD factor} \cite{Castillejo:1955ed}. Generally, it is an energy-dependent phase factor $\Phi(\theta)$ which can be added to the $2\to 2$ scattering amplitude without violating the analyticity, unitarity and crossing symmetry conditions. Unitarity and crossing demand that $\Phi(\theta)$ satisfy the functional relations \begin{eqnarray}\label{cdddef} \Phi(\theta)\Phi(-\theta)=1\,, \qquad \Phi(\theta)=\Phi(i\pi-\theta)\;, \end{eqnarray} which $\Phi_{\{\alpha\}}(\theta)$ in \eqref{sdefs} obviously satisfies term by term in the sum over $s$.
Moreover, it is easy to see that (once the overall sign ambiguity is ignored) any solution of \eqref{cdddef} can be represented by the form \eqref{sdefs}, with the series in the exponential converging in some vicinity of the point $\theta=0$. However, the series does not need to converge at all $\theta$. The $S$-matrix analyticity forces $\Phi(\theta)$ to be a meromorphic function of $\theta$, with the locations of the poles constrained by the condition of macro-causality (more on this momentarily). Therefore, for \eqref{sdefs} to represent a physically sensible $S$-matrix, the sum over $s$ is allowed to have a finite domain of convergence, while its analytic continuation must admit the representation \begin{eqnarray}\label{cddg} \Phi_{\{\alpha\}}(\theta) = \Phi_{\text{pole}}(\theta) \,\Phi_{\text{entire}}(\theta)\;, \end{eqnarray} where the first factor absorbs all the poles located at finite $\theta$, whose number $N$ is in general arbitrary (possibly infinite), \begin{eqnarray}\label{phipole} \Phi_{\text{pole}}(\theta) = \prod_{p=1}^N \frac{\sinh\theta_p + \sinh\theta}{\sinh\theta_p - \sinh\theta}\;, \end{eqnarray} and \begin{eqnarray}\label{phientire} \Phi_{\text{entire}}(\theta) = \exp\left\{-i \,\sum_{s}\,a_s \,\sinh\left(s\,\theta\right)\right\}\;. \end{eqnarray} In this last factor, the series in the exponential is assumed to converge at all $\theta$, so that $\Phi_{\text{entire}}(\theta)$ represents an entire function of $\theta$. Macro-causality restricts possible positions of the poles $\theta_p$ to either the imaginary axis $\Re \theta_p=0$, or to the strips $\Im \theta_p \in [-\pi,0] \ \text{mod} \ 2\pi$ since, in virtue of \eqref{cdddef}, $\Phi(\theta)$ is a periodic function, $\Phi(2\pi i+\theta)=\Phi(\theta)$. 
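Indeed, each pole factor in \eqref{phipole} satisfies both conditions in \eqref{cdddef} separately. Writing a single factor as $\phi_p(\theta)$, the check reads:

```latex
\phi_p(\theta) = \frac{\sinh\theta_p + \sinh\theta}{\sinh\theta_p - \sinh\theta}\,,
\qquad
\phi_p(\theta)\,\phi_p(-\theta)
  = \frac{\sinh\theta_p + \sinh\theta}{\sinh\theta_p - \sinh\theta}
    \cdot
    \frac{\sinh\theta_p - \sinh\theta}{\sinh\theta_p + \sinh\theta} = 1\,,
\qquad
\phi_p(i\pi-\theta)
  = \frac{\sinh\theta_p + \sinh(i\pi-\theta)}{\sinh\theta_p - \sinh(i\pi-\theta)}
  = \phi_p(\theta)\,,
```

where the last equality uses $\sinh(i\pi-\theta)=\sinh(i\pi)\cosh\theta-\cosh(i\pi)\sinh\theta=\sinh\theta$.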
Let us stress here that the representation (\ref{cddg}--\ref{phientire}) of the generic CDD factor $\Phi_{\{\alpha\}}(\theta)$ differs from the one given in \eqref{sdefs} only in the parameterization: any factor (\ref{cddg}--\ref{phientire}) can be written in the form \eqref{sdefs}, with the parameters $\alpha_s$ expressed in terms of $a_s$ and $\theta_p$, and conversely any factor $\Phi_{\{\alpha\}}(\theta)$ defined in \eqref{sdefs}, being analytically continued to the whole $\theta$-plane, can be written in the form \eqref{cddg}. In the present work we focus our attention on the class of $S$-matrices \eqref{sdefs} having CDD factors \eqref{cddg} for which the entire part \eqref{phientire} is absent\footnote{A first analysis of models whose $S$-matrix is deformed by a CDD factor consisting only of a generic entire part \eqref{phientire} has been performed in \cite{Hernandez-Chifflet:2019sua}.}, \begin{eqnarray}\label{cddn} \Phi_{\{\alpha\}}(\theta)= \Phi_\text{pole}(\theta)\;, \end{eqnarray} and the product in \eqref{phipole} involves finitely many factors, i.e. $N<\infty$. Note that, unlike \eqref{sdef}, such CDD factors have regular limits at $\theta\to\pm \infty$. Therefore, if the undeformed $S$-matrix $S_0(\theta)$ behaves regularly -- presenting no abnormal growth of the scattering phase -- at large $\theta$, so does the deformed $S$-matrix $S_0(\theta)\Phi(\theta)$. We now raise the following question: how does an $S$-matrix deformation such as the one just described affect the short-distance behavior of the theory? Unfortunately, for the general TTbar deformation \eqref{Aalphas} no closed form of the finite-size energy levels similar to \eqref{burgers2} is available with which one could analyze their dependence on the size $R$ of the system.
However, having an exact expression for the deformed IQFT $S$-matrix, the finite-size ground-state energy $E(R)$ can be obtained by solving the associated Thermodynamic Bethe Ansatz (TBA) equation \cite{Yang:1968rm,Zamolodchikov:1989cf}. In general, the form of the TBA equations depends on the particle spectrum of the theory. Here we consider, for simplicity, the case of a factorizable $S$-matrix involving only one kind of particle, having mass $M=1$. In this case the two-particle $S$-matrix consists of a single amplitude $S(\theta)$, which itself satisfies the equations \eqref{cdddef}. Therefore we can limit our attention to the functions $S(\theta)$ of the form \eqref{phipole}\footnote{One can think of these as CDD deformations of the free $S$-matrix $S(\theta)=\pm 1$.}. There are two substantially different cases, depending on the sign of $S(0) = \sigma = \pm 1$. Following \cite{Zamolodchikov:1989cf}, we refer to these cases as the ``bosonic TBA'' when $\sigma=+1$ and the ``fermionic TBA'' when $\sigma=-1$. Given $S(\theta)$, let $\varphi(\theta)$ be the derivative of the scattering phase, \begin{eqnarray}\label{varphidef} \varphi(\theta) = \frac{1}{i}\,\frac{d}{d\theta} \log S(\theta)\;. \end{eqnarray} Then the TBA equation takes the form of a non-linear integral equation for a single function $\epsilon(\theta)$, the \emph{pseudo-energy}, \begin{eqnarray}\label{tbas} \epsilon(\theta)=R\,\cosh\theta - \int\,\varphi(\theta-\theta')\,L(\theta')\,\frac{d\theta'}{2\pi}\;, \end{eqnarray} where \begin{eqnarray}\label{Ldef} L(\theta) := -\sigma\,\log\left(1-\sigma\,e^{-\epsilon(\theta)}\right)\;. \end{eqnarray} The ground state energy can then be recovered from the pseudo-energy via the following expression \begin{eqnarray}\label{etbas} E(R) = -\,\int_{-\infty}^{\infty} \,\cosh\theta\,L(\theta)\,\frac{d\theta}{2\pi}\;.
\end{eqnarray} In most cases the TBA equations are not amenable to exact analytic solution, but they can be handled by numerical approaches. These can yield important insight into high energy, \emph{viz.} short distance, properties of the deformed theories \eqref{Aalphas}. A numerical solution can be obtained, with practically arbitrary accuracy, by numerical integration of \eqref{tbas}. This approach was employed to obtain $E(R)$ in a number of IQFTs with known $S$-matrices, see e.g. \cite{Zamolodchikov:1989cf,Zamolodchikov:1991pc}. Usually, the numerical solution is obtained by iterations, starting from a seed function, conventionally taken to be $\epsilon(\theta)=R\,\cosh\theta$, and successively substituting the result of the previous iteration into the right-hand side of \eqref{tbas}. We will review this approach in \S \ref{subsec:iterative}. If one considers the $S$-matrix associated with a UV complete local IQFT -- such as a conformal field theory (CFT) perturbed by a relevant operator, the sine-Gordon model, or an integrable sigma-model -- the iterations turn out to converge for all $R>0$, and the resulting ground-state energy $E(R)$ happens to be analytic at all positive real $R$, developing a Casimir singularity at $R=0$. But how will adding a CDD factor to the $S$-matrix affect the TBA solution? This question was addressed in the early 90's by Al. Zamolodchikov, who considered the modification of the trivial fermionic $S$-matrix $S(\theta) = -1$ by the simplest possible rational CDD factor, namely \eqref{phipole} with $N=1$. In the resulting theory, the celebrated ``staircase model'' \cite{Zamolodchikov:1991pc}, the iterative solution of the TBA still converges at all positive $R$, producing a ground-state energy $E(R)$ analytic for $R>0$. He also observed that when adding more general CDD factors the situation changes qualitatively.
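For concreteness, the iterative routine just described can be sketched in a few lines. The following is only a schematic implementation (uniform rapidity grid, rectangle-rule quadrature, illustrative truncation parameters), applied here to the fermionic kernel $\varphi(\theta)=2/\cosh\theta$, the $N=1$ case with $u_1=-\pi/2$, for which the iterations converge:

```python
import numpy as np

def solve_tba_iter(R, phi, sigma=-1, cutoff=12.0, n=801, tol=1e-10, max_iter=5000):
    """Iterative solution of the TBA equation (tbas) on a uniform rapidity grid.
    Schematic sketch: hard rapidity cutoff, rectangle-rule quadrature."""
    th = np.linspace(-cutoff, cutoff, n)
    dth = th[1] - th[0]
    K = phi(th[:, None] - th[None, :])      # kernel matrix phi(theta - theta')
    eps = R * np.cosh(th)                   # conventional seed function
    for _ in range(max_iter):
        L = -sigma * np.log1p(-sigma * np.exp(-eps))   # eq. (Ldef)
        eps_new = R * np.cosh(th) - K @ L * dth / (2.0 * np.pi)
        if np.max(np.abs(eps_new - eps)) < tol:
            eps = eps_new
            break
        eps = eps_new
    L = -sigma * np.log1p(-sigma * np.exp(-eps))
    E = -np.sum(np.cosh(th) * L) * dth / (2.0 * np.pi)  # eq. (etbas)
    return th, eps, E

# fermionic 1CDD kernel at u1 = -pi/2 (theta_0 = 0): phi(theta) = 2/cosh(theta)
th, eps, E = solve_tba_iter(R=1.0, phi=lambda t: 2.0 / np.cosh(t))
```

The returned pseudo-energy is even in $\theta$ and $E<0$, as expected for a ground-state energy measured relative to the bulk term.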
Typically, the convergence of the iterative solution breaks down at $R$ below a certain critical value $R_*$, and the form of the numerical solution at $R>R_*$, where the iterations converge, strongly indicates the existence of a square-root singularity of $E(R)$ at $R_*$ \cite{Zamolodchikov_unpublished}. A similar observation was made in \cite{Mussardo:1999aj}, where a particular CDD deformation of the trivial bosonic $S$-matrix $S(\theta) = 1$ was studied and the numerical solution of the associated TBA equation was found to be consistent with the existence of a singularity at finite $R_*>0$. We wish to stress that the presence of the singularity at finite $R_*$ and, moreover, its square-root character, are features very similar to the ones displayed by $E(R)$ in the TTbar deformed QFTs, as shown in Fig \ref{ERplotTTbar} below. In this work we study a few simple cases of CDD deformed TBA equations, using a refined numerical routine based on the so-called ``pseudo-arc-length continuation'' (PALC) method. This allows one to recover solutions to the TBA equation \eqref{tbas} which are unstable under the standard iterative approach. This method is explained in detail in \S \ref{sec:num_meth}. The object of our attention will be trivial $S$-matrices $S(\theta) = \sigma = \pm 1$ deformed by CDD factors \eqref{phipole} with $N=1,2$. The case $N=1$ with $\sigma=-1$ corresponds to either the sinh-Gordon or the staircase model, depending on the position of the pole. As mentioned just above, these models do not display any abnormal short-distance behavior and were extensively studied in the literature. The bosonic TBA with $N=1$ was considered in \cite{Mussardo:1999aj} and we will comment on it in \S \ref{sec:results}, along with the $N=2$ case. Of these, we mostly address the fermionic cases, although some results for the bosonic TBA are also presented.
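The idea behind the PALC method, whose detailed description is deferred to \S \ref{sec:num_meth}, is simple enough to be illustrated on a toy fold: instead of parameterizing a solution branch by $R$, one parameterizes it by arc length, so that the Newton steps remain well-posed at a turning point. A minimal sketch for a single scalar equation follows; all names, step sizes and the test equation are illustrative, not our actual TBA routine:

```python
import numpy as np

def palc_trace(F, J, start, tangent, ds=0.05, steps=80):
    """Minimal pseudo-arc-length continuation for one equation F(x, lam) = 0.
    J returns (dF/dx, dF/dlam); the branch is advanced by fixed arc-length ds."""
    pts = [np.array(start, float)]
    t = np.array(tangent, float)
    t /= np.linalg.norm(t)
    for _ in range(steps):
        z = pts[-1] + ds * t                 # predictor along the tangent
        for _ in range(50):                  # Newton on the augmented system
            Fx, Fl = J(*z)
            A = np.array([[Fx, Fl], [t[0], t[1]]])
            r = np.array([F(*z), t @ (z - pts[-1]) - ds])
            if np.linalg.norm(r) < 1e-12:
                break
            z = z - np.linalg.solve(A, r)
        t = (z - pts[-1]) / np.linalg.norm(z - pts[-1])  # secant tangent update
        pts.append(z)
    return np.array(pts)

# toy fold at lam = 1: the branches x = +-sqrt(1 - lam) merge, as E(R) does at R_*
F = lambda x, lam: x**2 + lam - 1.0
J = lambda x, lam: (2.0 * x, 1.0)
branch = palc_trace(F, J, start=(1.0, 0.0), tangent=(-1.0, 1.0))
```

Unlike naive continuation in $\lambda$, the routine passes smoothly through the fold at $(x,\lambda)=(0,1)$ and continues onto the second branch, which is precisely what is needed to follow $E(R)$ around the turning point at $R_*$.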
We find that for all allowed values of the parameters $\theta_p\, (p=1,2)$ the fermionic TBA equation \eqref{tbas} for sufficiently large $R$ possesses two real solutions, or ``branches'', which merge at some finite $R=R_*$. For $R<R_*$ these branches are likely to continue as a pair of complex-conjugate solutions. Of these two real solutions at $R>R_*$, one reproduces the iterative solution of the TBA equations \eqref{tbas}. We will call this solution the ``primary branch'', while referring to the other one as the ``secondary branch''. Let us stress here that it is the primary branch which directly corresponds to the deformed theory: $E(R)$ on the primary branch represents the finite-size vacuum energy of the deformed theory (in particular, at $R\to\infty$ the effect of the deformation disappears, as expected); it also gives the specific free energy of the deformed theory at temperature $T=1/R$ (in particular, it is the primary branch solution which correctly sums up the virial expansion associated with the input particle theory). In this sense, one could call the primary branch the ``physical'' one, although we will not use such a term\footnote{ The reason is that this would imply that the secondary branch is ``unphysical'', which we are reluctant to claim. Although the secondary branch definitely does not have a direct interpretation in terms of ``physics'' of the input S-matrix, it might very well have some physical content of its own. In fact, understanding the physical mechanism behind the secondary branch is one of the outstanding problems which remain open both for the generalized TTbar deformations and for TTbar proper. }. The secondary branch always has lower energy $E(R)$ than the primary one, which is qualitatively similar to the behavior observed in the TTbar deformations with negative $\alpha$, see Fig \ref{ERplotTTbar}.
Since the two branches merge at some finite $R=R_*$, this can be regarded as a ``turning point'', where the continuation along the graph of $E(R)$ turns backward into the secondary branch. This is precisely the kind of situation the PALC method is designed to deal with. The secondary branch remains real for all $R>R_*$ and, moreover, develops a linear asymptote $\sim e_{\infty}\, R$ as $R\to \infty$. This, again, is in full qualitative agreement with the TTbar deformations, together with the important fact that the singularity of the pseudo-energy $\epsilon(\theta|R)$, viewed as a function of $R$, occurs at a value $R=R_*$ that is independent of $\theta$. Of the above features, the existence of primary and secondary branches with the turning point at finite $R_*$, independent of $\theta$, is repeated \emph{verbatim} in the bosonic 2CDD model. On the other hand, we still cannot check the large $R$ behavior of the secondary branch with sufficient accuracy, due to some instability in the numerical procedure. We will return to this problem in future work. It is likely that the general situation displayed in the models studied here, i.e. the solution of the TBA equation developing a square-root singularity at finite $R_*$, which signals the presence of a Hagedorn transition, remains qualitatively the same when more CDD poles are added in \eqref{phipole} -- with the possible exception of special domains (hypersurfaces of lower dimension in the parameter space)\footnote{Examples of such cases can be found in \cite{Martins:1992ht,Martins:1992yk,Dorey:2000zb}.}. This of course will have to be carefully verified. We regard the present work as a first step in the program of systematically studying the short-distance behavior of the generalized TTbar deformations \eqref{Aalphas} of IQFTs. The qualitative similarity to the TTbar-deformed QFTs, with negative $\alpha$, suggests that the same mechanism behind the formation of the Hagedorn singularities is at play in all of these models.
Understanding the physics underlying this phenomenon remains the most important open problem in this context, as well as the main motivation for the present work. \section{From TBA to Hagedorn: the TTbar case}\label{sec:TTbar} Henceforth we will assume that the theory under consideration is integrable, with a factorizable $S$-matrix. Let us briefly recall how, in this case, equation \eqref{burgers2} can be derived from the $S$-matrix deformation \eqref{sdef} via the TBA equations. We will present a somewhat simplified version of the much more general arguments of \cite{Cavaglia:2016oda} (for related work see \cite{Dubovsky:2012wk,Caselle:2013dra} and the more recent \cite{LeClair:2021wfd,LeClair:2021opx}). Whereas the analysis in \cite{Cavaglia:2016oda} applies to all the energy eigenvalues of the TTbar deformed theory \eqref{Aalpha}, we limit our considerations to the ground-state energy, which we denote as $E(R)$. The advantage is that the simple arguments presented below apply to the deformation \eqref{sdef} of an essentially generic integrable theory. The only assumptions, made for simplicity, are that the particle scattering theory associated with $\mathcal{A}_0$ involves only one kind of neutral particle, with the factorizable scattering of fermionic type\footnote{Extension to the bosonic case $S(0)=+1$ is trivial. Less straightforward but still possible is the generalization to the cases of a scattering theory involving many species of particles, including the bound states, with different or equal masses. We will elaborate on such cases elsewhere.}, i.e. $S_0(0)=-1$. The goal is to emphasize some important properties of the solution which, as we will see, are shared by the TBA solutions of more general CDD deformations. The TBA equation \eqref{tbas} associated with the deformed $S$-matrix \eqref{sdef} has the following kernel \begin{eqnarray}\label{varphialpha} \varphi_\alpha (\theta-\theta') = \varphi_0 (\theta-\theta') - \alpha\,\cosh(\theta-\theta')\;.
\end{eqnarray} Recall that the ground state energy $E_\alpha(R)$ is given by \eqref{etbas}, which in our case reads \begin{eqnarray} E_\alpha (R)= - \,\int_{-\infty}^{\infty} \,\cosh\theta\,\,L_\alpha(\theta|R)\,\frac{d\theta}{2\pi}\;, \label{eq:enalpha} \end{eqnarray} where $L_\alpha (\theta|R):=\log\left(1+e^{-\epsilon_\alpha (\theta|R)}\right)$ and the pseudo-energy satisfies the deformed TBA equation \eqref{tbas}, \begin{eqnarray} \epsilon_\alpha(\theta|R) = R\, \cosh\theta - \int\, \varphi_\alpha(\theta-\theta')\,L_\alpha(\theta'|R)\,\frac{d\theta'}{2\pi}\;. \end{eqnarray} Since the pseudo-energy is even, as is easily shown, we can separate the dependence on $\theta$ and $\theta'$ in the rightmost term in the kernel \eqref{varphialpha}, so that the TBA equation can be written as follows \begin{eqnarray} \epsilon_\alpha(\theta|R) = \left(R-\alpha\,E_\alpha(R)\right)\,\cosh\theta - \int\,\varphi_0(\theta-\theta')\,L_\alpha(\theta'|R)\,\frac{d\theta'}{2\pi}\;, \label{ttbartba2} \end{eqnarray} where we used the definition \eqref{eq:enalpha}. For reasons that will become clear shortly we have made explicit the fact that $\epsilon(\theta|R)$ and $L(\theta|R)$ are functions of $R$ as well as of the rapidity $\theta$. This last form \eqref{ttbartba2} shows that $\epsilon_\alpha(\theta|R)$ satisfies the same TBA equation as $\epsilon_0(\theta|R)$, only with $R$ replaced by $R-\alpha E_\alpha(R)$. It then follows that \begin{eqnarray} \epsilon_\alpha(\theta|R) = \epsilon_0(\theta|R-\alpha E_\alpha(R)) \end{eqnarray} which immediately implies the equation \eqref{burgers2} for the deformed energy. It is also worth recalling here how the singularity of $E_\alpha(R)$, signifying the Hagedorn density of states, follows from \eqref{burgers2}. This takes a particularly simple form in terms of the function $R_\alpha(E)$, inverse to the function $E_\alpha(R)$, where $\alpha$ is regarded as a fixed parameter, \begin{eqnarray}\label{burgers3} R_\alpha(E)=R_0(E)+\alpha\,E\;.
\end{eqnarray} This expression shows that the graph of the deformed function $E_\alpha(R)$ differs from the graph of $E_0(R)$ just by an affine transformation $(R,E) \to (R+\alpha E, E)$ of the $(R,E)$ plane. If we assume, as we do, that the undeformed theory $\mathcal{A}_0$ is a conventional QFT, defined \`{a} la Wilson as the RG flow from some UV fixed point down to an IR one (see \cite{Wilson:1973jj}), then the graph of $E_0(R)$ looks qualitatively as shown in Fig \ref{EvacQFT}. \begin{figure}[ht] \centering \includegraphics{Figures/e0.pdf} \caption{\small{ {Finite-size ground state energy $E_0(R)$ of a conventional Wilsonian relativistic QFT. Its $R\to 0$ behavior $-\pi c/6R$ is controlled by the UV fixed point. At large $R$, $E_0(R)$ shows the linear behavior $\simeq \varepsilon_0 R$, with the slope $\varepsilon_0$ representing the bulk vacuum energy density. We have to stress that the TBA equations actually compute the difference $E_\text{vac}(R)-\varepsilon_{0} R$, and in our subsequent analysis $E(R)$ stands for this difference. (That is why in all plots below the $R\to\infty$ slope of the primary branch is always set to zero.)}}}\label{EvacQFT} \end{figure} At large $R$ the function $E_0(R)$ approaches the linear asymptote $\varepsilon_0 R$, where $\varepsilon_0$ is the vacuum energy density of the infinite system, with the rate of the approach controlled by the IR fixed point, which, typically, is a non-critical one. On the other hand, at $R\to 0$ it diverges as the Casimir energy determined by the UV fixed point, $E_0(R) \to -\pi c/6R$, where $c$ is the Virasoro central charge of the UV fixed point CFT. Then, according to \eqref{burgers3}, the plot of $E_\alpha(R)$ will look like either of the panels \emph{a}) or \emph{b}) in Fig \ref{ERplotTTbar}, depending on the sign of $\alpha$. In what follows we will concentrate our attention on the case of negative $\alpha$, shown in panel \emph{a}).
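As a toy illustration of the affine relation \eqref{burgers3}, one can take for $E_0(R)$ the pure Casimir form $-\pi c/6R$ (dropping the bulk and IR terms). Then \eqref{burgers2} reduces to a quadratic equation, and the two branches, together with the square-root merging point $R_* = \sqrt{2\pi c|\alpha|/3}$ at negative $\alpha$, are completely explicit. A sketch:

```python
import math

def deformed_cft_branches(R, c, alpha):
    """Both branches of E satisfying E = E0(R - alpha*E) with E0(R) = -pi*c/(6R),
    i.e. the quadratic alpha*E^2 - R*E - pi*c/6 = 0 (toy CFT version of (burgers2))."""
    disc = R * R + 2.0 * math.pi * alpha * c / 3.0
    if disc < 0.0:
        raise ValueError("R below the turning point: the two branches are complex")
    primary = (R - math.sqrt(disc)) / (2.0 * alpha)    # -> -pi*c/(6R) as alpha -> 0
    secondary = (R + math.sqrt(disc)) / (2.0 * alpha)
    return primary, secondary

c, alpha = 1.0, -0.1
R_star = math.sqrt(2.0 * math.pi * c * abs(alpha) / 3.0)  # square-root branch point
E1, E2 = deformed_cft_branches(2.0 * R_star, c, alpha)    # E2 < E1 < 0
```

For $\alpha<0$ the discriminant vanishes at $R=R_*$, reproducing the turning point, and the secondary branch always lies below the primary one.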
Note that the curve $E_\alpha(R)$ has two branches, each of which takes real values for $R$ above a certain critical value $R_*$. It is the upper ``primary'' branch that corresponds to the ground state energy of the TTbar-deformed theory \eqref{Aalpha}. \begin{figure}[ht] \centering \begin{subfigure}{4cm} \includegraphics{Figures/e-a.pdf} \caption{$\alpha<0$} \end{subfigure} \hspace{2cm} \begin{subfigure}{4cm} \includegraphics{Figures/e+a.pdf} \caption{$\alpha>0$} \end{subfigure} \caption{\small{Finite-size ground state energy of the TTbar deformed theory. (a) $\alpha <0$. The graph $E_\alpha(R)$ shows the ``turning point" at some finite $R_*$, which signals the Hagedorn transition. (b) $\alpha >0$. $E_\alpha(R)$ shows no singularity at $R=0$.}}\label{ERplotTTbar} \end{figure} The two branches merge at $R=R_*$, where the function $E_\alpha (R)$ develops a square-root branch point, i.e. the derivative $dE_\alpha(R)/dR$ diverges as $(R-R_*)^{-1/2}$. At $R<R_*$, the analytic continuation of $E_\alpha(R)$ takes complex values and the two branches are complex conjugate. It is the singularity at $R_*$ that signals the Hagedorn phenomenon in the deformed theory, which can be inferred as follows. When the Euclidean theory is considered in the geometry of a very long cylinder of circumference $R$, as shown in Fig \ref{cylinder}, its partition function $Z$ is saturated by the finite-size ground state \begin{figure}[ht] \centering \includegraphics{Figures/cylinder.pdf} \caption{\small{{The Euclidean space-time cylinder representing the finite-size geometry in our analysis. The coordinate $x$ is compactified on a circle of circumference $R$, while the length $L$ of the cylinder is assumed to be asymptotically large. In the picture where $y$ is regarded as the Euclidean ``time" the partition function \eqref{eq:logZ} is dominated by the finite-size ground state contribution.
In the complementary picture, where $y$ is interpreted as spatial coordinate while $x$ plays the role of the Matsubara ``time", the same partition function is given by the thermal trace \eqref{eq:Ztrace}.}}}\label{cylinder} \end{figure} \begin{eqnarray}\label{eq:logZ} -\log Z \ \simeq L\,E_\alpha(R)\;, \end{eqnarray} where $L\to \infty$ is the length of the cylinder. This corresponds to the picture in which the coordinate $y$ along the cylinder is taken as the Euclidean time. Alternatively, if one uses the picture where $x$ plays the role of Matsubara time, the same partition function is represented as the trace \begin{eqnarray}\label{eq:Ztrace} Z = \text{tr}\left(e^{-R {\hat H}_x}\right) = \int_0^\infty \,d{\cal E}\,\,\Gamma({\cal E})\,e^{-R{\cal E}} = e^{-RF(R)} \end{eqnarray} where $d{\cal E} \Gamma({\cal E})\sim d{\cal E}\,e^{{\cal S}({\cal E})}$ denotes the density of states, i.e. the number of states in the energy interval $d{\cal E}$. While in a local QFT whose high-energy limit is governed by the UV fixed point the entropy ${\cal S}$ grows as ${\cal S}({\cal E}) \simeq \sqrt{2\pi c/3}\ \sqrt{L {\cal E}}$ as ${\cal E}\to \infty$ -- this is known as the Cardy formula \cite{Cardy:1986ie} -- the singularity of $F(R)$ at finite positive $R=R_*$ is formed when the entropy ${\cal S}({\cal E})$ grows much faster, \begin{eqnarray}\label{hagedorn1} {\cal S}({\cal E}) \simeq R_* \,{\cal E}\,, \end{eqnarray} so that the partition sum diverges at $R<R_*$. We will discuss the density of states in the TTbar deformed theories in more detail in \S \ref{sec:discussion}. In the above discussion we have denoted by $R_*$ the position of the singularity of $E_\alpha(R)$ as a function of $R$. It is important to observe that the solution $\epsilon_\alpha (\theta|R)$ displays a singularity at the same position $R=R_*$, independent of the value of the rapidity $\theta$.
In other words, in the two-dimensional space spanned by the variables $(\theta, R)$ the singularity of $\epsilon_\alpha (\theta|R)$ occurs along the line $(\theta, R=R_*)$. We will see that this feature of the singularity associated with the Hagedorn transition will be reproduced in the generalized TTbar flows studied below. As already mentioned, the enhancement of the density of states in the deformed theory is to be expected. The scattering phase in \eqref{sdef} grows fast with the center-of-mass energy, leading to the increase of the density of two-particle states, implying a yet greater increase of the density of all multi-particle states. The calculation presented above demonstrates that the resulting entropy displays the Hagedorn behavior \eqref{hagedorn1}. It is then tempting to assume that the formation of the Hagedorn density \eqref{hagedorn1} is directly related to the fast growth of the scattering phase. In the next section we will show that the Hagedorn singularity develops just as well in models whose associated CDD factor has a finite behavior at high energies, as in \eqref{phipole} with finite $N$, which indicates that the physical origin of the Hagedorn transition in the deformed theories is substantially more intricate. \section{The models}\label{sec:models} Here we study the CDD deformations of the trivial (fermionic or bosonic) $S$-matrix by the pole factor \eqref{phipole}, which we write as \begin{eqnarray}\label{sNpole} S(\theta) = \sigma\,\prod_{p=1}^N \,\frac{i\sin u_p+\sinh\theta}{i\sin u_p-\sinh\theta} \end{eqnarray} where, as before, $\sigma=-1$ (resp. $\sigma=+1$) corresponds to the fermionic (resp. bosonic) case. The parameters $u_p$ may be taken to be complex and, in view of the obvious periodicity of $S(\theta)$, we may limit our attention to the strip $-\pi < \Re(u_p) < \pi$. The standard analytic requirements for the physical $S$-matrix, however, impose restrictions on the possible locations of the poles $\theta_p = iu_p$.
Taking these restrictions into consideration, the parameters $u_p$ are allowed to be either real or complex with negative real parts. The poles $\theta_p = i u_p$ with real positive $u_p$ signal the existence of bound states -- new stable particles of mass $2M\,\cos(u_p/2)$. Since the presence of such particles violates our working assumption that the mass spectrum of the theory only involves a single kind of stable particle with mass $M$, henceforth we will assume that all parameters $u_p$ in \eqref{sNpole} possess a negative real part\footnote{This leaves out the possibility of having a pole at $\theta=2\pi i/3$ which may be identified with the same particle of mass $M=1$. Such an interpretation requires that $S(\theta)$ satisfies an additional bootstrap condition. This possibility, known as the ``$\varphi^3$ property", cannot be realized in the 2CDD model considered in this work, but may be relevant when $N$ is greater than 2. We hope to address this type of model elsewhere. }: \begin{eqnarray} -\pi \leq \Re(u_p)\leq 0\;,\qquad \forall p=1,\ldots ,N\;. \label{eq:up_analiticity} \end{eqnarray} This leaves us with poles $\theta_p = i u_p$ lying in the unphysical region, i.e. the region of the complex center-of-mass energy $s$-plane reached by analytically continuing the scattering amplitude through the two-particle branch cut. When $u_p$ has a nonzero imaginary part, such poles are associated with unstable particles, having complex masses $M_p = 2M\,\cos(u_p/2)$, with the real and imaginary parts identified as usual with the mean center-of-mass energy and the width of the resonances. The poles with real negative $u_p$ do not have a clear particle interpretation, but the number of such poles determines the increment of the scattering phase as a function of $\theta$ at low energies; these poles are often referred to as virtual states (see e.g. \cite{perelomov1998quantum}).
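These statements are easy to check on a single factor of \eqref{sNpole}: since $\sinh(iu_p)=i\sin u_p$, the factor has a pole at $\theta=iu_p$; a conjugate pair $u_p$, $u_p^*$ yields a unimodular amplitude at real rapidity, with a complex resonance ``mass'' $2M\cos(u_p/2)$. A short numerical check, with a purely illustrative parameter value:

```python
import cmath

def S_factor(theta, u):
    """One factor of (sNpole): (i sin u + sinh th) / (i sin u - sinh th)."""
    return (1j * cmath.sin(u) + cmath.sinh(theta)) / (1j * cmath.sin(u) - cmath.sinh(theta))

u = -0.5 - 0.3j                      # resonance-type parameter with Re(u) < 0
# the denominator vanishes at theta = i*u, i.e. the factor has a pole there
assert abs(1j * cmath.sin(u) - cmath.sinh(1j * u)) < 1e-12
# a conjugate pair (u, u*) gives |S| = 1 at real rapidity...
S = S_factor(0.7, u) * S_factor(0.7, u.conjugate())
assert abs(abs(S) - 1.0) < 1e-12
# ...and a complex resonance mass 2M cos(u/2), in units M = 1
mass = 2.0 * cmath.cos(u / 2.0)
```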
A final requirement is that of unitarity of the physical $S$-matrix, which demands that $S(-\theta) = S^* (\theta)$ at all real $\theta$, or, equivalently, that $S(\theta)$ takes real values at pure imaginary $\theta$. It follows that any non-real parameter $u_p$ in \eqref{sNpole} either has fixed real part $\Re(u_p)=-\pi/2$ or appears together with its conjugate $u_p^{\ast}$. We can then refine the range (\ref{eq:up_analiticity}) to the following three cases \begin{eqnarray} \textrm{a})&\quad&\Im(u_p) = 0\quad\textrm{and}\quad \Re(u_p)\in\left(-\pi,0\right)\;, \nonumber\\ \textrm{b})&\quad&\Im(u_p) \neq 0 \quad\textrm{and}\quad\Re(u_p)=-\frac{\pi}{2}\;, \label{eq:up_analiticity_refined}\\ \textrm{c})&\quad&\Im(u_p) >0 \quad\textrm{and}\quad \Re(u_p)\in\left(-\pi,-\frac{\pi}{2}\right)\cup\left(-\frac{\pi}{2},0\right] \nonumber\\ \phantom{\textrm{c})}&\quad&\phantom{\Im(u_p) >0}\quad \textrm{and}\quad \exists p'\leq N\quad \textrm{s.t.}\quad u_{p'} = u_p^{\ast}\;. \nonumber \end{eqnarray} Thus, each subfamily $(\sigma,N)$ of \eqref{sNpole} contains a number of, in principle, different models, determined by a given combination of the ranges \eqref{eq:up_analiticity_refined} for each of the parameters $\lbrace u_p\rbrace_{p=1}^N$. Some simple combinatorics\footnote{ Given the number $N$ of poles one needs to partition it into three non-negative integers $n_a$, $n_b$ and $n_c$ with the constraint that $n_a+n_b+2n_c=N$. Once a value of $n_c = 0,1,\ldots, \lfloor N/2\rfloor$ is chosen, one is obviously left with $N-2n_c+1$ non-equivalent arrangements of poles between the cases a) and b). Thus the number of different models is given by $\sum_{n_c=0}^{\lfloor N/2\rfloor}(N-2n_c+1)$, which gives the result \eqref{eq:number_of_models}. 
} tells us that this number is \begin{eqnarray} \frac{1}{4}N^2 + N +\frac{7+(-1)^N}{8} = \left\lbrace \begin{array}{l l l} \left(\frac{N}{2}+1\right)^2 & & N\in2\mathbb Z_{>0} \\ \\ \frac{N+1}{2}\frac{N+3}{2} & & N\in2\mathbb Z_{>0}-1 \end{array} \right. \;. \label{eq:number_of_models} \end{eqnarray} Since for any model determined by \eqref{sNpole}, with parameters in the ranges \eqref{eq:up_analiticity}, the mass spectrum contains a single stable excitation, the resulting single-particle TBA equation takes the simple form (\ref{tbas}), with the kernel $\varphi(\theta)$ being the derivative of the scattering phase which, in the case of \eqref{sNpole}, explicitly reads \begin{eqnarray} \varphi_{N\textrm{CDD}}(\theta) = \frac{1}{i}\frac{\partial}{\partial \theta} \log S_{N\textrm{CDD}}(\theta) = - \sum_{p=1}^N \frac{2\sin u_p\,\cosh\theta}{\sin^2 u_p +\sinh^2\theta} \;. \label{eq:TBA_kernel} \end{eqnarray} An equivalent, sometimes more useful, expression of this kernel is its partial fractions expansion \begin{eqnarray} \label{eq:kernelNCDD} \varphi_{N\textrm{CDD}}(\theta) = \sum_{p=1}^N\left[\frac{1}{\cosh\left(\theta+i(u_p+\frac{\pi}{2})\right)} + \frac{1}{\cosh\left(\theta-i(u_p+\frac{\pi}{2})\right)}\right]\;. \label{eq:TBA_kernel_v2} \end{eqnarray} In what follows, we are going to concentrate our attention on two particular subfamilies: the ``1CDD models'' where $N=1$ and the ``2CDD models'' with $N=2$. \paragraph{The 1CDD models} When $N=1$ the $S$-matrix \eqref{sNpole} consists of a single factor \begin{eqnarray} S_{1\textrm{CDD}}(\theta) = \sigma\,\frac{i\sin u_1+\sinh\theta}{i\sin u_1-\sinh\theta}\,. 
\label{1cdd} \end{eqnarray} According to the breakdown of cases \eqref{eq:up_analiticity_refined}, for each choice of the TBA statistics we only have two possible models, corresponding to the following ranges of the parameter $u_1$: \begin{enumerate}[label=(\alph*)] \item $u_1\in\mathbb{R}$ and $-\pi<u_1<0$, \item $u_1 = -\pi/2 + i \theta_0$ and $\theta_0 \in\mathbb R$. \end{enumerate} Considering first the fermionic case $\sigma=-1$, one recognizes in \eqref{1cdd}, for the case (a), the well-known $S$-matrix of the sinh-Gordon model \begin{eqnarray} S_{\textrm{shG}}(\theta) = -\frac{i\sin u_1+\sinh\theta}{i\sin u_1-\sinh\theta}\,,\qquad -\pi<u_1<0\;. \label{eq:shG_Smat} \end{eqnarray} On the other hand, the case (b) corresponds to the $S$-matrix of the ``staircase model'', introduced in \cite{Zamolodchikov:1991pc} \begin{eqnarray} S_{\textrm{stair}}(\theta) = \frac{\sinh\theta-i\cosh\theta_0}{\sinh\theta+i\cosh\theta_0}\;,\qquad \theta_0 \in\mathbb R\;. \label{eq:stair_Smat} \end{eqnarray} In both cases (a) and (b) of the fermionic 1CDD model, the iterative solution to the TBA equation converges at all positive values of $R$, producing a function $E(R)$ analytic in the half-line $R>0$ and displaying a Casimir-like singularity at $R=0$, in full agreement with the interpretation of $E(R)$ as the ground state energy of a UV complete local QFT. As for the two bosonic 1CDD models, the solution of the TBA equation has a considerably more intricate behavior. The case (a) of $u_1$ real was first addressed in \cite{Mussardo:1999aj}, where it was observed that the iterative solution of the TBA equation only converges for sufficiently large radius $R>R_*>0$. The authors also noticed that the function $E(R)$ appears to develop some sort of singularity at $R=R_*$. Below in \S \ref{sec:results} we will show that the solution to the TBA equation, and, consequently, the ground state energy $E(R)$, possesses, as a function of $R$, two branches.
These merge at $R=R_*$, meaning that $R_*$ is a square-root branch point. We also show that this behavior extends to the case (b) of complex parameter $u_1=-\pi/2+i\theta_0$. \paragraph{The 2CDD model} In the $N=2$ subfamily, a pair of CDD factors is present in \eqref{sNpole}: \begin{eqnarray} S_{\textrm{2CDD}}(\theta) = \sigma\,\frac{i\sin u_1 + \sinh\theta}{i\sin u_1 - \sinh\theta}\frac{i\sin u_2 + \sinh\theta}{i\sin u_2 - \sinh\theta}\;. \label{eq:2cdd_general} \end{eqnarray} Following the breakdown \eqref{eq:up_analiticity_refined}, we see that there are $4$ possibly distinct models, corresponding to the following ranges of the parameters $u_1$ and $u_2$ \begin{enumerate}[label=(\alph*)] \item $u_1\in\mathbb{R}$ and $-\pi<u_1<0$,\\ $u_2\in\mathbb{R}$ and $-\pi<u_2<0$, \item $\theta_0 \in\mathbb R$ and $u_1 = -\pi/2 + i \theta_0$, \\$u_2 \in\mathbb R$ and $-\pi<u_2<0$, \item[(b')] $u_1 \in\mathbb R$ and $-\pi<u_1<0$, \\$\theta_0 \in\mathbb R$ and $u_2 = -\pi/2 + i \theta_0$, \item $\theta_0 \in\mathbb R$ and $u_1 = -\pi/2 + i \theta_0$, \\$\theta_0' \in\mathbb R$ and $u_2 = -\pi/2 + i \theta_0'$, \item $\theta_0 \in \mathbb R$, $\gamma \in (-\pi/2,\pi/2)$, $u_1 = \gamma - \pi/2 + i \theta_0$ and $u_2 = u_1^{\ast}$. \end{enumerate} The model (a) can be considered as a special instance of the more general case (d). On the other hand the models (c) and (b) -- equivalent to (b') -- are genuinely distinct. All the models above display, both for the bosonic and fermionic statistics, the same type of behavior observed in the bosonic 1CDD models mentioned above: the iterative procedure for solving the TBA equation \eqref{tbas} only converges for $R$ larger than some positive value $R_\ast>0$ and the ground state energy $E(R)$ apparently develops a singularity at $R=R_\ast$. While we are going to present some data for all the various 2CDD cases, we devoted most of our attention to the case (d), which we will call, with a slight abuse of terminology, the ``2CDD model''.
Its $S$-matrix and TBA kernel explicitly read as follows \begin{eqnarray} S_{\textrm{2CDD}}(\theta) = \sigma\, \frac{\sinh\theta - i\cosh(\theta_0+i\gamma)}{\sinh\theta + i\cosh(\theta_0+i\gamma)}\frac{\sinh\theta - i\cosh(\theta_0-i\gamma)}{\sinh\theta + i\cosh(\theta_0-i\gamma)}\;. \label{eq:2cdd_particular} \end{eqnarray} \begin{eqnarray} \varphi_{\textrm{2CDD}}(\theta) = \sum_{\eta,\eta'=\pm}\frac{1}{\cosh(\theta+\eta\theta_0+i\eta'\gamma)} \;. \label{eq:2cdd_kernel} \end{eqnarray} \subsection{Iterative solution}\label{subsec:iterative} The chances that a non-linear integral equation of the form \eqref{tbas} is amenable to an explicit analytic solution are considerably slim. For this reason, the main approach to the investigation of the TBA equations is numerical\footnote{In some limiting cases, it is possible to derive exact expressions, e.g. for the ground-state energy in the conformal limit, via the so-called ``dilogarithm trick'', as explained nicely in \cite{Fendley:1993jh}.}. In most situations, a simple iterative procedure of the following type \begin{eqnarray} \epsilon_n(\theta) = R \cosh\theta + \sigma \intop \varphi(\theta - \theta') \log\left[1-\sigma e^{-\epsilon_{n-1}(\theta')}\right]\,\frac{d\theta'}{2\pi}\;, \label{eq:iterative_routine} \end{eqnarray} appropriately discretized, converges to the actual solution \begin{eqnarray} \lim_{n\rightarrow\infty} \epsilon_n(\theta) = \epsilon(\theta)\;, \label{eq:limit_solution_iteration} \end{eqnarray} when the seed function $\epsilon_0(\theta)$ is chosen as the driving term\footnote{In the case in which the iterative procedure does converge, there is actually a vast freedom in the choice of the seed function. However the standard choice indicated in the main text is the most natural one.} \begin{eqnarray} \epsilon_0(\theta) = R\cosh\theta\;.
\end{eqnarray} The existence and uniqueness of the limit \eqref{eq:limit_solution_iteration} have been proven rigorously in \cite{Fring:1999mn} for the fermionic single-particle\footnote{See also \cite{Hilfiker:2017jqg} for an extension to fermionic multi-particle TBA equations.} TBA equation \eqref{tbas} with a kernel satisfying the requirement \begin{eqnarray} \left\vert\left\vert\varphi\right\vert\right\vert_1 := \intop\, \left\vert\varphi(\theta)\right\vert \frac{d\theta}{2\pi} \leq 1\;. \end{eqnarray} The fermionic 1CDD models do satisfy this condition and, as such, the iteration procedure is guaranteed to converge nicely in the whole range $R\in\mathbb R_{>0}$, a fact which is easily verified numerically. All the other models we considered above, on the other hand, violate one or more of the hypotheses of the existence and uniqueness theorem in \cite{Fring:1999mn} -- being either of bosonic statistics, or having a kernel with $L^1$ measure $\vert\vert\varphi\vert\vert_1=2$, or both -- and are not guaranteed to possess a convergent iterative solution. Notice that the $L^1$ measure of the TBA kernel \eqref{eq:TBA_kernel_v2} counts the number of CDD factors \begin{eqnarray} \left\vert\left\vert \varphi_{N\textrm{CDD}} \right\vert\right\vert_1 = N\;, \label{eq:L1_kernel} \end{eqnarray} meaning that, in the class of models described by the $S$-matrix \eqref{sNpole}, only the subfamily with $(\sigma,N) = (-1,1)$ is guaranteed to have a convergent iterative solution. \begin{figure}[h!] \begin{center} \includegraphics{Figures/2CDD_gsE_iteration_model_comparison_v2.pdf} \end{center} \caption{Ground-state energies for the various models discussed above, along with that of the $T\bar{T}$-deformed free fermion (black dots). The empty (resp. filled) markers correspond to models with bosonic (resp. fermionic) statistics.
The fermionic sinh-Gordon and staircase models can be solved iteratively all the way to the $R\to 0$ limit, while the rest fail to converge below a certain model-specific scale $R_*$. The parameters of the models were chosen so as to allow a comfortable visual comparison between the curves and are the same for both bosonic and fermionic versions of the same model. Insets: inverse square of the (numerical) derivative. As shown by the fits (dotted lines), the fermionic sinh-Gordon and staircase models show the conventional UV behavior $\propto R^4$, while the other models develop a $\propto R$ behavior reminiscent of the square-root branching singularity of the ground-state energy. } \label{fig:2CCDmodelscomp} \end{figure} We investigated numerically the 1CDD models (a) and (b) and the 2CDD models (b) to (d)\footnote{Remember that the 2CDD model (a) is really a sub-case of model (d).}, for both the bosonic and fermionic statistics, using the iterative procedure \eqref{eq:iterative_routine}. As already mentioned above, we observed that only for the fermionic 1CDD models does this procedure converge for all positive values of the radius $R$. In every other case, there exists a positive ``critical radius'' $R_{\ast}>0$ such that for $R\leq R_{\ast}$ the iterative routine stops converging. As $R$ approaches $R_{\ast}$ from larger values, we noticed that the rate of convergence of the iterative numerical routine slows down dramatically, a telltale sign of the existence of some kind of singularity nearby\footnote{This same ``critical slowing down'' of the numerical iterative procedure is observed as $R\rightarrow 0$ in any TBA system with iterative solution converging in $R\in\mathbb R_{>0}$. In these cases it reflects the existence of a Casimir-like singularity of the ground-state energy at $R=0$.}.
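As an illustration, the discretized routine can be sketched in a few lines of Python. This is a minimal sketch, not the code used for the figures: we assume unit mass, the standard normalization $E(R)=-\frac{1}{2\pi}\int \cosh\theta\,L(\theta)\,d\theta$ of the ground-state energy, and the fermionic sinh-Gordon kernel $\varphi(\theta)=-2\sin u_1\cosh\theta/(\sin^2 u_1+\sinh^2\theta)$; the grid size, cutoff and tolerance are illustrative choices.

```python
import numpy as np

def tba_fermionic_shg(R, u1=-np.pi / 2, cutoff=12.0, n=801, tol=1e-12, max_iter=2000):
    """Iterative solution of the fermionic 1CDD (sinh-Gordon) TBA equation
    eps(th) = R cosh(th) - int phi(th-th') log(1 + e^{-eps(th')}) dth'/(2 pi)
    on a uniform grid truncated to |th| <= cutoff."""
    th = np.linspace(-cutoff, cutoff, n)
    dth = th[1] - th[0]
    d = th[:, None] - th[None, :]
    # 1CDD kernel: phi(th) = -2 sin(u1) cosh(th) / (sin(u1)^2 + sinh(th)^2) > 0
    phi = -2.0 * np.sin(u1) * np.cosh(d) / (np.sin(u1) ** 2 + np.sinh(d) ** 2)
    eps = R * np.cosh(th)                       # seed: the driving term
    for _ in range(max_iter):
        L = np.log1p(np.exp(-eps))              # L(th) = log(1 + e^{-eps(th)})
        eps_new = R * np.cosh(th) - (phi @ L) * dth / (2.0 * np.pi)
        if np.max(np.abs(eps_new - eps)) < tol:
            eps = eps_new
            break
        eps = eps_new
    L = np.log1p(np.exp(-eps))
    # ground-state energy, E(R) = -(1/2 pi) int cosh(th) L(th) dth
    E = -np.sum(np.cosh(th) * L) * dth / (2.0 * np.pi)
    return E, th, eps
```

For the fermionic 1CDD case the loop converges for every $R>0$; flipping the sign in front of the convolution, as dictated by \eqref{eq:iterative_routine} with $\sigma=+1$, gives the bosonic routine, which indeed stops converging below a model-dependent $R_*$.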
In Figure \ref{fig:2CCDmodelscomp} we collected the plots of the ground-state energy $E(R)$ for one representative point in the parameter space for each of the models we mentioned above along with one for the $T\bar{T}$-deformed free fermion. The shape of the curves suggests that all the cases, apart from the fermionic 1CDD models, behave qualitatively in the same way as the $T\bar{T}$-deformed free fermion, that is to say they develop a square-root type singularity at some critical value of the radius $R=R_{\ast}>0$: \begin{eqnarray} E(R) \underset{R\rightarrow R_{\ast}^+}{\sim} c_0 + c_{1/2}\sqrt{R-R_{\ast}}+\mathcal{O}(R-R_{\ast})\;. \label{eq:supposed_square_root_behaviour} \end{eqnarray} In order to further confirm this suspicion, we plotted the derivative of the ground-state energy to the power $-2$ in the vicinity of the supposed critical point. As we can see in the insets of Figure~\ref{fig:2CCDmodelscomp}, the numerical results are in good accord with the hypothesis that $R_{\ast}$ is a singular point of square root type, as expressed by \eqref{eq:supposed_square_root_behaviour}. \subsection{Two branches}\label{subsec:two_branches} Having our expectation confirmed leaves us with the question of how to deal numerically with such a square root critical point. In particular, the behavior \eqref{eq:supposed_square_root_behaviour} implies the existence of a secondary branch of the ground-state energy, behaving as \begin{eqnarray} \tilde{E}(R) \underset{R\rightarrow R_{\ast}^+}{\sim} c_0 - c_{1/2}\sqrt{R-R_{\ast}}+\mathcal{O}(R-R_{\ast})\;, \label{eq:supposed_square_root_behaviour_second_branch} \end{eqnarray} in the vicinity of the critical point. Here and below we are going to use the notation $\tilde{E}(R)$ for the secondary branch. We would like to be able to access this secondary branch numerically and to explore its properties, e.g. its large $R$ behavior and the possible existence of further critical points.
The iterative routine \eqref{eq:iterative_routine} is ill-suited for this job, and we need to employ a more refined method: the PALC mentioned in the introduction and described in \S \ref{sec:num_meth}. Deferring a more thorough analysis of the properties of $E(R)$ to \S \ref{sec:results}, let us present here its main qualitative features, concentrating on a single point in the parameter space of the fermionic 2CDD model (d) as a representative case. \begin{figure}[t!] \begin{center} \includegraphics{Figures/2branches.pdf} \end{center} \caption{The ground-state energy $E(R)$ for the model with $S$-matrix \eqref{eq:2cdd_particular} with $\theta_0 = 1/2$ and $\gamma = 3\pi/20$, obtained through the PALC routine described in \S \ref{sec:num_meth}. The numerical points are accompanied by three lines, approximating $E(R)$ for large $R$ on both branches and for $R\gtrsim R_{\ast}$.} \label{fig:both_branches} \end{figure} More specifically, let us set $\theta_0 = 1/2$ and $\gamma = 3\pi/20$ and compute numerically the ground-state energy of the model defined by the $S$-matrix \eqref{eq:2cdd_particular}. The result is displayed in Figure \ref{fig:both_branches}. We see that the function $E(R)$ does indeed possess two branches with distinctly different IR behavior. The primary branch is characterized by the universal IR behavior \begin{eqnarray} E(R)\underset{R\rightarrow\infty}{\sim} - \frac{1}{\pi}\,K_1(R) + \mathcal{O}\left(e^{-2R}\right)\;, \label{eq:E_asymp_primary} \end{eqnarray} where $K_1$ stands for the modified Bessel function, while the secondary branch approaches a linear behavior at large $R$ \begin{eqnarray} \tilde{E}(R)\underset{R\rightarrow\infty}{\sim} - \varepsilon_{-} R \;, \label{eq:E_asymp_secondary} \end{eqnarray} with a rate of approach likely to be some negative power of $R$.
For the specific case depicted in Figure \ref{fig:both_branches} the coefficient of the linear term is found to be \begin{eqnarray} \varepsilon_{-}\left(\theta_0 = \frac{1}{2},\gamma = \frac{3\pi}{20}\right) = -2.87452\ldots\;, \end{eqnarray} while the constant term vanishes within the precision we used for our numerical routines. We will see in \S \ref{sec:results} that this is the asymptotic behavior predicted by analytical considerations. In the zoomed box in Figure \ref{fig:both_branches} we also plotted a fit of the function $E(R)$ in the vicinity of the critical point $R_{\ast}$. As expected, the behavior in this region is best described by the square-root function \eqref{eq:supposed_square_root_behaviour} (and \eqref{eq:supposed_square_root_behaviour_second_branch} for the secondary branch), with the coefficients taking the following values \begin{eqnarray} c_0\left(\theta_0 = \frac{1}{2},\gamma = \frac{3\pi}{20}\right) &=& -1.11767\ldots\;, \nonumber \\ c_{1/2}\left(\theta_0 = \frac{1}{2},\gamma = \frac{3\pi}{20}\right) &=& 2.03547\ldots\;, \\ R_{\ast}\left(\theta_0 = \frac{1}{2},\gamma = \frac{3\pi}{20}\right) &=& 0.61478849\ldots\;.\nonumber \end{eqnarray} Another notable fact is that we see no trace of additional singular points: the PALC method can, apparently, reach arbitrarily large values of $R$ on the secondary branch and the resulting ground-state energy quickly approaches the expected asymptotic linear behavior. We note again that the behavior of $E(R)$ depicted in Figure \ref{fig:both_branches} is qualitatively identical to the one exhibited by the ground-state energy of $T\bar{T}$-deformed models for negative values of the deformation parameter $\alpha$, as described in \S \ref{sec:TTbar} (see e.g. Figure \ref{ERplotTTbar}).
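The universal IR behavior \eqref{eq:E_asymp_primary} is nothing but the free-particle approximation $L(\theta)\approx e^{-R\cosh\theta}$ inserted in the energy integral, which produces the Bessel function $-\frac{1}{\pi}K_1(R)$. A quick numerical sketch of this identity (with unit mass; the large-$R$ expansion of $K_1$ used for comparison is the standard one, and the grid parameters are illustrative):

```python
import numpy as np

def E_IR_leading(R, cutoff=12.0, n=4001):
    """Free-particle (leading IR) approximation of the primary branch:
    E(R) ~ -(1/2 pi) int cosh(th) exp(-R cosh th) dth = -(1/pi) K_1(R)."""
    th = np.linspace(-cutoff, cutoff, n)
    dth = th[1] - th[0]
    return -np.sum(np.cosh(th) * np.exp(-R * np.cosh(th))) * dth / (2.0 * np.pi)

def k1_large_R(R):
    """Standard large-R expansion of the modified Bessel function K_1:
    K_1(R) ~ sqrt(pi/(2R)) e^{-R} (1 + 3/(8R) - 15/(128 R^2) + ...)."""
    return (np.sqrt(np.pi / (2.0 * R)) * np.exp(-R)
            * (1.0 + 3.0 / (8.0 * R) - 15.0 / (128.0 * R ** 2)))
```

At $R=5$ the two expressions already agree at the sub-percent level, consistent with the exponentially small corrections in \eqref{eq:E_asymp_primary}.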
Finally, we stress that the features of $E(R)$ described here for a point in the parameter space of a specific model really are representative of the general behavior of the ground-state energy in the family of models defined by the S-matrices \eqref{sNpole}, at least for the case of fermionic statistics. As we will discuss in \S \ref{sec:results}, the status of the models with bosonic statistics is still not completely settled. In particular it is still unclear whether the secondary branch of $E(R)$ displays additional critical points or continues undisturbed in the deep IR and, if this is the case, what type of behavior it follows. \section{Numerical Method}\label{sec:num_meth} The results displayed in the previous section suggest that the solution to the TBA equation \eqref{tbas}, for S-matrices of the form \eqref{sNpole}, may generally possess a singular dependence on the parameter $R$. In particular, the slope of the tangent to the graph of $E(R)$ apparently diverges at some $R=R_*$. Such critical points are known as \emph{turning points}. Their presence in the dependence of the ground-state energy $E$ on the system size $R$ evokes the case of the $T\bar{T}$-deformed models, in which all the quantities obtainable from the TBA display a square-root singularity at the same value $R=R_*$. The iterative procedure described in \S \ref{subsec:iterative} becomes unstable as $R\to R_*$ and is therefore not particularly suitable for analyzing the vicinity of the singular point. Fortunately, many powerful methods exist that are capable of numerically handling critical points in non-linear equations. We refer to the nice monograph by Allgower and Georg \cite{allgower2012numerical} for an introduction to the subject, accompanied by an extensive list of references. The simplest of these numerical routines is the already mentioned PALC method which, in spite of the simplicity of its implementation, will be entirely sufficient to handle the situations of interest for us.
In this section we will quickly review this method and its main features. \subsection{The pseudo-arc-length continuation method} Before starting, let us point out a trivial fact: the TBA equation \eqref{tbas} is \emph{non-linear}. It is then not at all surprising that its solutions can develop a highly non-trivial dependence on the parameters. Conversely, what is remarkable is that in the vast majority of instances known in the literature, the solutions to the TBA equations display a simple behavior as functions of $R$. In full generality, we should expect a solution $\epsilon(\theta\vert R)$ to potentially present, as a function of $R$\footnote{In principle, the solution might possess critical points also in its dependence on the other parameters present in the TBA equation. We found no hint of such a possibility and we will thus simplify our discussion by concentrating on the dependence on the parameter $R$.}, any type of critical point imaginable. As we will see later, in the cases of the 1CDD and 2CDD models we are concerned with here, only turning points appear. We will thus restrict our attention to the simple cases in which every critical point is a turning point. This considerably simplifies both the discussion and the actual implementation of the PALC method, although, if needed, it is entirely possible -- and not exceedingly difficult -- to include bifurcations in the analysis. Since our goal is to analyze the TBA equation \eqref{tbas} numerically, we are going to describe the principles of the PALC for maps between finite-dimensional spaces. Let us then truncate and discretize the real $\theta$-line on an $N$-point lattice $\left\lbrace\theta_k\;\vert\; k=1,2,\ldots ,N\right\rbrace$ which, for the moment, we are not going to specify further.
Now, consider a parametrized map $H$ which takes as input a parameter $R\in\mathbb R$ together with the values $\epsilon_k=\epsilon(\theta_k)\in\mathbb R$ of some real function on the lattice, and yields $N$ real numbers: \begin{eqnarray} H\;:\quad \begin{array}{c c c} \mathbb R^{N}\times \mathbb R & \longrightarrow & \mathbb R^{N}\\ & & \\ (\vec{\epsilon},R) & \longmapsto & \vec{H}(\vec{\epsilon},R) \end{array}\;, \end{eqnarray} where we packaged the values $\epsilon_k$ and $H_k$ into vectors $\vec{\epsilon}$ and $\vec{H}$. We wish to explore the following fixed-point condition \begin{eqnarray} \vec{H}(\vec{\epsilon},R) = \vec{0}\;. \label{eq:map_equation} \end{eqnarray} Note that the TBA equation \eqref{tbas}, appropriately discretized and truncated, can be written in the above form. By definition, the map $H$ acts between spaces of different dimensionality, meaning \begin{eqnarray} \textrm{dim}[\textrm{Ker}(H)]\geq 1\;, \end{eqnarray} or, in other words, the preimage of the null vector $\vec{0}\in\mathbb R^N$ under the map $H$ is a space of dimension at least $1$. Hence at a generic point, where $\textrm{dim}[\textrm{Ker}(H)] = 1$, this preimage is a curve \begin{eqnarray} C\;:\quad J\subset \mathbb R\;\longrightarrow \; \mathbb R^N\times \mathbb R\;. \end{eqnarray} We call this the \emph{solution curve} for the map $H$. Our goal is to follow the solution curve from a given starting point $C_i = (\vec{\epsilon}_i, R_i)$ to a final one $C_f = (\vec{\epsilon}_f, R_f)$. The most straightforward way to achieve this is to simply parametrize the curve by $R$ and employ some numerical iterative routine, such as the one reviewed in \S \ref{subsec:iterative}, to move from $C_i = C(R_i)$ to $C_f = C(R_f)$. However, this simple-minded approach fails at any point in the parameter space where the rank of the Jacobian \begin{eqnarray} \mathcal{J}_{kl} = \frac{\partial H_k}{\partial \epsilon_l}\;, \end{eqnarray} is not maximal.
There we can no longer rely on the implicit function theorem to solve (\ref{eq:map_equation}) for $\vec{\epsilon}$ in terms of $R$. More geometrically, what happens is that the curve $C(R)$ displays a turning point, where $\frac{d}{dR}C(R)$ diverges. Fortunately, there exists a very simple cure for this problem: instead of parametrizing the curve $C$ by the parameter $R$, we can use an auxiliary quantity $s$, traditionally chosen to be the arc-length of $C$ or a suitable numerical equivalent, whence the name \emph{pseudo-arc-length} given to this approach. The condition (\ref{eq:map_equation}) then becomes \begin{eqnarray} \vec{H}(C(s)) = \vec{0}\;,\qquad s\in J\subset\mathbb R\;. \label{eq:eq_map_H} \end{eqnarray} In order to proceed, let us take a derivative of this condition with respect to the parameter $s$. We immediately obtain \begin{eqnarray} H'(C(s)) \dot{C}(s) = \vec{0}\;, \end{eqnarray} where the \emph{extended Jacobian} \begin{eqnarray} H'(C(s)) = \Bigg(\;\mathcal J\;\Bigg\vert\;\frac{d\vec{H}}{dR}\;\Bigg)\;, \end{eqnarray} is an $N\times(N+1)$ block matrix, while \begin{eqnarray} \dot{C}(s) = \left(\begin{array}{c} \frac{d}{ds} \vec{\epsilon} \\[0.1cm] \hline\\[-0.4cm] \frac{d}{ds} R \end{array}\right)\;, \end{eqnarray} is an $(N+1)$-component column vector. At this point we seem to be short of one condition, since we have introduced an additional parameter. However, remember that we decided to choose $s$ as the (pseudo-)arc-length of $C$, which means \begin{eqnarray} \vert\vert \dot{C}(s)\vert\vert = 1\;. \end{eqnarray} Summing up, we converted our non-linear problem, supported by the starting point $(\vec{\epsilon}_i,R_i)$, into an initial value problem \begin{eqnarray} H'(C(s))\dot{C}(s) = \vec{0}\;,\qquad \vert\vert\dot{C}(s)\vert\vert = 1\;,\qquad C(s_i) = (\vec{\epsilon}_i,R_i)\;, \label{eq:in_val_prob_map_H} \end{eqnarray} capable of dealing with the presence of turning points.
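In practice the tangent $\dot{C}(s)$ can be computed as the normalized null vector of the extended Jacobian, e.g. through an SVD. The following Python sketch (an illustration of the principle, not the implementation used in this work) shows this together with the simplest Euler predictor step; the sign convention keeping the tangent aligned with the previous step is the standard way of avoiding a reversal of direction along the curve:

```python
import numpy as np

def palc_tangent(H_ext, prev_tangent=None):
    """Unit tangent dot{C}(s) of the solution curve: the normalized null
    vector of the N x (N+1) extended Jacobian H' = (J | dH/dR)."""
    _, _, vt = np.linalg.svd(H_ext)
    t = vt[-1]                      # right-singular vector of the zero singular value
    if prev_tangent is not None and np.dot(t, prev_tangent) < 0.0:
        t = -t                      # keep marching in the same direction
    return t

def palc_predictor(C, tangent, ds):
    """Euler predictor step along the curve, C(s + ds) ~ C(s) + ds * dot{C}(s)."""
    return C + ds * tangent
```

As a toy example, take $N=1$ and $H(\epsilon,R)=\epsilon^2+R^2-1$, whose solution curve is the unit circle: at the turning point $(\epsilon,R)=(0,1)$ the Jacobian $\partial H/\partial\epsilon$ vanishes, yet the extended Jacobian $(0\,\vert\,2)$ still yields the well-defined tangent $(\pm 1,0)$, and the continuation proceeds through the turning point.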
Still, this formulation is somewhat unnatural, as it completely disregards the fact that the curve $C$ is the fixed point of the map $H$, and, as such, should enjoy powerful local contractive properties with respect to iterative solution methods -- such as Newton's method. We are then led to an integrated approach in which we numerically integrate (\ref{eq:in_val_prob_map_H}) very coarsely and subsequently employ some kind of iterative method to solve (\ref{eq:eq_map_H}) locally. This is the general strategy behind the approaches known as \emph{predictor-corrector routines}. In Appendix \ref{app:pred_corr} we are going to describe the one that we employed in this work and present a pseudo-code of its implementation. \section{Results for the 2CDD model}\label{sec:results} Here we present some results obtained using the numerical techniques of the previous Section. We first concentrate on the fermionic 2CDD models and then discuss some facts about the bosonic models. \subsection{Fermionic case} The numerical data we collected, of which we have shown some examples in \S \ref{subsec:two_branches}, strongly indicate the following properties of the ground-state energy $E(R)$ as a function of $R$: \begin{itemize} \item[--] $E(R)$ is a double-valued function of $R$ in the range $R>R_{\ast}$, with values in the negative real numbers; \item[--] The point $R=R_{\ast}$ is a square-root branching point -- or, using the terminology of \S \ref{sec:num_meth}, a turning point -- of the function $E(R)$; \item[--] There is no sign of additional turning or singular points other than $R=R_{\ast}$; \item[--] The two branches display the large-$R$ behaviors \eqref{eq:E_asymp_primary} and \eqref{eq:E_asymp_secondary}. \end{itemize} We could not find a convincing analytic argument proving the first three properties and we regard them as experimental observations. On the other hand, the last property \eqref{eq:E_asymp_secondary} can be verified analytically, as we are now going to show.
\subsubsection{The large $R$ behavior}\label{subsec:large_R_fermion} Let us analyze the possible behaviors of the TBA equation \eqref{tbas} at large $R$. To this end, we write the equation as follows \begin{eqnarray} \epsilon(\theta) = d(\theta) -\chi(\theta)\;, \label{eq:TBA_symbolic} \end{eqnarray} where $d(\theta)$ is the driving term and $\chi(\theta)$ the convolution: \begin{eqnarray} d(\theta) = R\cosh\theta\;,\qquad \chi(\theta) = \int\,\varphi(\theta-\theta')\,\log\left[1+e^{-\epsilon(\theta')}\right]\,\frac{d\theta'}{2\pi}\;. \end{eqnarray} As $R\rightarrow \infty$, the driving term becomes large, $\sim R$, and, in order for the equation \eqref{eq:TBA_symbolic} to be satisfied, it has to be balanced by a similar behavior in either $\epsilon(\theta)$, $\chi(\theta)$ or both. The standard assumption is that \begin{eqnarray} \epsilon(\theta)\underset{R\rightarrow\infty}{\sim} d(\theta)\;,\qquad \chi(\theta)\underset{R\rightarrow\infty}{\ll}d(\theta)\;, \label{eq:standard_large_R} \end{eqnarray} which turns out to be consistent, since, as one easily verifies, \begin{eqnarray} \chi(\theta) \underset{R\rightarrow\infty}{\sim} \int\,\varphi(\theta-\theta')\,\log\left[1+e^{-R\cosh\theta'}\right]\,\frac{d\theta'}{2\pi} \underset{R\rightarrow\infty}{\sim} \frac{\varphi(\theta)}{\sqrt{2\pi R}} e^{-R} \underset{R\rightarrow\infty}{\ll} R \cosh\theta\;. \label{eq:standard_large_R_chi} \end{eqnarray} However this is not, in general, the only possibility. It might be the case that the convolution term $\chi(\theta)$ is diverging as $R\rightarrow \infty$ and becomes comparable with either $\epsilon(\theta)$, $d(\theta)$ or both. 
It is then not difficult to check that only two possibilities are consistent: \begin{enumerate} \item $\epsilon(\theta) \underset{R\rightarrow\infty}{\longrightarrow} 0$ \underline{and} the kernel $\varphi(\theta)$ is not integrable on the real line; \item $\epsilon(\theta) \underset{R\rightarrow\infty}{\sim} -R\,f(\theta)$ where $f(\theta)$ is positive only in some bounded\footnote{The subset $\Theta$ cannot be unbounded, since the equation \eqref{eq:TBA_symbolic} forces $\epsilon(\theta)$ to behave as $d(\theta)$ for $\theta\rightarrow \pm \infty$.} subset $\Theta\subset\mathbb{R}$ of the real line and negative everywhere else. \end{enumerate} Scenario 1 cannot arise for the class of models we are dealing with\footnote{This scenario is, however, possible in models whose $S$-matrix contains a non-vanishing factor $\Phi_{\textrm{entire}}(\theta)$ \eqref{phientire}. In particular it describes the large $R$ behavior of the secondary branch $\tilde{E}(R)$ in the $T\bar{T}$-deformed theories.}, since the kernels \eqref{eq:TBA_kernel_v2} are obviously bounded functions of $\theta\in\mathbb{R}$. Scenario 2 is, on the other hand, a possible one. Let us explore its consequences. Under the hypothesis that \begin{eqnarray}\label{eq:second_large_R} \epsilon(\theta) \underset{R\rightarrow\infty}{\sim} -R\,f(\theta)\;,\qquad \left\lbrace\begin{array}{l l} f(\theta) > 0\;,& \theta\in\Theta\subset\mathbb{R}\;, \\ f(\theta)\leq 0\;,& \theta\in\Theta^{\perp} = \mathbb{R}-\Theta \;,\end{array}\right. \label{eq:negative_epsilon} \end{eqnarray} the convolution can be approximated as follows \begin{eqnarray} \chi(\theta) \underset{R\rightarrow\infty}{\sim} R\intop_{\Theta}\,\varphi(\theta-\theta')\,f(\theta')\,\frac{d\theta'}{2\pi} + \intop_{\mathbb{R}}\,\varphi(\theta-\theta')\,\log\left[1+e^{-R\left\vert f(\theta')\right\vert}\right]\,\frac{d\theta'}{2\pi}\;.
\end{eqnarray} Discarding the second term in the right-hand side, we arrive at the linear equation \begin{eqnarray} f(\theta) = -\cosh\theta + \intop_{\Theta}\,\varphi(\theta-\theta')\,f(\theta')\,\frac{d\theta'}{2\pi}\;. \end{eqnarray} Due to our hypothesis on the function $f(\theta)$, we see that the integrand in the right-hand side above is positive for any $\theta\in\mathbb{R}$ and $\theta'\in\Theta$, which implies the following bound \begin{eqnarray} 0\leq \intop_{\Theta}\,\varphi(\theta-\theta')\,f(\theta')\,\frac{d\theta'}{2\pi} \leq \underset{t\in\Theta}{\textrm{Max}}\left[f(t)\right] \intop_{\Theta}\,\varphi(\theta-\theta')\,\frac{d\theta'}{2\pi} \;. \end{eqnarray} Now, let $\theta_{\textrm{M}}\in\Theta$ be such that $f(\theta_{\textrm{M}}) = \underset{t\in\Theta}{\textrm{Max}}\left[f(t)\right]$, then the following inequalities are true \begin{eqnarray} -\cosh\theta_{\textrm{M}} \leq f(\theta_{\textrm{M}}) \leq -\cosh\theta_{\textrm{M}} +f(\theta_{\textrm{M}}) \intop_{\Theta}\,\varphi(\theta_{\textrm{M}}-\theta')\,\frac{d\theta'}{2\pi} \;. \end{eqnarray} Rearranging the right inequality above, we find that \begin{eqnarray} \intop_{\Theta}\,\varphi(\theta_{\textrm{M}}-\theta')\,\frac{d\theta'}{2\pi} \geq 1+\frac{\cosh\theta_{\textrm{M}}}{f(\theta_{\textrm{M}})} > 1\;, \end{eqnarray} which we can interpret as a constraint on the class of models which allow for this scenario. In fact, remember that the integral of the kernel on the whole real line, \eqref{eq:L1_kernel}, counts the number $N$ of CDD factors appearing in the $S$-matrix \eqref{sNpole}. But, since we assumed that $\Theta$ is a bounded subset of $\mathbb R$, we find that \begin{eqnarray} N > \intop_{\Theta}\,\varphi(\theta_{\textrm{M}}-\theta')\,\frac{d\theta'}{2\pi} > 1\quad \Longrightarrow \quad N>1\;.
\label{eq:bound_on_N} \end{eqnarray} Thus we have found that the fermionic 1CDD models, namely the sinh-Gordon and staircase models, can only display the standard large $R$ behavior \eqref{eq:standard_large_R}, \eqref{eq:standard_large_R_chi}. We stress that this result should not be read as a proof of the absence of turning points in these models, but rather as a sanity check for the correctness of our computations, since the ground-state energy for fermionic 1CDD models is well known to be a smooth and monotonically increasing function of the radius in the whole range $R>0$. Conversely, all fermionic $N$CDD models with $N>1$ allow for both the standard large $R$ behavior \eqref{eq:standard_large_R}, \eqref{eq:standard_large_R_chi} and the non-standard one \eqref{eq:negative_epsilon}. Consequently, their ground-state energy will possibly display both the asymptotic behaviors \eqref{eq:E_asymp_primary} and \eqref{eq:E_asymp_secondary}, where \begin{eqnarray} \varepsilon_{-} = \intop_{\Theta}\,\cosh\theta\,f(\theta)\,d\theta\;, \end{eqnarray} in accordance with the numerical data we have obtained. \subsubsection{Analysis of the numerical data}\label{sec:F2CDD_num} The fermionic 2CDD models were classified in \S \ref{sec:models} into cases (a) to (d). We have performed numerical analysis for all the different cases and the results show that the behaviors are qualitatively the same. Thus, we are going to show here the details of the numerical analysis only for the representative case (d). We begin by analyzing the numerical solution obtained through the PALC method for large values of $R$. It was argued in the previous section that the pseudoenergy should behave as in \eqref{eq:second_large_R}, assuming negative values in a bounded subset of the real line and positive values elsewhere.
This is indeed checked to be true for all the 2CDD models under consideration, as illustrated for a particular member of this family in Figure \ref{fig:second_branch_large_R}, and to be contrasted with the standard iterative solution (the primary branch), which is positive everywhere. The numerics indicate that the negativity region is always a single interval centered at the origin of the form $\Theta=\{\theta\in\mathbb{R}\,|\,-\Lambda\le\theta\le\Lambda\}$. They also indicate that the interval size $\Lambda$ is model-dependent. In particular, it seems to grow with $\theta_0$ and decrease with $\gamma$. Nevertheless, the precise dependence of $\Lambda$ on the parameters deserves further investigation. \begin{figure}[t!] \begin{center} \includegraphics{Figures/epslargerF.pdf} \end{center} \caption{ Pseudoenergy $\epsilon(\theta)$ for the secondary branch solution (blue) at large values of $R$, showing the expected behavior \eqref{eq:second_large_R}, namely it is below $0$ (marked with the dashed line) in a finite interval, in contrast with the corresponding behavior of the iterative solution (red). Here the model parameters are $\theta_0=2$ and $\gamma=4\pi/10$, though we checked that the qualitative picture remains the same within the whole set of admissible values of $\theta_0$ and $\gamma$.} \label{fig:second_branch_large_R} \end{figure} We then proceed to analyze the secondary branch solution at the opposite end of the range of $R$, i.e., as $R$ approaches the critical value $R_*$. For some of the plots it will be convenient to show the results in terms of the log-scale distance \begin{equation} x=\log(R/2) \end{equation} that alleviates the exponential dependence (with $x_*=\log(R_*/2)$ for the corresponding critical point). Here we find it more instructive to display $L(\theta)$ instead of the pseudoenergy itself, in order to ease the comparison with the primary branch solution. The situation is illustrated in Figure \ref{fig:both_branches_R_crit}.
The two branches approach each other as the value of $R$ decreases, eventually merging at $R=R_*$, after which they become complex-valued. For each $R$, the function $L(\theta)$ for the secondary branch is everywhere larger than the corresponding primary branch counterpart, which is compatible with the previously mentioned fact that it has lower energy (recall the overall minus sign in \eqref{etbas}). \begin{figure}[t!] \begin{center} \includegraphics{Figures/LclosecritF.pdf} \end{center} \caption{$L(\theta)$ for both the primary (red) and secondary (blue) branch solutions as $R$ approaches the critical value $R_*$. For each color (blue or red), the color gradient indicates the decrease of $R$ towards $R_*$, where the two branches merge. Here $\theta_0=5$ and $\gamma=4\pi/10$, which lead to $R_*\approx0.0192$. } \label{fig:both_branches_R_crit} \end{figure} The critical value $R_*$ could in principle have a dependence on $\theta$. We ran an extensive numerical test exploring this possibility, but all the numerical results indicate $\theta$-independence to high accuracy, even though at this moment we do not have an analytic proof of this property. The analysis went as follows. We first ran the iterative numerical routine and computed the pseudoenergy $\epsilon(\theta)$ for at least ten different values of $x$ differing from each other and from $x_*$ by $10^{-8}$. Then, we selected several values of $\theta$ and for each value we performed a square-root fit of the form $a(\theta) + b(\theta) \sqrt{-x_*(\theta) + x}$. The fits were done using Mathematica's \texttt{NonlinearModelFit} function by giving an initial estimate for $x_*(\theta)$. By comparing all the obtained $x_*(\theta)$, we verified that they agree up to errors no greater than $10^{-8}$, which was our minimal working precision. The analysis was performed for several values of $\theta_0$ and for $\gamma$ in the range $0 \le \gamma \le (99/200) \pi$.
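The fitting step is easy to reproduce outside Mathematica: for any trial value of $x_*$, the model $a+b\sqrt{x-x_*}$ is linear in $(a,b)$, so a one-dimensional scan over candidate $x_*$ values suffices. A minimal Python sketch of this idea (with placeholder data; the actual analysis used \texttt{NonlinearModelFit}):

```python
import numpy as np

def fit_sqrt_branch(x, E, xstar_grid):
    """Fit E(x) ~ a + b * sqrt(x - x_*): scan candidate x_* values and solve
    the remaining linear least-squares problem for the coefficients (a, b)."""
    best = None
    for xs in xstar_grid:
        if xs >= x.min():               # sqrt must stay real on all data points
            continue
        A = np.column_stack([np.ones_like(x), np.sqrt(x - xs)])
        coef = np.linalg.lstsq(A, E, rcond=None)[0]
        resid = np.sum((A @ coef - E) ** 2)
        if best is None or resid < best[0]:
            best = (resid, xs, coef[0], coef[1])
    return best[1], best[2], best[3]    # x_*, a, b
```

In a production version the coarse scan would be refined, e.g. by a golden-section search in $x_*$, but the structure of the problem -- nonlinear in $x_*$ only -- is the same.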
In many cases, when the number of necessary points in the discretized $\theta$ grid was not very high, it was possible to work with even higher precision. In those cases, another way of getting $x_{*}$ with high precision is by assuming a square-root behavior for the pseudoenergy and solving the resulting equations using Mathematica's \texttt{FindRoot} function. In addition, we also verified that $R_*$ depends smoothly on the model parameters $\theta_0$ and $\gamma$, as shown in Figure \ref{fig:xcdeps} for both the fermionic and bosonic models. In particular, for large $\theta_0$ we have the asymptotic behavior $x_*=\log(R_*/2)\approx-\theta_0+x_*^{(0)}$ (see \S\ref{sec:NRlimit} for a derivation in the special limit where $\gamma$ is close to $\pi/2$, for which $x_*^{(0)}=\log\log(2+2\sqrt{2})$; for other values of $\gamma$ the linear term remains the same, though $x_*^{(0)}$ is different). \begin{figure}[t!] \centering \begin{subfigure}[b]{6cm} \centering \includegraphics[width=\textwidth]{Figures/xcgam.pdf} \caption{$\gamma$ dependence of $x_*$.} \label{fig:xcgam} \end{subfigure} \begin{subfigure}[b]{6.31cm} \centering \includegraphics[width=\textwidth]{Figures/xcth0.pdf} \caption{$\theta_0$ dependence of $x_*$ with $\gamma=2\pi/5$.} \label{fig:xcth0} \end{subfigure} \caption{Dependence of the critical $x_*$ on the model parameters. Black lines correspond to fermionic 2CDD models, red lines correspond to bosonic ones. In (a), we demonstrate the validity of the narrow resonance limit approximation for $x_*$ (red and black bullets/boxes); see \S\ref{sec:NRlimit}. } \label{fig:xcdeps} \end{figure} \subsection{Bosonic case} We have also repeated the analysis described above, using the PALC method, for the case of bosonic systems. The numerical routine used in this case only differs from the fermionic one by a few signs. As already mentioned in \S\ref{sec:models}, the solutions to the TBA equation for the bosonic models have intricate behavior already for the 1CDD cases.
It was first noticed in \cite{Mussardo:1999aj} (for the case of real $u_1$, in the notation of \eqref{1cdd}) that the numerical iterative routine stops converging for some $R_{*}$, signaling the presence of a singularity. In fact, we have verified numerically that all the bosonic models up to two CDD factors behave similarly to the fermionic 2CDD models of the previous section, i.e., they have a ``primary branch'' and a ``secondary branch'' which merge at a critical scale $R_{*}$, where the energies $E(R)$ have square-root singularities, and the value of $R_{*}$ is independent of $\theta$. There is a simple argument, based on the well-known relation between bosonic and fermionic TBA, which makes this behavior of the bosonic $1$CDD model rather natural. Consider the TBA equation \eqref{tbas}, \eqref{Ldef} with $\sigma = +1$ and an $N$CDD kernel \eqref{sNpole}, and introduce the following function \begin{eqnarray} \tilde{\epsilon}(\theta) = \log\left[e^{\epsilon(\theta)} - 1\right]\;. \label{eq:epsilon_tilde} \end{eqnarray} Some simple manipulations show that this function satisfies a fermionic TBA equation with kernel \begin{eqnarray} \tilde{\varphi}(\theta) = \varphi(\theta) + 2\pi \delta(\theta)\;, \label{eq:BosonicFermionic} \end{eqnarray} with $\delta(\theta)$ being the Dirac $\delta$-function. Therefore, a general bosonic $N$CDD model is equivalent to the ($N+1$)CDD fermionic TBA, taken in the limit when $u_{N+1}\to 0$ (see \eqref{eq:kernelNCDD})\footnote{Notice that $\lim_{u\to 0}\log\frac{i\sin u - \sinh\theta}{i\sin u + \sinh\theta} = i\pi\,\text{sign}(\theta)$, for the principal branch of the log function.}. Recalling the arguments presented in \S \ref{subsec:large_R_fermion}, we conclude that bosonic $N$CDD models admit two different types of large $R$ behaviors whenever $N>0$.
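The manipulations leading to \eqref{eq:BosonicFermionic} can be spelled out in a few lines. The sketch below assumes the sign conventions in which the bosonic ($\sigma=+1$) form of \eqref{tbas} reads $\epsilon = R\cosh\theta + \frac{\varphi}{2\pi}*\log(1-e^{-\epsilon})$:

```latex
% From e^{\tilde\epsilon} = e^{\epsilon} - 1 one gets
%   1 - e^{-\epsilon} = e^{\tilde\epsilon}/(1 + e^{\tilde\epsilon}),
% so that
\begin{align}
  \log\left(1-e^{-\epsilon}\right) &= -\log\left(1+e^{-\tilde{\epsilon}}\right)\,,
  \qquad
  \tilde{\epsilon}(\theta) = \epsilon(\theta)
     + \log\left(1-e^{-\epsilon(\theta)}\right)\,.
\end{align}
% Substituting the bosonic TBA for \epsilon(\theta) in the second relation:
\begin{align}
  \tilde{\epsilon}(\theta)
  = R\cosh\theta
    - \int \frac{d\theta'}{2\pi}
      \left[\varphi(\theta-\theta') + 2\pi\,\delta(\theta-\theta')\right]
      \log\left(1+e^{-\tilde{\epsilon}(\theta')}\right)\,,
\end{align}
% i.e., a fermionic TBA with kernel \tilde\varphi = \varphi + 2\pi\delta.
```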
The large $R$ regime of the pseudoenergy $\epsilon(\theta)$ for the primary branch is as expected and easily accessed numerically; for the secondary branch, however, it is considerably more involved to compute. Increasing the value of $R$, we eventually reach a value $R^{\prime}$ where the PALC method suddenly ceases to provide a real solution and reverts back to the primary branch solution. Analyzing the behavior of $\epsilon(\theta)$ for complex values of $\theta$, we verified that a pair of complex conjugate zeros of $z(\theta)=1-e^{-\epsilon(\theta)}$ is approaching the real axis and causing the numerical instability. In principle it is possible to refine the numerical methods so as to obtain solutions for $R>R^{\prime}$. However, it is not clear at the moment whether or not those singularities of $L(\theta)$ ever cross the real axis. In case they do, an analysis similar to the one performed in \cite{Dorey:1996re} for the excited state TBA could be carried out. We leave the analysis of the large $R$ behavior of the secondary branch in bosonic models for a future study. \begin{figure}[t!] \begin{center} \includegraphics{Figures/LclosecritB.pdf} \end{center} \caption{$L(\theta)$ for the 2CDD bosonic model of type (d) with $\theta_0=5$ and $\gamma=3\pi/10$, in which case $R_*\approx0.2382$. Similarly to the fermionic case, the function $L(\theta)$ for the secondary branch is everywhere greater than the one for the first branch.} \label{1CDDbosonicoL} \end{figure} The behavior of the models for $R$ close to $R_{*}$ is illustrated in Figure \ref{1CDDbosonicoL} by the $L(\theta)$ function for a 2CDD model of type (d). The qualitative picture is similar to the fermionic case, i.e., the function $L(\theta)$ for the secondary branch solution is everywhere greater than the one for the primary branch, and the two merge as the critical point is approached.
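The fold structure just described, two branches merging at a square-root point, is exactly the situation the PALC method of Appendix \ref{app:pred_corr} is designed to traverse. As an illustration, the following toy sketch (not the TBA system: the solution curve is simply the unit circle $x^2+R^2=1$, which has folds at $R=\pm1$ where a naive Newton iteration in $x$ at fixed $R$ fails) implements the Euler predictor and the Moore-Penrose corrector:

```python
import numpy as np

def H(u):
    # u = (x, R); the "solution curve" H = 0 is the unit circle
    x, R = u
    return np.array([x**2 + R**2 - 1.0])

def H_prime(u):
    # extended 1x2 Jacobian [dH/dx, dH/dR]
    x, R = u
    return np.array([[2.0 * x, 2.0 * R]])

def palc(u0, t0, ds=0.05, n_steps=80):
    u, t = np.asarray(u0, float), np.asarray(t0, float)
    path = [u.copy()]
    for _ in range(n_steps):
        # predictor: unit vector spanning the kernel of the extended Jacobian,
        # oriented consistently with the previous tangent
        J = H_prime(u)
        tan = np.array([-J[0, 1], J[0, 0]])
        tan /= np.linalg.norm(tan)
        if tan @ t < 0:
            tan = -tan
        t = tan
        u = u + ds * tan
        # corrector: Newton iteration with the Moore-Penrose quasi-inverse
        for _ in range(50):
            du = -np.linalg.pinv(H_prime(u)) @ H(u)
            u = u + du
            if np.linalg.norm(du) < 1e-12:
                break
        path.append(u.copy())
    return np.array(path)

path = palc(u0=(1.0, 0.0), t0=(0.0, 1.0))
```

The continuation passes smoothly through the fold at $(x,R)=(0,1)$, where $\partial H/\partial x$ vanishes, in the same way the PALC routine passes through $R_*$ in the TBA problem.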
We conclude this subsection by showing in Figure \ref{fig:xcdeps} the smooth dependence of $x_{*}$ on the model parameters, in particular in the limit $\gamma \rightarrow \pi/2$. In addition, notice that the bosonic curve is always above the fermionic curve for the same parameters. This can be understood by analyzing the map \eqref{eq:BosonicFermionic} and the fact that the additional delta function term always gives a positive contribution to the convolution term of the TBA equations. \subsection{Narrow resonance limit} \label{sec:NRlimit} Here we consider the special limit $\gamma\to\frac{\pi}{2}$ of the kernel~\eqref{eq:2cdd_kernel}. In this limit the poles of the kernel approach the real line, the kernel finally degenerating into two Dirac $\delta$ functions. We shall refer to this as the Narrow Resonance (NR) limit. After integration of the delta functions and exponentiation, the TBA equation \eqref{tbas} becomes the difference equation \begin{equation} \label{eq:NR} Y(\theta|R)= e^{-R\cosh \theta}[1-\sigma Y(\theta+\theta_0|R)]^{-\sigma}[1-\sigma Y(\theta-\theta_0|R)]^{-\sigma}\,, \end{equation} where we introduced the notation $Y(\theta|R)=e^{-\epsilon(\theta|R)}$. Note that this can be seen as an infinite set of equations relating the values of $Y$ on the grid points $\theta \in (-\theta_0,\theta_0)+\theta_0\mathbb{Z}$. Let us focus on the fermionic case ($\sigma=-1$). Introducing $y_k=Y(\theta+k \theta_0)$ and $g_k=e^{-R \cosh(\theta+k \theta_0)}$ we can write \eqref{eq:NR} as \begin{equation} y_k=g_k(1+y_{k-1})(1+y_{k+1}) \, , \qquad (k\in\mathbb{Z}) \end{equation} and look for a solution for different grids specified by a choice of $\theta$. This is an infinite set of equations; however, starting with $k=0$, one can obtain an approximate solution by truncating the system at some $|k|\leq m$, since $g_k$ and $y_k$ decay very rapidly with $R$ and $\theta_0$, and hence with $k$.
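The truncated system is easy to solve by fixed-point iteration when $R$ lies above the critical point (on the primary branch). A minimal sketch, with the illustrative values $R=1$, $\theta_0=2$, $\theta=0$ chosen for definiteness:

```python
import math

def solve_truncated_nr(R, theta0, theta=0.0, m=8, tol=1e-14, max_iter=10_000):
    """Fixed-point iteration for the truncated narrow-resonance system
    y_k = g_k (1 + y_{k-1}) (1 + y_{k+1}),  |k| <= m,  with y at |k| = m+1 set to 0."""
    g = {k: math.exp(-R * math.cosh(theta + k * theta0)) for k in range(-m, m + 1)}
    y = {k: 0.0 for k in range(-m - 1, m + 2)}  # boundary values stay at 0
    for _ in range(max_iter):
        y_new = {k: g[k] * (1.0 + y[k - 1]) * (1.0 + y[k + 1])
                 for k in range(-m, m + 1)}
        delta = max(abs(y_new[k] - y[k]) for k in y_new)
        y.update(y_new)
        if delta < tol:
            break
    return y

y = solve_truncated_nr(R=1.0, theta0=2.0)
# residual of the difference equation on the kept lattice points
res = max(
    abs(y[k] - math.exp(-math.cosh(2.0 * k)) * (1.0 + y[k - 1]) * (1.0 + y[k + 1]))
    for k in range(-8, 9)
)
```

For these values the iteration converges rapidly; closer to $R_*(\theta)$ the convergence slows down and eventually fails, which is where a continuation method such as PALC becomes necessary.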
Truncating to $m=1$ leads to the quadratic equation \begin{equation} \label{eq:NRtr1} y_0=g_0[1+g_1(y_0+1)][1+g_{-1}(y_0+1)]\,. \end{equation} One can now choose the integer lattice (i.e., $\theta=0$) to get \begin{align}\label{eq:y0nrm} y_0 &=-1-e^{-R\cosh\theta_0}+\frac{1}{2}e^{R(1+2\cosh \theta_0)}\left(1\pm\sqrt{1-4e^{-R(1+\cosh\theta_0)}(1+e^{-R\cosh \theta_0})}\right). \end{align} The solution develops a square-root singularity at $x_*\approx-\theta_0+\log\log(2(1+\sqrt{2}))$, which is compatible with our findings in \S\ref{sec:F2CDD_num}. This point is shown as a red bullet in Figure \ref{fig:xcgam}. In contrast to the general case, it is clear that here the branching point depends on the choice of the $\theta$ lattice. Let us also comment that a similar analysis of the truncated system in the bosonic case ($\sigma=+1$) using the half-integer lattice leads to the black box shown in Figure \ref{fig:xcgam}.\footnote{ The analog of \eqref{eq:y0nrm} comes with a more complicated square root argument and no analytical solution for $x_*$ as a function of $\theta_0$ can be found in that case, although it is straightforward to find it numerically.} Note that the truncation to $m=1$ is only valid for sufficiently large $R$ and $\theta_0$. Increasing the truncation order leads to more coupled equations, which in turn can be recast as a (more complicated) algebraic equation for $y_0$, with parameters depending on $\theta$. The number of solutions increases accordingly. However, for any $\theta \in (-\theta_0,\theta_0)$ there is always a single pair of solutions which collide and form a branching point at some $x_*(\theta)\approx -\theta_0 + \text{const.}$, corresponding to real, positive $R_*(\theta)$, a feature that is not altered by increasing the truncation order.
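The branching point of \eqref{eq:y0nrm} is the zero of the square-root argument, which is easy to locate by bisection. The sketch below (with the illustrative value $\theta_0=5$) also checks the large-$\theta_0$ asymptotic $x_*\approx-\theta_0+\log\log(2+2\sqrt{2})$:

```python
import math

def disc(R, theta0):
    """Square-root argument (discriminant) of the m=1 truncation, theta = 0 lattice."""
    c = math.cosh(theta0)
    return 1.0 - 4.0 * math.exp(-R * (1.0 + c)) * (1.0 + math.exp(-R * c))

def r_star(theta0, lo=1e-6, hi=10.0, iters=200):
    """Bisection for the zero of disc: disc < 0 for R < R_*, disc > 0 for R > R_*."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if disc(mid, theta0) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

theta0 = 5.0
x_star = math.log(r_star(theta0) / 2.0)                      # x_* = log(R_*/2)
x_asym = -theta0 + math.log(math.log(2.0 + 2.0 * math.sqrt(2.0)))
```

Already for $\theta_0=5$ the bisection value and the asymptotic formula differ only by roughly $10^{-2}$.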
Finally, we remark that in the further special limit $\theta_0=0$, the difference equations~\eqref{eq:NR} become simple algebraic equations for $Y(\theta)$ that can be solved exactly both in the fermionic and the bosonic case, leading to the exact expressions $x_*=\log \log 2$ and $x_*=\log \log \frac{3\sqrt{3}}{2}$ for the critical points, respectively. These points are also shown in Figure~\ref{fig:xcgam}, emphasizing the smooth nature of the limit $\gamma\to\frac{\pi}{2}$. In Figure~\ref{fig:NRlimit} we present as an example a solution with $m=8$ truncation, together with the iterative solution of the integral equation \eqref{tbas} for $\theta_0=2$ and $\gamma$ approaching $\pi/2$, just before reaching the (first) critical $R_*(\theta)$ of the NR limit. The transition seems to be smooth; however, we do not yet have a complete understanding of the nature of this limit. We plan to revisit the narrow resonance model in a more systematic way in the future. \begin{figure} \centering \includegraphics{Figures/yvsthetaNR.pdf} \caption{Approaching the Narrow Resonance (NR) limit for $\theta_0=2$ and $x=1.75$.} \label{fig:NRlimit} \end{figure} \section{Discussion}\label{sec:discussion} There are two general questions which we believe our results shed some light upon. One concerns the short-distance behavior of the theory under the generalized TTbar deformation \eqref{Aalphas}. Our results support the expectation that, at least in the cases when the CDD factor in the associated $S$-matrix deformation has the form \eqref{cddn}, \eqref{phipole} with finite $N$, the theory develops the Hagedorn singularity corresponding to a density of high-energy states much greater than what is allowed in a Wilsonian QFT.
Although we demonstrated this in a limited set of examples -- the 2CDD deformations of the free $S$-matrix with both fermionic and bosonic statistics and the 1CDD deformations of the free boson $S$-matrix -- this result likely extends to more general $N$CDD deformations, at least for massive theories involving only one kind of particle. In fact the case $N=\infty$, a model known as \emph{Elliptic sinh-Gordon}, was shown to display the same behavior as the ones studied here \cite{Cordova:2021fnr}. We note that this behavior is qualitatively the same as the one encountered under the ``TTbar proper'' deformation \eqref{Aalpha} of a generic local QFT. Moreover, the singularity of $E(R)$ at the Hagedorn point $R_*$ is a square-root branching point, exactly as in the TTbar deformations with negative $\alpha$. From a formal point of view, the nature of this singularity is not entirely unexpected. Indeed, the character of the singularity relates to the rate of approach to the Hagedorn asymptotic \eqref{hagedorn1} at high energy $\mathcal{E} \to \infty$. Assume that the approach is power-like\footnote{{It is interesting to compare this assumption with the analysis of thermodynamic stability in \cite{Barbon:2020amo}.}} \begin{eqnarray}\label{tohagedorn} S(\mathcal{E}) = R_*\,\mathcal{E} - \frac{a\,L^{\kappa+1}}{\mathcal{E}^\kappa} + \cdots \end{eqnarray} where $\kappa$ is some positive number, $L$ is the spatial size of the system which is assumed to be asymptotically large, and the dots represent yet higher negative powers of $\mathcal{E}$. The dependence on $L$ of the subleading term reflects the extensive nature of the entropy, which must behave as $L\,\sigma(\mathcal{E}/L)$ in the limit $L\to\infty$, with the intensive quantity -- the entropy density $\sigma$ -- depending on the energy density $\mathcal{E}/L$. Inspection of \eqref{tohagedorn} reveals the mass dimension of the coefficient $a$ to be $a\sim[\text{mass}]^{2\kappa+1}$.
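The link between the exponent $\kappa$ in \eqref{tohagedorn} and the character of the singularity of $E(R)$ can be made explicit by a saddle-point sketch. Assuming the free energy on the circle is dominated by the extremum of $R\,\mathcal{E}-S(\mathcal{E})$, and that the exchange of space and time directions relating it to $E(R)$ does not alter the singularity exponent, one finds:

```latex
% Saddle point of R*E - S(E), with S(E) taken from (tohagedorn):
\begin{align}
  \frac{\partial S}{\partial \mathcal{E}} = R
  \quad\Longrightarrow\quad
  R - R_* = \kappa\,a\,L^{\kappa+1}\,\mathcal{E}^{-(\kappa+1)}
  \quad\Longrightarrow\quad
  \mathcal{E}_{\rm saddle} = L\left(\frac{\kappa\,a}{R-R_*}\right)^{\frac{1}{\kappa+1}}\,,
\end{align}
% and at the saddle
\begin{align}
  \left.R\,\mathcal{E} - S(\mathcal{E})\right|_{\rm saddle}
  = (R-R_*)\,\mathcal{E}_{\rm saddle}
    + \frac{a\,L^{\kappa+1}}{\mathcal{E}_{\rm saddle}^{\,\kappa}}
  \;\propto\; L\,(R-R_*)^{\frac{\kappa}{\kappa+1}}\,,
\end{align}
% a branch point of order kappa/(kappa+1): kappa = 1 gives the square root.
```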
Having in mind that all the deformation parameters $\alpha_s$ in \eqref{Aalphas} have even integer dimensions, one could expect the exponent $2\kappa+1$ to be an integer. The lowest positive $\kappa$ consistent with this assumption is $\kappa=1$, and then \eqref{tohagedorn} leads exactly to the square-root singularity of $E(R)$. Still, the physics behind this simple character of the singularity appears mysterious. Analytic continuation of $E(R)$ below $R_*$ returns complex values of $E$. This likely signals an instability of the ground state at $R<R_*$ against some sort of decay. If so, what are the products of the decay? Usually in a theory with finite range of interaction the decay of the unstable ground state goes through the process of nucleation, as in the ``false vacuum'' decay studied in \cite{Coleman:1977py,Kobzarev:1974cp}. However, such a decay would imply a much weaker -- and analytically more complicated -- singularity at $R_*$. Therefore the simple algebraic character of the actual singularity appears puzzling. A different, but possibly related, question is the physical interpretation of the secondary branch of $E(R)$ discovered in \S \ref{sec:results}. An even more general question concerns the relation between the $S$-matrix and the underlying local structure. Suppose we are given an $S$-matrix, i.e. a collection of masses of stable particles as well as the full set of scattering amplitudes, satisfying all the standard requirements of $S$-matrix theory -- unitarity, analyticity, crossing and bootstrap conditions (see e.g. \cite{Eden:1966dnq, Iagolnitzer:1994xv}) -- with the singularity structure consistent with macro-causality \cite{Iagolnitzer:1974wz}. Is there a local QFT generating such a scattering theory? The answer is generally no. There are consistent $S$-matrices which cannot be derived from Wilsonian QFT, and indeed do not have an underlying local structure, meaning a complete algebra of local operators.
This possibility is famously realized in string theories. The results presented here support the expectation that the overwhelming majority of self-consistent $S$-matrices are not derivable from local QFT. Although this expectation arises from a general analysis of the RG flows \cite{Wilson:1973jj}, we substantiate it by providing concrete examples in 1+1 dimensions with factorizable $S$-matrices consisting of pure CDD factors. We studied a number of examples of such $S$-matrices and verified that they lead to the Hagedorn density of high-energy states \eqref{hagedorn1}, familiar from string theory. What's more, it looks likely that this situation is rather general: with the exception of a small subset of ``local field-theoretic'' $S$-matrices, the bulk of the space of consistent, factorizable $S$-matrices in 1+1 dimensions leads to a Hagedorn transition. This statement of course needs to be verified on a more systematic level, but it is tempting to conjecture that this is a general situation, not limited to integrable theories and to low space-time dimensions. If so, would it mean that the majority of consistent $S$-matrices correspond to some kind of string theories? Or maybe there is a more general class of theories, besides the strings, which break the standard local structure of QFT while preserving macro-causality and exhibit the stringy density of states? The present work represents a first step of a project whose goal is the systematic analysis of the TBA equations for completely general CDD-deformed factorizable $S$-matrices, with arbitrarily complicated CDD factors \eqref{cddg}, possibly including the factors \eqref{phientire} with singular behavior at high energies. Clearly, CDD deformations of more complicated $S$-matrices, involving more than one kind of particle -- possibly having mass degeneracies, a situation leading to off-diagonal scattering -- also have to be studied.
Such $S$-matrices lead to systems of TBA equations more complicated than the simple equation \eqref{tbas}. Nonetheless, we believe that the numerical methods adopted here, in particular the PALC routine, can be applied in full generality. Finally, a similar analysis can be extended to the CDD deformed ``massless TBA systems'' (see e.g. \cite{Zamolodchikov:1991vx,Zamolodchikov:1992zr,Fendley:1993jh}). Although the physical foundation here is less firm -- since the notion of $S$-matrix is ambiguous for massless theories in 1+1 dimensions -- these cases might yield welcome surprises. \subsection*{Acknowledgements} AZ acknowledges warm hospitality extended to him at the International Institute of Physics, Natal, Brasil, where parts of this work were done. AZ is grateful to A. Polyakov and F. Smirnov for interest and discussions. SN wishes to thank R. Tateo, L. G. C\'{o}rdova and F. I. Schaposnik for their always interesting and useful comments and questions. Work of GC was supported by MEC and MCTIC. Work of TF was supported by the Serrapilheira Institute (grant number Serra-1812-26900). ML was supported by the National Research Development and Innovation Office of Hungary under the postdoctoral grant PD-19 No. 132118 and by the Fund TKP2020 IES (Grant No. BME-IE-NAT), under the auspices of the Ministry for Innovation and Technology. Early stage of ML's work was carried out at the International Institute of Physics, Natal, Brasil where he was supported by MEC and MCTIC. Work of SN is supported by NSF grant PHY-1915219. Work of AZ was partly supported by the NSF grant PHY-191509. \appendix \section{Predictor-corrector routine}\label{app:pred_corr} In general, a predictor-corrector routine is, as the name suggests, a two-step procedure for solving an equation, by first performing an educated (numerical) guess and subsequently adjusting it.
In the case we are concerned with, we wish to solve the equation \begin{eqnarray} H(\epsilon,R) = -\epsilon(\theta) + R \cosh\theta - \intop\,\frac{d\theta'}{2\pi} \varphi(\theta - \theta') \log\Big(1+e^{-\epsilon(\theta')}\Big) = 0\;, \end{eqnarray} with $\varphi$ being the 2CDD kernel \eqref{eq:2cdd_kernel} \begin{eqnarray} \varphi(\theta) = \sum_{\sigma,\sigma' = \pm1}\frac{1}{\cosh(\theta + \sigma \omega + i \sigma'\gamma)}\;. \end{eqnarray} Obviously, we are going to deal with an appropriate truncation and discretization of the above equation, taking the following form \begin{eqnarray} H_k(\vec{\epsilon},R) = -\epsilon_k + R \cosh\theta_k - \frac{1}{2\pi}\sum_{l}\Delta\theta \varphi_{kl}\log\Big(1+e^{-\epsilon_l}\Big) = 0\;, \end{eqnarray} with $\Delta\theta$ being the lattice step (taken to be constant, for simplicity) and \begin{eqnarray} \varphi_{kl} = \sum_{\sigma,\sigma' = \pm1}\frac{1}{\cosh((k -l)\Delta\theta + \sigma \omega + i \sigma'\gamma)}\;. \end{eqnarray} The two steps of the predictor-corrector routine can then be described as follows \begin{itemize} \item \textbf{Predictor}. This part of the routine takes as input a point $c(s_j) = (\vec{\epsilon}_j,R_j)$ on the solution curve and uses the initial value problem form (\ref{eq:in_val_prob_map_H}), which we recall here \begin{eqnarray} H'(c(s))\dot{c}(s) = \vec{0}\;,\qquad \vert\vert\dot{c}(s)\vert\vert = 1\;,\qquad c(s_j) = (\vec{\epsilon}_j,R_j)\;, \end{eqnarray} to yield a reasonable guess for a new point $c^{(0)}(s_{j+1}) = (\vec{\epsilon}^{\,(0)}_{j+1},R^{(0)}_{j+1})$.
The simplest way to obtain such a point is to employ the so-called \emph{Euler predictor}, which implements the equation \begin{eqnarray} (\vec{\epsilon}^{\,(0)}_{j+1},R^{(0)}_{j+1}) = (\vec{\epsilon}_j,R_j) + \delta s\,\frac{t_j}{\vert\vert t_j\vert\vert}\;, \end{eqnarray} where the $(N+1)$-dimensional vector $t_j$ is tangent to the solution curve at the point $(\vec{\epsilon}_j,R_j)$, i.e., it lies in the kernel of the extended Jacobian: \begin{eqnarray} H' (\vec{\epsilon}_j,R_j) t_j = 0\;. \end{eqnarray} \item \textbf{Corrector}. This second part of the routine engages in the problem of adjusting the predictor's output $(\vec{\epsilon}^{\,(0)}_{j+1},R^{(0)}_{j+1})$ to a point actually lying on the solution curve. It does so by some iterative method for solving the equation $\vec{H} = 0$ starting from an initial, reasonably close, guess. The fastest and least expensive of these methods is Newton's method, which in our case would take the following form \begin{eqnarray} \vec{\epsilon}^{\,(\ell+1)}_{j+1} = \vec{\epsilon}^{\,(\ell)}_{j+1} - [\mathcal{J}(\vec{\epsilon}^{\,(\ell)}_{j+1},R^{(\ell )}_{j+1})]^{-1}\, H(\vec{\epsilon}^{\,(\ell)}_{j+1},R^{(\ell )}_{j+1})\;,\qquad R^{(\ell +1)}_{j+1} = R^{(\ell )}_{j+1} \end{eqnarray} were it not for the risk of encountering a point where $\mathcal J$ is not invertible. In fact we are concerned precisely with such an eventuality, it being the very reason that led us to consider the PALC method and the associated predictor-corrector routine. Hence, we need to appropriately modify Newton's method in order to accommodate the possibility of a singular $\mathcal J$, with $H'$ of maximal rank $N$. The way to handle such a situation is to consider the concept of \emph{quasi-inverse} (also called \emph{Moore-Penrose inverse}) $A^{+}$ of a matrix $A$, defined as \begin{eqnarray} A^{+} = A^{T}\,(A\,A^{T})^{-1}\;, \end{eqnarray} where a superscript $T$ denotes standard matrix transposition.
Notice that, if $A$ is a square matrix, the above definition is equivalent to the standard inverse. Now, if $A$ is instead an $N\times(N+1)$ matrix of maximal rank $N$ and $t$ is its tangent vector $At=0$, then the following statements are equivalent \begin{enumerate} \item $Ax = b$ \underline{and} $t^T x = 0$, \item $x = A^{+}b$, \item $x = \underset{v}{\textrm{argmin}}\Big[\,\vert\vert v\vert\vert \;\Big\vert\; Av = b \Big]$ which, in plain words, means that $x$ is the vector of minimal norm which solves the equation $Ax=b$. \end{enumerate} Without going too much into the details (see chapter 3 of \cite{allgower2012numerical}), the takeaway is that we can implement Newton's method in the usual way, as long as we trade the inverse of the Jacobian for the quasi-inverse of the extended Jacobian: \begin{eqnarray} (\vec{\epsilon}^{\,(\ell+1)}_{j+1},R^{(\ell +1)}_{j+1}) = (\vec{\epsilon}^{\,(\ell)}_{j+1},R^{(\ell )}_{j+1}) - [H'(\vec{\epsilon}^{\,(\ell)}_{j+1},R^{(\ell )}_{j+1})]^{+}\, H(\vec{\epsilon}^{\,(\ell)}_{j+1},R^{(\ell )}_{j+1})\;. \end{eqnarray} The above equation is then iterated as long as necessary, until reaching a point $(\vec{\epsilon}^{\,(L)}_{j+1},R^{(L)}_{j+1}) \equiv (\vec{\epsilon}_{j+1},R_{j+1})$ deemed, by some appropriate convergence test, close enough to a point on the solution curve. \end{itemize} Here follows a pseudo-code summarizing the procedure expounded above. As we can immediately see, the algorithm requires an initial point solving the TBA equation. This can be provided by using the standard iterative procedure of \S\ref{subsec:iterative} to solve the equation at some value of $R>R_*$. This will yield a solution $(\vec{\epsilon}_0,R_0)$ on the first branch, from which to start the PALC. \begin{algorithm}[t!] \caption{Euler-Newton predictor-corrector routine} \raggedright \textbf{Part 1: Input}\; \nl $(\vec{\epsilon}_0, R_0)$, s.t.
$\vec{H}(\vec{\epsilon}_0, R_0)=\vec{0}$\hspace{164pt}INITIAL POINT\; \nl $\delta s$\hspace{307pt}STEP SIZE\; \nl $N_{\textrm{step}}$\hspace{264pt} STEP NUMBER\; \nl $\eta\ll1$\hspace{203pt}NUMERICAL TOLERANCE\; \textbf{Part 2: Initialization}\; \nl Solve $(\mathcal J)_0\,\vec{x} = -\frac{d}{d\lambda}\vec{H}_0$\hspace{133pt}FIND INITIAL TANGENT\; \nl $(\vec{t},\tau) = \frac{(\vec{x},1)}{\sqrt{1+\vert\vert\vec{x}\vert\vert^2}}$\hspace{168pt}NORMALIZE TANGENT\; \For{$j = 1$ \textbf{to} $N_{\textrm{step}}$}{ \textbf{Part 3: Predictor}\; \nl Solve $\left(\begin{array}{c c} \mathcal J & \frac{d}{d\lambda}\vec{H} \\ \vec{t} & \tau\end{array}\right)_{j-1} \left(\begin{array}{c}\vec{t} \\ \tau\end{array}\right)_j = \left(\begin{array}{c} 0 \\ 1 \end{array}\right)$\hspace{44pt}FIND NEW TANGENT\; \nl $(\vec{\epsilon}_{j+1}^{(0)}, R_{j+1}^{(0)}) = (\vec{\epsilon}_{j}, R_{j})+ \delta s\frac{(\vec{t}_j,\tau_j)}{\vert\vert(\vec{t}_j,\tau_j)\vert\vert}$\hspace{83pt}EULER PREDICTOR\; \textbf{Part 4: Corrector}\; \For{$\ell = 0$ \textbf{to} $\infty\,,$ \textbf{until break}}{ \nl $(\delta\vec{\epsilon},\delta R) = - \left[H'\left(\vec{\epsilon}_{j+1}^{(\ell)}, R_{j+1}^{(\ell)}\right)\right]^{+} \vec{H}\left(\vec{\epsilon}_{j+1}^{(\ell)}, R_{j+1}^{(\ell)}\right)$ \hspace{-6.5pt} CORRECTION STEP\; \nl $\left(\vec{\epsilon}_{j+1}^{(\ell+1)}, R_{j+1}^{(\ell+1)}\right) = \left(\vec{\epsilon}_{j+1}^{(\ell)}, R_{j+1}^{(\ell)}\right) + (\delta\vec{\epsilon},\delta R)$ \hspace{62.5pt}RELAXATION\; \nl \textbf{if} $\vert\vert\delta\vert\vert < \eta$ \textbf{BREAK}\hspace{84pt}CONVERGENCE CONDITION\; } \nl $\left(\vec{\epsilon}_{j+1}, R_{j+1}\right) = \left(\vec{\epsilon}_{j+1}^{(\ell)}, R_{j+1}^{(\ell)}\right)$\hspace{93pt} MOVE TO NEXT POINT\; } \end{algorithm} \newpage \bibliography{biblio} \end{document}
TITLE: How to change $Cx^2 + Dy^2 + Ex + Fy + G = 0$ to $(x-h)^2/a^2 \pm (y-k)^2/b^2=1$ using only the variables C, D, E, F, and G QUESTION [0 upvotes]: Or, state the terms $a$, $b$, $h$, and $k$ in terms of $C$, $D$, $E$, $F$, and/or $G$ $Cx^2 + Dy^2 + Ex + Fy + G = 0$ $(x-h)^2/a^2 \pm (y-k)^2/b^2=1$ REPLY [0 votes]: $$\begin{array}{rcl} \\ Cx^2+Dy^2+Ex+Fy+G&=&0 \\ C(x^2+ \frac EC x)+D(y^2+\frac FD y)&=&-G \\ C(x+\frac E{2C})^2-\frac{E^2}{4C}+D(y+\frac F{2D})^2-\frac{F^2}{4D}&=&-G \\ C(x+\frac E{2C})^2+D(y+\frac F{2D})^2&=&-G+\frac{E^2}{4C}+\frac{F^2}{4D} \end{array}$$ So $$\begin{array}{cr} \\ h=-\frac E{2C} \\a^2= \frac{-G+\frac{E^2}{4C}+\frac{F^2}{4D}}{C} \\ k=-\frac F{2D} \\b^2= \frac{-G+\frac{E^2}{4C}+\frac{F^2}{4D}}{D} \end{array}$$
TITLE: Evaluate the partial derivatives of $g(x)$ QUESTION [0 upvotes]: $g\begin{pmatrix}x\\y \end{pmatrix} = \begin{pmatrix} \sqrt{x^2 + y^2} \\ \arctan\frac{y}{x} \end{pmatrix}$ I got $g'\begin{pmatrix}x\\y \end{pmatrix} = \begin{pmatrix} 2x\sqrt{x^2 + y^2} & 2y\sqrt{x^2 + y^2} \\ \frac{1}{1 + y/x^2} & \frac{1}{1 + y^2/x} \end{pmatrix}$ I differentiated $\sqrt{x^2+y^2}$ using the chain rule for both $\partial y$ and $\partial x$. Everything just looks really weird and I have no intuition whether I did this right or not and would like a second opinion before moving on with the problem, thanks REPLY [0 votes]: If $g_1(x,y)=\sqrt{x^2+y^2}=(x^2+y^2)^{1/2}$, then $$\frac{\partial g_1}{\partial x}(x,y)=\frac12(x^2+y^2)^{-1/2}\cdot2x=\frac{x}{\sqrt{x^2+y^2}}$$ the same occurs with respect to $y$. Also, if $g_2(x,y)=\arctan(\tfrac{y}{x})$, then $$\frac{\partial g_2}{\partial x}(x,y)=\cfrac{1}{1+(\tfrac{y}{x})^2}\cdot\Big(-\frac{y}{x^2}\Big)=-\frac{y}{x^2+y^2}$$ and $$\frac{\partial g_2}{\partial y}(x,y)=\cfrac{1}{1+(\tfrac{y}{x})^2}\cdot\frac{1}{x}=\frac{x}{x^2+y^2}$$
TITLE: Integral closure in an infinite algebraic extension QUESTION [1 upvotes]: If $A$ is a principal ideal domain and $L/Q(A)$ a finite field extension, then it follows from Krull-Akizuki theorem that the integral closure of $A$ in $L$ is a Dedekind domain. Now if $L/Q(A)$ is an infinite algebraic extension, can we say that the integral closure of $A$ in $L$ is not Noetherian? REPLY [1 votes]: The integral closure of $\Bbb{Z}_p$ ($p$-adic integers) in $\bigcup_{n\ge 1}\Bbb{Q}_p(\zeta_{p^n-1})$ is the DVR $\bigcup_{n\ge 1}\Bbb{Z}_p[\zeta_{p^n-1}]$ (in particular Noetherian). So there is no easy rule.
TITLE: What does this proof of Fermat's little theorem mean for Euler's theorem? QUESTION [0 upvotes]: The following proof of Fermat's little theorem is semi-standard: We prove that $a^p-a \equiv 0 \mod p$ by induction on $a.$ For $a = 2$, we write $2^p = (1+1)^p = 2 + \sum_{i=1}^{p-1} \binom{p}{i},$ and since each of the binomial coefficients is divisible by $p,$ we are done. Now, $(a+1)^p = a^p + 1 + \sum_{i=1}^{p-1} \binom{p}{i}a^i \equiv a^p + 1 \equiv a+1$ (by induction). Now, suppose we wanted to prove Euler's theorem ($a^{\phi(n)} \equiv 1 \bmod n$). The very first step of the induction seems to tell us something nontrivial about the binomial coefficients, but is there a direct way to see it? REPLY [9 votes]: As mentioned in the comments, there is no induction, because Euler's theorem isn't true for all values of $a$, only those coprime to $n$. Alternatively, here is a generalization of Fermat's little theorem where $p$ isn't necessarily prime but where $a$ is arbitrary: it is the congruence $$\sum_{d \mid n} \varphi \left( \frac{n}{d} \right) a^d \equiv 0 \bmod n.$$ I don't know a name for this congruence which is in widespread use; I call it the necklace congruence. (Actually there are two necklace congruences; the other one involves $\mu \left( \frac{n}{d} \right)$.) The proof is very short: you can show using Burnside's lemma that the LHS, divided by $n$, counts the number of orbits of size $n$ of the action of the cyclic group $C_n$ on the set of necklaces of length $n$ with $a$ colors. When $n$ is prime this gives a more direct combinatorial proof of Fermat's little theorem than the one you describe, which amounts to counting necklaces in a more laborious way.
Replacing $a$ with $a + 1$ gives $$\sum_{d \mid n} \varphi \left( \frac{n}{d} \right) \sum_{i=0}^d {d \choose i} a^i \equiv 0 \bmod n$$ but we can't conclude anything about the coefficients of this thing as a polynomial in $a$ because it's no longer true that the values $\bmod n$ of a polynomial of degree $\le n$ mostly determine its coefficients. (When $p$ is prime this is true except for the leading term.) A more natural way to get binomial coefficients into the game is to break up the count of necklaces by how many beads of each color there are. You can show, using for example the Polya enumeration theorem, that this gives the congruence $$\sum_{d \mid n} \varphi \left( \frac{n}{d} \right) (r_1^{n/d} + \dots + r_a^{n/d})^d \equiv 0 \bmod n$$ where the $r_i$ are actually indeterminates this time; the power of $r_i$ counts how many times the $i^{th}$ color appears. As a statement about number-theoretic properties of the binomial coefficients, this appears to be related to Lucas' theorem, although a more direct proof is possible along these lines: see, for example, this blog post.
In this chapter, we study multi-objective power allocation for energy-efficient SWIPT with separated receivers. In a MISO system, information signals and energy beams are transmitted simultaneously to jointly support information delivery to an information receiver and energy supply to an energy harvester. Under a maximum transmit power constraint, we focus on three desired system design objectives, namely, information receiving efficiency (IR-EE) maximization, energy harvesting efficiency (EH-EE) maximization, and total transmit power minimization. In particular, we jointly optimize the information beamforming vector and the covariance matrix of the energy signal to achieve the considered system objective. The problem is formulated as a non-convex MOOP. To deal with the fractional objective functions, the Charnes-Cooper transformation method is adopted. Subsequently, the transformed problem is solved by the semi-definite programming (SDP) relaxation approach. We prove that the SDP relaxation is tight. In particular, a tractable structure of the optimal solution is verified. Simulation results show the trade-off between IR-EE, EH-EE, and the total transmit power. \section{System Model} \begin{figure} \centering \includegraphics[scale=0.8]{system_model2} \caption{A SWIPT system with separated receivers.} \label{fig:system_model2} \end{figure} We focus on a downlink MISO system with SWIPT. The system consists of one multi-antenna transmitter, one single-antenna information receiver, and one single-antenna energy harvester. The transmitter is equipped with $N_\mathrm{T}$ antennas. It sends a precoded information signal and energy beams simultaneously to facilitate information transmission and power transfer, cf. Figure \ref{fig:system_model2}. The transmission is divided into time slots.
The transmitted signal in each time slot is given by \begin{eqnarray} \mathbf{x}=\mathbf{w}_\mathrm{I}s+\mathbf{w}_\mathrm{E}, \end{eqnarray} where $s\in \mathbb{C}$ is the information-bearing symbol with $\est{{\abs{s}}^2}=1$. $\mathbf{w}_\mathrm{I}\in \mathbb{C}^{{N_\mathrm{T}}\times 1}$ is the corresponding precoded beamforming vector for the information receiver. $\mathbf{w}_\mathrm{E}\in \mathbb{C}^{{N_\mathrm{T}}\times 1}$ is the energy signal beamforming vector facilitating energy transfer to the energy harvester. The energy beamforming vector $\mathbf{w}_\mathrm{E}$ is modeled as a complex Gaussian pseudo-random sequence, $\mathbf{w}_\mathrm{E}\sim{\cal CN}(0,\mathbf{W}_\mathrm{E})$, where $\mathbf{W}_\mathrm{E}=\est{\mathbf{w}_\mathrm{E}\mathbf{w}_{\mathrm{E}}^H}$ is the covariance matrix of the energy signal. We assume $\mathbf{w}_\mathrm{E}$ is generated at the transmitter by a pseudo-random sequence generator with a predefined seed. The seed can be delivered to the information receiver before effective information transmission. Thus, the interference of the energy signal can be completely cancelled at the information receiver. We assume a narrow-band slow fading channel between the transmitter and the receivers. The channel is assumed to be perfectly known at the transmitter. Then, the received signals at the information receiver and energy harvester are expressed as \begin{eqnarray} y_\mathrm{IR}=\mathbf{h}^H(\mathbf{w}_\mathrm{I}s+\mathbf{w}_\mathrm{E})+n_\mathrm{I},\\ y_\mathrm{EH}=\mathbf{g}^H(\mathbf{w}_\mathrm{I}s+\mathbf{w}_\mathrm{E})+n_\mathrm{E}, \end{eqnarray} where $\mathbf{h}\in \mathbb{C}^{N_{\mathrm{T}}\times 1}$ is the channel vector between the transmitter and the information receiver, and $\mathbf{g}\in \mathbb{C}^{N_{\mathrm{T}}\times 1}$ is the channel vector between the transmitter and the energy harvester. They capture the joint effect of multipath fading and path loss.
$n_\mathrm{I}\in \mathbb{C}$ and $n_\mathrm{E}\in \mathbb{C}$ are the additive white Gaussian noise (AWGN) at the information receiver and the energy harvester, respectively, distributed as ${\cal CN}(0,\sigma_\mathrm{I}^2)$ and ${\cal CN}(0,\sigma_\mathrm{E}^2)$. The information receiver focuses on decoding the information signal. The achievable rate (bit/s/Hz) at the information receiver can be written as \begin{eqnarray} R=\log_2\Big(1+\frac{1}{\sigma_\mathrm{I}^2}\abs{\mathbf{h}^H\mathbf{w}_\mathrm{I}}^2\Big)=\log_2\Big(1+\frac{1}{\sigma_\mathrm{I}^2}\mathbf{w}_{\mathrm{I}}^H\mathbf{H}\mathbf{w}_\mathrm{I}\Big), \end{eqnarray} where $\mathbf{H}=\mathbf{h}\mathbf{h}^H$. At the same time, both the information signal and the energy signal can act as RF energy sources for the energy harvester due to the broadcast nature of wireless channels. According to the law of energy conservation, the harvested energy is proportional to the received signal power. The total harvested energy at the energy harvester is given by \begin{eqnarray} P_\mathrm{harv}(\mathbf{w}_\mathrm{I},\mathbf{W}_\mathrm{E})=\eta\Big( \abs{\mathbf{g}^H\mathbf{w}_\mathrm{I}}^2+\abs{\mathbf{g}^H\mathbf{w}_\mathrm{E}}^2\Big) =\eta\Big(\mathbf{w}^H_\mathrm{I}\mathbf{G}\mathbf{w}_\mathrm{I}+\Tr(\mathbf{G}\mathbf{W}_\mathrm{E})\Big), \end{eqnarray} with $\mathbf{G}=\mathbf{g}\mathbf{g}^H$. $\eta$ is the energy conversion efficiency, a constant with $0\leq\eta\leq1$, which accounts for the energy loss in converting the received RF energy to electrical energy for storage. We ignore the thermal noise at the receiving antenna as it is relatively small compared to the received signal power. Apart from system throughput, EE is also a fundamental system performance metric in modern communication networks. EE is generally defined as the ratio between the system throughput and the total power consumption. 
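As a numerical illustration of the two received-signal metrics above, the following Python sketch evaluates the achievable rate and the harvested energy for plain complex vectors; the channel values are toy numbers chosen for illustration only, not outputs of the system model.

```python
import math

def hdot(a, b):
    """Hermitian inner product a^H b for complex vectors."""
    return sum(x.conjugate() * y for x, y in zip(a, b))

def achievable_rate(h, w_I, sigma2_I):
    """R = log2(1 + |h^H w_I|^2 / sigma_I^2); no energy-signal term appears
    because its interference is cancelled at the information receiver."""
    return math.log2(1.0 + abs(hdot(h, w_I)) ** 2 / sigma2_I)

def harvested_power(g, w_I, W_E, eta):
    """P_harv = eta * (|g^H w_I|^2 + g^H W_E g), using Tr(G W_E) = g^H W_E g."""
    WEg = [sum(row[j] * g[j] for j in range(len(g))) for row in W_E]
    quad = sum(g[i].conjugate() * WEg[i] for i in range(len(g))).real
    return eta * (abs(hdot(g, w_I)) ** 2 + quad)

# Toy two-antenna example:
h, g = [1 + 0j, 0j], [0j, 1 + 0j]
print(achievable_rate(h, [1 + 0j, 0j], sigma2_I=1.0))  # 1.0 bit/s/Hz
print(harvested_power(g, [0j, 1 + 0j], [[0j, 0j], [0j, 2 + 0j]], eta=0.5))  # 1.5
```

Note that both the beamformed information signal and the covariance term of the energy signal contribute to the harvested power, matching the second expression above.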
We first model the total power consumption (in Joules per second) by taking into account the transmit power consumption and the additional hardware power dissipation at the transmitter, which can be described as \begin{eqnarray}\label{eqn:Ptotal} P_\mathrm{tot}(\mathbf{w}_\mathrm{I},\mathbf{W}_\mathrm{E})=\frac{\norm{\mathbf{w}_\mathrm{I}}^2+\Tr(\mathbf{W}_\mathrm{E})}{{\xi}}+N_\mathrm{T}P_\mathrm{ant}+P_\mathrm{c}. \end{eqnarray} \noindent $\xi$ is the power amplifier efficiency, a constant with $0\leq\xi\leq1$. The first term in (\ref{eqn:Ptotal}) is the total power consumption of the power amplifier. ${N_{\mathrm{T}}P_{\mathrm{ant}}}$ accounts for the dynamic circuit power consumption, which is proportional to the number of transmit antennas. $P_\mathrm{ant}$ denotes the power dissipation per transmit antenna, including the transmit filter, mixer, frequency synthesizer, digital-to-analog converter (DAC), etc. $P_{\mathrm{c}}$ denotes the fixed circuit power consumption due to baseband signal processing. Based on the general concept of efficiency, we define IR-EE and EH-EE as \begin{eqnarray} \Phi_\mathrm{IR}(\mathbf{w}_\mathrm{I},\mathbf{W}_\mathrm{E})&=&\frac{R}{P_\mathrm{tot}}\,\,\,\,=\frac{\log_2(1+\frac{1}{\sigma_\mathrm{I}^2}\mathbf{w}^H_\mathrm{I}\mathbf{H}\mathbf{w}_\mathrm{I})}{(\norm{\mathbf{w}_\mathrm{I}}^2+\Tr(\mathbf{W}_\mathrm{E}))/{\xi}+N_\mathrm{T}P_\mathrm{ant}+P_\mathrm{c}}\\ \notag\\ \mathrm{and}\quad\Phi_\mathrm{EH}(\mathbf{w}_\mathrm{I},\mathbf{W}_\mathrm{E})&=&\frac{P_\mathrm{harv}}{P_\mathrm{tot}}=\frac{\eta(\mathbf{w}^H_\mathrm{I}\mathbf{G}\mathbf{w}_\mathrm{I}+\Tr(\mathbf{G}\mathbf{W}_\mathrm{E}))}{(\norm{\mathbf{w}_\mathrm{I}}^2+\Tr(\mathbf{W}_\mathrm{E}))/{\xi}+N_\mathrm{T}P_\mathrm{ant}+P_\mathrm{c}}, \end{eqnarray} respectively. \section{Problem Formulation} In a SWIPT system, IR-EE maximization, EH-EE maximization, and total transmit power minimization are all desirable for system design. 
In this section, we first propose three single-objective problem formulations for the system design of SWIPT. Each single-objective problem describes one important aspect of the system design. Then, we consider the three system design objectives jointly via MOO. The first system design objective is the maximization of IR-EE. The optimization problem is formulated as \begin{Prob}IR-EE Maximization:\label{prob:WIPT_IR-EE} \begin{eqnarray} \underset{\mathbf{W}_{\mathrm{E}}\in\mathbb{H}^{N_\mathrm{T}},\mathbf{w}_{\mathrm{I}}}{\maxo}\,\, &&\Phi_{\mathrm{IR}}(\mathbf{w}_\mathrm{I},\mathbf{W}_\mathrm{E})\notag\\ \mathrm{subject\,\,to}\,\, &&\mathrm{C1}:\,\,\norm{\mathbf{w}_\mathrm{I}}^2+\Tr(\mathbf{W}_{\mathrm{E}})\leq P_{\mathrm{max}},\notag\\ &&\mathrm{C2}:\,\,\mathbf{W}_{\mathrm{E}}\succeq \mathbf{0}.\notag \end{eqnarray} \end{Prob} The second system design objective is the maximization of EH-EE. The problem formulation is given as \begin{Prob}EH-EE Maximization:\label{prob:WIPT_EH-EE} \begin{eqnarray} \underset{\mathbf{W}_{\mathrm{E}}\in\mathbb{H}^{N_\mathrm{T}},\mathbf{w}_{\mathrm{I}}}{\maxo}\,\, &&\Phi_{\mathrm{EH}}(\mathbf{w}_\mathrm{I},\mathbf{W}_\mathrm{E})\notag\\ \mathrm{subject\,\,to}\,\, &&\mathrm{C1,\,C2}.\notag \end{eqnarray} \end{Prob} The third system design objective is the minimization of the total transmit power at the transmitter. The problem formulation is given as \begin{Prob}Total Transmit Power Minimization:\label{prob:WIPT_Ptrans} \begin{eqnarray} \underset{\mathbf{W}_{\mathrm{E}}\in\mathbb{H}^{N_\mathrm{T}},\mathbf{w}_{\mathrm{I}}}{\mino}\,\, &&\norm{\mathbf{w}_\mathrm{I}}^2+\Tr(\mathbf{W}_{\mathrm{E}})\notag\\ \mathrm{subject\,\,to}\,\, &&\mathrm{C1,\,C2}.\notag \end{eqnarray} \end{Prob} In the above problem formulations, IR-EE, EH-EE, and the total transmit power are each optimized independently. 
In each single-objective problem, the information beamforming vector $\mathbf{w}_\mathrm{I}$ and the covariance matrix of the energy signal $\mathbf{W}_{\mathrm{E}}$ are jointly designed subject to the maximum transmit power constraint C1. In addition, the covariance matrix $\mathbf{W}_{\mathrm{E}}$ must be a positive semi-definite Hermitian matrix, as indicated by constraint C2. For notational simplicity, we denote the objective functions of the above problems as $F_j(\mathbf{w}_{\mathrm{I}},\mathbf{W}_{\mathrm{E}})$, $j=1,2,3$. We note that Problem \ref{prob:WIPT_Ptrans} is a trivial problem with optimal value zero, since the transmitter does not need to provide any QoS to the receiver. Yet, Problem \ref{prob:WIPT_Ptrans} plays an important role in the following when we study the multi-objective power allocation algorithm design. Without loss of generality, Problem \ref{prob:WIPT_Ptrans} can be rewritten as an equivalent maximization problem in order to represent the three problems consistently. The corresponding objective function is written as $F_3(\mathbf{w}_{\mathrm{I}},\mathbf{W}_{\mathrm{E}})=-(\norm{\mathbf{w}_\mathrm{I}}^2+\Tr(\mathbf{W}_{\mathrm{E}}))$. In practice, these three optimization objectives are all desirable from the system operator's perspective. However, there are non-trivial trade-offs between them. In order to optimize these conflicting system design objectives systematically and simultaneously, we apply MOO, cf. Appendix \ref{app:intro_MOOP}. A common approach to formulating a MOOP is via an \emph{a priori} method. This method allows the system designer to indicate the relative importance of the system design objectives before running the optimization algorithm. In particular, a set of scalars, known as ``preference parameters'' or ``weights'', is specified a priori to encode the system designer's preference for the different objectives. There are many scalarization methods. 
Here we adopt the weighted min-max method \cite{JR:MOOP}. As introduced in Appendix \ref{app:intro_MOOP}, the optimal points of a MOOP are defined by Pareto optimality. All Pareto optimal points, which form the Pareto optimal set, are of interest to the system designer. In fact, the weighted min-max method can provide the complete Pareto optimal set by varying the preference parameters, despite the non-convexity of the MOOP. Based on the weighted min-max method, we incorporate the three system design objectives into a MOOP, which is formulated as \begin{Prob}Multi-Objective Optimization Problem:\label{prob:multiobj_WIPTsepuser} \begin{eqnarray} \underset{\mathbf{W}_{\mathrm{E}}\in\mathbb{H}^{N_\mathrm{T}},\mathbf{w}_{\mathrm{I}}}{\mino}\,\,&&\max_{j=1,2,3}\,\, \Big\{\omega_j(F_j^*-F_j(\mathbf{w}_{\mathrm{I}},\mathbf{W}_{\mathrm{E}}))\Big\}\notag\\ \mathrm{subject\,\,to} &&\mathrm{C1,\,C2},\notag \end{eqnarray} \end{Prob} \noindent where $F_j^*$ is the optimal objective value of Problem $j$. $\omega_j$ is the weight imposed on objective function $j$, subject to $0\leq\omega_j\leq1$ and $\sum_j\omega_j=1$, which indicates the system designer's preference for the $j$th objective function over the others. In the extreme case where $\omega_j=1$ and $\omega_i=0$, $\forall i\neq j$, Problem \ref{prob:multiobj_WIPTsepuser} is equivalent to single-objective optimization problem $j$. In this MOOP, we investigate the complete trade-off region between the three objectives with respect to the system power allocation. To this end, we only take the maximum transmit power constraint into consideration. If other QoS constraints are imposed on the MOOP, a smaller Pareto optimal set is obtained, which is a subset of the complete trade-off region. 
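The weighted min-max scalarization used in Problem \ref{prob:multiobj_WIPTsepuser} can be sketched in a few lines of Python; the objective values below are illustrative placeholders, not outputs of the actual optimization.

```python
def weighted_minmax(weights, f_star, f):
    """Scalarized objective max_j w_j * (F_j^* - F_j); minimizing it over the
    feasible set yields a Pareto optimal point for each admissible weight vector."""
    assert abs(sum(weights) - 1.0) < 1e-9  # weights satisfy sum_j w_j = 1
    return max(w * (fs - fj) for w, fs, fj in zip(weights, f_star, f))

# With w = (1, 0, 0) only the IR-EE gap matters, recovering Problem 1:
print(weighted_minmax([1.0, 0.0, 0.0], f_star=[2.0, 5.0, 0.0], f=[1.5, 4.0, -0.3]))  # 0.5
```

Sweeping the weight vector over a grid, as done in the simulations later in this chapter, traces out the trade-off region point by point.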
\section{Multi-Objective Power Allocation Algorithm Design} It can be observed that Problem \ref{prob:WIPT_IR-EE} and Problem \ref{prob:WIPT_EH-EE} are non-convex due to the fractional form of their objectives, which in turn renders Problem \ref{prob:multiobj_WIPTsepuser} non-convex. In order to obtain a tractable solution, we first transform the non-convex objective functions by the Charnes-Cooper transformation method. Then, the transformed problems are solved by the SDP relaxation approach. We first reformulate the aforementioned three single-objective optimization problems by defining a set of new optimization variables as follows: \begin{eqnarray}\label{eqn:newvariabledefine} \mathbf{W}_\mathrm{I}=\mathbf{w}_\mathrm{I}\mathbf{w}_\mathrm{I}^H,\,\,\theta=\frac{1}{P_\mathrm{tot}(\mathbf{w}_\mathrm{I},\mathbf{W}_\mathrm{E})},\,\,\overline{\mathbf{W}}_\mathrm{I}=\theta\mathbf{W}_\mathrm{I},\,\,\mathrm{and}\,\,\overline{\mathbf{W}}_\mathrm{E}=\theta\mathbf{W}_\mathrm{E}. \end{eqnarray} Then the original problems can be rewritten with respect to the new optimization variables $\{\overline{\mathbf{W}}_\mathrm{I},\overline{\mathbf{W}}_\mathrm{E}, \theta\}$, which are given by \begin{Prob}Transformed IR-EE Maximization Problem:\label{prob:WIPT_IR-EE_reform} \begin{eqnarray} \underset{\overline{\mathbf{W}}_\mathrm{I},\overline{\mathbf{W}}_\mathrm{E}\in\mathbb{H}^{N_\mathrm{T}},\theta}{\maxo}\,\, &&\theta\log_2(1+\frac{\Tr(\mathbf{H}\overline{\mathbf{W}}_\mathrm{I})}{\theta\sigma_\mathrm{I}^2})\notag\\ \mathrm{subject\,\,to}\,\, &&\overline{\mathrm{C1}}:\,\,\Tr(\overline{\mathbf{W}}_\mathrm{I}+\overline{\mathbf{W}}_\mathrm{E})\leq\theta P_{\mathrm{max}},\notag\\ &&\overline{\mathrm{C2}}:\,\,\overline{\mathbf{W}}_\mathrm{I}\succeq \mathbf{0},\,\,\overline{\mathbf{W}}_\mathrm{E}\succeq \mathbf{0},\notag\\ &&\overline{\mathrm{C3}}:\,\,\Rank(\overline{\mathbf{W}}_\mathrm{I})\leq1,\notag\\ 
&&\overline{\mathrm{C4}}:\,\,\frac{\Tr(\overline{\mathbf{W}}_\mathrm{I}+\overline{\mathbf{W}}_\mathrm{E})}{\xi}+\theta(N_\mathrm{T}P_\mathrm{ant}+P_\mathrm{c})\leq1,\notag\\ &&\overline{\mathrm{C5}}:\,\,\theta\ge0. \notag \end{eqnarray} \end{Prob} \begin{Prob}Transformed EH-EE Maximization Problem:\label{prob:WIPT_EH-EE_reform} \begin{eqnarray} \underset{\overline{\mathbf{W}}_\mathrm{I},\overline{\mathbf{W}}_\mathrm{E}\in\mathbb{H}^{N_\mathrm{T}},\theta}{\maxo}\,\, &&\eta\Tr(\mathbf{G}(\overline{\mathbf{W}}_\mathrm{I}+\overline{\mathbf{W}}_\mathrm{E}))\notag\\ \mathrm{subject\,\,to}\,\, &&\overline{\mathrm{C1}} - \overline{\mathrm{C5}}.\notag \end{eqnarray} \end{Prob} \begin{Prob}Transformed Total Transmit Power Minimization Problem:\label{prob:WIPT_Ptrans_reform} \begin{eqnarray} \underset{\overline{\mathbf{W}}_\mathrm{I},\overline{\mathbf{W}}_\mathrm{E}\in\mathbb{H}^{N_\mathrm{T}},\theta}{\maxo}\,\, &&-\xi(\frac{1}{\theta}-N_\mathrm{T}P_\mathrm{ant}-P_\mathrm{c})\notag\\ \mathrm{subject\,\,to}\,\, &&\overline{\mathrm{C1}} - \overline{\mathrm{C5}}.\notag \end{eqnarray} \end{Prob} Denote the transformed objective functions as $\overline{F_j}(\overline{\mathbf{W}}_\mathrm{I},\overline{\mathbf{W}}_\mathrm{E},\theta)$, $j=1,2,3$. Constraints $\overline{\mathbf{W}}_\mathrm{I}\succeq \mathbf{0}$, $\overline{\mathbf{W}}_\mathrm{I}\in\mathbb{H}^{N_\mathrm{T}}$, and $\Rank(\overline{\mathbf{W}}_\mathrm{I})\leq1$ are imposed to guarantee that $\overline{\mathbf{W}}_\mathrm{I}=\theta\mathbf{w}_\mathrm{I}\mathbf{w}_\mathrm{I}^H$ holds. Constraints $\overline{\mathrm{C4}}$ and $\overline{\mathrm{C5}}$ are introduced by the proposed transformation. Furthermore, in order to simplify the subsequent algorithm design, we first normalize the transformed objective functions, since they have different ranges and dimensions. 
A robust transformation, regardless of the original range or dimension of the objective function, is given as follows \cite{JR:MOOP}: \begin{eqnarray}\label{eqn:normalization} \overline{F_j}^{\mathrm{nml}}(\overline{\mathbf{W}}_\mathrm{I},\overline{\mathbf{W}}_\mathrm{E},\theta)=\frac{\overline{F_j}(\overline{\mathbf{W}}_\mathrm{I},\overline{\mathbf{W}}_\mathrm{E},\theta)-\overline{F_j}^0}{\overline{F_j}^*-\overline{F_j}^0}, \end{eqnarray} where $\overline{F_j}^*$ and $\overline{F_j}^0$ are the maximum and minimum values of the $j$th transformed objective function, i.e., $\overline{F_j}^0\leq \overline{F_j}(\overline{\mathbf{W}}_\mathrm{I},\overline{\mathbf{W}}_\mathrm{E},\theta)\leq \overline{F_j}^*$. $\overline{F_j}^*$ can be obtained from the transformed single-objective problems. Then, the transformed objective functions are normalized to the range $[0,1]$. For MOOP \ref{prob:multiobj_WIPTsepuser}, the objective function can be rewritten in normalized form as $\max_{j=1,2,3}\,\, \{\omega_j(1-\overline{F_j}^{\mathrm{nml}}(\overline{\mathbf{W}}_\mathrm{I},\overline{\mathbf{W}}_\mathrm{E},\theta))\}$. A common approach to handling such a min-max optimization problem is to introduce an auxiliary optimization variable. Then, the MOOP can be transformed into its equivalent epigraph representation \cite{book:convex}, which is given by \begin{Prob}Transformed MOOP:\label{prob:multiobj_WIPTsepuser2} \begin{eqnarray} \underset{\overline{\mathbf{W}}_\mathrm{I},\overline{\mathbf{W}}_\mathrm{E}\in\mathbb{H}^{N_\mathrm{T}},\theta,\tau}{\mino}&&\tau \notag\\ \mathrm{subject\,\,to}\,\,\,&&\overline{\mathrm{C1}} - \overline{\mathrm{C5}},\notag\\ &&\overline{\mathrm{C6}}:\,\omega_j(1-\overline{F_j}^{\mathrm{nml}}(\overline{\mathbf{W}}_\mathrm{I},\overline{\mathbf{W}}_\mathrm{E},\theta))\leq \tau,\,\,\forall j,\notag \end{eqnarray} \end{Prob} \noindent where $\tau$ is the auxiliary optimization variable. 
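A small Python sketch of the normalization in (\ref{eqn:normalization}) together with the epigraph constraint $\overline{\mathrm{C6}}$; all numbers are illustrative, and the helper names are our own.

```python
def normalize(f, f_min, f_max):
    """(F - F^0) / (F^* - F^0): maps each objective value into [0, 1]."""
    return (f - f_min) / (f_max - f_min)

def min_feasible_tau(weights, f_nml):
    """Smallest tau satisfying C6-bar: w_j * (1 - F_j^nml) <= tau for all j."""
    return max(w * (1.0 - fn) for w, fn in zip(weights, f_nml))

# Three objectives with different native ranges, normalized then combined:
f_nml = [normalize(1.5, 0.0, 2.0), normalize(4.0, 0.0, 5.0), normalize(-0.3, -1.0, 0.0)]
tau = min_feasible_tau([0.4, 0.4, 0.2], f_nml)
assert 0.0 <= tau <= 1.0
```

At the optimum, $\tau$ equals this largest weighted gap, which is why the optimal value of the epigraph problem lies between zero and one.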
We note that the optimal value of Problem \ref{prob:multiobj_WIPTsepuser2} lies between zero and one. Now, we introduce the following proposition. \begin{proposition}\label{prop:equivalency} The transformed Problems \ref{prob:WIPT_IR-EE_reform}-\ref{prob:multiobj_WIPTsepuser2} are equivalent transformations of the original Problems \ref{prob:WIPT_IR-EE}-\ref{prob:multiobj_WIPTsepuser}, respectively. \end{proposition} \begin{proof} Please refer to Appendix \ref{app:Prop_equivalency}. \end{proof} Based on Proposition \ref{prop:equivalency}, we can recover the solution of the original problems via (\ref{eqn:newvariabledefine}). In particular, the optimal value $\overline{F_j}^*$ and the lower bound $\overline{F_j}^0$ of the $j$th transformed objective function equal those of the $j$th original objective function, i.e., $\overline{F_j}^*=F_j^*$ and $\overline{F_j}^0=F_j^0$, $j=1,2,3$. Thus, in (\ref{eqn:normalization}), the values $\overline{F_j}^0$ follow trivially from the original objective functions, namely $\overline{F_1}^0=\overline{F_2}^0=0$ and $\overline{F_3}^0=-P_\mathrm{max}$. We also denote $\overline{F_1}^*=\Phi_\mathrm{IR}^*$, $\overline{F_2}^*=\Phi_\mathrm{EH}^*$, and $\overline{F_3}^*=0$. We note that if Problem \ref{prob:multiobj_WIPTsepuser2} can be solved optimally by an algorithm, then the same algorithm can also be used to solve Problems \ref{prob:WIPT_IR-EE_reform}--\ref{prob:WIPT_Ptrans_reform}, since Problem \ref{prob:multiobj_WIPTsepuser2} is a generalization of Problems \ref{prob:WIPT_IR-EE_reform}--\ref{prob:WIPT_Ptrans_reform}. Thus, we focus on solving Problem \ref{prob:multiobj_WIPTsepuser2}. It is evident that Problem \ref{prob:multiobj_WIPTsepuser2} is non-convex due to the rank-one beamforming matrix constraint $\overline{\mathrm{C3}}:\,\,\Rank(\overline{\mathbf{W}}_\mathrm{I})\leq1$. Now, we apply SDP relaxation by removing constraint $\overline{\mathrm{C3}}$ from Problem \ref{prob:multiobj_WIPTsepuser2}. 
As a result, the SDP relaxed problem is given by \begin{Prob}SDP Relaxed Transformed MOOP:\label{prob:multiobj_WIPTsepuser_relaxed} \begin{eqnarray} \underset{\overline{\mathbf{W}}_\mathrm{I},\overline{\mathbf{W}}_\mathrm{E}\in\mathbb{H}^{N_\mathrm{T}},\theta,\tau}{\mino}&&\tau \notag\\ \mathrm{subject\,\,to}\,\,\, &&\overline{\mathrm{C1}},\,\overline{\mathrm{C2}},\,\overline{\mathrm{C4}},\,\overline{\mathrm{C5}},\,\overline{\mathrm{C6}},\notag \end{eqnarray} \end{Prob} \noindent which is a convex SDP and can be solved by numerical convex program solvers such as CVX \cite{website:CVX}. In particular, if the obtained solution $\overline{\mathbf{W}}_\mathrm{I}^*$ of the SDP relaxed problem satisfies constraint $\overline{\mathrm{C3}}$, i.e., $\Rank(\overline{\mathbf{W}}_\mathrm{I}^*)\leq1$, then it is also the optimal solution of Problem \ref{prob:multiobj_WIPTsepuser2}. The optimal beamforming vector $\mathbf{w}_{\mathrm{I}}^*$ of the original problem can then be recovered via the invertible mapping in (\ref{eqn:newvariabledefine}). Now, we study the tightness of the SDP relaxation in the following theorem. \begin{Thm}\label{thm:rankone} The optimal solution of Problem \ref{prob:multiobj_WIPTsepuser_relaxed} satisfies $\Rank(\overline{\mathbf{W}}_\mathrm{I}^*)=1$ and $\Rank(\overline{\mathbf{W}}_\mathrm{E}^*)\leq1$. In particular, an optimal solution with $\Rank(\overline{\mathbf{W}}_\mathrm{I}^*)=1$ and $\overline{\mathbf{W}}_\mathrm{E}^*=\mathbf{0}$ can always be constructed. \end{Thm} \begin{proof} Please refer to Appendix \ref{app:rankone}. \end{proof} Therefore, the adopted SDP relaxation is tight. Moreover, Problems \ref{prob:WIPT_IR-EE_reform}--\ref{prob:WIPT_Ptrans_reform} can be solved by the same SDP relaxation approach as Problem \ref{prob:multiobj_WIPTsepuser_relaxed}. 
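The recovery step via the invertible mapping in (\ref{eqn:newvariabledefine}) can be checked numerically. The sketch below (all parameter values are illustrative assumptions) computes $\theta=1/P_\mathrm{tot}$ and verifies that, under this change of variables, constraint $\overline{\mathrm{C4}}$ holds with equality:

```python
def theta_of(p_tx, xi, n_t, p_ant, p_c):
    """theta = 1 / P_tot, where p_tx = ||w_I||^2 + Tr(W_E) is the transmit power."""
    return 1.0 / (p_tx / xi + n_t * p_ant + p_c)

def recover(W_bar, theta):
    """Invert W_bar = theta * W elementwise to obtain the original matrix W."""
    return [[x / theta for x in row] for row in W_bar]

p_tx, xi, n_t, p_ant, p_c = 1.0, 0.5, 2, 0.25, 0.5
theta = theta_of(p_tx, xi, n_t, p_ant, p_c)
# C4-bar: Tr(W_I_bar + W_E_bar)/xi + theta*(N_T*P_ant + P_c) = theta*P_tot = 1
lhs = theta * p_tx / xi + theta * (n_t * p_ant + p_c)
assert abs(lhs - 1.0) < 1e-12
```

This is the mechanical content of Proposition \ref{prop:equivalency}: dividing a solution of the transformed problem by $\theta$ returns a feasible point of the original problem with the same objective value.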
Next, we construct an optimal solution with $\Rank(\overline{\mathbf{W}}_\mathrm{I}^*)=1$ and $\overline{\mathbf{W}}_\mathrm{E}^*=\mathbf{0}$ based on Theorem \ref{thm:rankone}. We redefine the optimization variable $\overline{\mathbf{W}}_\mathrm{I}$ as \begin{eqnarray}\label{eqn:newvariabledefine2} \overline{\mathbf{W}}_\mathrm{I}=\lambda\mathbf{u}\mathbf{u}^H,\,\,\mathbf{u}=[u_1,u_2,\dots,u_{N_\mathrm{T}}]^T,\,\,\mathrm{and}\,\,\overline{\mathbf{W}}_\mathrm{E}=\mathbf{0}, \end{eqnarray} where $\mathbf{u}\in \mathbb{C}^{N_{\mathrm{T}}\times 1}$ is a unit-norm vector, i.e., $\sum_{i=1}^{N_\mathrm{T}}\left|u_i\right|^2=1$. According to (\ref{eqn:newvariabledefine}) and (\ref{eqn:newvariabledefine2}), we have $\mathbf{w}_\mathrm{I}=\sqrt{\frac{\lambda}{\theta}}\mathbf{u}$. Then, MOOP \ref{prob:multiobj_WIPTsepuser2} can be reformulated with respect to the optimization variables $\{\mathbf{u}, \lambda, \theta, \tau\}$ as follows: \begin{Prob}\label{prob:multiobj_WIPTsepuser3} \begin{eqnarray} \underset{\mathbf{u},\lambda,\theta,\tau}{\mino}&&\tau \notag\\ \mathrm{subject\,\,to} &&\widehat{\mathrm{C1}}:\,\lambda\sum_{i=1}^{N_\mathrm{T}}\left|u_i\right|^2\leq\theta P_{\mathrm{max}},\notag\\ &&\widehat{\mathrm{C2}}:\,\,\lambda\ge0,\quad\widehat{\mathrm{C3}}:\,\,\theta\ge0, \notag\\ &&\widehat{\mathrm{C4}}:\,\,\frac{\lambda\sum_{i=1}^{N_\mathrm{T}}\left|u_i\right|^2}{\xi}+\theta(N_\mathrm{T}P_\mathrm{ant}+P_\mathrm{c})\leq1,\notag\\ &&\widehat{\mathrm{C5}}:\,\omega_1\Big(1-\frac{\theta}{\Phi_\mathrm{IR}^*}\log_2\Big(1+\frac{\lambda\left|\sum_{i=1}^{N_\mathrm{T}}u_i^*h_i\right|^2}{\theta\sigma_\mathrm{I}^2}\Big)\Big)\leq \tau, \notag\\ &&\widehat{\mathrm{C6}}:\,\omega_2\Big(1-\frac{\eta}{\Phi_\mathrm{EH}^*}\lambda\left|\sum_{i=1}^{N_\mathrm{T}}u_i^*g_i\right|^2\Big)\leq \tau, \notag\\ &&\widehat{\mathrm{C7}}:\,\omega_3\frac{\xi}{P_\mathrm{max}}(\frac{1}{\theta}-N_\mathrm{T}P_\mathrm{ant}-P_\mathrm{c})\leq \tau, \notag \end{eqnarray} \end{Prob} \noindent where 
$h_i$ and $g_i$, $i\in\{1,\dots,N_\mathrm{T}\}$, are the elements of the channel vectors $\mathbf{h}$ and $\mathbf{g}$, respectively. In order to investigate the structure of the vector $\mathbf{u}$, we analyze the Karush-Kuhn-Tucker (KKT) conditions of Problem \ref{prob:multiobj_WIPTsepuser3} via its Lagrangian function, which is given by \begin{eqnarray} \label{eqn:lagrangian_moop3} &&{\cal L}\big(\mathbf{u},\lambda,\theta,\tau,\mu,\nu,\kappa_1,\kappa_2,\kappa_3,\zeta\big)\\ &=&\tau+\mu\big(\lambda\sum_{i=1}^{N_\mathrm{T}}\left|u_i\right|^2-\theta P_{\mathrm{max}}\big)+\nu\big(\frac{\lambda\sum_{i=1}^{N_\mathrm{T}}\left|u_i\right|^2}{\xi}+\theta(N_\mathrm{T}P_\mathrm{ant}+P_\mathrm{c})-1\big)-\zeta\theta\notag\\ &+&\kappa_1\Big[\omega_1\big(1-\frac{\theta}{\Phi_\mathrm{IR}^*}\log_2(1+\frac{\lambda\left|\sum_{i=1}^{N_\mathrm{T}}u_i^*h_i\right|^2}{\theta\sigma_\mathrm{I}^2})\big)-\tau\Big]\notag\\ &+&\kappa_2\Big[\omega_2\big(1-\frac{\eta}{\Phi_\mathrm{EH}^*}\lambda\left|\sum_{i=1}^{N_\mathrm{T}}u_i^*g_i\right|^2\big)-\tau\Big]+\kappa_3\Big[\omega_3\frac{\xi}{P_\mathrm{max}}(\frac{1}{\theta}-N_\mathrm{T}P_\mathrm{ant}-P_\mathrm{c})-\tau\Big],\notag \end{eqnarray} where $\mu,\nu,\kappa_1,\kappa_2,\kappa_3,\zeta$ are the dual variables associated with the corresponding constraints. Constraint $\widehat{\mathrm{C2}}$ is captured in the solution when deriving the KKT conditions in the following. Since Problem \ref{prob:multiobj_WIPTsepuser3} satisfies Slater's constraint qualification and is convex with respect to the optimization variables, strong duality holds. 
Then, based on the KKT optimality conditions, the gradient of the Lagrangian function with respect to $u_i$, the $i$th element of $\mathbf{u}$, vanishes, which yields\\ \begin{eqnarray}\label{eqn:w_i} u_i&=&\omega_1ah_i+\omega_2bg_i,\\ \mathrm{where}\quad a&=&\frac{\kappa_1\theta\lambda\big(\sum_{i=1}^{N_\mathrm{T}}u_i^*h_i\big)^*}{\Phi_\mathrm{IR}^*(\mu+\frac{\nu}{\xi})\Big(\theta\sigma_\mathrm{I}^2+\lambda\left|\sum_{i=1}^{N_\mathrm{T}}u_i^*h_i\right|^2\Big)}\notag\\ \mathrm{and}\quad b&=&\frac{\kappa_2\eta\lambda}{\Phi_\mathrm{EH}^*(\mu+\frac{\nu}{\xi})}\Big(\sum_{i=1}^{N_\mathrm{T}}u_i^*g_i\Big)^*.\notag \end{eqnarray} Similarly, the KKT condition with respect to $\lambda$ gives \begin{eqnarray}\label{eqn:lambda} \lambda=\theta\Bigg[\frac{\kappa_1\omega_1/\Phi_\mathrm{IR}^*}{\ln(2)\Big(\mu+\frac{\nu}{\xi}-\frac{\kappa_2\eta\omega_2}{\Phi_\mathrm{EH}^*}\left|\sum_{i=1}^{N_\mathrm{T}}u_i^*g_i\right|^2\Big)}-\frac{\sigma_\mathrm{I}^2}{\left|\sum_{i=1}^{N_\mathrm{T}}u_i^*h_i\right|^2}\Bigg]^+. \end{eqnarray} As we can see, (\ref{eqn:w_i}) and (\ref{eqn:lambda}) reveal the structure of the beamforming vector $\mathbf{w}_\mathrm{I}$, considering $\mathbf{w}_\mathrm{I}=\sqrt{\frac{\lambda}{\theta}}\mathbf{u}$. In particular, (\ref{eqn:w_i}) indicates the direction of the information signal, while (\ref{eqn:lambda}) shows that the power allocated to the information signal follows a water-filling policy. In the special case where IR-EE is considered and EH-EE is discarded, i.e., $\omega_1\neq0$ and $\omega_2=0$, the information beamforming vector $\mathbf{w}_\mathrm{I}$ aligns with the direction of the channel vector $\mathbf{h}$ according to (\ref{eqn:w_i}). 
Since $\sum_{i=1}^{N_\mathrm{T}}\left|u_i\right|^2=1$, we obtain \begin{eqnarray} \mathbf{w}_\mathrm{I}=\sqrt{p}\frac{\mathbf{h}}{\norm{\mathbf{h}}},\,\,\,\text{where}\,\,\,p=\Bigg[\frac{\kappa_1\omega_1/\Phi_\mathrm{IR}^*}{\ln(2)(\mu+\nu/\xi)}-\frac{\sigma_\mathrm{I}^2}{\norm{\mathbf{h}}^2}\Bigg]^+. \end{eqnarray} On the other hand, when IR-EE is not taken into account and EH-EE is maximized, i.e., $\omega_1=0$ and $\omega_2\neq0$, the beamforming vector $\mathbf{w}_\mathrm{I}$ points toward the energy harvester along the direction of the channel vector $\mathbf{g}$, as (\ref{eqn:w_i}) indicates. In this case, Problem \ref{prob:multiobj_WIPTsepuser3} becomes a linear program with respect to $\lambda$. In the extreme case where transmit power minimization is not considered either, i.e., $\omega_3=0$, we solve the single-objective problem of EH-EE maximization. Then, the optimal solution is given by \begin{eqnarray} \mathbf{w}_\mathrm{I}=\sqrt{P_\mathrm{max}}\frac{\mathbf{g}}{\norm{\mathbf{g}}},\,\,\,\Phi_\mathrm{EH}^*=\frac{\eta P_\mathrm{max}\norm{\mathbf{g}}^2}{\frac{P_\mathrm{max}}{\xi}+N_\mathrm{T}P_\mathrm{ant}+P_\mathrm{c}}. \end{eqnarray} Furthermore, when both IR-EE maximization and EH-EE maximization are active objectives, i.e., $\omega_1\neq0$ and $\omega_2\neq0$, $\mathbf{w}_\mathrm{I}$ serves as a dual-use beamforming vector for simultaneous information delivery and power transfer; (\ref{eqn:w_i}) shows that it incorporates the directions of both channel vectors $\mathbf{h}$ and $\mathbf{g}$. \section{Results} In this section, we present simulation results to demonstrate the performance of the multi-objective system design. The simulation parameters are summarized in Table \ref{table:parameters}. In particular, we adopt the TGn path loss model \cite{report:tgn}. The multipath fading is modeled as Rician fading with a Rician factor of $3$ dB. We assume a carrier center frequency of $470$ MHz with a bandwidth of $200$ kHz. 
At the transmitter, we set the dynamic power consumption to $P_\mathrm{ant}=75$ mW per antenna, the static circuit power consumption to $P_\mathrm{c}=1$ W, and the power amplifier efficiency to $\xi=0.4$. The maximum transmit power is $P_\mathrm{max}=1$ W. The two receivers, namely, the information receiver and the energy harvester, are uniformly located between the reference distance of $1$ meter and the maximum service distance of $10$ meters. Each receiver is equipped with a single antenna with an antenna gain of $10$ dBi. We assume that the noise variances at the information receiver and the energy harvester are identical, i.e., $\sigma_\mathrm{I}^2=\sigma_\mathrm{E}^2=\sigma^2$. We set $\sigma^2=-47$ dBm, which includes thermal noise at a temperature of $290$ Kelvin and signal processing noise. The signal processing noise is caused by a $12$-bit uniform quantizer employed in the analog-to-digital converter at the analog front-end of each receiver. At the energy harvester, the energy conversion efficiency for converting RF energy to electrical energy is $\eta=0.8$. In this setting, multiple channel realizations are simulated, where both path loss and multipath fading effects are taken into account. 
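For reference, the dBm-valued noise power above can be converted to linear units with the standard formula $P[\mathrm{W}] = 10^{\mathrm{dBm}/10}/1000$; the short sketch below is a generic unit conversion, not specific to this work.

```python
def dbm_to_watt(dbm):
    """Convert a power level in dBm to Watts: P[W] = 10^(dBm/10) / 1000."""
    return 10.0 ** (dbm / 10.0) / 1000.0

sigma2 = dbm_to_watt(-47.0)  # noise power used in the simulations, roughly 2e-8 W
assert 1.9e-8 < sigma2 < 2.1e-8
```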
\begin{table}[htb] \caption{Simulation Parameters} \label{table:parameters} \centering \begin{tabular}{ | l | l | } \hline Carrier center frequency & 470 MHz\\ \hline Bandwidth & ${\cal B}=200$ kHz \\ \hline Single antenna power consumption & $P_\mathrm{ant}=75$ mW \\ \hline Static circuit power consumption & $P_\mathrm{c}=1$ W \\ \hline Power amplifier efficiency & $\xi=0.4$ \\ \hline Antenna gain & 10 dBi \\ \hline Noise power & $\sigma^2= -47$ dBm \\ \hline Rician factor & 3 dB \\ \hline Reference distance & 1 meter \\ \hline Maximum service distance & 10 meters \\ \hline Energy conversion efficiency & $\eta=0.8$ \\ \hline \end{tabular} \end{table} \begin{figure}[htb] \centering \includegraphics[scale=0.9]{sepuser_3D_1} \caption{System performance trade-off region between IR-EE, EH-EE, and transmit power.} \label{fig:sepuser_3D_1} \vspace*{5mm} \includegraphics[scale=0.9]{sepuser_3D_2} \caption{System performance trade-off region between achievable rate, harvested energy, and transmit power.} \label{fig:sepuser_3D_2} \end{figure} In the following, we show the trade-off regions between the multiple system objectives from two aspects. On the one hand, we examine the trade-off between the average IR-EE, EH-EE, and transmit power in terms of system EE, which is shown in Figures \ref{fig:sepuser_3D_1}, \ref{fig:2d_IRee_EHee}, \ref{fig:2d_IRee_Pt}, \ref{fig:2d_EHee_Pt}, \ref{fig:sepuser_IRee_EHee}, \ref{fig:sepuser_IRee_Ptrans}, and \ref{fig:sepuser_EHee_Ptrans}. On the other hand, from the aspect of system throughput, the trade-off between the average achievable rate, harvested energy, and transmit power is illustrated in Figures \ref{fig:sepuser_3D_2}, \ref{fig:sepuser_Rate_Pharv}, \ref{fig:sepuser_Rate_Ptrans}, and \ref{fig:sepuser_Pharv_Ptrans}. For comparison, we also consider a baseline scheme, in which the MOOP of achievable rate maximization, harvested energy maximization, and transmit power minimization is solved. 
The system performance of the proposed EE algorithm and the baseline scheme is compared in Figures \ref{fig:sepuser_IRee_EHee}--\ref{fig:sepuser_Pharv_Ptrans}. Figures \ref{fig:sepuser_3D_1} and \ref{fig:sepuser_3D_2} show the three-dimensional trade-off regions of the system energy efficiency and the system throughput, respectively, for $8$ transmit antennas. The three-dimensional trade-off regions are obtained by solving MOOP \ref{prob:multiobj_WIPTsepuser} with different sets of weights on the system design objectives. Specifically, the points constituting the regions are computed by uniformly varying the weights $\omega_j$ with a step size of $0.04$ such that $\sum_j\omega_j=1$. It can be observed in Figure \ref{fig:sepuser_3D_1} that the trade-off region between IR-EE, EH-EE, and transmit power is formed by points gradually spreading from the bottom-right corner to the top-left corner. In particular, both IR-EE and EH-EE grow rapidly for small transmit power. When the transmit power is high, EH-EE continues to increase, whereas IR-EE declines. On the other hand, Figure \ref{fig:sepuser_3D_2} illustrates that high transmit power supports increases in both the achievable rate and the harvested energy. \begin{figure}[htb] \centering \includegraphics[scale=0.9]{2d_IRee_EHee} \caption{Trade-off region between IR-EE and EH-EE.} \label{fig:2d_IRee_EHee} \end{figure} \begin{figure}[htb] \centering \includegraphics[scale=0.9]{2d_IRee_Pt} \caption{Trade-off region between IR-EE and transmit power.} \label{fig:2d_IRee_Pt} \vspace*{5mm} \includegraphics[scale=0.9]{2d_EHee_Pt} \caption{Trade-off region between EH-EE and transmit power.} \label{fig:2d_EHee_Pt} \end{figure} In addition, for better illustration, we also provide side views of the two-dimensional trade-off regions in Figures \ref{fig:2d_IRee_EHee}, \ref{fig:2d_IRee_Pt}, and \ref{fig:2d_EHee_Pt}, revealing the trade-offs between different pairs of objective functions. 
Figure \ref{fig:2d_IRee_EHee} shows the trade-off between IR-EE and EH-EE, Figure \ref{fig:2d_IRee_Pt} the trade-off between IR-EE and transmit power, and Figure \ref{fig:2d_EHee_Pt} the trade-off between EH-EE and transmit power. It can be observed from these figures that IR-EE and EH-EE are partially aligned with each other for small transmit power. In particular, both IR-EE and EH-EE increase rapidly as the transmit power grows from zero. However, IR-EE drops dramatically in the high transmit power regime, which results in the bell-shaped curves shown in Figures \ref{fig:2d_IRee_EHee} and \ref{fig:2d_IRee_Pt}. This diminishing return of IR-EE is due to the slow logarithmic growth of the achievable rate in the high transmit power regime, while the transmit power increases linearly. In contrast, EH-EE is monotonically increasing in the transmit power, as shown in Figure \ref{fig:2d_EHee_Pt}. In other words, the more energy carried by the transmit signal, the higher the EH-EE that can be achieved. This is due to the linear relationship between the harvested energy and the transmit power. Notably, the trade-off region in Figure \ref{fig:2d_IRee_Pt} is non-convex. In other words, the proposed multi-objective system design algorithm is able to attain the non-convex feasible region, despite the non-convexity of the MOOP. Besides, the three extreme points in Figure \ref{fig:2d_IRee_EHee} correspond to the three single-objective problems, respectively. The point of zero IR-EE and EH-EE in Figure \ref{fig:2d_IRee_EHee} corresponds to zero transmit power in Figures \ref{fig:2d_IRee_Pt} and \ref{fig:2d_EHee_Pt}, which represents the minimum transmit power. It is the optimal value of single-objective Problem \ref{prob:WIPT_Ptrans}, which can also be obtained by solving the MOOP with $\omega_3=1$. 
The second extreme point, in the middle of the curve in Figure \ref{fig:2d_IRee_EHee}, is the maximum IR-EE, i.e., the optimal value of the single-objective Problem \ref{prob:WIPT_IR-EE}, which can also be obtained by solving the MOOP with $\omega_1=1$. The third extreme point, at the tail in Figure \ref{fig:2d_IRee_EHee}, is the maximum EH-EE, i.e., the optimal value of the single-objective Problem \ref{prob:WIPT_EH-EE}; it can also be obtained from the MOOP with $\omega_2=1$. Figures \ref{fig:sepuser_IRee_EHee} and \ref{fig:sepuser_Rate_Pharv} show the average IR-EE versus the average EH-EE, and the average achievable rate versus the average harvested energy, respectively. These curves are obtained by solving the MOOP for $\omega_3=0$ and $0\leq\omega_j\leq1, j=1,2$, where the values of $\omega_j$ are uniformly varied with a step size of $0.01$ such that $\sum_j\omega_j=1$. Figure \ref{fig:sepuser_IRee_EHee} shows the trade-off between IR-EE and EH-EE when the objective of transmit power minimization is not considered. We can see that IR-EE decreases monotonically as EH-EE increases, since the objective preference shifts from IR-EE to EH-EE, i.e., $\omega_1$ decreases and $\omega_2$ increases. Interestingly, there is a distinct dropping point at the tail of the curve, corresponding to $\omega_1=0$ and $\omega_2=1$. This point indicates the solution of the single-objective problem of EH-EE maximization. Based on Theorem \ref{thm:rankone} and Appendix \ref{app:rankone}, we have $\Rank(\mathbf{W}_\mathrm{E})=1$ at this point, instead of $\mathbf{W}_\mathrm{E}=\mathbf{0}$ as at the other points. In other words, the energy signal occupies part of the total available power, and IR-EE drops due to the smaller power allocated to the information signal. Moreover, compared to the baseline scheme, the IR-EE of the proposed EE algorithm achieves a significant gain.
Besides, when the number of transmitting antennas is increased from $N_\mathrm{T}=4$ to $N_\mathrm{T}=8$, the trade-off region is enlarged for both the EE algorithm and the baseline scheme, since the extra degrees of freedom offered by more transmitting antennas can be exploited; thus, the EE performance of the system is improved. However, the IR-EE for $N_\mathrm{T}=8$ is smaller than that for $N_\mathrm{T}=4$ in the low transmit power regime. This can be explained as follows: with small transmit power, the achievable rates for both $N_\mathrm{T}=8$ and $N_\mathrm{T}=4$ are quite small, but the total power consumption is larger for $N_\mathrm{T}=8$ due to the higher antenna power dissipation, which results in a smaller IR-EE. \begin{figure}[htb] \begin{minipage}[t]{0.5\textwidth} \centering \includegraphics[width=\textwidth,height=\textwidth]{sepuser_IRee_EHee} \caption{Average IR-EE versus \\average EH-EE.} \label{fig:sepuser_IRee_EHee} \end{minipage} \begin{minipage}[t]{0.5\textwidth} \centering \includegraphics[width=\textwidth,height=\textwidth]{sepuser_Rate_Pharv} \caption{Average achievable rate versus \\average harvested energy.} \label{fig:sepuser_Rate_Pharv} \end{minipage} \end{figure} In terms of system throughput, we see from Figure \ref{fig:sepuser_Rate_Pharv} that there is a consistent trend between the achievable rate and the harvested energy for the EE algorithm. This is because, as EH-EE is gradually emphasized, the amount of harvested energy grows due to an increase in transmit power. On the other hand, according to Theorem \ref{thm:rankone} and Appendix \ref{app:rankone}, $\Rank(\mathbf{W}_\mathrm{I})=1$ and $\mathbf{W}_\mathrm{E}=\mathbf{0}$ when IR-EE and EH-EE are jointly optimized. This means that the information signal occupies all the available transmit power. Thus, the information signal becomes stronger with increasing transmit power, which improves the achievable rate. As a result, the achievable rate and the harvested energy are aligned.
In contrast to this consistency in the proposed algorithm, the achievable rate and the harvested energy in the baseline scheme conflict with each other. In particular, a high data rate corresponds to low harvested energy, and vice versa. Moreover, it is noted that the curve stretches to the right end with a distinct dropping point, as in Figure \ref{fig:sepuser_IRee_EHee}; this is caused by the same reason as mentioned above. Besides, when more transmitting antennas are equipped at the transmitter, the system throughput with respect to both the achievable rate and the harvested energy is improved, since extra degrees of freedom are exploited. \begin{figure}[htb] \begin{minipage}[t]{0.5\textwidth} \centering \includegraphics[width=\textwidth,height=\textwidth]{sepuser_IRee_Ptrans} \caption{Average IR-EE versus \\average transmit power.} \label{fig:sepuser_IRee_Ptrans} \end{minipage} \begin{minipage}[t]{0.5\textwidth} \centering \includegraphics[width=\textwidth,height=\textwidth]{sepuser_Rate_Ptrans} \caption{Average achievable rate versus \\average transmit power.} \label{fig:sepuser_Rate_Ptrans} \end{minipage} \end{figure} Figures \ref{fig:sepuser_IRee_Ptrans} and \ref{fig:sepuser_Rate_Ptrans} illustrate the average IR-EE versus the average transmit power, and the average achievable rate versus the average transmit power, respectively. The curves are obtained by solving Problem \ref{prob:multiobj_WIPTsepuser} for $\omega_2=0$ and $0\leq\omega_j\leq1, j=1,3$, where the values of $\omega_j$ are uniformly varied with a step size of $0.01$ such that $\sum_j\omega_j=1$. Without the objective of EH-EE maximization, the power allocation policy in the proposed EE algorithm is designed for IR-EE maximization and transmit power minimization.
For the proposed EE algorithm, we can observe from Figures \ref{fig:sepuser_IRee_Ptrans} and \ref{fig:sepuser_Rate_Ptrans} that, for small transmit power, both IR-EE and the achievable rate grow monotonically as the transmit power increases from zero. Figure \ref{fig:sepuser_IRee_Ptrans} shows that IR-EE reaches its maximum at a very small transmit power. Figure \ref{fig:sepuser_Rate_Ptrans} shows that the corresponding achievable rate remains at a low level due to the small transmit power. In contrast, for the baseline scheme, we see a bell-shaped trend of IR-EE in Figure \ref{fig:sepuser_IRee_Ptrans} and a monotonically increasing trend of the achievable rate in Figure \ref{fig:sepuser_Rate_Ptrans} as the transmit power increases. In the small transmit power regime, IR-EE and the data rate behave similarly to those of the EE algorithm. However, in the high transmit power regime, the logarithmic growth of the data rate is slower than the linear increase of the transmit power, which leads to energy inefficiency, i.e., IR-EE declines. Besides, for $N_\mathrm{T}=8$, the system shows a reduction in IR-EE and a growth in rate compared to the case of 4 transmitting antennas. This is because the achievable rate is improved by exploiting the extra degrees of freedom offered by more transmitting antennas; however, this improvement cannot compensate for the increase in antenna power consumption, and thus a lower IR-EE results.
\begin{figure}[htb] \begin{minipage}[t]{0.5\textwidth} \centering \includegraphics[width=\textwidth,height=\textwidth]{sepuser_EHee_Ptrans} \caption{Average EH-EE versus \\average transmit power.} \label{fig:sepuser_EHee_Ptrans} \end{minipage} \begin{minipage}[t]{0.5\textwidth} \centering \includegraphics[width=\textwidth,height=\textwidth]{sepuser_Pharv_Ptrans} \caption{Average harvested energy \\versus average transmit \\power.} \label{fig:sepuser_Pharv_Ptrans} \end{minipage} \end{figure} Figures \ref{fig:sepuser_EHee_Ptrans} and \ref{fig:sepuser_Pharv_Ptrans} depict the average EH-EE versus the average transmit power, and the average harvested energy versus the average transmit power, respectively. Similarly, the curves are obtained by solving the MOOP for $\omega_1=0$ and $0\leq\omega_j\leq1, j=2,3$, where the values of $\omega_j$ are uniformly varied with a step size of $0.01$ such that $\sum_j\omega_j=1$. With no concern for the objective of IR-EE maximization, the power allocation policy is designed for EH-EE maximization and transmit power minimization. It can be observed that both EH-EE and the harvested energy grow with increasing transmit power. In particular, the curves of the proposed EE algorithm and the baseline scheme overlap, which means that the maximum EH-EE and the maximum harvested energy are obtained simultaneously with the same amount of transmit power. This is thanks to the linear relationship between the harvested power and the transmit power. In terms of the comparison between different numbers of antennas, a better performance in both EH-EE and harvested energy is shown for $8$ transmitting antennas. In particular, for $N_\mathrm{T}=8$, EH-EE increases faster with the transmit power than in the case of $N_\mathrm{T}=4$. This implies that a more efficient and effective power transfer is achieved by using more transmitting antennas.
{"config": "arxiv", "file": "1504.02360/2_WIPT_sepuser.tex"}
TITLE: Let $X_1,X_2,\ldots$ be independent RVs with $X_n\sim \mathrm{Bern}(1/n)$. Does $X_n\underset{a.s.}{\to}0$? QUESTION [0 upvotes]: Let $X_1,X_2,\ldots$ be independent random variables with $X_n\sim \mathrm{Bern}(1/n)$. Does $X_n\underset{a.s.}{\to}0$ hold? I tried to use Borel-Cantelli, but I get that $\sum_{n=1}^{\infty}\mathbb{P}(A_n^{\epsilon})=\infty$, where $A_n^{\epsilon}=\{|X_n-X|\geq\epsilon\}$. Any hint, please? REPLY [2 votes]: $\sum P(X_n=1)=\sum \frac 1 n =\infty$. By independence and the second Borel-Cantelli lemma, this implies $P(X_n =1 \text{ i.o.}) =1$. Hence it is not true that $X_n \to 0$ almost surely.
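A concrete way to see the conclusion: by independence, the probability of no success in the window $n = N, \dots, 2N$ telescopes to $\prod_{n=N}^{2N}\frac{n-1}{n} = \frac{N-1}{2N}$, so a success occurs in that window with probability $\frac{N+1}{2N} \approx \frac12$ for every $N \geq 2$. Successes therefore keep occurring arbitrarily far out, matching $P(X_n=1 \text{ i.o.})=1$. A short exact computation (Python; the function name is ours):

```python
from fractions import Fraction

def prob_success_in_window(N):
    """Exact probability that X_n = 1 for at least one n in {N, ..., 2N},
    where the X_n ~ Bernoulli(1/n) are independent; requires N >= 2."""
    p_none = Fraction(1)
    for n in range(N, 2 * N + 1):
        p_none *= 1 - Fraction(1, n)   # P(X_n = 0) = 1 - 1/n
    return 1 - p_none                  # telescopes to (N + 1) / (2 * N)
```

Since this window probability never vanishes as $N$ grows, ones appear in every tail of the sequence, which is incompatible with $X_n \to 0$ a.s.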
{"set_name": "stack_exchange", "score": 0, "question_id": 3968862}
TITLE: A magician places $n$ coins on a table and walks off the stage. QUESTION [5 upvotes]: A magician places $n$ coins on a table and walks off the stage. A volunteer comes, turns over whichever coins he wishes, selects one coin, and whispers its number to the apprentice. Then the apprentice turns over one coin, aiming to let the magician identify the selected coin. For which values of $n$ will the magic always succeed? What if the apprentice may turn over at most one coin? For the first question (for which values of $n$ the magic always succeeds), my answer: We have $n$ coins, each of which can be in $2$ different states (heads or tails), so there are $2^n$ coin states. Number the coins in binary, using as many bits as the chosen $n$ requires. The apprentice encodes the selected coin as follows: he computes the bitwise XOR of the numbers of all coins currently showing heads, XORs this value with the number of the coin selected by the volunteer, and flips the coin with the resulting number; after the flip, the XOR of the heads-up coins equals the selected number. For this to work, $n$ has to divide $2^n$, meaning that $n$ has to be a power of $2$. Flipping coin $0$ is free, so the apprentice can always make a flip. I also thought about an approach representing the $2^n$ states as a hypercube: we give each vertex a color representing one of the $n$ named coins. There must be one vertex of each color adjacent to any starting position, so there is an equal number of each color. Thus, as stated above, $n$ must divide $2^n$, so $n$ must be a power of $2$. I'm interested to know for which values of $n$ the magic will succeed when the apprentice may turn over at most one coin. REPLY [3 votes]: representing the $2^n$ states with a hypercube, we'll give each vertex a color to represent one of the $n$ named coins. There must be one of each color adjacent to any starting position. This means that each monochromatic set is a dominating set for the hypercube graph $Q_n$.
The domatic number $d(G)$ of a graph $G$ is the maximum number of elements in a partition of the set of vertices of $G$ into dominating sets. Thus the question is whether $d(Q_n)\ge n$. According to [JBM], Zelinka in [Z] proved that if $n=2^k$ for a positive integer $k$ then the graphs $Q_n$ and $Q_{n-1}$ both have domatic number $n$, so in this case the answer to your question is positive. On the other hand, for each graph $G$ we have $d(G)\le |G|/\gamma(G)$, where $|G|$ is the number of vertices in $G$ and $\gamma(G)$ is the number of vertices in a smallest dominating set for $G$. According to [KM] “Unfortunately, the exact domination number is known only for small dimensional hypercubes and two infinite families: $\gamma(Q_3) = 2$, $\gamma(Q_4) = 4$, $\gamma(Q_5) = 7$, $\gamma(Q_6) = 12$, and $\gamma(Q_n) = 2^{n−k}$ for $n = 2^k − 1$ or $n = 2^k$, see [HL]. In general, $\gamma(Q_n) \le 2^{n−3}$ for $n\ge 7$ [AK]”. Thus we have $d(Q_5)\le 4$ and $d(Q_6)\le 5$, so for these values of $n$ the answer to your question is negative. References [AK] S. Arumugam, R. Kala, Domination parameters of hypercubes, J. Indian Math. Soc. (N.S.) 65 (1998), 31–38. [HHW] Frank Harary, John P. Hayes, Horng-Jyh Wu, A survey of the theory of hypercube graphs, Comput. Math. Applic. 15:4 (1988), 277–289. [HL] F. Harary, M. Livingston, Independent domination in hypercubes, Appl. Math. Lett. 6 (1993), 27–28. [JBM] T. N. Janakiraman, M. Bhanumathi, S. Muthammai, Domination Parameters Of Hypercubes, International Journal of Engineering Science, Advanced Computing and Bio-Technology, 1:1 (January–March 2010), 19–28. [KM] Sandi Klavžar, Meijie Ma, The domination number of exchanged hypercubes. [Z] B. Zelinka, Domination numbers of cube graphs, Math. Slovaca, 32:2 (1982), 117–1. (unfortunately, I failed to find this paper).
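The XOR strategy from the question (for the variant where the apprentice turns over exactly one coin) can be checked exhaustively for small powers of two. The sketch below (Python; coins numbered $0,\dots,n-1$, the magician reading off the XOR of the indices of the heads-up coins, and the function names being ours) verifies it for $n=4$:

```python
def decode(state, n):
    """Magician's rule: XOR of the indices of all heads-up coins."""
    value = 0
    for i in range(n):
        if state & (1 << i):
            value ^= i
    return value

def assistant_flip(state, target, n):
    """Flip exactly one coin so that the decoded value becomes `target`."""
    i = decode(state, n) ^ target  # flipping coin i toggles i into the XOR
    return state ^ (1 << i)

n = 4  # works whenever n is a power of two (coin 0 acts as a "free" flip)
ok = all(decode(assistant_flip(s, t, n), n) == t
         for s in range(2 ** n) for t in range(n))
```

The check succeeds because for a power of two the XOR of indices below $n$ stays below $n$, so the coin to flip always exists; for other $n$ the domatic-number argument above shows no single-flip scheme can work in general.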
{"set_name": "stack_exchange", "score": 5, "question_id": 3292668}
TITLE: Probability of randomly choosing all elements fulfilling a certain condition QUESTION [0 upvotes]: Assume you have a bag containing $m$ marbles of $c$ different colors, where the number of marbles of each color is equal to $\frac mc$. If $n$ marbles are drawn from the bag, without replacement, what is the probability $P$ that at least one complete set of all the marbles of one color is drawn? Obviously, if $n<\frac mc$, then $P=0$. Also, by the pigeonhole principle, if $n>m-c$ then $P=1$, because $m-c$ is the number of marbles you would have to draw in order to draw all but one of each color. So far, all I've been able to find information on is how to find the probability that all marbles drawn are the same color, in which case, if $n>\frac mc$, then $P=0$. In the case where $n=\frac mc$, the formulas are the same: $$\frac{c\left(\frac{\left(\frac mc\right)!}{\left(\left(\left(\frac mc\right)-n\right)!\right)n!}\right)}{\frac{m!}{\left(\left(m-n\right)!\right)n!}}$$ That's assuming I correctly translated that formula from my notes on the subject. In case I didn't, here's the original, which is unformatted and uses a different set of variables: "Where: t = total # of marbles, s = # of marbles of each color, and p = # of marbles picked (t / s) * (s!/((s-p)!p!)) / (t!/((t-p)!p!))" I'm sure the formula to solve this is out there somewhere, assuming that finding a general formula isn't NP-hard, as a programmer friend of mine suggested it might be, but, for the life of me, I haven't been able to find it. If needed, I have a 14-page Google Doc full of my notes on my attempts to solve this problem, including several brute-force attempts (which contribute most of its length), but it's a slog, and I don't want to subject you to it if someone can just give me a general formula. REPLY [0 votes]: NOTE: When I started writing this there was no Answer yet, but it took me so long to write this and get all the formatting correct, that @saulspatz scooped me. 
:) I will leave this up anyway... You can solve this using Inclusion-Exclusion Principle, but (as usual for I-E) the result is a big summation, not a closed form. For shorthand, define $r = m/c =$ no. of marbles of each color. For $j \in \{1, 2, \dots, c\}$ let $N_j =$ the event that color $j$ is Not drawn (among the $n$ marbles), and $D_j =$ the complementary event that the color $j$ is Drawn. You want every color to be drawn, i.e. you want $P(\bigcap_{j=1}^c D_j) = 1 - P(\bigcup_{j=1}^c N_j)$ The union can be calculated by I-E: $$ \begin{align} P(\bigcup_{j=1}^c N_j) &= \sum_{1\le j \le c} P(N_j) - \sum_{1 \le j_1 < j_2 \le c} P(N_{j_1} \cap N_{j_2}) + \sum_{1 \le j_1 < j_2 < j_3 \le c} P(N_{j_1} \cap N_{j_2} \cap N_{j_3}) - \cdots\\ &= \sum_{k=1}^c (-1)^{k+1} \Big(\sum_{1 \le j_1 < \cdots < j_k \le c} P(N_{j_1} \cap \cdots \cap N_{j_k}) \Big) \end{align} $$ Now consider a particular $k$. Each term in the summation represents a subset of $k$ colors that are not drawn, i.e. the event that all $n$ must come from the other $c-k$ colors, i.e. the other $r(c-k) = m-rk$ marbles. The probability is: $$P(N_{j_1} \cap \cdots \cap N_{j_k}) = \begin{cases} {m-rk \choose n} / {m \choose n} & \text{if } m-rk \ge n, \text{ i.e. } k \le {m-n \over r}\\ 0 & \text{if } m-rk < n \end{cases}$$ The number of terms in the summation $\sum_{1 \le j_1 < \cdots < j_k \le c}$ is ${c \choose k}$, representing all possible $k$-sized subsets. Combining these observations: $$ \begin{align} P(\bigcup_{j=1}^c N_j) &= \sum_{k=1}^d (-1)^{k+1} \Big( {c \choose k} {{m - rk \choose n} \over {m \choose n}} \Big) \end{align} $$ where $d = \lfloor {m-n \over r} \rfloor$ is the new limit of the summation because for $k > d$ the terms are zero.
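The inclusion-exclusion sum above is easy to implement and to cross-check against brute-force enumeration for small parameters. Note that, as stated, it gives the probability that at least one color is entirely absent from the draw; the probability originally asked for (at least one complete color set among the $n$ drawn) is the same quantity evaluated for the complementary draw of size $m-n$, since a color is fully drawn exactly when it is absent from the leftover marbles. A sketch (Python; function names are ours):

```python
from itertools import combinations
from math import comb

def p_some_color_missing(m, c, s):
    """P(at least one color is entirely absent from a uniform s-subset),
    via the inclusion-exclusion sum above; r = m // c marbles per color."""
    r = m // c
    return sum((-1) ** (k + 1) * comb(c, k) * comb(m - r * k, s)
               for k in range(1, c + 1) if m - r * k >= s) / comb(m, s)

def p_some_color_missing_brute(m, c, s):
    """Sanity check: enumerate all s-subsets of the m labelled marbles."""
    r = m // c
    color = [i // r for i in range(m)]       # marble i has color i // r
    hits = sum(1 for sub in combinations(range(m), s)
               if len({color[i] for i in sub}) < c)
    return hits / comb(m, s)

def p_complete_set_drawn(m, c, n):
    # A complete color set among the n drawn <=> that color is absent
    # from the m - n marbles left behind.
    return p_some_color_missing(m, c, m - n)
```

For example, with $m=6$, $c=2$, $n=3$ the only winning draws are the two monochromatic triples, giving $2/\binom{6}{3}=0.1$.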
{"set_name": "stack_exchange", "score": 0, "question_id": 3261641}
TITLE: Proving big-O notation? QUESTION [0 upvotes]: $2n^2 \in O(n^2-19n)$ This was proven in my lecture notes but it didn't make sense to me. I tried solving for c like this: $n_0 = 1$ $2n^2 ≤ c * n^2 - 19n$ $2 ≤ c * (1-19)$ $2 ≤ c * -18$ $-36 \leq c$, but $c$ has to be a positive constant. REPLY [1 votes]: Often it is easier to consider a quotient and study its behaviour as $n\to \infty$. Here you have $$\frac{n^2-19n}{2n^2} = \frac{1}{2}- \frac{19}{2n}\geq \frac{1}{4} \quad \text{for} \quad \frac{19}{2n} \leq \frac{1}{4} \iff n \geq 38$$ So, you have for $n \geq N=38$: $\boxed{2n^2 \leq 4(n^2-19n)}$ which means exactly that $2n^2 \in O(n^2-19n)$.
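The boxed inequality can be sanity-checked numerically over a finite range (Python; $c=4$ and $N=38$ as in the answer, and note that the inequality fails at $n=37$, just below the threshold, so the bound is tight):

```python
def big_o_witness(c=4, N=38, n_max=10_000):
    """Check 2n^2 <= c*(n^2 - 19n) for every n in [N, n_max)."""
    return all(2 * n * n <= c * (n * n - 19 * n) for n in range(N, n_max))

# Just below the threshold the inequality fails, so N = 38 is tight:
holds_at_37 = 2 * 37 * 37 <= 4 * (37 * 37 - 19 * 37)
```

At $n=38$ the two sides are equal ($2888 = 2888$), and for larger $n$ the inequality is strict, exactly as the quotient argument predicts.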
{"set_name": "stack_exchange", "score": 0, "question_id": 3219273}
\section{Introduction \label{sec:introduction}} Recently, there has been an immense surge of interest in the use of unmanned aerial systems (UASs) for civil applications \cite{Tice91, Debusk10, Amazon16, AUVSI16, BBC16}, which will involve unmanned aerial vehicles (UAVs) flying in urban environments, potentially in close proximity to humans, other UAVs, and other important assets. As a result, new scalable ways to organize an airspace are required in which potentially thousands of UAVs can fly together \cite{FAA13, Kopardekar16}. One essential problem that needs to be addressed for this endeavor to be successful is that of trajectory planning: how a group of vehicles in the same vicinity can reach their destinations while avoiding situations which are considered dangerous, such as collisions. Many previous studies address this problem under different assumptions. In some studies, specific control strategies for the vehicles are assumed, and approaches such as those involving induced velocity obstacles \cite{Fiorini98, Chasparis05, Vandenberg08,Wu2012} and those involving virtual potential fields to maintain collision avoidance \cite{Olfati-Saber2002, Chuang07} have been used. Methods have also been proposed for real-time trajectory generation \cite{Feng-LiLian2002}, for path planning for vehicles with linear dynamics in the presence of obstacles with known motion \cite{Ahmadzadeh2009}, and for cooperative path planning via waypoints which do not account for vehicle dynamics \cite{Bellingham}. Other related work is in the collision avoidance problem without path planning. These results include those that assume the system has a linear model \cite{Beard2003, Schouwenaars2004, Stipanovic2007}, rely on a linearization of the system model \cite{Massink2001, Althoff2011}, assume a simple positional state space \cite{Lin2015}, and many others \cite{Lalish2008, Hoffmann2008, Chen2016}. 
However, methods to flexibly plan provably safe and dynamically feasible trajectories without making strong assumptions on the vehicles' dynamics and other vehicles' motion are lacking. Moreover, any trajectory planning scheme that addresses collision avoidance must also guarantee both goal satisfaction and safety of UAVs despite disturbances and communication faults \cite{Kopardekar16}. Furthermore, unexpected scenarios such as UAV malfunctions or even UAVs with malicious intent need to be accounted for. Finally, the proposed scheme should scale well with the number of vehicles. Hamilton-Jacobi (HJ) reachability-based methods \cite{Barron90, Mitchell05, Bokanowski10, Bokanowski11, Margellos11, Fisac15} are particularly suitable in the context of UAVs because of the formal guarantees provided. In this context, one computes the reach-avoid set, defined as the set of states from which the system can be driven to a target set while satisfying time-varying state constraints at all times. A major practical appeal of this approach stems from the availability of modern numerical tools which can compute various definitions of reachable sets \cite{Sethian96, Osher02, Mitchell02, Mitchell07b}. These numerical tools, for example, have been successfully used to solve a variety of differential games, trajectory planning problems, and optimal control problems \cite{Bayen07, Ding08, Bouffard12, Huang11}. However, reachable set computations involve solving a HJ partial differential equation (PDE) or variational inequality (VI) on a grid representing a discretization of the state space, resulting in an \textit{exponential} scaling of computational complexity with respect to the system dimensionality. 
Therefore, reachability analysis or other dynamic programming-based methods alone are not suitable for managing the next generation airspace, which is a large-scale system with a high-dimensional joint state space because of the possible high density of vehicles that needs to be accommodated \cite{Kopardekar16}. To overcome this problem, the priority-based Sequential Trajectory Planning (STP) method has been proposed \cite{Chen15c, Bansal2017}. In this context, higher-priority vehicles plan their trajectories without taking into account the lower-priority vehicles, and lower-priority vehicles treat higher-priority vehicles as moving obstacles. Under this assumption, time-varying formulations of reachability \cite{Bokanowski11, Fisac15} can be used to obtain the optimal and provably safe trajectories for each vehicle, starting from the highest-priority vehicle. Thus, the curse of dimensionality is overcome at the cost of a structural assumption, under which the computational complexity scales only \textit{linearly} with the number of vehicles. In addition, such a structure has the potential to flexibly divide up the airspace for the use of many UAVs and allows tractable multi-vehicle trajectory planning. Practically, different economic mechanisms can be used to establish a priority order. One example could be a first-come-first-served mechanism, as highlighted in NASA's concept of operations for UAS traffic management \cite{Kopardekar16}. However, if a vehicle not in the set of STP vehicles enters the system, or even worse, if this vehicle has malicious intent, the original plan can lead to a collision between vehicles, triggering a domino effect that causes the entire STP structure to collapse. Thus, STP vehicles must plan with an additional safety margin that takes a potential intruder into account. The authors in \cite{Chen2016d} propose an STP algorithm that accounts for such a potential intruder. 
However, a new full-scale trajectory planning problem must be solved in real time to ensure safe transit of the vehicles to their respective destinations. Since the replanning must be done in real time, the proposed algorithm in \cite{Chen2016d} is intractable for large-scale systems even with the STP structure. In this work, we propose a novel algorithm that limits the replanning to a \textit{fixed number of vehicles}, irrespective of the total number of STP vehicles. Moreover, this design parameter can be chosen beforehand based on the computational resources available. Intuitively, for every vehicle, we compute a \textit{separation region} such that the vehicle needs to account for the intruder if and only if the intruder is inside this separation region. We then compute a \textit{buffer region} between the separation regions of any two vehicles, and ensure that this buffer is maintained as vehicles are traveling to their destinations. Thus, to affect each additional vehicle, the intruder must travel through the buffer region. Therefore, we can design the buffer region size such that the intruder can affect at most a specified number of vehicles within some duration. A high-level overview of the proposed algorithm is provided in Algorithm \ref{alg:basic_idea}. 
\begin{algorithm}[tb] \SetKwInOut{Input}{input} \SetKwInOut{Output}{output} \DontPrintSemicolon \caption{Overview of the proposed intruder avoidance algorithm (planning phase)} \label{alg:basic_idea} \Input{Set of vehicles $\veh_i, i = 1, \ldots, \N$ in the descending priority order;\newline Vehicle dynamics and initial states;\newline Vehicle destinations and any obstacles to avoid;\newline Intruder dynamics;\newline $\nva$: Maximum number of vehicles allowed to re-plan their trajectories.} \Output{Provably safe vehicle trajectories to respective destinations despite disturbances and intruder;\newline Intruder avoidance and goal-satisfaction controller.} \For{\text{$i=1:N$}}{ compute the separation region of $\veh_i$;\; compute the required buffer region based on $\nva$;\; use STP algorithm for trajectory planning of $\veh_i$ such that the buffer region is maintained between $\veh_i$ and $\veh_j$ for all $j<i$;\; output the trajectory and the optimal controller for $\veh_i$.\; } \end{algorithm} In Section \ref{sec:formulation}, we formalize the STP problem in the presence of disturbances and adversarial intruders. In Section \ref{sec:background}, we present an overview of time-varying reachability and basic STP algorithms in \cite{Chen15c}, \cite{Bansal2017}. In Section \ref{sec:intruder}, we present our proposed algorithm. Finally, we illustrate this algorithm through a fifty-vehicle simulation in an urban environment in Section \ref{sec:simulations}. Notations are summarized in Table \ref{table:notation}.
{"config": "arxiv", "file": "1711.02540/introduction.tex"}
TITLE: Find the density function from a joint density function QUESTION [0 upvotes]: I am trying to solve the following task and I don't know what the correct way to do it is. Let $p\in(0,1)$ and $(X,Y)$ be a pair of random variables with distribution density function $$f(x,y)=\frac{1}{2\pi\sqrt{1-p^2}} \exp\left(-\frac{1}{2(1-p^2)}(x^2-2pxy+y^2)\right)$$ Show that $X$ and $Z=(Y-pX)/\sqrt{1-p^2}$ are independent standard Gauss random variables. What I have done so far: Independence means $\mathbb{E}[XZ]=\mathbb{E}[X]\mathbb{E}[Z]$ meaning if this holds $$\int\int xzf_{X,Z}(x,y)\mathrm{d}x \mathrm{d}y=\int\int xf_X(x,y)\mathrm{d}x \mathrm{d}y \int\int zf_Z(x,y)\mathrm{d}x \mathrm{d}y$$ where $f$ is the density function, then $Z$ and $X$ are independent. Now here I have great trouble finding the probability density functions of $X$ or $Y$, mainly because of the parameter $p$. $$\mathbb{E}[XY]=\frac{1}{2\pi\sqrt{1-p^2}}\exp\left(-\frac{1}{2(1-p^2)}\right)\int\int x \exp\left(x^2-2pxy+y^2\right) \mathrm{d}x \mathrm{d}y$$ Let us look at the integral $$\int\int x \exp\left(x^2-2pxy+y^2\right) \mathrm{d}x \mathrm{d}y=\int \exp(y^2)\int x \exp\left(x^2-2pxy\right) \mathrm{d}x \mathrm{d}y$$ Now since this integral is positive, we can apply Fubini-Tonelli. But this parameter $p$ makes it very difficult to compute the integral, so I'm not even sure if this is the right/smart way to solve this task. Thank you for your help. 
REPLY [0 votes]: HINT Note that in the exponential we have $$ x^2-2pxy+y^2=(y-px)^2+(1-p^2)x^2 $$ so that $$ f(x,y)=\frac{1}{2\pi\sqrt{1-p^2}} \mathrm e^{-\frac{(y-px)^2}{2(1-p^2)}-\frac{x^2}{2}}=\frac{1}{\sqrt{2\pi}\sqrt{1-p^2}}\exp\left(-\frac{(y-px)^2}{2(1-p^2)}\right)\times \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{x^2}{2}\right) $$ REPLY [0 votes]: $(X,Z)$ are independent if for any $f,g$ $$\mathbb{E}(f(X)g(Z))=\mathbb{E}(f(X))\mathbb{E}(g(Z))$$ In your case: $$\mathbb{E}(f(X)g(Z))=\mathbb{E}\left(f(X)g\left(\frac{Y-pX}{\sqrt{1-p^2}}\right)\right)=\int_{\mathbb{R}^2}f(x)g\left(\frac{y-px}{\sqrt{1-p^2}}\right)p(x,y)~dx~dy$$ so this is a change of variables problem: you set $z=\frac{y-px}{\sqrt{1-p^2}}$ and you get $y=\sqrt{1-p^2}z+px$. Plugging this into $p(x,y)$ gives you $$p(x,y)=p(x,\sqrt{1-p^2}z+px)=q(x)q(z)$$ up to the Jacobian factor $dy=\sqrt{1-p^2}\,dz$, which absorbs the normalization $\frac{1}{\sqrt{1-p^2}}$, where $$q(x)=\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}$$ You then plug it into the above equations and this is solved. Note that in the case of a density $p(x,y)$, $X$ and $Y$ are independent if and only if $p(x,y)=p_X(x)p_Y(y)$, where $p_X$ (resp. $p_Y$) is the density of $X$ (resp. $Y$)
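The factorization in the hint can be cross-checked at the level of covariances: writing $(X,Z)^\top = A(X,Y)^\top$ with $A = \begin{pmatrix}1 & 0\\ -p/\sqrt{1-p^2} & 1/\sqrt{1-p^2}\end{pmatrix}$, the covariance of $(X,Z)$ is $A\Sigma A^\top$ where $\Sigma = \begin{pmatrix}1&p\\p&1\end{pmatrix}$, and this product is the identity matrix, consistent with $X$ and $Z$ being uncorrelated standard Gaussians (and, being jointly Gaussian, independent). A quick numeric check (Python, plain lists to stay dependency-free):

```python
from math import sqrt, isclose

def cov_of_xz(p):
    """Covariance matrix of (X, Z) with Z = (Y - p X) / sqrt(1 - p^2),
    when (X, Y) has covariance [[1, p], [p, 1]]: returns A Sigma A^T."""
    s = sqrt(1 - p * p)
    A = [[1.0, 0.0], [-p / s, 1.0 / s]]
    Sigma = [[1.0, p], [p, 1.0]]
    AS = [[sum(A[i][k] * Sigma[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
    return [[sum(AS[i][k] * A[j][k] for k in range(2)) for j in range(2)]
            for i in range(2)]

# For every |p| < 1 the result is the 2x2 identity, so Var(X) = Var(Z) = 1
# and Cov(X, Z) = 0.
```

This is only a consistency check on the second moments; the full independence claim rests on the density factorization above.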
{"set_name": "stack_exchange", "score": 0, "question_id": 1762534}
TITLE: Homology spheres and fundamental group QUESTION [16 upvotes]: I have a curiosity about homology spheres: I was wondering if they were uniquely characterized by their fundamental group. I.e. given two $n-$dimensional (integral) homology spheres with isomorphic fundamental groups, are they homeomorphic? If not, how many homeomorphism classes corresponds to a given fundamental group? REPLY [20 votes]: See edit at bottom for further information answering the question in all dimensions. In all odd dimensions $2k -1 > 3$, there are non-homeomorphic homology spheres with fundamental group G = the binary icosahedral group (fundamental group of the Poincaré homology sphere). This follows from basic surgery theory; the Wall group $L_{2k}(G)$ contains a subgroup of the form $\mathbb{Z}^n$ for some $n$ related to the number of irreducible complex representations of $G$. This subgroup is detected by the so-called multisignature, as described in Wall's book on surgery theory. Choose one homology sphere $M^{2k-1}$ with $\pi_1(M) = G$. For instance, you can repeatedly spin the Poincaré sphere, where spinning $P$ means doing surgery on the obvious $S^1$ in $S^1\times P$. For any $M$ with $\pi_1(M) = G$, there's an invariant originally described by Atiyah and Singer. A priori, it's a smooth invariant, but is known to be a homeomorphism invariant. Roughly speaking, one knows that for some $d \in \mathbb{N}$, the manifold $d\cdot M$ is the boundary of a $2k$-manifold $X$ with $\pi_1(X) = G$. Then $X$ has a collection of equivariant signatures (associated to the action of $G$ on the universal cover of $X$), known collectively as the multisignature. The multisignature (divided by $d$) is a topological invariant. (Technically, this is only well-defined up to a choice of isomorphism $\pi_1(X) \to G$ but this is readily dealt with.) Now, the group $L_{2k}(G)$ acts on the structure set of $M$, as described in Wall's book. 
The effect of the action is to change the multisignature, and hence it changes the homeomorphism type of $M$. In this construction, you not only preserve the fundamental group, but also the (simple) homotopy type of $M$. It's possible that acting on an even-dimensional homology sphere $M^{2k}$ with fundamental group $G$ by elements of $L_{2k+1}(G)$ could change the homeomorphism type. But odd dimensional $L$-groups are much harder, and you'd need some serious expertise to see what the effect should be. I rather suspect that you could do something simpler, and change the homology with local coefficients or something like that. Addendum: There are two papers of Alex Suciu that answer this question in dimensions at least 4. In "Homology 4-spheres with distinct k-invariants," Topology and its Applications 25 (1987) 103-110, he gives examples of homology 4-spheres with the same $\pi_1$ and $\pi_2$ that are not homotopy equivalent. In "Iterated spinning and homology spheres", Trans AMS 321 (1990) he constructs homology n-spheres in dimensions $n \geq 5$ with the same property. Combined with the remarks above about dimension 3, this answers your question.
{"set_name": "stack_exchange", "score": 16, "question_id": 211160}
TITLE: Algorithm behind Sin Function QUESTION [1 upvotes]: The relationship between the hypotenuse and the opposite cathetus of an angle can be described by $$\sin\theta=\frac{a}{h}$$ where $a$ is the opposite cathetus and $h$ is the hypotenuse. But I am curious about the sine function: it is not a variable, and my math teacher defines it as a function. So if it is a function, could anyone kindly tell me how this function was found and what terms it is composed of? Thanks REPLY [3 votes]: Since the sin function is given by an alternating Taylor series, the approximation error made when truncating the series is bounded by the magnitude of the first omitted term. A pretty efficient algorithm in Python to calculate sin follows.

def sin(x):
    epsilon = 0.1e-16
    sinus = 0.0
    sign = 1
    term = x
    n = 1
    while term > epsilon:
        sinus += sign*term
        sign = -sign
        term *= x * x / (n+1) / (n+2)
        n += 2
    return sinus

About one term is needed per digit of accuracy. Note that the angle $x$ is given in radians, i.e., as the length of arc it subtends on a unit circle; since a full circle is $360^\circ = 2\pi$ radians, $45^\circ = \pi/4$ radians. Take care when using this code since it will not work as it stands when term < 0, so keep $x$ positive. It works well when $0 \leq x \leq 2\pi$. Adding the following four lines directly after def sin(x): (together with from math import pi at the top) makes the code more robust.

while x < 0:
    x += 2*pi
while x > 2*pi:
    x -= 2*pi
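As a sanity check on the answer's method (this sketch is not part of the original post), here is a self-contained version that adds the suggested range reduction and compares the truncated series against Python's math.sin; the name taylor_sin and the tolerance are our choices, not the poster's.

```python
import math

def taylor_sin(x):
    """Alternating Taylor series for sine, as in the answer above,
    with range reduction so the series stays short and accurate."""
    x = x % (2 * math.pi)          # bring x into [0, 2*pi)
    epsilon = 1e-17
    sinus = 0.0
    sign = 1
    term = x
    n = 1
    while term > epsilon:
        sinus += sign * term
        sign = -sign
        # next odd-degree term: multiply by x^2 / ((n+1)(n+2))
        term *= x * x / ((n + 1) * (n + 2))
        n += 2
    return sinus

# Compare with the library sine at a few angles (in radians).
for angle in (0.5, math.pi / 4, 2.0, 5.5, -1.2):
    assert abs(taylor_sin(angle) - math.sin(angle)) < 1e-12
```

Without the reduction, the terms $x^n/n!$ first grow for large $|x|$ and the alternating sum loses accuracy to cancellation, which is why the answer restricts $x$ to $[0, 2\pi]$.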
{"set_name": "stack_exchange", "score": 1, "question_id": 4378919}
TITLE: How can we apply this simple eigenvector expression 'repeatedly'? QUESTION [3 upvotes]: Let $A,B$ be linear operators on a complex vector space $V$ and suppose $$ABu = (\alpha + 2)Bu$$ where $u \in V$ is an eigenvector of $A$ with eigenvalue $\alpha$ and $\alpha \in \mathbb{C}$. We can interpret this as either $Bu = 0$ or $Bu$ is an eigenvector of $A$ with eigenvalue $\alpha+2$. I'm reading a proof which proves this equation above about the operators $A,B$ as a lemma and then claims that 'by applying the lemma repeatedly' we get the formula $$AB^ku = (\alpha+2k)B^ku$$ I have tried many things but cannot see how to apply it repeatedly. The lemma seems to be a statement about $ABu$; sticking many more $B$'s onto it seems like it would prevent us from using it. The context: $\pi$ is a representation of the Lie algebra $sl(2,\mathbb{C})$ acting on $V$. Above I was letting $A = \pi(H)$ and $B = \pi(X)$ where $$H = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \quad \quad X = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$$ Calculation shows $[H,X] = HX - XH = 2X$, and because $\pi$ is a Lie algebra homomorphism, we have $\pi([H,X]) = [\pi(H),\pi(X)] = 2\pi(X)$. Playing with this equation is what proves the lemma above. Maybe playing with this repeatedly is what is needed. REPLY [0 votes]: First, thank you for your hard work Nevzat! Here is the simple answer. Interpret the lemma as $$u \text{ is an eigenvector of } A \text{ with eigenvalue } \alpha \Longrightarrow Bu \text{ is an eigenvector of } A \text{ with eigenvalue } \alpha + 2 \text{ (or } Bu = 0\text{)}$$ But then applying this implication to the eigenvector $Bu$ shows that $B^2u$ is an eigenvector of $A$ with eigenvalue $(\alpha +2) + 2 = \alpha + 4$, unless $B^2u = 0$. This is the repeated application, and induction on $k$ gives the formula.
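To see the repeated application in action numerically (this check is ours, not part of the original exchange), take a hypothetical 3-dimensional representation: a diagonal matrix A standing in for $\pi(H)$ and a raising operator B for $\pi(X)$, chosen so that $AB - BA = 2B$. Starting from an eigenvector $u$ with eigenvalue $\alpha = -2$, each application of B raises the eigenvalue by 2 until the chain terminates.

```python
import numpy as np

# Diagonal A plays the role of pi(H); strictly upper-triangular B plays pi(X).
# With these entries, (A B - B A)_{ij} = (a_i - a_j) B_{ij} = 2 B_{ij}.
A = np.diag([2.0, 0.0, -2.0])
B = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
assert np.allclose(A @ B - B @ A, 2 * B)   # the sl(2) commutation relation

u = np.array([0.0, 0.0, 1.0])   # eigenvector of A with eigenvalue alpha = -2
alpha = -2.0

v = u.copy()
for k in range(3):
    # A (B^k u) = (alpha + 2k) (B^k u)
    assert np.allclose(A @ v, (alpha + 2 * k) * v)
    v = B @ v
assert np.allclose(v, np.zeros(3))  # B^3 u = 0: the chain of eigenvectors stops
```

The loop is exactly the answer's induction: each pass feeds the previous vector $B^k u$ back into the lemma, and the eigenvalue climbs by $2$ until $B^k u$ vanishes.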
{"set_name": "stack_exchange", "score": 3, "question_id": 4263040}
\begin{document} \maketitle \begin{abstract} We consider the {\em Deligne-Simpson problem (DSP) (resp. the weak DSP): Give necessary and sufficient conditions upon the choice of the $p+1$ conjugacy classes $c_j\subset gl(n,{\bf C})$ or $C_j\subset GL(n,{\bf C})$ so that there exist irreducible $(p+1)$-tuples (resp. $(p+1)$-tuples with trivial centralizers) of matrices $A_j\in c_j$ with zero sum or of matrices $M_j\in C_j$ whose product is $I$.} The matrices $A_j$ (resp. $M_j$) are interpreted as matrices-residua of Fuchsian linear systems (resp. as monodromy matrices of regular linear systems) of differential equations with complex time. In the paper we give sufficient conditions for solvability of the DSP in the case when one of the matrices is with distinct eigenvalues. \end{abstract} \section{Introduction} \subsection{Basic notions and purpose of this paper} In the present paper we consider the {\em Deligne-Simpson problem (DSP): Give necessary and sufficient conditions upon the choice of the $p+1$ conjugacy classes $c_j\subset gl(n,{\bf C})$ or $C_j\subset GL(n,{\bf C})$ so that there exist irreducible $(p+1)$-tuples of matrices $A_j\in c_j$ satisfying the condition \begin{equation}\label{A_j} A_1+\ldots +A_{p+1}=0 \end{equation} or of matrices} $M_j\in C_j$ satisfying the condition \begin{equation}\label{M_j} M_1\ldots M_{p+1}=I \end{equation} \begin{conv}\label{conv1} In what follows we write ``tuple'' instead of ``$(p+1)$-tuple'' and the matrices $A_j$ (resp. $M_j$) are always supposed to satisfy condition (\ref{A_j}) (resp. (\ref{M_j})). \end{conv} The matrices $A_j$ (resp. $M_j$) are interpreted as matrices-residua of a Fuchsian system of linear differential equations (resp. as monodromy matrices of a regular linear system) on Riemann's sphere; see a more detailed description in \cite{Ko1} or \cite{Ko2}. \begin{rem} The version with matrices $A_j$ (resp. $M_j$) is called the {\em additive} (resp. the {\em multiplicative}) version of the DSP. 
The multiplicative version of the problem was formulated by P.Deligne and C.Simpson was the first to obtain results towards its resolution, see \cite{Si1} and \cite{Si2}. The additive version is due to the author. \end{rem} We presume the necessary condition $\prod \det (C_j)=1$ (resp. $\sum$Tr$(c_j)=0$) to hold. In terms of the eigenvalues $\sigma _{k,j}$ (resp. $\lambda _{k,j}$) of the matrices from $C_j$ (resp. $c_j$) repeated with their multiplicities, this condition reads $\prod _{k=1}^n\prod _{j=1}^{p+1}\sigma _{k,j}=1$ (resp. $\sum _{k=1}^n\sum _{j=1}^{p+1}\lambda _{k,j}=0$). \begin{defi}\label{genericevs} An equality $\prod _{j=1}^{p+1}\prod _{k\in \Phi _j}\sigma _{k,j}=1$, resp. $\sum _{j=1}^{p+1}\sum _{k\in \Phi _j}\lambda _{k,j}=0$, is called a {\em non-genericity relation}; the sets $\Phi _j$ contain one and the same number $N<n$ of indices for all $j$ (when wishing to specify $N$ we say ``$N$-relation'' instead of ``non-genericity relation''). Eigenvalues satisfying none of these relations are called {\em generic}. \end{defi} \begin{rems}\label{reducible} 1) Reducible tuples of matrices $A_j$ or $M_j$ exist only for non-generic eigenvalues (the eigenvalues of each diagonal block of a block upper-triangular tuple satisfy some non-genericity relation). Therefore for generic eigenvalues existence of tuples implies automatically their irreducibility. This is not true for non-generic eigenvalues. 2) It is clear that the presence of a non-genericity relation with $N=N_0$ implies the presence of one with $N=n-N_0$ (just replace the sets $\Phi _j$ by their complements in $\{ 1,2,\ldots ,n\}$). Therefore in what follows we consider only non-genericity relations with $N\leq n/2$. \end{rems} Part 1) of the above remarks explains why for non-generic eigenvalues it is reasonable to require instead of irreducibility of the tuple only triviality of its centralizer (i.e. only scalar matrices to commute with all matrices from the tuple). 
This is the {\em weak version of the DSP} (or just the {\em weak DSP} for short). \begin{defi} We say that the DSP (resp. the weak DSP) is {\em solvable} for a given tuple of conjugacy classes $c_j$ or $C_j$ if there exist irreducible tuples of matrices $A_j\in c_j$ or $M_j\in C_j$ (resp. if there exist tuples of such matrices with trivial centralizers). \end{defi} We assume throughout the paper that there holds \begin{conv}\label{conv2} The conjugacy classes $c_1$ and $C_1$ are with distinct eigenvalues. \end{conv} The purpose of the present paper is to show as precisely as possible where the border passes between the cases when the DSP is solvable and when it is not but the weak DSP is solvable. \subsection{The known results} \begin{defi}\label{JNform} Call {\em Jordan normal form (JNF) of size $n$} a family $J^n=\{ b_{i,l}\}$ ($i\in I_l$, $I_l=\{ 1,\ldots ,s_l\}$, $l\in L$) of positive integers $b_{i,l}$ whose sum is $n$. Here $L$ is the set of indices of eigenvalues (all distinct) and $I_l$ is the set of indices of Jordan blocks with eigenvalue $l$; $b_{i,l}$ is the size of the $i$-th block with this eigenvalue. An $n\times n$-matrix $Y$ has the JNF $J^n$ (notation: $J(Y)=J^n$) if to its distinct eigenvalues $\lambda _l$, $l\in L$, there belong Jordan blocks of sizes $b_{i,l}$. We use the following notation (illustrated by an example): the JNF $\{ \{ 3,2\} ,\{ 7,6,1\} \}$ is the one with two eigenvalues to the first (to the second) of which there belong two blocks, of sizes $3$ and $2$ (resp. three blocks, of sizes $7$, $6$ and $1$). \end{defi} \begin{nota}\label{dr} 1) We denote by $C(Y)$ the conjugacy class (in $gl(n,{\bf C})$ or $GL(n,{\bf C})$) of the matrix $Y$. We set $C(Y)=C(X)\times C(Z)$ if $Y=\left( \begin{array}{cc}X&0\\0&Z\end{array}\right)$ (here $X$ is $l\times l$ and $Z$ is $(n-l)\times (n-l)$). 2) For a conjugacy class $C$ in $GL(n,{\bf C})$ or $gl(n,{\bf C})$ denote by $d(C)$ its dimension and by $J(C)$ the JNF it defines.
For a matrix $Y\in C$ set $r(C):=\min _{\lambda \in {\bf C}}{\rm rank}(Y-\lambda I)$. The integer $n-r(C)$ is the maximal number of Jordan blocks of $J(Y)$ with one and the same eigenvalue. Set $d_j:=d(C_j)$ (resp. $d(c_j)$), $r_j:=r(C_j)$ (resp. $r(c_j)$). The quantities $r(C)$ and $d(C)$ depend only on the JNF $J(Y)=J^n$, not on the eigenvalues, so we write sometimes $r(J^n)$ and $d(J^n)$. \end{nota} \begin{prop}\label{d_jr_j} (C. Simpson, see \cite{Si1}.) The following couple of inequalities is a necessary condition for the existence of irreducible $(p+1)$-tuples satisfying (\ref{M_j}) or (\ref{A_j}): \[ \begin{array}{lll}d_1+\ldots +d_{p+1}\geq 2n^2-2&~~~~~~~~&(\alpha _n)\\ \\ {\rm for~all~}j,~r_1+\ldots +\hat{r}_j+\ldots +r_{p+1}\geq n&~~~~~~~~& (\beta _n)\end{array}\] \end{prop} The above proposition holds without Convention~\ref{conv2}. When Convention~\ref{conv2} holds, then $r_1=n-1$ and condition $(\beta _n)$ is tantamount to $r_2+\ldots +r_{p+1}\geq n$. \begin{defi}\label{indexrig} The quantity $\kappa =2n^2-d_1-\ldots -d_{p+1}$ (see Notation~\ref{dr}) is called the {\em index of rigidity} of a given tuple of conjugacy classes or of JNFs. It has been introduced by N.Katz, see \cite{Ka}. If condition $(\alpha _n)$ holds, then $\kappa$ can take the values $2$, $0$, $-2$, $-4$, $\ldots$. The case $\kappa =2$ is called the {\em rigid} one. \end{defi} \begin{defi} A {\em multiplicity vector (MV)} is a vector whose components are non-negative integers whose sum is $n$. Further in the text components of the MVs are the multiplicities of the eigenvalues of $n\times n$-matrices. \end{defi} \begin{rem}\label{d_jdiag} For a diagonalizable conjugacy class $C$ with MV equal to $(m_1,\ldots ,m_s)$ one has $d(C)=n^2-m_1^2-\ldots -m_s^2$. \end{rem} \begin{defi}\label{corrdefi} For a given JNF $J^n=\{ b_{i,l}\}$ define its {\em corresponding} diagonal JNF ${J'}^n$. A diagonal JNF is a partition of $n$ defined by the multiplicities of the eigenvalues. 
For each $l$ $\{ b_{i,l}\}$ is a partition of $\sum _{i\in I_l}b_{i,l}$ and $J'$ is the disjoint sum of the dual partitions. Thus if for each fixed $l$ one has $b_{1,l}\geq \ldots \geq b_{s_l,l}$, then the eigenvalue $l\in L$ is replaced by $b_{1,l}$ new eigenvalues $h_{1,l}$, $\ldots$, $h_{b_{1,l},l}$ (hence, ${J'}^n$ has $\sum _{l\in L}b_{1,l}$ distinct eigenvalues). \end{defi} \begin{rems}\label{corrrems} One has the following properties of corresponding JNFs (see \cite{Ko2}): 1) For $l$ fixed, set $g_k$ for the multiplicity of the eigenvalue $h_{k,l}$. Then the first $b_{s_l,l}$ numbers $g_k$ equal $s_l$, the next $b_{s_l-1,l}-b_{s_l,l}$ equal $s_l-1$, $\ldots$, the last $b_{1,l}-b_{2,l}$ equal 1. 2) There hold the equalities $r(J^n)=r({J'}^n)$ and $d(J^n)=d({J'}^n)$. 3) To each diagonal JNF there corresponds a unique JNF with a single eigenvalue. \end{rems} \begin{lm}\label{betaequalalpha} Given the $p+1$ diagonalizable conjugacy classes $c_j$ or $C_j$ satisfying condition $(\beta _n)$ and Convention~\ref{conv2}, condition $(\alpha _n)$ does not hold for them only in {\bf Case A) :} $p=2$, $n\geq 4$ is even and the MVs of $c_2$ and $c_3$ (resp. of $C_2$ and $C_3$) both equal $(n/2,n/2)$. \end{lm} The lemma is proved at the end of the subsection. \begin{rem}\label{betaequalalpharem} Making use of Definition~\ref{corrdefi} and Remarks~\ref{corrrems} one can extend the lemma to the case of not necessarily diagonalizable matrices (except $A_1$ or $M_1$). In such a context, in Case A) each conjugacy class $c_2$, $c_3$ or $C_2$, $C_3$ is either diagonalizable and as in the lemma or with a single eigenvalue and $n/2$ Jordan blocks of size $2$ belonging to it. Indeed, this is the only non-diagonal JNF corresponding to the one with two eigenvalues each of multiplicity $n/2$.
\end{rem} The first important result in the resolution of the DSP was the following \begin{tm}\label{Simpson} (C.Simpson, see \cite{Si1}) For generic eigenvalues and under Convention~\ref{conv2} conditions $(\alpha _n)$ and $(\beta _n)$ together are necessary and sufficient for the solvability of the DSP for given conjugacy classes $C_j$. \end{tm} The same result for classes $c_j$ is proved in \cite{Ko4}, Theorem 19. For arbitrary eigenvalues there holds the following theorem (see \cite{Ko3}, Theorem 6). \begin{tm}\label{weakDSPdiag} Under Convention~\ref{conv2} conditions $(\alpha _n)$ and $(\beta _n)$ together are necessary and sufficient for the solvability of the weak DSP for given conjugacy classes $c_j$ or $C_j$. \end{tm} \begin{rems}\label{4rigidcases} 1) In \cite{Si1} C.Simpson has considered the rigid case for diagonalizable matrices and under Convention~\ref{conv2}. He has shown that conditions $(\alpha _n)$ and $(\beta _n)$ together hold only if $p=2$ and the MVs of the three matrices correspond to one of the four cases: \[ \begin{array}{lllll}(1,\ldots ,1)&(1,\ldots ,1)&(n-1,1)&& {\rm hypergeometric~family}\\ \\ (1,\ldots ,1)&(\frac{n}{2},\frac{n}{2}-1,1)&(\frac{n}{2},\frac{n}{2})&& {\rm even~family}\\ \\ (1,\ldots ,1)&(\frac{n-1}{2},\frac{n-1}{2},1)&(\frac{n+1}{2},\frac{n-1}{2})&& {\rm odd~family}\\ \\ (1,1,1,1,1,1)&(2,2,2)&(4,2)&&{\rm extra~case}\end{array}\] Observe that in all four cases one has $r_2+r_3=n$, i.e. there is an equality in condition $(\beta _n)$. Although C.Simpson considers only matrices $M_j$, the result is automatically extended to the case of matrices $A_j$. 2) If one wants to get rid of the condition that the matrices be diagonalizable (except $A_1$ or $M_1$), then to the above list one should add all cases when a diagonal JNF from the list is replaced by a JNF corresponding to it. All JNFs corresponding to the one with $n$ distinct eigenvalues are the ones in which to each eigenvalue there belongs a single Jordan block.
Using the notation from Definition~\ref{JNform}, give the list of all JNFs corresponding to the other diagonal JNFs (defined by the MVs) encountered in part 1) of the present remarks: \[ \begin{array}{lllll}(n-1,1)&&\{ 2,1,\ldots ,1\} &&\\ \\ (\frac{n}{2},\frac{n}{2}-1,1)&&\{ 3,2,\ldots ,2,1\} &{\rm or}& \{ \{ 1,\ldots ,1\}\{ 2,1,\ldots ,1\} \} ~ {\rm (}n/2~{\rm and~}n/2-2~{\rm units)}\\ \\ (\frac{n}{2},\frac{n}{2})&&\{ 2,\ldots ,2\} &&\\ \\ (\frac{n-1}{2},\frac{n-1}{2},1)&&\{ 3,2,\ldots ,2\} &{\rm or}& \{ \{ 1,\ldots ,1\}\{ 2,1,\ldots ,1\} \} ~{\rm (}(n-1)/2~ {\rm and}~(n-3)/2~{\rm units)}\\ \\ (\frac{n+1}{2},\frac{n-1}{2})&&\{ 2,\ldots ,2,1\} &&\\ \\ (2,2,2)&& \{ 3,3\} &{\rm or}&\{ \{ 2,2\} \{ 1,1\} \} \\ \\ (4,2)&&\{ 2,2,1,1\} &&\end{array}\] \end{rems} {\em Proof of Lemma~\ref{betaequalalpha}:} $1^0$. Suppose first that one has \[ r_j\leq n/2~~{\rm for~~}j=2,\ldots ,p+1~~~~~~~~~~~(*)\] Then one has $d_j\geq 2r_j(n-r_j)$ and there is equality if and only if the MV of $c_j$ or $C_j$ equals $(r_j,n-r_j)$. This follows from Remark~\ref{d_jdiag}. For $r_2+\ldots +r_{p+1}$ fixed the sum $d_2+\ldots +d_{p+1}$ is minimal for $r_2=r_3=[n/2]$ where $[\cdot ]$ stands for the integer part. Indeed, one has $d_2+\ldots +d_{p+1}=(r_2+\ldots +r_{p+1})n-r_2^2-\ldots -r_{p+1}^2$ and one has to maximize $r_2^2+\ldots +r_{p+1}^2$ for $r_2+\ldots +r_{p+1}$ fixed while respecting condition $(*)$. If $n$ is even and $r_2=r_3=n/2$, $r_j=0$ for $j>3$, then condition $(\alpha _n)$ fails if and only if $n\geq 4$ (this is Case A)); if $r_4\neq 0$, then condition $(\alpha _n)$ holds. If $n$ is odd, then the sum $d_2+\ldots +d_{p+1}$ is minimal for $r_2=r_3=[n/2]$, $r_4=1$ and condition $(\alpha _n)$ holds. One cannot have $r_j=0$ for all $j>3$ because then condition $(\beta _n)$ does not hold. $2^0$. Suppose that $r_2>n/2$. Denote the MV of the class $c_2$ or $C_2$ by $(m_1,\ldots ,m_s)$, with $m_1\geq \ldots \geq m_s$.
Then $d_2$ is minimal if $m_1=m_2=\ldots =m_{s-1}=n-r_2$, see Remark~\ref{d_jdiag}. The sum $d_3+\ldots +d_{p+1}$ is minimal if $r_3=m_1=n-r_2$, $r_4=\ldots =r_{p+1}=0$ and the MV defining the class $c_3$ or $C_3$ equals $(r_2,n-r_2)$. Set $n=(s-1)m_1+m_s$. Recall that $1\leq m_s\leq m_1$. Hence, \[ d_1=n^2-n~,~d_2=n^2-(s-1)m_1^2-m_s^2\geq n^2-m_1n~,~d_3=2m_1(n-m_1)~~ {\rm and}\] \[ d_1+d_2+d_3\geq 2n^2-n+m_1n-2m_1^2\geq 2n^2-n+n-2= 2n^2-2~~{\rm because}~~1\leq m_1<n/2 \] The lemma is proved.~~~~~$\Box$ \subsection{The new results} \begin{defi} The eigenvalues of the matrices $A_j$ or $M_j$ are called $k$-{\em generic}, $k\in {\bf N}$, if they satisfy non-genericity relations only with $N\geq k$, see Definition~\ref{genericevs} and part 2) of Remarks~\ref{reducible}. \end{defi} \begin{tm}\label{123generic} Under Convention~\ref{conv2}, if the eigenvalues are $2$-generic, and if $\kappa \leq 0$ (see Definition~\ref{indexrig}), then conditions $(\alpha _n)$ and $(\beta _n)$ are necessary and sufficient for the solvability of the DSP. \end{tm} The theorem is proved in Section~\ref{proofof123generic}. Examples~\ref{exTn+1} and \ref{ex23} below show that the theorem cannot be made stronger. \begin{tm}\label{n+1} Under Convention~\ref{conv2} and for arbitrary eigenvalues, if $r_2+\ldots +r_{p+1}\geq n+1$, then the DSP is solvable for such conjugacy classes. \end{tm} The theorem is proved in Section~\ref{proofofn+1}. Example~\ref{exTn+1} below shows that for $r_2+\ldots +r_{p+1}=n$ Theorem~\ref{n+1} is no longer true. \begin{rem}\label{trivc} The above two theorems imply that under Convention~\ref{conv2} the weak DSP is solvable but the DSP is not only if $r_2+\ldots +r_{p+1}=n$ and either $\kappa =2$ or the eigenvalues satisfy a $1$-relation. 
\end{rem} \begin{cor}\label{cor1} Under Convention~\ref{conv2} a block upper-triangular tuple of diagonalizable matrices $A_j$ or $M_j$ with $3$-generic eigenvalues can be deformed into one from the same conjugacy classes and with trivial centralizer. \end{cor} Indeed, $3$-genericity implies that for each diagonal block (say, of size $s\geq 3$) there holds condition $(\beta _s)$ and Case A) from Lemma~\ref{betaequalalpha} is avoided; hence, condition $(\beta _n)$ holds for the tuple of conjugacy classes (the quantity $r$ computed for the whole matrix is not smaller than the sum of the quantities $r$ computed for the diagonal blocks), and Case A) is avoided (because the blocks are of size $\geq 3$ -- we leave the details for the reader). Hence, for the given tuple of conjugacy classes there hold conditions $(\alpha _n)$ and $(\beta _n)$ (see Lemma~\ref{betaequalalpha}). The claim follows now from Lemma~24 from \cite{Ko3}.~~~~~$\Box$ \begin{cor} Under Convention~\ref{conv2}, if the eigenvalues are $2$-generic, and if Case A) is avoided, then for such a block upper-triangular tuple of diagonalizable matrices $A_j$ or $M_j$ there hold conditions $(\beta _n)$ and $(\alpha _n)$. Moreover, the tuple can be deformed into one from the same conjugacy classes and with trivial centralizer. \end{cor} The first claim is proved as Corollary~\ref{cor1}, the second follows from Lemma~24 from \cite{Ko3}. \begin{nota}\label{semidirect} For a tuple of matrices $A_j$ or $M_j$ in block upper-triangular form $\left( \begin{array}{cc}P_j&Q_j\\0&R_j\end{array}\right)$ (where $P_j\in gl(l,{\bf C})$, $R_j\in gl(n-l,{\bf C})$) set $d_j^1=d(P_j)$, $r^1_j=r(P_j)$, $d_j^2=d(R_j)$, $r^2_j=r(R_j)$, $s_j=$dim${\cal X}_j$ where ${\cal X}_j=\{ Z\in M_{l,n-l}|Z=P_jX_j-X_jR_j, X_j\in M_{l,n-l}\}$. Denote by ${\cal P}$, ${\cal R}$ the representations defined by the tuples of matrices $P_j$, $R_j$. 
\end{nota} \begin{rem}\label{semidirectrem} If the MVs of the diagonalizable matrices $P_j$ and $R_j$ equal respectively $(m_1',\ldots ,m_s')$, $(m_1'',\ldots ,m_s'')$ (there might be zeros among these numbers as some eigenvalue might be absent in $P_j$ or $R_j$), then $s_j=l(n-l)-\sum _{i=1}^sm_i'm_i''$. This implies that if one exchanges the positions of the blocks $P_j$ and $R_j$, then the quantities $s_j$ do not change. \end{rem} \begin{lm}\label{Ext} If the representations ${\cal P}$ and ${\cal R}$ are with trivial centralizers, then one has \[ \delta :={\rm dim\,Ext}^1({\cal P},{\cal R})=s_1+\ldots +s_{p+1}-2l(n-l)~.\] \end{lm} {\em Proof:} Notice first that ${\cal X}_j$ is the space of right upper blocks of matrices of the form \[ \left( \begin{array}{cc}I&X_j\\0&I\end{array}\right) ^{-1} \left( \begin{array}{cc}P_j&0\\0&R_j\end{array}\right) \left( \begin{array}{cc}I&X_j\\0&I\end{array}\right) ~.\] To obtain $\delta$ one must first subtract $l(n-l)$ from $\sum _{j=1}^{p+1}$dim${\cal X}_j$ (because the sum of these right upper blocks must be $0$) and then again subtract $l(n-l)$ (to factor out the simultaneous conjugation with matrices $\left( \begin{array}{cc}I&X\\0&I\end{array}\right)$; as $A_1$ or $M_1$ is with distinct eigenvalues, no such matrix with $X\neq 0$ commutes with all matrices from the tuple).~~~~~$\Box$ \begin{ex}\label{exTn+1} Consider under Convention~\ref{conv2} a tuple of diagonalizable conjugacy classes $c_j$ for which $r_2+\ldots +r_{p+1}=n$, $n>2$. Denote by $\mu _1$ an eigenvalue of $c_1$ and by $\mu _2$, $\ldots$, $\mu _{p+1}$ eigenvalues of $c_2$, $\ldots$, $c_{p+1}$ of maximal possible multiplicity; we assume these multiplicities to be $>n/2$. Suppose that the eigenvalues of the classes $c_j$ satisfy the only non-genericity relation $\mu _1 +\ldots +\mu _{p+1}=0$. Denote by $c_j'\subset gl(n-1,{\bf C})$ the conjugacy classes obtained from $c_j$ by deleting the eigenvalues $\mu _j$. 
Hence, condition $(\beta _{n-1})$ holds for the classes $c_j'$ and the sum of their eigenvalues is $0$. Moreover, the classes $c_j'$ do not correspond to Case A) from Lemma~\ref{betaequalalpha} (we leave this check to the reader). Hence, there exist block upper-triangular matrices $A_j=\left( \begin{array}{cc}A_j'&D_j\\0&\mu _j\end{array}\right)$, $A_j'\in c_j'$, whose tuple defines a semi-direct sum (but not a direct one); the matrices $A_j'$ define an irreducible representation. Indeed, one checks directly that dim\,Ext$^1(A,M)=1$ (this results from $r_2+\ldots +r_{p+1}=n$). The same equality shows that the variety ${\cal V}$ consisting of tuples of matrices $A_j\in c_j$ which are block upper-triangular up to conjugacy (i.e. like $A_j$ above) is of dimension dim${\cal W}$ where ${\cal W}$ is the variety of tuples with trivial centralizers from the classes $c_j$. This means that there exist no irreducible tuples from the classes $c_j$. Indeed, should they exist, their variety (which is part of ${\cal W}$) should contain in its closure the variety ${\cal V}$ (see Theorem~6 from \cite{Ko3}), hence, one would have dim${\cal V}<$dim${\cal W}$ which is a contradiction. The example shows that Theorem~\ref{123generic} is not true without the condition that the eigenvalues be $2$-generic and that Theorem~\ref{n+1} is not true if there is an equality in $(\beta _n)$. A similar example can be given for matrices $M_j$. \end{ex} \begin{ex}\label{ex23} There exist triples of diagonalizable $2\times 2$-matrices $M_j^1$ (resp. $M_j^2$) with (generic) eigenvalues equal to $(a,b)$, $(\mu ,\nu )$, $(\eta ,\xi )$ (resp. to $(c,d)$, $(\mu ,\nu )$, $(\eta ,\zeta )$); same (different) letters denote same (different) eigenvalues.
Then there exists a block upper-triangular triple of matrices $M_j=\left( \begin{array}{cc}M_j^1&B_j\\0&M_j^2\end{array}\right)$ defining a semi-direct sum of the representations ${\cal P}^1$ and ${\cal P}^2$ defined by the matrices $M_j^1$ and $M_j^2$ (because dim\, Ext$^1({\cal P}^1,{\cal P}^2)=1$). One checks directly that a) the centralizer of the matrices $M_j$ is trivial; b) their eigenvalues can be chosen $2$-generic (we assume that they satisfy only the following non-genericity relations: $ab\mu \nu \eta \xi =1$ and $cd\mu \nu \eta \zeta =1$); c) one has $\kappa =2$ for the triple of conjugacy classes of the matrices $M_j$. As $\kappa =2$, one cannot have coexistence of irreducible and reducible triples, see \cite{Ka}. This means that the DSP is not solvable for the triple of conjugacy classes of the matrices $M_j$ (but the weak DSP is, see a)). Hence, Theorem~\ref{123generic} is not true for $\kappa =2$. A similar example can be given for matrices $A_j$. \end{ex} \section{Proof of Theorem~\protect\ref{123generic} \protect\label{proofof123generic}} \subsection{The method of proof} $1^0$. Suppose that for the conjugacy classes $c_j$ or $C_j$ (with $2$-generic eigenvalues) there hold conditions $(\alpha _n)$ and $(\beta _n)$. The variety of matrices $A_j\in c_j$ (satisfying (\ref{A_j})) or of matrices $M_j\in C_j$ (satisfying (\ref{M_j})) is of dimension $d':=d_1+\ldots +d_{p+1}-n^2+1$ at each tuple with trivial centralizer, see \cite{Ko6}, Proposition~2. Given a reducible tuple of matrices from these conjugacy classes (block upper-triangular up to conjugacy, with trivial centralizer, with given sizes of the diagonal blocks and with given conjugacy classes of the restrictions of the matrices to the diagonal blocks) we compute the dimension $d''$ of the variety of such tuples and we show that $d''<d'$. If this is the case of all such reducible tuples, then the variety of tuples with trivial centralizers must contain irreducible tuples as well. 
Hence, the DSP is solvable for the given conjugacy classes. \begin{lm}\label{lmdim} Under Convention~\ref{conv2}, suppose that the tuple of diagonalizable matrices $A_j$ or $M_j$ is as in Notation~\ref{semidirect}, and that the representations ${\cal P}$ and ${\cal R}$ are with trivial centralizers. If $\delta :=$dim\,Ext$^1({\cal P},{\cal R})>1$, then $d''<d'$. \end{lm} All lemmas from the proof of the theorem are proved in Subsection~\ref{prlm}. \begin{cor}\label{existirred} If the representations ${\cal P}$ and ${\cal R}$ from the lemma are irreducible, then there exist irreducible tuples from the conjugacy classes $c(P_j)\times c(R_j)$. \end{cor} The corollary is immediate. We prove the theorem for diagonalizable matrices in $2^0$ -- $5^0$ and then we treat the general case in $6^0$ -- $11^0$. \subsection{The proof for diagonalizable matrices} $2^0$. Prove the theorem for diagonalizable matrices. \begin{lm}\label{deform} Suppose that the tuples of diagonalizable matrices $P_j\in gl(l,{\bf C})$ and $R_j\in gl(n-l,{\bf C})$ (resp. $P_j\in GL(l,{\bf C})$ and $R_j\in GL(n-l,{\bf C})$) are with trivial centralizers, $P_1$ and $R_1$ being each with distinct eigenvalues and with no eigenvalue in common, and that $l\geq n-l\geq 2$. Then $\delta \geq 2$ with the exception of the cases listed below\footnote{When listing the cases we begin with B, not with A, in order to avoid mixing up with Case A) from Lemma~\protect\ref{betaequalalpha}}. In all of them one has $p=2$. (We give the list of the eigenvalues of the matrices $P_2$, $R_2$ and $P_3$, $R_3$; equal (different) letters denote equal (different) eigenvalues if they correspond to one and the same index $j$. In Cases C) -- F) one can exchange the roles of $P_2$, $R_2$ and $P_3$, $R_3$.)
\[ \begin{array}{lllll}{\rm Case~B)}&l=n-l=2& (a,b)&~~&(c,d)\\&&(a,b)&&(c,d)\\ \\ {\rm Case~C)}&l=n-l=2&(a,b)&~~&(c,d)\\&&(a,g)&&(c,d)\\ \\ {\rm Case~D)}&l=n-l=3&(a,b,c)&~~&(f,g,g)\\&&(a,b,c)&&(f,g,g)\\ \\ {\rm Case~E)}&l=2q+1,~n-l=2&(\underbrace{a,\ldots ,a}_{q~{\rm times}}, \underbrace{b,\ldots ,b}_{q~{\rm times}},c)&~~& (\underbrace{f,\ldots ,f}_{q+1~{\rm times}}, \underbrace{g,\ldots ,g}_{q~{\rm times}})\\&&(a,b)&&(f,g)\\ \\ {\rm Case~F)}&l=2q,~n-l=2&(\underbrace{a,\ldots ,a}_{q~{\rm times}}, \underbrace{b,\ldots ,b}_{q-1~{\rm times}},c)&~~& (\underbrace{f,\ldots ,f}_{q~{\rm times}}, \underbrace{g,\ldots ,g}_{q~{\rm times}})\\&&(a,b)&&(f,g) \end{array}\] In Case B) condition $(\alpha _n)$ does not hold for the conjugacy classes $C(P_j)\times C(R_j)$, in the other cases it holds and is an equality. One has $\delta =0$ in Case B) and $\delta =1$ in Cases C) -- F). \end{lm} \begin{cor} In the conditions of the lemma and if the representations ${\cal P}$ and ${\cal R}$ are irreducible the DSP is solvable for the tuple of conjugacy classes $C(S_j)=C(P_j)\times C(R_j)$ (except for Cases B) -- F)). \end{cor} {\em Proof:} The condition $\delta >0$ implies that there exists a semi-direct sum of the representations ${\cal P}$ and ${\cal R}$ (we use Notation~\ref{semidirect} here) which is not reduced to a direct one. The centralizer of this semi-direct sum is trivial. Indeed, one can assume that $P_1$ and $R_1$ are diagonal, so a matrix $X$ from the centralizer must be also diagonal. The $P$-block of $X$ commutes with all matrices $P_j$, hence, it is scalar (because the centralizer of ${\cal P}$ is trivial). In the same way the $R$-block of $X$ must be scalar. Finally, these blocks must be equal, otherwise the commutation relations imply that all blocks $Q_j$ must be $0$ which contradicts the sum of ${\cal P}$ and ${\cal R}$ not to be a direct one. 
Hence, the variety ${\cal V}$ of tuples of matrices defining semi-direct sums of ${\cal P}$ and ${\cal R}$ is non-empty and its dimension is smaller than the dimension of the variety ${\cal W}\supset {\cal V}$ of tuples with trivial centralizers of matrices from the classes $C(S_j)$ (see Lemma~\ref{lmdim}). Hence, ${\cal V}$ is locally a proper subvariety of ${\cal W}$ and a tuple from ${\cal V}$ can be deformed into a tuple from ${\cal W}\backslash {\cal V}$ (see Theorem~6 from \cite{Ko3}). The latter must be irreducible. Indeed, ${\cal V}$ contains locally all reducible tuples because ${\cal P}$ and ${\cal R}$ are irreducible.~~~~~$\Box$\\ $3^0$. Deduce the theorem from the corollary. The weak DSP is solvable for conjugacy classes in the conditions of the theorem. Indeed, $2$-genericity implies that a tuple from the given conjugacy classes is (up to conjugacy) block upper-triangular with diagonal blocks all of sizes $\geq 2$ and defining irreducible representations. (We assume that there is more than one diagonal block, otherwise the tuple is irreducible and there is nothing to prove.) The restriction of the tuple to the union of diagonal blocks is a tuple from the same conjugacy classes (because the conjugacy classes are diagonalizable). Consider a couple of consecutive diagonal blocks. (We denote the restrictions of the matrices $A_j$ or $M_j$ to these two blocks by $A_j^i$, $M_j^i$, $i=1,2$.) They are both of size $\geq 2$, and if one is not in one of the Cases B) -- F), then one can apply the above corollary and obtain the existence of irreducible tuples of matrices from the conjugacy classes $C(A_j^1)\times C(A_j^2)$ (resp. $C(M_j^1)\times C(M_j^2)$). Thus we obtain a block-diagonal tuple of $n\times n$-matrices with one diagonal block less. Continuing like this we end with an irreducible tuple of matrices which solves the DSP for the conjugacy classes $c_j$ or $C_j$. $4^0$. There might be a problem, however, with Cases B) -- F). 
First of all notice that this does not happen if $p\geq 3$. Indeed, in this case one can always choose two diagonal blocks defining irreducible representations and in which at least four conjugacy classes $C(A_j^1)\times C(A_j^2)$ (resp. $C(M_j^1)\times C(M_j^2)$) are not scalar (including $j=1$). So one can permute the diagonal blocks (to get two consecutive blocks not from Cases B) -- F)) and the proof is carried out as in $3^0$. $5^0$. So suppose that $p=2$. We start again with the restriction of the tuple to the set of diagonal blocks defining irreducible representations. It is not possible to have all couples of diagonal blocks to correspond to Case B) from the lemma because this will mean that the classes $c_j$ or $C_j$ are from Case A) of Lemma~\ref{betaequalalpha}. So choose a couple of consecutive diagonal blocks which are not from Case B) and replace them by a single block $B$ defining a semi-direct sum of the representations which they define while keeping the other diagonal blocks the same. This is possible because for the chosen blocks one has $\delta \geq 1$, see the lemma. At each next step one has a block-diagonal tuple with diagonal blocks defining irreducible representations except $B$ which defines one with trivial centralizer. At each step choose a block $W$ different from $B$ and next to $B$ (hence, their couple is not from Case B) because $B$ is of size $>2$), so one can replace it by a new block (which is the new block $B$) defining a semi-direct sum of the representations they define. So at each step the blocks $B$, $W$ are not from Case B). At the last step we obtain a representation with trivial centralizer. The last couple of blocks $B$, $W$ is not from Cases B) -- F). Indeed, should it be from these cases, then for the conjugacy classes $c_j$ or $C_j$ one should have $\kappa \geq 2$ (to be checked directly). Hence, for the last couple of blocks $B$, $W$ one has $\delta \geq 2$. This means that $d''<d'$, see $1^0$. 
This proves the theorem in the case of diagonalizable matrices.\\ \subsection{The proof in the general case} $6^0$. \begin{conv} From here till the end of this subsection when Case A) of Lemma~\ref{betaequalalpha} or Cases B) -- F) of Lemma~\ref{deform} are cited the JNFs of the matrices $A_j$ or $M_j$ ($j\geq 2$) will be assumed either to be the ones given in these two lemmas or to correspond to them, see Remarks~\ref{betaequalalpharem} and \ref{4rigidcases}. \end{conv} Such a change of the definition of these cases does not change the quantity $\delta$, see part 2) of Remarks~\ref{corrrems}. Hence, Lemma~\ref{deform} is applicable after the change as well. $7^0$. Consider a tuple in block upper-triangular form whose diagonal blocks define irreducible representations. Consider the restriction of the tuple to the set of diagonal blocks. The conjugacy class $c_j'$ (resp. $C_j'$) of the restriction of the matrix $A_j$ (resp. $M_j$) from the tuple to the set of diagonal blocks belongs to the closure of $c_j$ (resp. of $C_j$) but is not necessarily equal to it (one might obtain a ``less generic'' Jordan structure when cutting off the blocks above the diagonal; the eigenvalues and their multiplicities do not change). If for the conjugacy classes $c_j'$ or $C_j'$ the index of rigidity is $\leq 0$, then as in the case of diagonalizable conjugacy classes one shows that the DSP is solvable for the classes $c_j'$ or $C_j'$. This implies its solvability for the classes $c_j$ (resp. $C_j$) (which can be proved by analogy with part~2 of Lemma~53 from \cite{Ko2}). $8^0$. Suppose (in $8^0$ -- $11^0$) that the index of rigidity of the tuple of conjugacy classes $c_j'$ or $C_j'$ is $>0$. 
Then for some $j_0>1$ there exists a conjugacy class $c_{j_0}''$ (or $C_{j_0}''$; we write further only $c_{j_0}''$ for short) such that 1) $c_{j_0}'$ belongs to the closure of $c_{j_0}''$; 2) $c_{j_0}''$ is obtained from $c_{j_0}'$ when a couple of Jordan blocks with one and the same eigenvalue, of sizes $l,s$, $l\geq s$, are replaced by Jordan blocks (with the same eigenvalues) of sizes $l+1,s-1$, see Section~8 in \cite{Ko2}; the rest of the Jordan structure remains the same; 3) $c_{j_0}''$ belongs to the closure of $c_{j_0}$ (possibly, $c_{j_0}''=c_{j_0}$). When passing from $c_{j_0}'$ to $c_{j_0}''$ the index of rigidity decreases by at least $2$. If the change 2) can take place by changing the JNF of the restriction of $A_{j_0}$ or $M_{j_0}$ to some diagonal block, then we perform this change and further the proof is done as in the case of diagonalizable matrices. $9^0$. If for the change 2) one has to change a block above the diagonal, and if there are at least $3$ diagonal blocks, then one proceeds as in $5^0$ and one proves that $d''<d'$ exactly in the same way. Indeed, at the first step one replaces two diagonal blocks (defining irreducible representations) by a single one (defining their semi-direct sum). Namely, using Notation~\ref{semidirect}, one chooses the block $Q_{j_0}$ so that the change 2) takes place. Then one chooses the block $Q_1$ so that condition (\ref{A_j}) or (\ref{M_j}) holds (recall that $A_1$ and $M_1$ have distinct eigenvalues, therefore changing the block $Q_1$ while keeping $P_1$ and $R_1$ the same does not change the conjugacy class of $A_1$ or $M_1$). The next steps are as in $5^0$. $10^0$. If there are just two diagonal blocks, not from Case B), then one first constructs a block upper-triangular tuple (with trivial centralizer) defining a semi-direct sum of the representations defined by the diagonal blocks but without changing the class $c_{j_0}'$. 
Then conjugate the tuple with a block upper-triangular matrix so that the matrix $A_{j_0}$ or $M_{j_0}$ is in JNF (hence, it will be block diagonal as well). After this perform a change $A_{j_0}\mapsto A_{j_0}+\varepsilon U$ or $M_{j_0}\mapsto M_{j_0}+\varepsilon U$, $\varepsilon \in ({\bf C},0)$ where only the left lower block of $U$ is non-zero and is not of the form $R_{j_0}X-XP_{j_0}$; $U$ is chosen such that for $\varepsilon \neq 0$ one has $A_{j_0}\in c_{j_0}''$ (resp. $M_{j_0}\in C_{j_0}''$). To preserve condition (\ref{A_j}) or (\ref{M_j}) one then looks for deformations of the matrices $A_j$ or $M_j$, $j\neq j_0$, analytic in $\varepsilon$. Such a deformation exists, see the description of the ``basic technical tool'' in \cite{Ko2} (one conjugates the matrices $A_j$ or $M_j$, $j\neq j_0$, with matrices which are analytic deformations of $I$). \begin{lm}\label{Burnside} For $\varepsilon \neq 0$ small enough the constructed tuple is irreducible. \end{lm} The lemma implies the theorem in this case. $11^0$. If the two diagonal blocks are from Case B), then one change 2) is not sufficient to make the index of rigidity $\leq 0$. Hence, at least two changes are necessary. With the first of them we construct the semi-direct sum of representations defined by the two diagonal blocks; this time we change one of the JNFs for $j=j_*>1$. When performing this change we change the block $Q_{j_*}$ and then we change $Q_1$ to restore condition (\ref{A_j}) or (\ref{M_j}). Suppose that the second change must take place for $j=j_0\neq j_*$. Then after the second change 2) (performed as in $10^0$, using an analytic deformation) one has an irreducible representation by full analogy with Lemma~\ref{Burnside}. If $j_*=j_0$ (and, say, $j_0=2$), then there are two possibilities. Either this JNF has a single eigenvalue, or it has two double eigenvalues and three Jordan blocks. In the first case one can assume that the couple $A_2$, $U$ (resp. 
$M_2$, $U$) looks like this (after the analog of the conjugation from $10^0$): \[ A_2=\left( \begin{array}{cccc}a&1&0&0\\0&a&0&\underline{1}\\0&0&a&1\\0&0&0&a \end{array}\right) ~~~,~~~U=\left( \begin{array}{cccc}0&0&0&0\\ 0&0&0&0\\1&0&0&0\\0&0&0&0\end{array}\right) ~.\] We underline the unit which is introduced after the first change 2). Its introduction results in changing the JNF like this: $\{ 2,2\} \mapsto \{ 3,1\}$. In the second case the couple looks like this: \[ A_2=\left( \begin{array}{cccc}a&0&\underline{1}&0\\0&b&0&0\\0&0&a&0\\0&0&0&b \end{array}\right) ~~~,~~~U=\left( \begin{array}{cccc}0&0&0&0\\ 0&0&0&0\\0&0&0&0\\0&1&0&0\end{array}\right) ~.\] For the rest the proof is carried out as in $8^0$ -- $10^0$. The theorem is proved.~~~~~$\Box$ \subsection{Proofs of the lemmas\protect\label{prlm}} {\bf Proof of Lemma~\ref{lmdim}:}\\ To obtain $d''$ one must add $l(n-l)$ to $d'''$, the dimension of the variety of block upper-triangular tuples as in the lemma (truly block upper-triangular, not only up to conjugacy). Indeed, $l(n-l)$ is the size of the left lower block and adding this corresponds to taking into account the possibility to conjugate such a tuple by matrices of the form $\left( \begin{array}{cc}I&0\\X&I\end{array}\right)$. One has $d'''=\Delta _1+\Delta _2+\Delta _3$ where $\Delta _1=\sum _{j=1}^{p+1}d_j^1-l^2+1$, $\Delta _2=\sum _{j=1}^{p+1}d_j^2-(n-l)^2+1$ and $\Delta _3=\sum _{j=1}^{p+1}s_j-l(n-l)$ (the contributions to $d'''$ from the $P$-, $R$- and $Q$-block). On the other hand, $d_j=d_j^1+d_j^2+2s_j$ (this can be deduced from Remark~\ref{d_jdiag}). Hence, $d'''=\sum _{j=1}^{p+1}d_j-\sum _{j=1}^{p+1}s_j-n^2+l(n-l)+2= \sum _{j=1}^{p+1}d_j-\delta -n^2-l(n-l)+2$ and $d''=\sum _{j=1}^{p+1}d_j-\delta -n^2+2$. One has $d'=\sum _{j=1}^{p+1}d_j-n^2+1=d''+\delta -1$. Hence, for $\delta >1$ one has $d'>d''$.~~~~~$\Box$\\ {\bf Proof of Lemma~\ref{deform}:}\\ We transform the proof of the lemma into finding the cases when $\delta \leq 1$. 
\begin{stat}\label{stat1} One has~~~$s_j\geq r_j^1(n-l)~~~(A)$~~~and~~~$s_j\geq r_j^2l~~~(B)$~~~(see Notation~\ref{semidirect}). \end{stat} {\em Proof:}\\ Use Remark~\ref{semidirectrem} (and the notation from it) and Lemma~\ref{Ext}. Denote by $\mu '$ (resp. $\mu ''$) the biggest among the numbers $m_j'$ (resp. $m_j''$). Then $s_j\geq l(n-l)-\mu '(n-l)=r_j^1(n-l)$ because $\sum _{i=1}^sm_i'm_i''\leq \mu '\sum _{i=1}^sm_i''=\mu '(n-l)$. In the same way $s_j\geq l(n-l)-\mu ''l=r_j^2l$.~~~~~$\Box$ \begin{rem}\label{equ} Inequality $(A)$ becomes an equality exactly if $m_i''=0$ whenever $m_i'<\mu '$. Inequality $(B)$ becomes an equality exactly if $m_i'=0$ whenever $m_i''<\mu ''$. \end{rem} \begin{stat}\label{stat2} If for some index $j>1$ (say, $j=2$) one has $r_j^1=0$, $r_j^2>0$, then one has $\delta \geq 2$. The same is true if $r_j^1=r_j^2=0$ and $c_j$ is not scalar. The same is true if $r_j^1>0$, $r_j^2=0$. \end{stat} {\em Proof:}\\ Consider the first and the second of the three claims. By $(A)$ one has $s_3+\ldots +s_{p+1}\geq (r_3^1+\ldots +r_{p+1}^1)(n-l)\geq l(n-l)$; recall that $s_1=l(n-l)$. In the first claim one has also $s_2\geq r_j^2l\geq 2$, hence, $\delta \geq 2$. In the second claim the conjugacy class $c_2$ defines the MV $(l,n-l)$ and one has $s_2=l(n-l)\geq 2$ and again $\delta \geq 2$. The third claim is proved in the same way as the first one using $(B)$.~~~~~$\Box$\\ \begin{conv} From now till the end of the proof of the lemma we assume (using the above statement) that for all indices $j>1$ one has $r_j^1>0$, $r_j^2>0$. \end{conv} \begin{stat}\label{stat3} If $p\geq 3$, then $\delta \geq 2$. \end{stat} {\em Proof:}\\ It suffices to consider the following two cases (up to permutation of the indices $j>1$): 1) $r_2^1\geq l/2$, $r_3^1\geq l/2$, $r_4^1>0$; 2) $r_2^1>0$, $l/2>r_j^1>0$ for $j>2$. In case 1) one has $s_2+s_3\geq l(n-l)$ (see $(A)$), $s_4\geq n-l\geq 2$, $s_1=l(n-l)$, so $\delta \geq 2$, see Lemma~\ref{Ext}. 
In case 2) recall first that $r_j^2>0$ for $j>2$. For $j=3,4,\ldots ,p+1$ one has $s_j>r_j^1(n-l)$, i.e. $s_j\geq r_j^1(n-l)+1$, see Statement~\ref{stat1} and Remark~\ref{equ}. One has $s_1=l(n-l)$, $s_2\geq r_2^1(n-l)$ (see $(A)$), hence, $s_1+\ldots +s_{p+1}\geq 2l(n-l)+2$ and again $\delta \geq 2$.~~~~~$\Box$ \begin{conv} From now till the end of the proof of the lemma we assume that $p=2$, see Statement~\ref{stat3}. \end{conv} \begin{stat}\label{stat4} If $r_2^1+r_3^1\geq l+1$ or $r_2^2+r_3^2\geq n-l+1$, then $\delta \geq 2$. \end{stat} Indeed, if $r_2^1+r_3^1\geq l+1$, then (see $(A)$) $s_2+s_3\geq (l+1)(n-l)\geq l(n-l)+2$ and $\delta \geq 2$. In the same way if $r_2^2+r_3^2\geq n-l+1$, then $s_2+s_3\geq l(n-l+1)\geq l(n-l)+2$ and $\delta \geq 2$.~~~~~$\Box$ \begin{stat}\label{stat5} If $l$ is even and $r_2^1=r_3^1=l/2$, then $\delta \geq 2$, except in Cases B), C) and F) from the lemma. \end{stat} {\em Proof:}\\ $1^0$. If $l=2$, then $n-l=2$ and one has $\delta \leq 1$ only in one of Cases B) or C) from the lemma.\\ $2^0$. If $l\geq 4$, then $\delta \geq 2$. Indeed, to avoid Case A) from Lemma~\ref{betaequalalpha} for the block $P$, one must suppose that at least one of the two matrices $P_2$ and $P_3$ (say, $P_2$) has at least three distinct eigenvalues. Assume that the MV of $P_2$ looks like this: $(m_1',\ldots ,m_s')$, with $m_1'=\mu '>m_2'\geq \ldots \geq m_s'$ (the inequality $m_1'>m_2'$ results from $r_2^1=l/2$). If for at least two indices $i>1$ one has $m_i''\neq 0$, then for them one has $m_i'm_i''<\mu 'm_i''$ and $\sum _{i=1}^sm_i'm_i''\leq \mu '\sum _{i=1}^sm_i''-2=\mu '(n-l)-2$. Hence, $s_2\geq r_2^1(n-l)+2$ (see Remark~\ref{semidirectrem}), $s_3\geq r_3^1(n-l)$ (see $(A)$) and $\delta \geq 2$.\\ $3^0$. If for only one index $i>1$ one has $m_i''\neq 0$ (i.e. 
$R_2$ has only two different eigenvalues), then similarly $s_2\geq r_2^1(n-l)+1$ with equality only if the MV of $P_2$ equals $(l/2,l/2-1,1)$ and the two eigenvalues, of the two greatest multiplicities, are eigenvalues of $R_2$ as well; moreover, its only eigenvalues.\\ $4^0$. If $P_3$ has at least three different eigenvalues, then in the same way $s_3\geq r_3^1(n-l)+1$ and, hence, $\delta \geq 2$. So the only possibility to have $\delta \leq 1$ is the MV of $P_3$ to be $(l/2,l/2)$. If $n-l=2$, then $\delta \leq 1$ only in Case F). If $n-l>2$, then $R_3$ must have at least three distinct eigenvalues (otherwise condition $(\alpha _{n-l})$ fails for the block $R$) and $s_3\geq lr_3^2+1$. One has also $s_2\geq lr_2^2+1$ (to be checked directly), hence, again $\delta \geq 2$.~~~~~$\Box$ \begin{stat}\label{stat6} Suppose that $r_2^1+r_3^1=l$. If $r_2^1>l/2$, $r_3^1<l/2$ or $r_2^1<l/2$, $r_3^1>l/2$, then $\delta \geq 2$ except in Cases D), E) from the lemma. \end{stat} {\em Proof:}\\ $1^0$. Without loss of generality we assume that $r_2^1>l/2$, $r_3^1<l/2$. If $l=3$ and $n-l=2$ or $n-l=3$, then one has $\delta \leq 1$ only in Case E) with $q=1$ or in Case D) of the lemma. Indeed, $s_j$ is minimal only if all eigenvalues of $R_j$ are eigenvalues of $P_j$ as well for $j=2,3$.\\ $2^0$. If $l\geq 5$ and $n-l\geq 4$, then $\delta \geq 2$. Indeed, if the MVs of $P_3$ and $R_3$ equal respectively $(m_1',\ldots ,m_s')$, $(m_1'',\ldots ,m_s'')$, with $m_1'=\mu '>m_2'\geq \ldots \geq m_s'$, then one has $m_i'm_i''\leq (\mu '-1)m_i''$ for $i>1$ and $m_i''>0$; hence, \[ \sum _{i=1}^sm_i'm_i''\leq \mu 'm_1''+(\mu '-1)\sum _{i=2}^sm_i'' =\mu '\sum _{i=1}^sm_i''-\sum _{i=2}^sm_i''=\mu '(n-l)-\sum _{i=2}^sm_i''~.\] If $\sum _{i=2}^sm_i''\geq 2$, then $s_3\geq r_3^1(n-l)+2$ (see Remark~\ref{semidirectrem}), $s_2\geq r_2^1(n-l)$ (see $(A)$) and $\delta \geq 2$. So $\delta$ can be $\leq 1$ only in case that $\sum _{i=2}^sm_i''=1$, i.e. the MV of $R_3$ is of the form $(n-l-1,1)$. 
If this is so, then the MV of $R_2$ is $(1,\ldots ,1)$ (otherwise $(\alpha _{n-l})$ fails for the block $R$), i.e. $R_2$ has distinct eigenvalues. Hence, $s_2\geq l(n-l-1)$ whatever the eigenvalues of $P_2$ are. But then $s_3$ is minimal if and only if the MV of $P_3$ equals $(l-1,1)$ and $P_3$ has the same eigenvalues as $R_3$ (the proof of this is left for the reader). In this case $s_3=(l-1)+(n-l-1)=n-2$, hence, $s_2+s_3\geq l(n-l)+n-l-2$ and for $n-l\geq 4$ one has $\delta \geq 2$.\\ $3^0$. If $l\geq 5$ and $n-l=2$, and if $P_2$ has at least $4$ distinct eigenvalues, then $s_2\geq l+2$. Indeed, $s_2$ is minimal only if each eigenvalue of $R_2$ is eigenvalue of $P_2$ as well. In such a case one has $s_2=2l-m_{i_1}'-m_{i_2}'$ where $m_{i_1}'$, $m_{i_2}'$ are the multiplicities of the eigenvalues of $R_2$ as eigenvalues of $P_2$. As $m_{i_1}'+m_{i_2}'\leq l-2$ (there are at least two more eigenvalues of $P_2$, each of multiplicity $\geq 1$), one gets $s_2\geq l+2$. In a similar way, $s_3\geq l$, with equality when $P_3$ has two eigenvalues which are eigenvalues of $R_3$ as well, hence, $\delta \geq 2$. If $P_2$ has exactly three distinct eigenvalues, then one has $s_2\geq l+1$ with equality exactly if the eigenvalue which is not eigenvalue of $P_2$ is simple. Hence, $\delta \leq 1$ only in Case~E) from the lemma.\\ $4^0$. If $l\geq 5$ and $n-l=3$, then at least one of the matrices $R_2$, $R_3$ must have $3$ distinct eigenvalues (otherwise $(\beta _3)$ fails for the block $R$). The respective quantity $s_j$ must be $\geq 2l=r_j^2l$, see $(B)$. If the other matrix $R_j$ ($j=2$ or $3$) has also $3$ distinct eigenvalues, then $s_2+s_3\geq 4l>3l+2$ and $\delta \geq 2$. If the MV of the other matrix $R_j$ (say, $R_3$) equals $(2,1)$, then $s_3$ is minimal exactly if $P_3$ has the same eigenvalues as $R_3$, of multiplicities $l-1$ and $1$. In this case $s_3=l+1$. 
But then $P_2$ must be with distinct eigenvalues (otherwise $(\alpha _l)$ fails for the block $P$), $s_2\geq 3l-3$, and $\delta >2$.\\ $5^0$. If $l=4$, then one can have $r_2^1>2$, $r_3^1<2$ only if $P_3$ has four distinct eigenvalues and the MV of $P_3$ is $(1,3)$. We let the reader check oneself that in all possible cases ($n-l=2,3$ or $4$) one has $\delta \geq 2$.~~~~~$\Box$\\ The lemma follows from Statements~\ref{stat2}, \ref{stat3}, \ref{stat4}, \ref{stat5} and \ref{stat6}.~~~~~$\Box$\\ {\bf Proof of Lemma~\ref{Burnside}:}\\ Denote by ${\cal T}$, the matrix algebra of all block upper-triangular matrices with square diagonal blocks of sizes $l$ and $n-l$. A priori the representation defined by the deformed matrices is either irreducible (and the corresponding matrix algebra is $gl(n,{\bf C})$) or is reducible and defines a matrix algebra which up to analytic conjugation equals ${\cal T}$ (the statement results from a more general one which can be found in \cite{Ko7}). The second case, however, is impossible because such a conjugation of $A_{j_0}$ or $M_{j_0}$ (with a matrix $I+O(\varepsilon )$) cannot make the left lower block of $U$ disappear (because it is not of the form $R_{j_0}X-XP_{j_0}$).~~~~~$\Box$ \section{Proof of Theorem~\protect\ref{n+1}\protect\label{proofofn+1}} \subsection{Proof in the case of matrices $A_j$\protect\label{caseofA_j}} \begin{defi}\label{regulardefi} A conjugacy class is called {\em regular} if to every eigenvalue there corresponds a single Jordan block of size equal to the multiplicity of the eigenvalue. \end{defi} \begin{rem}\label{regularrem} The JNFs of all regular conjugacy classes correspond to each other (see Definition~\ref{corrdefi}) and, in particular, to the diagonal JNF with distinct eigenvalues and to the JNF with a single eigenvalue and a single Jordan block of size $n$. 
\end{rem} \begin{prop}\label{reg2n} The DSP is positively solvable for classes $c_j$ where $c_1$ is regular and one has $r_2+\ldots +r_{p+1}\geq n+1$. \end{prop} The proposition implies the theorem in the case of matrices $A_j$. To prove the proposition we need the following lemma. \begin{lm}\label{nilp2n} The DSP is positively solved for tuples of nilpotent conjugacy classes $c_j$ with $r_1+\ldots +r_{p+1}\geq 2n$ in which $r_1=n-1$, i.e. the conjugacy class $c_1$ has a single Jordan block of size $n$. \end{lm} The lemma is a particular case of the results in \cite{Ko8}. It follows also from the ones in \cite{C-B}.\\ {\em Proof of the proposition:}\\ Given an irreducible tuple of nilpotent matrices $A_j$ satisfying the conditions of the lemma one can deform it analytically into an irreducible tuple of matrices $A_j'$ where for each $j$ either $J(A_j')=J(A_j)$ or $J(A_j')$ corresponds to $J(A_j)$. The eigenvalues of the matrices $A_j'$ must be close to $0$. These statements can be deduced from \cite{Ko2}, see the definition of the basic technical tool there which is a way to deform analytically tuples of matrices with trivial centralizers; compare also with Lemma~53 from \cite{Ko2}. Thus one obtains the positive solvability of the DSP for all tuples of JNFs $J(c_j)$ satisfying the condition $r_2+\ldots +r_{p+1}\geq n+1$; see Definition~\ref{corrdefi} and Remarks~\ref{corrrems} (especially part 2) of them). However, solvability is proved only for eigenvalues close to $0$. By multiplying the tuples of matrices $A_j'$ by non-zero complex numbers (i.e. $(A_1',\ldots ,A_{p+1}')\mapsto (gA_1',\ldots ,gA_{p+1}'), g\in {\bf C}^*$) one can obtain irreducible tuples with the same JNFs as $A_j'$ and with any eigenvalues whose sum (taking into account the multiplicities) is $0$. This proves the proposition.~~~~~$\Box$ \subsection{Proof for matrices $M_j$} Suppose that for some conjugacy classes $C_j$ satisfying the conditions of the theorem there exist no irreducible tuples. 
Then there exist tuples with trivial centralizers. This follows from Theorem~\ref{weakDSPdiag} and from Lemma~\ref{betaequalalpha}. Each such tuple can be conjugated to a block upper-triangular form in which the diagonal blocks define irreducible or one-dimensional representations. Denote by $s_1$, $\ldots$, $s_{\nu}$ the sizes of the diagonal blocks. We say that these sizes (considered up to permutation) define the {\em type} of the tuple. The tuple is called {\em maximal} if there is no tuple with trivial centralizer and of type $s_1'$, $\ldots$, $s_h'$ such that $h<\nu$ and the sizes $s_i'$ are obtained from the sizes $s_j$ by one or several operations of the form $(s_{j_1},s_{j_2})\mapsto s_{j_1}+s_{j_2}$. We say that the type $s_1'$, $\ldots$, $s_h'$ is {\em greater} than the type $s_1$, $\ldots$, $s_{\nu}$. \begin{lm}\label{constructtuple} Given a maximal tuple of matrices $M_j$ one can construct a tuple of matrices $A_j\in c_j$ of the same type, with trivial centralizer, with $M_j=\exp (2\pi iA_j)$ (up to conjugacy) where for $j>1$ the matrix $A_j$ has no couple of eigenvalues whose difference is a non-zero integer. \end{lm} The lemma is proved in the next subsection. \begin{rem} The condition ``$M_j=\exp (2\pi iA_j)$ (up to conjugacy)'' is introduced with the aim to use the fact that the monodromy operators of the Fuchsian system $dX/dt=(\sum _{j=1}^{p+1}A_j/(t-a_j))X~(**)$ in the absence of non-zero integer differences between the eigenvalues of the matrices $A_j$ equal (up to conjugacy) $\exp (2\pi iA_j)$. See the definition of the monodromy operators in the Introduction of \cite{Ko2}. \end{rem} For the tuple of matrices $A_j$ from the lemma one has that they can be analytically deformed into an irreducible tuple of such matrices. 
Indeed, for their conjugacy classes the DSP is positively solved (this is already proved in Subsection~\ref{caseofA_j}) and all reducible tuples from these classes belong to the closure of the variety of irreducible tuples, see Theorem~6 from \cite{Ko3}. All irreducible tuples of matrices $A_j^0$ close to tuples $A_j$ from the lemma define Fuchsian systems $dX/dt=(\sum _{j=1}^{p+1}A_j^0/(t-a_j))X~(***)$ whose monodromy groups must be (up to conjugacy) from the type of the tuple of matrices $M_j$ from the lemma. This follows from the tuple of matrices $M_j$ being maximal. Consider the monodromy operators (denoted also by $M_j$) of systems $(**)$ with matrices-residua $A_j$. One has $M_j=\exp (2\pi iA_j)$ (up to conjugacy) and there is a bijection between the eigenvalues of the matrices $A_j$ and the ones of the matrices $M_j$. For each diagonal block the sum of the eigenvalues of the matrices $A_j$ from the lemma is $0$. Hence, the sum of the same eigenvalues of the matrices $A_j^0$ is also $0$. If the monodromy group of system $(***)$ is of the type of the one of system $(**)$, then by Theorem~5.1.2 from \cite{Bo} it should be possible to conjugate the tuple of matrices $A_j^0$ to a block upper-triangular form with blocks as in the type of the matrices $M_j$. This contradicts the irreducibility of the tuple of matrices $A_j^0$. \begin{rem} When applying Theorem~5.1.2 from \cite{Bo} we use the fact that there are no non-zero integer differences between the eigenvalues of the matrices $A_j$. Thus to each eigenvalue $\sigma$ of $M_j$ of a given multiplicity there corresponds only one eigenvalue $\lambda$ of $A_j$ (which is of the same multiplicity) where $\sigma =\exp (2\pi i\lambda )$. Theorem~5.1.2 from \cite{Bo} speaks about the exponents (i.e. the eigenvalues of the matrices $A_j$) corresponding to an invariant subspace. In the absence of non-zero integer differences these exponents are defined by the eigenvalues of the monodromy operators in a unique way. 
\end{rem} The theorem is proved.~~~~~$\Box$ \subsection{Proof of Lemma~\protect\ref{constructtuple}} $1^0$. One can construct for each size $s_i$ of the type a tuple of matrices $A_{i,j}^*$ such that one has (up to conjugacy) $\exp (2\pi i A_{i,j}^*)=M_{i,j}^*~(*)$ where $M_{i,j}^*$ are the restrictions of the matrices $M_j$ to the diagonal block of size $s_i$, and the matrices $A_{i,j}^*$ define an irreducible or one-dimensional representation. In the one-dimensional case the claim is evident. In the irreducible case one can construct a Fuchsian system with matrices-residua equal up to conjugacy to $A_{i,j}^*$ (where the $A_{i,j}^*$ satisfy $(*)$), the real parts of whose eigenvalues can be chosen to belong to $[0,1)$ for $j>1$ (to avoid non-zero integer differences between eigenvalues); the construction is explained in \cite{ArIl}.\\ $2^0$. Consider the tuple of matrices $A_j'$ which are block-diagonal, their restrictions to each diagonal block of size $s_i$ being equal to the blocks $A_{i,j}^*$ from $1^0$. We complete them (in $3^0$) by adding entries in the blocks above the diagonal (the newly obtained matrices are denoted by $A_j$) so that one would have $\exp (2\pi i A_j)=M_j$ up to conjugacy. We do this for $j>1$ and then we define $A_1$ so that $A_1+\ldots +A_{p+1}=0$. As $A_1$ has distinct eigenvalues, whatever entries we add in the blocks above the diagonal, they do not change the conjugacy class of $A_1$. As $\exp (2\pi i A_1')=M_1$ up to conjugacy, one will also have $\exp (2\pi i A_1)=M_1$ up to conjugacy.\\ $3^0$. One can conjugate the matrix $M_j$ by a block upper-triangular matrix $B_j$ so that the diagonal blocks of $(B_j)^{-1}M_jB_j$ of sizes $s_i$ are in JNF and non-zero entries are present in the blocks above the diagonal only in positions $(i,j)$ such that the $i$-th and $j$-th eigenvalues coincide. 
For each eigenvalue $\sigma _{k,j}$ of $M_j$ denote by $M_j(\sigma _{k,j})$ the matrix whose restriction to the rows and columns of the eigenvalue $\sigma _{k,j}$ is the same as the one of $(B_j)^{-1}M_jB_j$ and the rest of whose entries are $0$. One can conjugate the matrices $A_j'$ by block-diagonal matrices $D_j$ so that the matrix $(D_j)^{-1}A_j'D_j$ is in JNF and for each diagonal block one has $\exp (2\pi iA_{i,j}^*)=M_{i,j}^*$ (up to conjugacy). Set $(D_j)^{-1}A_j'D_j=\sum _{k,j}\lambda _{k,j}A_j'(\lambda _{k,j})$ where $\lambda _{k,j}$ are the distinct eigenvalues of $A_j'$ and $A_j'(\lambda _{k,j})$ is the matrix whose restriction to the rows and columns of the eigenvalue $\lambda _{k,j}$ is the same as the one of $(D_j)^{-1}A_j'D_j$ and the rest of whose entries are $0$. Define the matrices $A_j(\lambda _{k,j})$ by analogy with the matrices $A_j'(\lambda _{k,j})$. Recall that one has $\sigma _{k,j}=\exp (2\pi i \lambda _{k,j})$. Hence, for each diagonal block and for each couple $(k,j)$ the restrictions of the matrices $A_j'(\lambda _{k,j})-\lambda _{k,j}I$ and $M_j(\sigma _{k,j})-\sigma _{k,j}I$ to it are equal. Define the matrices $(D_j)^{-1}A_jD_j$ by the rule that for all $(k,j)$ the matrices $A_j(\lambda _{k,j})-\lambda _{k,j}I$ and $M_j(\sigma _{k,j})-\sigma _{k,j}I$ are equal. The rule implies that the JNFs of the matrices $(B_j)^{-1}M_jB_j$ and $(D_j)^{-1}A_jD_j$, hence, of $M_j$ and $A_j$, coincide. As there are no non-zero integer differences between eigenvalues of $A_j$, one has also $\exp (2\pi iA_j)=M_j$ (up to conjugacy).\\ $4^0$. The tuple of matrices $A_j$ thus constructed might fail to have trivial centralizer. In that case the tuple must define a direct sum of representations (this follows from $A_1$ having distinct eigenvalues). So conjugate it to a block-diagonal form where each block (we call these blocks {\em big blocks}) is small-block upper-triangular and with trivial centralizer. The small blocks are of sizes $s_i$. 
As in Lemma 24 from \cite{Ko3} one shows that if there are two big blocks of sizes $u,v$ where $u\geq 3$, $v\geq 2$, then one can deform the tuple into one in which these two big blocks are replaced by a single big block of size $u+v$ (with trivial centralizer and with the same small blocks as the two big blocks) while the other big blocks remain the same. The statement holds also if $u=v=2$, $p=2$ (see again Lemma 24 from \cite{Ko3}) and for at least one index $j\geq 2$ the restrictions of the tuple to the two big blocks belong to different conjugacy classes, or if $u=v=2$, $p\geq 3$ and no matrix is scalar. If there is a big block $B$ of size $1$, then it follows from $r_2+\ldots +r_{p+1}\geq n+1$ that for at least one of the other big blocks $B'$ one has Ext$^1(B,B')\geq 1$. Indeed, without loss of generality one can assume that the restrictions of the matrices to the block $B$ equal $0$ for all values of $j$. Hence, for each other big block $B'$ one has Ext$^1(B,B')=\rho (B')-2\sigma (B')$ where $\rho (B')$ is the sum of the ranks $r_j(B')$ of the matrices $A_j|_{B'}$ and $\sigma (B')$ is the size of $B'$. (One subtracts $\sigma (B')$ once because the sum of the matrices $A_j$ is $0$ and once to factor out conjugation with block upper-triangular matrices; see the proof of Lemma~\ref{Ext}.) If for all blocks $B'$ one has Ext$^1(B,B')\leq 0$, then one has \[ 0\geq \sum _{B'}(\rho (B')-2\sigma (B'))= (\sum _{j=1}^{p+1}\sum _{B'}r_j(B'))-2(n-1)\geq (\sum _{j=1}^{p+1}r_j)-2(n-1)\] i.e. $\sum _{j=2}^{p+1}r_j\leq n-1$ (recall that $r_1=n-1$) which is a contradiction. Hence, one can replace the two blocks $B$, $B'$ by a single big block of size $\sigma (B')+1$. There remains to be considered the case when there is no big block of size $1$ or $\geq 3$, i.e. all big blocks are of size $2$; moreover, $p=2$, and for $j>1$ the restrictions of the matrices $A_j$ to the big blocks belong to one and the same conjugacy class. In this case one has $r_2+r_3=n$, i.e. 
the case need not be considered.~~~~~$\Box$
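\begin{rem} For illustration (with hypothetical numerical data, added only to spell out the count from $4^0$) consider the smallest instance: $n=3$, $p=2$, $r_1=n-1=2$, $r_2+r_3\geq n+1=4$, with two big blocks $B$, $B'$ of sizes $1$ and $2$. Assuming that the restrictions of the matrices to $B$ equal $0$, one has $r_j(B')=r_j$ and \[ {\rm Ext}^1(B,B')=\rho (B')-2\sigma (B')=r_1+r_2+r_3-4\geq 2+4-4=2\geq 1~,\] so the two big blocks can indeed be replaced by a single big block of size $3$. \end{rem}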
TITLE: Group as direct sum of cyclic groups QUESTION [2 upvotes]: What are necessary conditions for a cyclic group $G$ to be a direct sum of cyclic groups? I saw somewhere that $G$ must be a non $p$-group. But I couldn't prove it. Thank you for your hints/help REPLY [2 votes]: I think the only necessary condition is that the group $G$ is abelian. This condition is not sufficient. Every finitely-generated abelian group has a description as a direct sum of cyclic groups, but the direct product of infinitely many cyclic groups is not a direct sum. Some $p$-groups are the direct sum of cyclic groups, for example $\mathbb Z/p\mathbb Z \oplus \mathbb Z/p\mathbb Z$. After your edit: we can see that if a cyclic group is to be a direct sum of (at least two nontrivial) cyclic groups, our group must be finite (since $\mathbb Z$ is not), of order, say $N$, and $N$ must have at least two distinct prime factors -- that is, $G$ must not be a $p$-group, as you read. The Chinese Remainder Theorem allows us to find decompositions in this case. What's more, this condition is necessary: as an example, $\mathbb Z/p^2\mathbb Z$ would need to be isomorphic to $\mathbb Z/p\mathbb Z \oplus \mathbb Z/p\mathbb Z$, but the former has an element of order $p^2$ while the latter does not. This is the general obstruction to a finite cyclic group of prime-power order being the direct sum of at least two non-trivial cyclic groups.
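For a concrete illustration of both directions (a small worked example): take $N = 6$. Since $\gcd(2,3)=1$, the map $x \mapsto (x \bmod 2,\, x \bmod 3)$ is an isomorphism $\mathbb Z/6\mathbb Z \to \mathbb Z/2\mathbb Z \oplus \mathbb Z/3\mathbb Z$; indeed $1 \mapsto (1,1)$, and $(1,1)$ has order $\operatorname{lcm}(2,3) = 6$, so it generates the direct sum. In the other direction, $\mathbb Z/2\mathbb Z \oplus \mathbb Z/2\mathbb Z$ cannot be cyclic: every element $x$ satisfies $2x = 0$, so no element has order $4$.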
{"set_name": "stack_exchange", "score": 2, "question_id": 3187934}
\section{Spectral Gap Undecidability of a Continuous~Family~of~Hamiltonians} In this section, we combine the 2D Marker Hamiltonian with the QPE History State construction. Despite the marker Hamiltonian now being two-dimensional, the setup is very reminiscent of the 1D construction; the crucial difference is the more finely-geared bonuses and penalties we need to analyse. \subsection{Uncomputability of the Ground State Energy Density} \newcommand{\PiEdge}{\Pi_\mathrm{edge}} \newcommand{\PiCorner}{\Pi_\mathrm{corner}} \newcommand{\HSm}{\HS_\mathrm{m}} \newcommand{\HSq}{\HS_\mathrm{q}} \newcommand{\sups}[1][tot]{^\mathrm{#1}} \begin{lemma}\label{lem:undec-tech1} Let $\op h_1,\op h_2\sups[row],\op h_2\sups[col]$ be the one- and two-local terms of $\HmarkerN$ with local Hilbert space $\HSm$, and similarly denote by $\op q_1,\op q_2$ the one- and two-local terms of $\HUTM$ from \cref{def:UTM-Ham} with local Hilbert space $\HSq$, respectively. Let $\PiEdge$ be a projector onto the edge tiles in \cref{prop:unconstrained-tiling}. Define the combined Hilbert space $\HS := \HSm \otimes (\HSq \oplus \field C)$, where $\ket 0$ denotes the basis state for the extension of $\HSq$.
We define the following one- and two-local interactions: \begin{align*} \op h_1\sups &:= \op h_1 \ox \1 + \PiEdge \ox \op q_1 + \PiEdge \ox \ketbra0 + \PiEdge^\perp \ox (1 - \ketbra 0) \\ \op h_2\sups[tot,row] &:= \op h_2\sups[row] \ox \1 + \PiEdge^{\ox 2} \ox \op q_2 \\ \op h_2\sups[tot,col] &:= \op h_2\sups[col] \ox \1 \\ \op p_2\sups[tot,row] &:= \left[ \ketbra{\Ts(R,B,*R,*B)} \ox \1 \right] \ox \left[ \1 \ox \ketbra*{\leftend} \right] + \\ &\hspace{5.05mm} \left[ \1 \ox \ketbra{\Ts(R,B,*R,*B)} \right] \ox \left[ \ketbra*{\rightend} \ox \1 \right] \end{align*} On a lattice $\Lambda$ define the overall Hamiltonian \[ \op H:=\sum_{i\in\Lambda} \op h_{1,(i)}\sups + \sum_{i\in\Lambda} \left(\op h_{2,(i)}\sups[tot,row] + \op p_{2,(i)}\sups[tot,row] \right) + \sum_{i\in\Lambda} \op h_{2,(i)}\sups[tot,col], \] where each sum index runs over the lattice $\Lambda$ where the corresponding Hamiltonian term can be placed. Then $\op H$ has the following properties: \begin{enumerate} \item $\op H = \bigoplus_s \op H_s \oplus \op B'$ block-decomposes as $\HmarkerN$ in \cref{th:marker-ham}, where $\op B'=\op B\ox\1$. \item $\op B'\ge 0$. \item All eigenstates of\, $\op H_s$ are product states across squares in the tiling with square size $s$, product across rows within each square, and product across the local Hilbert space $\HSm\otimes (\HSq \oplus \field C)$. \item Within a single square $A$ of side length $s$ within a block $\op H_s$, all eigenstates are of the form $\ket{\boxplus_s}|_A \ox \ket{r_0} \ox \ket r$, where \begin{enumerate} \item $\ket{\boxplus_s}$ is the ground state of the 2D marker Hamiltonian block $\HmarkerN_s$, \item $\ket{r_0}$ is an eigenstate of $\HUTM\oplus\mathbf 0$, i.e.\ the history state Hamiltonian with local padded Hilbert space $\HSq \oplus \field C$, and \item $\ket{r} \in (\HSq\oplus\field C)^{\ox (s\times(s-1))}$ defines the state elsewhere. 
\end{enumerate} \item The ground state of $\op H_s|_A$ is unique and given by $\ket r=\ket 0^{\ox(s\times(s-1))}$ and $\ket{r_0} = \ket\Psi$, where \[ \ket\Psi = \sum_{t=0}^{T-1} \ket t \ket{\psi_t} \] is the history state of $\HUTM$ as per \cref{Theorem:QTM_in_local_Hamiltonian}, and such that $\ket{\psi_0}$ is correctly initialized. \end{enumerate} \end{lemma} \begin{proof} We already have all the machinery in place to swiftly prove this lemma. First note that, by construction, all of $\{\op h_1\sups, \op h_2\sups[tot,row], \op h_2\sups[tot,col], \op h_2\sups[col], \op p_2\sups[tot,row]\}$ pairwise commute with the respective tiling Hamiltonian terms $\{ \op h_1,\op h_2\sups[row], \op h_2\sups[col]\}$. Furthermore, the local terms from $\HUTM$---$\op q_1$ and $\op q_2$---are positive semi-definite; together with \cref{th:marker-ham} this proves the first three claims. As shown in \cref{Theorem:QTM_in_local_Hamiltonian} and since the Hamiltonian constraints in $\op p_2\sups[tot,row]$ enforce the ground state of the top row within the square $A$ to be bracketed, the first and third claim imply the fourth and fifth. \end{proof} \begin{lemma}\label{lem:undec-tech2} Take the same setup as in \cref{lem:undec-tech1}, and let $\HUTM=\HUTM(\varphi')$ for $\varphi' \in [\varphi(\ivar), \varphi(\ivar) + 2^{-\ivar-\ell})$, where $\varphi(\ivar)$ is the unary encoding of $\ivar\in\field N$ from \cref{def:qpe-encoding}. As usual $\ell\ge1$. Then for a block $\op H_s$ we have \begin{enumerate} \item If $s<\ivar$, $\op H_s\ge 0$. \item If $s\ge \ivar$ and $\mathcal M$ does not halt on input $\ivar$ within space $s$, then $\op H_s\ge 0$. \item If $s\ge \ivar$, $\mathcal M$ halts on input $\ivar$, and $\ell \ge \log_2(s^{-2}2^{s^{1/4}})$ as per \cref{eq:ell-bound}, then $\lmin(\op H_s) < 0$. \end{enumerate} \end{lemma} \begin{proof} We start with the first claim.
By \cref{lem:undec-tech1}, it suffices to analyse a single square $A$ of side length $s$; the proof then essentially follows that of \cite[Thm.\ 20]{Bausch_1D_Undecidable}. We first assume $s<\ivar$. Using the same notation as in \cref{th:marker-ham}, and denoting by $\Pi_\mathrm{edge}$ the projector onto the white horizontal edge within $A$, we have \begin{align*} \lmin( \op H_s|_A ) &= \lmin\left[ \HmarkerN(s)|_A \otimes \1 + \Pi_\mathrm{edge} \otimes \HUTM(\varphi') \right] \\ &= \Eedge(s) + \Epen{tooshort}(s) \geq 0, \end{align*} where we used \cref{cor:balance,th:marker-ham} and the fact that the two Hamiltonian terms in the sum commute. The other claims follow analogously: in each case, by \cref{cor:balance}, the sum of the edge bonus and TM penalties satisfies \cref{eq:f-bound}. For the second claim, by the same process we thus get \[ \lmin( \op H_s|_A ) = \Eedge(s) + \Epen{non-halt}(s) \geq 0. \] Then for the third claim, \[ \lmin( \op H_s|_A ) = \Eedge(s) + \Epen{halt}(s) < 0. \qedhere \] \end{proof} \begin{corollary} \label{Corollary:GSE_of_Lattice} Take the same setup as in \cref{lem:undec-tech2}, and let $\varphi(\ivar)$ encode a halting instance. Set $w=\argmin_s\{ \lmin({\op H}_s)< 0 \}$, and $W$ a single tile of size $w\times w$. Then the ground state energy of $\op H(\varphi')$ on a grid $\Lambda$ of size $L\times H$ is given by \begin{equation}\label{eq:gs-energy} \lmin(\op H(\varphi'))=\left\lfloor \frac{L}{w} \right\rfloor\left\lfloor \frac{H}{w} \right\rfloor \lmin(\op H(\varphi')|_W). \end{equation} \end{corollary} \begin{proof} From \cref{lem:undec-tech1}, we know the ground state of $\op H(\varphi')$ is a grid with offset $(0,0)$ from the lattice's origin in the lower left. Each square of the grid contributes energy $\lmin(\op H(\varphi')|_W)<0$; the prefactor in \cref{eq:gs-energy} is simply the number of complete squares within the lattice.
For all truncated squares on the right hand side, $\HUTM$ from \cref{def:UTM-Ham} with either the left or right ends truncated has zero ground state energy, since it is either free of the in- or output penalty terms. Furthermore, we see that if we truncate the right end of the 1D Marker Hamiltonian $\Hmarker_1$ in \cref{Lemma:1D_Marker_Hamiltonian}, it has a zero energy ground state since it never encounters the tile pair \[ \begin{tiles} \T(0,*B,1,B) \T(R,B,*R,*B) \end{tiles} \] from \cref{th:marker-ham} necessary for a bonus. Truncating squares at the top does not yield any positive or negative energy contribution. The total lattice energy is therefore simply the number of complete squares on the lattice, multiplied by the energy contribution of each square. \end{proof} \begin{theorem}[Undecidability of Ground State Energy Density] Discriminating between a negative or nonnegative ground state energy density of $\op H(\varphi')$ is undecidable. \end{theorem} \begin{proof} Immediate from \cref{lem:undec-tech2,Corollary:GSE_of_Lattice}; the energy of a single square is either a small negative constant, or nonnegative. Determining which is at least as hard as solving the halting problem. \end{proof} With this result we can almost lift the undecidability of ground state energy density to the spectral gap problem. In order to make the result slightly stronger, for this we first shift the energy of $\op H(\varphi')$ by a constant. 
\begin{lemma}[\makebox{\cite[Lem.\ 23]{Bausch_1D_Undecidable}}]\label{lem:shift-ham} By adding at most two-local identity terms, we can shift the energy of $\op H$ from \cref{lem:undec-tech1} such that \[ \lmin(\op H) \begin{cases} \ge 1 & \text{in the non-halting case, and} \\ \longrightarrow-\infty & \text{otherwise.} \end{cases} \] \end{lemma} \subsection{Undecidability of the Spectral Gap} With the proven uncomputability of the ground state energy density, we can lift the result using the usual ingredients---a Hamiltonian with a trivial ground state, as well as a dense spectrum Hamiltonian that will be pulled down alongside the spectrum of the QPE Hamiltonian, if the encoded universal Turing machine halts on the input encoded in the phase parameter---to prove that the existence of a spectral gap for our constructed one-parameter family of Hamiltonians is undecidable as well. \begin{theorem}[Undecidability of the Spectral Gap]\label{th:main-2} For a continuous-parameter family of Hamiltonians, discriminating between gapped with trivial ground state $\ket{0}^{\ox \Lambda}$, and gapless as defined in \cref{def:gapped,def:gapless}, is undecidable. \end{theorem} \begin{proof} So far we have constructed a Hamiltonian $\op H(\varphi')$ with undecidable ground state energy asymptotics given in \cref{lem:shift-ham}; we denote its Hilbert space by $\HS_1$. We add the usual Hamiltonian ingredients as in \cite{Cubitt_PG_Wolf_Undecidability} or \cite[Thm.\ 25]{Bausch_1D_Undecidable}: \begin{description} \item[$\Hdens$] Asymptotically dense spectrum in $[0,\infty)$ on Hilbert space $\HS_2$. \item[$\Htriv$] Diagonal in the computational basis, with a single $0$ energy product ground state $\ket{0}^{\ox \Lambda}$, and a spectral gap of $1$ (i.e.\ all other eigenstates have energy $\ge 1$); its Hilbert space we denote by $\HS_3$.
\item[$\Hguard$] A 2-local Ising type interaction on $\HS:=\HS_1\otimes\HS_2\oplus\HS_3$ defined as \[ \Hguard:=\sum_{i\sim j}\left(\1_{1,2}^{(i)}\otimes\1_3^{(j)} + \1_3^{(i)}\otimes\1_{1,2}^{(j)}\right), \] \noindent where the summation runs over all neighbouring spin sites of the underlying lattice $\Lambda$ (horizontal and vertical). \end{description} We then define \[ \Hundec[L](\varphi') := \op H(\varphi')\otimes\1_2 \oplus \op 0_3 + \1_1\otimes\Hdens\oplus \op 0_3 + 0_{1,2} \oplus \Htriv + \Hguard. \] The guard Hamiltonian ensures that any state with overlap both with $\HS_1\otimes\HS_2$ and $\HS_3$ will incur a penalty $\ge 1$. It is then straightforward to check that the spectrum of $\Hundec$ is given by \[ \spec(\Hundec) = \{0\} \cup (\spec(\op H(\varphi')) + \spec(\Hdens)) \cup G \] for some $G \subset [1,\infty)$, where the single zero energy eigenstate stems from $\Htriv$. If $\lmin(\op H(\varphi'))\geq 1$, then $\spec(\op H(\varphi')) + \spec(\Hdens) \subset [1,\infty)$, and hence the ground state of $\Hundec$ is the ground state of $\Htriv$ with a spectral gap of size one. For $\lmin(\op H(\varphi')) \longrightarrow -\infty$, $\Hdens$ is asymptotically gapless and dense; this means that $\Hundec$ becomes asymptotically gapless as well. \end{proof} Since the spectral properties of $\op H(\varphi')$ are---by \cref{lem:undec-tech2}---robust to a choice of $\varphi'$ within an interval around an encoded instance $\varphi(\ivar)$ as per \cref{def:qpe-encoding}---i.e.\ for large enough $\ell$ we can vary $\varphi'\in[\varphi(\ivar), \varphi(\ivar) + 2^{-\ivar-\ell})$---\cref{th:main-2} immediately proves \cref{th:main,cor:main,cor:main'}.
{"config": "arxiv", "file": "1910.01631/ch-together.tex"}
TITLE: Solve $ 11 \cdot 16^{1/(n-1)} = 16^{n/(n-1)} - 10 $ QUESTION [2 upvotes]: This is probably an easy task for the users here, but I could not solve it. $$ 11 \cdot 16^{1/(n-1)} = 16^{n/(n-1)} - 10 $$ Wolfram Alpha gives the result $ n= 5 $. What are the steps to solve this? REPLY [6 votes]: $11\times 16^{\frac{1}{n-1}}=16^{1+\frac{1}{n-1}}-10=16\times 16^{\frac{1}{n-1}}-10$, so $5\times 16^{\frac{1}{n-1}}=10$, so $16^{\frac{1}{n-1}}=2$, so $\frac{1}{n-1}=\frac{1}{4}$, so $n=5$. REPLY [4 votes]: $$11 \cdot 16^{1/(n-1)} = 16^{n/(n-1)} - 10$$ $$11 \cdot 16^{1/(n-1)} = 16^{1+1/(n-1)} - 10$$ $$11 \cdot 16^{1/(n-1)} = 16\cdot16^{1/(n-1)} - 10$$ $$16\cdot 16^{1/(n-1)} - 11\cdot16^{1/(n-1)}=10$$ $$5\cdot 16^{1/(n-1)}=10$$ $$16^{1/(n-1)}=2=16^{1/4}$$ $$1/(n-1)=1/4$$ $$n=5$$
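A quick numerical confirmation of both the answer $n=5$ and the key rewriting step $16^{n/(n-1)}=16\cdot 16^{1/(n-1)}$ (my own addition, just plugging numbers in):

```python
def lhs(n):
    return 11 * 16 ** (1 / (n - 1))

def rhs(n):
    return 16 ** (n / (n - 1)) - 10

# n = 5: 16**(1/4) = 2, so lhs = 22 and rhs = 32 - 10 = 22
print(lhs(5), rhs(5))

# the key rewriting step used in the answers, checked at n = 5:
# 16**(n/(n-1)) = 16**(1 + 1/(n-1)) = 16 * 16**(1/(n-1))
print(abs(16 ** (5 / 4) - 16 * 16 ** (1 / 4)) < 1e-9)  # True
```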
{"set_name": "stack_exchange", "score": 2, "question_id": 257044}
TITLE: $\int fd\mu=\sup\{\int_{E}fd\mu:E\in S,\mu(E)<+\infty\}$ QUESTION [1 upvotes]: I'm trying to prove the following proposition: Let $(X,S)$ be a measurable space. Let $f$ be a non-negative $S$-measurable function such that $\int fd\mu<+\infty.$ Then $$\int fd\mu=\sup\{\int_{E}fd\mu:E\in S,\mu(E)<+\infty\}.$$ Because $E\subset X$ we have $\int_{E} fd\mu\leq\int fd\mu$ and therefore $$\int fd\mu\geq\sup\{\int_{E}fd\mu:E\in S,\mu(E)<+\infty\}.$$ For the other inequality I have trouble. I don't even know why the condition $\int fd\mu<+\infty$ is necessary. Any kind of help is thanked in advance. REPLY [0 votes]: $\newcommand{\d}{\operatorname{d}}$ For each $n\in \mathbb{N}$, let $X_n=\{x\in X|f(x)>1/n\}\in S$. Then $X_n\subset X_{n+1}$, and by Chebyshev's inequality $\mu(X_n)\leq n\int_X f\d\mu<+\infty$; this is where the finiteness of the integral is used. On $X-\bigcup_{n\in \mathbb{N}} X_n$ we have $f=0$, so $f_n=f\chi_{X_n}$ is an increasing sequence of non-negative measurable functions which converges to $f$ pointwise everywhere. By Levi's monotone convergence theorem, $\int_{X_n} f\d\mu=\int_X f_n\d\mu\to \int_X f\d\mu$. Since each $X_n$ has finite measure, the other inequality follows.
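A numerical illustration of this truncation argument (my own sketch, using the example $f(x)=e^{-x}$ on $X=[0,\infty)$ with Lebesgue measure, where $\mu(X)=\infty$ but $\int f\,d\mu=1$): the sets $X_n=\{f>1/n\}=[0,\ln n)$ have finite measure $\ln n$, and the integrals over them increase to the full integral.

```python
import math

def f(x):
    return math.exp(-x)

def integral_over_Xn(n, steps=100_000):
    """Midpoint-rule approximation of the integral of f over
    X_n = {x : f(x) > 1/n} = [0, ln n), a set of finite measure ln n."""
    b = math.log(n)
    h = b / steps
    return h * sum(f((k + 0.5) * h) for k in range(steps))

for n in (2, 10, 1000):
    print(n, integral_over_Xn(n))  # approx 0.5, 0.9, 0.999 -> sup is 1 = int f
```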
{"set_name": "stack_exchange", "score": 1, "question_id": 2601771}
\begin{document} \maketitle \begin{abstract} A common problem in applied mathematics is to find a function in a Hilbert space with prescribed best approximations from a finite number of closed vector subspaces. In the present paper we study the question of the existence of solutions to such problems. A finite family of subspaces is said to satisfy the \emph{Inverse Best Approximation Property (IBAP)} if there exists a point that admits any selection of points from these subspaces as best approximations. We provide various characterizations of the IBAP in terms of the geometry of the subspaces. Connections between the IBAP and the linear convergence rate of the periodic projection algorithm for solving the underlying affine feasibility problem are also established. The results are applied to problems in harmonic analysis, integral equations, signal theory, and wavelet frames. \end{abstract} \section{Introduction} A classical problem arising in areas such as harmonic analysis, optics, and signal theory is to find a function $x\in L^2(\RR^N)$ with prescribed values on subsets of the space (or time) and Fourier domains \cite{Byrn05,Dono89,Havi94,Mele96,Star81,Star87}. In geometrical terms, this problem can be abstracted into that of finding a function possessing prescribed best approximations from two closed vector subspaces of $L^2(\RR^N)$ \cite{Youl78}. More generally, a broad range of problems in applied mathematics can be formulated as follows: given $m$ closed vector subspaces $(U_i)_{1\leq i\leq m}$ of a (real or complex) Hilbert space $\HH$, \begin{equation} \label{e:puertoprincessa-mai2008-1} \text{find}\;\;x\in\HH\;\;\text{such that}\;\; (\forall i\in\{1,\ldots,m\})\quad P_ix=u_i, \end{equation} where, for every $i\in\{1,\ldots,m\}$, $P_i$ is the (metric) projector onto $U_i$ and $u_i\in U_i$. 
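To fix ideas with a minimal finite-dimensional instance: if $\HH=\RR^3$ with canonical orthonormal basis $(e_1,e_2,e_3)$, $m=2$, $U_1=\RR e_1$, and $U_2=\RR e_2$, then, for any prescribed best approximations $u_1=\alpha e_1\in U_1$ and $u_2=\beta e_2\in U_2$, every point $x=\alpha e_1+\beta e_2+\gamma e_3$ with $\gamma\in\RR$ satisfies $P_1x=u_1$ and $P_2x=u_2$. Problem \eqref{e:puertoprincessa-mai2008-1} is thus solvable for every choice of $(u_1,u_2)$, and its solution set is the affine line $u_1+u_2+\RR e_3$, directed by $U_1^\bot\cap U_2^\bot$.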
In connection with \eqref{e:puertoprincessa-mai2008-1}, a central question is whether a solution exists, irrespective of the choice of the prescribed best linear approximations $(u_i)_{1\leq i\leq m}$. The main objective of the present paper is to address this question. \begin{definition} \label{d:el-nido2009-03-07} Let $(U_i)_{1\leq i\leq m}$ be a family of closed vector subspaces of $\HH$ and let $(P_i)_{1\leq i\leq m}$ denote their respective projectors. Then $(U_i)_{1\leq i\leq m}$ satisfies the \emph{inverse best approximation property (IBAP)} if \begin{equation} \label{e:manilla2007-9} \big(\forall (u_i)_{1\leq i\leq m}\in\cart_{i=1}^mU_i\big) \big(\exi x\in\HH\big)\big(\forall i\in\{1,\ldots,m\}\big)\quad P_ix=u_i. \end{equation} Moreover, for every $(u_i)_{1\leq i\leq m}\in\cart_{i=1}^mU_i$, we set \begin{equation} \label{e:manilla2008-9} S(u_1,\ldots,u_m)=\bigcap_{i=1}^m\menge{x\in\HH}{P_ix=u_i}, \end{equation} and, for every $i\in\{0,\ldots,m-1\}$, \begin{equation} \label{e:palawan-mai2008} U_{i+}=\sum_{j=i+1}^mU_j\,,\quad P_{i+}=P_{\overline{U_{i+}}}\,, \quad\text{and}\quad P_{i+}^\bot=P_{{U_{i+}^\bot}}. \end{equation} \end{definition} The paper is organized as follows. In Section~\ref{sec:2}, we first show that the linear independence of the subspaces $(U_i)_{1\leq i\leq m}$ is necessary to satisfy the IBAP, but that it is not sufficient in infinite dimensional spaces. The main result of Section~\ref{sec:2} is Theorem~\ref{t:palawan-mai2008}, which provides various characterizations of the IBAP. Several corollaries are derived and, in particular, we obtain in Proposition~\ref{p:feasibility} conditions for the consistency of affine feasibility problems. In Section~\ref{sec:3}, we discuss minimum norm solutions and establish connections between the IBAP and the rate of convergence of the periodic projection algorithm for solving \eqref{e:puertoprincessa-mai2008-1}.
Finally, Section~\ref{sec:4} is devoted to applications to systems of integral equations, constrained moment problems, harmonic analysis, wavelet frames, and signal recovery. \begin{remark} \label{r:ElNido-mars09} Since best approximations are well defined for nonempty closed convex subsets of $\HH$, the IBAP could be considered in this more general context. However, useful results can be expected to be scarce, even for two closed convex cones $K_1$ and $K_2$. Indeed, denote the projectors onto $K_1$ and $K_2$ by $P_1$ and $P_2$, respectively. If $k_1$ is a point on the boundary of $K_1$ which is not a support point of $K_1$ (by the Bishop-Phelps theorem \cite[Theorem~3.18(i)]{Phel93} support points are dense in the boundary of $K_1$), then the only point $x\in\HH$ such that $P_1x=k_1$ is $x=k_1$. Therefore, there is no point $x\in\HH$ such that $P_1x=k_1$ and $P_2x=k_2$ unless $k_2=P_2k_1$, which means that the IBAP does not hold. Let us add that, even if every boundary point of $K_1$ is a support point (e.g., the interior of $K_1$ is nonempty or $\HH$ is finite dimensional), the IBAP can also trivially fail: take for instance $\HH=\RR^2$, $K_1=\RP\times\RP$, $K_2=\menge{(\beta,-\beta)}{\beta\in\RR}$, $k_1=(0,1)$, and $k_2=(1,-1)$. \end{remark} Throughout, $\HH$ is a real or complex Hilbert space with scalar product $\scal{\cdot}{\cdot}$ and norm $\|\cdot\|$. The distance to a closed affine subspace $S$ of $\HH$ is denoted by $d_S$, and its projector by $P_S$. Moreover, $(U_i)_{1\leq i\leq m}$ is a fixed family of closed vector subspaces of $\HH$ with respective projectors $(P_i)_{1\leq i\leq m}$. \section{Characterizations of the inverse best approximation property} \label{sec:2} We first record some useful descriptions of the set of solutions to \eqref{e:puertoprincessa-mai2008-1}. \begin{proposition} \label{p:carac} Let $(u_i)_{1\leq i\leq m}\in\cart_{i=1}^mU_i$. Then the following hold. 
\begin{enumerate} \item \label{p:caraci} $S(u_1,\ldots,u_m)=\bigcap_{i=1}^m(u_i+U_i^\bot)$. \item \label{p:caracii} Let $x\in S(u_1,\ldots,u_m)$. Then $S(u_1,\ldots,u_m)=x+\bigcap_{i=1}^m U_i^{\bot}$. \end{enumerate} \end{proposition} \begin{proof} \ref{p:caraci}: Let $x\in\HH$ and $i\in\{1,\ldots,m\}$. The projection theorem asserts that $P_ix=u_i$ $\Leftrightarrow$ $x-u_i\in U_i^\bot$ $\Leftrightarrow$ $x\in u_i+U_i^\bot$. Hence, \eqref{e:manilla2008-9} yields $x\in S(u_1,\ldots,u_m)$ $\Leftrightarrow$ $x\in\bigcap_{i=1}^m(u_i+U_i^\bot)$. \ref{p:caracii}: Let $y\in\HH$. By linearity of the operators $(P_i)_{1\leq i\leq m}$, $y\in S(u_1,\ldots,u_m)$ $\Leftrightarrow$ $(\forall i\in\{1,\ldots,m\})$ $P_i(y-x)=0$ $\Leftrightarrow$ $(\forall i\in\{1,\ldots,m\})$ $y-x\in U_i^\bot$ $\Leftrightarrow y\in x+\bigcap_{i=1}^m U_i^{\bot}$. \end{proof} The main objective of this section is to provide characterizations of the inverse best approximation property. Let us start with a necessary condition. \begin{proposition} \label{p:kimono-ken} Let $(u_i)_{1\leq i\leq m}\in(\cart_{i=1}^mU_i) \smallsetminus\{(0,\ldots,0)\}$ be such that $\sum_{i=1}^mu_i=0$. Then $S(u_1,\ldots,u_m)=\emp$. \end{proposition} \begin{proof} Suppose that $x\in S(u_1,\ldots,u_m)$. Then, for every $i\in\{1,\ldots,m\}$, $u_i=P_ix$ and therefore $\scal{u_i}{x-u_i}=0$, i.e., $\|u_i\|^2=\scal{u_i}{x}$. Hence $0<\sum_{i=1}^m\|u_i\|^2=\scal{\sum_{i=1}^mu_i}{x}=0$, and we reach a contradiction. \end{proof} \begin{corollary} \label{c:kimono-ken} Suppose that $(U_i)_{1\leq i\leq m}$ satisfies the inverse best approximation property. Then the subspaces $(U_i)_{1\leq i\leq m}$ are linearly independent. \end{corollary} As the following example shows, the linear independence of the subspaces $(U_i)_{1\leq i\leq m}$ is not sufficient to guarantee the inverse best approximation property. 
\begin{example} \label{ex:7} Suppose that $\HH$ is separable, let $(e_n)_{n\in\NN}$ be an orthonormal basis of $\HH$, let $(\alpha_n)_{n\in\NN}$ be a square-summable sequence in $\RPP$, and set $(\forall n\in\NN)$ $f_n=(e_{2n}+\alpha_ne_{2n+1})/\sqrt{1+\alpha_n^2}$. Set $m=2$, \begin{equation} U_1=\spc\{e_{2n}\}_{n\in\NN},\; U_2=\spc\{f_n\}_{n\in\NN},\; u_1=0,\; \;\text{and}\;\;u_2=\sum_{n\in\NN}\alpha_nf_n. \end{equation} Then $U_1\cap U_2=\{0\}$ and $S(u_1,u_2)=\emp$. \end{example} \begin{proof} By construction, $(e_{2n})_{n\in\NN}$ and $(f_n)_{n\in\NN}$ are orthonormal bases of $U_1$ and $U_2$, respectively. It follows easily that $U_1\cap U_2=\{0\}$. Now suppose that there exists a vector $x\in\HH$ such that $P_1x=u_1$ and $P_2x=u_2$. Then the identities $\sum_{n\in\NN}\scal{x}{e_{2n}}e_{2n}=P_1x=u_1=0$ imply that \begin{equation} \label{e:3-0} (\forall n\in\NN)\quad\scal{x}{e_{2n}}=0. \end{equation} Hence, it results from the identities $\sum_{n\in\NN}\alpha_nf_n=u_2=P_2x=\sum_{n\in\NN}\scal{x}{f_n}f_n$ that \begin{equation} (\forall n\in\NN)\quad\alpha_n= \scal{x}{f_n}=\frac{\alpha_n}{\sqrt{1+\alpha_n^2}}\scal{x}{e_{2n+1}}. \end{equation} Therefore, $(\forall n\in\NN)$ $\scal{x}{e_{2n+1}}=\sqrt{1+\alpha_n^2}\geq 1$, which is impossible since $\sum_{n\in\NN}|\scal{x}{e_{2n+1}}|^2\leq\|x\|^2<\pinf$. \end{proof} The next result states that linear independence is necessary and sufficient to obtain an approximate inverse best approximation property. \begin{proposition} \label{p:denseness} The following are equivalent. \begin{enumerate} \item \label{p:densenessi} The subspaces $(U_i)_{1\leq i\leq m}$ are linearly independent. \item \label{p:densenessii} For every $(u_i)_{1\leq i\leq m}\in\cart_{i=1}^mU_i$ and every $\varepsilon\in\RPP$, there exists $x\in\HH$ such that \begin{equation} \label{e:23octobre2008} \max_{1\leq i\leq m}\|P_ix-u_i\|\leq\varepsilon.
\end{equation} \end{enumerate} \end{proposition} \begin{proof} Set $V=\menge{(P_ix)_{1\leq i\leq m}}{x\in\HH}$ and let $W$ be the orthogonal complement of $V$ in the Hilbert direct sum $\bigoplus_{i=1}^mU_i$. \ref{p:densenessi}$\Rightarrow$\ref{p:densenessii}: Take $(u_i)_{1\leq i\leq m}\in W$ and set $x=\sum_{i=1}^{m}u_i$. Then $\sum_{i=1}^{m}\scal{u_i}{x}=\sum_{i=1}^{m}\scal{u_i}{P_ix}=0$, which implies that $\|x\|^2=\sum_{i=1}^{m}\scal{u_i}{x}=0$. Hence $x=0$ and, in view of the assumption of independence, we conclude that $(\forall i\in\{1,\ldots,m\})$ $u_i=0$. Therefore, $V$ is dense in $\bigoplus_{i=1}^mU_i$. \ref{p:densenessii}$\Rightarrow$\ref{p:densenessi}: Take $(u_i)_{1\leq i\leq m}\in\cart_{i=1}^mU_i$ such that $\sum_{i=1}^{m}u_i=0$, take $\varepsilon\in\RPP$, and take $x\in\HH$ such that \eqref{e:23octobre2008} holds. Then $\sum_{i=1}^m\scal{u_i}{P_ix}=\sum_{i=1}^m\scal{u_i}{x}=0$ and therefore \begin{align} \sum_{i=1}^m\|u_i\|^2 &=\sum_{i=1}^m\|u_i-P_ix\|^2+2 \text{Re} \sum_{i=1}^m\scal{u_i-P_ix}{P_ix} +\sum_{i=1}^m\|P_ix\|^2\nonumber\\ &=\sum_{i=1}^m\|u_i-P_ix\|^2-\sum_{i=1}^m\|P_ix\|^2\nonumber\\ &\leq m\varepsilon^2. \end{align} Hence, $(\forall i\in\{1,\ldots,m\})$ $u_i=0$. \end{proof} In order to provide characterizations of the inverse best approximation property, we require the following tools. \begin{definition}{\rm\cite[Definition~9.4]{Deut01}} Let $U$ and $V$ be closed vector subspaces of $\HH$. The angle determined by $U$ and $V$ is the real number in $[0,\pi/2]$ the cosine of which is given by \begin{equation} \label{e:angol} {\mathsf c}(U,V) =\sup\menge{\abscal{x}{y}}{x\in U\cap(U\cap V)^{\bot},\: y\in V\cap(U\cap V)^{\bot},\:\|x\|\leq 1,\:\|y\|\leq 1}. \end{equation} \end{definition} \begin{lemma} \label{l:palawan-mai2008} Let $U$ and $V$ be closed vector subspaces of $\HH$, let $u\in U$, let $v\in V$, and set $S=(u+U^\bot)\cap(v+V^\bot)$. Then the following hold. \begin{enumerate} \item \label{l:palawan-mai2008i} Let $x\in S$. 
Then $S=P_{\overline{U+V}}\,x+(U^\bot\cap V^\bot)$. \item \label{l:palawan-mai2008ii} Suppose that $\|P_UP_V\|<1$ and set \begin{equation} \label{e:teryaki-boy3} z={\overline{u}}+{\overline{v}},\quad\text{where}\quad \begin{cases} {\overline{u}}=(\Id-P_UP_V)^{-1}(u-P_Uv)\\ {\overline{v}}=(\Id-P_VP_U)^{-1}(v-P_Vu). \end{cases} \end{equation} Then the following hold. \begin{enumerate} \item \label{l:palawan-mai2008iia} $S\neq\emp$. \item \label{l:palawan-mai2008iib} $z=P_S\,0$. \end{enumerate} \end{enumerate} \end{lemma} \begin{proof} \ref{l:palawan-mai2008i}: As in Proposition~\ref{p:carac}, we can write $S=x+(U^{\bot}\cap V^{\bot})$. Hence, since $(\overline{U+V})^\bot=(U+V)^\bot=U^\bot\cap V^\bot$, we get $S=x+(U^{\bot}\cap V^{\bot}) =P_{(U^{\bot}\cap V^{\bot})^\bot}x+(U^{\bot}\cap V^{\bot}) =P_{\overline{U+V}}\,x+(U^{\bot}\cap V^{\bot})$. \ref{l:palawan-mai2008ii}: These properties are known (see for instance \cite[Item~3.B) p.~91]{Havi94} and \cite[Section~5 on pp.~92--93]{Havi94}, respectively); we provide short alternative proofs for completeness. \ref{l:palawan-mai2008iia}: Let $u\in U$ and $v\in V$. Since $P_U$ and $P_V$ are self-adjoint, $\|P_VP_U\|=\|(P_VP_U)^*\|=\|P_U^*P_V^*\|=\|P_UP_V\|<1$, and the vectors ${\overline{u}}$ and ${\overline{v}}$ are therefore well defined. Moreover, it follows from the identity ${\overline{u}}=\sum_{j\in\NN}(P_UP_V)^j(u-P_Uv)$ that ${\overline{u}}\in U$ and therefore that $P_U{\overline{u}}=\overline{u}$. On the other hand, the second equality in the right-hand side of \eqref{e:teryaki-boy3} yields \begin{align} \label{e:teryaki-boy1} P_U{\overline{v}} &=P_U\bigg(\sum_{j\in\NN}(P_V P_U)^j(v-P_Vu)\bigg)\nonumber\\ &=(\Id-P_UP_V)^{-1}(P_Uv-P_UP_Vu)\nonumber\\ &=(\Id-P_UP_V)^{-1}\big((\Id-P_UP_V)u-(u-P_Uv)\big)\nonumber\\ &=u-\overline{u}. \end{align} Thus, $P_Uz=P_U({\overline{u}}+{\overline{v}})= {\overline{u}}+P_U{\overline{v}}=u$. 
Likewise, $P_V{\overline{v}}=\overline{v}$ and $P_V\overline{u}=v-\overline{v}$, which implies that $P_Vz=P_V({\overline{u}}+{\overline{v}})= P_V{\overline{u}}+{\overline{v}}=v$. Altogether, $z\in S$. \ref{l:palawan-mai2008iib}: As seen above, $z\in S$, ${\overline{u}}\in U$, and ${\overline{v}}\in V$. Now let $x\in S$. As in Proposition~\ref{p:carac}\ref{p:caracii}, we can write $x=z+w=\overline{u}+\overline{v}+w$, for some $w\in U^\bot\cap V^\bot$. Hence, $\|x\|^2=\|z\|^2+2\text{Re}\scal{\overline{u}}{w}+2\text{Re} \scal{\overline{v}}{w}+\|w\|^2=\|z\|^2+\|w\|^2\geq\|z\|^2$. \end{proof} We can now provide various characterizations of the inverse best approximation property (the notation \eqref{e:palawan-mai2008} will be used repeatedly). \begin{theorem} \label{t:palawan-mai2008} The following are equivalent. \begin{enumerate} \item \label{t:palawan-mai2008i} $(U_i)_{1\leq i\leq m}$ satisfies the inverse best approximation property. \item \label{t:palawan-mai2008ii} $(\forall i\in\{1,\ldots,m-1\})(\forall u_i\in U_i)(\exi x\in\HH)$ $u_i=P_ix$ and $(\forall j\in\{i+1,\ldots,m\})$ $P_jx=0$. \item \label{t:palawan-mai2008iii} $(\forall i\in\{1,\ldots,m-1\})$ $P_i(U_{i+}^\bot)=U_i$. \item \label{t:palawan-mai2008iv} $(\forall i\in\{1,\ldots,m-1\})$ $U_i^\bot+U_{i+}^\bot=\HH$. \item \label{t:palawan-mai2008x} The subspaces $(U_i)_{1\leq i\leq m}$ are linearly independent and $(\forall i\in\{1,\ldots,m-1\})(\exi\gamma_i\in\RPP)$ $d_{U_i^\bot\cap U^\bot_{i+}}\leq\gamma_i\big( d_{U^\bot_i}+d_{U^\bot_{i+}}\big)$. \item \label{t:palawan-mai2008v} The subspaces $(U_i)_{1\leq i\leq m}$ are linearly independent and $(\forall i\in\{1,\ldots,m-1\})$ $U_i+U_{i+}$ is closed. \item \label{t:palawan-mai2008vi} The subspaces $(U_i)_{1\leq i\leq m}$ are linearly independent and, for every $i\in\{1,\ldots,m-1\}$, ${\mathsf c}(U_i,U_{i+})<1$. 
\item \label{t:palawan-mai2008vii} $(\forall i\in\{1,\ldots,m-1\})(\exi\gamma_i\in\left[1,\pinf\right[) (\forall u_i\in U_i)$ $\|u_i\|\leq\gamma_i\|P_{i+}^\bot u_i\|$. \item \label{t:palawan-mai2008vii'} $(\forall i\in\{1,\ldots,m-1\})(\exi\gamma_i\in\left[2,\pinf\right[) (\forall x\in\HH)$ $\|x\|\leq\gamma_i(\|P_i^\bot x\| +\|P_{i+}^\bot x\|)$. \item \label{t:palawan-mai2008viii} $(\forall i\in\{1,\ldots,m-1\})$ $\|P_iP_{i+}\|<1$. \end{enumerate} \end{theorem} \begin{proof} \ref{t:palawan-mai2008i}$\Rightarrow$\ref{t:palawan-mai2008ii}: Clear. \ref{t:palawan-mai2008ii}$\Rightarrow$\ref{t:palawan-mai2008iii}: Let $i\in\{1,\ldots,m-1\}$. It is clear that $P_i(U_{i+}^\bot)\subset U_i$. Conversely, let $u_i\in U_i$. By assumption, there exists $x\in\bigcap_{j=i+1}^mU_j^\bot=U_{i+}^\bot$ such that $u_i=P_ix$. In other words, $U_i\subset P_i(U_{i+}^\bot)$. Altogether, $P_i(U_{i+}^\bot)=U_i$. \ref{t:palawan-mai2008iii}$\Rightarrow$\ref{t:palawan-mai2008iv}: Let $i\in\{1,\ldots,m-1\}$. We have \begin{equation} \HH=U_i^\bot+U_i=U_i^\bot+P_i(U_{i+}^\bot) =U_i^\bot+\bigcup_{v\in U_{i+}^\bot}(v-P_{U_i^\bot}v) =U_i^\bot+\bigcup_{v\in U_{i+}^\bot}v=U_i^\bot+U_{i+}^\bot. \end{equation} \ref{t:palawan-mai2008iv}$\Rightarrow$\ref{t:palawan-mai2008x}: Let $i\in\{1,\ldots,m-1\}$. We have \begin{equation} U_i\cap U_{i+}=(U_i^\bot+U_{i+}^\bot)^\bot=\HH^\bot=\{0\}. \end{equation} This shows the independence claim. Moreover, since $U_i^\bot+U_{i+}^\bot=\HH$ is closed, the inequality on the distance functions follows from \cite[Corollaire~II.9]{Brez93}. \ref{t:palawan-mai2008x}$\Rightarrow$\ref{t:palawan-mai2008v}: Let $i\in\{1,\ldots,m-1\}$. It follows from \cite[Remarque~7~p.~22]{Brez93} (see also \cite[Proposition~5.16]{Baus96}) that $U_i^\bot+U_{i+}^\bot$ is closed. In turn, since \cite[Th\'eor\`eme~II.15]{Brez93} asserts that $U_i^{\bot\bot}+U^{\bot\bot}_{i+}$ is closed, we deduce that \begin{equation} \label{e:brezis82} U_i+\overline{U_{i+}}~\text{is closed}. 
\end{equation} It remains to show that $U_{i+}$ is closed. If $i=m-1$, $U_{i+}=U_m$ is closed. On the other hand, if $i\in\{2,\ldots,m-1\}$ and $U_{i+}$ is closed, we deduce from \eqref{e:brezis82} that $U_{(i-1)+}=U_i+U_{i+}=U_i+\overline{U_{i+}}$ is closed. \ref{t:palawan-mai2008v}$\Rightarrow$\ref{t:palawan-mai2008vi}: Let $i\in\{1,\ldots,m-1\}$. Then $U_{i+}$ and $U_i+U_{i+}$ are closed and it follows from \cite[Theorem~9.35]{Deut01} that ${\mathsf c}(U_i,U_{i+})<1$. \ref{t:palawan-mai2008vi}$\Rightarrow$\ref{t:palawan-mai2008vii}: Let $i\in\{1,\ldots,m-1\}$ and let $u_i\in U_i$. Then \eqref{e:angol} yields \begin{align} \label{e:3nov2008-1} \|u_i\|^2 &=\|P_{i+}^\bot u_i\|^2+\|P_{i+}u_i\|^2\nonumber\\ &=\|P_{i+}^\bot u_i\|^2+\scal{u_i}{P_{i+}u_i}\nonumber\\ &\leq\|P_{i+}^\bot u_i\|^2+{\mathsf c}(U_i,U_{i+})\|u_i\|\, \|P_{i+}u_i\|\nonumber\\ &\leq\|P_{i+}^\bot u_i\|^2+{\mathsf c}(U_i,U_{i+})\|u_i\|^2. \end{align} Hence, $\|P^\bot_{i+}u_i\|^2\geq(1-{\mathsf c}(U_i,U_{i+}))\|u_i\|^2$. \ref{t:palawan-mai2008vii}$\Rightarrow$\ref{t:palawan-mai2008vii'}: Let $i\in\{1,\ldots,m-1\}$ and let $x\in\HH$. There exists $\gamma\in\left[1,\pinf\right[$ such that \begin{align} \label{e:3nov2008-2} \|x\| &\leq\|P_ix\|+\|P^\bot_ix\|\nonumber\\ &\leq\gamma\|P_{i+}^\bot P_ix\|+\|P^\bot_ix\|\nonumber\\ &\leq\gamma(\|P_{i+}^\bot x\|+\|P_{i+}^\bot P^\bot_ix\|) +\|P^\bot_ix\|\nonumber\\ &\leq\gamma\|P_{i+}^\bot x\|+(1+\gamma)\|P^\bot_ix\|. \end{align} \ref{t:palawan-mai2008vii'}$\Rightarrow$\ref{t:palawan-mai2008viii}: Let $i\in\{1,\ldots,m-1\}$ and let $x\in\HH$. There exists $\gamma\in\left[2,\pinf\right[$ such that \begin{align} \|P_ix\|^2 &=\|P_{i+}P_ix\|^2+\|P_{i+}^\bot P_ix\|^2\nonumber\\ &=\|P_{i+}P_ix\|^2+(\|P_{i+}^\bot P_ix\|+\|P_i^\bot P_ix\|)^2\nonumber\\ &\geq\|P_{i+}P_ix\|^2+\gamma^{-2}\|P_ix\|^2. \end{align} Therefore $\|P_{i+}P_ix\|^2\leq(1-\gamma^{-2})\|P_ix\|^2\leq(1-\gamma^{-2})\|x\|^2$. 
Hence $\|P_{i+}P_i\|<1$ and, in turn, $\|P_iP_{i+}\|=\|P_i^*P_{i+}^*\|=\|(P_{i+}P_i)^*\|=\|P_{i+}P_i\|<1$. \ref{t:palawan-mai2008viii}$\Rightarrow$\ref{t:palawan-mai2008i}: Fix $(u_i)_{1\leq i\leq m}\in\cart_{i=1}^mU_i$ and set $(\forall i\in\{0,\ldots,m-1\})$ $S_i=\bigcap_{j=i+1}^m(u_j+U_j^\bot)$. Let us show by induction that \begin{equation} \label{e:mary'scottage-mai2008} (\forall i\in\{0,\ldots,m-2\})\quad S_i\neq\emp\quad\text{and}\quad (\forall x_i\in S_i)\quad S_i=P_{i+}x_i+U_{i+}^\bot. \end{equation} First, let us set $i=m-2$. Since, by assumption, $\|P_{m-1}P_m\|<1$, it follows from Lemma~\ref{l:palawan-mai2008}\ref{l:palawan-mai2008iia} that $S_{m-2}\neq\emp$. Moreover, we deduce from Lemma~\ref{l:palawan-mai2008}\ref{l:palawan-mai2008i} that, for every $x_{m-2}\in S_{m-2}$, \begin{equation} S_{m-2}=P_{\overline{U_{m-1}+U_m}}\,x_{m-2}+(U_{m-1}^\bot\cap U_m^\bot)=P_{(m-2)+}x_{m-2}+U_{(m-2)+}^\bot. \end{equation} Next, suppose that \eqref{e:mary'scottage-mai2008} is true for some $i\in\{1,\ldots,m-2\}$ and let $x_i\in S_i$. Then, using Lemma~\ref{l:palawan-mai2008}\ref{l:palawan-mai2008i}, we obtain \begin{equation} \label{e:rains} S_{i-1} =(u_i+U_i^\bot)\cap S_i =(u_i+U_i^\bot)\cap\big(P_{i+}x_i+U_{i+}^\bot\big). \end{equation} Since, by assumption, $\|P_iP_{i+}\|<1$, it follows from Lemma~\ref{l:palawan-mai2008}\ref{l:palawan-mai2008iia} that $S_{i-1}\neq\emp$. Now, let $x_{i-1}\in S_{i-1}$. Combining \eqref{e:rains} and Lemma~\ref{l:palawan-mai2008}\ref{l:palawan-mai2008i}, we obtain \begin{equation} S_{i-1}=P_{\overline{U_i+U_{i+}}}x_{i-1} + \left( U_i^{\bot}\cap U_{i+}^{\bot}\right) = P_{(i-1)+} x_{i-1} + U_{(i-1)+}^{\bot}. \end{equation} This proves by induction that \eqref{e:mary'scottage-mai2008} is true. For $i=0$, we thus obtain $S_0=\bigcap_{j=1}^m(u_j+U_j^\bot)\neq\emp$. In view of Proposition~\ref{p:carac}\ref{p:caraci}, the proof is complete.
\end{proof} An immediate application of Theorem~\ref{t:palawan-mai2008} concerns the area of affine feasibility problems \cite{Kruk06,Butn01,Byrn05,Proc93,Kosm91,Star87}. Given a family of closed affine subspaces $(S_i)_{1\leq i\leq m}$ of $\HH$, the problem is to \begin{equation} \label{e:palawan-mars2009-1} \text{find}\;\;x\in\bigcap_{i=1}^mS_i. \end{equation} In applications, a key issue is whether this problem is consistent in the sense that it admits a solution. Our next proposition gives a sufficient condition for consistency. First, we recall a standard fact. \begin{lemma} \label{l:45} Let $S$ be a closed affine subspace of $\HH$, let $V=S-S$ be the closed vector subspace parallel to $S$, and let $y\in S$. Then $S=y+V$ and $(\forall x\in\HH)$ $P_Sx=y+P_V(x-y)$. \end{lemma} \begin{proposition} \label{p:feasibility} Let $(S_i)_{1\leq i\leq m}$ be closed affine subspaces of $\HH$ and suppose that $(U_i)_{1\leq i\leq m}$ are the orthogonal complements of their respective parallel vector subspaces. If $(U_i)_{1\leq i\leq m}$ satisfies the inverse best approximation property (in particular, if any of properties~{\rm\ref{t:palawan-mai2008ii}--\ref{t:palawan-mai2008viii}} in Theorem~{\rm\ref{t:palawan-mai2008}} holds), then the affine feasibility problem \eqref{e:palawan-mars2009-1} is consistent. \end{proposition} \begin{proof} For every $i\in\{1,\ldots,m\}$, let $a_i\in S_i$, and set $V_i=S_i-S_i$ and $u_i=P_ia_i$. Then, by Lemma~\ref{l:45}, $(\forall i\in\{1,\ldots,m\})$ $S_i=a_i+V_i=a_i+U_i^{\bot}=u_i+U_i^\bot$. Thus, \begin{equation} \bigcap_{i=1}^mS_i=\bigcap_{i=1}^m(u_i+U_i^{\bot}), \end{equation} and it follows from Proposition~\ref{p:carac}\ref{p:caraci} that \eqref{e:palawan-mars2009-1} is consistent if $(U_i)_{1\leq i\leq m}$ satisfies the IBAP. \end{proof} \begin{remark} The converse to Proposition~\ref{p:feasibility} fails. For instance, let $S_1$ and $S_2$ be distinct intersecting lines in $\HH=\RR^3$. 
Then $U_1=(S_1-S_1)^\bot$ and $U_2=(S_2-S_2)^\bot$ are two-dimensional planes and they are therefore linearly dependent. Hence, the IBAP cannot hold by virtue of Corollary~\ref{c:kimono-ken}. \end{remark} In the case of two subspaces, Theorem~\ref{t:palawan-mai2008} yields simpler conditions. \begin{corollary} \label{c:palawan-mai2008} The following are equivalent. \begin{enumerate} \item \label{c:palawan-mai2008i} $(U_1,U_2)$ satisfies the inverse best approximation property. \item \label{c:palawan-mai2008ii} $(\forall u_1\in U_1)$ $S(u_1,0)\neq\emp$. \item \label{c:palawan-mai2008iii} $P_1(U_2^\bot)=U_1$. \item \label{c:palawan-mai2008iv} $U_1^\bot+U_2^\bot=\HH$. \item \label{c:palawan-mai2008x} $U_1\cap U_2=\{0\}$ and $(\exi\gamma\in\RPP)$ $d_{U^\bot_1\cap U^\bot_2}\leq\gamma\big(d_{U^\bot_1}+d_{U^\bot_2}\big)$. \item \label{c:palawan-mai2008v} $U_1\cap U_2=\{0\}$ and $U_1+U_2$ is closed. \item \label{c:palawan-mai2008vi} $U_1\cap U_2=\{0\}$ and ${\mathsf c}(U_1,U_2)<1$. \item \label{c:palawan-mai2008vii} $(\exi\gamma\in\left[1,\pinf\right[) (\forall u_1\in U_1)$ $\|u_1\|\leq\gamma\|P_2^\bot u_1\|$. \item \label{c:palawan-mai2008vii+} $(\exi\gamma\in\left[2,\pinf\right[) (\forall x\in\HH)$ $\|x\|\leq\gamma(\|P_1^\bot x\|+\|P_2^\bot x\|)$. \item \label{c:palawan-mai2008viii} $\|P_1P_2\|<1$. \end{enumerate} \end{corollary} \begin{remark} Corollary~\ref{c:palawan-mai2008} provides necessary and sufficient conditions for the existence of solutions to \eqref{e:puertoprincessa-mai2008-1} when $m=2$. 
The implication \ref{c:palawan-mai2008vii+}$\Rightarrow$\ref{c:palawan-mai2008i} appears in \cite[Item~3.B)~p.~91]{Havi94}, the equivalences \ref{c:palawan-mai2008v}$\Leftrightarrow$\ref{c:palawan-mai2008vii} $\Leftrightarrow$\ref{c:palawan-mai2008vii+} $\Leftrightarrow$\ref{c:palawan-mai2008viii} appear in \cite[Item~1.A)~p.~88]{Havi94}, and the equivalences \ref{c:palawan-mai2008iii}$\Leftrightarrow$\ref{c:palawan-mai2008iv} $\Leftrightarrow$\ref{c:palawan-mai2008viii} appear in \cite[Lemma on p.~201]{Niko86}. \end{remark} As consequences of Theorem~\ref{t:palawan-mai2008}, we can now describe scenarios in which the necessary condition established in Corollary~\ref{c:kimono-ken} is also sufficient. \begin{corollary} \label{c:paris-octobre2008} Suppose that the closed vector subspaces $(U_i)_{1\leq i\leq m}$ are linearly independent, that $\|P_{m-1}P_m\|<1$, and that, for every $i\in\{1,\ldots,m-2\}$, $U_i$ is finite dimensional or finite codimensional. Then $(U_i)_{1\leq i\leq m}$ satisfies the inverse best approximation property. \end{corollary} \begin{proof} In view of the equivalence \ref{t:palawan-mai2008i}$\Leftrightarrow$\ref{t:palawan-mai2008vi} in Theorem~\ref{t:palawan-mai2008}, it is enough to show that $(\forall i\in\{1,\ldots,m-1\})$ ${\mathsf c}(U_i,U_{i+})<1$. For $i=m-1$, since $\|P_iP_{i+}\|=\|P_{m-1}P_m\|<1$, we derive from the implication \ref{c:palawan-mai2008viii}$\Rightarrow$\ref{c:palawan-mai2008vi} in Corollary~\ref{c:palawan-mai2008} that ${\mathsf c}(U_i,U_{i+})<1$. Now suppose that, for some $i\in\{2,\ldots,m-1\}$, ${\mathsf c}(U_i,U_{i+})<1$. Using the implication \ref{c:palawan-mai2008vi}$\Rightarrow$\ref{c:palawan-mai2008v} in Corollary~\ref{c:palawan-mai2008}, we deduce that $U_{(i-1)+}=U_i+U_{i+}$ is closed. In turn, since $U_{i-1}$ is finite or cofinite dimensional, it follows from \cite[Corollary~9.37]{Deut01} that ${\mathsf c}(U_{i-1},U_{(i-1)+})<1$, which completes the proof by induction.
\end{proof} \begin{corollary} \label{c:puertoprincessa-mai2008} Suppose that the closed vector subspaces $(U_i)_{1\leq i\leq m}$ are linearly independent and that, for every $i\in\{1,\ldots,m-1\}$, $U_i$ is finite dimensional or finite codimensional. Then $(U_i)_{1\leq i\leq m}$ satisfies the inverse best approximation property. \end{corollary} \begin{proof} Since $U_{m-1}$ is finite dimensional or finite codimensional, it follows from \cite[Corollary~9.37]{Deut01} and the implication \ref{c:palawan-mai2008vi}$\Rightarrow$\ref{c:palawan-mai2008viii} in Corollary~\ref{c:palawan-mai2008} that $\|P_{m-1}P_m\|<1$. Hence, the claim follows from Corollary~\ref{c:paris-octobre2008}. \end{proof} \begin{example} \label{prob:72} Let $V$ be a closed vector subspace of $\HH$ and let $(v_i)_{1\leq i\leq m-1}$ be linearly independent vectors such that $V^\bot\cap \spa\{v_i\}_{1\leq i\leq m-1}=\{0\}$. Then, for every $(\eta_i)_{1\leq i\leq m-1}\in\CC^{m-1}$, the constrained moment problem \begin{equation} \label{e:p72} x\in V\quad\text{and}\quad (\forall i\in\{1,\ldots,m-1\})\quad\scal{x}{v_i}=\eta_i \end{equation} admits a solution. \end{example} \begin{proof} This is a special case of Corollary~\ref{c:puertoprincessa-mai2008}, where $U_m=V^\bot$, $u_m=0$, and, for every $i\in\{1,\ldots,m-1\}$, $U_i=\spa\{v_i\}$ and $u_i=\eta_iv_i/\|v_i\|^2$. \end{proof} \begin{corollary} \label{c:puertoprincessa-mai2008-2} Suppose that the subspaces $(U_i)_{1\leq i\leq m}$ are linearly independent and that $\HH$ is finite dimensional. Then $(U_i)_{1\leq i\leq m}$ satisfies the inverse best approximation property. \end{corollary} The above results pertain to the existence of solutions to \eqref{e:puertoprincessa-mai2008-1}. We conclude this section with a uniqueness result that follows at once from Proposition~\ref{p:carac}\ref{p:caracii}. \begin{proposition} \label{p:23} Let $(u_i)_{1\leq i\leq m}\in\cart_{i=1}^mU_i$. 
Then \eqref{e:puertoprincessa-mai2008-1} has at most one solution if and only if $\bigcap_{i=1}^mU_i^\bot=\{0\}$. \end{proposition} Combining Theorem~\ref{t:palawan-mai2008} and Proposition~\ref{p:23} yields conditions for the existence of unique solutions to \eqref{e:puertoprincessa-mai2008-1}. Here is an example in which $m=2$. \begin{example} \label{ex:unique1} The following are equivalent. \begin{enumerate} \item \label{ex:unique1i} For every $u_1\in U_1$ and $u_2\in U_2$, $S(u_1,u_2)$ is a singleton. \item \label{ex:unique1ii} $U_1^\bot+U_2^\bot=\HH$ and $U_1^{\bot}\cap U_2^{\bot}=\{0\}$. \end{enumerate} \end{example} \begin{proof} Existence follows from the implication \ref{c:palawan-mai2008iv}$\Rightarrow$\ref{c:palawan-mai2008i} in Corollary~\ref{c:palawan-mai2008}, and uniqueness from Proposition~\ref{p:23}. \end{proof} \section{IBAP and the periodic projection algorithm} \label{sec:3} If $(U_i)_{1\leq i\leq m}$ satisfies the IBAP, then \eqref{e:puertoprincessa-mai2008-1} will in general admit infinitely many solutions (see Proposition~\ref{p:23}) and it is of interest to identify specific solutions such as those of minimum norm. \begin{proposition} \label{p:24} Suppose that $(U_i)_{1\leq i\leq m}$ satisfies the inverse best approximation property, let $(u_i)_{1\leq i\leq m}\in\cart_{i=1}^mU_i$, and, for every $i\in\{1,\ldots,m-1\}$, set \begin{eqnarray} \label{e:snakeisland-mai2008} T_i\colon U_{i+}&\to& U_i+U_{i+}\nonumber\\ v~&\mapsto &(\Id-P_iP_{i+})^{-1}(u_i-P_iv)+ (\Id-P_{i+}P_i)^{-1}(v-P_{i+}u_i). \end{eqnarray} Define recursively $\overline{x}_m=u_m$ and $(\forall i\in\{m-1,\ldots,1\})$ $\overline{x}_i=T_i\overline{x}_{i+1}$. Then, for every $i\in\{1,\ldots,m\}$, \begin{equation} \label{e:minimal} \overline{x}_i=P_{S_i}0,\quad\text{where}\quad S_i=\bigcap_{j=i}^m(u_j+U_j^\bot). \end{equation} In particular, $\overline{x}_1=P_{S(u_1,\ldots,u_m)}0$ is the minimal norm solution to \eqref{e:puertoprincessa-mai2008-1}. 
\end{proposition} \begin{proof} Let $i\in\{1,\ldots,m-1\}$. We first observe that the operator $T_i$ is well defined since the implication \ref{t:palawan-mai2008i}$\Rightarrow$\ref{t:palawan-mai2008viii} in Theorem~\ref{t:palawan-mai2008} yields $\|P_iP_{i+}\|=\|P_{i+}P_i\|<1$. Moreover, the expansions $(\Id-P_iP_{i+})^{-1}=\sum_{j\in\NN}(P_iP_{i+})^j$ and $(\Id-P_{i+}P_i)^{-1}=\sum_{j\in\NN}(P_{i+}P_i)^j$ imply that its range is indeed contained in $U_i+U_{i+}$. Thus, $\overline{x}_i$ is a well defined point in $U_i+U_{i+}=U_{(i-1)+}$. To prove \eqref{e:minimal}, we proceed by induction. First, for $i=m$, since $u_m\in U_m$, we obtain at once \begin{equation} \overline{x}_{i}=u_m=P_{(u_m+U_m^\bot)}0=P_{S_i}0. \end{equation} Now, suppose that \eqref{e:minimal} is true for some $i\in\{2,\ldots,m\}$. By definition, \begin{equation} \overline{x}_{i-1}= (\Id-P_{i-1}P_{(i-1)+})^{-1}(u_{i -1}-P_{i-1}\overline{x}_i)+ (\Id-P_{(i-1)+}P_{i-1})^{-1}(\overline{x}_i-P_{(i-1)+}u_{i-1}). \end{equation} Since $\overline{x}_i\in U_{(i-1)+}$ and $u_{i-1}\in U_{i-1}$, Lemma~\ref{l:palawan-mai2008}\ref{l:palawan-mai2008iib} asserts that $\overline{x}_{i-1}$ is the element of minimal norm in $(u_{i-1}+U^{\bot}_{i-1})\cap(\overline{x}_i+U^{\bot}_{(i-1)+})$. On the other hand since, by \eqref{e:minimal}, $\overline{x}_i\in\bigcap_{j=i}^m(u_j+U_j^\bot)$, we derive from \eqref{e:palawan-mai2008} that, as in Proposition~\ref{p:carac}, \begin{equation} \overline{x}_i+U^{\bot}_{(i-1)+} =\overline{x}_i+\Bigg(\sum_{j=i}^mU_j\Bigg)^\bot =\overline{x}_i+\bigcap_{j=i}^mU_j^\bot =\bigcap_{j=i}^m(u_j+U_j^\bot). \end{equation} As a result, $\overline{x}_{i-1}$ is the element of minimum norm in \begin{equation} (u_{i-1}+U^{\bot}_{i-1})\cap\bigcap_{j=i}^m(u_j+U_j^\bot). \end{equation} In other words, $\overline{x}_{i-1}=P_{S_{i-1}}0$, which completes the proof. 
\end{proof} Conceptually, Proposition~\ref{p:24} provides a finite recursion for computing the minimal norm solution $\overline{x}_1$ to \eqref{e:puertoprincessa-mai2008-1} for a given selection of vectors $(u_i)_{1\leq i\leq m}\in\cart_{i=1}^mU_i$. This scheme is in general not of direct numerical use since it requires the inversion of operators in \eqref{e:snakeisland-mai2008}. However, minimal norm solutions and, more generally, best approximations from the solution set of \eqref{e:puertoprincessa-mai2008-1} can be computed iteratively via projection methods. Indeed, for every $r\in\HH$ and $(u_i)_{1\leq i\leq m}\in\cart_{i=1}^mU_i$, let us denote by $B(r;u_1,\ldots,u_m)$ the best approximation to $r$ from $S(u_1,\ldots,u_m)$, i.e., by Proposition~\ref{p:carac}\ref{p:caraci}, \begin{equation} \label{e:diliman2009-03-12a} B(r;u_1,\ldots,u_m)=P_{S(u_1,\ldots,u_m)}r= P_{\bigcap_{i=1}^m(u_i+U_i^\bot)}r. \end{equation} A standard numerical method for computing $B(r;u_1,\ldots,u_m)$ is the periodic projection algorithm \begin{equation} \label{e:el-nido2009-03-10a} x_0=r\quad\text{and}\quad(\forall n\in\NN)\quad x_{n+1}=Q_1\cdots Q_mx_n \end{equation} where, for every $i\in\{1,\ldots,m\}$, $Q_i$ is the projector onto $u_i+U_i^\bot$, i.e., \begin{equation} \label{e:puertoprincessa-mars2009-1} Q_i=P_{u_i+U_i^\bot}\colon x\mapsto u_i+x-P_ix. \end{equation} This algorithm is rooted in the classical work of Kaczmarz \cite{Kacz37} and von Neumann \cite{Vonn49}. Although it has been generalized in various directions \cite{Baus96,Kruk06,Butn01,Jamo97}, it is still widely used due to its simplicity and ease of implementation. If $S(u_1,\ldots,u_m)\neq\emp$, the sequence $(x_n)_{n\in\NN}$ generated by \eqref{e:el-nido2009-03-10a} converges strongly to $B(r;u_1,\ldots,u_m)$. If $u_i\equiv 0$, this result was first established by von Neumann \cite{Vonn49} for $m=2$ and extended by Halperin \cite{Halp62} for $m>2$. 
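As a concrete illustration of the recursion \eqref{e:el-nido2009-03-10a}, here is a numerical sketch (not part of the analysis above) for $m=2$ in $\RR^3$; the subspaces, the prescribed vectors $u_1,u_2$, and the starting point $r$ are arbitrary illustrative choices.

```python
import numpy as np

def proj(A):
    """Orthogonal projector onto the column span of A."""
    Q, _ = np.linalg.qr(A)
    return Q @ Q.T

# Arbitrary illustrative data: U1 = span{e1}, U2 = span{(1,1,0)},
# for which ||P1 P2|| = 1/sqrt(2) < 1, so the IBAP holds.
P1 = proj(np.array([[1.0], [0.0], [0.0]]))
P2 = proj(np.array([[1.0], [1.0], [0.0]]))
u1 = np.array([2.0, 0.0, 0.0])        # prescribed element of U1
u2 = np.array([0.5, 0.5, 0.0])        # prescribed element of U2

# Projectors onto the affine subspaces u_i + U_i^perp:
# Q_i x = u_i + x - P_i x, as in (e:puertoprincessa-mars2009-1).
Q1 = lambda x: u1 + x - P1 @ x
Q2 = lambda x: u2 + x - P2 @ x

r = np.array([1.0, -1.0, 3.0])        # starting point x_0 = r
x = r.copy()
for _ in range(200):                  # x_{n+1} = Q1 Q2 x_n
    x = Q1(Q2(x))

# The limit is the best approximation to r from the solution set,
# here the line {(2, -1, t) : t real}, so x -> (2, -1, 3).
print(np.round(x, 6))
```

At the limit, $P_1x=u_1$ and $P_2x=u_2$ hold simultaneously, in accordance with the strong convergence result recalled above.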
Strong convergence to $B(r;u_1,\ldots,u_m)$ in the general affine case ($u_i\not\equiv 0$) is a routine modification of Halperin's proof via Lemma~\ref{l:45} (see \cite{Deut01} for a detailed account). Interestingly, if the projectors are not activated periodically in \eqref{e:el-nido2009-03-10a} but in a more chaotic fashion, only weak convergence has been established \cite{Amem65}, and it is still an open question whether strong convergence holds. In connection with \eqref{e:el-nido2009-03-10a}, an important question is whether the convergence of $(x_n)_{n\in\NN}$ to $B(r;u_1,\ldots,u_m)$ occurs at a linear rate. The answer is negative, and it has actually been shown that arbitrarily slow convergence may occur \cite{Baus09} in the sense that, for every sequence $(\alpha_n)_{n\in\NN}$ in $\zeroun$ such that $\alpha_n\downarrow 0$, there exists $r\in\HH$ such that \begin{equation} \label{e:el-nido2009-03-10c} (\forall n\in\NN)\quad\|x_n-B(r;u_1,\ldots,u_m)\|\geq\alpha_n. \end{equation} On the other hand, several conditions have been found \cite{Baus96,Baus09,Baus03,Deut84,Deut08,Kaya88} that guarantee that, if \eqref{e:puertoprincessa-mai2008-1} admits a solution for some $(u_i)_{1\leq i\leq m}\in\cart_{i=1}^mU_i$, then, for every $r\in\HH$, the sequence $(x_n)_{n\in\NN}$ generated by \eqref{e:el-nido2009-03-10a} converges uniformly linearly to $B(r;u_1,\ldots,u_m)$ in the sense that there exists $\alpha\in\Zeroun$ such that \cite[Section~4]{Deut08} \begin{equation} \label{e:puertoprincessa-mars2009-2} (\forall n\in\NN)\quad \|x_n-B(r;u_1,\ldots,u_m)\|\leq\alpha^n\|r-B(r;u_1,\ldots,u_m)\|. \end{equation} The next result states that the IBAP implies uniform linear convergence of the periodic projection algorithm for solving the underlying affine feasibility problem \eqref{e:puertoprincessa-mai2008-1} for every $(u_i)_{1\leq i\leq m}\in\cart_{i=1}^mU_i$ and every $r\in\HH$.
In other words, if \eqref{e:puertoprincessa-mai2008-1} admits a solution for every $(u_i)_{1\leq i\leq m}\in\cart_{i=1}^mU_i$, then uniform linear convergence always occurs in \eqref{e:el-nido2009-03-10a}. \begin{proposition} \label{p:palawan32} Suppose that $(U_i)_{1\leq i\leq m}$ satisfies the inverse best approximation property and set \begin{equation} \label{e:el-nido2009-03-07b} \alpha=\sqrt{1-\prod_{i=1}^{m-1}\Big(1-{\mathsf c} \big(U_i^\bot,U_{i+}^\bot\big)^2\Big)}. \end{equation} Then $\alpha\in\Zeroun$ and, for every $r\in\HH$ and every $(u_i)_{1\leq i\leq m}\in\cart_{i=1}^mU_i$, the sequence $(x_n)_{n\in\NN}$ generated by \eqref{e:el-nido2009-03-10a} satisfies \eqref{e:puertoprincessa-mars2009-2}. \end{proposition} \begin{proof} We first deduce from the implication \ref{t:palawan-mai2008i}$\Rightarrow$\ref{t:palawan-mai2008vi} in Theorem~\ref{t:palawan-mai2008} that $(\forall i\in\{1,\ldots,m-1\})$ ${\mathsf c}(U_i,U_{i+})<1$. Hence, it follows from \cite[Theorem~9.35]{Deut01} that $(\forall i\in\{1,\ldots,m-1\})$ ${\mathsf c}(U^\bot_i,U^\bot_{i+})<1$. In turn, \eqref{e:el-nido2009-03-07b} and \eqref{e:palawan-mai2008} imply that \begin{equation} \label{e:palawan08} \alpha=\sqrt{1-\prod_{i=1}^{m-1}\bigg(1-{\mathsf c} \bigg(U_i^\bot,\bigcap_{j=i+1}^mU_j^\bot\bigg)^2\bigg)}\in\Zeroun. \end{equation} Now let $(u_i)_{1\leq i\leq m}\in\cart_{i=1}^mU_i$. Since the IBAP holds, we have \begin{equation} \label{e:rio-10mai2009} S(u_1,\ldots,u_m)\neq\emp. \end{equation} Altogether, it follows from \eqref{e:palawan08}, \eqref{e:rio-10mai2009}, and \cite[Corollary~9.34]{Deut01} applied to $(U_i^\bot)_{1\leq i\leq m}$ that \eqref{e:puertoprincessa-mars2009-2} holds. \end{proof} In the case when $m=2$, the above result admits a partial converse based on a result of \cite{Baus09}. 
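Before stating it, we note that the rate estimate of Proposition~\ref{p:palawan32} can be probed numerically when $m=2$, in which case \eqref{e:el-nido2009-03-07b} reduces to $\alpha={\mathsf c}(U_1^\bot,U_2^\bot)$. The sketch below (not part of the analysis; all subspaces and vectors are arbitrary illustrative choices) estimates the Friedrichs angle cosine via the SVD and checks \eqref{e:puertoprincessa-mars2009-2} along the iterates.

```python
import numpy as np

def proj(A):
    """Orthogonal projector onto the column span of A."""
    Q, _ = np.linalg.qr(A)
    return Q @ Q.T

def nullspace(M, tol=1e-10):
    """Orthonormal basis (columns) of ker M, via the SVD."""
    _, s, Vt = np.linalg.svd(M)
    return Vt[np.sum(s > tol):].T

# U1 = span{e1}, U2 = span{(1,1,0)} in R^3 (arbitrary choices).
PU1 = proj(np.array([[1.0, 0.0, 0.0]]).T)      # projector onto U1
PU2 = proj(np.array([[1.0, 1.0, 0.0]]).T)      # projector onto U2
P1p, P2p = np.eye(3) - PU1, np.eye(3) - PU2    # onto U1^perp, U2^perp

# c(U1^perp, U2^perp) = ||P1p P2p - P_W||, W = U1^perp ∩ U2^perp;
# W is the common kernel of PU1 and PU2.
W = nullspace(np.vstack([PU1, PU2]))
alpha = np.linalg.norm(P1p @ P2p - proj(W), 2)  # here 1/sqrt(2)

# Check ||x_n - B|| <= alpha^n ||r - B||  along the iterates.
u1, u2 = np.array([2.0, 0.0, 0.0]), np.array([0.5, 0.5, 0.0])
r = np.array([1.0, -1.0, 3.0])
# B(r; u1, u2) = r + z, with z the minimum-norm solution of the
# consistent stacked system [PU1; PU2] z = [u1 - PU1 r; u2 - PU2 r].
z, *_ = np.linalg.lstsq(np.vstack([PU1, PU2]),
                        np.concatenate([u1 - PU1 @ r, u2 - PU2 @ r]),
                        rcond=None)
B = r + z
x, ok = r.copy(), True
for n in range(30):
    ok = ok and np.linalg.norm(x - B) <= alpha**n * np.linalg.norm(r - B) + 1e-12
    y = u2 + x - PU2 @ x          # Q2 x
    x = u1 + y - PU1 @ y          # x_{n+1} = Q1 Q2 x_n
print(round(float(alpha), 4), bool(ok))
```

In this example the bound holds with room to spare, since a full cycle of two projections contracts the error by $\alpha^2$ rather than $\alpha$.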
\begin{proposition} \label{p:diliman2009-03-12} Suppose that $U_1\cap U_2=\{0\}$, that $(U_1,U_2)$ does not satisfy the IBAP, and that $(u_1,u_2)\in U_1\times U_2$ satisfies $S(u_1,u_2)\neq\emp$. Let $(\alpha_n)_{n\in\NN}$ be a sequence in $\zeroun$ such that $\alpha_n\downarrow 0$. Then there exists $r\in\HH$ such that the sequence $(x_n)_{n\in\NN}$ generated by \eqref{e:el-nido2009-03-10a} with $m=2$ satisfies \begin{equation} \label{e:el-nido2009-03-10d} (\forall n\in\NN)\quad\|x_n-B(r;u_1,u_2)\|\geq\alpha_n. \end{equation} \end{proposition} \begin{proof} It follows from our hypotheses and the equivalence \ref{c:palawan-mai2008i}$\Leftrightarrow$\ref{c:palawan-mai2008v} in Corollary~\ref{c:palawan-mai2008} that $U_1+U_2$ is not closed. In turn, we derive from \cite[Theorem~1.4(2)]{Baus09} that there exists $y_0\in\HH$ such that the sequence $(y_n)_{n\in\NN}$ generated by the alternating projection algorithm \begin{equation} \label{e:diliman2009-03-12b} (\forall n\in\NN)\quad y_{n+1}=P_{U_1^\bot}P_{U_2^\bot}y_n \end{equation} satisfies \begin{equation} \label{e:diliman2009-03-12c} (\forall n\in\NN)\quad\|y_n-P_{U_1^\bot\cap U_2^\bot}y_0\|\geq\alpha_n. \end{equation} Now let $y\in S(u_1,u_2)$ and set $r=y+y_0$. It follows from Proposition~\ref{p:carac}\ref{p:caracii} that $S(u_1,u_2)=y+(U_1^\bot\cap U_2^\bot)$. Hence, it follows from \eqref{e:diliman2009-03-12a} and Lemma~\ref{l:45} that \begin{equation} \label{e:rio-9mai2009} B(r;u_1,u_2)=y+P_{U_1^\bot\cap U_2^\bot}(r-y) =y+P_{U_1^\bot\cap U_2^\bot}y_0. \end{equation} On the other hand, $x_0-y=y_0$ and, using Lemma~\ref{l:45}, \eqref{e:el-nido2009-03-10a} with $m=2$ and \eqref{e:rio-9mai2009} yield \begin{equation} \label{e:diliman2009-03-12d} (\forall n\in\NN)\quad x_{n+1}-y=P_{u_1+U_1^\bot}P_{u_2+U_2^\bot}x_n-y =P_{y+U_1^\bot}P_{y+U_2^\bot}x_n-y =P_{U_1^\bot}P_{U_2^\bot}(x_n-y). \end{equation} This and \eqref{e:diliman2009-03-12b} imply by induction that $(\forall n\in\NN)$ $x_n-y=y_n$.
In turn, we derive from \eqref{e:rio-9mai2009} and \eqref{e:diliman2009-03-12c} that \begin{equation} \label{e:diliman2009-03-12e} (\forall n\in\NN)\quad \|x_n-B(r;u_1,u_2)\| =\|(y_n+y)-(y+P_{U_1^\bot\cap U_2^\bot}y_0)\| =\|y_n-P_{U_1^\bot\cap U_2^\bot}y_0\| \geq\alpha_n, \end{equation} which completes the proof. \end{proof} \section{Applications} \label{sec:4} In this section, we present several applications of Theorem~\ref{t:palawan-mai2008}. As usual, $L^2(\RR^N)$ is the space of real- or complex-valued square-integrable functions on the $N$-dimensional Euclidean space $\RR^N$, $\widehat{x}$ denotes the Fourier transform of a function $x\in L^2(\RR^N)$, and $\supp\widehat{x}$ the support of $\widehat{x}$. Moreover, if $A\subset\RR^N$, $1_A$ denotes the characteristic function of $A$ and $\complement A$ the complement of $A$. Finally, $\mu$ designates the Lebesgue measure on $\RR^N$, $\ran T$ the range of an operator $T$, and $\cran T$ the closure of $\ran T$. The following lemma and its subsequent refinement will be used on several occasions. \begin{lemma}{\rm\cite[Proposition~8]{Amre77}, \cite[Corollary~1]{Bene85}} \label{l:1} Let $A$ and $B$ be measurable subsets of $\RR^N$ of finite Lebesgue measure, and let $x\in L^2(\RR^N)$ be such that $x1_{\complement A}=0$ and $\widehat{x}1_{\complement B}=0$. Then $x=0$. \end{lemma} \begin{lemma}{\rm\cite[p.~264]{Amre77}, \cite[Theorem~8.4]{Foll97}} \label{l:2} Let $A$ and $B$ be measurable subsets of $\RR^N$ of finite Lebesgue measure. Set $U=\menge{x\in L^2(\RR^N)}{x1_{\complement A}=0}$ and $V=\menge{x\in L^2(\RR^N)}{\widehat{x}1_{\complement B}=0}$. Then $\|P_UP_V\|<1$. \end{lemma} \subsection{Systems of linear equations} Going back to Definition~\ref{d:el-nido2009-03-07}, we can say that $(U_i)_{1\leq i\leq m}$ satisfies the IBAP if, for every $(u_i)_{1\leq i\leq m}\in\cart_{i=1}^m\ran P_i$, there exists $x\in\HH$ such that $(\forall i\in\{1,\ldots,m\})$ $P_ix=u_i$.
As we have shown, this property holds if \ref{t:palawan-mai2008iv} in Theorem~\ref{t:palawan-mai2008} is satisfied, i.e., if $(\forall i\in\{1,\ldots,m-1\})$ $\ker P_i+\bigcap_{j=i+1}^m\ker P_j=\HH$. In the following proposition, we show that such surjectivity results remain valid if the projectors are replaced by more general linear operators. \begin{proposition} \label{p:el-nido2009-03-04} For every $i\in\{1,\ldots,m\}$, let $\GG_i$ be a normed vector space and let $T_i\colon\HH\to\GG_i$ be linear and bounded. Suppose that \begin{equation} \label{e:el-nido2009-03-04a} (\forall i\in\{1,\ldots,m-1\})\quad \ker T_i+\bigcap_{j=i+1}^m\ker T_j=\HH. \end{equation} Then, for every $(y_i)_{1\leq i\leq m}\in\cart_{i=1}^m\ran T_i$, there exists $x\in\HH$ such that \begin{equation} \label{e:el-nido2009-03-04b} (\forall i\in\{1,\ldots,m\})\quad T_ix=y_i. \end{equation} \end{proposition} \begin{proof} For every $i\in\{1,\ldots,m\}$, let $y_i\in\ran T_i$, set $U_i=(\ker T_i)^\bot$, and let $u_i\in U_i$ be such that $T_iu_i=y_i$. Now let $x\in\HH$. Then $x$ solves \eqref{e:el-nido2009-03-04b} $\Leftrightarrow$ $(\forall i\in\{1,\ldots,m\})$ $T_ix=T_iu_i$ $\Leftrightarrow$ $(\forall i\in\{1,\ldots,m\})$ $T_i(x-u_i)=0$ $\Leftrightarrow$ $(\forall i\in\{1,\ldots,m\})$ $x-u_i\in\ker T_i=U_i^\bot$ $\Leftrightarrow$ $(\forall i\in\{1,\ldots,m\})$ $P_ix=u_i$. We thus recover an instance of problem \eqref{e:puertoprincessa-mai2008-1} and, in view of the equivalence between items \ref{t:palawan-mai2008i} and \ref{t:palawan-mai2008iv} in Theorem~\ref{t:palawan-mai2008}, we obtain the existence of solutions to \eqref{e:el-nido2009-03-04b} if, for every $i\in\{1,\ldots,m-1\}$, $U_i^\bot+U_{i+}^\bot=\HH$, i.e., if \eqref{e:el-nido2009-03-04a} holds. \end{proof} We now give an application of Proposition~\ref{p:el-nido2009-03-04} to systems of integral equations.
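First, as a finite-dimensional sanity check (not part of the original analysis), the content of Proposition~\ref{p:el-nido2009-03-04} can be illustrated numerically on $\HH=\RR^4$ with two arbitrarily chosen matrices whose kernels satisfy \eqref{e:el-nido2009-03-04a}: every pair of right-hand sides in the ranges is then simultaneously attainable.

```python
import numpy as np

# Two operators on H = R^4 whose kernels satisfy
# ker T1 + ker T2 = H  (condition (e:el-nido2009-03-04a));
# the matrices are arbitrary illustrative choices.
T1 = np.array([[1.0, 0.0, 0.0, 0.0],
               [0.0, 1.0, 0.0, 0.0]])   # ker T1 = span{e3, e4}
T2 = np.array([[1.0, 0.0, 1.0, 0.0],
               [0.0, 1.0, 0.0, 1.0]])   # ker T2 = span{(1,0,-1,0),(0,1,0,-1)}

# Arbitrary right-hand sides in the ranges (here, all of R^2).
y1 = np.array([3.0, -1.0])
y2 = np.array([0.5, 2.0])

# Solve the stacked system; it is consistent by the proposition,
# so the least-squares solution satisfies both equations exactly.
A = np.vstack([T1, T2])
b = np.concatenate([y1, y2])
x, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.allclose(T1 @ x, y1), np.allclose(T2 @ x, y2))
```

The kernel condition is what makes the stacked system consistent for every admissible pair $(y_1,y_2)$; without it, the system may have no solution.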
\begin{proposition} \label{p:el-nido2009-03-05} For every $i\in\{1,\ldots,m\}$, let $v_i$, $w_i$, and $y_i$ be functions in $L^2(\RR^N)$ such that there exists $x_i\in L^2(\RR^N)$ that satisfies $\int_{\RR^N}x_i(s)v_i(s)w_i(t-s)ds=y_i(t)$ $\mu$-a.e.\ on $\RR^N$. Moreover, suppose that there exist measurable sets $(A_i)_{1\leq i\leq m}$ in $\RR^N$ such that \begin{equation} \label{e:supports} (\forall i\in\{1,\ldots,m\})\quad \mu\big((A_i+\supp\widehat{v_i})\cap\supp\widehat{w_i}\big)=0 \end{equation} and \begin{equation} \label{e:overlapping} (\forall i\in\{1,\ldots,m-1\})\quad A_i\cup\bigcap_{j=i+1}^m A_j=\RR^N. \end{equation} Then there exists $x\in L^2(\RR^N)$ such that \begin{equation} \label{e:el-nido2009-03-05} (\forall i\in\{1,\ldots,m\})\quad \int_{\RR^N}x(s)v_i(s)w_i(t-s)ds=y_i(t)\;\:\mu\text{-a.e.\ on\;}\RR^N. \end{equation} \end{proposition} \begin{proof} The result is an application of Proposition~\ref{p:el-nido2009-03-04} in $\HH=L^2(\RR^N)$. To see this, denote by $\star$ the $N$-dimensional convolution operator and, for every $i\in\{1,\ldots,m\}$ and every $x\in\HH$, set $T_ix=(xv_i)\star w_i$. Then $(T_i)_{1\leq i\leq m}$ are bounded linear operators from $\HH$ to $\HH$ since, by \cite[Th\'eor\`eme~IV.15]{Brez93}, \begin{equation} \big(\forall i\in\{1,\ldots,m\}\big)\big(\forall x\in \HH\big) \quad\|T_ix\|=\|(xv_i)\star w_i\|\leq\|xv_i\|_{L^1}\|w_i\| \leq\|x\|\,\|v_i\|\,\|w_i\|. \end{equation} Now fix $i\in\{1,\ldots,m-1\}$. Since \eqref{e:el-nido2009-03-05} can be written as \eqref{e:el-nido2009-03-04b}, Proposition~\ref{p:el-nido2009-03-04} asserts that it suffices to show that \begin{equation} \label{e:el-nido2009-03-05b} \ker T_i+\bigcap_{j=i+1}^m\ker T_j=\HH. \end{equation} To this end, let $z\in \HH$. It follows from \eqref{e:overlapping} that we can write $z=z_1+z_2$, where $\widehat{z}_1=\widehat{z}\,1_{A_i}$ and $\widehat{z}_2=\widehat{z}\,1_{\complement A_i}$.
We have \begin{equation} \label{e:elnido-mars2009-f} \widehat{T_iz_1}=\big[(z_1v_i)\star w_i\big]^\wedge= (\widehat{z_1}\star\widehat{v_i})\widehat{w_i}= \big((\widehat{z}\,1_{A_i})\star\widehat{v_i}\big)\widehat{w_i} \end{equation} and \begin{equation} \label{e:elnido-mars2009-g} \supp\big((\widehat{z}\,1_{A_i})\star\widehat{v_i}\big)\subset \supp(\widehat{z}\,1_{A_i})+\supp\widehat{v_i}\subset A_i+\supp\widehat{v_i}. \end{equation} Therefore, we derive from \eqref{e:elnido-mars2009-f} and \eqref{e:supports} that \begin{equation} \mu\big(\supp\widehat{T_iz_1}\big)= \mu\Big(\supp\big((\widehat{z}\,1_{A_i})\star\widehat{v_i}\big) \cap\supp\widehat{w_i}\Big)=0. \end{equation} This shows that $z_1\in\ker T_i$. Now fix $j\in\{i+1,\ldots,m\}$. Then it remains to show that $z_2\in\ker T_j$. Since \eqref{e:overlapping} yields $\complement A_i=\bigcap_{k=i+1}^mA_k\subset A_j$, arguing as above, we get \begin{equation} \supp\widehat{T_jz_2}= \supp\big(((\widehat{z}\,1_{\complement A_i})\star\widehat{v_j}) \widehat{w_j}\big) \subset\big(\complement A_i+\supp\widehat{v_j}\big)\cap\supp\widehat{w_j} \subset\big(A_j+\supp\widehat{v_j}\big)\cap\supp\widehat{w_j}. \end{equation} In turn, we deduce from \eqref{e:supports} that $\mu(\supp\widehat{T_jz_2})=0$ and therefore that $z_2\in\ker T_j$. \end{proof} We now give an example in which the hypotheses of Proposition~\ref{p:el-nido2009-03-05} are satisfied with $m=3$. \begin{example} Let $\{\alpha,\beta,\gamma\}\subset\RR$ and let $\{v_1,v_2,v_3,w_1,w_2,w_3\}\subset L^{2}(\RR)$. Suppose that $0<\gamma<2\alpha$ and that \begin{equation} \begin{cases} \supp\widehat{v_1}\subset\left[\beta,\beta+\gamma\right], \; \supp\widehat{v_2}\subset\left[\alpha,\pinf\right[, \; \supp\widehat{v_3}\subset\left]\minf,-\alpha\right]\\ \supp\widehat{w_1}\subset\left[-\alpha+\beta+\gamma,\alpha+\beta\right],\; \supp\widehat{w_2}\subset\left]\minf,0\right],\; \supp\widehat{w_3}\subset\left[0,\pinf\right[. 
\end{cases} \end{equation} Now set $A_1=\left]\minf,-\alpha\right]\cup\left[\alpha,\pinf\right[$, $A_2=\left[-\alpha,\pinf\right[$, and $A_3=\left]\minf,\alpha\right]$. Then \eqref{e:overlapping} is satisfied and, since $A_1+\supp\widehat{v_1}\subset\left]\minf,-\alpha+\beta+\gamma\right] \cup\left[\alpha+\beta,\pinf\right[$, $A_2+\supp\widehat{v_2}\subset\left[0,\pinf\right[$, and $A_3+\supp\widehat{v_3}\subset\left]\minf,0\right]$, so is \eqref{e:supports}. \end{example} Next, we consider a moment problem with wavelet frames \cite{Chri96,Chri01,Daub92}. \begin{proposition} Let $\psi$ be a band-limited function in $L^2(\RR)$, say $\supp\widehat{\psi}\subset[-\rho,\rho]$ for some $\rho\in\RPP$. Suppose that $(\psi_{j,k})_{(j,k)\in\ZZ^2}$, where $\psi_{j,k}\colon t\mapsto 2^{j/2}\psi(2^jt-k)$, is a frame for $L^2(\RR)$, i.e., there exist constants $\alpha$ and $\beta$ in $\RPP$ such that \begin{equation} \label{e:frame} \big(\forall x\in L^2(\RR)\big)\quad \alpha\|x\|^2\leq\sum_{j\in\ZZ}\sum_{k\in\ZZ} \abscal{x}{\psi_{j,k}}^2\leq\beta\|x\|^2, \end{equation} and, moreover, that $(\psi_{j,k})_{(j,k)\in\ZZ^2}$ admits a lower Riesz bound $\gamma\in\RPP$, i.e., \begin{equation} \label{e:lower-riesz} \big(\forall (c_{j,k})_{(j,k)\in\ZZ^2}\in\ell^2(\ZZ^2)\big) \quad\sum_{j\in\ZZ}\sum_{k\in\ZZ}|c_{j,k}|^2\leq \gamma\Bigg\|\sum_{j\in\ZZ}\sum_{k\in\ZZ}c_{j,k}\psi_{j,k}\Bigg\|^2. \end{equation} Let $A$ be a measurable subset of $\RR$ such that $0<\mu(A)<\pinf$, let $J\in\ZZ$, and set \begin{equation} \label{e:taichung2009-04-19} \Lambda=\menge{(j,k)\in\ZZ\times\ZZ}{j\leq J}. \end{equation} Then, for every function $y\in L^2(A)$ and every sequence $(\eta_{j,k})_{(j,k)\in\Lambda}\in\ell^2(\Lambda)$, there exists $x\in L^2(\RR)$ such that \begin{equation} \label{e:desired} x|_A=y\quad\text{and}\quad(\forall (j,k)\in\Lambda)\quad \scal{x}{\psi_{j,k}}=\eta_{j,k}.
\end{equation} \end{proposition} \begin{proof} Set $\HH=L^2(\RR)$, $\GG_1=L^2(A)$, and $\GG_2=\ell^2(\Lambda)$, and define bounded linear operators \begin{equation} \label{e:taipei2009-04-25} T_1\colon\HH\to\GG_1\colon x\mapsto x|_A\quad\text{and}\quad T_2\colon\HH\to\GG_2\colon x\mapsto (\scal{x}{\psi_{j,k}})_{(j,k)\in\Lambda}. \end{equation} Then $\ran T_1=\GG_1$ and, on the other hand, it follows from \cite[Lemma~2.2(ii)]{Casa02} and \eqref{e:lower-riesz} that $\ran T_2=\GG_2$. Hence, in view of \eqref{e:desired}, we must show that, for every $y_1\in\ran T_1$ and every $y_2\in\ran T_2$, there exists $x\in\HH$ such that $T_1x=y_1$ and $T_2x=y_2$. Appealing to Proposition~\ref{p:el-nido2009-03-04}, it is enough to show that $\ker T_1+\ker T_2=\HH$ or, equivalently, that \begin{equation} \label{e:ker-ker} U_1^\bot+U_2^\bot=\HH,\quad\text{where}\quad U_1=\cran T_1^*\quad\text{and}\quad U_2=\cran T_2^*. \end{equation} Set $U=\menge{x\in L^2(\RR)}{x1_{\complement A}=0}$, $B=[-2^J\rho,2^J\rho]$, and $V=\menge{x\in L^2(\RR)}{\widehat{x}1_{\complement B}=0}$. By Lemma~\ref{l:1}, $U\cap V=\{0\}$ and it therefore follows from \cite[Lemma~9.5]{Deut01} and Lemma~\ref{l:2} that \begin{equation} \label{e:taipei2009-04-24} {\mathsf c}(U,V)=\|P_UP_V-P_{U\cap V}\|=\|P_UP_V\|<1. \end{equation} On the other hand, it follows from \eqref{e:taipei2009-04-25} that $T_1^*\colon\GG_1\to\HH$ satisfies \begin{equation} (\forall y\in\GG_1)(\forall t\in\RR)\quad (T_1^*y)(t)= \begin{cases} y(t),&\text{if}\;\;t\in A;\\ 0,&\text{otherwise} \end{cases} \end{equation} and that \begin{equation} T_2^*\colon\GG_2\to\HH\colon(\eta_{j,k})_{(j,k)\in\Lambda}\mapsto \sum_{(j,k)\in\Lambda}\eta_{j,k}\psi_{j,k}. \end{equation} Since $\bigcup_{(j,k)\in\Lambda}\supp\widehat{\psi_{j,k}}\subset B$, we have \begin{equation} U_1\subset U\quad\text{and}\quad U_2\subset V. 
\end{equation} Hence, $U_1\cap U_2=\{0\}$ and \eqref{e:taipei2009-04-24} yields \begin{equation} \label{e:angles} {\mathsf c}(U_1,U_2)\leq{\mathsf c}(U,V)<1. \end{equation} In view of the implication \ref{c:palawan-mai2008vi}$\Rightarrow$\ref{c:palawan-mai2008iv} in Corollary~\ref{c:palawan-mai2008}, we conclude that \eqref{e:ker-ker} holds. \end{proof} \subsection{Subspaces spanned by nearly pairwise bi-orthogonal sequences} The following proposition provides a wide range of applications of Theorem~\ref{t:palawan-mai2008} with $m=3$. \begin{proposition} \label{p:example} Let $(u_{1,k})_{k\in\ZZ}$, $(u_{2,k})_{k\in\ZZ}$, and $(u_{3,k})_{k\in\ZZ}$ be orthonormal sequences in $\HH$ such that \begin{equation} \label{e:bi-ortho2} (\forall k\in\ZZ)(\forall i\in\{1,2\})(\forall j\in\{i+1,3\}) (\forall l\in\ZZ\smallsetminus\{k\}) \quad u_{i,k}\perp u_{j,l}. \end{equation} Moreover, suppose that \begin{equation} \label{e:neat} \sup_{k\in\ZZ}\sqrt{\abscal{u_{1,k}}{u_{2,k}}}+ \sup_{k\in\ZZ}\sqrt{\abscal{u_{2,k}}{u_{3,k}}}+ \sup_{k\in\ZZ}\sqrt{\abscal{u_{1,k}}{u_{3,k}}}<1. \end{equation} Then, for all sequences $(\alpha_{1,k})_{k\in\ZZ}$, $(\alpha_{2,k})_{k\in\ZZ}$, and $(\alpha_{3,k})_{k\in\ZZ}$ in $\ell^2(\ZZ)$, there exists $x\in\HH$ such that \begin{equation} (\forall k\in\ZZ)\quad \alpha_{1,k}=\scal{x}{u_{1,k}},\; \alpha_{2,k}=\scal{x}{u_{2,k}},\;\text{and}\;\; \alpha_{3,k}=\scal{x}{u_{3,k}}. \end{equation} \end{proposition} \begin{proof} For every $i\in\{1,2,3\}$, set $U_i=\spc\{u_{i,k}\}_{k\in\ZZ}$ and observe that $(\forall x\in\HH)$ $P_ix=\sum_{k\in\ZZ}\scal{x}{u_{i,k}}u_{i,k}$. Accordingly, we have to show that $(U_1,U_2,U_3)$ satisfies the IBAP. Using the equivalence \ref{t:palawan-mai2008i}$\Leftrightarrow$\ref{t:palawan-mai2008viii} in Theorem~\ref{t:palawan-mai2008}, this amounts to showing that $\|P_1P_{1+}\|<1$ and $\|P_2P_3\|<1$.
First, let us fix $i\in\{1,2\}$ and $j\in\{i+1,3\}$, and let us show that \begin{equation} \label{e:quick} \sup_{k\in\ZZ}\abscal{u_{i,k}}{u_{j,k}}^{2}\leq\|P_iP_j\| \leq\sup_{k\in\ZZ}\abscal{u_{i,k}}{u_{j,k}}. \end{equation} In view of \eqref{e:bi-ortho2}, we have \begin{equation} \label{e:hongkong2009-04-16a} (\forall x\in\HH)\quad P_iP_jx=\sum_{l\in\ZZ}\scal{x}{u_{j,l}}\scal{u_{j,l}}{u_{i,l}}u_{i,l}. \end{equation} Hence, for every $k\in\ZZ$, $P_iP_ju_{i,k}=\abscal{u_{i,k}}{u_{j,k}}^{2}u_{i,k}$ and therefore $\|P_i P_j\|\geq\|P_iP_ju_{i,k}\|=\abscal{u_{i,k}}{u_{j,k}}^{2}$. This proves the first inequality in \eqref{e:quick}. On the other hand, it follows from \eqref{e:hongkong2009-04-16a} that \begin{equation} (\forall x\in\HH)\quad \|P_iP_jx\|^2=\sum_{l\in\ZZ}|\scal{x}{u_{j,l}}\scal{u_{j,l}}{u_{i,l}}|^2 \leq\sup_{l\in\ZZ}\abscal{u_{j,l}}{u_{i,l}}^2\|x\|^2. \end{equation} This proves the second inequality in \eqref{e:quick}. Since \eqref{e:neat} and \eqref{e:quick} imply that $\|P_2P_3\|<1$, it remains to show that $\|P_1P_{1+}\|<1$. We derive from \eqref{e:neat} and \eqref{e:quick} that \begin{align} \label{e:assurance} (\sqrt{\|P_1P_2\|} +\sqrt{\|P_1P_3\|})(1+\sqrt{\|P_2P_3\|}) &=\sqrt{\|P_1 P_2\|}+\sqrt{\|P_1P_3\|} -\|P_2P_3\| \nonumber\\ &\quad +(\sqrt{\|P_1 P_2\|}+\sqrt{\|P_1P_3\|} +\sqrt{\|P_2P_3\|})\sqrt{\|P_2P_3\|}\nonumber\\ &<1-\|P_2P_3\|. \end{align} For every $k\in\ZZ$, let $P^{\bot}_{3,k}$ denote the projector onto $\{u_{3,k}\}^\bot$ and set \begin{equation} \label{e:puerto-coffee-mars2009b} v_{2,k}=\frac{P^\bot_{3,k}u_{2,k}}{\|P^\bot_{3,k}u_{2,k}\|} =\frac{u_{2,k}-\scal{u_{2,k}}{u_{3,k}}u_{3,k}} {\sqrt{1-\abscal{u_{2,k}}{u_{3,k}}^2}}, \end{equation} which is well defined since \eqref{e:neat} guarantees that $\abscal{u_{2,k}}{u_{3,k}}<1$. 
Let us note that \eqref{e:puerto-coffee-mars2009b} yields \begin{align} \label{e:2009-04-08} u_{3,k}-\frac{\scal{u_{3,k}}{u_{2,k}}v_{2,k}} {\sqrt{1-\abscal{u_{2,k}}{u_{3,k}}^2}} &=u_{3,k}-\frac{\scal{u_{3,k}}{u_{2,k}}u_{2,k}} {1-\abscal{u_{2,k}}{u_{3,k}}^2} +\frac{\abscal{u_{2,k}}{u_{3,k}}^2u_{3,k}} {1-\abscal{u_{2,k}}{u_{3,k}}^2}\nonumber\\ &=\frac{1}{1-\abscal{u_{2,k}}{u_{3,k}}^2} \big(u_{3,k}-\scal{u_{3,k}}{u_{2,k}}u_{2,k}\big). \end{align} On the other hand, it follows from \eqref{e:bi-ortho2} and \eqref{e:puerto-coffee-mars2009b} that $\{v_{2,k}\}_{k\in\ZZ}\cup\{u_{3,k}\}_{k\in\ZZ}$ is an orthonormal set and that \begin{equation} \label{e:puertoprincesa-coffeeshop-mars2009} \spc\big(\{v_{2,k}\}_{k\in\ZZ}\cup\{u_{3,k}\}_{k\in\ZZ}\big)= \spc\big(\{P^\bot_{3,k}u_{2,k}\}_{k\in\ZZ}\cup\{u_{3,k}\}_{k\in\ZZ}\big) =\overline{U_2+U_3}=\overline{U_{1+}}. \end{equation} To compute $\|P_1P_{1+}\|$, let $x\in\HH$ and let $k\in\ZZ$. We derive from \eqref{e:puertoprincesa-coffeeshop-mars2009} that \begin{equation} P_{1+}x=\sum_{l\in\ZZ}\scal{x}{v_{2,l}}v_{2,l} +\sum_{l\in\ZZ}\scal{x}{u_{3,l}}u_{3,l}. 
\end{equation} Hence, using \eqref{e:bi-ortho2}, \eqref{e:puerto-coffee-mars2009b}, and \eqref{e:2009-04-08}, we obtain \begin{align} \label{e:coeff} \scal{P_{1+} x}{u_{1,k}} &=\scal{x}{v_{2,k}}\scal{v_{2,k}}{u_{1,k}} +\scal{x}{u_{3,k}}\scal{u_{3,k}}{u_{1,k}}\nonumber\\ &=\scal{x}{u_{2,k}}\frac{\scal{v_{2,k}}{u_{1,k}}} {\sqrt{1-\abscal{u_{2,k}}{u_{3,k}}^2}} \nonumber\\ &\quad\;+\scal{x}{u_{3,k}}\Bigg(\scal{u_{3,k}}{u_{1,k}}- \frac{\scal{u_{3,k}}{u_{2,k}}\scal{v_{2,k}}{u_{1,k}}} {\sqrt{1-\abscal{u_{2,k}}{u_{3,k}}^2}}\Bigg)\nonumber\\ &=\scal{x}{u_{2,k}}\beta_k+\scal{x}{u_{3,k}}\gamma_k, \end{align} where \begin{equation} \beta_k=\frac{\scal{u_{2,k}}{u_{1,k}}-\scal{u_{2,k}}{u_{3,k}} \scal{u_{3,k}}{u_{1,k}}}{1-\abscal{u_{2,k}}{u_{3,k}}^2} \end{equation} and \begin{equation} \gamma_k=\frac{\scal{u_{3,k}}{u_{1,k}}- \scal{u_{3,k}}{u_{2,k}}\scal{u_{2,k}}{u_{1,k}}} {1-\abscal{u_{2,k}}{u_{3,k}}^2}. \end{equation} We note that \eqref{e:quick} yields \begin{equation} \label{e:betak} |\beta_k|\leq\frac{\abscal{u_{1,k}}{u_{2,k}}+ \abscal{u_{2,k}}{u_{3,k}}\abscal{u_{1,k}}{u_{3,k}}} {1-\abscal{u_{2,k}}{u_{3,k}}^2}\leq\frac{\sqrt{\|P_1P_2\|}+ \sqrt{\|P_2P_3\|}\,\sqrt{\|P_1P_3\|}}{1-\|P_2P_3\|} \end{equation} and, likewise, \begin{equation} \label{e:gammak} |\gamma_k|\leq\frac{\sqrt{\|P_1P_3\|}+\sqrt{\|P_2P_3\|}\, \sqrt{\|P_1P_2\|}}{1-\|P_2P_3\|}. \end{equation} Thus, we obtain \begin{align} \label{e:parseval} (\forall x\in\HH)\quad\|P_1P_{1+}x\| &=\sqrt{\sum_{k\in\ZZ}\abscal{P_{1+}x}{u_{1,k}}^2}\nonumber\\ &\leq\sqrt{\sum_{k\in\ZZ}|\scal{x}{u_{2,k}}\beta_k|^2} +\sqrt{\sum_{k\in\ZZ}|\scal{x}{u_{3,k}}\gamma_k|^2}\nonumber\\ &\leq\bigg(\sup_{k\in\ZZ}|\beta_k|+\sup_{k\in\ZZ}|\gamma_k|\bigg) \|x\|\nonumber\\ &\leq\frac{(\sqrt{\|P_1 P_2\|}+\sqrt{\|P_1P_3\|}) (1+\sqrt{\|P_2P_3\|})}{1-\|P_2P_3\|}\|x\|. \end{align} Appealing to \eqref{e:assurance}, we conclude that $\|P_1P_{1+}\|<1$. 
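As an aside, the two-sided estimate \eqref{e:quick} and the final bound \eqref{e:parseval} can be checked numerically in a finite-dimensional model (an illustrative sketch with arbitrarily chosen vectors, not part of the proof): each triple $(u_{1,k},u_{2,k},u_{3,k})$ occupies its own $3$-dimensional block, which enforces the bi-orthogonality relations \eqref{e:bi-ortho2} by construction, and the inner products are taken small enough for \eqref{e:neat} to hold.

```python
import numpy as np

n_blocks, d = 4, 3                     # four indices k; each triple lives in its own 3-dim block
N = n_blocks * d

def family(v):
    """Copy a block-local unit vector into every block: orthonormal columns of R^N."""
    U = np.zeros((N, n_blocks))
    for k in range(n_blocks):
        U[k * d:(k + 1) * d, k] = v / np.linalg.norm(v)
    return U

U1 = family(np.array([1.0, 0.0, 0.0]))
U2 = family(np.array([0.05, 1.0, 0.0]))
U3 = family(np.array([0.05, 0.03, 1.0]))
P1, P2, P3 = U1 @ U1.T, U2 @ U2.T, U3 @ U3.T

def op_norm(A):                        # operator (spectral) norm
    return np.linalg.norm(A, 2)

def sup_ip(Ua, Ub):                    # sup_k |<u_{a,k}, u_{b,k}>| (cross terms vanish by construction)
    return np.max(np.abs(np.sum(Ua * Ub, axis=0)))

# the two-sided estimate (e:quick)
for Ua, Ub, Pa, Pb in [(U1, U2, P1, P2), (U1, U3, P1, P3), (U2, U3, P2, P3)]:
    s, nrm = sup_ip(Ua, Ub), op_norm(Pa @ Pb)
    assert s**2 <= nrm + 1e-10 and nrm <= s + 1e-10

# projector onto U_{1+} = closure of U2 + U3, via an orthonormal basis of the column space
Q, _ = np.linalg.qr(np.hstack([U2, U3]))
lhs = op_norm(P1 @ (Q @ Q.T))
a12, a13, a23 = op_norm(P1 @ P2), op_norm(P1 @ P3), op_norm(P2 @ P3)
rhs = (np.sqrt(a12) + np.sqrt(a13)) * (1 + np.sqrt(a23)) / (1 - a23)
assert lhs <= rhs + 1e-10 and lhs < 1.0        # the bound (e:parseval)
print(f"||P1 P1+|| = {lhs:.3f} <= {rhs:.3f}")
```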
\end{proof} \begin{remark} A concrete example of subspaces satisfying the hypotheses of Proposition~\ref{p:example} can be constructed from an orthonormal wavelet basis. Take $\psi\in L^2(\RR)$ such that the functions $(\psi_{k,l})_{(k,l)\in\ZZ^2}$, where $\psi_{k,l}\colon t\mapsto 2^{k/2}\psi(2^{k}t-l)$, form an orthonormal basis of $L^2(\RR)$ \cite{Daub92}. For every $i\in\{1,2,3\}$ let, for every $k\in\ZZ$, $(\eta_{i,k,l})_{l\in\ZZ}$ be a sequence in $\ell^2(\ZZ)$ such that $\sum_{l\in\ZZ}|\eta_{i,k,l}|^2=1$ and define \begin{equation} U_i=\spc\{u_{i,k}\}_{k\in\ZZ},\quad\text{where}\quad (\forall k\in\ZZ)\quad u_{i,k}=\sum_{l\in\ZZ}\eta_{i,k,l}\psi_{k,l}. \end{equation} Then $(u_{1,k})_{k\in\ZZ}$, $(u_{2,k})_{k\in\ZZ}$, and $(u_{3,k})_{k\in\ZZ}$ are orthonormal sequences in $L^2(\RR)$ that satisfy \eqref{e:bi-ortho2}. Moreover, since, for every $i$ and $j$ in $\{1,2,3\}$ and every $k\in\ZZ$, $\scal{u_{i,k}}{u_{j,k}}= \sum_{l\in\ZZ}\eta_{i,k,l}\overline{\eta_{j,k,l}}$, the main hypothesis \eqref{e:neat} is equivalent to \begin{equation} \sup_{k\in\ZZ}\sqrt{\bigg|\sum_{l\in\ZZ}\eta_{1,k,l} \overline{\eta_{2,k,l}}\bigg|}+ \sup_{k\in\ZZ}\sqrt{\bigg|\sum_{l\in\ZZ} \eta_{2,k,l}\overline{\eta_{3,k,l}}\bigg|}+ \sup_{k\in\ZZ}\sqrt{\bigg|\sum_{l\in\ZZ}\eta_{1,k,l} \overline{\eta_{3,k,l}}\bigg|}<1. \end{equation} \end{remark} \subsection{Harmonic analysis and signal recovery} Many problems arising in areas such as harmonic analysis \cite{Amre77,Bene85,Foll97,Havi94,Jami07,Mele96}, signal theory \cite{Byrn05,Papo75,Youl78}, image processing \cite{Proc93,Star87}, and optics \cite{Mont82,Star81} involve imposing known values of an ideal function in the time (or spatial) and Fourier domains. In this section, we describe applications of Theorem~\ref{t:palawan-mai2008} to such problems. The following lemma will be required. \begin{lemma} \label{l:3} Let $U$, $V$, and $W$ be closed vector subspaces of $\HH$ such that $W\subset V$. Then $\|P_UP_W\|\leq\|P_UP_V\|$.
\end{lemma} \begin{proof} Set $B=\menge{x\in\HH}{\|x\|\leq 1}$. Then $P_W(B)\subset B$. In turn, since $W\subset V$, $P_W(B)=P_V(P_W(B))\subset P_V(B)$ and hence $P_U(P_W(B))\subset P_U(P_V(B))$. Consequently, $\|P_UP_W\|=\sup \menge{\|P_U P_W x \|}{x\in B} \leq\sup\menge{\|P_U P_V x\|}{x\in B} =\|P_UP_V\|$. \end{proof} The scenario of the next proposition has a simple interpretation in signal recovery \cite{Proc93,Star87}: a square-integrable signal on $\RR^N$ has known values over certain regions of the spatial and frequency domains and, in addition, $m-2$ scalar linear measurements of it are available. \begin{proposition} \label{p:77} Let $A$ and $B$ be measurable subsets of $\RR^N$ of finite Lebesgue measure, and suppose that $m\geq 3$. Moreover, let $(v_i)_{1\leq i\leq m-2}$ be functions in $L^2(\RR^N)$ with disjoint supports $(C_i)_{1\leq i\leq m-2}$ such that \begin{equation} \label{e:77} (\forall i\in\{1,\ldots,m-2 \})\quad\mu(C_i)<\pinf\quad\text{and}\quad \mu(C_i\cap\complement A)>0. \end{equation} Then, for all functions $v_{m-1}$ and $v_m$ in $L^2(\RR^N)$ and every $(\eta_i)_{1\leq i\leq m-2}\in\RR^{m-2}$, there exists a function $x\in L^2(\RR^N)$ such that \begin{equation} \label{e:rainy77} (\forall i\in\{1,\ldots,m-2 \})\quad \int_{C_i}x(t)\overline{v_i(t)}dt=\eta_i,\;\; x|_A=v_{m-1}|_A,\;\;\text{and}\;\; \widehat{x}|_{B}=\widehat{v}_m|_{B}. \end{equation} \end{proposition} \begin{proof} We first observe that the problem under consideration is a special case of \eqref{e:puertoprincessa-mai2008-1} with $\HH=L^2(\RR^N)$, \begin{equation} \label{e:kokuyo1} \begin{cases} U_i=\spa\{v_i\}&\text{and}\quad u_i=\eta_iv_i/\|v_i\|^2,\quad 1\leq i\leq m-2; \\ U_{m-1}=\menge{x\in\HH}{x1_{\complement A}=0} &\text{and}\quad u_{m-1}=v_{m-1}1_A; \\ U_m=\menge{x\in\HH}{\widehat{x}1_{\complement B}=0} &\text{and}\quad\widehat{u_m}=\widehat{v_m}1_B. \end{cases} \end{equation} It follows from Lemma~\ref{l:2} that $\|P_{m-1}P_m\|<1$.
Hence, in view of Corollary~\ref{c:paris-octobre2008}, it suffices to show that the closed vector subspaces $(U_i)_{1\leq i\leq m}$ are linearly independent. Since the supports $(C_i)_{1\leq i\leq m-2}$ are disjoint, the subspaces $(U_i)_{1\leq i\leq m-2}$ are independent. Therefore, if we set $U=\sum_{i=1}^{m-2}U_i$, it is enough to show that $U$, $U_{m-1}$, and $U_m$ are independent. To this end, take $(y,y_{m-1},y_m)\in U\times U_{m-1}\times U_m$ such that \begin{equation} \label{e:anr05} y+y_{m-1}+y_m=0, \end{equation} and set $C=\bigcup_{i=1}^{m-2}C_i$. We have $(y+y_{m-1})1_{\complement(A\cup C)}=0$, $\mu(A\cup C)<\pinf$, $\widehat{y}_m1_{\complement B}=0$, and $\mu(B)<\pinf$. Hence, it follows from \eqref{e:anr05} and Lemma~\ref{l:1} that \begin{equation} \label{e:kokuyo2} y+y_{m-1}=0\;\;\text{and}\;\;y_m=0. \end{equation} It remains to show that $y=0$. Since $y\in U$, there exist $(\alpha_i)_{1\leq i\leq m-2}\in\CC^{m-2}$ such that $y=\sum_{i=1}^{m-2}\alpha_iv_i$. However, since the supports $(C_i)_{1\leq i\leq m-2}$ are disjoint, \begin{equation} \label{e:kokuyo5} \|y\|^2=\bigg\|\sum_{i=1}^{m-2}\alpha_iv_i\bigg\|^2 =\sum_{i=1}^{m-2}|\alpha_i|^2\|v_i\|^2. \end{equation} On the other hand, \eqref{e:77} implies that, for every $i\in\{1,\ldots,m-2\}$, \begin{equation} \label{e:kokuyo4} \|v_i\|^2=\int_{C_i\cap A}|v_i(t)|^2dt +\int_{C_i\cap\complement A}|v_i(t)|^2dt >\int_{C_i\cap A}|v_i(t)|^2dt=\|v_i1_A\|^2. \end{equation} At the same time, we derive from \eqref{e:kokuyo2} that $y=-y_{m-1}\in U_{m-1}$ and therefore from \eqref{e:kokuyo1} that $y1_{\complement A}=0$. Consequently, \eqref{e:kokuyo5} yields \begin{equation} \label{e:kokuyo3} \sum_{i=1}^{m-2}|\alpha_i|^2\|v_i\|^2= \|y\|^2=\|y1_A\|^2=\bigg\|\sum_{i=1}^{m-2}\alpha_iv_i1_A\bigg\|^2 =\sum_{i=1}^{m-2}|\alpha_i|^2\|v_i1_A\|^2. \end{equation} In view of \eqref{e:kokuyo4}, we conclude that $(\forall i\in\{1,\ldots,m-2\})$ $\alpha_i=0$. 
\end{proof} \begin{remark} \label{r:3} In connection with Proposition~\ref{p:77}, let us make a few comments on the following classical problem: given measurable subsets $A$ and $B$ of $\RR^N$ such that $\mu(A)>0$ and $\mu(B)>0$, and functions $a$ and $b$ in $L^2(\RR^N)$, is there a function $x\in L^2(\RR^N)$ such that \begin{equation} \label{e:74-7} x|_A=a|_A\quad\text{and}\quad\widehat{x}|_{B}=b|_{B}\;? \end{equation} To answer this question, let us set \begin{equation} \label{e:kokuyo7} \begin{cases} U_1=\menge{x\in L^2(\RR^N)}{x1_{\complement A}=0} &\text{and}\quad u_1=a1_A,\\ U_2=\menge{x\in L^2(\RR^N)}{\widehat{x}1_{\complement B}=0} &\text{and}\quad\widehat{u_2}=b1_B. \end{cases} \end{equation} Thus, the problem reduces to an instance of \eqref{e:puertoprincessa-mai2008-1} in which $m=2$. \begin{itemize} \item If $\mu(A)<\pinf$ and $\mu(B)<\pinf$, it follows from Lemma~\ref{l:2}, \eqref{e:kokuyo7}, and the implication \ref{c:palawan-mai2008viii}$\Rightarrow$\ref{c:palawan-mai2008i} in Corollary~\ref{c:palawan-mai2008} that the answer is affirmative (see also \cite[Corollary~5.B~p.~100]{Havi94}). \item If $\mu(\complement A)<\pinf$ and $\mu(\complement B)<\pinf$, it follows from \eqref{e:kokuyo7}, Proposition~\ref{p:23}, and Lemma~\ref{l:1} (applied to $U_1^\bot$ and $U_2^\bot$) that \eqref{e:74-7} has at most one solution. \item Suppose that $A$ is bounded and that $\mu(\complement B)>0$, and let $\varepsilon\in\RPP$. Then there exists $x\in L^2(\RR^N)$ such that \begin{equation} \label{e:desired-ineq} \int_A|x(t)-a(t)|^2dt+ \int_B|\widehat{x}(\xi)-b(\xi)|^2d\xi <\varepsilon. \end{equation} To show this, we first observe that $U_1\cap U_2=\{0\}$. Indeed, let $y\in U_1\cap U_2$. Then $\widehat{y}$ can be extended to an entire function on $\CC^N$ (see \cite[Theorem~7.23]{Rudi91} or \cite[Theorem~III.4.9]{Stei71}) and, at the same time, $\widehat{y}1_{\complement B}=0$, which implies that $\widehat{y}=0$ \cite[Theorem~I.3.7]{Rang86}. 
Hence, applying Proposition~\ref{p:denseness} with $m=2$, we obtain the existence of $x\in L^2(\RR^N)$ such that \begin{equation} \|P_1x-u_1\|^2+\|P_2x-u_2\|^2<\varepsilon, \end{equation} which yields \eqref{e:desired-ineq}. In the case when $\complement B$ is a ball centered at the origin and $b=0$, \eqref{e:desired-ineq} provides the following approximate band-limited extrapolation result: there exists $x\in L^2(\RR^N)$ which approximates $a$ on $A$ and such that $\widehat{x}$ nearly vanishes for high frequencies. \end{itemize} \end{remark} The following example describes a situation in which the IBAP fails. \begin{example} This example is from \cite{Mont82}. Let $C=[-1/2,1/2] \times [-1/2,1/2]$ and set $\HH=L^2(C)$. Moreover, define \begin{equation} (\forall (m,n)\in\ZZ^2)\quad\widehat{x}(m,n)= \int_Cx(s,t)\exp(-i2\pi(ms+nt))dsdt, \end{equation} set $A=[0,1/2]\times [0,1/2]$, and set $B=F\cup\menge{(m,0)}{m\in\ZZ}$, where $F$ is a nonempty finite subset of $\ZZ\times\ZZ$. The problem amounts to finding functions with prescribed best approximations from the closed vector subspaces \begin{equation} \begin{cases} U_1=\menge{x\in\HH}{x1_{\complement A}=0}\\ U_2=\menge{x\in\HH}{x(s,t)=x(-s,t)\;\text{a.e.\ on}\;C}\\ U_3=\menge{x\in\HH}{\widehat{x}1_{B}=0}. \end{cases} \end{equation} Since $U^{\bot}_1\cap U^{\bot}_2\cap U^{\bot}_3=\{0\}$ \cite{Mont82}, it follows from Proposition~\ref{p:23} that the problem has at most one solution. However, the subspaces are not independent. Indeed, given a finite subset $I$ of $\ZZ\smallsetminus\{0\}$ such that $(0,n)\notin F$ whenever $n\in I$ and complex numbers $(c_n)_{n\in I}$, the trigonometric polynomial \begin{equation} (s,t)\mapsto\sum_{n\in I}c_n e^{i2\pi nt} \end{equation} is in $U_2\cap U_3$. Therefore, in the light of Corollary~\ref{c:kimono-ken}, the IBAP does not hold. \end{example}
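The non-independence in the last example can also be observed numerically on a uniform grid (an illustrative sketch; the grid size $K$, the index set $I=\{2\}$, and the set $F$ are arbitrary choices):

```python
import numpy as np

K = 16
g = -0.5 + np.arange(K) / K                       # uniform grid on C = [-1/2, 1/2)^2
S, T = np.meshgrid(g, g, indexing="ij")
x = np.exp(2j * np.pi * 2 * T)                    # trigonometric polynomial with I = {2}, c_2 = 1

# x(s,t) = x(-s,t): x is s-independent, so it lies in U_2
assert np.allclose(x, x[::-1, :])

def coeff(m, n):
    """Fourier coefficient \\hat{x}(m,n); exact on this grid for low frequencies."""
    return np.mean(x * np.exp(-2j * np.pi * (m * S + n * T)))

F = [(1, 1), (-1, 2), (3, 3)]                     # a choice of F with (0,n) not in F for n in I
axis = [(m, 0) for m in range(-4, 5)]             # part of {(m,0) : m in Z} within range
assert all(abs(coeff(m, n)) < 1e-12 for m, n in F + axis)   # \hat{x} vanishes on B, so x is in U_3
print("nonzero element of U_2 and U_3 found: the subspaces are not independent")
```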
{"config": "arxiv", "file": "0905.3520.tex"}
TITLE: Night Louder Than The Day QUESTION [0 upvotes]: Have you ever felt that it is quieter at night than during the day? I know that the reduction in people's activity makes the night quiet, but is there something else that in a way amplifies sound waves at night? REPLY [1 votes]: The difference between day and night can be pretty big, up to tens of decibels. But most of this is likely due to different activity levels - less traffic, industry, voices/animal sounds etc. Outdoor sound propagation depends on a number of factors, but their impact in day and night will be variable. In particular, there is a temperature dependency in how fast sound intensities are attenuated in air: as it gets colder, attenuation goes up somewhat. So one might think that as the day cools off, the range of sounds decreases. However, there are complications here like humidity changes (which dampen some frequencies but not others). Temperature also affects the speed of sound, making temperature gradients refract sounds in the direction of lower sound velocity (that is, lower temperature). At night a temperature inversion is common, with the ground and the low air colder than the upper air, and this tends to make sound travel longer distances since it is focused along the ground rather than radiated upward. A further issue is wind, which can amplify sound in some directions, add turbulent damping, and of course cause noise. In short, there are various attenuation effects that could play a role. But I suspect the main cause is simply fewer noise sources.
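To put a rough number on the temperature effect mentioned in the answer, here is a small sketch using the common ideal-gas approximation for the speed of sound in dry air (the day/night temperatures are made up for illustration):

```python
import math

def sound_speed(temp_c):
    # ideal-gas approximation for dry air: c ~ 331.3 * sqrt(1 + T/273.15) m/s, T in Celsius
    return 331.3 * math.sqrt(1.0 + temp_c / 273.15)

c_day, c_night = sound_speed(25.0), sound_speed(5.0)   # illustrative temperatures
print(f"speed of sound: {c_day:.1f} m/s at 25 C, {c_night:.1f} m/s at 5 C")
# Rays bend toward the slower (colder) air. With a nighttime inversion, the coldest,
# slowest air sits near the ground, so sound refracts back down and carries farther.
assert c_night < c_day
```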
{"set_name": "stack_exchange", "score": 0, "question_id": 410940}
TITLE: Check whether the following is a subgroup of S4 QUESTION [1 upvotes]: Let $G=S_4$ and $U=\{\sigma \in S_4 \mid \sigma^2=(1)\}$. Prove or disprove that $U$ is a subgroup of $G$. I tried this: Let $a,b \in U$. Because $(ab)^2 =(1):$ $$ab=(ab)^{-1}=abab=ab(ab)^{-1} =(1)$$ Is my proof totally wrong, or can it be saved somehow? If not, please give me some tips on how to do it correctly. I want this question to be solved by the subgroup criteria. REPLY [3 votes]: Quick way: $((12) (13))^{2} \ne 1$, so it's not a subgroup. But why did I choose those two elements? Well, if $a^{2} = b^{2} = 1$, then $(a b)^{2} = 1$ implies $1 = a b a b = a b a^{-1} b^{-1}$, whence $ab = ba$. So I have chosen $a = (12)$ and $b = (13)$ because $ab \ne ba$.
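The counterexample (and the failure of closure) can be checked by brute force; a quick sketch, with permutations acting on $\{0,1,2,3\}$ rather than $\{1,2,3,4\}$:

```python
from itertools import permutations

def compose(p, q):                        # (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(4))

identity = (0, 1, 2, 3)
U = [p for p in permutations(range(4)) if compose(p, p) == identity]
print(len(U))                             # 10 involutions in S4 (identity included)

a = (1, 0, 2, 3)                          # the transposition (1 2)
b = (2, 1, 0, 3)                          # the transposition (1 3)
ab = compose(a, b)                        # a 3-cycle, so (ab)^2 != identity
assert a in U and b in U and compose(ab, ab) != identity   # U is not closed
```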
{"set_name": "stack_exchange", "score": 1, "question_id": 2026371}
\begin{document} \title{Generalized Rough Polyharmonic Splines for Multiscale PDEs with Rough Coefficients} \author[Liu X. et~al.]{Xinliang Liu\affil{1}, Lei Zhang\affil{1}\comma\corrauth and Shengxin Zhu\affil{2}\comma\affil{3}$^*$ } \address{\affilnum{1}\ Institute of Natural Sciences, School of Mathematical Sciences, and MOE-LSC, Shanghai Jiao Tong University. \\ \affilnum{2}\ Research Center for Mathematics, Beijing Normal University, Zhuhai 519087, \\ \affilnum{3}\ Division of Science and Technology, BNU-HKBU United International College, Zhuhai 519087 } \emails{{\tt lzhang2012@sjtu.edu.cn} (L.~Zhang), {\tt liuxinliang@sjtu.edu.cn} (X.~Liu), {\tt Shengxin.Zhu@bnu.edu.cn}, {\tt shengxinzhu@uic.edu.cn} (S.~Zhu)} \date{\today} \begin{abstract} In this paper, we demonstrate the construction of generalized Rough Polyharmonic Splines (GRPS) within the Bayesian framework, in particular, for multiscale PDEs with rough coefficients. The optimal coarse basis can be derived automatically by the randomization of the original PDEs with a proper prior distribution and the conditional expectation given partial information on edge or derivative measurements. We prove the (quasi)-optimal localization and approximation properties of the obtained bases, and justify the theoretical results with numerical experiments. \end{abstract} \keywords{generalized Rough Polyharmonic Splines, multiscale elliptic equation, Bayesian numerical homogenization, edge measurement, derivative measurement. } \maketitle \section{Introduction} \def\L{\mathcal{L}} \def\B{\mathcal{B}} \def\H{\mathcal{H}} Problems with a wide range of coupled temporal and spatial scales are ubiquitous in many phenomena and processes of materials science and biology. Multiscale modeling and simulation is essential in underpinning the discovery and synthesis of new materials and chemicals with novel functionalities in key areas such as energy, information technology and bio-medicine.
There is a large body of existing work concerning the design of novel numerical methods for multiscale problems and the mathematics to foresee and assess their performance in engineering and scientific applications, such as homogenization \cite{papanicolau1978asymptotic,jikov2012homogenization}, numerical homogenization \cite{dur91,ab05,weh02}, heterogeneous multi-scale methods \cite{ee03,abdulle2014analysis, ming2005analysis,li2012efficient}, multi-scale network approximations \cite{berlyand2013introduction}, multi-scale finite element methods \cite{Arbogast_two_scale_04, eh09, Review,chen2015mixed}, variational multi-scale methods \cite{hughes98, bazilevs2007variational}, flux norm homogenization \cite{berlyand2010flux,owhadi2008homogenization}, rough polyharmonic splines (RPS) \cite{OwhZhaBer:2014}, generalized multi-scale finite element methods \cite{egh12, chung2014adaptiveDG, chung2015residual}, localized orthogonal decomposition \cite{MalPet:2014,Henning2014,Henning2015,Peterseim2017a}, etc. Fundamental questions for numerical homogenization are: how to approximate the high dimensional solution space by a low dimensional approximation space with optimal error control, and furthermore, how to construct the approximation space efficiently, for example, whether its basis can be localized on a coarse patch. Surprisingly, those questions have deep connections with Bayesian inference, kernel learning and probabilistic numerics \cite{Owhadi2015,OwhadiMultigrid:2017,owhadi2020kernel,owhadi2019kernel}. In this paper, we generalize the so-called \textit{Rough Polyharmonic Splines} (RPS) \cite{OwhZhaBer:2014} within the Bayesian framework \cite{Owhadi2015} for the following integro-differential equation \begin{equation} \begin{cases} \mathcal{L} u &= g, \quad \text{on } \Omega, \\ \mathcal{B} u &=0, \quad \text{on } \partial \Omega.
\end{cases} \label{eqn:ellp} \end{equation} where $\L$ and $\B$ are integro-differential operators on $\Omega$ and $\partial\Omega$, such that $(\L, \B): \H(\Omega) \to \H_{\L}(\Omega) \times \H_{\B}(\partial \Omega)$, where $\H(\Omega), \H_{\L}(\Omega)\text{ and }\H_{\B}(\partial \Omega)$ are Hilbert spaces of generalized functions on $\Omega$ and $\partial \Omega$, such that $\H(\Omega) \subset L^2(\Omega)\subset \H_{\L}(\Omega)$. A prototypical example is the second order divergence form elliptic equation with rough coefficients, such that $\mathcal{L}=- \div (\kappa(x) \nabla \cdot) $, $\B = \mathrm{Id}$, and $\Omega$ is a simply connected domain with piecewise smooth boundary $\partial\Omega$. The \textit{rough coefficient}, $\kappa(x) \in L^{\infty}(\Omega)$, represents multiscale media with high contrast and fast oscillations. We only require $\kappa$ to be uniformly elliptic on $\Omega$, i.e., that $\kappa$ is uniformly bounded from above and below by two strictly positive constants, denoted by $\kappa_{min}$, $\kappa_{max}$. For this example, we have $\H(\Omega)=H^1_0(\Omega)$, and $\H_{\L}(\Omega)=H^{-1}(\Omega)$. It is well-known that for an arbitrary $\kappa$, solving the elliptic equation with linear or polynomial finite element methods can be arbitrarily slow \cite{BO00}. To tackle this challenge, recent decades have witnessed the fast development of multiscale finite element methods \cite{HouWu:1997, EfeHou:2009b} and numerical homogenization approaches \cite{OwhZha:2007, MalPet:2014, OwhZhaBer:2014}. One essential component of these methods is the construction of a proper coarse space with desired approximation and localization properties. The Bayesian homogenization approach \cite{Owhadi2015} provides a unified framework for such constructions \cite{OwhZhaBer:2014}. Under the Bayesian framework, the \textit{generalized Rough Polyharmonic Splines} (GRPS) space can be identified by the choice of random noise and measurement function.
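A classical one-dimensional computation illustrates why a rough $\kappa$ defeats naive coarse graining: for a fast-oscillating coefficient, the effective (homogenized) coefficient is the harmonic mean of $\kappa$, not its arithmetic mean (an illustrative sketch for a model coefficient; not part of the construction below):

```python
import numpy as np

t = (np.arange(200000) + 0.5) / 200000            # midpoints of one period of the oscillation
kappa = 1.0 / (2.0 + np.sin(2.0 * np.pi * t))     # a smooth, uniformly elliptic model coefficient

arith = np.mean(kappa)                            # naive coarse-grained coefficient
harm = 1.0 / np.mean(1.0 / kappa)                 # 1D homogenized coefficient (harmonic mean)

print(f"arithmetic mean = {arith:.4f}, harmonic mean = {harm:.4f}")
assert abs(harm - 0.5) < 1e-8                     # 1 / mean(2 + sin) = 1/2
assert abs(arith - 1.0 / np.sqrt(3.0)) < 1e-6     # closed form of the period average
assert arith > harm                               # the naive average overestimates
```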
Point and volume measurements have been used in \cite{OwhZhaBer:2014} and \cite{OwhadiMultigrid:2017}, respectively. In this paper, we construct two new GRPS spaces based on edge measurements or derivative measurements, and provide rigorous proofs of their approximation and localization properties. It is sometimes natural to use edge-based measurements due to the presence of elongated structures such as cracks and channels in heterogeneous media. The derivative-based GRPS can be seen as a higher order method. We note that our method is different from the so-called edge multiscale finite elements in \cite{fu2019edge}, which forms the multiscale finite element space by solving local Steklov eigenvalue problems and computing a local harmonic function with constant flux (the latter also appears in the mixed multiscale finite element method, for example \cite{chen2003mixed}). The paper is organized as follows: in Section \ref{sec:formulation}, we first introduce the Bayesian homogenization framework and the variational formulation of the numerical homogenization (coarse) basis, then we present the details for the construction of such a basis. In Section \ref{sec:analysis}, we provide the rigorous error analysis of the corresponding numerical homogenization method. Numerical examples are presented in Section \ref{sec:numerics} to validate the method. We conclude the paper in Section \ref{sec:conclusion}. \paragraph{Notations} The symbol $C$ (or $c$) denotes a generic positive constant that may change from one line of an estimate to the next. The dependence of $C$ will be clear from the context or stated explicitly. We use standard notations $L^2(\Omega)$, $H^1(\Omega)$ for Lebesgue and Sobolev spaces, and $H_{0}^{1}(\Omega):=\left\{u \in H^{1}(\Omega): u=0 \text { on } \partial\Omega\right\}$. For any measurable subset $\omega\subset\Omega$, the $d$ or $(d-1)$ dimensional Lebesgue measure of $\omega $ is denoted by $|\omega|$ and the $L^2$ norm is denoted by $\|\cdot\|_{L^2(\omega)}$.
We denote by $\#$ the cardinality of a set. \section{Formulation} \label{sec:formulation} \subsection{Bayesian homogenization framework} \label{sec:formulation:Bayesian} In the Bayesian homogenization framework \cite{Owhadi2015}, the coarse space can be identified by a Bayesian inference problem (in particular, Gaussian process regression), through the randomization of the original deterministic problem \eqref{eqn:ellp}, \begin{equation} \begin{cases} \mathcal{L} v(x) = \zeta(x), \quad &\text{on } \Omega, \\ \mathcal{B}v =0, \quad &\text{on } \partial \Omega. \end{cases} \label{eqn:ranellp} \end{equation} where $\zeta(x)$ is a centered Gaussian process on $\Omega$ with covariance $\Lambda(x,y)$. The solution $v(x)$ is also a centered Gaussian process on $\Omega$ with covariance \begin{equation} \Gamma(x,y): = \bE[v(x)v(y)]= \int_{\Omega\times\Omega}G(x,z)\Lambda(z,z')G(y,z')dzdz', \label{eqn:cov} \end{equation} where $G(x,y)$ is the Green's function such that $\mathcal{L}G(x,y) = \delta(x-y)$ on $\Omega$, $\mathcal{B}G = 0$ on $ \partial\Omega$. Given an index set $\I$ and a set of linearly independent \textit{measurement functions} $\Phi:=\{\phi_i(x)\}_{i\in \I}$, such that the integrals $\int_{\Omega\times\Omega} \phi_i(x) \Gamma(x,y) \phi_j(y)\dx\dy$ are well defined, we define the measurements $M:=(m_1, \dots, m_{N})$, where $m_i:=\int_{\Omega} v(x)\phi_i(x)\dx$ for $i\in \I$ and $N=\#\I$. For example, when $\phi_i(x)=\delta(x-x_i)$, $m_i$ is the point value at $x_i$ for any continuous $v$.
Note that $M$ is a centered Gaussian vector with covariance matrix $\Theta$, where $$ \Theta_{i,j} : = \int_{\Omega\times\Omega} \phi_i(x) \Gamma(x,y) \phi_j(y) \dx \dy.$$ The optimal approximation space can be identified through the conditional expectation of $v(x)$ with respect to the measurements $M$, \begin{equation} \mathbb{E}[v|M] = \sum_{i\in \I} m_i \psi_i(x), \end{equation} where $\psi_i$ has the following explicit representation formula from the conditional expectation of a Gaussian process, \begin{equation} \psi_i(x): = \sum_{j\in \I} \Theta_{i,j}^{-1}\int_\Omega \Gamma(x,y) \phi_j(y) \dy. \label{eqn:psi_baye} \end{equation} $ \{\psi_i\}_{i\in\I}$ can be regarded as a set of posterior basis functions with respect to the noise $\zeta$ and the measurement functions $\Phi$. $\Psi:=\mathrm{span} \{\psi_i\}_{i\in\I}$ can be used as a coarse space to approximate the solution of the deterministic problem \eqref{eqn:ellp}. We note that the above formulation can be seen as a prototypical example for the emerging field of probabilistic numerics, and we refer interested readers to \cite{Owhadi2015,OwhadiMultigrid:2017,owhadi2019operator,cockayne2019bayesian} for more details. It is difficult to use \eqref{eqn:psi_baye} for numerical computation since it involves convolutions over the whole domain $\Omega$. We will give a variational formulation for $\psi_i$ in the next Section \ref{sec:formulation:variation}. Before proceeding, we first discuss the choice of the noise $\zeta$ and the measurement functions $\Phi$. \paragraph{Choice of the noise $\zeta$} For the centered Gaussian field $\zeta$, it reduces to the choice of the covariance function $\Lambda(x,y)$, which in turn determines the regularity of the solution space. There are two natural possibilities: \begin{enumerate} \item (white noise) Taking $\zeta(x)$ as the white noise, i.e. $\Lambda(x,y)= \delta(x-y)$. This is the choice for the RPS formulation in \cite{OwhZhaBer:2014,Owhadi2015}.
\item ($\L$ noise) Taking the covariance operator of $\zeta$ as $\L$, namely, for any $g$ such that $\int g\L g\dx <\infty$, $\int g(x)\zeta(x) \dx$ is a Gaussian random variable with mean 0 and variance $\int g\L g\dx$. This is the choice for the Gamblet formulation in \cite{OwhadiMultigrid:2017}. \end{enumerate} \paragraph{Choice of $\Phi$} The RPS formulation uses point value measurement functions $\phi_i(x) = \delta(x-x_i)$, while the Gamblet formulation takes scaled volume characteristic functions as measurement functions, and volume averages as measurements. In this paper, we take edge averages or volume-averaged first order derivatives as measurements to construct the approximation space $\Psi$. We postpone the specification of the measurements $\Phi$ until after we set up the discretization in Section \ref{sec:formulation:numericalmethod}. In the following, we refer to the basis with white noise as the RPS basis, and the basis with $\L$ noise as the GRPS basis. For instance, we name the RPS basis with point measurements the RPS-P basis (or, in short, the RPS basis), the GRPS basis with volume measurements the GRPS-V basis, and so on. \subsection{Variational formulation} \label{sec:formulation:variation} We introduce the solution space to the original problem \eqref{eqn:ellp} as \begin{equation} V:=\{ v | \mathcal{L} v \in L^2(\Omega), \mathcal{B} v =0 \text{ on } \partial \Omega \}. \end{equation} We define the bilinear form $a:V\times V\to \mathbb{R}$ as \begin{equation} a ( u, v ) = \left\{ \begin{array}{cc} \int_\Omega (\L u) (\L v) \dx, & \text{$\zeta$ is white noise },\\ \int_\Omega u\L v \dx, & \text{$\zeta$ is $\L$ noise}, \end{array} \right. \end{equation} and the norm $\|\cdot\|:=\big(a(\cdot,\cdot)\big)^\frac12$. $[u,v]:= \int_{\Omega} u v \dx$ is the scalar product. Instead of using the Bayesian representation formula \eqref{eqn:psi_baye}, we propose the following variational formulation to compute the basis.
For any $i\in \I$, \begin{equation} \begin{cases} \psi_i =\argmin\limits_{v\in V} a(v,v) \\ s.t.\ [v, \phi_j] =\delta_{i,j},\ \forall j\in \I. \end{cases} \label{eqn:psi} \end{equation} Then $\Psi:=\operatorname{span}\{\psi_i\}_{i\in \I}$ is the GRPS space. The well-posedness of \eqref{eqn:psi} and the variational property of the derived basis are shown in the following proposition. \begin{proposition}\cite[Prop 4.2]{Owhadi2015} The constrained minimization problem \eqref{eqn:psi} is strictly convex and admits a unique minimizer $\psi_i\in V$, which also satisfies the Bayesian formula \eqref{eqn:psi_baye}. Furthermore, $\psi_i$ fulfils the variational property in the sense that, for any $v$ such that $[v, \phi_j] =0,\ \forall j\in \I$, we have $a(\psi_i,v)=0$. \label{prp:var} \end{proposition} \begin{remark}\label{prop:deriv} The constrained minimization problem \eqref{eqn:psi} is equivalent to the following saddle point problem \cite{Owhadi2015}, namely, finding $\psi_i \in V$ and $\lambda \in \Phi $ such that \begin{equation} \begin{cases} a(\psi_i,v)+[ \lambda, v ] = 0, \, \forall v \in V,\\ [\mu, \psi_i] =f_i(\mu), \, \forall \mu \in \Phi, \end{cases} \end{equation} where $f_i$ is a linear functional on $\Phi$ such that $f_i(\phi_j) =\delta_{i,j},\, \forall j\in\I$. \end{remark} \subsection{Numerical Method} \label{sec:formulation:numericalmethod} In this section, we present the discretization of \eqref{eqn:ellp} using finite element methods. We focus on the second order elliptic operator $\mathcal{L}=- \div (\kappa(x) \nabla \cdot)$ with a rough coefficient $\kappa(x)$, and the Dirichlet boundary condition such that $\mathcal{B} = Id$. Let $\Omega\subset\mathbb{R}^{d}$ ($d\geq 2$) be an open, bounded, and connected polyhedral domain with a Lipschitz boundary $\partial \Omega$.
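After discretization, the saddle point problem of Remark \ref{prop:deriv} reduces to one symmetric indefinite linear solve per basis function. The following NumPy sketch illustrates this with a generic symmetric positive definite matrix $A$ standing in for the stiffness matrix of $a(\cdot,\cdot)$ and a random measurement matrix $M$ with entries $M_{jk}=[\phi_j, v_k]$; all names are hypothetical, and this is not the implementation used in the paper.

```python
import numpy as np

def grps_basis(A, M, i):
    """Solve the discrete saddle point system for the i-th basis function:
    A psi + M^T lam = 0  (a(psi_i, v) + [lambda, v] = 0 for all v),
    M psi = e_i          ([phi_j, psi_i] = delta_{ij}).
    Returns the coefficient vector of psi_i in the fine-scale basis."""
    n, N = A.shape[0], M.shape[0]
    K = np.block([[A, M.T], [M, np.zeros((N, N))]])
    rhs = np.zeros(n + N)
    rhs[n + i] = 1.0                 # right-hand side e_i for the constraints
    return np.linalg.solve(K, rhs)[:n]

# toy example: n fine degrees of freedom, N measurements
rng = np.random.default_rng(0)
n, N = 12, 3
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)          # SPD stand-in for the stiffness matrix
M = rng.standard_normal((N, n))
psi = np.column_stack([grps_basis(A, M, i) for i in range(N)])
print(np.round(M @ psi, 8))          # biorthogonality: approximately identity
```

The first assertion below is the constraint $[\phi_j,\psi_i]=\delta_{i,j}$; the second checks the variational property of Proposition \ref{prp:var}, namely that each $\psi_i$ is $a$-orthogonal to the discrete kernel of the measurements.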
Let the coefficient $\kappa(x)\in (L^\infty)^{d\times d}$, $0<\kappa_{min}:= \inf_{x\in\Omega}\lambda_{\min} (\kappa(x))$ and $ \sup_{x\in\Omega} \lambda_{\max} (\kappa(x))=:\kappa_{max} < \infty$. The variational problem corresponding to \eqref{eqn:ellp} is \begin{equation} a(u,v)=[g,v],\ \forall v\in H_0^1(\Omega), \label{eqn:varellp} \end{equation} where $a ( u, v ):= \int_{\Omega} u \mathcal{L} v \dx =\int_{\Omega}\kappa(x)\nabla u \cdot \nabla v \dx$. \def \TH{\mathcal{T}_{H}} \def \Th{\mathcal{T}_{h}} \def \EH{\mathcal{E}_{H}} \def \NH{\mathcal{N}_{H}} Let $\TH$ be a coarse simplicial subdivision of $\Omega$, where $H:=\max_{\tau \in\TH}H_{\tau}$ is the coarse mesh size, and $H_{\tau} := \mathrm{diam}(\tau)$. We assume that $\TH$ is shape regular in the sense that $\max_{\tau\in\TH} \frac{H_{\tau}}{\rho_{\tau}} \leq \gamma$ for a positive constant $\gamma>0$, where $\rho_{\tau}$ is the radius of the inscribed circle in $\tau$. We denote by $\NH$ and $\EH$ the sets of all vertices and of all $(d-1)$-dimensional faces (or edges) in $\TH$, respectively. A fine mesh $\Th$ with mesh size $h = 2^{-J}H$ is obtained by uniformly subdividing $\TH$ $J$ times. See Figure \ref{fig:mesh} for an illustration of the coarse and fine mesh over a square domain. The finite element space $V_h$ contains continuous piecewise linear functions with respect to $\Th$ which vanish on the boundary $\partial \Omega$. \begin{figure}[H] \centering \subfigure[Coarse mesh, $N_c=2$]{ \includegraphics[width=0.3\textwidth]{Nc2coarse}} \qquad \subfigure[Fine mesh, $N_c=2$, $J=2$]{ \includegraphics[width=0.3\textwidth]{J3fine}} \centering \caption{Coarse and fine meshes of the unit square: The regular coarse mesh $\mathcal{T}_H$ is obtained by first subdividing $\Omega$ uniformly into $N_c\times N_c$ squares, then partitioning each square into two triangles along the $(1,1)$ direction.
We can further refine the coarse mesh uniformly by dividing each triangle into four similar subtriangles.} \label{fig:mesh} \end{figure} \subsubsection{Measurement Functions} \label{sec:formulation:numerics:measurement} We are now ready to elaborate the different measurement function sets $\Phi$ for $d=2$. The results can be extended to higher dimensions without much difficulty. \begin{itemize} \item (Case V) Volume measurement function: for $\tau\in \TH$, \begin{equation} \phi_{\tau}: = c_\tau \chi(\tau), \text{ and volume measurement } m_{\tau}: = \int_{\Omega} u \phi_{\tau} \dx = c_{\tau}\int_{\tau} u \dx , \end{equation} where $\chi(\tau)$ is the characteristic function of $\tau$ and $c_\tau:=\sqrt{|\tau|}$ is a scaling factor (see Lemma \ref{prop:iteration} and Lemma \ref{lemma:psi_i^0} for the effect of the scaling factor). The collection of such measurement functions forms a feasible choice of $\Phi$ and is denoted by $\Phi_\mathcal{T}:= \{\phi_{\tau}\}_{\tau\in\TH}$. \item (Case E) Edge measurement function: for $e\in \EH$, \begin{equation} \phi_{e}: = c_e\chi(e), \text{ and edge measurement } m_{e}:= \int_{\Omega} u \phi_{e} \dx = c_e\int_e u \ds, \label{eqn:edge} \end{equation} where $\chi(e)$ is a generalized function such that $\int_{\Omega} u \chi(e) \dx= \int_{e} u \ds,\,\forall u \in H^1_0(\Omega)$ and $c_e: =|e|^{\frac{2-d}{2(d-1)}}$ is a scaling factor (see Lemma \ref{prop:iteration} and Lemma \ref{lemma:psi_i^0} for the effect of the scaling factor). The collection of such measurement functions is denoted by $\Phi_{\mathcal{E}}:= \{\phi_{e}\}_{e\in \EH}$.
\item (Case D') First order derivative measurement function: Given a multi-index $\alpha \in \mathcal{A}:=\{(\alpha_1,\ldots,\alpha_d)| \alpha_i=0 \text{ or } 1, \text{ and }\sum_{i=1}^{d}\alpha_i = 1\}$, $D^{\alpha} u$ denotes the first order (weak) partial derivative associated with $\alpha$, e.g., $D^{(1,0,\ldots,0)} u= \frac{\partial}{\partial x_1} u$ (we make use of the notation and definition of the weak derivative from \cite{evans1998partial}). For $\alpha \in \mathcal{A}$ and $\tau \in \TH$, the first order derivative measurement function is $$ \phi_{\tau,\alpha}:= D^{\alpha} \phi_{\tau}, \text{ in the sense that } \int_{\Omega} u \phi_{\tau,\alpha} \dx = -\int_{\Omega} D^{\alpha}u \, \phi_{\tau} \dx, \, \forall u \in H^1_0(\Omega). $$ The set of measurement functions is the union of $\phi_\tau$ and $\phi_{\tau, \alpha}$, for $\tau\in \TH$ and $\alpha\in \mathcal{A}$, namely, \begin{equation} \Phi_{\D'}:= \{\phi_{\tau}\}_{\tau\in \TH}\cup \{\phi_{\tau,\alpha}\}_{\tau\in \TH, \alpha \in \mathcal{A}}. \label{eqn:caseDp} \end{equation} \item (Case D) Combination of volume and edge measurement functions, \begin{equation} \Phi_{\D}:= \{\phi_{e}\}_{e\in \EH}\cup \{\phi_{\tau}\}_{\tau\in \TH}. \label{eqn:caseD} \end{equation} \end{itemize} \begin{remark} \label{rem:deriv} A derivative measurement function can be represented as a linear combination of edge measurement functions.
For $\tau\in \TH$ with $\partial \tau = \cup\{e_1,\,e_2,\,e_3\}$, we have \begin{equation} \begin{aligned} m_{\tau,\alpha} := \int_{\Omega} u \phi_{\tau,\alpha} \dx =-\int_{\Omega} D^{\alpha}u \, \phi_{\tau} \dx &= -c_\tau\Big( \int_{e_1} u n_{1,\alpha} \ds+\int_{e_2} u n_{2,\alpha} \ds +\int_{e_3} u n_{3,\alpha} \ds \Big) \\ &=-\frac{c_\tau}{c_e}\int_{\Omega} u (n_{1,\alpha}\phi_{e_1}+n_{2,\alpha}\phi_{e_2}+n_{3,\alpha}\phi_{e_3}) \dx, \end{aligned} \label{eqn:deriv} \end{equation} where $n_{i,\alpha}$ denotes the ${\alpha}$ component of the outward normal direction $\vec{n}_{i}$ of $e_i$, $i = 1, 2, 3$. Therefore, we have $\phi_{\tau,\alpha}= -c_\tau/c_e\,(n_{1,\alpha}\phi_{e_1}+n_{2,\alpha}\phi_{e_2}+n_{3,\alpha}\phi_{e_3})$. \end{remark} \begin{proposition} The set of measurement functions $\Phi_{\D}$ defined in \eqref{eqn:caseD} spans the same linear space as $\Phi_{\D'}$ defined in \eqref{eqn:caseDp}, for the Dirichlet boundary condition considered in this paper. Moreover, all the measurement functions in $\Phi_{\D}$ are linearly independent, while the first order derivative measurement functions $\phi_{\tau, \alpha}$ can be linearly dependent. See \ref{sec:prop:span_eq} for the proof. \label{prop:span_eq} \end{proposition} By Proposition \ref{prop:span_eq}, we only need to consider case D instead of case D' to avoid working with linearly dependent measurement functions. We construct the GRPS space $\Psi$ for each case, using the variational formulation \eqref{eqn:psi}. \begin{itemize} \item Case V: $\Psi := \mathrm{span}\{\psi_{\tau}\}_{\tau\in \TH},$ where $\psi_{\tau}$ is the solution of \eqref{eqn:psi} with respect to $\Phi_\mathcal{T}$. \item Case E: $\Psi := \mathrm{span}\{\psi_{e}\}_{e\in \EH},$ where $\psi_{e}$ is the solution to \eqref{eqn:psi} with respect to $\Phi_\mathcal{E}$. \item Case D: $\Psi := \mathrm{span}( \{\psi^{\D}_{e}\}_{e\in \EH}\cup \{\psi^{\D}_{\tau}\}_{\tau\in \TH})$.
We note that $\psi^\D_{\tau}$ needs to satisfy the constraints such that $[\psi^\D_{\tau}, \phi_{\tau'}] = \delta_{\tau, \tau'}$, and $[\psi^\D_{\tau}, \phi_{e}]=0$, which is different from $\psi_{\tau}$ in case V. $\psi^\D_{e}$ needs to satisfy similar constraints. \end{itemize} In the following, when no confusion arises, we also denote the set of measurement functions by $\Phi:=\{\phi_i\}_{i\in \I}$ and the space of basis by $\Psi:= \mathrm{span}\{\psi\}_{i\in \I}$ using a general index set $\I$ without specifying particular measurements and corresponding bases. We write $N = \# \I$. \subsubsection{Localization} \label{sec:formulation:numerics:localization} \def \Tc{\mathcal{T}_c} The basis defined in \eqref{eqn:psi} is globally supported in $\Omega$, which is not practical for applications. In this section, we introduce the notions of local patches and also the formulation of localized bases. Let the $0$-th layer patch $\Omega_i^0$ be the smallest subset of $\Omega$ such that $\mathrm{supp}(\phi_i) \subset \Omega_i^0$ and consists of simplices in $\TH$. The $\ell$-th layer patch $\Omega^\ell_i = \cup\{\tau\in\TH: \tau\cap\Omega^{\ell-1}_i\neq \emptyset\}$ for $\ell\geq 1$ can be defined recursively. We refer to Figure \ref{fig:patch} and Figure \ref{fig:edge_patch} to illustrate the local patches for volume measurement and edge measurement, respectively. 
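The recursive patch construction above can be sketched on a toy mesh in a few lines. In this sketch (all names hypothetical), each simplex is represented by its vertex set, and a patch grows by one layer of vertex-adjacent simplices per step:

```python
def expand_patch(patch, mesh):
    """One layer of growth: add every simplex sharing a vertex with the patch."""
    touched = set().union(*patch)            # vertices covered by the patch
    return {t for t in mesh if t & touched} | patch

def layer_patch(support, mesh, ell):
    """Omega_i^ell: layer 0 covers the support, then expand ell times."""
    patch = {t for t in mesh if t & support}
    for _ in range(ell):
        patch = expand_patch(patch, mesh)
    return patch

# toy "mesh": a chain of 1-simplices {0,1}, {1,2}, ..., {9,10}
mesh = [frozenset({k, k + 1}) for k in range(10)]
p0 = layer_patch(frozenset({5}), mesh, 0)
p1 = layer_patch(frozenset({5}), mesh, 1)
print(len(p0), len(p1))  # -> 2 4
```

Each additional layer adds the simplices touching the previous patch, mirroring the definition of $\Omega_i^{\ell}$ from $\Omega_i^{\ell-1}$.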
\begin{figure}[H] \centering \subfigure[$\Omega_i^0$] {\includegraphics[width=0.32\textwidth]{volumelayerl_1}} \subfigure[$\Omega_i^1$] {\includegraphics[width=0.32\textwidth]{volumelayerl_2}} \subfigure[$\Omega_i^2$] {\includegraphics[width=0.32\textwidth]{volumelayerl_3}} \caption{Local patches for volume measurements.} \centering \label{fig:patch} \end{figure} \begin{figure}[H] \centering \subfigure[$\Omega_i^0$] {\includegraphics[width=0.32\textwidth]{edgelayerl_1}} \subfigure[$\Omega_i^1$] {\includegraphics[width=0.32\textwidth]{edgelayerl_2}} \subfigure[$\Omega_i^2$] {\includegraphics[width=0.32\textwidth]{edgelayerl_3}} \caption{Local patches for edge measurements.} \centering \label{fig:edge_patch} \end{figure} We can therefore localize the computation of $\psi_i$ to a local patch $\Omega^\ell_i$. For any $i\in \I$ and ${\ell}\in \mathbb{N}$, \begin{equation} \begin{cases} \psi_i^{\ell} =\argmin a(v,v) \\ s.t.\ v\in H_0^1(\Omega_i^{\ell}) \text{ and } [v, \phi_j]=\delta_{i,j},\ \forall j \in \I. \end{cases} \label{eqn:psi_i^0} \end{equation} The space of localized bases is $\Psi^{\ell} :=\mathrm{span}\{\psi_i^{\ell}\}_{i \in \I}$. \subsection{Numerical homogenization} By \textit{numerical homogenization}, we refer to the finite element formulation in the coarse space $\Psi$, namely, finding $u_H \in \Psi$ such that \begin{equation} a(u_H,v_H)=[g,v_H],\ \forall v_H\in \Psi. \label{eqn:varellpFEM} \end{equation} In practice, the equation \eqref{eqn:varellpFEM} is solved in the space of localized bases $\Psi^{\ell}$, and we write $u_H^{\ell}\in \Psi^{\ell}$ for the solution to \begin{equation} a(u_H^{\ell},v_H)=[g,v_H],\ \forall v_H\in \Psi^{\ell}. \label{eqn:local_varellpFEM} \end{equation} For the analysis, we assume that the coarse bases in $\Psi$ or $\Psi^\ell$ are exact solutions of the variational formulation \eqref{eqn:psi} or \eqref{eqn:psi_i^0}.
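Once the coefficient vectors of the (localized) basis functions are collected as columns of a matrix, \eqref{eqn:varellpFEM} becomes a small linear system. A schematic NumPy sketch, with a generic SPD matrix in place of the fine-scale stiffness matrix (all names hypothetical):

```python
import numpy as np

def coarse_galerkin(A, g, Psi):
    """Galerkin solve of a(u_H, v_H) = [g, v_H] over the coarse space.
    A:   (n, n) fine-scale stiffness matrix,
    g:   (n,)   fine-scale load vector,
    Psi: (n, N) columns hold fine-scale coefficients of the basis psi_i.
    Returns the fine-scale coefficient vector of u_H = sum_i c_i psi_i."""
    A_H = Psi.T @ A @ Psi            # N x N coarse stiffness matrix
    g_H = Psi.T @ g                  # coarse load
    c = np.linalg.solve(A_H, g_H)    # coarse coefficients
    return Psi @ c

# toy example
rng = np.random.default_rng(1)
n, N = 20, 4
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)          # SPD stand-in for the stiffness matrix
g = rng.standard_normal(n)
Psi = rng.standard_normal((n, N))    # stand-in for the (localized) basis
u_H = coarse_galerkin(A, g, Psi)
# Galerkin orthogonality: the residual is orthogonal to the coarse space
print(np.abs(Psi.T @ (A @ u_H - g)).max())
```

The printed residual reflects the Galerkin orthogonality $a(u-u_H, v_H)=0$ for all $v_H\in\Psi$, which is the key property used in the analysis below.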
In the numerical experiments, we compute the basis on a sufficiently fine mesh $\Th$, and assume that the discretization error is negligible. \section{Analysis} \label{sec:analysis} In this section, we first present an error analysis in Section \ref{sec:analysis:globalbasis} for the proposed two-level multiscale methods with the basis defined in \eqref{eqn:psi} using edge or first order derivative measurement functions, showing that the optimal convergence rate $\mathcal{O}(H)$ for $\| u-u_H \|$ holds (Theorem \ref{thm:convergense of global}), where $u$ is the true solution and $u_H$ is the finite element solution of \eqref{eqn:varellpFEM} with global GRPS basis functions. In Section \ref{sec:analysis:localization}, we propose to compute the basis on a localized patch, and show the exponential decay of the truncation error between the localized and global basis functions in Theorem \ref{thm:decay}. We conclude in Section \ref{sec:analysis:localbasis} with our main Theorem \ref{thm:local}, the error estimate for the multiscale method with localized bases, which states the following: let $\ell$ denote the number of layers in the support of the localized basis, and let $u_H^{\ell}$ be the solution to \eqref{eqn:local_varellpFEM} with the local bases. The solution error can be controlled by $$ \|u -u_H^{\ell}\| \leq \| u -u_H\| +\| u_H- u_H^{\ell}\|. $$ The first term $\| u -u_H\|$ is of order $\mathcal{O}(H)$, and the second term $\| u_H -u_H^{\ell}\|$ depends on the truncation error, which decays exponentially with respect to $\ell$. We recall the simplex-wise trace theorem and the zero-mean boundary type Poincar\'{e} inequality, which will be used in the following analysis.
\begin{lemma}[trace inequality \cite{verfurth1999error}]\label{lem:trace} For any $\tau \in \mathcal{T}_{H}$, any $e \subset \partial \tau$, and any $v \in H^{1}(\tau)$, we have \begin{equation} \|v\|_{L^2(e)} \leq\left\{d \frac{|e|}{|\tau|}\right\}^{1 / 2}\left\{\|v\|_{L^2(\tau)}+H_{\tau}\|\nabla v\|_{L^2(\tau)}\right\}. \end{equation} \end{lemma} \begin{lemma}[zero-mean boundary type Poincar\'{e} inequality \cite{NazRep:2015}]\label{lem:zero_mean} Let $\tau$ be a shape regular triangle. Then there exists a constant $C_{\tau}$ depending on the diameter of $\tau$ such that \begin{equation} \|w\|_{L^2(\tau)} \leq C_{\tau}\|\nabla w\|_{L^2(\tau)},\ \forall w\in \tilde{H}^1(\tau):=\left\{w\in H^1(\tau)\,\Big|\, \int_{\partial \tau} w = 0\right\}. \end{equation} \end{lemma} \subsection{Accuracy of Global Basis} \label{sec:analysis:globalbasis} In this section, we prove that the finite element solutions to \eqref{eqn:varellpFEM}, with respect to spaces $\Psi$ of global bases derived from case V (volume), case E (edge) or case D (derivative), achieve an $\mathcal{O}(H)$ convergence rate. \begin{theorem}\label{thm:convergense of global} Let $u$ be the solution of \eqref{eqn:varellp}. Then $u_H := \sum_{i\in \I} m_i \psi_i$, with $ m_i:=[u,\phi_i]$, is the unique finite element solution to \eqref{eqn:varellpFEM}, and we have \begin{equation} \|u-u_H\|_{H_0^1} \leq \kappa_{min}^{-1}C H\|g\|_{L^2}. \end{equation} \begin{proof} Uniqueness is a direct result of the coerciveness of $a(\cdot,\cdot)$. The variational property of $\Psi$, in Proposition \ref{prp:var}, implies that $a(u_H-u,v)=0,\, \forall v\in \Psi$. Hence $u_H$ is the unique solution to \eqref{eqn:varellpFEM}.
Let $r := u-u_H$. Recalling the Galerkin orthogonality $a(r, v) = 0, \forall v\in \Psi$, we have \begin{equation} \kappa_{min} \|r\|_{H_0^1}^2 \leq a(r,r)=[g,r]\leq \|g\|_{L^2}\|r\|_{L^2}. \label{eqn:coercive} \end{equation} Noting that $[r,\phi_i]=0$ for any $i$, it is sufficient to show \begin{equation} \|r\|_{L^2} \leq C_2 H\|r\|_{H_0^1} \label{eqn:poincare} \end{equation} for all three cases. For case V, the Poincar\'{e} inequality implies $\|r\|_{L^2} \leq C_1 H\|r\|_{H_0^1}$. For case E, \eqref{eqn:poincare} can be verified using Lemma \ref{lem:zero_mean}. For case D, a combination of the Poincar\'{e} inequality and Lemma \ref{lem:zero_mean} leads to \eqref{eqn:poincare}. \end{proof} \end{theorem} \subsection{Localization} \label{sec:analysis:localization} The global basis cannot be used directly in practice; therefore, we propose to use the localized basis defined in \eqref{eqn:psi_i^0}. It is crucial to know the level of localization $\ell$ a priori, given accuracy and complexity constraints. In this section, we show that the corrector $\psi_i - \psi^\ell_i $ in all cases V, E and D decays exponentially with respect to $\ell$, inspired by the idea of subspace decomposition developed in \cite{Kornhuber2016e,owhadi2019operator}. First, we introduce a partition of unity. We use $\Ih$ to denote the index set of the partition of unity to distinguish it from the index set $\I$ for the set of measurement functions. For each $ x_{\hat{\dotlessi}} \in \NH$, let $\omega_{\hat{\dotlessi}}:=\cup\{\tau\in \TH | x_{\hat{\dotlessi}} \in \tau\}$, and let $\eta_{\hat{\dotlessi}}$ be the piecewise linear function associated with $x_{\hat{\dotlessi}}$, such that $\eta_{\hat{\dotlessi}}(x_{\hat{\dotlessj}})=\delta_{\hat{\dotlessi},\hat{\dotlessj}}$.
Then $\{\eta_{\hat{\dotlessi}}\}_{\hat{\dotlessi} \in \Ih}$ forms a partition of unity, $$ H_0^1(\Omega)=\sum_{\hat{\dotlessi}\in \Ih} H_0^1(\omega_{\hat{\dotlessi}}), $$ and $\forall v\in H_0^1(\Omega)$, $$ v = \sum_{\hat{\dotlessi}\in \Ih} v_{\hat{\dotlessi}}, $$ with $v_{\hat{\dotlessi}}= v\eta_{\hat{\dotlessi}}$. We define \begin{equation} \Phi^{\perp}:=\{u\in H_0^1(\Omega)\,\big|\,[u,\phi_i]=0, \ \forall i \in \I\}, \end{equation} and $\Phi_{\hat{\dotlessi}}^{\perp}:=H_0^1(\omega_{\hat{\dotlessi}})\cap \Phi^{\perp}$. Let $P_{\hat{\dotlessi}}:H_0^1(\Omega)\to \Phi_{\hat{\dotlessi}}^{\perp}$ be the $a$-orthogonal projection, such that for any $v \in H_0^1(\Omega)$, $$ a(P_{\hat{\dotlessi}} v,w)=a(v,w), \ \forall \, w \in \Phi_{\hat{\dotlessi}}^{\perp}. $$ The additive subspace decomposition operator $P:\Phi^{\perp}\rightarrow \Phi^{\perp}$ is defined as \begin{equation} P:= \sum_{\hat{\dotlessi}\in \Ih}P_{\hat{\dotlessi}}. \end{equation} We note that the support of $P_{\hat{\dotlessi}}v$ is $\omega_{\hat{\dotlessi}}$. For any $v \in H_0^1(\Omega_i^{\ell})$, the support of $Pv$ is contained in $\cup\{\omega_{\hat{\dotlessi}}| \omega_{\hat{\dotlessi}}\cap\Omega_i^{\ell}\neq \emptyset, \hat{\dotlessi}\in \Ih \}$. Namely, applying $P$ to $v$ expands its support by one layer, and $Pv\in H_0^1(\Omega_i^{{\ell}+1})$. The additive subspace decomposition operator $P$ can be utilized as a preconditioner to iteratively approximate any $\chi \in \Phi^{\perp}$. Noticing that the corrector $\psi_i^\ell - \psi_i\in \Phi^{\perp}$, the following lemma shows that if $\mathrm{cond}(P) < \infty$, the truncation error $\|\psi_i^{\ell}-\psi_i\| $ decays exponentially with respect to ${\ell}$. \begin{lemma} If $\mathrm{cond}(P) < \infty$, then \begin{equation} \|\psi_i^{\ell}-\psi_i\|\leq \big(\frac{\mathrm{cond}(P)-1}{\mathrm{cond}(P)+1}\big)^{\ell} \|\psi_i^0\|.
\end{equation} \label{prop:iteration} The proof is similar to that in \cite{owhadi2019operator}; see \ref{Sec:lemma subspace decomposition} for details. \end{lemma} \subsubsection{Condition Number of P} This subsection is dedicated to an analysis of the condition number of $P$, in particular for cases V, E and D. \begin{lemma} \cite{Kornhuber2016e}\label{lemma:cond(P)} Let $K_{max}$ be the smallest constant such that \begin{equation} \|\chi\|^2 \leq K_{max}\sum_{\hat{\dotlessi}\in \Ih} \|\chi_{\hat{\dotlessi}}\|^2 \end{equation} holds for any $\chi = \sum_{\hat{\dotlessi}\in \Ih} \chi_{\hat{\dotlessi}}$ with $\chi_{\hat{\dotlessi}} \in \Phi_{\hat{\dotlessi}}^{\perp}$. Let $K_{min}$ be the largest number such that for any $\chi \in \Phi^{\perp} $, there exists a decomposition $\chi = \sum_{\hat{\dotlessi}\in \Ih} \chi_{\hat{\dotlessi}}$ with $\chi_{\hat{\dotlessi}}\in \Phi_{\hat{\dotlessi}}^{\perp}$ such that \begin{equation} K_{min}\sum_{\hat{\dotlessi}\in \Ih}\|\chi_{\hat{\dotlessi}}\|^2\leq\|\chi\|^2. \label{eqn:decomposition} \end{equation} The shape regularity of $\TH$ implies the existence of the overlapping number $n_{\max}$, namely, the maximum number of non-vanishing $\chi_{\hat{\dotlessi}}$ on any given element of $\TH$. We note that $n_{\max}$ depends on $\gamma$ and $d$. It holds true that \begin{equation} K_{min} \leq \lambda_{min}(P), \quad \lambda_{max}(P) \leq K_{max}\leq n_{max}. \end{equation} \end{lemma} To establish the existence of $K_{min}$, we construct in the following a specific decomposition satisfying \eqref{eqn:decomposition} for any $\chi \in \Phi^{\perp}$. Note that \begin{equation} \chi = \sum_{\hat{\dotlessi}\in \Ih} v_{\hat{\dotlessi}}, \label{eqn:predecompositon} \end{equation} with $v_{\hat{\dotlessi}}=\chi\eta_{\hat{\dotlessi}}$, forms a decomposition of $\chi\in\Phi^{\perp}$; however, $v_{\hat{\dotlessi}}$ does not necessarily belong to $\Phi_{\hat{\dotlessi}}^{\perp}$.
We construct the following correction operator $\tilde{P}$ such that $ v_{\hat{\dotlessi}} - \tilde{P} v_{\hat{\dotlessi}} \in \Phi_{\hat{\dotlessi}}^{\perp}$. $\tilde{P}$ varies with the choice of measurement functions $\Phi$. Recall that the $0$-th layer patch $\Omega_i^0$ is the smallest union of simplices in $\TH$ such that $\mathrm{supp}(\phi_i) \subset \Omega_i^0$; concretely, \begin{itemize} \item Case V: $\Omega_i^0= \tau_i$; \item Case E: $\Omega_i^0= \omega_{e_i}$, where $\omega_{e_i} := \cup\{\tau\in\TH: \tau\cap e_i\neq \emptyset\}$; \item Case D: $\Omega_i^0= \tau_i$ for $\phi_{\tau_i} \in \Phi_{\D}$ and $\Omega_i^0= \omega_{e_i}$ for $\phi_{e_i} \in \Phi_{\D}$. \end{itemize} Let $\psi_i^0$ be the localized basis defined in \eqref{eqn:psi_i^0} with respect to $\Omega_i^0$. Let $\tilde{P}: H_0^1(\Omega) \rightarrow H_0^1(\Omega)$ be the linear operator defined by \begin{equation} \tilde{P}v:= \sum_{i\in\I} \psi_i^0 [\phi_i, v], \text{ for } v\in H_0^1(\Omega). \label{eqn:tildeP} \end{equation} Although $\phi_i$ and $\psi_i^0$ vary with the case, noticing that $\tilde{P}v_{\hat{\dotlessi}} \in H_0^1(\omega_{\hat{\dotlessi}})$ and $[\psi^0_i,\phi_j] = \delta_{i,j}$, we have $v_{\hat{\dotlessi}}-\tilde{P}v_{\hat{\dotlessi}} \in \Phi_{\hat{\dotlessi}}^{\perp}$. Moreover, for $\chi\in \Phi^{\perp}$, we have $\tilde{P}\chi=0$. It follows that \begin{equation} \chi=\chi-\tilde{P}\chi=\sum_{\hat{\dotlessi}\in \Ih}(v_{\hat{\dotlessi}}-\tilde{P}v_{\hat{\dotlessi}})=\sum_{\hat{\dotlessi}\in \Ih}\chi_{\hat{\dotlessi}}, \label{eqn:decomposition_construction} \end{equation} with $\chi_{\hat{\dotlessi}}:=v_{\hat{\dotlessi}}-\tilde{P}v_{\hat{\dotlessi}} \in \Phi^{\perp}_{\hat{\dotlessi}}$, which is the desired decomposition. The next three lemmas are dedicated to proving that the decomposition is stable, in the sense that there exists a constant $K_{min}$, depending only on $d, \gamma, \kappa_{min}, \kappa_{max}$, such that \eqref{eqn:decomposition} is satisfied.
\begin{lemma} \label{lemma:predecomposition} There exists a constant $C>0$, depending only on $d, \gamma, \kappa_{min}, \kappa_{max}$, such that for any $\chi \in \Phi^{\perp}$ the decomposition \eqref{eqn:predecompositon} satisfies \begin{equation} \sum_{\hat{\dotlessi}\in \Ih}\|v_{\hat{\dotlessi}}\|^2\leq C \|\chi\|^2 . \end{equation} See \ref{Sec:lemma:predecomposition} for the proof. \end{lemma} \begin{lemma} \label{lemma:stability} Let $\tilde{P}$ be defined as in \eqref{eqn:tildeP}. There exists a constant $C$, depending only on $d, \gamma, \kappa_{min}, \kappa_{max}$, such that for any $\chi\in \Phi^{\perp}$, \begin{equation} \sum_{\hat{\dotlessi}\in \Ih}\|\tilde{P}v_{\hat{\dotlessi}}\|^2\leq C \|\chi\|^2 \end{equation} with $v_{\hat{\dotlessi}} = \chi \eta_{\hat{\dotlessi}}$. Furthermore, there exists a decomposition $\chi = \sum_{\hat{\dotlessi}\in \Ih} \chi_{\hat{\dotlessi}} $ for any $\chi \in \Phi^{\perp}$ with $\chi_{\hat{\dotlessi}}\in \Phi_{\hat{\dotlessi}}^{\perp}$ such that \begin{equation} \sum_{\hat{\dotlessi}\in \Ih}\|\chi_{\hat{\dotlessi}}\|^2\leq 2C \|\chi\|^2 . \end{equation} See \ref{sec:lemma:stability} for the proof. \end{lemma} \begin{lemma}\label{lemma:psi_i^0} For case V, case E and case D, it holds true that \begin{equation} \|\psi^0_j\| \leq C_1 H^{-1} \text{ for any } j\in \I, \end{equation} where $C_1$ depends only on $\gamma$, $\kappa_{max}$ and $d$. See \ref{sec:lemma:psi_i^0} for the proof. \end{lemma} \begin{theorem} \label{thm:decay} It holds that \begin{equation} \|\psi_i-\psi_i^{\ell}\| \leq C_1 H^{-1} e^{-{\ell}/C_2}. \end{equation} \end{theorem} \begin{proof} Lemmas \ref{lemma:cond(P)} and \ref{lemma:stability} imply that there exists a constant $C'$, depending only on $\gamma, d, \kappa_{max}, \kappa_{min}$, such that $1/C' \leq K_{min} \leq \lambda_{min}(P)$. According to Lemma \ref{lemma:cond(P)}, $\mathrm{cond}(P)$ then has an upper bound depending only on $\gamma, d, \kappa_{max}, \kappa_{min}$, which determines the constant $C_2$.
Combining Lemma \ref{prop:iteration} and Lemma \ref{lemma:psi_i^0}, we draw the conclusion. \end{proof} \subsection{Accuracy of Localized Basis} \label{sec:analysis:localbasis} Due to the exponential decay of the truncation error, we can use the localized basis $\psi_i^{\ell}$ instead of the global basis $\psi_i$ to reduce the computational cost. The following theorem shows that we preserve the $O(H)$ convergence rate of the global basis in Theorem \ref{thm:convergense of global} if the localization level (number of layers in the localization patch) satisfies ${\ell} \simeq O(\log(1/H))$. \begin{theorem} \label{thm:local} Let $u_H^{\ell}\in \Psi^{\ell} $ be the solution to \eqref{eqn:local_varellpFEM}. For ${\ell}\geq C_2\log(1/H)$ we have \begin{equation} \|u-u_H^{\ell}\| \leq CH \|g\|_{L^2(\Omega)}, \end{equation} where $C$ depends on $\kappa_{min},\kappa_{max},d,\Omega$, and $\gamma$. \end{theorem} \begin{proof} Let $u_{\psi}^{\ell}=\sum_{i=1}^N m_i \psi_i^{\ell}$, where $m_i=[\phi_i,u]$ for $i=1,\ldots,N$. Since $u_H^{\ell}$ is the Galerkin projection of $u$ onto $\Psi^{\ell}$ with respect to $a(\cdot,\cdot)$, we have $\|u-u_H^{\ell}\| \leq \|u-u_{\psi}^{\ell}\|$, so it suffices to bound the latter. Recalling that $u_H = \sum_{i=1}^N m_i \psi_i$, we have \begin{equation} \|u-u_{\psi}^{\ell}\| \leq \|u-u_H\|+\|u_H-u_{\psi}^{\ell}\|. \end{equation} Theorem \ref{thm:convergense of global} implies that $\|u-u_H\|\leq CH\|g\|_{L^2}$. To derive an estimate of the second term, let $\chi=u_H-u_{\psi}^{\ell}$. As $\chi \in \Phi^{\perp}$, it can be decomposed as \begin{equation} \chi=\sum_{\hat{\dotlessi}\in \Ih}\chi_{\hat{\dotlessi}}, \end{equation} with $\chi_{\hat{\dotlessi}}:=v_{\hat{\dotlessi}}-\tilde{P}v_{\hat{\dotlessi}} \in \Phi^{\perp}_{\hat{\dotlessi}}$ as in \eqref{eqn:decomposition_construction}. By Lemma \ref{lemma:stability}, it follows that \begin{equation} \sum_{\hat{\dotlessi}\in \Ih}\|\chi_{\hat{\dotlessi}}\|^2\leq C \|\chi\|^2, \label{eqn:stable_decom} \end{equation} where $C$ depends only on $d, \gamma, \kappa_{min}, \kappa_{max}$.
Note that \begin{equation} \|\chi\|^2 = \left(\sum_{\hat{\dotlessi}\in \Ih}\chi_{\hat{\dotlessi}}, \sum_{i\in\I} m_i (\psi_i-\psi_i^{\ell}) \right). \label{eqn:product} \end{equation} $\chi_{\hat{\dotlessi}} \in \Phi^{\perp}_{\hat{\dotlessi}} $ implies $(\chi_{\hat{\dotlessi}}, \psi_i)=0$ and $(\chi_{\hat{\dotlessi}}, \psi_i^{\ell})=0$ for $\omega_{\hat{\dotlessi}} \subset \Omega_i^{\ell}$. Therefore, for any pair $(\hat{\dotlessi},i)\in \{(\hat{\dotlessi},i) \in \Ih \times \I| \omega_{\hat{\dotlessi}} \subset \Omega_i^{\ell} \text{ or } \omega_{\hat{\dotlessi}} \cap \Omega_i^{\ell}=\emptyset\} $, it holds that $$ (\chi_{\hat{\dotlessi}},\psi_i-\psi_i^{\ell})=0. $$ For any nonzero term, by Young's inequality, we have \begin{equation} m_i(\chi_{\hat{\dotlessi}},\psi_i-\psi_i^{\ell}) \leq \frac{1}{2}((C{C_{ol}})^{-1}\|\chi_{\hat{\dotlessi}}\|^2+CC_{ol} m_i^2\|\psi_i-\psi_i^{\ell}\|^2) \label{eqn:nonzero} \end{equation} We note that the number of nonzero terms in \eqref{eqn:product}, for each $\chi_{\hat{\dotlessi}}$ or $\psi_i-\psi_i^{\ell}$, is bounded by a constant $C_{ol}\sim {\ell}^{(d-1)}$ depending on the shape regularity. Combining \eqref{eqn:stable_decom}, \eqref{eqn:product} and \eqref{eqn:nonzero}, it follows that \begin{equation} \begin{aligned} \|\chi\|^2 = &\sum_{\omega_{\hat{\dotlessi}} \not\subset \Omega_i^{\ell}\text{ and }\omega_{\hat{\dotlessi}} \cap \Omega_i^{\ell} \neq \emptyset} m_i(\chi_{\hat{\dotlessi}},\psi_i-\psi_i^{\ell})\\ \leq & \sum_{\omega_{\hat{\dotlessi}} \not\subset \Omega_i^{\ell}\text{ and }\omega_{\hat{\dotlessi}} \cap \Omega_i^{\ell} \neq \emptyset} \frac{1}{2}((C{C_{ol}})^{-1}\|\chi_{\hat{\dotlessi}}\|^2+CC_{ol} m_i^2\|\psi_i-\psi_i^{\ell}\|^2) \\ \leq & \sum_{\hat{\dotlessi}\in \Ih }C_{ol} \frac{1}{2}((C{C_{ol}})^{-1}\|\chi_{\hat{\dotlessi}}\|^2)+ \sum_{i\in \I }C_{ol}(CC_{ol} m_i^2\|\psi_i-\psi_i^{\ell}\|^2)\\ \leq & \frac{1}{2}\|\chi\|^2 + \sum_{i\in \I}CC_{ol}^2 m_i^2\|\psi_i-\psi_i^{\ell}\|^2. 
\end{aligned} \end{equation} Hence \begin{equation} \|\chi\|^2 \leq 2\sum_{i\in \I}CC_{ol}^2 m_i^2\|\psi_i-\psi_i^{\ell}\|^2. \label{eqn:temp1} \end{equation} By Lemma \ref{lem:trace} and the Poincar\'{e} inequality, it suffices to show \begin{equation} \sum_{i \in \I} m_{i}^2 =\sum_{i\in \I} [\phi_i,u]^2\leq C \|u\|^2 \label{eqn: coef stable} \end{equation} for the different cases. We have \begin{itemize} \item Case V: $\sum_{i\in \I} [\phi_i,u]^2 \leq \sum_{i\in \I} \|\phi_i\|_{L^2}^2 \|u\|_{L^2(\tau_i)}^2 \leq \sum_{i\in \I} \|u\|_{L^2(\tau_i)}^2 = \|u\|_{L^2(\Omega)}^2 \leq C\|u\|^2$. \item Case E: $\sum_{i\in \I} [\phi_i,u] ^2 = \sum_{i\in \I}(\int_{e_i}|e_i|^{\frac{2-d}{2(d-1)}} u \ds)^2 \leq \sum_{i\in \I} CH \|u\|_{L^2(e_i)}^2 \leq C \sum_{i\in \I} \big(\|u\|_{L^2(\tau_i)}^2 + H^2\|\nabla u\|_{L^2(\tau_i)}^2\big) \leq C \|u\|^2$, where $C$ depends only on $d$, $\Omega$, $\gamma$, and $\kappa_{min}$. \item Case D: a combination of case V and case E. \end{itemize} Plugging \eqref{eqn: coef stable} into \eqref{eqn:temp1}, applying Theorem \ref{thm:decay}, and using the stability estimate $\|u\|\leq C\|g\|_{L^2}$, we have \begin{equation} \|\chi\|^2 \leq 2 CC_{ol}^2C_1^2 e^{-2{\ell}/C_2}H^{-2}\|g\|_{L^2}^2. \end{equation} For ${\ell}\geq 2C_2\log(1/H)$, which matches the condition ${\ell}\geq C_2\log(1/H)$ in the statement after renaming the constant, we have $\|\chi\| \leq CH \|g\|_{L^2(\Omega)}$, where $C$ depends only on $\kappa_{min},\kappa_{max},d,\Omega$, and $\gamma$. \end{proof} \section{Numerics} \label{sec:numerics} In this section, we justify our theoretical results through a few examples. We demonstrate the localization property of the GRPS basis and the convergence of localized GRPS for benchmark problems with multiscale coefficients. Furthermore, we validate the GRPS method for wave equations in heterogeneous media.
\subsection{Multiscale Trigonometric Example} \label{sec:numerics:mstrig} The multiscale trigonometric (mstrig) coefficient $\kappa(x_1,x_2)$ is given by \begin{multline} \kappa(x_1,x_2):=\frac{1}{6}(\frac{1.1+\sin(2\pi x_1/\epsilon_1)}{1.1+\sin(2\pi x_2/\epsilon_1)}+\frac{1.1+\sin(2\pi x_2/\epsilon_2)}{1.1+\cos(2\pi x_1 /\epsilon_2)} +\frac{1.1+\cos(2\pi x_2/\epsilon_3)}{1.1+\sin(2\pi x_1 /\epsilon_3)}\\ +\frac{1.1+\sin(2\pi x_2/\epsilon_4)}{1.1+\cos(2\pi x_1 /\epsilon_4)} +\frac{1.1+\cos(2\pi x_1/\epsilon_5)}{1.1+\sin(2 \pi x_2/\epsilon_5)}+\sin(4x_1^2x_2^2)+1), \label{eqn:benchmark1} \end{multline} where $\epsilon_1=1/5,\, \epsilon_2=1/13,\, \epsilon_3=1/17,\, \epsilon_4=1/31,\,\epsilon_5=1/65$. $\kappa(x_1,x_2)$ is highly oscillatory with non-separable scales on the unit square $\Omega=[0,1]\times [0,1]$; see Figure \ref{fig:MTE}. The source term is $g(x_1,x_2) = \sin(x_1)$. \begin{figure}[H] \centering {\includegraphics[width=0.5\textwidth]{ex2_a}} \caption{Multiscale trigonometric example, $\mathrm{contrast}\approx 33.4$.} \label{fig:MTE} \end{figure} \subsubsection{Localization of GRPS basis} We first show the exponential decay of the GRPS basis functions. For a fixed coarse mesh with $N_c\times N_c$ nodes, the degrees of freedom for the RPS, GRPS-V, GRPS-E and GRPS-D bases are $(N_c-1)^2, 2N_c^2, 3N_c^2-2N_c$ and $5N_c^2-2N_c$, respectively. These numbers indicate that, on the same mesh, the density (the number of degrees of freedom) of GRPS-D is the largest, followed by GRPS-E, GRPS-V and RPS. We illustrate the RPS, GRPS-E, GRPS-V and GRPS-D basis functions in Figures \ref{fig:PC} to \ref{fig:DS}. The GRPS-E and GRPS-D bases seem to decay more rapidly than the GRPS-V and RPS bases. The GRPS-E and GRPS-D bases also seem to be more spiky than the GRPS-V basis, and all those three bases are more localized than the RPS basis.
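For reproducibility, the coefficient \eqref{eqn:benchmark1} can be evaluated directly. A minimal NumPy sketch, with the sampling grid chosen arbitrarily:

```python
import numpy as np

# epsilon_1, ..., epsilon_5 from the definition of the mstrig coefficient
EPS = (1/5, 1/13, 1/17, 1/31, 1/65)

def kappa(x1, x2):
    """Multiscale trigonometric (mstrig) coefficient from the text."""
    e1, e2, e3, e4, e5 = EPS
    s = (1.1 + np.sin(2*np.pi*x1/e1)) / (1.1 + np.sin(2*np.pi*x2/e1)) \
      + (1.1 + np.sin(2*np.pi*x2/e2)) / (1.1 + np.cos(2*np.pi*x1/e2)) \
      + (1.1 + np.cos(2*np.pi*x2/e3)) / (1.1 + np.sin(2*np.pi*x1/e3)) \
      + (1.1 + np.sin(2*np.pi*x2/e4)) / (1.1 + np.cos(2*np.pi*x1/e4)) \
      + (1.1 + np.cos(2*np.pi*x1/e5)) / (1.1 + np.sin(2*np.pi*x2/e5)) \
      + np.sin(4 * x1**2 * x2**2) + 1.0
    return s / 6.0

# sample on a uniform grid and estimate the contrast kappa_max / kappa_min
x = np.linspace(0.0, 1.0, 513)
X1, X2 = np.meshgrid(x, x)
K = kappa(X1, X2)
print(K.min(), K.max() / K.min())
```

The grid-based contrast estimate should be roughly consistent with the value $\approx 33.4$ reported in Figure \ref{fig:MTE}, although the exact number depends on the sampling resolution relative to the finest scale $\epsilon_5$.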
\begin{figure}[http] \centering \subfigure[RPS, contour\label{fig:PC}] {\includegraphics[width=0.23\textwidth]{GraphGlobalBasis_01}} \quad \subfigure[GRPS-V, contour\label{fig:VC}]{\includegraphics[width=0.23\textwidth]{GraphGlobalBasis_08}} \subfigure[GRPS-E, contour\label{fig:EC}] {\includegraphics[width=0.23\textwidth]{GraphGlobalBasis_04}}\quad \subfigure[GRPS-D, contour\label{fig:DC}]{\includegraphics[width=0.23\textwidth]{GraphGlobalBasis_12}} \subfigure[RPS, surface\label{fig:PS}] {\includegraphics[width=0.230\textwidth]{GraphGlobalBasis_02}}\quad \subfigure[GRPS-V, surface\label{fig:VS}]{\includegraphics[width=0.230\textwidth]{GraphGlobalBasis_09}} \subfigure[GRPS-E, surface\label{fig:ES}] {\includegraphics[width=0.230\textwidth]{GraphGlobalBasis_05}}\quad \subfigure[GRPS-D, surface\label{fig:DS}]{\includegraphics[width=0.230\textwidth]{GraphGlobalBasis_13}} \subfigure[RPS, $x_2=x_1$\label{fig:Pslice}] {\includegraphics[width=0.230\textwidth]{GraphGlobalBasis_03}}\quad \subfigure[GRPS-V, $x_2=x_1$ \label{fig:Vslice}]{\includegraphics[width=0.230\textwidth]{GraphGlobalBasis_10}} \subfigure[GRPS-E, $x_2=x_1$\label{fig:Eslice}] {\includegraphics[width=0.230\textwidth]{GraphGlobalBasis_06}}\quad \subfigure[GRPS-D, $x_2=x_1$ \label{fig:Dslice}]{\includegraphics[width=0.230\textwidth]{GraphGlobalBasis_18}} \subfigure[RPS, $x_1+x_2=\frac{9}{8}$\label{fig:Pslice2}] {\includegraphics[width=0.230\textwidth]{GraphGlobalBasis_17}}\quad \subfigure[GRPS-V, $x_1+x_2=\frac{9}{8}$ \label{fig:Vslice2}]{\includegraphics[width=0.230\textwidth]{GraphGlobalBasis_15}} \subfigure[GRPS-E, $x_1+x_2=\frac{9}{8}$\label{fig:Eslice2}] {\includegraphics[width=0.230\textwidth]{GraphGlobalBasis_16}}\quad \subfigure[GRPS-D, $x_1+x_2=\frac{9}{8}$ \label{fig:Dslice2}]{\includegraphics[width=0.230\textwidth]{GraphGlobalBasis_14}} \caption{Contour plots (a)-(d), surface plots (e)-(h), 1d slice plots (in the $\log_{10}$-scale) along $x_2=x_1$ (i)-(l), and $x_1+x_2=9/8$ (m)-(p), for RPS, GRPS-V, 
GRPS-E, GRPS-D basis functions (\textit{mstrig} example). The coarse mesh has $N_c=8$ and the fine mesh has $N_f=512$.}
\end{figure}
\subsubsection{Convergence}
We investigate the convergence property of the GRPS and RPS bases for the \textit{mstrig} example. We use a fixed fine mesh with $h=2^{-8}$ and coarse meshes with sizes $H=2^{-3},\,2^{-4},\,2^{-5}$, respectively. In Figure \ref{fig:comp_phi}, we compare convergence curves of the four different basis functions, with localization levels $\ell=2,\,3,\,4,\,5,\,6$. The x-axis stands for the degrees of freedom of the basis functions, and the y-axis stands for the error $\|u_{H}^{\ell}-u_h\|_{H_0^1}$ in the $\log_{10}$-scale, where $u_h$ is the finite element reference solution to \eqref{eqn:varellp} over $V_h$. Each curve shows the error versus the degrees of freedom for a fixed $\ell$.
\begin{figure}[http]
\centering
\subfigure[RPS, $H=2^{-3}$ \label{fig:Pc1}] {\includegraphics[width=0.24\textwidth]{paper_bench2_1}} \quad
\subfigure[RPS, $H=2^{-4}$ \label{fig:Pc2}] {\includegraphics[width=0.24\textwidth]{paper_bench2_2}} \quad
\subfigure[RPS, $H=2^{-5}$ \label{fig:Pc3}] {\includegraphics[width=0.24\textwidth]{paper_bench2_3}}
\subfigure[GRPS-V, $H=2^{-3}$ \label{fig:PT1}] {\includegraphics[width=0.24\textwidth]{paper_bench2_4}} \quad
\subfigure[GRPS-V, $H=2^{-4}$ \label{fig:PT2}] {\includegraphics[width=0.24\textwidth]{paper_bench2_5}} \quad
\subfigure[GRPS-V, $H=2^{-5}$ \label{fig:PT3}] {\includegraphics[width=0.24\textwidth]{paper_bench2_6}}
\subfigure[GRPS-E, $H=2^{-3}$ \label{fig:PE1}] {\includegraphics[width=0.24\textwidth]{paper_bench2_7}} \quad
\subfigure[GRPS-E, $H=2^{-4}$ \label{fig:PE2}] {\includegraphics[width=0.24\textwidth]{paper_bench2_8}} \quad
\subfigure[GRPS-E, $H=2^{-5}$ \label{fig:PE3}] {\includegraphics[width=0.24\textwidth]{paper_bench2_9}} \quad
\subfigure[GRPS-D, $H=2^{-3}$ \label{fig:PD1}] {\includegraphics[width=0.24\textwidth]{paper_bench2_10}} \quad
\subfigure[GRPS-D, $H=2^{-4}$ \label{fig:PD2}] {\includegraphics[width=0.24\textwidth]{paper_bench2_11}} \quad
\subfigure[GRPS-D, $H=2^{-5}$ \label{fig:PD3}] {\includegraphics[width=0.24\textwidth]{paper_bench2_12}}
\caption{Comparison of the convergence with respect to the localization level $\ell$, between RPS, GRPS-V, GRPS-E and GRPS-D. The x-axis stands for the localization level $\ell$ and the y-axis stands for the relative error $\|u_h-u_H^{\ell}\|_{H^1_0(\Omega)}/\|u_h\|_{H^1_0(\Omega)}$ in the $\log_{10}$-scale. }
\label{fig:loclevl}
\end{figure}
In Figure \ref{fig:loclevl}, we show the convergence of the GRPS bases with respect to the localization level $\ell$ and the coarse mesh size $H$. For a fixed $H$, the accuracy improves with increasing $\ell$, until it reaches the saturation level $O(H)$. In Figure \ref{fig:comp_phi}, we demonstrate that, for the fixed localization level $\ell=6$, GRPS-D has an approximately second order convergence rate. Compared with the other bases, GRPS-D has the smallest computational cost to achieve a given approximation error, in terms of the number of layers and coarse degrees of freedom.
\begin{figure}[H]
\centering
{\includegraphics[width=0.7\textwidth]{paper_bench4_19}}
\caption{Convergence curves (error vs. dof) for the \textit{mstrig} example, with fixed localization levels $\ell=2,\,3,\,4,\,5,\,6$, respectively. The x-axis stands for the degrees of freedom in the $\log_{10}$-scale and the y-axis stands for the relative error $\|u_h-u_H^{\ell}\|_{H^1_0(\Omega)}/\|u_h\|_{H^1_0(\Omega)}$ in the $\log_{10}$-scale. }
\label{fig:comp_phi}
\end{figure}
\subsection{SPE10} \label{sec:numerics:spe10}
The second example is the \texttt{SPE10} benchmark problem\footnote{\texttt{http://www.spe.org/web/csp/}}, which is a prototypical example with a high-contrast heterogeneous coefficient.
The physical domain is the rectangular cuboid $[0,220] \times [0,60] \times[0,85]$, with piecewise constant coefficient $\kappa(x_1,x_2,x_3)$ given on grid points. We select the coefficients over layer 63 with respect to the $z$-axis for our two-dimensional test problems, and illustrate its contour in Figure \ref{fig:spe10}. Layer 63 possesses so-called channel features, which makes the problem more challenging. The source term is $g(x_1,x_2) = \sin(x_1)$.
\begin{figure}[H]
\centering
{\includegraphics[width=0.85\textwidth]{paper_spe10_layer63}}
\caption{SPE10 layer 63, $\mathrm{contrast}\approx 10^{16}$.}
\label{fig:spe10}
\end{figure}
In Figure \ref{fig:comp_phi_2}, we show the performance of the RPS, GRPS-V, GRPS-E, and GRPS-D bases for the SPE10 example. For the same degrees of freedom, GRPS-D has the best accuracy compared to the others, and it achieves a stable convergence rate with fewer layers $\ell$.
\begin{figure}[H]
\centering
\includegraphics[width=0.80\textwidth]{paper_bench_spe10_63_3}
\caption{Convergence curves (error vs. dof) for layer 63 of SPE10, with fixed localization levels $\ell=2,\,3,\,4,\,5,\,6$, respectively. The x-axis stands for the degrees of freedom in the $\log_{10}$-scale and the y-axis stands for the relative error $\|u_h-u_H^{\ell}\|_{H^1_0(\Omega)}/\|u_h\|_{H^1_0(\Omega)}$ in the $\log_{10}$-scale. }
\label{fig:comp_phi_2}
\end{figure}
\subsection{Wave equation in heterogeneous media}
Now we investigate the convergence property of the GRPS bases for the wave equation in stationary heterogeneous media,
\begin{equation}
\begin{cases}
& u_{tt}-\nabla \cdot \kappa(x) \nabla u =f(x,t) \,\, \text{ in } \Omega \times [0,T], \\
& u(x,0)=u_0(x)\, \, \text{ in } \Omega, \\
& u_t(x,0)=v_0(x)\,\,\text{ in } \Omega, \\
& u(x,t)=0 \,\, \text{ on } \partial \Omega \times[0,T],
\end{cases}
\label{eqn:wave}
\end{equation}
where $\kappa(x)$ is taken as the \textit{mstrig} coefficient in \eqref{eqn:benchmark1}, $\Omega=[0,1]\times [0,1]$, $u(x,0)=0$, $u_t(x,0)=\sin(2\pi x_1)\sin(2\pi x_2)$, and $f(x,t)=0$. We employ the temporal discretization in \cite{OZ11,owhadi2008homogenization}. To be more precise, let $M \in \mathbb{N}$, $\Delta t = T/M$, and let $\left(t_{n}=n \Delta t\right)_{0 \leqslant n \leqslant M}$ be a discretization of $[0, T]$. We write the trial space $\Psi_{T}^{\ell}$ as
\begin{align*}
\Psi_{T}^{\ell} := &\{ w \in L^2(0,T; H^1_0(\Omega))| w(x, t)=\sum_{i} c_{i}(t) \psi_{i}^{\ell}(x), \\
& c_{i}(t) \text{ are linear on } \left(t_{n}, t_{n+1}\right] \text{ and continuous on } [0,T] \}.
\end{align*}
We denote by $u_H^{\ell} \in \Psi_{T}^{\ell}$ the finite element solution of the following implicit weak form: for any $v \in \Psi^{\ell}$, with $\Psi^\ell$ the (localized) GRPS space,
\begin{equation}
[v, \partial_{t} u_H^{\ell}(t_{n+1})]-[v, \partial_{t} u_H^{\ell}(t_{n})] = -\int_{t_{n}}^{t_{n+1}} a(v, u_H^{\ell}) \dt +\int_{t_{n}}^{t_{n+1}}v f \dt .
\label{eqn:wave_weak_form}
\end{equation}
In \eqref{eqn:wave_weak_form}, $\partial_{t} u_H^{\ell}(t)$ stands for $\lim_{\epsilon \downarrow 0}\left( u_H^{\ell}(t)- u_H^{\ell}(t-\epsilon)\right) / \epsilon$. Once we know the values of $u_H^{\ell}$ and $\partial_{t} u_H^{\ell}$ at $t_n$, \eqref{eqn:wave_weak_form} is a linear system for the unknown coefficients of $\partial_{t} u_H^{\ell}(t_{n+1})$ in $\Psi^{\ell}$. By continuity of $u_H^{\ell}$ in time, we obtain $u_H^{\ell}(t_{n+1})$ by
\begin{equation}
u_H^{\ell}(t_{n+1}) = u_H^{\ell}(t_{n}) + \partial_{t} u_H^{\ell}(t_{n+1})\Delta t.
\label{eqn:wave_weak_form2}
\end{equation}
We use a fixed fine mesh with $h=2^{-8}$ and coarse meshes with $H=2^{-3},\,2^{-4},\,2^{-5}$.
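In matrix form, with mass matrix $M_{ij}=[\psi_i^{\ell},\psi_j^{\ell}]$ and stiffness matrix $A_{ij}=a(\psi_i^{\ell},\psi_j^{\ell})$ assumed assembled (not to be confused with the number of time steps $M$ above), one step of \eqref{eqn:wave_weak_form}--\eqref{eqn:wave_weak_form2} reduces to a single linear solve: since $u_H^{\ell}$ is piecewise linear in time, eliminating $u_H^{\ell}(t_{n+1})$ gives $(M+\tfrac{\Delta t^2}{2}A)v_{n+1}=Mv_n-\Delta t\,Au_n+\Delta t\,F_n$. The following sketch (our own; the load term is integrated by a one-point rule) implements this update and checks it on a single harmonic mode:

```python
import numpy as np

def wave_step(M, A, u, v, dt, F=None):
    # Implicit step:  M (v_new - v) = -dt * A * (u + u_new)/2 + dt * F,
    #                 u_new = u + dt * v_new.
    # Eliminating u_new: (M + dt^2/2 * A) v_new = M v - dt * A u + dt * F.
    rhs = M @ v - dt * (A @ u)
    if F is not None:
        rhs = rhs + dt * F
    v_new = np.linalg.solve(M + 0.5 * dt**2 * A, rhs)
    return u + dt * v_new, v_new

# sanity check on a single harmonic mode (M = 1, A = omega^2 with omega = 2)
M = np.array([[1.0]]); A = np.array([[4.0]])
u = np.array([0.0]); v = np.array([1.0])
dt = 1.0e-3
for _ in range(1000):   # integrate up to T = 1
    u, v = wave_step(M, A, u, v, dt)
print(u[0], np.sin(2.0) / 2.0)  # numerical vs exact u(1) = sin(2)/2
```

In the actual computation, the same update is applied with the assembled GRPS matrices in place of the scalar mode.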
We denote by $u_h(x,t)$ the reference finite element solution obtained from the same weak formulation \eqref{eqn:wave_weak_form} and \eqref{eqn:wave_weak_form2} on the fine mesh. We take $\Delta t = 1/200$ and compute the solution up to time $T = 1$. The approximation error is measured by $$\|u_h-u_H^{\ell}\|_{L^2(0,T;H^1_0(\Omega))}:= (\int_0^T \|u_h(\cdot,t)-u_H^{\ell}(\cdot,t)\|^2_{H^1_0(\Omega)} \dt)^{1/2}.$$ Figure \ref{fig:wave_1} illustrates the convergence behavior with respect to the coarse mesh resolution. The convergence is nearly linear when $\ell$ is large enough, and GRPS-D achieves a stable convergence rate with fewer layers $\ell$ than the other bases.
\begin{figure}[H]
\centering
{\includegraphics[width=0.80\textwidth]{paper_wave_2}}\quad
\caption{Convergence curves for the wave equation with GRPS bases, with fixed localization levels $\ell=2,\,3,\,4,\,5,\,6$, respectively. The x-axis stands for the coarse degrees of freedom in the $\log_{10}$-scale and the y-axis stands for the relative error $\|u_h-u_H^{\ell}\|_{L^2(0,T;H^1_0(\Omega))}/\|u_h\|_{L^2(0,T;H^1_0(\Omega))}$ in the $\log_{10}$-scale. }
\label{fig:wave_1}
\end{figure}
\section{Conclusion} \label{sec:conclusion}
In this paper, we generalize the RPS and Gamblet bases for numerical homogenization within the Bayesian framework. We propose to use edge and first order derivative measurements to construct new generalized rough polyharmonic splines (GRPS) bases. Such a generalization requires some new techniques to prove the localization and convergence properties. Theoretical results on these new GRPS bases are developed and numerical justifications are provided; these bases appear to be efficient for certain multiscale PDEs. Although in this paper we only consider the case of the second order elliptic operator, it is important to note that the framework works for general integro-differential operators \cite{owhadi2019operator}.
For example, we can apply the method to heterogeneous elastic-plastic dynamics \cite{zhang2010global}, and furthermore, to nonlinear multiscale equations \cite{liu2021iterated}.
\section*{Acknowledgement}
This work is partially supported by the National Natural Science Foundation of China (NSFC 11871339, 11861131004). Dr. Zhu's research is further supported by the Foundation of LCP (No.6142A05180501), the BNU-HKBU United International College (UIC) Start-up Research Fund (No.R72021114) and NSFC (No.11771002, 11571047, 11671049, 11671051, 6162003, and 11871339).
\bibliographystyle{spmpsci}
\bibliography{nh}
\appendix
\section{Appendix}
\subsection{Proof of Proposition \ref{prop:span_eq}} \label{sec:prop:span_eq}
\begin{proof}
Remark \ref{rem:deriv} implies that $ \operatorname{span}\{\phi_{\tau,\alpha}\}_{\tau\in \TH, \alpha \in \mathcal{A}} \subset \operatorname{span}\{\phi_{e}\}_{e\in \EH}$. For the opposite direction, without loss of generality, we start with an element $\tau_1$, with one edge on the boundary and two edges in the interior of the domain, as illustrated in Figure \ref{fig:illu1}. The two derivative measurement functions over $\tau_1$ are linearly independent, and they span the same space as the two edge measurement functions on the interior edges of $\tau_1$, due to the Dirichlet boundary condition and \eqref{eqn:deriv}. We can continue the argument by adding neighboring elements (sharing common edges), for example, $\tau_2$ or $\tau_3$ in Fig. \ref{fig:illu1}, until we cover the whole domain $\Omega$. Each time we add an element, by \eqref{eqn:deriv}, the newly added edge measurement functions in the element can be linearly represented by the derivative measurement functions in the new element and the edge measurement function on the shared edge. The latter can again be linearly represented by derivative measurement functions in the existing elements.
Therefore, we have $ \operatorname{span}\{\phi_{\tau,\alpha}\}_{\tau\in \TH, \alpha \in \mathcal{A}} \supset \operatorname{span}\{\phi_{e}\}_{e\in \EH}$. We note that the number of independent edge measurement functions may increase by two ($\tau_2$ in Fig. \ref{fig:illu1}), or one ($\tau_5$ in Fig. \ref{fig:illu2}), or even zero ($\tau_4$ in Fig. \ref{fig:illu2}) when a neighboring element is added. This implies that $\{\phi_{\tau,\alpha}\}_{\tau\in \TH, \alpha \in \mathcal{A}}$ can be linearly dependent.
\begin{figure}[H]
\centering
\subfigure[Extended region\label{fig:illu1}]{\includegraphics[width=0.35\textwidth]{derivative_illustration}}
\subfigure[Extended region by adding another layer \label{fig:illu2}] {\includegraphics[width=0.35\textwidth]{derivative_illustration2}}
\caption{Illustration of Proposition \ref{prop:span_eq}.}
\label{fig:illu}
\end{figure}
\end{proof}
\begin{lemma} \label{lem:uniq}
If $\lambda_{min}(P) > 0 $, then $\xi_i:=\psi_i^0-\psi_i$ is the unique solution in $\Phi^{\perp}$ such that
\begin{equation}
P\xi_i = P\psi_i^0 ,
\label{eqn:solvingchi}
\end{equation}
where $\psi_i^0$ is defined in \eqref{eqn:psi_i^0}.
\begin{proof}
The definition of $\psi_i^0$ implies that $\xi_i \in \Phi^{\perp} $. Since $\psi_i$ is the unique minimizer of \eqref{eqn:psi}, we obtain that $\psi_i$ is a-orthogonal to the subspace $\Phi^{\perp}$, i.e., $P\psi_i=0$, hence $P\xi_i = P\psi_i^0$. The uniqueness is implied by $\lambda_{min}(P) >0$.
\end{proof}
\end{lemma}
\subsection{Proof of Lemma \ref{prop:iteration}} \label{Sec:lemma subspace decomposition}
\begin{proof}
In view of Lemma \ref{lem:uniq}, we consider the approximation error $\|\xi_{i,{\ell}}-\xi_i\|$, instead of analyzing $\|\psi_i^{\ell}-\psi_i\|$ directly.
Let $\psi_{i,0}=\psi_i^0$ and $\xi_{i,0}=0$, and construct $\xi_{i,{\ell}}$ by induction:
\begin{equation}
\xi_{i,{\ell}+1} = \xi_{i,{\ell}} + \beta P(\psi_{i,0}-\xi_{i,{\ell}}),
\label{interation}
\end{equation}
where $\beta$ is a parameter to be identified later. Recalling that applying $P$ expands the support by one layer, we have $\xi_{i,{\ell}} \in \Phi^{\perp}\cap H^1_0(\Omega_i^{\ell})$. The iteration scheme (\ref{interation}) can be viewed as a Richardson iteration method to solve for $\xi_i$ in \eqref{eqn:solvingchi}, with the iteration matrix $I-\beta P$. Taking $\beta=2/(\lambda_{max}(P)+\lambda_{min}(P))$, we deduce that $\|I-\beta P\| \leq \big(\frac{\mathrm{cond}(P)-1}{\mathrm{cond}(P)+1}\big) $, and therefore
\begin{equation}
\|\xi_{i,{\ell}}-\xi_i\|\leq \big(\frac{\mathrm{cond}(P)-1}{\mathrm{cond}(P)+1}\big)^{\ell} \|\xi_i\| .
\end{equation}
Letting $\psi_{i,{\ell}}:=\psi_{i,0}-\xi_{i,{\ell}}$, we have
\begin{equation}
\|\psi_{i,{\ell}}-\psi_i\| = \|\xi_i-\xi_{i,{\ell}}\| \leq \big(\frac{\mathrm{cond}(P)-1}{\mathrm{cond}(P)+1}\big)^{\ell} \|\xi_i\| .
\label{eqn:approximation}
\end{equation}
Since $\psi_{i,{\ell}}-\psi_i^{\ell} \in \Phi^{\perp}\cap H^1_0(\Omega_i^{\ell}) $, the variational property of $\psi_i$ and $\psi_i^{\ell}$ implies that $a(\psi_{i,{\ell}}-\psi_i^{\ell}, \psi_i^{\ell}-\psi_i)=0$, hence $\|\psi_{i,{\ell}}-\psi_i\|^2 = \|\psi_{i,{\ell}}-\psi_i^{\ell}\|^2 + \|\psi_i^{\ell}-\psi_i\|^2$. Therefore,
\begin{equation}
\begin{aligned}
\|\psi_i^{\ell}-\psi_i\| &\leq \|\psi_{i,{\ell}}-\psi_i\|\\
& \leq \big(\frac{\mathrm{cond}(P)-1}{\mathrm{cond}(P)+1}\big)^{\ell} \|\xi_i\| .
\label{eqn:approx}
\end{aligned}
\end{equation}
The a-orthogonality of $\psi_i$ to $\Phi^{\perp}$ also implies
\begin{equation}
\|\psi_i^0\|^2 = \|\psi_i\|^2 +\|\xi_i\|^2 ,\ \|\xi_i\| \leq \|\psi_i^0\|.
\label{eqn:psi_i^0approx}
\end{equation}
With \eqref{eqn:approx} and \eqref{eqn:psi_i^0approx}, we finally draw the conclusion that
\begin{equation}
\|\psi_i^{\ell}-\psi_i\|\leq \big(\frac{\mathrm{cond}(P)-1}{\mathrm{cond}(P)+1}\big)^{\ell} \|\psi_i^0\| .
\end{equation}
\end{proof}
\subsection{Proof of Lemma \ref{lemma:predecomposition}} \label{Sec:lemma:predecomposition}
\begin{proof}
Combining the gradient estimate for $\eta_{\hat{\dotlessi}}$
\begin{equation}
|D\eta_{\hat{\dotlessi}}|_{L^{\infty}} \leq C_{\gamma}H^{-1},
\label{eqn:partion_stability}
\end{equation}
the $H^1$ seminorm estimate for $v_{\hat{\dotlessi}}$
\begin{equation*}
|v_{\hat{\dotlessi}}|_1^2\leq |D\eta_{\hat{\dotlessi}}|_{L^{\infty}} \|\chi\|_{L^2(\omega_{\hat{\dotlessi}})}^2+\|\nabla\chi\|_{L^2(\omega_{\hat{\dotlessi}})}^2,
\end{equation*}
and the Poincar\'{e} inequality, we have
\begin{equation}
\|v_{\hat{\dotlessi}}\|_{H_0^1(\omega_{\hat{\dotlessi}})} ^2\leq C_{\gamma}(H^{-1} \|\chi\|_{L^2(\omega_{\hat{\dotlessi}})}^2+\|\nabla\chi\|_{L^2(\omega_{\hat{\dotlessi}})}^2 ).
\label{eqn:stability}
\end{equation}
Recalling the definition of the overlapping number $n_{max}$ in Lemma \ref{lemma:cond(P)}, it follows that
\begin{equation}
\sum_{\hat{\dotlessi}\in \Ih}\|\chi\|_{H^1(\omega_{\hat{\dotlessi}})}^2 \leq n_{max}\|\chi\|_{H^1(\Omega)}^2,\, \text{ and } \, \sum_{\hat{\dotlessi}\in \Ih}\|\chi\|_{L^2(\omega_{\hat{\dotlessi}})}^2 \leq n_{max}\|\chi\|_{L^2(\Omega)}^2.
\label{eqn:overlap}
\end{equation}
Combining \eqref{eqn:stability}, \eqref{eqn:overlap} and the Poincar\'{e} inequality \eqref{eqn:poincare}, we have
\begin{equation}
\sum_{\hat{\dotlessi}\in \Ih} \|v_{\hat{\dotlessi}}\|_{H_0^1(\omega_{\hat{\dotlessi}})} ^2\leq C \|\chi\|_{H_0^1(\Omega)}^2 ,
\end{equation}
which further implies the following inequality due to the equivalence of $\|\cdot\|$ and $\|\cdot\|_{H_0^1(\Omega)}$,
\begin{equation}
\sum_{\hat{\dotlessi}\in \Ih} \|v_{\hat{\dotlessi}}\| ^2\leq C \kappa_{max}\kappa_{min}^{-1} \|\chi\|^2 ,
\end{equation}
where $C$ only depends on $\gamma$ and $d$.
\end{proof}
\subsection{Proof of Lemma \ref{lemma:stability}} \label{sec:lemma:stability}
\begin{proof}
For all three cases, it suffices to show
\begin{equation}
\|\tilde{P}v_{\hat{\dotlessi}}\| \leq C \|v_{\hat{\dotlessi}}\|,
\label{eqn:stable}
\end{equation}
where $C$ only depends on $d$ and $\gamma$.
{Case V:} By the Poincar\'{e} inequality,
$$ [\phi_i, v_{\hat{\dotlessi}}]\leq \|\phi_i\|_{L^2(\omega_{\hat{\dotlessi}})}\|v_{\hat{\dotlessi}}\|_{L^2(\omega_{\hat{\dotlessi}})}\leq CH\|v_{\hat{\dotlessi}}\|_{H_0^1(\omega_{\hat{\dotlessi}})} , $$
where $C$ only depends on $\gamma$ and $d$. By the shape regularity, $\max_{\hat{\dotlessi}}\#\{\tau_i|\ \tau_i\subset\omega_{\hat{\dotlessi}}\}$ is bounded by a constant independent of $H$. By Lemma \ref{lemma:psi_i^0} we have $\|\psi^0_i\| \leq C H^{-1}$, where $\psi^0_i$ corresponds to $\tau_i\in \TH$, for $i\in\I$. Therefore
\begin{equation*}
\|\tilde{P}v_{\hat{\dotlessi}}\| \leq a_{max}\|\sum_{\tau_i \subset \omega_{\hat{\dotlessi}}} \psi^0_i [\phi_i, v_{\hat{\dotlessi}}] \|_{H_0^1(\omega_{\hat{\dotlessi}})} \leq a_{max}\sum_{\tau_i \subset \omega_{\hat{\dotlessi}}} \|\psi^0_i [\phi_i, v_{\hat{\dotlessi}}] \|_{H_0^1(\omega_{\hat{\dotlessi}})} \leq C\|v_{\hat{\dotlessi}}\|_{H_0^1(\omega_{\hat{\dotlessi}})} .
\label{eqn:InterpStability1}
\end{equation*}
{Case E:} For any $v\in H_0^1(\omega_{\hat{\dotlessi}})$ and any $e_i \subset \omega_{\hat{\dotlessi}}$, by Lemma \ref{lem:trace}, we have $\|v\|_{L^2(e_i)} \leq C H^{1/2}\|v\|_{H_0^1(\omega_{\hat{\dotlessi}})}$. Therefore,
$$ [\phi_i, v_{\hat{\dotlessi}}] = \int_{e_i}|e_i|^{\frac{2-d}{2(d-1)}} v_{\hat{\dotlessi}} \ds \leq |e_i|^{\frac{1}{2(d-1)}} \|v_{\hat{\dotlessi}}\|_{L^2(e_i)} \leq C H\|v_{\hat{\dotlessi}}\|_{H_0^1(\omega_{\hat{\dotlessi}})}, $$
where $C$ only depends on $\gamma$ and $d$. Again, by the shape regularity, $\max\limits_{\hat{\dotlessi}\in \Ih}\#\{e_i|\ \omega_{e_i}\subset\omega_{\hat{\dotlessi}}\}$ is bounded by a constant depending on $\gamma$. Therefore, by Lemma \ref{lemma:psi_i^0} we have
\begin{equation*}
\begin{aligned}
\|\tilde{P}v_{\hat{\dotlessi}}\| \leq a_{max}\|\sum_{\omega_{e_i} \subset \omega_{\hat{\dotlessi}}} [\phi_i, v_{\hat{\dotlessi}}]\psi^0_i \|_{H_0^1(\omega_{\hat{\dotlessi}})} &\leq a_{max}\sum_{\omega_{e_i} \subset \omega_{\hat{\dotlessi}}} \|[\phi_i, v_{\hat{\dotlessi}}]\psi^0_i \|_{H_0^1(\omega_{\hat{\dotlessi}})} \\
&\leq C\|v_{\hat{\dotlessi}}\|_{H_0^1(\omega_{\hat{\dotlessi}})} .
\end{aligned}
\label{eqn:InterpStability2}
\end{equation*}
{Case D:} We obtain $\|\tilde{P}v_{\hat{\dotlessi}}\| \leq C\|v_{\hat{\dotlessi}}\|_{H_0^1(\omega_{\hat{\dotlessi}})} $ by combining the results in case V and case E.
It follows from \eqref{eqn:stable} and Lemma \ref{lemma:predecomposition} that
\begin{equation}
\sum_{\hat{\dotlessi}\in \Ih}\|\tilde{P}v_{\hat{\dotlessi}}\|^2 \leq C \|\chi\|^2.
\label{eqn:InterpStable}
\end{equation}
Lemma \ref{lemma:predecomposition} and \eqref{eqn:InterpStable} indicate that
\begin{equation}
\sum_{\hat{\dotlessi}\in \Ih}\|\chi_{\hat{\dotlessi}}\|^2\leq 2C \|\chi\|^2 ,
\end{equation}
where $C$ depends on $\gamma$, $\kappa_{min}$, $\kappa_{max}$, but not on $H$.
\end{proof}
\subsection{Proof of Lemma \ref{lemma:psi_i^0}}\label{sec:lemma:psi_i^0}
\begin{proof}
For case V, we refer to Lemma 15.24 in \cite{owhadi2019operator}. For case E, we use a scaling argument. Consider $\Omega_{ref}:=[0,1]^2$ consisting of two reference triangles $T_{ref_{1}}$ and $T_{ref_{2}}$ with a common edge $E_1$, as illustrated in Figure \ref{fig:Tref}.
\begin{figure}[H]
\centering
\includegraphics[width=0.4 \textwidth]{Tref}
\caption{Illustration of the reference triangles}
\label{fig:Tref}
\end{figure}
Let
\begin{equation}
\hat{\psi}_{\Omega_{ref}}:=\argmin \limits_{v\in H_0^1(\Omega_{ref}),\int_{E_{1}} \phi_{E_1} v ds=1}\|v\|_{H_0^1(\Omega_{ref})},
\end{equation}
where $\phi_{E_1}$ is the corresponding edge measurement on $E_1$. We have $\|\hat{\psi}_{\Omega_{ref}}\| \leq C$, where $C$ only depends on $\kappa_{max}$ and $d$. We denote the affine mapping $F_k: T_{ref_k} \rightarrow \tau_{i_k}$, $F_k \hat{x}= B_k \hat{x}+b_k$, $k = 1, 2$, where $F_1(E_1)=F_2(E_1)=e_{j}$. By the shape regularity of $\TH$, $B_1$ and $B_2$ are nonsingular, and $\|B_1\|,\|B_2\| \sim \mathcal{O}(H)$. We define $\psi_{i_1}\in H^1(\tau_{i_1})$ by $\psi_{i_1}(x):=\sqrt{2}|e_j|^{-\frac{d}{2(d-1)}}\hat{\psi}_{\Omega_{ref}}(F_1^{-1}{x})$ for all $x\in \tau_{i_1}$. By the definition of the edge measurement $\phi_{e_j}$ in \eqref{eqn:edge}, we have $ [\psi_{i_1},\phi_{e_j}] = 1, $ and
\begin{equation}
|\psi_{i_1}|_1 \leq \|B_1^{-1}\| |e_j|^{-\frac{d}{2(d-1)}} |\operatorname{det}(B_1)|^{1/2}|\hat{\psi}_{\Omega_{ref}}|_1 \leq C H^{-1},
\end{equation}
where $C$ only depends on the shape regularity parameter $\gamma$, $\kappa_{max}$ and $d$. Similarly, we can obtain $\psi_{i_2}$ on $\tau_{i_2}$ and patch them together to form a function $\psi_{i_1,i_2}\in H^1_0(\omega_{e_j}) $, which satisfies $\|\psi_{i_1,i_2}\| \leq C H^{-1}$ and $[\psi_{i_1,i_2},\phi_{e_j}] = 1$.
Since $\psi^0_{j}$ is the minimizer in $H^1_0(\omega_{e_j})$, we have $\|\psi^0_{j}\| \leq \|\psi_{i_1,i_2}\| \leq C_1 H^{-1}$, where $C_1$ only depends on $\gamma$, $\kappa_{max}$ and $d$. Case D can be proved similarly. \end{proof} \end{document}
{"config": "arxiv", "file": "2103.01788/rps_main_cicp.tex"}
TITLE: Chevalley basis for $G_2$ QUESTION [0 upvotes]: I want to find the Chevalley basis for the exceptional group $G_2$. Could you point to literature where the computation is done in detail or show me how to do it? REPLY [1 votes]: One can find such a basis in Humphreys' Introduction to Lie Algebras and Representation Theory, $\S$19.3. He leaves (straightforward) computations to the reader.
{"set_name": "stack_exchange", "score": 0, "question_id": 1682183}
TITLE: Three variable, second-degree symmetric Diophantine equation QUESTION [10 upvotes]: Find integers $f,g,h$ such that $3(f^2+g^2+h^2)=14(fg+gh+hf)$. You can do it using a computer or by hand. I tried this problem for ages, got nowhere. Unfortunately I don't know how to program, but I thought it would help a lot here. (Just set up a program to check loads of values until you get one that works?) I would appreciate any type of solution to this. I only need one (f,g,h) that works. Thanks! REPLY [26 votes]: We have $3(f+g+h)^2 = 3(f^2 + g^2 + h^2) + 6(fg + gh + hf) = 20(fg + gh + hf)$, so that both $f+g+h$ and $fg + gh + hf$ are divisible by $5$. Plugging in the roots of $(X-f)(X-g)(X-h) = X^3 - (f+g+h)X^2 + (fg+gh+hf)X - fgh$, we find that $f^3 \equiv g^3 \equiv h^3 \equiv fgh\pmod{5}$, which (by uniqueness of cube roots modulo $5$) is only possible if $f\equiv g \equiv h\pmod{5}$, and it follows that $f$, $g$, and $h$ are all divisible by $5$. But $(f/5,g/5,h/5)$ would then give us another solution of smaller magnitude. By descent, the only possible solution is $(0,0,0)$.
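Since the question mentions using a computer: a direct brute-force search (my own snippet, independent of the argument above) confirms that nothing besides $(0,0,0)$ shows up in a small box, as the descent argument predicts:

```python
solutions = [
    (f, g, h)
    for f in range(-20, 21)
    for g in range(-20, 21)
    for h in range(-20, 21)
    if 3 * (f*f + g*g + h*h) == 14 * (f*g + g*h + h*f)
]
print(solutions)  # only the trivial solution survives
```

Of course the search only covers a finite box; the descent proof is what rules out solutions of any size.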
{"set_name": "stack_exchange", "score": 10, "question_id": 1133888}
\begin{definition}[Definition:Well-Ordered Integral Domain/Definition 1] $\struct {D, +, \times, \le}$ is a '''well-ordered integral domain''' {{iff}} the [[Definition:Total Ordering Induced by Strict Positivity Property|ordering $\le$]] is a [[Definition:Well-Ordering|well-ordering]] on the [[Definition:Set|set]] $P$ of [[Definition:Strictly Positive|(strictly) positive elements]] of $D$. \end{definition}
{"config": "wiki", "file": "def_30031.txt"}
TITLE: Is a function with everywhere discontinuities of the second kind always measurable? QUESTION [1 upvotes]: Let $f : [0,1] \to \left\{ 0, 1 \right\}$ be a function that has at each point a discontinuity of the second kind. Is $f$ measurable if we equip the domain with the Borel or even Lebesgue $\sigma$-algebra and the range with the discrete $\sigma$-algebra? The indicator of the Cantor set is not a counterexample since it is not discontinuous everywhere. The indicator of the rationals is nowhere continuous and is measurable. This can be helpful in order to search for counterexamples: If $A \subseteq [0,1]$ is a set such that both $A$ and $[0,1] \setminus A$ are dense in $[0,1]$ then the indicator of $A$ is nowhere continuous. So, the question can be weakened: Is every such set $A$ necessarily measurable? Wikipedia says that the set of discontinuities is an $F_\sigma$ set, thus a countable union of closed sets and therefore Borel measurable. In our case it is the whole interval $[0,1]$, so this statement doesn't help. REPLY [3 votes]: Using the axiom of choice we can construct a Bernstein set, which is a set $A\subseteq[0,1]$ such that no perfect set is a subset of $A$. The construction is such that $[0,1]\setminus A$ is also a Bernstein set, and $A$ meets every perfect set, but contains none. We list all the perfect sets (there are $2^{\aleph_0}$ of them, and we use the smallest possible order type) then we choose from each one two points, one which will enter $A$ and the other which will not. At each step we only chose $<2^{\aleph_0}$ points, so we can continue and choose new points at each step. This ensures that $A$ and its complement are both meeting every perfect subset of $[0,1]$ (and therefore neither contains any). In particular $A$ is dense co-dense and not measurable. Not Lebesgue and certainly not Borel measurable.
{"set_name": "stack_exchange", "score": 1, "question_id": 987339}
TITLE: Inequality Proof by Induction involving Euler Totient function and Summation of Euler's Phi function QUESTION [4 upvotes]: I want to Prove the following result using induction: Show that P$_n$: \begin{equation} 2\sum_{k=1}^n \left(\prod_{p \vert k}\left(1-\frac{1}{p}\right)\right)+2 \geq \frac{12(n-1)^2}{\pi^2}\label{eq:15} \end{equation} for all $n,k\in\mathbb{Z}_+$, where the product is over all prime factors of $k$. Proof: P$_1$ case: For $n = 2$ we get, \begin{equation} 2\sum_{k=1}^2 \left(\prod_{p \vert k}\left(1-\frac{1}{p}\right)\right)+2 \geq \frac{12(2-1)^2}{\pi^2} \implies 4 \geq \frac{12}{\pi^2}, \hspace{2em} \text{P$_1$ is True.} \end{equation} P$_\alpha$ case: Suppose Eqn\eqref{eq:15} holds for some $\alpha$ s.t. \begin{equation}2\sum_{k=1}^\alpha \left(\prod_{p \vert k}\left(1-\frac{1}{p}\right)\right)+2 \geq \frac{12(\alpha-1)^2}{\pi^2} ,\hspace{2em} \forall (\alpha,k) \in \mathbb{Z}_+. \label{eq:17} \end{equation} P$_{\alpha + 1}$ case: Now consider that it holds for some $\alpha$ + 1, \begin{equation} 2\sum_{k=1}^{\alpha + 1} \left(\prod_{p \vert k}\left(1-\frac{1}{p}\right)\right)+2 \geq \frac{12((\alpha +1)-1)^2}{\pi^2},\hspace{2em} \label{eq:18} \end{equation} Simplifying and expanding the LHS and RHS we get, \begin{equation} 2\left(\underbrace{\prod_{p \vert 1}\left(1-\frac{1}{p}\right) + ... 
+ \prod_{p \vert \alpha}\left(1-\frac{1}{p}\right)}_{\text{using} \ \text{P}_\alpha} + \prod_{p \vert \alpha + 1}\left(1-\frac{1}{p}\right) \right)+2 \geq \frac{12\alpha^2}{\pi^2} ,\hspace{2em} \label{eq:19} \end{equation} Therefore we get, \begin{equation} \left[2\sum_{k=1}^\alpha \left(\prod_{p \vert k}\left(1-\frac{1}{p}\right)\right) + 2\left(\prod_{p \vert \alpha + 1}\left(1-\frac{1}{p}\right)\right) \right]+2 \geq \frac{12\alpha^2}{\pi^2} ,\hspace{2em} \label{eq:20} \end{equation} We are assuming that Eqn\eqref{eq:17} is true so, if we replace $\Phi(\alpha)$ in the above expression with the smaller number $\frac{12(\alpha-1)^2}{\pi^2}$, we produce a smaller result. So: \begin{equation}2\sum_{k=1}^\alpha \left(\prod_{p \vert k}\left(1-\frac{1}{p}\right)\right) + 2\left(\prod_{p \vert \alpha + 1}\left(1-\frac{1}{p}\right)\right) + 2 \geq\frac{12(\alpha-1)^2}{\pi^2} + 2\left(\prod_{p \vert \alpha + 1}\left(1-\frac{1}{p}\right)\right) + 2. \label{eq:21} \end{equation} Now, \begin{equation} \frac{12(\alpha-1)^2}{\pi^2} + 2\left(\prod_{p \vert \alpha + 1}\left(1-\frac{1}{p}\right)\right) + 2 = \frac{12(2-\alpha)^2}{\pi^2}. 
\label{eq:22} \end{equation} Thus, \begin{equation} 2\sum_{k=1}^\alpha \left(\prod_{p \vert k}\left(1-\frac{1}{p}\right)\right) + 2\left(\prod_{p \vert \alpha + 1}\left(1-\frac{1}{p}\right)\right) + 2 \geq \frac{12(2-\alpha)^2}{\pi^2} \label{eq:23} \end{equation} Since $\alpha$ $\geq$ 1, we know that $\frac{12(2-\alpha)^2}{\pi^2}$ $\geq$ 0, so \begin{equation} 2\sum_{k=1}^\alpha \left(\prod_{p \vert k}\left(1-\frac{1}{p}\right)\right) + 2\left(\prod_{p \vert \alpha + 1}\left(1-\frac{1}{p}\right)\right) + 2 \geq \frac{12(\alpha-1)^2}{\pi^2} \geq 0 \label{eq:24} \end{equation} Or, \begin{equation} 2\Phi(\alpha) + \varphi(\alpha + 1) + 2 \geq \frac{12(\alpha-1)^2}{\pi^2} \geq 0 \label{eq:25} \end{equation} Where, \begin{equation} \Phi(\alpha) = \sum_{k=1}^\alpha \left(\prod_{p \vert k}\left(1-\frac{1}{p}\right)\right) \end{equation} and \begin{equation} \varphi(\alpha + 1) = \prod_{p \vert \alpha + 1}\left(1-\frac{1}{p}\right) \end{equation} Hence, by the principle of mathematical induction, P$_n$ is true for all integers $\alpha \geq 1$. Q.E.D. Question: I know I have made a mistake in this proof and it is definitely not correct, but I'm not sure what I've done wrong. I think the error is in this part, \begin{equation} \frac{12(\alpha-1)^2}{\pi^2} + 2\left(\prod_{p \vert \alpha + 1}\left(1-\frac{1}{p}\right)\right) + 2 = \frac{12(2-\alpha)^2}{\pi^2} \end{equation} \begin{equation} 2\sum_{k=1}^\alpha \left(\prod_{p \vert k}\left(1-\frac{1}{p}\right)\right) + 2\left(\prod_{p \vert \alpha + 1}\left(1-\frac{1}{p}\right)\right) + 2 \geq \frac{12(2-\alpha)^2}{\pi^2} \end{equation} Since $\alpha\geq 1$, we know that $\frac{12(2-\alpha)^2}{\pi^2} \geq 0$, so, what mistake did I make and how should I fix it?
REPLY [4 votes]: It is worse than a "mistake": it is a non-justified claim: $$\frac{12(\alpha-1)^2}{\pi^2} + 2\left(\prod_{p \vert \alpha + 1}\left(1-\frac{1}{p}\right)\right) + 2 = \frac{12(2-\alpha)^2}{\pi^2},$$ which is obviously false by irrationality of $\pi.$ Your "proof" essentially consists in this claim and is unsalvageable. Your "Result" cannot be proved by a simple induction because it would require, for every integer $\alpha\ge2$: $$2\prod_{p\mid\alpha+1}\left(1-\frac1p\right)\ge\frac{12(\alpha^2-(\alpha-1)^2)}{\pi^2},$$ which is false for every prime $\alpha+1.$ Even your "Result" itself is grossly false: test $P(4),$ or realize that the LHS of your $P(n)$ is $\le2n+2.$
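Following the answer's suggestion, a quick numeric check of $P(4)$ confirms the failure. This is a sketch; it assumes $P(n)$ is the displayed inequality $2\sum_{k=1}^{n}\prod_{p\mid k}(1-1/p)+2\geq 12n^2/\pi^2$, and the helper names are mine:

```python
import math
from fractions import Fraction

def prod_term(k):
    """Exact value of prod over distinct primes p | k of (1 - 1/p), i.e. phi(k)/k."""
    result = Fraction(1)
    p, m = 2, k
    while p * p <= m:
        if m % p == 0:
            result *= 1 - Fraction(1, p)
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:                       # remaining prime factor
        result *= 1 - Fraction(1, m)
    return result

def lhs(n):
    """Left-hand side of the claimed inequality P(n)."""
    return 2 * sum(prod_term(k) for k in range(1, n + 1)) + 2

n = 4
print(float(lhs(n)), 12 * n**2 / math.pi**2)
# LHS = 22/3 ~ 7.33 while RHS ~ 19.45, so P(4) is false
```

This also illustrates the answer's other remark: each product is at most 1, so the left-hand side is at most $2n+2$, while the right-hand side grows quadratically.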
{"set_name": "stack_exchange", "score": 4, "question_id": 4570609}
TITLE: Element of a ring acting as a permutation on an ideal QUESTION [4 upvotes]: I am investigating cases when $r \cdot I = I$ for some element $r$ and an ideal $I$ of a commutative ring or rng $R$. Clearly, $r \cdot \langle 0 \rangle = \langle 0 \rangle$ for any element $r$ of $R$, and $u \cdot R = R$ for any unit $u$. Let's call elements that generate the same principal ideal associates. $[r]$ is the equivalence class of associates of an element $r$. Let $M$ be the sum of all ideals $I$ such that $r \cdot I = I$. $M$ is the largest ideal with this property. Obviously, if $r \cdot M = M$, then $[r] \cdot M = M$. Every element of $[r]$ acts as a permutation on $M$. Let's check some examples on cyclic rings: $R = \mathbb Z_{10}, [r] = [2] = \{2,4,6,8\}, M = 2 \mathbb Z_{10} = \{0,2,4,6,8\}$: $2 \cdot M = (2\ 4\ 8\ 6)$ $4 \cdot M = (6\ 8\ 4\ 2)$ $6 \cdot M = Id$ $8 \cdot M = (2\ 8)(4\ 6)$ $R = \mathbb Z_{10}, [r] = [5] = \{5\}, M = 5 \mathbb Z_{10} = \{0,5\}$: $5 \cdot M = Id$ $R = \mathbb Z_{12}, [r] = [2] = \{2,10\}, M = 4 \mathbb Z_{12} = \{0,4,8\}$: $2 \cdot M = (4\ 8)$ $10 \cdot M = Id$ $R = \mathbb Z_{12}, [r] = [3] = \{3,9\}, M = 3 \mathbb Z_{12} = \{0,3,6,9\}$: $3 \cdot M = (3\ 9)$ $9 \cdot M = Id$ Questions: 1. Is the set of permutations $[r] \cdot M = M$ always a group? 2. If $[r] \cdot M = M$, does it mean $[r] \cdot I = I$ for any ideal $I \subseteq M$? 3. If $\langle 0 \rangle \subsetneq M \subsetneq R$ for an element $r$, does it mean $r$ is irreducible? 4. Are there any interesting properties of such elements and ideals? REPLY [5 votes]: First, some setup. $M$, as an ideal of $R$, is an $R$-module. Multiplication of $M$ by any element of $R$ (with your property or not) is an endomorphism of $M$ (as an abelian group, because $r(m_1+m_2) = rm_1+rm_2$), and because $R$ is commutative, it's even an $R$-module endomorphism (since $r(r'm) = r'(rm)$). 
Thus sending $r$ to its multiplication action on $M$ gives us a set map $$\Phi: R\rightarrow \operatorname{End}_R(M).$$ Furthermore, right-distributivity and associativity in $R$ mean that $\Phi$ is a rng homomorphism, and if $R$ has a unit, it's even a ring homomorphism. The statement that $r\cdot M = M$ is the statement that $\Phi(r)$ is surjective from $M$ to $M$. Now, one thing that needs to be cleared up: If $r\cdot M = M$, it does not imply that $r$'s action on $M$ (which I have named $\Phi(r)$) is a permutation. It must be surjective, but it could fail to be injective. Example: let $$ R = \mathbb{Z}[x,y_1,y_2,y_3,\dots] / (xy_1, y_1-xy_2, y_2-xy_3, \dots),$$ and let $M$ be the ideal generated by the $y$'s, and take $r=x$. Then multiplication by $r=x$ is surjective onto $M$ since $y_i = xy_{i+1}\in x\cdot M$ for all $i$. However, $xy_1=0$, so $r=x$'s action on $M$ has a nontrivial kernel. So, in full generality, we need to separate the question about what is implied by the assumption that $r\cdot M = M$, versus the stronger assumption that $r$ acts as a permutation on $M$. On the other hand, if $R$ is noetherian, then $r\cdot M=M$ does imply that $r$ acts as a permutation on $M$, which we can see as follows. If $r\cdot M = M$, then $\Phi(r)$ is a surjective $R$-module map from $M$ to itself. In this situation, the first isomorphism theorem tells us that $M \cong M/\ker \Phi(r)$ as $R$-modules. If $r$ acts non-injectively on $M$, i.e., if $\ker \Phi(r)$ is nontrivial, then this isomorphism means $r$ also acts non-injectively on $M/\ker\Phi(r)$, i.e., multiplication by $r$ puts something in $\ker\Phi(r)$ that wasn't already there. In other words, $\ker\Phi(r^2)$ is strictly bigger than $\ker\Phi(r)$. Similar reasoning shows that $\ker\Phi(r^{j+1})$ is strictly bigger than $\ker\Phi(r^j)$, for all $j$. Thus $$\ker\Phi(r)\subset\ker\Phi(r^2)\subset \dots$$ is a nonterminating, strictly increasing chain of $R$-submodules of $M$. 
But an $R$-submodule of $M$ is an ideal of $R$, because $M\subset R$, so if $R$ is noetherian, this is impossible. So, if you are willing to assume $R$ is noetherian, then we can go ahead and treat "$r\cdot M=M$" and "$r$ acts as a permutation on $M$" as equivalent statements. For the remainder of this answer, I will assume the stronger version: that $r$ acts as a permutation on $M$, i.e., that $\Phi(r)$ is bijective. (Your interest is in $M$'s that are maximal with respect to this property.) However, I will not assume that $R$ is noetherian. Finally, your questions: 1) For a given $r$ and $M$, does the set of permutations coming from multiplication of $r$'s associates by $M$ necessarily form a group? Answer: no. Let $$ R = \mathbb{Q}[x;\dots,y_{-2},y_{-1},y_0,y_1,y_2,\dots]/(\dots,y_{-1} - xy_0, y_0 - xy_1, y_1-xy_2,\dots),$$ and let $M$ be the ideal generated by the $y_i$'s. Let $r=x$. Then $r\cdot M=M$ by similar logic as above: every $y_i$ is the image of multiplication of $y_{i+1}$ by $x$. Furthermore, $M$ is maximal with respect to this property: if $M'$ is any ideal properly containing $M$, then $M'$ has nonzero image in $R/M = \mathbb{Q}[x]$, and then this image is a nonzero ideal in $\mathbb{Q}[x]$ on which multiplication by $x$ acts surjectively. But this doesn't exist: any nonzero ideal in $\mathbb{Q}[x]$ contains a polynomial of minimal degree, and multiplication by $x$ raises the degree, so there is no nonzero ideal in $\mathbb{Q}[x]$ on which multiplication by $x$ acts surjectively. So this is an example of $r,M$ that fits your framework. However, there is no associate of $r=x$, nor any element of $R$ at all, that inverts $x$'s action on $M$. Indeed, e.g., $y_0\in M$, and $xy_0=y_{-1}$, but there is no $r\in R$ such that $ry_{-1} = y_0$. 2) No. In the previous example, take $I$ to be the ideal generated by $y_0,y_{-1},\dots$. Then $r\cdot I$ does not contain $y_0$. 
Here's even a noetherian example, which shows that $r\cdot I$ doesn't even have to be contained in $I$: let $$ R = \mathbb{Q}[t,x,y]/(x-ty, y-tx).$$ Let $M$ be the ideal generated by $x$ and $y$, and let $r=t$. Then $r\cdot M = M$, but within $M$, $I=(x)$ and $J=(y)$ are transposed with each other by $r=t$, not fixed. 3) No. Take the $R$ of the previous (non-noetherian) example and consider $S = R[z,w]/(x-zw)$, and extend $M$ to $S$ (i.e., replace the previous $M$ with the ideal generated by the $y_i$'s in $S$ instead of $R$). (Retain $r=x$.) Then it is still true $M$ is maximal with respect to the property that $r\cdot M = M$, by similar logic: $S/M$ is $\mathbb{Q}[x,z,w]/(x-zw)\cong \mathbb{Q}[z,w]$. This is again a ring in which there is no ideal that is surjected onto itself by multiplication by $x=zw$. However, $r=x=zw$ is definitely not irreducible. The same type of argument should also work for the noetherian example. Take $S = R[z,w]/(t-zw)$. 4) I don't have a comprehensive answer. (Who decides what's interesting anyway? :) One observation is that if $M$ is nonzero, but finitely generated as an $R$-module (for example, if $R$ is noetherian), then it is not possible for $r$ to lie in the Jacobson radical of $R$. (If $M$ is finitely generated and $rM = M$ with $r$ in the Jacobson radical, then $M$ would $=0$ by Nakayama's lemma.) Therefore, there is some maximal ideal $\mathfrak{m}$ with $r\notin \mathfrak{m}$.
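As an aside, the finite examples in the question can be checked mechanically. The sketch below (helper names are mine, and it computes $M$ as the ideal generated by the gcd of all divisor-generators $d$ of $n$ with $r\cdot(d)=(d)$, which is valid in $\mathbb{Z}_n$ since the sum of stable ideals is stable):

```python
from math import gcd
from functools import reduce

def ideal(n, g):
    """The principal ideal (g) in Z_n, as a sorted list."""
    return sorted({(g * k) % n for k in range(n)})

def stable(n, r, d):
    """Does r * (d) = (d) hold in Z_n?"""
    I = set(ideal(n, d))
    return {(r * m) % n for m in I} == I

def max_stable_ideal(n, r):
    """M = sum of all ideals I with r*I = I; generated by the gcd of
    all divisors d of n whose ideal (d) is stable under r."""
    ds = [d for d in range(1, n + 1) if n % d == 0 and stable(n, r, d)]
    g = reduce(gcd, ds + [n])
    return ideal(n, g)

def associates(n, r):
    """Elements of Z_n generating the same principal ideal as r."""
    return [s for s in range(n) if ideal(n, s) == ideal(n, r)]

def perm(n, r, M):
    """The map m -> r*m mod n on M (a permutation when r*M = M)."""
    return {m: (r * m) % n for m in M}

# Z_12, [r] = [2]: the question's example gives M = 4*Z_12 = {0, 4, 8}
print(associates(12, 2), max_stable_ideal(12, 2))
# [2, 10] [0, 4, 8]
```

Running `perm(10, 2, max_stable_ideal(10, 2))` reproduces the cycle $(2\ 4\ 8\ 6)$ from the question's $\mathbb{Z}_{10}$ example.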
{"set_name": "stack_exchange", "score": 4, "question_id": 3538503}
TITLE: Finding the domain of a natural log composite function QUESTION [0 upvotes]: If $f(x) = \ln(x)$, what is the largest possible domain of $f(f(x))$? I only know that the input for $\ln(x)$ must always be greater than 0, so the domain must be $x > 0$ for $\ln(x)$ alone. However, once I plug $f(x)$ into $f(x)$ to get $f(f(x)) = \ln(\ln(x))$, I'm not sure how to approach this problem as I believe I have to solve $\ln(\ln(x)) > 0$. REPLY [2 votes]: You have correctly identified that the domain of $\ln(x)$ is $x>0$. As $\ln(x)$ is only defined for positive $x$ values, the domain of the composition $$\ln(\ln (x))$$ will be where $\ln (x) > 0$. We have $$ \ln (x) > 0 \implies e^{\ln(x)}> e^0\implies x > 1$$ Therefore, the domain of $\ln(\ln (x))$ is $x>1$.
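For a quick numerical sanity check of the derived domain (a sketch; Python's `math.log` raises `ValueError` outside its domain):

```python
import math

def f(x):
    return math.log(x)

# f(f(x)) = ln(ln x) succeeds exactly when x > 1
print(f(f(1.5)))                # defined, since ln(1.5) > 0

for bad in (1.0, 0.5):          # here ln(x) <= 0, so ln(ln x) is undefined
    try:
        f(f(bad))
    except ValueError as e:
        print(bad, "->", e)
```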
{"set_name": "stack_exchange", "score": 0, "question_id": 3336661}
\begin{document} \title{ Weak-star point of continuity property and Schauder bases } \author{Gin{\'e}s L{\'o}pez-P{\'e}rez and Jos{\'e} A. Soler Arias} \address{Universidad de Granada, Facultad de Ciencias. Departamento de An\'{a}lisis Matem\'{a}tico, 18071-Granada (Spain)} \email{glopezp@ugr.es, jasoler@ugr.es} \thanks{Partially supported by MEC (Spain) Grant MTM2006-04837 and Junta de Andaluc\'{\i}a Grants FQM-185 and Proyecto de Excelencia P06-FQM-01438.} \subjclass{46B20, 46B22. Key words: Point of continuity property, trees, boundedly complete sequences} \maketitle \markboth{G. L\'{o}pez and Jos{\'e} A. Soler }{ Weak-star PCP and Schauder bases } \begin{abstract} We characterize the weak-star point of continuity property for subspaces of dual spaces with separable predual and we deduce that the weak-star point of continuity property is determined by subspaces with a Schauder basis in the natural setting of dual spaces of separable Banach spaces. As a consequence of the above characterization we get that a dual space satisfies the Radon-Nikodym property if, and only if, every seminormalized topologically weak-star null tree has a boundedly complete branch, which improves some results in \cite{DF} obtained for the separable case. Also, as a consequence of the above characterization, the following result obtained in \cite{R1} is deduced: {\it every seminormalized basic sequence in a Banach space with the point of continuity property has a boundedly complete subsequence.} \end{abstract} \section{Introduction} \par \bigskip We recall (see \cite{bou} for background) that a bounded subset $C$ of a Banach space satisfies the Radon-Nikodym property (RNP) if every subset of $C$ is dentable, that is, every subset of $C$ has slices of diameter arbitrarily small. A Banach space is said to verify the RNP whenever its closed unit ball satisfies the RNP. 
It is well known that separable dual spaces have RNP and spaces with RNP contain many subspaces which are themselves separable dual spaces. (Note that containing many separable dual subspaces is equivalent to containing many boundedly complete basic sequences). As RNP is separably determined, that is, a Banach space $X$ has RNP whenever every separable subspace of $X$ has RNP, it seems natural to look for a sequential characterization of RNP in terms of boundedly complete basic sequences. In \cite{DF} it is proved that the space $B_{\infty}$ (which fails to have RNP) still has the property: any $w$-null normalized sequence has a boundedly complete basic subsequence. However, it has been proved in \cite{DF} that the dual space of a separable Banach space $X$ has RNP if, and only if, every weak-star null tree in the unit sphere of $X^*$ has some boundedly complete basic branch. It then seems natural to look for a characterization of RNP for general dual Banach spaces in terms of boundedly complete basic sequences, extending the result in \cite{DF} proved for duals of separable Banach spaces. For this, we introduce the concept of topologically weak-star null tree, which is a weaker condition than the weak-star null tree condition, and we characterize in terms of trees the RNP for weak-star compact subsets of general dual Banach spaces in proposition \ref{r1}. As a consequence, we get in theorem \ref{r2} that a dual Banach space $X$ has RNP if, and only if, every seminormalized and topologically weak-star null tree in the unit sphere of $X$ has some boundedly complete branch, which has as an immediate corollary the aforementioned result in \cite{DF}. We recall that a closed and bounded subset $C$ of a Banach space $X$ satisfies the point of continuity property (PCP) if every closed subset of $C$ has some point of weak continuity, that is, the weak and the norm topologies agree at this point. 
Also, when $X$ is a dual space, $C$ is said to satisfy the weak-star point of continuity property ($w^*$-PCP) if every closed subset of $C$ has some point of weak-$*$ continuity, equivalently every nonempty subset of $C$ has relatively $w^*$-open subsets with diameter arbitrarily small. $X$ has PCP (resp. $w^*$-PCP when $X$ is a dual space) if $B_X$, the closed unit ball of $X$, has PCP (resp. $w^*$-PCP). Also, a subspace $X$ of a dual space $Y^*$ is said to verify the $w^*$-PCP if $B_X$, as a subset of $Y^*$, has the $w^*$-PCP. It is well known that RNP implies PCP, the converse being false, and it is clear that $w^*$-PCP implies PCP. Moreover, RNP and $w^*$-PCP are equivalent for convex $w^*$-compact sets in a dual space, see theorem 4.2.13 in \cite{bou}. We will use this last fact freely in the future. We refer to \cite{R2} for background about PCP and $w^*$-PCP. It is a well known open problem \cite{B} whether PCP (resp. RNP) is determined by subspaces with a Schauder basis. Our goal is to characterize $w^*$-PCP for closed and bounded subsets of dual spaces of separable Banach spaces and to conclude in theorem \ref{fin} that, in fact, $w^*$-PCP is determined by subspaces with a Schauder basis in the natural setting of subspaces of dual spaces with a separable predual. As an easy consequence we also deduce from the above characterization of $w^*$-PCP that every seminormalized basic sequence in a Banach space with PCP has a boundedly complete basic subsequence. This last result was obtained in \cite{R1}. We begin with some notation and preliminaries. Let $X$ be a Banach space and let $B_X$, respectively $S_X$, be the closed unit ball, respectively sphere, of $X$. The weak-star topology in $X$, when it is a dual space, will be denoted by $w^*$. If $A$ is a subset of $X$, $\overline{A}^{w^*}$ stands for the weak-star closure of $A$ in $X$. 
Given a basic sequence $\{e_n\}$ in $X$, $\{e_n\}$ is said to be {\it semi-normalized} if $0<\inf_n\Vert e_n\Vert\leq \sup_n\Vert e_n\Vert <\infty$ and the closed linear span of $\{e_n\}$ is denoted by $[e_n]$. $\{e_n\}$ is called {\it boundedly complete} provided whenever scalars $\{\lambda_i\}$ satisfy $\sup_n\Vert\sum_{i=1}^n\lambda_ie_i\Vert<\infty$, then $\sum_n\lambda_ne_n$ converges. $\{e_n\}$ is called {\it shrinking} if $[e_n]^*=[e_n^*]$, where $\{e_n^*\}$ denotes the sequence of biorthogonal functionals associated to $\{e_n\}$. A boundedly complete basic sequence $\{e_n\}$ in a Banach space $X$ spans a dual space. In fact, $[e_n^*]^*=[e_n]$, where $\{e_n^*\}$ denotes the sequence of biorthogonal functionals in the dual space $X^*$ \cite{LZ}. Following the notation in \cite{S}, it is said that a sequence $\{e_n\}$ in a Banach space is {\it type P} if the set $\{\sum_{k=1}^ne_k:n\in \natu\}$ is bounded. Observe, from the definitions, that type P seminormalized basic sequences always fail to be boundedly complete basic sequences. A sequence $\{x_n\}$ in a Banach space $X$ is said to be {\it strongly summing} if whenever $\{\lambda_n\}$ is a sequence of scalars with $\sup_n\Vert\sum_{k=1}^n\lambda_kx_k\Vert<\infty$ one has that the series of scalars $\sum_n\lambda_n$ converges. The remarkable $c_0$-theorem \cite{R3} asserts that every weak-Cauchy and not weakly convergent sequence in a Banach space not containing subspaces isomorphic to $c_0$ has a strongly summing basic subsequence. $\natu^{<\omega}$ stands for the set of all ordered finite sequences of natural numbers together with the empty sequence denoted by $\emptyset$. We consider the natural order in $\natu^{<\omega}$, that is, given $\alpha=(\alpha_1,\ldots,\alpha_p),\ \beta=(\beta_1,\ldots,\beta_q)\in \natu^{<\omega}$, one has $\alpha\leq \beta$ if $p\leq q$ and $\alpha_i=\beta_i$ $\forall 1\leq i\leq p$. 
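To fix ideas, the following standard example (added here for orientation; it is not taken from the original text) contrasts the notions of type P and boundedly complete bases:

```latex
% Standard illustration (not from the original paper).
The unit vector basis $\{e_n\}$ of $c_0$ is type P, since
$\Vert\sum_{k=1}^{n}e_k\Vert_{\infty}=1$ for every $n$, but it is not
boundedly complete: for $\lambda_k=1$ the partial sums are bounded while
$\sum_{n}e_n$ does not converge in $c_0$. By contrast, the unit vector
basis of $\ell_1$ is boundedly complete, since
$\sup_{n}\Vert\sum_{k=1}^{n}\lambda_k e_k\Vert_{1}
 =\sup_{n}\sum_{k=1}^{n}\vert\lambda_k\vert<\infty$
forces the absolute convergence of $\sum_{n}\lambda_n e_n$.
```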
If $\alpha=(\alpha_1,\ldots,\alpha_p)\in \natu^{<\omega}$ we write $\alpha -=(\alpha_1,\ldots,\alpha_{p-1})$. Also $\vert \alpha\vert$ denotes the {\it length} of the sequence $\alpha$, and $\emptyset$ is the minimum of $\natu^{<\omega}$ with this partial order. A {\it tree} in a Banach space $X$ is a family $\{x_A\}_{A\in \natu^{<\omega}}$ of vectors in $X$ indexed on $\natu^{<\omega}$. The tree is said to be {\it seminormalized} if $0<\inf_A\Vert x_A\Vert\leq \sup_A\Vert x_A\Vert <\infty$. We will say that the tree $\{x_A\}_{A\in \natu^{<\omega}}$ is {\it $w^*$-null}, when $X$ is a dual space, if the sequence $\{x_{(A,n)}\}_n$ is $w^*$-null for every $A\in\natu^{<\omega}$. The tree $\{x_A\}_{A\in \natu^{<\omega}}$ is {\it topologically $w^*$-null} if $0\in \overline{\{x_{(A,n)}:n\in \natu\}}^{w^*}$ for every $A\in\natu^{<\omega}$. A sequence $\{x_{A_n}\}_{n\geq 0}$ is called a {\it branch} if $\{A_n\}$ is a maximal totally ordered subset of $\natu^{<\omega}$, that is, there exists a sequence $\{\alpha_n\}$ of natural numbers such that $A_n=(\alpha_1,\ldots,\alpha_n)$ for every $n\in \natu$ and $A_0=\emptyset$. Given a tree $\{x_A\}_{A\in\natu^{<\omega}}$ in a Banach space, a {\it full subtree} is a new tree $\{y_A\}_{A\in\natu^{<\omega}}$ defined by $y_{\emptyset}=x_{\emptyset}$ and $y_{(A,n)}=x_{(A,\sigma_A(n))}$ for every $A\in \natu^{<\omega}$ and for every $n\in \natu$, where for every $A\in \natu^{<\omega}$, $\sigma_A$ is a strictly increasing map, equivalently when every branch of $\{y_A\}$ is also a branch of $\{x_A\}$. The tree $\{x_A\}_{A\in \natu^{<\omega}}$ is said to be {\it uniformly type P} if every branch of the tree is type P and the partial sums of every branch are uniformly bounded. The tree $\{x_A\}_{A\in \natu^{<\omega}}$ is said to be {\it basic} if the countable set $\{x_A:\ A\in \natu^{<\omega}\}$ is a basic sequence for some rearrangement. 
Whenever $\{x_n\}$ is a sequence in a Banach space $X$, we will also view this sequence as a tree by setting $x_A=x_{\max(A)}$ for every $A\in\natu^{<\omega}$. Furthermore, the branches of this tree are the subsequences of the sequence $\{x_n\}$. Finally, we recall that a {\it boundedly complete skipped blocking finite dimensional decomposition} (BCSBFDD) in a separable Banach space $X$ is a sequence $\{F_j\}$ of finite dimensional subspaces in $X$ such that:\begin{enumerate} \item $X=[F_j:j\in \natu]$.\item $F_k\cap[F_j:j\neq k]=\{0\}$ for every $k\in \natu$.\item For every sequence $\{n_j\}$ of non-negative integers with $n_j+1<n_{j+1}$ for all $j\in \natu$ and for every $f\in [F_{(n_j,n_{j+1})}:j\in \natu]$ there exists a unique sequence $\{f_j\}$ with $f_j\in F_{(n_j,n_{j+1})}$ for all $j\in \natu$ such that $f=\sum_{j=1}^{\infty}f_j$.\item Whenever $f_j\in F_{(n_j,n_{j+1})}$ for all $j\in \natu$ and $\sup_n\Vert \sum_{j=1}^{n}f_j\Vert<\infty$ then $\sum_{j=1}^{\infty}f_j$ converges.\end{enumerate} If $X$ is a subspace of $Y^*$ for some $Y$, a BCSBFDD $\{F_j\}$ in $X$ will be called $w^*$-continuous if $F_i\cap \overline{[F_j:j\neq i]}^{w^*}= \{0\}$ for every $i$. Here, $[A]$ denotes the closed linear span in $X$ of the set $A$ and, for some nonempty interval of non-negative integers $I$, we denote the linear span of the $F_j$'s for $j\in I$ by $F_{I}$. If $\{F_j\}$ is a BCSBFDD in a separable Banach space $X$ and $\{x_j\}$ is a sequence in $X$ such that $x_j\in F_{(n_j,n_{j+1})}$ for some sequence $\{n_j\}$ of non-negative integers with $n_j+1<n_{j+1}$ for all $j\in \natu$, we say that $\{x_j\}$ is a {\it skipped block sequence} of $\{F_n\}$. It is standard to prove that there is a positive constant $K$ such that every skipped block sequence $\{x_j\}$ of $\{F_n\}$ with $x_j\neq 0$ for every $j$ is a boundedly complete basic sequence with constant at most $K$. 
From \cite{GM}, we know that the family of separable Banach spaces with PCP is exactly the family of separable Banach spaces with a BCSBFDD. \section{Main results} \par \bigskip We begin with a characterization of RNP for $w^*$-compact subsets of general dual spaces. This result can be seen as a $w^*$-version of results in \cite{LS}. \begin{proposition}\label{r1} Let $X$ be a Banach space and let $K$ be a weak-star compact and convex subset of $X^*$. Then the following assertions are equivalent:\begin{enumerate} \item[i)] $K$ fails RNP. \item[ii)] There is a seminormalized topologically weak-star null tree $\{x_A\}_{A\in\natu^{<\omega}}$ in $X^*$ such that $\{\sum_{C\leq A}x_C:A\in\natu^{<\omega}\}\subset K$.\end{enumerate}\end{proposition} \begin{proof} i)$\Rightarrow$ii) Assume that $K$ fails RNP. Then, from theorem 2.3.6 in \cite{bou}, there is a countable non-dentable subset $D$ of $K$. Now $\overline{co}^{w^*}(D)$ is a weak-star compact and weak-star separable subset of $K$ failing $w^*$-PCP. So there are a relatively weak-star separable subset $B$ of $\overline{co}^{w^*}(D)$ and $\delta>0$ such that every relatively weak-star open subset of $B$ has diameter greater than $2\delta$. So $b\in \overline{B\setminus B(b,\delta)}^{w^*}$ for every $b\in B$, where $B(b,\delta)$ stands for the open ball with center $b$ and radius $\delta$. Note then that, since $B$ is relatively weak-star separable, for every $b\in B$ there is a countable subset $C_b\subset B\setminus B(b,\delta)$ such that $b\in \overline{C_b}^{w^*}$. First, we construct a tree $\{y_A\}_{A\in \natu^{<\omega}}$ in $B$ satisfying:\begin{enumerate}\item[a)] $y_{A}\in\overline{B\setminus B(y_A,\delta)}^{w^*}$ for every $A\in\natu^{<\omega}$.\item[b)] $\Vert y_A-y_{(A,i)}\Vert>\delta$ for every $A\in\natu^{<\omega}$.\item[c)] $y_A\in \overline{\{y_{(A,i)}:i\in\natu\}}^{w^*}$ for every $A\in\natu^{<\omega}$. \end{enumerate} Pick $y_{\emptyset}\in B$. 
Since $y_{\emptyset}\in \overline{B\setminus B(y_{\emptyset},\delta)}^{w^*}$, there is a countable set $C_{y_{\emptyset}}=\{y_{(i)}:i\in \natu\}\subset B\setminus B(y_{\emptyset},\delta)$ such that $y_{\emptyset}\in\overline{C_{y_{\emptyset}}}^{w^*}$. Then a), b) and c) are verified. By iterating this process we construct the tree $\{y_A\}_{A\in \natu^{<\omega}}$ satisfying a), b) and c). Now we define a new tree $\{x_A\}_{A\in\natu^{<\omega}}$ by $x_{\emptyset}=y_{\emptyset}$ and $x_{(A,i)}=y_{(A,i)}-y_A$ for every $i\in \natu$ and $A\in \natu^{<\omega}$. From b) we get that $\{x_A\}_{A\in\natu^{<\omega}}$ is a seminormalized tree, since $B$ is bounded. From c), we deduce that $\{x_A\}_{A\in\natu^{<\omega}}$ is topologically weak-star null. Furthermore, if $A\in \natu^{<\omega}$ then $\sum_{C\leq A}x_C=y_A$, from the definition of the tree $\{x_A\}_{A\in\natu^{<\omega}}$. So $\{x_A\}_{A\in\natu^{<\omega}}$ is a uniformly type P tree, since $B$ is bounded and $y_A\in B$ for every $A\in \natu^{<\omega}$. This finishes the proof of i)$\Rightarrow$ii). ii)$\Rightarrow$i) Let $\{x_A\}$ be a seminormalized topologically weak-star null tree such that $B=\{\sum_{C\leq A}x_C:A\in\natu^{<\omega}\}\subset K$ and let $\delta>0$ be such that $\Vert x_A\Vert>\delta$ for every $A\in \natu^{<\omega}$. For every $A\in \natu^{<\omega}$ and for every $n\in \natu$ we have that $\sum_{C\leq (A,n)}x_C=\sum_{C\leq A}x_C+x_{(A,n)}$, but $0\in\overline{\{x_{(A,n)}:n\in \natu\}}^{w^*}$, since the tree $\{x_A\}$ is topologically weak-star null. So $\sum_{C\leq A}x_C\in\overline{\{\sum_{C\leq (A,n)}x_C:n\in\natu\}}^{w^*}$ and $\Vert\sum_{C\leq (A,n)}x_C-\sum_{C\leq A}x_C\Vert>\delta$. This proves that $B$ has no points where the identity map is continuous from the weak-star to the norm topologies. In fact, we have proved that every relatively weak-star open subset of $B$ has diameter greater than $\delta$. 
Now, $\overline{B}^{\Vert\cdot\Vert}$ is a closed and bounded subset of $K$ such that every relatively weak-star open subset of $\overline{B}^{\Vert\cdot\Vert}$ has diameter greater than $\delta$, and so $K$ fails $w^*$-PCP. As $K$ is $w^*$-compact and convex, $K$ fails RNP.\end{proof} Essentially, the fact that RNP is separably determined has allowed us to get the above result in the setting of general dual spaces. The next proposition characterizes the $w^*$-PCP for subsets of dual spaces with a separable predual in terms of $w^*$-null trees, since in this case the $w^*$ topology is metrizable on bounded sets. It then seems natural to think that a characterization of $w^*$-PCP for subsets of general dual spaces in terms of topologically $w^*$-null trees should hold; however, we do not know whether $w^*$-PCP is separably determined in general. This is the difference between the above proposition and the next one, which is now obtained easily. \begin{proposition}\label{p1} Let $X$ be a separable Banach space and let $K$ be a closed and bounded subset of $X^*$. Then the following assertions are equivalent:\begin{enumerate} \item[i)] $K$ fails $w^*$-PCP. \item[ii)] There is a seminormalized weak-star null tree $\{x_A\}_{A\in\natu^{<\omega}}$ in $X^*$ such that $\{\sum_{C\leq A}x_C:A\in\natu^{<\omega}\}\subset K$.\end{enumerate}\end{proposition} \begin{proof} i)$\Rightarrow$ii) If $K$ fails $w^*$-PCP, there are a subset $B$ of $K$ and $\delta>0$ such that every relatively weak-star open subset of $B$ has diameter greater than $2\delta$. So $b\in \overline{B\setminus B(b,\delta)}^{w^*}$ for every $b\in B$, where $B(b,\delta)$ stands for the open ball with center $b$ and radius $\delta$. Note then that, since $X$ is separable, the $w^*$-topology in $X^*$ is metrizable on bounded sets, and so for every $b\in B$ there is a countable subset $C_b\subset B\setminus B(b,\delta)$ such that $b\in \overline{C_b}^{w^*}$. Hence we can assume that $C_b$ is a sequence $w^*$ converging to $b$. 
Now we can construct, exactly as in the proof of i)$\Rightarrow$ii) of the above proposition, the desired $w^*$-null tree satisfying ii). ii)$\Rightarrow$i) If one assumes ii) we can repeat the proof of ii)$\Rightarrow$i) in the above proposition to get that $K$ fails $w^*$-PCP.\end{proof} \begin{remark}\label{remar} If $X$ is a separable subspace of a dual space $Y^*$ with $X$ satisfying the $w^*$-PCP, it is shown in \cite{R2} (see (1) implies (8) of theorem 2.4 together with the comments on page 276) that there is a separable subspace $Z$ of $Y$ such that $X$ is isometric to a subspace of $Z^*$ and $X$ has $w^*$-PCP, as a subspace of $Z^*$. Then, in order to study the $w^*$-PCP of a subspace of $Y^*$, it is more natural to assume that $Y$ is separable.\end{remark} We show now our characterization of $w^*$-PCP in terms of boundedly complete basic sequences in a general setting. A similar characterization for PCP can be seen in \cite{LS}, but the proof of the following result strongly uses the concept of $w^*$-continuous boundedly complete skipped blocking finite dimensional decomposition and assumes separability in the predual space. \begin{theorem}\label{p2} Let $X$, $Y$ be Banach spaces with $Y$ separable and $X$ a subspace of $Y^*$. Then the following assertions are equivalent:\begin{enumerate} \item[i)] $X$ has $w^*$-PCP. \item[ii)] Every weak-star null tree in $S_{X}$ is not uniformly type P. \item[iii)] Every weak-star null tree in $S_{X}$ has no type P branches. \item[iv)] Every weak-star null tree in $S_{X}$ has a boundedly complete branch.\end{enumerate}\end{theorem} We need the following easy \begin{lemma}\label{le} Let $X$, $Y$ be Banach spaces with $X$ a subspace of $Y^*$, and let $M$ be a finite codimensional subspace of $X$. Assume that $\varepsilon>0$ and $\{x_n^*\}$ is a sequence in $X$ such that $0\in \overline{\{x_n^*:n\in \natu\}}^{w^*}$. 
If $P:X\rightarrow N$ is a linear and relatively $w^*$-continuous projection onto some finite dimensional subspace $N$ of $X$ with kernel $M$, then there is $n_0\in \natu$ such that $dist(x_{n_0}^*,M)<\varepsilon$.\end{lemma} \begin{proof} From $0\in\overline{\{x_n^*:n\in \natu\}}^{w^*}$ we deduce that $0\in\overline{\{P(x_n^*):n\in \natu\}}^{\Vert \cdot\Vert}$, since $P$ is relatively $w^*$-continuous and $N$ is a finite dimensional subspace of $X$. Now, pick $n_0\in \natu$ with $\Vert P(x_{n_0}^*)\Vert <\varepsilon$. Then $$dist(x_{n_0}^*,M)=\Vert x_{n_0}^*+M\Vert=\Vert P(x_{n_0}^*)+M\Vert\leq \Vert P(x_{n_0}^*)\Vert <\varepsilon.$$ \end{proof} {\it Proof of theorem} \ref{p2}. iv)$\Rightarrow$iii) is a consequence of the fact that every boundedly complete basic sequence is not type P, as commented in the introduction, and iii)$\Rightarrow$ii) is trivial. For ii)$\Rightarrow$i), it is enough to apply proposition \ref{p1} with $K=B_{X}$, assuming that $X$ fails $w^*$-PCP and normalizing. i)$\Rightarrow$iv) Assume that $X$ has $w^*$-PCP and pick a weak-star null tree $\{x_A\}$ in $S_{X}$. From \cite{R2} (see (b) of theorem 3.10 together with the equivalence between (1) and (3) of corollary 2.6) we know that every separable subspace of $Y^*$ with $w^*$-PCP has a $w^*$-continuous boundedly complete skipped blocking finite dimensional decomposition. 
As the subspace generated by the tree $\{x_A\}$ is separable, we can assume that $X$ has such a decomposition, that is, there is a sequence $\{F_j\}$ of finite dimensional subspaces in $X$ such that:\begin{enumerate} \item $X=[F_j:j\in \natu]$.\item $F_k\cap[F_j:j\neq k]=\{0\}$ for every $k\in \natu$.\item For every sequence $\{n_j\}$ of non-negative integers with $n_j+1<n_{j+1}$ for all $j\in \natu$ and for every $f\in [F_{(n_j,n_{j+1})}:j\in \natu]$ there exists a unique sequence $\{f_j\}$ with $f_j\in F_{(n_j,n_{j+1})}$ for all $j\in \natu$ such that $f=\sum_{j=1}^{\infty}f_j$.\item Whenever $f_j\in F_{(n_j,n_{j+1})}$ for all $j\in \natu$ and $\sup_n\Vert \sum_{j=1}^{n}f_j\Vert<\infty$ then $\sum_{j=1}^{\infty}f_j$ converges.\item $F_i\cap \overline{[F_j:j\neq i]}^{w^*}= \{0\}$ for every $i$.\end{enumerate} Let $K$ be a positive constant such that every skipped block sequence $\{x_j\}$ of $\{F_n\}$ with $x_j\neq 0$ for every $j$ is a boundedly complete basic sequence with constant at most $K$. Observe that for every $n\in \natu$ there is a linear onto projection\break $\widetilde{P_n}:\overline{[F_i:i\geq n]}^{w^*}\oplus[F_i:i< n]\rightarrow [F_i:i< n]$ with kernel $\overline{[F_i:i\geq n]}^{w^*}$ and so $\widetilde{P_n}$ is $w^*$-continuous, since $\overline{[F_i:i\geq n]}^{w^*}\oplus[F_i:i< n]$ is a $w^*$-closed subspace of $Y^*$ and hence a dual Banach space, and the closed graph theorem applies to $\widetilde{P_n}$ because its kernel is $w^*$-closed and its range is finite-dimensional. Then the restriction of $\widetilde{P}_n$ to $X$, let us say $P_n$, is a linear and relatively $w^*$-continuous projection from $X$ onto $[F_i:i< n]$ with kernel $[F_i:i\geq n]$. We have to construct a boundedly complete branch of the tree $\{x_A\}$. For this, fix a sequence $\{\varepsilon_j\}$ of positive real numbers with $\sum_{j=0}^{\infty}\varepsilon_j<1/(2K)$, where $K$ is the constant of the decomposition $\{F_j\}$. 
Now we construct a sequence $\{f_j\}$ in $X$ with $f_j\in F_{(n_j,n_{j+1})}$ for all $j$, for some increasing sequence of integers $\{n_j\}$, and a branch $\{x_{A_j}\}$ of the tree such that $\Vert x_{A_j}-f_j\Vert <\varepsilon_j$ for all $j$. Put $n_0=0$. Then there exist $n_1>2$ and $f_0\in F_{(n_0,n_1)}$ such that $\Vert x_{A_0}-f_0\Vert<\varepsilon_0$, where $A_0=\emptyset$. Now, assume that $n_1,\ldots, n_{j+1}$, $f_1,\ldots, f_j$ and $A_1,\ldots, A_j$ have been constructed. Put $A_k=(p_1,p_2,\ldots, p_k)$ for all $1\leq k\leq j$. As the tree is $w^*$ null we have that $0\in\overline{\{x_{(A_j,p)}:p\in\natu\}}^{w^*}$. Then, by lemma \ref{le}, we deduce that there is some $p_{j+1}\in \natu$ such that $dist(x_{(A_j,p_{j+1})},[F_{[n_{j+1}+1,\infty)}])<\varepsilon_{j+1}$ since $[F_{[n_{j+1}+1,\infty)}]$ is a finite codimensional subspace in $X$ and $P_{n_{j+1}+1}$ is relatively $w^*$-continuous. Then there exist $n_{j+2}>n_{j+1}+1$ and $f_{j+1}\in F_{(n_{j+1},n_{j+2})}$ such that $\Vert x_{A_{j+1}}-f_{j+1}\Vert<\varepsilon_{j+1}$, where $A_{j+1}=(A_j,p_{j+1})$. This finishes the inductive construction of the branch $\{x_{A_j}\}$ satisfying that $\Vert x_{A_j}-f_j\Vert <\varepsilon_j$ for all $j$. Finally we get that $\sum_{j=1}^{\infty}\Vert x_{A_j}-f_j\Vert<1/(2K)$. Then $\{x_{A_j}\}$ is a branch of the tree $\{x_A\}_{A\in \natu^{<\omega}}$ which is a basic sequence equivalent to $\{f_j\}$, since $\{f_j\}$ is a skipped block sequence of $\{F_n\}$; hence $\{x_{A_j}\}$ is a boundedly complete sequence and the proof of theorem \ref{p2} is finished. \bigskip Now we can get a characterization of RNP for dual spaces, following the above proof. \begin{theorem}\label{r2} Let $X$ be a Banach space. Then the following assertions are equivalent:\begin{enumerate}\item[i)] $X^*$ has RNP. \item[ii)] Every topologically weak-star null tree in $S_{X^*}$ is not uniformly type P. \item[iii)] Every topologically weak-star null tree in $S_{X^*}$ has no type P branches. 
\item[iv)] Every topologically weak-star null tree in $S_{X^*}$ has a boundedly complete branch.\end{enumerate}\end{theorem} \begin{proof} iv)$\Rightarrow$iii) is a consequence of the fact, noted in the introduction, that every boundedly complete basic sequence is not type P, and iii)$\Rightarrow$ii) is trivial. For ii)$\Rightarrow$i) it is enough to apply Theorem \ref{r1} for $K=B_{X^*}$, assuming that $X^*$ fails RNP and normalizing. i)$\Rightarrow$iv) Assume that $X^*$ has RNP and pick a topologically weak-star null tree $\{x_A\}$ in $S_{X^*}$. Call $Y$ the closed linear span of the tree $\{x_A\}$. Now $Y$ is a separable subspace of $X^*$ and then there is a separable subspace $Z$ of $X$ norming $Y$, so that $Y$ is isometric to a subspace of $Z^*$. As $X^*$ has RNP, we get that $Z^*$ has RNP. Hence $Y$ is a separable subspace of $Z^*$, with $Z$ a separable space, and $Z^*$ has $w^*$-PCP since $Z^*$ has RNP. Observe that the tree $\{x_A\}$ is now a topologically weak-star null tree in $S_{Z^*}$, so we can select a full subtree $\{y_A\}$ which is weak-star null, since $Z$ is separable and so the $w^*$ topology in $Z^*$ is metrizable on bounded sets. We apply the proof of i) $\Rightarrow$ iv) in the above result with $X=Y=Z^*$ to get a boundedly complete branch of $\{y_A\}$. As $\{y_A\}$ is a full subtree of $\{x_A\}$, the branches of $\{y_A\}$ are branches of $\{x_A\}$, and so $\{x_A\}$ has a boundedly complete branch.\end{proof} In the case where $X$ is a separable Banach space, the above result can be written in terms of weak-star null trees. Then we get as an immediate consequence, in the following corollary, a result obtained in \cite{DF} in a different way. \begin{corollary} Let $X$ be a separable Banach space.
Then $X^*$ is separable (equivalently, $X^*$ has RNP) if, and only if, every weak-star null tree in $S_{X^*}$ has a boundedly complete branch.\end{corollary} \begin{proof} When $X$ is separable, the weak-star topology in $X^*$ is metrizable on bounded sets and so every topologically weak-star null tree in $S_{X^*}$ has a full subtree which is weak-star null. With this in mind, it is enough to apply the above theorem to conclude, since $X^*$ is separable if, and only if, $X^*$ has RNP, whenever $X$ is separable. \end{proof} The following consequence, obtained in a different way in \cite{R1}, shows how many separable dual subspaces every Banach space with PCP contains. \begin{corollary} Let $X$ be a Banach space with PCP. Then every seminormalized basic sequence in $X$ has a boundedly complete subsequence.\end{corollary} \begin{proof} Pick a seminormalized basic sequence $\{x_n\}$ in $X$. Then either $\{x_n\}$ has a subsequence equivalent to the unit vector basis of $\ell_1$, and hence boundedly complete, or $\{x_n\}$ has a weakly Cauchy subsequence, which we denote again by $\{x_n\}$. In the case where $\{x_n\}$ is weakly convergent we get that $\{x_n\}$ is weakly null, because it is a basic sequence. Now, $\{x_n\}$ is a seminormalized weakly null tree in $X$ and hence $\{x_n\}$ is a seminormalized weak-star null tree in $X^{**}$. As $[x_n]$ is a separable subspace of $X^{**}$ with $w^*$-PCP, by \ref{remar} there is a separable subspace $Z\subset X^*$ such that $[x_n]$ is an isometric subspace of $Z^*$ with $w^*$-PCP. Therefore $\{x_n\}$ is a seminormalized weak-star null tree in $[x_n]$, which is a subspace of $Z^*$ with $w^*$-PCP, where $Z$ is separable. From the theorem above, we get a boundedly complete branch and so a boundedly complete subsequence. If $\{x_n\}$ is not weakly convergent we can apply the $c_0$-theorem \cite{R3} to get a strongly summing subsequence, denoted again by $\{x_n\}$, since $X$ has PCP and so $X$ does not contain $c_0$.
Let $x^{**}=w^*\text{-}\lim_n x_n\in X^{**}$. Now $\{x_n-x^{**}\}$ is a weak-star null sequence in $X\oplus [x^{**}]\subset X^{**}$. As $X$ has PCP, we get that $X$ has $w^*$-PCP as a subspace of $X^{**}$, and then it is easy to see that $X\oplus [x^{**}]\subset X^{**}$ has $w^*$-PCP. Now $[x_n-x^{**}]$ is a separable subspace of $X^{**}$ with $w^*$-PCP and then, by \ref{remar}, there is a separable subspace $Z$ of $X^*$ such that $[x_n-x^{**}]$ is an isometric subspace of $Z^*$ with the $w^*$-PCP, where $Z$ is separable. From the theorem above, we get a boundedly complete branch and so a boundedly complete subsequence, denoted again by $\{x_n-x^{**}\}$. So we have that $\{x_n-x^{**}\}$ is boundedly complete and $\{x_n\}$ is strongly summing. Let us see that $\{x_n\}$ is boundedly complete. Indeed, if for some sequence of scalars $\{\lambda_n\}$ we have that $\sup_n\Vert\sum_{k=1}^n\lambda_k x_k\Vert<\infty$, then the series $\sum_n\lambda_n$ is convergent, since $\{x_n\}$ is strongly summing. Now it is clear that $\sup_n\Vert\sum_{k=1}^n\lambda_k (x_k-x^{**})\Vert<\infty$ and then $\sum_n\lambda_n (x_n-x^{**})$ converges, since $\{x_n-x^{**}\}$ is boundedly complete. So $\sum_n\lambda_n x_n$ converges, since $\sum_n\lambda_n$ is convergent, and $\{x_n\}$ is boundedly complete.\end{proof} The converse of the above result is false, even for Banach spaces not containing $\ell_1$ (see \cite{GL}). Now we show some consequences concerning the problem of the determination of $w^*$-PCP by subspaces with a basis. We begin by proving that every seminormalized weak-star null tree has a basic full subtree which is still $w^*$-null. The same result is then true for the weak topology, by considering $X$ as a subspace of $X^{**}$. We do not know an exact reference for this result, so we give a proof based on Mazur's proof of the known result that every seminormalized sequence in a dual Banach space such that $0$ belongs to its weak-star closure has a basic subsequence.
\begin{lemma}\label{lema3} Let $X$ be a Banach space and $\{x_A\}_{A\in\natu^{<\omega}}$ a seminormalized $w^*$ null tree in $S_{X^*}$. Then for every $\varepsilon>0$ there is a full basic subtree, still $w^*$ null, with basic constant less than $1+\varepsilon$.\end{lemma} \begin{proof} Let $\phi:\natu^{<\omega}\rightarrow \natu\cup\{0\}$ be a fixed bijective map such that $\phi(\emptyset)=0$, $\phi(A)\leq\phi(B)$ whenever $A\leq B\in\natu^{<\omega}$ and $\phi(A,i)\leq\phi(A,j)$ whenever $A\in\natu^{<\omega}$ and $i\leq j$. Fix also $\varepsilon>0$ and a sequence of positive numbers $\{\varepsilon_n\}_{n\geq 0}\subset (0,1)$ such that $\frac{1+\sum_{n=0}^{\infty}\varepsilon_n}{\prod_{n=0}^{\infty}(1-\varepsilon_n)} <1+\varepsilon$. Now we proceed by induction to construct the desired subtree $\{y_A\}_{A\in\natu^{<\omega}}$, defining $y_A$ for every $A\in\natu^{<\omega}$ in the order given by $\phi$, which yields the full condition. That is, we have to prove that for every $n\in\natu\cup \{0\}$ we can construct $y_{\phi^{-1}(n)}$ such that $\{y_{A}\}_{A\in\natu^{<\omega}}$ is a $w^*$ null full subtree satisfying that for every $n\in\natu\cup\{0\}$ there is a finite set $\{f_1^n,\ldots,f_{k_n}^n\}\subset S_{X}$ such that\begin{enumerate}\item[i)] $\{f_1^n,\ldots,f_{k_n}^n\}$ is a $(1-\varepsilon_n)$-norming set for $Y_n=[y_{\phi^{-1}(0)},\ldots, y_{\phi^{-1}(n)}]$. \item[ii)] $\vert f_i^n(y_{\phi^{-1}(n+1)})\vert <\varepsilon_n$ for every $i$.\item[iii)] For every $A\in \natu^{<\omega}$ there is an increasing map $\sigma_A:\natu\rightarrow\natu$ such that $y_{(A,i)}=x_{(A,\sigma_{A}(i))}$ for every $i$.\end{enumerate} For $n=0$, we have $\phi^{-1}(0)=\emptyset$ and then we define $y_{\emptyset}=x_{\emptyset}$. Now take $f_1^0\in S_{X}$ $(1-\varepsilon_0)$-norming the subspace $Y_0=[y_{\emptyset}]$. As the tree $\{x_A\}_{A\in\natu^{<\omega}}$ is $w^*$ null there is $p_0\in\natu$ such that $\vert f_1^0(x_{(p)})\vert < \varepsilon_0$ for every $p\geq p_0$.
Then we set $y_{\phi^{-1}(1)}=x_{(p_0)}$. As $\phi(A,i)\leq\phi(A,j)$ whenever $A\in\natu^{<\omega}$ and $i\leq j$, we deduce that $\phi^{-1}(1)=(1)$ and define $\sigma_{\emptyset}(1)=p_0$, so that $y_{(\emptyset,1)}=x_{(\emptyset ,\sigma_{\emptyset}(1))}$. Assume $n\in\natu$ and that we have already defined $y_{\phi^{-1}(0)},\ldots ,y_{\phi^{-1}(n-1)}$. Now $\phi^{-1}(n)-<\phi^{-1}(n)$, hence $\phi(\phi^{-1}(n)-)<n$ and so $y_{\phi^{-1}(n)-}$ has already been constructed, by the induction hypothesis. Put $\phi^{-1}(n)=(A,h)$ for some $h\in\natu$, where $A=\phi^{-1}(n)-$. As $\phi(A,k)\leq \phi(A,h)$ for $k\leq h$, we have that $y_{(A,k)}$ has been constructed with $y_{(A,k)}=x_{(A,\sigma_{A}(k))}$ whenever $k< h$, and $\sigma_{A}(k)$ has been constructed strictly increasing for $k< h$. Put $Y_{n-1}=[y_{\phi^{-1}(0)},\ldots ,y_{\phi^{-1}(n-1)}]$ and pick elements $f_1^{n-1},\ldots ,f_{k_{n-1}}^{n-1}$ in $S_{X}$ $(1-\varepsilon_{n-1})$-norming $Y_{n-1}$. As the tree $\{x_A\}_{A\in\natu^{<\omega}}$ is $w^*$ null, there is $p_{n-1}>\max_{k<h}\sigma_{A}(k)$ such that $ \vert f_i^{n-1}(x_{(A,p)})\vert <\varepsilon_{n-1},\ 1\leq i\leq k_{n-1}$ for every $p\geq p_{n-1}$. Then we set $y_{\phi^{-1}(n)}=x_{(A,p_{n-1})}$ and $\sigma_{A}(h)=p_{n-1}$. Then $\sigma_{A}(k)$ is strictly increasing for $k\leq h$ and $y_{\phi^{-1}(n)}=y_{(A,h)}=x_{(A,\sigma_A(h))}$. Now the construction of the subtree $\{y_A\}$ is complete, satisfying i), ii) and iii). From the construction we get that $\{y_A\}$ is a full and $w^*$-null subtree. Let us see that $\{y_{\phi^{-1}(n)}\}$ is a basic sequence in $X^*$. Put $z_n=y_{\phi^{-1}(n)}$, fix $p<q\in\natu$ and compute $\Vert\sum_{i=1}^q\lambda_iz_i\Vert$, where $\{\lambda_i\}$ is a scalar sequence. Assume that $\Vert\sum_{i=1}^q\lambda_iz_i\Vert\leq 1$.
From i) pick $j$ such that $\vert f_j^{q-1}(\sum_{i=1}^{q-1}\lambda_iz_i)\vert >(1-\varepsilon_{q-1})\Vert\sum_{i=1}^{q-1}\lambda_iz_i\Vert.$ Then from ii) we have $\vert f_j^{q-1}(z_q)\vert <\varepsilon_{q-1}$ and so $$\Vert\sum_{i=1}^q\lambda_iz_i\Vert\geq \vert f_j^{q-1}(\sum_{i=1}^q\lambda_iz_i)\vert >(1-\varepsilon_{q-1})\Vert\sum_{i=1}^{q-1}\lambda_iz_i\Vert -\varepsilon_{q-1}.$$ Repeating this computation we get $$\Vert \sum_{i=1}^q\lambda_iz_i\Vert\geq \left(\prod_{i=p+1}^q(1-\varepsilon_{i-1})\right)\Vert \sum_{i=1}^p\lambda_iz_i\Vert -\left(\sum_{i=p+1}^q\varepsilon_{i-1}\right),$$ and so $$\Vert\sum_{i=1}^p\lambda_iz_i\Vert\leq \frac{1+\sum_{i=p+1}^q\varepsilon_{i-1}}{\prod_{i=p+1}^q(1-\varepsilon_{i-1})}<1+\varepsilon.$$ The last inequality proves that $\{z_n\}$ is a basic sequence in $X^*$ with basic constant less than $1+\varepsilon$, and the proof is complete.\end{proof} We do not know if the above result remains true if weak-star null is replaced by topologically weak-star null. The following result shows that $w^*$-PCP is determined by subspaces with a Schauder basis in the natural setting of dual spaces of separable Banach spaces. \begin{corollary}\label{fin} Let $X$, $Y$ be Banach spaces such that $Y$ is separable and $X$ is a subspace of $Y^*$. Then $X$ has $w^*$-PCP if, and only if, every subspace of $X$ with a Schauder basis has $w^*$-PCP.\end{corollary} \begin{proof} Assume that $X$ fails $w^*$-PCP. Then, from Theorem \ref{p2}, there is a $w^*$-null tree in the unit sphere of $X$ without boundedly complete branches. Now, from Lemma \ref{lema3}, we can extract a $w^*$-null full basic subtree.
The subspace $Z$ generated by this subtree is a subspace of $X$ with a Schauder basis containing a $w^*$-null tree in $S_X$ without boundedly complete branches, by the full condition, so $Z$ fails $w^*$-PCP, from Theorem \ref{p2}.\end{proof} As a consequence we get, for example, that a subspace of $\ell_{\infty}$, the space of bounded scalar sequences with the sup norm, failing the $w^*$-PCP (or failing PCP) contains a further subspace with a Schauder basis failing the $w^*$-PCP. If we take $X=Y^*$ in the above corollary one deduces the following \begin{corollary} Let $X$ be a separable Banach space. Then $X^*$ has RNP if, and only if, every subspace of $X^*$ with a Schauder basis has $w^*$-PCP.\end{corollary} \bigskip
TITLE: Question on Power Spectral Density and Wiener-Khinchin theorem QUESTION [3 upvotes]: I tried asking this question on stackexchange and I also extensively researched it online without results, so I will ask here. In my textbook the Wiener-Khinchin theorem is used to connect the auto-correlation definition of PSD with an "intuitive interpretation" of power spectral density for deterministic signals. It says: $S_{xx}(\omega) = \lim_{T\rightarrow\infty} \frac{1}{2T}\mathbb E\left[|FT\{X(t)*I_{[-T,T]}\}|^2\right]$ But such a theorem assumes you can take the Fourier transform of a (truncated) realization of the random process, which, unless I am missing something, may not be the case. Such a realization may not be integrable. So what is going on? Is there a different definition of integral which allows you to do this? REPLY [2 votes]: Not every stationary stochastic process has a power spectral density; in general it has a power spectral measure. But in any case $X*1_{[-T,+T]}$ is itself a stationary process, whose realisations (smooth or not) are never integrable on $\mathbb R$ (except if $X\equiv 0$). They have a Fourier transform only in the sense of tempered distributions: its values at a point (frequency) $\omega$ are not defined, and taking the expectation of its square doesn't make sense. Defining a power spectral measure or density for (some) deterministic signals was the main goal of Norbert Wiener's book The Fourier Integral and Certain of Its Applications, 1933. Relating the PSD of stochastic processes to that (in Wiener's sense) of its realisations is interesting, but it cannot be done by taking the limit of something that doesn't exist...
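As background for the phrase "power spectral measure" (my addition of standard material, not part of the original answer; normalization conventions vary):

```latex
% Herglotz--Bochner representation of the autocovariance of a
% wide-sense stationary process (sketch):
R_X(\tau) = \mathbb{E}\bigl[ X(t+\tau)\,\overline{X(t)} \bigr]
          = \int_{\mathbb{R}} e^{i\omega\tau}\, d\mu_X(\omega),
% where \mu_X is a finite positive measure (the power spectral measure).
% A power spectral *density* S_{xx} exists only when \mu_X is absolutely
% continuous, i.e. d\mu_X(\omega) = S_{xx}(\omega)\, d\omega.
```

In particular, a process with a periodic component has an atom in $\mu_X$ and therefore no density, which is one reason the truncated-transform formula has to be read distributionally or as a limit of measures.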
TITLE: Is Von Neumann's function application operation defined for all functions and all arguments? QUESTION [1 upvotes]: Von Neumann's original axioms of what became von Neumann–Bernays–Gödel set theory took as primitive notions function and argument. Accompanying these was a two-variable operation, denoted $[x,y]$, corresponding to function application: The operation $[x,y]$ corresponds to a procedure that is encountered everywhere in mathematics, namely, the formation, from a function $f$ (which must be carefully distinguished from its values $f(x)$!) and an argument $x$, of the value $f(x)$ of the function $f$ for the argument $x$. Instead of $f(x)$ we write $[f,x]$ to indicate that $f$, just like $x$, is to be regarded as a variable in this procedure. Through the use of $[x,y]$ we replace, as it were, all one-variable functions by a single two-variable function. In this scheme the elements of the domain of "functions" answer to the functions (conceived naively) that are defined for the "arguments" and whose values are "arguments". [from von Neumann's 1925 article, An axiomatization of set theory, as translated into English in From Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931, Harvard University Press, pp. 393–413] I'm trying to understand how the $[f,x]$ operation handles the case where $x$ does not belong to the domain of $f$. I feel like this could arise in certain models of the theory, but in first-order logic, shouldn't an expression such as $[f,x]$ be valid to form for all function letters $f$ and argument letters $x$, since we can't determine whether or not $x$ is in the domain of $f$ without referring to an explicit model? Or, is the last line in the above quotation actually imposing a restriction on how we can use the operation? REPLY [1 votes]: I read von Neumann like this: his functions are intended to be total on a universe of what he calls I-objects (i.e., things that intuitively can be arguments and values of functions). 
The I-objects correspond to sets as opposed to classes in the usual presentation of NBG. Classes correspond to his II-objects and the II-objects include the I-objects. If $x$ is not an I-object, then $[a, x]$ isn't really meaningful and the axioms say nothing about its value. He selects two arbitrary I-objects $A$ and $B$, which are used to represent false and true when a function represents a set. $A$ is also used to act as the "default" value for a function. So a function $f : X \to Y$ in NBG would be represented by two I-objects $f_d$ and $f_v$ representing the domain of $f$ and the values of $f$ separately, thus: $$ \begin{align*} [f_d, x] &= \left\{ \begin{array}{l@{\quad}l} A &\mbox{if $x \not\in X$} \\ B & \mbox{if $x \in X$} \end{array}\right. \\ [f_v, x] &= \left\{ \begin{array}{l@{\quad}l} A &\mbox{if $x \not\in X$} \\ f(x) & \mbox{if $x \in X$} \end{array}\right. \end{align*} $$ This reading is based on von Neumann's axiom IV.2 and his earlier discussion about it. That axiom asserts that a II-object $a$ is not an I-object if the I-objects $x$ such that $[a, x] \neq A$ comprise a proper class and not a set.
TITLE: Given $f'(x)=\csc{x}(\cot{x}-\sin{2x})$, find $f(x)$ QUESTION [1 upvotes]: I have a question on my integration assignment that I am not quite sure how to approach. I've looked at it and can't seem to think of a suitable place to use u-substitution. All I can figure is that I can expand it out to be $f'(x)=\csc{x}\cot{x}-2\cos{x}$ I've considered maybe setting $u=2\sin{x}\cos{x}$ where $du=2(\cos^{2}{x}+\sin^{2}{x})\ dx$, which then simplifies to $2\ dx$, but then I can't get anywhere with that. REPLY [3 votes]: Use the fact that the derivative of $\sin x$ is $\cos x$ and the derivative of $\csc x$ is $-\csc x\cot x$. Hence $f(x)=-\csc x-2\sin x+C$.
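A quick numerical cross-check of this antiderivative (my own sketch, not part of the original answer): it compares a central finite difference of $F(x)=-\csc x-2\sin x$ against the original integrand.

```python
import math

def F(x):
    # Proposed antiderivative from the answer (constant C omitted)
    return -1.0 / math.sin(x) - 2.0 * math.sin(x)

def fprime(x):
    # Original integrand: csc(x) * (cot(x) - sin(2x))
    return (1.0 / math.sin(x)) * (math.cos(x) / math.sin(x) - math.sin(2.0 * x))

h = 1e-6
for x in (0.5, 1.0, 2.0):
    central_diff = (F(x + h) - F(x - h)) / (2.0 * h)
    assert abs(central_diff - fprime(x)) < 1e-4
```

The check passes because $\csc x(\cot x-\sin 2x)=\csc x\cot x-2\cos x$, which is exactly $F'(x)$.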
TITLE: Countable partition in atoms QUESTION [4 upvotes]: Let $\mu: \Sigma \to [0, \infty)$ be a measure over $\Omega$. We say a set $A \in \Sigma$ is an atom if for all $B \in \Sigma$ with $B \subset A$, $\mu(B)=\mu(A)$ or $\mu(B)=0$. We say that $\mu$ is atomic if every set $A \in \Sigma$ with positive measure contains an atom with positive measure. So, I'm trying to prove that if $\mu$ is atomic, there exists $\{ A_n\}_{n \in \mathbb{N}}$ pairwise disjoint atoms such that its union covers $\Omega$. I can build a sequence this way: Let $A \in \Sigma$ be a set with positive measure. It contains an atom $A_1$. Then I take $\Omega \setminus A_1$. Again, it contains an atom $A_2$ which is disjoint from $A_1$. However, I do not know how to continue. I would appreciate any help. REPLY [3 votes]: For finite measures: Proof: If $\mu(\Omega) = 0$ the result is trivial, because $\Omega$ itself is an atom. Assume $\mu(\Omega) > 0$. Since $\mu$ is atomic, take $B_0=\Omega$ and define by induction, for all $n\in \mathbb{N}$: if $\mu(B_n)>0$ then let $A_n \subseteq B_n$ be an atom with the largest possible measure among the atoms contained in $B_n$ (such a choice of $A_n$ is always possible because the measure is finite) and let $B_{n+1}=B_n\setminus A_n$. One of two things may happen: For some $n_0\in \mathbb{N}$, $\mu(B_{n_0+1})=0$ and in this case, $B_{n_0+1}$ is an atom, $A_0, \dots, A_{n_0}, B_{n_0+1}$ are disjoint atoms and we have $$\Omega = A_0 \cup \dots \cup A_{n_0} \cup B_{n_0+1}$$ which proves the result (for this case). For all $n\in \mathbb{N}$, $\mu(B_{n})>0$. In this case we have an infinite sequence $\{A_n\}_{n\in \mathbb{N}}$ of disjoint atoms of positive measures. Let $$ A= \bigcup_{n\in \mathbb{N}} A_n.$$ Claim: $\mu(\Omega \setminus A)=0$. Proof of the claim: if $\mu(\Omega \setminus A)>0$ then there is an atom $E\subseteq \Omega \setminus A$ such that $\mu(E)>0$.
By our choice of $A_n$ at each step, we have that, for all $n\in \mathbb{N}$, $\mu(A_n) \geqslant \mu(E)$. So $$ \mu(A)=\sum_{n\in \mathbb{N}}\mu(A_n) =\infty$$ Contradiction, since the measure is finite. So we proved $\mu(\Omega \setminus A)=0$. So we have that $\Omega \setminus A$ is (trivially) an atom and we have $$\Omega = \left ( \bigcup_{n\in \mathbb{N}} A_n \right ) \cup (\Omega \setminus A)$$ So $\Omega$ is covered by a countable collection of atoms; this proves the result for this case and completes the proof. Remark: if the measure is not finite, this result may not hold. Consider $(\mathbb{R}, P(\mathbb{R}), \mu)$ where $\mu$ is the counting measure. Every singleton is an atom of positive measure. Any set with positive measure is not empty, so it contains a singleton, which means it contains an atom of positive measure. So $\mu$ is atomic. Note that the only atoms are the singletons (and the empty set), and clearly $\mathbb{R}$ is not a countable union of singletons. Remark 2: Let us prove that, since $\mu$ is finite, for any set $A$, if $A$ contains an atom, then there is an atom $E\subseteq A$ such that for any atom $F \subseteq A$, $\mu(F)\leqslant \mu(E)$. Suppose $A$ contains an atom. Let us prove the result by contradiction. Suppose that for every atom $E\subseteq A$ there is an atom $F \subseteq A$ with $\mu(F)> \mu(E)$. Let $E_0\subseteq A$ be an atom. Then there is a sequence of atoms $E_n\subseteq A$ such that for all $n\in \mathbb{N}$, $\mu(E_n) < \mu(E_{n+1})$. Given any $i<j$, since $E_j$ is an atom and $\mu(E_i\cap E_j)\leqslant\mu(E_i) < \mu(E_j)$, we have that $\mu(E_i\cap E_j)=0$. So we have $$\mu \left ( \bigcup_{n\in \mathbb{N}}E_n\right)= \sum_{n\in \mathbb{N}}\mu(E_n)=+\infty$$ Contradiction, because the measure is finite. So we proved that there is an atom $E\subseteq A$ such that for any atom $F \subseteq A$, $\mu(F)\leqslant \mu(E)$.
TITLE: Understanding vector space QUESTION [0 upvotes]: I was reading an example from a textbook giving a vector space. It goes through all the properties to check that it is a vector space, and finally concludes it is. I am worried about the existence of zero. Now, let V be R(+) and define addition and scalar multiplication as follows: x + y = xy, for all x and y in R(+), and kx = x^(k). It justifies the existence of zero by saying: From the definition, X + 0 = X•0 implies X + 0 = X which implies X = X•0, which again implies 1 being equal to 0, by canceling the X from both sides. And it concludes that the zero element is 1. But can it not be that this very statement implies that division by zero is fine? Or maybe since 0 is 1, it doesn't matter. But then X + 1 = X would be equal, and we would again say 1 is 0, so it is fine. So which is it, is 1 zero or zero 1? If 1 is 0, then which number has taken the place of 1? It is my first encounter with the topic of vector spaces. And also I am finding out I don't know what is meant by zero. I just thought it was defined as nothing. Also, any tips on how to think about spaces, sets and definitions differently? REPLY [0 votes]: @saulsplatz wrote a good answer. I will try to add to it. Think of the "$0$ vector" in a vector space not as the number zero, not as "nothing", but as "the vector that behaves like the number $0$ for addition". So in ordinary addition, what makes the number $0$ useful and important is the fact that $$ r + 0 = r $$ for every real number $r$. So in the vector space $\mathbb{R}^2$ of pairs of numbers, the $0$ vector is $(0,0)$ because $$ (r,s) + (0,0) = (r,s) $$ for every vector $(r,s)$. In the strange vector space in your question, where "$+$" is defined to be multiplication, the thing that "behaves like $0$" is $1$, because $$ r "+" 1 = r \times 1 = r $$ for every positive real number $r$. (PS If I were teaching linear algebra I would not introduce this strange example early in the curriculum.
It's potentially confusing in just the way you were confused.)
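A tiny numerical sketch of these axioms (my own illustration, not from the textbook), writing vector addition as multiplication and scalar multiplication as exponentiation on the positive reals:

```python
# V = positive reals R(+); "vector addition" is x*y, scalar multiplication is x**k.
def vadd(x, y):
    return x * y

def smul(k, x):
    return x ** k

zero = 1.0                            # the "zero vector" of this space
for x in (0.5, 2.0, 8.0):
    assert vadd(x, zero) == x         # additive identity: x "+" 1 = x
    assert vadd(x, 1.0 / x) == zero   # additive inverse of x is 1/x
    assert smul(0, x) == zero         # 0*x is the zero vector, since x**0 == 1
```

So the number $1$ is the zero *vector* of this space; the scalar $0$ is still the ordinary real number $0$, and multiplying a vector by the scalar $0$ lands on the zero vector, exactly as the axioms require.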
TITLE: Is indefinite integration non-linear? QUESTION [27 upvotes]: Let us consider this small problem: $$ \int0\;dx = 0\cdot\int1\;dx = 0\cdot(x+c) = 0 \tag1 $$ $$ \frac{dc}{dx} = 0 \qquad\iff\qquad \int 0\;dx = c, \qquad\forall c\in\mathbb{R} \tag2 $$ These are two conflicting results. Based on this other question, Sam Dehority's answer seems to indicate: $$ \int\alpha f(x)\;dx\neq\alpha\int f(x)\;dx,\qquad\forall\alpha\in\mathbb{R} \tag3 $$ However, this clearly implies that indefinite integration is nonlinear, since a linear operator $P$ must satisfy $P(\alpha f) = \alpha Pf, \forall\alpha\in\mathbb{R}$, including $\alpha=0$. After all, a linear combination of elements of a vector space $V$ may have zero-valued scalars: $f = \alpha g + \beta h, \forall\alpha,\beta\in\mathbb{R}$ and $g, h\in V$. This all seems to corroborate that zero is not excluded when it comes to possible scalars of linear operators. To take two examples, both matrix operators in linear algebra and derivative operators are linear, even when the scalar is zero. In the matrix case for instance, let the operator $A$ act on a vector: $A\vec{x} = \vec{y}$. Now: $A(\alpha\vec{x}) = \alpha A\vec{x} = \alpha\vec{y}$. This works even for $\alpha = 0$. Why is $(3)$ true? Can someone prove it formally? If $(3)$ is false, how do we fix $(1)$ and $(2)$? When exactly does the following equality hold (formal proof)? $$ \int\alpha f(x)\;dx = \alpha\int f(x)\;dx,\qquad\forall\alpha\in\mathbb{R} $$ I would appreciate formal answers and proofs. REPLY [1 votes]: My "answer" should be a comment under the two posted answers, but I don't have enough reputation to post comments. There is a (very popular) mistake in both answers. The indefinite integral operator does NOT give a class of functions equal up to constant translation. Let's look at an example.
Someone might evaluate the integral of $1/x$ like this: $$ \int\frac{1}{x}dx = \begin{cases} \log x + C & \text{if x > 0}\\ \log(-x) + C & \text{if x < 0}\\ \end{cases} = \log|x| + C \\ \text{where C} \in \mathbb{R}. $$ However, this is not an exhaustive answer. The constants for negative and positive x can differ. The exhaustive answer would be $$ \int\frac{1}{x}dx = \begin{cases} \log x + C & \text{if x > 0}\\ \log(-x) + D & \text{if x < 0}\\ \end{cases},\\ \text{where C,D} \in \mathbb{R}. $$ Thus it is not true to say that $$ \int f(x) dx = \{ F(x) + C : C \in \mathbb{R} \}\\ \text{for a particular $F(x)$}. $$
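For the $\alpha=0$ puzzle in the question, here is one common way to make the set-valued reading precise (my own sketch of the standard convention, stated on a single interval so the two-constant issue above does not interfere):

```latex
% Indefinite integrals as sets of antiderivatives on an interval:
\int f(x)\,dx \;=\; \{\, F + C : C \in \mathbb{R} \,\}, \qquad
\alpha \int f(x)\,dx \;=\; \Bigl\{\, \alpha G : G \in \textstyle\int f(x)\,dx \,\Bigr\}.
% For \alpha \neq 0 the scalar can be pulled out, since
% \{\alpha F + \alpha C : C \in \mathbb{R}\} = \{\alpha F + C' : C' \in \mathbb{R}\}:
\int \alpha f(x)\,dx \;=\; \alpha \int f(x)\,dx \qquad (\alpha \neq 0).
% For \alpha = 0 the two sides differ:
0 \cdot \int f(x)\,dx = \{0\}, \qquad \int 0\,dx = \{\, C : C \in \mathbb{R} \,\},
% which is exactly why the first step in (1) of the question is invalid.
```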
TITLE: Spivak problem on Schwarz inequality QUESTION [2 upvotes]: I have a question regarding problem 19 in the 3rd Ed. of Spivak's Calculus. Specifically, part (a). The question concerns the Schwarz inequality: $$ x_1y_1 + x_2y_2 \leq \sqrt{x_1^2+x_2^2}\sqrt{y_1^2+y_2^2} \ . $$ It says to prove that if $x_1=\lambda y_1$ and $x_2 = \lambda y_2$ for some number $\lambda$, then equality holds in the Schwarz inequality. Substituting the given values for $x_1$ and $x_2$ we have $$ \lambda (y_1^2+ y_2^2) \leq |\lambda|(y_1^2+y_2^2) \ . $$ It appears to me that equality can only hold if $\lambda \geq 0$. Can someone explain to me how equality holds for any given $\lambda$? REPLY [0 votes]: If you choose to define $y_1 \geq x_1$ and $y_2 \geq x_2$ without loss of generality, then $|\lambda| = \lambda$; since the choice is arbitrary in the formula, you don't need to require $\lambda > 0$.
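A note of my own (not part of the original exchange) spelling out the sign issue the asker noticed: substituting $x_i = \lambda y_i$ gives

```latex
x_1 y_1 + x_2 y_2 = \lambda\,(y_1^2 + y_2^2), \qquad
\sqrt{x_1^2 + x_2^2}\,\sqrt{y_1^2 + y_2^2} = |\lambda|\,(y_1^2 + y_2^2),
% so equality in the form stated above forces \lambda \geq 0
% (unless y_1 = y_2 = 0), while the absolute-value form
\left| x_1 y_1 + x_2 y_2 \right| \;\leq\; \sqrt{x_1^2 + x_2^2}\,\sqrt{y_1^2 + y_2^2}
% attains equality for every real \lambda.
```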
TITLE: Find $G$ and $H$ such that $dF$ has given form QUESTION [1 upvotes]: Let $F$ be a cumulative distribution function and $$dF(x)=\begin{cases}dx/3&x\in (0,1)\cup(2,3)\\1/6&x \in \{1,2\}\\0&\mathrm{elsewhere}\end{cases}$$ Find a continuous cdf $H$, a discrete cdf $G$ and a real constant $c$ such that $$F(x)=cG(x)+(1-c)H(x)\ \forall x$$ What is the best method of solving such tasks? $c=1/3$, $H(x)=x/2,\ x \in (0,1)\cup (2,3)$ but that's just a guess. How can I solve the problem instead of trial and error? REPLY [1 votes]: $$cG+(1-c)H=F=\begin{cases}0,&x<0\\x/3,&0\le x<1\\1/2,&1\le x<2\\x/3,&2\le x<3\\1,&x\ge3\end{cases}$$ $G(x)$ is a staircase function and $H(x)$ is continuous. From the above equation, for $c\ne0$, you know two things: If $F$ has a finite jump $\Delta$ at $y$, then $G$ has a jump at $y$ equal to $\Delta/c$. If $G$ has a finite jump $\delta$ at $y$, then $F$ has a jump at $y$ equal to $c\delta$. Thus, the sets of points of discontinuity of $F$ and $G$ are identical. So $G(x)$ has jumps at $1,2$ and these jumps are equal to $\Delta/c=1/6c$. If you take $$G(x)=\begin{cases}0,&x<1\\1/6c,&1\le x<2\\1/3c,&x\ge2\end{cases}$$ then requiring $G(x)=1$ for $x\ge 2$, i.e. $1/3c=1$, gives $c=1/3$. Now you can find $H(x)$ by solving $H(x)=[F(x)-cG(x)]/(1-c)$.
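A numerical sanity check of the resulting decomposition with $c=1/3$ (the explicit piecewise formula for $H$ below is my own reconstruction from $H=(F-cG)/(1-c)$, not part of the answer):

```python
def F(x):
    # cdf from the problem statement
    if x < 0: return 0.0
    if x < 1: return x / 3.0
    if x < 2: return 0.5
    if x < 3: return x / 3.0
    return 1.0

def G(x):
    # discrete cdf: jumps of 1/2 at x = 1 and x = 2  (1/6c = 1/2 when c = 1/3)
    if x < 1: return 0.0
    if x < 2: return 0.5
    return 1.0

def H(x):
    # continuous cdf, reconstructed from H = (F - c*G)/(1 - c)
    if x < 0: return 0.0
    if x < 1: return x / 2.0
    if x < 2: return 0.5
    if x < 3: return (x - 1.0) / 2.0
    return 1.0

c = 1.0 / 3.0
for k in range(-10, 41):
    x = k / 10.0
    assert abs(F(x) - (c * G(x) + (1.0 - c) * H(x))) < 1e-12
```

Note that $H$ is constant on $[1,2)$ and glues continuously at $x=1$, $2$ and $3$, which is what makes it a legitimate continuous cdf.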
TITLE: Integral points on varieties QUESTION [19 upvotes]: I recently came across an interesting phenomenon which confused me slightly, concerning integral points on varieties. For example, consider $X = \mathbb{A}_{\mathbb{Z}}^{n+1} \setminus \{0\}$, affine $(n+1)$-space over $\mathbb{Z}$ with the origin removed. Naively, one would guess that $X(\mathbb{Z})$ is the set of integers $\{ (x_0,x_1,\ldots,x_n) \in \mathbb{Z}^{n+1} \setminus \{0\}\}$. However, some work that I have been doing recently with universal torsors has in fact led me to believe that $X(\mathbb{Z})$ should equal $\mathbb{P}^n(\mathbb{Z})$, at least modulo the action of $\mathbb{G}_m$. That is, $X(\mathbb{Z})$ is actually the set of integers $\{ (x_0,x_1,\ldots,x_n) \in \mathbb{Z}^{n+1}\setminus \{0\}\}$ such that $\gcd(x_0,x_1,\ldots,x_n)=1$. Is there a simple explanation for why this is the case? Thanks! Dan REPLY [19 votes]: Dear Daniel, here is a detailed explanation, respectfully following the sacred texts (EGA or Hartshorne). a) First of all, $\mathbb A^{n+1}_{\mathbb Z}$ has no origin, despite our classical intuition! As a substitute, it has the prime (but not maximal) ideal $\mathcal P=(X_1,X_2,...,X_{n+1})$ and corresponding to it the integral subscheme $V=V(\mathcal P)$. And what you want to calculate is the set of $\mathbb Z$-points of $U=\mathbb A^{n+1}\setminus V(\mathcal P)$, i.e. the set of morphisms $Spec(\mathbb Z)\to U$. Let's do that. b) A morphism $f: Spec \mathbb Z \to \mathbb A^{n+1}$ corresponds to a morphism of rings $ev_a: \mathbb Z[X_1,X_2,...,X_{n+1}] \to \mathbb Z$, evaluation of integral polynomials at a tuple $a=(a_1,a_2,...,a_{n+1}) \in {\mathbb Z}^{n+1} $. Call $f=f_a : Spec \mathbb Z \to \mathbb A^{n+1}$ the corresponding morphism. c) We must ensure that the image of $f$ lies in $U$, i.e. that it is disjoint from $V=V(\mathcal P)$.
But the points of $V$ are its generic point $(X_1,X_2,...,X_{n+1})$ and its closed points $\mathcal M_p=(X_1,X_2,...,X_{n+1}, p)$, $p$ a prime. It is enough to show that these closed points are not in the image of $f$. Equivalently, we must show that the fibre of $f$ at $\mathcal M_p$ is empty. Since this fibre is the spectrum of $\mathbb Z / (a_1,a_2,...,a_{n+1},p)$, our condition is that this ring be zero, or equivalently that the ideal $(a_1,a_2,...,a_{n+1},p)$ be all of $\mathbb Z$: this will happen exactly if $p$ does not divide all of the $a_i$'s. Since this must hold for all primes, we get: Final result The $\mathbb Z$-points of $U=\mathbb A^{n+1}\setminus V(\mathcal P)$ are given by $(n+1)$-tuples of integers whose g.c.d. is 1. Reminder I have used that the fibre of a morphism of affine schemes $f:SpecB\to Spec A$ at $\mathcal M \in Specmax A$ is $Spec ( B/\mathcal M B) $.
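A concrete sketch of the final result (my own illustration, not from the answer): the fibre over $\mathcal M_p$ is the spectrum of $\mathbb Z/(a_1,\ldots,a_{n+1},p) \cong \mathbb Z/d$ with $d=\gcd(a_1,\ldots,a_{n+1},p)$, so it is empty exactly when $d=1$:

```python
from math import gcd
from functools import reduce

def fibre_is_empty(a, p):
    # Z/(a_1, ..., a_{n+1}, p) is the zero ring iff gcd(a_1, ..., a_{n+1}, p) == 1
    return reduce(gcd, a + (p,)) == 1

a = (6, 10, 15)   # gcd = 1: a genuine Z-point of U, every fibre is empty
assert all(fibre_is_empty(a, p) for p in (2, 3, 5, 7, 11))

b = (6, 10, 14)   # gcd = 2: the image of f_b meets V at the closed point M_2
assert not fibre_is_empty(b, 2)
```

Since $\gcd(a_1,\ldots,a_{n+1},p)=1$ for every prime $p$ exactly when $\gcd(a_1,\ldots,a_{n+1})=1$, this recovers the g.c.d. condition of the Final result.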
TITLE: Every neighbourhood of $1$ is of form $(U\setminus \mathbb{N}) \cup \{1\}$? QUESTION [1 upvotes]: From Topology without tears: Let $X$ be the set $(\mathbb{R}\setminus\mathbb{N})\cup \{1\}$. Define a function $f:\mathbb{R}\to X$: \begin{equation} f(x) = \begin{cases} x & \text{if $x \in \mathbb{R}\setminus\mathbb{N}$}\\ 1 & \text{if $x \in \mathbb{N}$} \end{cases} \end{equation} Further define a topology $\tau$ on $X$ by $$\tau=\{U:U\subseteq X \text { and } f^{-1}(U) \text{ is open in the euclidean topology on } \mathbb{R} \}$$ Then prove: 1) $f$ is continuous. 2) Every open neighbourhood of $1$ in $(X,\tau)$ is of the form $(U\setminus \mathbb{N}) \cup \{1\}$, where $U$ is open in $\mathbb{R}$. I am not getting how every open neighbourhood of $1$ in $(X,\tau)$ is of that form. Because this open neighbourhood will also be an open set in $(X,\tau)$. And according to me, if $S\subseteq X$ contains $1$, then $f^{-1}(S)=\{x\in \mathbb{R}:f(x)\in S\}$ will also contain $\mathbb{N}$ (since for all $x \in \mathbb{N}$, $f(x)=1\in S$). Then in that case how will $f^{-1}(S)$ be open in the euclidean topology? As $\mathbb{N}$ is itself not open in the euclidean topology on $\mathbb{R}$? And hence, according to me, no open set in $(X,\tau)$ can contain $1$. Any hint where I am wrong is appreciated. Thanks in advance... REPLY [1 votes]: You are correct that $f^{-1}(\{1\})$ is $\mathbb N$. To give an example of an open $S\subset X$, pick a neighborhood $U_n$ of each $n\in\mathbb N$. Let $S_0=\bigcup_{n\in\mathbb N}U_n$, and $S=(S_0\setminus\mathbb N)\cup\{1\}$. Then we have $f^{-1}(S)=S_0$, which is open in $\mathbb R$. Now, can you prove that every open set in $X$ containing $1$ looks like that?
{"set_name": "stack_exchange", "score": 1, "question_id": 4050216}
TITLE: Get a third point (lat, lng) from two given QUESTION [0 upvotes]: I have two points as follows (the distance between them is variable): I need to get a third as shown: The two first points change all the time, including the distance between them. My problem: I have many points on a road obtained from a GPS, and a photo taken from every point. I need to write a script to put a new point to the right, ahead from the point of view of the camera positioned at the second point. Many years have passed since my high school, and I can't remember trigonometry anymore. Does anybody know a formula to get this point? Thank you! REPLY [0 votes]: To get the distance and angle from the first point to the third point: Pictorially, draw a line from the third point to the line passing through the first two points so that it hits perpendicularly. This distance is $2.5$ m by the 30-60-90 triangle identity. Then the distance from the point where the two lines intersect to the second point is $2.5 \sqrt3$. Add this to the variable distance between the first two points, and call this summed distance $X$. By the Pythagorean theorem the total distance is $T=\sqrt{2.5^2+X^2}$. To get the angle, use the law of cosines: take the square of the distance $X$, add the square of the distance $T$, and subtract $2.5^2$; then divide all of this by $2XT$. Finally, take the arccosine. That is the angle from the line passing through the first two points to the line passing through the first and the third. The formula is: \begin{equation*} A = \arccos\left(\frac{X^2+T^2-2.5^2}{2XT}\right) \end{equation*}
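A small script along the lines of the recipe above (a sketch: the function name and the parameter $d$, the variable distance between the first two points, are my own, and the $2.5$ m offset and the $2.5\sqrt3$ construction are the figure-specific values from the answer):

```python
import math

def third_point_distance_and_angle(d, offset=2.5):
    # X: distance from the first point to the foot of the
    # perpendicular dropped from the third point
    X = d + offset * math.sqrt(3)
    # total distance from first to third point (Pythagorean theorem)
    T = math.hypot(offset, X)
    # law of cosines, solved for the angle at the first point
    A = math.acos((X**2 + T**2 - offset**2) / (2 * X * T))
    return T, A
```

Because $T=\sqrt{2.5^2+X^2}$, the arccosine simplifies to $\arctan(2.5/X)$, which is a handy cross-check on the implementation.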
{"set_name": "stack_exchange", "score": 0, "question_id": 842634}
\begin{document} \title{The $L^2$ restriction norm of a Maass form on $SL_{n+1}(\mathbb{Z})$.} \author{Xiaoqing Li} \address{Department of Mathematics \\ State University of New York at Buffalo \\ Buffalo, NY, 14260 } \email{XL29@buffalo.edu} \author{Sheng-Chi Liu} \author{Matthew P. Young} \address{Department of Mathematics \\ Texas A\&M University \\ College Station \\ TX 77843-3368 \\ U.S.A.} \email{scliu@math.tamu.edu} \email{myoung@math.tamu.edu} \thanks{This material is based upon work supported by the National Science Foundation under agreement Nos. DMS-0901035 (X.L.), DMS-1101261 (M.Y.). Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. The first-named author was also partially supported by an Alfred P. Sloan Foundation Fellowship.} \begin{abstract} We discuss upper and lower bounds for the $L^2$ restriction norm of a Maass form on $\SL{n+1}{Z}$. \end{abstract} \maketitle \section{Introduction} It is a basic question in quantum chaos to understand how eigenfunctions of the Laplacian behave as the eigenvalue becomes large. It is natural to study the $L^p$-norm of the eigenfunctions restricted to a submanifold of the original domain. For very general results along these lines, see \cite{BGT} and \cite{Marshall}. In some particularly interesting cases, these restriction norms are related to $L$-functions. Here we study such a case where we consider a Maass form on $\SL{n+1}{Z} \backslash \mathcal{H}^{n+1}$ (with $n \geq 1$ arbitrary) restricted to $(\SL{n}{Z} \backslash \mathcal{H}^{n}) \times \mathbb{R}^+$, where $\mathcal{H}^m = SL_{m}(\mr)/SO_m(\mr)$. This leads to a family of $GL_{n+1} \times GL_{n}$ $L$-functions.
The space $\mathcal{H}^{n+1}$ is a real manifold of dimension $\frac{n(n+3)}{2}$ while $\mathcal{H}^n \times \mathbb{R}^+$ has $\frac{(n-1)(n+2)}{2} + 1$ dimensions, so this is a codimension $n$ restriction. For a given manifold, submanifold, and choice of $L^p$-norm, it is in general not known what size to expect for the restriction norm. For example, on the space $\SL{2}{\mz} \backslash \mathbb{H}$, it is known that the supremum norm can be $\gg \lambda^{1/12 - \varepsilon}$ \cite{SarnakMorowetz}, though the point where this value is attained is high in the cusp (and changes with the eigenvalue). On the other hand, if one restricts to the point $i$ (or any other fixed Heegner point) then the Waldspurger/Zhang \cite{Waldspurger} \cite{Zhang} formula relates the $L^2$-norm of a Maass form to the central values of a Rankin-Selberg $L$-function, and the Lindel\"{o}f Hypothesis gives a best-possible upper bound of $\lambda^{\varepsilon}$ here. There are many interesting subtleties in understanding the sizes of these restriction norms \cite{SarnakMorowetz} \cite{SarnakReznikov} \cite{Milicevic} \cite{Templier}. When the restriction norm is related to a mean value of nonnegative $L$-functions, the Lindel\"{o}f hypothesis should reveal its size. However, the present example gives a case where, even assuming the Lindel\"{o}f hypothesis, it is not immediately clear what it implies about the restriction norm. It is only after some intricate combinatorial arguments that one can deduce the size of the restriction norm. As part of this analysis, we compute the volumes of a certain parametrized family of $n$-dimensional polytopes (see Section \ref{section:polytope} below). More than this, our application requires the estimation of an integral of a more complicated function over such a polytope. We set some notation in order to describe our results.
Let $F(z)$ be an even Hecke-Maass form for $\SL{n+1}{Z}$ with Fourier coefficients $A_F(m_1, \dots, m_n)$ and Langlands parameters $(i\alpha_1, \dots, i\alpha_{n+1})$ with $\alpha_j \in \mr $ for all $j$ (meaning the form is tempered) which satisfy $\alpha_1 + \dots + \alpha_{n+1} = 0$. Suppose that $F$ is $L^2$-normalized on $\SL{n+1}{Z} \backslash \mathcal{H}^{n+1}$, and set \begin{equation} \label{eq:normdef} N(F) = \int_0^{\infty} \int_{SL_n(\mathbb{Z}) \backslash \mathcal{H}^n} \Big| F\begin{pmatrix} z_2 y & \\ & 1 \end{pmatrix} \Big|^2 d^*z_2 \frac{dy}{y}, \end{equation} where $z_2$ is defined by \eqref{eq:z2def} below and $d^* z_2$ is the left-invariant $SL_n(\mathbb{R})$ measure on $\mathcal{H}^n$ (see Proposition 1.5.3 of \cite{Goldfeld}) given by \begin{equation} d^*z_2 = \prod_{1 \leq i < j \leq n} d x_{i,j} \prod_{k=1}^{n-1} y_{k+1}^{-k(n-k)-1} dy_{k+1}. \end{equation} If $F$ is odd (so that $n+1$ is even, by Proposition 9.2.5 of \cite{Goldfeld}) then $N(F) = 0$, since the restriction of $F$ is invariant under $SL_{n}(\mz)$, and also odd, and hence zero by the same argument. Let $\lambda(F)$ be the Laplace eigenvalue of $F$, i.e., \begin{equation} \lambda(F) = \frac{(n+1)^3-(n+1)}{24} + \half(\alpha_1^2 + \dots + \alpha_{n+1}^2). \end{equation} Our main goal here is to determine the size of $N(F)$. For $n=1$, it is known that $1 \ll N(F) \ll \lambda(F)^{\varepsilon}$; see Theorem 6.1 of \cite{GRS}. For $n=2$, \cite{LiYoung} showed $N(F) \ll \lambda(F)^{\varepsilon}$ under the assumption that \begin{equation} \label{eq:FirstfourierUB} |A_F(1)|^2 \ll \lambda(F)^{\varepsilon}. \end{equation} Alternatively, one could assume a lower bound on the residue of $L(s, F \times \overline{F})$ at $s=1$; see Proposition \ref{prop:normformula} below. This is a difficult problem amounting to showing the non-existence of a Landau-Siegel zero for the Rankin-Selberg $L$-function $L(s, F \times \overline{F})$. 
For $GL_2 \times GL_2$, such a bound was famously shown by Hoffstein-Lockhart \cite{HL}. The condition \eqref{eq:FirstfourierUB} is a consequence of the generalized Riemann hypothesis, but is also a consequence of the Langlands functoriality conjectures; see Theorem 5.10 of \cite{IK}. For $n \geq 3$, it seems to be quite ambitious to prove a strong bound on $N(F)$. In fact, even with standard, powerful assumptions in number theory such as Langlands and Lindel\"{o}f, it is {\em still} challenging to see what is the size of $N(F)$. As such, we shall undertake our investigations aided by some additional assumptions that we now describe. Besides \eqref{eq:FirstfourierUB} which was already needed for $n=2$, we also suppose that all Hecke-Maass forms are {\em tempered}, meaning that $\alpha_j \in \mr$ for all $j$. If $F$ is an $\SL{n+1}{Z}$ Hecke-Maass form, we require temperedness of all Hecke-Maass forms on $\SL{m}{Z}$ with $2 \leq m \leq n$ (note this is known for $m=2$ unconditionally). The Langlands functoriality conjectures imply temperedness \cite[Section 8]{Langlands}. We also require what we shall call the {\em weighted local Weyl law} which we describe in more detail in Section \ref{section:Weyl} below; specifically see \eqref{eq:WLWLlower} and \eqref{eq:WLWLupper}. The local Weyl law amounts to an estimate for the number of Langlands parameters $i \beta = (i\beta_1, \dots, i \beta_m)$ that lie in a box $\| \beta - \lambda \| \leq 1$ with $\lambda = (\lambda_1, \dots, \lambda_m) \in \mr^m$, $\lambda_1 + \dots + \lambda_m = 0$. The {\em weighted} local Weyl law gives an estimate for the sum of such Langlands parameters $i\beta$ weighted by the first Fourier coefficient $|A_{\beta}(1)|^2$ of the corresponding Hecke-Maass form. For $m=2$ this is a standard application of the Kuznetsov formula; Blomer \cite{Blomer} recently obtained the weighted local Weyl law for $m=3$ but for $m \geq 4$ this is open. 
The weights occurring in the weighted local Weyl law are quite natural because they appear in the Kuznetsov formula, which has seen extensive applications in number theory. \begin{mytheo} \label{thm:normUB} Assuming the generalized Lindel\"{o}f Hypothesis for Rankin-Selberg $L$-functions, temperedness of $F$ and for all Hecke-Maass forms on $\SL{m}{Z}$ with $2 \leq m \leq n$, the first Fourier coefficient bound \eqref{eq:FirstfourierUB}, and the weighted local Weyl law \eqref{eq:WLWLupper}, we have \begin{equation} \label{eq:NFUB} N(F) \ll_{n,\varepsilon} \lambda(F)^{\varepsilon}. \end{equation} \end{mytheo} \begin{mytheo} \label{thm:normLB} Assuming temperedness for $F$ and for all Hecke-Maass forms on $\SL{m}{Z}$ with $2 \leq m \leq n$, the weighted local Weyl law \eqref{eq:WLWLlower}, and the spacing condition $|\alpha_j - \alpha_k| \geq \lambda(F)^{\varepsilon}$ for all $j \neq k$, we have \begin{equation} N(F) \gg_{n,\varepsilon} \lambda(F)^{-\varepsilon}. \end{equation} \end{mytheo} Taken together, Theorems \ref{thm:normUB} and \ref{thm:normLB} largely pin down the size of $N(F)$. The condition that $|\alpha_j - \alpha_k| \geq \lambda(F)^{\varepsilon}$ in the lower bound can probably be removed; see Section \ref{section:walls}. The spacing condition is relevant because the upper bound on the supremum norm of $F$ becomes smaller when the $\alpha_j$'s are closely spaced; see p.42 of \cite{SarnakMorowetz}. We chose to present the full details of the proof under the spacing assumption for simplicity of exposition. In Theorem \ref{thm:normLB} we do not require \eqref{eq:FirstfourierUB}; rather, we use the lower bound $|A_F(1)|^2 \gg \lambda(F)^{-\varepsilon}$ which is a consequence of the convexity bound for Rankin-Selberg $L$-functions proved in general by Xiannan Li \cite{XiannanLi}.
In fact, our proof of Theorem \ref{thm:normLB} shows the stronger bound $N(F) \gg \lambda(F)^{-\varepsilon} |A_F(1)|^2$, and hence if \eqref{eq:NFUB} holds, we \emph{deduce} \eqref{eq:FirstfourierUB}. In this way, we see that the upper bound on $N(F)$ is inextricably linked with an upper bound on the first Fourier coefficient. Note that Theorem \ref{thm:normLB} is unconditional for $n=2$. We conjecture that the right order of magnitude of $N(F)$ is given by \begin{myconj} \label{conj:asymptotic} Based on the conjectures of \cite{CFKRS}, we conjecture \begin{equation} N(F) = C_n(\alpha) \log \lambda(F) + o(\log \lambda(F)), \end{equation} where $C_n(\alpha)$ is a function of the Langlands parameters of $F$ which satisfies $C_n(\alpha) \asymp_n 1$. \end{myconj} It is not clear for $n > 1$ if $C_n(\alpha) \sim_n C_n$ for some constant $C_n$ independent of $\alpha$; see \eqref{eq:Calpha} for the form of $C_n(\alpha)$ which is an $n$-fold integral involving ratios of gamma functions. The previous results all use as a starting point a formula for $N(F)$ in terms of Rankin-Selberg $L$-functions. Specifically, we apply the $GL_n$ spectral decomposition to \eqref{eq:normdef} and calculate the resulting integrals in terms of this family of $L$-functions; this is given in Proposition \ref{prop:normformula}. The Archimedean factors in this formula, crucially calculated by Stade in terms of gamma functions \cite{StadeAJM}, govern the practical parameterization of the family of $L$-functions. Of course, the sum over the $GL_n$ spectrum and the integral on the right hand side of \eqref{eq:normformula} is infinite but except for a finite region, the Archimedean factors are exponentially small (following from Stirling's formula) and do not contribute to $N(F)$ in a practical sense. We carry out this analysis in Section \ref{section:Archimedean}. 
This region turns out to be closely related to a problem in representation theory, namely, how an irreducible, finite-dimensional representation of $\GL{n+1}{C}$ decomposes into irreducibles when restricted to $\GL{n}{C}$. We explain this following the proof of Lemma \ref{lemma:rzero}; somehow the gamma factors in \eqref{eq:qtalphabetaDEF} are analytically detecting this decomposition. Using this decomposition and the Lindel\"{o}f hypothesis, we are able to deduce Theorem \ref{thm:normUB} in Section \ref{section:upperbound}. The formula \eqref{eq:normformula} below involves a sum over the $GL_n$ spectrum, expressed in terms of the Langlands parameters $\beta = (\beta_1, \dots, \beta_n)$ which live on the hyperplane $\beta_1 + \dots + \beta_n = 0$, as well as a $t$-integral over the real line. When combined, the relevant region becomes an $n$-dimensional box with sides parallel to the standard basis vectors of $\mathbb{R}^n$. Our proof of Theorem \ref{thm:normLB} is more difficult than that of Theorem \ref{thm:normUB}. The underlying reason is that we may use Lindel\"{o}f to obtain a uniform upper bound on the $L$-functions in the family, but there does not exist a uniform lower bound (of course $L$-functions are expected to have zeros on the critical line!). Instead, we rely on a very soft argument: on any sufficiently long interval (say at least some small power of the analytic conductor), the second moment of an individual $L$-function in the $t$-aspect is at least half the length of the interval; for a precise statement, see Proposition \ref{prop:secondmomentLowerBound}. To use this argument, we need to understand the relevant set of $\beta$'s for which the $t$-integral is long enough to use this lower bound. Instead of obtaining a box in $\mathbb{R}^n$, we obtain a convex polytope. This region is described in Section \ref{section:polytope}.
This polytope has a very special structure and we show that it is a zonotope, which is in fact naturally given as an affine projection of the $A_n$ lattice. Using this structure of the polytope, we are able to complete the proof of Theorem \ref{thm:normLB} in Section \ref{section:lowerbound}. In Section \ref{section:walls} we briefly discuss relaxing the spacing condition $|\alpha_j - \alpha_k| \geq \lambda(F)^{\varepsilon}$ which appears in Theorem \ref{thm:normLB}. Finally, in Section \ref{section:asymptotic} we discuss Conjecture \ref{conj:asymptotic}. Throughout the paper we often view $n$ as fixed and we may not always display the dependence of implied constants on $n$. We also use the common convention of letting $\varepsilon >0$ vary from line-to-line. \section{Maass forms and period integrals} We assume some familiarity with Goldfeld's book \cite{Goldfeld}. Our goal in this section is to produce a formula for $N(F)$ in terms of Rankin-Selberg $L$-functions which we give in Proposition \ref{prop:normformula} below. Let $u_j(z)$ be a Hecke-Maass form for $\Gamma:=SL_n(\mathbb{Z})$ with Fourier coefficients $B_j(m_2, \dots, m_n)$ and Langlands parameters $(i\beta_{1,j}, \dots, i\beta_{n,j})$ with $\text{Re}(i\beta_{i,j}) = 0$ for all $i$. Set $A_F(m_1, \dots, m_n) = A_F(1,\dots, 1) \lambda_F(m_1, \dots, m_n)$ and similarly $B_j(m_2, \dots, m_n) = B_j(1, \dots, 1) \lambda_j(m_2, \dots, m_n)$. For brevity we sometimes write $A_F(1,\dots, 1) =: A_F(1)$ and $B_j(1,\dots, 1) =: B_j(1)$. We may assume each Maass form is either even or odd according to whether $\lambda(m_1, \dots, - m_q) = \pm \lambda(m_1, \dots, m_q)$; see Proposition 9.2.6 in \cite{Goldfeld}. The Rankin-Selberg $L$-function on $GL_{n+1} \times GL_n$ is given by \begin{equation} \label{eq:RankinSelbergCuspidal} L(s, F \times \overline{u_j}) = \sum_{m_1 \geq 1} \dots \sum_{m_{n} \geq 1} \frac{\lambda_F(m_1, \dots, m_n) \overline{\lambda_j}(m_2, \dots, m_n)}{\prod_{k=1}^{n} m_k^{(n+1-k)s}}.
\end{equation} Let \begin{equation} P_{\text{min}} := \left\{ \begin{pmatrix} * & * & \dots & * \\ & * & \dots & * \\ & & \ddots & * \\ & & & * \end{pmatrix} \in \GL{n}{R} \right\}, \end{equation} and define the minimal parabolic Eisenstein series \begin{equation} E_{P_{\text{min}}}(z, iw) = \sum_{\gamma \in P_{\text{min}} \cap \Gamma \backslash \Gamma} I_{iw}(\gamma z), \end{equation} with \begin{equation} \label{eq:Ifunction} I_{iw}(z) = \prod_{j=1}^{n-1} \prod_{k=1}^{n-1} y_j^{ b_{j,k} i w_k}, \qquad b_{j,k} = \begin{cases} jk, \quad &\text{if } j + k \leq n \\ (n-j)(n-k), \quad &\text{if } j+k \geq n. \end{cases} \end{equation} Here $\text{Re}(i w_k) = \frac{1}{n}$, the Langlands parameters of $E_{P_{\text{min}}}$ are $(i\beta_{1,w}, \dots, i\beta_{n,w})$ with $\text{Re}(i\beta_{j,w}) = 0$ for all $j$, and satisfying \begin{equation} i w_k = \frac{1}{n}(1 + i\beta_{k,w} - i\beta_{k+1,w}), \qquad 1 \leq k \leq n-1. \end{equation} Let $B_{P_{\text{min}}}(m_2, \dots, m_n) = B_{P_{\text{min}}}(1,\dots, 1) \lambda_{P_{\text{min}}}(m_2, \dots, m_n)$ be the non-degenerate Fourier coefficients of $E_{P_{\text{min}}}$, and define \begin{equation} \label{eq:RankinSelbergEisenstein} L(s, F \times \overline{E}_{P_{\text{min}}}) = \sum_{m_1 \geq 1} \dots \sum_{m_{n-1} \geq 1} \sum_{m_n \geq 1} \frac{\lambda_F(m_1, \dots, m_n) \overline{\lambda_{P_{\text{min}}}}(m_2, \dots, m_n)}{\prod_{k=1}^{n} |m_k|^{(n+1-k)s}}. \end{equation} For $r \geq 2$ let $P = P_{n_1, \dots, n_r}$ be a standard parabolic subgroup of $\GL{n}{R}$ associated to the partition $n=n_1 + \dots + n_r$, i.e., \begin{equation} P =NM = \left\{ \begin{pmatrix} I_{n_1} & * & \dots & * \\ & I_{n_2} & \dots & * \\ & & \ddots & * \\ & & & I_{n_r} \end{pmatrix} \begin{pmatrix} m_{n_1} & & & \\ & m_{n_2} & & \\ & & \ddots & \\ & & & m_{n_r} \end{pmatrix} \in GL_n(\mathbb{R}) \right\}, \end{equation} where $I_k$ is the $k\times k$ identity matrix and $m_k \in \GL{k}{R}$. 
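For example, when $n = 3$ the exponents in \eqref{eq:Ifunction} are $b_{1,1} = b_{2,2} = 1$ and $b_{1,2} = b_{2,1} = 2$, so that \begin{equation} I_{iw}(z) = y_1^{i w_1 + 2 i w_2} y_2^{2 i w_1 + i w_2}, \end{equation} which is the familiar power function attached to the minimal parabolic Eisenstein series on $GL_3$.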
Let $\phi_j = (\phi_{j_1}, \dots, \phi_{j_r})$ be a vector of $r$ Hecke-Maass forms where $\phi_{j_k}$ with $1\leq k \leq r$ runs through an orthogonal basis of $C_{\text{cusp}}(\SL{n_k}{Z})$ with first Fourier coefficient equal to $1$, and the Langlands parameters of $\phi_{j_k}$ are \begin{equation} (i\beta_{j_k, \eta_k +1}, \dots, i\beta_{j_k, \eta_k + n_k}), \quad \text{where } \eta_1 =0, \quad \eta_k = n_1 + \dots + n_{k-1},\text{ for } k > 1. \end{equation} Note that $\eta_k + n_k = \eta_{k+1}$ for $k \geq 1$. Then for $v = (v_1, \dots, v_r) \in \mathbb{C}^r$ with $\sum_{k=1}^{r} n_k v_k = 0$ define the cuspidal Eisenstein series \begin{equation} E_P(z, iv, \phi_j) = \sum_{\gamma \in P \cap \Gamma \backslash \Gamma} \prod_{k=1}^{r} \phi_{j_k}(m_{n_k}(\gamma z)) I_{iv}(\gamma z, P) \end{equation} as in \cite{Goldfeld}, Definition 10.5.3. Also assume \begin{equation} \text{Re}(v_k + \eta_k + \frac{n_k - n}{2}) = 0, \end{equation} and let \begin{equation} i v_k^* = v_k + \eta_k + \frac{n_k - n}{2}. \end{equation} Notice in \cite{Goldfeld} p.318 Proposition 10.9.3, $s_k + \eta_k$ should be $s_k + \eta_k + \frac{n_k-n}{2}$. The Langlands parameters of $E_{P_{n_1, \dots, n_r}}$ are the components of $i\beta$ where \begin{equation} \label{eq:LanglandsEisenstein} \beta = (v_1^* + \beta_{j_1,1}, \dots, v_1^*+ \beta_{j_1, n_1} \big| v_2^* + \beta_{j_2, n_1 + 1}, \dots, v_2^* + \beta_{j_2, n_1 + n_2} \big| \dots). \end{equation} Here the notation indicates that $\beta$ has $n$ components, broken into $r$ blocks of size $n_1, n_2, \dots, n_r$; the vertical lines separate these blocks. Let \begin{equation} B_{{P_{n_1, \dots, n_r}}}(m_2, \dots, m_n) = B_{{P_{n_1, \dots, n_r}}}(1, \dots, 1)\lambda_{{P_{n_1, \dots, n_r}}}(m_2, \dots, m_n) \end{equation} be the non-degenerate Fourier coefficients of $E_{P_{n_1, \dots, n_r}}$. As before, one can define the Rankin-Selberg $L$-function as in \eqref{eq:RankinSelbergEisenstein}.
Our formula for $N(F)$ requires the following definitions. Recall that $F$ is even. Let \begin{equation} \mathcal{L}(s, F \times \overline{u_j}) = 2\overline{B_j}(1) A_F(1) L(s, F \times \overline{u_j}) G_j(s), \end{equation} if $u_j$ is even, and $\mathcal{L}(s, F \times \overline{u_j}) = 0$ if $u_j$ is odd, with \begin{multline} G_j(s) = G_j^*(s) \pi^{\frac{n^2}{2} - \sum_{l=1}^{n-1} \sum_{l \leq k \leq n-1} v_{l,k,j} - \sum_{l=1}^{n} \sum_{l \leq k \leq n} v_{l,k}^*} \\ \Big(\prod_{1 \leq k < l \le n+1} \Gamma(\frac{1+ i\alpha_k - i\alpha_l}{2}) \Big)^{-1} \Big(\prod_{1 \leq k < l \le n} \Gamma(\frac{1- i\beta_{k,j} + i\beta_{l,j}}{2}) \Big)^{-1}, \end{multline} \begin{equation} G_j^*(s) = 2^{-n} \pi^{-\frac{n(n+1)}{2} s} \prod_{l=1}^{n} \prod_{k=1}^{n+1} \Gamma(\frac{s- i\alpha_k + i\beta_{l,j}}{2}), \end{equation} and with \begin{equation} v_{l,k,j} = \frac{i}{2}(\beta_{n-k,j} - \beta_{n-k+l, j}), \quad v_{l,k}^* = \frac{i}{2}(\alpha_{n+1-k} - \alpha_{n+1-k+l}). \end{equation} Similarly, \begin{equation} \mathcal{L}(s, F \times \overline{E}_{P_{n_1, \dots, n_r}}) = 2\overline{B}_{P_{n_1, \dots, n_r}}(1) A_F(1) L(s, F \times \overline{E}_{P_{n_1, \dots, n_r}}) G_{j,P}(s), \end{equation} if $E_{P_{n_1, \dots, n_r}}$ is even (and vanishes if it is odd), with \begin{multline} G_{j,P}(s) = G_{j,P}^*(s) \pi^{\frac{n^2}{2} - \sum_{l=1}^{n-1} \sum_{l \leq k \leq n-1} v_{l,k,P_{n_1, \dots, n_r}} - \sum_{l=1}^{n} \sum_{l \leq k \leq n} v_{l,k}^*} \prod_{1 \leq k < l \le n+1} \Gamma(\frac{1+ i\alpha_k - i\alpha_l}{2})^{-1} \\ \Big(\prod_{1 \leq k_1 \leq k_2 \leq r} \mathop{\prod_{1 \leq l_1 \leq n_{k_1}} \prod_{1 \leq l_2 \leq n_{k_2}} }_{1 \leq \eta_{k_1} + l_1 < \eta_{k_2} + l_2 \leq n} \Gamma(\frac{1- i(v_{k_1}^* + \beta_{j_{k_1}, \eta_{k_1} + l_1}) + i(v_{k_2}^* + \beta_{j_{k_2}, \eta_{k_2} + l_2})}{2}) \Big)^{-1}, \end{multline} where \begin{equation} G_{j,P}^*(s) = 2^{-n} \pi^{-\frac{n(n+1)}{2} s} \prod_{m=1}^r \prod_{l=1}^{n_m} \prod_{k=1}^{n+1} \Gamma(\frac{s- 
i\alpha_k + i(v_m^* + \beta_{j_m, \eta_m+l})}{2}), \end{equation} and with \begin{equation} v_{l,k,P_{n_1, \dots, n_r}} = \frac{i}{2} (\beta_{n-k, P_{n_1,\dots, n_r}} - \beta_{n-k+l, P_{n_1, \dots, n_r}}), \end{equation} where $i\beta_{m, P_{n_1, \dots, n_r}}$ is the $m$-th Langlands parameter of $E_{P_{n_1, \dots, n_r}}$. Finally, \begin{equation} \mathcal{L}(s, F \times \overline{E}_{min}(\cdot, w)) = 2\overline{B}_{P_{min}}(1) A_F(1) L(s, F \times \overline{E}_{P_{min}}(\cdot, w)) G_{min}(s) \end{equation} if $E_{P_{min}}$ is even (and vanishes if it is odd), with \begin{multline} G_{min}(s) = G_{min}^*(s) \pi^{\frac{n^2}{2} - \sum_{l=1}^{n-1} \sum_{l \leq k \leq n-1} v_{l,k,P_{min}} - \sum_{l=1}^{n} \sum_{l \leq k \leq n} v_{l,k}^*} \\ \Big(\prod_{1 \leq k < l \le n+1} \Gamma(\frac{1+ i\alpha_k - i\alpha_l}{2}) \Big)^{-1} \Big(\prod_{1 \leq k < l \le n} \Gamma(\frac{1- i\beta_{k,w} + i\beta_{l,w}}{2}) \Big)^{-1}, \end{multline} \begin{equation} G_{min}^*(s) = 2^{-n} \pi^{-\frac{n(n+1)}{2} s} \prod_{l=1}^{n} \prod_{k=1}^{n+1} \Gamma(\frac{s- i\alpha_k + i\beta_{l,w}}{2}), \end{equation} \begin{equation} v_{l,k,P_{min}} = \frac{i}{2}(\beta_{n-k,w} - \beta_{n-k+l, w}), \end{equation} where recall $(i\beta_{1,w}, \dots, i \beta_{n,w})$ are the Langlands parameters of $E_{P_{min}}$. 
\begin{myprop} \label{prop:normformula} We have \begin{multline} \label{eq:normformula} N(F) = \frac{n}{2\pi} \sum_{\substack{\SL{n}{Z} \\ \text{ cuspidal spectrum}}} \intR |\mathcal{L}(1/2 +it, F \times \overline{u_j})|^2 dt \\ + \sum_{\substack{\SL{n_1}{Z} \\ \text{ cuspidal spectrum}}} \dots \sum_{\substack{\SL{n_r}{Z} \\ \text{ cuspidal spectrum}}} c_{n_1, \dots, n_r} \int_{\mr^r} |\mathcal{L}(1/2 + it, F \times \overline{E}_{P_{n_1, \dots, n_r}}(\cdot, iv, \phi_j))|^2 dt dv_1^* \dots dv_{r-1}^* \\ + c\intR \dots \intR |\mathcal{L}(1/2 + it, F \times \overline{E}_{P_{\text{min}}}(\cdot, iw))|^2 dt d \beta_{1,w} \dots d \beta_{n-1,w}, \end{multline} with the middle sum running over all partitions $n_1 + \dots + n_r = n$ where each $n_i \geq 1$, and $c_{n_1, \dots, n_r}$ and $c$ are certain positive constants; we take the convention that if $n_i = 1$ then we take the constant eigenfunction. \end{myprop} To prove Proposition \ref{prop:normformula}, we need the following lemmas. \begin{mylemma} For fixed $y > 0$, \begin{equation} f_y(z_2) := F\begin{pmatrix} z_2 y & \\ & 1 \end{pmatrix} \in L^2(\SL{n}{Z} \backslash \mathcal{H}^n), \end{equation} where \begin{equation} \label{eq:z2def} z_2 := \begin{pmatrix} 1 & x_{1,2} & \dots & x_{1,n} \\ & 1 & \dots & x_{2,n} \\ & & \ddots & \\ & & & 1 \end{pmatrix} Y, \quad Y=\begin{pmatrix} y_2 \dots y_n & & & & \\ & y_2 \dots y_{n-1} & & & \\ & & \ddots & & \\ & & & y_2 & \\ & & & & 1 \end{pmatrix} \prod_{k=2}^{n} y_k^{-\frac{n+1-k}{n}} \end{equation} \end{mylemma} \begin{proof} Since $F$ is a Maass form for $\SL{n+1}{Z}$, it has rapid decay when $y_k \rightarrow \infty$, $2 \leq k \leq n$. \end{proof} \begin{mylemma} \label{lemma:ujFinnerproduct} Let \begin{equation} \mathcal{L}(s, F \times \overline{u_j}) := \int_0^{\infty} \int_{\SL{n}{Z} \backslash \mathcal{H}^n} \overline{u_j}(z_2) F\begin{pmatrix} z_2 y & \\ & 1 \end{pmatrix} y^{n(s-\half)} d^*z_2 \frac{dy}{y}.
\end{equation} Then $\mathcal{L}(s, F \times \overline{u_j}) = 0$ if $u_j$ is odd, while if $u_j$ is even, we have \begin{equation} \mathcal{L}(s, F \times \overline{u_j}) = 2G_j(s) A_F(1) \overline{B_j}(1)L(s, F \times \overline{u_j}). \end{equation} \end{mylemma} \begin{proof} We have the Fourier expansion \begin{multline} F(z) = \sum_{\gamma \in U_n(\mathbb{Z}) \backslash \SL{n}{Z}} \sum_{m_1 \geq 1} \dots \sum_{m_{n-1} \geq 1} \sum_{m_n \neq 0} \frac{A_F(m_1, \dots, m_n)}{\prod_{k=1}^n |m_k|^{\frac{k(n+1-k)}{2}}} \\ W_J\left( \begin{pmatrix} m_{1} \dots |m_n| & & & \\ & \ddots & & \\ & & m_1 & \\ & & & 1 \end{pmatrix} \cdot \begin{pmatrix} \gamma & \\ & 1 \end{pmatrix} z, i\alpha, \psi_{1,\dots, 1, \frac{m_n}{|m_n|}} \right). \end{multline} Here $i\alpha = (i\alpha_1, \dots, i\alpha_{n+1})$ are the Langlands parameters of $F$. Then writing $M$ as shorthand for the diagonal matrix above, we have by unfolding \begin{multline} \label{eq:ujFinnerproduct} \int_{\SL{n}{Z} \backslash \mathcal{H}^n} \overline{u_j}(z_2) F\begin{pmatrix} z_2 y & \\ & 1 \end{pmatrix} d^*z_2 = \sum_{m_1 \geq 1} \dots \sum_{m_{n-1} \geq 1} \sum_{m_n \neq 0} \frac{A_F(m_1, \dots, m_n)}{\prod_{k=1}^n |m_k|^{\frac{k(n+1-k)}{2}}} \\ \times \int_{U_n(\mathbb{Z}) \backslash \mathcal{H}^n} \overline{u_j}(z_2) W_J\Big(M \begin{pmatrix} z_2 y & \\ & 1 \end{pmatrix} , \psi_{1,\dots, 1, \frac{m_n}{|m_n|}} \Big) d^* z_2. \end{multline} By \cite{Goldfeld} p.132, and with $\psi_M(x) = e(m_2 x_{n-1, n} + m_3 x_{n-2, n-1} + \dots + m_n x_{1,2})$, we have \begin{align} W_J\Big(M \begin{pmatrix} z_2 y & \\ & 1 \end{pmatrix} z, \psi_{1,\dots, 1, \frac{m_n}{|m_n|}} \Big) &= \psi_M(x) W_J\Big(M \begin{pmatrix} Y y & \\ & 1 \end{pmatrix}, \psi_{1,\dots, 1, \frac{m_n}{|m_n|}} \Big) \\ &= \psi_M(x) W_J\Big(M \begin{pmatrix} Y y & \\ & 1 \end{pmatrix}, \psi_{1,\dots, 1} \Big).
\end{align} Also note \begin{multline} \int_0^1 \dots \int_0^1 \overline{u_j}(z_2) \psi_M(x) \prod_{1 \leq i < j \leq n} d x_{i,j} \\ = \frac{\overline{B_j}(m_2, \dots, m_n)}{\prod_{k=2}^n |m_k|^{\frac{(k-1)(n+1-k)}{2}}} \overline{W}_J \left( \begin{pmatrix} m_{2} \dots |m_n| y_2 \dots y_n & & & \\ & \ddots & & \\ & & m_2 y_2 & \\ & & & 1 \end{pmatrix} , i\beta \right), \end{multline} where $i\beta = (i\beta_1, \dots, i\beta_n)$ are the Langlands parameters of $u_j$, and where here and in the following we do not write $\psi_{1,\dots,1}$ in the definition of the Jacquet Whittaker function. Then we have \begin{multline} \label{eq:Lmiddlecalculation} \mathcal{L}(s, F \times \overline{u_j}) = \sum_{m_1 \geq 1} \dots \sum_{m_{n-1} \geq 1} \sum_{m_n \neq 0} \frac{A_F(m_1, \dots, m_n) \overline{B_j}(m_2, \dots, m_n)}{m_1^{n/2} \prod_{k=2}^n |m_k|^{\frac{(n+1-k)(2k-1)}{2}} } \\ \int_0^{\infty} \dots \int_0^{\infty} W_J\Big(M \begin{pmatrix} Y y & \\ & 1 \end{pmatrix}, i\alpha \Big) \overline{W}_J \left( \begin{pmatrix} m_{2} \dots |m_n| y_2 \dots y_n & & & \\ & \ddots & & \\ & & m_2 y_2 & \\ & & & 1 \end{pmatrix} ,i\beta \right) \\ y_1^{n (s-\half)} \frac{dy_1}{y_1} \prod_{k=2}^n y_k^{-(k-1)(n+1-k)} \frac{dy_k}{y_k}. \end{multline} The inner integrals above simplify as \begin{multline} \int_0^{\infty} \dots \int_0^{\infty} W_J\left(M \begin{pmatrix} y_1 y_2 \dots y_n (y_2^{n-1} \dots y_n)^{-1/n} & & & \\ & \ddots & & \\ & & y_1 (y_2^{n-1} \dots y_n)^{-1/n} & \\ & & & 1 \end{pmatrix}, i\alpha \right) \\ \overline{W}_J \left( \begin{pmatrix} m_{2} \dots |m_n| y_2 \dots y_n & & & \\ & \ddots & & \\ & & m_2 y_2 & \\ & & & 1 \end{pmatrix} ,i\beta \right) y_1^{n (s-\half)} \frac{dy_1}{y_1} \prod_{k=2}^n y_k^{-(k-1)(n+1-k)} \frac{dy_k}{y_k}. 
\end{multline} Changing variables $y_1 \rightarrow y_1 (y_2^{n-1} \dots y_n)^{1/n}$ gives that this is \begin{multline} \int_0^{\infty} \dots \int_0^{\infty} (y_1^n y_2^{n-1} \dots y_n)^{(s-\half)} W_J \left(M \begin{pmatrix} y_1 y_2 \dots y_n & & & \\ & \ddots & & \\ & & y_1 & \\ & & & 1 \end{pmatrix}, i\alpha \right) \\ \overline{W}_J \left( \begin{pmatrix} m_{2} \dots |m_n| y_2 \dots y_n & & & \\ & \ddots & & \\ & & m_2 y_2 & \\ & & & 1 \end{pmatrix}, i\beta \right) \frac{dy_1}{y_1} \prod_{k=2}^n y_k^{-(k-1)(n+1-k)} \frac{dy_k}{y_k}. \end{multline} Next we change variables with $y_j \rightarrow y_j/|m_j|$, getting \begin{multline} \int_0^{\infty} \dots \int_0^{\infty} W_J\left(\begin{pmatrix} y_1 y_2 \dots y_n & & & \\ & \ddots & & \\ & & y_1 & \\ & & & 1 \end{pmatrix} , i\alpha \right) \overline{W}_J \left( \begin{pmatrix} y_2 \dots y_n & & & \\ & \ddots & & \\ & & y_2 & \\ & & & 1 \end{pmatrix}, i\beta \right) \\ \Big(\prod_{k=1}^n \frac{1}{|m_k|^{(n+1-k)s}} \Big) \Big(\prod_{k=1}^n |m_k|^{\frac{(n+1-k)(2k-1)}{2}} \Big) (y_1^n y_2^{n-1} \dots y_n)^{(s-\half)} \frac{dy_1}{y_1} \prod_{k=2}^n y_k^{-(k-1)(n+1-k)} \frac{dy_k}{y_k}. \end{multline} Inserting this into \eqref{eq:Lmiddlecalculation}, we obtain \begin{equation} \mathcal{L}(s, F \times \overline{u_j}) = 2\overline{B_j}(1) A_F(1) L(s, F \times \overline{u_j}) G_j(s), \end{equation} provided $u_j$ is even, where \begin{multline} G_j(s) = \int_0^{\infty} \dots \int_0^{\infty} W_J \left(\begin{pmatrix} y_1 y_2 \dots y_n & & & \\ & \ddots & & \\ & & y_1 & \\ & & & 1 \end{pmatrix}, i\alpha \right) \\ \overline{W}_J \left( \begin{pmatrix} y_2 \dots y_n & & & \\ & \ddots & & \\ & & y_2 & \\ & & & 1 \end{pmatrix}, i\beta \right) (y_1^n y_2^{n-1} \dots y_n)^{(s-\half)} \prod_{k=1}^n y_k^{-(k-1)(n+1-k)} \frac{dy_k}{y_k}.
\end{multline} Let \begin{equation} \label{eq:WhittakerScaledDef} W_J^*(y, i\beta) = W_J(y, i\beta) \prod_{j=1}^{n-1} \prod_{j \leq k \leq n-1} \pi^{-\half - iv_{j,k}} \Gamma(\thalf + iv_{j,k}), \end{equation} where \begin{equation} iv_{j,k} = \frac{i}{2} \sum_{l=0}^{j-1} (\beta_{n-k+l} - \beta_{n-k+l+1}) = \frac{i}{2} (\beta_{n-k} - \beta_{n-k+j}). \end{equation} Here $W_J^*(y, i\beta)$ is the Whittaker function normalized by Stade \cite{StadeAJM}. We quote a formula of Stade \cite[Theorem 3.4]{StadeAJM}: \begin{multline} \int_0^{\infty} \dots \int_0^{\infty} W_J^* \left(\begin{pmatrix} y_1 y_2 \dots y_n & & & \\ & \ddots & & \\ & & y_1 & \\ & & & 1 \end{pmatrix}, i\alpha \right) \overline{W}_J^* \left( \begin{pmatrix} y_2 \dots y_n & & & \\ & \ddots & & \\ & & y_2 & \\ & & & 1 \end{pmatrix}, i\beta \right) \\ \times \prod_{k=1}^n (\pi y_k)^{(n+1-k)s} 2y_k^{-(n+1-k)(k-\half)} \frac{dy_k}{y_k} = \prod_{l=1}^n \prod_{k=1}^{n+1} \Gamma(\frac{s+ i\beta_{l,j} - i\alpha_k}{2}). \end{multline} From this, we deduce \begin{multline} G_j^*(s) = \int_0^{\infty} \dots \int_0^{\infty} W_J^* \left(\begin{pmatrix} y_1 y_2 \dots y_n & & & \\ & \ddots & & \\ & & y_1 & \\ & & & 1 \end{pmatrix}, i\alpha \right) \\ \times \overline{W}_J^* \left( \begin{pmatrix} y_2 \dots y_n & & & \\ & \ddots & & \\ & & y_2 & \\ & & & 1 \end{pmatrix}, i\beta \right) \prod_{k=1}^n y_k^{(n+1-k)(s+\half-k)} \frac{dy_k}{y_k} \\ = 2^{-n} \pi^{-\frac{n(n+1)}{2} s} \prod_{l=1}^n \prod_{k=1}^{n+1} \Gamma(\frac{s+ i\beta_{l,j} - i\alpha_k}{2}), \end{multline} and hence \begin{multline} G_j(s) = G_j^*(s) \prod_{j=1}^{n-1} \prod_{j \leq k \leq n-1} \pi^{\half - v_{j,k}} \Big(\prod_{1 \leq k < l \leq n} \Gamma(\frac{1-i\beta_k+i\beta_l}{2}) \Big)^{-1} \\ \prod_{j=1}^{n} \prod_{j \leq k \leq n} \pi^{\half + v_{j,k}^*} \Big(\prod_{1 \leq k < l \leq n+1} \Gamma(\frac{1+i\alpha_k-i\alpha_l}{2}) \Big)^{-1}, \end{multline} which becomes \begin{multline} G_j^*(s) \pi^{\frac{n^2}{2} - \sum_{j=1}^{n-1} \sum_{j \leq k \leq
n-1} v_{j,k} + \sum_{j=1}^n \sum_{j \leq k \leq n} v_{j,k}^*} \\ \prod_{1 \leq k < l \leq n} \Gamma(\frac{1-i\beta_k+i\beta_l}{2})^{-1} \prod_{1 \leq k < l \leq n+1} \Gamma(\frac{1+i\alpha_k-i\alpha_l}{2})^{-1}. \qedhere \end{multline} \end{proof} \begin{proof}[Proof of Proposition \ref{prop:normformula}] By the spectral decomposition of $\SL{n}{Z}$ \cite{MW}, \begin{equation} L^2(\SL{n}{Z} \backslash \mathcal{H}^n) = C_{\text{cusp}}(\SL{n}{Z} \backslash \mathcal{H}^n) \oplus (\text{Residual spectrum}) \oplus (\text{Continuous spectrum}). \end{equation} From the proof of Lemma \ref{lemma:ujFinnerproduct}, one sees that $\langle f_{y_1}, \phi \rangle = 0$ if $\phi$ has only degenerate Fourier coefficients. As a result, the residual spectrum does not enter, and only the cuspidal Eisenstein series contribute (note there are more Eisenstein series in the continuous spectrum in general), i.e., \begin{multline} f_{y_1}(z_2) = \sum_{\substack{ \SL{n}{Z} \\ \text{ cuspidal spectrum}}} \langle f_{y_1}, u_j \rangle u_j(z_2) \\ + \sum_{k=1}^{r} \sum_{\substack{\SL{n_k}{Z} \\ \text{ cuspidal spectrum}}} c_{n_1,\dots,n_r} \int_{\mr^r} \langle f_{y_1}, E_{P_{n_1, \dots, n_r}}(\cdot, iv, \phi_j) \rangle E_{P_{n_1, \dots, n_r}}(z_2, iv, \phi_j) dv_1^* \dots dv_{r-1}^* \\ + c \intR \dots \intR \langle f_{y_1}, E_{P_{\text{min}}}(\cdot, iw) \rangle E_{P_{\text{min}}}(z_2, iw)d \beta_{1,w} \dots d \beta_{n-1, w}, \end{multline} where $n_1 + \dots + n_r = n$ with $n_i \geq 1$, and $(n_1, \dots, n_r)$ runs through all such partitions of $n$. 
By Parseval, \begin{multline} \int_{\SL{n}{Z} \backslash \mathcal{H}^n} |f_{y_1}(z_2)|^2 d^* z_2 = \sum_{\substack{ \SL{n}{Z} \\ \text{ cuspidal spectrum}}} |\langle f_{y_1}, u_j \rangle|^2 \\ + \sum_{k=1}^{r} \sum_{\substack{\SL{n_k}{Z} \\ \text{ cuspidal spectrum}}} c_{n_1, \dots, n_r} \int_{\mr^r} |\langle f_{y_1}, E_{P_{n_1, \dots, n_r}}(\cdot, iv, \phi_j) \rangle|^2 dv_1^* \dots dv_{r-1}^* \\ + c \intR \dots \intR |\langle f_{y_1}, E_{P_{\text{min}}}(\cdot, iw) \rangle |^2 d\beta_{1,w} \dots d\beta_{n-1,w}. \end{multline} The Plancherel formula says \begin{equation} \int_0^{\infty} |h(y)|^2 \frac{dy}{y} = \frac{n}{2\pi} \intR |\widetilde{h}(n i t)|^2 dt, \qquad \widetilde{h}(ni t) = \int_0^{\infty} h(y) y^{ni t} \frac{dy}{y}. \end{equation} Hence \begin{multline} \int_0^{\infty} |\langle f_{y_1}, u_j \rangle|^2 \frac{dy_1}{y_1} =\frac{n}{2\pi} \int_{-\infty}^{\infty} \Big| \int_0^{\infty} \int_{\SL{n}{Z} \backslash \mathcal{H}^n} f_{y_1}(z_2) \overline{u_j}(z_2) d^* z_2 y_1^{n i t} \frac{dy_1}{y_1} \Big|^2 dt \\ = \frac{n}{2\pi} \intR |\mathcal{L}(1/2 + it, F \times \overline{u_j})|^2 dt. \end{multline} The other terms, involving the Eisenstein series, are handled in a similar way. \end{proof} \section{Rankin-Selberg calculations} In this section we relate the $L^2$ norm of a Maass form $F$ to the Rankin-Selberg $L$-function $L(s, F \times \overline{F})$: \begin{myprop} \label{prop:RankinSelbergL2formula} Suppose that $F$ is a tempered Hecke-Maass form for $\SL{n+1}{Z}$. Then for some absolute constant $c(n) > 0$, we have \begin{equation} \langle F, F \rangle = c(n) |A_F(1)|^2 \text{Res}_{s=1} L(s, F \times \overline{F}). \end{equation} \end{myprop} Xiannan Li \cite{XiannanLi} has shown that $\text{Res}_{s=1} L(s, F \times \overline{F}) \ll \lambda(F)^{\varepsilon}$, which shows $|A_F(1)|^2 \gg \lambda(F)^{-\varepsilon}$ provided $F$ is $L^2$-normalized.
The lower bound $\text{Res}_{s=1} L(s, F \times \overline{F}) \gg \lambda(F)^{-\varepsilon}$ is not known in general, but it would follow from the generalized Riemann hypothesis. \begin{proof} We generalize the calculation by starting with tempered Hecke-Maass forms $F$ and $G$ on $\SL{n+1}{Z}$ with respective Langlands parameters $i\alpha = (i \alpha_1, \dots, i \alpha_{n+1})$ and $i\beta = (i\beta_1, \dots, i \beta_{n+1})$. Recall the definition \eqref{eq:WhittakerScaledDef}. We also quote the following formula of Stade \cite{StadeIsrael}: \begin{multline} \Gamma(\frac{(n+1)s}{2}) \int_0^{\infty} \dots \int_0^{\infty} W_J^* (y, i\alpha) \overline{W}_J^*(y, i\beta) \prod_{j=1}^{n} (\pi y_j)^{(n+1-j)s} (2y_j)^{-(n+1-j)j} \frac{dy_j}{y_j} \\ = \prod_{j=1}^{n+1} \prod_{k=1}^{n} \Gamma(\frac{s + i\alpha_j - i \beta_k}{2}), \end{multline} where $y = \text{diag}(y_1 \dots y_n, y_1 \dots y_{n-1}, \dots, y_1, 1)$, and in the calculation we have used the fact that the $\beta_j$ and $\alpha_k$ are real. By a calculation on p.~369 of \cite{Goldfeld}, we have that if $F \overline{G}$ is even (that is, $F$ and $G$ are both even or both odd), then \begin{equation} \label{eq:FGinnerproduct} \zeta((n+1)s) \langle F \overline{G}, E_P(\cdot, \overline{s}) \rangle = 2 A_F(1) \overline{A_G(1)} L(s, F \times \overline{G}) G_{i\alpha, i\beta}(s), \end{equation} where \begin{equation} G_{i\alpha, i\beta}(s) = \int_0^{\infty} \dots \int_0^{\infty} W_J(y, i\alpha) \overline{W}_J(y, i \beta) \det(y)^s d^*y, \end{equation} where $d^*y = \prod_{k=1}^n y_k^{-k(n+1-k)} \frac{dy_k}{y_k}$ and $\det(y) = \prod_{j=1}^n y_j^{n+1-j}$. Thus \begin{equation} \det(y)^s d^*y= \prod_{j=1}^n y_j^{(n+1-j)s} y_j^{-j(n+1-j)} \frac{dy_j}{y_j}.
\end{equation} By Stade's formula, we have, with $a=-\frac{n(n+1)}{2}$ and $b=\frac{n(n+1)(n+2)}{6}$, \begin{multline} \label{eq:StadeGLnGLn} G_{i\alpha, i\beta}(s) = \frac{\pi^{as} 2^b}{\Gamma(\frac{(n+1)s}{2})} \Big[ \prod_{j=1}^n \prod_{j \leq k \leq n} \pi^{-\half - \half(i\alpha_{n+1-k}-i\alpha_{n+1-k+j})} \Gamma(\frac{1 + i\alpha_{n+1-k} - i\alpha_{n+1-k+j}}{2}) \Big]^{-1} \\ \Big[\prod_{j=1}^n \prod_{j \leq k \leq n} \pi^{-\half + \half(i\beta_{n+1-k}-i\beta_{n+1-k+j})} \overline{\Gamma}(\frac{1 + i\beta_{n+1-k} - i\beta_{n+1-k+j}}{2}) \Big]^{-1} \prod_{j=1}^{n+1} \prod_{k=1}^{n+1} \Gamma(\frac{s + i\alpha_j - i \beta_k}{2}). \end{multline} By Proposition 10.7.5 of \cite{Goldfeld}, $E_P^*(z,s) = \pi^{-(n+1)s/2} \Gamma((n+1)s/2) \zeta((n+1)s) E_P(z,s)$ has a simple pole at $s=1$ with residue $R$, say. Taking $F=G$ and the residue at $s=1$ on both sides of \eqref{eq:FGinnerproduct}, we have \begin{multline} R \frac{\pi^{\frac{n+1}{2}}}{ \Gamma(\frac{n+1}{2})} \langle F, F \rangle = |A_F(1)|^2 \text{Res}_{s=1} L(s, F \times \overline{F}) \frac{\pi^a 2^b}{\Gamma(\frac{n+1}{2})} \prod_{j=1}^{n+1} \prod_{k=1}^{n+1} \Gamma(\frac{1 + i\alpha_j - i \alpha_k}{2}) \\ \times \Big[ \prod_{j=1}^n \prod_{j \leq k \leq n} \pi^{-\half - \half(i\alpha_{n+1-k}-i\alpha_{n+1-k+j})} \Gamma(\frac{1 + i\alpha_{n+1-k} - i\alpha_{n+1-k+j}}{2}) \Big]^{-1} \\ \times \Big[\prod_{j=1}^n \prod_{j \leq k \leq n} \pi^{-\half + \half(i\alpha_{n+1-k}-i\alpha_{n+1-k+j})} \overline{\Gamma}(\frac{1 + i\alpha_{n+1-k} - i\alpha_{n+1-k+j}}{2}) \Big]^{-1}. \end{multline} Observe that the gamma factors involving $\alpha$ cancel, as do the powers of $\pi$ involving $\alpha$, and the proof is complete. \end{proof} \section{Local Weyl Law} \label{section:Weyl} The formula for $N(F)$ given by Proposition \ref{prop:normformula} involves a spectral sum of $\SL{n}{Z}$ Maass forms, and as such we need some control on this spectral sum.
This topic has seen some major recent advances but the precise results required here do not yet exist. Suppose that $(i\beta_1, \dots, i\beta_{n})$ are the Langlands parameters of a Maass form on $GL_n$, with $\beta_1 + \dots + \beta_n = 0$. We shall suppose that all forms are tempered so that $\beta_l \in \mathbb{R}$, for all $1 \leq l \leq n$. For a vector $\lambda = (\lambda_1, \dots, \lambda_n)$ with $\lambda_1 + \dots + \lambda_n = 0$, each $\lambda_l \in \mathbb{R}$, set \begin{equation} \label{eq:mudefinition} \mu(\lambda) = \prod_{1\leq k < l \leq n} (1 + |\lambda_k - \lambda_l|). \end{equation} Here our $\mu(\lambda)$ is $\tilde{\beta}(\lambda)$ as given by (3.4) in \cite{LapidMuller}. Then according to Proposition 4.5 of \cite{LapidMuller}, we have\footnote{Technically, Lapid and M\"{u}ller require a congruence subgroup of $\SL{n}{Z}$ so strictly speaking this result is not unconditional, but this is apparently a minor technical issue.} \begin{equation} \#\{\beta : \| \beta - \lambda \| \leq 1 \} \ll \mu(\lambda), \end{equation} where the count is over Maass forms with Langlands parameter $i\beta$. The corresponding lower bound is apparently not known in general. However, the lower bound is known on average in the sense that the number of Maass forms with Langlands parameter $\beta$ lying in a region of the form $t \Omega$ is \begin{equation} \asymp \int_{t \Omega} \mu(\lambda) d \lambda, \end{equation} as $t \rightarrow \infty$. In fact, \cite{LapidMuller} find an asymptotic count with a power saving. For our applications here, we find it most desirable to have the following estimates. Let $\lambda$ be given. 
Then for some fixed absolute constant $K \geq 1$, \begin{equation} \label{eq:WLWLlower} \mu(\lambda) \ll \sideset{}{^+}\sum_{\beta: \| \beta - \lambda \| \leq K} |B_{\beta}(1)|^2, \end{equation} where $B_{\beta}(1)$ is the first Fourier coefficient of the Hecke-Maass form associated to $\beta$, and the $+$ denotes that the sum is restricted to even forms. We also require an analogous upper bound with the continuous spectrum included, namely \begin{multline} \label{eq:WLWLupper} \sum_{\beta: \| \beta - \lambda \| \leq 1} |B_{\beta}(1)|^2 + \sum_{\phi_{j_1}} \dots \sum_{\phi_{j_r}} c_{n_1, \dots, n_r} \int_{\| \beta_{j_1, \dots, j_r} - \lambda \| \leq 1} |B_{P_{n_1, \dots, n_r}}(1)|^2 dv_1^* \dots dv_{r-1}^* \\ + c \int_{\| \beta_w - \lambda \| \leq 1} |B_{P_{\text{min}}}(1)|^2 d\beta_{1,w} \dots d\beta_{n-1, w} \ll \mu(\lambda), \end{multline} where $i\beta_{j_1, \dots, j_r}$ is the vector of Langlands parameters of $E_{P_{n_1, \dots, n_r}}$ and $i \beta_w$ is the vector of Langlands parameters of $E_{P_{\text{min}}}$, all the other notation being as defined before and in Proposition \ref{prop:normformula}. We call \eqref{eq:WLWLlower}-\eqref{eq:WLWLupper} the {\em weighted local Weyl law} (though technically it is not an asymptotic). Blomer has recently shown \eqref{eq:WLWLlower}-\eqref{eq:WLWLupper} for $n=3$ \cite{Blomer}. Blomer's proof relies on the $GL_3$ Kuznetsov formula as opposed to the Arthur-Selberg trace formula. The weighting inherent in the Kuznetsov formula (with the first Fourier coefficient as weights) is much more natural in our application here. \section{Archimedean development of the norm formula} \label{section:Archimedean} In this section we scrutinize the Archimedean part of the norm formula. We are eventually led to a combinatorial-type problem of integrating a certain function over a polytope. According to Proposition \ref{prop:normformula}, write $N(F) = N_d(F) + N_{\text{max}}(F) + N_{\text{min}}(F)$.
Then $N(F) \geq N_d(F)$, so for the purpose of the lower bound in Theorem \ref{thm:normLB} we may restrict our attention to $N_d(F)$. Our methods for obtaining an upper bound on $N_d(F)$ turn out to apply to the Eisenstein series also. It is a familiar fact from $\SL{2}{Z}$ that the continuous spectrum is usually negligible compared to the cuspidal spectrum. By Proposition \ref{prop:normformula}, we have (switching $u_j$ with $\overline{u_j}$) \begin{equation} \label{eq:normUBspectraldecomposition} N_d(F) \asymp |A_F(1)|^2 \sumstar_{j} |B_j(1)|^2 \intR |L(1/2 + it, F \times u_j)|^2 q(t, \alpha, \beta_j) dt, \end{equation} where the $*$ indicates that the sum is restricted to cusp forms $u_j$ of the same parity as $F$, the implied constants depend only on $n$, and \begin{equation} \label{eq:qtalphabetaDEF} q(t, \alpha, \beta_j) = \frac{\prod_{l=1}^{n} \prod_{k=1}^{n+1} |\Gamma(\frac{1/2 + it+ i\alpha_k + i\beta_{l,j}}{2}) |^2}{\Big(\prod_{1 \leq k < l \le n+1} |\Gamma(\frac{1+ i\alpha_k - i\alpha_l}{2}) |^2\Big) \Big(\prod_{1 \leq k < l \le n} | \Gamma(\frac{1+ i\beta_{k,j} - i\beta_{l,j}}{2})|^2 \Big)}. \end{equation} Recall that we are assuming all our forms are tempered, so that $\alpha_k, \beta_{l,j} \in \mathbb{R}$. Then Stirling's formula shows that \begin{equation} \label{eq:qtalphabetaSTIRLING} q(t, \alpha, \beta) \asymp \exp(- \frac{\pi}{2} r(t, \alpha, \beta)) \prod_{l=1}^{n} \prod_{k=1}^{n+1} (1 + |t + \alpha_k + \beta_{l}|)^{-1/2}, \end{equation} where \begin{equation} r(t, \alpha, \beta) = \sum_{l=1}^n \sum_{k=1}^{n+1} |t + \alpha_k + \beta_{l}| - \sum_{1 \leq k < l \leq n+1} |\alpha_k-\alpha_l| - \sum_{1 \leq k < l \leq n} |\beta_{k} - \beta_{l}| . \end{equation} Since $F$ is a nice function that is $L^2$-normalized, all of its $L^p$ norms (as well as $N(F)$) are polynomially bounded in terms of $\lambda(F)$. However, this is not (yet!) clear from the formula \eqref{eq:normUBspectraldecomposition}.
In Lemma \ref{lemma:rnonnegative} below we will show that $r(t, \alpha, \beta) \geq 0$ for all $t \in \mathbb{R}$, $\alpha \in \mathbb{R}^{n+1}$, and $\beta \in \mathbb{R}^n$, so that at least $q(t,\alpha, \beta)$ is not exponentially large. We will also show that $r(t, \alpha, \beta) = 0$ for a certain nice set of $t$ and $\beta$; outside of this set, $r(t,\alpha,\beta)$ quickly becomes large, which allows us to finitize the integral over $t$ and the sum over $j$ in \eqref{eq:normUBspectraldecomposition}. The set of $\beta$'s such that there exists a $t$ with $r(t, \alpha, \beta) = 0$ turns out to define a polytope that we shall study extensively in Section \ref{section:polytope}. \begin{mylemma} \label{lemma:rnonnegative} Suppose $\alpha \in \mr^{n+1}$, $\beta \in \mr^n$, and $t \in \mathbb{R}$. Then $r(t,\alpha, \beta) \geq 0$. \end{mylemma} \begin{proof} Note that as a function of $t$, $r(t, \alpha, \beta)$ is piecewise linear. It has slope $n(n+1)$ for large $t > 0$, and slope $-n(n+1)$ for large $-t > 0$. Each time $-t$ passes through a point $\alpha_k + \beta_{l}$ the slope changes by $2$. By this reasoning we see that the graph of $r(t,\alpha, \beta)$ is ``flat'' (has zero slope) on an interval between the two ``middle'' points $\alpha_k + \beta_{l}$. To this end, it is natural to partition the set $S=\{ \alpha_k + \beta_l : 1 \leq k \leq n+1, 1 \leq l \leq n \}$ as $S_{+} \cup S_{-}$ where $|S_{+}| = |S_{-}| = \half |S|$, and each element of $S_+$ is $\geq$ each element of $S_-$; in case of multiplicity there may be more than one way to choose $S_+$ and $S_-$. Define the {\em median interval} $I_M$ as \begin{equation} \label{eq:IMdef} I_M = \{t \in \mathbb{R} : t+ s_{+} \geq 0 \text{ and } t + s_{-} \leq 0 \text{ for all } s_{+} \in S_{+} \text{ and } s_{-} \in S_{-} \}. \end{equation} Note that $I_M$ may consist of only one point if $S_+ \cap S_-$ is nonempty.
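For example, when $n=1$ and, say, $\alpha_1 \geq \alpha_2$, we have $S = \{\alpha_1 + \beta_1, \alpha_2 + \beta_1\}$, $S_+ = \{\alpha_1 + \beta_1\}$, $S_- = \{\alpha_2 + \beta_1\}$, and $I_M = [-\alpha_1 - \beta_1, -\alpha_2 - \beta_1]$. In this case, for $t \in I_M$ we have \begin{equation} r(t, \alpha, \beta) = (t + \alpha_1 + \beta_1) - (t + \alpha_2 + \beta_1) - (\alpha_1 - \alpha_2) = 0, \end{equation} which in particular verifies the lemma in the case $n=1$.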
By elementary reasoning, the minimum of $r(t, \alpha, \beta)$ must occur when $t =- \alpha_k - \beta_{l}$ for some $k,l$. By symmetry, say it occurs at $-\alpha_{n+1} -\beta_{n}$. Then \begin{multline} \label{eq:rF} r(-\alpha_{n+1} - \beta_n, \alpha, \beta) = \sum_{l=1}^n \sum_{k=1}^{n+1} | \alpha_k - \alpha_{n+1} + \beta_{l} - \beta_n| - \sum_{1 \leq k < l \leq n+1} |\alpha_k-\alpha_l| - \sum_{1 \leq k < l \leq n} |\beta_{k} - \beta_{l}| . \end{multline} Proceed by induction. If $n=1$ then \eqref{eq:rF} is zero, and we are done. Suppose $n > 1$. In the first sum above, take $k=n+1$ and $l=n$ separately, and similarly take $l=n+1$ and $l=n$ separately in the second and third sums above. Then we obtain \begin{equation} \label{eq:rF2} r(-\alpha_{n+1} - \beta_n, \alpha, \beta) = \sum_{l=1}^{n-1} \sum_{k=1}^{n} | - \alpha_{n+1} - \beta_n +\alpha_k + \beta_{l} | - \sum_{1 \leq k < l \leq n} |\alpha_k-\alpha_l| - \sum_{1 \leq k < l \leq n-1} |\beta_{k} - \beta_{l}|. \end{equation} The right hand side of \eqref{eq:rF2} takes the form $r(t, \alpha', \beta')$ for some $t \in \mathbb{R}$ (in fact $t=-\alpha_{n+1}-\beta_n$), $\alpha' = (\alpha_1, \dots, \alpha_{n})$, and $\beta' = (\beta_1, \dots, \beta_{n-1})$, so by the induction hypothesis we are done. \end{proof} \begin{mylemma} \label{lemma:rzero} Suppose that $\alpha = (\alpha_1, \dots, \alpha_{n+1}) \in \mr^{n+1}$, $\beta = (\beta_1, \dots, \beta_n) \in \mr^n$ and $\alpha_1 \geq \alpha_2 \geq \dots \geq \alpha_{n+1}$, $\beta_1 \geq \beta_2 \geq \dots \geq \beta_n$. Then there exists $t\in \mr$ such that $r(t,\alpha,\beta) = 0$ if and only if \begin{equation} \label{eq:alphajbetakcondition} \alpha_{n+1-k} + \beta_{k} \geq \alpha_{n+2-l} + \beta_{l}, \end{equation} for any $k,l \in \{1, \dots, n\}$. 
\end{mylemma} It may be helpful to visualize the numbers $\alpha_k + \beta_l$ in the following array: \begin{equation} \label{eq:alphabetaarray} \begin{array}{cccc} \alpha_1 + \beta_1 & \alpha_1 + \beta_2 & \dots & \alpha_1 + \beta_n \\ \cline{4-4} \alpha_2 + \beta_1 & \alpha_2 + \beta_2 & \dots & \alpha_2 + \beta_n \\ \vdots & \ddots & \dots & \dots \\ \cline{2-2} \alpha_n + \beta_1 & \alpha_n + \beta_2 & \dots & \alpha_n + \beta_n \\ \cline{1-1} \alpha_{n+1} + \beta_1 & \alpha_{n+1} + \beta_2 & \dots & \alpha_{n+1} + \beta_n. \end{array} \end{equation} Notice that the entries of the array are decreasing (that is, non-increasing) in each row and in each column. The condition \eqref{eq:alphajbetakcondition} means that each entry above one of the horizontal lines is $\geq$ each entry below one of the horizontal lines. \begin{proof} From the orderings imposed on the $\alpha_k$ and $\beta_l$, we have \begin{equation} r(t, \alpha, \beta) = \sum_{l=1}^n \sum_{k=1}^{n+1} |t + \alpha_k + \beta_{l}| - n \alpha_1 - (n-2) \alpha_2 - \dots + n \alpha_{n+1} - (n-1)\beta_1 - \dots + (n-1)\beta_n. \end{equation} From Lemma \ref{lemma:rnonnegative}, we know $r(t,\alpha, \beta) \geq 0$. Furthermore, as noted in the proof of Lemma \ref{lemma:rnonnegative}, $r$ is minimized when $t \in I_M$, so the only possible region of $t$ with $r(t,\alpha, \beta) = 0$ is $t \in I_M$. By glancing at \eqref{eq:alphabetaarray}, one can see that $S_{+}$ consists of the elements $\alpha_k + \beta_l$ lying above the horizontal lines, and similarly the elements of $S_{-}$ are below the horizontal lines. Using this, one calculates easily that $r(t,\alpha,\beta) = 0$ for such $t$. Thus, if \eqref{eq:alphajbetakcondition} holds then $r(t, \alpha, \beta) = 0$ for $t \in I_M$. Next we show the other half of the ``iff'' statement. 
In general (not necessarily assuming \eqref{eq:alphajbetakcondition} holds), one obtains for $t \in I_M$ that \begin{equation} \label{eq:rtalphabetageneral} r(t,\alpha,\beta) = \sum_{s_+ \in S_{+}} s_+ - \sum_{s_- \in S_-} s_- - n \alpha_1 - (n-2) \alpha_2 - \dots + n \alpha_{n+1} - (n-1)\beta_1 - \dots + (n-1)\beta_n. \end{equation} Let $T_+$ be the set of elements $\alpha_k + \beta_l$ above the horizontal lines in \eqref{eq:alphabetaarray}, and similarly $T_-$ is the set of elements below the horizontal lines. Then our previous calculation shows \begin{equation} \sum_{t_+ \in T_+} t_+ - \sum_{t_- \in T_-} t_- - n \alpha_1 - (n-2) \alpha_2 - \dots + n \alpha_{n+1} - (n-1)\beta_1 - \dots + (n-1)\beta_n = 0, \end{equation} so inserting this into \eqref{eq:rtalphabetageneral}, we obtain \begin{equation} r(t, \alpha, \beta) = \sum_{s_+ \in S_{+}} s_+ -\sum_{t_+ \in T_+} t_+ + \sum_{t_- \in T_-} t_- - \sum_{s_- \in S_-} s_- = 2 \sum_{s_+ \in S_+ \cap T_-} s_+ - 2 \sum_{s_- \in S_- \cap T_+} s_-. \end{equation} If \eqref{eq:alphajbetakcondition} does not hold then there exist $s_+ \in S_+ \cap T_-$ and $s_- \in S_- \cap T_+$ such that $s_+ > s_-$, and hence $r(t,\alpha, \beta) > 0$. \end{proof} The structure of the set of $t \in I_M$ and $\beta$ satisfying \eqref{eq:alphajbetakcondition} is related to the branching law\footnote{A branching law describes how a representation of a group $G$ decomposes into irreducible representations upon restriction to a subgroup $H$ of $G$.} from $\GL{n+1}{C}$ to $\GL{n}{C}$ as we now explain. Let $\lambda_j = \beta_j + t$ for all $j$; in terms of $\lambda = (\lambda_1, \dots, \lambda_n)$, the condition that $t \in I_M$ and $\beta$ satisfies \eqref{eq:alphajbetakcondition} is equivalent to $\lambda_j + \alpha_{n+1-j} \geq 0$ and $\lambda_j + \alpha_{n+2-j} \leq 0$, for all $j \in \{1,\dots, n\}$.
This is in turn equivalent to \begin{equation} \label{eq:interlacing} -\alpha_{n+1} \geq \lambda_1 \geq -\alpha_n \geq \lambda_2 \geq \dots \geq -\alpha_2 \geq \lambda_n \geq -\alpha_1. \end{equation} Note that $\widetilde{\alpha} :=(-\alpha_{n+1}, \dots, -\alpha_1)$ are the Langlands parameters of $\overline{F}$, and \eqref{eq:interlacing} then says that $\lambda$ {\em interlaces} $\widetilde{\alpha}$ (it would also be natural to take the dual of $u_j$ instead of $F$). Theorem 8.1.1 of \cite{GoodmanWallach}, for example, expresses the branching law from $\GL{n+1}{C}$ to $\GL{n}{C}$ via the interlacing of the highest weights of the corresponding irreducible representations of the two groups. For more information about branching laws, see, for example, Chapter 8 of \cite{GoodmanWallach} or Chapter XVIII of \cite{Zelobenko}. \section{The upper bound} \label{section:upperbound} Lemmas \ref{lemma:rnonnegative} and \ref{lemma:rzero} give us good control on the exponential part of $q(t,\alpha, \beta)$. We also need to understand the rational part of $q$; this is accomplished in the following lemma. \begin{mylemma} \label{lemma:qtalphabetaUB} Suppose that $\alpha, \beta$ are as in Lemma \ref{lemma:rzero}, \eqref{eq:alphajbetakcondition} holds, and $t \in I_M$. Then \begin{equation} \label{eq:qtalphabetaUB} q(t, \alpha, \beta) \ll \frac{1}{\mu(\beta)} \prod_{k+l = n+1} (1+ |t+ \alpha_k + \beta_l|)^{-1/2} \prod_{k+l = n+2} (1+ |t+ \alpha_k + \beta_l|)^{-1/2}. \end{equation} \end{mylemma} Recall that $\mu(\beta)$ is defined by \eqref{eq:mudefinition}. \begin{proof} We estimate $q(t,\alpha,\beta)$ using \eqref{eq:qtalphabetaSTIRLING}. Since we assume $t \in I_M$ and \eqref{eq:alphajbetakcondition} holds, we have $r(t,\alpha,\beta) = 0$. The terms with $k+l = n+1$ and $k+l=n+2$ are already present in \eqref{eq:qtalphabetaUB}; these are the terms corresponding to elements of the array \eqref{eq:alphabetaarray} directly above or below one of the horizontal lines.
Consider first the other terms above the horizontal lines in \eqref{eq:alphabetaarray}, that is, $1 + |t+ \alpha_k + \beta_l|$ with $k +l \leq n$. Since $t \in I_M$ and the ordering \eqref{eq:alphajbetakcondition} holds, we have \begin{equation} 1 + |t+\alpha_k + \beta_l| = 1 + t + \alpha_k + \beta_l = 1 + (\beta_l - \beta_{n+1-k}) + (t + \alpha_{k} + \beta_{n+1-k}) \geq 1 + (\beta_l - \beta_{n+1-k}). \end{equation} Thus we have \begin{equation} \prod_{k+l \leq n} (1 + |t + \alpha_k + \beta_l|)^{-1/2} \leq \prod_{k+l \leq n} (1 + |\beta_l - \beta_{n+1-k}|)^{-1/2} = \mu(\beta)^{-1/2}, \end{equation} recalling the definition \eqref{eq:mudefinition}. Similarly, for the terms with $k+ l \geq n+3$ we have \begin{equation} 1 + |t+ \alpha_k + \beta_l| = 1-t - \alpha_k - \beta_l = 1 - t - \alpha_k -\beta_{n+2-k} + (\beta_{n+2-k} - \beta_l) \geq 1 + (\beta_{n+2-k} - \beta_l). \end{equation} Then these terms with $k+l \geq n+3$ also contribute $\leq \mu(\beta)^{-1/2}$ to $q(t,\alpha,\beta)$, and the proof is complete. \end{proof} We shall need the following elementary bounds. \begin{mylemma} Suppose $a, b \in \mathbb{R}$. Then \begin{equation} \label{eq:telementaryintegral} \int_{-X}^X (1 + |t+a|)^{-1/2} (1+ |t+b|)^{-1/2} dt \leq \log(1+|a|+X) + \log(1+|b|+X). \end{equation} Similarly, \begin{equation} \label{eq:nelementarysum} \sum_{|m| \leq X} (1 + |m+a|)^{-1/2} (1 + |m+b|)^{-1/2} \ll \log(2+|a|+X) + \log(2+|b|+X). \end{equation} Finally, if $a < b$ then \begin{equation} \label{eq:hahb} \int_{-b}^{-a} (1 + |t+a|)^{-1/2} (1+ |t+b|)^{-1/2} dt \leq 4. \end{equation} \end{mylemma} \begin{proof} We begin with \eqref{eq:telementaryintegral}. Using $2\sqrt{xy} \leq x + y$, it is enough to consider the case $a=b$. For $X \geq |a|$, we have \begin{equation} \int_{-X}^{X} (1+ |t+a|)^{-1} dt = \log(1 + a + X) + \log(1 + X-a) \leq 2 \log(1 + |a| + X). \end{equation} One can check that the same bound holds for $X < |a|$ also, so the proof of \eqref{eq:telementaryintegral} is complete.
The proof of \eqref{eq:nelementarysum} follows similar lines. For \eqref{eq:hahb}, let $h_u(t) = (1 + |t+u|)^{-1/2}$. Then, using the symmetry of the integrand about $t = -\frac{a+b}{2}$, \begin{multline} \int_{-b}^{-a} h_a(t) h_b(t) dt = 2 \int_{-b}^{-b + \frac{b-a}{2}} h_a(t) h_b(t) dt \leq 2(1 + \frac{b-a}{2})^{-1/2} \int_{-b}^{-b + \frac{b-a}{2}} h_b(t) dt \\ = 4(1 + \frac{b-a}{2})^{-1/2} \big((1 + \frac{b-a}{2})^{1/2} - 1 \big) \leq 4. \qedhere \end{multline} \end{proof} Now we are ready to prove Theorem \ref{thm:normUB}. First we show the following lemma. \begin{mylemma} \label{lemma:qUB} We have \begin{equation} \label{eq:localsum} \sum_{\beta} |B_{\beta}(1)|^2 \int_{t \in I_M} q(t,\alpha,\beta) dt \ll 1, \end{equation} where the sum is over $\beta$, the Langlands parameters of the $\SL{n}{Z}$ cuspidal spectrum, which also satisfy \eqref{eq:alphajbetakcondition}. \end{mylemma} Note that if $t \in I_M$, then it is implicit that \eqref{eq:alphajbetakcondition} holds, by Lemma \ref{lemma:rzero}. \begin{proof} Let $S$ denote the left hand side of \eqref{eq:localsum}. By Lemma \ref{lemma:qtalphabetaUB}, we have \begin{equation} S \ll \sum_{\beta} \int_{t \in I_M} |B_\beta(1)|^2 \frac{1}{\mu(\beta)} f(\beta + t) dt, \end{equation} where, with $\lambda = (\lambda_1, \dots, \lambda_n)$, we set \begin{equation} f(\lambda) = \prod_{k+l = n+1} (1+ |\alpha_k + \lambda_l|)^{-1/2} \prod_{k+l = n+2} (1+ |\alpha_k + \lambda_l|)^{-1/2}. \end{equation} Let $\gamma = (\gamma_1, \dots, \gamma_n) \in \mathbb{Z}^n$ and let $H$ denote the hyperplane $u_1 + \dots + u_n = 0$. Note that $f(\lambda') \asymp f(\lambda)$ if $\| \lambda - \lambda' \| = O(1)$. Hence \begin{equation} \label{eq:Sbound} S \ll \sum_{\gamma \in \mathbb{Z}^n \cap H} \sum_{\beta: \|\beta - \gamma \| \leq K} \int_{t \in I_M} \frac{|B_{\beta}(1)|^2}{\mu(\gamma)} f(\gamma + t) dt.
\end{equation} Recall that the condition $t \in I_M$ is equivalent to \eqref{eq:interlacing}, that is, \begin{equation} -\alpha_{n+1} \geq \beta_1 + t \geq -\alpha_n \geq \dots \geq -\alpha_2 \geq \beta_n + t \geq -\alpha_1, \end{equation} so by positivity we can extend this condition $t \in I_M$ to \begin{equation} \label{eq:tinterlacing} -\alpha_{n+2-k} + K \geq \gamma_k + t \geq -\alpha_{n+1-k} - K, \end{equation} for all $k\in \{1,2,\dots, n\}$. Hence we obtain by \eqref{eq:WLWLupper} that \begin{equation} S \ll \sum_{\gamma \in \mathbb{Z}^n \cap H} \int_{t} f(\gamma + t) dt , \end{equation} where the integral is over $t$ such that \eqref{eq:tinterlacing} holds. Next we argue that the sum over $\gamma$ can be replaced by an integral. For this, we use the simple inequality \begin{equation} \sum_{-a-K \leq m \leq -b + K} (1 + |m+a|)^{-1/2} (1 + |m+b|)^{-1/2} \ll 1 + \int_{-a-K}^{-b + K} (1 + |u+a|)^{-1/2} (1 + |u+b|)^{-1/2} du, \end{equation} and note that since $K \geq 1$, the integral is $\gg 1$, so in fact the sum can be bounded by a constant multiple of the integral. Changing variables $\lambda_k = \gamma_k + t$, we have that the region of integration for $\lambda$ is a box. Specifically, we have \begin{equation} \label{eq:Sbound2} S \ll \prod_{k=1}^{n} \int_{-\alpha_{n+1-k} -K}^{-\alpha_{n+2-k} +K} (1 + |\alpha_{n+2-k} + \lambda_k|)^{-1/2} (1 + |\alpha_{n+1-k} + \lambda_k|)^{-1/2} d\lambda_k. \end{equation} By \eqref{eq:hahb}, we deduce $S \ll 1$, as desired. \end{proof} \begin{mytheo} \label{thm:normcuspUB} Assume the conditions of Theorem \ref{thm:normUB}. Then \begin{equation} N_d(F) \ll \lambda(F)^{\varepsilon}. \end{equation} \end{mytheo} \begin{proof} This will follow from the proof of Lemma \ref{lemma:qUB} after some reductions. 
The first thing to note is that since the spectral measure $\mu(\beta)$ is invariant under permutations of $(\beta_1, \dots, \beta_n)$, it suffices to consider the ordering as in Lemma \ref{lemma:rzero} (indeed, the Langlands parameters $(\beta_1, \dots, \beta_n)$ are only defined up to permutation anyway). Next we need to finitize the integral and sum above. If $I_M = [a,b]$ then let $I_M^* = [a-\log^2 \lambda(F), b + \log^2 \lambda(F)]$. We may restrict $t$ to $I_M^*$ since $q(t, \alpha, \beta)$ is exponentially small otherwise. Similarly, we can restrict the $\beta_j$'s so that $\beta_{n+1-k} - \beta_{n+1-j} \leq \alpha_j - \alpha_{k+1} + \log^2(\lambda(F))$. The Lindel\"{o}f Hypothesis implies that the $L$-functions are bounded by $\lambda(F)^{\varepsilon}$. Then following the arguments of Lemma \ref{lemma:qUB}, we see that these slight extensions of the integral and sums, together with the use of Lindel\"{o}f, alter the final bound only by $\ll \lambda(F)^\varepsilon$. \end{proof} \begin{myprop} Assume the conditions of Theorem \ref{thm:normUB}. Then \begin{equation} N_{\text{max}}(F) + N_{\text{min}}(F) \ll \lambda(F)^{\varepsilon}. \end{equation} \end{myprop} \begin{proof} We have that the contribution of an Eisenstein series $E_{P_{n_1, \dots, n_r}}$ to \eqref{eq:normUBspectraldecomposition} is of the form \begin{equation} \ll |A_F(1)|^2 \sum_{k=1}^{r} \sum_{\substack{\SL{n_k}{Z}} } \int |B_{P_{n_1, \dots, n_r}}(1)|^2 |L(1/2+it, F \times E_{P_{n_1, \dots, n_r}})|^2 q_v(t, \alpha, \beta) dt dv_1^* \dots dv_{r-1}^*, \end{equation} where $q_v(t,\alpha, \beta)$ is given by \eqref{eq:qtalphabetaSTIRLING} but with $\beta$ defined by \eqref{eq:LanglandsEisenstein}, that is, \begin{equation} \beta = (v_1^* + \beta_{j_1,1}, \dots, v_1^*+ \beta_{j_1, n_1} \big| v_2^* + \beta_{j_2, n_1 + 1}, \dots, v_2^* + \beta_{j_2, n_1 + n_2} \big| \dots).
\end{equation} By the same reasoning as in the proof of Theorem \ref{thm:normcuspUB}, a bound of $\lambda(F)^{\varepsilon}$ for \begin{equation} \sum_{k=1}^{r} \sum_{\SL{n_k}{Z}} \int_{\mr^{r-1}} \int_{t \in I_M} |B_{P_{n_1, \dots, n_r}}(1)|^2 q_v(t, \alpha, \beta) dt dv_1^* \dots dv_{r-1}^* \end{equation} will carry over to $N_{\text{max}}(F)$. The proof of Lemma \ref{lemma:qUB} carries over almost without change. The case of the minimal Eisenstein series is the easiest of all, and we omit it. \end{proof} \section{A polytope} \label{section:polytope} This section is self-contained, and our notation may not agree with that of other sections of the paper. Let $\alpha, \beta$ be as in Lemma \ref{lemma:rzero}, suppose $t \in I_M$, and suppose that \eqref{eq:alphajbetakcondition} holds. We think of $\alpha$ as given (it comes from the Langlands parameters of $F$) and we wish to understand the set of $\beta$ and $t$ such that \eqref{eq:alphajbetakcondition} holds with $t \in I_M$. Without some extra work, it is not obvious that there even exists such a $\beta$. Let $x_j = \beta_j - \beta_{j+1}$ and $y_j = \alpha_{n+1-j}-\alpha_{n+2-j}$, so that $x_j, y_j \geq 0$. Then the system of inequalities \eqref{eq:alphajbetakcondition} is equivalent to the following system \begin{equation} \label{eq:system} y_{j+1} + \dots + y_{k-1} \leq x_j + \dots + x_{k-1} \leq y_j + \dots + y_k, \end{equation} for $1 \leq j < k \leq n$. We use the standard convention that if $j+1 > k-1$ then the left hand side of \eqref{eq:system} denotes $0$. \begin{figure}[h] \input{polytopepicture5.pspdftex} \caption{The polytope $\mathcal{P}$ for $n=3$} \label{fig:polytope} \end{figure} This defines a convex polytope in $\mathbb{R}^{n-1}$ which we denote as $\mathcal{P} = \mathcal{P}(y)$. We suppose that none of the $y_j$ is $0$, since this is a somewhat degenerate situation. The description of $\mathcal{P}$ via \eqref{eq:system} is not conducive to our analysis.
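For instance, when $n = 3$, the system \eqref{eq:system} consists of the six inequalities \begin{equation} 0 \leq x_1 \leq y_1 + y_2, \qquad 0 \leq x_2 \leq y_2 + y_3, \qquad y_2 \leq x_1 + x_2 \leq y_1 + y_2 + y_3, \end{equation} which (for $y_1, y_2, y_3 > 0$) cut out the hexagon displayed in Figure \ref{fig:polytope}.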
Instead, we have a decomposition of $\mathcal{P}$ into $n$ parallelohedra each with a quite simple description. Let $x = (x_1, \dots, x_{n-1})$, $w = (y_2, \dots, y_n)$, $v_1 = (1, 0, \dots, 0)$, $v_2 = (-1, 1, 0, \dots, 0)$, $v_3 = (0, -1, 1, 0, \dots, 0), \dots, v_{n-1} = (0, \dots, 0, -1, 1)$, $v_n = (0, \dots, 0, -1)$; so $v_2, \dots, v_{n-1}$ follow a pattern while the endpoint terms $v_1$ and $v_n$ have a different rule. \begin{mytheo} \label{thm:Q=P} Define $\mathcal{Q}$ to be the convex polytope defined by the set of $x \in \mathbb{R}^{n-1}$ such that \begin{equation} \label{eq:polytopedecomposition} x = w + t_1 y_1 v_1 + t_2 y_2 v_2 + \dots + t_n y_n v_n, \end{equation} for $0 \leq t_1, \dots, t_n \leq 1$. Similarly, for each $j \in \{1, \dots, n \}$, let $\mathcal{Q}_j \subset \mathcal{Q}$ denote the parallelohedron defined by \begin{equation} x = w + t_1 y_1 v_1 + \dots + t_{j-1} y_{j-1} v_{j-1} + t_{j+1} y_{j+1} v_{j+1} + \dots + t_n y_n v_n, \end{equation} for $0 \leq t_1, \dots, t_n \leq 1$. Then $\mathcal{Q} = \mathcal{P}$. Moreover, $\cup_{j=1}^{n} \mathcal{Q}_j = \mathcal{Q}$, and $\mathcal{Q}_j \cap \mathcal{Q}_k$ is a subset of an $(n-2)$-dimensional cone in $\mathbb{R}^{n-1}$ for $j \neq k$. \end{mytheo} Figure \ref{fig:polytope} illustrates Theorem \ref{thm:Q=P}, the three parallelograms being $\mathcal{Q}_j$ for $j \in \{1,2,3\}$. The inner vertex is $w = (y_2, y_3)$. \begin{mycoro} The polytope $\mathcal{P}$ is a zonotope (i.e., a Minkowski sum of line segments). \end{mycoro} This is obvious because the definition of $\mathcal{Q}$ via \eqref{eq:polytopedecomposition} is precisely one of the standard definitions of a zonotope. See Lecture 7 of \cite{Ziegler} for more information on zonotopes. An alternate definition of a zonotope is the image of a cube under an affine projection (see Definition 7.13 of \cite{Ziegler}); we give a concrete description of this in Remark \ref{remark:An} below. 
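To illustrate Theorem \ref{thm:Q=P} in the simplest case $n = 2$, note that \eqref{eq:system} reduces to the single condition $0 \leq x_1 \leq y_1 + y_2$, so that $\mathcal{P} = [0, y_1 + y_2]$. Here $w = (y_2)$, $v_1 = (1)$, and $v_2 = (-1)$, whence $\mathcal{Q}_1 = \{y_2 - t_2 y_2 : 0 \leq t_2 \leq 1\} = [0, y_2]$ and $\mathcal{Q}_2 = \{y_2 + t_1 y_1 : 0 \leq t_1 \leq 1\} = [y_2, y_1 + y_2]$. Thus $\mathcal{Q}_1 \cup \mathcal{Q}_2 = [0, y_1 + y_2] = \mathcal{P}$, while $\mathcal{Q}_1 \cap \mathcal{Q}_2 = \{y_2\}$ is a single point, and the total length $y_1 + y_2$ agrees with \eqref{eq:volumepolynomial} below.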
\begin{mycoro} \label{coro:polytopevolume} The volume of the polytope $\mathcal{P}$ defined by \eqref{eq:system} is \begin{equation} \label{eq:volumepolynomial} \sum_{j=1}^{n} y_1 \dots y_{j-1} y_{j+1} \dots y_n. \end{equation} \end{mycoro} It is easy to express the volume of $\mathcal{Q}_j$ as the absolute value of the determinant of $(y_1 v_1^T, \dots, y_{j-1} v_{j-1}^T, y_{j+1} v_{j+1}^T, \dots, y_n v_n^T)$, which is readily calculated to be $y_1 \dots y_{j-1} y_{j+1} \dots y_n$. In general it is difficult to find the volume of a polytope defined by inequalities as in \eqref{eq:system}. There are formulas for the volume of a polytope if one knows the vertex set and its connecting edges, but since our polytope is defined by half-spaces this information is not given to us directly. We also observe that the polynomial \eqref{eq:volumepolynomial} equals the Schur polynomial $S_{\lambda}(y_1, \dots, y_n)$ associated to the partition $\lambda = (1,1, \dots, 1, 0) \in \mathbb{Z}^n$. \begin{myremark} \label{remark:An} The polytope $\mathcal{P}$ has a relation to the $A_{n}$ lattice as we explain. Recall the definition of the $A_{n}$ lattice as \begin{equation} A_{n} = \{ (x_0, x_1, \dots, x_{n}) \in \mathbb{Z}^{n+1} : x_0 + \dots + x_{n} = 0 \}, \end{equation} which defines an $n$-dimensional lattice in $\mathbb{R}^{n+1}$; see \cite{ConwaySloane}, Chapter 4.6.1. It is the root lattice for $\mathfrak{sl}_{n+1}$. It has generator matrix \begin{equation} M= \begin{pmatrix} -1 & 1 & 0 & 0 & \dots & 0 & 0 \\ 0 & -1 & 1 & 0 & \dots & 0 & 0 \\ 0 & 0 & -1 & 1 & \dots & 0 & 0 \\ \cdot & \cdot & \cdot & \cdot & \dots & \cdot & \cdot \\ 0 & 0 & 0 & 0 & \dots & -1 & 1 \end{pmatrix}, \end{equation} where the $n$ rows, say $w_1, \dots, w_n$, respectively, are generators for the lattice. A fundamental domain for $A_n$ inside the hyperplane $x_0 + \dots + x_n = 0$ is then $\{ t_1 w_1 + \dots + t_n w_n : 0 \leq t_j \leq 1 \text{ for all $j$} \}$.
Stretching the vector $w_j$ by $y_j$ for each $j$, we obtain the region $\mathcal{R}$ of the form \begin{equation} \mathcal{R} := \{t_1 y_1 w_1 + \dots + t_n y_n w_n : 0 \leq t_j \leq 1 \text{ for all $j$} \}. \end{equation} Then $\mathcal{Q}-w$ is the projection of $\mathcal{R}$ via $x_0 = x_n = 0$, as can be seen since deleting the first and last columns of $M$ gives a matrix whose rows are $v_1, \dots, v_n$. Note that this projection reduces the dimension of $\mathcal{R}$ by one. \end{myremark} \begin{proof}[Proof of Theorem \ref{thm:Q=P}] First we show $\mathcal{Q} \subset \mathcal{P}$. We can write \eqref{eq:polytopedecomposition} in the form \begin{equation} \label{eq:polytopematrixformula} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_{n-1} \end{pmatrix} = \begin{pmatrix} t_1 & 1- t_2 & && \dots & \\ & t_2 & 1-t_3 & & \dots & \\ & & t_3 & 1-t_4 & \dots & \\ \vdots & & &\ddots & & \\ & & & & t_{n-1} & 1-t_n \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_{n} \end{pmatrix}. \end{equation} By adding consecutive rows of the matrix, one sees that \begin{equation} \label{eq:xjtj} x_j + \dots + x_{k-1} = t_j y_j + (y_{j+1} + \dots + y_{k-1}) + (1-t_{k}) y_k, \end{equation} so that if $0 \leq t_1, \dots, t_n \leq 1$ then \eqref{eq:system} holds. That is, $\mathcal{Q} \subset \mathcal{P}$. Next we explain why $\cup_{j=1}^{n} \mathcal{Q}_j = \mathcal{Q}$ and $\mathcal{Q}_j \cap \mathcal{Q}_k$ is a subset of an $(n-2)$-dimensional cone in $\mathbb{R}^{n-1}$ for $j \neq k$. Let $e_1, \dots, e_{n-1}$ denote the standard basis vectors of $\mathbb{R}^{n-1}$. Then \eqref{eq:polytopedecomposition} is the same as \begin{equation} \label{eq:cones} x - w = u_1 w_1 + \dots + u_n w_n, \end{equation} where $u_j = t_j y_j$, $w_1 = e_1, w_2 = e_2-e_1, \dots, w_{n-1} = e_{n-1} - e_{n-2}, w_n = -e_{n-1}.$ Observe that $w_1 + \dots + w_{n} = 0$, and any set of $n-1$ $w_j$'s is a basis for $\mathbb{R}^{n-1}$.
For each $j \in \{1, \dots, n \}$, let \begin{equation} C_j = \{ u_1 w_1 + \dots + u_{j-1} w_{j-1} + u_{j+1} w_{j+1} + \dots + u_n w_n : u_k \geq 0 \text{ for all $k$} \}. \end{equation} Then $C_j$ is a cone in $\mathbb{R}^{n-1}$. We claim that $\cup_{j=1}^{n} C_j = \mathbb{R}^{n-1}$ and $C_j \cap C_k$ is an $(n-2)$-dimensional cone for $j \neq k$. To prove the claim, first suppose $v \in \mathbb{R}^{n-1}$, and express it (non-uniquely) as $v = u_1 w_1 + \dots + u_{n} w_{n}$ with $u_j \in \mathbb{R}$. Then \begin{equation} \label{eq:vformula} v = (u_1 + q) w_1 + \dots + (u_{n}+q) w_{n}, \end{equation} for any $q \in \mathbb{R}$; choosing $q = \max(-u_1, -u_2, \dots, -u_n)$ gives $u_j + q \geq 0$ for all $j$, and $u_{j_0} + q = 0$ for some $j_0$, whence $v \in C_{j_0}$. Next suppose $v \in C_j \cap C_k$ for some $j \neq k$. By symmetry, suppose $j=n$ and $k=n-1$. Then for some $u_j, u_j' \geq 0$, we have \begin{align} v = u_1 w_1 + \dots + u_{n-1} w_{n-1} &= u_1' w_1 + \dots + u_{n-2}' w_{n-2} + u_{n}' w_n, \\ &= (u_1' - u_n') w_1 + \dots + (u_{n-2}'-u_n') w_{n-2} - u_{n}' w_{n-1}. \end{align} Since $w_1, \dots, w_{n-1}$ form a basis, we have in particular that $u_{n-1} = - u_n'$, but since $u_{n-1} \geq 0$ and $u_n' \geq 0$, we conclude $u_{n-1} = u_n' = 0$ and hence $v$ lies in the cone spanned by $w_1, \dots, w_{n-2}$. Also, any element in this cone is an element of $C_n \cap C_{n-1}$. The above claim immediately shows that $\mathcal{Q}_j \cap \mathcal{Q}_k$ lies in an $(n-2)$-dimensional cone for $j \neq k$, since $\mathcal{Q}_j \subset C_j$ for all $j$. Also, the proof can be modified to show that $\cup_{j=1}^{n} \mathcal{Q}_j = \mathcal{Q}$: suppose $v \in \mathcal{Q}$ is given in the form \eqref{eq:vformula} with $0 \leq u_j \leq y_j$. Then choosing $q = \max(-u_1, -u_2, \dots, -u_n)$ shows $v \in \mathcal{Q}_j$ for some $j$. Finally, we need to show that $\mathcal{Q} = \mathcal{P}$.
It seems tricky to do this directly and our strategy is to show that every facet of $\mathcal{P}$ is contained in $\mathcal{Q}$; this then shows $\mathcal{P} \subset \mathcal{Q}$ because $\mathcal{P}$ is the convex hull of its facets (indeed, it is the convex hull of its vertices by the ``main theorem'' of polytopes: see Theorem 1.1 of \cite{Ziegler}). More precisely, given $j_0, k_0 \in \{1, \dots, n \}$ with $j_0 \neq k_0$, consider the facet $\mathcal{F}_{j_0, k_0}$ of $\mathcal{Q}_{j_0}$ which equals the set of points of the form \eqref{eq:polytopedecomposition} with $t_{j_0} = 0$ and $t_{k_0} = 1$. We will show that every facet of $\mathcal{P}$ equals $\mathcal{F}_{j_0, k_0}$ for some choice of $j_0, k_0$. First we argue that $\mathcal{Q}_{j_0}$ is given by the following system of $n-1$ equations: \begin{align} \label{eq:xjtj0left} x_j + \dots + x_{j_0-1} &= t_j y_j + (y_{j+1} + \dots + y_{j_0}), \qquad 1 \leq j < j_0 \\ \label{eq:xjtj0right} x_{j_0} + \dots + x_{k-1} &= y_{j_0+1} + \dots + y_{k-1} + (1-t_k) y_k, \qquad j_0 < k \leq n, \end{align} where $0 \leq t_j \leq 1$ for all $j \neq j_0$. First we note that in general, \eqref{eq:xjtj} is equivalent to \eqref{eq:polytopedecomposition}. Setting $t_{j_0} = 0$, and taking only \eqref{eq:xjtj} with $j=j_0$ or $k=j_0$, gives precisely the equations \eqref{eq:xjtj0left} and \eqref{eq:xjtj0right}. This shows $\mathcal{Q}_{j_0}$ is contained in the set of solutions to \eqref{eq:xjtj0left} and \eqref{eq:xjtj0right}. On the other hand, any solution to \eqref{eq:xjtj0left} and \eqref{eq:xjtj0right} uniquely determines $t_j$'s, $j \neq j_0$ with $0 \leq t_j \leq 1$, and therefore gives a solution to \eqref{eq:polytopematrixformula}. Now suppose that $j_0 < k_0$, say. 
If we impose the extra condition $t_{k_0} = 1$, then \eqref{eq:xjtj0left}-\eqref{eq:xjtj0right} show that $\mathcal{F}_{j_0, k_0}$ is given by the following system: \begin{align} \label{eq:xjtj0tk0left} x_j + \dots + x_{j_0-1} &= t_j y_j + (y_{j+1} + \dots + y_{j_0}), \qquad 1 \leq j < j_0 \\ \label{eq:xjtj0tk0middle} x_{j_0} + \dots + x_{k-1} &= y_{j_0+1} + \dots + y_{k-1} + (1-t_k) y_k, \qquad j_0 < k < k_0, \\ \label{eq:xjtj0tk0face} x_{j_0} + \dots + x_{k_0-1} &= y_{j_0+1} + \dots + y_{k_0-1} \\ \label{eq:xjtj0tk0right} x_{k_0} + \dots + x_{k-1} &= y_{k_0} + y_{k_0+1} + \dots + y_{k-1} + (1-t_k) y_k, \qquad k_0 < k \leq n. \end{align} Next we show that the facet of $\mathcal{P}$ defined by \eqref{eq:xjtj0tk0face} (which is indeed seen to be a facet by taking \eqref{eq:system} with $j=j_0, k=k_0$) is equivalent to the above system. Suppose that $x$ satisfies \eqref{eq:system} and \eqref{eq:xjtj0tk0face}. First we show \eqref{eq:xjtj0tk0left} with some $t_j \in [0,1]$ (the only issue is that $t_j$ lies in this interval). It is immediate from \eqref{eq:system} that $x_j + \dots + x_{j_0-1} \leq y_{j} + \dots + y_{j_0}$ so $t_j \leq 1$. To see $t_j \geq 0$, write $x_j + \dots + x_{j_0-1} = x_j + \dots + x_{k_0-1} - (x_{j_0} + \dots + x_{k_0-1})$ and apply \eqref{eq:system} and \eqref{eq:xjtj0tk0face} to these two terms, respectively, so that we see \begin{equation} x_j + \dots + x_{j_0-1} \geq y_{j+1} + \dots + y_{k_0-1} - (y_{j_0+1} + \dots + y_{k_0-1}) = y_{j+1} + \dots + y_{j_0}. \end{equation} For \eqref{eq:xjtj0tk0middle}, we obtain $t_k \leq 1$ immediately from the lower bound in \eqref{eq:system} (with $j=j_0$). The bound $t_k \geq 0$ follows by writing $x_{j_0} + \dots + x_{k-1} = x_{j_0} + \dots + x_{k_0-1} - (x_k + \dots + x_{k_0-1})$ and using \eqref{eq:xjtj0tk0face} and \eqref{eq:system}, respectively, to obtain \begin{equation} x_{j_0} + \dots + x_{k-1} \leq y_{j_0+1} + \dots + y_{k_0-1} - (y_{k+1} + \dots + y_{k_0-1}) = y_{j_0+1} + \dots + y_{k}. 
\end{equation} The final case of \eqref{eq:xjtj0tk0right} is similar. The condition $t_k \geq 0$ is immediate from \eqref{eq:system}. The upper bound $t_k \leq 1$ uses $x_{k_0} + \dots + x_{k-1} = x_{j_0} + \dots + x_{k-1} - (x_{j_0} + \dots + x_{k_0-1})$ and \eqref{eq:system} and \eqref{eq:xjtj0tk0face}, respectively, to obtain \begin{equation} x_{k_0} + \dots + x_{k-1} \geq y_{j_0+1} + \dots + y_{k-1} - (y_{j_0+1} + \dots + y_{k_0-1}) = y_{k_0} + \dots + y_{k-1}. \end{equation} We have thus shown that any point $x \in \mathcal{P}$ that lies on the facet \eqref{eq:xjtj0tk0face} lies in $\mathcal{F}_{j_0, k_0}$. So far we have not treated the opposite facet $x_{j_0} + \dots + x_{k_0-1} = y_{j_0} + \dots + y_{k_0}$, but a symmetry argument suffices here as we now explain. One can check that the change of variables $x_j = y_j + y_{j+1} - x_j'$, applied to the system \eqref{eq:system}, leads to the same system \eqref{eq:system} (in terms of the new variables $x_j'$) but with each of the opposite facets reversed. This change of variables, combined with $t_j = 1 - t_j'$, also leaves \eqref{eq:polytopedecomposition} invariant. Thus the opposite facets of $\mathcal{P}$ also occur as facets of $\mathcal{Q}_{j_0}$. \end{proof} \section{The lower bound} \label{section:lowerbound} \subsection{} \label{subsection:lowerbound} The lower bound for $N_d(F)$ is, in a combinatorial sense, much more difficult than the upper bound (ignoring the major assumption of Lindel\"{o}f which is required for the upper bound). The difference is that we need much more refined information about the spectral sum in \eqref{eq:normUBspectraldecomposition}. This was largely obtained in Section \ref{section:polytope}. \begin{myprop} \label{prop:normLBwithoutLfunction} Assume \eqref{eq:WLWLlower} holds and $|\alpha_j - \alpha_k| \geq \lambda(F)^{\varepsilon}$ for all $j \neq k$. Then \begin{equation} \sumstar_j |B_j(1)|^2 \intR q(t, \alpha, \beta_j) dt \gg 1.
\end{equation} \end{myprop} The left hand side above is the unweighted version of \eqref{eq:normUBspectraldecomposition}, up to the factor $|A_F(1)|^2$. \begin{mylemma} \label{lemma:qtLB} Suppose that $\alpha = (\alpha_1, \dots, \alpha_{n+1}) \in \mr^{n+1}$, $\beta = (\beta_1, \dots, \beta_n) \in \mr^n$ and $\alpha_1 \geq \alpha_2 \geq \dots \geq \alpha_{n+1}$, $\beta_1 \geq \beta_2 \geq \dots \geq \beta_n$, and that \eqref{eq:alphajbetakcondition} holds. Also write $\smax := \sup S_-$ and $\smin := \inf S_{+}$. Then for $t \in I_M$, we have \begin{equation} \label{eq:qtLB} q(t,\alpha, \beta) \geq \prod_{s_+ \in S_+} (1+ |s_{+} - \smax |)^{-1/2} \prod_{s_- \in S_-} (1+ |s_{-} - \smin |)^{-1/2}. \end{equation} Furthermore, \begin{equation} \label{eq:qtintegralLB} \intR q(t, \alpha, \beta) dt \gg |I_M| \prod_{s_+ \in S_+} (1+ |s_{+} - \smax |)^{-1/2} \prod_{s_- \in S_-} (1+ |s_{-} - \smin |)^{-1/2}. \end{equation} \end{mylemma} \begin{proof} We note \begin{equation} 1 + |t+ s_+| = 1 + t + s_+ = 1 + t + \smax + (s_+ - \smax) \leq 1 + (s_+ - \smax). \end{equation} A similar bound holds for $s_-$, so \eqref{eq:qtLB} is shown; \eqref{eq:qtintegralLB} follows immediately. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:normLBwithoutLfunction}] We need to show that \begin{equation} \label{eq:lowerestimatewithbetaj} \sumstar_{\beta_j \text{ admissible}} |B_j(1)|^2 |I_M| \prod_{s_+ \in S_+} (1+ |s_{+} - \smax |)^{-1/2} \prod_{s_- \in S_-} (1+ |s_{-} - \smin |)^{-1/2} \gg 1, \end{equation} where $\beta_j$ admissible means that \eqref{eq:alphajbetakcondition} holds. The work in Section \ref{section:polytope} is precisely what we need to obtain control over the various parameters above. Recall that we used the notation $x_j = \beta_j - \beta_{j+1}$ and $y_j = \alpha_{n+1-j} - \alpha_{n+2-j}$. With this notation, $\beta_j$ being admissible is equivalent to $x \in \mathcal{P}$.
Theorem \ref{thm:Q=P} provides a useful decomposition of $\mathcal{P}$ into parallelohedra; in particular, we shall restrict attention to $\mathcal{Q}_{j_0} \subset \mathcal{P}$ where $j_0 \in \{1, \dots, n \}$ is chosen to minimize $y_{j_0}$ (amongst all other choices of $y_j$). We showed that $\mathcal{Q}_{j_0}$ is parameterized by \eqref{eq:xjtj0left} and \eqref{eq:xjtj0right}, where $0 \leq t_j \leq 1$ for all $j \neq j_0$. Define $\mathcal{Q}_{j_0}^* \subset \mathcal{Q}_{j_0}$ to be given by \eqref{eq:xjtj0left} and \eqref{eq:xjtj0right} but further restricted by $\frac14 \leq t_j \leq \frac34$ for all $j \neq j_0$. For $x \in \mathcal{Q}_{j_0}^*$ we can simplify \eqref{eq:lowerestimatewithbetaj}. In general for $x \in \mathcal{P}$, we have \begin{equation} |I_M| = \min\{ \alpha_{n+1-j} + \beta_j - \alpha_{n+2-k} - \beta_k : 1 \leq j, k \leq n \}. \end{equation} If $j \leq k$ then \begin{equation} \alpha_{n+1-j} + \beta_j - \alpha_{n+2-k} - \beta_k = x_j + \dots + x_{k-1} - (y_{j+1} + \dots + y_{k-1}) = t_j y_j + (1-t_k)y_k, \end{equation} using \eqref{eq:xjtj}. If $j \geq k$ then switching $j$ and $k$ gives \begin{equation} \alpha_{n+1-k} + \beta_k - \alpha_{n+2-j} - \beta_j = y_j + \dots + y_k - (x_j + \dots + x_{k-1}) = (1-t_j) y_j + t_k y_k. \end{equation} Since $t_{j_0} = 0$ and $y_{j_0}$ is minimal, we conclude that for $x \in \mathcal{Q}_{j_0}^*$, $|I_M| \geq \frac14 y_{k_0}$, say, where $k_0 \neq j_0$ is the index of the second smallest of the $y_j$'s, after $y_{j_0}$. Actually, in general, we have for $x \in \mathcal{Q}_{j_0}^*$ that \begin{equation} \frac18(y_j + y_k) \leq \alpha_{n+1-j} + \beta_j - \alpha_{n+2-k} - \beta_k \leq \frac{3}{4}(y_j + y_k). \end{equation} Next we estimate the terms in \eqref{eq:lowerestimatewithbetaj} with $s_+$ and $s_-$ not lying immediately above or below a horizontal line in \eqref{eq:alphabetaarray}.
We will show \begin{equation} \label{eq:alphakbetalawayfromdiagonal} \prod_{l+m \leq n} (1 + |\alpha_l + \beta_m - \smax|)^{-1/2} \prod_{l+m \geq n+3} (1 + |\alpha_l + \beta_m - \smin|)^{-1/2} \gg \frac{1}{\mu(\beta)}. \end{equation} Suppose $l+m \leq n$. Then we estimate the term with $s_+ = \alpha_l + \beta_m$ in \eqref{eq:lowerestimatewithbetaj} by comparing with $\alpha_l + \beta_{n+1-l}$, which is the entry in \eqref{eq:alphabetaarray} in the same row as $s_+$ and directly above one of the horizontal lines. We note $\alpha_l + \beta_{n+1-l} - \smax \leq \alpha_l + \beta_{n+1-l} - (\alpha_{l+1} + \beta_{n+1-l})$, since $\alpha_{l+1} + \beta_{n+1-l}$ lies below the horizontal lines in \eqref{eq:alphabetaarray} while $\smax$ is the largest entry below the horizontal lines. Taken together, we deduce \begin{multline} 1 + \alpha_l + \beta_m - \smax = 1 + (\beta_m - \beta_{n+1-l}) + (\alpha_l + \beta_{n+1-l} - \smax) \\ \leq 1 + (\beta_m - \beta_{n+1-l}) + (\alpha_l - \alpha_{l+1}). \end{multline} Now we note that by \eqref{eq:xjtj}, \begin{equation} \label{eq:betambetan} \beta_m - \beta_{n+1-l} = x_m + \dots + x_{n-l} \geq \frac14 y_{n+1-l} = \frac14 (\alpha_l - \alpha_{l+1}). \end{equation} Hence, for $l + m \leq n$ we have \begin{equation} \label{eq:smaxupperbound} 1 + \alpha_l + \beta_m - \smax \leq 5 (1 + (\beta_m - \beta_{n+1-l})). \end{equation} Similarly, if $l +m \geq n+3$ we have \begin{multline} 1 + \smin - \alpha_l - \beta_m = 1 + (\beta_{n+2-l} - \beta_m) + (\smin - \alpha_l - \beta_{n+2-l}) \\ \leq 1 + (\beta_{n+2-l} - \beta_m) + (\alpha_{l-1} - \alpha_l). \end{multline} By \eqref{eq:xjtj}, we have \begin{multline} \beta_{n+2-l} - \beta_m = t_{n+2-l} y_{n+2-l} + y_{n+3-l} + \dots + y_{m-1} + (1-t_m) y_m \geq t_{n+2-l} y_{n+2-l} + \frac14 y_m. \end{multline} This case is slightly different from \eqref{eq:betambetan} because if $n+2-l = j_0$ then $t_{n+2-l} = 0$.
However, if $n+2-l = j_0$ then $y_m \geq y_{j_0}$ by minimality, so we can always conclude $\beta_{n+2-l} - \beta_m \geq \frac14 (\alpha_{l-1} - \alpha_{l})$, so we have \begin{equation} \label{eq:sminupperbound} 1 + \smin - \alpha_l - \beta_m \leq 5 ( 1 + \beta_{n+2-l} - \beta_m). \end{equation} Then putting together \eqref{eq:smaxupperbound} and \eqref{eq:sminupperbound}, we deduce \eqref{eq:alphakbetalawayfromdiagonal} by noting that a term $|\beta_k - \beta_l |$ occurs exactly once in each of \eqref{eq:smaxupperbound} and \eqref{eq:sminupperbound}, for $k \neq l$. Now we turn to the terms with $k+l = n+1$ and $k+l = n+2$. If $k \neq j_0, k_0$ then we use \begin{equation} 1 + \alpha_{n+1-k} + \beta_{k} - \smax \leq 1 + \alpha_{n+1-k} + \beta_{k} - (\alpha_{n+2-k} + \beta_{k}) = 1 + y_k. \end{equation} For $k = j_0$ we use \begin{equation} 1 + \alpha_{n+1-j_0} + \beta_{j_0} - \smax \leq 1 + \alpha_{n+1-j_0} + \beta_{j_0} - \alpha_{n+2-k_0} - \beta_{k_0} \asymp y_{j_0} + y_{k_0}. \end{equation} Similarly for $k = k_0$. Then we obtain \begin{multline} \label{eq:alphakbetalalongdiagonal} \prod_{k+l=n+1} (1 + |\alpha_k + \beta_l - \smax|)^{-1/2} \prod_{k+l = n+2} (1 + |\alpha_k + \beta_l - \smin|)^{-1/2} \\ \gg (y_1 \dots y_{j_0-1} (y_{j_0} + y_{k_0}) y_{j_0 + 1} \dots y_{k_0-1} (y_{j_0} + y_{k_0}) y_{k_0+1} \dots y_n)^{-1}. \end{multline} By combining \eqref{eq:alphakbetalawayfromdiagonal} with \eqref{eq:alphakbetalalongdiagonal} and our lower bound $|I_M| \gg y_{j_0} + y_{k_0}$, we have that the left hand side of \eqref{eq:lowerestimatewithbetaj} is \begin{equation} \label{eq:lowerestimatesimplified} \gg \frac{y_{j_0} + y_{k_0}}{y_1 \dots y_{j_0-1} y_{j_0+1} \dots y_{k_0-1} (y_{j_0} + y_{k_0})^2 y_{k_0+1} \dots y_n} \sum_{\beta_j : x \in \mathcal{Q}_{j_0}^*} |B_j(1)|^2 \frac{1}{\mu(\beta_j)}. 
\end{equation} The weighted local Weyl law \eqref{eq:WLWLlower} counts the number of $\beta_j$ in a small box, weighted by $|B_j(1)|^2$, so calculating the sum amounts to finding the number of integer vectors $x$ that lie in $\mathcal{Q}_{j_0}^*$ (which is essentially the volume of $\mathcal{Q}_{j_0}^*$ since it is a parallelohedron with sides longer than $\lambda(F)^{\varepsilon}$). The volume of $\mathcal{Q}_{j_0}^*$ is $\gg y_1 \dots y_{j_0-1} y_{j_0+1} \dots y_{k_0-1} (y_{j_0} + y_{k_0}) y_{k_0+1} \dots y_n$, and hence \eqref{eq:lowerestimatesimplified} is \begin{equation} \gg (y_{j_0} + y_{k_0}) \frac{ y_1 \dots y_{j_0-1} y_{j_0+1} \dots y_{k_0-1} (y_{j_0} + y_{k_0}) y_{k_0+1} \dots y_n}{y_1 \dots y_{j_0-1} y_{j_0+1} \dots y_{k_0-1} (y_{j_0} + y_{k_0})^2 y_{k_0+1} \dots y_n} \asymp 1. \qedhere \end{equation} \end{proof} Finally, we need to deduce Theorem \ref{thm:normLB} from Proposition \ref{prop:normLBwithoutLfunction}. It is not difficult to lower bound the second moment of $L$-functions, as we codify in the following: \begin{myprop} \label{prop:secondmomentLowerBound} Let $T_0 \in \mathbb{R}$, $T > 0$, and let $L(f, s)$ be an $L$-function of degree $d$. Suppose that for $s=1/2 + it$ with $T_0-T \leq t \leq T_0 + T$, the analytic conductor $q(f,s)$ of $L(f,s)$ satisfies $q(f,s) \leq Q$. Then for $T \geq Q^{\varepsilon}$, we have \begin{equation} \int_{T_0-T}^{T_0+T} |L(f, 1/2 + it)|^2 dt \geq T + O_{d,\varepsilon}(Q^{-99}). \end{equation} \end{myprop} With Proposition \ref{prop:secondmomentLowerBound}, it only takes a very small modification to prove Theorem \ref{thm:normLB}. The basic idea is that for $t \in I_M$, \eqref{eq:qtLB} provides a lower bound on $q(t,\alpha, \beta)$, while for $x \in \mathcal{Q}_{j_0}^*$, as we showed in the proof of Proposition \ref{prop:normLBwithoutLfunction}, we have $|I_M| \gg (\alpha_{k_0-1} - \alpha_{k_0})$, which is $\gg \lambda(F)^{\varepsilon}$ by the spacing condition assumed in Theorem \ref{thm:normLB}.
Thus \begin{equation} \int_{t \in I_M} q(t, \alpha, \beta) |L(1/2 + it, F \times u_j)|^2 dt \gg |I_M| \prod_{s_+ \in S_+} (1+ |s_{+} - \smax |)^{-1/2} \prod_{s_- \in S_-} (1+ |s_{-} - \smin |)^{-1/2}, \end{equation} and since in our proof of Proposition \ref{prop:normLBwithoutLfunction} we showed \eqref{eq:lowerestimatewithbetaj}, we finish the deduction. \begin{proof}[Proof of Proposition \ref{prop:secondmomentLowerBound}] By a standard one-piece approximate functional equation, we have \begin{equation} L(f, 1/2 + it) = \sum_{n \leq Q^{1+\varepsilon}} \frac{\lambda_f(n)}{n^{1/2 + it}} V(n) + O(Q^{-100}), \end{equation} where $V$ depends on the degree $d$ but not on $f$. One can choose $V$ so that $V(1) = 1 + O(Q^{-100})$. Suppose $w$ is a fixed smooth, nonnegative function such that $w(x) = 1$ for $|x| < 1/2$ and $w(x) = 0$ for $|x| > 1$. Then by Cauchy's inequality and positivity, \begin{equation} \int_{T_0-T}^{T_0 + T} |L(f, 1/2 + it)|^2 dt \geq \frac{\big|\intR L(f, 1/2 + it) w(\frac{t-T_0}{T}) dt \big|^2}{\intR w(\frac{t-T_0}{T}) dt}. \end{equation} The denominator above is $T \widehat{w}(0)$. By the approximate functional equation, the numerator is \begin{equation} \sum_{n \leq Q^{1+\varepsilon}} \frac{\lambda_f(n)}{n^{1/2}} V(n) \intR n^{-it} w(\frac{t-T_0}{T}) dt + O(Q^{-99}). \end{equation} The inner integral simplifies as $T n^{-iT_0} \widehat{w}(\frac{T \log{n}}{2\pi})$, which for $n > 1$ is $\ll T^{-A}$ for $A$ arbitrarily large. Taking $A$ large compared to $\varepsilon$, we can trivially bound the terms $n > 1$ by $O(Q^{-100})$. Thus \begin{equation} \int_{T_0-T}^{T_0+T} |L(f, 1/2 + it)|^2 dt \geq T \widehat{w}(0) + O(Q^{-99}). \end{equation} Since $\widehat{w}(0) \geq \int_{-1/2}^{1/2} dx = 1$, we complete the proof.
\end{proof} \subsection{Approaching the walls} \label{section:walls} It seems interesting to understand the behavior of $N(F)$ when the spectral parameters of $F$ become close; that is, dropping the assumption that $|\alpha_k - \alpha_l| \geq \lambda(F)^{\varepsilon}$ for all $k \neq l$. The upper bound did not need this assumption so we only consider the lower bound. Recall \eqref{eq:qtalphabetaSTIRLING}. Instead of restricting attention to the set where $r(t,\alpha,\beta) = 0$ (which may have very small measure in this degenerate situation), we consider the set where $r(t,\alpha, \beta) \geq - C'$ where $C' > 1$ may depend on $n$ only. If $I_M = [a,b]$ then define $I_M^C = [a-C, b+C]$; then for $t \in I_M^C$, we have $r(t,\alpha,\beta) \geq -n(n+1)C$. We also relax the condition \eqref{eq:alphajbetakcondition} to (say) \begin{equation} \label{eq:extendedalphajbetakcondition} \alpha_{n+1-k} - \alpha_{n+2-j} + C \geq \beta_j - \beta_k \geq \alpha_{n+2-k} - \alpha_{n+1-j} - C. \end{equation} If $t \in I_M^C$ and \eqref{eq:extendedalphajbetakcondition} holds, then $r(t,\alpha,\beta) \geq -C'$ for some $C' > 1$ depending only on $n$ and the choice of $C > 0$. If we define $y_j = \max(\alpha_{n+1-j} - \alpha_{n+2-j}, C'')$ for some $C'' > 1$ then \eqref{eq:system} implies that \eqref{eq:extendedalphajbetakcondition} holds (for some fixed $C > 0$). At this point the work in Section \ref{subsection:lowerbound} carries through almost unchanged. \section{Asymptotics} \label{section:asymptotic} In this section we let $c$ denote a positive constant that may change from line to line (to avoid excessive re-labelling). We have \begin{equation} N(F) \sim c |A_F(1)|^2 \sum_{j} \intR |L(1/2 + it, F \times \overline{u_j})|^2 q(t, \alpha, \beta_j) dt + \dots \end{equation} with the dots indicating $N_{\text{max}}(F) + N_{\text{min}}(F)$. Recall that the $L$-function has Dirichlet series given by \eqref{eq:RankinSelbergCuspidal}, and $q$ is defined by \eqref{eq:qtalphabetaDEF}.
In fact, $q$ contains the appropriate gamma factors for the completed $L$-function. We wish to apply the moment conjectures of \cite{CFKRS} to find the asymptotic of $N(F)$. For this, we shift slightly and consider \begin{multline} \sumstar_j \intR q(t-iz,\alpha, \beta_j) \sum_{m_1, m_1' \geq 1} \dots \sum_{m_{n}, m_n' \geq 1} \\ \frac{\lambda_F(m_1, \dots, m_n) \overline{\lambda_F}(m_1', \dots, m_n') \lambda_j(m_2, \dots, m_n) \overline{\lambda_j}(m_2', \dots, m_n')}{\prod_{k=1}^{n} m_k ^{(n+1-k)(\half + it+ z)} (m_k')^{(n+1-k)(\half -it+ z)}} + \dots, \end{multline} the dots indicating an identical term under $z \rightarrow -z$. Together, the $j$-sum and the $t$-integral should pick out the diagonal terms with $m_k = m_k'$ for all $1 \leq k \leq n$. Thus we should have \begin{equation} N(F) \sim c |A_F(1)|^2 \sum_{m_1, \dots, m_n} \frac{|\lambda_F(m_1, \dots, m_n)|^2}{m_1^{n(1+2z)} m_2^{(n-1)(1+2z)} \dots m_n^{1+2z}} \sumstar_j \intR q(t-iz, \alpha, \beta_j) dt + \dots. \end{equation} Note that this is \begin{equation} N(F) \sim c |A_F(1)|^2 L(1 + 2z, F \times \overline{F}) \sumstar_j \intR q(t-iz,\alpha,\beta_j) dt + \dots. \end{equation} Note for $z$ small that \begin{equation} \frac{q(t-iz, \alpha, \beta_j)}{q(t, \alpha, \beta_j)} = 1 -iz \frac{q'}{q}(t, \alpha, \beta_j) + O(z^2), \end{equation} and \begin{equation} \frac{q'}{q}(t,\alpha,\beta_j) = \sum_{l=1}^{n} \sum_{k=1}^{n+1} \log |\frac{1/2 + it+ i\alpha_k + i\beta_{l}}{2} |^2 + O(1). \end{equation} Our work in Sections \ref{section:Archimedean} and \ref{section:polytope} describes the effective region of integration appearing in \eqref{eq:NFasymptotic}. In particular, with some work, one can derive that $\frac{q'}{q}(t, \alpha, \beta_j) = \log \lambda(F) + O(1)$ inside the region of interest; the reason for this is that when $t \in I_M$ and $\beta$ lies in the polytope $\mathcal{P}$, the largest element $t + \alpha_1 + \beta_1$ is $\geq \alpha_1 - \alpha_n$. 
Typically, $\alpha_n \leq 0$ in which case this term is $\geq \alpha_1$. If not, then we look at the smallest term $|t+ \alpha_{n+1} + \beta_n| \geq \alpha_2 - \alpha_{n+1}$ which is $\geq - \alpha_{n+1}$ assuming $\alpha_2 \geq 0$. Thus $\frac{q'}{q} \geq \log \alpha_1^2 + \log \alpha_{n+1}^2 + O(1) = \log \lambda(F) + O(1)$. On the other hand, the upper bound $\frac{q'}{q} \leq \log \lambda(F) + O(1)$ is relatively trivial. Write $L(1 + 2z, F \times \overline{F}) = \frac{r}{2z} (1 + r'z + O(z^2))$. By Theorem 5.17 of \cite{IK} (conditional on GRH and Ramanujan), we have $r' \ll \log \log \lambda(F)$. Since $q'/q \sim \log \lambda(F)$, we then derive the following asymptotic after letting $z \rightarrow 0$: \begin{equation} \label{eq:NFasymptotic} N(F) \sim c |A_F(1)|^2 r \sumstar_j \intR q'(t, \alpha, \beta_j) dt \sim c \sumstar_j \intR q'(t, \alpha, \beta_j) dt, \end{equation} in view of Proposition \ref{prop:RankinSelbergL2formula}. Thus, \begin{equation} N(F) \sim c \log \lambda(F) \sumstar_j \intR q(t, \alpha, \beta_j) dt. \end{equation} Now it should be the case that $\sumstar_j q(t, \alpha, \beta_j) \sim c \int q(t, \alpha, \beta) \mu(\beta) d\beta$, where $\mu(\beta)$ is the spectral measure. So we need to calculate the following integral \begin{equation} \intR \int \mu(\beta) \frac{\prod_{l=1}^{n} \prod_{k=1}^{n+1} |\Gamma(\frac{1/2 + it+ i\alpha_k + i\beta_{l}}{2}) |^2}{\Big(\prod_{1 \leq k < l \le n+1} |\Gamma(\frac{1+ i\alpha_k - i\alpha_l}{2}) |^2\Big) \Big(\prod_{1 \leq k < l \le n} | \Gamma(\frac{1+ i\beta_{k} - i\beta_{l}}{2})|^2 \Big)} d\beta dt. \end{equation} Recall that $\beta_1 + \dots + \beta_n = 0$ and the $\beta$-integral is over this hyperplane.
Changing variables $\beta_k \rightarrow \beta_k - t$ for all $k$ reduces to calculating \begin{equation} \int_{\mathbb{R}^{n}} \mu(\beta) \frac{\prod_{l=1}^{n} \prod_{k=1}^{n+1} |\Gamma(\frac{1/2 + i\alpha_k + i\beta_{l}}{2}) |^2}{\Big(\prod_{1 \leq k < l \le n+1} |\Gamma(\frac{1+ i\alpha_k - i\alpha_l}{2}) |^2\Big) \Big(\prod_{1 \leq k < l \le n} | \Gamma(\frac{1+ i\beta_{k} - i\beta_{l}}{2})|^2 \Big)} d\beta_1 \dots d \beta_n. \end{equation} The spectral measure is given by \begin{equation} \mu(\beta) = c \prod_{1 \leq k < l \leq n} \frac{| \Gamma(\frac{1+ i\beta_{k} - i\beta_{l}}{2})|^2}{| \Gamma(\frac{i\beta_{k} - i\beta_{l}}{2})|^2}, \end{equation} so that we need to calculate \begin{equation} \label{eq:Calpha} c(\alpha) \int_{\mathbb{R}^{n}} \frac{\prod_{l=1}^{n} \prod_{k=1}^{n+1} |\Gamma(\frac{1/2 + i\alpha_k + i\beta_{l}}{2}) |^2}{ \prod_{1 \leq k < l \leq n} | \Gamma(\frac{i\beta_{k} - i\beta_{l}}{2})|^2 } d\beta_1 \dots d \beta_n, \quad c(\alpha) = \prod_{1 \leq k < l \le n+1} |\Gamma(\frac{1+ i\alpha_k - i\alpha_l}{2}) |^{-2}. \end{equation} We continue as follows. By Stirling, the integral is \begin{equation} \label{eq:approximateintegralafterStirling} \approx c \int_R \prod_{1 \leq k < l \leq n} (1+|\beta_k - \beta_l|) \prod_{l=1}^{n} \prod_{k=1}^{n+1} (1+|\alpha_k + \beta_l|)^{-1/2} d \beta, \end{equation} where the region of integration $R$ is defined by $\beta_j + \alpha_{n+1-j} \geq 0$ and $\beta_j + \alpha_{n+2-j} \leq 0$ for all $j$ (this is equivalent to the definition of $I_M$ after changing variables). The restriction to $R$ should not alter the final answer very much because of the exponential decay of the gamma factors outside of $R$. As noted with \eqref{eq:interlacing}, the region defined by $R$ is equivalent to \begin{equation} -\alpha_{n+1} \geq \beta_1 \geq -\alpha_n \geq \beta_2 \geq \dots \geq -\alpha_{2} \geq \beta_n \geq -\alpha_1, \end{equation} which defines a box in $\mathbb{R}^n$. 
By Lemma \ref{lemma:qtalphabetaUB}, \eqref{eq:approximateintegralafterStirling} is \begin{multline} \ll \int_R \prod_{k=1}^n (1 + |\beta_k + \alpha_{n+1-k}|)^{-1/2} (1 + |\beta_k + \alpha_{n+2-k}|)^{-1/2} d \beta \\ = \prod_{k=1}^n \int_{-\alpha_{n+1-k}}^{-\alpha_{n+2-k}} (1 + |\beta_k + \alpha_{n+1-k}|)^{-1/2} (1 + |\beta_k + \alpha_{n+2-k}|)^{-1/2} d\beta_k. \end{multline} By \eqref{eq:hahb}, we then have that \eqref{eq:approximateintegralafterStirling} is $\ll 1$. Thus we are led to the conjecture $N(F) \ll \log \lambda(F)$. The lower bound of the same order of magnitude is contained in Section \ref{section:lowerbound}. One may observe that the above upper bound is a somewhat conceptually simpler version of the arguments given to prove Lemma \ref{lemma:qUB}.
TITLE: Shortest path in linear time QUESTION [3 upvotes]: Suppose each edge can receive one of two weights $\{r_1,r_2\}$ where $r_1$ and $r_2$ are real and non-negative. And suppose $r_1 \leq r_2$. How do you find the shortest path from a given vertex s to every other vertex in the graph in linear time? ($O(V+E)$) REPLY [3 votes]: The asymptotically best algorithm known for this (well studied) problem is an implementation of Dijkstra's algorithm, which runs in $O(|E|+ |V|\log|V|)$ time. That is almost but not quite as good as you asked for, so probably you just asked too much.
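For concreteness, here is a minimal sketch of the Dijkstra approach the answer refers to, in the common binary-heap variant (this runs in $O((|V|+|E|)\log|V|)$; the stated $O(|E|+|V|\log|V|)$ bound needs a Fibonacci heap). Function and variable names here are illustrative, not from any particular library:

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest paths for non-negative edge weights.

    adj: dict mapping each vertex to a list of (neighbor, weight) pairs.
    Returns a dict of shortest distances from source (unreachable
    vertices are simply absent from the result).
    """
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale entry left over from an earlier relaxation
        for v, w in adj.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Example using the two allowed weights, say r1 = 1.0 and r2 = 2.0:
adj = {0: [(1, 1.0), (2, 2.0)], 1: [(2, 2.0)], 2: []}
print(dijkstra(adj, 0))  # {0: 0.0, 1: 1.0, 2: 2.0}
```

Note that in the special case where one of the two weights is $0$, a deque-based 0-1 BFS would achieve the requested $O(V+E)$ bound; for general $0 < r_1 \leq r_2$ no linear-time method seems to be known.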
TITLE: Prove $ \int _0^1\:e^xf\left(1-x\right)dx=\int _0^1\:e^x\left(f′\left(1-x\right)\right)dx$. QUESTION [0 upvotes]: I'm having trouble with the following problem; I was thinking about integration by parts but just getting a circular answer: Let $f$ be continuous on $[0,1]$ and differentiable on $(0,1)$, with $f(0)=f(1)=0$. Prove $$ \int _0^1\:e^xf\left(1-x\right)dx=\int _0^1\:e^xf'\left(1-x\right)dx$$ REPLY [1 votes]: Using integration by parts and exploiting the fact $f(0)= f(1)=0$, it all boils down to $$\int_0^1 e^x f(1-x) \mathrm{d}x = -\int_0^1 e^x \Big(f(1-x)\Big)' \mathrm{d}x+ [e^x f(1-x)]_{0}^1 = \int_0^1 e^x f'(1-x) \mathrm{d}x $$ as desired, after some caution with the argument of the $f$ function and a chain rule application, as $(1-x)' = -1$. REPLY [0 votes]: Let $u=f(1-x)$ and $dv=e^x\,dx$; then $du=-f'(1-x)\,dx$ and $v=e^x$. With integration by parts $$\int u\,dv=uv-\int v\, du$$ we have \begin{align} \int _0^1 e^xf(1-x)\,dx &= f(1-x)e^x\big|_0^1-\int _0^1 e^x\cdot\big(-f'(1-x)\big)\,dx \\ &= f(0)e^1-f(1)e^0+\int _0^1 e^xf'(1-x)\,dx \\ &= \int _0^1 e^xf'(1-x)\,dx \end{align}
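As a quick numerical sanity check of the identity (separate from the proof), one can take $f(x)=\sin(\pi x)$, which satisfies $f(0)=f(1)=0$, and compare the two integrals with composite Simpson quadrature:

```python
import math

def simpson(g, a, b, n=2000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

# f(x) = sin(pi x), so f'(x) = pi cos(pi x); the two sides of the identity:
lhs = simpson(lambda x: math.exp(x) * math.sin(math.pi * (1 - x)), 0.0, 1.0)
rhs = simpson(lambda x: math.exp(x) * math.pi * math.cos(math.pi * (1 - x)), 0.0, 1.0)
print(abs(lhs - rhs))  # agrees to within quadrature error
```

Both sides evaluate to $\pi(e+1)/(1+\pi^2)$ for this choice of $f$, consistent with the proved identity.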
{"set_name": "stack_exchange", "score": 0, "question_id": 2440456}
\begin{document} \title[ Controlled rectangular metric type spaces and some applications to polynomial equations] {Controlled rectangular metric type spaces and some applications to polynomial equations} \author[Nabil Mlaiki] {Nabil Mlaiki} \address{Nabil Mlaiki \newline Department of Mathematics and General Sciences, Prince Sultan University Riyadh, Saudi Arabia 11586} \email{nmlaiki@psu.edu.sa, nmlaiki2012@gmail.com} \subjclass[2010]{47H10, 54H25.} \keywords{Controlled rectangular $b-$metric spaces, Fixed point, Zeros of high degree polynomials.} \begin{abstract} In this paper, we introduce a generalization of rectangular $b-$metric spaces, by changing the rectangular inequality as follows \begin{equation*} \rho(x,y)\le \theta(x,y,u,v)[\rho(x,u)+\rho(u,v)+\rho(v,y)], \end{equation*} for all distinct$\ x,y,u,v\in X.$ We prove some fixed-point theorems and we use our results to present a nice application in last section of this paper. Moreover, in the conclusion we present some new open questions.\newline \end{abstract} \maketitle \section{Introduction} In the last two decades, the generalization of metric spaces has been the focus of many researchers, and that is due to the importance of metric spaces and fixed point theory in solving open problems in many different fields. So, first of all we remind the reader of the definition of metric spaces. \begin{definition}(Metric spaces) Let $X$ be a nonempty set. A mapping $D:X^{2}\rightarrow [0,\infty)$ is called a metric on $X$ if for any $x,y,z \in X$ the following conditions are satisfied:\\ ($R_1$) $x=y$ if and only if $D(x,y)=0$; \\ ($R_2$) $D(x,y)=D(y,x)$;\\ ($R_3$) $D(x,y)\leq D(x,z) +D(z,y)$.\\ In this case, the pair $(X,D)$ is called a metric space. \end{definition} A generalization of metric spaces to $b-$metric spaces was introduced, and we refer the reader to \cite{G2}. 
In 2017, Kamran in \cite{Kamran} introduced an interesting generalization of the $b-$metric spaces called extended $b-$metric spaces, defined as follows. \begin{definition}\cite{Kamran} Let $X$ be a nonempty set and let $\theta: X\times X\rightarrow [1,\infty)$ be a function. The function $B:X\times X\rightarrow [0,\infty)$ is called an extended $b$-metric if \begin{enumerate} \item $B(x,y)=0 \Longleftrightarrow x=y$; \item $B(x,y) = B(y,x)$; \item $B(x,y) \leq \theta(x,y) [B(x,z) + B(z,y)]$, \end{enumerate} for all $x,y,z \in X$. \end{definition} In 2000, Branciari in \cite{Branc} introduced the concept of rectangular metric spaces. In 2015, George et al. in \cite{Geo} generalized rectangular metric spaces to rectangular $b-$metric spaces. In this paper, inspired by the work of Kamran in \cite{Kamran}, we give a generalization of rectangular $b-$metric spaces, called controlled rectangular $b-$metric spaces, but first we would like to remind the reader of the definitions of both spaces. \begin{definition}\cite{Branc}\label{def3} (Rectangular (or Branciari) metric spaces) Let $X$ be a nonempty set. A mapping $L:X^{2}\rightarrow [0,\infty)$ is called a rectangular metric on $X$ if for any $x,y \in X$ and all distinct points $u,v \in X\setminus \{x,y\}$, it satisfies the following conditions:\\ ($R_1$) $x=y$ if and only if $L(x,y)=0$; \\ ($R_2$) $L(x,y)=L(y,x)$;\\ ($R_3$) $L(x,y)\leq L(x,u) + L(u,v)+L(v,y)$.\\ In this case, the pair $(X,L)$ is called a rectangular metric space. \end{definition} \begin{definition}\cite{Geo}\label{def5} (Rectangular $b-$metric spaces) Let $X$ be a nonempty set.
A mapping $L:X^{2}\rightarrow [0,\infty)$ is called a rectangular $b-$metric on $X$ if there exists a constant $a\ge 1$ such that for any $x,y \in X$ and all distinct points $u,v \in X\setminus \{x,y\}$, it satisfies the following conditions:\\ ($R_{b1}$) $x=y$ if and only if $L(x,y)=0$; \\ ($R_{b2}$) $L(x,y)=L(y,x)$;\\ ($R_{b3}$) $L(x,y)\leq a[L(x,u) + L(u,v)+L(v,y)]$.\\ In this case, the pair $(X,L)$ is called a rectangular $b-$metric space. \end{definition} As a generalization of the rectangular metric spaces and rectangular $b-$metric spaces, we define controlled rectangular $b-$metric spaces as follows. \begin{definition} Let $X$ be a nonempty set, $\theta:X^{4}\rightarrow [1,\infty)$ a function,\\ and $\rho :X^{2}\rightarrow [0,\infty).$ We say that $(X,\rho)$ is a controlled rectangular $b-$metric space if for all distinct $x,y,u,v \in X$ we have: \begin{enumerate} \item $\rho(x,y)=0$ if and only if $x=y;$ \item $\rho(x,y)=\rho(y,x);$ \item $\rho(x,y)\le \theta(x,y,u,v)[\rho(x,u)+\rho(u,v)+\rho(v,y)].$ \end{enumerate} \end{definition} \begin{definition}\label{d2} Let $(X,\rho)$ be a controlled rectangular $b-$metric space. \begin{enumerate} \item A sequence $\{x_{n}\}$ is called $\rho-$convergent in a controlled rectangular $b-$metric space $(X,\rho)$ if there exists $\nu\in X$ such that $\lim_{n\rightarrow \infty}\rho(x_{n},\nu)=\rho(\nu, \nu).$ \item A sequence $\{x_{n}\}$ is called $\rho-$Cauchy if and only if $\lim_{n,m\rightarrow \infty}\rho(x_{n},x_{m}) \ \ \text{exists and is finite}.$ \item A controlled rectangular $b-$metric space $(X,\rho)$ is called $\rho-$complete if for every $\rho-$Cauchy sequence $\{x_n\}$ in $X$, there exists $\nu\in X$ such that $\lim_{n\rightarrow \infty}\rho(x_{n},\nu)= \lim_{n,m\rightarrow \infty}\rho(x_{n},x_m)=\rho(\nu,\nu).$ \item For $a\in X$, define the open ball in a controlled rectangular $b-$metric space $(X,\rho)$ by $B_{\rho}(a, \eta)=\{b\in X\mid \rho(a,b)<\eta\}.$ \end{enumerate} \end{definition} Notice that,
rectangular metric spaces and rectangular $b-$metric spaces are controlled rectangular $b-$metric spaces, but the converse is not always true. In the following example, we present a controlled rectangular $b-$metric space that is not a rectangular metric space. \begin{example} Let $X=Y\cup Z$ where $Y=\{\frac{1}{m}\mid m \ \ \text{is a natural number}\}$ and $Z$ be the set of positive integers. We define $\rho :X^{2}\rightarrow [0,\infty)$ by \[ \rho(x,y)=\begin{cases} 0, &\text{if and only if} \ \ x=y\\ 2\beta, &\text{if}\ \ x,y\in Y\\ \frac{\beta}{2}, &\textrm{otherwise}, \end{cases} \] where $\beta\ge 1$ is a constant (so that $\theta$ below indeed takes values in $[1,\infty)$). Now, define $\theta:X^{4}\rightarrow [1,\infty)$ by\\ $\theta(x,y,u,v)=\max\{x,y,u,v\}+2\beta.$ It is not difficult to check that $(X,\rho)$ is a controlled rectangular $b-$metric space. However, $(X,\rho)$ is not a rectangular metric space; for instance, notice that $\rho(\frac{1}{2},\frac{1}{3})=2\beta> \rho(\frac{1}{2},2)+\rho(2,3)+\rho(3,\frac{1}{3})=\frac{3\beta}{2}.$ \end{example} \section{Main Results} \begin{theorem}\label{one} Let $(X,\rho)$ be a controlled rectangular $b-$metric space, and $T$ a self mapping on $X.$ If there exists $0<k<1$ such that $$\rho(Tx,Ty)\le k\rho(x,y)$$ for all $x,y\in X$, and $$\sup_{m>1}\lim_{n\rightarrow \infty} \theta(x_{n},x_{n+1},x_{n+2},x_{m})\le \frac{1}{k},$$ where $x_{n}=T^{n}x_{0}$ for an arbitrary $x_{0}\in X$, then $T$ has a unique fixed point in $X.$ \end{theorem} \begin{proof} Let $x_{0}\in X$ and define the sequence $\{x_{n}\}$ as follows: $x_{1}=Tx_{0},x_{2}=T^{2}x_{0},\cdots,x_{n}=T^{n}x_{0},\cdots$ Now, by the hypothesis of the theorem we have \begin{align*} \rho(x_{n},x_{n+1})&\le k\rho(x_{n-1},x_{n})\\ &\le k^{2}\rho(x_{n-2},x_{n-1})\\ & \le \cdots\\ & \le k^{n}\rho(x_{0},x_{1}).
\end{align*} Note that, taking the limit of the above inequality as $n\rightarrow \infty$, we deduce that $\rho(x_{n},x_{n+1})\rightarrow 0\ \ \ \text{as} \ \ \ n\rightarrow \infty. \ \ \label{E1}$ Denote $\rho_{i}=\rho(x_{n+i},x_{n+i+1}).$ For all $n\geq 1,$ we have two cases.\\ \textbf{Case 1:} Let $x_n=x_m$ for some integers $n\neq m$. So, for $m>n$ we have $T^{m-n}(x_n)=x_{n}$. Choose $y=x_{n}$ and $p=m-n$. Then $T^py=y,$ that is, $y$ is a periodic point of $T$. Thus, $\rho(y,Ty)=\rho(T^py,T^{p+1}y)\leq k^p \rho(y,Ty).$ Since $k\in (0,1)$, we get $\rho(y,Ty)=0$, so $y=Ty$, that is, $y$ is a fixed point of $T$.\\ \textbf{Case 2:} Suppose that $T^nx\neq T^mx$ for all integers $n\neq m$. Let $n<m$ be two natural numbers; to show that $\{x_{n}\}$ is a $\rho-$Cauchy sequence, we need to consider two subcases:\\ Subcase 1: Assume that $m=n+2p+1.$ By property $(3)$ of the controlled rectangular $b-$metric spaces we have \begin{align*} \rho(x_{n},x_{n+2p+1})&\le \theta(x_{n},x_{n+1},x_{n+2},x_{n+2p+1})[\rho(x_{n},x_{n+1})+\rho(x_{n+1},x_{n+2})+\rho(x_{n+2},x_{n+2p+1})]\\ &\le \theta(x_{n},x_{n+1},x_{n+2},x_{n+2p+1})\rho(x_{n},x_{n+1})+ \theta(x_{n},x_{n+1},x_{n+2},x_{n+2p+1})\rho(x_{n+1},x_{n+2})\\ &+ \theta(x_{n},x_{n+1},x_{n+2},x_{n+2p+1}) \theta(x_{n+2},x_{n+3},x_{n+4},x_{n+2p+1})[\rho(x_{n+2},x_{n+3})\\ &+\rho(x_{n+3},x_{n+4})+\rho(x_{n+4},x_{n+2p+1})]\\&\le \theta(x_{n},x_{n+1},x_{n+2},x_{n+2p+1})\rho(x_{n},x_{n+1})+ \theta(x_{n},x_{n+1},x_{n+2},x_{n+2p+1})\rho(x_{n+1},x_{n+2})\\ &+ \theta(x_{n},x_{n+1},x_{n+2},x_{n+2p+1}) \theta(x_{n+2},x_{n+3},x_{n+4},x_{n+2p+1})\rho(x_{n+2},x_{n+3})\\ &+\theta(x_{n},x_{n+1},x_{n+2},x_{n+2p+1}) \theta(x_{n+2},x_{n+3},x_{n+4},x_{n+2p+1})\rho(x_{n+3},x_{n+4})\\ &+ \theta(x_{n},x_{n+1},x_{n+2},x_{n+2p+1}) \theta(x_{n+2},x_{n+3},x_{n+4},x_{n+2p+1})\rho(x_{n+4},x_{n+2p+1}) \\&\le \cdots\\ & \le \theta(x_{n},x_{n+1},x_{n+2},x_{n+2p+1})\rho(x_{n},x_{n+1})+
\theta(x_{n},x_{n+1},x_{n+2},x_{n+2p+1})\rho(x_{n+1},x_{n+2})\\ &+ \theta(x_{n},x_{n+1},x_{n+2},x_{n+2p+1}) \theta(x_{n+2},x_{n+3},x_{n+4},x_{n+2p+1})\rho(x_{n+2},x_{n+3})\\ &+\theta(x_{n},x_{n+1},x_{n+2},x_{n+2p+1}) \theta(x_{n+2},x_{n+3},x_{n+4},x_{n+2p+1})\rho(x_{n+3},x_{n+4})\\ &+ \cdots +\theta(x_{n},x_{n+1},x_{n+2},x_{n+2p+1}) \theta(x_{n+2},x_{n+3},x_{n+4},x_{n+2p+1})\\ &\cdots \theta(x_{n+2p-2},x_{n+2p-1},x_{n+2p},x_{n+2p+1} )\rho(x_{n+2p},x_{n+2p+1})\\ & \le \theta(x_{n},x_{n+1},x_{n+2},x_{n+2p+1})\rho_{0}+ \theta(x_{n},x_{n+1},x_{n+2},x_{n+2p+1})\rho_{1}\\ &+ \theta(x_{n},x_{n+1},x_{n+2},x_{n+2p+1}) \theta(x_{n+2},x_{n+3},x_{n+4},x_{n+2p+1})\rho_{2}\\ &+\theta(x_{n},x_{n+1},x_{n+2},x_{n+2p+1}) \theta(x_{n+2},x_{n+3},x_{n+4},x_{n+2p+1})\rho_{3}\\ &+ \cdots \\ &+\theta(x_{n},x_{n+1},x_{n+2},x_{n+2p+1}) \theta(x_{n+2},x_{n+3},x_{n+4},x_{n+2p+1})\times \cdots \\ &\times \cdots \theta(x_{n+2p-2},x_{n+2p-1},x_{n+2p},x_{n+2p+1} )\rho_{2p}\\ & = \theta(x_{n},x_{n+1},x_{n+2},x_{n+2p+1})[\rho_{0}+ \rho_{1}]\\ &+ \theta(x_{n},x_{n+1},x_{n+2},x_{n+2p+1}) \theta(x_{n+2},x_{n+3},x_{n+4},x_{n+2p+1})[\rho_{2}+\rho_{3}]\\ &+ \cdots+\theta(x_{n},x_{n+1},x_{n+2},x_{n+2p+1}) \theta(x_{n+2},x_{n+3},x_{n+4},x_{n+2p+1})\times \cdots \\ &\times \cdots \theta(x_{n+2p-2},x_{n+2p-1},x_{n+2p},x_{n+2p+1})[\rho_{2p-1}+\rho_{2p}] \end{align*} \begin{align*} &\le \theta(x_{n},x_{n+1},x_{n+2},x_{n+2p+1})[(k^{n}+k^{n+1})\rho(x_{0},x_{1})]\\ &+ \theta(x_{n},x_{n+1},x_{n+2},x_{n+2p+1}) \theta(x_{n+2},x_{n+3},x_{n+4},x_{n+2p+1})[(k^{n+2}+k^{n+3})\rho(x_{0},x_{1})]\\ &+ \cdots+\theta(x_{n},x_{n+1},x_{n+2},x_{n+2p+1}) \theta(x_{n+2},x_{n+3},x_{n+4},x_{n+2p+1})\times \cdots \\ &\times \cdots \theta(x_{n+2p-2},x_{n+2p-1},x_{n+2p},x_{n+2p+1})[(k^{n+2p-2}+k^{n+2p-1})\rho(x_{0},x_{1})]\\ & \le [\theta(x_{n},x_{n+1},x_{n+2},x_{n+2p+1})(k^{n}+k^{n+1})\\ &+ \theta(x_{n},x_{n+1},x_{n+2},x_{n+2p+1}) \theta(x_{n+2},x_{n+3},x_{n+4},x_{n+2p+1})(k^{n+2}+k^{n+3})+ \\ & 
\cdots+\theta(x_{n},x_{n+1},x_{n+2},x_{n+2p+1}) \theta(x_{n+2},x_{n+3},x_{n+4},x_{n+2p+1})\times \cdots\\ &\times \cdots \theta(x_{n+2p-2},x_{n+2p-1},x_{n+2p},x_{n+2p+1})(k^{n+2p-2}+k^{n+2p-1})]\rho(x_{0},x_{1})\\ & =\sum_{l=0}^{p-1}\prod_{i=0}^{l}\theta(x_{n+2i},x_{n+2i+1},x_{n+2i+2},x_{n+2p+1})[k^{n+2l}+k^{n+2l+1}]\rho(x_{0},x_{1})\\ &= \sum_{l=0}^{p-1}\prod_{i=0}^{l}\theta(x_{n+2i},x_{n+2i+1},x_{n+2i+2},x_{n+2p+1})[1+k]k^{n+2l}\rho(x_{0},x_{1}) \end{align*} Now, using the fact that $k<1$ the above inequalities implies the following: $$ \rho(x_{n},x_{n+2p+1})< \sum_{l=0}^{p-1}\prod_{i=0}^{l}\theta(x_{n+2i},x_{n+2i+1},x_{n+2i+2},x_{n+2p+1})2k^{n+2l}\rho(x_{0},x_{1}).$$ Since, $\sup_{m>1}\lim_{n\rightarrow \infty} \theta(x_{n},x_{n+1},x_{n+2},x_{m})\le \frac{1}{k},$ we deduce, \begin{align*} \lim_{n,p\rightarrow \infty}\rho(x_{n},x_{n+2p+1})&< \sum_{l=0}^{\infty}\prod_{i=0}^{l}\theta(x_{n+2i},x_{n+2i+1},x_{n+2i+2},x_{n+2p+1})2k^{n+2l}\rho(x_{0},x_{1})\\ &\le \sum_{l=0}^{\infty}\frac{1}{k^{l+1}}2k^{n+2l}\rho(x_{0},x_{1})\\ &\le \sum_{l=0}^{\infty}2k^{n+l-1}\rho(x_{0},x_{1}). 
\end{align*} Note that, the series $\sum_{l=0}^{\infty}2k^{n+l-1}\rho(x_{0},x_{1})$ converges by the ratio test, which implies that $\rho(x_{n},x_{n+2p+1}) \ \ \text{converges} \ \ \text{as} \ \ n,p\rightarrow \infty.$\\ Subcase 2: Assume that $m=n+2p.$ First of all, note that \begin{align*} \rho(x_{n},x_{n+2})&\le k\rho(x_{n-1},x_{n+1})\\&\le k^{2}\rho(x_{n-2},x_{n})\\&\le \cdots \\&\le k^{n}\rho(x_{0},x_{2}) \end{align*} which leads us to conclude that $\rho(x_{n},x_{n+2}) \rightarrow 0 \ \ \text{as} \ \ n\rightarrow \infty.$ Similarly to Subcase 1 we have: \begin{align*} \rho(x_{n},x_{n+2p})&\le \theta(x_{n},x_{n+1},x_{n+2},x_{n+2p})[\rho(x_{n},x_{n+1})+\rho(x_{n+1},x_{n+2})+\rho(x_{n+2},x_{n+2p})]\\ &\le \theta(x_{n},x_{n+1},x_{n+2},x_{n+2p})\rho(x_{n},x_{n+1})+ \theta(x_{n},x_{n+1},x_{n+2},x_{n+2p})\rho(x_{n+1},x_{n+2})\\ &+ \theta(x_{n},x_{n+1},x_{n+2},x_{n+2p}) \theta(x_{n+2},x_{n+3},x_{n+4},x_{n+2p})[\rho(x_{n+2},x_{n+3})\\ &+\rho(x_{n+3},x_{n+4})+\rho(x_{n+4},x_{n+2p})]\\&\le \theta(x_{n},x_{n+1},x_{n+2},x_{n+2p})\rho(x_{n},x_{n+1})+ \theta(x_{n},x_{n+1},x_{n+2},x_{n+2p})\rho(x_{n+1},x_{n+2})\\ &+ \theta(x_{n},x_{n+1},x_{n+2},x_{n+2p}) \theta(x_{n+2},x_{n+3},x_{n+4},x_{n+2p})\rho(x_{n+2},x_{n+3})\\ &+\theta(x_{n},x_{n+1},x_{n+2},x_{n+2p}) \theta(x_{n+2},x_{n+3},x_{n+4},x_{n+2p})\rho(x_{n+3},x_{n+4})\\ &+ \theta(x_{n},x_{n+1},x_{n+2},x_{n+2p}) \theta(x_{n+2},x_{n+3},x_{n+4},x_{n+2p})\rho(x_{n+4},x_{n+2p})\\ & \le \theta(x_{n},x_{n+1},x_{n+2},x_{n+2p})\rho_{0}+ \theta(x_{n},x_{n+1},x_{n+2},x_{n+2p})\rho_{1}\\ &+ \theta(x_{n},x_{n+1},x_{n+2},x_{n+2p}) \theta(x_{n+2},x_{n+3},x_{n+4},x_{n+2p})\rho_{2}\\ &+\theta(x_{n},x_{n+1},x_{n+2},x_{n+2p}) \theta(x_{n+2},x_{n+3},x_{n+4},x_{n+2p})\rho_{3}\\ &+ \cdots \\ &+\theta(x_{n},x_{n+1},x_{n+2},x_{n+2p}) \theta(x_{n+2},x_{n+3},x_{n+4},x_{n+2p})\times \cdots \\ &\times \cdots \theta(x_{n+2p-3},x_{n+2p-2},x_{n+2p-1},x_{n+2p} )\rho_{2p}\\ &+ \prod_{i=0}^{2p-2} \theta(x_{n+2i},x_{n+2i+1},x_{n+2i+2},x_{n+2p})\rho(x_{n+2p-2},x_{n+2p})\\ & =\sum_{l=0}^{p-1}\prod_{i=0}^{l}\theta(x_{n+2i},x_{n+2i+1},x_{n+2i+2},x_{n+2p})[k^{n+2l}+k^{n+2l+1}]\rho(x_{0},x_{1})\\ &+ \prod_{i=0}^{2p-2} \theta(x_{n+2i},x_{n+2i+1},x_{n+2i+2},x_{n+2p})\rho(x_{n+2p-2},x_{n+2p})\\ &= \sum_{l=0}^{p-1}\prod_{i=0}^{l}\theta(x_{n+2i},x_{n+2i+1},x_{n+2i+2},x_{n+2p})[1+k]k^{n+2l}\rho(x_{0},x_{1})\\ &+\prod_{i=0}^{2p-2} \theta(x_{n+2i},x_{n+2i+1},x_{n+2i+2},x_{n+2p})\rho(x_{n+2p-2},x_{n+2p})\\ &\le \sum_{l=0}^{p-1}\prod_{i=0}^{l}\theta(x_{n+2i},x_{n+2i+1},x_{n+2i+2},x_{n+2p})[1+k]k^{n+2l}\rho(x_{0},x_{1})\\ &+\prod_{i=0}^{2p-2} \theta(x_{n+2i},x_{n+2i+1},x_{n+2i+2},x_{n+2p})k^{n+2p-2}\rho(x_{0},x_{2}) \end{align*} Since $\sup_{m>1}\lim_{n\rightarrow \infty} \theta(x_{n},x_{n+1},x_{n+2},x_{m})\le \frac{1}{k},$ we deduce \begin{align*} \lim_{n,p\rightarrow \infty}\rho(x_{n},x_{n+2p}) &\le \lim_{n,p\rightarrow \infty}\sum_{l=0}^{p-1}\frac{1}{k^{l+1}}[1+k]k^{n+2l}\rho(x_{0},x_{1}) +\frac{1}{k^{2p-1}}k^{n+2p-2}\rho(x_{0},x_{2})\\ &= \lim_{n,p\rightarrow \infty}\sum_{l=0}^{p-1}[1+k]k^{n+l-1}\rho(x_{0},x_{1}) +k^{n-1}\rho(x_{0},x_{2})\\ &\le \sum_{m=0}^{\infty}\left([1+k]k^{m}\rho(x_{0},x_{1}) +k^{m}\rho(x_{0},x_{2})\right) \end{align*} By using the Ratio Test, it is not difficult to see that the series $$\sum_{m=0}^{\infty}\left([1+k]k^{m}\rho(x_{0},x_{1}) +k^{m}\rho(x_{0},x_{2})\right)$$ converges. Hence, $\rho(x_{n},x_{n+2p})$ converges as $n,p\rightarrow \infty.$ Thus, by subcase 1 and subcase 2, we deduce that the sequence $\{x_{n}\}$ is a $\rho-$Cauchy sequence. Since $(X,\rho)$ is a $\rho-$complete controlled rectangular $b-$metric space, we deduce that $\{x_{n}\}$ converges to some $\nu \in X.$ We claim that $\nu$ is a fixed point of $T.$ First, suppose there exists an integer $N$ such that $x_{N}=\nu$. Due to case 2, $T^nx \neq \nu$ for all $n>N$. Similarly, $T^nx \neq T\nu$ for all $n>N$. Hence, we are in case 1, so $\nu$ is a fixed point of $T$.\\ Also, if there exists an integer $N$ such that $T^Nx=T\nu$.
Again, necessarily $T^nx\neq \nu$ and $T^nx\neq T\nu$ for all $n>N$. Thus, $T\nu=\nu$. Therefore, we may assume that for all $n$ we have $x_{n}\not\in \{\nu, T\nu\}.$ \begin{align*} \rho(\nu,T\nu)&\le \theta(\nu , T\nu , x_{n},x_{n+1})[\rho(\nu ,x_{n})+\rho(x_{n},x_{n+1})+ \rho(x_{n+1},T\nu)]\\ &\le \theta(\nu , T\nu , x_{n},x_{n+1})[\rho(\nu ,x_{n})+\rho(x_{n},x_{n+1})+ \rho(Tx_{n},T\nu)]\\ &\le \theta(\nu , T\nu , x_{n},x_{n+1})[\rho(\nu ,x_{n})+\rho(x_{n},x_{n+1})+k\rho(x_{n},\nu)] \end{align*} Now, taking the limit as $n\rightarrow \infty$, we deduce that $\rho(\nu,T\nu)=0$, that is, $T\nu =\nu$, and $\nu$ is a fixed point of $T$ as desired. Finally, to show uniqueness, assume there exist two fixed points of $T$, say $\nu $ and $\mu$, such that $\nu\neq \mu.$ By the contractive property of $T$ we have: $$ \rho(\nu, \mu)=\rho(T\nu, T\mu)\le k \rho(\nu, \mu)<\rho(\nu, \mu)$$ which leads us to a contradiction. Thus, $T$ has a unique fixed point as required. \end{proof} \begin{theorem} Let $(X,\rho)$ be a $\rho-$complete controlled rectangular $b-$metric space, and $T$ a self mapping on $X$ satisfying the following condition: there exists $0<k<\frac{1}{2}$ such that for all $x,y\in X$ $$ \rho(Tx,Ty)\le k[\rho(x,Tx)+\rho(y,Ty)].$$ Also, if $$\sup_{m>1}\lim_{n\rightarrow \infty} \theta(x_{n},x_{n+1},x_{n+2},x_{m})\le \frac{1}{k},$$ where $x_{n}=T^{n}x_{0}$ for an arbitrary $x_{0}\in X$, and for all $u,v \in X$ we have: $$ \lim_{n\rightarrow \infty} \theta(u,v,x_{n},x_{n+1})\le 1,$$ then $T$ has a unique fixed point in $X.$ \end{theorem} \begin{proof} Let $x_{0}\in X$ and define the sequence $\{x_{n}\}$ as follows: $$x_{1}=Tx_{0}, x_{2}=Tx_{1}=T^{2}x_{0}, \cdots, x_{n}=Tx_{n-1}=T^{n}x_{0}, \cdots $$ First of all, note that for all $n\ge 1$ we have \begin{align*} &\rho(x_{n},x_{n+1})\le k[\rho(x_{n-1},x_{n})+\rho(x_{n},x_{n+1})]\\&\Rightarrow (1-k)\rho(x_{n},x_{n+1})\le k\rho(x_{n-1},x_{n})\\ &\Rightarrow \rho(x_{n},x_{n+1})\le \frac{k}{1-k}\rho(x_{n-1},x_{n}).
\end{align*} Since $0<k<\frac{1}{2}$, one can easily deduce that $0<\frac{k}{1-k}<1.$ So, let $c =\frac{k}{1-k}.$ Hence, \begin{align*} \rho(x_{n},x_{n+1})&\le c \rho(x_{n-1},x_{n})\\ &\le c^{2}\rho(x_{n-2},x_{n-1})\\ & \le \cdots\\ & \le c^{n}\rho(x_{0},x_{1}). \end{align*} Therefore, $$\rho(x_{n},x_{n+1})\rightarrow 0\ \ \ \text{as} \ \ \ n\rightarrow \infty. \ \ \label{E3}$$ Also, for all $n\ge 1$ we have $$\rho(x_{n},x_{n+2})\le k[\rho(x_{n-1},x_{n})+\rho(x_{n+1},x_{n+2})].$$ Thus, by using the fact that $\rho(x_{n},x_{n+1})\rightarrow 0\ \ \ \text{as} \ \ \ n\rightarrow \infty,$ we deduce that $$\rho(x_{n},x_{n+2})\rightarrow 0\ \ \ \text{as} \ \ \ n\rightarrow \infty.$$ Now, similarly to the proof of cases 1 and 2 of Theorem \ref{one}, we deduce that the sequence $\{x_{n}\}$ is a $\rho-$Cauchy sequence. Since $(X,\rho)$ is a $\rho-$complete controlled rectangular $b-$metric space, we conclude that $\{x_{n}\}$ converges to some $\nu \in X.$ Using the argument in the proof of Theorem \ref{one}, we may assume that for all $n\ge 1 \ \ \text{we have}\ \ x_{n}\not\in \{\nu,T\nu\}.$ Thus, \begin{align*} \rho(\nu,T\nu)&\le \theta(\nu , T\nu , x_{n},x_{n+1})[\rho(\nu ,x_{n})+\rho(x_{n},x_{n+1})+ \rho(x_{n+1},T\nu)]\\ &\le \theta(\nu , T\nu , x_{n},x_{n+1})[\rho(\nu ,x_{n})+\rho(x_{n},x_{n+1})+ \rho(Tx_{n},T\nu)]\\ &\le \theta(\nu , T\nu , x_{n},x_{n+1})[\rho(\nu ,x_{n})+\rho(x_{n},x_{n+1})+k\rho(x_{n}, Tx_{n})+k\rho(\nu,T\nu)]. \end{align*} Taking the limit of the above inequalities, and using the hypothesis $\lim_{n\rightarrow \infty} \theta(\nu,T\nu,x_{n},x_{n+1})\le 1$, we get $$\rho(\nu,T\nu)\le 0+0+0+k\rho(\nu,T\nu).$$ Since $0<k<\frac{1}{2}$, this forces $\rho(\nu,T\nu)=0$, which implies that $T\nu=\nu$, and hence $\nu$ is a fixed point of $T.$ Finally, to show uniqueness, assume there exist two fixed points of $T$, say $\nu $ and $\mu$, such that $\nu\neq \mu.$ By the contractive property of $T$ we have: $$ \rho(\nu, \mu)=\rho(T\nu, T\mu)\le k[\rho(\nu,T\nu)+\rho(\mu,T\mu)]=k[0+0]=0,$$ which contradicts $\nu\neq\mu$. Thus, $T$ has a unique fixed point as required.
\end{proof} \section{Application} In closing, we present the following application of our results. \begin{theorem}\label{tm} For any natural number $m\ge 3$ the equation \begin{equation}\label{app1} x^{m}+1= (m^{4}-1)x^{m+1}+m^{4}x \end{equation} has a unique real solution. \end{theorem} \begin{proof} First of all, note that if $|x|>1,$ Equation \eqref{app1} does not have a solution. So, let $X=[-1,1]$ and for all $x,y\in X$ let $\rho(x,y)=|x-y|$\\ and $\theta(x,y,u,v)=\max\{x,y,u,v\}+2.$ It is not difficult to see that $(X,\rho)$ is a $\rho-$complete controlled rectangular $b-$metric space. Now, let \begin{equation*} Tx=\frac{x^{m}+1}{(m^{4}-1)x^{m}+m^{4}}. \end{equation*} Notice that, since $m\ge 3,$ we have $m^{4}\ge 81>6.$ Thus, \begin{align*} \rho(Tx,Ty)&=|\frac{x^{m}+1}{(m^{4}-1)x^{m}+m^{4}}-\frac{y^{m}+1}{(m^{4}-1)y^{m}+m^{4}}| \\ & = | \frac{x^{m}-y^{m}}{((m^{4}-1)x^{m}+m^{4})((m^{4}-1)y^{m}+m^{4})}|\\ &\le \frac{|x-y|}{m^{4}}\\ &\le \frac{|x-y|}{6}\\ &= \frac{1}{6}\rho(x,y) \end{align*} Hence, \begin{equation*} \rho(Tx,Ty)\le k \rho(x,y) \ \ \ \text{where} \ \ \ k=\frac{1}{6}. \end{equation*} On the other hand, notice that for all $x_{0}\in X$ we have \begin{equation*} x_{n}=T^{n}x_{0}\le \frac{2}{m^{4}}. \end{equation*} Thus, \begin{eqnarray*} \sup_{n\geq 1}\lim_{i\rightarrow \infty }\theta (x_{i},x_{i+1},x_{i+2},x_{n}) &\le &\frac{2}{m^{4}}+2\\ &< &3<6=\frac{1}{k}. \end{eqnarray*} Finally, note that $T$ satisfies all the hypotheses of Theorem \ref{one}. Therefore, $T$ has a unique fixed point in $X,$ which implies that Equation \eqref{app1} has a unique real solution as desired.
\end{proof} \section{Conclusion} In closing, we would like to bring to the reader's attention the following open questions: \begin{question} Let $(X,\rho)$ be a controlled rectangular $b-$metric space, and $T$ a self mapping on $X.$ Also, assume that for all distinct $x,y,Tx,Ty\in X$ there exists $k\in (0,1)$ such that \begin{equation*} \rho(Tx,Ty)\le k\theta(x,y,Tx,Ty)\rho(x,y). \end{equation*} What other hypotheses should we add so that $T$ has a unique fixed point in the whole space $X$? \end{question} \begin{question} Let $(X,\rho)$ be a controlled rectangular $b-$metric space, and $T$ a self mapping on $X.$ Also, assume that for all distinct $x,y,Tx,Ty\in X$ there exists $k\in (0,1)$ such that \begin{equation*} \rho(Tx,Ty)\le \theta(x,y,Tx,Ty)[\rho(x,Tx)+\rho(y,Ty)]. \end{equation*} What other hypotheses should we add so that $T$ has a unique fixed point in the whole space $X$? \end{question} \section*{Acknowledgements} The author would like to thank Prince Sultan University for funding this work through research group Nonlinear Analysis Methods in Applied Mathematics (NAMAM) group number RG-DES-2017-01-17.\\
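The fixed-point argument of the application section invites a quick numerical check (a hypothetical Python sketch, not part of the paper): Picard iteration of $Tx=(x^{m}+1)/((m^{4}-1)x^{m}+m^{4})$ converges to the unique real solution of $x^{m}+1=(m^{4}-1)x^{m+1}+m^{4}x$; for $m=3$ the iterates settle near $0.01235$.

```python
def T(x, m):
    """One Picard step; a fixed point of T solves x^m + 1 = (m^4 - 1)x^(m+1) + m^4 x."""
    return (x**m + 1) / ((m**4 - 1) * x**m + m**4)

def picard_root(m, x0=0.0, iters=100):
    """Iterate T from x0; the contraction argument guarantees convergence."""
    x = x0
    for _ in range(iters):
        x = T(x, m)
    return x

m = 3
root = picard_root(m)                   # near 0.0123457 (just above 1/81)
# residual of the original polynomial equation at the computed root
residual = root**m + 1 - ((m**4 - 1) * root**(m + 1) + m**4 * root)
```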
{"config": "arxiv", "file": "1910.13704.tex"}
TITLE: Continuity of the inverse map QUESTION [1 upvotes]: If we have a function $F(x): \mathbb{R^4} \rightarrow \mathbb{R^3}$. Defined as \begin{align} x_1\, x_4&=y_1 \\ x_2\, x_4&=y_2 \\ x_1^2+x_2^2-x_3^2&=y_3 \end{align} Can a continuous inverse map exist? I'm intuitively guessing that the problem with continuity can occur at $y=(0,0,1)$ but I'm not being able to prove it. REPLY [2 votes]: The question as asked is easier than what I attack here. Certainly the lack of injectivity spells doom for an inverse, much less a continuous inverse. That said, it's fun to think about how we could suitably restrict the given function as to obtain an inverse. The Jacobian matrix essentially reveals when and what we can do in that regard. I give an illustration of this below: Observe, $F(x_1,x_2,x_3,x_4) = (y_1,y_2,y_3)$ defined by: \begin{align} x_1\, x_4&=y_1 \\ x_2\, x_4&=y_2 \\ x_1^2+x_2^2-x_3^2&=y_3 \end{align} for all $(x_1,x_2,x_3,x_4) \in \mathbb{R}^4$ has Jacobian matrix: $$ J_F = \left[ \frac{\partial F}{\partial x_1} \bigg{|} \frac{\partial F}{\partial x_2}\bigg{|}\frac{\partial F}{\partial x_3}\bigg{|}\frac{\partial F}{\partial x_4}\right] = \left[ \begin{array}{cccc} x_4 & 0 & 0 & x_1 \\ 0 & x_4 & 0 & x_2 \\ 2x_1 & 2x_2 & -2x_3 & 0 \end{array}\right]$$ This shows us what the dimension of the image is locally. In particular, the rank of the Jacobian shows us the dimension of the image near the point as the component functions are polynomial and hence continuous. For example, if $x_3=0$ then we need the following determinant to be nonzero in order that $J_F$ have rank 3: $$\text{det}\left[ \begin{array}{ccc} x_4 & 0 & x_1 \\ 0 & x_4 & x_2 \\ 2x_1 & 2x_2 & 0 \end{array}\right] = -2x_4(x_2^2+x_1^2)$$ which is nonzero for $x_4 \neq 0$ and $(x_1,x_2) \neq (0,0)$. 
With this in mind, I return to the original system of equations and invert them with respect to the given conditions: \begin{align} x_1\, x_4&=y_1 \\ x_2\, x_4&=y_2 \\ x_1^2+x_2^2 &=y_3 \end{align} we wish to solve for $x_1,x_2,x_4$ in terms of $y_1,y_2,y_3$. Note $y_1^2+y_2^2 = x_4^2(x_1^2+x_2^2) = x_4^2y_3$. Therefore, \begin{align} x_4 &= \pm \sqrt{(y_1^2+y_2^2)/y_3} \\ x_1 &= y_1/x_4 = \pm y_1/\sqrt{(y_1^2+y_2^2)/y_3} \\ x_2 &= y_2/x_4 = \pm y_2/\sqrt{(y_1^2+y_2^2)/y_3} \end{align} The formulas above give a pair of local inverses for $F$ restricted to the three-dimensional subset $S$ of $\mathbb R^4$ for which $(x_1,x_2,x_3,x_4)$ has $x_3=0$ and $x_4 \neq 0$ and $(x_1,x_2) \neq (0,0)$. In particular, if we denote $S=S_+ \cup S_-$ where $S_+$ has $x_4 >0$ and $S_-$ has points with $x_4<0$ then the formulas above define inverse functions for $F$ restricted to $S_\pm$.
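A quick round-trip check (a numerical sketch of my own, not part of the answer) confirms the branch of the inverse on $S_+$: start from a point with $x_3=0$ and $x_4>0$, push it through $F$, and recover it with the formulas above.

```python
import math

def F(x1, x2, x3, x4):
    """The map of the question, R^4 -> R^3."""
    return (x1 * x4, x2 * x4, x1**2 + x2**2 - x3**2)

def F_inv_plus(y1, y2, y3):
    """Branch of the inverse on S_+ (x3 = 0, x4 > 0, (x1, x2) != (0, 0))."""
    x4 = math.sqrt((y1**2 + y2**2) / y3)
    return (y1 / x4, y2 / x4, 0.0, x4)

x = (0.6, 0.8, 0.0, 2.0)        # a point of S_+
y = F(*x)                       # approximately (1.2, 1.6, 1.0)
recovered = F_inv_plus(*y)      # approximately x again
```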
{"set_name": "stack_exchange", "score": 1, "question_id": 1386721}
TITLE: Traffic lights probability QUESTION [0 upvotes]: Ive been asked the following : Two consecutive traffic lights have been synchronized to make a run of green lights more likely. In particular, if a driver finds the first light to be red, the second light will be green with probability 0.9, and if the first light is green the second will be green with probability 0.71 . If the probability of finding the first light green is 0.62 , find the probability that a driver will find both lights green. Here is how I've modeled the probabilities : Defining : a : first light red b : second light green c : first light green Therefore: $$ P(a\cap b)= 0.9$$ $$ P(c\cap b)= 0.71$$ If $ P(c)= 0.62$ what is $ P(c\cap b) $ ? I calculate the probability to be $ P(c\cap b)P(c)=(.71)(.62)=.4402 $ I doubt this is correct but not sure what other path to take ? Referencing this question which is similar : Traffic Light Probability REPLY [1 votes]: You are asked to find the probability that the first light and the second light are green. For two events $A$ and $B$ we have $$\mathsf P(A\cap B)=\mathsf P(A\mid B)\mathsf P(B)$$ where $\mathsf P(A\cap B)$ denotes the probability that $A$ and $B$ both occur and $\mathsf P(A\mid B)$ denotes the probability that $A$ happens, given that $B$ happens. Using the events you have already defined we have $$\mathsf P(B\cap C)=\mathsf P(B\mid C)\mathsf P(C)=0.71\cdot0.62=0.4402$$ Note in probability we usually capitalize letters which represent random variables.
{"set_name": "stack_exchange", "score": 0, "question_id": 2993630}
TITLE: Homotopy equivalence in terms of strong deformation retract QUESTION [2 upvotes]: Visualizing homotopy equivalence maps are not so easy. I thought before that $f:X\to Y$ and $g:Y\to X$ are homotopy equivalence iff one can deform $X$ continuously to $Y$. But this is wrong in general. So I tried the following: Q1: $f:X\to Y$ and $g:Y\to X$ are homotopy equivalence iff one can deform $X$ and $Y$ continuously to a third space $Z$. or I think equivalently $f:X\to Y$ and $g:Y\to X$ are homotopy equivalence iff there is a $A\subset X$ such that $A$ be a strong deformation retract of $X$ and $f(A)$ be a strong deformation retract of $Y$ Q2: What about this one? $f:X\to Y$ and $g:Y\to X$ are homotopy equivalence iff there is a $A\subset X$ such that $A$ be a strong deformation retract of $X$ and $f(A)$ be a strong deformation retract of $Y$ (added after Paul's answer) AND there is a $B\subset Y$ such that $B$ be a strong deformation retract of $Y$ and $g(B)$ be a strong deformation retract of $X$. Are the above statements true? any proof or counterexample? REPLY [1 votes]: It is wrong. You ask whether the following two conditions are equivalent for two maps $f:X\to Y$ and $g:Y\to X$: $f$ and $g$ are homotopy equivalences. There is a $A\subset X$ such that $A$ be a strong deformation retract of $X$ and $f(A)$ be a strong deformation retract of $Y$. First note that $g$ does not play any role in 2. Now let $f : S^1 \to *$ be the constant map, where $*$ is a one-point space, and $g : * \to S^1$ be any map. $f$ is no homotopy equivalence. Now take $A = S^1$. Then you see that 2. is satisfied. Update for Q2: It is wrong. For $n \le 0$ let $C_n \subset \mathbb R^2$ be the circle with radius $1/3$ and center $(n,0)$, for $n > 0$ let $C_n = \{(n,0)\}$. Define $$X = Y = \bigcup_{n \in \mathbb Z} C_n ,$$ $$f : X \to Y, f(z) = \begin{cases} z + (1,0) & z \in C_n, n \ne 0 \\ (1,0) & z \in C_0 \end{cases}$$ and $g = f$. 
This map translates $C_n$ to $C_{n+1}$ if $n \ne 0$ and collapses the circle $C_0$ to the point $C_1$. $f$ is not a homotopy equivalence. Let $A = X$. Then $f(A) = X$ and your condition on $f$ is satisfied. Since $g = f$, also the condition on $g$ is satisfied.
{"set_name": "stack_exchange", "score": 2, "question_id": 3849785}
\begin{document} \title{Extinction probabilities of branching processes with countably infinitely many types} \authorone[The University of Melbourne]{S. Hautphenne} \authortwo[Universit\'e libre de Bruxelles]{G. Latouche} \authorthree[The University of Adelaide]{G. T. Nguyen} \addressone{Department of Mathematics and Statistics, The University of Melbourne, VIC 3010, Australia. sophiemh@unimelb.edu.au.} \addresstwo{D\' epartement d'Informatique, Universit\'{e} libre de Bruxelles 1050 Brussels, Belgium. latouche@ulb.ac.be.} \addressthree{School of Mathematical Sciences, The University of Adelaide, 5005, Australia. \newline giang.nguyen@adelaide.edu.au.} \begin{abstract} We present two iterative methods for computing the global and partial extinction probability vectors for Galton-Watson processes with countably infinitely many types. The probabilistic interpretation of these methods involves truncated Galton-Watson processes with finite sets of types and modified progeny generating functions. In addition, we discuss the connection of the convergence norm of the mean progeny matrix with extinction criteria. Finally, we give a sufficient condition for a population to become extinct almost surely even though its population size explodes on the average, which is impossible in a branching process with finitely many types. We conclude with some numerical illustrations for our algorithmic methods. \end{abstract} \keywords{multitype branching process; extinction probability; extinction criteria; iterative methods } \ams{60J80}{60J05;60J22;65H10} \section{Introduction} Branching processes are powerful mathematical tools frequently used to study the evolution of collections of individuals over time. In particular, multi-type Galton-Watson processes represent populations in which individuals are classified into different categories and live for one unit of time. Each individual may reproduce at the end of its lifetime, with reproduction rules dependent on its type. 
When the number of types is finite, one extinction criterion is based on the spectral radius $\spc(M)$ of the mean progeny matrix $M$, the elements $M_{ij}$ of which are the expected number of direct offspring with type $j$ for a parent of type~$i$. Moreover, the extinction probability vector $\bs{q}$ is the minimal nonnegative solution of the fixed-point equation $\bs{q} = \bs P(\bs{q})$, where each component $q_i$ is the extinction probability given the initial type~$i$, and $\bs P(\cdot)$ is the progeny generating function of the process. Harris~\cite{harris63} and references therein present a comprehensive analysis of extinction criteria and extinction probability for Galton-Watson processes with finitely many types. To allow, as we do here, the set of types to be infinite gives rise to three main challenges. First, as the mean progeny matrix $M$ has infinite dimension, one has to look for a replacement for the spectral radius as an extinction criterion. Second, one needs to determine how to compute the extinction probability vector $\bs{q}$, which now has infinitely many entries. Third, the concept of extinction has to be defined carefully: when there are infinitely many types, it is possible for every type to eventually disappear while the whole population itself explodes. We use the term \emph{global extinction} to indicate that the whole population becomes extinct, and represent by $\bs q$ the probability vector for this event; we refer to the event that every type becomes extinct as \emph{partial extinction}, and denote its probability vector by $\tilde{\bs q}$. Naturally $\bs q \leq \tilde{\bs q}$, and the question is whether they are equal or not. Galton-Watson processes with infinitely many types have been much investigated already. Moyal~\cite{moyal62} assumes that the types belong to an abstract space and proves that the extinction probability is a solution of the fixed point equation $\bs s = \bs{P}(\bs s)$.
Mode~\cite[Theorem~7.2]{mode71}, for a restricted family of progeny densities, gives an extinction criterion based on the spectral radius of some integral operator. Focusing on denumerably infinite sets of types, Moy~\cite{moy66,moy67} and Spataru~\cite{spataru89} use ergodic properties for infinite matrices, and analyse in special cases the role of the \emph{convergence norm} of $M$ as an extinction criterion. Recently, some authors in the literature of branching random walks have defined \emph{local survival}, meaning that for every given type $i$ and arbitrarily large epoch $T$ there is at least one individual of type $i$ alive at some time $t > T$, with \emph{global survival} meaning that at least one individual is alive at any time, and \emph{strong local survival}, when the two have the same probability. We refer to Bertacchi and Zucca~\cite{bertacchi09}, Zucca~\cite{zucca11}, and to~Gantert \emph{et al}.~\cite{gantert10}. There is, however, no simple general extinction criterion so far for Galton-Watson processes with countably infinitely many types, and the question of actually computing the extinction probability vector has received scant attention, if any. Our main result is the development of two algorithmic methods for computing the global and the partial extinction probability vectors $\bs{q}$ and $\tilde{\bs{q}}$. The methods, which are presented in Section~\ref{sec:algos}, have a physical interpretation based on two truncated Galton-Watson processes with finite sets of types. They may be applied to both irreducible and reducible branching processes with countably infinitely many types. In Section~\ref{sec:superc} we discuss some extinction criteria expressed in terms of the convergence norm of the mean progeny matrix $M$ in the irreducible case, or of irreducible sub-matrices of $M$ when $M$ is reducible. We also give a sufficient condition under which the population becomes extinct almost surely while its expected size tends to infinity.
That condition implies that the asymptotic growth rate of the process may depend on the distribution of the initial individual's type. In Section~\ref{sec:rand}, we provide some numerical illustrations. Our examples are taken from two classes of processes for which the matrix $M$ is tridiagonal (and irreducible) or super-diagonal (and reducible). \section{Preliminaries} \label{sec:model} Consider the process $\{\mathcal{Z}_n=(Z_{n1}, Z_{n2}, \ldots)\}_{n\in \mathds{N}}$, where $Z_{n{\ell}}$ is the number of individuals of type $\ell$ alive at the $n$th generation, for $\ell$ in the countably infinite set of types $\mathcal{S}= \{1, 2, 3, \ldots\}$. Unless otherwise stated, the process starts in generation~0 with one individual. We denote by $p_{i\bs{j}}$ for $\bs{j} = (j_1, j_2, \ldots)$ the probability that an individual of type~$i$ gives birth to~$j_1$ children of type~1, $j_2$ children of type~2, etc., and the progeny generating function $P_i(\boldsymbol{s})$ of type $i\in \mathcal{S}$ is given by \begin{align*} P_i(\bs{s}) = \sum_{\bs j \in \mathds{N}^{\infty}}p_{i\bs{j}} \bs{s}^{\bs{j}} = \sum_{\bs j \in \mathds{N}^{\infty}}p_{i\bs{j}} \prod_{k = 1}^{\infty} s_k^{j_k}, \end{align*} with $\bs{s} = (s_1, s_2, \ldots)$, $s_i \in [0,1]$ for all $i$. We define $\bs P(\bs{s}) = (P_1(\bs{s}), P_2(\bs{s}), \ldots)$. The {mean progeny matrix} $M$ is defined by \[ M_{ij} = \left.\frac{\partial P_i(\bs{s})}{\partial s_j} \right |_{\bs{s} = \bs{1}} \qquad \mbox{for $i,j \in \mathcal S$}, \] and $M_{ij}$ is the expected number of direct offspring of type $j$ born to a parent of type~$i$. The process $\{\mathcal{Z}_n\}$ is said to be {irreducible} if $M$ is irreducible, and it is {reducible} otherwise. The total population size at the $n$th generation is $|\mathcal{Z}_{n}| = \sum_{\ell = 1}^{\infty} {Z}_{n\ell}$, and we denote by $\varphi_0$ the type of the first individual in generation~0. 
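For intuition, when a progeny law has finite support it can be encoded directly and the quantities above evaluated numerically. The following Python sketch is our own illustration (the dictionary encoding and function names are not part of the model); it computes $P_i(\bs s)$ and the corresponding row of $M$ for a type whose offspring involve only the first $m$ types:

```python
import math

# Illustrative sketch: the progeny law of one type, supported on finitely
# many offspring vectors (j_1, ..., j_m), encoded as {offspring: probability}.

def pgf(law, s):
    """P_i(s) = sum_j p_{ij} prod_k s_k^{j_k}."""
    return sum(p * math.prod(s_k ** j_k for s_k, j_k in zip(s, j))
               for j, p in law.items())

def mean_row(law, m):
    """Row i of M: M_{ij} = dP_i/ds_j at s = 1, the mean number of type-j children."""
    return [sum(p * j[col] for j, p in law.items()) for col in range(m)]

# A type that dies childless w.p. 1/4, has two type-1 children w.p. 1/2,
# or one type-2 child w.p. 1/4:
law = {(0, 0): 0.25, (2, 0): 0.5, (0, 1): 0.25}
print(pgf(law, (1.0, 1.0)))   # a pgf equals 1 at s = 1
print(mean_row(law, 2))       # mean offspring counts [1.0, 0.25]
```

An infinite set of types would be handled by generating such rows on demand; only the definitions above are used.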
The conditional {\em global} extinction probability vector, given the initial type, is $\vc{q} = (q_1,q_2, \ldots)$ where \begin{align*} q_{i} & = \mathds{P}[\lim_{n\rightarrow \infty}\left. |\mathcal{Z}_{n} |= {0} \right| \varphi_0 = {i}] \qquad \mbox{ for } i \in \mathcal{S}. \end{align*} This is the usual conditional probability that the whole population eventually becomes extinct, given the type of the initial individual, and we write that $\vc q = \mathds{P}[\lim_{n\rightarrow \infty} \left. |\mathcal{Z}_{n}| = {0} \right| \varphi_0]$ for short. The vector $\vc q$ is the minimal nonnegative solution of the fixed-point equation \begin{align}\label{exteq} \vc{P}(\vc{s}) = \vc{s}. \end{align} This equation has at most two distinct solutions, $\vc 1$ and $\vc q\leq \vc 1$, if $M$ is irreducible, and potentially infinitely many solutions otherwise (Moyal~\cite{moyal62}, Spataru~\cite{spataru89}). The conditional \emph{partial} extinction probability, given the initial type, is $\tilde{\vc{q}} = (\tilde{q}_1, \tilde{q}_2, \ldots) $ where \begin{align*} \tilde{q}_{i} & = \mathds{P}[\forall \ell: \lim_{n\rightarrow \infty} \left. {Z}_{n\ell} = {0} \right| \varphi_0 = {i}] \qquad \mbox{ for } i \in \mathcal{S}. \end{align*} In the irreducible case, Zucca~\cite{zucca11} observes that $\lim_{n\rightarrow \infty} {Z}_{n\ell} = {0}$ for all types~$\ell$ if and only if the limit is zero for at least one type, regardless of the initial type. The vector $\tilde{\vc q}$ is also a solution of \eqref{exteq}. Indeed, by conditioning on the progeny of the initial individual and using the independence between individuals, we readily obtain, for any $i$, \begin{align*} \tilde{q}_i & = \mathds{P}[\forall \ell: \lim_{n\rightarrow \infty} \left. {Z}_{n\ell} = {0} \right| \varphi_0 = {i}] \\ & = \sum_{\bs j=(j_1,j_2,\ldots) }p_{i\bs{j}} \prod_{k = 1}^{\infty} \mathds{P}[\forall \ell: \lim_{n\rightarrow \infty} \left. 
{Z}_{n\ell} = {0} \right| \varphi_0 = {k}]^{j_k} \\ & = P_i(\tilde{\vc{q}}). \end{align*} When the set of types is finite, global and partial extinction are equivalent, but this is not necessarily the case when the set of types is infinite. Indeed, by Fatou's Lemma, \begin{align*} \lim_{n\rightarrow \infty} |\mathcal{Z}_{n} |=\lim_{n\rightarrow \infty} \sum_{\ell=1}^{\infty} {Z}_{n\ell} \geq \sum_{\ell=1}^{\infty} \lim_{n\rightarrow \infty}{Z}_{n\ell}, \end{align*} so that \begin{align*} \mathds{P}[\lim_{n\rightarrow \infty} |\mathcal{Z}_{n} |=0| \varphi_0 = i] & \leq \mathds{P}[\forall \ell: \lim_{n\rightarrow \infty}{Z}_{n\ell}=0| \varphi_0 = i], \end{align*} for $i \in \mathcal{S}$, so that $\vc0\leq \vc{q}\leq \tilde{\vc{q}} \leq \vc 1$. As the vectors $\vc q$, $\tilde{\vc q}$ and $\vc 1$ are all solutions of (\ref{exteq}) and since there are at most two distinct solutions in the irreducible case, the following lemma is immediate. \begin{lem}\label{lem1} If $M$ is irreducible, and $\tilde{\vc{q}}<\vc 1$, then $\vc q=\tilde{\vc{q}}$.\cqfd \end{lem} In the irreducible case, $\vc q = \tilde{\vc q}<\vc1$ is equivalent to strong local survival in the terminology of branching random walks and, although it is expressed differently, this sufficient condition is observed in~\cite{gantert10} with the assumption that $M$ is tridiagonal, and in \cite{zucca11} for the general case. When $M$ is reducible, it is possible that $\vc{q}< \tilde{\vc{q}} < \vc1$, and we give an example at the end of Section~\ref{sec:rand}. \section{Computational aspects} \label{sec:algos} In this section, we develop iterative methods to compute the extinction probability vectors $\vc q$ and $\tilde{\vc{q}}$. The procedures apply to both irreducible and reducible processes.
The underlying idea is to compute approximations of the infinite vectors $\vc q$ and $\tilde{\vc{q}}$ by solving finite systems of equations in such a way that the successive approximations have probabilistic interpretations: for $\vc q$ we use a time-truncation argument, and for $\tilde{\vc q}$ a space-truncation argument. \subsection{Global extinction probability} \label{sec:global} Denote by $N_e$ the generation number when the process becomes extinct. Clearly, $\vc{q} = \mathds{P}[{N_e} <\infty\,|\varphi_0]$. Let $T_k=\{k + 1, k + 2, \ldots\}$ be the set of types strictly greater than $k$, and define the first passage time $\tau_k=\inf\{n: \sum_{\ell\in T_k}Z_{n\ell}>0\}$, for $k \geq 0$. This is the first generation when an individual of any type in $T_k$ is born. Furthermore, define \begin{align*} {q_i^{(k)}} = \mathds{P}[{{N_e}} < {\tau_k}|\varphi_0 = i], \end{align*} the conditional probability that the process eventually becomes extinct before the birth of any individual of a type in $T_k$, given that the initial individual has type $i$, and $\bs{q}^{(k)} = (q_1^{(k)}, q_2^{(k)}, \ldots ).$ \begin{lem} The sequence $\{\bs{q}^{(k)}\}_{k\geq0}$ is monotonically increasing and converges pointwise to the global extinction probability vector $\vc q$. \end{lem} \begin{proof} Clearly, $T_k\supset T_{k+1}$ for all $k$, and $\tau_k\leq\tau_{k+1}$, so that $[{N_e} < {\tau_k}] \subseteq [{N_e} < {\tau_{k+1}}]$, and ${\bs{q}^{(k)}}\leq{\bs{q}^{(k+1)}}$. Therefore, for any $i$, \begin{align*} \lim_{k\rightarrow\infty}q_i^{(k)}& = \lim_{k\rightarrow\infty}\mathds{P}[{{N_e}}<{\tau_k}\,|\,\varphi_0 = i] \\ & = \mathds{P}[{{N_e}}<\lim_{k\rightarrow\infty} {\tau_k} \,|\,\varphi_0 = i]\\ & = \mathds{P}[{{N_e}} < \infty\,|\,\varphi_0 = i] \\ & = q_i, \end{align*} which completes the proof. \end{proof} By definition, ${q}^{(k)}_i=0$ for all $i\in T_k$. 
Consequently, \begin{align*} \bs{q}^{(0)} & =(0, 0,\ldots), \\ \bs{q}^{(k)} & = (q^{(k)}_1, \ldots, q^{(k)}_k, 0, 0, \ldots) \quad \mbox{ for } k\geq1. \end{align*} Thus, at the $k$th iteration one only needs to compute the finite vector $\bs{w}^{(k)} = (q^{(k)}_1, \ldots, q^{(k)}_k)$, which we do as follows. Consider a branching process $\{\mathcal{W}_{n}^{(k)}\}_{n\in\mathds{N}}$ which evolves like $\{\mathcal Z_n\}$ under taboo of the types in $T_{k}$. The taboo progeny distribution $f_{i\vc j}^{(k)}$ associated with types $i\in\{1, \ldots, k\}$ in $\{\mathcal{W}_{n}^{(k)}\}$ is defined as \begin{align*} f^{(k)}_{i(j_1,\ldots,j_k)}= p_{i(j_1,\ldots,j_k,0,0,\ldots)}. \end{align*} If the process is irreducible, then $ \sum_{\vc j\in\mathds{N}^{k}} f^{(k)}_{i(j_1,\ldots,j_k)} \leq 1, $ for $1 \leq i \leq k$, with at least one strict inequality, and we need to add an absorbing state $\Delta$ to the state space $\mathds{N}^{k}$ of $\{\mathcal{W}_{n}^{(k)}\}$ to account for the missing probability mass. Obviously, absorption in $\Delta$ precludes extinction, and $\bs{w}^{(k)}$ is the vector of probabilities that $\{\mathcal{W}_{n}^{(k)}\}$ becomes extinct before being absorbed in $\Delta$, given the type of the initial individual. Consequently, $\bs{w}^{(k)}$ is the minimal nonnegative solution of the finite system of equations \begin{align} \label{sys1} s_i=F^{(k)}_i(s_1,s_2,\ldots,s_k), \qquad \mbox{ for } 1\leq i\leq k, \end{align} where $F_i^{(k)}(\vc s)=P_i(s_1,\ldots,s_k,0,0,\ldots)$ is the probability generating function of the distribution $f^{(k)}_{i\vc j}$.
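A system of the form \eqref{sys1} can be solved numerically by successive substitution starting from the zero vector; here is a minimal Python sketch (the function name is ours), checked on a single-type example where $s = 1/4 + (3/4)s^2$ has minimal nonnegative root $q = 1/3$:

```python
def minimal_solution(F, k, tol=1e-12, max_iter=100000):
    """Iterate w <- F(w) from the zero vector; the iterates increase
    monotonically to the minimal nonnegative solution of s = F(s)."""
    w = [0.0] * k
    for _ in range(max_iter):
        w_new = [F(i, w) for i in range(k)]
        if max(abs(x - y) for x, y in zip(w, w_new)) < tol:
            return w_new
        w = w_new
    return w

# Single-type sanity check: P(s) = 1/4 + (3/4)s^2; the roots of s = P(s)
# are 1/3 and 1, and the extinction probability is the smaller one.
q = minimal_solution(lambda i, w: 0.25 + 0.75 * w[0] ** 2, k=1)
print(q[0])   # approximately 1/3
```

The same routine applies verbatim to \eqref{sys1} with $F = F^{(k)}$; the probabilistic meaning of the iterates is given next in the text.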
We may compute $\bs{w}^{(k)}$ by linear functional iteration, for instance, on the fixed-point equation \eqref{sys1}: one easily verifies that, for any $k\geq1$, the sequence $\{\bs{w}^{(k,n)}=({w}_1^{(k,n)},\ldots,{w}_k^{(k,n)})\}_{n\geq0}$ recursively defined as \begin{align*} w_i^{(k,n)} & = F^{(k)}_i(w_1^{(k,n-1)}, \ldots, w_k^{(k,n-1)}) \qquad \mbox{for $1\leq i\leq k$,} \end{align*} for $n \geq 1$, with $\bs{w}^{(k,0)}=(0,0\ldots,0)$, satisfies \begin{align*} \bs{w}^{(k,n)} & = \mathds{P}[{{N_e}} < \tau_k \mbox{ and } N_e \leq n \,|\; \varphi_0], \end{align*} and is, therefore, monotonically increasing to $\bs w^{(k)}$. In practice, we would terminate the functional iteration for a given $k$ when $|| \vc w^{(k,n+1)}-\vc w^{(k,n)}||$ becomes sufficiently small. \subsection{Partial extinction probability} \label{subsec:localex} Here, we associate to the branching process $\{ \calz_n\}$ a family of processes $\{\mathcal{Z}^{(k)}_n=(Z^{(k)}_{n1},Z^{(k)}_{n2},\ldots)\}_{n\in\mathds{N}}$, for $k \geq 0$, obtained as follows: for a given $k$, we count neither the individuals of types in ${T_k}$, nor {\em any} of their descendants, whatever their types. It is as if all individuals of types in ${T_k}$ became {sterile}. Define $\tilde{\bs{q}}^{(k)}$ to be the global extinction probability vector of the process $\{\mathcal{Z}^{(k)}_n\}$. \begin{lem} \label{qtidleseq} The sequence of vectors $\{\tilde{\bs{q}}^{(k)}\}_{k \geq 0}$ is monotonically decreasing and converges pointwise to the partial extinction probability vector $\tilde{\bs{q}}$.\end{lem} \begin{proof} Obviously, \begin{align*}\mathcal{Z}^{(k)}_n & =(Z^{(k)}_{n1},Z^{(k)}_{n2},\ldots,Z^{(k)}_{nk},0,0,0,\ldots)\\ & \leq (Z^{(k+1)}_{n1},Z^{(k+1)}_{n2},\ldots,Z^{(k+1)}_{nk},Z^{(k+1)}_{n(k+1)},0,0,\ldots) \quad \textrm{a.s.}\\ & = \mathcal{Z}^{(k+1)}_n, \end{align*} so that, for $n$ fixed and $k \rightarrow \infty$, $\mathcal{Z}^{(k)}_n$ monotonically converges to $\mathcal{Z}_n$. 
Furthermore, \begin{align*} \lim_{n\rightarrow\infty} |\mathcal{Z}^{(k)}_n|\leq \lim_{n\rightarrow \infty} |\mathcal{Z}^{(k+1)}_n|, \end{align*} so that $\tilde{\bs{q}}^{(k)} \geq \tilde{\bs{q}}^{(k+1)}$, for $k\geq0$. Let $B_{\ell}^{(k)}=[\lim_{n\rightarrow\infty} Z^{(k)}_{n\ell}=0]$ be the event that type $\ell$ of $\{\mathcal{Z}_{n}^{(k)}\}$ eventually becomes extinct, and let $A^{(k)}=\cap_{\ell\geq 1} B_{\ell}^{(k)}$ be the event that all types of $\{\mathcal{Z}_{n}^{(k)}\}$ eventually become extinct. We have \begin{align*} \tilde{\bs{q}}^{(k)} &= \mathds{P}[\lim_{n\rightarrow\infty} |\mathcal{Z}^{(k)}_n|=0\,|\,\varphi_0] = \mathds{P}[A^{(k)}\,|\,\varphi_0],\end{align*} since $|\mathcal{Z}^{(k)}_n|=\sum_{\ell} Z^{(k)}_{n\ell}$ contains only finitely many nonzero terms. Furthermore, $B_{\ell}^{(k+1)}\subseteq B_{\ell}^{(k)}$, so that $ A^{(k+1)}\subseteq A^{(k)}$, and \begin{align*} A^{(k)} \searrow A^{(\infty)}=\bigcap_{\ell\geq 1}[\lim_{n\rightarrow\infty} Z_{n\ell}=0]. \end{align*} Therefore, \begin{align*} \lim_{k\rightarrow\infty}\tilde{\bs{q}}^{(k)} = \mathds{P}[A^{(\infty)}\,|\,\varphi_0] = \mathds{P}[\forall\ell: \lim_{n\rightarrow\infty} Z_{n\ell}=0\,|\,\varphi_0] = \tilde{\bs{q}},\end{align*} which completes the proof. \end{proof} By definition of $\{\calz_n^{(k)}\}$, $\tilde{q}^{(k)}_i=1$ for all $i\in T_k$, and so \begin{align*} \tilde{\bs{q}}^{(0)} & = (1,1,\ldots), \\ \tilde{\bs{q}}^{(k)} & = (\tilde{q}^{(k)}_1, \ldots, \tilde{q}^{(k)}_k, 1, 1, \ldots). \end{align*} To compute the finite vector $\widetilde{\bs{w}}^{(k)} = (\tilde{q}^{(k)}_1, \ldots, \tilde{q}^{(k)}_k)$, we may interpret $\{\calz_n^{(k)}\}$ as a Galton-Watson process with finitely many types and progeny distribution $\tilde f_{i\vc j}^{(k)}$ defined as follows: \begin{align*} \tilde f^{(k)}_{i(j_1,\ldots,j_k)}= \sum_{j_{k+1}, j_{k+2}, \ldots \geq 0} p_{i(j_1,\ldots,j_k,j_{k+1}, j_{k+2},\ldots)}.
\end{align*} The vector $\widetilde{\bs{w}}^{(k)}$ is the minimal nonnegative solution of the finite system of equations \begin{align*} s_i=\widetilde{F}^{(k)}_i(s_1,s_2,\ldots,s_k), \qquad \mbox{ for } 1\leq i\leq k, \end{align*} where $\widetilde{F}^{(k)}_i(\bs s)=P_i(s_1,\ldots,s_k,1,1,\ldots)$ is the probability generating function of the distribution $\tilde f_{i\vc j}^{(k)}$. This system may be solved by functional iteration, as explained at the end of Subsection~\ref{sec:global}. \section{Extinction criteria} \label{sec:superc} When the number of types is finite and $M$ is irreducible, it is well-known that \begin{itemize} \item $\vc q < \vc 1$ if $\spc(M) > 1$, \item $\vc q = \vc 1$ if $\spc(M) \leq 1$. \end{itemize} If $M$ is reducible, then \begin{itemize} \item $\vc{q} \lneqq\vc 1$ if and only if $\spc(M) > 1$, \end{itemize} where we write $\vc{v} \lneqq\vc 1$ to indicate that $v_i \leq 1$ for all $i$, with at least one strict inequality. Indeed, if $M$ is reducible, there may exist some type (but not all) from which partial extinction is almost sure even if $\spc(M)> 1$ (Hautphenne~\cite{hautphenne12}). We expect that, in the infinite countable case, some analogue of $\spc(M)$ also plays a role in determining whether extinction occurs almost surely or not. This is the case for partial extinction, but not necessarily for global extinction. \subsection{Partial extinction --- $M$ irreducible} \label{subset:partialcriteria} We denote by $\widetilde{M}^{(k)}$ the mean progeny matrix of the process $\{\calz_{n}^{(k)}\}$ defined in Section~\ref{subsec:localex}, and by $M^{(k)}$ the $k \times k$ north-west truncation of the infinite matrix $M$.
As we do not count individuals with types in $T_k$, $\widetilde{M}^{(k)}$ is given by \begin{align*} \widetilde{M}^{(k)}=\left[\begin{array}{c|c} M^{(k)} & 0 \\\hline 0 & 0\end{array}\right], \end{align*} and it is clear that $\tilde{\vc{q}}^{(k)}= (\tilde{q}_1^{(k)}, \ldots, \tilde{q}_k^{(k)},1,1,\ldots) = \vc 1$ if $\spc(M^{(k)}) \leq 1$, otherwise $\tilde{\vc q}^{(k)} \lneqq\vc 1 $, with $(\tilde{q}_1^{(k)},\ldots,\tilde{q}_k^{(k)}) < \vc 1 $ if $M^{(k)}$ is irreducible. We assume in this subsection that $M$ is irreducible. The \textit{convergence norm} of $M$ is defined as follows. Let $R$ be the convergence radius of the power series $\sum_{k \geq 0} r^k(M^k)_{ij}$, which does not depend on $i$ and $j$. The {convergence norm} ${\nu}$ of ${M}$ is \begin{align*} {\nu} & = {R}^{-1} = {\lim_{k \rightarrow \infty} \{(M^k)_{ij}\}^{1/k}}; \end{align*} it is also the smallest value such that there exists a nonnegative vector $\vc x$ satisfying $ \vc x M\leq \nu\vc x$ (Seneta~\cite[Definition~6.3]{senetabook}). Note that the convergence norm of a finite matrix is equal to its spectral radius. If we assume that all but at most a finite number of truncations $M^{(k)}$ are irreducible, then by Seneta~\cite[Theorem 6.8]{senetabook} the sequence $\{\spc(M^{(k)})\}$ is non-decreasing and $\lim_{k \rightarrow \infty} \spc(M^{(k)}) = \nu$, and one immediately shows the following property. \begin{prop}\label{locextcrit} Assume that $M$ is irreducible. The partial extinction probability vector ${\tilde{\bs{q}}}$ is such that $ {\tilde{\bs{q}}} <\vc 1$ if and only if $\nu>1$.\end{prop} \begin{proof} If $\nu > 1$, as $\spc(M^{(k)})\nearrow \nu$, there exists some $k$ such that $\spc(M^{(k)})>1$ and such that $M^{(k)}$ is irreducible. Thus, $(\tilde{q}_1^{(k)},\ldots,\tilde{q}_k^{(k)})< \vc 1$ and, since $\tilde{\vc q}^{(k)}\searrow {\tilde{\bs{q}}}$ by Lemma~\ref{qtidleseq}, $\tilde{\bs{q}} <\vc 1$ in the limit.
If $\nu\leq 1$, then for all $k$, $\spc(M^{(k)})\leq1$ and $ {\tilde{\bs{q}}}^{(k)} =\vc 1$, which implies that ${\tilde{\bs{q}}} =\vc 1$. \end{proof} This is also observed in Gantert \textit{et al.} \cite{gantert10} and Müller \cite{muller08}, who use different arguments. \subsection{Partial extinction --- $M$ reducible} \label{s:mreducible} Let us assume now that the matrix $M$ is reducible. The sequence $\{\spc(M^{(k)})\}$ is still non-decreasing, but its limit might not be the convergence norm of~$M$. Let $\bar{\nu}\in[0,\infty]$ denote the limit. The proof of the proposition below is very similar to that of Proposition~\ref{locextcrit} and is omitted. \begin{prop}\label{locextcrit2} The partial extinction probability vector ${\tilde{\bs{q}}}$ is such that $ {\tilde{\bs{q}}} =\vc 1$ if and only if $\bar{\nu}\leq1$, otherwise $\tilde{\bs{q}} \lneqq\vc 1$. \end{prop} In other words, there exists at least one type $i$ such that $\tilde{q}_i<1$ if and only if $\bar{\nu}>1$. The next question is: for which $i$ does the inequality $\tilde{q}_i<1$ hold? We give below a necessary and sufficient condition for $\tilde{q}_i$ to be strictly less than one. We write $i\rightarrow j$ when type $i$ has a positive probability of generating an individual of type $j$ in a subsequent generation, that is, if there exists $n\geq1$ such that $(M^n)_{ij}>0$. We define equivalence classes $C_1,C_2,\ldots$ such that $i$ and $j$ belong to the same class if and only if $i\rightarrow j$ and $j\rightarrow i$. This induces a partition of the set of types $\mathcal{S}$ and we write that $C_k\rightarrow C_\ell$ when there exist $i\in C_k$ and $j\in C_\ell$ such that $i\rightarrow j$. We denote by $M_{k}$ the irreducible mean progeny matrix restricted to types in $C_k$, that is, $M_{k}=(M_{ij})_{i,j\in C_k}$, and by $\nu_k$ the convergence norm of $M_{k}$.
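The limit of the non-decreasing sequence $\{\spc(M^{(k)})\}$ is easy to estimate numerically. The sketch below is our own illustration: it approximates $\spc(M^{(k)})$ by power iteration (valid for nonnegative matrices) for the tridiagonal matrix used in Section~\ref{sec:rand}, whose convergence norm $b + 2\sqrt{ac}$ is known in closed form:

```python
import math

def truncation_radius(a, b, c, k, iters=4000):
    """Spectral radius of the k x k north-west truncation of the
    tridiagonal matrix M (sub-diagonal a, diagonal b, super-diagonal c),
    estimated by power iteration."""
    v = [1.0] * k
    lam = b
    for _ in range(iters):
        w = [(a * v[i - 1] if i > 0 else 0.0) + b * v[i]
             + (c * v[i + 1] if i < k - 1 else 0.0) for i in range(k)]
        lam = max(w)          # max(v) == 1, so this ratio estimates spc
        v = [x / lam for x in w]
    return lam

a, b, c = 0.5, 0.5, 1 / 25
nu = b + 2 * math.sqrt(a * c)                  # convergence norm of M
print([round(truncation_radius(a, b, c, k), 4) for k in (5, 20, 50)])
print(round(nu, 4))                            # the limit, about 0.7828
```

The estimates increase with $k$ toward $\nu$, in line with Seneta's truncation result quoted above.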
\begin{prop}\label{locextcrit3} If $i$ is a type in $C_k$, then the partial extinction probability $\tilde{q}_i$ is strictly less than 1 if and only if $\nu_k > 1$ or there exists a class $C_\ell$ such that $C_k\rightarrow C_\ell$ and $\nu_\ell>1$.\end{prop} \begin{proof} Let $i\in C_k$ and assume that $\nu_k>1$. Then, by Proposition~\ref{locextcrit}, the probability that every type in $C_k$ eventually becomes extinct, given that the initial type is in $C_k$, is strictly less than one, hence $\tilde{q}_i<1$. Now assume that $i\in C_k$, and that there exists a class $C_\ell$ such that $C_k\rightarrow C_\ell$ and $\nu_\ell>1$. Then, $i\rightarrow j$ for all $j\in C_\ell$, that is, there is a positive probability that type $i$ has type $j\in C_\ell$ among its descendants; moreover, since $\nu_\ell>1$, starting from any $j\in C_\ell$, the probability that every type in $C_\ell$ eventually becomes extinct is strictly less than one by Proposition~\ref{locextcrit}. We thus obtain $\tilde{q}_i<1$. If $\nu_k\leq 1$ and there is no class $C_\ell$ such that $C_k\rightarrow C_\ell$ and $\nu_\ell>1$, then all classes $C_\ell$ such that $C_k\rightarrow C_\ell$ satisfy $\nu_\ell\leq1$. In other words, by Proposition~\ref{locextcrit}, all the descendants of type $i$ will generate a process which becomes partially extinct with probability 1. Thus, partial extinction is almost sure if the process is initiated by type $i$, and we have $\tilde{q}_i=1$. \end{proof} \subsection{Global extinction --- $M$ irreducible} \label{subsec:globalcriteria} We assume again that $M$ is irreducible. By Lemma \ref{lem1} and Proposition \ref{locextcrit}, we know that if $\nu >1$, then ${\vc q}= {\tilde{\bs{q}}} <\vc1$, and if $\nu \leq 1$, then $\vc q \leq {\tilde{\bs{q}}} =\vc1$. One question which remains is to determine additional conditions that guarantee $\vc q = {\tilde{\bs{q}}} =\vc1$.
The most precise answers rely on the dichotomy property, which states that with probability 1 the population either becomes extinct or drifts to infinity (Harris~\cite{harris63}). In the finite case, this follows under very mild conditions, but it is more problematic if the number of types is infinite. In particular, Tetzlaff~\cite[Condition~2.1 and Proof of~Proposition~2.2]{tetzlaff03} gives the following sufficient condition for the dichotomy property to hold: it suffices that for all $k \geq 1$, there exists an index $m_k$ and a real number $d_k > 0$ such that \begin{align} \label{dich} \inf_{i} \mathds{P}[|\mathcal{Z}_{m_k}| = 0 \; | \;\varphi_0=i,\,1 \leq |\mathcal{Z}_1| \leq k] \geq d_k. \end{align} This indicates that the probability that the population becomes extinct within $m_k$ generations is bounded away from zero, uniformly in the type of the initial individual. The next property is proved in~\cite{tetzlaff03}. \begin{prop} \label{prop:Tetzlaff} Assume that the dichotomy property holds. If $\liminf_{n\rightarrow\infty} (M^n\,\vc1)$ is finite, then ${\vc q}=\vc 1$. \end{prop} A direct consequence, which brings the convergence norm back into the picture, is the following. \begin{prop}\label{NC} Assume that the dichotomy property holds. If there exist $\lambda\leq 1$ and $\vc x>\vc 0$ such that $\vc x\vc1<\infty$ and $\vc x M\leq\lambda \vc x$, then ${\vc q}=\vc 1$. \end{prop} \begin{proof} Under the assumptions of the proposition, $\vc x M^n\vc1 \leq \lambda^n \vc x\vc1$, which implies that $\liminf_{n\rightarrow\infty}\vc x M^n\vc1 < \infty$. Applying Fatou's Lemma, we obtain $ \vc x \liminf_{n\rightarrow\infty}(M^n\vc1)< \infty, $ which leads to $\liminf_{n\rightarrow\infty}(M^n\vc1)< \infty$ since $\vc x > \vc 0$. Thus, by Proposition~\ref{prop:Tetzlaff} the result follows. \end{proof} If such a $\lambda$ exists, it is at least equal to $\nu$; recall that $\nu \leq 1$ is a necessary condition for $\vc q = \bs{1}$.
The difference between $\lambda$ and $\nu$, and the additional constraint imposed by this proposition, is that the measure associated with $\lambda$ must be convergent, which is not necessarily the case with the measure associated with $\nu$. \subsection{Growth rate and extinction} \label{sec:initial} When the number of types is finite and the process is irreducible, the expected total population size increases, or decreases, asymptotically geometrically: independently of the initial type, $\mathbb{E}[|\mathcal{Z}_n|]\sim \rho^n$ where $\rho$ is the spectral radius of $M$. This is no longer the case when the number of types is infinite, and the evolution of $\mathbb{E}[|\mathcal{Z}_n|]$ may depend on the distribution of $\varphi_0$. Actually, it is possible for a process to become globally extinct almost surely while the expected population size increases without bound. This we show below, and we give one example in the next section. Assume that there exists a probability measure $\vc\alpha_1$ such that $\vc\alpha_1 M \leq \lambda_1 \vc\alpha_1$ with $\lambda_1 < 1$, and a probability measure $\vc\alpha_2$ such that $\vc\alpha_2 M \geq \lambda_2 \vc\alpha_2$ with $\lambda_2 > 1$. If, in addition, the dichotomy property holds, then $\vc q = \vc 1$ by Proposition~\ref{NC}. If $\varphi_0$ has distribution $\vc\alpha_1$, then \begin{align*} \mathbb{E}[|\mathcal{Z}_n|] = \vc \alpha_1 \,M^n\,\vc 1\leq \lambda_1^n \end{align*} so that $\lim_{n \rightarrow \infty}\mathbb{E}[|\mathcal{Z}_n|] = 0$. On the contrary, if $\varphi_0$ has the distribution $\vc\alpha_2$, then by a similar argument $\lim_{n \rightarrow \infty}\mathbb{E}[|\mathcal{Z}_n|] = \infty$ and the extinction probability is equal to $\vc\alpha_2 \vc q = 1$. \section{Illustration} \label{sec:rand} We illustrate the results of the previous sections with two examples, one for which $M$ is tridiagonal (and the process is irreducible) and one for which $M$ is super-diagonal (and the process is reducible).
\subsection{Irreducible tridiagonal case} This example corresponds to a homogeneous branching random walk on the positive integers with a reflecting wall at $z=1$. The mean progeny matrix~is \begin{align} \label{trid} M& = \left[\begin{array}{cccccc} b & \;\; c & \\ a & \;\; b & \;c \\ & \;\; a & \;b & c \\ & & \ddots & \ddots & \ddots \end{array}\right], \end{align} where $a$ and $c$ are strictly positive, and $b$ is nonnegative. \begin{prop} \label{propertyM} Assume that $M$ is as given in \eqref{trid}. Then, its convergence norm is ${\nu} = b + 2\sqrt{ac}$, and there exists $\vc x > \vc 0$ such that $\vc x M=\lambda \vc x$ for all $\lambda \geq\nu$. In addition, $\vc x\vc1<\infty$ if and only if $\lambda\in[\nu,a+b+c)$ and $a>c$. The strictly positive and convergent invariant measure $\vc x$ is given by \begin{align} x_k & = \eta \,k\,({\sqrt{ac}}/{a})^k && \mbox{ if } \lambda=\nu, \label{eqn:xkeq} \\ & = \eta \,\{[({\lambda - b +\sqrt{\Delta}})/({2a})]^k-[(\lambda - b -\sqrt{\Delta})/(2a)]^k\} && \mbox{ if } \lambda>\nu, \label{eqn:xkiq} \end{align} for $k\geq1$, where $\eta$ is an arbitrary constant and $\Delta = (b - \lambda)^2 - 4ac$. \end{prop} \begin{proof} Let $M^{(k)}$ denote the $k \times k$ north-west truncation of $M$. Then, by a modification of van Doorn \emph{et al.} \cite[Theorem~1, Eqn~(9)]{dfz09} we obtain \begin{align*} \spc(M^{(k)}) = \min_{\bs{u} \geq \bs{0}} \max_{1 \leq i \leq k} \left\{ b + u_{i + 1} + \frac{ac}{u_i} \right\} . \end{align*} Then, by Seneta~\cite[Theorem~6.8]{senetabook} \begin{align*} \nu & = \lim_{ k \rightarrow \infty} \spc(M^{(k)}) = \min_{\bs{u} \geq \bs{0}} \sup_{i} \left\{ b + u_{i + 1} + \frac{ac}{u_i} \right\} = b + 2\sqrt{ac}, \end{align*} with the last equality following from arguments analogous to those in the proof of Theorem~3.2 in Latouche \emph{et al.} \cite{latouche11}.
Now, for any $\bs{x}$ to satisfy $\bs{x} M = \lambda \bs{x}$, its elements have to satisfy the constraints \begin{align} bx_1 + ax_2 & = \lambda x_1 \label{eqn:con1} \\\nonumber ax_{k + 1} + (b - \lambda)x_k + cx_{k - 1} & = 0 \quad \mbox{for } k \geq 2. \end{align} Let $\Delta = (b - \lambda)^2 - 4ac$. There are three cases, for each of which $\bs{x}$ takes a specific form (Korn and Korn~\cite[Chapter~20, Section~4.5]{kornkorn61}). \textbf{Case 1:} $\Delta = 0$. Then, $\lambda = b \pm 2\sqrt{ac}$, and for $k \geq 1$, \begin{align} x_k = c_1[(\lambda - b)/(2a)]^k + c_2 k [(\lambda - b)/(2a)]^k, \label{eqn:xkCase1} \end{align} where $c_1$ and $c_2$ are constants. Substituting \eqref{eqn:xkCase1} into \eqref{eqn:con1} gives us $c_1 = 0$. To ensure $x_k > 0$ for all $k$, it is necessary that $\lambda > b$. Consequently, $\lambda = b + 2\sqrt{ac}$, and we obtain \eqref{eqn:xkeq}. For $\bs{x}$ to be convergent, we require that $\sqrt{ac}/a < 1$ and thus $a > c$. \textbf{Case 2:} $\Delta > 0$. Then, $\lambda < b - 2\sqrt{ac}$ or $\lambda > b + 2\sqrt{ac}$, and for $k \geq 1$, \begin{align} \label{eqn:xkCase2} x_k = c_3[(\lambda - b + \sqrt{\Delta})/(2a)]^k + c_4[(\lambda - b - \sqrt{\Delta})/(2a)]^k, \end{align} where $c_3$ and $c_4$ are constants. Substituting \eqref{eqn:xkCase2} into \eqref{eqn:con1} gives us $c_3 = -c_4$, and \eqref{eqn:xkCase2} simplifies to \eqref{eqn:xkiq}. It is clear from \eqref{eqn:xkiq} that $\bs{x} > \bs{0}$ if and only if $\lambda > b$. Thus, $\lambda > b + 2\sqrt{ac}$. Finally, $\bs{x} \bs{1} < \infty$ if and only if $(\lambda - b + \sqrt{\Delta})/(2a) < 1$, the latter being equivalent to $\lambda < 2a + b$ and $\lambda < a + b + c$. As $b + 2\sqrt{ac} < \lambda < 2a + b$, both $a < c$ and $a = c$ lead to a contradiction. Consequently, $a > c$. \textbf{Case 3:} $\Delta < 0$.
Then, $b - 2\sqrt{ac} < \lambda < b + 2\sqrt{ac}$ and \begin{align} x_k = (c/a)^{k/2}(c_5 \cos (k \phi) + c_6 \sin (k \phi)), \label{eqn:xkCase3} \end{align} where $\phi = \arccos((\lambda - b)/(2\sqrt{ac})), 0 < \phi < \pi$ and $c_5$ and $c_6$ are constants. Since we are looking for $\bs{x} > \bs{0}$, Case 3 is not feasible. Indeed, we can rewrite \eqref{eqn:xkCase3} as, for $k \geq 1$, $x_k = c_7 (c/a)^{k/2} \cos(c_8 + k \phi)$ where $0 \leq c_8 < 2 \pi$ and $c_7$ is arbitrary. It can be easily shown that there exists $k_0$ such that $\cos c_8$ and $\cos (c_8 + k_0 \phi)$ have different signs. \end{proof} Among the progeny distributions that may be associated with the mean progeny matrix given in \eqref{trid}, we choose \begin{align*} P_i(\vc s) & = ({b}/{t}) s_i^t + ({c}/{t})s_{i+1}^t+1-({b+c})/{t} && \mbox{for } i=1,\\ & = ({a}/{u})s_{i-1}^u + ({b}/{u}) s_i^u + ({c}/{u}) s_{i+1}^u+1- ({a+b+c})/{u} && \mbox{for } i\geq 2, \end{align*} where $t=\lceil b+c \rceil+1$ and $u=\lceil a+b+c \rceil+1$. By varying $a,b,$ and $c$, we shall cover the three possible cases $\vc q=\tilde{\vc q}<\vc1$, $\vc q=\tilde{\vc q}=\vc1$, and $\vc q<\tilde{\vc q}=\vc1$. \paragraph{Case 1: $\vc q=\tilde{\vc q}<\vc1$.} Take $a=b=1/2$ and $c=1/3$. With these, $\nu=b+2\sqrt{ac}\approx1.32>1$, and $\vc q=\tilde{\vc q}<\vc1$ by Lemma \ref{lem1} and Proposition \ref{locextcrit}. We illustrate in Figure~\ref{fig1} the convergence of the sequences $\{\vc q^{(k)}\}$ and $\{\tilde{\vc q}^{(k)}\}$. On the left, we plot $q_1^{(k)}$ and $\tilde{q}_1^{(k)}$ for $k= 1$ to 20; the two sequences rapidly converge to a common value approximately equal to 0.89. On the right, we plot $q_i^{(20)}$ and $\tilde{q}_i^{(20)}$ for $i = 1$ to 20; we observe that the first 15 entries are well-approximated after 20 iterations but the next entries require more iterations because, for high values of $i$, the approximations $q_i^{(k)}$ and $\tilde{q}_i^{(k)}$ remain at their initial values ($0$ and $1$, respectively) until $k$ reaches $i$.
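The curves of Figure~\ref{fig1} are straightforward to reproduce. In the Python sketch below (our own code), the truncated generating functions of Section~\ref{sec:algos} are obtained by padding the argument with $0$ for $\vc q^{(k)}$ and with $1$ for $\tilde{\vc q}^{(k)}$:

```python
a, b, c = 0.5, 0.5, 1 / 3
t, u = 2, 3          # t = ceil(b+c)+1, u = ceil(a+b+c)+1

def P(i, s):
    """Progeny pgf of type i (1-based); s is a vector of length k+1 whose
    last entry is the padding value (0 or 1)."""
    get = lambda j: s[j - 1]
    if i == 1:
        return (b / t) * get(1) ** t + (c / t) * get(2) ** t + 1 - (b + c) / t
    return ((a / u) * get(i - 1) ** u + (b / u) * get(i) ** u
            + (c / u) * get(i + 1) ** u + 1 - (a + b + c) / u)

def ext_probs(k, pad, tol=1e-12, max_iter=100000):
    """q^{(k)} (pad=0) or tilde q^{(k)} (pad=1) by functional iteration."""
    w = [0.0] * k
    for _ in range(max_iter):
        w_new = [P(i, w + [pad]) for i in range(1, k + 1)]
        if max(abs(x - y) for x, y in zip(w, w_new)) < tol:
            break
        w = w_new
    return w_new

q20 = ext_probs(20, pad=0.0)
tq20 = ext_probs(20, pad=1.0)
print(q20[0], tq20[0])
```

At $k=20$, both values are close to the common limit of about 0.89 reported above.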
A sequence $\{x_k\}_{k\geq0}$ converges linearly to $x$ if there exists $0<\mu<1$ such that $ \lim_{k\rightarrow\infty} {|x-x_{k+1}|}/{|x-x_k|}=\mu$, and $\mu$ is called the convergence rate. Our numerical investigations indicate that the convergence of $q_i^{(k)}$ as well as that of $\tilde{q}_i^{(k)}$ is linear, for fixed $i$. We give one example in Figure~\ref{fig2}, where we plot the ratios $|q_1 - q_1^{(k+1)}|/|q_1 - q_1^{(k)}|$ and $|\tilde q_1 - \tilde q_1^{(k+1)}|/|\tilde q_1 - \tilde q_1^{(k)}|$; not knowing the value of either $q_1$ or $\tilde q_1$, we have used the values at the 20th iteration. The two sequences are seen to converge linearly at the same rate $\mu = 0.26$ approximately. \begin{figure}[!tbp] \centering \includegraphics[scale=0.3]{papbpinffig1.eps} \caption{Case 1. $a=b=1/2$ and $c=1/3$. Left: the values of $q_1^{(k)}$ (continuous line) and of $\tilde{q}_1^{(k)}$ (dashed line). Right: first entries of $\vc q^{(20)}$ (continuous line) and $\tilde{\vc q}^{(20)}$ (dashed line).} \label{fig1} \end{figure} \begin{figure}[!tbp] \centering \includegraphics[scale=0.3]{papbpinffig2.eps} \caption{Case 1. $a=b=1/2$ and $c=1/3$. Convergence rates of the sequences $q_1^{(k)}$ (continuous line) and $\tilde{q}_1^{(k)}$ (dashed line).} \label{fig2} \end{figure} \paragraph{Case 2: $\vc q = \tilde{\vc q} = \vc 1$.} The parameters here are $a=b=1/2$ and $c=1/25$. Here $a>c$ and, for any individual, the type of its descendants drifts over successive generations toward type 1, the least prolific of types. The convergence norm is $\nu=0.78<1$, which implies that $\tilde{\vc q}=\vc 1$. We shall conclude from Proposition~\ref{propertyM} and Proposition~\ref{NC} (with $\lambda=\nu$) that $\vc q = \vc 1$ as well, once we show that the dichotomy property holds.
The progeny generating function is given by \begin{align} \label{e:pi} P_i(\vc s) & = (1/4) s_i^2 + (1/50)s_{i+1}^2+(73/100) && \mbox{ for } i=1,\\ \label{e:pia} & = (1/6) s_{i-1}^3 +(1/6) s_i^3 + (1/75) s_{i+1}^3+(49/75) && \mbox{ for } i\geq 2. \end{align} To verify that the dichotomy property holds, we use the sufficient condition~(\ref{dich}). In view of (\ref{e:pi}, \ref{e:pia}), we observe that for all $i$, $\mathds{P}[|\mathcal{Z}_{2}| = 0 \; | \;\varphi_0=i,\,1 \leq |\mathcal{Z}_1| \leq k]\geq (\min(73/100,49/75))^k,$ and we conclude that \eqref{dich} is satisfied with $m_k=2$ and $d_k=(49/75)^k$. \begin{figure}[!tbp] \begin{center}\includegraphics[scale=0.3]{initialtype2}\end{center} \caption{Case 2: $a= b=1/2$, $c=1/25$. The first $50$ components of the initial type distribution vector ${\vc\alpha}_1$, for which $\lim_n \mathbb{E}[|\mathcal{Z}_n|]=0$, and of ${\vc\alpha}_2$, for which $\lim_n \mathbb{E}[|\mathcal{Z}_n|]=\infty$.} \label{init} \end{figure} To illustrate the observation made in Subsection~\ref{sec:initial} about the effects of the initial type's distribution, we take the parameters \[ \lambda_1=\nu=0.78<1 \qquad \mbox{and} \qquad \lambda_2=1.02<1.04 \;(=a+b+c). \] By Proposition~\ref{propertyM}, there exist $\vc\alpha_1>0$ and $\vc\alpha_2>0$ such that $\vc\alpha_1M=\lambda_1\vc\alpha_1$ and $\vc\alpha_1\vc 1=1$, and such that $\vc\alpha_2M=\lambda_2\vc\alpha_2$ and $\vc\alpha_2\vc 1=1$. If $\varphi_0$ has the distribution $\vc\alpha_1$, then $\lim_n \mathbb{E}[|\mathcal{Z}_n|]=0$, while if it has the distribution $\vc\alpha_2$, then $\lim_n \mathbb{E}[|\mathcal{Z}_n|]=\infty$. In both cases, extinction is with probability 1. We plot in Figure~\ref{init} the first 50 components of $\vc\alpha_1$ and $\vc\alpha_2$. The difference between the two is that the distribution $\vc\alpha_1$ is concentrated on small types, so that the process has less chance of building a high population before its eventual extinction. 
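The two initial-type distributions can be constructed and verified numerically from the three-term recurrence that $\vc\alpha M=\lambda\vc\alpha$ imposes. In the sketch below we take $\lambda_1 = b+2\sqrt{ac}$ (our reading of the rounded value $\nu=0.78$); the helper names are ours:

```python
# Construction of the two initial-type distributions for Case 2
# (a = b = 1/2, c = 1/25).  A left eigenvector alpha of M satisfies
#   (alpha M)_1 = b*al_1 + a*al_2                  = lambda*al_1,
#   (alpha M)_j = c*al_{j-1} + b*al_j + a*al_{j+1} = lambda*al_j,  j >= 2,
# which determines alpha up to scale.
from math import sqrt

a, b, c = 1/2, 1/2, 1/25

def left_eigvec(lam, K=400):
    al = [1.0, (lam - b) / a]            # first row of alpha*M = lam*alpha
    for _ in range(K - 2):
        al.append(((lam - b) * al[-1] - c * al[-2]) / a)
    return al

def residual(al, lam):
    """max |(alpha M)_j - lambda*alpha_j| over the rows that can be checked."""
    r = abs(b*al[0] + a*al[1] - lam*al[0])
    for j in range(1, len(al) - 1):
        r = max(r, abs(c*al[j-1] + b*al[j] + a*al[j+1] - lam*al[j]))
    return r

lam1 = b + 2*sqrt(a*c)                   # = 0.7828..., i.e. nu
lam2 = 1.02
alpha1 = left_eigvec(lam1)               # positive and summable
alpha2 = left_eigvec(lam2)               # positive and summable
```

After normalizing so that $\vc\alpha\vc 1=1$, one gets $\mathbb{E}[|\mathcal{Z}_n|]=\vc\alpha M^n\vc 1=\lambda^n$, which tends to 0 for $\lambda_1<1$ and to $\infty$ for $\lambda_2>1$.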
\paragraph{Case 3: $\vc q \leq \tilde{\vc q} = \vc 1$.} Take $a=1/25$, $b=c=1/2$. Here, $a<c$ and $\nu=0.78<1<a+b+c$; thus, $\vc q\leq\tilde{\vc q}=\vc 1$, but we do not yet know whether $\vc q =\vc 1$. We show on the left of Figure~\ref{fig4} the values of $q_1^{(k)}$ and $\tilde q_1^{(k)}$ for $k=1$ to 60. Judging from this, we conclude that $q_1 < 1 = \tilde q_1$. On the right of that figure, we give $q_i^{(60)}$ and $\tilde{q}_i^{(60)}$ for $i=1$ to 60. \begin{figure}[!tbp] \begin{center}\includegraphics[scale=0.3]{papbpinffig4}\end{center} \caption{Case 3: $a=1/25$, $b=c=1/2$. Left: the values of $q_1^{(k)}$ (continuous line) and of $\tilde{q}_1^{(k)}$ (dashed line). Right: first entries of $\vc q^{(60)}$ (continuous line) and $\tilde{\vc q}^{(60)}$ (dashed line).} \label{fig4} \end{figure} \begin{figure}[!tbp] \begin{center}\includegraphics[scale=0.3]{exslides4bis}\end{center} \caption{Case 3: $a=1/25$, $b=c=1/2$. Simulation of the evolution of the population size in the different types and of the total population size.} \label{fig5} \end{figure} To confirm the conclusion that $\vc q < \tilde{\vc q} = \vc 1$, we have simulated the branching process and we give one particular sample path on Figure~\ref{fig5}: the whole population $|\calz_n|$ seems to grow without bounds, while individual types appear, grow in importance, and eventually disappear from the population. \subsection{Reducible example} Consider the mean progeny matrix with the structure \begin{align*} M& = \left[\begin{array}{ccccc} b_1 & c_1 & & & \\ & b_2 & c_2 & & \\ & & b_3 & c_3 & \\ & & & \ddots & \ddots \end{array}\right], \end{align*} where $b_i\geq 0$ and $c_i > 0$ for all $i$. In this special case of a reducible mean progeny matrix, we may associate another interpretation to the sequence $\{\tilde{\vc q}^{(k)}\}$. Define the \emph{local extinction} of a specific type $k$ as the event $E_k = [\lim_{n\rightarrow\infty} Z_{nk}=0]$ that type $k$ eventually disappears, irrespective of what happens to the other types. 
A moment of reflection shows that $E_k\equiv \cap_{\ell\leq k} E_\ell$ and, furthermore, that $\tilde{q}^{(k)}_i$ is the probability that type $k$ eventually becomes extinct, given that the process starts with a first individual of type $i$. This allows us to give another proof that $\tilde{\vc q}^{(k)}\geq \tilde{\vc q}^{(k+1)}$ and that the sequence converges to $\tilde{\vc q}$: \begin{align*} \lim_{k\rightarrow\infty} \tilde{\vc q}^{(k)} & = \lim_{k\rightarrow\infty}\mathds{P}[E_k\,|\,\varphi_0] \ = \lim_{k\rightarrow\infty}\mathds{P}[\cap_{\ell\leq k} E_\ell\, |\,\varphi_0] \\ & = \mathds{P}[\cap_{\ell\geq 1} E_\ell\,|\,\varphi_0] \ =\tilde{\vc q}, \end{align*} where the third equality holds by continuity of probability along the decreasing sequence of events $\{\cap_{\ell\leq k} E_\ell\}$. In the reducible case, the equation $\vc s=\vc P(\vc s)$ may have more than two distinct solutions and, in particular, it is possible that \mbox{$\vc q<\tilde{\vc q}< \vc 1$}, as we show on one example. Take $b_i=0$ and $c_i=1.9$ for every $i$ except for $i = 10$, where $b_{10}=1.6$ and $c_{10}=0.8$. That is, individuals of type $i\neq10$ have only children of the next type, slightly less than two on average; type 10 is different. If it were not for type 10, the whole population would behave as a supercritical process in which each individual type disappears after one generation. Individuals of type 10, in addition, reproduce their own type in a supercritical fashion, since $b_{10}=1.6>1$. Assume that the progeny generating function is \begin{align*} P_i(\vc s) & = (19/30)s_{i+1}^3+(11/30) && \mbox{ for } i\neq 10,\\ & = (2/5) s_i^4 + (1/5) s_{i+1}^4+(2/5) && \mbox{ for } i=10. \end{align*} As the sequence $\{\spc(M^{(k)})\}$ converges to $\bar{\nu}=1.6>1$, we know by Proposition~\ref{locextcrit2} that $\tilde{\vc q} \lneqq\vc 1$. Furthermore, Proposition~\ref{locextcrit3} implies that $\tilde{q}_i<1$ for $1\leq i\leq 10$, and $\tilde{q}_i=1$ for $i\geq 11$. We show $\{q_8^{(k)}\}$ and $\{\tilde{q}_8^{(k)}\}$ on the left in Figure~\ref{fig6}; the plot strongly suggests that $q_8 < \tilde q_8 < 1$. 
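The claimed ordering $q_8 < \tilde q_8 < 1$ can be reproduced with a short fixed-point computation. As before, the truncation convention is our assumption for illustration, not necessarily the authors' exact construction:

```python
# Fixed-point estimates for the reducible example (b_i = 0, c_i = 1.9,
# except b_10 = 1.6, c_10 = 0.8).  Assumed truncation (ours): iterate
# s <- P(s) on types 1..K, padding s_{K+1} with 0 for the global
# extinction estimates and with 1 for the partial/local ones.
def apply_P(s, pad):
    ext = s + [pad]
    out = []
    for i in range(len(s)):
        if i == 9:                             # type 10 (0-based index 9)
            out.append((2/5)*ext[i]**4 + (1/5)*ext[i+1]**4 + 2/5)
        else:
            out.append((19/30)*ext[i+1]**3 + 11/30)
    return out

def fixed_point(K, pad, n_iter=500):
    s = [0.0] * K
    for _ in range(n_iter):
        s = apply_P(s, pad)
    return s

q  = fixed_point(30, 0.0)                      # estimates of q_i
tq = fixed_point(30, 1.0)                      # estimates of tilde q_i
# q[7] (= q_8) comes out close to 0.41, tq[7] (= tilde q_8) close to 0.49,
# and tq[i] = 1 for types i >= 11.
```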
On the right, we give the values of $q_i^{(30)}$ and $\tilde q_i^{(30)}$ for $1 \leq i \leq 30$. For $i \geq 11$, local extinction has probability 1 since every type exists for one generation only, and the global extinction probability, at least when $i$ is sufficiently smaller than 30, is close to 0.41, the extinction probability of a single-type branching process with progeny generating function \begin{align*} P(s) & = (19/30)s^3+(11/30). \end{align*} This indicates that when extinction happens in the single-type process, it does so within a few generations. \begin{figure} \begin{center}\includegraphics[scale=0.30]{exreduc1}\end{center} \caption{Left: the values of $q_8^{(k)}$ (continuous line) and of $\tilde{q}_8^{(k)}$ (dashed line). Right: first entries of $\vc q^{(30)}$ (continuous line) and $\tilde{\vc q}^{(30)}$ (dashed line).} \label{fig6} \end{figure} \acks All three authors thank the Minist\`ere de la Communaut\'e fran\c{c}aise de Belgique for funding this research through the ARC grant AUWB-08/13--ULB~5. The first and third authors also acknowledge the financial support of the Australian Research Council through the Discovery Grant DP110101663. \bibliographystyle{apt} \bibliography{BP_infinity_bib}
TITLE: How to guarantee that a real function is $C^\infty$ without using $f^{(n)}$ QUESTION [0 upvotes]: I know that the function $f(x)=x+\frac{1}{1+e^x}$ is $C^\infty$, but I want to prove it. Question: Is there something I could use to guarantee this without actually calculating $f^{(n)}$? REPLY [2 votes]: It is the sum of a trivially $\mathcal C^\infty$ function and the reciprocal of another $\mathcal C^\infty$ function which never vanishes. The rules of computation for the derivative of a reciprocal show that this reciprocal is also $\mathcal C^\infty$.
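One way to make that reciprocal argument fully explicit, added here as a sketch (not part of the original answer): write $g(x) = \frac{1}{1+e^x}$, and note that the derivative closes over $g$ itself,

```latex
g'(x) \;=\; -\frac{e^x}{(1+e^x)^2} \;=\; g(x)^2 - g(x),
```

so an easy induction shows that every derivative $g^{(n)}$ is a polynomial in $g$; hence $g$, and therefore $f(x) = x + g(x)$, is $C^\infty$ without ever computing $f^{(n)}$ in closed form.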
\begin{document} \newtheorem{theorem}{Theorem} \newtheorem{lemma}{Lemma} \newtheorem{corollary}{Corollary} \newenvironment{proof}{{\bf Proof}.}{\hspace{3mm}\rule{3mm}{3mm}} \newenvironment{mainproof}{{\bf Proof of Theorem 7}.}{\hspace{3mm}\rule{3mm}{3mm}} \newenvironment{cor1proof}{{\bf Proof of Corollary 1}.}{\hspace{3mm}\rule{3mm}{3mm}} \newenvironment{cor2proof}{{\bf Proof of Corollary 2}.}{\hspace{3mm}\rule{3mm}{3mm}} \newenvironment{cor3proof}{{\bf Proof of Corollary 3}.}{\hspace{3mm}\rule{3mm}{3mm}} \newenvironment{subproof}{{\bf Proof of Claim}.}{\hspace{3mm}\rule{3mm}{3mm}} \maketitle\thispagestyle{empty} \begin{abstract} Let $D$ be a directed graph with vertex set $V$ and order $n$. An {\em anti-directed (hamiltonian) cycle} $H$ in $D$ is a (hamiltonian) cycle in the graph underlying $D$ such that no pair of consecutive arcs in $H$ form a directed path in $D$. An {\em anti-directed 2-factor} in $D$ is a vertex-disjoint collection of anti-directed cycles in $D$ that span $V$. It was proved in \cite{BJMPT} that if the indegree and the outdegree of each vertex of $D$ is greater than $\frac{9}{16}n $ then $D$ contains an anti-directed hamiltonian cycle. In this paper we prove that given a directed graph $D$, the problem of determining whether $D$ has an anti-directed 2-factor is NP-complete, and we use a proof technique similar to the one used in \cite{BJMPT} to prove that if the indegree and the outdegree of each vertex of $D$ is greater than $\frac{25}{48}n $ then $D$ contains an anti-directed 2-factor. \end{abstract} \section{Introduction} Let $G$ be a multigraph with vertex set $V(G)$ and edge set $E(G)$. For a vertex $v \in V(G)$, the degree of $v$ in $G$, denoted by ${\rm deg}(v,G)$ is the number of edges of $G$ incident on $v$. Let $\delta(G) = {\rm min}_{v \in V(G)}\{{\rm deg}(v,G)\}$. The simple graph underlying $G$ denoted by simp($G$) is the graph obtained from $G$ by replacing all multiple edges by single edges. 
A {\em 2-factor} in $G$ is a collection of vertex-disjoint cycles that span $V(G)$. Let $D$ be a directed graph with vertex set $V(D)$ and arc set $A(D)$. For a vertex $v \in V(D)$, the {\em outdegree} (respectively, {\em indegree}) of $v$ in $D$ denoted by $d^{+}(v,D)$ (respectively, $d^{-}(v,D)$) is the number of arcs of $D$ directed out of $v$ (respectively, directed into $v$). Let $\delta(D) = {\rm min}_{v \in V(D)}\{{\rm min}\{d^{+}(v,D), d^{-}(v,D)\}\}$. The {\em multigraph underlying $D$} is the multigraph obtained from $D$ by ignoring the directions of the arcs of $D$. A {\em directed (Hamilton) cycle} $C$ in $D$ is a (Hamilton) cycle in the multigraph underlying $D$ such that all pairs of consecutive arcs in $C$ form a directed path in $D$. An {\em anti-directed (Hamilton) cycle} $C$ in $D$ is a (Hamilton) cycle in the multigraph underlying $D$ such that no pair of consecutive arcs in $C$ form a directed path in $D$. A {\em directed 2-factor} in $D$ is a collection of vertex-disjoint directed cycles in $D$ that span $V(D)$. An {\em anti-directed 2-factor} in $D$ is a collection of vertex-disjoint anti-directed cycles in $D$ that span $V(D)$. Note that every anti-directed cycle in $D$ must have an even number of vertices. We refer the reader to ([1,7]) for all terminology and notation that is not defined in this paper. The following classical theorems by Dirac \cite{Dirac} and Ghouila-Houri \cite{GH} give sufficient conditions for the existence of a Hamilton cycle in a graph $G$ and for the existence of a directed Hamilton cycle in a directed graph $D$ respectively. \begin{theorem}{\rm \cite{Dirac}} If $G$ is a graph of order $n \geq 3$ and $\delta(G) \geq \frac{n}{2}$, then $G$ contains a Hamilton cycle. \end{theorem} \begin{theorem}{\rm \cite{GH}} If $D$ is a directed graph of order $n$ and $\delta(D) \geq \frac{n}{2}$, then $D$ contains a directed Hamilton cycle. 
\end{theorem} Note that if $D$ is a directed graph of even order $n$ and $\delta(D) \geq \frac{3}{4}n$ then $D$ contains an anti-directed Hamilton cycle. To see this, let $G$ be the multigraph underlying $D$ and let $G'$ be the subgraph of $G$ consisting of the parallel edges of $G$. Now, $\delta(D) \geq \frac{3}{4}n$ implies that $\delta({\rm simp}(G')) \geq \frac{n}{2}$ and hence Theorem 1 implies that simp($G'$) contains a Hamilton cycle which in turn implies that $D$ contains an anti-directed Hamilton cycle. The following theorem by Grant \cite{Grant} gives a sufficient condition for the existence of an anti-directed Hamilton cycle in a directed graph $D$. \begin{theorem} {\rm \cite{Grant}} If $D$ is a directed graph with even order $n$ and if $\delta(D) \geq \frac{2}{3}n + \sqrt{n{\rm log}(n)}$ then $D$ contains an anti-directed Hamilton cycle. \end{theorem} In his paper Grant \cite{Grant} conjectured that the theorem above can be strengthened to assert that if $D$ is a directed graph with even order $n$ and if $\delta(D) \geq \frac{1}{2}n$ then $D$ contains an anti-directed Hamilton cycle. Mao-cheng Cai \cite{Mao} gave a counter-example to this conjecture. \noindent In \cite{BJMPT} the following sufficient condition for the existence of an anti-directed Hamilton cycle in a directed graph was proved. \begin{theorem} {\rm \cite{BJMPT}} Let $D$ be a directed graph of even order $n$ and suppose that $\frac{1}{2} < p < \frac{3}{4}$. If $\delta(D) \geq pn$ and $n > \frac{{\rm ln}(4)}{\left(p - \frac{1}{2}\right){\rm ln}\left(\frac{p + \frac{1}{2}}{\frac{3}{2} - p}\right)}$, then $D$ contains an anti-directed Hamilton cycle. \end{theorem} \noindent It was shown in \cite{BJMPT} that Theorem 4 implies the following corollary that is an improvement on the result in Theorem 3. \noindent \begin{corollary} {\rm \cite{BJMPT}} If $D$ is a directed graph of even order $n$ and $\delta(D) > \frac{9}{16}n $ then $D$ contains an anti-directed Hamilton cycle. 
\end{corollary} \noindent The following theorem (see \cite{digraphsbook}) gives a necessary and sufficient condition for the existence of a directed 2-factor in a digraph $D$. \begin{theorem} A directed graph $D = (V,A)$ has a directed 2-factor if and only if $|\bigcup_{v \in X}N^{+}(v)| \geq |X|$ for all $X \subseteq V$. \end{theorem} \noindent We note here that given a directed graph $D$ the problem of determining whether $D$ has a directed Hamilton cycle is known to be NP-complete, whereas there exists an O$(\sqrt{n}m)$ algorithm (see \cite{digraphsbook}) to check if a directed graph $D$ of order $n$ and size $m$ has a directed 2-factor. On the other hand, the following theorem proves that given a directed graph $D$, the problem of determining whether $D$ has an anti-directed 2-factor is NP-complete. We are indebted to Sundar Vishwanath for pointing out the short proof of Theorem 6 given below. \noindent \begin{theorem} {\rm \cite{Sundar}} Given a directed graph $D$, the problem of determining whether $D$ has an anti-directed 2-factor is NP-complete. \end{theorem} \begin{proof} Clearly the problem of determining whether $D$ has an anti-directed 2-factor is in NP. A graph $G$ is said to be $k$-edge colorable if the edges of $G$ can be colored with $k$ colors in such a way that no two adjacent edges receive the same color. It is well known that given a cubic graph $G$, it is NP-complete to determine if $G$ is 3-edge colorable. Now, given a cubic graph $G = (V,E)$, construct a directed graph $D = (V,A)$, where for each $\{u,v\}$ $\in$ $E$, we have the oppositely directed arcs $(u,v)$ and $(v,u)$ in $A$. It is clear that $G$ is 3-edge colorable if and only if $D$ contains an anti-directed 2-factor. This proves that the problem of determining whether a directed graph $D$ has an anti-directed 2-factor is NP-complete. 
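The correspondence behind this reduction — a cubic graph $G$ is 3-edge colorable if and only if $G$ has a 2-factor all of whose cycles are even, which in the bidirected $D$ is exactly an anti-directed 2-factor — can be sanity-checked by brute force on small graphs. A sketch (the graph encodings and helper names are ours):

```python
def perfect_matchings(vertices, edges):
    """All perfect matchings of a graph, by recursion on the first free vertex."""
    if not vertices:
        yield []
        return
    v = vertices[0]
    for e in edges:
        if v in e:
            u = e[0] if e[1] == v else e[1]
            rest_v = [w for w in vertices if w not in e]
            rest_e = [f for f in edges if v not in f and u not in f]
            for m in perfect_matchings(rest_v, rest_e):
                yield [e] + m

def cycle_lengths(vertices, edges):
    """Cycle lengths of a 2-regular graph given by its edge list."""
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v); adj[v].append(u)
    seen, lengths = set(), []
    for start in vertices:
        if start in seen:
            continue
        length, prev, cur = 0, None, start
        while True:
            seen.add(cur); length += 1
            nxt = adj[cur][0] if adj[cur][0] != prev else adj[cur][1]
            prev, cur = cur, nxt
            if cur == start:
                break
        lengths.append(length)
    return lengths

def has_even_2factor(vertices, edges):
    """Cubic graph: is some perfect-matching complement made of even cycles only?"""
    for m in perfect_matchings(list(vertices), edges):
        rest = [e for e in edges if e not in m]
        if all(l % 2 == 0 for l in cycle_lengths(vertices, rest)):
            return True
    return False

def three_edge_colorable(vertices, edges):
    """Backtracking search for a proper 3-edge-coloring."""
    colors = {}
    def ok(e, col):
        return all(colors.get(f) != col for f in edges
                   if f != e and (set(e) & set(f)) and f in colors)
    def solve(i):
        if i == len(edges):
            return True
        for col in (0, 1, 2):
            if ok(edges[i], col):
                colors[edges[i]] = col
                if solve(i + 1):
                    return True
                del colors[edges[i]]
        return False
    return solve(0)

K4 = (range(4), [(0,1),(0,2),(0,3),(1,2),(1,3),(2,3)])
petersen = (range(10),
            [(0,1),(1,2),(2,3),(3,4),(4,0),          # outer 5-cycle
             (5,7),(7,9),(9,6),(6,8),(8,5),          # inner pentagram
             (0,5),(1,6),(2,7),(3,8),(4,9)])         # spokes
```

$K_4$ passes both tests, while the Petersen graph, whose every 2-factor consists of two 5-cycles, fails both — in agreement with the equivalence used in the proof.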
\end{proof} \\ \\ \noindent In Section 2 of this paper we prove the following theorem that gives a sufficient condition for the existence of an anti-directed 2-factor in a directed graph. \begin{theorem} Let $D$ be a directed graph of even order $n$ and suppose that $\frac{1}{2} < p < \frac{3}{4}$. If $\delta(D) \geq pn$ and $n > \frac{{\rm ln}(4)}{\left(p - \frac{1}{2}\right){\rm ln}\left(\frac{p + \frac{1}{2}}{\frac{3}{2} - p}\right)} (??)$, then $D$ contains an anti-directed 2-factor. \end{theorem} \noindent In Section 2 we will show that Theorem 7 implies the following corollary. \noindent \begin{corollary} If $D$ is a directed graph of even order $n$ and $\delta(D) > \frac{25}{48}n$ then $D$ contains an anti-directed 2-factor. \end{corollary} \section{Proof of Theorem 7 and its Corollary} A partition of a set $S$ with $|S|$ being even into $S = X \cup Y$ is an {\em equipartition} of $S$ if $|X| = |Y| = \frac{|S|}{2}$. The proof of Theorem 4 mentioned in the introduction made extensive use of the following theorem by Chv\'atal \cite{Chvatal}. \begin{theorem}{\rm \cite{Chvatal}} Let $G$ be a bipartite graph of even order $n$ and with equipartition $V(G) = X \cup Y$. Let $(d_{1},d_{2},\ldots,d_{n})$ be the degree sequence of $G$ with $d_{1} \leq d_{2}\leq \ldots \leq d_{n}$. If $G$ does not contain a Hamilton cycle, then for some $i \leq \frac{n}{4}$ we have that $d_{i} \leq i$ and $d_{\frac{n}{2}} \leq \frac{n}{2} - i$. \end{theorem} We prepare for the proof of Theorem 7 by proving Theorems 10 and 11 which give necessary degree conditions (similar to those in Theorem 8) for the non-existence of a 2-factor in a bipartite graph $G$ of even order $n$ with equipartition $V(G) = X \cup Y$. \\ \noindent Let $G = (V,E)$ be a bipartite graph of even order $n$ and with equipartition $V(G) = X \cup Y$. 
For $U \subseteq X$ (respectively $U \subseteq Y$) define $N^{(2)}(U)$ as being the multiset of vertices $v \in Y$ (respectively $v \in X$) such that $(u,v) \in E$ for some $u \in U$ and with $v$ appearing twice in $N^{(2)}(U)$ if there are two or more vertices $u \in U$ with $(u,v) \in E$ and $v$ appearing once in $N^{(2)}(U)$ if there is exactly one $u \in U$ with $(u,v) \in E$. We will use the following theorem by Ore \cite{Ore} that gives a necessary and sufficient condition for the non-existence of a 2-factor in a bipartite graph of even order $n$ with equipartition $V(G) = X \cup Y$. \begin{theorem} Let $G = (V,E)$ be a bipartite graph of even order $n$ and with equipartition $V(G) = X \cup Y$. $G$ contains no 2-factor if and only if there exists some $U \subseteq X$ such that $|N^{(2)}(U)| < 2|U|$. \end{theorem} For a bipartite graph $G = (V,E)$ of even order $n$ and with equipartition $V(G) = X \cup Y$, a set $U \subseteq X$ or $U \subseteq Y$ is defined to be a {\em deficient} set of vertices in $G$ if $|N^{(2)}(U)| < 2|U|$. \\ \\ \noindent We now prove four Lemmas that will be used in the proof of Theorems 10 and 11. \begin{lemma} Let $G$ be a bipartite graph of even order $n$ and with equipartition $V(G) = X \cup Y$. If $U$ is a minimal deficient set of vertices in $G$ then $2|U| - 2 \leq |N^{(2)}(U)|$. \end{lemma} \begin{proof} Clear by the minimality of $U$. \end{proof} \begin{lemma} Let $G$ be a bipartite graph of even order $n$ and with equipartition $V(G) = X \cup Y$, and let $U$ be a minimal deficient set of vertices in $G$. Let $M \subseteq N(U)$ be the set of vertices in $N(U)$ that are adjacent to exactly one vertex in $U$. Then, no vertex of $U$ is adjacent to more than one vertex of $M$. \end{lemma} \begin{proof} If a vertex $u \in U$ is adjacent to two vertices of $M$, since $U$ is a deficient set of vertices in $G$, we have that $|N^{(2)}(U - u)| \leq |N^{(2)}(U)| - 2 < 2|U| - 2 = 2|U - u|$. 
This implies that $U - u$ is a deficient set of vertices in $G$, which in turn contradicts the minimality of $U$. \end{proof} \begin{lemma} Let $G$ be a bipartite graph of even order $n$ and with equipartition $V(G) = X \cup Y$, and suppose that $G$ does not contain a 2-factor. If $U$ is a minimal deficient set in $G$ with $|U| = k$, then ${\rm deg}(u) \leq k$ for each $u \in U$ and $|\{u \in U: {\rm deg}(u) \leq k - 1\}| \geq k -1$. \end{lemma} \begin{proof} Suppose that ${\rm deg}(u) \geq k + 1$ for some $u \in U$ and let $M \subseteq N(U)$ be the set of vertices in $N(U)$ that are adjacent to exactly one vertex in $U$. Then Lemma 2 implies that $u$ is adjacent to at most one vertex in $M$ which implies that $u$ is adjacent to at least $k$ vertices in $N(U) - M$. This implies that $|N^{(2)}(U)| \geq 2k$, which contradicts the assumption that $U$ is a deficient set. This proves that ${\rm deg}(u) \leq k$ for each $u \in U$. If two vertices in $U$ have degree $k$ then similarly Lemma 2 implies that $|N^{(2)}(U)| \geq 2k$, which contradicts the assumption that $U$ is a deficient set. This proves the second part of the Lemma. \end{proof} \begin{lemma} Let $G = (V,E)$ be a bipartite graph of even order $n$ and with equipartition $V(G) = X \cup Y$ and suppose that $U \subseteq X$ is a minimal deficient set in $G$. Let $Y_{0} = \{v \in Y: v \not\in N(U)\}$, $Y_{1} = \{v \in Y: |U \cap N(v)| = 1\}$, and $Y_{2} = \{v \in Y: |U \cap N(v)| \geq 2\}$. Let $U^{*} = Y_{0} \cup Y_{1}$. Then $U^{*}$ is a deficient set in $G$. \end{lemma} \begin{proof} Let $X_{0} = X - U, X_{1} = \{u \in U: (u,v) \in E \ {\rm for\ some\ } v \in Y_{1}\}$, and $X_{2} = U - X_{1}$. Note that $|X| = |Y|$ implies that $|X_{0}| + |X_{1}| + |X_{2}| = |Y_{0}| + |Y_{1}| + |Y_{2}|$. Now, since by Lemma 2 we have that $|X_{1}| = |Y_{1}|$, this implies that $|X_{0}| + |X_{2}| = |Y_{0}| + |Y_{2}|$. Since $U$ is a deficient set we have that $|N^{(2)}(U)| = |Y_{1}| + 2|Y_{2}| < 2|U| = 2(|X_{1}| + |X_{2}|)$. 
Hence, $|Y_{1}| + 2(|X_{0}| + |X_{2}| - |Y_{0}|) < 2(|X_{1}| + |X_{2}|)$, which in turn implies that $2|X_{0}| + |X_{1}| < 2(|Y_{0}| + |Y_{1}|)$. This proves that $U^{*}$ is a deficient set in $G$. \end{proof} \\ \\ \noindent We are now ready to prove two theorems which give necessary degree conditions (similar to those in Theorem 8) for the non-existence of a 2-factor in a bipartite graph $G$ of even order $n$ with equipartition $V(G) = X \cup Y$. \begin{theorem} Let $G$ be a bipartite graph of even order $n = 4s \geq 12$ and with equipartition $V(G) = X \cup Y$. Let $(d_{1},d_{2},\ldots,d_{n})$ be the degree sequence of $G$ with $d_{1} \leq d_{2}\leq \ldots \leq d_{n}$. If $G$ does not contain a 2-factor, then either \begin{itemize} \item[(1)] for some $k \leq \frac{n}{4}$ we have that $d_{k} \leq k$ and $d_{k-1} \leq k - 1$, or, \item[(2)] $d_{\frac{n}{4} - 1} \leq \frac{n}{4} - 1$. \end{itemize} \end{theorem} \begin{proof} We will prove that for some $k \leq \frac{n}{4}$, $G$ contains $k$ vertices with degree at most $k$, and that of these $k$ vertices, $(k-1)$ vertices have degree at most $(k-1)$, or, that $G$ contains at least $\frac{n}{4} - 1$ vertices of degree at most $\frac{n}{4} - 1$. \\ \noindent Since $G$ does not contain a 2-factor, Theorem 9 implies that $G$ contains a deficient set of vertices. Let $U \subseteq X$ be a minimal deficient set of vertices in $G$. If $|U| \leq \frac{n}{4}$, then Lemma 3 implies that statement (1) is true and the result holds. \\ \noindent Now suppose that $|U| > \frac{n}{4}$. As in the statement of Lemma 4, let $Y_{0} = \{v \in Y: v \not\in N(U)\}$, $Y_{1} = \{v \in Y: |U \cap N(v)| = 1\}$, and $Y_{2} = \{v \in Y: |U \cap N(v)| \geq 2\}$. Let $U^{*} = Y_{0} \cup Y_{1}$. Then Lemma 4 implies that $U^{*}$ is a deficient set in $G$. If $|U^{*}| \leq \frac{n}{4}$ then again statement (1) is true and the result holds. 
\\ \noindent Now suppose that $|U^{*}| > \frac{n}{4}$, and as in the proof of Lemma 4, let $X_{0} = X - U, X_{1} = \{u \in U: (u,v) \in E \ {\rm for\ some\ } v \in Y_{1}\}$, and $X_{2} = U - X_{1}$. We have that ${\rm deg}(u) \leq 1 + |Y_{2}|$ for each $u \in U$, and hence we may assume that $|Y_{2}| \geq \frac{n}{4} - 1$, else the result holds. Similarly, since ${\rm deg}(u) \leq 1 + |X_{0}|$ for each $u \in U^{*}$, we may assume that $|X_{0}| \geq \frac{n}{4} - 1$. Note that $|U| > \frac{n}{4}$ and $|X_{0}| \geq \frac{n}{4} - 1$ implies that $|U| = \frac{n}{4} + 1$, and that $|U^{*}| > \frac{n}{4}$ and $|Y_{2}| \geq \frac{n}{4} - 1$ implies that $|U^{*}| = \frac{n}{4} + 1$. Now, since $U$ is a minimal deficient set of vertices in $G$, Lemma 1 implies that $|X_{1}| = 2$ or $|X_{1}| = 3$. If $|X_{1}| = 2$ then at least $\frac{n}{4} - 1$ of the vertices in $U$ must have degree at most $\frac{n}{4} - 1$, and statement (2) of the theorem is true. Finally, if $|X_{1}| = 3$ then at least $\frac{n}{2} - 4$ (and hence at least $\frac{n}{4} - 1$ because $n \geq 12$) of the vertices in each of $U$ and $U^{*}$ must have degree at most $\frac{n}{4} - 1$, and statement (2) of the theorem is true. \end{proof} \begin{theorem} Let $G$ be a bipartite graph of even order $n = 4s + 2 \geq 14$ and with equipartition $V(G) = X \cup Y$. Let $(d_{1},d_{2},\ldots,d_{n})$ be the degree sequence of $G$ with $d_{1} \leq d_{2}\leq \ldots \leq d_{n}$. If $G$ does not contain a 2-factor, then either \begin{itemize} \item[(1)] for some $k \leq \frac{(n - 2)}{4}$ we have that $d_{k} \leq k$ and $d_{k-1} \leq k - 1$, or, \item[(2)] $d_\frac{(n-2)}{2} \leq \frac{(n-2)}{4}$. \end{itemize} \end{theorem} \begin{proof} We will prove that for some $k \leq \frac{(n-2)}{4}$, $G$ contains $k$ vertices with degree at most $k$, and that of these $k$ vertices, $(k-1)$ vertices have degree at most $(k-1)$, or, that $G$ contains at least $\frac{(n-2)}{2}$ vertices of degree at most $\frac{(n-2)}{4}$. 
\\ \noindent Since $G$ does not contain a 2-factor, Theorem 9 implies that $G$ contains a deficient set of vertices. Without loss of generality let $U \subseteq X$ be a minimum cardinality deficient set of vertices in $G$. If $|U| \leq \frac{(n - 2)}{4}$, then Lemma 3 implies that statement (1) is true and the result holds. \\ \noindent Now suppose that $|U| > \frac{(n - 2)}{4}$. As in the statement of Lemma 4, let $Y_{0} = \{v \in Y: v \not\in N(U)\}$, $Y_{1} = \{v \in Y: |U \cap N(v)| = 1\}$, and $Y_{2} = \{v \in Y: |U \cap N(v)| \geq 2\}$. Let $U^{*} = Y_{0} \cup Y_{1}$. Then Lemma 4 implies that $U^{*}$ is a deficient set in $G$. Since $U$ is a minimum cardinality deficient set of vertices in $G$, we have that $|U^{*}| \geq |U| > \frac{(n - 2)}{4}$. \\ \noindent Now, as in the proof of Lemma 4, let $X_{0} = X - U, X_{1} = \{u \in U: (u,v) \in E \ {\rm for\ some\ } v \in Y_{1}\}$, and $X_{2} = U - X_{1}$. We have that ${\rm deg}(u) \leq 1 + |Y_{2}|$ for each $u \in U$, and hence we may assume that $|Y_{2}| \geq \frac{(n - 2)}{4} - 1$, else the result holds. Similarly, since ${\rm deg}(u) \leq 1 + |X_{0}|$ for each $u \in U^{*}$, we may assume that $|X_{0}| \geq \frac{(n - 2)}{4} - 1$. Note that $|U| > \frac{(n - 2)}{4}$ and $|X_{0}| \geq \frac{(n - 2)}{4} - 1$ implies that $\frac{(n - 2)}{4} + 1 \leq |U| \leq \frac{(n - 2)}{4} + 2$. We now examine the two cases: $|U| = \frac{(n - 2)}{4} + 1$ and $|U| = \frac{(n - 2)}{4} + 2$. \begin{itemize} \item[(1)] $|U| = \frac{(n - 2)}{4} + 1$. In this case we must have that $|X_{0}| = \frac{(n - 2)}{4}$. Note that $|X_{1}| \leq 3$ because if $|X_{1}| \geq 4$ then since $U$ is a minimal deficient set of vertices, we would have that $|Y_{2}| \leq \frac{(n - 2)}{4} - 2$, a contradiction to the assumption at this point that $|Y_{2}| \geq \frac{(n - 2)}{4} - 1$. We now examine the following four subcases separately. \begin{itemize} \item[(1)a] $|X_{1}| = 0$. 
In this case we have that $|Y_{1}| = 0$ and $|X_{2}| = \frac{(n - 2)}{4} + 1$. Since $U$ is a minimal deficient set of vertices, Lemma 1 implies that $|Y_{2}| = \frac{(n - 2)}{4}$ and $|Y_{0}| = \frac{(n - 2)}{4} + 1$. Thus, $X_{2} \cup Y_{0}$ is a set of $\frac{n}{2} + 1$ vertices of degree at most $\frac{(n - 2)}{4}$ which meets the requirement of the theorem. \item[(1)b] $|X_{1}| = 1$. In this case we have that $|Y_{1}| = 1$ and $|X_{2}| = \frac{(n - 2)}{4}$. Since $U$ is a minimal deficient set of vertices, Lemma 1 implies that $|Y_{2}| = \frac{(n - 2)}{4}$ and $|Y_{0}| = \frac{(n - 2)}{4}$. Thus, $X_{2} \cup Y_{0}$ is a set of $\frac{n}{2} - 1$ vertices of degree at most $\frac{(n - 2)}{4}$ each as required by the theorem. \item[(1)c] $|X_{1}| = 2$. In this case we have that $|Y_{1}| = 2$ and $|X_{2}| = \frac{(n - 2)}{4} - 1$. Since $U$ is a minimal deficient set of vertices, Lemma 1 implies that $|Y_{2}| = \frac{(n - 2)}{4} - 1$ and $|Y_{0}| = \frac{(n - 2)}{4}$. Thus, $X_{2} \cup X_{1} \cup Y_{0}$ is a set of $\frac{n}{2}$ vertices of degree at most $\frac{(n - 2)}{4}$ which meets the requirement of the theorem. \item[(1)d] $|X_{1}| = 3$. In this case we have that $|Y_{1}| = 3$ and $|X_{2}| = \frac{(n - 2)}{4} - 2$. Since $U$ is a minimal deficient set of vertices, Lemma 1 implies that $|Y_{2}| = \frac{(n - 2)}{4} - 1$ and $|Y_{0}| = \frac{(n - 2)}{4} - 1$. Thus, $X_{2} \cup X_{1} \cup Y_{0}$ is a set of $\frac{n}{2} - 1$ vertices of degree at most $\frac{(n - 2)}{4}$ as required by the theorem. \end{itemize} \item[(2)] $|U| = \frac{(n - 2)}{4} + 2$. In this case we have that $|X_{0}| = \frac{(n - 2)}{4} - 1$. Since $U$ is a minimum cardinality deficient set of vertices, we also have that $|U^{*}| = |U| = \frac{(n - 2)}{4} + 2$. Hence we now have that $|Y_{2}| = |X_{0}| = \frac{(n - 2)}{4} - 1$. Thus, $U \cup U^{*}$ is a set of $\frac{n}{2} + 3$ vertices of degree at most $\frac{(n - 2)}{4}$ which meets the requirement of the theorem. 
\end{itemize} \end{proof} \begin{lemma} Let $x, y, r$ be positive numbers such that $x \geq y$ and $r < y$. Then $\frac{(x+r)(x-r)}{(y+r)(y-r)} \geq {(\frac{x}{y})}^{2}$. \end{lemma} \begin{proof} $y^{2}(x^{2} - r^{2}) \geq (y^{2} - r^{2})x^{2}$, so the result follows. \end{proof} \\ \\ \noindent \begin{mainproof} For an equipartition of $V(D)$ into $V(D) = X \cup Y$, let $B(X \rightarrow Y)$ be the bipartite directed graph with vertex set $V(D)$, equipartition $V(D) = X \cup Y$, and with $(x,y) \in A(B(X \rightarrow Y))$ if and only if $x \in X$, $y \in Y$, and, $(x,y) \in A(D)$. Let $B(X,Y)$ denote the bipartite graph underlying $B(X \rightarrow Y)$. It is clear that $B(X,Y)$ contains a Hamilton cycle if and only if $B(X \rightarrow Y)$ contains an anti-directed Hamilton cycle. We will prove that there exists an equipartition of $V(D)$ into $V(D) = X \cup Y$ such that $B(X,Y)$ contains a Hamilton cycle. In the argument below, we make the simplifying assumption that $d^{+}(v) = d^{-}(v) = \delta(D)$ for each $v \in V(D)$. It is straightforward (see the remark at the end of the proof) to see that the argument extends to the case in which some indegrees or outdegrees are greater than $\delta(D)$. \\ \noindent Let $v \in V(D)$. Let $n_{k}$ denote the number of equipartitions of $V(D)$ into $V(D) = X \cup Y$ for which ${\rm deg}(v,B(X,Y)) = k$. Since $v \in X$ or $v \in Y$ and since $d^{+}(v) = d^{-}(v) = \delta(D)$, we have that $n_{k} = 2{\delta \choose k}{n - \delta - 1 \choose \frac{n}{2} - k}$. Note that if $k > \frac{n}{2}$ or if $k < \delta - \frac{n}{2} +1$ then $n_{k} = 0$. Thus the total number of equipartitions of $V(D)$ into $V(D) = X \cup Y$ is \begin{equation} T = \sum_{k = \delta - \frac{n}{2} +1}^{\frac{n}{2}} n_{k} = \sum_{k = \delta - \frac{n}{2} +1}^{\frac{n}{2}}2{\delta \choose k}{n - \delta - 1 \choose \frac{n}{2} - k} = {n \choose \frac{n}{2}}. \end{equation} Denote by $N = {n \choose \frac{n}{2}}$ the total number of equipartitions of $V(D)$. 
For a particular equipartition of $V(D)$ into $V(D) = X_{i} \cup Y_{i}$, let $(d_{1}^{(i)},d_{2}^{(i)},\ldots,d_{n}^{(i)})$ be the degree sequence of $B(X_{i},Y_{i})$ with $d_{1}^{(i)} \leq d_{2}^{(i)}\leq \ldots \leq d_{n}^{(i)}$, $i = 1,2,\ldots, N$, and, let $P_{i} = \{j: d_{j}^{i} \leq \frac{n}{4}\}$. If $B(X_{i},Y_{i})$ does not contain a Hamilton cycle then Theorem 8 implies that there exists $k \leq \frac{n}{4}$ such that $d_{k}^{i} \leq k$ and hence, $|\{d_{j}^{i} : d_{j}^{i} \leq k, j = 1,2,\ldots,n\}| \geq k$. This in turn implies that $\sum_{j\in P_{i}} \frac{1}{d_{j}^{i}} \geq 1$. Hence, the number of equipartitions of $V(D)$ into $V(D) = X \cup Y$ for which $B(X,Y)$ does not contain a Hamilton cycle is at most \begin{equation} S = n\left(\frac{n_{2}}{2} + \frac{n_{3}}{3} + \ldots + \frac{n_{\lfloor \frac{n}{4}\rfloor}}{\lfloor\frac{n}{4}\rfloor}\right) \end{equation} Thus, to show that there exists an equipartition of $V(D)$ into $V(D) = X \cup Y$ such that $B(X,Y)$ contains a Hamilton cycle, it suffices to show that $T > S$, i.e., \begin{equation} \sum_{k = \delta - \frac{n}{2} +1}^{\frac{n}{2}}2{\delta \choose k}{n - \delta - 1 \choose \frac{n}{2} - k} > n \sum_{k = 2}^{\lfloor \frac{n}{4}\rfloor} \frac{2{\delta \choose k}{n - \delta - 1 \choose \frac{n}{2} - k}}{k} \end{equation} We break the proof of (3) into three cases. \\ \noindent {\bf Case 1}: $n = 4m$ and $\delta = 2d$ for some positive integers $m$ and $d$. \newline For $i = 0,1,\ldots,\frac{n}{4} - 2$, let $A_{i} = n_{(d + i)} = 2{\delta \choose d + i}{n - \delta - 1 \choose 2m - d - i}$, and let $B_{i} = n_{(\frac{n}{4}-i)} = 2{\delta \choose m - i}{n - \delta - 1 \choose m + i}$. Clearly, (3) is satisfied if we can show that \begin{equation} A_{i} > \frac{nB_{i}}{\frac{n}{4} - i},\ {\rm for\ each}\ i = 0,1,\ldots,\frac{n}{4} - 2. \end{equation} We prove (4) by recursion on $i$. We first show that $A_{0} > \frac{nB_{0}}{\frac{n}{4}}$, i.e. 
$n_{\frac{\delta}{2}} > n\left(\frac{n_{\frac{n}{4}}}{\frac{n}{4}}\right) = 4n_{\frac{n}{4}}$. Let $\delta = \frac{n}{2} + s$. We have that \begin{eqnarray} \frac{A_{0}}{B_{0}}& = &\frac{(\frac{n}{4})!(\delta - \frac{n}{4})!(\frac{n}{4})!(\frac{3n}{4} - \delta - 1)!} {\frac{\delta}{2}!\frac{\delta}{2}!(\frac{n}{2}-\frac{\delta}{2})!(\frac{n}{2}-\frac{\delta}{2}- 1)!} \nonumber \\ &=& \frac{(\frac{n}{4})!(\frac{n}{4} + s)!(\frac{n}{4})!(\frac{n}{4} - s -1)!}{(\frac{n}{4} + \frac{s}{2})!(\frac{n}{4} + \frac{s}{2})! (\frac{n}{4} - \frac{s}{2})! (\frac{n}{4} - \frac{s}{2} - 1)!} \nonumber \\ &=&\frac{(\frac{n}{4} + s)(\frac{n}{4} + s - 1)\ldots(\frac{n}{4} + \frac{s}{2} + 1)(\frac{n}{4})(\frac{n}{4} - 1)\ldots(\frac{n}{4} - \frac{s}{2} + 1)}{(\frac{n}{4} + 1)(\frac{n}{4} + 2)\ldots (\frac{n}{4} + \frac{s}{2})(\frac{n}{4} - \frac{s}{2} - 1)(\frac{n}{4} - \frac{s}{2} - 2)\ldots(\frac{n}{4} - s)}\nonumber \end{eqnarray} \noindent Now, applications of Lemma 1 give \begin{eqnarray} \frac{A_{0}}{B_{0}}& \geq &\frac{{(\frac{n}{4} + \frac{3s}{4} + \frac{1}{2})}^{\frac{s}{2}}} {{(\frac{n}{4} + \frac{s}{4} + \frac{1}{2})}^{\frac{s}{2}}} \frac{{(\frac{n}{4} - \frac{s}{4} + \frac{1}{2})}^{\frac{s}{2}}} {{(\frac{n}{4} - \frac{3s}{4} - \frac{1}{2})}^{\frac{s}{2}}}\nonumber \\ & \geq & \frac{{(\frac{n}{4} + \frac{s}{4} + \frac{1}{2})}^{s}} {{(\frac{n}{4} - \frac{s}{4})}^{s}} \end{eqnarray} Since $\delta \geq pn$, we have that $s = \delta - \frac{n}{2} \geq (p - \frac{1}{2})n$. Thus, (5) gives \begin{equation} \frac{A_{0}}{B_{0}} \geq {\left(\frac{\frac{n}{4} + \frac{(p - \frac{1}{2})n}{4}}{\frac{n}{4} -\frac{(p - \frac{1}{2})n}{4} }\right)} ^{\left(p - \frac{1}{2}\right)n}= {\left(\frac{p + \frac{1}{2}}{\frac{3}{2} - p}\right)}^{\left(p - \frac{1}{2}\right)n} \end{equation} Because $ n > \frac{{\rm ln}(4)}{\left(p - \frac{1}{2}\right){\rm ln}\left(\frac{p + \frac{1}{2}}{\frac{3}{2} - p}\right)}$, (6) implies that $\frac{A_{0}}{B_{0}} > 4$, thus proving (4) for $i = 0$. 
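As an illustration of this base case, $A_{0}$ and $B_{0}$ can be evaluated directly for sample parameters satisfying the hypotheses (the choices $p = 0.6$, $n = 80$, $\delta = 48$ below are ours, for illustration only):

```python
from math import comb

def A0(n, delta):
    # A_0 = n_{delta/2}: the term of T with k = delta/2 (here n = 4m, delta = 2d).
    return 2 * comb(delta, delta // 2) * comb(n - delta - 1, n // 2 - delta // 2)

def B0(n, delta):
    # B_0 = n_{n/4}: the term of T with k = n/4.
    return 2 * comb(delta, n // 4) * comb(n - delta - 1, n // 4)

# n = 80 exceeds ln(4)/((p - 1/2) ln((p + 1/2)/(3/2 - p))) ~ 69.1 for p = 0.6,
# and delta = 48 = p*n, so the base case A_0 > 4*B_0 should hold.
n, delta = 80, 48
assert A0(n, delta) > 4 * B0(n, delta)
```

For these values the actual ratio is $A_0/B_0 \approx 6.8$, comfortably above the bound $4$ guaranteed by (6).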
\\ We now turn to the recursive step in proving (4) and assume that $A_{k} > \frac{nB_{k}}{\frac{n}{4} - k}$ for some $k$ with $0 \leq k < \frac{n}{4} - 2$. We will show that \begin{equation} \frac{A_{k + 1}}{A_{k}} \geq \left(\frac{\frac{n}{4} - k}{\frac{n}{4} - k -1}\right) \frac{B_{k + 1}}{B_{k}} \end{equation} This will suffice because (7) together with the recursive hypothesis implies that $A_{k+1} \geq \left(\frac{\frac{n}{4} - k}{\frac{n}{4} - k -1}\right) \frac{A_{k}}{B_{k}}B_{k+1} > \left(\frac{\frac{n}{4} - k}{\frac{n}{4} - k -1}\right) \frac{n}{\frac{n}{4} - k}B_{k+1} = \frac{n}{\frac{n}{4} - k - 1}B_{k+1}$. We have that \[ \frac{A_{k+1}}{A_{k}} = \frac{{\delta \choose \frac{\delta}{2} + k + 1}{n - \delta - 1 \choose \frac{n}{2} - \frac{\delta}{2} - k - 1}} {{\delta \choose \frac{\delta}{2} + k }{n - \delta - 1 \choose \frac{n}{2} - \frac{\delta}{2} - k}} = \frac{\left(\frac{\delta}{2} - k\right)\left(\frac{n}{2} - \frac{\delta}{2} - k\right)} {\left(\frac{\delta}{2} + k + 1\right)\left(\frac{n}{2} - \frac{\delta}{2} + k\right)},\] \[ {\rm and}, \frac{B_{k+1}}{B_{k}} = \frac{{\delta \choose \frac{n}{4} - k - 1}{n - \delta - 1 \choose \frac{n}{4} + k + 1}} {{\delta \choose \frac{n}{4} - k }{n - \delta - 1 \choose \frac{n}{4} + k}} = \frac{\left(\frac{n}{4} - k\right)\left(\frac{3n}{4} - \delta - k - 1\right)} {\left(\delta - \frac{n}{4} + k + 1\right)\left(\frac{n}{4} + k + 1\right)}.\] Hence, letting $\delta = \frac{n}{2} + s$, we have that \begin{eqnarray} \frac{\left(\frac{A_{k+1}}{A_{k}}\right)}{\left(\frac{B_{k+1}}{B_{k}}\right)}& = & \frac{\left(\frac{\delta}{2} - k\right)\left(\frac{n}{2} - \frac{\delta}{2} - k\right)\left(\delta - \frac{n}{4} + k + 1\right)\left(\frac{n}{4} + k + 1\right)} {\left(\frac{n}{4} - k\right)\left(\frac{3n}{4} - \delta - k - 1\right)\left(\frac{\delta}{2} + k + 1\right)\left(\frac{n}{2} - \frac{\delta}{2} + k\right)}\nonumber \\ & = & \frac{\left(\frac{n}{4} + \frac{s}{2} - k\right)\left(\frac{n}{4} - \frac{s}{2} -
k\right)\left(\frac{n}{4} + s + k + 1\right) \left(\frac{n}{4} + k + 1\right)} {\left(\frac{n}{4} - k\right)\left(\frac{n}{4} - s - k - 1\right)\left(\frac{n}{4} + \frac{s}{2} + k + 1\right) \left(\frac{n}{4} - \frac{s}{2} + k \right)} \end{eqnarray} Note that in equation (8) we have, $\frac{\left(\frac{n}{4} + \frac{s}{2} - k\right)}{\left(\frac{n}{4} - k\right)} > 1$, $\frac{\left(\frac{n}{4} + s + k + 1\right)} {\left(\frac{n}{4} + \frac{s}{2} + k + 1\right)} > 1$, $\frac{\left(\frac{n}{4} + k + 1\right)} {\left(\frac{n}{4} - \frac{s}{2} + k \right)} > 1$, and in addition because $k < \frac{n}{4}$, it is easy to verify that $\frac{\left(\frac{n}{4} - \frac{s}{2} - k\right)} {\left(\frac{n}{4} - s - k - 1\right)} > \frac{\left(\frac{n}{4} - k\right)} {\left(\frac{n}{4} - k - 1\right)}$. Now (8) implies (7) which in turn proves (4). This completes the proof of Case 1. \\ \\ \noindent {\bf Case 2}: $n = 4m$ and $\delta = 2j + 1$ for some positive integers $m$ and $j$. \\ For $i = 0,1,\ldots,\frac{n}{4} - 2$, let $A_{i} = n_{(j + i)} = 2{\delta \choose j + i}{n - \delta - 1 \choose 2m - j - i}$, and as in Case 1, let $B_{i} = n_{(\frac{n}{4}-i)} = 2{\delta \choose m - i}{n - \delta - 1 \choose m + i}$. As in Case 1, we prove by recursion on $i$ that inequality (4) is satisfied for $A_{i}$ and $B_{i}$ defined here. Towards this end, let $\delta = \frac{n}{2} + s$ where $s$ is odd. We have that, \begin{eqnarray} \frac{A_{0}}{B_{0}}& = &\frac{(\frac{n}{4})!(\delta - \frac{n}{4})!(\frac{n}{4})!(\frac{3n}{4} - \delta - 1)!} {j!(\delta - j)!(\frac{n}{2}-j)!(\frac{n}{2}-\delta + j - 1)!} \nonumber \\ &=& \frac{(\frac{n}{4})!(\frac{n}{4} + s)!(\frac{n}{4})!(\frac{n}{4} - s -1)!}{(\frac{n}{4} + \frac{s}{2}-\frac{1}{2})!(\frac{n}{4} + \frac{s}{2} +\frac{1}{2})! (\frac{n}{4} - \frac{s}{2} + \frac{1}{2})! 
(\frac{n}{4} - \frac{s}{2} - \frac{3}{2})!} \nonumber \\ &=&\frac{(\frac{n}{4} + s)(\frac{n}{4} + s - 1)\ldots(\frac{n}{4} + \frac{s}{2} + \frac{3}{2})(\frac{n}{4})(\frac{n}{4} - 1)\ldots(\frac{n}{4} - \frac{s}{2} + \frac{3}{2})}{(\frac{n}{4} + \frac{s}{2} - \frac{1}{2})(\frac{n}{4} + \frac{s}{2} -\frac{3}{2})\ldots (\frac{n}{4} + 1)(\frac{n}{4} - \frac{s}{2} - \frac{3}{2})(\frac{n}{4} - \frac{s}{2} - \frac{5}{2})\ldots(\frac{n}{4} - s)}\nonumber \\ & \geq & \frac{(\frac{n}{4} + s)(\frac{n}{4} + s - 1)\ldots(\frac{n}{4} + \frac{s}{2} + \frac{3}{2})(\frac{n}{4} - 1)\ldots(\frac{n}{4} - \frac{s}{2} + \frac{3}{2})}{(\frac{n}{4} + \frac{s}{2} - \frac{1}{2})(\frac{n}{4} + \frac{s}{2} -\frac{3}{2})\ldots (\frac{n}{4} + 1)(\frac{n}{4} - \frac{s}{2} - \frac{3}{2})\ldots(\frac{n}{4} - s + 1)}\frac{\frac{n}{4}}{(\frac{n}{4} - s)} \nonumber \end{eqnarray} Now, applications of Lemma 1 give \begin{eqnarray} \frac{A_{0}}{B_{0}}& \geq &\frac{{(\frac{n}{4} + \frac{3s}{4} + \frac{3}{4})}^{\left(\frac{s}{2}-\frac{1}{2}\right)}} {{(\frac{n}{4} + \frac{s}{4} + \frac{1}{4})}^{\left(\frac{s}{2}-\frac{1}{2}\right)}} \frac{{(\frac{n}{4} - \frac{s}{4} + \frac{1}{4})}^{\left(\frac{s}{2}-\frac{1}{2}\right)}} {{(\frac{n}{4} - \frac{3s}{4} - \frac{1}{4})}^{\left(\frac{s}{2}-\frac{1}{2}\right)}}\frac{\frac{n}{4}}{(\frac{n}{4} - s)}\nonumber \\ & \geq & \frac{{(\frac{n}{4} + \frac{s}{4} + \frac{1}{2})}^{s-1}} {{(\frac{n}{4} - \frac{s}{4})}^{s-1}}\frac{\frac{n}{4}}{(\frac{n}{4} - s)}\nonumber \\ & \geq & \frac{{(\frac{n}{4} + \frac{s}{4} + \frac{1}{2})}^{s}} {{(\frac{n}{4} - \frac{s}{4})}^{s}} \nonumber \end{eqnarray} This is exactly inequality (5) obtained in proving Case 1. The rest of the proof for Case 2 is similar to that of Case 1 and we omit it. \\ \\ \noindent {\bf Case 3}: $n \equiv 2\pmod{4}$. \\ In this case we point out that a proof similar to that in cases 1 and 2 above verifies the result. 
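For concreteness, inequality (3) can also be verified numerically for sample parameter values in each of the three cases (a hypothetical sanity check; the parameters below satisfy $\delta > pn$ for $p = 0.6$ with $n$ above the corresponding threshold):

```python
from math import comb

def n_k(n, delta, k):
    # Number of equipartitions with deg(v, B(X,Y)) = k; comb() returns 0
    # outside the admissible range of k.
    return 2 * comb(delta, k) * comb(n - delta - 1, n // 2 - k)

def T(n, delta):
    return sum(n_k(n, delta, k) for k in range(n // 2 + 1))

def S(n, delta):
    # The upper bound (2) on the number of "bad" equipartitions.
    return n * sum(n_k(n, delta, k) / k for k in range(2, n // 4 + 1))

# One sample per case: delta even, delta odd, and n = 2 (mod 4).
for n, delta in [(80, 48), (80, 49), (82, 50)]:
    assert T(n, delta) > S(n, delta)
```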
\\ \\ Remark: We argue that there was no loss of generality in our assumption at the beginning of the proof of Theorem 7 that $d^{+}(v) = d^{-}(v) = \delta(D)$ for each $v \in V(D)$. Let $D^{*} = (V^{*}, A(D^{*}))$ be a directed graph with $d^{+}(v) \geq \delta(D^{*})$, and $d^{-}(v) \geq \delta(D^{*})$ for each $v \in V(D^{*})$. Let $v \in V(D^{*})$, and let $n_{k}^{*}$ denote the number of equipartitions of $V(D^{*})$ into $V(D^{*}) = X \cup Y$ for which ${\rm deg}(v,B(X,Y)) = k$. We can delete some arcs pointed into $v$ and some arcs pointed out of $v$ to get a directed graph $D = (V^{*},A(D))$ in which $d^{+}(v) = d^{-}(v) = \delta(D^{*})$. Now as before let $n_{k}$ denote the number of equipartitions of $V(D)$ into $V(D) = X \cup Y$ for which ${\rm deg}(v,B(X,Y)) = k$. It is clear that $\sum_{k = 2}^{q}n_{k} \geq \sum_{k = 2}^{q}n_{k}^{*}$ for each $q$, and that $ \sum_{k = \delta - \frac{n}{2} +1}^{\frac{n}{2}} n_{k} = \sum_{k = \delta - \frac{n}{2} +1}^{\frac{n}{2}} {n_{k}}^{*}$ = total number of equipartitions of $V(D^{*})$. Hence, the proof above that $T > S$ holds with $n_{k}$ replaced by $n_{k}^{*}$. \end{mainproof} \\ \\ We now prove the corollaries of Theorem 7 mentioned in the introduction. \\ \\ \noindent \begin{cor1proof} If $n \leq 10$ then $\delta(D) > \frac{2}{3}n$ and Theorem 6 implies that $D$ has an anti-directed Hamilton cycle. Hence, assume that $n > 10$, and for given $n$, let $p$ be the unique real number such that $\frac{1}{2} < p < \frac{3}{4}$ and $n = \frac{{\rm ln}(4)}{\left(p - \frac{1}{2}\right){\rm ln}\left(\frac{p + \frac{1}{2}}{\frac{3}{2} - p}\right)}$. The result follows from Theorem 7 if $\delta(D) > pn$ and since $\delta(D) > \frac{1}{2}n + \sqrt{n\ln(2)}$, it suffices to show that $pn \leq \frac{1}{2}n + \sqrt{n\ln(2)}$. Let $x = p - \frac{1}{2}$ and note that $0 < x < \frac{1}{4}$.
Now, $pn \leq \frac{1}{2}n + \sqrt{n\ln(2)}$ if and only if $xn \leq \sqrt{n\ln(2)}$ if and only if $\sqrt{\frac{\ln(4)}{x\ln\left(\frac{1 + x}{1 - x} \right)}} \leq \frac{\sqrt{\ln(2)}}{x}$ if and only if $2x \leq \ln(1+x) - \ln(1-x)$. Since $0 < x < \frac{1}{4}$, we have that $\ln(1+x) - \ln(1-x) = \sum_{k = 0}^{\infty}\frac{2x^{2k + 1}}{2k + 1}$ and this completes the proof of Corollary 1. \end{cor1proof} \\ \\ \noindent \begin{cor2proof} For $p = \frac{9}{16}$, $177 < \frac{{\rm ln}(4)}{\left(p - \frac{1}{2}\right){\rm ln}\left(\frac{p + \frac{1}{2}}{\frac{3}{2} - p}\right)} < 178$. Hence, Theorem 7 implies that the corollary is true for all $n \geq 178$. If $n < 178$, $\delta(D) > \frac{9}{16}n$, and, $n \not\equiv 0\pmod{4}$, we can verify that inequality (3) is satisfied by direct computation. If $n < 178$, $\delta(D) > \frac{9}{16}n$, and, $n \equiv 0\pmod{4}$, a use of Theorem 8 that is stronger than its use in deriving the bound $S$ in equation (2) yields that the number of equipartitions of $V(D)$ into $V(D) = X \cup Y$ for which $B(X,Y)$ does not contain a Hamilton cycle is at most \begin{equation} S' = n\left(\frac{n_{2}}{2} + \frac{n_{3}}{3} + \ldots + \frac{n_{\lfloor \frac{n}{4}\rfloor}}{2\lfloor\frac{n}{4}\rfloor}\right). \end{equation} Direct computation now verifies that $T > S'$. \end{cor2proof} \\ \\ \noindent \begin{cor3proof} If $n \leq 14$ is even and $\delta(D) > \frac{1}{2}n$ then we have that $\delta(D) > \frac{9}{16}n$ and Corollary 2 implies Corollary 3. \end{cor3proof}
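The elementary inequality $2x \leq \ln(1+x) - \ln(1-x)$ used at the end of the proof of Corollary 1 is easy to confirm numerically (an illustrative check):

```python
from math import log

# ln(1+x) - ln(1-x) = 2x + 2x^3/3 + 2x^5/5 + ... >= 2x on (0, 1).
for i in range(1, 250):
    x = i / 1000.0                      # samples in (0, 1/4)
    assert 2 * x <= log(1 + x) - log(1 - x)
```

In fact the inequality holds on all of $(0,1)$, with equality only in the limit $x \to 0$.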
\section{Numerical results}\label{sec:results} We now present the results for a series of simulations chosen to demonstrate the properties of our numerical approach. We first consider the INS equations on a sequence of refined unit square meshes to study the accuracy of the scheme; cases with and without the divergence damping are considered subject to both the TN and WABE boundary conditions. Some benchmark problems are also considered to further illustrate the numerical properties of our scheme and to compare with existing results. Finally, as a demonstration that our scheme can be easily extended to work with higher-order elements, we solve the classical flow-past-a-cylinder problem using $\Pe_n$ finite elements with $n\geq 1$. {\bf Remark}: For all the test problems with known exact solutions, errors of the numerical solutions are measured using both $L_\infty$ and $L_2$ norms. To be specific, given an exact solution $v$ and its FEM approximation $v_h$ in the finite space $V_h=\mathrm{span}\{\varphi_1,\varphi_2,\dots,\varphi_n\}$, we define the error function as $$ E(v) = |v_h-v_e|, $$ where $v_e$ is the nodal interpolant of the exact solution $v$ in the finite space $V_h$, i.e., $ v_e = \sum_{i=1}^{n} v(\xv_i)\varphi_i $. Here $\xv_i$ denotes the coordinates of the corresponding degree of freedom. In $V_h$, the $L_\infty$ and $L_2$ norms of the error function are given by $$ \|E(v)\|_\infty = \max(E(v))~~\text{and}~~\|E(v)\|_2=\left(\int_\Omega E(v)^2\,d\xv\right)^{1/2}. $$ A numerical quadrature with sufficient order of accuracy is used to compute the integral; for example, for $\Pe_1$ elements, a third-order accurate quadrature rule is used. \input texFiles/manufactured \input texFiles/ldc \input texFiles/fpc
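For $\Pe_1$ elements in one dimension both norms of $E(v)$ can be evaluated exactly, since $E(v)$ is itself piecewise linear; the following sketch (our own illustration, assuming a nodal interpolant for $v_e$, and not the quadrature code used for the results in this section) shows the pattern:

```python
import numpy as np

def p1_error_norms(nodes, vh_vals, v_exact):
    """L_inf and L_2 norms of E(v) = |v_h - v_e| on a 1D P1 mesh.

    With v_e the nodal interpolant of the exact solution, v_h - v_e is
    again piecewise linear, so both norms can be evaluated exactly:
    max over the nodes for L_inf, and the elementwise formula
    int (linear)^2 dx = h/3 * (a^2 + a*b + b^2) for L_2.
    """
    e = vh_vals - v_exact(nodes)        # nodal values of v_h - v_e
    linf = np.max(np.abs(e))
    a, b = e[:-1], e[1:]                # error values at element endpoints
    h = np.diff(nodes)                  # element sizes
    l2 = np.sqrt(np.sum(h / 3.0 * (a * a + a * b + b * b)))
    return linf, l2

# Consistency check: a constant nodal error of 0.01 on [0, 1] gives
# ||E||_inf = ||E||_2 = 0.01.
nodes = np.linspace(0.0, 1.0, 11)
vh = nodes**2 + 0.01
linf, l2 = p1_error_norms(nodes, vh, lambda x: x**2)
```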
TITLE: I get 2 possibilities for one problem. In first one, x=y for an equation but in another one x is not equal to y for same equation. Why? QUESTION [0 upvotes]: Equation is $$9x^2+6x-5=9y^2+6y-5$$ I was solving this to prove $x=y$ for one-one function. Although I proved it but here $$ (x-y)(9x+9y+6)=0$$ Both sides can be divided either by $(9x+9y+6)$ to get the required answer i.e. $x=y$ or by $(x-y)$. On dividing both sides by $(x-y)$, I didn't get $x=y$. So my question is why am I getting different answers although the equation is same? Edit: $x,y$ belongs to domain i.e. positive real numbers and range of this function is $[-5,∞]$. REPLY [1 votes]: The function in your question is NOT one-one if you consider it from $\mathbb R\to \mathbb R$. I think in the question you're solving, the function was defined on some other domain like let's say $(0,1)$ on which it is one-one. Note that in that domain, $9(x+y)=-6$ can never be true. So, you must have $x=y$.
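The factorization step in the answer is easy to sanity-check numerically (a small illustrative script):

```python
# Check the identity (x - y)(9x + 9y + 6) = (9x^2 + 6x - 5) - (9y^2 + 6y - 5)
# on sample points, and that the second factor is positive when x, y > 0.
for x in [0.5, 1.0, 2.5, 7.0]:
    for y in [0.25, 1.0, 3.0]:
        lhs = (x - y) * (9 * x + 9 * y + 6)
        rhs = (9 * x**2 + 6 * x - 5) - (9 * y**2 + 6 * y - 5)
        assert abs(lhs - rhs) < 1e-9
        assert 9 * x + 9 * y + 6 > 0
```

Since $9x+9y+6>0$ whenever $x,y>0$, the product can only vanish through the factor $x-y$, which is exactly why restricting to a positive domain makes the function one-one.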
\begin{document} \fontsize{12pt}{14pt} \textwidth=14cm \textheight=21 cm \numberwithin{equation}{section} \title{Estimates of sections of determinant line bundles on Moduli spaces of pure sheaves on algebraic surfaces} \author{Yao YUAN} \date{\small\textsc{SISSA}, Via Bonomea 265, 34136, Trieste, ITALY \\ yuayao@gmail.com} \maketitle \begin{flushleft}{\textbf{Abstract:}} Let $X$ be any smooth simply connected projective surface. We consider some moduli space of pure sheaves of dimension one on $X$, i.e. $\mhu$ with $u=(0,L,\chi(u)=0)$ and $L$ an effective line bundle on $X$, together with a series of determinant line bundles associated to $r[\mo_X]-n[\mo_{pt}]$ in the Grothendieck group of $X$. Let $g_L$ denote the arithmetic genus of curves in the linear system $\ls$. For $g_L\leq2$, we give an upper bound on the dimensions of sections of these line bundles by restricting them to a generic projective line in $\ls$. Together with G\"ottsche's computation, our result gives a first step of a check of the strange duality for some cases with $X$ a rational surface. \end{flushleft} \section{Introduction.} Let $X$ be a smooth complex projective surface with $H$ an ample divisor, and $u$ and $c^r_n$ two elements in the Grothendieck group $\hk(X)$ of $X$ which are specified as $u=(0,L,\chi(u)=0)$ for $L$ an effective line bundle on $X$, and $c_n^r=r[\mathcal{O}_X]-n[\mathcal{O}_{pt}]$ where $\mo_{pt}$ is the skyscraper sheaf supported at a point in $X.$ Denote by $\mhu$ (resp. $M^H_X(c_n^r)$) the moduli space of semistable sheaves with respect to $H$ on $X$ of class $u$ (resp. $c_n^r$). There is a so-called determinant line bundle $\lcn$ (resp. $\lu$) on $\mhu$ (resp. $M^H_X(c_n^r)$) associated to $c_n^r$ (resp. $u$) (see \cite{dan} Chapter 8 for more details).
The strange duality conjecture asserts that there is a natural isomorphism between the following two spaces (see \cite{mofir} for more details) \begin{equation}\label{sdm}D:H^0(\mhu,\lcn)^{\vee}\ra H^0(M_X^H(c_n^r),\lu).\end{equation} We are concerned with the numerical version of the conjecture. In other words, we would like to check the following equality \begin{equation}\label{numcon}h^0(\mhu,\lcn)=h^0(M^H_X(c_n^r),\lu). \end{equation} In \cite{yuan}, for $X=\mathbb{P}^2$ or $\mathbb{P}(\mo_{\pone}\oplus\mo_{\pone}(-e))$ with $e=0,1$ and $L=2G+aF$ with $2e\leq a\leq e+3$, where $F$ is the fiber class and $G$ is the section such that $G.G=-e$, we have computed the generating function \begin{equation}\label{generating}Z^r(t)=\sum_{n\geq0}h^0(\mhu,\lcn)t^n,\end{equation} for all $r\geq1$. Moreover when $r=2$, the result matches G\"ottsche's computation on the rank 2 sheaves side and gives a numerical check of strange duality for these cases (see \cite{yuan} Corollary 4.4.2 and Corollary 4.5.3). In this paper we consider more general cases. We ask $X$ to be any smooth simply connected projective surface over the complex numbers $\mathbb{C}$. Let $K$ be the canonical divisor of $X$. Let $\ls$ be the linear system associated to the line bundle $L$ and $l$ the dimension of $\ls$. Let $g_L$ be the arithmetic genus of curves in $\ls$. For any two line bundles $L$ and $L'$, we denote by $L.L'$ the intersection number of their divisors; moreover we write $L'\leq L$ if $L\otimes L'^{-1}$ is an effective line bundle, i.e.
$h^0(L\otimes L'^{-1})\neq0;$ and write $L'<L$ if $L'\leq L$ and $L'\neq L.$ We state the two assumptions on $L$ that we need: $(\ha'_1)$ $L.K<0;$ $(\ha'_2)$ For any $0<L',L''<L$ with $L'+L''=L,$ we have $l'+l''\leq l-2$ where $l'=dim~|L'|$ and $l''=dim~|L''|.$ Since we deal with more general cases, the techniques we used in \cite{yuan} to obtain the normality and irreducibility of the moduli space $\mhu$ and the dualizing sheaf on $\mhu$ no longer apply. We thus lose many good properties of the moduli spaces, but we still obtain estimates for the dimension of sections of $\lcn$ on $\mhu$. We have obtained in this paper the following three theorems: \begin{thm}\label{thmone}Let $X$ be simply connected and let $L$ satisfy $(\ha'_1)$ and $(\ha'_2)$. Then we have for all $n\geq0$ \[h^0(M(c^1_n),\lu)\geq h^0(M(u),\lambda_{c^1_n}).\] Moreover for any fixed $r$, once the strict inequality holds for $n=n_0$, it holds for all $n\geq n_0.$ \end{thm} Denote \[Y^r_{g_L=1}(t)=\sum_{n\geq0}y^{r}_{n,g_L=1}t^n=\frac{1+t^{2}+t^{3}+\ldots+t^{r}}{(1-t)^2};\] and let $y_{n,g_L=1}^r=0$ for all $n<0$. Then we have \begin{thm}\label{thmtwo}Let $X$ be a smooth simply connected projective surface and $L$ satisfy $(\ha'_1)$ and $(\ha'_2)$ with $g_L=1$. Then we have for all $n\in \mathbb{Z}$ and $r\geq1$, \[y^r_{n,g_L=1}\geq h^0(M(u),\lambda_{c^r_n}).\] Moreover for any fixed $r$, once the strict inequality holds for $n=n_0$, it holds for all $n\geq n_0.$ \end{thm} Let $Y^1_{g_L=2}=\sum_{n}y_{n,g_L=2}^1t^n=\frac1{(1-t)^2}$ and for $r\geq2$ \[Y^r_{g_L=2}(t)=\sum_{n}y_{n,g_L=2}^rt^n=\frac{1+3t^{2}+\sum_{i=3}^r ((i+1)t^{i}+(i-2)t^{i+1})}{(1-t)^{l+1}}.\] Let $y_{n,g_L=2}^r=0$ for all $n<0$. Then we have \begin{thm}\label{thmthree}Let $X$ be a smooth simply connected projective surface and $L$ satisfy $(\ha'_1)$ and $(\ha'_2)$ with $g_L=2$ and $dim~\ls\geq3$.
Then we have for all $n\in \mathbb{Z}$ and $r\geq1$, \[y^r_{n,g_L=2}\geq h^0(M(u),\lambda_{c^r_n}).\] Moreover for any fixed $r$, once the strict inequality holds for $n=n_0$, it holds for all $n\geq n_0.$ \end{thm} \begin{rem}Fix $r=2$. G\"ottsche's results for rational ruled surfaces together with his blow-up formulas give many examples for $X$ a rational surface, in which $L$ satisfies $(\ha'_1)$ and $(\ha'_2)$ with $g_L=1$ or $g_L=2$ and $l\geq3$, and also the following equalities hold under some suitable polarization (a change of the polarization may give a difference of a polynomial) \[\sum_{n\geq0}\chi(M(c_n^2),\lambda_L)t^n=\frac{1+t^2}{(1+t)^{l+1}}=Y^2_{g_L=1}(t),~if~g_L=1;\] \[\sum_{n\geq0}\chi(M(c_n^2),\lambda_L)t^n=\frac{1+3t^2}{(1+t)^{l+1}}=Y^2_{g_L=2}(t),~if~g_L=2.\] Hence we have for these cases under a suitable polarization for all $n\geq0$ \[\chi(M(c^2_n),\lu)\geq h^0(M(u),\lambda_{c^2_n}).\] In particular (under any polarization) for $n\gg0$, we have \[\chi(M(c^2_n),\lu)=h^0(M(c^2_n),\lu)\geq h^0(M(u),\lambda_{c^2_n}).\] \end{rem} The main idea to prove these three theorems is to restrict $\z^r$ to intersections of pull-backs of hyperplanes in $\ls$ until finally we reach a generic projective line $T$ in $\ls$. We then compute the splitting type of $\pi_{*}(\z^r|_{\pi^{-1}(T)})$ on $T$. We prove Theorem \ref{thmone} in Section 4, Theorem \ref{thmtwo} in Section 5. The proof of Theorem \ref{thmthree} is the most complicated one among the three and is done in Section 6. Also in Section 6 we obtain a corollary (Corollary \ref{jaco}) in the theory of compactified Jacobians of integral curves with planar singularities. \section{Notations.} Let $\uchi$ be an element in $\hk(X)$ given by $\uchi=(0,L,\chi(\uchi)=\chi)$, and $\mchi$ the moduli space of semistable sheaves (w.r.t. $H$) of class $\uchi$ on $X$. Denote by $\mchi^s$ the stable locus of $\mchi$.
Notice that when $\gcd(\chi,L.H)=1,$ $\mchi=\mchi^s.$ Let $\ls^{IC}$ be the open subset of $\ls$ consisting of points corresponding to integral curves. By $(\ha'_2)$, we have $\ls-\ls^{IC}$ is of codimension $\geq 2$ in $\ls$. There is a projection $\pi_{\chi}:\mchi\ra\ls$ which is defined by sending every sheaf to its schematic support. $\pi_{\chi}$ is a morphism according to Proposition 3.0.2 in \cite{yuan}. $(\ha'_1)$ implies that Ext$^2(\mf,\mf)=0$ for all $\mf$ semistable of class $\uchi$ that are supported on integral curves. Hence by Lemma 4.2.3 in \cite{yuan} the moduli space $\mchi$ is smooth of dimension $g_L+l$ at the point $[\mf]$ if $\mf$ is supported on an integral curve, i.e. $\pi_{\chi}([\mf])\in\ls^{IC}$. For $\chi=0$ we write $u$, $M$, $M^s$ and $\pi$ instead. It is easy to see that $M$ does not depend on the polarization, but $\mchi$ might for $\chi\neq0.$ We denote by $\z$ and $\lambda_{pt}$ the determinant line bundles on $\mhu$ associated to $[\mo_X]$ and $[\mo_{pt}]$. Hence we have $\lcn\simeq \z^{\otimes r}\otimes \lambda_{pt}^{\otimes -n}.$ We moreover ask $\mo_{pt}$ not to be supported at the base point of $\ls$, then by Proposition 2.8 in \cite{le} we have that $\lambda_{pt}\simeq\pi^{*}\mo_{\ls}(-1)$. Let $\z^r(n):=\z^r\otimes\pi^{*}\mo_{\ls}(n)$. \section{Restrict $\z^r$ to intersections of pull-backs of hyperplanes in $\ls$.} Choose $l-1$ generic points in $X$: $x_1,x_2,\ldots,x_{l-1}.$ For each $x_i$, by asking the supporting curves of the sheaves to pass through it, we can get an equation $f_i$ up to scalar in $|\pi_{\chi}^{*}\mo_{\ls}(1)|$. Let $V_i$ be the divisor defined by $f_i$. Since $x_1,\ldots,x_{l-1}$ are generic, we may assume that the $V_i$ intersect each other transversally. There is also a series of closed subschemes in $\ls$: $P_1,P_2,\ldots,P_{l-1},$ where $P_{i}$ consists of curves passing through $x_1,\ldots,x_{i}.$ $P_{i}\simeq\mathbb{P}^{l-i}$ and $\pi_{\chi}^{-1}(P_{i})=\cap_{1\leq m\leq i}V_m$. Let $T:=P_{l-1}$.
Then $T$ is a projective line in $\ls.$ Because $\ls-\ls^{IC}$ is of codimension $\geq 2$ in $\ls$, we can assume that $T\subset \ls^{IC}$. We then have the following Cartesian diagram \begin{equation}\label{lnode} \xymatrix{\mchi^T\ar[r]^s\ar[d]_{\pi^T_{\chi}}&\mchi^{IC}\ar[r]^j\ar[d]^{\pi^{IC}_{\chi}}&\mchi\ar[d]^{\pi_{\chi}}\\ T\ar[r]^t&\ls^{IC}\ar[r]^i&\ls} \end{equation} $\mchi^{IC}$ is contained in the stable locus $\mchi^s$ and is smooth. We can also assume that $\mchi^T$ is smooth since $|\pi_{\chi}^{*}\mo_{\ls}(1)|$ has no base point. For $\chi=0,$ $\mchi=M,$ we have an exact sequence on $M:$ \begin{equation}\label{pone} 0\ra\pi_{\chi}^{*}\mo_{\ls}(-1)\ra\mo_{M}\ra\mo_{\pi^{-1}(P_1)}\ra0. \end{equation} We then tensor (\ref{pone}) by $\z^r(n)$: \begin{equation}\label{ot} \xymatrix{ 0\ar[r]&\z^r(n-1)\ar[r]&\z^r(n)\ar[r]&\z^r(n)|_{\pi^{-1}(P_1)}\ar[r]&0.} \end{equation} Taking the global sections, we have \begin{equation}\label{otv} 0\ra H^0(\z^r(n-1))\ra H^0(\z^r(n))\ra H^0(\z^r(n)|_{\pi^{-1}(P_1)})\ra H^1(\z^r(n-1)). \end{equation} Sequence (\ref{otv}) implies that $h^0(\z^r(n))-h^0(\z^r(n-1))\leq h^0(\z^r(n)|_{\pi^{-1}(P_1)}).$ Denote $Z^r_i(t)=\sum_{n}h^0(M,\z^r(n)|_{\pi^{-1}(P_i)})t^n$ for all $i=1,\ldots,l-1.$ Notice that each of these sums is bounded from below.
Hence we have \begin{equation}\label{sim}h^0(M,\z^r(n))\leq\sum_{m\leq n}h^0(\z^r(n)|_{\pi^{-1}(P_1)})\end{equation} The inequality (\ref{sim}) will become an equality if $h^1(M,\z^r(n-1))=0$ for all $n$ such that $h^0(\pi^{-1}(P_1),\z^r(n)|_{\pi^{-1}(P_1)})\neq0.$ And once the strict inequality holds for $n=n_0$, it holds for all $n\geq n_0.$ On the other hand we have \[\sum_{n}(\sum_{m\leq n}h^0(\z^r(n)|_{\pi^{-1}(P_1)}))t^n=\frac{Z^r_1(t)}{1-t}.\] Inductively for all $1\leq i\leq l-2,$ we have an exact sequence \begin{equation}\label{itv} 0\ra\z^r(n-1)|_{\pi^{-1}(P_i)}\ra\z^r(n)|_{\pi^{-1}(P_i)}\ra\z^r(n)|_{\pi^{-1}(P_{i+1})}\ra0, \end{equation} This implies that \begin{equation}\label{sime}h^0(M,\z^r(n)|_{\pi^{-1}(P_i)})\leq\sum_{m\leq n}h^0(\z^r(n)|_{\pi^{-1}(P_{i+1})})\end{equation} Finally we come to the generic projective line $T=P_{l-1}$ in the linear system. Define \[\sum_{n}a^r_nt^n:=\frac{Z^r_{l-1}(t)}{(1-t)^{l+1}}.\] Then we have \begin{equation}\label{rgeq}h^0(M,\z^r(n))\leq a_n^r.\end{equation} We will compute $Z^r_{l-1}(t)$ for $g_L=1,2$ in the next sections. \section{Moduli spaces over one dimensional linear systems.} In this section, we construct a new moduli space $\tilde{\mchi}$ over a one dimensional linear system $\tilde{\ls}$ on a surface $\tilde{X}$ obtained by blowing up points in $X$. Then we show that $\tilde{\mchi}$ can be identified with $\mchi^T.$ The construction is as follows. Choose $l-1$ generic points in $X$: $x_1,x_2,\ldots,x_{l-1};$ such that curves passing through all these $l-1$ points are integral curves (this is to say that the line $T$ defined by those points is contained in $\ls^{IC}$) and all of them except finitely many are smooth. Moreover those curves are smooth at $x_1,x_2,\ldots,x_{l-1}$ (this is possible since the points are finitely many). 
We then blow up all these $l-1$ points and get a new surface $\tilde{X}$ together with a projection $\rho:\tilde{X}\ra X.$ We have a new moduli space $\tilde{\mchi}=M_{\tilde{X}}(\tilde{u}_{\chi}),$ where $\tilde{u}_{\chi}=(0,\tilde{L}=\rho^{*}L-E_1-E_2-\ldots-E_{l-1},\chi)$ with the $E_i$ the exceptional divisors. Notice that there is a natural closed embedding $\imath:|\tilde{L}|\ra\ls$ with its image $T.$ In particular for $\tilde{u}_{\chi}=\tilde{u}_{0}=:\tilde{u},$ we denote $\tilde{\z}$ the determinant line bundle on $\tilde{M}=\tilde{M}_0$ associated to the structure sheaf $\mo_{\tilde{X}}.$ Then we have the following proposition: \begin{prop}\label{muse} There is a morphism $\underline{f}:\tilde{\mchi}\ra \mchi,$ which factors through the embedding $j\circ s$ as in diagram (\ref{lnode}) and induces an isomorphism $f:\tilde{\mchi}\ra \mchi^T;$ and we have the Cartesian diagram as follows \begin{equation}\label{car}\xymatrix{ \tilde{\mchi} \ar[d]_{\tilde{\pi}_{\chi}} \ar[r]^{f} & \mchi^T\ar[d]_{\pi^T_{\chi}}\ar[r]^{j\circ s}&\mchi\ar[d]^{\pi_{\chi}} \\ |\tilde{L}|\ar[r]^{\imath}&T\ar[r]^{i\circ t} &\ls }\end{equation} And moreover for $\chi=0$ we have $\underline{f}_{*}\tilde{\z}^r\simeq (j\circ s)_{*}f_{*}\tilde{\z}^r$ and $f_{*}\tilde{\z}^r\simeq (j\circ s)^{*}\z^r.$ \end{prop} \begin{proof}First we have two lemmas \begin{lemma}\label{univ}There is a universal sheaf on $\tilde{X}\times \tilde{\mchi}.$ That is to say, $\tilde{\mchi}$ is a fine moduli space. \end{lemma} \begin{proof}Let $\tilde{\Omega}_{\chi}$ be the open subscheme of the $Quot$-scheme and $\tilde{\phi}_{\chi}:\tilde{\Omega}_{\chi}\ra \tilde{\mchi}$ be the good quotient. Since all curves in $|\tilde{L}|$ are irreducible and reduced, all semistable sheaves in $\tilde{u}_{\chi}$ are stable and the morphism $\tilde{\phi}_{\chi}:\tilde{\Omega}_{\chi}\ra \tilde{\mchi}$ is a principal $G$-bundle, with $G$ some reductive group. 
There is a universal quotient $\tilde{\mathcal{E}}_{\chi}$ on $\tilde{X}\times \tilde{\Omega}_{\chi}$. \[\xymatrix{ \tilde{\mathcal{E}}_{\chi} \ar[r] & \tilde{X}\times \tilde{\Omega}_{\chi} \ar[ld]^{q} \ar[d]^{p_{\chi}} \\ \tilde{X} &\tilde{\Omega}_{\chi} }\] Let $A=det~R^{\bullet}p_{\chi }(\tilde{\mathcal{E}}_{\chi}\otimes q^{*}\mathcal{O}_{\tilde{X}}((1-\chi)E_1)).$ $A$ is a line bundle on $\tilde{\Omega}_{\chi}$ and carries a natural $G$-linearization of $Z$-weight $\chi((\tilde{\mathcal{E}}_{\chi})_y\otimes\mathcal{O}_{\tilde{X}}((1-\chi)E_1))$ for every closed point $y\in\tilde{\Omega}_{\chi}$. Since $E_i.\tilde{L}=1$ and $(\tilde{\mathcal{E}}_{\chi})_y$ is of rank 0 and Euler characteristic $\chi$ for every $y,$ we have $\chi((\tilde{\mathcal{E}}_{\chi})_y\otimes\mo_{\tilde{X}}((1-\chi)E_1))=1$ which means $A$ is of $Z$-weight 1. According to Proposition 4.6.2 and Theorem 4.6.5 in \cite{dan}, we have the lemma. \end{proof} \begin{lemma}\label{integral}$\tilde{\pi}$ is flat and $\tilde{\mchi}$ is an integral scheme. \end{lemma} \begin{proof}Since curves in $|\tilde{L}|$ are reduced and irreducible and with at most planar singularities, every fiber of $\tilde{\pi}$ is integral and of dimension $g.$ Hence $\tilde{\mchi}$ can not have more than one component because $|\tilde{L}|$ is just a projective line. Then $\tilde{\pi}$ is flat because there is no component contained in any fiber. $\tilde{\mchi}$ is reduced because all fibers of $\tilde{\pi}$ are reduced and $|\tilde{L}|$ is reduced. \end{proof} Now let $\tilde{\mathcal{U}}_{\chi}$ be a universal sheaf on $\tilde{X}\times \tilde{\mchi}$. 
Push it forward along $\rho\times id_{\tilde{\mchi}}$ and get a flat family $\mathcal{U}_{\chi}:=(\rho\times id_{\tilde{\mchi}})_{*}\tilde{\mathcal{U}}_{\chi}$ on $X\times \tilde{\mchi}.$ Over every point $[\mathcal{F}]\in \tilde{\mchi},$ $\rho_{*}\mathcal{F}$ is a stable sheaf whose support is the push forward of the support of $\mathcal{F},$ hence $[\rho_{*}\mathcal{F}]\in \mchi^T.$ The flat family $\mathcal{U}_{\chi}$ induces a morphism $\underline{f}:\tilde{\mchi}\rightarrow \mchi,$ with its image contained in $\mchi^T.$ Since $\mchi^T$ is smooth hence normal and $\tilde{\mchi}$ is integral, to prove that $f:\tilde{\mchi}\ra \mchi^T$ is an isomorphism, it is enough to show that it is bijective. The injectivity is because $\rho|_{C_{\mathcal{F}}}:C_{\mathcal{F}}\rightarrow C_{\rho_{*}\mathcal{F}}$ is an isomorphism, where $C_{\mathcal{F}}$ is the supporting curve of $\mathcal{F}.$ To prove the surjectivity, we need to show that $\forall [\mathcal{G}]\in \mchi^T,$ $\exists [\tilde{\mathcal{G}}]\in \tilde{\mchi}$ such that $\rho_{*}\tilde{\mathcal{G}}\simeq \mathcal{G}.$ Pull back $\mathcal{G}$ to get a sheaf on $\tilde{X}$ with support $C=C_{\rho^{*}\mathcal{G}}\in |\rho^{*}L|.$ On $\tilde{X}$ we have \[0\rightarrow\mathcal{O}_{E_i}(-1)^{\oplus_{i=1}^{l-1}}\rightarrow\mathcal{O}_{C}\rightarrow\mathcal{O}_{\tilde{C}}\rightarrow0.\] Tensor this sequence by $\rho^{*}\mathcal{G}.$ \[\xymatrix@C=0.6cm{ Tor^1(\rho^{*}\mathcal{G},\mathcal{O}_{\tilde{C}})\ar[r]^{\tau~~~}&\mathcal{O}_{E_i}(-1)^{\oplus_{i=1}^{l-1}}\otimes\rho^{*}\mathcal{G}\ar[r]&\rho^{*}\mathcal{G}\ar[r]&\mathcal{O}_{\tilde{C}}\otimes\rho^{*}\mathcal{G}\ar[r]&0.}\] $c_1(\mathcal{O}_{\tilde{C}}\otimes\rho^{*}\mathcal{G})=\tilde{L},$ so $c_1($im$\tau)=0,$ while im$\tau$ (i.e. the image of $\tau$) is contained in $\mathcal{O}_{E_i}(-1)^{\oplus_{i=1}^{l-1}}\otimes\rho^{*}\mathcal{G}=\mathcal{O}_{E_i}(-1)^{\oplus_{i=1}^{l-1}},$ which is pure on its support. 
Therefore $\tau=0.$ Hence we have \[0\rightarrow\mathcal{O}_{E_i}(-1)^{\oplus_{i=1}^{l-1}}\rightarrow\rho^{*}\mathcal{G}\rightarrow\mathcal{O}_{\tilde{C}}\otimes\rho^{*}\mathcal{G}\rightarrow0.\] Push it forward. Because of the vanishing of $\rho_{*}\mathcal{O}_{E_i}(-1)$ and $R^1\rho_{*}\mathcal{O}_{E_i}(-1),$ we have $\rho_{*}(\rho^{*}\mathcal{G})\simeq \rho_{*}(\mathcal{O}_{\tilde{C}}\otimes\rho^{*}\mathcal{G}).$ $\rho$ restricted on $\tilde{C}$ is an isomorphism. So if $\rho_{*}(\rho^{*}\mathcal{G})\simeq \mathcal{G},$ then $\mathcal{O}_{\tilde{C}}\otimes\rho^{*}\mathcal{G}$ is a pure sheaf of rank 1 on $\tilde{C}$ and of Euler characteristic 0, hence $[\mathcal{O}_{\tilde{C}}\otimes\rho^{*}\mathcal{G}]\in \tilde{\mchi},$ and hence we have found $[\tilde{\mathcal{G}}]=[\mathcal{O}_{\tilde{C}}\otimes\rho^{*}\mathcal{G}]\in \tilde{\mchi},$ such that $f([\tilde{\mathcal{G}}])=[\mathcal{G}].$ Now we only need to show $\rho_{*}(\rho^{*}\mathcal{G})\simeq \mathcal{G}.$ Firstly, we show that $\rho_{*}(\rho^{*}\mathcal{O}_{C})\simeq \mathcal{O}_C.$ This can be seen from $\rho_{*}(\rho^{*}\mathcal{G})\simeq \rho_{*}(\mathcal{O}_{\tilde{C}}\otimes\rho^{*}\mathcal{G}),$ with $\mathcal{G}=\mathcal{O}_{C}.$ Then since $\mathcal{G}$ is locally free on its support outside the singular points, we have that the isomorphism holds outside the singular points; but around the singular points, $\rho$ is an isomorphism. Finally let $\chi=0$. 
The claim on the determinant line bundles is straightforward: by the universal property of $\z,$ we have $\underline{f}^{*}(\z)\simeq (det~R^{\bullet}p~\mathcal{U})^{\vee},$ where $\mathcal{U}$ is the flat family on $X\times \tilde{M}$ obtained by pushing $\tilde{\mathcal{U}}$ forward along $\rho\times id_{\tilde{M}}.$ \[\xymatrix{ \tilde{\mathcal{U}} \ar[d]^{(\rho\times id_{\tilde{M}})_{*}}\ar[r] &\tilde{X}\times \tilde{M}\ar[d]^{\rho\times id_{\tilde{M}}}\\ \mathcal{U}\ar[r] &X\times \tilde{M}\ar[d]^{p}\\ & \tilde{M}}\] Hence $R^{\bullet}p~\mathcal{U}\simeq R^{\bullet}p~((\rho\times id_{\tilde{M}})_{*}\tilde{\mathcal{U}}).$ \begin{lemma}$R^i(\rho\times id_{\tilde{M}})_{*}\tilde{\mathcal{U}}=0,$ for all $i>0.$ \end{lemma} \begin{proof}One can see that $\rho\times id_{\tilde{M}}$ is an isomorphism when restricted to the support of $\tilde{\mathcal{U}},$ hence the lemma. \end{proof} As $R^i(\rho\times id_{\tilde{M}})_{*}\tilde{\mathcal{U}}=0,$ for all $i>0,$ we have $\underline{f}^{*}\z\simeq (det~R^{\bullet}p~\mathcal{U})^{\vee}\simeq (det~R^{\bullet}(p\circ (\rho\times id_{\tilde{M}}))~\tilde{\mathcal{U}})^{\vee}= \tilde{\z}.$ Hence $\underline{f}_{*}(\tilde{\z}^r)\simeq\underline{f}_{*}(\underline{f}^{*}(\z^r))\simeq\underline{f}_{*}(\mathcal{O}_{\tilde{M}})\otimes \z^r\simeq (j\circ s)_{*}\mathcal{O}_{M^T}\otimes\z^r$ and $f_{*}\tilde{\z}^r\simeq (j\circ s)^{*}\z^r$ for all $r$. So we have proven the proposition. \end{proof} \begin{rem}According to Proposition \ref{muse}, $\tilde{M}_{\chi}$ is a smooth projective scheme of dimension $g_L+1.$ But Ext$^2(\mf,\mf)_0$ may not vanish for $[\mf]\in \tilde{M},$ because $(\tilde{L},\tilde{K})$ might not satisfy $(\ha'_1).$ \end{rem} \begin{rem}For the moduli space $\tilde{\mchi}$, we did not specify the ample line bundle $\mo_{\tilde{X}}(1)$ on the blow-up $\tilde{X},$ but it is easy to see that the moduli space $\tilde{\mchi}$ does not depend on the polarization.
\end{rem} \begin{prop}\label{ndchi}$\tilde{\mchi}$ is isomorphic to $\tilde{M}$ for any $\chi\in\mathbb{Z}$. \end{prop} \begin{proof}Recall that $\tilde{\mchi}$ is a fine moduli space for any $\chi$. Let $\tilde{\mathcal{U}}_{\chi}$ be some universal sheaf on $\tilde{X}\times \tilde{\mchi}$. We have the diagram \[\xymatrix{ \tilde{\mathcal{U}}_{\chi} \ar[r] & \tilde{X}\times \tilde{\mchi} \ar[ld]^{q} \ar[d]^{p_{\chi}} \\ \tilde{X} &\tilde{\mchi} }\] Then $\tilde{\mathcal{U}}_{\chi}\otimes q^{*}\mo_{\tilde{X}}((-\chi)E_1)$ is a flat family on $\tilde{X}\times \tilde{\mchi}$ of stable sheaves of class $\tilde{u}$, and hence induces a morphism $\varphi_{\chi}:\tilde{\mchi}\ra \tilde{M}$. It is easy to see that $\varphi_{\chi}$ is bijective, hence an isomorphism since both $\tilde{\mchi}$ and $\tilde{M}$ are smooth. Notice that one can construct the isomorphism $\varphi_{\chi}$ in many ways, and there is no canonical way if $l\geq2$. \end{proof} Now we have identified $(\tilde{M},\tilde{\z}^r)$ with $(M^T,\z^r|_{M^T})$, hence we can focus on $\tilde{\pi}_{*}\tilde{\z}^r$ on $\tilde{\ls}$, instead of $\pi^T_{*}(\z^r|_{M^T})$ on $T.$ \begin{lemma}\label{hzeta}$(1)$ $R^{i}\tilde{\pi}_{*}\tilde{\z}^r=0$ for all $i>0$ and $r>0$, and $R^{i}\tilde{\pi}_{*}\tilde{\z}^r=0$ for all $i<g_L$ and $r<0$; $(2)$ For $r>0$, $\tilde{\pi}_{*}\tilde{\z}^r$ is locally free of rank $r^{g_L}$ and $\tilde{\pi}_{*}\tilde{\z}\simeq \mo_{\tilde{\ls}};$ $(3)$ For $r<0$, $R^{g_L}\tilde{\pi}_{*}\tilde{\z}^r$ is locally free of rank $(-r)^{g_L}$. \end{lemma} \begin{proof}By Proposition 3.0.4 in \cite{yuan} we know that $\tilde{\z}(s)$ is ample for $s\gg0$, hence $\tilde{\z}$ restricted to every fiber of $\tilde{\pi}$ is ample. By Corollary \ref{tangent}, which we will prove later, the dualizing sheaf on every fiber of $\tilde{\pi}$ is invertible and corresponds to a torsion class in the Picard group. Hence, restricted to every fiber, $\tilde{\z}^r$ has no higher cohomology for $r>0$.
Hence $R^{i}\tilde{\pi}_{*}\tilde{\z}^r=0$ for all $i>0$ and $r>0$, and $\tilde{\pi}_{*}\tilde{\z}^r$ is locally free. Moreover, by the basic theory of Jacobians, we know that $\tilde{\pi}_{*}\tilde{\z}^r$ is of rank $r^{g_L}$. When $r=1$, $\tilde{\pi}_{*}\tilde{\z}$ is a line bundle with a nowhere vanishing section, hence isomorphic to $ \mo_{\tilde{\ls}}$. The argument for $r<0$ is analogous. \end{proof} \begin{proof}[Proof of Theorem \ref{thmone}]From the result in \cite{adv}, we know that \[Y^1(t)=\sum_{n}h^0(M(c^1_n),\lambda_{u})t^n=\frac{1}{(1-t)^{l+1}}.\] Then Theorem \ref{thmone} is just a corollary of Statement $(2)$ in Lemma \ref{hzeta}. \end{proof} We obtain the moduli space $\tilde{M}$ by blowing up $l-1$ generic points $x_1,\ldots,x_{l-1}$ on $X.$ On the other hand, we may first blow up one point $x_1$ to get a surface $X_1$ with the morphism $\rho_1:X_1\ra X$, and let $L_1=\rho_1^{*}L-E_1$. Then similarly we have the moduli space $M_1$ and $\z_1$, which is the determinant line bundle associated to $\mo_{X_1}$. Tautologically, blowing up the $l-1$ points $x_1,\ldots,x_{l-1}$ in $X$ is the same as blowing up the preimages $\rho_1^{-1}(x_2),\ldots,\rho_1^{-1}(x_{l-1})$ in $X_1$. Hence we get the same triple ($\tilde{X}$,$\tilde{M}$,$\tilde{\z}$) for both ($X$,$M$,$\z$) and ($X_1$,$M_1$,$\z_1$). There is a rational map $\nu:M_1\dashrightarrow M,$ which is not necessarily a morphism in general. However, because of Proposition \ref{muse}, we have the following immediate remark. Notice that if $L$ satisfies condition $(\ha'_2)$, then so does $L_1$ for $x_1$ generic. And $K.L=K_1.L_1-1$ with $K_1=\rho_1^{*}K+E_1$ the canonical divisor on $X_1$. \begin{rem}Let ($X$,$M$,$\z$), ($X_1$,$M_1$,$\z_1$) and ($\tilde{X}$,$\tilde{M}$,$\tilde{\z}$) be as in the previous paragraph.
Let $T$ be the projective line in $\ls$ defined by asking curves to pass through all the $l-1$ points $x_1,\ldots,x_{l-1}$, and $T_1$ the line in $|L_1|$ consisting of curves passing through all the $l-2$ points $\rho_1^{-1}(x_2),\ldots,\rho_1^{-1}(x_{l-1})$. If $L$ satisfies $(\ha'_1)$ and $L.K<-1$, then we have the following Cartesian diagram with $f$ and $f_1$ isomorphisms and $f^{*}\z^r\simeq f_1^{*}\z_1^r\simeq\tilde{\z}^r.$ \begin{equation}\label{carbu}\xymatrix{ M_1\ar[d]_{\pi_{1}} &M_1^{T_1}\ar[d]_{\pi^{T_1}_{1}}\ar[l]_{j_1\circ s_1}&\tilde{M} \ar[l]_{f_1}\ar[d]^{\tilde{\pi}_{\chi}} \ar[r]^{f} & M^T\ar[d]^{\pi^T}\ar[r]^{j\circ s}&M\ar[d]^{\pi} \\ |L_1|&T_1\ar[l]^{i_1\circ t_1}&|\tilde{L}|\ar[l]^{\imath_1}\ar[r]_{\imath}&T\ar[r]_{i\circ t} &\ls }\end{equation} For $\mchi$ with any $\chi$, we have an analogous Cartesian diagram as $(\ref{carbu})$. \end{rem} At the end of this section, we prove some lemmas which will be used in the next two sections. Let ($X$, $L$) and ($\tilde{X}$, $\tilde{L}$) be the same as in Proposition \ref{muse}. $K$ and $\tilde{K}$ are the canonical divisors on $X$ and $\tilde{X}$ respectively, and $\tilde{K}=\rho^{*}K+E_1+\ldots+E_{l-1}.$ Since there is more than one integral curve in $\ls$, $(\ha'_1)$ implies that $K$ is not effective, and hence neither is $\tilde{K}.$ \begin{lemma}\label{none} $h^1(\tilde{L})=h^1(L)=0,$ $h^2(\tilde{L})=h^2(L)=0,$ hence $\chi(L)=l+1$ and $\chi(\tilde{L})=2.$ \end{lemma} \begin{proof}Since $K$ is not effective, $L^{-1}\otimes K$ must be non-effective, which means $h^0(L^{-1}\otimes K)=h^2(L)=0.$ Similarly $h^2(\tilde{L})$ must be zero because $\tilde{K}$ is not effective. By a direct computation we get $\chi(L)-\chi(\tilde{L})=h^0(L)-h^0(\tilde{L})=l-1,$ hence $h^1(L)=h^1(\tilde{L})$.
On $X$ we have the following exact sequence \[0\rightarrow L^{-1}\otimes K\rightarrow K\rightarrow \mathcal{O}_C(K)\rightarrow0,\] with $C$ some smooth curve in $\ls.$ Since $L.K<0$, $\mo_{C}(K)$ is locally free on $C$ with negative degree and has no sections. So there is an injective map sending $H^1(L^{-1}\otimes K)$ into $H^1(K),$ and hence $h^1(L)=h^1(L^{-1}\otimes K)\leq h^1(K).$ Since $X$ is simply connected, $H^1(K)=0$ and $h^1(L)=0.$ Hence the lemma. \end{proof} \begin{lemma}\label{firstchern} Let $\omega_{\mchi^{IC}}$ denote the canonical line bundle of $\mchi^{IC}$, then we have $c_1(\omega_{\mchi^{IC}})=[(\pi_{\chi}^{IC})^{*}\mo_{\ls^{IC}}(1)^{\otimes L.K}].$ \end{lemma} \begin{proof}The proof is essentially the same as what Danila does in \cite{nila} for $X=\mathbb{P}^2$. $\mchi^{IC}$ is smooth, hence it will suffice to prove that $c_1(\mathcal{T}_{\mchi^{IC}})=[(\pi_{\chi}^{IC})^{*}\mo_{\ls}(-1)^{\otimes L.K}]$, where $\mathcal{T}_{\mchi^{IC}}$ is the tangent bundle on $\mchi^{IC}.$ Recall there is a morphism $\phi_{\chi}^{IC}:\Omega_{\chi}^{IC}\ra \mchi^{IC}$ which is a principal $G$-bundle with $G=\mathrm{PGL}(V)$. We have $Pic~(\mchi^{IC})\simeq Pic^G(\Omega_{\chi}^{IC})$ (Theorem 4.2.16 in \cite{dan}). And also, because there is no surjective homomorphism from $G$ to $\mathbb{G}_m,$ the natural morphism $Pic^G(\Omega_{\chi}^{IC})\ra Pic (\Omega_{\chi}^{IC})$ is injective (\cite{git} Chap 1, Section 3, Proposition 1.4). Hence it is enough to prove that $(\phi_{\chi}^{IC})^{*}(c_1(\mathcal{T}_{\mchi^{IC}}))=[(\phi_{\chi}^{IC})^{*}(\pi_{\chi}^{IC})^{*}\mo_{\ls}(-1)^{\otimes L.K}].$ We have a universal sheaf on $X\times\Omega_{\chi}^{IC},$ which we denote by $\mathcal{E}_{\chi}^{IC}$.
\begin{equation}\label{ppq} \xymatrix{ \mathcal{E}_{\chi}^{IC} \ar[r] & X\times \Omega_{\chi}^{IC} \ar[ld]^{q} \ar[d]^{p_{\chi}} \\ X & \Omega_{\chi}^{IC} \ar[d]^{\phi^{IC}_{\chi}} \\ &\mchi^{IC}\ar[d]^{\pi_{\chi}^{IC}}\\&\ls^{IC} } \end{equation} In the Grothendieck group, we have \[(\phi_{\chi}^{IC})^{*}\mathcal{T}_{\mchi^{IC}}=\mathcal{E}xt_{p_{\chi}}^1(\mathcal{E}_{\chi}^{IC},\mathcal{E}_{\chi}^{IC}).\] And $(\phi_{\chi}^{IC})^{*}(c_1(\mathcal{T}_{\mchi^{IC}}))=c_1((\phi_{\chi}^{IC})^{*}\mathcal{T}_{\mchi^{IC}}).$ So it is enough to compute $c_1((\phi_{\chi}^{IC})^{*}\mathcal{T}_{\mchi^{IC}}).$ Because of $(\ha'_1),$ we have that over every closed point $y\in \Omega_{\chi}^{IC},$ Ext$^i((\mathcal{E}_{\chi}^{IC})_y, (\mathcal{E}_{\chi}^{IC})_y)=0,$ for all $i\geq 2.$ Hence $\mathcal{E}xt_{p_{\chi}}^i(\mathcal{E}_{\chi}^{IC},\mathcal{E}_{\chi}^{IC})=0,$ for all $i\geq2,$ because fiberwise they are Ext$^i((\mathcal{E}_{\chi}^{IC})_y,(\mathcal{E}_{\chi}^{IC})_y).$ Also we have Ext$^0((\mathcal{E}_{\chi}^{IC})_y, (\mathcal{E}_{\chi}^{IC})_y)=\mathbb{C},$ hence $\mathcal{E}xt_{p_{\chi}}^0(\mathcal{E}_{\chi}^{IC},\mathcal{E}_{\chi}^{IC})=(p_{\chi})_{*}\mathcal{H}om(\mathcal{E}_{\chi}^{IC},\mathcal{E}_{\chi}^{IC})$ is a line bundle on $\Omega_{\chi}^{IC},$ hence isomorphic to $\mathcal{O}_{\Omega_{\chi}^{IC}}$ since it has a nowhere vanishing global section. 
Therefore \[ [det~\mathcal{E}xt_{p_{\chi}}^{\bullet}(\mathcal{E}_{\chi}^{IC},\mathcal{E}_{\chi}^{IC})]=[det~R^{\bullet}p_{\chi}~(\mathcal{E}xt^{\bullet}(\mathcal{E}_{\chi}^{IC},\mathcal{E}_{\chi}^{IC}))]=[(det~\mathcal{E}xt_{p_{\chi}}^1(\mathcal{E}_{\chi}^{IC},\mathcal{E}_{\chi}^{IC}))^{\vee}].\] Hence \begin{equation}\label{dualone}c_1((\phi_{\chi}^{IC})^{*}\mathcal{T}_{\mchi^{IC}})=-c_1(det~R^{\bullet}p_{\chi}~(\mathcal{E}xt^{\bullet}(\mathcal{E}_{\chi}^{IC},\mathcal{E}_{\chi}^{IC})))=-c_1(R^{\bullet}p_{\chi}~(\mathcal{E}xt^{\bullet}(\mathcal{E}_{\chi}^{IC},\mathcal{E}_{\chi}^{IC}))).\end{equation} By Grothendieck-Riemann-Roch, \[ch(R^{\bullet}p_{\chi}~\mathcal{E}xt^{\bullet}(\mathcal{E}_{\chi}^{IC},\mathcal{E}_{\chi}^{IC}))=(p_{\chi})_{*}(ch(\mathcal{E}_{\chi}^{IC})\cdot ch((\mathcal{E}_{\chi}^{IC})^{\vee})\cdot td(q^{*}\mathcal{T}_{X})),\] where $\mathcal{T}_{X}$ is the tangent sheaf on $X.$ Since $\mathcal{E}_{\chi}^{IC}$ is a torsion sheaf on $X\times\Omega_{\chi}^{IC},$ \begin{equation}\label{dualtwo}c_1(R^{\bullet}p_{\chi}~\mathcal{E}xt^{\bullet}(\mathcal{E}_{\chi}^{IC},\mathcal{E}_{\chi}^{IC}))=(p_{\chi})_{*}(-\frac12c_1(\mathcal{E}_{\chi}^{IC})c_1(\mathcal{E}_{\chi}^{IC})c_1(q^{*}\mathcal{T}_{X}))=(p_{\chi})_{*}(\frac12c_1(\mathcal{E}_{\chi}^{IC})^2c_1(q^{*}K)).\end{equation} $c_1(\mathcal{E}_{\chi}^{IC})$ is just the class of the support of $\mathcal{E}_{\chi}^{IC},$ which is the pull back along $id_{X}\times (\pi_{\chi}^{IC}\circ\phi_{\chi}^{IC})$ of the universal curve in $X\times \ls^{IC}.$ Therefore, $c_1(\mathcal{E}_{\chi}^{IC})=q^{*}L\otimes p_{\chi}^{*}F,$ where $F$ is the fiber class of $\pi_{\chi}^{IC}$ in $\Omega_{\chi}^{IC},$ i.e.
$\mo_{\Omega_{\chi}^{IC}}(F)\simeq(\phi_{\chi}^{IC})^{*}\circ(\pi_{\chi}^{IC})^{*}\mo_{\ls}(1).$ Since $q^{*}L.q^{*}L.q^{*}K=0,$ we have \[\frac12 (c_1(\mathcal{E}_{\chi}^{IC}))^2.(q^{*}K)=q^{*}L.q^{*}K.p_{\chi}^{*}F+\frac12q^{*}K.(p_{\chi}^{*}F)^2.\] Since also $(p_{\chi})_{*}(q^{*}K.(p_{\chi}^{*}F)^2)=0,$ we get \begin{eqnarray} (p_{\chi})_{*}(\frac12 (c_1(\mathcal{E}_{\chi}^{IC}))^2.(q^{*}K))&=&(p_{\chi})_{*}(q^{*}L.q^{*}K.p_{\chi}^{*}F)\nonumber\\&=&(L.K)F.\nonumber \end{eqnarray} Hence together with (\ref{dualone}) and (\ref{dualtwo}) we have \[c_1((\phi_{\chi}^{IC})^{*}\mathcal{T}_{\mchi^{IC}})= [(\phi_{\chi}^{IC})^{*}(\pi_{\chi}^{IC})^{*}\mo_{\ls^{IC}}(-1)^{\otimes L.K}].\] Hence the lemma. \end{proof} \begin{coro}\label{tangent}$c_1(\mathcal{T}_{\tilde{M}})= [\tilde{\pi}^{*}\mo_{|\tilde{L}|}(-1)^{\otimes (g_L-2)}],$ where $\mathcal{T}_{\tilde{M}}$ is the tangent bundle on $\tilde{M}.$ \end{coro} \begin{proof}Since $\tilde{M}$ is smooth, $c_1(\mathcal{T}_{\tilde{M}})=-c_1(\omega_{\tilde{M}}),$ where $\omega_{\tilde{M}}$ is the canonical line bundle on $\tilde{M}.$ Moreover, as stated in Proposition \ref{muse}, $\omega_{\tilde{M}}=f^{*}\omega_{M^T}$. Because $M^T$ is a complete intersection of $l-1$ divisors in $|\pi^{*}\mo_{\ls}(1)|$ in $M^{IC}$, and also because of Lemma \ref{firstchern}, we have $c_1(\omega_{M^T})=[(\pi^T)^{*}\mo_T(L.K+l-1)]$ and hence $c_1(\omega_{\tilde{M}})=[f^{*}(\pi^T)^{*}\mo_T(L.K+l-1)]=[\tilde{\pi}^{*}\mo_{|\tilde{L}|}(L.K+l-1)].$ Since $L.K+l-1=g_L-2+h^1(L)-h^1(K)=g_L-2,$ we have the lemma. \end{proof} \section{Splitting type for genus one case.} From now on we are always working on $\tilde{M}.$ So for simplicity, we drop all the $~\widetilde{}~$ and just write $X,$ $L,$ $M,$ $\z^r,$ $\pi$, etc. Now $M$ is a flat family of Jacobians over $\ls\simeq\pone$.
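The numerical identity used at the end of the proof of Corollary \ref{tangent} can be double-checked by elementary arithmetic. The sketch below is an informal aside with ad hoc variable names, not part of the argument: it verifies $L.K+l-1=g_L-2$ using only $l=\chi(L)-1=(L^2-L.K)/2$ (from Lemma \ref{none} and $\chi(\mo_X)=1$) and the genus formula $g_L=1+(L^2+L.K)/2$.

```python
# Check L.K + l - 1 == g_L - 2, where by Riemann-Roch on X
# (with chi(O_X) = 1 and h^1(L) = h^2(L) = 0, cf. Lemma `none`):
#   l   = chi(L) - 1 = (L^2 - L.K)/2
#   g_L = 1 + (L^2 + L.K)/2
# We sweep over integer values of L^2 and L.K of the same parity.
for L2 in range(-20, 21):
    for LK in range(-20, 21):
        if (L2 - LK) % 2:
            continue  # L^2 + L.K is always even on a surface
        l = (L2 - LK) // 2
        gL = 1 + (L2 + LK) // 2
        assert LK + l - 1 == gL - 2
print("identity L.K + l - 1 == g_L - 2 verified")
```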
We will give the formulas for $g_L=1,2$ by giving the explicit splitting types for all $\pi_{*}\z^r,$ $r>0.$ By Lemma 3.0.1 in \cite{yuan}, there is a natural global section of $\Theta$ which vanishes exactly at those $[\mathcal{F}]\in M$ with $H^0(\mathcal{F})\neq0.$ Let $D_{\z}=\{[\mf]\in M:h^0(\mf)\neq0\}$ be the divisor associated to that section. We prove the following proposition in this section. The technique we use is essentially the same as that in \cite{yuan} for the genus-one case. \begin{prop}\label{gone} If $g_L=1$, then for $r\geq2,$ \[\pi_{*}\Theta^r\simeq \mathcal{O}_{\ls}\oplus(\mathcal{O}_{\ls}(-i))^{\oplus_{i=2}^r}.\] \end{prop} \begin{proof} In $X\times\ls\simeq X\times \mathbb{P}^1$, there is a universal curve $\mathcal{C}$ such that every fiber $\mathcal{C}_s$ is just the curve represented by $s\in\ls.$ \[\xymatrix{ \mathcal{C} \ar[r] & X\times \ls \ar[ld]^{q} \ar[d]^{p} \\ X &\ls }\] Since $\mc_s$ is integral of genus one, $\mo_{\mc_s}$ is stable of Euler characteristic zero for every $s$. Hence the structure sheaf $\mo_{\mc}$ of $\mathcal{C}$ induces an injective morphism embedding $\ls$ as a subscheme of $M.$ \[\imath:\ls\rightarrow M.\] It is easy to see that $\imath$ provides a section of the projection $\pi.$ The image of $\imath$ is contained in $D_{\z}$, and moreover we have the following lemma. \begin{lemma}$\pi$ restricted to $D_{\z}$ is an isomorphism and $\imath$ is its inverse. \end{lemma} \begin{proof}Let $[\mf]\in M$, and $C$ its support. Since $C$ is integral and of genus one, we have $H^0(\mathcal{F})\neq0\Leftrightarrow \mathcal{F}\simeq\mathcal{O}_{C}$. Hence $D_{\z}$ intersects every fiber of $\pi$ at only one reduced point, so $\pi$ restricted to $D_{\z}$ is a morphism of degree $1$, hence an isomorphism.
It is obvious that $\pi\circ\imath=id_{\ls}.$ \end{proof} Thus on $M$ we have \[0\rightarrow\Theta^{-1}\rightarrow\mathcal{O}_{M}\rightarrow\mathcal{O}_{D_{\z}}\rightarrow0.\] Tensoring by $\Theta^r$ with $r\geq2$, we get \begin{equation}\label{odgone}0\rightarrow\Theta^{r-1}\rightarrow\Theta^r\rightarrow\mathcal{O}_{D_{\z}}(\Theta^r)\rightarrow0.\end{equation} $R^1\pi_{*}\Theta^{r-1}=0$ by Lemma \ref{hzeta}. Push (\ref{odgone}) forward via $\pi$ and we have \begin{equation}\label{zonp} 0\rightarrow\pi_{*}\Theta^{r-1}\rightarrow\pi_{*}\Theta^r\rightarrow\pi_{*}\mathcal{O}_{D_{\z}}(\Theta^r)\rightarrow0.\end{equation} Since $D_{\z}\simeq\ls$ and $\pi\circ\imath=id_{\ls},$ $\pi_{*}\mathcal{O}_{D_{\z}}(\Theta^r)\simeq \pi_{*}\imath_{*}\imath^{*}\Theta^r\simeq\imath^{*}\z^r.$ According to the universal property of $\Theta$, we have $\imath^{*}\Theta^r\simeq (det(R^{\bullet}p~[\mathcal{O}_{\mathcal{C}}]))^{-r}$. We have an exact sequence on $X\times\ls$. \[0\rightarrow q^{*}\mathcal{O}_{X}(-L)\otimes p^{*}\mathcal{O}_{\ls}(-1)\rightarrow\mathcal{O}_{X\times\ls}\rightarrow\mathcal{O}_{\mathcal{C}}\rightarrow0.\] Hence $ (det(R^{\bullet}p~[\mathcal{O}_{\mathcal{C}}]))^{-1}\simeq (det(R^{\bullet}p~[\mathcal{O}_{X\times\ls}]))^{-1}\otimes det(R^{\bullet}p~[q^{*}\mathcal{O}_{X}(-L)\otimes p^{*}\mathcal{O}_{\ls}(-1)]).$ And also $det(R^{\bullet}p~[\mathcal{O}_{X\times\ls}])\simeq \mathcal{O}_{\ls};$ $det(R^{\bullet}p~[q^{*}\mathcal{O}_{X}(-L)\otimes p^{*}\mathcal{O}_{\ls}(-1)])\simeq \mathcal{O}_{\ls}(-1)^{\otimes \chi(\mathcal{O}_{X}(-L))}.$ Since $g_L=1$, $\chi(\mathcal{O}_{X}(-L))=\chi(\mathcal{O}_{X})=1$ and hence $\imath^{*}\Theta^r\simeq \mathcal{O}_{\ls}(-r).$ The exact sequence (\ref{zonp}) splits for every $r>1$: the quotient is $\mathcal{O}_{\ls}(-r)$ and, by induction, the sub only has summands $\mathcal{O}_{\ls}(-i)$ with $i<r$, so $\mathrm{Ext}^1(\mathcal{O}_{\ls}(-r),\mathcal{O}_{\ls}(-i))=H^1(\mathcal{O}_{\ls}(r-i))=0.$
And by induction we get \[\pi_{*}\Theta^r\simeq \mathcal{O}_{\ls}\oplus\mathcal{O}_{\ls}(-i)^{\oplus_{i=2}^r}.\] \end{proof} In this case, the generating function can be written down as \begin{eqnarray}Z^r(t)&=&\sum_{n}h^0(M,\lcn)t^n\nonumber\\ &=&\sum_{n}h^0(M,\Theta^r\otimes\pi^{*}\mathcal{O}_{\ls}(n))t^n\nonumber \\ &=&\sum_{n}h^0(\ls, \pi_{*}(\Theta^r)\otimes\mathcal{O}_{\ls}(n))t^n\nonumber\\ &=&\large{\frac{1+t^{2}+t^{3}+\ldots+t^{r}}{(1-t)^2}}.\nonumber \end{eqnarray} \begin{rem}This result is compatible with Statement 2 in Theorem 4.4.1 in \cite{yuan} when $X=\mathbb{P}^2$ and $L=3H$, or $X=\mathbb{P}(\mo_{\pone}\oplus\mo_{\pone}(-e))$ and $L=2G+(e+2)F$ with $e=0,1.$ \end{rem} \begin{proof}[Proof of Theorem \ref{thmtwo}] Recall that we denote \[Y^r_{g_L=1}(t)=\sum_{n\geq0}y^r_{n,g_L=1}t^n=\frac{1+t^{2}+t^{3}+\ldots+t^{r}}{(1-t)^2};\] and let $y_{n,g_L=1}^r=0$ for all $n<0$. In this case we have \[Y^r_{g_L=1}(t)=\frac{Z^r(t)}{(1-t)^{l-1}},\] hence Theorem \ref{thmtwo}. \end{proof} \section{Splitting type for genus two case.} Recall that we get the one-dimensional linear system $\ls$ by blowing up $l-1$ points. So we can write $L=L'-E_1-\ldots-E_{l-1}$ with $L'$ effective and $E_i.L=1$. In addition we require $l\geq3.$ Then we have the following proposition.
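Before turning to genus two, we note that the genus-one closed form above admits a quick numerical consistency check. The following is an informal aside with ad hoc names, not part of any proof: expanding $h^0$ of the splitting type $\mathcal{O}_{\ls}\oplus\bigoplus_{i=2}^{r}\mathcal{O}_{\ls}(-i)$ from Proposition \ref{gone} term by term recovers the coefficients of $(1+t^{2}+\cdots+t^{r})/(1-t)^2$.

```python
def h0_P1(m):
    # h^0(O_{P^1}(m)) = m + 1 for m >= 0, and 0 otherwise
    return max(m + 1, 0)

N = 60  # number of t-coefficients to compare
for r in range(2, 8):
    # coefficient of t^n from the splitting type O + O(-2) + ... + O(-r), twisted by O(n)
    lhs = [h0_P1(n) + sum(h0_P1(n - i) for i in range(2, r + 1)) for n in range(N)]
    # coefficient of t^n in (1 + t^2 + ... + t^r)/(1-t)^2,
    # using 1/(1-t)^2 = sum_{k>=0} (k+1) t^k
    numer = [1] + [0] + [1] * (r - 1)  # coefficients of 1 + t^2 + ... + t^r
    rhs = [sum(numer[j] * (n - j + 1) for j in range(min(n, r) + 1))
           for n in range(N)]
    assert lhs == rhs
print("genus-one generating function verified for r = 2..7")
```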
\begin{prop}\label{gtwo} For the one-dimensional linear system $L=L'-E_1-\ldots-E_{l-1}$ with $g_L=2$, if $l-1\geq 2$, then $(1)$ $\pi_{*}\Theta^{r-1}$ is a direct summand of $\pi_{*}\Theta^{r}.$ Write $\pi_{*}\Theta^r=\pi_{*}\Theta^{r-1}\oplus\Delta_r;$ $(2)$ $\pi_{*}\Theta^2\simeq \mathcal{O}_{\ls}\oplus(\mathcal{O}_{\ls}(-2))^{\oplus^3},$ $\pi_{*}\Theta^3\simeq\mathcal{O}_{\ls}(-4)\oplus(\mathcal{O}_{\ls}(-3)^{\oplus^4})\oplus(\mathcal{O}_{\ls}(-2)^{\oplus^3})\oplus\mathcal{O}_{\ls};$ $(3)$ for $r\geq4,$ we have the recursion formula \[\pi_{*}\Theta^r\simeq \pi_{*}\Theta^{r-1}\oplus(\mathcal{O}_{\ls}(-r)^{\oplus^2})\oplus(\mathcal{O}_{\ls}(-r-1)^{\oplus^2})\oplus(\Delta_{r-2}\otimes\mathcal{O}_{\ls}(-2)).\] \end{prop} Before proving Proposition \ref{gtwo}, we show some lemmas. \begin{lemma}\label{inter}Let $\mathcal{T}$ be the tangent bundle on $M,$ and let $c_i(\mathcal{T})$ be its $i$-th Chern class; then $c_1(\mathcal{T}).c_i(\mathcal{T})=0$ for all $i.$ \end{lemma} \begin{proof}According to Corollary \ref{tangent} we have $c_1(\mathcal{T})=[\pi^{*}\mo_{\ls}(-1)^{\otimes (g_L-2)}].$ Let $F$ denote the fiber class of $\pi$. It is enough to show that $c_i(\mathcal{T})|_F=0.$ On the other hand, we can choose a representative of $F$ isomorphic to the Jacobian of some smooth curve. The tangent bundles on Jacobians are trivial, with all Chern classes equal to zero. Hence the lemma. \end{proof} Since $h^0(\z)=1,$ we have only one $\z$-divisor $D_{\z}$. Let $M_1=D_{\Theta}$. We have the following exact sequences on $M$: \begin{equation}\label{a}0\rightarrow\Theta^{-1}\rightarrow\mathcal{O}_{M}\rightarrow\mathcal{O}_{M_1}\rightarrow0. \end{equation} \begin{equation}\label{b}0\rightarrow\mathcal{O}_{M}\rightarrow\Theta\rightarrow\mathcal{O}_{M_1}(\Theta)\rightarrow0. \end{equation} \begin{equation}\label{c}~~~~~~~~~~0\rightarrow\Theta^{r-1}\rightarrow\Theta^{r}\rightarrow\mathcal{O}_{M_1}(\Theta^{r})\rightarrow0,~~~~r\geq2.
\end{equation} Pushing (\ref{a}) forward, we get three isomorphisms of bundles on $\ls.$ \begin{equation}\label{aa}0\rightarrow\pi_{*}\mathcal{O}_{M}\rightarrow\pi_{*}\mathcal{O}_{M_1}\rightarrow0. \end{equation} \begin{equation}\label{ab}0\rightarrow R^1\pi_{*}\mathcal{O}_{M}\rightarrow R^1\pi_{*}\mathcal{O}_{M_1}\rightarrow0. \end{equation} \begin{equation}\label{ac}0\rightarrow R^2\pi_{*}\Theta^{-1}\rightarrow R^2\pi_{*}\mathcal{O}_{M}\rightarrow0. \end{equation} The isomorphism in (\ref{aa}) holds because $\pi_{*}\z^{-1}=R^1\pi_{*}\z^{-1}=0$ by Lemma \ref{hzeta}. The morphism in (\ref{ac}) is at first surjective, because the relative dimension of $M_1$ over $\ls$ is $1$ and hence $R^2\pi_{*}\mathcal{O}_{M_1}=0;$ it is then an isomorphism because $R^2\pi_{*}\Theta^{-1}$ is a line bundle and $R^2\pi_{*}\mathcal{O}_{M}$ is locally free of rank $1$ on the open set of smooth curves in $\ls$. And then the morphism in (\ref{ab}) has to be an isomorphism because both (\ref{aa}) and (\ref{ac}) are. By pushing forward sequence (\ref{b}), we get three isomorphisms of bundles on $\ls.$ \begin{equation}\label{ba}0\rightarrow\pi_{*}\mathcal{O}_{M}\rightarrow\pi_{*}\Theta\rightarrow0. \end{equation} \begin{equation}\label{bb}0\rightarrow \pi_{*}\mathcal{O}_{M_1}(\Theta)\rightarrow R^1\pi_{*}\mathcal{O}_{M}\rightarrow0. \end{equation} \begin{equation}\label{bc}0\rightarrow R^1\pi_{*}\mathcal{O}_{M_1}(\Theta)\rightarrow R^2\pi_{*}\mathcal{O}_{M}\rightarrow0. \end{equation} We have an isomorphism in (\ref{ba}) because both are line bundles isomorphic to $\mo_{\ls}$; (\ref{bb}) and (\ref{bc}) hold because $R^j\pi_{*}\Theta^i=0$ for all $j,i>0.$ So we have the following lemma. \begin{lemma}\label{va} On $\ls,$ we have $(1)$ $\pi_{*}\mathcal{O}_{M}\simeq\pi_{*}\Theta\simeq\pi_{*}\mathcal{O}_{M_1}\simeq\mathcal{O}_{\ls};$ $(2)$ $R^2\pi_{*}\Theta^{-1}\simeq R^2\pi_{*}\mathcal{O}_{M}\simeq R^1\pi_{*}\mathcal{O}_{M_1}(\Theta)\simeq\mathcal{O}_{\ls}(-2)$.
$(3)$ $R^1\pi_{*}\mathcal{O}_{M}\simeq R^1\pi_{*}\mathcal{O}_{M_1}\simeq\pi_{*}\mathcal{O}_{M_1}(\Theta),$ and they are of rank 2 and Euler characteristic $0$. $(4)$ $R^1\pi_{*}\mathcal{O}_{M_1}(\Theta^i)=0,$ for all $i\geq2.$ \end{lemma} \begin{proof}Statement $(1)$ is trivial. For statement $(2)$: recall that $\z$ restricted to a generic fiber is the usual $\theta$-bundle on the Jacobian by Lemma 3.0.1 in \cite{yuan}, and hence we have $(D_{\z})^{g_L}.F=g_L!.$ By Corollary \ref{tangent} we know that $c_1(\mathcal{T}_M)=0$ since $g_L=2.$ Hence by Hirzebruch-Riemann-Roch, we have $\chi(\Theta)=-\chi(\Theta^{-1}).$ On the other hand we know that $\chi(\Theta)=\sum(-1)^i\chi (R^i\pi_{*}\Theta)=\chi(\pi_{*}\Theta)=1.$ So as a result $\chi(\Theta^{-1})=\chi(R^2\pi_{*}\Theta^{-1})=-1,$ hence the statement. For statement $(3)$: from Lemma \ref{inter} and Hirzebruch-Riemann-Roch we know that $\chi(\mathcal{O}_{M})=\frac{1}{24}c_1(\mathcal{T}).c_2(\mathcal{T})=0,$ hence $\chi(R^1\pi_{*}\mathcal{O}_{M})=\chi(\pi_{*}\mathcal{O}_{M})+\chi (R^2\pi_{*}\mathcal{O}_{M})=0.$ Finally, for statement $(4)$, we push (\ref{c}) forward and get $R^1\pi_{*}\mathcal{O}_{M_1}(\Theta^r)=0,$ for $r\geq2$. \end{proof} Push (\ref{c}) forward and we get an exact sequence of bundles on $\ls.$ \begin{equation}\label{d}0\rightarrow\pi_{*}\Theta^{r-1}\rightarrow\pi_{*}\Theta^{r}\rightarrow\pi_{*}\mathcal{O}_{M_1}(\Theta^{r})\rightarrow0.~~~for~r\geq2. \end{equation} We have already seen that $\pi_{*}\Theta\simeq\mathcal{O}_{\ls}$. To get the recursion formula, it is enough to compute the splitting type of $\pi_{*}\mathcal{O}_{M_1}(\Theta^{r})$ for all $r\geq2.$ We define two other determinant line bundles associated to $\mathcal{O}_{X}(E_2-E_1)$ and $\mathcal{O}_{X}(E_1-E_2)$ on $X$ respectively. Let $\eta_1=\lambda_{[\mathcal{O}_{X}(E_2-E_1)]}$ and $\eta_2=\lambda_{[\mathcal{O}_{X}(E_1-E_2)]}.$ According to Lemma 3.0.1 in \cite{yuan}, there is a natural global section of $\eta_1$ (resp.
$\eta_2$) whose vanishing locus consists of all $[\mathcal{F}]$ such that $H^0(\mathcal{F}\otimes\mathcal{O}_{X}(E_2-E_1))\neq0$ (resp. $H^0(\mathcal{F}\otimes\mathcal{O}_{X}(E_1-E_2))\neq0$). We denote the two divisors associated to those two natural global sections by $D_1$ and $D_2$ respectively. \begin{rem}\label{etath} Since $[\mo_X(E_1-E_2)]+[\mo_X(E_2-E_1)]=2[\mo_X]-2[\mo_{pt}],$ we have $\eta_1\otimes\eta_2\simeq \Theta^2(2)$ on $M.$ \end{rem} Let $\Pi:=D_1\cap M_1$ and $\Sigma:=D_2\cap M_1.$ Now let $\mathcal{C}$ be the universal curve in $X\times \ls$ and $q$ the projection from $X\times\ls$ to $X$. Then $\mo_{\mathcal{C}}\otimes q^{*}\mo_X(E_1)$ is a flat family of sheaves over $\ls$ and induces a morphism from $\ls$ to $M$ which is a section of $\pi.$ The image of this morphism, which we denote by $\Pi_1,$ is contained in $\Pi=D_1\cap M_1.$ And let $\Pi_2=\overline{\Pi-\Pi_1}.$ We define similarly $\Sigma_1$ and $\Sigma_2$: $\Sigma_1$ is the image of $\ls$ via the morphism induced by the flat family $\mo_{\mathcal{C}}\otimes q^{*}\mo_X(E_2)$ on $X\times\ls$, and $\Sigma_2:=\overline{\Sigma-\Sigma_1}$. Both $\Pi_1$ and $\Sigma_1$ are isomorphic to $\ls\simeq\mathbb{P}^1$. $\Pi_1\cap\Sigma_1=\emptyset$ because $E_1$ and $E_2$ intersect every curve in $\ls$ at two different points. For $\Pi_2$ and $\Sigma_2,$ we have the following lemma. \begin{lemma}$\Pi_2$ is also isomorphic to $\ls$ and provides a section of $\pi$ as well. The same is true for $\Sigma_2.$ \end{lemma} \begin{proof}Because $E_1$ and $E_2$ do not intersect each other, they intersect every curve at two different points. And because curves in $\ls$ are of genus $2$, any two different points are not linearly equivalent. So for $i=1,2$, $\eta_i$ restricted to a fiber is algebraically but not linearly equivalent to the usual $\theta$-bundle. Moreover, by the basic theory of Jacobians, we know that the intersection number of $\Pi$ with a fiber of $\pi$ is 2.
So $\pi|_{\Pi}$ is a morphism of degree $2$, and restricted to $\overline{\Pi-\Pi_1}$ it is a morphism of degree $1$ over $\mathbb{P}^1$, hence an isomorphism. So $\Pi_2=\overline{\Pi-\Pi_1}$ is isomorphic to $\ls$ and provides a section of $\pi.$ It is analogous for $\Sigma_2.$ \end{proof} Let $C$ be any curve in $\ls$. We denote by $p_C^i$ the point where $E_i$ meets $C.$ $C$ is smooth at $p_C^i$. For any point $q_C^1\in C$ such that $h^0(q^1_C-p_C^1+p_C^2)\neq0,$ i.e. $[\mo_C(q^1_C)]\in\Pi,$ there is another point $q_C^2\in C$ satisfying that $q^1_C+p_C^2$ is linearly equivalent to $p_C^1+q_C^2$ on $C.$ Hence if $p^2_C\neq q^2_C$ and $q^1_C\neq p_C^1,$ then $h^0(q^1_C+p^2_C)\geq2.$ And hence by Riemann-Roch, we know that $h^1(q^1_C+p^2_C)=h^0(\omega_C-q^1_C-p^2_C)\geq 1,$ and hence $\omega_C\sim q^1_C+p^2_C$ since $C$ is of genus $2$ and the canonical sheaf $\omega_C$ on $C$ is of degree $2.$ So either $q^1_C=p_C^1$ or $\omega_C\sim p^2_C+q^1_C.$ And if $q^1_C=p^1_C$, then we have $q^2_C=p^2_C$ and $\omega_C\sim p^1_C+p^2_C$. Hence we can assume that $q^1_C\neq p^1_C$ for a generic $C$, and hence $\Pi_1\neq \Pi_2$, $\Sigma_1\neq\Sigma_2$. Hence we can specify the universal sheaf on $X\times \Pi_2$ (resp. $X\times\Sigma_2$) as $\mo_{\mathcal{C}}\otimes q^{*}\mo_X(K+L-E_2)$ (resp. $\mo_{\mathcal{C}}\otimes q^{*}\mo_X(K+L-E_1)$). This is because $\mo_C(K+L)\simeq\omega_C$ for all $[C]\in\ls,$ and $\omega_C\sim p^2_C+q^1_C$ implies that $\mo_C(K+L-E_2)\sim\mo_C(q^1_C).$ \begin{lemma}\label{deg}For $i=1,2$ we have $\pi_{*}(\Theta^r|_{\Pi_i})\simeq\mathcal{O}_{\ls}(-r\chi(\mo_X))=\mo_{\ls}(-r),$ which is equivalent to saying that $D_{\z}.\Pi_i=-1.$ And the same holds for $\Sigma_i,$ $i=1,2.$ \end{lemma} \begin{proof}By the universal property of $\Theta$ we have that $\z|_{\Pi_1}=(det~R^{\bullet}p~\mathcal{U}^1)^{-1}$ where $\mathcal{U}^1\simeq \mo_{\mathcal{C}}\otimes q^{*}\mo_X(E_1) $ is the universal sheaf on $X\times\Pi_1$.
And also we have the exact sequence on $X\times \ls:$ \[0\rightarrow p^{*}\mathcal{O}_{\ls}(-1)\otimes q^{*}\mathcal{O}_{X}(-L+E_1)\rightarrow q^{*}\mathcal{O}_{X}(E_1)\rightarrow \mathcal{U}^1\rightarrow0.\] So \[det~R^{\bullet}p~\mathcal{U}^1\simeq det~R^{\bullet}p~(q^{*}\mathcal{O}_{X}(E_1))\otimes (det~R^{\bullet}p~(p^{*}\mathcal{O}_{\ls}(-1)\otimes q^{*}\mathcal{O}_{X}(-L+E_1)))^{-1}.\] Then we have \[ det~R^{\bullet}p~(q^{*}\mathcal{O}_{X}(E_1))\simeq \mo_{\ls},\] \[ det~R^{\bullet}p~(p^{*}\mathcal{O}_{\ls}(-1)\otimes q^{*}\mathcal{O}_{X}(-L+E_1))\simeq \mo_{\ls}(-1)^{\otimes \chi(\mo_X(-L+E_1))}.\] $\chi(\mo_X(-L+E_1))=\chi(\mo_X(E_1))-\chi(\mo_C(E_1))=\chi(\mo_X(E_1))$, since $C$ is a curve of genus $2$ and $\mo_{C}(E_1)$ is a line bundle of degree $1$ on $C.$ By Hirzebruch-Riemann-Roch we know that $\chi(\mo_X(E_1))=\chi(\mo_X)=1.$ For $\Pi_2$, we use $\mo_{\mathcal{C}}\otimes q^{*}\mo_X(K+L-E_2)$ as the universal sheaf. A similar computation shows that $D_{\z}.\Pi_2=-\chi(\mo_X(K+L-E_2))=-\chi(\mo_X)$ since $K.(K+L)=2g_L-2=2.$ For $\Sigma_i$ the argument is analogous. \end{proof} $\Pi+\Sigma\sim(2D_{\z}+2F)|_{D_{\z}}.$ Lemma \ref{deg} implies that $(\Pi+\Sigma).D_{\z}=-4$. Moreover $F.D^2_{\z}=g_L!=2,$ hence we have $2D_{\z}^3+4=(\Pi+\Sigma).D_{\z}=-4.$ Then we get the following proposition immediately. \begin{prop}\label{alpha}On the moduli space $M,$ we have $D_{\z}^3=-4.$ \end{prop} Since we know that $\chi(\z)=1$, by Proposition \ref{alpha} we can compute $\chi(\z^r(n))$ for all $r$ and $n$. And we have \begin{equation}\label{rnchi} \chi(\z^r(n))=-\frac23r^3+nr^2+\frac53r. \end{equation} However, if we want to write down explicitly the splitting type of $\pi_{*}\z^r$ and get a result which is not only numerical but also gives some geometric description, we have to see how the four projective lines $\Pi_1$, $\Pi_2$, $\Sigma_1$ and $\Sigma_2$ intersect each other.
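Formula (\ref{rnchi}) can be checked numerically against Proposition \ref{gtwo}. The sketch below is an informal aside with ad hoc notation, not needed for the proofs: iterating the recursion from the explicit splitting types of $\pi_{*}\Theta^2$ and $\pi_{*}\Theta^3$, the resulting bundle always has rank $r^{g_L}=r^2$ and Euler characteristic $\chi(\z^r(0))$ on $\ls\simeq\mathbb{P}^1$.

```python
from fractions import Fraction

def chi_zr_n(r, n):
    # formula (rnchi): chi(z^r(n)) = -2/3 r^3 + n r^2 + 5/3 r
    return Fraction(-2, 3) * r**3 + n * r**2 + Fraction(5, 3) * r

# splitting types of pi_* Theta^r, recorded as multisets of degrees on P^1
split = {1: [0], 2: [0, -2, -2, -2], 3: [0, -2, -2, -2, -3, -3, -3, -3, -4]}
# Delta_r = pi_* Theta^r minus pi_* Theta^{r-1}; here the removed summands
# happen to be the most negative ones, so sorting suffices
delta = {r: sorted(split[r])[: len(split[r]) - len(split[r - 1])] for r in (2, 3)}
for r in range(4, 12):
    # recursion: pi_* Theta^r = pi_* Theta^{r-1} + O(-r)^2 + O(-r-1)^2 + Delta_{r-2}(-2)
    new = split[r - 1] + [-r, -r] + [-r - 1, -r - 1] + [d - 2 for d in delta[r - 2]]
    split[r] = sorted(new)
    delta[r] = sorted([-r, -r] + [-r - 1, -r - 1] + [d - 2 for d in delta[r - 2]])
    assert len(split[r]) == r * r                # rank r^{g_L}, with g_L = 2
    chi = sum(d + 1 for d in split[r])           # chi(O_{P^1}(d)) = d + 1
    assert chi == chi_zr_n(r, 0)
print("recursion consistent with (rnchi) for r = 4..11")
```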
It is obvious that $\Pi_1\cap\Sigma_1=\emptyset$ because $E_1$ and $E_2$ intersect every curve in $\ls$ at two different points. We have several lemmas: \begin{lemma}\label{pszero}$\Pi_2$ has no intersection with $\Sigma_2,$ i.e. $\Pi_2.\Sigma_2=0$. \end{lemma} \begin{proof} Let $C$ be any curve in $\ls$. As we mentioned before, if $[\mo_C(q_C^1)]\in\Pi_2$ and $[\mo_C(q_C^2)]\in\Sigma_2$, then $q_C^1+p_C^2\sim p_C^1+q_C^2$ with $p^i_C$ the point where $C$ meets $E_i.$ Since the two distinct points $p^1_C\neq p^2_C$ are not linearly equivalent and $p_C^1-p_C^2\sim q_C^1-q_C^2,$ we have $q^1_C\neq q^2_C$ for any $[C]\in\ls$, and hence the lemma. \end{proof} Now we compute $\Pi_1.\Sigma$ and $\Pi.\Sigma_1.$ Notice that the universal sheaf $\mathcal{U}^1$ over $X\times \Pi_1$ can be chosen to be $\mathcal{O}_{\mathcal{C}}\otimes q^{*}\mathcal{O}_{X}(E_1);$ as a result, $[\mathcal{F}]\in \Pi_1\cap \Sigma\Leftrightarrow H^0(\mathcal{O}_{C_{\mathcal{F}}}\otimes\mo_X(E_1)\otimes\mo_X(E_1-E_2))\neq0,$ where $C_{\mf}$ is the supporting curve of $\mf.$ It is analogous for $\Pi\cap\Sigma_1.$ Let $\mathcal{B}^1=\mathcal{O}_{\mathcal{C}}\otimes q^{*}\mathcal{O}_X(2E_1-E_2),$ $\mathcal{B}^2=\mathcal{O}_{\mathcal{C}}\otimes q^{*}\mathcal{O}_X(2E_2-E_1).$ These two sheaves are also flat families over $\ls$, hence induce two embeddings of $\ls$ into $M$, both of which are sections of $\pi.$ Denote their images in $M$ by $P_1$ and $P_2$ respectively.
$P_i\simeq\pone.$ \begin{lemma}\label{degp}$\Theta|_{P_i}\simeq \mathcal{O}_{\mathbb{P}^1}(-\chi(\mo_X)+2)=\mo_{\mathbb{P}^1}(1),$ for $i=1,2.$ \end{lemma} \begin{proof}The proof is analogous to that of Lemma \ref{deg}: instead of $\chi(-L+E_1)$ we have $\chi(-L+2E_1-E_2)$ or $\chi(-L+2E_2-E_1),$ which are equal to $\chi(-L+E_1)-2.$ \end{proof} \begin{lemma}\label{dm} For any curve $\mathbf{C}$ in $M,$ let $d=deg~\Theta|_{\mathbf{C}}.$ $(1)$ If $d<0$, then $\mathbf{C}\subset M_1.$ $(2)$ If $d\geq 0$ and $\mathbf{C}$ is not contained in $M_1,$ then $d=\#(\mathbf{C}\cap M_1),$ counting with multiplicity. \end{lemma} \begin{proof}If the curve is not contained in $M_1=D_{\z},$ then there is a nonzero global section of $\Theta$ vanishing at points corresponding to sheaves with global sections. Hence the degree of $\Theta$ restricted to that curve is nonnegative and must equal $\#(\mathbf{C}\cap M_1),$ counting with multiplicity. \end{proof} \begin{rem}\label{contain}Because of Lemma \ref{dm}, if $P_1$ (resp. $P_2$) is not contained in $M_1,$ then $\Pi_1.\Sigma=\# \Pi_1\cap \Sigma=1$ (resp. $\Pi.\Sigma_1=\# \Pi\cap \Sigma_1=1$). \end{rem} \begin{lemma}\label{notcon}Neither $P_1$ nor $P_2$ is contained in $M_1.$ \end{lemma} \begin{proof}Note that a priori, $P_i$ is contained in $D_i$ for $i=1,2.$ If $P_1$ is contained in $M_1$, then $P_1\subset M_1\cap D_1=\Pi$. Hence $P_1$ has to be either $\Pi_1$ or $\Pi_2$. But $\z$ restricted to $P_1$ has degree $1$, while restricted to $\Pi_i$ it has degree $-1$ by Lemma \ref{deg}. So we know that $P_1$ cannot be contained in $M_1$. For $P_2$ it is analogous. \end{proof} Because of Lemma \ref{notcon} and Remark \ref{contain}, we have $\Pi_1.\Sigma=\Pi.\Sigma_1=1$. On the other hand, we have $\Pi_1\cap\Sigma_1=\emptyset$, $\Pi_2\cap\Sigma_2=\emptyset$.
Hence we have $\Pi_1.\Sigma_2=1$ and $\Pi_2.\Sigma_1=1.$ We now only need to compute $\Pi_1.\Pi_2$ and $\Sigma_1.\Sigma_2.$ Recall that $D_{\z}=M_1.$ On $M_1$ we have the exact sequence \begin{equation}\label{me} 0\rightarrow\eta_1^{-1}\otimes\eta_2^{-1}\rightarrow\mathcal{O}_{M_1}\rightarrow\mathcal{O}_{M_2}\rightarrow0. \end{equation} Here $M_2$ is a subscheme of $M_1,$ which equals $\Pi+\Sigma$ as a divisor, and $\Pi+\Sigma\sim (2D_{\z}+2F)|_{D_{\z}}.$ Because of Remark \ref{etath} we can rewrite sequence (\ref{me}) as follows: \begin{equation}\label{vme} 0\rightarrow\z^{-2}(-2)|_{M_1}\rightarrow\mathcal{O}_{M_1}\rightarrow\mathcal{O}_{M_2}\rightarrow0. \end{equation} Using formula (\ref{rnchi}), by a direct computation we get $\chi(\mo_{M_2})=2$. Hence the arithmetic genus $1-\chi(\mo_{M_2})=-1$ of $M_2$ is negative. Also we know that $M_2=\Pi_1+\Pi_2+\Sigma_1+\Sigma_2,$ and the $\Pi_i$ and the $\Sigma_i$ are isomorphic to $\pone.$ So $M_2$ cannot be connected, and therefore $\Pi_1\cap\Pi_2=\Sigma_1\cap\Sigma_2=\emptyset$. \begin{rem}\label{rzero}So the picture of these four curves is very clear: $\Pi_1\cap\Pi_2=\emptyset=\Sigma_1\cap\Sigma_2;$ $\Pi_1.\Sigma_2=1$ and $\Pi_2.\Sigma_1=1;$ and $\Pi_1\cap\Sigma_1=\Pi_2\cap\Sigma_2=\emptyset.$ \end{rem} We have the following exact sequence on $M_2$: \begin{equation}\label{ssmr} 0\rightarrow(\mathcal{O}_{\Pi_1}(-1)\oplus\mathcal{O}_{\Pi_2}(-1))\otimes\Theta^r\rightarrow\mathcal{O}_{M_2}(\Theta^r)\rightarrow(\mathcal{O}_{\Sigma_1}\oplus\mathcal{O}_{\Sigma_2})\otimes\Theta^r\rightarrow0. \end{equation} We then have the following proposition. 
\begin{prop}\label{th} $\pi_{*}\mathcal{O}_{M_2}(\Theta^r)\simeq \mathcal{O}_{\ls}(-1-r)^{\oplus^2}\oplus\mathcal{O}_{\ls}(-r)^{\oplus^2}.$ \end{prop} \begin{proof} By Lemma \ref{deg} we have $\pi_{*}(\Theta^r|_{\Pi_i})\simeq\pi_{*}(\Theta^r|_{\Sigma_i})\simeq \mathcal{O}_{\ls}(-r),$ for $i=1,2.$ Pushing (\ref{ssmr}) forward, we get \begin{equation}\label{mrp} 0\rightarrow\mathcal{O}_{\ls}(-1-r)^{\oplus^2}\rightarrow\pi_{*}\mathcal{O}_{M_2}(\Theta^r)\rightarrow\mathcal{O}_{\ls}(-r)^{\oplus^2}\rightarrow0 \end{equation} It is easy to see that there are no higher direct images along $\pi$ for sheaves on $M_2$, since $\pi$ restricted to $M_2$ has relative dimension zero. Moreover, sequence (\ref{mrp}) splits for every $r$. \end{proof} We tensor the sequence (\ref{vme}) by some power of $\Theta.$ Then we have the following exact sequences on $M_1$. \begin{equation}\label{mea} 0\rightarrow\mo_{M_1}(\Theta^{-2}(-2))\rightarrow\mathcal{O}_{M_1}\rightarrow\mathcal{O}_{M_2}\rightarrow0. \end{equation} \begin{equation}\label{meb} 0\rightarrow\mo_{M_1}(\Theta^{-1}(-2))\rightarrow\mathcal{O}_{M_1}(\Theta)\rightarrow\mathcal{O}_{M_2}(\Theta)\rightarrow0. \end{equation} \begin{equation}\label{mes} 0\rightarrow\mathcal{O}_{M_1}(\Theta^{r-2}(-2))\rightarrow\mathcal{O}_{M_1}(\Theta^{r})\rightarrow\mathcal{O}_{M_2}(\Theta^{r})\rightarrow0,~~~~r\geq0. \end{equation} Pushing all of them forward, we get \begin{equation}\label{meea} 0\rightarrow\pi_{*}\mathcal{O}_{M_1}\rightarrow\pi_{*}\mathcal{O}_{M_2}\rightarrow R^1\pi_{*}\mathcal{O}_{M_1}(\Theta^{-2})\otimes\mathcal{O}_{\ls}(-2)\rightarrow R^1\pi_{*}\mathcal{O}_{M_1}\rightarrow0. \end{equation} \begin{equation}\label{meeb} 0\rightarrow \pi_{*}\mathcal{O}_{M_1}(\Theta)\rightarrow \pi_{*}\mathcal{O}_{M_2}(\Theta)\rightarrow R^1\pi_{*}\mathcal{O}_{M_1}(\Theta^{-1})\otimes\mathcal{O}_{\ls}(-2)\rightarrow R^1\pi_{*}\mathcal{O}_{M_1}(\Theta)\rightarrow0. 
\end{equation} \begin{equation}\label{mees} \small{\xymatrix@C=0.2cm{0\ar[r]&\pi_{*}\mathcal{O}_{M_1}(\Theta^{r-2})\otimes\mathcal{O}_{\ls}(-2)\ar[r]&\pi_{*}\mathcal{O}_{M_1}(\Theta^{r})\ar[r]&\pi_{*}\mathcal{O}_{M_2}(\Theta^{r})\ar[r] &R^1\pi_{*}\mathcal{O}_{M_1}(\Theta^{r-2})\otimes\mathcal{O}_{\ls}(-2)\ar[r]&0, r\geq2.}} \end{equation} In (\ref{meea}) and (\ref{meeb}), the zeros on the right are because $R^1\pi_{*}\mo_{M_2}(\z^r)=0$ for all $r$. The zeros on the left are because $\pi_{*}\mathcal{O}_{M_1}(\Theta^{-r})=0,~\forall r\geq1.$ In (\ref{mees}) the zero on the right is because $R^1\pi_{*}\mathcal{O}_{M_1}(\Theta^{r})=0$ for $r\geq2$ by Lemma \ref{va}. Moreover, (\ref{mees}) becomes a short exact sequence with three terms when $r\geq4.$ Then we have a simple corollary of Proposition \ref{th}. \begin{coro}\label{trivial}The canonical sheaf $\omega_{M}$ on $M$ is trivial. \end{coro} \begin{proof}Since by Corollary \ref{tangent} we already know that $c_1(\mathcal{T}_{M})=0,$ it is enough to show $h^0(\omega_{M})=h^3(\mathcal{O}_{M})=1.$ From Proposition \ref{th}, Statement $3$ in Lemma \ref{va} and sequence (\ref{meeb}), we see that $\chi(\pi_{*}\mo_{M_1}(\z))=0,$ and that there is an injective morphism from $\pi_{*}\mo_{M_1}(\z)$ to $\pi_{*}\mo_{M_2}(\z)\simeq\mo_{\ls}(-1)^{\oplus2}\oplus\mo_{\ls}(-2)^{\oplus2}.$ Hence $\pi_{*}\mathcal{O}_{M_1}(\Theta)\simeq\mathcal{O}_{\ls}(-1)^{\oplus^2}.$ Also, according to Lemma \ref{va}, we have $\pi_{*}\mathcal{O}_{M_1}(\Theta)\simeq R^1\pi_{*}\mathcal{O}_{M_1}\simeq R^1\pi_{*}\mathcal{O}_{M}\simeq\mathcal{O}_{\ls}(-1)^{\oplus^2},$ and $\pi_{*}\mathcal{O}_{M_1}\simeq\mathcal{O}_{\ls}.$ Hence $H^1(R^1\pi_{*}\mo_{M_1})=H^2(\pi_{*}\mo_{M_1})=0$. On the other hand, since $\pi$ restricted to $M_1$ is of relative dimension $1$, we have $R^i\pi_{*}\mo_{M_1}=0$ for all $i\geq2$. Hence by the spectral sequence we know that $H^2(\mo_{M_1})=0$. 
From sequence (\ref{a}) we have the following exact sequence: \[H^2(\mo_{M_1})\ra H^3(\z^{-1})\ra H^3(\mo_{M})\ra0.\] Because $R^2\pi_{*}\z^{-1}\simeq\mathcal{O}_{\ls}(-2)$ and $R^i\pi_{*}\z^{-1}=0$ for all $i<2$, we have $h^3(\z^{-1})=h^1(R^2\pi_{*}\z^{-1})=1;$ together with the vanishing of $H^2(\mo_{M_1})$, we get $h^3(\mathcal{O}_{M})=h^3(\z^{-1})=1.$ \end{proof} Corollary \ref{trivial} gives us an interesting result in the theory of compactified Jacobians of integral curves with planar singularities, as follows. \begin{coro}\label{jaco}Let $X$ be a simply connected smooth projective surface over $\mathbb{C}$ and $L$ an effective line bundle satisfying $(\ha'_1)$ and $(\ha'_2)$, with moreover $\dim\ls\geq 3$ and $g_L=2$. Then for a generic integral curve $C$ in $\ls$, the compactified Jacobian $J^{g_L-1}$, which parametrizes the rank one torsion free sheaves of Euler characteristic zero, has trivial dualizing sheaf. \end{coro} \begin{proof}[Proof of Proposition \ref{gtwo}]As stated in the proof of Corollary \ref{trivial}, we already know that $\pi_{*}\mathcal{O}_{M_1}(\Theta)\simeq R^1\pi_{*}\mathcal{O}_{M_1}\simeq\mathcal{O}_{\ls}(-1)^{\oplus^2}.$ We rewrite (\ref{mees}) with $r=2$ as \begin{equation}\label{vmeec} 0\rightarrow \mathcal{O}_{\ls}(-2)\rightarrow\pi_{*}\mathcal{O}_{M_1}(\Theta^2)\rightarrow\mathcal{O}_{\ls}(-3)^{\oplus^2}\oplus\mathcal{O}_{\ls}(-2)^{\oplus^2}\rightarrow \mathcal{O}_{\ls}(-3)^{\oplus^2}\rightarrow0. \end{equation} Hence $\pi_{*}\mathcal{O}_{M_1}(\Theta^2)\simeq\mathcal{O}_{\ls}(-2)^{\oplus^3}$; together with sequence (\ref{d}) this gives the expression for $\pi_{*}\Theta^2.$ Lemma \ref{va} also says that $R^1\pi_{*}\mathcal{O}_{M_1}(\Theta)\simeq \mathcal{O}_{\ls}(-2).$ So sequence (\ref{mees}) with $r=3$ implies that $\pi_{*}\mathcal{O}_{M_1}(\Theta^3)\simeq \mathcal{O}_{\ls}(-4)\oplus\mathcal{O}_{\ls}(-3)^{\oplus^4}$. 
Then we know the splitting type of $\pi_{*}\Theta^3.$ For $\Theta^r,$ $r\geq 4,$ both (\ref{d}) and (\ref{mees}) are short exact sequences with three terms and split, which implies Statements $1$ and $3$ in the proposition. \end{proof} We have defined $Z^r(t)=\sum_{n}h^0(M,\lcn)t^n =\sum_{n}h^0(M,\Theta^r\otimes\pi^{*}\mathcal{O}_{\ls}(n))t^n$. The generating function $Z^r(t)$ can be written down explicitly as follows: \begin{enumerate} \item $Z^1(t)=\large{\frac{1}{(1-t)^2}};~Z^2(t)=\large{\frac{1+3t^{2}}{(1-t)^2}}; ~Z^3(t)=\large{\frac{1+3t^{2}+4t^{3}+t^4}{(1-t)^2}}.$ \item For $r\geq 4$, $Z^r(t)=Z^{r-1}(t)+(Z^{r-2}(t)-Z^{r-3}(t))\cdot t^2+\large{\frac{2t^{r}+2t^{r+1}}{(1-t)^2}}.$ \end{enumerate} The recursion formula 2 implies that \[Z^r(t)=\frac{1+3t^{2}+\sum_{i=3}^r ((i+1)t^{i}+(i-2)t^{i+1})}{(1-t)^2}\quad\text{for }r\geq2.\] \begin{rem}These results are compatible with Statement 2 in Theorem 4.5.2 in \cite{yuan} when $X=\mathbb{P}(\mo_{\pone}\oplus\mo_{\pone}(-e))$ and $L=2G+(e+3)F$ with $e=0,1.$ \end{rem} \begin{proof}[Proof of Theorem \ref{thmthree}] In this case we have \[Y^r_{g_L=2}(t)=\frac{Z^r(t)}{(1-t)^{l-1}},\] and hence the theorem. \end{proof} \begin{flushleft}{\textbf{Acknowledgments.}} I would like to thank Lothar G\"ottsche for his guidance and Barbara Fantechi, Eduardo de Sequeira Esteves and Ramadas Ramakrishnan Trivandrum for many helpful discussions. \end{flushleft}
TITLE: Understanding the $\mathcal{L}^0(\mu)$ space QUESTION [1 upvotes]: After having introduced the $\mathcal{L}^p(\mu)$ spaces for $p\in(0,\infty)$, my book has a little remark that we can extend the definition to $p=0$. Let $(X,\mathcal{E},\mu)$ be a measure space. Let $\mathcal{M}(\mathcal{E})=\{f:X\to \mathbb{R}\ |\ f \text{ is } \mathcal{E}-\mathcal{B}(\mathbb{R})\text{-measurable}\}$ i.e. the set of measurable functions. For $p=0$ we define the $\mathcal{L}^p(\mu)$ space as: $$ \mathcal{L}^0(\mu)=\{f\in \mathcal{M}(\mathcal{E})|\ \lim_{t\to\infty}\mu(\{ |f|\geq t\})=0 \}$$ The book then simply states that $\mathcal{L}^0(\mu)$ is a vector space (left as an exercise to the reader), and that by Markov's inequality $\mathcal{L}^r(\mu)\subseteq \mathcal{L}^0(\mu)$ for all $r\in (0,\infty)$. That is it. I am now trying to understand what this space is. My idea is that this at least includes functions where $f(x) \to \pm \infty$ for $x\to \pm \infty$, since for all $t$ there will be an infinite interval where the function is larger. We can have a function with vertical asymptotes, but I can't think of a way to make them satisfy $\lim_{t\to\infty}\mu(\{ |f|\geq t\})>0$ without having uncountably many vertical asymptotes, and I do not think that is possible. Am I heading in the right direction when thinking about $\mathcal{L}^0(\mu)$, or how do we intuitively think about this space? REPLY [1 votes]: "My idea is that this at least $\color{red}{includes}$ functions where $f(x)\to\pm\infty$ for $x\to\infty$, ..." I guess that you really meant to exclude them. Mind you that the space $\mathcal{L}^{0}(\mu)$ defined in OP does depend on $\mu$ as well. For example, if $\mu$ is the Lebesgue measure on $\mathbb{R}^d$, then $\mathcal{L}^{0}(\mu)$ excludes any measurable function $f: \mathbb{R}^d \to \mathbb{R}$ such that $|f(x)| \to \infty$ as $|x| \to \infty$. Of course there are other examples, such as $\tan(\cdot)$ on $\mathbb{R}$. 
On the other hand, if $\mu$ is a finite measure, then $\mathcal{L}^{0}(\mu) = \mathcal{M}(\mathcal{E})$ by the continuity of $\mu$ from above. In fact, it follows that $$ f \notin \mathcal{L}^{0}(\mu) \qquad \Leftrightarrow \qquad \mu(\{|f| \geq t\}) = \infty \quad \text{for all} \quad t. $$
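The superlevel-set criterion is easy to probe numerically. Below is a small Python sketch (the helper name is mine, not from any textbook) approximating $\mu(\{|f|\geq t\})$ for $f(x)=1/x$ under the Lebesgue measure on $(0,1]$: even though this $f$ is unbounded near $0$, the measure of its superlevel sets is $\min(1,1/t)\to 0$, in line with the finite-measure case described above.

```python
def superlevel_measure(t, n=10**5):
    """Grid approximation of the Lebesgue measure of {x in (0,1] : 1/x >= t}."""
    # The exact value is min(1, 1/t); the grid estimate converges to it.
    return sum(1 for k in range(1, n + 1) if 1 / (k / n) >= t) / n

for t in [10, 100, 1000]:
    print(t, superlevel_measure(t))  # approximately 1/t, tending to 0
```

So $1/x$ lies in $\mathcal{L}^0$ of this finite measure despite blowing up at $0$, whereas $f(x)=x$ under the Lebesgue measure on $\mathbb{R}$ has $\mu(\{|f|\geq t\})=\infty$ for every $t$, matching the exclusion described in the answer.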
TITLE: What is the probability that exactly two of them are trout if we know that at least three of them are not? QUESTION [1 upvotes]: In a small lake, it is estimated that there are 105 fish, of which 40 are trout and 65 are of another species. A fisherman catches 8 fish. What is the probability that exactly two of them are trout if we know that at least three of them are not? My work Let $E_1=$"Exactly two trout" $E_2$="At least three of them are not trout." $A_1$="2 trout" $A_2$="exactly 3 are not trout" $B_1$="2 trout" $B_2$="exactly 4 are not trout $C_1$="2 trout" $C_2$="exactly 5 are not trout $D_1$="2 trout" $D_2$="exactly 6 are not trout We know $P(E_1|E_2)=P(A_1|A_2)+P(B_1|B_2)+P(C_1|C_2)+P(D_1|D_2)$ Solving $P(A_1|A_2)=\frac{P(A_1\cap A_2)}{P(A_2)}=\frac{\frac{2}{40}\times\frac{3}{65}}{\frac{3}{105}}=\frac{21}{260}$ Analogous for the other cases. Then $P(E_1|E_2)=\frac{84}{260}=0.32$ Is the reasoning good? Note: this exercise can only be solved using conditional probability. REPLY [0 votes]: Adding conditional probabilities doesn't work like that. If it did, you could end up with probability greater than one. For instance, what's the probability of a card being something other than an ace? P(not ace|clubs) = P(not ace|diamonds) = P(not ace|hearts) = P(not ace|spades) = 12/13. So is p(not ace) = 48/13? You need to add up all the possible ways of getting both E$_1$ and E$_2$, and divide by the total number of ways of getting E$_2$. Dividing the subcases and then adding gets a different answer than adding, then dividing.
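The counting the answer describes can be carried out directly with binomial coefficients (a sketch; the variable names are mine). The condition $E_2$, "at least 3 non-trout among the 8", is the same as "at most 5 trout", and $E_1\cap E_2$ is just "exactly 2 trout", since 2 trout forces 6 non-trout:

```python
from math import comb

TROUT, OTHER, CATCH = 40, 65, 8

def ways(k):
    """Number of ways to catch exactly k trout (and CATCH - k non-trout)."""
    return comb(TROUT, k) * comb(OTHER, CATCH - k)

# E2 = "at least 3 non-trout" = "at most 5 trout".
ways_E2 = sum(ways(k) for k in range(0, 6))

# E1 ∩ E2 = "exactly 2 trout": the remaining 6 fish are non-trout,
# so E2 holds automatically.
p = ways(2) / ways_E2
print(p)  # roughly 0.24, not the 0.32 computed in the question
```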
TITLE: How to show that a multivariable function is not differentiable? QUESTION [0 upvotes]: How would I show that $$f(x,y) = \frac{2xy}{x^2 + y^2}$$ is not differentiable at the origin? Is it enough to show that, as $(x,y)$ tends to the origin along the paths $y = x$ and $y=2x$, we get different limits, and hence that the function is not continuous? REPLY [0 votes]: I assume you mean: $$f(x,y)=\begin{cases}\frac{2xy}{x^2+y^2}, & (x,y)\ne (0,0)\\0, & (x,y)= (0,0).\end{cases}$$ In this case, look at the limit of $f$ as $(x,y)\to (0,0)$ along the line $y=x$.
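The two path limits from the question can be checked with a few lines of Python (a plain numerical sketch, assuming nothing beyond the formula for $f$ away from the origin):

```python
def f(x, y):
    return 2 * x * y / (x**2 + y**2)

# Along y = x:  f(t, t)  = 2t^2 / (2t^2) = 1    for every t != 0.
# Along y = 2x: f(t, 2t) = 4t^2 / (5t^2) = 4/5  for every t != 0.
for t in [1.0, 0.1, 1e-3, 1e-6]:
    print(f(t, t), f(t, 2 * t))  # 1.0 and 0.8 at every scale
```

Since the two path limits differ ($1$ versus $4/5$), and neither equals $f(0,0)=0$ in the piecewise definition from the answer, $f$ is not continuous at the origin, hence not differentiable there.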
\begin{document} \title[A surjection theorem]{A surjection theorem for maps with singular perturbation and loss of derivatives} \author[I. Ekeland]{Ivar Ekeland} \address{Universit\'e Paris-Dauphine, PSL University, CNRS, CEREMADE, Place de Lattre de Tassigny, 75016 Paris, France} \email{ekeland@ceremade.dauphine.fr} \author[\'E. S\'er\'e]{\'Eric S\'er\'e} \address{Universit\'e Paris-Dauphine, PSL University, CNRS, CEREMADE, Place de Lattre de Tassigny, 75016 Paris, France} \email{sere@ceremade.dauphine.fr} \date{April 11, 2020} \begin{abstract} In this paper we introduce a new algorithm for solving perturbed nonlinear functional equations which admit a right-invertible linearization, but with an inverse that loses derivatives and may blow up when the perturbation parameter $\varepsilon$ goes to zero. These equations are of the form $F_\varepsilon(u)=v$ with $F_\varepsilon(0)=0$, $v$ small and given, $u$ small and unknown. The main difference with the by now classical Nash-Moser algorithm is that, instead of using a regularized Newton scheme, we solve a sequence of Galerkin problems thanks to a topological argument. As a consequence, in our estimates there are \textit{no quadratic terms}. For problems without perturbation parameter, our results require weaker regularity assumptions on $F$ and $v$ than earlier ones, such as those of H\"{o}rmander \cite{Hormander}. For singularly perturbed functionals $F_\varepsilon$, we allow $v$ to be larger than in previous works. 
To illustrate this, we apply our method to a nonlinear Schr\"{o}dinger Cauchy problem with concentrated initial data studied by Texier-Zumbrun \cite{TZ}, and we show that our result improves significantly on theirs.\end{abstract} \maketitle \bigskip \bigskip \section{Introduction} The basic idea of the inverse function theorem (henceforth IFT) is that, if a map $F$ is differentiable at a point $u_{0}$ and the derivative $DF\left( u_{0}\right) $ is invertible, then the map itself is invertible in some neighbourhood of $u_{0}$. It has a long and distinguished history (see \cite{BB} for instance), going back to the inversion of power series in the seventeenth century, and has been extended since to maps between infinite-dimensional spaces. If the underlying space is Banach, and if one is only interested in the local surjectivity of $F$, that is, the existence, near $u_0$, of a solution $u$ to the equation $F(u)=v$ for $v$ close to $F(u_0)$, one just needs to assume that $F$ is of class $C^1$ and that $DF(u_0)$ has a right-inverse $L(u_0)$. The standard proof is based on the Picard scheme: $$u_{n}=u_{n-1}-L(u_0)(F(u_{n-1})-v)$$ which converges geometrically to a solution of $F(u)=v$ provided $\Vert F(u_0)-v\Vert$ is small enough. In the $C^2$ case, the Newton algorithm: $$u_{n}=u_{n-1}-L(u_{n-1})(F(u_{n-1})-v)$$ uses the right-invertibility of $DF(u)$ for $u$ close to $u_0$, and provides local quadratic convergence. \medskip In functional analysis, $u$ will typically be a function. In many situations the IFT on Banach spaces will be enough, but in the study of Hamiltonian systems and PDEs, one encounters cases when the right-inverse $L(u)$ of $DF(u)$ loses derivatives, i.e. when $L(u) F(w)$ has less derivatives than $u$ and $w$. In such a case, the Picard and Newton schemes lose derivatives at each step. 
The first solutions to this problem are due, on the one hand, to Kolmogorov \cite{Kolmogorov} and Arnol'd \cite{Arnold}, \cite{Arnold2}, \cite{Arnold3} who investigated perturbations of completely integrable Hamiltonian systems in the analytic class, and showed that invariant tori persist under small perturbations, and, on the other hand, to Nash \cite{Nash}, who showed that any smooth compact Riemannian manifold can be imbedded isometrically into an Euclidian space of sufficiently high dimension\footnote{Nash's\ theorem on isometric embeddings was later proved by Gunther \cite{Gunther}, who found a different formulation of the problem and was able to use the classical IFT in Banach spaces.}. In both cases, the fast convergence of Newton's scheme was used to overcome the loss of regularity. Since Nash was considering functions with finitely many derivatives, he had to introduce a sequence of smoothing operators $\mathcal{S}_n$, in order to regularize $L(u_{n-1})(F(u_{n-1})-v)$, and the new scheme was $$u_n=u_{n-1}-\mathcal{S}_n L(u_{n-1})(F(u_{n-1})-v)\,.$$ An early presentation of Nash's method can be found in Schwartz' notes \cite{Schwartz}. It was further improved by Moser \cite{Moser}, who used it to extend the Kolmogorov-Arnol'd results to $C^{k}$ Hamiltonians. The Nash-Moser method has been the source of a considerable amount of work in many different situations, giving rise in each case to a so-called "hard" IFT. We will not attempt to review this line of work in the present paper. A survey up to 1982 will be found in \cite{Hamilton}. In \cite{Hormander}, H\"{o}rmander introduced a refined version of the Nash-Moser scheme which provides the best estimates to date on the regularity loss. We refer to \cite{Alinhac} for a pedagogical account of this work, and to \cite{BH} for recent improvements. We also gained much insight into the Nash-Moser scheme from the papers \cite{BB1}, \cite{BB2}, \cite{BB0}, \cite{BBP}, \cite{TZ}. 
The question we want to address here is the following.\ The IFT\ implies that the range of $F$ contains a neighborhood $\mathcal V$ of $v_0=F(u_0)$. What is the size of $\mathcal V$? In general, when one tries to apply directly the abstract Nash-Moser theorem, the estimates which can be derived from its proof are unreasonably small, many orders of magnitude away from what can be observed in numerical simulations or physical experiments. Moreover, precise estimates for the Nash-Moser method are difficult to compute, and most theoretical papers simply do not address the question. So we shall address instead a ''hard'' singular perturbation problem with loss of derivatives. The same issue appears in such problems, as we shall explain in a moment, but it takes a simpler form: one tries to find a good estimate on the size of $\mathcal V$ as a power of the perturbation parameter $\varepsilon$. Such an asymptotic analysis has been carefully done in the paper of Texier and Zumbrun \cite{TZ} which has been an important source of inspiration to us, and we will be able to compare our results with theirs. As noted by these authors, the use of Newton's scheme implies an intrinsic limit to the size of $\mathcal{V}$. Let us explain this in the ``soft'' case, without loss of derivatives. 
Suppose that for every $0<\varepsilon\leq 1$ we have a $C^2$ map $F_ \varepsilon$ between two Banach spaces $X$ and $Y$, such that $F_\varepsilon(0) =0$, and, for all $\left\Vert u\right\Vert \leq R$, \begin{align*} |||\, D_{u}F_\varepsilon(u) ^{-1}||| & \leq\varepsilon^{-1}M\\ |||\, D_{uu}^{2}F_\varepsilon(u) \, ||| & \leq K \end{align*} Then the Newton-Kantorovich Theorem (see \cite{Ciarlet}, section 7.7 for a comprehensive discussion) tells us that the solution $u_{\varepsilon}$ of $F_\varepsilon(u) =v$ exists for $\left\Vert v\right\Vert <\frac{\varepsilon^{2}}{2KM^{2}}$, and this is essentially the best result one can hope for using Newton's algorithm, as mentioned by Texier and Zumbrun in \cite{TZ}, Remark 2.22. Note that the use of a Picard iteration would give a similar condition. However, in this simple situation where no derivatives are lost, it is possible, using topological arguments instead of Newton's method, to find a solution $u$ provided $\left\Vert v\right\Vert \leq\varepsilon R/M$: one order of magnitude in $\varepsilon$ has been gained. The first result of this kind, when $F$ is $C^1$ and dim$\,X\,=\,$dim$\,Y\,<\,\infty\,$, is due to Wazewski \cite{W} who used a continuation method. See also \cite{John} and \cite{Soto} and the references in these papers, for more general results in this direction. In \cite{IE3} (Theorem 2), using Ekeland's variational principle, Wazewski's result is proved in Banach spaces, assuming only that $F$ is continuous and G\^ateaux differentiable, the differential having a uniformly bounded right-inverse (in \S 2 below, we recall this result, as Theorem \ref{thm1}). Our goal is to extend such a topological approach to ``hard'' problems with loss of derivatives, which up to now have been tackled by the Nash-Moser algorithm. 
A first attempt in this direction was made in \cite{IE3} (Theorem 1), in the case when the estimates on the right-inverse do not depend on the base point, but it is very hard to find examples of such situations. The present paper fulfills the program in the general case, where estimates on the inverse depend on the base point. Berti, Bolle and Procesi \cite{BBP} prove a new version of the Nash-Moser theorem by solving a sequence of Galerkin problems $\Pi'_nF(u_n)=\Pi'_n v$, $u_n\in E_n$, where $\Pi_n$ and $\Pi'_n$ are projectors and $E_n$ is the range of $\Pi_n$. They find the solution of each projected equation thanks to a Picard iteration: $$u_n=\lim_{k\to\infty} w^k \ \hbox{ with } \ w^0=u_{n-1} \ \hbox{ and } \ w^{k+1}=w^k-L_{n}(u_{n-1})(F(w^k)-v)\,,$$ where $L_n(u_{n-1})$ is a right inverse of $D(\Pi'_nF_{\,\vert_{E_n}})(u_{n-1})$. So, in \cite{BBP} the regularized Newton step is not really absent: it is essentially the first step in each Picard iteration. As a consequence, the proof in \cite{BBP} involves quadratic estimates similar to the ones of more standard Nash-Moser schemes. Moreover, Berti, Bolle and Procesi assume the right-invertibility of $D(\Pi'_nF_{\,\vert_{E_n}})(u_{n-1})$. This assumption is perfectly suitable for the applications they consider (periodic solutions of a nonlinear wave equation), but in general it is not a consequence of the right-invertibility of $DF(u_{n-1})$, and this restricts the generality of their method as compared with the standard Nash-Moser scheme. As in \cite{BBP}, we work with projectors and solve a sequence of Galerkin problems. But in contrast with \cite{BBP}, the Newton steps are completely absent in our new algorithm, they are replaced by the topological argument from \cite{IE3} (Theorem 2), ensuring the solvability of each projected equation. 
Incidentally, this allows us to work with functionals $F$ that are only continuous and G\^ateaux-differentiable, while the standard Nash-Moser scheme requires twice-differentiable functionals. Our regularity assumption on $v$ also seems to be optimal, and even weaker than in \cite{Hormander}. Moreover, our method works assuming either the right-invertibility of $D(\Pi'_nF_{\,\vert_{E_n}})(u)$ as in \cite{BBP}, or the right-invertibility of $DF(u)$ (in the second case, our proof is more complicated). But in our opinion, the main advantage of our approach is the following: there are \textit{no more quadratic terms} in our estimates, as a consequence we can deal with larger $v$'s, and this advantage is particularly obvious in the case of singular perturbations. To illustrate this, we will give an abstract existence theorem with a precise estimate of the range of $F$ for a singular perturbation problem: this is Theorem 3 below. Comparing our result with the abstract theorem of \cite{TZ}, one can see that we have weaker assumptions and a stronger conclusion. Then we will apply Theorem 3 to an example given in \cite{TZ}, namely a Cauchy problem for a quasilinear Schr\"{o}dinger system first studied by M\'{e}tivier and Rauch \cite{MR}. Texier and Zumbrun use their abstract Nash-Moser theorem to prove the existence of solutions of this system on a fixed time interval, for concentrated initial data. Our abstract theorem allows us to increase the order of magnitude of the oscillation in the initial data. After reading our paper, Baldi and Haus \cite{BHperso} have been able to increase even more this order of magnitude, using their own version \cite{BH} of the Newton scheme for Nash-Moser, combined with a clever modification of the norms considered in \cite{TZ} and an improved estimate on the second derivative of the functional. 
In contrast, our proof follows directly from our abstract theorem, taking exactly the same norms and estimates as in \cite{TZ}, and without even considering the second derivative of the functional. The paper is constructed as follows. In Section 2, we present the general framework: we are trying to solve the equation $F_\varepsilon (u) =v$ near $F_\varepsilon (0) = 0 $, when $F_\varepsilon$ maps a scale of Banach spaces of functions into another, admits a right-invertible G\^ateaux differential, with ``tame estimates" involving losses of derivatives and negative powers of $\varepsilon$. After giving our precise assumptions, we state our main theorem. Section 3 is devoted to its proof. In Section 4, we apply it to the example taken from Texier and Zumbrun \cite{TZ}, and we compare our results with theirs.\medskip {\bf Acknowledgement.} We are grateful to Massimiliano Berti, Philippe Bolle, Jacques Fejoz and Louis Nirenberg for their interest in our work and their encouragements. It is a pleasure to thank Pietro Baldi for stimulating discussions in Naples and Paris, for a careful reading of the present paper and for a number of suggestions. We also thank the referees, whose remarks have helped us to improve this manuscript. \bigskip \bigskip \section{Main assumptions and results.} \subsection{Two tame scales of Banach spaces} Let $(V_{s},\,\Vert\cdot\Vert_{s})_{0\leq s \leq S}$ be a scale of Banach spaces, namely: \[ 0\leq s_{1}\leq s_{2}\leq S\Longrightarrow\left[ V_{s_{2}}\subset V_{s_{1}}\text{ \ and\ \ }\Vert\cdot\Vert_{s_{1}}\leq\Vert\cdot\Vert_{s_{2} }\right]\;. \] We shall assume that to each $\Lambda\in [1,\infty)$ is associated a continuous linear projection $\Pi(\Lambda)$ on $V_0$, with range $E(\Lambda)\subset V_S$. We shall also assume that the spaces $E(\Lambda)$ form a nondecreasing family of sets indexed by $[1,\infty)$, while the spaces $Ker\,\Pi(\Lambda)$ form a nonincreasing family. 
In other words: \[1\leq \Lambda\leq \Lambda'\,\Longrightarrow \,\Pi(\Lambda)\Pi(\Lambda')=\Pi(\Lambda')\Pi(\Lambda)=\Pi(\Lambda)\;.\] Finally, we assume that the projections $\Pi(\Lambda)$ are ``smoothing operators" satisfying the following estimates: \medskip \textbf{Polynomial growth and approximation}: \textit{There are constants }$A_{1},\ A_{2}\geq 1$\textit{ such that, for all numbers} $0\leq s\leq S$\textit{, all }$\Lambda\in [1,\infty)$\textit{ and all }$u\in V_{s}\,$\textit{, we have:} \begin{align} \forall t\in [0,S]\,,\;\;\Vert\Pi(\Lambda)u\Vert_{t} & \leq A_{1}\,\Lambda^{(t-s)^{+}}\Vert u\Vert_{s} \label{loss}\\ \forall t\in [0,s]\,,\;\;\Vert(1-\Pi({\Lambda}))u\Vert_{t} & \leq A_{2}\,\Lambda^{-(s-t)}\Vert u\Vert_{s} \label{gain} \end{align} When the above properties are met, we shall say that $(V_{s}\,,\,\Vert\cdot\Vert_{s})_{0\leq s \leq S}$ endowed with the family of projectors $\left\{\,\Pi(\Lambda)\;,\;\Lambda\in [1,\infty)\,\right\}\,, $ is a {\it tame} Banach scale. \medskip It is well-known (see e.g. \cite{BBP}) that (\ref{loss},\ref{gain}) imply: \medskip \textbf{Interpolation inequality}:\textit{ For }$0\leq t_{1} \,\leq\,s\,\leq\,t_{2}\leq S$\textit{ ,} \begin{equation} \Vert u\Vert_{s}\leq A_{3}\Vert u\Vert_{t_{1}}^{\frac{t_{2}-s}{t_{2}-t_{1}} }\Vert u\Vert_{t_{2}}^{\frac{s-t_{1}}{t_{2}-t_{1}}}\;. \label{inter} \end{equation} Let $(W_{s}\,,\,\Vert\cdot\Vert'_s)_{0\leq s\leq S}$ be another tame scale of Banach spaces. We shall denote by $\Pi^{\prime}(\Lambda)$ the corresponding projections defined on $W_0$ with ranges $E'(\Lambda)\subset W_S$, and by $A'_{i}\;(i=1,2,3)$ the corresponding constants in (\ref{loss}), (\ref{gain}) and (\ref{inter}).\medskip {\bf Remark.} In many practical situations, the projectors form a discrete family as, for instance, $\{\Pi(N)\,,\;N\in\mathbb{N}^*\}$, or $\{\Pi({2^j})\,,\;j\in\mathbb{N}\}$. 
The first case occurs when $\Pi(N)$ acts on periodic functions by truncating their Fourier series, keeping only frequencies of size less or equal to $N$, as in \cite{BBP}. The second case occurs when truncating orthogonal wavelet expansions as in an earlier version of the present work \cite{ESDebut}. Our choice of notation and assumptions covers these cases, taking $\Pi(\Lambda)=\Pi(\left\lfloor \Lambda\right\rfloor)$ or $\Pi(\Lambda)=\Pi(2^{\left\lfloor \log_2(\Lambda)\right\rfloor})$, where $\left\lfloor \cdot\right\rfloor$ denotes the integer part.\medskip \subsection{Main theorem} We state our result in the framework of singular perturbations, in the spirit of Texier and Zumbrun \cite{TZ}. The norms $\Vert\cdot\Vert_s\,,\,\Vert\cdot\Vert_s'$ on the tame scales $(V_{s})$, $(W_{s})$ may depend on the perturbation parameter $\varepsilon \in(0,1]$, as well as the projectors $\Pi(\Lambda)\,,\,\Pi'(\Lambda)\,$ and their ranges $E(\Lambda)$, $E'(\Lambda)\,.$ But we impose that $S$ and the constants $A_i,\,A'_i$ appearing in estimates (\ref{loss}, \ref{gain}, \ref{inter}) be independent of $\varepsilon$. In order to avoid burdensome notations, the dependence of the norms, projectors and subspaces on $\varepsilon$ will not be explicit in the sequel.\bigskip Denote by $B_{s}$ the unit ball in $V_{s}$: \[ B_{s}=\left\{ u\ |\ \left\Vert u\right\Vert _{s}\leq1\right\} \] In the sequel we fix nonnegative constants $s_0, m,\ell,\,\ell'$ and $g$, independent of $\varepsilon$. 
We will assume that $S$ is large enough.\bigskip We first recall the definition of G\^ateaux-differentiability, in a form adapted to our framework:\medskip \begin{definition} We shall say that a function $F:\,B_{s_{0}+m}\rightarrow W_{s_{0}}$ is \emph{G\^{a}teaux-differentiable} (henceforth G-differentiable) if for every $u\in B_{s_{0}+m}$, there exists a linear map $DF\left( u\right) :V_{s_0+m}\rightarrow W_{s_0}$ such that for every $s\in [s_0,S-m]$, if $u\in B_{s_{0}+m}\cap V_{s+m}$, then $DF \left( u\right)$ maps continuously $V_{s+m}$ into $W_s$, and \[ \forall h\in V_{s+m}\ ,\ \lim_{t\rightarrow0}\ \left\Vert \frac{1}{t}\left[ F\left( u+th\right) -F\left( u\right) \right] -D F\left( u\right) h\right\Vert _{s}^{\prime}=0\;. \] \end{definition} \medskip Note that, even in finite dimension, a G-differentiable map need not be $C^{1}$, or even continuous. However, if $D F:\,V_{s+m}\to\mathcal{L}(V_{s+m},W_s)$ is locally bounded, then $F:\,V_{s+m}\to W_s$ is locally Lipschitz, hence continuous. In the present paper, we will always be in such a situation.\bigskip We now consider a family of maps $(F_\varepsilon)_{0<\varepsilon\leq 1}$ with $F_\varepsilon:\ B_{s_0+m} \to W_{s_0}$. We are ready to state our assumptions on this family:\bigskip \begin{definition} \leavevmode\par \begin{itemize} \item We shall say that the maps $F_\varepsilon:\,B_{s_{0}+m}\rightarrow W_{s_{0}}\; (0<\varepsilon \leq 1)$ form an $S$-tame differentiable family if they are G-differentiable with respect to $u$, and, for some positive constant $a\,,$ for all $\varepsilon\in (0, 1]$ and all $s \in [s_{0}, S-m]\,$, if $u\in B_{s_{0}+m}\cap V_{s+m}$ and $h\in V_{s+m}\,,$ then $DF_{\varepsilon}\left( u\right) h \in W_s$ with the tame direct estimate \begin{equation} \left\Vert DF_{\varepsilon}\left( u\right) h\right\Vert _{s}^{\prime}\leq a\left( \left\Vert h\right\Vert _{s+m}+\left\Vert u\right\Vert _{s+m} \left\Vert h\right\Vert _{s_{0}+m}\right)\;. 
\label{tamedirectepsilon} \end{equation} \item Then we shall say that $(DF_\varepsilon)_{0<\varepsilon\leq 1}$ is tame right-invertible if there are $\,b>0$ and $g,\,\ell,\,\ell'\geq 0$ such that for all $0<\varepsilon\leq 1$ and $u\in B_{s_{0}+\max\{m,\ell\}}$ , there is a linear map $L_{\varepsilon}\left( u\right) :W_{s_0+\ell^{\prime}}\rightarrow V_{s_0}$ satisfying \begin{equation} \forall k\in W_{s_0+\ell'}\,,\ \ \ DF_{\varepsilon}\left( u\right) L_{\varepsilon}\left( u\right) k=k \label{rightinverse} \end{equation} and for all $s_{0}\leq s\leq S-\max\left\{ \ell,\ell^{\prime}\right\} $, if $u\in B_{s_{0}+\max\{m,\ell\}}\cap V_{s+\ell}\,$ and $k\in W_{s+\ell^{\prime}}\,,$ then $L_{\varepsilon}\left( u\right) k\in V_s\,,$ with the tame inverse estimate \begin{equation} \left\Vert L_{\varepsilon}\left( u\right) k\right\Vert _{s}\leq b\varepsilon^{-g}\left( \left\Vert k\right\Vert _{s+\ell^{\prime}}^{\prime}+\left\Vert k\right\Vert _{s_{0} +\ell^{\prime}}^{\prime}\left\Vert u\right\Vert _{s+\ell}\right)\;. 
\label{tameinverseepsilon} \end{equation} \item Alternatively, we shall say that $(DF_\varepsilon)_{0<\varepsilon\leq 1}$ is tame {\it Galerkin} right-invertible if there are $\,\underline{\Lambda}\geq 1\,$, $b>0$ and $\,g,\,\ell,\,\ell'\geq 0$ such that for all $\Lambda\geq \underline{\Lambda}\,,\ 0<\varepsilon\leq 1$ and any $u\in B_{s_{0}+\max\{m,\ell\}}\cap E(\Lambda)$, there is a linear map $L_{\Lambda,\varepsilon}\left( u\right) :E'(\Lambda)\rightarrow E(\Lambda)$ satisfying \begin{equation} \forall k\in E'(\Lambda)\,,\ \ \ \Pi'(\Lambda) DF_{\varepsilon}\left( u\right) L_{\Lambda,\varepsilon}\left( u\right) k=k \label{galrightinverse} \end{equation} and for all $s_{0}\leq s\leq S-\max\left\{ \ell,\ell^{\prime}\right\} $, we have the tame inverse estimate: \begin{equation} \forall k\in E'(\Lambda)\,, \ \left\Vert L_{\Lambda,\varepsilon}\left( u\right) k\right\Vert _{s}\leq b\varepsilon^{-g}\left( \left\Vert k\right\Vert _{s+\ell^{\prime}}^{\prime}+\left\Vert k\right\Vert _{s_{0} +\ell^{\prime}}^{\prime}\left\Vert u\right\Vert _{s+\ell}\right)\;. \label{galtamegalinverseepsilon} \end{equation} \end{itemize} \end{definition} In this definition, the constants $m, \ell, \ell^{\prime}$ denote the loss of derivatives for $DF_{\varepsilon}$ and its right-inverse, and $g>0$ denotes the strength of the singularity at $\varepsilon = 0$. The unperturbed case of a fixed map can be recovered by setting $\varepsilon =1$. We want to solve the equation $F_\varepsilon(u)=v$. There are three things to look for. How regular is $v$? How regular is $u$, or, equivalently, how small is the loss of derivatives between $v$ and $u$? How does the existence domain depend on $\varepsilon$? The following result answers these questions in a near-optimal way.
\begin{theorem} \label{Thm8} Assume that the maps $F_\varepsilon$ $(0<\varepsilon\leq 1)$ form an $S$-tame differentiable family between the tame scales $(V_s)_{0\leq s\leq S}$ and $(W_s)_{0\leq s\leq S}$, with $F_\varepsilon(0)=0$ for all $0<\varepsilon\leq 1$. Assume, in addition, that $(DF_\varepsilon)_{0<\varepsilon\leq 1}$ is either tame right-invertible or tame Galerkin right-invertible. Let $s_0,\,m,\,g,\,\ell,\,\ell'$ be the associated parameters. Let $s_1 \geq s_0+\max\{m,\ell\}$, $\delta>s_1+\ell'$ and $g'>g$. Then, for $S$ large enough, there is $r>0$ such that, whenever $0<\varepsilon\leq 1$ and $\left\Vert v\right\Vert _{\delta}^{\prime}\leq r\varepsilon^{g^{\prime}}$, there exists some $u_{\varepsilon}\in B_{s_1}$ satisfying: \begin{align*} &F_\varepsilon(u_{\varepsilon})=v\ \\ &\left\Vert u_{\varepsilon}\right\Vert _{s_1}\leq r^{-1}\,\varepsilon^{-g^{\prime} }\left\Vert v\right\Vert _{\delta}^{\prime} \end{align*} \end{theorem} As we will see, the proof of Theorem \ref{Thm8} is much shorter under the assumption that $(DF_\varepsilon)$ is tame Galerkin right-invertible. But in many applications, it is easier to check that $DF_\varepsilon$ is tame right-invertible than tame Galerkin right-invertible. See \cite{BBP}, however, where an assumption similar to (\ref{galrightinverse}, \ref{galtamegalinverseepsilon}) is used. All other ``hard" surjection theorems that we know of require some additional conditions on the second derivative of $F_\varepsilon$. Here we do not need such assumptions; in fact, we only assume $F_\varepsilon$ to be G-differentiable, not $C^2$. As for the three questions we raised, let us explain in which sense the answers are almost optimal in Theorem \ref{Thm8}. For the tame estimates \eqref{tamedirectepsilon}, \eqref{tameinverseepsilon} to hold, one needs $u\in B_{s_1}$ with $s_1\geq s_0 +\max\{m,\ell\}$.
When solving the linearized equation $DF_{\varepsilon}\left( u\right) h=k$ in $V_{s_1}$ by $h =L_{\varepsilon}\left( u\right) k$, one needs $k\in W_{s_1+\ell'}\,$, so it seems necessary to assume $\delta\geq s_1+\ell'\,,$ and we find that the strict inequality is sufficient. Replacing $s_1$ with its minimal value, our condition on $\delta$ becomes \begin{equation*} \delta > s_0 +\max\{m,\ell\}+\ell'\,. \end{equation*} We have not found this condition in the literature: in \cite{Hormander} for instance, a stronger assumption is made, namely $\delta >s_0+\max\{2m+\ell',\ell\}+\ell'$. For the dependence of $\Vert v\Vert'_\delta$ on $\varepsilon$, the constraint $g'>g$ also seems to be nearly optimal. Indeed, the solution $u_\varepsilon$ has to be in $B_{s_1}$, but the right-inverse $L_\varepsilon$ of $DF_{\varepsilon}$ has a norm of order $\varepsilon^{-g}$, so the condition $\Vert v\Vert'_\delta\lesssim \varepsilon^g$ seems necessary. We find that the condition $\Vert v\Vert'_\delta\lesssim \varepsilon^{g'}$ is sufficient. Our condition on $S$ is of the form $S\geq S_0$ where $S_0$ depends only on the parameters $s_0,\,m,\,g,\,\ell,\,\ell'$ and $g',\,s_1\,,\delta$. Then $r$ depends only on these parameters and the constants $A_i$, $A'_i$ associated with the tame scales. In principle, all these constants could be made explicit, but we will not do it here. Let us just mention that one can take $S_0=\mathcal{O}\left(\frac{1}{g'-g}\right)$ as $g'\to g$, all other parameters remaining fixed. This follows from the inequality $\sigma<\zeta g/\eta$ in Lemma \ref{compatibility}. \medskip In the case of a tame right-invertible differential, we can restate our theorem in a form that allows direct comparison with \cite{TZ}: {\it Theorem 2.19 and Remarks 2.9, 2.14}. 
For this purpose, we consider two tame Banach scales $(V_s,\Vert\cdot\Vert_s)$ and $(W_s,\Vert\cdot\Vert'_s)$ with associated projectors $\Pi_\Lambda\,,\,\Pi'_\Lambda$, we take $\gamma>0$ and we introduce the norms $\vert \cdot\vert_{s}:=\varepsilon^\gamma\Vert \cdot\Vert_{s}$ and $\vert \cdot\vert'_{s}:=\varepsilon^\gamma\Vert \cdot\Vert'_{s}\,$. We then denote ${\mathfrak B}_{s}(\rho)=\left\{ u\ |\ \left\vert u\right\vert _{s}\leq \rho\right\}$ and we consider functions $F_{\varepsilon}$ of the form $F_\varepsilon(u)=\Phi_\varepsilon({\mathfrak a}_\varepsilon+u)-\Phi_\varepsilon({\mathfrak a}_\varepsilon)\,,$ where $\Phi_\varepsilon$ is defined on ${\mathfrak B}_{s_0+m}(2\varepsilon^\gamma)\,$ and ${\mathfrak a}_\varepsilon\in {\mathfrak B}_{S}(\varepsilon^\gamma)\,$ is chosen such that $v_\varepsilon:=-\Phi_\varepsilon({\mathfrak a}_\varepsilon)$ is very small. A point $u$ in $B_{s_0+m}$ satisfies $F_\varepsilon(u)=v_\varepsilon\,$ if and only if it solves the equation $\Phi_\varepsilon({\mathfrak a}_\varepsilon+u)=0$ in ${\mathfrak B}_{s_0+m}(\varepsilon^\gamma)\,.$ We make the following assumptions on $\Phi_\varepsilon$: \medskip {\it For some $\gamma>0$ and any $0<\varepsilon \leq 1$, the map $\,\Phi_{\varepsilon}\,:\; {\mathfrak B}_{s_{0}+m}(2\varepsilon^\gamma)\rightarrow W_{s_{0}}$ is G-differentiable with respect to $u$, and there are constants $a$, $b$ and $g>0$ such that:$\ $ \begin{itemize} \item for all $0<\varepsilon\leq 1$ and $\ s_{0}\leq s\leq S-m\,,$ if $u\in {\mathfrak B}_{s_{0}+m}(2\varepsilon^\gamma)\cap V_{s+m}$ and $h\in V_{s+m}\,,$ then $D\Phi_{\varepsilon}\left( u\right) h\in W_s\,,$ with the tame direct estimate \begin{equation} \left\vert D\Phi_{\varepsilon}\left( u\right) h\right\vert _{s}^{\prime}\leq a\left( \left\vert h\right\vert _{s+m}+\varepsilon^{-\gamma}\left\vert u\right\vert _{s+m} \left\vert h\right\vert _{s_{0}+m}\right) \label{tamedirectepsilonbis} \end{equation} \item for all $0<\varepsilon\leq 1$ and $ u\in {\mathfrak 
B}_{s_{0}+\max\{m,\ell\}}(2\varepsilon^\gamma)\,$, there is $L_{\varepsilon}\left( u\right) :\,W_{s_0+\ell^{\prime}}\rightarrow V_{s_0}$ linear, satisfying: \begin{equation} \forall k\in W_{s_0+\ell^{\prime}},\ \ \ D\Phi_{\varepsilon}\left( u\right) L_{\varepsilon}\left( u\right) k=k \label{rightinversebis} \end{equation} and for all $s_{0}\leq s\leq S-\max\left\{ \ell,\ell^{\prime}\right\}\,,$ if $ u\in {\mathfrak B}_{s_{0}+\max\{m,\ell\}}(2\varepsilon^\gamma)\cap V_{s+\ell}\,$ and $k\in W_{s+\ell^{\prime}}\,$, then $L_{\varepsilon}\left( u\right) k\in V_s\,,$ with the tame inverse estimate \begin{equation} \left\vert L_{\varepsilon}\left( u\right) k\right\vert _{s}\leq b\varepsilon^{-g}\left( \left\vert k\right\vert _{s+\ell^{\prime}}^{\prime}+\varepsilon^{-\gamma}\left\vert k\right\vert _{s_{0} +\ell^{\prime}}^{\prime}\left\vert u\right\vert _{s+\ell}\right) \label{tameinverseepsilonbis} \end{equation} \end{itemize} } Under these assumptions, the maps $F_\varepsilon:\,B_{s_0+m}\to W_{s_0}$ form an $S$-tame differentiable family for the ``old" norms $\Vert\cdot\Vert_s\,$, $\Vert\cdot\Vert'_s\,$. So the following result holds, as a direct consequence of our main theorem: \begin{corollary} \label{Cor9} Consider two tame Banach scales $(V_s,\vert\cdot\vert_{s})_{0\leq s\leq S}$ and $(W_s,\vert\cdot\vert'_{s})_{0\leq s\leq S}\,$, nonnegative constants $s_0,\, m,\ell,\,\ell',\,g,\,\gamma$, and two positive constants $a,\,b$. Take any $g^{\prime}>g$, $s_1\geq s_0+\max\{m,\ell\}$ and $\delta>s_1+\ell'$. 
For $S$ large enough and $r>0$ small, if a family of G-differentiable maps $\Phi_\varepsilon\,:\; {\mathfrak B}_{s_{0}+m}(2\varepsilon^\gamma)\rightarrow W_{s_{0}}$ $(0< \varepsilon \leq 1)$ satisfies (\ref{tamedirectepsilonbis},\ref{rightinversebis},\ref{tameinverseepsilonbis}), and, in addition, for some ${\mathfrak a}_\varepsilon\in {\mathfrak B}_{S}(\varepsilon^\gamma)\,,\,$ $\vert \Phi_\varepsilon({\mathfrak a}_\varepsilon)\vert _{\delta}^{\prime}\leq r\varepsilon^{\gamma+g^{\prime}}$, then there exists some $u_{\varepsilon}\in {\mathfrak B}_{s_1}(\varepsilon^\gamma)$ such that: \begin{align*} &\Phi_\varepsilon({\mathfrak a}_\varepsilon+u_{\varepsilon}) =0\ \\ &\vert u_{\varepsilon}\vert _{s_1} \leq r^{-1}\, \varepsilon^{-g^{\prime}} \vert \Phi_\varepsilon({\mathfrak a}_\varepsilon)\vert _{\delta}^{\prime} \end{align*} \end{corollary} In \cite{TZ} ({\it Theorem 2.19 and Remarks 2.9, 2.14}), the assumptions are stronger, since they involve the second derivative of $\Phi_\varepsilon$. More importantly, we only need the norm of $\Phi_\varepsilon({\mathfrak a}_\varepsilon)$ to be controlled by $\varepsilon^{\gamma+g'}$ with $g'>g$, provided $S\geq S_0$ with $S_0=\mathcal{O}\left(\frac{1}{g'-g}\right)$, while in \cite{TZ} (Assumption 2.15 and Remark 2.23), due to quadratic estimates, one needs $g'>2g$ with the faster growth $S_0=\mathcal{O}\left(\frac{1}{(g'-2g)^2}\right)$. \section{Proof of Theorem \ref{Thm8}} The proof consists in constructing a sequence $(u_{n})_{n\geq 1}$ which converges to a solution $u$ of $F_\varepsilon\left(u\right)=v$. At each step, in order to find $u_n$, we solve a nonlinear equation in a Banach space, using Theorem 2 in \cite{IE3}, which we restate below for the reader's convenience (the notation $||| \, L\, |||$ stands for the operator norm of any linear continuous map $L$ between two Banach spaces): \begin{theorem} \label{thm1} Let $X$ and $Y$ be Banach spaces.
Let $f:B_X(0,R)\rightarrow Y$ be continuous and G\^{a}teaux-differentiable, with $f\left( 0\right) =0$. Assume that the derivative $Df\left( u\right) $ has a right-inverse $L\left( u\right) $, uniformly bounded on the ball $B_X(0,R)$: \begin{align*} &\forall (u,k)\in B_X(0,R)\times Y\text{, \ }Df\left( u\right) L\left( u\right) \, k=k\ \\ &\sup\left\{ \, |||\, L\left( u\right) ||| \ :\ \left\Vert u\right\Vert_X < R\right\} <M\,. \end{align*} Then, for every $v\in Y$ with $\left\Vert v\right\Vert_Y < RM^{-1}$, there is some $u\in X$ satisfying: \[ f\left( u\right) =v\;\text{ and }\;\left\Vert u\right\Vert_X \leq M\left\Vert v\right\Vert_Y < R\,. \] \end{theorem} \medskip Note first that this is a local surjection theorem, not an inverse function theorem:\ with respect to the IFT, we lose uniqueness.\ On the other hand, the regularity requirement on $f$ and the smallness condition on $v$ are much weaker. As mentioned in the Introduction, for a $C^1$ functional in finite dimensions, this theorem was proved a long time ago by Wazewski \cite{W} by a continuation argument (we thank Sotomayor for drawing our attention to this result). For a comparison of the existence and uniqueness domains in the $C^2$ case with dim$\,X\,=\,$dim$\,Y\,$, see \cite{Hart}, chapter II, exercise 2.3.\bigskip It turns out that the proof of Theorem \ref{Thm8} is much easier if one assumes that the family $(DF_\varepsilon)$ is tame {\it Galerkin} right-invertible. But most applications require that $(DF_\varepsilon)$ be tame right-invertible. Let us explain why the proof is longer in this case. In our algorithm, we will use two sequences of projectors $\Pi_n:=\Pi(\Lambda_n)$ and $\Pi'_n:=\Pi'(M_n)$ with associated ranges $E_n=E(\Lambda_n)$ and $E'_n=E'(M_n)$, where $\Lambda_0^{\alpha}\approx \varepsilon^{-\eta}$ for some small $\eta>0$, $\Lambda_n=\Lambda_0^{\alpha^n}$ for some $\alpha>1$ close to $1$, and $M_n=\Lambda_n^{\vartheta}$ for some $\vartheta\leq 1$ such that $\vartheta\alpha>1$.
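The role of the condition $\vartheta\alpha>1$ can be checked on a small numerical example (all values invented): for $\Lambda_0>1$ it forces the interlacing $M_n<\Lambda_n<M_{n+1}$, so the cutoff of the projector $\Pi'_{n+1}$ used at the next step exceeds the current frequency cutoff $\Lambda_n$:

```python
# Toy check of the cutoff sequences Lambda_n = Lambda_0**(alpha**n) and
# M_n = Lambda_n**theta.  With theta*alpha > 1 and Lambda_0 > 1 we get
# M_n < Lambda_n < M_{n+1}: the cutoffs interlace.
alpha, theta, Lambda0 = 1.5, 0.8, 10.0    # theta * alpha = 1.2 > 1
Lam = [Lambda0 ** (alpha ** n) for n in range(5)]
M = [L ** theta for L in Lam]
for n in range(4):
    assert M[n] < Lam[n] < M[n + 1]
```

Indeed $\Lambda_n<M_{n+1}$ amounts to $\alpha^n<\vartheta\alpha^{n+1}$, i.e. exactly to $\vartheta\alpha>1$.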
The algorithm consists in finding, by induction on $n$ and using Theorem \ref{thm1} at each step, a solution $u_n\in E_n$ of the problem $\Pi'_nF_\varepsilon(u_n)=\Pi'_{n-1}v$. For this, we need $\Pi'_n DF_\varepsilon(u)_{\vert_{E_n}}$ to be right-invertible for $u$ in a certain ball $\mathcal{B}_n$, with estimates on the right inverse for a certain norm $\Vert\cdot\Vert_{\mathcal{N}_n}$. When the family $(DF_\varepsilon)$ is tame Galerkin right-invertible, we can take $\vartheta=1$ so that $M_n=\Lambda_n$, instead of assuming $\vartheta<1$. Then the right-invertibility of $\Pi'_n DF_\varepsilon(u)_{\vert_{E_n}}$ follows immediately from the definition. But when $(DF_\varepsilon)$ is only tame right-invertible, it is crucial to take $\vartheta<1$. The intuitive idea is the following. One can think of $DF_\varepsilon(u)$ as a very large right-invertible matrix. The topological argument we use requires $\Pi'_n DF_\varepsilon(u)_{\vert_{E_n}}$ to have a right-inverse for $u$ in a suitable ball. If we take $M_n=\Lambda_n$, this is like asking that a square submatrix of a right-invertible matrix be invertible. In general this is not true. But a rectangular submatrix, with more columns than rows, will be right-invertible if the full matrix is and if there are enough columns in the submatrix. This is why we impose $M_n<\Lambda_n$ when we do not assume tame Galerkin right-invertibility. In the sequel, we assume that the family $(DF_\varepsilon)$ is tame right-invertible, so we take $\vartheta<1$, and we point out the specific places where the arguments would be easier assuming, instead, that $(DF_\varepsilon)$ is tame {\it Galerkin} right-invertible. The sequence $u_n$ depends on a number of parameters $\eta, \alpha, \beta, \vartheta$ and $\sigma$ satisfying various conditions: in the first subsection we prove that these conditions are compatible. In the next one, we construct an initial point $u_1$ depending on $\eta,\,\alpha$ and $\vartheta$.
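The matrix intuition above can be made explicit in a minimal finite-dimensional toy example (the entries are invented and unrelated to the operators of the paper): a wide matrix with a right inverse whose leading square submatrix is singular, while keeping one extra column restores right-invertibility.

```python
# Toy model: D is 2x3 with full row rank, hence right-invertible on R^2.
D = [[0.0, 1.0, 1.0],
     [0.0, 0.0, 1.0]]

def det2(a, b, c, d):
    return a * d - b * c

# the 2x2 submatrix made of the first two columns is singular:
assert det2(D[0][0], D[0][1], D[1][0], D[1][1]) == 0.0

# with all three columns an explicit right inverse R (D R = identity) exists:
R = [[0.0, 0.0],
     [1.0, -1.0],
     [0.0, 1.0]]
DR = [[sum(D[i][k] * R[k][j] for k in range(3)) for j in range(2)]
      for i in range(2)]
assert DR == [[1.0, 0.0], [0.0, 1.0]]
```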
In the third one we construct, by induction, the remaining points $u_n$ which also depend on $\beta$ and $\sigma$. Finally we prove that the sequence $(u_n)$ converges to a solution $u$ of the problem, satisfying the desired estimates.\medskip \subsection{Choosing the values of the parameters} We are given $s_1\geq s_{0}+\max\left\{ m,\ell\right\},$ $\delta> s_1 +\ell'$ and $g'>g$. These are fixed throughout the proof.\medskip We introduce positive parameters $\eta,\alpha,\beta,\vartheta$ and $\sigma$ satisfying the following conditions: \begin{align} \eta\, & <\frac{g'-g}{\max\left\{ \vartheta \ell',\ell\right\}}\label{eq:i0}\\ \frac{1}{\alpha} & <\vartheta<1\label{eq:i1}\\ \left(1-\vartheta\right)\left(\sigma-\delta\right) & >\vartheta m+\max\left\{ \ell,\vartheta \ell'\right\} +\frac{g}{\eta}\label{eq:i2}\\ \sigma & >\alpha\beta+s_1\label{eq:i4}\\ \left(1+\alpha-\vartheta\alpha\right)\left(\sigma-s_{0}\right) & >\alpha\beta+\alpha\left(m+\ell\right)+\ell'+\frac{g}{\eta}\label{eq:i5}\\ \left(1-\vartheta\right)\left(\sigma-s_{0}\right) & >m+\vartheta \ell'+\frac{g}{\alpha\eta}\label{eq:i6}\\ \delta & >s_{0}+\frac{\alpha}{\vartheta}\left(\sigma-s_{0}-\alpha\beta+\ell"\right)\label{eq:i7}\\ \left(\alpha-1\right)\beta & >\left(1-\vartheta\right)\left(\sigma-s_{0}\right)+\vartheta m+\ell"+\frac{g}{\eta}\label{eq:i8}\\ \ell" & =\max\left\{ \left(\alpha-1\right)\ell+\ell',\alpha\vartheta \ell'\right\} \label{eq:i9} \end{align} Note that condition (\ref{eq:i2}) implies that $\delta < \sigma $ . 
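Before proving that these constraints are compatible, one can test them numerically on a concrete sample of parameter values (all invented, with $\vartheta=\alpha^{-1/2}$ as in Lemma \ref{compatibility} below):

```python
# Numerical compatibility check of constraints (i0)-(i9) for one invented
# sample: s0=0, m=l=l'=g=1, g'=2, s1=1, delta=3.  Purely illustrative.
s0, m, l, lp = 0.0, 1.0, 1.0, 1.0        # lp stands for l'
g, gp, s1, delta = 1.0, 2.0, 1.0, 3.0    # gp stands for g'
eta, alpha = 0.1, 1.1
theta = alpha ** -0.5                    # the choice made in the lemma below
sigma, tau = 280.0, 0.2
beta = (sigma - s1 - tau) / alpha        # choice (i10) made in the proof
lpp = max((alpha - 1) * l + lp, alpha * theta * lp)                  # (i9)

assert eta < (gp - g) / max(theta * lp, l)                           # (i0)
assert 1 / alpha < theta < 1                                         # (i1)
assert (1 - theta) * (sigma - delta) > theta * m + max(l, theta * lp) + g / eta   # (i2)
assert sigma > alpha * beta + s1                                     # (i4)
assert (1 + alpha - theta * alpha) * (sigma - s0) > \
       alpha * beta + alpha * (m + l) + lp + g / eta                 # (i5)
assert (1 - theta) * (sigma - s0) > m + theta * lp + g / (alpha * eta)            # (i6)
assert delta > s0 + (alpha / theta) * (sigma - s0 - alpha * beta + lpp)           # (i7)
assert (alpha - 1) * beta > (1 - theta) * (sigma - s0) + theta * m + lpp + g / eta  # (i8)
```

Note how large $\sigma$ must be taken: the terms $g/\eta$ in (\ref{eq:i2}), (\ref{eq:i6}) and (\ref{eq:i8}) drive $\sigma$ to size $\mathcal{O}(g/\eta)$, consistently with the bound $\sigma<\zeta g/\eta$ in Lemma \ref{compatibility}.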
Note also that condition (\ref{eq:i7}) may be rewritten as $$\beta >\frac{1}{\alpha}\left(\sigma-\delta\right)+\left(1-\frac{\vartheta}{\alpha}\right)\frac{\delta-s_{0}}{\alpha}+\frac{\ell"}{\alpha}$$ which implies the simpler inequality \begin{equation}\beta >\frac{1}{\alpha}\left(\sigma-\delta\right)\label{eq:i3} \end{equation} Inequality (\ref{eq:i3}) will also be used in the proof.\medskip If we assume tame Galerkin right-invertibility instead of tame right-invertibility, we can replace condition (\ref{eq:i2}) by the weaker condition $\delta < \sigma$, we do not need conditions (\ref{eq:i5}), (\ref{eq:i6}) any more, and we can take $\vartheta=1$ instead of $\vartheta<1$. \bigskip \begin{lem} \label{compatibility} The set of parameters $\left(\eta,\alpha,\beta,\vartheta,\sigma\right)$ satisfying the above conditions is non-empty. More precisely, there are some ${\alpha}>1$ and ${\zeta}>0$ depending only on $(s_0,\,m,\,\ell,\,\ell',\,s_1,\,\delta)$, such that, for $\vartheta={\alpha}^{-1/2}$ and for every $\eta>0$, there exist $(\beta,\sigma)$ with $\sigma<{\zeta}g/\eta$ such that the constraints (\ref{eq:i2}) to (\ref{eq:i9}) are satisfied. \end{lem} \begin{proof} Since $\delta>s_{1}+\ell'$, and $\ell"\rightarrow \ell'$ when $\alpha$ and $\vartheta\rightarrow1$, it is possible to choose $\vartheta$ and $\alpha=\vartheta^{-2}$ close enough to $1$ so that $\delta>s_{0}+\frac{\alpha}{\vartheta}\left(s_1-s_{0}+\ell"\right)$. Take some $\tau$ with $0<\tau<\frac{\vartheta}{\alpha}\left(\delta-s_{0}\right)-s_1+s_{0}-\ell"\text{, }$ and set: \begin{equation} \beta=\frac{\sigma}{\alpha}-\frac{s_1+\tau}{\alpha}\label{eq:i10} \end{equation} Then conditions (\ref{eq:i1}), (\ref{eq:i4}) and (\ref{eq:i7}) are satisfied. Note that $\alpha,\vartheta$ and $\tau$ depend only on $\delta$. The remaining inequalities are constraints on $\beta $ and $\sigma $. 
They can be rewritten as follows: \begin{align} \sigma & >\delta+\frac{1}{1-\vartheta}\left[\vartheta m+\max\left\{ \ell,\vartheta \ell'\right\} +\frac{g}{\eta}\right]\label{eq:i11}\\ \beta< & \left(\frac{1}{\alpha}+1-\vartheta\right)\sigma-m-\ell-\frac{\ell'}{\alpha}-\left(\frac{1}{\alpha}+1-\vartheta\right)s_{0}-\frac{g}{\alpha\eta}\label{eq:i13}\\ \sigma & >s_{0}+\frac{1}{1-\vartheta}\left(m+\vartheta \ell'+\frac{g}{\alpha\eta}\right)\label{eq:i14}\\ \beta & >\frac{1-\vartheta}{\alpha-1}\sigma+\frac{1}{\alpha-1}\left(\vartheta m+\ell"+\frac{g}{\eta}-\left(1-\vartheta\right)s_{0}\right)\label{eq:i15} \end{align} These inequalities define half-planes in the $ (\beta, \sigma)$-plane. Since $\alpha\vartheta>1$, the slopes in (\ref{eq:i10}), (\ref{eq:i13}) and (\ref{eq:i15}) are ordered as follows: \[ 0<\frac{1-\vartheta}{\alpha-1} <\frac{1}{\alpha}<\frac{1}{\alpha}+1-\vartheta<1 \] As a consequence, for the chosen values of $\alpha,\vartheta$ and $\tau$, the domain defined by these three conditions in the $\left(\beta,\sigma\right)$-plane is an infinite half-line stretching to the North-West. The remaining two, (\ref{eq:i11}) and (\ref{eq:i14}), just tell us that $\sigma$ should be large enough. So the set of solutions is of the form $\sigma>\bar{\sigma}$, $\beta=\frac{\sigma}{\alpha}-\frac{s_1+\tau}{\alpha}$, where $\bar{\sigma}$ is clearly a piecewise affine function of $g/\eta$. So one can choose $\sigma < \zeta g / \eta$ for some constant $\zeta\,.$ \end{proof} {\bf Remark.} {\it As already mentioned, if we assume that $(DF_\varepsilon)$ is tame Galerkin right-invertible, (\ref{eq:i2}) can be replaced by the condition $\delta<\sigma$, and (\ref{eq:i5}) and (\ref{eq:i6}) are not needed. The remaining conditions can be satisfied by taking $\vartheta=1$ and for a larger set of the other parameters. The corresponding variant of Lemma \ref{compatibility} has a simpler proof.
We can choose $\alpha>1$ such that $\delta>s_{0}+\alpha\left(s_1-s_{0}+\ell"\right)$ and $\tau$ such that $0<\tau< \frac{1}{\alpha}\left(\delta-s_{0}\right)-s_1+s_{0}-\ell"$, and we may impose condition (\ref{eq:i10}). Then conditions (\ref{eq:i11}), (\ref{eq:i13}) and (\ref{eq:i14}) are no longer required, and the last conditions $\delta<\sigma$ and (\ref{eq:i15}) are easily satisfied by taking $\sigma$ large enough.} \bigskip The values $\left(\eta,\alpha,\beta,\vartheta,\sigma\right)$ are now fixed. For the remainder of the proof we introduce an important piece of notation. By \[ x\lesssim y \] we mean that there is some constant $C$ such that $x\leq Cy$. This constant depends on $\,A_i,\,A'_i,\,a,\,b,\,s_0,\,m,\,\ell,\,\ell',\,g,\,g',\,\,s_1,\,\delta$ and our additional parameters $\left(\eta,\alpha,\beta,\vartheta,\sigma\right)$, but NOT on $\varepsilon$, nor on the regularity index $s\in [0,S]$ or the rank $n$ in any of the sequences which will be introduced in the sequel. For instance, the tame inequalities become: \begin{align*} \left\Vert DF_{\varepsilon}\left(u\right)h\right\Vert _{s}' & \lesssim\left(\left\Vert u\right\Vert _{s+m}\left\Vert h\right\Vert _{s_{0}+m}+\left\Vert h\right\Vert _{s+m}\right)\\ \left\Vert L_{\varepsilon}\left(u\right)k\right\Vert _{s} & \lesssim\varepsilon^{-g}\left(\left\Vert u\right\Vert _{s+\ell}\left\Vert k\right\Vert _{s_{0}+\ell'}'+\left\Vert k\right\Vert _{s+\ell'}'\right) \end{align*} In the iteration process, we will need the following result: \begin{lem} \label{boundF} If the maps $F_\varepsilon$ form an $S$-tame differentiable family and $F_\varepsilon\left( 0\right) =0$, then, for $u\in B_{s_0+m}\cap V_{s+m}$ and $s_{0}\leq s\leq S-m$, we have: \[ \left\Vert F_\varepsilon\left( u\right) \right\Vert _{s}'\lesssim \left\Vert u\right\Vert _{s+m} \] \end{lem} \begin{proof} Consider the function $\varphi\left( t\right) =\left\Vert F_\varepsilon \left( tu\right) \right\Vert _{s}'$.
Since $F_\varepsilon$ is G-differentiable, we have: \begin{align*} \varphi^{\prime}\left( t\right) & =\left\langle DF_\varepsilon\left( tu\right) u,\frac{F_{\varepsilon}\left( tu\right) }{\left\Vert F_{\varepsilon}\left( tu\right) \right\Vert _{s}'}\right\rangle _{s}\\ & \leq a\left( t\left\Vert u\right\Vert _{s_{0}+m}\left\Vert u\right\Vert _{s+m}+\left\Vert u\right\Vert _{s+m}\right) \end{align*} and since $\varphi\left( 0\right) =0$, we get the result. \end{proof} \subsection{Initialization} \subsubsection{Defining appropriate norms.} This subsection uses condition (\ref{eq:i1}) and the inequalities $s_1+\ell'<\delta<\sigma$, the second of which, as already noted, follows from (\ref{eq:i2}). \smallskip We are given $\left(\eta,\alpha,\vartheta,\delta,\sigma\right)$. We fix a large constant $K>1$, to be chosen later independently of $0<\varepsilon\leq 1$.\smallskip We set $\Lambda_{0}=(K\varepsilon^{-\eta})^{1/\alpha}$, $\Lambda_{1}:=(\Lambda_{0})^{\alpha}=K\varepsilon^{-\eta}$, $M_{0}:= (\Lambda_{0})^{\vartheta}=(K\varepsilon^{-\eta})^{\vartheta/\alpha}$ and $M_{1}:= (\Lambda_{1})^{\vartheta}=(K\varepsilon^{-\eta})^{\vartheta}$. We then have the inequalities $M_0<\Lambda_{0} < M_{1} <\Lambda_{1}\,.$\smallskip Let $\,E_1:=E(\Lambda_1)\,,\Pi_1:=\Pi(\Lambda_1)\,,\;E'_1:=E'(M_1)\,$ and $\Pi'_i:=\Pi'(M_i)\,$ for $i=0,\,1\,.$\smallskip We choose the following norms on $E_1$, $E'_1$: \begin{align*} \left\Vert h\right\Vert _{\mathcal{N}_{1}} & :=\left\Vert h\right\Vert _{\delta}+\Lambda_{1}^{-\frac{\vartheta}{\alpha}\left(\sigma-\delta\right)}\left\Vert h\right\Vert _{\sigma}\\ \left\Vert k\right\Vert'_{\mathcal{N}_{1}} & :=\left\Vert k\right\Vert _{\delta}'+\Lambda_{1}^{-\frac{\vartheta}{\alpha}\left(\sigma-\delta\right)}\left\Vert k\right\Vert _{\sigma}' \end{align*} Endowed with these norms, $E_1$ and $E'_1$ are Banach spaces.
We shall use the notation $||| \, L\, |||_{_{\mathcal{N}_{1}}}$ for the operator norm of any linear continuous map $L$ from the Banach space $E'_1$ to a Banach space that can be either $E_1$ or $E'_1$.\medskip The map $F_{\varepsilon}$ induces a map $f_{1}:\,B_{s_0+m}\cap E_1\rightarrow E'_1$ defined by \[ f_{1}\left(u\right):=\Pi'_1 F_{\varepsilon}\left(u\right) \] for $u\in B_{s_0+m}\cap E_1$. Note that $f_{1}\left(0\right)=0$. We will use the local surjection theorem to show that the range of $f_{1}$ covers a neighbourhood of $0$ in $E'_1$. We begin by showing that $Df_{1}$ has a right inverse. {\it Note that, if we assume that $DF$ is tame Galerkin right-invertible, we can take $M_1=\Lambda_1\geq\underline{\Lambda}$, and $Df_1$ is automatically right-invertible, with the tame estimate (\ref{galtamegalinverseepsilon}). So the next subsection is only necessary if we assume that $DF$ is tame right-invertible.} \subsubsection{\label{subsec:-i1}$Df_{1}(u)$ has a right inverse for $\Vert u\Vert_{\mathcal{N}_1}\leq 1\,$.} This subsection uses condition (\ref{eq:i2}). We recall it here for the reader's convenience: \[ \left(1-\vartheta\right)\left(\sigma-\delta\right)>\vartheta m+\max\left\{ \ell,\vartheta {\ell}'\right\} +\frac{g}{\eta} \] \begin{lem} \label{lem:i1} For $K$ large enough and for all $u\in E_1$ with $\left\Vert u\right\Vert _{\mathcal{N}_{1}}\leq 1$: \[ |||\, \Pi'_1DF_{\varepsilon}\left(u\right)\left(1-\Pi_1\right)L_{\varepsilon}\left(u\right)|||_{_{\mathcal{N}_{1}}}\leq\frac{1}{2} \] \end{lem} \begin{proof} From $\left\Vert u\right\Vert _{\mathcal{N}_{1}}\leq1$, it follows that $\left\Vert u\right\Vert _{\delta}\leq1$, and since $\delta>s_{0}+\max\left\{ \ell,m\right\} +\ell'$, the tame estimates hold at $u$. Take any $k\in E_1'$ and set $h=\left(1-\Pi_1\right)L_{\varepsilon}\left(u\right)k$. 
We have $\left\Vert h\right\Vert _{\delta}\lesssim \Lambda_{1}^{\delta-\sigma}\left\Vert L_{\varepsilon}\left(u\right)k\right\Vert _{\sigma}$, and: \begin{align*} \left\Vert \Pi_1'DF_{\varepsilon}\left(u\right)h\right\Vert _{\delta-m}' & \lesssim\left\Vert h\right\Vert _{s_{0}+m}\left\Vert u\right\Vert _{\delta}+\left\Vert h\right\Vert _{\delta}\lesssim\left\Vert h\right\Vert _{\delta}\\ \left\Vert \Pi_1'DF_{\varepsilon}\left(u\right)h\right\Vert _{\delta}' & \lesssim M_{1}^{m}\left\Vert \Pi_1'DF_{\varepsilon}\left(u\right)h\right\Vert _{\delta-m}'\lesssim M_{1}^{m}\left\Vert h\right\Vert _{\delta}\,. \end{align*} Hence: \[ \left\Vert \Pi_1'DF_{\varepsilon}\left(u\right)h\right\Vert _{\delta}'\lesssim M_{1}^{m}\Lambda_{1}^{\delta-\sigma}\left\Vert L_{\varepsilon}\left(u\right)k\right\Vert _{\sigma}\,. \] Writing $\left\Vert \Pi_1'DF_{\varepsilon}\left(u\right)h\right\Vert _{\sigma}'\lesssim M_{1}^{\sigma-\delta}\left\Vert \Pi_1'DF_{\varepsilon}\left(u\right)h\right\Vert _{\delta}'$ we finally get: \begin{equation} \left\Vert \Pi_1'DF_{\varepsilon}\left(u\right)h\right\Vert _{\mathcal{N}_{1}}'\lesssim M_{1}^{m}\Lambda_{1}^{\delta-\sigma}\left(1+\Lambda_{1}^{-\frac{\vartheta}{\alpha}\left(\sigma-\delta\right)}M_{1}^{\sigma-\delta}\right)\left\Vert L_{\varepsilon}\left(u\right)k\right\Vert _{\sigma}\,. \label{eq:i20} \end{equation} We now have to estimate $\left\Vert L_{\varepsilon}\left(u\right)k\right\Vert _{\sigma}$.
By the tame estimates, we have: \begin{align*} \left\Vert L_{\varepsilon}\left(u\right)k\right\Vert _{\sigma} & \lesssim\varepsilon^{-g}\left(\left\Vert k\right\Vert _{\sigma+\ell'}'+\left\Vert u\right\Vert _{\sigma+\ell}\left\Vert k\right\Vert _{s_{0}+\ell'}'\right)\\ & \lesssim\varepsilon^{-g}\left(M_{1}^{\ell'}\left\Vert k\right\Vert _{\sigma}'+\Lambda_{1}^{\ell}\left\Vert u\right\Vert _{\sigma}\left\Vert k\right\Vert _{\delta}'\right) \end{align*} Since $\left\Vert u\right\Vert _{\mathcal{N}_{1}}\leq1$, we have $\left\Vert u\right\Vert _{\sigma}\leq \Lambda_{1}^{\frac{\vartheta}{\alpha}\left(\sigma-\delta\right)}\,$. Substituting, we get: \begin{align} \left\Vert L_{\varepsilon}\left(u\right)k\right\Vert _{\sigma} & \lesssim\varepsilon^{-g}\left(M_{1}^{\ell'}\left\Vert k\right\Vert _{\sigma}'+\Lambda_{1}^{\frac{\vartheta}{\alpha}\left(\sigma-\delta\right)+\ell}\left\Vert k\right\Vert _{\delta}'\right)\nonumber \\ & \lesssim\varepsilon^{-g}\Lambda_{1}^{\frac{\vartheta}{\alpha}\left(\sigma-\delta\right)}\left ( M_{1}^{\ell'}+\Lambda_{1}^{\ell}\right) \left\Vert k\right\Vert _{\mathcal{N}_{1}}'\label{eq:i21} \end{align} Putting (\ref{eq:i20}) and (\ref{eq:i21}) together, we get: \[ \left\Vert \Pi_1'DF_{\varepsilon}\left(u\right)h\right\Vert _{\mathcal{N}_{1}}'\lesssim\varepsilon^{-g}M_{1}^{m}\Lambda_{1}^{\delta-\sigma}\left(\Lambda_{1}^{\frac{\vartheta}{\alpha}\left(\sigma-\delta\right)}+M_{1}^{\sigma-\delta}\right)\left ( M_{1}^{\ell'}+\Lambda_{1}^{\ell}\right) \left\Vert k\right\Vert _{\mathcal{N}_{1}}' \] Since $\alpha>1$, we have $\Lambda_{1}^{\frac{\vartheta}{\alpha}\left(\sigma-\delta\right)}\leq \Lambda_{1}^{\vartheta\left(\sigma-\delta\right)}= M_{1}^{\sigma-\delta}$, so that: \begin{align*} \left\Vert \Pi_1'DF_{\varepsilon}\left(u\right)h\right\Vert _{\mathcal{N}_{1}}' & \lesssim\varepsilon^{-g}M_{1}^{m}\Lambda_{1}^{\delta-\sigma}M_{1}^{\sigma-\delta}\left ( M_{1}^{\ell'}+\Lambda_{1}^{\ell}\right) \left\Vert k\right\Vert
_{\mathcal{N}_{1}}'\\ & \lesssim\varepsilon^{-g}\Lambda_{1}^{\vartheta m-\left(1-\vartheta\right)\left(\sigma-\delta\right)+\max\{ \ell,\vartheta \ell'\} }\left\Vert k\right\Vert _{\mathcal{N}_{1}}' \end{align*} Since $\Lambda_{1}=K \varepsilon^{-\eta}$, the inequality becomes: \[ \left\Vert \Pi_1'DF_{\varepsilon}\left(u\right)h\right\Vert _{\mathcal{N}_{1}}'\lesssim K^{-C_0}\varepsilon^{-g+\eta C_0 }\left\Vert k\right\Vert _{\mathcal{N}_{1}}' \] with $C_0:=\left(1-\vartheta\right)\left(\sigma-\delta\right)-\vartheta m-\max\{\ell,\vartheta \ell'\}$. By condition (\ref{eq:i2}), the exponent $C_0$ is larger than $g/\eta$, and the proof follows by choosing $K$ large enough independently of $0<\varepsilon\leq 1$. \end{proof} Introduce the map $\mathcal{L}_{1}\left(u\right)=\Pi_1 L_{\varepsilon}\left(u\right)_{\vert_{E'_1}}$. Since $DF_{\varepsilon}\left(u\right)L_{\varepsilon}\left(u\right)=1$, it follows from Lemma \ref{lem:i1} that, for $k\in E_1'$, $u\in E_1$ and $\left\Vert u\right\Vert _{\mathcal{N}_{1}}\leq 1\,$, we have: \[ \left\Vert k-Df_{1}\left(u\right)\mathcal{L}_{1}\left(u\right)k\right\Vert'_{\mathcal{N}_{1}}\leq\frac{1}{2}\left\Vert k\right\Vert'_{\mathcal{N}_{1}} \] This implies that the Neumann series $\sum_{i\geq 0} \left(I_{E'_1}-Df_{1}\left(u\right)\mathcal{L}_{1}\left(u\right)\right)^i$ converges in operator norm. Its sum is $S_1(u)=\left(Df_{1}(u)\mathcal{L}_{1}(u)\right)^{-1}$ and it has operator norm at most $2$. Then $T_{1}\left(u\right):=\mathcal{L}_1(u)S_1(u)$ is a right inverse of $Df_{1}\left(u\right)$ and $|||\, T_{1}\left(u\right)|||_{_{\mathcal{N}_{1}}}\leq 2\,|||\, \mathcal{L}_{1}\left(u\right)|||_{_{\mathcal{N}_{1}}}$.
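The Neumann-series construction of the right inverse $T_1(u)$ can be illustrated by a finite-dimensional toy computation (all matrices invented): an approximate right inverse $L$ with $\Vert I-AL\Vert\leq 1/2$ is corrected into an exact one by $T=L\sum_{i\geq 0}(I-AL)^i$.

```python
# Toy version of the Neumann-series argument for 2x2 matrices.
def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

I2 = [[1.0, 0.0], [0.0, 1.0]]
A = [[2.0, 1.0], [0.0, 2.0]]
L = [[0.5, -0.2], [0.0, 0.5]]                # approximate right inverse of A
AL = mat_mul(A, L)
E = [[I2[i][j] - AL[i][j] for j in range(2)] for i in range(2)]  # I - A L

S, P = I2, I2                                # S accumulates sum of powers of E
for _ in range(60):                          # truncated Neumann series
    P = mat_mul(P, E)
    S = [[S[i][j] + P[i][j] for j in range(2)] for i in range(2)]
T = mat_mul(L, S)                            # corrected right inverse
AT = mat_mul(A, T)
assert all(abs(AT[i][j] - I2[i][j]) < 1e-9 for i in range(2) for j in range(2))
```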
By the tame estimates, if $u\in E_1\,,$ $\left\Vert u\right\Vert _{\mathcal{N}_{1}}\leq 1$ and $k\in E_1'$, we have: \begin{align*} \left\Vert \mathcal{L}_{1}\left(u\right)k\right\Vert _{\delta}\lesssim\left\Vert L_{\varepsilon}\left(u\right)k\right\Vert _{\delta} & \lesssim\varepsilon^{-g}\left(\left\Vert k\right\Vert _{\delta+\ell'}'+\left\Vert u\right\Vert _{\delta+\ell}\left\Vert k\right\Vert _{s_{0}+\ell'}'\right)\\ & \lesssim\varepsilon^{-g}\left(M_{1}^{\ell'}+\Lambda_{1}^{\ell}\right)\left\Vert k\right\Vert _{\delta}' \end{align*} Combining with (\ref{eq:i21}), we find: \[ \sup_{\Vert u\Vert_{\mathcal{N}_1}\leq 1} |||\, T_{1}\left(u\right)|||_{_{\mathcal{N}_{1}}}\lesssim\varepsilon^{-g}\left(M_{1}^{\ell'}+\Lambda_{1}^{\ell}\right) =: m_1 \] \subsubsection{Local inversion of $f_{1}$.} \leavevmode\par \noindent Applying Theorem \ref{thm1}, we find that if $\left\Vert \Pi'_0 v\right\Vert'_{\mathcal{N}_{1}} < 1/m_1$, then the equation $f_{1}\left(u\right)=\Pi'_0 v$ has a solution $u_1 \in E_1$ with $\left\Vert u_1\right\Vert _{\mathcal{N}_{1}}\leq 1$ and $\left\Vert u_1\right\Vert _{\mathcal{N}_{1}}\leq m_1 \left\Vert \Pi'_0 v\right\Vert'_{\mathcal{N}_{1}}$. Note that $\left\Vert \Pi'_0 v\right\Vert _{\sigma}'\lesssim M_{0}^{\sigma-\delta}\left\Vert \Pi'_0 v\right\Vert _{\delta}'\lesssim \Lambda_{1}^{\frac{\vartheta}{\alpha}\left(\sigma-\delta\right)}\left\Vert \Pi'_0 v\right\Vert _{\delta}'$. It follows that \[ \left\Vert \Pi'_0 v\right\Vert' _{\mathcal{N}_{1}}=\left\Vert \Pi'_0 v\right\Vert _{\delta}'+\Lambda_{1}^{-\frac{\vartheta}{\alpha}\left(\sigma-\delta\right)}\left\Vert \Pi'_0 v\right\Vert _{\sigma}'\lesssim\left\Vert \Pi'_0 v\right\Vert _{\delta}' \] \medskip Assume from now on: \begin{equation} \left\Vert v\right\Vert _{\delta}'\lesssim\varepsilon^{g}\left(M_{1}^{\ell'}+\Lambda_{1}^{\ell}\right)^{-1}\label{v_petit} \end{equation} Then $\left\Vert \Pi'_0 v\right\Vert'_{\mathcal{N}_{1}}\lesssim m_1^{-1}$, and Theorem \ref{thm1} applies.
The estimate on $u_1$ implies:
\begin{equation}
\left\Vert u_1\right\Vert _{\delta} \lesssim\varepsilon^{-g}\left(M_{1}^{\ell'}+\Lambda_{1}^{\ell}\right)\left\Vert v\right\Vert _{\delta}'\leq 1\label{u_1_delta}
\end{equation}
It also implies an estimate in higher norm:
\begin{equation}
\left\Vert u_1\right\Vert _{\sigma} \lesssim\varepsilon^{-g}\Lambda_{1}^{\frac{\vartheta}{\alpha}\left(\sigma-\delta\right)}\left(M_{1}^{\ell'}+\Lambda_{1}^{\ell}\right)\left\Vert v\right\Vert'_{\delta} \lesssim\, \Lambda_{1}^{\frac{\vartheta}{\alpha}\left(\sigma-\delta\right)} \;.\label{u_1_sigma}
\end{equation}
\subsection{Induction.}
\subsubsection{Finding uniform bounds.}
In addition to $\left(\alpha,\vartheta,\delta,\varepsilon,\eta\right)$ we are given $\beta$ satisfying relations (\ref{eq:i4}) and (\ref{eq:i3}). We recall them here for the reader's convenience. With $s_{1}\geq s_{0}+\max\left\{ m,\ell\right\} $ and $\delta>s_{1}+\ell'$,
\begin{align*}
\sigma & >\alpha\beta+s_{1}\\
\beta & >\frac{1}{\alpha}\left(\sigma-\delta\right)
\end{align*}
We also inherit $\Lambda_{1}=K\varepsilon^{-\eta}$ and $u_1$ from the preceding section. Combining (\ref{eq:i3}) and (\ref{u_1_sigma}), we immediately obtain the estimate
\begin{equation}
\left\Vert u_1\right\Vert _{\sigma} \lesssim\varepsilon^{-g}\Lambda_{1}^{\beta}\left(M_{1}^{\ell'}+\Lambda_{1}^{\ell}\right)\left\Vert v\right\Vert'_{\delta} \,\lesssim \,\Lambda_{1}^{\beta} \;.\label{u_1_sigma_beta}
\end{equation}
Consider the sequences $M_{n}$ and $\Lambda_{n}$, $n\geq1$, defined by $\Lambda_{n}:=\Lambda_{1}^{\text{\ensuremath{\alpha^{n-1}}}}$ and $M_{n}:=\Lambda_{n}^{\vartheta}$.
\medskip
Let $\Pi_n:=\Pi(\Lambda_n)\,,\;\Pi'_n:=\Pi'(M_n)\,,\;E_n:=E(\Lambda_n)\,,\;E'_n:=E'(M_n)\,.$\medskip
We will construct a sequence $u_{n}\in E_{n},\,n\geq1,$ starting from the initial point $u_{1}$ we found in the preceding section.
For all $n\geq2$ the remaining points should satisfy the following conditions:
\begin{align}
\Pi_n'F_{\varepsilon}\left(u_{n}\right) & =\Pi_{n-1}'v\label{eq:i17}\\
\left\Vert u_{n}-u_{n-1}\right\Vert _{s_{0}} & \leq\varepsilon^{-g}\Lambda_{n-1}^{\alpha\beta-\sigma+s_{0}}\left(M_{1}^{\ell'}+\Lambda_{1}^{\ell}\right)\left\Vert v\right\Vert _{\delta}'\label{eq:i18}\\
\left\Vert u_{n}-u_{n-1}\right\Vert _{\sigma} & \leq\varepsilon^{-g}\Lambda_{n-1}^{\alpha\beta}\left(M_{1}^{\ell'}+\Lambda_{1}^{\ell}\right)\left\Vert v\right\Vert _{\delta}'\label{eq:i19}
\end{align}
We proceed by induction. Suppose we have found $u_{2},...,u_{n-1}$ satisfying these conditions. We want to construct $u_{n}.$
\begin{lem}
\label{lem:i5}Let us impose $K\geq 2$. For all $t$ with $s_{0}\leq t <\sigma-\alpha\beta$, we have:
\[
\sum_{i=2}^{n-1}\left\Vert u_{i}-u_{i-1}\right\Vert _{t}\leq \varepsilon^{-g}\left(M_{1}^{\ell'}+\Lambda_{1}^{\ell}\right)\Sigma\left(t\right)\left\Vert v\right\Vert _{\delta}'
\]
where $\Sigma\left(t\right)$ is finite and independent of $n\,,\,\varepsilon$.
\end{lem}
\begin{proof}
By the interpolation formula, we have $\left\Vert u_{i}-u_{i-1}\right\Vert _{t}\leq\varepsilon^{-g}\,\Lambda_{i-1}^{\alpha\beta-\sigma+t}\left(M_{1}^{\ell'}+\Lambda_{1}^{\ell}\right)\left\Vert v\right\Vert _{\delta}'\,$, for all $2\leq i\leq n-1\,$.
Since $\Lambda_{1}=K\varepsilon^{-\eta}\geq 2$, we have:
\begin{align*}
\sum_{i=2}^{n-1}\left\Vert u_{i}-u_{i-1}\right\Vert _{t} & \leq\varepsilon^{-g}\left(M_{1}^{\ell'}+\Lambda_{1}^{\ell}\right)\sum_{i=2}^{\infty}\Lambda_{i-1}^{\alpha\beta-\sigma+t}\left\Vert v\right\Vert _{\delta}'\\
 & \leq\varepsilon^{-g} \left(M_{1}^{\ell'}+\Lambda_{1}^{\ell}\right)\sum_{j=0}^{\infty} 2^{\alpha^{j}\left(\alpha\beta-\sigma+t\right)}\left\Vert v\right\Vert _{\delta}'
\end{align*}
Since $\alpha>1$ and $\alpha\beta-\sigma+t<0$, the last series converges, and its sum $\Sigma\left(t\right)$ depends neither on $n$ nor on $\varepsilon$.
\end{proof}
By (\ref{eq:i4}) we can take $t=s_{1}$, and we find a uniform bound for $u_{n-1}$ in the $s_{1}$-norm, namely:
\begin{align*}
\left\Vert u_{n-1}\right\Vert _{s_{1} } & \leq\left\Vert u_{1}\right\Vert _{\delta}+\sum_{i=2}^{n-1}\left\Vert u_{i}-u_{i-1}\right\Vert _{s_{1} }\\
 & \lesssim\varepsilon^{-g}\left(M_{1}^{\ell'}+\Lambda_{1}^{\ell}\right)\left(1+\Sigma(s_{1})\right)\left\Vert v\right\Vert _{\delta}'\\
 & \lesssim\varepsilon^{-g}\left(M_{1}^{\ell'}+\Lambda_{1}^{\ell}\right)\left\Vert v\right\Vert _{\delta}'
\end{align*}
In particular, we will have $\left\Vert u_{n-1}\right\Vert _{s_{1} }\leq 1$ if $\left\Vert v\right\Vert _{\delta}'\lesssim\varepsilon^{g}\left(M_{1}^{\ell'}+\Lambda_{1}^{\ell}\right)^{-1}$, so the tame estimates hold at $u_{n-1}.$ Similarly, if $\left\Vert v\right\Vert _{\delta}'\lesssim\varepsilon^{g}\left(M_{1}^{\ell'}+\Lambda_{1}^{\ell}\right)^{-1}$ we find a uniform bound in the $\sigma$-norm.
We have:
$$\left\Vert u_{n-1}\right\Vert _{\sigma} \leq\left\Vert u_{1}\right\Vert _{\sigma}+\sum_{i=2}^{n-1}\left\Vert u_{i}-u_{i-1}\right\Vert _{\sigma}$$
and
$$\sum_{i=2}^{n-1}\left\Vert u_{i}-u_{i-1}\right\Vert _{\sigma}\lesssim\varepsilon^{-g}\left(M_{1}^{\ell'}+\Lambda_{1}^{\ell}\right)\sum_{i=1}^{n-1}\Lambda_{i}^{\beta}\left\Vert v\right\Vert _{\delta}'\lesssim\varepsilon^{-g}\left(M_{1}^{\ell'}+\Lambda_{1}^{\ell}\right)\Lambda_{n-1}^{\beta}\left\Vert v\right\Vert _{\delta}'$$
so, combining this with (\ref{u_1_sigma_beta}), we get:
\begin{equation}
\left\Vert u_{n-1}\right\Vert _{\sigma}\lesssim\varepsilon^{-g}\Lambda_{n-1}^{\beta}\left(M_{1}^{\ell'}+\Lambda_{1}^{\ell}\right)\left\Vert v\right\Vert _{\delta}'\,\lesssim\, \Lambda_{n-1}^{\beta}\,.\label{eq:sigma-estim}
\end{equation}
\subsubsection{Setting up the induction step.}
\leavevmode\par
\noindent Suppose, as above, that $\left\Vert v\right\Vert _{\delta}'\lesssim\varepsilon^{g}\left(M_{1}^{\ell'}+\Lambda_{1}^{\ell}\right)^{-1}$ and that $u_{2},...,u_{n-1}$ have been found. We have seen that $\left\Vert u_{n-1}\right\Vert_{s_1} \leq 1$, so that the tame estimates hold at $u_{n-1}$, and we also have $\left\Vert u_{n-1}\right\Vert _{\sigma}\lesssim \Lambda_{n-1}^{\beta}$. We want to find $u_{n}$ satisfying (\ref{eq:i17}), (\ref{eq:i18}) and (\ref{eq:i19}).
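Let us record the elementary summation estimate used above: since $\Lambda_{i}=\Lambda_{i+1}^{1/\alpha}$ and $\Lambda_{i+1}\geq\Lambda_{1}\geq 2$, we have, for any fixed exponent $\beta>0$,
\[
\frac{\Lambda_{i}^{\beta}}{\Lambda_{i+1}^{\beta}}=\Lambda_{i+1}^{-\beta\left(1-\frac{1}{\alpha}\right)}\leq 2^{-\beta\left(1-\frac{1}{\alpha}\right)}=:q<1\;,
\qquad\text{hence}\qquad
\sum_{i=1}^{n-1}\Lambda_{i}^{\beta}\leq\frac{\Lambda_{n-1}^{\beta}}{1-q}\lesssim\Lambda_{n-1}^{\beta}\;.
\]
Such doubly geometric sums are always dominated by their last term.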
By the induction hypothesis, $\Pi'_{n-1} F_{\varepsilon}\left(u_{n-1}\right)=\Pi'_{n-2} v$. Subtracting this identity from the equation $\Pi'_{n} F_{\varepsilon}\left(u_{n}\right)=\Pi'_{n-1} v$ that we want to solve, we rewrite the latter as follows:
\begin{equation}
\Pi'_{n} \left(F_{\varepsilon}\left(u_{n}\right)-F_{\varepsilon}\left(u_{n-1}\right)\right)+\left(\Pi'_{n}-\Pi'_{n-1}\right) F_{\varepsilon}\left(u_{n-1}\right)=\left(\Pi'_{n-1}-\Pi'_{n-2}\right)v\label{eq:i22}
\end{equation}
Define a map $f_{n}:\,E_{n}\rightarrow E'_{n}$ with $f_n\left(0\right)=0$ by:
\[
f_{n}\left(z\right)=\Pi'_{n}\left(F_{\varepsilon}\left(u_{n-1}+z\right)-F_{\varepsilon}\left(u_{n-1}\right)\right)
\]
Equation (\ref{eq:i22}) can be rewritten as follows:
\begin{align}
f_{n}\left(z\right) & =\Delta_{n}v+e_{n}\label{eq:i23}\\
\Delta_{n}v & =\Pi'_{n-1}\left(1-\Pi'_{n-2}\right)v\label{eq:i24}\\
e_{n} & =-\Pi'_{n}\left(1-\Pi'_{n-1}\right)F_{\varepsilon}\left(u_{n-1}\right)\label{eq:i25}
\end{align}
We choose the following norms on $E_{n}$ and $E'_{n}$:
\begin{align*}
\left\Vert x\right\Vert _{\mathcal{N}_{n}} & =\left\Vert x\right\Vert _{s_{0}}+\Lambda_{n-1}^{-\sigma+s_0}\left\Vert x\right\Vert _{\sigma}\\
\left\Vert y\right\Vert _{\mathcal{N}_{n}}' & =\left\Vert y\right\Vert _{s_{0}}'+\Lambda_{n-1}^{-\sigma+s_{0}}\left\Vert y\right\Vert _{\sigma}'
\end{align*}
Endowed with these norms, $E_n$ and $E'_n$ are Banach spaces. We shall use the notation $||| \, L\, |||_{_{\mathcal{N}_{n}}}$ for the operator norm of any linear continuous map $L$ from the Banach space $E'_n$ to a Banach space that can be either $E_n$ or $E'_n$.\medskip
\begin{lem}
\label{lem:i2}If $0\leq t\leq\sigma-s_{0}$, then:
\begin{align*}
\left\Vert x\right\Vert _{s_{0}+t} & \lesssim \Lambda_{n-1}^{t}\left\Vert x\right\Vert _{\mathcal{N}_{n}}\\
\left\Vert y\right\Vert'_{s_{0}+t} & \lesssim \Lambda_{n-1}^{t}\left\Vert y\right\Vert _{\mathcal{N}_{n}}'
\end{align*}
\end{lem}
\begin{proof}
Use the interpolation inequality.
\end{proof} We will solve the system (\ref{eq:i23}), (\ref{eq:i24}), (\ref{eq:i25}) by applying the local surjection theorem to $f_{n}$ on the ball $B_{\mathcal{N}_{n}}\left(0,r_{n}\right)\subset E_{n}$ where: \begin{equation} r_{n}=\varepsilon^{-g}\Lambda_{n-1}^{\alpha\beta-\sigma+s_{0}}\left(M_{1}^{\ell'}+\Lambda_{1}^{\ell}\right)\left\Vert v\right\Vert _{\delta}'\label{eq:i29} \end{equation} Note that if the solution $z$ belongs to $B_{\mathcal{N}_{n}}\left(0,r_{n}\right)$, then $\left\Vert z\right\Vert _{s_{0}}\leq\varepsilon^{-g}\Lambda_{n-1}^{\alpha\beta-\sigma+s_{0}}\left(M_{1}^{\ell'}+\Lambda_{1}^{\ell}\right)\left\Vert v\right\Vert _{\delta}'$ and $\left\Vert z\right\Vert _{\sigma}\leq\varepsilon^{-g}\Lambda_{n-1}^{\alpha\beta}\left(M_{1}^{\ell'}+\Lambda_{1}^{\ell}\right)\left\Vert v\right\Vert _{\delta}'\,$. In other words, $u_{n}=u_{n-1}+z$ satisfies (\ref{eq:i18}) and (\ref{eq:i19}), so that the induction step is proved. We begin by showing that $Df_{n}\left(z\right)$ has a right inverse. {\it Note that, if we assume that $DF_\varepsilon$ is tame Galerkin right-invertible, we can take $M_n=\Lambda_n$, and the result of the next subsection is obvious. This subsection is only useful if we assume that $DF$ is tame right-invertible but not tame Galerkin right-invertible.} \subsubsection{$Df_{n}(z)$ has a right inverse for $\Vert z\Vert_{\mathcal{N}_n}\leq r_n\,$.\label{subsec:i2}} In this subsection, we use conditions (\ref{eq:i5}) and (\ref{eq:i6}). We recall them for the reader's convenience: \begin{align*} \left(1+\alpha-\vartheta\alpha\right)\left(\sigma-s_{0}\right) & >\alpha\beta+\alpha\left(m+\ell\right)+\ell'+\frac{g}{\alpha\eta}\\ \left(1-\vartheta\right)\left(\sigma-s_{0}\right) & >m+\vartheta \ell'+\frac{g}{\alpha\eta} \end{align*} Take now any $z\in B_{\mathcal{N}_{n}}\left(0,r_{n}\right)$. 
Arguing as above, we find that if $\left\Vert v\right\Vert _{\delta}'\lesssim\varepsilon^{g}\left(M_{1}^{\ell'}+\Lambda_{1}^{\ell}\right)^{-1}$, then:
\begin{align}
\label{bc}\left\Vert u_{n-1}+z\right\Vert _{s_{1} } & \leq 1 \\
\label{bd}\left\Vert u_{n-1}+z\right\Vert _{\sigma} & \lesssim \Lambda_{n}^{\beta}
\end{align}
By (\ref{bc}), the tame estimates hold at $u_{n-1}+z$ for every $z\in B_{\mathcal{N}_{n}}\left(0,r_{n}\right)$.
\begin{lem}
Take $\Lambda_1= K \varepsilon^{-\eta}$ with $K>1$ chosen large enough, independently of $n$ and $\varepsilon\in (0,1]$. Then, for all $z\in B_{\mathcal{N}_{n}}\left(0,r_{n}\right)$:
\[
|||\, \Pi'_n DF_{\varepsilon}\left(u_{n-1}+z\right)\left(1-\Pi_{n}\right)L_{\varepsilon}\left(u_{n-1}+z\right)|||_{ _{\mathcal{N}_{n}}}\leq\frac{1}{2}
\]
\end{lem}
\begin{proof}
We proceed as in the proof of Lemma \ref{lem:i1}. For $k\in E'_n$, we set
\[h=\left(1-\Pi_{n}\right)L_{\varepsilon}\left(u_{n-1}+z\right)k\;.\]
We have:
\[
\left\Vert h\right\Vert _{s_{0}+m}\lesssim \Lambda_{n}^{-\sigma+s_{0}+m}\left\Vert L_{\varepsilon}\left(u_{n-1}+z\right)k\right\Vert _{\sigma}
\]
By (\ref{bd}) and the tame estimates for $L_\varepsilon$, we get:
\begin{align}
\left\Vert L_{\varepsilon}\left(u_{n-1}+z\right)k\right\Vert _{\sigma} & \lesssim\varepsilon^{-g}\left(\left\Vert u_{n-1}+z\right\Vert _{\sigma+\ell}\left\Vert k\right\Vert _{s_{0}+\ell'}'+\left\Vert k\right\Vert _{\sigma+\ell'}'\right)\nonumber \\
 & \lesssim\varepsilon^{-g}\left(\Lambda_{n}^{\beta+\ell}\Lambda_{n-1}^{\ell'}+M_{n}^{\ell'}\Lambda_{n-1}^{\sigma-s_{0}}\right)\left\Vert k\right\Vert _{\mathcal{N}_{n}}'\label{eq:i27}
\end{align}
where we have used Lemma \ref{lem:i2}.
Substituting in the preceding formula, we get:
\[
\left\Vert h\right\Vert _{s_{0}+m}\lesssim\varepsilon^{-g}\left(\Lambda_{n}^{\beta+\ell-\sigma+s_{0}+m}\Lambda_{n-1}^{\ell'}+M_{n}^{\ell'}\Lambda_{n-1}^{-\left(\alpha-1\right)\left(\sigma-s_{0}\right)+\alpha m}\right)\left\Vert k\right\Vert _{\mathcal{N}_{n}}'
\]
By the tame estimate (\ref{tamedirectepsilon}), we have:
\[
\left\Vert \Pi'_n DF_{\varepsilon}\left(u_{n-1}+z\right)h\right\Vert _{s_{0}}'\lesssim\left\Vert h\right\Vert _{s_{0}+m}
\]
From this it follows that:
\[
\left\Vert \Pi'_n DF_{\varepsilon}\left(u_{n-1}+z\right)h\right\Vert _{\sigma}'\lesssim M_{n}^{\sigma-s_{0}}\left\Vert h\right\Vert _{s_{0}+m}
\]
Hence:
\[
\left\Vert \Pi'_n DF_{\varepsilon}\left(u_{n-1}+z\right)h\right\Vert _{\mathcal{N}_{n}}'\lesssim\left(1+\Lambda_{n-1}^{-\sigma+s_{0}}M_{n}^{\sigma-s_{0}}\right)\left\Vert h\right\Vert _{s_{0}+m}
\]
We have $\Lambda_{n-1}^{-\sigma+s_{0}}M_{n}^{\sigma-s_{0}}\lesssim \Lambda_{n-1}^{\left(\alpha\vartheta-1\right)\left(\sigma-s_{0}\right)}$. Since $\alpha\vartheta>1$, the dominant term in the parenthesis is the second one, and:
\begin{align*}
&\left\Vert \Pi'_n DF_{\varepsilon}\left(u_{n-1}+z\right)h\right\Vert _{\mathcal{N}_{n}}' \\
&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \lesssim \Lambda_{n-1}^{-\sigma+s_{0}}M_{n}^{\sigma-s_{0}}\left\Vert h\right\Vert _{s_{0}+m}\\
&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \lesssim\varepsilon^{-g}M_{n}^{\sigma-s_{0}}\left(\Lambda_{n-1}^{\alpha\left(\beta+\ell-\sigma+s_{0}+m\right)+\ell'-\sigma+s_{0}}+M_{n}^{\ell'}\Lambda_{n-1}^{-\alpha\left(\sigma-s_{0}\right)+\alpha m}\right)\left\Vert k\right\Vert _{\mathcal{N}_{n}}'
\end{align*}
From (\ref{eq:i5}) and (\ref{eq:i6}), it follows that the right-hand side is a decreasing function of $n$.
To check that it is less than $1/2$ for all $n\geq2$, it is enough to check it for $n=2.$ Since $\Lambda_{1}=K\varepsilon^{-\eta},$ substituting in the right-hand side, we get:
$$\left\Vert \Pi'_n DF_{\varepsilon}\left(u_{n-1}+z\right)h\right\Vert _{\mathcal{N}_{n}}' \lesssim \left(K^{-\min\left\{ C_{1},C_{2}\right\}}\right)^{\alpha^{n-2}}\left(\varepsilon^{\min\left\{ C_{1},C_{2}\right\}-\alpha^{2-n}g/\eta }\right)^{\eta\alpha^{n-2}}\left\Vert k\right\Vert _{\mathcal{N}_{n}}'$$
with
\begin{align*}
C_{1} & =-\alpha(\beta+\ell+m)-\ell'+\left(1+\alpha-\alpha\vartheta\right)\left(\sigma-s_{0}\right)\\
C_{2} & =\alpha\left((1-\vartheta)(\sigma-s_{0})-\vartheta \ell'-m\right)
\end{align*}
By (\ref{eq:i5}) and (\ref{eq:i6}), both exponents $C_{1}$ and $C_{2}$ are larger than $g/\eta$. As a consequence, $\left\Vert \Pi'_n DF_{\varepsilon}\left(u_{n-1}+z\right)h\right\Vert _{\mathcal{N}_{n}}'\leq \frac{1}{2}\left\Vert k\right\Vert _{\mathcal{N}_{n}}'$ for $K$ chosen large enough, independently of $n$ and $0<\varepsilon\leq 1$.
\end{proof}
Define $\mathcal{L}_{n}\left(z\right)=\Pi_n L_{\varepsilon}\left(u_{n-1}+z\right)_{\vert_{E'_n}}\,.$ Arguing as in subsection \ref{subsec:-i1}, we find that the Neumann series $\sum_{i\geq 0} \left(I_{E'_n}-Df_{n}\left(z\right)\mathcal{L}_{n}\left(z\right)\right)^i$ converges in operator norm. Its sum is $S_n(z)=\left(Df_{n}(z)\mathcal{L}_{n}(z)\right)^{-1}$ and it has operator norm at most $2$. Then $T_{n}\left(z\right):=\mathcal{L}_n(z)S_n(z)$ is a right inverse of $Df_{n}\left(z\right)\,,$ with the estimate $|||\, T_{n}\left(z\right)|||_{_{\mathcal{N}_{n}}}\leq 2\,|||\, \mathcal{L}_{n}\left(z\right)|||_{_{\mathcal{N}_{n}}}$.
We have already derived estimate (\ref{eq:i27}), which immediately implies:
\[
\left\Vert \mathcal{L}_{n}\left(z\right)k\right\Vert _{\sigma}\lesssim\varepsilon^{-g}\Lambda_{n-1}^{\sigma-s_{0}}\left(\Lambda_{n}^{\beta+\ell}\Lambda_{n-1}^{-\sigma+s_{0}+\ell'}+M_{n}^{\ell'}\right)\left\Vert k\right\Vert _{\mathcal{N}_{n}}'
\]
From the tame estimates and Lemma \ref{lem:i2}, we also have:
\[
\left\Vert \mathcal{L}_{n}\left(z\right)k\right\Vert _{s_{0}}\lesssim\varepsilon^{-g}\left\Vert k\right\Vert _{s_{0}+\ell'}'\lesssim\varepsilon^{-g}\Lambda_{n-1}^{\ell'}\left\Vert k\right\Vert _{\mathcal{N}_{n}}'
\]
Since $\alpha\vartheta>1$, we have $\Lambda_{n-1}^{\ell'}\lesssim M_{n}^{\ell'}$. So the two preceding estimates can be combined, and we get the final estimate for the right inverse in operator norm:
\begin{equation}
|||\, T_{n}\left(z\right)|||_{_{\mathcal{N}_{n}}}\lesssim\varepsilon^{-g}\left(\Lambda_{n}^{\beta+\ell}\Lambda_{n-1}^{-\sigma+s_0+\ell'}+M_{n}^{\ell'}\right)\label{eq:i28}
\end{equation}
\subsubsection{Finding $u_{n}$.}
In this subsection, we use relations (\ref{eq:i4}), (\ref{eq:i7}), (\ref{eq:i8}) and (\ref{eq:i9}). We recall them for the reader's convenience:
\begin{align*}
\sigma & >\alpha\beta+s_1\\
\delta & >s_{0}+\frac{\alpha}{\vartheta}\left(\sigma-s_{0}-\alpha\beta+\ell"\right)\\
\left(\alpha-1\right)\beta & >\left(1-\vartheta\right)\left(\sigma-s_{0}\right)+\vartheta m+\ell"+\frac{g}{\eta}\\
\ell" & =\max\left\{ \left(\alpha-1\right)\ell+\ell',\alpha\vartheta \ell'\right\}
\end{align*}
Let us go back to (\ref{eq:i23}). By Theorem \ref{thm1}, to solve $f_{n}\left(z\right)=\Delta_{n}v+e_{n}$ with $z\in B_{\mathcal{N}_{n}}\left(0,r_{n}\right)$, it is enough that:
\begin{equation}
|||\, T_{n}\left(z\right)|||_{_{\mathcal{N}_{n}}}\left(\left\Vert \Delta_{n}v\right\Vert _{\mathcal{N}_{n}}'+\left\Vert e_{n}\right\Vert _{\mathcal{N}_{n}}'\right)\leq r_{n}\label{eq:i33}
\end{equation}
Here $r_{n}$ is given by (\ref{eq:i29}).
We can estimate $|||\, T_{n}\left(z\right)|||_{_{\mathcal{N}_{n}}}$ using (\ref{eq:i28}). We need to estimate $\left\Vert \Delta_{n}v\right\Vert _{\mathcal{N}_{n}}'$ and $\left\Vert e_{n}\right\Vert _{\mathcal{N}_{n}}'$. From (\ref{eq:i24}) we have:
\begin{align*}
\left\Vert \Delta_{n} v\right\Vert _{s_{0}}' & \lesssim M_{n-2}^{s_{0}-\delta}\left\Vert v\right\Vert _{\delta}'\\
\left\Vert \Delta_{n} v\right\Vert _{\sigma}' & \lesssim M_{n-1}^{\sigma-\delta}\left\Vert v\right\Vert _{\delta}'\\
\left\Vert \Delta_{n} v\right\Vert _{\mathcal{N}_{n}}' & \lesssim\max\left\{ M_{n-2}^{s_{0}-\delta},\Lambda_{n-1}^{-\sigma+s_{0}}M_{n-1}^{\sigma-\delta}\right\} \left\Vert v\right\Vert _{\delta}'
\end{align*}
An easy calculation yields:
\[
\sigma - s_0 - \vartheta (\sigma - \delta) + \frac{\vartheta}{\alpha}(s_0 -\delta) = (1-\vartheta) (\sigma - \delta) + \left(1-\frac{\vartheta}{\alpha}\right) (\delta - s_0)
\]
Since $s_0<\delta<\sigma$ and $\vartheta<1<\alpha$, the two terms on the right-hand side are positive, so $\Lambda_{n-1}^{-\sigma+s_{0}}M_{n-1}^{\sigma-\delta}\lesssim M_{n-2}^{s_{0}-\delta}$. It follows that:
\begin{equation}
\left\Vert \Delta_{n} v\right\Vert _{\mathcal{N}_{n}}'\lesssim M_{n-2}^{s_{0}-\delta}\left\Vert v\right\Vert _{\delta}'\label{eq:i30}
\end{equation}
From (\ref{eq:i25}), we derive:
\[
\left\Vert e_{n}\right\Vert _{s_{0}}'\lesssim M_{n-1}^{-\sigma+m+s_{0}}\left\Vert e_{n}\right\Vert _{\sigma-m}'
\]
By Lemma \ref{boundF}, $\left\Vert F_\varepsilon\left(u_{n-1}\right)\right\Vert _{\sigma-m}'\lesssim\left\Vert u_{n-1}\right\Vert _{\sigma}$.
So, remembering (\ref{eq:i25}) and (\ref{eq:sigma-estim}), we get: \begin{align*} \left\Vert e_{n}\right\Vert'_{s_{0}} & \lesssim M_{n-1}^{-\sigma+m+s_{0}}\left\Vert u_{n-1}\right\Vert _{\sigma}\\ & \lesssim\varepsilon^{-g}M_{n-1}^{-\sigma+m+s_{0}}\Lambda_{n-1}^{\beta}\left(M_{1}^{\ell'}+\Lambda_{1}^{\ell}\right)\left\Vert v\right\Vert _{\delta}' \end{align*} Similarly, \begin{align*} \left\Vert e_{n}\right\Vert'_\sigma & \lesssim\left\Vert u_{n-1}\right\Vert _{\sigma+m}\lesssim \Lambda_{n-1}^{m}\left\Vert u_{n-1}\right\Vert _{\sigma}\\ & \lesssim\varepsilon^{-g}\Lambda_{n-1}^{\beta+m}\left(M_{1}^{\ell'}+\Lambda_{1}^{\ell}\right)\left\Vert v\right\Vert _{\delta}' \end{align*} Finally, since $M_{n-1}<\Lambda_{n-1}$ and $\sigma>m+s_{0}$ , we get: \begin{equation} \left\Vert e_{n}\right\Vert _{\mathcal{N}_{n}}'\lesssim\varepsilon^{-g}\Lambda_{n-1}^{\beta}M_{n-1}^{-\sigma+m+s_{0}}\left(M_{1}^{\ell'}+\Lambda_{1}^{\ell}\right)\left\Vert v\right\Vert _{\delta}' \label{eq:i73} \end{equation} Substituting \eqref{eq:i28}, \eqref{eq:i29}, \eqref{eq:i30}, \eqref{eq:i73} in \eqref{eq:i33}, we get the following sufficient condition: \begin{equation} \left(\Lambda_{n}^{\beta+\ell}\Lambda_{n-1}^{-\sigma+s_{0}+\ell'}+M_{n}^{\ell'}\right)\left(M_{n-2}^{s_{0}-\delta}+\varepsilon^{-g}\Lambda_{n-1}^{\beta}M_{n-1}^{-\sigma+m+s_{0}}\right)\lesssim \Lambda_{n-1}^{\alpha\beta-\sigma+s_{0}}\label{eq:i31} \end{equation} We estimate both sides separately. 
Remembering that $M_{n-i}=\left(\Lambda_{n-1}\right)^{\alpha^{1-i}\vartheta}$ and $\Lambda_{n-1}=\left(K\varepsilon^{-\eta}\right)^{\alpha^{n-2}}$, we find
\begin{align*}
\left(\Lambda_{n}^{\beta+\ell}\Lambda_{n-1}^{-\sigma+s_{0}+\ell'}+M_{n}^{\ell'}\right) \Big(M_{n-2}^{s_{0}-\delta} +\varepsilon^{-g} \Lambda_{n-1}^{\beta} & M_{n-1}^{-\sigma+m+s_{0}} \Big)\\
&\lesssim \Big(\varepsilon^{-\eta\alpha^{n-2}}\Big)^{\max\{C_3,C_4\}+\max\{C_5,C_6\}}
\end{align*}
and
$$\Lambda_{n-1}^{\alpha\beta-\sigma+s_{0}}\gtrsim \Big(\varepsilon^{-\eta\alpha^{n-2}}\Big)^{C_7}$$
with
\begin{align*}
C_3:=&\alpha(\beta+\ell)-\sigma+s_{0}+\ell'\\
C_4:=&\alpha\vartheta\ell'\\
C_5:=&\vartheta\alpha^{-1}(s_{0}-\delta)\\
C_6:=&g/\eta + \beta+ \vartheta(-\sigma+m+s_{0})\\
C_7:=&\alpha\beta-\sigma+s_{0}
\end{align*}
By (\ref{eq:i4}), we have $\sigma-\alpha\beta> s_1>s_{0}+\max\left\{ m,\ell\right\} $. It follows that:
$$C_3 <\left(\alpha-1\right)\ell+\ell'\,.$$
So, defining $\ell"=\max\left\{ \left(\alpha-1\right)\ell+\ell',\alpha\vartheta \ell'\right\}$ as in (\ref{eq:i9}), we see that
$$\max\{C_3,C_4\}+\max\{C_5,C_6\}\leq \max\{\ell"+C_5,\ell"+C_6\} $$
So condition (\ref{eq:i31}) is implied by the inequalities $\ell"+C_5< C_7$ and $\ell"+C_6<C_7$, which are the same as conditions (\ref{eq:i7}) and (\ref{eq:i8}). So inequality \eqref{eq:i33} holds, and the induction step follows from Theorem \ref{thm1}.
\subsection{End of proof}
First of all, for the above construction to work, the only constraint on $S$ is $S>\sigma$, and Lemma \ref{compatibility} gives us the estimate $\sigma<{\zeta}g/\eta$. The constant $\eta$ is only constrained by condition \eqref{eq:i0}, and we can choose, for instance, $\eta=\frac{g'-g}{2\max\left\{ \vartheta \ell',\ell\right\}}$.
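With this choice of $\eta$, condition \eqref{eq:i0} holds with room to spare; indeed it amounts to $g+\eta\max\left\{ \vartheta \ell',\ell\right\} <g'$, and
\[
g+\eta\max\left\{ \vartheta \ell',\ell\right\} =g+\frac{g'-g}{2}=\frac{g+g'}{2}<g'\;.
\]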
So we only need a condition on $S$ of the form $S\geq S_0$ with $S_0=O(\frac{1}{g'-g})$ as $g'\to g\,,$ all the other parameters being fixed.\medskip
Let us now check that the estimate $\left\Vert v\right\Vert _{\delta}'\lesssim\varepsilon^{g'}$ is sufficient for the above construction. In \eqref{v_petit} we made the assumption $\left\Vert v\right\Vert _{\delta}'\lesssim\varepsilon^{g}\left(\Lambda_{1}^{\ell}+M_{1}^{\ell'}\right)^{-1}$ on $v\,,$ and we have $M_{1}\lesssim \varepsilon^{-\vartheta\eta}\,,\; \Lambda_{1}\lesssim\varepsilon^{-\eta}$, hence $\left(\Lambda_{1}^{\ell}+M_{1}^{\ell'}\right)\lesssim \varepsilon^{-\eta\max\left\{ \vartheta \ell',\ell\right\} }\,.$ So the condition $\left\Vert v\right\Vert _{\delta}'\lesssim\varepsilon^{g+\eta\max\left\{ \vartheta \ell',\ell\right\} }$ guarantees the existence of the sequence $(u_n)$. But (\ref{eq:i0}) may be rewritten in the form
$$g+\eta\max\left\{ \vartheta \ell',\ell\right\}<g'\,,$$
so the preceding condition is implied by the estimate $\left\Vert v\right\Vert _{\delta}'\lesssim\varepsilon^{g'}\,,$ which is thus sufficient, as desired.\medskip
Now we can translate the symbol $\,\lesssim\,$ into more explicit estimates. Choosing $r>0$ small enough, our construction gives, for every $v\in W_{\delta}$ with $\left\Vert v\right\Vert _{\delta}'\leq r\,\varepsilon^{g'}$, a sequence $u_{n},\,n\geq 1$, such that $u_n\in E_n\,,$ $\left\Vert u_{n}\right\Vert _{s_{1}}\leq r^{-1}\varepsilon^{-g'}\Vert v\Vert _{\delta}'\leq 1\,,$ and
$$\Pi'_n F_\varepsilon\left(u_{n}\right)=\Pi'_{n-1}v\,.$$
It follows from Lemma \ref{lem:i5} that for any $t<\sigma-\alpha\beta\,,$ $(u_{n})$ is a Cauchy sequence for the norm $\Vert\cdot\Vert_t\,$. We recall that, by condition (\ref{eq:i4}), $s_{1}<\sigma-\alpha\beta$.
So we can choose $t_1\in (s_{1},\sigma-\alpha\beta)\,.$ Then $(u_n)$ converges to some $u_\varepsilon$ in $V_{t_1}$ with $\left\Vert u_\varepsilon\right\Vert _{s_1}\leq r^{-1}\varepsilon^{-g'}\Vert v\Vert _{\delta}'\leq 1\,.$\medskip Since $t_1\geq s_{0}+m$, the map $F_\varepsilon$ is continuous from the $t_1$-norm to the $\left(t_1-m\right)$-norm, so $F_\varepsilon\left(u_{n}\right)$ converges to $F_\varepsilon\left(u_\varepsilon\right)$ in $W_{t_1-m}\,.$ Then $F_\varepsilon\left(u_{n}\right)$ is a bounded sequence in $W_{t_1-m}$, and $t_1-m>s_0$. So, using the approximation estimate \eqref{gain}, we find that $\Vert(1-\Pi'_n)F_\varepsilon\left(u_{n}\right)\Vert_{s_0}\to 0\,,$ and finally $\Vert \Pi'_n F_\varepsilon\left(u_{n}\right)-F_\varepsilon(u_\varepsilon)\Vert_{s_0} \to 0$ as $n\to\infty\,.$\medskip On the right-hand side, using \eqref{gain} again, we find that $\Pi'_{n-1} v$ converges to $v$ in $W_{s_0}$, since $\delta > s_0\,$.\medskip We conclude that $F_\varepsilon\left(u_\varepsilon\right)=v$, as desired, and this ends the proof of Theorem \ref{Thm8}. \section{An application of the singular perturbation theorem} \subsection{The result} In this section, we consider a Cauchy problem for nonlinear Schr\"odinger systems arising in nonlinear optics, a question recently studied by M\'etivier-Rauch \cite{MR} and Texier-Zumbrun \cite{TZ}. M\'etivier-Rauch proved the existence of local in time solutions, with an existence time $T$ converging to $0$ when the $H^{s}$ norm of the initial datum goes to infinity. Texier-Zumbrun, thanks to their version of the Nash-Moser theorem adapted to singular perturbation problems, were able to find a uniform lower bound on $T$ for certain highly concentrated initial data. The $H^{s}$ norm of these initial data could go to infinity. By applying our "semiglobal" version of the Nash-Moser theorem, we are able to extend Texier-Zumbrun's result to even larger initial data. 
In the sequel we follow closely their exposition, but some parameters are named differently to avoid confusion with our other notations.\medskip
The problem takes the following form:
\begin{equation}
\label{Cauchy}
\left\{
\begin{array}{ll}
 & \partial_{t}u+iA(\partial_{x})u=B(u,\partial_{x})u, \\
 & u(0,x)=\varepsilon^{\kappa} \left(a_\varepsilon(x), \bar{a_{\varepsilon}}(x)\right)
\end{array}
\right.
\end{equation}
with $u(t,x)=(\psi(t,x),\bar{\psi}(t,x))\in\mathbf{C}^{2n}$, $(t,x)\in \lbrack0,T]\times\mathbf{R}^{d}$,
\[
A(\partial_{x})=\mathrm{diag}(\lambda_{1},\cdots,\lambda_{n},-\lambda_{1},\cdots ,-\lambda_{n})\Delta_x
\]
and
\[
B=
\begin{pmatrix}
\mathcal{B} & \mathcal{C}\\
\bar{\mathcal{C}} & \bar{\mathcal{B}}
\end{pmatrix}
\]
The coefficients $b_{jj^{\prime}},\ c_{jj^{\prime}}$ of the $n\times n$ matrices $\mathcal{B},\ \mathcal{C}$ are first-order operators with smooth coefficients: $b_{jj^{\prime}}=\sum_{k=1}^{d}b_{kjj^{\prime}}(u)\partial _{x_{k}}$, $c_{jj^{\prime}}=\sum_{k=1}^{d}c_{kjj^{\prime}}(u)\partial_{x_{k}} $, with $b_{kjj^{\prime}}$ and $c_{kjj^{\prime}}$ smooth complex-valued functions of $u$ satisfying, for some integer $p\geq 2$, some $C>0$, all $0\leq|\alpha|\leq p$ and all $u=(\psi,\bar{\psi})\in \mathbf{C}^{2n}$:
\[
|\partial^{\alpha}b_{kjj^{\prime}}(u)|+|\partial^{\alpha}c_{kjj^{\prime}}(u)|\leq C|u|^{p-|\alpha|}\,.
\]
Moreover, we assume that the following ``transparency'' conditions hold: the functions $b_{kjj}$ are real-valued, the coefficients $\lambda_{j}$ are real and pairwise distinct, and for any $j,\,j^{\prime}$ such that $\lambda _{j}+\lambda_{j^{\prime}}=0$, $c_{jj^{\prime}}=c_{j^{\prime}j}$. \medskip
We consider initial data of the form $\varepsilon ^{\kappa}\left(a_{\varepsilon}(x),\bar{a_{\varepsilon}}(x)\right)$ with $a_{\varepsilon}(x)=a_{1}(x/\varepsilon)$ where $0<\varepsilon\leq 1$, $a_{1}\in H^{S}(\mathbf{R}^{d})$ for some $S$ large enough and $\Vert a_1\Vert_{_{H^{S}}}$ small enough.
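To quantify in which sense these concentrated data are large, note that the change of variables $x\mapsto x/\varepsilon$ gives, for every multi-index $\alpha$,
\[
\left\Vert \partial_{x}^{\alpha}a_{\varepsilon}\right\Vert _{L^{2}(\mathbf{R}^{d})}=\varepsilon^{\frac{d}{2}-|\alpha|}\left\Vert \partial_{x}^{\alpha}a_{1}\right\Vert _{L^{2}(\mathbf{R}^{d})}\;,
\]
so the standard $H^{s}$ norm of the initial datum $\varepsilon^{\kappa}a_{\varepsilon}$ is of order $\varepsilon^{\kappa+\frac{d}{2}-s}$, which blows up as $\varepsilon\rightarrow 0$ as soon as $s>\kappa+\frac{d}{2}$.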
\medskip
Our goal is to prove that the Cauchy problem has a solution on $[0,T]\times \mathbf{R}^{d}$ for all $0<\varepsilon\leq 1\,$, with $T>0$ independent of $\varepsilon$. Texier-Zumbrun obtain existence and uniqueness of the solution, under some conditions on $\kappa$, which should be large enough. This corresponds to a smallness condition on the initial datum when $\varepsilon$ approaches zero. Our local surjection theorem only provides existence, but our condition on $\kappa$ is less restrictive, so our initial datum is allowed to be larger. Note that, once existence is proved, uniqueness is easily obtained for this Cauchy problem: indeed, local-in-time uniqueness implies global-in-time uniqueness. Our result is the following:
\begin{theorem}
\label{ThmTexier} Under the above assumptions and notations, let us impose the additional condition
\begin{equation}
\label{kappa}
\kappa>\frac{d}{2(p-1)}\,.
\end{equation}
Let $s_1>\frac{d}{2}+4\,.$ If $0<\varepsilon\leq 1$, $a_1\in H^S(\mathbf{R}^d)$ for $S$ large enough, and $\Vert a_1\Vert_{H^{S}}$ is small enough, then the Cauchy problem (\ref{Cauchy}) has a unique solution in the functional space $C^1\left([0,T],\, H^{s_1-2}(\mathbf{R}^d)\right)\cap C^0\left([0,T],\, H^{s_1}(\mathbf{R}^d)\right)\,.$
\end{theorem}
M\'etivier-Rauch already provide existence for a fixed positive $T$ when $\kappa\geq 1$. So we obtain something new in comparison with them when $\frac{d}{2}\frac{1}{p-1}<1\,,$ that is, when
\[
p>1+\frac{d}{2}\;.
\]
Let us now compare our results with those of Texier-Zumbrun \cite{TZ}. In order to do so, we consider the same particular values as in their Remark 4.7 and Examples 4.8, 4.9 pages 517-518. Let us illustrate this in 2 and 3 space dimensions.
\bigskip
{\bf In two space dimensions}, $d=2$ (Example 4.8 in \cite{TZ}): Our condition becomes $\frac{1}{p-1}<\kappa$.
In their paper, Texier and Zumbrun need the stronger condition $\frac{9}{2(p+1)}<\kappa$.\medskip {\bf In three space dimensions}, $d=3$ (Example 4.9 in \cite{TZ}): Our condition becomes $\frac{3}{2(p-1)}<\kappa$. In their paper, Texier-Zumbrun need the stronger condition $\frac{4}{p+1}<\kappa$.\medskip In both cases, we improve over M\'etivier-Rauch when $p\geq 3$, while Texier-Zumbrun need $p\geq 4$.\medskip {\bf Remark.} {\it After reading our paper, Baldi and Haus \cite{BHperso} have been able to relax even further the condition on $\kappa$, based on their version \cite{BH} of the classical Newton scheme in the spirit of H\"{o}rmander. A key point in their proof is a clever modification of the norms considered by Texier-Zumbrun, allowing better $C^2$ estimates on the functional. They also explain that their approach can be extended to other $C^2$ functionals consisting of a linear term perturbed by a nonlinear term of homogeneity at least $p+1$. Our abstract theorem, however, seems more general since we do not need such a structure.} \subsection{Proof of Theorem \ref{ThmTexier}} We have to show that our Corollary \ref{Cor9} applies. 
Our functional setting is the same as in \cite{TZ}, with slightly different notations.\medskip We introduce the norm $\Vert f\Vert_{H_{\varepsilon}^{s}(\mathbf{R}^{d} )}=\Vert(-\varepsilon^{2}\Delta+1)^{s/2}f\Vert_{L^{2}(\mathbf{R}^{d})}$, and we take \begin{align*} V_{s} & =\mathcal{C}^{1}([0,T],H^{s-2}(\mathbf{R}^{d}))\cap\mathcal{C}^{0}([0,T],H^{s}(\mathbf{R}^{d}))\;,\\ \vert u\vert_{s} & =\sup_{0\leq t\leq T}\left\{ \Vert\varepsilon ^{2}\partial_{t}u(t,\cdot)\Vert_{H_{\varepsilon}^{s-2}(\mathbf{R}^{d})}+\Vert u(t,\cdot)\Vert_{H_{\varepsilon}^{s}(\mathbf{R}^{d})}\right\} \; \end{align*} and \begin{align*} W_{s} & =\mathcal{C}^{0}([0,T],H^{s}(\mathbf{R}^{d}))\times H^{s+2} (\mathbf{R}^{d})\;,\\ \vert(v_{1},v_{2})\vert_{s}^{\prime} & =\sup_{0\leq t\leq T}\left\{ \Vert v_{1}(t,\cdot)\Vert_{H_{\varepsilon}^{s}(\mathbf{R}^{d})}\right\} +\Vert v_{2}\Vert_{H_{\varepsilon}^{s+2}(\mathbf{R}^{d})} \end{align*} Our projectors are \begin{align*} \Pi_{\Lambda}u & =\mathcal{F}_{x}^{-1}(1_{|\varepsilon\xi|\leq \Lambda}\mathcal{F} _{x}u(t,\xi))\,,\\ \Pi_{\Lambda}^{\prime}(v_{1},v_{2}) & =\left( \mathcal{F}_{x}^{-1} (1_{|\varepsilon\xi|\leq \Lambda}\mathcal{F}_{x}v_{1}(t,\xi)),\mathcal{F} ^{-1}(1_{|\varepsilon\xi|\leq \Lambda}\mathcal{F}v_{2}(\xi))\right) \end{align*} We take $$\Phi_\varepsilon(u) =\left( \varepsilon^{2}\partial_{t}u+iA(\varepsilon \partial_{x})u-\varepsilon B(u,\varepsilon\partial_{x})u\,,\,u(0,\cdot)-\varepsilon^{\kappa}(a_{\varepsilon },\bar{a}_{\varepsilon})\right) $$ and $${\mathfrak a}_\varepsilon(t,x) =\varepsilon^{\kappa}(\exp(-itA(\partial_{x}))a_{\varepsilon },\exp(itA(\partial_{x}))\bar{a}_{\varepsilon})\;.$$ We have $\Phi_{\varepsilon}({\mathfrak a}_\varepsilon)=(-\varepsilon B({\mathfrak a}_\varepsilon,\varepsilon\partial_{x}){\mathfrak a}_\varepsilon,0)$. 
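These $\varepsilon$-weighted norms are tailored to the concentrated profile $a_{\varepsilon}(x)=a_{1}(x/\varepsilon)$: with the convention $\Vert a\Vert_{H^{s}}^{2}=\int(1+|\xi|^{2})^{s}|\widehat{a}(\xi)|^{2}\,d\xi$, we have $\widehat{a_{\varepsilon}}(\xi)=\varepsilon^{d}\,\widehat{a_{1}}(\varepsilon\xi)$, hence, by the change of variables $\zeta=\varepsilon\xi$,
\[
\Vert a_{\varepsilon}\Vert_{H_{\varepsilon}^{s}(\mathbf{R}^{d})}^{2}=\int_{\mathbf{R}^{d}}\left(1+|\varepsilon\xi|^{2}\right)^{s}\varepsilon^{2d}\,|\widehat{a_{1}}(\varepsilon\xi)|^{2}\,d\xi=\varepsilon^{d}\,\Vert a_{1}\Vert_{H^{s}(\mathbf{R}^{d})}^{2}\;,
\]
uniformly in $s$. Since $\exp(\mp itA(\partial_{x}))$ is a Fourier multiplier of modulus one, and since $\Vert \varepsilon^{2}\Delta w\Vert_{H_{\varepsilon}^{s-2}}\leq\Vert w\Vert_{H_{\varepsilon}^{s}}$, this yields $\vert{\mathfrak a}_{\varepsilon}\vert_{s}\lesssim\varepsilon^{\kappa+\frac{d}{2}}\Vert a_{1}\Vert_{H^{s}}$ for every $s$; note that condition (\ref{kappa}) amounts precisely to $\kappa+\frac{d}{2}>\frac{dp}{2(p-1)}$.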
A solution of the functional equation $\Phi_\varepsilon(u)=0$ is a solution on $[0,T]\times\mathbf{R}^{d}$ of the Cauchy problem (\ref{Cauchy}).\medskip
Our Corollary \ref{Cor9} requires a direct estimate (\ref{tamedirectepsilonbis}) on $D\Phi_\varepsilon$ and an estimate (\ref{tameinverseepsilonbis}) on the right-inverse $L_\varepsilon$. Take $s_{0}>d/2+2$, $m=2$, $\gamma =\frac{dp}{2(p-1)}$ and $S$ large. Since $\kappa>\frac{d}{2(p-1)}$, we have an estimate of the form $\vert {\mathfrak a}_\varepsilon\vert_{S}\lesssim \varepsilon^{\gamma}\Vert a_1\Vert_{H^{S}}$, so, taking $\Vert a_1\Vert_{H^{S}}$ small, we can ensure that ${\mathfrak a}_\varepsilon\in {\mathfrak B}_{S}(\varepsilon^{\gamma})$. Moreover, the inequality $\kappa>\frac{d}{2(p-1)}$ implies the condition
\[1-\frac{dp}{2}+p\gamma\geq 0\,.\]
So we see that the assumptions of Lemma 4.4 in \cite{TZ} are satisfied by the parameters $\gamma_0=\gamma_1=\gamma$ (note that our exponent $p$ is denoted $\ell$ in \cite{TZ}). The direct estimate (\ref{tamedirectepsilonbis}) thus follows from Lemma 4.4 in \cite{TZ}.
Note that Lemma 4.4 of \cite{TZ} also gives an estimate on the second derivative of $\Phi_\varepsilon(\cdot)$, but we do not need such an estimate.\medskip

Choosing, in addition, $\ell=2$, $\ell^{\prime}=0$, $g=2$, our inverse estimate (\ref{tameinverseepsilonbis}) follows from Lemma 4.5 in \cite{TZ}.\medskip

To summarize, the assumptions (2.9, 2.10, 2.11) of our Corollary \ref{Cor9} are satisfied for $s_{0}>d/2$, $m=2$, $\gamma=\frac{dp}{2(p-1)}$, $g=2$, $\ell=2$, $\ell^{\prime}=0$.\medskip

Moreover, in \cite{TZ}, {\it Proof of Theorem 4.6}, one finds an estimate which can be written in the form
$$\vert \Phi_{\varepsilon}({\mathfrak a}_\varepsilon)\vert_{s_1-1}^{\prime}\leq r\,\varepsilon^{1+\kappa (p+1)+d/2}$$
where $r$ is small when $\Vert a_1\Vert_{H^{s_1}}$ is small.\medskip

So, using our Corollary \ref{Cor9} and taking $S$ large enough, we can solve the equation $\Phi_\varepsilon(u)=0$ in $X_{s_1}$ under the additional condition $1+\kappa(p+1)+d/2>\gamma+g\,,$ which can be rewritten as follows:
\[
\kappa>\frac{1}{p+1}+\frac{d}{2(p+1)(p-1)}\,.
\]
Since $d\geq 2$, this inequality is a consequence of our assumption $\kappa>\frac{d}{2(p-1)}\,.$\medskip

So our Corollary \ref{Cor9} implies the existence of a solution to the Cauchy problem (\ref{Cauchy}). The uniqueness of this solution comes from the local-in-time uniqueness of solutions to the Cauchy problem. This proves Theorem \ref{ThmTexier} as a consequence of Corollary \ref{Cor9}.\medskip

{\bf Remark.} {\it In Examples 4.8 and 4.9 of \cite{TZ}, Texier and Zumbrun also study the case of oscillating initial data, i.e. $a_\varepsilon=a(x) e^{ix\cdot\xi_0/\varepsilon}$, and in the first submitted version of this paper we considered it as well. However, a referee pointed out to us that the corresponding statements were not fully justified in \cite{TZ}.
Indeed, in the proof of their Theorem 4.6, Texier and Zumbrun have to invert the linearized functional $D\Phi_\varepsilon(u)$ for $u$ in a neighborhood of the function ${\mathfrak a}_\varepsilon$, denoted $a_f$ in their paper. For this purpose, it seems that they need the norm of their function $a_f$ to be controlled by $\varepsilon^\gamma$. This condition appears in their Remark 2.14 and their Lemma 4.5, but not in the statement of their Theorem 4.6. This additional constraint does not affect their results for concentrating initial data in Examples 4.8 and 4.9. But in the oscillating case, their statements seem overly optimistic. We did not want to investigate this issue further, which is why we only deal with the concentrating case. Note, however, that this difficulty with the oscillating case is overcome in the recent work \cite{BHperso}, thanks to improved norms and estimates.}

\bigskip

\section{Conclusion}

The purpose of this paper has been to introduce a new algorithm into the ``hard'' inverse function theorem, where both $DF\left( u\right)$ and its right inverse $L\left( u\right)$ lose derivatives, in order to improve its range of validity. To highlight this improvement, we have considered singular perturbation problems with loss of derivatives. We have shown that, on the specific example of a Schr\"{o}dinger-type system of PDEs arising from nonlinear optics, our method leads to substantial improvements of known results. We believe that our approach has the potential to improve the known estimates in many other ``hard'' inversion problems.

In the statement and proof of our abstract theorem, our main focus has been the existence of $u$ solving $F(u)=v$ in the case when $S$ is large and the regularity of $v$ is as small as possible. We have not tried to give an explicit bound on $S$, but with some additional work it can be done.
In an earlier version \cite{ESDebut} of this paper, the reader will find a study of the intermediate case of a tame Galerkin right-invertible differential $DF$, with precise estimates on the parameter $S$ depending on the loss of regularity of the right-inverse, in the special case $s_0=m=0$ and $\ell=\ell'$.
{"config": "arxiv", "file": "1811.07568.tex"}
TITLE: How to do modular arithmetic with a negative n QUESTION [0 upvotes]: Playing with Python and the mod operation I encountered that (5 % -3) = -1. This is confirmed by WolframAlpha, and I have not been able to find any simple explanation for this online, mostly because all I can find about modular arithmetic uses a positive n. I am surprised by this result. My understanding of the modulo operation is that a mod n is a number c computed by taking the integer part q = a / n, and then subtracting n·q from a. Therefore, for 5 mod -3 I would do: q = int(5 / -3) = int(-1.6667) = -1 5 - (-3)·(-1) = 5 - 3 = 2 Where am I wrong? I have realised that -1 is congruent with 2 modulo -3, so maybe the answer is that the result must be between 0 and n, so if n is negative we need to add n to the positive result, but not sure if this is really the reason. Please consider that I am not a mathematician, so the simpler the explanation the better. REPLY [1 votes]: An integer is divisible by 3 if and only if it is divisible by -3. The usual definition of congruence mod 3 is that $a\equiv_3 b$ if and only if $3\mid a-b$ (i.e. if and only if $a-b$ is divisible by 3). So you have the same structure mod 3 and mod -3. The reason they didn't give you 2 as your answer is probably because they want to choose representatives from the set $\{0,-1,-2\}$.
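The discrepancy the asker hits comes from rounding: `int(5 / -3)` truncates toward zero (giving -1), while Python's `//` is floor division, rounding toward negative infinity (giving -2). A small sketch checking that `a % n` always takes the sign of the divisor and that the division identity holds:

```python
# Python's % is defined via floor division: a == (a // n) * n + a % n,
# and a % n has the same sign as the divisor n.
for a, n in [(5, 3), (5, -3), (-5, 3), (-5, -3)]:
    q, r = a // n, a % n
    assert a == q * n + r                 # the division identity always holds
    assert r == 0 or (r > 0) == (n > 0)   # remainder takes the divisor's sign

print(5 % -3)    # -1, since 5 // -3 == -2 and 5 - (-3) * (-2) == -1
print(5 % 3)     # 2
```

Repeating the asker's computation with floor division instead of truncation reproduces Python's answer of -1.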
{"set_name": "stack_exchange", "score": 0, "question_id": 3438107}
TITLE: Eigenfunctions for the symmetric kernel of an integral equation QUESTION [1 upvotes]: The solution of the symmetric integral equation below: $$g(s) = f(s) + \lambda \int_{-1}^{1} (st +s^2t^2)g(t)dt \tag{$*$}$$ with the separable kernels method is $$g(s) = f(s) + \lambda \int_{-1}^{1} (\frac{st}{(1-\frac{2}{3}\lambda)}+\frac{s^2t^2}{(1-\frac{2}{5}\lambda)} )f(t) dt$$ let $f(s)=(s+1)^2$ and $\lambda = 1$ in $(*)$, i.e. let: $$g(s) = (s+1)^2 + \int_{-1}^{1} (st +s^2t^2)g(t)dt \tag{$**$}$$ The question is: how can I (maybe using the Hilbert-Schmidt theorem) find the eigenfunctions corresponding to eigenvalues $\lambda_1 = \frac{3}{2}, \lambda_2 = \frac{5}{2}$ for the symmetric kernel in $(**)$? The issue in the question has also briefly been surveyed in the first example on page 153 of the book titled "Linear Integral Equations" by Ram P. Kanwal. The picture below is an extract from the second edition of the aforementioned reference. REPLY [1 votes]: Late answer, still, for the sake of those who might land here looking for a detailed explanation: You can solve for the resolvent kernel as in Example 4 in Section 2.2 of Ram P. Kanwal, to get $$\Gamma(s,t;\lambda)=\frac{st}{1-\frac23\lambda}+\frac{s^2t^2}{1-\frac25\lambda}$$ (the question takes $\lambda=1$, but we keep $\lambda$ general for the moment). Factoring $\frac23$ out of the first denominator and $\frac25$ out of the second, we get $$\Gamma(s,t;\lambda)=\frac{st}{\frac23(\frac32-\lambda)}+\frac{s^2t^2}{\frac25(\frac52-\lambda)}=\frac{\frac32st}{\frac32-\lambda}+\frac{\frac52s^2t^2}{\frac52-\lambda}\tag{1}$$ By the preceding theory, we have $$\Gamma(s,t;\lambda)=\sum_{k=1}^\infty{\phi_k(s)\phi^*_k(t)\over\lambda_k-\lambda}$$ where $\lambda=1$, $\lambda_1=\dfrac32$, $\lambda_2=\dfrac52$, and $\phi^*_k=\phi_k$ since the kernel is real.
Hence $$\Gamma(s,t;\lambda)=\frac{\phi_1(s)\phi_1(t)}{\frac32-1}+\frac{\phi_2(s)\phi_2(t)}{\frac52-1}\tag{2}$$ Comparing equations (1) and (2), $$\phi_1(s)=\sqrt\frac32s,\ \phi_2(s)=\sqrt\frac52s^2$$ Multiplying numerator and denominator by $\sqrt2$ gives the form printed in the book. (Note: there is a misprint: $\phi_2(s)=\sqrt{\dfrac52}s^2$, not $\phi_2(s)=\sqrt{\dfrac{10}2}s^2$; this is corrected in the very next step in the textbook.)
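The eigenpairs found above can be verified symbolically. The sketch below (using `sympy`, not part of the original answer) checks that each $\phi_k$ satisfies $\phi_k(s)=\lambda_k\int_{-1}^{1}(st+s^2t^2)\phi_k(t)\,dt$ and is normalized in $L^2(-1,1)$:

```python
import sympy as sp

s, t = sp.symbols('s t')
kernel = s*t + s**2*t**2

def is_eigenpair(lam, phi):
    # check phi(s) == lam * Integral_{-1}^{1} kernel(s, t) * phi(t) dt
    image = lam * sp.integrate(kernel * phi.subs(s, t), (t, -1, 1))
    return sp.simplify(image - phi) == 0

phi1 = sp.sqrt(sp.Rational(3, 2)) * s
phi2 = sp.sqrt(sp.Rational(5, 2)) * s**2

assert is_eigenpair(sp.Rational(3, 2), phi1)
assert is_eigenpair(sp.Rational(5, 2), phi2)
# both eigenfunctions are normalized in L^2(-1, 1)
assert sp.integrate(phi1**2, (s, -1, 1)) == 1
assert sp.integrate(phi2**2, (s, -1, 1)) == 1
```

The odd/even split of the kernel is visible here: the $st$ term only sees odd functions and the $s^2t^2$ term only even ones, which is why the two eigenfunctions are a multiple of $s$ and of $s^2$.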
{"set_name": "stack_exchange", "score": 1, "question_id": 1600936}
TITLE: Do I have these recursive and closed forms correct? QUESTION [0 upvotes]: For the sequence: $0,1,5,12,22,35,51,70,92,117,145,176$ I have the closed form (underscores indicate subscripts): $$a_n=\frac{n(3n-1)}{2}$$ For recursive: $$a_{n+1}=\frac{a_n+3n^2+5n+2}{2}$$ If they are wrong, please explain how to solve. REPLY [0 votes]: They are pentagonal numbers $$ p_n = \tfrac{3n^2-n}{2} $$ for $n \ge 1$. The recursive formula is $$ \begin{align} p_1 &= 1\\ p_n & =p_{n-1}+3n-2\qquad\text{for }n\ge 2 \end{align} $$
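A quick numeric check (not part of the original exchange) confirms that the closed form and the answer's recursion reproduce the listed sequence; under 0-based indexing the step $p_n=p_{n-1}+3n-2$ is the same as $a_{n+1}=a_n+3n+1$:

```python
def pentagonal_closed(n):
    # closed form a_n = n(3n - 1)/2
    return n * (3 * n - 1) // 2

def pentagonal_recursive(n):
    # answer's recursion: p_1 = 1, p_n = p_{n-1} + 3n - 2 for n >= 2
    p = 1
    for k in range(2, n + 1):
        p += 3 * k - 2
    return p

sequence = [0, 1, 5, 12, 22, 35, 51, 70, 92, 117, 145, 176]
assert [pentagonal_closed(n) for n in range(len(sequence))] == sequence
assert all(pentagonal_closed(n) == pentagonal_recursive(n) for n in range(1, 50))
```

Note that the question's recursion $a_{n+1}=(a_n+3n^2+5n+2)/2$ does not pass this check; the successive-difference form above does.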
{"set_name": "stack_exchange", "score": 0, "question_id": 1709179}
TITLE: Topology on the space of foliations QUESTION [7 upvotes]: Let $(M^3,g)$ be a closed Riemannian manifold. Is there a “natural” topology on the space $\operatorname{Fol}(M)$ of smooth codimension $1$ foliations on $M$? Is there any other relevant structure on this set? REPLY [4 votes]: A smooth codimension 1 foliation (in any dimension) is the same thing as an integrable codimension 1 distribution (the tangent spaces to the leaves of the foliation). The space of smooth distributions is the space of smooth sections of the projectivization of the cotangent bundle of the manifold. If you have any smooth bundle $E\to M$ over any manifold, then the space of sections $\Gamma(M,E)$ has a family of topologies depending on how much smoothness you want to keep track of. In your case, for instance, if your foliation is of class $C^1$, the distribution will be of class $C^0$; hence, the natural topology to use on the space of continuous sections of $P(T^*M)$ is the compact-open topology (or the topology of uniform convergence, since $M$ is compact); then take the subspace topology on the subset of integrable distributions. The Riemannian metric is mostly irrelevant here, but you can use it to identify $TM$ and $T^*M$ if you like.
{"set_name": "stack_exchange", "score": 7, "question_id": 3728363}
\begin{document}

\title[Strichartz estimates w/o loss outside many convex obstacles]{Strichartz estimates without loss outside many strictly convex obstacles}

\author{David Lafontaine {*}}

\thanks{{*} d.lafontaine@bath.ac.uk, University of Bath, Department of Mathematical Sciences}
\begin{abstract}
We prove Strichartz estimates without loss for Schrödinger and wave equations outside finitely many strictly convex obstacles verifying Ikawa's condition, introduced in \cite{Ikawa2}. We extend the approach introduced in \cite{Schreodinger,Waves} for the case of two convex obstacles.
\end{abstract}

\maketitle

\section{Introduction}

Let $(M,g)$ be a Riemannian manifold of dimension $d$. We are interested in the Schrödinger
\begin{align}
\begin{cases}
i\partial_{t}u-\Delta_{g}u=0\\
u(0)=u_{0}
\end{cases}\label{eq:lw-1}
\end{align}
and wave equations on $M$
\begin{align}
\begin{cases}
\partial_{t}^{2}u-\Delta_{g}u=0\\
(u(0),\partial_{t}u(0))=(u_{0},u_{1}),
\end{cases}\label{eq:lw}
\end{align}
where $\Delta_{g}$ is the Laplace-Beltrami operator. A key to studying the perturbative theory and the nonlinear problems associated with these equations is to understand the size and the decay of the linear flows. One tool to quantify these decays is the so-called \textit{Strichartz estimates}:
\begin{gather*}
\Vert u\Vert_{L^{q}(0,T)L^{r}(M)}\leq C_{T}\left(\Vert u_{0}\Vert_{\dot{H}^{s}}+\Vert u_{1}\Vert_{\dot{H}^{s-1}}\right),\ \text{(Waves)}\\
\Vert u\Vert_{L^{q}(0,T)L^{r}(M)}\leq C_{T}\Vert u_{0}\Vert_{L^{2}},\ \text{(Schrödinger)}
\end{gather*}
where $(q,r)$ has to satisfy an admissibility condition given by the scaling of the equation, namely
\begin{gather*}
\frac{1}{q}+\frac{d}{r}=\frac{d}{2}-s,\ \frac{1}{q}+\frac{d-1}{2r}\leq\frac{d-1}{4},\\
\frac{2}{q}+\frac{d}{r}=\frac{d}{2},\ (q,r,d)\neq(2,\infty,2),
\end{gather*}
for the wave and Schrödinger equations respectively.
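As an illustration (ours, for orientation), in dimension $d=3$ the Schrödinger admissibility line contains the pair $(q,r)=(\infty,2)$, corresponding to mass conservation, and the forbidden endpoint $(2,6)$, while for the wave equation the classical pair $(q,r)=(5,10)$ is admissible at regularity $s=1$:

```latex
\[
\frac{1}{5}+\frac{3}{10}=\frac{1}{2}=\frac{3}{2}-1,
\qquad
\frac{1}{5}+\frac{1}{10}=\frac{3}{10}\leq\frac{1}{2},
\]
```

which is the familiar $L^{5}_{t}L^{10}_{x}$ estimate for data in $\dot{H}^{1}\times L^{2}$.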
These estimates have a long history, beginning with the work of \cite{Strichartz} for the case $q=r$ in $\mathbb{R}^{n}$, extended to all exponents by \cite{GV85}, \cite{LindbladSogge}, and \cite{KeelTao}. For the wave equation in a manifold without boundary, the finite speed of propagation shows that it suffices to work in local coordinates to obtain local Strichartz estimates. This path was followed by \cite{Kapitanskii}, \cite{MoSeSo}, \cite{SmithC11}, and \cite{Tataruns}. The case of a manifold with boundary, where reflections have to be dealt with, is more difficult. Estimates outside one convex obstacle were obtained for the wave equation by \cite{SS95}, following the parametrix construction of Melrose and Taylor, which gives an explicit representation of the solution near diffractive points, and later for the Schrödinger equation by \cite{MR2672795}. The first local estimates for the wave equation on a general domain were shown by \cite{BLP} for certain ranges of indices, then extended by \cite{BSS}. These estimates cannot be as good as in the flat case: \cite{OanaCounterex} indeed showed that a loss has to occur if some concavity is met, because of the formation of caustics. Recently, \cite{ILPAnnals} and \cite{ILLPGeneral} obtained almost sharp local Strichartz estimates inside a convex domain. One obstruction to the establishment of global estimates without loss is the presence of trapped geodesics. Under a non-trapping assumption, such estimates were established for the wave equation by the works of \cite{SmithSoggeNonTrapping}, \cite{MR2001179} and \cite{Metcalfe}. For the Schrödinger flow in the boundaryless case, \cite{BoucletTzvetkov}, \cite{Bouclet}, \cite{HassellTaoWunsch}, \cite{StaffTata} obtained the estimates in several non-trapping geometries.
When trapped geodesics are met, \cite{MR2066943} showed that a loss with respect to the flat case has to occur for the wave equation in the global $L^{2}$ integrability of the flow, and in its counterpart for the Schrödinger equation, the smoothing estimate; in the flat case these read respectively as
\begin{gather*}
\Vert(\chi u,\chi\partial_{t}u)\Vert_{L^{2}(\mathbb{R},\dot{H}^{s}\times\dot{H}^{s-1})}\lesssim\Vert u_{0}\Vert_{\dot{H}^{s}}+\Vert u_{1}\Vert_{\dot{H}^{s-1}}\ \text{(Waves)},\\
\Vert\chi u\Vert_{L^{2}(\mathbb{R},H^{1/2})}\lesssim\Vert u_{0}\Vert_{L^{2}}\ \text{(Schrödinger)}.
\end{gather*}
Despite this loss in the smoothing estimate, \cite{MR2720226} showed Strichartz estimates without loss for the Schrödinger equation in an asymptotically Euclidean manifold without boundary for which the trapped set is sufficiently small and exhibits hyperbolic dynamics. Following this breakthrough, we recently proved in \cite{Schreodinger,Waves} global Strichartz estimates without loss for Schrödinger and wave equations outside two strictly convex obstacles, exhibiting the first trapped situation with boundaries where no loss occurs. The goal of this paper is to extend this result to the exterior of $N\geq3$ convex obstacles, which is in many respects a counterpart with boundaries of the framework studied in \cite{MR2720226}. In this setting of $N$ convex obstacles, there are infinitely many trapped rays. Therefore, there is a competition between the large number of parts of the flow that remain trapped between the obstacles and the decay of each such part. For sufficient decay to hold, this competition has to occur in a favorable way.
This is the so-called Ikawa condition:
\begin{defn}[Ikawa condition, 1: strong hyperbolicity]
There exists $\alpha>0$ such that the following condition holds
\begin{equation}
\sum_{\gamma\in\mathcal{P}}\lambda_{\gamma}d_{\gamma}e^{\alpha d_{\gamma}}<\infty.\label{eq:IK1}
\end{equation}
\end{defn}
Here $\mathcal{P}$ denotes the set of all primitive periodic trajectories, $d_{\gamma}$ the length of the trajectory $\gamma$, and $\lambda_{\gamma}=\sqrt{\mu_{\gamma}\mu'_{\gamma}}$, where $\mu_{\gamma}$ and $\mu'_{\gamma}$ are the two eigenvalues of modulus smaller than one of the Poincaré map associated with $\gamma$. This condition was first introduced by \cite{IkawaMult} when investigating the decay of the local energy of the wave equation. Notice that it is in particular automatically verified when the obstacles are sufficiently far from each other. It is the analog of the topological pressure condition arising in \cite{MR2720226}. We will moreover suppose the second part of the Ikawa condition to be verified, namely, denoting by $\Theta_{i}$ the obstacles:
\begin{defn}[Ikawa condition, 2: no obstacle in shadow]
For all $i,j,k$ pairwise distinct,
\begin{equation}
\text{Conv}(\Theta_{i}\cup\Theta_{j})\cap\Theta_{k}=\emptyset.\label{eq:IK2}
\end{equation}
\end{defn}
Unlike the first one, and apart from the degenerate situation where a periodic trajectory is tangent to an obstacle, this condition may be purely technical (it allows one to construct solutions without being concerned with the shadows cast by the obstacles) and could presumably be removed by a more careful analysis. We are now in a position to state our result.
\begin{thm}
\label{th}Let $(\Theta_{i})_{1\leq i\leq N}$ be a finite family of smooth strictly convex subsets of $\mathbb{R}^{3}$, such that Ikawa's conditions (\ref{eq:IK1}) and (\ref{eq:IK2}) hold, and let $\Omega=\mathbb{R}^{3}\backslash\underset{1\leq i\leq N}{\cup}\Theta_{i}$.
Then, under the non-endpoint admissibility conditions
\begin{gather*}
\frac{1}{q}+\frac{3}{r}=\frac{3}{2}-s,\ \frac{1}{q}+\frac{1}{r}\leq\frac{1}{2},\ q\neq\infty,\ \text{(Waves)}\\
\frac{2}{q}+\frac{3}{r}=\frac{3}{2},\ (q,r)\neq(2,6),\ \text{(Schrödinger)}
\end{gather*}
global Strichartz estimates without loss hold for both the Schrödinger and wave equations in $\Omega$:
\begin{gather*}
\Vert u\Vert_{L^{q}(\mathbb{R},L^{r}(\Omega))}\lesssim\Vert u_{0}\Vert_{\dot{H}^{s}}+\Vert u_{1}\Vert_{\dot{H}^{s-1}},\ \text{(Waves)}\\
\Vert u\Vert_{L^{q}(\mathbb{R},L^{r}(\Omega))}\lesssim\Vert u_{0}\Vert_{L^{2}}.\ \text{(Schrödinger)}
\end{gather*}
\end{thm}

\subsection*{Overview of the proof}

We generalise the approach introduced in \cite{Waves,Schreodinger}. Since we dealt with the Schrödinger equation outside two convex obstacles in \cite{Schreodinger} and showed in \cite{Waves} how to adapt that work to the wave equation, the main novelty of this note is the handling of the $N$-convex framework; we therefore present a detailed proof of our main result in the more intricate case of the Schrödinger equation, and briefly explain in the last section how to adapt it to the wave equation with the material of \cite{Waves}. In the flat case, the smoothing estimate makes it possible to stack Strichartz estimates in time $\sim h$ for data of frequency $\sim h^{-1}$ in order to obtain global estimates. As remarked in \cite{MR2720226}, the logarithmic loss that appears in our setting in the smoothing estimate can be compensated if we show Strichartz estimates in time $h|\log h|$ instead of $h$ near the trapped set, provided a smoothing estimate without loss in the non-trapping region is at hand. Therefore, our first section is devoted to proving such an estimate, using a commutator argument together with the escape function construction of Morawetz, Ralston and Strauss \cite{MorRS}.
We then show that we can restrict ourselves to data micro-locally supported near the trapped trajectories, whose microlocal support remains in a neighbourhood of the trapped set for logarithmic times. We extend to the $N$-convex framework the construction of an approximate solution for such data carried out in \cite{Waves,Schreodinger} following ideas of \cite{Ikawa2,IkawaMult} and \cite{MR1254820}, and finally, we show that under the strong hyperbolicity assumption (\ref{eq:IK1}), this construction provides sufficient decay.

\subsection*{Notations}
\begin{itemize}
\item We denote by $\mathcal{K}\subset T^{\star}\Omega\cup T^{\star}\partial\Omega$ the trapped set, which is composed of infinitely many periodic trajectories,
\item and by $\mathcal{P}$ the set of all primitive periodic trajectories, that is, trajectories followed only once,
\item the operator $\psi(-h^{2}\Delta)$ localizes at frequencies $|\xi|\in[\alpha_{0}h^{-1},\beta_{0}h^{-1}]$; we refer to \cite{MR2672795} for the definition of this operator,
\item the set $\mathcal{I}$ is the set of all stories of reflections, that is, all finite sequences $(j_{1},\cdots,j_{k})$ with values in $\llbracket1,\cdots,N\rrbracket$ such that $j_{i}\neq j_{i+1}$,
\item moreover, we will adopt all the notations introduced in \cite{Schreodinger}. Let us in particular recall that
\[
\Phi_{t}:T^{\star}\Omega\cup T^{\star}\partial\Omega\longrightarrow T^{\star}\Omega\cup T^{\star}\partial\Omega
\]
denotes the billiard flow on $\Omega$: $\Phi_{t}(x,\xi)$ is the point attained after a time $t$ from the point $x$ in the direction $\frac{\xi}{|\xi|}$ at the speed $|\xi|$, following the laws of geometrical optics,
\item finally, let us recall that the spatial and directional components of $\Phi_{t}$ are respectively denoted $X_{t}$ and $\Xi_{t}$.
\end{itemize}

\section{Smoothing effect without loss outside the trapped set}

Let us recall the smoothing effect with logarithmic loss obtained in \cite{MR2066943}, in our framework of a family of strictly convex obstacles verifying Ikawa's condition:
\begin{prop}
\label{prop:smooth_logloss}For any $\chi\in C_{c}^{\infty}(\mathbb{R}^{3})$ and any $u_{0}\in L^{2}(\Omega)$ such that $u_{0}=\psi(-h^{2}\Delta)u_{0}$, we have
\begin{equation}
\Vert\chi e^{it\Delta_{D}}u_{0}\Vert_{L^{2}(\mathbb{R},L^{2})}\lesssim(h|\log h|)^{\frac{1}{2}}\Vert u_{0}\Vert_{L^{2}}.\label{eq:smooth_logloss}
\end{equation}
\end{prop}
The aim of this first section is to prove a smoothing effect without loss outside the trapped set:
\begin{prop}[Local smoothing without loss in the non-trapping region]
\label{prop:smooth_wo}Let $\phi\in C_{c}^{\infty}(\mathbb{R}^{3}\times\mathbb{R}^{3})$ be supported in the complement $\mathcal{K}^{c}$ of the trapped set. Then we have, for $u_{0}=\psi(-h^{2}\Delta)u_{0}$,
\begin{equation}
\Vert\text{Op}_{h}(\phi)e^{-it\Delta_{D}}u_{0}\Vert_{L^{2}(\mathbb{R},H^{1/2}(\Omega))}\lesssim\Vert u_{0}\Vert_{L^{2}}.\label{eq:lswlnt}
\end{equation}
\end{prop}

\begin{proof}
We will use the same strategy as in \cite{MR2720226}, Lemma 2.2, adapting the proof to the case of a domain with boundary. Notice that, for any operator $A$,
\begin{equation}
\langle Au,u\rangle(T)-\langle Au,u\rangle(0)=\int_{0}^{T}\int_{\Omega}\langle[i\Delta,A]u,u\rangle+\int_{0}^{T}\int_{\partial\Omega}\langle Au,\partial_{n}u\rangle.\label{eq:smooth_com}
\end{equation}
Thus, if we find an operator $A$ of order $0$ such that $[i\Delta,A]$ is elliptic and positive on the support of $\phi$ and such that the boundary term
\[
B=\int_{0}^{T}\int_{\partial\Omega}\langle Au,\partial_{n}u\rangle d\sigma dt
\]
is essentially positive, we shall obtain the desired estimate.
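The identity (\ref{eq:smooth_com}) is the standard commutator and Green's formula computation; here is a sketch (ours, with a sign convention that may differ from the one used implicitly above): for $u$ solving $i\partial_{t}u-\Delta u=0$ with the Dirichlet condition $u_{|\partial\Omega}=0$,

```latex
\begin{align*}
\frac{d}{dt}\langle Au,u\rangle
&=\langle A\partial_{t}u,u\rangle+\langle Au,\partial_{t}u\rangle
=-i\langle A\Delta u,u\rangle+i\langle Au,\Delta u\rangle\\
&=\langle[i\Delta,A]u,u\rangle
+i\int_{\partial\Omega}Au\,\overline{\partial_{n}u}\,d\sigma,
\end{align*}
```

where the boundary term comes from Green's formula, the other boundary contribution vanishing because $u=0$ on $\partial\Omega$; integrating in time gives (\ref{eq:smooth_com}) up to the normalization of the boundary pairing.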
\subsection*{Notations}

If $b\in C^{\infty}(\mathbb{R}^{3}\times\mathbb{R}^{3})$ is a real symbol such that $b\geq0$, and $b\geq\alpha$ on $U$, we use the Garding inequality on symbols of the form
\[
b-\alpha\frac{a\bar{a}}{\left(\sup|a|\right)^{2}}\geq0,
\]
where $a\in C_{c}^{\infty}(\mathbb{R}^{3}\times\mathbb{R}^{3})$ is supported in $U$. Notice that we have $\text{Op}_{h}(a\bar{a})=\text{Op}_{h}(a)\text{Op}_{h}(a)^{\star}+O(h)$. Moreover, in this section and this section only, we will denote by $\Phi$ the operator associated with $\phi$.

\subsection*{The symbol of A at the boundary as an operator acting on Schrödinger waves}

We perform the semi-classical change of time variable to write:
\[
B=h^{-1}\int_{0}^{hT}\int_{\partial\Omega}\langle A(e^{ith\Delta}u_{0}),\partial_{n}(e^{ith\Delta}u_{0})\rangle d\sigma dt
\]
We use the strategy of \cite{MorRS} to derive the symbol of $A$ at the boundary as an operator acting on Schrödinger waves. Let us consider $A$ as an operator acting on $\partial\Omega\times\mathbb{R}$. Notice that, because $\partial\Omega\times\mathbb{R}$ is nowhere characteristic for the semi-classical Schrödinger flow, there exists an operator $Q$ of order zero such that for any semi-classical Schrödinger wave $v$
\begin{equation}
Av_{|\partial\Omega\times\mathbb{R}}=Q(\partial_{n}v).\label{eq:smooth_pn}
\end{equation}
Let $q$ be the symbol of this operator. Let $(x_{0},t_{0})\in\partial\Omega\times\mathbb{R}$ and $(\eta,\tau)\in T_{(x_{0},t_{0})}(\partial\Omega\times\mathbb{R})$. We denote by $\psi_{\pm}$ the two distinct solutions of the eikonal equations
\begin{gather*}
|\nabla\psi(x)|^{2}=\tau,\\
\psi_{\pm}(x)=x\cdot\eta\text{ on }\partial\mathcal{O},
\end{gather*}
which are well defined in a neighborhood of $x_{0}$ as soon as $\tau-\eta^{2}>0$: indeed, extending $n$ in a small neighborhood of the boundary, one can always take
\[
\psi_{\pm}=x\cdot\eta\pm\sqrt{\tau-\eta^{2}}n.
\]
For $\lambda>0$, extending $\partial_{n}\psi_{\pm}$ to a neighborhood of $x_{0}$ in $\Omega$, consider
\[
v_{\lambda}=\frac{e^{i\lambda(\psi_{+}+t\tau)}-e^{i\lambda(\psi_{-}+t\tau)}}{(i\lambda)(\partial_{n}\psi_{+}-\partial_{n}\psi_{-})},
\]
which is a solution of an approximate semi-classical Schrödinger equation
\begin{flalign*}
i\partial_{t}v_{\lambda}-\lambda^{-1}\Delta v_{\lambda} & =O(\lambda^{-1}),\\
v_{\lambda} & =0\ \text{on }\partial\mathcal{O},
\end{flalign*}
and verifies, in a neighborhood of $x_{0}$ in $\partial\mathcal{O}$,
\[
\partial_{n}v_{\lambda}=e^{i\lambda(x\cdot\eta+t\tau)}.
\]
Now, the principal symbol of $Q$ can be computed as
\begin{multline*}
q(x_{0},t_{0},\eta,\tau)=\lim_{\lambda\rightarrow\infty}e^{-i\lambda(x_{0}\cdot\eta+t_{0}\cdot\tau)}Q(e^{i\lambda(x\cdot\eta+t\tau)})(x_{0},t_{0})\\
=\lim_{\lambda\rightarrow\infty}e^{-i\lambda(x_{0}\cdot\eta+t_{0}\cdot\tau)}Q(\partial_{n}v_{\lambda})(x_{0},t_{0}).
\end{multline*}
By the Duhamel formula, the difference between $v_{\lambda}$ and the solution $w_{\lambda}$ of the actual equation is bounded in a neighborhood of $(x_{0},t_{0})$ by
\[
|w_{\lambda}-v_{\lambda}|\lesssim\lambda^{-1},
\]
therefore, we can replace $v_{\lambda}$ by $w_{\lambda}$, which is an exact Schrödinger wave, in the limit and make use of (\ref{eq:smooth_pn}) to get:
\begin{gather*}
q(x_{0},t_{0},\eta,\tau)=\lim_{\lambda\rightarrow\infty}e^{-i\lambda(x_{0}\cdot\eta+t_{0}\cdot\tau)}Q(\partial_{n}w_{\lambda})(x_{0},t_{0})\\
=\lim_{\lambda\rightarrow\infty}e^{-i\lambda(x_{0}\cdot\eta+t_{0}\cdot\tau)}A(w_{\lambda})(x_{0},t_{0})=\lim_{\lambda\rightarrow\infty}e^{-i\lambda(x_{0}\cdot\eta+t_{0}\cdot\tau)}A(v_{\lambda})(x_{0},t_{0})\\
=\left(\frac{a(x,d\psi_{+})-a(x,d\psi_{-})}{2(\partial_{n}\psi_{+}-\partial_{n}\psi_{-})}\right)(x_{0},t_{0}).
\end{gather*}
This computation being valid for $\tau-\eta^{2}>0$, we conclude that
\begin{gather}
q(x_{0},t_{0},\eta,\tau)=\left(\frac{a(x_{0},\xi_{+})-a(x_{0},\xi_{-})}{(\xi_{+}-\xi_{-})\cdot n(x_{0})}\right),\label{eq:smoothq1}\\
\xi_{\pm}=\eta\pm\sqrt{\tau-\eta^{2}}n(x_{0}).\label{eq:smoothq2}
\end{gather}
Notice that $\xi_{\pm}$ is a pair of reflected rays.

\subsection*{The escape function}

Let $(y,\eta)\notin\mathcal{K}$. The generalized broken ray starting from $(y,\eta)$ is composed of a finite number of segments; thus the construction of \cite{MorRS}, Section 5, applies and yields a ray function starting from $(y,\eta)$, that is, a function $p_{0}$ satisfying
\begin{gather*}
\xi\cdot\nabla p_{0}(x,\xi)\geq0,\ \frac{p_{0}(x,\xi)-p_{0}(x,\xi')}{(\xi-\xi^{'})\cdot n(x)}\geq0,
\end{gather*}
and
\[
\eta\cdot\nabla p_{0}(y,\eta)>0,\ \frac{p_{0}(y,\eta)-p_{0}(y,\eta')}{(\eta-\eta')\cdot n(y)}>0.
\]
Therefore, by compactness, we can construct a function $a$ such that
\begin{gather}
\xi\cdot\nabla a(x,\xi)\geq0,\ \frac{a(x,\xi)-a(x,\xi^{'})}{(\xi-\xi^{'})\cdot n(x)}\geq0,\label{eq:smooth-a0-1}\\
\xi\cdot\nabla a(x,\xi)>0,\ \frac{a(x,\xi)-a(x,\xi^{'})}{(\xi-\xi^{'})\cdot n(x)}>0,\text{ on }V\Supset\text{supp}\,\phi.\label{eq:smooth-a0-2}
\end{gather}
Finally, notice that, because the construction of \cite{MorRS} follows the rays and because the trapped set is invariant under the flow, we can construct $a$ in such a way that
\begin{equation}
a=0\text{ near }\mathcal{K}.\label{eq:0-out}
\end{equation}
Remark that, as in \cite{MorRS}, Section 1, such an $a$ can be approximated by a polynomial in order to justify the above integration by parts.

\subsection*{A first estimate}

Let $\delta>0$.
Because of (\ref{eq:smooth-a0-1}) and (\ref{eq:smoothq1}), $q$ is real-valued and non-negative on $\{\tau-\eta^{2}\geq0\}$; therefore, there exists $\epsilon>0$ small enough so that, on $\{\tau-\eta^{2}\geq-\epsilon\}$, we have, with the notations of (\ref{eq:smoothq1}),
\begin{equation}
\Re e\frac{a(x_{0},\xi_{+})-a(x_{0},\xi_{-})}{(\xi_{+}-\xi_{-})\cdot n(x_{0})}\geq-\delta/2,\label{eq:a_bord_ok}
\end{equation}
and, for $|\alpha|\leq2(d+1)=8$,
\begin{equation}
\big|\Im m\ \partial_{x,t,\xi,\tau}^{\alpha}\frac{a(x_{0},\xi_{+})-a(x_{0},\xi_{-})}{(\xi_{+}-\xi_{-})\cdot n(x_{0})}\big|\leq\delta/2.\label{eq:a_bord_ok_IM}
\end{equation}
Now, let $\chi$ be positive and supported in $\{\tau-\eta^{2}\geq-2\epsilon\}$ and such that $\chi=1$ in $\{\tau-\eta^{2}\geq-\epsilon\}$. We decompose $a$ as the sum
\[
a=\chi a+(1-\chi)a.
\]
Note that $(1-\chi)a$ is supported away from the characteristic set $\{\tau=\eta^{2}\}$ of the semi-classical Schrödinger flow. Therefore,
\[
\Vert\text{Op}_{h}((1-\chi)a)u\Vert_{H^{\sigma}(\mathbb{R}\times\Omega)}=O(h^{\infty})\Vert u_{0}\Vert_{L^{2}},
\]
and using a trace theorem,
\[
B=\int_{0}^{T}\int_{\partial\Omega}\langle R(\partial_{n}(e^{it\Delta}u_{0})),\partial_{n}(e^{it\Delta}u_{0})\rangle d\sigma dt+O(h^{\infty})\Vert u_{0}\Vert_{L^{2}},
\]
where $R=\text{Op}(\chi a)$. Notice that a pair of reflected rays shares the same norm; therefore, by (\ref{eq:smoothq1}), the symbol of $R$ is
\begin{gather*}
r(x_{0},t_{0},\eta,\tau)=\chi(\eta,\tau)\left(\frac{a(x_{0},\xi_{+}(\eta,\tau))-a(x_{0},\xi_{-}(\eta,\tau))}{(\xi_{+}(\eta,\tau)-\xi_{-}(\eta,\tau))\cdot n(x_{0})}\right),\\
\xi_{\pm}=\eta\pm\sqrt{\tau-\eta^{2}}n(x_{0}).
\end{gather*}
Therefore, by (\ref{eq:a_bord_ok}), (\ref{eq:smooth-a0-2}) and (\ref{eq:0-out}), we can use the Garding inequality for the real part and the Calderon-Vaillancourt theorem for the imaginary part to write
\begin{equation}
B\geq-\delta\int_{0}^{T}\int_{\partial\Omega}|\tilde{\Phi}u|^{2}d\sigma dt-c_{\text{Gard}}\Vert\chi_{b}u\Vert_{L^{2}([0,T],H^{-1/2}(\partial\Omega))}+O(h^{\infty})\Vert u_{0}\Vert_{L^{2}},\label{eq:smot10}
\end{equation}
where $\tilde{\phi}\in C_{c}^{\infty}(\mathbb{R}^{3}\times\mathbb{R}^{3})$ is supported in $\mathcal{K}^{c}$ and $\tilde{\phi}=1$ on the support of $\phi$, and $\chi_{b}\in C_{c}^{\infty}(\mathbb{R}^{3})$ is such that $\chi_{b}=1$ on $\partial\Omega$. Moreover, by the same procedure as in \cite{MorRS}, we may suppose that for $|x|\geq R\gg1$, $a$ is given by $a(x,\xi)=hx\cdot\xi$. Let $\chi_{R}\in C_{c}^{\infty}$ be such that $\chi_{R}=1$ on $\left\{ |x|\leq2R\right\} $ and $\chi_{R}=0$ on $\left\{ |x|\geq3R\right\} $. We decompose
\begin{multline*}
\int_{\Omega}\langle[i\Delta,A]u,u\rangle=\int_{\Omega}\langle[i\Delta,A]\chi_{R}u,\chi_{R}u\rangle\\
+\int_{\Omega}\langle[i\Delta,A]\chi_{R}u,(1-\chi_{R})u\rangle+\int_{\Omega}\langle[i\Delta,A](1-\chi_{R})u,\chi_{R}u\rangle\\
+\int_{\Omega}\langle[i\Delta,A](1-\chi_{R})u,(1-\chi_{R})u\rangle.
\end{multline*}
Because the commutator is truly non-negative for functions supported in $\left\{ |x|\geq2R\right\} $, the last term is non-negative. Moreover, the integrands of both intermediate terms are supported in $\left\{ 2R\leq|x|\leq3R\right\} $.
Therefore, taking $R$ large enough, the long-range smoothing estimate, which is for example a consequence of the long-range resolvent estimate of Cardoso and Vodev \cite{CardoVodev} by the procedure of \cite{MR2068304}, allows us to control them:
\begin{multline*}
\big|\int_{\mathbb{R}}\int_{\Omega}\langle[i\Delta,A]\chi_{R}u,(1-\chi_{R})u\rangle+\langle[i\Delta,A](1-\chi_{R})u,\chi_{R}u\rangle\big|\\
\lesssim\Vert\tilde{\chi}u\Vert_{L^{2}H^{1/2}}\lesssim\Vert u_{0}\Vert_{L^{2}},
\end{multline*}
where $\tilde{\chi}\in C_{c}^{\infty}$ is equal to one in $\left\{ 2R\leq|x|\leq3R\right\} $ and supported in $\left\{ |x|\geq R\right\} $. Finally, by the Garding inequality again, using (\ref{eq:smooth-a0-2}):
\begin{equation}
\int_{0}^{T}\int_{\Omega}\langle[i\Delta,A]\chi_{R}u,\chi_{R}u\rangle\geq C\Vert\Phi u\Vert_{L^{2}H^{1/2}}-c_{\text{Gard}}\Vert\chi_{R}u\Vert_{L^{2}L^{2}}.\label{smoth11}
\end{equation}
Thus, combining (\ref{eq:smooth_com}), (\ref{eq:smot10}), and (\ref{smoth11}), using the trace theorem and controlling the lower order terms with the estimate \textit{with logarithmic loss}, we get
\begin{equation}
\Vert\Phi u\Vert_{L^{2}H^{1/2}}\leq C(\Vert u_{0}\Vert_{L^{2}}+\delta\Vert\tilde{\Phi}u\Vert_{L^{2}H^{1/2}})+C_{\delta}O(h^{\infty}).\label{eq:it}
\end{equation}

\subsection*{Iteration and conclusion}

To conclude, we would like to take $\delta>0$ small enough and iterate (\ref{eq:it}). In order to do so, we have to take care of the potential dependency of the constants appearing in this estimate on $\phi,\tilde{\phi},\tilde{\tilde{\phi}},\dots,\overset{\sim(k)}{\phi},\cdots$ and $\delta$. Let us first remark that we take all the $\overset{\sim(k)}{\phi}$ in a given small neighborhood of the support of $\phi$ (this neighborhood is a subset of the set $V$ of (\ref{eq:smooth-a0-2})). Thus, there exists $A\geq1$ such that, for $|\alpha+\beta|\leq N$,
\[
\Vert\partial_{x,\xi}^{\alpha,\beta}\overset{\sim(k)}{\phi}\Vert_{L^{\infty}}\leq A^{k}.
\] Therefore, the Garding constants $c_{\text{Gard}}$ in (\ref{eq:smot10}) and (\ref{smoth11}) at the $k$-th iteration can be taken equal to $A^{k}$. Moreover, by (\ref{eq:smooth-a0-2}), $\xi\cdot\nabla a$ is bounded below by a constant $C$ uniformly on the support of all the $\overset{\sim(k)}{\phi}$, so we can choose the same constant $C$ in (\ref{smoth11}) at every iteration. Finally, the $O(h^{\infty})$ term depends only on $\delta$. Therefore, we can make the constants in (\ref{eq:it}) at the $k$-th iteration precise: \[ \Vert\overset{\sim(k)}{\Phi}u\Vert_{L^{2}H^{1/2}}\leq(C+A^{k})\Vert u_{0}\Vert_{L^{2}}+C\delta\Vert\overset{\sim(k+1)}{\Phi}u\Vert_{L^{2}H^{1/2}}+C_{\delta}O(h^{\infty}), \] where $C$ and $A$ do not depend on $k$ or $\delta$, and $C_{\delta}$ depends only on $\delta$. Thus we get \begin{multline*} \Vert\Phi u\Vert_{L^{2}H^{1/2}}\leq\left[C\frac{1-(C\delta)^{k+1}}{1-C\delta}+\frac{(C\delta A)-(C\delta A)^{k+1}}{1-C\delta A}\right]\Vert u_{0}\Vert_{L^{2}}\\ +(C\delta)^{k}\Vert\overset{\sim(k+1)}{\Phi}u\Vert_{L^{2}H^{1/2}}+C_{\delta}\frac{1-(C\delta)^{k+1}}{1-C\delta}O(h^{\infty})\\ \leq\left[C\frac{1-(C\delta)^{k+1}}{1-C\delta}+\frac{(C\delta A)-(C\delta A)^{k+1}}{1-C\delta A}\right]\Vert u_{0}\Vert_{L^{2}}\\ +(C\delta)^{k}\Vert\chi_{0}u\Vert_{L^{2}H^{1/2}}+C_{\delta}\frac{1-(C\delta)^{k+1}}{1-C\delta}O(h^{\infty}), \end{multline*} where $\chi_{0}$ is compactly supported. We fix $\delta$ small enough so that $C\delta A<1$ and let $k$ go to infinity to obtain the result. \end{proof} \begin{rem} Notice that the exact same proof holds for any domain for which a smoothing estimate with logarithmic loss holds. Moreover, as remarked in \cite{DatchevVasy}, we can iterate such a proof, and therefore it suffices to assume a smoothing estimate with polynomial loss. 
More precisely, we initialize the argument by controlling the lower-order terms with the smoothing estimate with polynomial loss, and then iterate the proof, controlling the lower-order terms at each step by the estimate obtained at the previous one, until we reach $h^{0}$. Thus we obtain the following more general result: \end{rem} \begin{prop} \label{prop:dv}Let $\Omega$ be such that the following smoothing estimate with polynomial loss holds: there exists $k>0$ such that for all $\chi\in C_{c}^{\infty}(\mathbb{R}^{3})$ and all $u_{0}\in L^{2}$ such that $u_{0}=\psi(-h^{2}\Delta)u_{0}$, we have \[ \Vert\chi e^{-it\Delta_{D}}u_{0}\Vert_{L^{2}(\mathbb{R},H^{1/2}(\Omega))}\lesssim h^{-k}\Vert u_{0}\Vert_{L^{2}}. \] Then, a smoothing estimate without loss holds outside the trapped set $\mathcal{K}$: that is, for all $\phi\in C_{c}^{\infty}(\mathbb{R}^{3}\times\mathbb{R}^{3})$ supported in $\mathcal{K}^{c}$, we have \[ \Vert\text{Op}_{h}(\phi)e^{-it\Delta_{D}}u_{0}\Vert_{L^{2}(\mathbb{R},H^{1/2}(\Omega))}\lesssim\Vert u_{0}\Vert_{L^{2}}. 
\] \end{prop} \section{Reduction to the logarithmic trapped set} Because of Proposition \propref{smooth_logloss} and Proposition \propref{smooth_wo}, the exact same proof as in \cite{Schreodinger}, Section 2, shows that the following proposition implies our main result for the Schrödinger equation: \begin{prop}[Strichartz estimates on a logarithmic interval near the trapped set] \label{thm:main-1}\label{prop:semilog}There exists $\epsilon>0$ such that for all $\phi\in C_{c}^{\infty}(\mathbb{R}^{3}\times\mathbb{R}^{3})$ supported in a small enough neighborhood of $\mathcal{K}\cap\left\{ |\xi|\in[\alpha_{0},\beta_{0}]\right\} $, we have \begin{equation} \Vert\text{Op}_{h}(\phi)e^{-it\Delta_{D}}\psi(-h^{2}\Delta)u_{0}\Vert_{L^{p}(0,\epsilon h|\log h|)L^{q}(\Omega)}\leq C\Vert u_{0}\Vert_{L^{2}}.\label{eq:butult} \end{equation} \end{prop} Notice that, by a classical $TT^{\star}$ argument, Proposition \propref{semilog} is a consequence of the following pointwise dispersive estimate: \begin{equation} \Vert Ae^{ith\Delta}\psi(-h^{2}\Delta)A^{\star}\Vert_{L^{1}\rightarrow L^{\infty}}\lesssim(ht)^{-3/2},\ \forall0\leq t\leq\epsilon|\log h|,\label{eq:pointdisp} \end{equation} where, here and in the sequel of this section, we denote \[ A:=\text{Op}_{h}(\phi) \] for the sake of readability. Thus, the rest of the paper will be devoted to proving such an estimate. The aim of this section is to show that we can reduce ourselves to data microlocally supported near the points that remain close to the trapped trajectories for logarithmic times. In order to do so, we first need to generalize some properties of the billiard flow shown in \cite{Schreodinger}: \subsection{Regularity of the billiard flow} We first need the following lemma, where we denote by $W_{\text{tan},\eta}$ an $\eta$-neighborhood of the tangent rays: \begin{lem} \label{lem:2cross}There exists $\eta>0$ such that no ray can cross $W_{\text{tan},\eta}$ more than twice. 
\end{lem} \begin{proof} If this were not the case, for all $n\geq0$ there would exist $(x_{n},\xi_{n})\in K\times\mathcal{S}^{2}$, where $K$ is a compact set strictly containing the obstacles, such that $\Phi_{t}(x_{n},\xi_{n})$ crosses $W_{\text{tan},\frac{1}{n}}$ at least three times. Extracting from $(x_{n},\xi_{n})$ a converging subsequence and letting $n$ go to infinity, by continuity of the flow we obtain a ray that is tangent to $\cup\Theta_{i}$ at at least three points. Therefore, it suffices to show that such a ray cannot exist. Remark that, because of the non-shadow condition (\ref{eq:IK2}), if $(x,\xi)\in W_{\text{tan}}$ and we consider the ray starting from $(x,\xi)$ and the ray starting from $(x,-\xi)$, one of the two does not cross any obstacle in positive times. But if a ray were tangent to the obstacles at three points at least, then, considering the second tangency point $(x_{0},\xi_{0})$, both rays starting from $(x_{0},\xi_{0})$ and $(x_{0},-\xi_{0})$ would have to cross an obstacle; therefore, this is not possible. \end{proof} Together with Lemma 3.2 of \cite{Schreodinger}, which gives the (Hölder) regularity of the billiard flow near tangent points for a domain with no contact point of infinite order, we obtain, with the exact same proof as in this previous paper (the only assumption used being the one given by \lemref{2cross}): \begin{lem} \label{lem:distm}Let $V$ be a bounded open set containing the convex hull of $\cup\Theta_{i}$. 
Then, there exist $\mu>0$, $C>0$ and $\tau>0$ such that, for all $x,\tilde{x}\in V$, all $\xi,\tilde{\xi}$ such that $|\xi|,|\tilde{\xi}|\in[\alpha_{0},\beta_{0}]$, and all $t>0$, there exists $t'$ verifying $|t'-t|\leq\tau$ such that \begin{equation} d(\varPhi_{t'}(\tilde{x},\tilde{\xi}),\varPhi_{t'}(x,\xi))\leq C^{t'}d((\tilde{x},\tilde{\xi}),(x,\xi))^{\mu}.\label{eq:div} \end{equation} \end{lem} \begin{rem} It is crucial, in the proof of this previous lemma, that a ray cannot cross $W_{\text{tan},\eta}$ infinitely many times: indeed, regularity is lost at each tangent point. Therefore, in the case, which does not enter the framework of \nameref{eq:IK2}, of a trapped ray tangent to an obstacle, this proof does not hold, and we do not know if such a regularity of the flow remains true. As this regularity is crucial in the sequel, we think that this ``non-shadow'' condition may not be only technical, at least in the degenerate situation previously mentioned. \end{rem} Finally, let us remark that \begin{lem} \label{lem:goesunif-1}Let $\delta>0$ and let $D_{\delta}$ be a $\delta$-neighborhood of $\text{\ensuremath{\mathcal{P}}}$. Then, for all compact $K$, $\Phi_{t}(\rho)\longrightarrow\infty$ as $t\longrightarrow\pm\infty$ uniformly with respect to $\rho\in K\cap D_{\delta}^{c}$. \end{lem} \begin{proof} It suffices to prove that the lengths of all trajectories in $K\cap D_{\delta}^{c}$ are uniformly bounded. If this is not the case, there exists $\rho_{n}\in D_{\delta}^{c}\cap K$ such that \[ \text{length}\left\{ \Phi_{t}(\rho_{n})\right\} _{t\geq0}\cap K\longrightarrow+\infty \] as $n$ goes to infinity. Up to extracting a subsequence, $\rho_{n}\longrightarrow\rho^{\star}\in D_{\delta}^{c}$. Necessarily, $\text{length}\left\{ \Phi_{t}(\rho^{\star})\right\} _{t\geq0}\cap K=\infty$, thus $\rho^{\star}\in\mathcal{P}$; this is not possible. \end{proof} \begin{lem} \label{lem:closed}$\mathcal{K}$ is closed. 
\end{lem} \begin{proof} Let $\rho_{n}\in\mathcal{K}$, $\rho_{n}\longrightarrow\rho$. There exists $A>0$ such that for any $t$, $d(\pi_{x}\Phi_{t}(\rho_{n}),0_{\mathbb{R}^{3}})\leq A.$ Since $\pi_{x}\Phi_{t}(\cdot)$ is continuous for any fixed $t$, it suffices to pass to the limit $n\longrightarrow\infty$ in the previous inequality to obtain $\rho\in\mathcal{K}$. \end{proof} \subsection{Reduction of the problem} We now show that, in order to prove the pointwise dispersive estimate (\ref{eq:pointdisp}) in times $[T_{0},\epsilon|\log h|]$, we can reduce ourselves to points that remain near trapped trajectories in logarithmic times $T_{0}\leq t\leq\epsilon|\log h|$. In contrast to \cite{Schreodinger}, where we used a translation argument in the spirit of \cite{MR2672795}, we are here inspired by \cite{MR2720226}. Let $\delta>0$. By \lemref{closed}, the projection on $\mathbb{R}^{3}\times\mathcal{S}^{2}$ of the trapped set is compact, thus there exists a finite number of phase-space segments $(S_{k})_{1\leq k\leq N_{\delta}}$, $S_{k}=s_{k}\times\mathbb{R}\xi_{k}\subset T^{\star}\Omega$, $s_{k}$ being a segment of $\mathbb{R}^{3}$, such that $\mathcal{K}$ is contained in a $\delta$-neighborhood of $\cup S_{k}$. The small quantity $\delta>0$ may be reduced a finite number of times in the sequel. We will now define a microlocal partition of unity $(\Pi_{k})$. Let $p_{k}\in C_{0}^{\infty}(T^{\star}\Omega),\ 0\leq p_{k}\leq1$, be a family of functions such that $p_{k}$ is supported in a neighborhood $W_{k}$ of $S_{k}$ and \[ \sum_{1\leq k\leq N_{\delta}}p_{k}=1\ \text{in a neighborhood of }\mathcal{K}. \] Let us define \[ \Pi_{k}=\text{Op}_{h}(p_{k}),\ \forall1\leq k\leq N_{\delta}. \] Now, let $\chi_{0}\in C^{\infty}(\mathbb{R}^{3})$, $0\leq\chi_{0}\leq1$, be such that $\chi_{0}$ is supported sufficiently far from $\text{Con}\cup\Theta_{i}$ and equal to one far from the origin. 
Notice that any broken bicharacteristic entering the support of $\chi_{0}$ from its complement remains in it for all times. We take \[ \Pi_{0}=\chi_{0} \] and let \[ \Pi_{-1}=\text{Op}_{h}\left(1-\chi_{0}-\sum_{1\leq k\leq N_{\delta}}p_{k}\right). \] $\Pi_{-1}$ is defined in such a way that its symbol satisfies \[ d(\text{Supp}p_{-1},\mathcal{K})\geq d_{1}>0, \] therefore, by \lemref{goesunif-1}, there exists $T_{0}>0$ such that \[ \pi_{x}\Phi_{t}(\text{Supp}p_{-1})\subset\text{Supp}\chi_{0},\ \forall|t|\geq T_{0}. \] Now, let $\tau>0$; it will be fixed in the sequel. In the spirit of \cite{MR2720226}, we decompose $T=(L-1)\tau+s_{0}$, where $L\in\mathbb{N}$ and $s_{0}\in[0,\tau)$. We have \[ e^{iTh\Delta}=e^{is_{0}h\Delta}\left(e^{i\tau h\Delta}\right)^{L-1},\ e^{i\tau h\Delta}=e^{i\tau h\Delta}\sum_{-1\leq k\leq N_{\delta}}\Pi_{k}, \] and thus \[ e^{iTh\Delta}=\sum_{\mathbf{k}=(k_{1},\cdots,k_{L})}e^{is_{0}h\Delta}\Pi_{k_{L}}e^{i\tau h\Delta}\Pi_{k_{L-1}}\cdots\Pi_{k_{1}}e^{i\tau h\Delta}, \] where the sum is taken over all multi-indices $\mathbf{k}\in\llbracket-1,N_{\delta}\rrbracket^{L}$. Let us remark that, because the wavefront set of the semi-classical Schrödinger flow is invariant under the generalized bicharacteristic flow, denoting \[ \sigma_{\mathbf{k}}=Ae^{is_{0}h\Delta}\Pi_{k_{L}}e^{i\tau h\Delta}\Pi_{k_{L-1}}\cdots\Pi_{k_{1}}e^{i\tau h\Delta}\psi(-h^{2}\Delta)A^{\star}, \] it holds that \begin{equation} \rho\in WF_{h}(\sigma_{\mathbf{k}})\implies\begin{cases} \pi_{x}\rho\in\text{Supp}\phi,\\ \Phi_{j\tau}(\rho)\in\text{Supp}p_{k_{j}} & \forall1\leq j\leq L,\\ \pi_{x}\Phi_{T}(\rho)\in\text{Supp}\phi. \end{cases}\label{eq:WFtraj} \end{equation} Thus we have \begin{lem} Let $\mathbf{k}\in\llbracket-1,N_{\delta}\rrbracket^{L}$. If there exists $1\leq j\leq L$ such that $k_{j}=0$ or $k_{j}=-1$, then $\sigma_{\mathbf{k}}=O(h^{\infty})$ as an $L^{1}\rightarrow L^{\infty}$ operator. 
\end{lem} \begin{proof} As remarked in \cite{MR2720226}, by virtue of Sobolev embeddings it suffices to show that $\sigma_{\mathbf{k}}=O(h^{\infty})$ as an $L^{2}\rightarrow L^{2}$ operator, that is, that it has empty operator wavefront set. Let us first suppose that there exists $j$ such that $k_{j}=0$, and choose $j$ to be the first such index. Suppose that $\rho\in WF_{h}(\sigma_{\mathbf{k}})$. There exists $t_{0}\in[(j-1)\tau,j\tau]$ such that the spatial projection of $\Phi_{t_{0}}(\rho)$ enters the support of $\chi_{0}$ from its complement; thus it does not leave it afterwards. Therefore $\pi_{x}\Phi_{T}(\rho)\in\text{Supp}\chi_{0}$; this is not possible. Thus $WF_{h}(\sigma_{\mathbf{k}})=\emptyset$. Now, suppose that there exists $j\in[1,L-\frac{T_{0}}{\tau}]$ such that $k_{j}=-1$. Let $\rho\in WF_{h}(\sigma_{\mathbf{k}})$. As $\Phi_{j\tau}(\rho)\in\text{Supp}p_{-1}$, we have \[ \pi_{x}\Phi_{j\tau+t}(\rho)\in\text{Supp}\chi_{0},\ \forall t\geq T_{0}, \] and we are thus reduced to the previous case. In the same way, we exclude $j\in[\frac{T_{0}}{\tau},L]$ using the corresponding property for $t\leq-T_{0}$. \end{proof} But, as the $\mathbf{k}$-sum contains at most $(N_{\delta}+2)^{\frac{\epsilon}{\tau}|\log h|}$ terms, that is, at most a negative power of $h$, we have \[ \sum_{\mathbf{k}}O(h^{\infty})=O(h^{\infty}), \] and therefore we deduce from the previous lemma that, as $L^{1}\rightarrow L^{\infty}$ operators, \[ Ae^{iTh\Delta}\psi(-h^{2}\Delta)A^{\star}=\sum_{\mathbf{k},k_{j}\geq1}\sigma_{\mathbf{k}}+O(h^{\infty}). \] We now choose $\tau>0$ small enough, as given by the following lemma: \begin{lem} \label{lem:choixtau}For all $\delta>0$, there exists $\tau>0$ small enough so that, for every trajectory $\gamma\in\text{\ensuremath{\mathcal{P}}}$, we have \[ d(\rho,\gamma)<\delta,\ d(\Phi_{\tau}(\rho),\gamma)<\delta\implies\forall t\in[0,\tau],d(\Phi_{t}(\rho),\gamma)<3\delta. 
\] \end{lem} \begin{proof} Let $\tilde{\rho}$ realize the distance from $\rho$ to $\gamma$. We denote \[ t_{0}=\inf\left\{ t\geq0,\ \text{s.t. }\pi_{x}\Phi_{t}(\rho)\in\Theta\right\} ,\ \tilde{t}_{0}=\inf\left\{ t\geq0,\ \text{s.t. }\pi_{x}\Phi_{t}(\tilde{\rho})\in\Theta\right\} . \] We assume, for example, that $\tilde{t}_{0}>t_{0}$. Notice that, by the proof of \lemref{distm} from \cite{Schreodinger}, we have \[ \forall t\in[0,\tau]\backslash(t_{0},\tilde{t}_{0}),\ d(\Phi_{t}(\rho),\Phi_{t}(\tilde{\rho}))\leq C^{\tau}\delta. \] Moreover, for $t\in[t_{0},\tilde{t}_{0}],$ \[ d(\Phi_{t}(\rho),\Phi_{t}(\tilde{\rho}))\leq d(\Phi_{t}(\rho),\Phi_{t_{0}}(\rho))+d(\Phi_{t_{0}}(\rho),\Phi_{t_{0}}(\tilde{\rho}))+d(\Phi_{t_{0}}(\tilde{\rho}),\Phi_{t}(\tilde{\rho})), \] but, as $\left\{ \Phi_{t}(\rho)\right\} _{t\in[t_{0},\tilde{t}_{0}]}$ and $\left\{ \Phi_{t}(\tilde{\rho})\right\} _{t\in[t_{0},\tilde{t}_{0}]}$ are straight lines, \[ d(\Phi_{t}(\rho),\Phi_{t_{0}}(\rho))\leq|t-t_{0}||\pi_{\xi}\rho|\leq\tau\beta_{0}, \] and similarly for $\tilde{\rho}$. Therefore \[ d(\Phi_{t}(\rho),\Phi_{t}(\tilde{\rho}))\leq2\tau\beta_{0}+C^{\tau}\delta. \] We take $\tau>0$ small enough so that $2\tau\beta_{0}\leq\delta$ and $C^{\tau}\leq2$, and we get the result. \end{proof} The segment $S_{k_{j}}$ joins the obstacles $\Theta_{a_{j}}$ and $\Theta_{b_{j}}$. Choosing $\delta>0$ small enough, by (\ref{eq:WFtraj}), $\sigma_{\mathbf{k}}$ is not $O(h^{\infty})$ only if, for all $j$, \[ (a_{j}=a_{j+1}\text{ and }b_{j}=b_{j+1})\text{ or }(a_{j+1}=b_{j}), \] that is, only if $\gamma_{\mathbf{k}}=S_{k_{1}}\circ S_{k_{2}}\circ\cdots\circ S_{k_{L}}$ is a trajectory. If this is the case, let $J_{\mathbf{k}}$ be the corresponding story of reflections. We extract from $J_{\mathbf{k}}$ the primitive story $I_{\mathbf{k}}$, that is, $J_{\mathbf{k}}=rI_{\mathbf{k}}+l$, $I_{\mathbf{k}}$ being primitive. 
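To illustrate this notation on a toy example (the obstacle labels are arbitrary and purely illustrative): if $I=(1,2,3)$, the story \[ J=(1,2,3,1,2,3,1,2) \] decomposes as $J=2I+l$ with $l=(1,2)$, that is, two full runs of $I$ followed by the partial story $l$, of length $|l|=2\leq|I|-1$. 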
We now introduce the trapped set of an open subset in time $T$: \begin{defn} Let $D$ be an open subset of $(T^{\star}\Omega\cup T^{\star}\partial\Omega)\cap\left\{ |\xi|\in[\alpha_{0},\beta_{0}]\right\} $ and $T>0$. We define the trapped set of $D$ in time $T$, denoted $\mathcal{T}_{T}(D)$, in the following way: \[ \rho\in\mathcal{T}_{T}(D)\iff\forall t\in[0,T],\ \Phi_{t}(\rho)\in D. \] \end{defn} Let us denote by $D_{I_{\mathbf{k}},\delta}$ a $\delta$-neighborhood of $\gamma_{\mathbf{k}}\cap\left\{ |\xi|\in[\alpha_{0},\beta_{0}]\right\} $. For $I$ a primitive story of reflections, let $q_{I,T}\in C_{0}^{\infty}$ be such that \begin{equation} q_{I,T}=0\text{ outside }\mathcal{T}_{T}(D_{I,4\delta}),\ q_{I,T}=1\text{ in }\mathcal{T}_{T}(D_{I,3\delta}),\label{eq:qIT} \end{equation} and denote \[ Q_{I}^{T}:=\text{Op}_{h}(q_{I,T}). \] By (\ref{eq:WFtraj}) and the choice of $\tau>0$ permitted by \lemref{choixtau}, we have \[ \sigma_{\mathbf{k}}=\sigma_{\mathbf{k}}Q_{I_{\mathbf{k}}}^{T}+O(h^{\infty}). \] Now, remark that for $I$ a primitive story of reflections, \[ Ae^{iTh\Delta}\psi(-h^{2}\Delta)A^{\star}Q_{I}^{T}=\sum_{\mathbf{k},I_{\mathbf{k}}=I}\sigma_{\mathbf{k}}Q_{I}^{T}+O(h^{\infty}), \] and therefore we recover \[ \sum_{I\text{ primitive}}Ae^{iTh\Delta}\psi(-h^{2}\Delta)A^{\star}Q_{I}^{T}=Ae^{iTh\Delta}\psi(-h^{2}\Delta)A^{\star}+O(h^{\infty}). \] Let us finally remark that for $T\leq\epsilon|\log h|$ we have $h\leq e^{-\frac{T}{\epsilon}}$, thus the $O(h^{\infty})$ term satisfies the dispersive estimate. Therefore, we have proven that: \begin{lem} \label{lem:redtrap}If the following dispersive estimate holds true \[ \Vert\sum_{I\text{ primitive}}Ae^{iTh\Delta}\psi(-h^{2}\Delta)A^{\star}Q_{I}^{T}\Vert_{L^{1}\longrightarrow L^{\infty}}\lesssim(hT)^{-\frac{3}{2}},\ \forall T_{0}\leq T\leq\epsilon|\log h|, \] then the dispersive estimate (\ref{eq:pointdisp}) is true in times $[T_{0},\epsilon|\log h|]$. 
\end{lem} \subsection{Times $0\protect\leq t\protect\leq T_{0}$ and conclusion of the section} Finally, notice that the construction of $Q_{I}^{T}$ does not depend on $\phi$. We choose $\phi$ supported in a small enough neighborhood of $\mathcal{K}$ so that, in times $0\leq t\leq T_{0}$ and for $|\xi|\in[\alpha_{0},\beta_{0}]$, the bicharacteristic flow $\Phi_{t}(\rho)$ starting from any $\rho$ in the support of $\phi$ has only hyperbolic points of intersection with the boundary. But, for such points, we can use the parametrix construction of Ikawa \cite{IkawaMult,Ikawa2}, adapted to this problem in \cite{Schreodinger} and explained in the next section in the $N$-convex framework, to show that the dispersive estimate holds true in times $0\leq t\leq T_{0}$, with a constant depending on $T_{0}$: indeed, the flow can be written as a finite (depending on $T_{0}$) sum of reflected waves, each of them satisfying the dispersive estimate. Thus, by \lemref{redtrap}, we are reduced to showing the following dispersive estimate in order to obtain our main result; namely, we have \begin{lem} \label{lem:redtrap-ult}If the following dispersive estimate holds true \begin{equation} \Vert\sum_{I\text{ primitive}}Ae^{iTh\Delta}\psi(-h^{2}\Delta)A^{\star}Q_{I}^{T}\Vert_{L^{1}\longrightarrow L^{\infty}}\lesssim(hT)^{-\frac{3}{2}},\ \forall T_{0}\leq T\leq\epsilon|\log h|,\label{eq:finbutult} \end{equation} then the Strichartz estimates of Theorem \ref{th} hold true for the Schrödinger equation. \end{lem} where the symbols of the $Q_{I}^{T}$ were defined by (\ref{eq:qIT}). The sequel of the paper is devoted to proving this estimate. 
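Let us also record the elementary computation behind the admissibility of $O(h^{\infty})$ remainders in such estimates (a sketch, with constants not tracked): for $T_{0}\leq T\leq\epsilon|\log h|$ we have $h\leq e^{-T/\epsilon}$, hence, for every $M\geq0$, \[ h^{M}=h^{-\frac{3}{2}}h^{M+\frac{3}{2}}\leq h^{-\frac{3}{2}}e^{-(M+\frac{3}{2})\frac{T}{\epsilon}}\lesssim_{M,\epsilon}h^{-\frac{3}{2}}T^{-\frac{3}{2}}=(hT)^{-\frac{3}{2}}, \] since $e^{-aT}\lesssim_{a}T^{-\frac{3}{2}}$ uniformly in $T\geq T_{0}$ for every $a>0$. Thus any $O(h^{\infty})$ error term automatically satisfies the dispersive estimate on this time interval. 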
Let us remark that, with the same proof as in \cite{Schreodinger}, we have, as a consequence of \lemref{distm}, \[ d(\mathcal{T}_{T}(\tilde{D})^{c},\mathcal{T}_{T}(D))\geq\frac{1}{4}e^{-cT}d(\tilde{D}{}^{c},D),\ \forall D\subset\tilde{D}, \] and therefore $q_{I,T}$ can, and will, be constructed in such a way that, for $0\leq T\leq\epsilon|\log h|$, \begin{equation} |\partial^{\alpha}q_{I,T}|\lesssim h^{-2|\alpha|c\epsilon}.\label{eq:contrq} \end{equation} \section{Construction of an approximate solution} \subsection{The microlocal cut-off} We will use the reflected-phase construction of \cite{Ikawa2,IkawaMult} and \cite{MR1254820}. It is summed up in \cite{Schreodinger}; let us recall that $\varphi_{J}$ is the reflected phase obtained from $\varphi$ after the story of reflections $J$. According to \cite{MR1254820} (Remark 3.17') there exists $M>0$ such that if $J\in\mathcal{I}$, $J=rI+l$ verifies $|J|\geq M$, and $\varphi$ verifies $(P)$, then $\varphi_{J}$ can be defined in $\mathcal{U}_{I,l}^{\infty}$. We choose $\delta>0$ small enough so that, according to the construction of the previous section, \[ D_{I,4\delta}\subset\bigcup_{|l|\leq|I|-1}\mathcal{U}_{I,l}^{\infty}; \] moreover, we will take $T_{0}\geq2\beta_{0}M$. Let us recall that we are reduced to showing the following dispersive estimate: \[ \Vert\sum_{I\text{ primitive}}Ae^{iTh\Delta}\psi(-h^{2}\Delta)A^{\star}Q_{I}^{T}\Vert_{L^{1}\longrightarrow L^{\infty}}\lesssim(hT)^{-\frac{3}{2}},\ \forall T_{0}\leq T\leq\epsilon|\log h|. \] For every primitive story $I$, let us define \[ \delta_{I}^{y}(x)=\frac{1}{(2\pi h)^{3}}\int e^{-i(x-y)\cdot\xi/h}p_{I,T}(x,\xi)d\xi, \] where $p_{I,T}$ is the symbol associated with $P_{I}^{T}:=\psi(-h^{2}\Delta)A^{\star}Q_{I}^{T}$. Then we have, for $u_{0}\in L^{2}$, \[ \psi(-h^{2}\Delta)A^{\star}Q_{I}^{T}u_{0}(x)=\int\delta_{I}^{y}(x)u_{0}(y)dy. 
\] Then, by linearity of the flow, \[ Ae^{ith\Delta}\psi(-h^{2}\Delta)A^{\star}Q_{I}^{T}u_{0}=\int Ae^{ith\Delta}\delta_{I}^{y}u_{0}(y)dy, \] and it therefore suffices to show that \[ \sum_{I\text{ primitive}}|Ae^{iTh\Delta}\delta_{I}^{y}(x)|\lesssim(hT)^{-3/2},\ \forall T_{0}\leq T\leq\epsilon|\log h|. \] Finally, notice that, as the operator $A$ is bounded from $L^{\infty}$ to $L^{\infty}$, in the same way as in \cite{Schreodinger} it suffices to show that \begin{equation} \sum_{I\text{ primitive}}|\chi e^{iTh\Delta}\delta_{I}^{y}(x)|\lesssim(hT)^{-3/2},\ \forall T_{0}\leq T\leq\epsilon|\log h|,\label{eq:butult2} \end{equation} where $\chi\in C_{c}^{\infty}(\mathbb{R}^{3})$ is supported in a neighborhood of the spatial projection of the support of $\phi$ and equal to one on it. In order to do so, we will construct a parametrix, that is, an approximate solution, in times $0\leq t\leq\epsilon|\log h|$ for the semi-classical Schrödinger equation with data $\delta_{I}^{y}$. The first step will be to construct an approximate solution of the semi-classical Schrödinger equation with data \[ e^{-i(x-y)\cdot\xi/h}p_{I,T}(x,\xi), \] where $\xi\in\mathbb{R}^{n}$ is fixed and considered as a parameter. Now that we are localized around a trajectory, the construction is exactly the same as in \cite{Schreodinger}; let us sum it up briefly. In the rest of this section, $p_{I,T}$ will be denoted by $p$ for the sake of conciseness. 
\subsection{Approximate solution} We look for the solution in positive times of the equation \[ \begin{cases} (i\partial_{t}w-h\Delta w) & =0\ \text{in }\Omega\\ w(t=0)(x) & =e^{-i(x-y)\cdot\xi/h}p(x,\xi)\\ w_{|\partial\Omega} & =0 \end{cases} \] as the Neumann series \[ w=\sum_{J\in\mathcal{I}}(-1)^{|J|}w^{J}, \] where \[ \begin{cases} (i\partial_{t}w^{\emptyset}-h\Delta w^{\emptyset}) & =0\ \text{in }\mathbb{R}^{n}\\ w^{\emptyset}(t=0)(x) & =e^{-i(x-y)\cdot\xi/h}p(x,\xi) \end{cases} \] and, for $J\neq\emptyset$, $J=(j_{1},\cdots,j_{n})$, $J'=(j_{1},\cdots,j_{n-1})$, \begin{equation} \begin{cases} (i\partial_{t}w^{J}-h\Delta w^{J}) & =0\ \text{in }\mathbb{R}^{n}\backslash\Theta_{j_{n}}\\ w^{J}(t=0) & =0\\ w_{|\partial\Theta_{j_{n}}}^{J} & =w_{|\partial\Theta_{j_{n}}}^{J'}. \end{cases}\label{eq:wJ} \end{equation} We will look for the $w^{J}$'s as power series in $h$. For the sake of conciseness, these series will be considered at a formal level in this section, and we will introduce their expression as a finite sum plus a remainder later, in the last section. We look for $w^{\emptyset}$ as \begin{gather*} w^{\emptyset}=\sum_{k\geq0}h^{k}w_{k}^{\emptyset}e^{-i((x-y)\cdot\xi-t\xi^{2})/h},\\ w_{0}^{\emptyset}(t=0)=p(x,\xi),\ w_{k}^{\emptyset}(t=0)=0. \end{gather*} Solving the transport equations gives immediately \begin{align*} w_{0}^{\emptyset} & =p(x-2t\xi,\xi),\\ w_{k}^{\emptyset} & =-i\int_{0}^{t}\Delta w_{k-1}^{\emptyset}(x-2(s-t)\xi,s)ds\quad k\geq1. \end{align*} Now, starting from the phase $\varphi(x)=\frac{(x-y)\cdot\xi}{|\xi|}$, we define the reflected phases as before and we look for $w^{J}$ as \begin{gather*} w^{J}=\sum_{k\geq0}h^{k}w_{k}^{J}e^{-i(\varphi_{J}(x,\xi)|\xi|-t\xi^{2})/h},\\ w_{k}^{J}|_{t\leq0}=0,\ w_{k|\partial\Theta_{j_{n}}}^{J}=w_{k|\partial\Theta_{j_{n}}}^{J'}. 
\end{gather*} For $x\in\mathcal{U}_{J}(\varphi)$, we have \[ \begin{cases} (\partial_{t}+2|\xi|\nabla\varphi_{J}\cdot\nabla+|\xi|\Delta\varphi_{J})w_{0}^{J} & =0\\ w_{0|\Theta_{j_{n}}}^{J} & =w_{0|\Theta_{j_{n}}}^{J'}\\ w_{0}^{J}|_{t\leq0} & =0 \end{cases} \] and \[ \begin{cases} (\partial_{t}+2|\xi|\nabla\varphi_{J}\cdot\nabla+|\xi|\Delta\varphi_{J})w_{k}^{J} & =-i\Delta w_{k-1}^{J}\\ w_{k|\Theta_{j_{n}}}^{J} & =w_{k|\Theta_{j_{n}}}^{J'}\\ w_{k}^{J}|_{t\leq0} & =0. \end{cases} \] Solving the transport equations along the rays by the procedure explained in \cite{Schreodinger}, we get the same expressions for the $w_{k}^{J}$, valid for $x\in\mathcal{U}_{J}(\varphi)$: \begin{prop} \label{prop:solref}We denote by $\hat{X}_{-2t}(x,|\xi|\nabla\varphi_{J})$ the backward spatial component of the flow starting from $(x,|\xi|\nabla\varphi_{J})$, defined in the same way as $X_{-2t}(x,|\xi|\nabla\varphi_{J})$, with the difference that we ignore the first obstacle encountered if it is not $\Theta_{j_{n}}$, and we ignore the obstacles after $|J|$ reflections. Moreover, for $J=(j_{1},\dots,j_{n})\in\mathcal{I}$, we denote by \[ J(x,t,\xi)=\begin{cases} (j_{1},\cdots,j_{k}) & \text{if }\hat{X}_{-2t}(x,|\xi|\nabla\varphi_{J})\text{ has been reflected \ensuremath{n-k} times,}\\ \emptyset & \text{if }\hat{X}_{-2t}(x,|\xi|\nabla\varphi_{J})\text{ has been reflected \ensuremath{n} times}. 
\end{cases} \] Then, the $w_{k}^{J}$'s are given, for $t\geq0$ and $x\in\mathcal{U}_{J}(\varphi)$, by \[ w_{0}^{J}(x,t)=\Lambda\varphi_{J}(x,\xi)p(\hat{X}_{-2t}(x,|\xi|\nabla\varphi_{J}),\xi) \] where \[ \Lambda\varphi_{J}(x,\xi)=\left(\frac{G\varphi_{J}(x)}{G\varphi_{J}(X^{-1}(x,|\xi|\nabla\varphi_{J}))}\right)^{1/2}\times\cdots\times\left(\frac{G\varphi(X^{-|J|-1}(x,|\xi|\nabla\varphi_{J}))}{G\varphi(X^{-|J|}(x,|\xi|\nabla\varphi_{J}))}\right)^{1/2}, \] and, for $k\geq1$ and $x\in\mathcal{U}_{J}(\varphi)$, \[ w_{k}^{J}(x,t)=-i\int_{0}^{t}g_{\varphi_{J}}(x,t-s,\xi)\Delta w_{k-1}^{J(x,t-s,\xi)}(\hat{X}_{-2(t-s)}(x,|\xi|\nabla\varphi_{J}),s)ds \] where \[ g_{\varphi_{J}}(x,t,\xi)=\left(\frac{G\varphi_{J}(x)}{G\varphi_{J}(X^{-1}(x,|\xi|\nabla\varphi_{J}))}\right)^{1/2}\times\cdots\times\left(\frac{G\varphi_{J(x,t,\xi)}(X^{-|J(x,t,\xi)|-1}(x,|\xi|\nabla\varphi_{J}))}{G\varphi_{J(x,t,\xi)}(\hat{X}_{-2t}(x,|\xi|\nabla\varphi_{J}))}\right)^{1/2}. \] \end{prop} By the same proof as in \cite{Schreodinger}, this implies in particular the following three results. The first one concerns the support of the solutions: \begin{lem} \label{lem:supprought}For $x\in\mathcal{U}_{J}(\varphi)$, \begin{equation} w_{k}^{J}(x,t)\neq0\implies(\hat{X}_{-2t}(x,|\xi|\nabla\varphi_{J}),\xi)\in\text{Supp}p.\label{eq:UinfJ} \end{equation} Moreover, \begin{equation} \text{Supp}w_{k}^{J}\subset\left\{ J(x,t,\xi)=\emptyset\right\} .\label{eq:suppJ} \end{equation} \end{lem} This implies that we can extend them by zero outside the domains of definition of the phases: \begin{prop} \label{prop:solref2}For $x\notin\mathcal{U}_{J}(\varphi)$ and $0\leq t\leq T$ we have $w_{k}^{J}(x,t)=0$. 
\end{prop} And that we have $|J|\approx t$ on the supports: \begin{lem} \label{lem:support}There exist $c_{1},c_{2}>0$ such that for every $J\in\mathcal{I}$, the support of $w_{k}^{J}$ is included in $\left\{ c_{1}|J|\leq t\right\} $ and that of $\chi w_{k}^{J}$ is included in $\left\{ c_{1}|J|\leq t\leq c_{2}(|J|+1)\right\} $. \end{lem} Now, let us recall that $p=p_{I,T}$, where $I$ is a given primitive trajectory. We have: \begin{lem} If $J$ is not of the form $rI+l$, then $w_{k}^{J}=0$ for $0\leq t\leq\epsilon|\log h|$. \end{lem} \begin{proof} If $w_{k}^{J}(x,\xi)\neq0$, it follows from \lemref{supprought} that there exists a broken ray joining $(x,|\xi|\nabla\varphi_{J})$ and a point of the support of $p_{I,T}$ in time $t$, following the complete story of reflections $J$. By definition of the trapped set, and because $\text{Supp}p\subset\mathcal{T}_{T}(D_{I,4\delta})$, this broken ray remains in a neighborhood of the trajectory $\gamma$ corresponding to $I$, thus $J$ can only be of the form $rI+l$. \end{proof} Finally, let us notice that \begin{lem} \label{lem:supportchi}In times $0\leq t\leq T$, for $J=rI+l$, $\chi w_{k}^{J}$ is supported in $\mathcal{U}_{I,l}^{\infty}$. \end{lem} \begin{proof} From (\ref{eq:UinfJ}), the support of $w_{k}^{J}$ consists of the support of $p(\cdot,\xi)$, transported along the billiard flow with initial direction $\xi$ along the story of reflections $J$ and then ignoring the obstacles. Because of the non-shadow condition (\ref{eq:IK2}), the part ignoring the obstacles is cut off by $\chi$, thus we obtain the result. \end{proof} \subsection{The $\xi$ derivatives} The following results about the directional derivatives of the phase and of the solution have been proven in \cite{Schreodinger}, where the proofs do not involve the particular two-obstacle geometry. The first one concerns the critical points of the phase and their non-degeneracy: \begin{lem} \label{lem:nondeg}Let $J\in\mathcal{I}$ and $\mathcal{S}_{J}(x,t,\xi):=\varphi_{J}(x,\xi)|\xi|-t\xi^{2}$. 
For all $t>0$, there exists at most one $s_{J}(x,t)$ such that $D_{\xi}\mathcal{S}_{J}(x,t,s_{J}(x,t))=0$. Moreover, for all $t_{0}>0$, there exists $c(t_{0})>0$ such that, for all $t\geq t_{0}$ and all $J\in\mathcal{I}$, \begin{equation} w^{J}(x,t,\xi)\neq0\implies|\det D_{\xi}^{2}\mathcal{S}_{J}(x,t,\xi)|\geq c(t_{0})>0.\label{eq:detpo} \end{equation} \end{lem} The last two allow us to control the directional derivatives of the solutions: \begin{prop} \label{prop:decder}For all multi-indices $\alpha,\beta$ there exists a constant $D_{\alpha,\beta}>0$ such that the following estimate holds on $\mathcal{U}_{I,l}^{\infty}$: \[ |D_{\xi}^{\alpha}D_{x}^{\beta}\nabla\varphi_{J}|\leq D_{\alpha,\beta}^{|J|}. \] \end{prop} \begin{cor} \label{cor:boundsdirect}The following bounds hold on $\mathcal{U}_{I,l}^{\infty}$: \[ |D_{\xi}^{\alpha}w_{k}^{J}|\lesssim C_{\alpha}^{|J|}h^{-(2k+|\alpha|)c\epsilon}. \] \end{cor} \subsection{Decay of the reflected solutions} The principal result which allows us to estimate the decay of the reflected solutions is the convergence of the product of the Gaussian curvatures $\Lambda\varphi_{J}$ obtained in \cite{Ikawa2,IkawaMult} and \cite{plaques}. In this setting, it reads: \begin{prop} \label{prop:convL}Let $0<\lambda_{I}<1$ be the square root of the product of the two eigenvalues less than one of the Poincaré map associated with the periodic trajectory $I$. Then, there exist $0<\alpha<1$ and a $C^{\infty}$ function $a_{I,l}$ defined in $\mathcal{U}_{I,l}^{\infty}$ such that, for all $J=rI+l$, we have \[ \underset{\mathcal{U}_{I,l}^{\infty}}{\sup}|\Lambda\varphi_{J}-\lambda_{I}^{r}a_{I,l}|_{m}\leq C_{m}\lambda_{I}^{r}\alpha^{|J|}. \] \end{prop} In the same way as in \cite{Schreodinger}, it implies in particular: \begin{prop} \label{prop:essbouds}If $J=rI+l$, where $I$ is a primitive trajectory and $l\leq|I|$, then the following bounds hold on $\mathcal{U}_{I,l}^{\infty}$: \[ |w_{k}^{J}|_{m}\leq C_{k}\lambda_{I}^{|J|}h^{-(2k+m)c\epsilon}. 
\] Moreover, on the whole space, $|w_{k}^{J}|_{m}\leq C_{k}h^{-(2k+m)c\epsilon}.$ \end{prop} \section{Proof of the main result} Let $K\geq0$. By the previous section, the function \[ (x,t)\rightarrow\frac{1}{(2\pi h)^{3}}\sum_{J=rI+l}\int\sum_{k=0}^{K}h^{k}w_{k}^{J}(x,t,\xi)e^{-i(\varphi_{J}(x,\xi)|\xi|-t\xi^{2})/h}d\xi \] satisfies the approximate equation \[ \partial_{t}u-ih\Delta u=-ih^{K}\frac{1}{(2\pi h)^{3}}\sum_{J=rI+l}\int\Delta w_{K-1}^{J}(x,t,\xi)e^{-i(\varphi_{J}(x,\xi)|\xi|-t\xi^{2})/h}d\xi \] with data $\delta_{I,T}^{y}$. Because $e^{-i(t-s)h\Delta}$ is an $H^{m}$-isometry and by the Duhamel formula, the difference from the actual solution $e^{-ith\Delta}\delta^{y}$ is bounded in $H^{m}$ norm by \[ C\times|t|\times h^{K-3}\times\sup_{t,\xi}\sum_{J=rI+l}\Vert\Delta w_{K-1}^{J}(\cdot,t,\xi)e^{-i(\varphi_{J}(\cdot,\xi)|\xi|-t\xi^{2})/h}\Vert_{H^{m}}. \] Therefore, \begin{equation} \sum_{I\text{ primitive}}e^{-ith\Delta}\delta_{I}^{y}(x)=S_{K}(x,t)+R_{K}(x,t)\label{eq:sumdelta} \end{equation} with \[ S_{K}(x,t)=\frac{1}{(2\pi h)^{3}}\sum_{J\in\mathcal{I}}\int\sum_{k=0}^{K}h^{k}w_{k}^{J}(x,t,\xi)e^{-i(\varphi_{J}(x,\xi)|\xi|-t\xi^{2})/h}d\xi \] and, for $0\leq t\leq\epsilon|\log h|$ \begin{equation} \Vert R_{K}(\cdot,t)\Vert_{H^{m}}\lesssim|\log h|h^{K-3}\sup_{t,\xi}\sum_{J\in\mathcal{I}}\Vert\Delta w_{K-1}^{J}(\cdot,t,\xi)e^{-i(\varphi_{J}(\cdot,\xi)|\xi|-t\xi^{2})/h}\Vert_{H^{m}},\label{eq:rK1} \end{equation} where $w_{k}^{J}$ is understood to be constructed from $p_{I,T}$ when $J=rI+l$. \subsection*{The reminder} We first deal with the reminder term $R_{K}$. Let us denote \[ W_{K-1}^{J}(x,t)=\Delta w_{K-1}^{J}(\cdot,t,\xi)e^{-i(\varphi_{J}(\cdot,\xi)|\xi|-t\xi^{2})/h} \] Notice that, by construction of the $w_{k}$'s, $w_{k}^{J}$ is supported in a set of diameter $(C+\beta_{0}t)$. 
Therefore, using Proposition \ref{prop:essbouds} to control the derivatives coming from $w_{K-1}$ and the estimate
\[
|\nabla\varphi_{J}|_{m}\leq C_{m}|\nabla\varphi|_{m}
\]
from \cite{Ikawa2} to control the derivatives coming from the phase, we get:
\[
\Vert\partial^{m}W_{K-1}^{J}\Vert_{L^{2}}\lesssim C_{K}(1+\beta_{0}t)^{\frac{1}{2}}\Vert\partial^{m}W_{K-1}^{J}\Vert_{L^{\infty}}\lesssim C_{K}(1+t)^{\frac{1}{2}}h^{-m}\times h^{-(2K+m+2)c\epsilon}
\]
and thus, by (\ref{eq:rK1}) and the Sobolev embedding $H^{2}\hookrightarrow L^{\infty}$, for $0\leq t\leq\epsilon|\log h|$
\begin{equation}
\Vert R_{K}\Vert_{L^{\infty}}\lesssim|\log h|^{\frac{3}{2}}h^{K(1-2c\epsilon)-5-4c\epsilon}|\left\{ J\in\mathcal{I},\text{ s.t. }w_{K-1}^{J}\neq0\right\} |.\label{eq:rK0}
\end{equation}
Note that $w_{K-1}^{J}(t)\neq0$ implies by \lemref{support} that $|J|\leq c_{1}t$, and $|\left\{ J\in\mathcal{I},\text{ s.t. }w_{K-1}^{J}\neq0\right\} |$ is bounded by the number of elements in
\[
\alpha_{\lceil c_{1}t\rceil}
\]
where
\[
\alpha_{k}=\left\{ \text{sequences \ensuremath{s} in \ensuremath{\llbracket1,N\rrbracket} of length \ensuremath{\leq}\ensuremath{k} s.t. }s_{i+1}\neq s_{i}\right\}.
\]
But:
\begin{lem}
The number of elements in $\alpha_{k}$ admits the bound
\[
|\alpha_{k}|\leq C_{N}N^{k}.
\]
\end{lem}
\begin{proof}
Let us denote
\[
\beta_{k}=\left\{ \text{sequences \ensuremath{s} in \ensuremath{\llbracket1,N\rrbracket} of length \ensuremath{k} s.t. }s_{i+1}\neq s_{i}\right\}.
\]
We have
\[
|\beta_{1}|=N
\]
and
\[
|\beta_{k+1}|=(N-1)|\beta_{k}|.
\]
Therefore
\[
|\beta_{k}|=N(N-1)^{k-1},\ |\alpha_{k}|=\sum_{i=1}^{k}|\beta_{i}|+1=N\frac{(N-1)^{k}-1}{N-2}+1,
\]
and the bound holds.
\end{proof}
Thus
\begin{equation}
|\left\{ J\in\mathcal{I},\text{ s.t. }w_{K-1}^{J}\neq0\right\} |\lesssim N^{t}\label{eq:wkJnonnul}
\end{equation}
and therefore, according to (\ref{eq:rK0}), for $0\leq t\leq\epsilon|\log h|$
\begin{align*}
\Vert R_{K}\Vert_{L^{\infty}} & \lesssim C_{K}|\log h|^{\frac{3}{2}}h^{K(1-2c\epsilon)-5-4c\epsilon}h^{-\epsilon\log N}\\
 & \lesssim C_{K}h^{K(1-2c\epsilon)-6-4c\epsilon-\epsilon\log N}.
\end{align*}
We take $\epsilon>0$ small enough so that $2c\epsilon\leq\frac{1}{2}$ and $\epsilon\log N\leq1$ in order to get
\[
\Vert R_{K}\Vert_{L^{\infty}}\leq C_{K}h^{\frac{K}{2}-8}.
\]
Let us fix $K=15$. Then, $\Vert R_{K}\Vert_{L^{\infty}}\leq C_{K}h^{-\frac{1}{2}}$. Therefore, as $t\leq\epsilon|\log h|$ implies $h\leq e^{-\frac{t}{\epsilon}}$, we get
\begin{equation}
\Vert R_{K}\Vert_{L^{\infty}}\leq C_{K}h^{-\frac{3}{2}}e^{-\frac{t}{\epsilon}}\label{eq:RK}
\end{equation}
for $0\leq t\leq\epsilon|\log h|$.
\subsection*{Times $t\protect\geq t_{0}>0$}
Let us now deal with the approximate solution $S_{K}$, $K$ being fixed, and with $x$ in $\text{Supp}\chi$. Let $t_{0}>0$ be a constant to be chosen later. For $t\geq t_{0}$, by \lemref{nondeg} we can perform a stationary phase argument on each term of the $J$ sum, up to order $h$. We obtain, for $t\geq t_{0}$
\begin{multline}
S_{K}(x,t)=\frac{1}{(2\pi h)^{3/2}}\sum_{J\in\mathcal{I}}e^{-i(\varphi_{J}(x,s_{J}(t,x))|s_{J}(t,x)|-ts_{J}(t,x)^{2})/h}\left(w_{0}^{J}(t,x,s_{J}(t,x))+h\tilde{w}_{1}^{J}(t,x)\right)\\
+\frac{1}{h^{3/2}}\sum_{J\in\mathcal{I}}R_{\text{st.ph.}}^{J}(x,t)+\frac{1}{(2\pi h)^{3}}\sum_{J\in\mathcal{I}}\int\sum_{k=2}^{K}h^{k}w_{k}^{J}(x,t,\xi)e^{-i(\varphi_{J}(x,\xi)|\xi|-t\xi^{2})/h}d\xi\label{eq:sk}
\end{multline}
where $s_{J}(t,x)$ is the unique critical point of the phase, when it exists (if it does not exist, the corresponding term is $O(h^{\infty})$ and by (\ref{eq:wkJnonnul}) it does not contribute).
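Let us make the definition of $s_{J}(t,x)$ explicit for the reader's convenience. Writing $\mathcal{S}_{J}(x,t,\xi)=\varphi_{J}(x,\xi)|\xi|-t\xi^{2}$ for the phase, in the notation of \lemref{nondeg}, $s_{J}(t,x)$ is characterized by the critical point equation
\[
D_{\xi}\mathcal{S}_{J}(x,t,\xi)\big|_{\xi=s_{J}(t,x)}=0,
\]
and it is the uniform non-degeneracy bound (\ref{eq:detpo}), $|\det D_{\xi}^{2}\mathcal{S}_{J}(x,t,\xi)|\geq c(t_{0})$, that legitimates the stationary phase expansion above, uniformly with respect to $J$.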
The term $\tilde{w}_{1}^{J}$ is a linear combination of
\begin{gather*}
D_{\xi}^{2}w_{0}^{J}(t,x,s_{J}(t,x)),\ w_{1}^{J}(t,x,s_{J}(t,x)),
\end{gather*}
and $R_{\text{st.ph.}}^{J}$ is the remainder involved in the stationary phase, which satisfies (see for example \cite{semibook}, Theorem 3.15)
\begin{equation}
|R_{\text{st.ph.}}^{J}(x,t)|\leq h^{2}\sum_{|\alpha|\leq7}\sup|D_{\xi}^{\alpha}w_{k}^{J}(x,\cdot,t)|.\label{eq:remstph}
\end{equation}
We recall that by \lemref{supportchi}, for $0\leq t\leq\epsilon|\log h|$, $\chi w_{k}^{J}$ is supported in $\mathcal{U}_{I,l}^{\infty}$. Therefore, for $0\leq t\leq\epsilon|\log h|$, all $0\leq k\leq K-1$ and $x\in\text{Supp}\chi$, using the estimate of \propref{essbouds} together with the fact that $w_{k}^{J}(x,\xi,\cdot)$ is supported in $\{c_{1}|J|\leq t\leq c_{2}(|J|+1)\}$ by \lemref{support}, we have
\[
\sum_{J\in\mathcal{I}}|w_{k}^{J}|\leq C_{k}h^{-2kc\epsilon}\sum_{\substack{J=rI+s\ |\ w_{k}^{J}\neq0\\
I\text{ primitive, }|s|\leq|I|-1
}
}\lambda_{I}^{|J|}.
\]
Thus
\[
\sum_{J\in\mathcal{I}}|w_{k}^{J}|\leq C_{k}h^{-2kc\epsilon}\sum_{I\text{ primitive}}\sum_{\substack{r\geq0\\
0\leq s\leq|I|-1
}
}\lambda_{I}^{\rho_{k}(I)+r}\lambda_{I}^{s},
\]
where we denoted
\[
\rho_{k}(I)=\inf\left\{ r\geq1\text{ s.t. }\exists s,\ w_{k}^{rI+s}\neq0\right\} ,
\]
and we get
\begin{equation}
\sum_{J\in\mathcal{I}}|w_{k}^{J}|\leq C_{k}h^{-2kc\epsilon}\sum_{\substack{I\text{ primitive}\\
\rho_{k}(I)\neq\infty
}
}\frac{1}{1-\lambda_{I}}\lambda_{I}^{\rho_{k}(I)}|I|.\label{eq:wkmaj1}
\end{equation}
Moreover, we have
\begin{equation}
\rho_{k}(I)\gtrsim\frac{t}{|I|}\label{eq:rhokbound}
\end{equation}
and, as remarked in \cite{MR1254820}, if $\gamma$ is the trajectory associated with $I$,
\begin{equation}
\frac{d_{\gamma}}{\text{diam}\mathcal{C}}\leq\text{card}\gamma=|I|\leq\frac{d_{\gamma}}{d_{\text{min}}}\label{eq:cardgamma}
\end{equation}
where $\mathcal{C}$ is the convex hull of $\cup\Theta_{i}$.
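Let us also record that the passage to (\ref{eq:wkmaj1}) only uses, for each primitive $I$ with $0<\lambda_{I}<1$, the elementary bounds
\[
\sum_{r\geq0}\lambda_{I}^{r}=\frac{1}{1-\lambda_{I}},\qquad\sum_{s=0}^{|I|-1}\lambda_{I}^{s}\leq|I|.
\]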
Therefore, combining (\ref{eq:wkmaj1}) with (\ref{eq:rhokbound}) and (\ref{eq:cardgamma}), we obtain
\begin{equation}
\sum_{J\in\mathcal{I}}|w_{k}^{J}|\lesssim C_{k}h^{-2kc\epsilon}\sum_{\substack{\mathcal{\gamma}\text{ primitive}}
}d_{\gamma}\lambda_{\gamma}^{D_{k}\frac{t}{d_{\gamma}}}.\label{eq:wkmaj2}
\end{equation}
But, by the Ikawa condition (\ref{eq:IK1}), there exists $\alpha>0$ such that
\[
\sum_{\gamma\text{ primitive}}d_{\gamma}\lambda_{\gamma}e^{\alpha d_{\gamma}}<\infty.
\]
Let us denote
\[
C_{\gamma}=\lambda_{\gamma}e^{\alpha d_{\gamma}}.
\]
Notice that, because $d_{\gamma}$ is bounded from below by $d_{\text{min}}$ uniformly with respect to $\gamma$, we have a fortiori
\[
\sum C_{\gamma}<\infty.
\]
Therefore, all $C_{\gamma}$ but a finite number are less than one. Reducing $\alpha$ if necessary, we can thus assume that
\[
0\leq C_{\gamma}\leq1,\ \forall\gamma.
\]
Hence, for $t\geq\frac{d_{\text{min}}}{D_{k}}$ we have
\[
C_{\gamma}^{D_{k}\frac{t}{d_{\gamma}}}\leq C_{\gamma},
\]
thus, by (\ref{eq:wkmaj2}), for $t\geq\frac{d_{\text{min}}}{D_{k}}$
\begin{multline*}
\sum_{J\in\mathcal{I}}|w_{k}^{J}|\lesssim C_{k}h^{-2kc\epsilon}\sum_{\substack{\mathcal{\gamma}\text{ primitive}}
}d_{\gamma}\left(C_{\gamma}e^{-\alpha d_{\gamma}}\right)^{D_{k}\frac{t}{d_{\gamma}}}\\
\lesssim C_{k}h^{-2kc\epsilon}\sum_{\gamma\text{ primitive}}d_{\gamma}C_{\gamma}^{D_{k}\frac{t}{d_{\gamma}}}e^{-\alpha D_{k}t}\leq C_{k}h^{-2kc\epsilon}e^{-\alpha D_{k}t}\sum_{\gamma\text{ primitive}}d_{\gamma}C_{\gamma},
\end{multline*}
and hence, because of (\ref{eq:IK1}),
\begin{equation}
\sum_{J\in\mathcal{I}}|w_{k}^{J}|\leq C_{k}h^{-2kc\epsilon}e^{-\mu_{k}t}\text{ for }\frac{d_{\min}}{D_{k}}\leq t\leq\epsilon|\log h|\label{eq:wxp1}
\end{equation}
for some $\mu_{k}>0$.
Now, remark that for $t\leq\frac{d_{\min}}{D_{k}}$, by (\ref{eq:wkmaj2}) we have
\[
\sum_{J\in\mathcal{I}}|w_{k}^{J}|\lesssim C_{k}h^{-2kc\epsilon}\sum_{\substack{\mathcal{\gamma}\text{ primitive}}
}d_{\gamma}\lambda_{\gamma}
\]
but, because the $d_{\gamma}$ are bounded from below, (\ref{eq:IK1}) implies a fortiori
\[
\sum_{\substack{\mathcal{\gamma}\text{ primitive}}
}d_{\gamma}\lambda_{\gamma}<\infty
\]
and thus
\begin{equation}
\sum_{J\in\mathcal{I}}|w_{k}^{J}|\lesssim C_{k}h^{-2kc\epsilon}\text{ for }t_{0}\leq t\leq\frac{d_{\min}}{D_{k}}.\label{eq:wkp2}
\end{equation}
Combining (\ref{eq:wxp1}) and (\ref{eq:wkp2}) we get
\[
\sum_{J\in\mathcal{I}}|w_{k}^{J}|\leq C'_{k}h^{-2kc\epsilon}e^{-\mu_{k}t}\text{ for }t_{0}\leq t\leq\epsilon|\log h|.
\]
Let us take $\epsilon>0$ small enough so that $2Kc\epsilon\leq\frac{1}{2}$. We get, for $t_{0}\leq t\leq\epsilon|\log h|$
\begin{align}
\sum_{J\in\mathcal{I}}|w_{k}^{J}| & \leq C_{k}h^{-\frac{1}{2}}e^{-\mu t},\ 1\leq k\leq K-1,\label{eq:sk1}\\
\sum_{J\in\mathcal{I}}|w_{0}^{J}| & \lesssim e^{-\mu t},\label{eq:sk2}
\end{align}
with
\[
\mu=\min_{0\leq k\leq K-1}\mu_{k}>0.
\]
Moreover, using (\ref{eq:remstph}) together with (\ref{eq:wkJnonnul}), \lemref{support} and \corref{boundsdirect}, we obtain, for $t\leq\epsilon|\log h|$
\begin{multline*}
\sum_{J\in\mathcal{I}}|R_{\text{st.ph.}}^{J}(x,t)|\leq h^{2}\sum_{J\in\mathcal{I}}\sum_{|\alpha|\leq7}\sup|D_{\xi}^{\alpha}w_{k}^{J}(x,\cdot,t)|\\
\leq h^{2-(2K+7)c\epsilon}|\left\{ J\in\mathcal{I},\text{ s.t. }w_{K-1}^{J}\neq0\right\} |C^{\frac{t}{c_{1}}}\lesssim h^{2-(2K+7)c\epsilon}N^{t}C^{\frac{t}{c_{1}}}\\
\leq h^{2-(2K+7)c\epsilon}h^{-\eta\epsilon}
\end{multline*}
where $\eta>0$ depends only on $\alpha_{0},\beta_{0}$, and the geometry of the obstacles. Therefore, choosing $\epsilon>0$ small enough,
\begin{equation}
\sum_{J\in\mathcal{I}}|R_{\text{st.ph.}}^{J}(x,t)|\lesssim h\leq e^{-t/\epsilon}\label{eq:remphst}
\end{equation}
for $t\leq\epsilon|\log h|$.
In the same way we get, taking $\epsilon>0$ small enough and $t\leq\epsilon|\log h|$,
\[
\sum_{J\in\mathcal{I}}|D_{\xi}^{2}w_{0}^{J}|\lesssim N^{t}C^{\frac{t}{c_{1}}}\lesssim h^{-1/4}
\]
and therefore
\begin{equation}
\sum_{J\in\mathcal{I}}|D_{\xi}^{2}w_{0}^{J}|\leq h^{-\frac{1}{2}}e^{-t/4\epsilon}.\label{eq:sk3}
\end{equation}
So, combining (\ref{eq:sk1}), (\ref{eq:sk2}), (\ref{eq:remphst}) and (\ref{eq:sk3}) with (\ref{eq:sk}), we obtain, for some $\nu>0$,
\begin{equation}
|\chi S_{K}(x,t)|\lesssim\frac{e^{-\nu t}}{h^{3/2}}\ \text{ for }t_{0}\leq t\leq T.\label{eq:SKgrand}
\end{equation}
\subsection*{Conclusion}
Combining the above estimate (\ref{eq:SKgrand}) with the control of the remainder term (\ref{eq:RK}) and taking $t=T$ gives (\ref{eq:butult2}) and therefore the dispersive estimate (\ref{eq:finbutult}). By the reduction work of the third section, summarized in \lemref{redtrap-ult}, Theorem \ref{th} is therefore proved for the Schrödinger equation.
\section{The wave equation}
In the case of the wave equation, the counterpart of the smoothing estimate without loss outside the trapped set, namely the following $L^{2}$-decay of the local energy
\begin{equation}
\Vert(Au,A\partial_{t}u)\Vert_{L^{2}(\mathbb{R},\dot{H}^{\gamma}\times\dot{H}^{\gamma-1})}\lesssim\Vert u_{0}\Vert_{\dot{H}^{\gamma}}+\Vert u_{1}\Vert_{\dot{H}^{\gamma-1}},\label{eq:dec_waves}
\end{equation}
where $A$ has micro-support disjoint from $\mathcal{K}$, is obtained using the same commutator argument, which in the case of the wave equation reads
\[
0=\int\int_{\mathbb{R}\times\Omega}\langle u,[\oblong,P]u\rangle+\int\int_{\mathbb{R}\times\partial\Omega}\langle Pu,\partial_{n}u\rangle,
\]
where $P$ is any pseudo-differential operator. Notice that the symbol of $P$ at the boundary, as an operator acting on waves, has been derived in $\{\tau^{2}-\eta^{2}>0\}$ in \cite{MorRS}. Our method applies in exactly the same way as for the Schrödinger equation.
Once (\ref{eq:dec_waves}) is obtained, it follows as in \cite{Waves} that we can reduce ourselves to proving the Strichartz estimates near the trapped set in logarithmic times, namely
\[
\Vert\text{Op}_{h}(\phi)u\Vert_{L^{q}(\epsilon|\log h|,L^{r}(\Omega))}\lesssim\Vert u_{0}\Vert_{\dot{H}^{s}}+\Vert u_{1}\Vert_{\dot{H}^{s-1}}
\]
where $u_{0,1}=\psi(-h^{2}\Delta)u_{0,1}$ and $\phi$ is supported in a small neighborhood of $\mathcal{K}$. In order to reduce ourselves to points of the phase space that remain near a periodic trajectory for logarithmic times, the exact same cutting as in the third section holds, with the difference that the flow is followed at constant speed one. Then, the construction of an approximate solution is the same as in \cite{Waves}, with the adaptations to the $N$-convex framework presented in the fourth section. In particular, the results of non-degeneracy of the phase and stationary points of \cite{Waves} hold, as their proof does not rely on the particular two-convex geometry. Thus, we can perform the same stationary phase argument as in \cite{Waves}, the difference with the Schrödinger equation being that the phase is now stationary along lines, due to the constant speed of propagation, and we obtain the correct scale in $h$. Now, the only difference with the conclusion section of \cite{Waves} is that we cannot deal with the sum $\sum_{J\in\mathcal{I}}$ as in the two-convex case. But we can do it in the exact same way as presented in the fifth section, using the strong hyperbolic setting assumption (\ref{eq:IK1}), in order to deduce sufficient time decay. Thus the appropriate dispersive estimate for the waves is obtained and the theorem follows.
\subsection*{Acknowledgments}
The author is grateful to Jared Wunsch for pointing out the paper of Datchev and Vasy \cite{DatchevVasy}, which made it possible to obtain the more general smoothing estimate of \propref{dv}, and to Fabrice Planchon and Nicolas Burq for many discussions about the problem.
\bibliographystyle{amsalpha}
\bibliography{refs,/Users/David/Desktop/Projets/arXiv_waves/megaref}
\end{document}
TITLE: Pugh's exercise on Dedekind cuts addition QUESTION [0 upvotes]: I am trying to solve the following exercise: Let $x=A|B$ and $x'=A'|B'$ be cuts in $\mathbb{Q}$. Show that although $B+B'$ is disjoint from $A+A'$, it may happen in degenerate cases that $\mathbb{Q}$ is not the union of $A+A'$ and $B+B'$. The first assertion is straightforward, but I have not been able to handle the second one. It does not work for rational cuts or $\sqrt{2}+\sqrt{2}$. REPLY [2 votes]: HINT: Try $\sqrt2$ and $-\sqrt2$. More generally, try any two irrationals whose sum is rational.
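To spell the hint out with $x=\sqrt{2}$ and $x'=-\sqrt{2}$: take $A=\{q\in\mathbb{Q} : q<\sqrt{2}\}$ and $A'=\{q\in\mathbb{Q} : q<-\sqrt{2}\}$, with $B,B'$ their complements. Then every $a+a'$ with $a\in A$, $a'\in A'$ is negative and every $b+b'$ with $b\in B$, $b'\in B'$ is positive, so $$A+A'\subseteq\{q : q<0\},\qquad B+B'\subseteq\{q : q>0\},$$ while $0$ lies in neither set: $0=a+a'$ with $a<\sqrt{2}$ would force $a'=-a>-\sqrt{2}$, so $a'\notin A'$, and symmetrically for $B+B'$. Hence $\mathbb{Q}\neq(A+A')\cup(B+B')$.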
TITLE: Let $E \subset M$ and $x \in M$ then $x$ is a limit point of $E$ iff every $B_r(x)$ contains at least one point of $E$ QUESTION [1 upvotes]: Let $E \subset \left<M,\rho\right>$ and $x \in M$ then $x$ is a limit point of $E$ iff every $B_r(x)$ contains at least one point of $E$ I am reading Methods of Real Analysis by Goldberg and I tried to prove the above theorem in my own words before looking at the proof in the book. My proof for the converse came out slightly different and I was wondering if it is correct. Book's proof for "$\impliedby$" Let $x \in M$ and suppose every $B_r(x)$ contains a point of $E$. Then for $n \in \mathbb{N}$, the open ball $B_{\frac{1}{n}}(x)$ contains a point $x_n \in E$. The sequence $\{x_n\}_{n=1}^{\infty}$ converges to $x$ since $\rho(x, x_n) \lt \frac{1}{n}$, and hence $x$ is a limit point of $E$. My proof for "$\impliedby$" Let $\epsilon \gt 0$ be given. Let $x \in M$ and every $B_r(x)$ contain a point of $E$. Then for $r=\epsilon$, $B_\epsilon(x)$ contains a point $y$ of $E$. Define a sequence $\{y\}_{n=1}^{\infty} = \{y,y,...\}$, then $\forall n \in \mathbb{N}, \rho(y_n,x) \lt \epsilon$. So $x$ is a limit point of $E$. REPLY [1 votes]: Your proof for the converse part is not correct. Let $x$ be a limit point. Then there exists a sequence $(x_n)$ in $E$ converging to $x$. For any $\epsilon >0$ the ball $B(x,\epsilon)$ contains $x_n$ for $n$ sufficiently large , hence it contains at least one point of $E$.
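A remark on where the attempted proof breaks down: fixing $\epsilon$ first only produces one point $y\in B_\epsilon(x)\cap E$, and the constant sequence $\{y,y,\dots\}$ converges to $y$, not to $x$. For a single fixed $\epsilon$, $$\rho(y_n,x)<\epsilon \text{ for all } n \quad\not\Rightarrow\quad \rho(y_n,x)\to 0,$$ and the latter is what convergence to $x$ requires. This is why the book's proof takes $r=\frac{1}{n}$ shrinking with $n$.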
TITLE: Paradigm shifts from $2\to 3$ QUESTION [3 upvotes]: Recently, I've been thinking about a common theme that I've seen all over mathematics. One often finds that when the number of dimensions/degrees of freedom in a given scenario/problem changes from $2$ to $3$, some fundamental shifts in the solution or resulting behavior occur. I'll list five examples of what I'm talking about, three of which have to do with differential equations. (What can I say? Read my bio.) Before I do so, my questions are: What are some other examples of this theme? Is there a universal reason why this happens? Does it have to do with $3$ being the first odd prime? Or perhaps something else? 1: Poincaré-Bendixson Theorem For those that don't know, the Poincaré-Bendixson theorem is a deep result in ODEs/dynamical systems. Consider the autonomous ODE $$\dot{\mathrm{x}}=\mathrm{u}(\mathrm{x})\tag{1}$$ where $\mathrm{x}:\mathbb{R}\to\mathbb{R}^n~;~\mathrm{x}:t\mapsto \mathrm{x}(t)$, and $\mathrm{u}:\mathbb{R}^n\to\mathbb{R}^n$. When $n=1$ the behavior of the solution is typically very easy to analyze heuristically, and in particular, it is rather obvious that only monotonic solutions exist. When $n=2$, things obviously get a lot more complicated but there is in fact the powerful P-B theorem: Let $\Omega\subset \mathbb{R}^2$ be closed and bounded. If $\mathrm{u}$ is $C^1$ in $\Omega$, $\Omega$ contains no fixed points, and $\exists \mathrm{x}_0\in\Omega$ such that the solution of the IVP $$\dot {\mathrm{x}}=\mathrm{u}(\mathrm{x})~~;~~\mathrm{x}(0)=\mathrm{x}_0$$ is entirely contained in $\Omega$, then there is at least one closed orbit in $\Omega$. Quite a remarkable theorem if you ask me. The consequence of this theorem is that there is no chaotic behavior in two dimensions.
In the plane, any solution of $(1)$ is either unbounded, bounded and approaching a periodic limit cycle, or periodic. So solutions that are bounded, but do not approach a stable limit cycle, i.e. strange attractors, are impossible in the plane. However, when we jump from dimension two to dimension three, and any subsequent dimension, no similar result exists. Solutions of autonomous ODEs in $n>2$ dimensions can be as "strange" as you like. 2: Fermat's Last Theorem Consider the simple equation $$a^n+b^n=c^n\tag{2}$$ where $a,b,c\in\mathbb{Z}\setminus \{0\}$ and $n\in\mathbb{N}$. The question is, given $n$, how many solutions $(a,b,c)$ exist to $\boldsymbol{(2)}$? When $n=1$ it is obvious that there are infinitely many solutions - the sum of any two integers is an integer. When $n=2$, proving that infinitely many solutions exist is still rather easy, and was known to the Greeks thousands of years ago. We can simply let $r,k\in\mathbb{N}$ with $k>r$ and observe that $$(k^2-r^2)^2+(2rk)^2=(k^2+r^2)^2$$ Since there are infinitely many pairs of positive integers $(k,r)$ with $k>r$ there are infinitely many solutions. However, as I am sure you are all aware, the general answer was not known until 1995, when Andrew Wiles published his complete and peer-reviewed proof of the problem, $358$(!) years after the problem's conception by Fermat. His result was that for $n>2$, no solutions to $\boldsymbol{(2)}$ exist. 3: The Three Body Problem Take a system of $n$ particles in $\mathbb{R}^3$ with positions $\mathrm{r}_1,\dots ,\mathrm{r}_n$ and masses $m_1,\dots, m_n$ and consider the coupled vector IVP $$\ddot{\mathrm{r}}_{i}=\sum _{j\in \{1,\dotsc ,n\} \setminus \{i\}}\frac{-Gm_{i} m_{j}}{\| \mathrm{r}_{i} -\mathrm{r}_{j} \| ^{3}}(\mathrm{r}_{i} -\mathrm{r}_{j})$$ $$\mathrm{r}_i(0)=\mathrm{r}_{i,0}~~,~~\dot{\mathrm{r}}_i(0)=\dot{\mathrm{r}}_{i,0}$$ where $i\in\{1,\dots ,n\}$. When $n=1$ we just have a single stationary body.
When $n=2$ things get a lot more interesting, but still the equations are easy to analyze and their behavior is easy to predict with numerical simulation; this is why we are able to predict solar eclipses years in advance. In fact, in the limiting case $m_2 \gg m_1$, some very precise equations for the motion of the bodies have been known for hundreds of years, namely Kepler's laws. However, when $n\geq 3$ the system becomes chaotic, with the particles exhibiting no obvious or predictable behavior. Is this because, unlike two points, one cannot in general find a line that goes through three arbitrary points? This reminds me a lot of example 1. 4: Commutative Division Algebras over $\mathbb{R}$ (Frobenius's Theorem) My shortest entry on this list, due to my extreme lack of knowledge about abstract algebra. There is (trivially) a one-dimensional commutative division algebra over $\mathbb{R}$, namely $\mathbb{R}$ itself. There is a two-dimensional commutative division algebra over $\mathbb{R}$, namely the complex numbers $\mathbb{C}$. But there is in fact no commutative division algebra over $\mathbb{R}$ when $n>2$. Once again, we see that changing the dimension from $2$ to $3$ completely changes the behavior. 5: The fundamental solution of Laplace's equation We seek to solve the equation $$(\boldsymbol{\triangle}u)(\mathrm{x})=\delta(\mathrm{x})$$ Here $u:\mathbb{R}^n\to\mathbb{R}$, $\mathrm{x}\in\mathbb{R}^n$, and $\delta$ is Dirac's delta distribution.
It can be shown that, letting $V_n=\frac{\pi^{n/2}}{\Gamma(1+n/2)}$ be the volume (where, by volume I really mean the $n$ dimensional measure) of the unit $n$ ball, the solution is $$\Phi_n:\mathbb{R}^n\setminus \{0\}\to\mathbb{R}$$ $$\Phi _{n}(\mathrm{x}) =\begin{cases} \frac{1}{2} |\mathrm{x} | & n=1\\ \frac{1}{2\pi }\log |\mathrm{x} | & n=2\\ \frac{-1}{n( n-2) V_{n}} \ \frac{1}{|\mathrm{x} |^{n-2}} & n\geq 3 \end{cases}$$ This is actually more interesting - going from $n=1,2,3$ we start with a power law in $|\mathrm{x}|$, then a logarithm, and then again a power law. However, once again we see a stark change in behavior when $n$ goes from $2\to 3$. If you made it this far, thanks for reading. Consider leaving an answer giving other examples or perhaps a hand-wavy explanation. REPLY [1 votes]: Regular polytopes Infinitely many in $2d$ but only finitely many in higher dimensions. https://en.m.wikipedia.org/wiki/Regular_polytope Orbits Stable closed orbits exist in the $3d$ two body problem with the usual inverse square central force. In the $4d$ analogy with an inverse cube law, only circular orbits are possible and they are unstable. Similarly for even higher dimensions.
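As a side note, the $n=2$ parametrization quoted in the question is easy to check by brute force; here is a quick sanity script (illustration only, not part of the argument):

```python
# Verify the identity (k^2 - r^2)^2 + (2rk)^2 = (k^2 + r^2)^2
# over a small range of integers k > r >= 1, and collect the
# distinct Pythagorean triples it produces.
triples = set()
for k in range(2, 30):
    for r in range(1, k):
        a, b, c = k * k - r * r, 2 * r * k, k * k + r * r
        assert a * a + b * b == c * c  # the algebraic identity
        triples.add((min(a, b), max(a, b), c))

# Familiar triples show up, and the count keeps growing with the
# range, consistent with infinitely many solutions for n = 2.
assert (3, 4, 5) in triples and (5, 12, 13) in triples
```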
\begin{document}
\title[The maximum of the zeta function on the 1-line]{A note on the maximum of the Riemann zeta function on the 1-line}
\author{Winston Heap}
\address{Department of Mathematics, University College London, 25 Gordon Street, London WC1H.}
\email{winstonheap@gmail.com}
\thanks{Research supported by European Research Council grant no. 670239.}
\maketitle
\begin{abstract}
We investigate the relationship between the maximum of the zeta function on the 1-line and the maximal order of $S(t)$, the error term in the number of zeros up to height $t$. We show that the conjectured upper bounds on $S(t)$ along with the Riemann hypothesis imply a conjecture of Littlewood that $\max_{t\in [1,T]}|\zeta(1+it)|\sim e^\gamma\log\log T$. The relationship in the region $1/2<\sigma<1$ is also investigated.
\end{abstract}
\section{Introduction}
The behaviour of large values of the Riemann zeta function on the 1-line was first investigated by Littlewood \cite{L}. Over the years, his lower bound has been improved several times; the current best \cite{AMM} establishes the existence of arbitrarily large values of $t$ for which
\[|\zeta(1+it)|\geqs e^\gamma (\log\log t+\log\log\log t)+O(1).\]
In the other direction, assuming the Riemann hypothesis he proved that for large $t$
\ben \label{euler prod}
\zeta(1+it)\sim\prod_{p\leqs \log^2 t}\bigg(1-\frac{1}{p^{1+it}}\bigg)^{-1}
\een
from which it follows by Mertens' theorem that
\ben \label{littlewood}
|\zeta(1+it)|\leqs 2e^\gamma (1+o(1))\log\log t.
\een
It is believed that the length of the Euler product can be reduced to $\log t$ and as a consequence one gets the following conjecture\footnote{For a more precise version, see the paper of Granville and Soundararajan \cite{GS}.}.
\begin{conj}
\label{max of zeta} We have
\[
\max_{t\in [1,T]}|\zeta(1+it)|\sim e^\gamma\log\log T.
\] \end{conj} Littlewood \cite{L2} later refined the upper bound \eqref{littlewood} by replacing the constant $2e^\gamma$ by $2\beta(1)e^\gamma$ where $\beta(1)=\lim_{\sigma\to1^-}\beta(\sigma)$ and for $1/2<\sigma<1$, $\beta(\sigma)=v(\sigma)/(2-2\sigma)$ where $v(\sigma)$ is defined as the minimum exponent for which $\log \zeta(s)\ll (\log t)^{v(\sigma)}$. We will prove a stronger relation where the maximum on the 1-line is related to the behaviour on the 1/2-line. Another object of interest in the theory of the Riemann zeta function is the remainder $S(t)$ in the formula for the number of zeros of height $t$ in the critical strip: \[ N(t)=\frac{t}{2\pi}\log \Big(\frac{t}{2\pi e}\Big)+\frac{7}{8}+S(t)+O(1/t). \] Here, the classical bound is $S(t)\ll \log t$. Under the assumption of the Riemann hypothesis, Selberg showed that $S(t)\ll\log t/\log\log t$ which remains the current best. In terms of lower bounds, the most recent improvements are due to Bondarenko and Seip \cite{BS} who showed that conditionally there exist arbitrarily large values of $t$ for which $|S(t)|\gg\sqrt{\log t \log\log\log t/\log\log t}$. It is generally believed that the lower bound is closer to the true maximal order of growth. Accordingly we define \[ \alpha=\limsup_{t\to\infty}\frac{\log |S(t)|}{\log\log t} \] and note that, conditionally, $1/2\leqs \alpha\leqs 1$. \begin{conj} \label{max of s(t)} We have \[\alpha=1/2.\] \end{conj} We remark that Farmer, Gonek and Hughes \cite{FGH} have made the more precise conjecture $\limsup_{t\to\infty} S(t)/\sqrt{\log t\log\log t}=1/\pi\sqrt{2}$. The maximum of $S(t)$ is closely related to the maximum of the zeta function. In this note we attempt to clarify this relation on the 1-line. \begin{thm} \label{main thm} Assume the Riemann hypothesis and let \ben \label{L} L(t)=\max_{|u-t|\leqs C\log\log t}|S(u)| \een for $C>1/\pi$. 
Then for $X$ satisfying $\max(L(t)^2, \log t)=o(X)$ and large $t$ we have \[ \zeta(1+it)=(1+o(1))\prod_{p\leqs X}\bigg(1-\frac{1}{p^{1+it}}\bigg)^{-1}. \] In particular, \[ |\zeta(1+it)|\leqs e^\gamma (1+o(1))\log(L(t)^2+\log t). \] and hence Conjecture \ref{max of s(t)} together with the Riemann hypothesis implies Conjecture \ref{max of zeta}. \end{thm} A common approach to proving conditional upper bounds such as \eqref{littlewood} is via the explicit formula. On assuming RH one can trivially estimate the sum over zeros which leaves only a sum over primes. We aim to be more precise in this step and simply write the sum over zeros as a Stieltjes integral which allows us to exploit some cancellation from oscillating terms. This is essentially the content of the following proposition where the sum over zeros has been replaced by an integral involving $S(t)$. \begin{prop} \label{main prop} Assume the Riemann hypothesis. Then, uniformly for $1/2+\delta\leqs \sigma \leqs 9/8$ with fixed $\delta>0$, and $1\leqs X\leqs e^{\sqrt{t}}$, we have \ben \label{explicit form} -\frac{\zeta^\prime(s)}{\zeta(s)}=\sum_{n\geqs 1}\frac{\Lambda(n)}{n^s}e^{-n/X}+J(t,X)+O(X^{-1}\log t)+O(X^{1/2-\sigma}) \een where \[ J(t,X)=iX^{1/2-\sigma}\int_{-t/2}^{t/2} X^{iy}\Gamma(\tfrac{1}{2}-\sigma+iy)\Big(\log X+\frac{\Gamma^\prime}{\Gamma}(\tfrac{1}{2}-\sigma+iy) \Big)S(t+y)dy. \] \end{prop} We will deduce Theorem \ref{main thm} from this proposition in the next section. One can also use this formula to get upper bounds when $1/2< \sigma<1$. In this region we have the conditional upper bound \ben \label{zeta RH bound} \zeta(\sigma+it)\ll \exp\bigg(A\frac{(\log t)^{2-2\sigma}}{\log\log t}\bigg), \een see \cite{T}, and the unconditional lower bound \[ \zeta(\sigma+it)=\Omega\bigg( \exp\bigg(B\frac{(\log t)^{1-\sigma}}{(\log\log t)^\sigma}\bigg)\bigg), \] originally due to Montgomery \cite{M}. 
The value of the constant $B$ has been improved several times with the current best due to Bondarenko and Seip \cite{BS0}. Again, it is generally believed that the lower bound is the true order of the maximum; indeed, based on some heuristic arguments Montgomery \cite{M} conjectured that this was the case (see \cite{Lam} for a detailed discussion). In terms of relating this to the maximum of $S(t)$ we have the following. \begin{thm}\label{second thm}Assume the Riemann hypothesis and let $L(t)$ be given by \eqref{L}. Then for fixed $1/2<\sigma<1$, \be \label{zeta RH bound 2} \zeta(\sigma+it)\ll \exp\big(c\max\big(L(t)^{2-2\sigma},(\log t)^{1-\sigma}\big)(\log\log t)^{1-2\sigma}\big). \ee \end{thm} Note that the conjecture of Farmer, Gonek and Hughes gives the upper bound $\ll \exp (c(\log t)^{1-\sigma}(\log\log t)^{2-3\sigma})$ which is still a power of a double logarithm away from Montgomery's conjecture. It is possible that trivially bounding $J(t,X)$, as we do below, is too wasteful and that further cancellations are possible. A finer analysis of $J(t,X)$ would also be of interest in determining the lower order terms in Theorem \ref{main thm}. \section{Proof of Theorems \ref{main thm} and \ref{second thm}} \begin{proof}[Proof of Theorem \ref{main thm}] Throughout we assume $X\ll (\log t)^A$. By Stirling's formula we have $\Gamma(\tfrac{1}{2}-\sigma+iy)\ll y^{-\sigma} e^{-\frac{\pi}{2}|y|}$ and $\frac{\Gamma^\prime}{\Gamma}(\tfrac{1}{2}-\sigma+iy)\ll \log(2+|y|)$. Applying these along with the classical bound $S(t)\ll \log t$ we find that \begin{multline*} X^{1/2-\sigma}\int_{C\log\log t}^{t/2} |\Gamma(\tfrac{1}{2}-\sigma+iy)|\Big(\log X+\Big|\frac{\Gamma^\prime}{\Gamma}(\tfrac{1}{2}-\sigma+iy) \Big|\Big)|S(t+y)|dy \\ \ll X^{1/2-\sigma}(\log t)(\log X) \int_{C\log\log t}^{t/2} e^{-\frac{\pi}{2}y}\log (2+y)dy \\ \ll X^{1/2-\sigma}(\log t)^{1-\frac{\pi}{2}C}\log_2 t\log_3 t. 
\end{multline*} Hence, for $C>1/\pi$ we may restrict the range of integration in $J(t, X)$ to $|y|\leqs C\log\log t$ at the cost of an error of size $O(X^{1/2-\sigma}\sqrt{\log t})$. Then, using similar bounds to estimate this remaining integral gives \[ J(t,X)\ll X^{1/2-\sigma}(\log X) L(t)+X^{1/2-\sigma}\sqrt{\log t}. \] Applying this in \eqref{explicit form} and integrating from $\sigma=1$ to $\sigma=9/8$ we get \begin{multline} \label{log zeta} \log\zeta(1+it)= \sum_{n\geqs 1}\frac{\Lambda(n)}{n^{1+it}\log n}e^{-n/X}+ O\bigg(\bigg|\log\zeta(9/8+it)-\sum_{n\geqs 1}\frac{\Lambda(n)}{n^{9/8+it}\log n}e^{-n/X}\bigg|\bigg) \\ +O\bigg(\frac{L(t)}{X^{1/2}}\bigg) + O\Big(\frac{\sqrt{\log t}}{X^{1/2}\log X}\Big) + O\Big(\frac{\log t}{X}\Big). \end{multline} Choosing $X$ such that $\max(L(t)^2,\log t)=o(X)$ we see that the error terms in the second line of the above are all $o(1)$. It remains to consider the sum over primes. By splitting the sum at $n=X$ and applying the expansion $e^{-n/X}=1+O(n/X)$ in the sum over $n\leqs X$ we find that, for $\sigma\geqs 1$, \be \begin{split} \sum_{n\geqs 1}\frac{\Lambda(n)}{n^{\sigma+it}\log n}e^{-n/X} = & \sum_{n\leqs X}\frac{\Lambda(n)}{n^{\sigma+it}\log n}+O\bigg(\frac{1}{\log X}\bigg) \end{split} \ee after estimating the tail sum by the prime number theorem. Also note that for $\sigma\geqs 1$ \be \begin{split} \sum_{n\leqs X}\frac{\Lambda(n)}{n^{\sigma+it}\log n} = & \sum_{\substack{k\geqs 1,p\leqs X\\p^k\leqs X}}\frac{1}{kp^{k(\sigma+it)}} = -\sum_{p\leqs X}\log\bigg(1-\frac{1}{p^{\sigma+it}}\bigg)+O\bigg(\sum_{\substack{k\geqs 1,p\leqs X\\p^k> X}}\frac{1}{kp^{k\sigma}}\bigg) \\ = & -\sum_{p\leqs X}\log\bigg(1-\frac{1}{p^{\sigma+it}}\bigg)+O\bigg(\frac{1}{\log X} \bigg). \end{split} \ee From this it is clear that the first error term of \eqref{log zeta} is $o(1)$ as $X\to\infty$ and hence we acquire the asymptotic \[ \zeta(1+it)=(1+o(1))\prod_{p\leqs X}\bigg(1-\frac{1}{p^{1+it}}\bigg)^{-1} \] provided $\max(L(t)^2,\log t)=o(X)$. 
Theorem \ref{main thm} then follows.
\end{proof}
\begin{proof}[Proof of Theorem \ref{second thm}]
Adapting the above proof to the case $1/2<\sigma<1$ gives
\be
\log\zeta(\sigma+it)= \sum_{n\geqs 1}\frac{\Lambda(n)}{n^{\sigma+it}\log n}e^{-n/X} +O\bigg(\frac{L(t)}{X^{\sigma-1/2}}\bigg) + O\Big(\frac{\sqrt{\log t}}{X^{\sigma-1/2}\log X}\Big)+ O\Big(\frac{\log t}{X}\Big).
\ee
A short calculation with the prime number theorem shows that the sum over $n$ is $\ll X^{1-\sigma}/\log X$ and so
\ben \label{log zeta sigma}
\log|\zeta(\sigma+it)|\ll \frac{X^{1-\sigma}}{\log X} +\frac{L(t)}{X^{\sigma-1/2}} +\frac{\sqrt{\log t}}{X^{\sigma-1/2}\log X}+\frac{\log t}{X}.
\een
Taking $X=\max(L(t)^2,\log t)(\log\log t)^2$ balances the first two terms and the result follows.
\end{proof}
\section{Proof of Proposition \ref{main prop}}
We start from a slightly more precise version of the explicit formula used by Littlewood (see Theorem 14.4 of Titchmarsh \cite{T}). The proof is fairly standard but we shall give most of the details for clarity.
\begin{lem}
Assume the Riemann hypothesis. For large $t$ we have
\ben \label{explicit}
-\frac{\zeta^\prime(s)}{\zeta(s)}=\sum_{n\geqs 1}\frac{\Lambda(n)}{n^s}e^{-n/X}+\sum_{\rho}X^{\rho-s}\Gamma(\rho-s)+O(X^{-1}\log t)
\een
uniformly for $1/2\leqs \sigma \leqs 9/8$ and $1\leqs X\leqs e^{\sqrt{t}}$.
\end{lem}
\begin{proof}
On the one hand we have
\[
\sum_{n\geqs 1}\frac{\Lambda(n)}{n^s}e^{-n/X}=-\frac{1}{2\pi i}\int_{2-i\infty}^{2+i\infty}\Gamma(z-s)\frac{\zeta^\prime(z)}{\zeta(z)}X^{z-s}dz
\]
which follows from the identity $e^{-Y}=\frac{1}{2\pi i}\int_{(c)}\Gamma(z)Y^{-z}dz$.
On the other hand, by shifting contours to the left in the usual way we find that \begin{multline*} -\frac{1}{2\pi i}\int_{2-i\infty}^{2+i\infty}\Gamma(z-s)\frac{\zeta^\prime(z)}{\zeta(z)}X^{z-s}dz\\ = -\frac{\zeta^\prime(s)}{\zeta(s)}-\sum_{\rho}X^{\rho-s}\Gamma(\rho-s)+\Gamma(1-s)X^{1-s}\\ + \frac{\zeta^\prime(s-1)}{\zeta(s-1)}X^{-1} +\frac{1}{2\pi i}\int_{(\kappa)}\Gamma(z-s)\frac{\zeta^\prime(z)}{\zeta(z)}X^{z-s}dz \end{multline*} where $-2+\sigma <\kappa<-1+\sigma$. Now, \[ \Gamma(1-s)X^{1-s}\ll e^{-A|t|}X^{1/2}\leqs e^{-B|t|} \] and \[ \frac{\zeta^\prime(s-1)}{\zeta(s-1)}X^{-1}\ll \frac{\log t}{X} \] since $\zeta^\prime(s)/\zeta(s)\ll \log t$ for $-1\leqs \sigma\leqs 2$ provided that $\sigma\neq 1/2$. The integral on the new line is \be \begin{split} \ll X^{\kappa-\sigma}\int_{-\infty}^\infty|\Gamma(\kappa+iy-s)|\bigg|\frac{\zeta^\prime(\kappa+iy)}{\zeta(\kappa+iy)}\bigg|dy \ll & {X^{\kappa-\sigma}}\int_{-\infty}^\infty e^{-A|y-t|}\log(|y|+2)dy \\ \ll & {X^{\kappa-\sigma}} \log t. \end{split} \ee Clearly this is smaller than our previous error term so we are done. \end{proof} One may conduct some basic estimates of the sum over zeros appearing in \eqref{explicit} which give an upper bound of $X^{1/2-\sigma}\log t$ (see section 14.5 of \cite{T}). After integrating over $\sigma\geqs 1$, we see that one requires $X=\log^2 t$ for this term to be $o(1)$. This is the reason for such a restriction in the length of the Euler product in \eqref{euler prod}. As mentioned, we would like to exploit some cancellation in the sum over zeros in the hope of improving this. In this direction we have the following Lemma. \begin{lem}Assume the Riemann hypothesis and let $t$ be large.
Then, uniformly for $1/2+\delta\leqs \sigma \leqs 9/8$ with fixed $\delta>0$, and $1\leqs X\leqs e^{\sqrt{t}}$, we have \[ \sum_{\rho}X^{\rho-s}\Gamma(\rho-s)=-\frac{\log (t/2\pi)}{X}+J(t,X)+O(X^{-2}\log t)+ O(X^{1/2-\sigma}) \] where \[ J(t,X)=iX^{1/2-\sigma}\int_{-t/2}^{t/2} X^{iy}\Gamma(\tfrac{1}{2}-\sigma+iy)\Big(\log X+\frac{\Gamma^\prime}{\Gamma}(\tfrac{1}{2}-\sigma+iy) \Big)S(t+y)dy. \] \end{lem} \begin{proof} We first note that we may restrict the sum to those ordinates for which $t/2\leqs\gamma\leqs 3t/2$. For, the tail satisfies the bound \[ \sum_{\gamma<t/2}X^{\rho-s}\Gamma(\rho-s)\ll X^{1/2-\sigma} \sum_{\gamma<t/2}e^{-A|\gamma-t|}\ll X^{1/2-\sigma}e^{-At/4}\sum_{\gamma<t/2}e^{-A|\gamma|/2}\ll X^{1/2-\sigma}e^{-At/4} \] after writing this last sum as a Stieltjes integral and applying the appropriate bounds on $N(t)$. A similar bound holds for the sum over $\gamma>3t/2$. We write the remaining sum in the form \be \begin{split} \sum_{t/2\leqs \gamma\leqs3t/2}X^{\rho-s}\Gamma(\rho-s) = & X^{1/2-\sigma}\sum_{t/2\leqs \gamma\leqs3t/2}X^{i(\gamma-t)}\Gamma(\tfrac{1}{2}-\sigma+i(\gamma-t)) \\ = & X^{1/2-\sigma}\int_{t/2}^{3t/2} X^{i(y-t)}\Gamma(\tfrac{1}{2}-\sigma+i(y-t))dN(y). \end{split} \ee We decompose $N(y)$ as a sum of its smooth part and $S(y)$; that is, we write $ N(y)=N^*(y)+S(y) $ where \be \begin{split} N^*(y)&=\frac{1}{\pi}\Delta\arg s(s-1)\pi^{-s/2}\Gamma(s/2)\\ &=\frac{y}{2\pi}\log(y/2\pi e)+\frac{7}{8}+\frac{c}{y}+O(1/y^2),\qquad y\to\infty \end{split} \ee and $S(y)=\frac{1}{\pi}\Delta\arg\zeta(s)$. Here, $\Delta\arg$ denotes the change in argument along the straight lines from 2 to $2+iy$, and then to $1/2+iy$. We note that $N^*(y)$ is a smooth function and its above asymptotic expansion can be given to any degree of accuracy in terms of negative powers of $y$.
Then, our integral can be written as \be \begin{split} & X^{1/2-\sigma}\int_{t/2}^{3t/2} X^{i(y-t)}\Gamma(\tfrac{1}{2}-\sigma+i(y-t))[dN^*(y)+dS(y)] \\ = & X^{1/2-\sigma}\int_{t/2}^{3t/2} X^{i(y-t)}\Gamma(\tfrac{1}{2}-\sigma+i(y-t))\Big(\frac{1}{2\pi}\log (y/2\pi)+O(1/y^2)\Big)dy \\ & +X^{1/2-\sigma}\int_{t/2}^{3t/2} \frac{d}{dy}\Big(X^{i(y-t)}\Gamma(\tfrac{1}{2}-\sigma+i(y-t))\Big)S(y)dy\\ &\qquad+O(X^{1/2-\sigma}e^{-Bt})+O(X^{1/2-\sigma}/t) \\ \end{split} \ee after integration by parts and applying the bounds $\Gamma(\tfrac{1}{2}-\sigma+iy)\ll e^{-A|y|}$ and $S(y)\ll \log y$. Denote the first of these integrals by $I$ and the second by $J$. Now, \be \begin{split} I = & X^{1/2-\sigma}\int_{t/2}^{3t/2} X^{i(y-t)}\Gamma(\tfrac{1}{2}-\sigma+i(y-t))\Big(\frac{1}{2\pi}\log (y/2\pi)+O(1/y^2)\Big)dy \\ = & X^{1/2-\sigma}\int_{-t/2}^{t/2} X^{iy}\Gamma(\tfrac{1}{2}-\sigma+iy)\frac{1}{2\pi}\log ((y+t)/2\pi)dy+O(X^{1/2-\sigma}/t) \\ = & \log\Big(\frac{t}{2\pi}\Big)\frac{1}{2\pi}\int_{-t/2}^{t/2} X^{1/2-\sigma+iy}\Gamma(\tfrac{1}{2}-\sigma+iy)dy\\ & + X^{1/2-\sigma}\int_{-t/2}^{t/2} X^{iy}\Gamma(\tfrac{1}{2}-\sigma+iy)\frac{1}{2\pi}\log (1+y/t)dy+ O(X^{1/2-\sigma}/t) \end{split} \ee The second integral here is bounded and so results in a contribution of $O(X^{1/2-\sigma})$. After extending the tails of the first integral, which incurs only a small error, we acquire \[ I=\log\Big(\frac{t}{2\pi}\Big)\frac{1}{2\pi i}\int_{1/2-\sigma-i\infty}^{1/2-\sigma+i\infty}\Gamma(s)X^sds+O(X^{1/2-\sigma}). \] In the usual way we may shift this contour to the far left encountering poles at $s=-n$, $n\geqs 1$, with residues $(-1)^nX^{-n}/n!$. Since $\sigma>1/2$ there is no contribution from the pole at zero. In this way we find that this integral is given by $e^{-1/X}-1$ and so \be \begin{split} I = & (e^{-1/X}-1)\log(t/2\pi)+ O(X^{1/2-\sigma}) \\ = & -\frac{\log (t/2\pi)}{X}+O(X^{-2}\log t)+ O(X^{1/2-\sigma}).
\end{split} \ee Performing the differentiation in $J$ gives \[ J=iX^{1/2-\sigma}\int_{t/2}^{3t/2} X^{i(y-t)}\Gamma(\tfrac{1}{2}-\sigma+i(y-t))\Big(\log X+\frac{\Gamma^\prime}{\Gamma}(\tfrac{1}{2}-\sigma+i(y-t)) \Big)S(y)dy, \] and the result follows on substituting $y\mapsto y+t$. \end{proof} Combining the above two lemmas gives Proposition \ref{main prop}. Note that the integral $I$ above is trivially $\ll X^{1/2-\sigma}\log t$. Evaluating it explicitly is where we acquire some cancellation; however, the problem is then reduced to finding good bounds on $S(t)$, of which we know very little.
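As an informal numerical illustration (not part of the original argument), the truncated Euler product can be compared directly with $\zeta(1+it)$. The Python sketch below is our own; the helper names, the sample point $t=30$ and the truncation lengths are arbitrary choices. It computes $\zeta(s)$ from the alternating Dirichlet eta series and forms the product over primes $p\leqs X$:

```python
def primes_upto(n):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [p for p in range(2, n + 1) if sieve[p]]

def zeta(s, N=200000):
    """zeta(s) for Re(s) > 0 via eta(s) = sum_{n>=1} (-1)^(n-1) n^(-s);
    averaging the last two partial sums accelerates the convergence."""
    partial = 0j
    for n in range(1, N + 1):
        partial += (-1) ** (n - 1) * n ** (-s)
    previous = partial - (-1) ** (N - 1) * N ** (-s)
    return (partial + previous) / 2 / (1 - 2 ** (1 - s))

def truncated_euler_product(s, X):
    """prod_{p <= X} (1 - p^(-s))^(-1)."""
    value = 1 + 0j
    for p in primes_upto(X):
        value /= 1 - p ** (-s)
    return value

s = 1 + 30j
z = zeta(s)
for X in (10, 1000, 100000):
    approx = truncated_euler_product(s, X)
    print(X, abs(approx - z) / abs(z))
```

In this one example the truncated product at $X=10^5$ agrees with $\zeta(1+30{\mathrm i})$ to within a few percent, consistent with the message of the theorem that a product of quite modest length already captures $\zeta(1+it)$.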
TITLE: Creating gradient functions based on model parameters? QUESTION [1 upvotes]: I am using a software library (Math.Net) to try to fit two Lorentzians to a curve. I have found some example software which shows the fitting of a few different types of curves (Line, Parabola, Power Function, and a Sum of Trigonometric functions). I would like to modify a model of these curves to represent my two Lorentzians. The model that represents the Sum of Trig Funcs uses the model equation y = a * cos(b * x) + b * sin(a * x) It also has a gradient function for the parameters a and b as (in C# code) public override void GetGradient(double x, Vector<double> parameters, ref Vector<double> gradient) { gradient[0] = (Math.Cos(parameters[1] * x) + parameters[1] * Math.Cos(parameters[0] * x)); gradient[1] = (-parameters[0] * Math.Sin(parameters[1] * x) * x + Math.Sin(parameters[0] * x)); } Where gradient[0] is the gradient value for parameter a at x and gradient[1] is the gradient value for parameter b at x. Also where parameters[0] is the current value of a and parameters[1] is the current value of b. What I would like to do is to produce these gradient functions for my equation model function y = b + a1 * g1^2 / ((x - x1)^2 + g1^2) + a2 * g2^2 / ((x - x2)^2 + g2^2) Where my parameters are b, a1, g1, x1, a2, g2, x2. I am not the most mathematically inclined person there is, but from what I can tell all the gradient functions are the derivative of the model function with respect to the parameter in question, is this correct? Link to page containing the code I am trying to modify (source download at bottom): Linear and Non-Linear Least Squares with Math.Net EDIT: Working on the derivations now. -- Done! See answer below! REPLY [0 votes]: After some help from some coworkers and WolframAlpha I figured out how to create the gradient functions with respect to each of the parameters. As I suspected, to do this I simply needed to differentiate my model function with respect to each parameter.
This produced very good results. I managed to cram all of this into the example software in the link in the OP. Here are the gradient functions that I used (in pseudocode; in C# you cannot apply ^ to doubles, so the real code is full of Math.Pow calls and almost unreadable): gradient[0] = 1; gradient[1] = g1^2 / (g1^2 + (x - x1)^2); gradient[2] = (2 * a1 * g1 * (x - x1)^2) / (g1^2 + (x - x1)^2)^2; gradient[3] = (2 * a1 * g1^2 * (x - x1)) / (g1^2 + (x - x1)^2)^2; gradient[4] = g2^2 / (g2^2 + (x - x2)^2); gradient[5] = (2 * a2 * g2 * (x - x2)^2) / (g2^2 + (x - x2)^2)^2; gradient[6] = (2 * a2 * g2^2 * (x - x2)) / (g2^2 + (x - x2)^2)^2;
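For anyone wanting to sanity-check these formulas, here is a small self-contained sketch (in Python rather than C#, and the function names are made up for illustration) that implements the same model and analytic gradients and compares each partial derivative against a central finite difference:

```python
def model(x, p):
    """y = b + a1*g1^2/((x-x1)^2+g1^2) + a2*g2^2/((x-x2)^2+g2^2),
    with p = [b, a1, g1, x1, a2, g2, x2]."""
    b, a1, g1, x1, a2, g2, x2 = p
    return (b
            + a1 * g1**2 / ((x - x1)**2 + g1**2)
            + a2 * g2**2 / ((x - x2)**2 + g2**2))

def gradient(x, p):
    """Analytic partial derivatives of the model w.r.t. each parameter."""
    b, a1, g1, x1, a2, g2, x2 = p
    d1 = (x - x1)**2 + g1**2
    d2 = (x - x2)**2 + g2**2
    return [
        1.0,                                   # d/db
        g1**2 / d1,                            # d/da1
        2 * a1 * g1 * (x - x1)**2 / d1**2,     # d/dg1
        2 * a1 * g1**2 * (x - x1) / d1**2,     # d/dx1
        g2**2 / d2,                            # d/da2
        2 * a2 * g2 * (x - x2)**2 / d2**2,     # d/dg2
        2 * a2 * g2**2 * (x - x2) / d2**2,     # d/dx2
    ]

def check_gradients(x, p, h=1e-6, tol=1e-4):
    """Compare each analytic derivative with a central finite difference."""
    g = gradient(x, p)
    for i in range(len(p)):
        up = [v + (h if j == i else 0.0) for j, v in enumerate(p)]
        dn = [v - (h if j == i else 0.0) for j, v in enumerate(p)]
        fd = (model(x, up) - model(x, dn)) / (2 * h)
        assert abs(fd - g[i]) < tol, (i, fd, g[i])

check_gradients(0.7, [0.1, 2.0, 0.3, 0.5, 1.5, 0.2, 1.2])
check_gradients(1.3, [0.1, 2.0, 0.3, 0.5, 1.5, 0.2, 1.2])
print("all gradients match")
```

Running this at a few sample points is a cheap way to catch a sign or exponent mistake before wiring the gradients into the fitter.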
TITLE: Does there exist a way to find the sum of the digits of the result from a large cubic root? QUESTION [1 upvotes]: The problem is as follows: Find the sum of the digits of $B$: $B=\sqrt[3]{1224 \times 1225\times 1226+35\sqrt[3]{34\times35\times36+35}}$ The possible answers given in my book are as follows: $\begin{array}{ll} 1.&10\\ 2.&18\\ 3.&9\\ 4.&12\\ \end{array}$ I've been tempted to use a calculator but this problem is meant to be solved by hand. Does there exist a trick or anything for doing this, other than going by the brute force method of doing all the operations one by one?. How can this be simplified or solved quickly?. Can someone help me here?. REPLY [2 votes]: Well, I suppose you could just do it by hand, but that would be a pain in the butt. Thus, to flesh out Catalin Zara's hint: Let $a=1225, b=35$. Then $$B = \sqrt[3]{(a-1)(a)(a+1) + b\sqrt[3]{(b-1)(b)(b+1) + b}}$$ Noting that $(x-1)(x)(x+1) = (x^2 - 1)(x) = x^3 - x$, then the innermost radicand is equal to $b^3$ -- just move the $x$ over in the previous equation. Then that simplifies to $b$ and thus $$B = \sqrt[3]{(a-1)(a)(a+1) + b^2}$$ A priori, it doesn't seem like we can apply it again. However, would it not be convenient for us if $b^2 = a$, to reapply the same trick? Indeed, if you happen to have $35^2$ memorized, or calculate it by hand, you'll find $b^2 = a$. Then the radicand simplifies to $a^3$ and then $B=a$. Thus, $B=1225$, and finding the sum of the digits is trivial.
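Since everything here is integer arithmetic, the telescoping trick is easy to verify mechanically; a tiny Python check (ours, not from the answer):

```python
# (x-1)*x*(x+1) = x^3 - x, so (x-1)*x*(x+1) + x = x^3 for any integer x.
assert all((x - 1) * x * (x + 1) + x == x**3 for x in range(-50, 51))

# Inner radicand: 34*35*36 + 35 = 35^3, so its cube root is exactly 35.
assert 34 * 35 * 36 + 35 == 35**3

# Outer radicand: since 35^2 = 1225, it becomes 1224*1225*1226 + 1225 = 1225^3.
assert 35**2 == 1225
assert 1224 * 1225 * 1226 + 35 * 35 == 1225**3

B = 1225
print(sum(int(d) for d in str(B)))  # -> 10, answer choice 1
```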
TITLE: Reference request: an elementary result on characters of finite abelian groups QUESTION [6 upvotes]: The referee of a paper I submitted to a journal asked me to include a reference for the following elementary result on characters of finite abelian groups: Let $A$ be a finite abelian group of order $N$ and let $\hat A$ be its dual group. Let $a\in A$ have order $h$. Then $$\prod_{\chi\in\hat A}(1-\chi(a)T)=(1-T^h)^{N/h}.$$ I don't want to include a proof because one of the good things about this paper (I hope not the only one) is that it is short. I have searched in books about abelian groups, finite groups, representations, and number theory, but I could not find it. As usual, the only place I could find it is in one of the (magnificent) "blurbs" by Keith Conrad. Does anyone know of a book where I can actually find this result? REPLY [3 votes]: This is essentially proved in Rosen's "Number Theory in Function Fields", page 109, Lemma 8.14. This is also proved in Lang's "Algebraic Number Theory" (2nd edition), page 230. It is the equation with (*) in its beginning. The context is abelian extensions, so some of the notation makes it seem number-theoretic, but the argument is general.
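For what it's worth, the identity is also easy to confirm by machine for small groups (our addition, not from either book). The snippet multiplies out $\prod_{\chi}(1-\chi(a)T)$ for a product of cyclic groups — which covers every finite abelian group, by the structure theorem — and compares coefficients with $(1-T^h)^{N/h}$:

```python
import cmath
import math
from itertools import product

def poly_mul(p, q):
    """Multiply two polynomials given as lists of complex coefficients."""
    r = [0j] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

def check_identity(moduli, a):
    """For A = Z/n1 x ... x Z/nk and a given as a tuple of residues, verify
    prod_{chi in dual(A)} (1 - chi(a) T) = (1 - T^h)^(N/h).  Returns (h, N)."""
    N = math.prod(moduli)
    h = math.lcm(*(n // math.gcd(x, n) for n, x in zip(moduli, a)))
    lhs = [1 + 0j]
    for js in product(*(range(n) for n in moduli)):
        # chi_{j1,...,jk}(a) = exp(2*pi*i * sum_r j_r * a_r / n_r)
        chi_a = cmath.exp(2j * cmath.pi * sum(j * x / n for j, x, n in zip(js, a, moduli)))
        lhs = poly_mul(lhs, [1 + 0j, -chi_a])
    rhs = [0j] * (N + 1)
    for k in range(N // h + 1):
        rhs[k * h] = (-1) ** k * math.comb(N // h, k)
    assert all(abs(l - r) < 1e-6 for l, r in zip(lhs, rhs))
    return h, N

print(check_identity((12,), (2,)))     # a = 2 has order 6 in Z/12
print(check_identity((4, 6), (2, 3)))  # (2, 3) has order 2 in Z/4 x Z/6
```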
TITLE: logical quantifiers on sets question QUESTION [0 upvotes]: I have to prove whether or not these statements are true/false but I'm having trouble understanding them. $\forall x \in \mathbb{Z}^5, \forall y \in \mathbb{Z}^5. \exists z \in \mathbb{Z}^5, \forall j \in \{1,2,3,4,5\},x_{j} \leq z_{j} \leq y_{j}$. I think I understand the first half, up until the "for all j that is a member of the set {1,2,3,4,5}" part. Is it saying that the element at index j of x will always be less than the one at z which is less than the one at y? How would you prove something like that (if it's true)? The other one is this, $i \neq j \implies (\{x \in \mathbb{Z}^5 : x_{i} = 3\} \cap \{x \in \mathbb{Z}^5 : x_{j} = 3\} = \emptyset)$ I'm not sure I understand what this one is saying at all. Maybe that the elements in the set of length 5 have to be at different indexes? But I'm not sure I understand why $x_{i} =3$. REPLY [0 votes]: Your interpretation of the first statement is more or less correct, but the statement itself is not. The first statement claims that, for any pair of integer quintuples $x= (x_1,...,x_5)$ and $y=(y_1,...,y_5)$ in the set $\mathbb{Z}^5$ there exists another integer quintuple $z = (z_1,...,z_5) \in \mathbb{Z}^5$ such that $x_i \leqslant z_i \leqslant y_i$ for each index $i=1,...,5$. To see why this is false, take $x = (1,0,0,0,0)$ and $y=(0,0,0,0,0)$ as a counterexample. They are a legitimate pair of points in $\mathbb{Z}^5$, but there is no $z = (z_1,...,z_5) \in \mathbb{Z}^5$ satisfying $1 = x_1 \leqslant z_1 \leqslant y_1 = 0$. The second statement says that the sets $A_i=\{(x_1,...,x_5) \in \mathbb{Z}^5: x_i = 3\}$ and $A_j=\{(x_1,...,x_5) \in \mathbb{Z}^5: x_j = 3\}$ are disjoint whenever $i \neq j$ are distinct indices among $1,...,5$. The set $A_i$ is the set of all integer quintuples whose $i$-th coordinate is $3$. Similarly, the set $A_j$ is the set of all integer quintuples whose $j$-th coordinate is $3$.
So, the statement essentially claims that an integer quintuple cannot have distinct coordinates both equal to $3$. This certainly isn't true. The perfectly legitimate quintuple $x = (3,3,3,3,3) \in A_i \cap A_j$ serves as a counterexample.
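Both counterexamples are small enough to check by brute force; a throwaway Python snippet (ours, not part of the original answer) for the skeptical:

```python
from itertools import product

# First statement: with x = (1,0,0,0,0) and y = (0,0,0,0,0) no z works,
# because no integer z_1 satisfies 1 <= z_1 <= 0.  Searching a finite window
# of candidates for each coordinate confirms this.
x = (1, 0, 0, 0, 0)
y = (0, 0, 0, 0, 0)
window = range(-5, 6)
exists_z = any(all(x[j] <= z[j] <= y[j] for j in range(5))
               for z in product(window, repeat=5))
assert not exists_z

# Second statement: A_0 and A_1 intersect, e.g. in (3,3,3,3,3); enumerating a
# small box of Z^5 finds plenty of common elements.
hits = [v for v in product(range(2, 5), repeat=5) if v[0] == 3 and v[1] == 3]
assert (3, 3, 3, 3, 3) in hits
print(len(hits))  # -> 27 quintuples with first two coordinates equal to 3
```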
\begin{document} \title[]{Index and nullity of proper biharmonic maps in spheres} \author{S.~Montaldo} \address{Universit\`a degli Studi di Cagliari\\ Dipartimento di Matematica e Informatica\\ Via Ospedale 72\\ 09124 Cagliari, Italia} \email{montaldo@unica.it} \author{C.~Oniciuc} \address{Faculty of Mathematics\\ ``Al.I. Cuza'' University of Iasi\\ Bd. Carol I no. 11 \\ 700506 Iasi, ROMANIA} \email{oniciucc@uaic.ro} \author{A.~Ratto} \address{Universit\`a degli Studi di Cagliari\\ Dipartimento di Matematica e Informatica\\ Via Ospedale 72\\ 09124 Cagliari, Italia} \email{rattoa@unica.it} \begin{abstract} In recent years, the study of the bienergy functional has attracted the attention of a large community of researchers, but there are not many examples where the second variation of this functional has been thoroughly studied. We shall focus on this problem and, in particular, we shall compute the exact index and nullity of some known examples of proper biharmonic maps. Moreover, we shall analyse a case where the domain is not compact. More precisely, we shall prove that a large family of proper biharmonic maps $\varphi: \R \to \s^2$ is strictly stable with respect to compactly supported variations. In general, the computations involved in this type of problems are very long. For this reason, we shall also define and apply to specific examples a suitable notion of index and nullity with respect to equivariant variations. \end{abstract} \subjclass[2000]{Primary: 58E20; Secondary: 53C43.} \keywords{Biharmonic maps, second variation, index, nullity} \thanks{The first and the last authors were supported by Fondazione di Sardegna (project STAGE) and Regione Autonoma della Sardegna (Project KASBA); the second author was supported by a project funded by the Ministry of Research and Innovation within Program 1 - Development of the national RD system, Subprogram 1.2 - Institutional Performance - RDI excellence funding projects, Contract no. 
34PFE/19.10.2018.} \maketitle \section{Introduction}\label{intro} {\it Harmonic maps} are the critical points of the {\em energy functional} \begin{equation}\label{energia} E(\varphi)=\frac{1}{2}\int_{M}\,|d\varphi|^2\,dv_M \,\, , \end{equation} where $\varphi:M\to N$ is a smooth map from a compact Riemannian manifold $(M,g)$ to a Riemannian manifold $(N,h)$. In particular, $\varphi$ is harmonic if it is a solution of the Euler-Lagrange system of equations associated to \eqref{energia}, i.e. \begin{equation}\label{harmonicityequation} - d^* d \varphi = {\trace} \, \nabla d \varphi =0 \,\, . \end{equation} The left member of \eqref{harmonicityequation} is a vector field along the map $\varphi$ or, equivalently, a section of the pull-back bundle $\varphi^{-1} TN$: it is called {\em tension field} and denoted $\tau (\varphi)$. In addition, we recall that, if $\varphi$ is an \textit{isometric immersion}, then $\varphi$ is a harmonic map if and only if it defines a \textit{minimal submanifold} of $N$ (see \cite{EL1, EL83} for background). A related topic of growing interest is the study of {\it biharmonic maps}: these maps, which provide a natural generalisation of harmonic maps, are the critical points of the {\it bienergy functional} (as suggested in \cite{EL83}, \cite{ES}) \begin{equation}\label{bienergia} E_2(\varphi)=\frac{1}{2}\int_{M}\,|d^*d\varphi|^2\,dv_M=\frac{1}{2}\int_{M}\,|\tau(\varphi)|^2\,dv_M\,\, . \end{equation} There have been extensive studies on biharmonic maps. We refer to \cite{Chen, Jiang, SMCO, Ou} for an introduction to this topic and to \cite{MOR2, Mont-Ratto2, Mont-Ratto3, Mont-Ratto6} for a collection of examples which shall be studied in this paper in the context of second variation. We observe that, obviously, any harmonic map is trivially biharmonic and an absolute minimum for the bienergy. 
Therefore, we say that a biharmonic map is {\it proper} if it is not harmonic and, similarly, a biharmonic isometric immersion is {\it proper} if it is not minimal. As a general fact, when the ambient has nonpositive sectional curvature there are several results which assert that, under suitable conditions, a biharmonic submanifold is minimal, but the Chen conjecture that any biharmonic submanifold of $\R^n$ must be minimal is still open (see \cite{Chen, Chen2}). The aim of this paper is to compute the index and the nullity of certain biharmonic maps. It shall be clear from our analysis that, in general, despite the simplicity of the involved maps, this is a huge task (for this reason some of the computations have also been checked with the aid of Mathematica$^{\footnotesize \textregistered}$). Therefore, in some cases, we shall focus on reduced index and nullity (i.e., index and nullity which arise from the restriction to equivariant variations). Now we want to prepare the ground to state our main results. To this purpose, first of all we need to explain some basic facts about the iterated Jacobi operator $I_2(V)$ and the definition of index and nullity. More specifically, let $\varphi:M\to N$ be a biharmonic map between two Riemannian manifolds $(M,g)$, $(N,h)$. We shall consider a two-parameter smooth variation $\left \{ \varphi_{t,s} \right \}$ $(-\varepsilon <t,s < \varepsilon,\,\varphi_{0,0}=\varphi)$ and denote by $V,W$ its associated vector fields: \begin{align}\label{V-W} & V(x)= \left . \frac{d}{dt}\right |_{t=0} \varphi_{t,0} \in T_{\varphi(x)}N \\ \nonumber & W(x)= \left . \frac{d}{ds}\right |_{s=0} \varphi_{0,s} \in T_{\varphi(x)}N\,. \end{align} Note that $V$ and $W$ are sections of $\varphi^{-1}TN$. The {\it Hessian} of the bienergy functional $E_2$ at its critical point $\varphi$ is defined by \begin{equation}\label{Hessian-definition} H(E_2)_\varphi (V,W)= \left .
\frac{\partial^2}{\partial t \partial s}\right |_{(t,s)=(0,0)} E_2 (\varphi_{t,s}) \, . \end{equation} The following theorem was obtained by Jiang and translated by Urakawa \cite{Jiang}: \begin{theorem}\label{Hessian-Theorem} Let $\varphi:M\to N$ be a biharmonic map between two Riemannian manifolds $(M,g)$ and $(N,h)$, where $M$ is compact. Then the Hessian of the bienergy functional $E_2$ at a critical point $\varphi$ is given by \begin{equation}\label{Operator-Ir} H(E_2)_\varphi (V,W)= \int_M \langle I_2(V),W \rangle \, dv_M \,\,, \end{equation} where $I_2 \,:\mathcal{C}\left(\varphi^{-1} TN\right) \to \mathcal{C}\left(\varphi^{-1} TN\right)$ is a semilinear elliptic operator of order $4$. \end{theorem} Now we want to give an explicit description of the operator $I_2$. To this purpose, let $\nabla^M, \nabla^N$ and $\nabla^{\varphi}$ be the induced connections on the bundles $TM, TN$ and $\varphi ^{-1}TN$ respectively. Then the \textit{rough Laplacian} on sections of $\varphi^{-1} TN$, denoted by $\overline{\Delta}$, is defined by \begin{equation}\label{roughlaplacian} \overline{\Delta}=d^* d =-\sum_{i=1}^m\Big\{\nabla^{\varphi}_{e_i} \nabla^{\varphi}_{e_i}-\nabla^{\varphi}_ {\nabla^M_{e_i}e_i}\Big\}\,\,, \end{equation} where $\{e_i\}_{i=1}^m$ is a local orthonormal frame field tangent to $M$. In the present paper, we shall only need the explicit expression of $I_2(V)$ in the case that the target manifold is $\s^n$. 
This relevant formula, which was first given in \cite{Onic} and can be deduced from a general formula in \cite{Jiang}, is the following: \begin{eqnarray}\label{I2-general-case} \nonumber I_2(V)&=&\overline{\Delta}^2 V+\overline{\Delta}\left ( {\rm trace}\langle V,d\varphi \cdot \rangle d\varphi \cdot - |d\varphi|^2\,V \right)+2\langle d\tau(\varphi),d\varphi \rangle V+|\tau(\varphi)|^2 V\\ \nonumber &&-2\,{\rm trace}\langle V,d\tau(\varphi) \cdot \rangle d\varphi \cdot-2 \,{\rm trace} \langle \tau(\varphi),dV \cdot \rangle d\varphi \cdot - \langle \tau(\varphi),V \rangle \tau(\varphi)\\ &&+{\rm trace} \langle d\varphi \cdot,\overline{\Delta}V \rangle d\varphi \cdot +{\rm trace}\langle d\varphi \cdot,\left ( {\rm trace}\langle V,d\varphi \cdot \rangle d\varphi \cdot \right ) \rangle d\varphi \cdot -2 |d\varphi|^2\, {\rm trace}\langle d\varphi \cdot,V \rangle d\varphi\cdot \\ \nonumber &&+2 \langle dV,d\varphi\rangle \tau(\varphi) -|d\varphi|^2 \,\overline{\Delta}V+|d\varphi|^4 V\,, \nonumber \end{eqnarray} where $\cdot$ denotes trace with respect to a local orthonormal frame field on $M$. Next, it is important to recall from the general theory that, since $M$ is compact, the spectrum \begin{equation} \lambda_1 < \lambda_2 < \ldots < \lambda_i < \ldots \end{equation} of the iterated Jacobi operator $I_2(V)$ is \textit{discrete} and tends to $+\infty$ as $i$ tends to $+ \infty$. We denote by $\mathcal{V}_i$ the eigenspace associated to the eigenvalue $\lambda_i$. Then we define \begin{equation}\label{Index-definition} {\rm Index}(\varphi) = \sum_{\lambda_i <0} \dim(\mathcal{V}_i)\,. \end{equation} The nullity of $\varphi$ is defined as \begin{equation}\label{Nullity-definition} {\rm Nullity}(\varphi) = \dim \left \{ V \in\mathcal{C}\left(\varphi^{-1} TN\right) \, : \, I_2(V)=0\right \} \,. \end{equation} We say that a map $\varphi: M\to N$ is {\it stable} if ${\rm Index}(\varphi)=0$. 
The index and the nullity of certain proper biharmonic maps have been computed in \cite {BFO,LO, TAMS} where, apart from one case, only estimates have been produced. In this paper we continue this program of study of the second variation of the bienergy and now we are in the right position to describe the specific examples that we shall investigate: each of them contains a short description of the biharmonic maps under consideration and the corresponding result concerning their exact index and nullity. In the first examples the domain of the map is a flat torus or a circle, for which the full description of its spectrum is well known, and the pull-back bundle of the map is parallelizable. Moreover, in the first example, where the domain is the flat torus $\mathbb{T}^2$, we shall also give an explicit geometric description of the space where the Hessian is negative definite. In the last example we shall consider a case where the domain is \textit{not} compact: in this context it is meaningful to study \textit{stability with respect to compactly supported variations}. In particular, we shall prove the existence of a large family of \textit{strictly stable} proper biharmonic maps $\varphi: \R \to \s^2$. The proofs of the results shall be given in Section\link\ref{proofs}. Finally, in the last section we shall define and study a \textit{reduced} index and nullity. We shall now give a detailed description of the results. \begin{example}\label{example-mapstoro-to-sfera} We write the flat 2-torus ${\mathbb T}^2$ as \begin{equation}\label{toro} {\mathbb T}^2 = \left ( \s ^1 \times \s^1, d\gamma^2+ d\vartheta^2 \right ) \,\, , \qquad 0 \leq \gamma, \vartheta \leq 2 \pi \,\,. \end{equation} Next, we describe the 2-sphere $\s^2$ by means of spherical coordinates: \begin{equation}\label{2sfera} \s^2 = \left ( \s^1 \times [0,\pi], \, \sin^2 \alpha \, \, dw^2+ d\alpha^2 \right ) \,\, , \qquad 0 \leq w \leq 2 \pi \,, \,\, 0 \leq \alpha\leq \pi \,\, . 
\end{equation} We embed $\s^2$ in the canonical way into $\R^3$ and we consider equivariant maps $\varphi_{k} : {\mathbb T}^2 \to \s^2$ of the following form: \begin{equation}\label{equivdatoroasfera} \left ( \gamma, \, \vartheta \right ) \mapsto \left ( \sin \alpha(\vartheta) \cos(k\gamma), \sin \alpha(\vartheta) \sin(k\gamma),\cos \alpha (\vartheta) \right ) \,\, , \end{equation} where $k\in \z^*$ is a fixed integer and $\alpha(\vartheta)$ is a differentiable, periodic function of period equal to $2\pi$. The condition of biharmonicity (see \cite{Mont-Ratto3}) for $\varphi_{k}$ reduces to: \begin{equation}\label{biarmoniatorosfera} \alpha^{(4)} - {\alpha}'' \, \left [ 2 \, k^2 \, \cos (2 \alpha)\right ] + ({\alpha'})^2 \, \left [ 2 \, k^2 \, \sin (2 \alpha)\right ] + \, \frac{k^4}{2} \, \sin (2\alpha) \, \cos (2 \alpha) \, = \, 0 \,\, . \end{equation} In particular, \eqref{biarmoniatorosfera} admits the following constant solutions: \begin{equation}\label{trivialsolutions} {\rm (i)}\,\, \alpha \, \equiv \ell \, \frac {\pi}{2} \,\,, {\rm where} \,\, \ell =0,\,1,\,2 \, \, ; \qquad \rm {(ii)}\,\, \alpha\, \equiv \frac {\pi}{4} \,\,\, {\rm or}\, \,\, \alpha \, \equiv \frac {3\, \pi}{4} \,\, . \end{equation} The solutions in \eqref{trivialsolutions}(i) are not interesting because they give rise to harmonic maps which are absolute minima for the bienergy. By contrast, the solutions in \eqref{trivialsolutions}(ii) represent {proper biharmonic maps} and here we study their second variation operator $I_2$. Despite the apparently simple structure of these critical points, the study of their index and nullity requires a rather accurate analysis. Since index and nullity are invariant with respect to composition with an isometry of either the domain or the target, it is not restrictive to assume that $k\in \n^*$ in \eqref{equivdatoroasfera} and $\alpha=\pi /4$ in \eqref{trivialsolutions}. 
We shall prove the following result: \begin{theorem}\label{Spectrum-theorem-toro-sfera} Let $\varphi_{k} : {\mathbb T}^2 \to \s^2$ be the proper biharmonic map \begin{equation}\label{equivdatoroasfera-bis*} \left ( \gamma, \, \vartheta \right ) \mapsto \left ( \sin (\alpha^*) \cos(k\gamma), \sin (\alpha^*) \sin(k\gamma),\cos (\alpha^*) \right ) \,\, , \end{equation} where $k\in \n^*$ and $\alpha^*= \pi /4$. Then the eigenvalues of the second variation operator $I_2(V)$ associated to $\varphi_{k}$ can be parametrized by means of two parameters $m,n\in \n\times \n$ as follows: \renewcommand{\arraystretch}{1.3} \begin{equation*} \begin{tabular}{ l r c l} If $m=n=0$: & && \\ & $\mu_0$&$=$&$0$ \\ & $\mu_1$&$=$&$-k^4$ \\ If $m \geq 1,\,n=0$:& && \\ &$\lambda_{m,0}^+$ &$=$&$\frac{1}{2} \left(-k^4+5k^2 m^2+2 m^4+ \sqrt{k^8+2 k^6 m^2+k^4 m^4+32k^2 m^6 }\right)$ \\ &$\lambda_{m,0}^- $&$=$&$\frac{1}{2} \left(-k^4+5k^2 m^2+2 m^4- \sqrt{k^8+2 k^6 m^2+k^4 m^4+32k^2 m^6 }\right)$\\ If $m=0,\,n \geq 1$:& && \\ &$\lambda_{0,n}^+$ &$=$&$n^2 (n^2+k^2)$ \\ &$\lambda_{0,n}^-$ &$=$&$n^4-k^4$\\ If $m,n \geq 1$:& && \\ &$\lambda_{m,n}^+$ &$=$&$\frac{1}{2}\left(-k^4+k^2 \left(5 m^2+n^2\right)+2 \left(m^2+n^2\right)^2 \right .$ \\ &&&$ \left . + \sqrt{k^8+2 k^6 \left(m^2+n^2\right)+k^4 \left(m^2+n^2\right)^2+32 k^2 m^2 \left(m^2+n^2\right)^2}\right)$ \\ &$\lambda_{m,n}^-$ &$=$&$\frac{1}{2}\left(-k^4+k^2 \left(5 m^2+n^2\right)+2 \left(m^2+n^2\right)^2 \right .$\\ &&& $ \left . 
- \sqrt{k^8+2 k^6 \left(m^2+n^2\right)+k^4 \left(m^2+n^2\right)^2+32 k^2 m^2 \left(m^2+n^2\right)^2}\right)$ \end{tabular} \end{equation*} \renewcommand{\arraystretch}{1.} Moreover, the multiplicities $\nu(\lambda)$ of these eigenvalues are: \begin{equation}\label{eigenspaces-toro-sfera} \begin{tabular}{c c c cc} $\nu \left ( \mu_0\right )$&=&$\nu \left ( \mu_1\right )$&=&$1$ \\ $\nu \left ( \lambda_{m,0}^+\right )$&=&$\nu \left ( \lambda_{m,0}^-\right )$&=&$2$ \\ $\nu \left ( \lambda_{0,n}^+\right )$&=&$\nu \left ( \lambda_{0,n}^-\right )$&=&$2$ \\ $\nu \left ( \lambda_{m,n}^+\right )$&=&$\nu \left ( \lambda_{m,n}^-\right )$&=&$4$ \\ \end{tabular} \end{equation} \end{theorem} Now, in order to state our next result, it is convenient to define, for $k \in \n^*$, \begin{equation}\label{definizione-f(k)} f(k)=\sharp \left \{[m,n]\in \n^* \times \n^* \,\,:\,\,\lambda_{m,n}^- <0 \right \} \end{equation} and \begin{equation}\label{definizione-g(k)} g(k)=\sharp \left \{[m,n]\in \n^* \times \n^* \,\,:\,\,\lambda_{m,n}^- =0 \right \}. \end{equation} We point out that it is not difficult to prove that $f(k)\leq k^2$ for all $k \in \n^*$ and that $f(k)$ does not admit a polynomial expression. \begin{theorem}\label{Index-theorem-toro-sfera} Let $\varphi_{k} : {\mathbb T}^2 \to \s^2$ be the proper biharmonic map \eqref{equivdatoroasfera-bis*}. Then \begin{align} & {\rm Nullity}(\varphi_{k})=5+4\,g(k) \\ & {\rm Index}(\varphi_{k})=1+4(k-1)+4\, f(k). \end{align} \end{theorem} \begin{remark}\label{remark-stime-index-toro-sfera} The eigenvalues in Theorem~\ref{Spectrum-theorem-toro-sfera} are not written in an increasing order. Moreover, we observe that some eigenvalues in Theorem~\ref{Spectrum-theorem-toro-sfera} may occur more than once. For instance, $\mu_0=\lambda_{k,0}^-=\lambda_{0,k}^-$. The numerical values of the functions $f$ and $g$ in the statement of Theorem\link\ref{Index-theorem-toro-sfera} can be computed by means of a suitable computer algorithm. 
By way of example, we report some of them in the following table: \begin{equation}\label{Table-index-toro-sfera} \,\begin{array}{lllll} k=1 &\quad \quad &{\rm Index}(\varphi_{k})=1 &\quad \quad& {\rm Nullity}(\varphi_{k})=5 \\ \nonumber k=2 &\quad \quad &{\rm Index}(\varphi_{k})=13 &\quad \quad& {\rm Nullity}(\varphi_{k})=5 \\ \nonumber k=3 &\quad \quad &{\rm Index}(\varphi_{k})=29 &\quad \quad& {\rm Nullity}(\varphi_{k})=5 \\ \nonumber k=4 &\quad \quad &{\rm Index}(\varphi_{k})=57 &\quad \quad& {\rm Nullity}(\varphi_{k})=5 \\ \nonumber k=5 &\quad \quad &{\rm Index}(\varphi_{k})=89 &\quad \quad& {\rm Nullity}(\varphi_{k})=5 \\ \nonumber k=6 &\quad \quad &{\rm Index}(\varphi_{k})=129 &\quad \quad& {\rm Nullity}(\varphi_{k})=5 \\ \nonumber k=7 &\quad \quad &{\rm Index}(\varphi_{k})=181 &\quad \quad& {\rm Nullity}(\varphi_{k})=5 \\ \nonumber k=8 &\quad \quad &{\rm Index}(\varphi_{k})=233 &\quad \quad& {\rm Nullity}(\varphi_{k})=5 \\ \nonumber k=9 &\quad \quad &{\rm Index}(\varphi_{k})=297 &\quad \quad& {\rm Nullity}(\varphi_{k})=5 \\ \nonumber k=10 &\quad \quad &{\rm Index}(\varphi_{k})=365&\quad \quad& {\rm Nullity}(\varphi_{k})=5 \\ \nonumber k=17 &\quad \quad &{\rm Index}(\varphi_{k})=1065 &\quad \quad& {\rm Nullity}(\varphi_{k})=5 \\ \nonumber k=155 &\quad \quad &{\rm Index}(\varphi_{k})=88433 &\quad \quad& {\rm Nullity}(\varphi_{k})=5 \nonumber \end{array} \end{equation} \textbf{Conjecture:} The ${\rm Nullity}(\varphi_{k})$ is equal to $5$ for any $k$. \vspace{2mm} We have checked this conjecture by means of a computer algorithm for all $k \leq 1500$. Therefore, it is reasonable to believe that ${\rm Nullity}(\varphi_{k})=5$ for all $k \in \n^*$. The difficulty to prove this conjecture is the following: there are values which are very close both to satisfy $\lambda_{m,n}^-=0$ and to be integers. For instance, the expression which defines $\lambda_{m,n}^-$ vanishes when $k=192,\,m=100$ and $n \simeq 184,998$. 
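The values in the table can be reproduced mechanically from the eigenvalue formulas of Theorem \ref{Spectrum-theorem-toro-sfera}, since the sign of $\lambda_{m,n}^-$ is decidable in exact integer arithmetic. The following Python sketch is our own; in particular, the search bound $m,n\leqs 6k$ is a crude estimate of the region where $\lambda_{m,n}^-$ can be nonpositive and is not taken from the text:

```python
def f_and_g(k, bound_factor=6):
    """Count f(k) = #{(m,n) in N* x N* : lambda_{m,n}^- < 0} and
    g(k) = #{(m,n) : lambda_{m,n}^- = 0}.  Writing 2*lambda^- = A - sqrt(B)
    with integers
      A = -k^4 + k^2 (5 m^2 + n^2) + 2 (m^2 + n^2)^2,
      B = k^8 + 2 k^6 (m^2+n^2) + k^4 (m^2+n^2)^2 + 32 k^2 m^2 (m^2+n^2)^2,
    we have lambda^- < 0 iff A < 0 or A^2 < B, and lambda^- = 0 iff
    A >= 0 and A^2 = B, so no floating point is needed."""
    f = g = 0
    for m in range(1, bound_factor * k + 1):
        for n in range(1, bound_factor * k + 1):
            r = m * m + n * n
            A = -k**4 + k**2 * (5 * m * m + n * n) + 2 * r * r
            B = k**8 + 2 * k**6 * r + k**4 * r * r + 32 * k**2 * m * m * r * r
            if A < 0 or A * A < B:
                f += 1
            elif A * A == B:
                g += 1
    return f, g

for k in range(1, 6):
    f, g = f_and_g(k)
    print(k, 1 + 4 * (k - 1) + 4 * f, 5 + 4 * g)   # index, nullity
```

For small $k$ this reproduces the index values and the nullity $5$ reported in the table.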
\end{remark} \end{example} \begin{example}\label{parallel-r-harmonic-circles} Let $\varphi_{k}\,: \s^1 \to \s^2\hookrightarrow \R^3$ be the proper biharmonic map defined by \begin{equation}\label{r-harmonic-examples} \gamma \mapsto \, \left ( \frac{1}{\sqrt 2}\, \cos(k\gamma),\,\frac{1}{\sqrt 2}\,\sin(k\gamma), \,\frac{1}{\sqrt 2}\right ) \,, \quad 0 \leq \gamma \leq 2\pi\,, \end{equation} where $k \in \n^*$ is a fixed positive integer. The notions of biharmonicity and of index and nullity of a biharmonic map are all invariant under homothetic changes of the metric of either the domain or the codomain. Therefore, in this example, we have assumed for simplicity that the domain is the unit circle. In particular, the radius of the domain which would ensure the condition of isometric immersion for $k=1$ is $R=1/\sqrt 2$, but the choice of $R$ does not affect the conclusions of our next result: \begin{theorem}\label{Index-theorem-r-harmonic-circles} Let $\varphi_{k} : \s^1 \to \s^2$ be the proper biharmonic map defined in \eqref{r-harmonic-examples}. Then \begin{align}\label{*} & {\rm Nullity}(\varphi_{k})=3 \\ & {\rm Index}(\varphi_{k})=1+2(k-1) \nonumber \end{align} \end{theorem} \begin{remark}\label{remark-inex-depend-k} Theorem\link\ref{Index-theorem-r-harmonic-circles} was known in the case $k=1$: it was proved in \cite{Balmus,LO}, where the index and the nullity of $i:\s^{n-1}(1/\sqrt 2) \hookrightarrow \s^n$ were computed although the pull-back bundle $i^{-1}T\s^n$ is not parallelizable for all dimensions $n$. Since the $k$-fold rotation $e^{{\mathrm i}\vartheta} \mapsto e^{{\mathrm i}k\vartheta}$ of $\s^1$ is a local homothety, it is somewhat surprising that the index depends on $k$. A possible explanation is the fact that the $k$-fold rotation is not a global diffeomorphism and, while the biharmonic equation can be solved locally (so biharmonicity remains invariant under composition with a local homothety), the index is a global notion. 
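Incidentally, Theorem\link\ref{Index-theorem-r-harmonic-circles} lends itself to a direct numerical verification. Anticipating the computations of Section~\ref{proofs}, the restriction of $I_2$ to the subspace $S^{m^2}$ is represented, in the orthonormal basis \eqref{on-bases}, by a symmetric $(4\times 4)$-matrix obtained from Proposition~\ref{proposizione-I-esplicito-s1-sfera}; these matrices coincide with the matrices \eqref{matrici-S-m-0} of the torus case, so Lemma~\ref{lemma-tecnico1} guarantees that all their eigenvalues are positive for $m>k$. The following Python sketch (purely illustrative; the function name is ours) assembles the matrices and counts negative and zero eigenvalues:

```python
import numpy as np

def index_nullity_circle(k, tol=1e-6):
    # S^0 contributes the 2x2 matrix diag(0, -k^4): one zero, one negative eigenvalue.
    index, nullity = 1, 1
    # On S^{m^2}, in the basis (cos V_Y, sin V_Y, cos V_eta, sin V_eta), lam = m^2:
    #   I_2(f V_Y)   = lam (lam + 3k^2) f V_Y   + 2 sqrt(2) k lam f' V_eta
    #   I_2(f V_eta) = (lam^2 - k^4 + 2 k^2 lam) f V_eta - 2 sqrt(2) k lam f' V_Y
    for m in range(1, 2 * k + 1):        # all eigenvalues on S^{m^2} are positive for m > k
        lam, c = float(m * m), 2 * np.sqrt(2) * k * m**3
        a, b = lam * (lam + 3 * k**2), lam**2 - k**4 + 2 * k**2 * lam
        M = np.array([[a, 0, 0, -c],
                      [0, a, c, 0],
                      [0, c, b, 0],
                      [-c, 0, 0, b]])
        ev = np.linalg.eigvalsh(M)
        index += int(np.sum(ev < -tol))
        nullity += int(np.sum(np.abs(ev) <= tol))
    return index, nullity
```

For every $k$ tested this returns $(1+2(k-1),\,3)$, the zero eigenvalues appearing, as expected, precisely on $S^{k^2}$.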
\end{remark} \end{example} \begin{example}\label{example-Legendre} Now we study an example following the lines of \cite{BFO, S1}. Let $\mathbb{S}^{2n+1}=\{z\in\mathbb{C}^{n+1}: |z|=1\}$ be the unit $(2n+1)$-dimensional Euclidean sphere. Let $\mathcal{J}:\mathbb{C}^{n+1}\to\mathbb{C}^{n+1}$, $\mathcal{J}(z)={\mathrm i}z$, be the usual complex structure on $\mathbb{C}^{n+1}$ and set $$ \phi=s\circ\mathcal{J},\qquad \xi_z=-\mathcal{J}z, $$ where $s:T_z\mathbb{C}^{n+1}\to T_z\mathbb{S}^{2n+1}$ is the orthogonal projection, and let $\eta$ denote the $1$-form dual to $\xi$. Endowed with these tensors and the standard metric $h$, the sphere $(\mathbb{S}^{2n+1},\phi,\xi,\eta,h)$ becomes a Sasakian space form with constant $\phi$-sectional curvature equal to $1$. An isometric immersion $\varphi:M^m\to\mathbb{S}^{2m+1}$ is said to be {\it Legendre} if it is {\it integral}, that is, $\eta(d\varphi(X))=0$ for all $X\in {\mathcal C}(TM)$. Sasahara studied proper biharmonic Legendre surfaces immersed in Sasakian space forms and obtained the explicit representations of such surfaces in $\s^5$. In particular, he proved \begin{theorem}[\cite{S1}]\label{th:Chen_T^2 in S^5} Let $\varphi:M^2\to\mathbb{S}^5$ be a proper biharmonic Legendre immersion. Then $\Phi=i\circ\varphi:M^2\to \C^3=\mathbb{R}^6$ is locally given by $$ \Phi(\gamma,\vartheta)=\frac{1}{\sqrt 2}\left (e^{{\mathrm i}\gamma},{\mathrm i}e^{-{\mathrm i}\gamma}\sin(\sqrt 2 \vartheta), {\mathrm i}e^{-{\mathrm i}\gamma}\cos(\sqrt 2 \vartheta)\right), $$ where $i:\mathbb{S}^5 \hookrightarrow\mathbb{R}^6$ is the canonical inclusion. \end{theorem} The map $\varphi$ induces a full proper biharmonic Legendre embedding of the flat torus ${\mathbb T}^2=\mathbb{S}^1\times\mathbb{S}^1(1/\sqrt 2)$ into $\mathbb{S}^5$. This embedding, still denoted by $\varphi$, has constant mean curvature $|H|=1/2$; it is not pseudo-umbilical and its mean curvature vector field is not parallel. 
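As a quick sanity check of the representation in Theorem~\ref{th:Chen_T^2 in S^5}, one can verify symbolically that $\Phi$ takes values in $\mathbb{S}^5$ and that $(\gamma,\vartheta)$ are flat orthonormal coordinates, i.e.\ $|\Phi_\gamma|=|\Phi_\vartheta|=1$ and $\langle\Phi_\gamma,\Phi_\vartheta\rangle=0$ for the real inner product on $\C^3=\R^6$. A short SymPy sketch (purely illustrative):

```python
import sympy as sp

g, t = sp.symbols('gamma vartheta', real=True)
Phi = sp.Matrix([sp.exp(sp.I*g),
                 sp.I*sp.exp(-sp.I*g)*sp.sin(sp.sqrt(2)*t),
                 sp.I*sp.exp(-sp.I*g)*sp.cos(sp.sqrt(2)*t)]) / sp.sqrt(2)

def rdot(u, v):
    # real inner product of u, v in C^3 viewed as R^6
    return sp.simplify(sp.re((u.T * v.conjugate())[0, 0]))

Pg, Pt = Phi.diff(g), Phi.diff(t)
checks = [rdot(Phi, Phi), rdot(Pg, Pg), rdot(Pt, Pt), rdot(Pg, Pt)]
# expected: [1, 1, 1, 0] -- Phi lies on S^5 and (gamma, vartheta) are
# flat orthonormal coordinates on the torus
```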
Moreover, $\Phi=\Phi_p+\Phi_{q}$, where $$ \Phi_{p}(\gamma,\vartheta)=\frac{1}{\sqrt 2}(e^{{\mathrm i}\gamma},0,0), $$ $$ \Phi_{q}(\gamma,\vartheta)=\frac{1}{\sqrt 2}\big(0,{\mathrm i}e^{-{\mathrm i}\gamma}\sin(\sqrt 2 \vartheta), {\mathrm i}e^{-{\mathrm i}\gamma}\cos(\sqrt 2 \vartheta) \big), $$ and $ \Delta \Phi_{p}=\Phi_{p}$, $ \Delta \Phi_{q}=3\Phi_{q}$. Thus $\Phi$ is a $2$-type mass-symmetric immersion in $\mathbb{R}^6$ with eigenvalues $1$ and $3$ and order $[1,3]$ (see \cite{BFO, S1} for more details). Our goal is to determine the index and the nullity of the above embedding. We shall prove: \begin{theorem}\label{th: Sasahara_T2_S5} Let $\varphi:{\mathbb T}^2=\mathbb{S}^1\times\mathbb{S}^1(1/\sqrt 2)\to\mathbb{S}^5$ be the proper biharmonic Legendre embedding. Then \[ \Index(\varphi)= 11 \quad {\rm and} \quad \nul(\varphi)= 18 \,. \] \end{theorem} \begin{remark} Our result completes the analysis of \cite{BFO}, where it was shown that $\Index(\varphi)\geq 11$ and $\nul(\varphi)\geq 18$. \end{remark} \begin{remark} The index and nullity of the biharmonic immersions into spheres that derive from the minimal generalised Veronese immersions have been estimated in \cite{LO}. These maps are pseudo-umbilical immersions with parallel mean curvature vector field and do not have parallelizable pull-back bundle. \end{remark} \end{example} \begin{example}\label{example-noncompact} In this example we study the notion of stability when the domain is not compact. In this case, the most natural approach is to study the second variation \eqref{Hessian-definition} assuming that $\varphi_{t,s} = \varphi$ outside a compact set. Variations of this type are called \textit{compactly supported variations}. Using this type of variations, we study the Hessian bilinear form \begin{equation}\label{hessian-noncompact} H(E_2)_{(\varphi; D)} (V,W)= \left . 
\frac{\partial^2}{\partial t \partial s} \right |_{(t,s)=(0,0)} \hspace{-5mm} E_2 (\varphi_{t,s};D)= \int_D \langle I_2(V),W \rangle dv_M=\int_M \langle I_2(V),W \rangle dv_M\,, \end{equation} where $D$ is a compact set (with smooth boundary), $\varphi_{t,s}=\varphi$ outside $D$ and the vector fields $V,W$ are defined precisely as in \eqref{V-W}, so that they are sections of $\varphi^{-1}TN$ which vanish outside $D$. In particular, if $N=\s^n$, then the explicit expression of $I_2(V)$ can be computed again by using the divergence theorem, now for compactly supported vector fields, and we get the same formula \eqref{I2-general-case}. The spectrum of $I_2$ is not discrete in general, but we can say that a biharmonic map $\varphi$ is \textit{strictly stable} if $H(E_2)_{(\varphi; D)} (V,V)>0$ for all nontrivial compactly supported vector fields $V \in \mathcal{C}\left (\varphi^{-1}TN\right)$. \begin{remark} It can be proved that, when $M$ is not compact and $N$ is flat, any proper biharmonic map $\varphi: M\to N$ is stable, that is, $H(E_2)_{(\varphi; D)} (V,V)\geq 0$ for all compactly supported vector fields $V \in \mathcal{C}\left (\varphi^{-1}TN\right)$. \end{remark} Now we can describe our example. Let $\varphi\,: \R \to \s^2$ be the proper biharmonic map defined by \begin{equation}\label{A-harmonic-examples} \gamma \mapsto \, \left ( \cos(A(\gamma)),\,\sin(A(\gamma)), \,0\right ) \in \s^2\hookrightarrow \R^3 \,, \end{equation} with \begin{equation}\label{A-gamma} A(\gamma)= a \gamma^3+b \gamma^2+c \gamma +d \,, \end{equation} where $a,b,c,d$ are real numbers such that $a^2+b^2>0$. We shall prove: \begin{theorem}\label{th:noncompact} Let $\varphi\,: \R \to \s^2$ be the proper biharmonic map defined in \eqref{A-harmonic-examples}. Assume that \begin{equation}\label{condiz-noncompact} {\rm either} \,\, a=0 \quad {\rm or} \,\, \left\{a\neq 0 \,\,{\rm and} \,\, b^2 -3ac \leq 0 \right \}\,. \end{equation} Then $\varphi$ is strictly stable. 
\end{theorem} We point out that this is the first example of a strictly stable proper biharmonic map into a sphere. In general, according to a result of Jiang \cite{Jiang}, we know that, when $M$ is compact and the tension field $\tau(\varphi)$ is orthogonal to the image, any proper biharmonic map $\varphi:M\to \s^n$ is unstable. By contrast, in the example of Theorem~\ref{th:noncompact} the tension field is tangent to the image of the map. \end{example} \section{Proof of the results}\label{proofs} We shall first prove Theorems\link\ref{Spectrum-theorem-toro-sfera} and \ref{Index-theorem-toro-sfera}. The first step is to derive a workable formula for the operator $I_2:\mathcal{C}\left(\varphi_k^{-1} T\s^2\right) \to \mathcal{C}\left(\varphi_k^{-1} T\s^2\right)$ starting from the general expression \eqref{I2-general-case}. To this end, it is convenient to introduce two suitable vector fields along $\varphi_k$. More specifically, using coordinates $\left(y^1,y^2,y^3 \right)$ on $\R^3$, we define \begin{equation}\label{definizioneY-eta} Y=\sqrt 2 \,\left (-y^2,y^1,0 \right ) \quad {\rm and} \quad \eta= \left(y^1,y^2,\,-\,\frac{1}{\sqrt 2} \right)\,. \end{equation} From a geometric viewpoint, we observe that the image of $\varphi_k$ is a circle $\s^1 (1 /\sqrt 2)$ in $\s^2$. Then the restriction of $Y$ to the circle provides a unit section of $T\s^1 (1 /\sqrt 2)$, while $\eta$ gives rise to a unit section of the normal bundle of $\s^1 (1 /\sqrt 2)$ in $\s^2$. For our future purposes, we shall use the following elementary calculations: \begin{equation}\label{IIfund-form} B(Y,Y)=-\eta \,\, ; \quad A(Y)=-Y \,\,, \end{equation} where $B$ and $A$ denote the second fundamental form and the shape operator respectively. Then, we set \begin{equation}\label{def-V1 e V2} V_Y=Y \left (\varphi_k \right )\quad {\rm and} \quad V_\eta= \eta \left ( \varphi_k \right )\,. 
\end{equation} The vectors $V_Y$, $V_\eta$ provide an orthonormal basis on $T\s^2$ at each point of the image of $\varphi_k$ and it is easy to conclude that each section $V \in \mathcal{C}\left( \varphi_k^{-1}T\s^2\right )$ can be written as \begin{equation}\label{general-section-toro-sfera} V=f_1 \,V_Y +f_2\,V_\eta \,, \end{equation} where $f_j\in C^\infty \left ( {\mathbb T}^2 \right)$, $j=1,2$. For our purposes, it shall be sufficient to study in detail the case that the functions $f_j$ are eigenfunctions of the Laplacian. More precisely, let \[ \Delta= -\left (\frac{\partial^2}{\partial \gamma^2}+ \frac{\partial^2}{\partial \vartheta^2}\right ) \] be the Laplace operator on ${\mathbb T}^2$ and denote by $\lambda_i,\,i \in \n$, its spectrum. We define \begin{equation}\label{sottospazi-S-lambda} S^{\lambda_i}=\left \{ f_1\,V_Y \,\,:\,\, \Delta f_1= \lambda_i f_1 \right \} \oplus \left \{ f_2\,V_\eta \,\,:\,\, \Delta f_2= \lambda_i f_2 \right \} \end{equation} As in \cite{LO}, $S^{\lambda_i} \perp S^{\lambda_j}$ if $i \neq j$ and $\oplus_{i=0}^{+\infty}\, S^{\lambda_i}$ is dense in $\mathcal{C}\left( \varphi_k^{-1}T\s^2\right )$ (note that the scalar product which we use on sections of $ \varphi_k^{-1}T\s^2$ is the standard $L^2$-inner product). Our first key result is: \begin{proposition}\label{proposizione-I-esplicito-toro-sfera} Assume that $f\in C^{\infty}\left ( {\mathbb T}^2 \right )$ is an eigenfunction of $\Delta$ with eigenvalue $\lambda$. Then \begin{equation}\label{prima-espressione-I-toro-sfera} I_2 (f V_Y)= \lambda \left ( \lambda+k^2 \right )fV_Y -2 k^2\,f_{\gamma \gamma}V_Y+2 \sqrt 2 k \lambda f_{\gamma}V_\eta \end{equation} and \begin{equation}\label{seconda-espressione-I-toro-sfera} I_2 (f V_\eta)= \left ( \lambda^2-k^4 \right )fV_\eta -2 k^2 f_{\gamma \gamma}V_\eta-2 \sqrt 2 k \lambda f_{\gamma} V_Y \,. \end{equation} \end{proposition} \begin{proof} We recall that the definition of the map $\varphi_k$ was given in \eqref{equivdatoroasfera}. 
The vector fields $\partial / \partial \gamma$ and $\partial / \partial \vartheta$ descend to vector fields tangent to ${\mathbb T}^2$, forming a global orthonormal frame field of ${\mathbb T}^2$, and we easily find: \begin{equation}\label{dfi} d\varphi_k \left (\frac{\partial}{\partial \gamma} \right )= k \,\frac{\sqrt 2}{2}\, Y \left ( \varphi_k \right )= k \,\frac{\sqrt 2}{2}\, V_Y\quad {\rm and} \quad d\varphi_k \left (\frac{\partial}{\partial \vartheta} \right )= 0 \,, \end{equation} where we have used the vector fields introduced in \eqref{definizioneY-eta} and \eqref{def-V1 e V2}. In order to complete the proof of Proposition\link\ref{proposizione-I-esplicito-toro-sfera} we need to compute all the terms which appear in formula \eqref{I2-general-case}. This shall be done by means of a series of lemmata (to simplify notation, in these lemmata we shall write $\varphi$ instead of $\varphi_k$). \begin{lemma}\label{lemma1} \begin{equation}\label{rough-V1} \overline{\Delta} V_Y=\frac{1}{2}\,k^2 \,V_Y \,. \end{equation} \end{lemma} \begin{proof}[Proof of Lemma~\ref{lemma1}] In general, for $X \in \mathcal{C}(T\,{\mathbb T}^2)$ we have: \begin{eqnarray}\label{1-lemma-1} \nabla_X^\varphi V_Y&=&\nabla_X^\varphi Y(\varphi)=\nabla_{d\varphi(X)}^{\s^2} \,Y\\ \nonumber &=&\nabla_{d\varphi(X)}^{\s^1(1/\sqrt 2)}\, \,Y +B(d\varphi(X),Y)\,. \nonumber \end{eqnarray} If we apply \eqref{1-lemma-1} to $X=\partial / \partial \gamma$ we easily obtain \begin{equation}\label{2-lemma-1} \nabla_{\partial / \partial \gamma}^\varphi \,V_Y = \frac{\sqrt 2}{2}\,k\,B(Y,Y)=-\,\frac{\sqrt 2}{2}\,k\,V_{\eta}\,, \end{equation} where for the last equality we have used \eqref{IIfund-form}. 
Next, \begin{eqnarray}\label{3-lemma-1} \nabla_{\partial / \partial \gamma}^\varphi \,\,\left (\nabla_{\partial / \partial \gamma}^\varphi \,V_Y \right )&=&-\,\frac{\sqrt 2}{2}\,k\, \nabla_{\partial / \partial \gamma}^\varphi \,V_{\eta}\\ \nonumber &=&-\,\frac{\sqrt 2}{2}\,k\, \nabla_{(\sqrt 2/2)kY}^{\s^2} \,\eta\\ \nonumber &=&-\,\frac{1}{2}\,k^2\,(-A(Y)+0)\\ \nonumber &=& -\,\frac{1}{2}\,k^2\,V_Y \,, \nonumber \end{eqnarray} where for the last equality we have used \eqref{IIfund-form}. Since $d\varphi (\partial / \partial \vartheta)$ vanishes, \eqref{rough-V1} follows immediately from \eqref{3-lemma-1} (note the sign convention). \end{proof} \begin{lemma}\label{lemma2} \begin{equation}\label{rough-V2} \overline{\Delta} V_\eta=\frac{1}{2}\,k^2 \,V_\eta \,. \end{equation} \end{lemma} \begin{proof}[Proof of Lemma~\ref{lemma2}] \begin{equation}\label{1-lemma-2} \nabla_{\partial / \partial \gamma}^\varphi \,V_\eta= \nabla_{(\sqrt 2/2)kY}^{\s^2} \,\eta= \frac{\sqrt 2}{2}k V_Y \,, \end{equation} from which \begin{equation}\label{2-lemma-2} \nabla_{\partial / \partial \gamma}^\varphi \left (\nabla_{\partial / \partial \gamma}^\varphi V_\eta \right )= \frac{1}{2}\,k^2 \nabla_Y^{\s^2}Y= -\,\frac{1}{2} k^2\,V_\eta \,. \end{equation} Now \eqref{rough-V2} follows readily. \end{proof} \begin{lemma}\label{lemma3} Assume that $\Delta f= \lambda f$. Then \begin{eqnarray}\label{rough-fV1-fV2} {\rm (i)}\,\,\quad \overline{\Delta} (fV_Y)&=&\left ( \lambda+\frac{k^2}{2}\right )fV_Y+ \sqrt 2kf_\gamma V_\eta\\ \nonumber {\rm (ii)}\,\, \quad \overline{\Delta} (f V_\eta)&=& \left ( \lambda+\frac{k^2}{2}\right )f V_\eta- \sqrt 2 k f_\gamma V_Y\nonumber \end{eqnarray} \end{lemma} \begin{proof}[Proof of Lemma~\ref{lemma3}] This lemma can be easily proved by applying the results of Lemmata\link\ref{lemma1} and \ref{lemma2} to the general formula \begin{equation}\label{general-product-formula} \overline{\Delta} (f\,V)=(\Delta f)\,V-2\,\nabla_{\nabla f}^{\varphi}V+f\,\overline{\Delta} V \,. 
\end{equation} \end{proof} Now, we compute the various terms which appear in the formula \eqref{I2-general-case}. \begin{lemma}\label{lemma4}Assume that $\Delta f= \lambda f$. Then \begin{equation}\label{doppio-rough-fV1} \overline{\Delta}^2 (f V_Y)=\left ( \lambda+\frac{k^2}{2}\right )^2 f V_Y-2k^2 f_{\gamma \gamma} V_Y+ 2\sqrt 2 k \left ( \lambda+\frac{k^2}{2}\right )f_\gamma V_\eta \,. \end{equation} \end{lemma} \begin{proof}[Proof of Lemma~\ref{lemma4}] It suffices to apply \eqref{rough-fV1-fV2}\link(i) twice, together with the observation that, since $\partial / \partial \gamma$ is a Killing field on ${\mathbb T}^2$, we have $\Delta f_\gamma= \lambda\,f_\gamma$. \end{proof} In the same way, we obtain: \begin{lemma}\label{lemma5}Assume that $\Delta f= \lambda f$. Then \begin{equation}\label{doppio-rough-fV2} \overline{\Delta}^2 (f V_\eta)=\left ( \lambda+\frac{k^2}{2}\right )^2 f V_\eta-2k^2 f_{\gamma \gamma} V_\eta- 2\sqrt 2 k \left ( \lambda+\frac{k^2}{2}\right )f_\gamma V_Y \,. \end{equation} \end{lemma} \begin{lemma}\label{lemma6} \begin{equation}\label{1-lemma6} \overline{\Delta}\left ( {\rm trace}\langle f V_Y,d\varphi \cdot \rangle d\varphi \cdot - |d\varphi|^2\,fV_Y \right)=0 \,. \end{equation} \end{lemma} \begin{proof}[Proof of Lemma~\ref{lemma6}] \[ {\rm trace}\langle fV_Y,d\varphi \cdot \rangle d\varphi \cdot - |d\varphi|^2 fV_Y=\langle fY,\frac{\sqrt 2}{2}kY\rangle \frac{\sqrt 2}{2}kY -\frac{k^2}{2} fY =0 \] \end{proof} \begin{lemma}\label{lemma7} \begin{equation}\label{1-lemma7} \overline{\Delta}\left ( {\rm trace}\langle fV_\eta,d\varphi \cdot \rangle d\varphi \cdot - |d\varphi|^2\,fV_\eta \right)= -\,\frac{k^2}{2}\left ( \lambda+\frac{k^2}{2}\right )f\,V_\eta+ \frac{k^3 \sqrt 2}{2}\,f_\gamma\,V_Y\,. 
\end{equation} \end{lemma} \begin{proof}[Proof of Lemma~\ref{lemma7}] \[ {\rm trace}\langle fV_\eta,d\varphi \cdot \rangle d\varphi \cdot - |d\varphi|^2\,fV_\eta=\langle f\eta,\frac{\sqrt 2}{2}kY\rangle \,\frac{\sqrt 2}{2}k V_Y -\frac{k^2}{2} fV_\eta = -\frac{k^2}{2} fV_\eta \,. \] Next, using \eqref{rough-fV1-fV2}\link(ii), we obtain \eqref{1-lemma7}. \end{proof} \begin{lemma}\label{lemma8} \begin{equation}\label{1-lemma8} 2\langle d\tau(\varphi),d\varphi \rangle fV_Y+|\tau(\varphi)|^2 f V_Y=-\,\frac{k^4}{4}\,f\,V_Y \end{equation} \end{lemma} \begin{proof}[Proof of Lemma~\ref{lemma8}]The tension field of an equivariant map of the type \eqref{equivdatoroasfera} is \[ \tau(\varphi)=\left ( \alpha''- \frac{k^2}{2}\sin (2 \alpha) \right ) \frac{\partial}{\partial \alpha} \,. \] Since $\alpha \equiv \pi /4$ we can write, after standard identification, \begin{equation}\label{tau} \tau(\varphi)=- \frac{k^2}{2}\,V_\eta \,. \end{equation} Now \begin{eqnarray}\label{2-lemma8} 2\langle d\tau(\varphi),d\varphi \rangle fV_Y&=&2 \langle\nabla_ {\partial/\partial \gamma}^\varphi \left ( - \frac{k^2}{2} V_\eta\right ), \frac{\sqrt 2}{2}k V_Y \rangle fV_Y \\ \nonumber &=&-\,\frac{k^4}{2}\,f\,V_Y \,, \nonumber \end{eqnarray} from which it is immediate to obtain \eqref{1-lemma8}. \end{proof} In a similar way we obtain: \begin{lemma}\label{lemma9} \begin{equation}\label{1-lemma9} 2\langle d\tau(\varphi),d\varphi \rangle fV_\eta+|\tau(\varphi)|^2 f V_\eta=-\,\frac{k^4}{4}\,f\,V_\eta \end{equation} \end{lemma} \begin{lemma}\label{lemma11} \begin{equation}\label{1-lemma11} -2\,{\rm trace}\langle fV_Y,d\tau(\varphi) \cdot \rangle d\varphi \cdot= \frac{k^4}{2}\,f\,V_Y\,. 
\end{equation} \end{lemma} \begin{proof}[Proof of Lemma~\ref{lemma11}] \begin{eqnarray}\label{2-lemma11} -2\,{\rm trace}\langle fV_Y,d\tau(\varphi) \cdot \rangle d\varphi \cdot&=&-2 \langle fV_Y, \nabla_ {\partial/\partial \gamma}^\varphi \left ( - \frac{k^2}{2} V_\eta\right ) \rangle \frac{\sqrt 2}{2}k V_Y \\ \nonumber &=&\frac{\sqrt 2}{2}k^3 f \langle Y,\frac{\sqrt 2}{2}kY \rangle V_Y \\ \nonumber &=&\frac{k^4}{2}\,f\,V_Y \,.\nonumber \end{eqnarray} \end{proof} \begin{lemma}\label{lemma12} \begin{equation}\label{1-lemma12} -2\,{\rm trace}\langle fV_\eta,d\tau(\varphi) \cdot \rangle d\varphi \cdot= 0\,. \end{equation} \end{lemma} \begin{proof}[Proof of Lemma~\ref{lemma12}] \begin{eqnarray}\label{2-lemma12} -2\,{\rm trace}\langle fV_\eta,d\tau(\varphi) \cdot \rangle d\varphi \cdot&=&-2 \langle fV_\eta, \nabla_ {\partial/\partial \gamma}^\varphi \left ( - \frac{k^2}{2} V_\eta\right ) \rangle \frac{\sqrt 2}{2}k V_Y \\ \nonumber &=&\frac{\sqrt 2}{2}k^3 f \langle \eta,\frac{\sqrt 2}{2}kY \rangle V_Y =0 \,. \nonumber \end{eqnarray} \end{proof} All the calculations involved in the next two lemmata follow the same patterns as above, so we state the relevant results directly, omitting the details of the proofs. 
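The omitted computations are mechanical and can be checked symbolically. As an illustration (not part of the proof; the function names are ours), the following SymPy sketch encodes the rough Laplacian in the frame $\{V_Y,V_\eta\}$, using \eqref{general-product-formula} together with $\nabla^\varphi_{\partial/\partial\gamma}V_Y=-\tfrac{\sqrt2}{2}k\,V_\eta$, $\nabla^\varphi_{\partial/\partial\gamma}V_\eta=\tfrac{\sqrt2}{2}k\,V_Y$ and Lemmata\link\ref{lemma1} and \ref{lemma2}, and verifies Lemma~\ref{lemma4} on the eigenfunction $f=\cos(m\gamma)\cos(n\vartheta)$:

```python
import sympy as sp

g, t = sp.symbols('gamma vartheta', real=True)
m, n, k = sp.symbols('m n k', positive=True)

lap = lambda u: -(sp.diff(u, g, 2) + sp.diff(u, t, 2))   # Laplacian on the torus

def rough_lap(p, q):
    """Rough Laplacian of p*V_Y + q*V_eta, returned as a coefficient pair."""
    new_p = lap(p) + k**2/2*p - sp.sqrt(2)*k*sp.diff(q, g)
    new_q = lap(q) + k**2/2*q + sp.sqrt(2)*k*sp.diff(p, g)
    return new_p, new_q

f = sp.cos(m*g)*sp.cos(n*t)            # eigenfunction with Delta f = lam * f
lam = m**2 + n**2
p2, q2 = rough_lap(*rough_lap(f, 0))   # second rough Laplacian of f V_Y

# compare with the right-hand side of Lemma 4
p_expected = (lam + k**2/2)**2*f - 2*k**2*sp.diff(f, g, 2)
q_expected = 2*sp.sqrt(2)*k*(lam + k**2/2)*sp.diff(f, g)
ok = (sp.simplify(p2 - p_expected) == 0 and sp.simplify(q2 - q_expected) == 0)
```

Replacing $f\,V_Y$ by $f\,V_\eta$ yields the analogous confirmation of Lemma~\ref{lemma5}.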
\renewcommand{\arraystretch}{1.3} \begin{lemma}\label{lemma13} \[\begin{array}{rcl} -2 \,{\rm trace} \langle \tau(\varphi),d(fV_Y) \cdot \rangle d\varphi \cdot &=& -\,\frac{k^4}{2}fV_Y\\ \nonumber - \langle \tau(\varphi),fV_Y \rangle \tau(\varphi)&=&0 \\ \nonumber {\rm trace} \langle d\varphi \cdot,\overline{\Delta}(fV_Y) \rangle d\varphi \cdot &=&\frac{k^2}{2}\left ( \lambda+\frac{k^2}{2}\right )f V_Y \\ \nonumber {\rm trace}\langle d\varphi \cdot,\left ( {\rm trace}\langle fV_Y,d\varphi \cdot \rangle d\varphi \cdot \right ) \rangle d\varphi \cdot&=& \frac{k^4}{4}fV_Y \end{array} \] \[\begin{array}{rcl} -2 |d\varphi|^2\, {\rm trace}\langle d\varphi \cdot,fV_Y \rangle d\varphi \cdot&=&- \frac{k^4}{2}fV_Y \\ \nonumber 2 \langle d(fV_Y),d\varphi\rangle \tau(\varphi) &=& - \frac{k^3 \sqrt 2}{2} f_\gamma V_\eta\\ \nonumber -|d\varphi|^2 \,\overline{\Delta}(fV_Y)&=& -\,\frac{k^2}{2}\left ( \lambda+\frac{k^2}{2}\right )f\,V_Y- \frac{k^3 \sqrt 2}{2} f_\gamma V_\eta\\ \nonumber |d\varphi|^4 fV_Y&=&\frac{k^4}{4}fV_Y \nonumber \end{array} \] \end{lemma} \begin{lemma}\label{lemma14} \[\begin{array}{rcl} -2 \,{\rm trace} \langle \tau(\varphi),d(fV_\eta) \cdot \rangle d\varphi \cdot &=& \frac{k^3\sqrt 2}{2}f_\gamma V_Y\\ \nonumber - \langle \tau(\varphi),fV_\eta \rangle \tau(\varphi)&=&-\,\frac{k^4}{4}fV_\eta \\ \nonumber {\rm trace} \langle d\varphi \cdot,\overline{\Delta}(fV_\eta) \rangle d\varphi \cdot &=&-\,\frac{k^3\sqrt 2}{2}f_\gamma V_Y \\ \nonumber {\rm trace}\langle d\varphi \cdot,\left ( {\rm trace}\langle f V_\eta,d\varphi \cdot \rangle d\varphi \cdot \right ) \rangle d\varphi \cdot&=& 0 \end{array} \] \[\begin{array}{rcl} -2 |d\varphi|^2\, {\rm trace}\langle d\varphi \cdot,fV_\eta \rangle d\varphi \cdot&=&0 \\ \nonumber 2 \langle d(fV_\eta),d\varphi\rangle \tau(\varphi) &=& - \frac{k^4}{2}\,f V_\eta\\ \nonumber -|d\varphi|^2 \,\overline{\Delta}(fV_\eta)&=& -\,\frac{k^2}{2}\left ( \lambda+\frac{k^2}{2}\right )f\,V_\eta+ \frac{k^3 \sqrt 2}{2}\,f_\gamma\,V_Y\\ 
\nonumber |d\varphi|^4 fV_\eta&=&\frac{k^4}{4}fV_\eta \,\,.\nonumber \end{array} \] \end{lemma} \renewcommand{\arraystretch}{1.} Now we are able to end the proof of Proposition\link\ref{proposizione-I-esplicito-toro-sfera}. As for \eqref{prima-espressione-I-toro-sfera}, it suffices to substitute the results of Lemmata\link\ref{lemma4}, \ref{lemma6}, \ref{lemma8}, \ref{lemma11} and \ref{lemma13} into \eqref{I2-general-case} and add up. Similarly, \eqref{seconda-espressione-I-toro-sfera} can be obtained using Lemmata\link\ref{lemma5}, \ref{lemma7}, \ref{lemma9}, \ref{lemma12} and \ref{lemma14}. \end{proof} We are now in a position to prove our main theorems. \subsection{Proof of Theorem~\ref{Spectrum-theorem-toro-sfera}} The eigenvalues of $\Delta$ on ${\mathbb T}^2$ have the form $\lambda=m^2+n^2$, with $m,n \in \n$. In particular, $\lambda_0=0$, \begin{equation}\label{sottospazi-S-lambda-0} S^{\lambda_0}=\left \{ c_1\,V_Y \,\,:\,\, c_1 \in \R\right \} \oplus \left \{ c_2\,V_\eta \,\,:\,\, c_2\in \R\right \} \end{equation} and $\dim \left( S^{\lambda_0} \right)=2$. It follows by a direct application of Proposition\link\ref{proposizione-I-esplicito-toro-sfera} that the restriction of $I_2$ to $S^{\lambda_0}$ gives rise to the eigenvalues $\mu_0=0$ and $\mu_1=-k^4$. Next, let us consider the case $\lambda=m^2+n^2 >0$ and denote by $W_{\lambda}$ the corresponding eigenspace. In a similar fashion to \cite{BFO}, we decompose \begin{equation}\label{sottospazi-W-lambda} W_{\lambda}=W^{m,0}\oplus_{m,n \geq 1}W^{m,n}\oplus W^{0,n}\,, \end{equation} where it is understood that in \eqref{sottospazi-W-lambda} we have to consider all the possible couples $(m,n)\in \n \times \n$ such that $\lambda=m^2+n^2$. By way of example, if $\lambda=4$, then the possible couples are $(2,0)$ and $(0,2)$. If $\lambda=5$, then the possible couples are $(1,2)$ and $(2,1)$. 
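This bookkeeping of couples is elementary but easy to get wrong, so we record a short Python sketch (illustrative only; the names are ours) which enumerates the couples and the resulting dimension of $W_\lambda$, and cross-checks it against the multiplicity of $\lambda$ in the spectrum of $\Delta$ on ${\mathbb T}^2$, i.e.\ the number of integer lattice points on the circle of radius $\sqrt\lambda$:

```python
from math import isqrt

def couples(lam):
    """All couples (m, n) in N x N with m^2 + n^2 = lam (lam >= 1)."""
    r = isqrt(lam)
    return [(m, n) for m in range(r + 1) for n in range(r + 1)
            if m * m + n * n == lam]

def dim_W(lam):
    # couples (m,0) and (0,n) span 2-dimensional pieces,
    # couples with m, n >= 1 span 4-dimensional ones
    return sum(2 if 0 in c else 4 for c in couples(lam))

def mult_Z2(lam):
    # multiplicity of lam in Spec(Delta) on T^2: points of Z^2 on the circle
    r = isqrt(lam)
    return sum(1 for m in range(-r, r + 1) for n in range(-r, r + 1)
               if m * m + n * n == lam)
```

For instance, `couples(5)` returns `[(1, 2), (2, 1)]` and `dim_W(5) == mult_Z2(5) == 8`.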
The subspaces of the type $W^{m,0}$ are $2$-dimensional and are spanned by the functions $\left \{\cos (m\gamma),\sin (m\gamma) \right \}$. Similarly, $W^{0,n}$ is $2$-dimensional and is generated by $\left \{\cos (n\vartheta),\sin (n\vartheta) \right \}$. Finally, the subspaces $W^{m,n}$, with $m,n \geq 1$, have dimension $4$ and are spanned by \[ \left \{\cos (m\gamma)\cos(n\vartheta),\cos (m\gamma)\sin (n\vartheta),\sin (m\gamma)\cos(n\vartheta),\sin (m\gamma)\sin (n\vartheta)\right \} \] Now it becomes natural to define \begin{equation}\label{sottospazi-S-m,n} S^{m,n}=\left \{ f_1\,V_Y \,\,:\,\, f_1 \in W^{m,n} \right \} \oplus \left \{ f_2\,V_\eta \,\,:\,\, f_2 \in W^{m,n} \right \} \,\,. \end{equation} All these subspaces are orthogonal to each other. Moreover, for any positive eigenvalue $\lambda_i$, we have \begin{equation}\label{decomposizioneS-lambda-in-coppie-m-n} S^{\lambda_i}= \oplus_{m^2+n^2=\lambda_i}\,\,S^{m,n}\,. \end{equation} It follows easily from Proposition\link\ref{proposizione-I-esplicito-toro-sfera} that the operator $I_2$ preserves each of the subspaces $S^{m,n}$. Therefore, its spectrum can be computed by determining the eigenvalues of the matrices associated to the restriction of $I_2$ to each of the $S^{m,n}\,$'s. We separate three cases: \textbf{Case 1:} $S^{m,0}$, $m \geq1$. In this case, an orthonormal basis of $S^{m,0}$ is given by: \[ \left \{\frac{\cos (m\gamma)}{\sqrt{2} \,\pi}\,V_Y,\frac{\sin (m\gamma)}{\sqrt{2} \,\pi}\,V_Y,\frac{\cos (m\gamma)}{\sqrt{2} \,\pi}\,V_\eta,\frac{\sin (m\gamma)}{\sqrt{2} \,\pi}\,V_\eta\right \} \,. 
\] Using Proposition\link\ref{proposizione-I-esplicito-toro-sfera}, a direct computation shows that in this case the $(4\times 4)$-matrices associated to the operator $I_2$ are: \begin{equation}\label{matrici-S-m-0} \left( \begin{array}{cccc} m^2\left(m^2+3k^2 \right ) & 0 & 0 & -2\sqrt 2\,k m^3 \\ 0& m^2\left(m^2+3k^2 \right )&2 \sqrt 2\,k m^3&0\\ 0&2 \sqrt 2\,k m^3 & m^4+2k^2m^2-k^4 &0 \\ -2 \sqrt 2\,k m^3 & 0&0&m^4+2k^2m^2-k^4 \\ \end{array} \right) \end{equation} The eigenvalues of these matrices are precisely the $\lambda_{m,0}^+\,$'s and $\lambda_{m,0}^-\,$'s indicated in the statement of Theorem\link\ref{Spectrum-theorem-toro-sfera}. Each of them has multiplicity equal to $2$. \textbf{Case 2:} $S^{0,n}$, $n \geq1$. In this case, an orthonormal basis of $S^{0,n}$ is given by: \[ \left \{ \frac{\cos (n\vartheta)}{\sqrt{2} \,\pi}\,V_Y,\frac{\sin (n\vartheta)}{\sqrt{2} \,\pi}\,V_Y,\frac{\cos (n\vartheta)}{\sqrt{2} \,\pi}\,V_\eta,\frac{\sin (n\vartheta)}{\sqrt{2} \,\pi} \,V_\eta\right \} \,. \] Using Proposition\link\ref{proposizione-I-esplicito-toro-sfera}, a direct computation immediately shows that in this case the $(4\times 4)$-matrices associated to the operator $I_2$ are: \begin{equation}\label{matrici-S-0-n} \left( \begin{array}{cccc} n^2\left(n^2+k^2 \right ) & 0 & 0 & 0 \\ 0& n^2\left(n^2+k^2 \right )&0&0\\ 0&0 & n^4-k^4 &0 \\ 0 & 0&0&n^4-k^4 \\ \end{array} \right) \end{equation} The eigenvalues of these matrices are obviously those indicated with $\lambda_{0,n}^+,\lambda_{0,n}^-$ in the statement of Theorem\link\ref{Spectrum-theorem-toro-sfera}. Each of them has multiplicity equal to $2$. \textbf{Case 3:} $S^{m,n}$, $m,n \geq1$. This is the case which requires the greatest computational effort. 
An orthonormal basis of $S^{m,n}$ is given by: \renewcommand{\arraystretch}{1.5} \[\begin{array}{l} \left \{ \frac{1}{\pi} \,\cos (m \gamma)\cos (n\vartheta)\,V_Y, \frac{1}{\pi} \,\cos (m \gamma)\sin (n\vartheta)\,V_Y, \frac{1}{\pi} \,\sin (m \gamma)\cos (n\vartheta)\,V_Y, \frac{1}{\pi} \,\sin (m \gamma)\sin (n\vartheta)\,V_Y, \right . \\ \nonumber \left . \frac{1}{\pi} \,\cos (m \gamma)\cos (n\vartheta)\,V_\eta, \frac{1}{\pi} \,\cos (m \gamma)\sin (n\vartheta)\,V_\eta, \frac{1}{\pi} \,\sin (m \gamma)\cos (n\vartheta)\,V_\eta, \frac{1}{\pi} \,\sin (m \gamma)\sin (n\vartheta)\,V_\eta\right \} \,. \end{array} \] \renewcommand{\arraystretch}{1.} Using Proposition\link\ref{proposizione-I-esplicito-toro-sfera} and computing we find that in this case the $(8\times 8)$-matrices associated to the operator $I_2$ can be described as follows. Set \begin{eqnarray*} A_{m,n,k}&=& \left(3 m^2+n^2\right) k^2+\left(m^2+n^2\right)^2 \\ B_{m,n,k}&= & -k^4+2 m^2 k^2+\left(m^2+n^2\right)^2\\ C_{m,n,k}&=& 2 \sqrt{2} k m \left(m^2+n^2\right) \end{eqnarray*} Then the matrices are: \begin{equation}\label{matrici-S-m-n} \left( \begin{array}{cccccccc} A_{m,n,k} & 0 & 0 & 0 & 0 & 0 & -C_{m,n,k} & 0 \\ 0 & A_{m,n,k} & 0 & 0 & 0 & 0 & 0 & -C_{m,n,k} \\ 0 & 0 & A_{m,n,k} & 0 & C_{m,n,k} & 0 & 0 & 0 \\ 0 & 0 & 0 & A_{m,n,k}& 0 & C_{m,n,k} & 0 & 0 \\ 0 & 0 & C_{m,n,k} & 0 & B_{m,n,k} & 0 & 0 & 0 \\ 0 & 0 & 0 & C_{m,n,k} & 0 & B_{m,n,k} & 0 & 0 \\ -C_{m,n,k} & 0 & 0 & 0 & 0 & 0 &B_{m,n,k} & 0 \\ 0 & -C_{m,n,k}& 0 & 0 & 0 & 0 & 0 & B_{m,n,k} \end{array} \right) \end{equation} The characteristic polynomial of this matrix is: \[ \left[A_{m,n,k} (\lambda-B_{m,n,k})+\lambda (B_{m,n,k}-\lambda)+C_{m,n,k}^2\right]^4 \,. \] Then a straightforward computation shows that the eigenvalues are the $\lambda_{m,n}^{\pm}\,$'s given in the statement of Theorem\link\ref{Spectrum-theorem-toro-sfera}. Each of them has multiplicity equal to $4$ and this ends the proof. 
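The eigenvalue computation for \eqref{matrici-S-m-n} can also be double-checked numerically. The following Python sketch (a sanity check, not part of the proof; the function name is ours) builds the $(8\times 8)$-matrix and compares its spectrum with the roots $\tfrac12\big(A_{m,n,k}+B_{m,n,k}\pm\sqrt{(A_{m,n,k}-B_{m,n,k})^2+4\,C_{m,n,k}^2}\,\big)$ of the quadratic factor $(A_{m,n,k}-\lambda)(B_{m,n,k}-\lambda)-C_{m,n,k}^2$ coming from the characteristic polynomial above:

```python
import numpy as np

def block(m, n, k):
    """The (8x8)-matrix of I_2 on S^{m,n} and its predicted eigenvalues."""
    s = m*m + n*n
    A = (3*m*m + n*n)*k**2 + s*s
    B = -k**4 + 2*m*m*k**2 + s*s
    C = 2*np.sqrt(2)*k*m*s
    M = np.diag([A]*4 + [B]*4).astype(float)
    # off-diagonal couplings, as in the displayed matrix
    for i, j, sgn in [(0, 6, -1), (1, 7, -1), (2, 4, 1), (3, 5, 1)]:
        M[i, j] = M[j, i] = sgn*C
    # roots of (A - x)(B - x) = C^2, each with multiplicity 4
    disc = np.sqrt((A - B)**2 + 4*C*C)
    return M, 0.5*(A + B - disc), 0.5*(A + B + disc)

for m, n, k in [(1, 1, 1), (2, 1, 2), (3, 2, 4)]:
    M, lam_minus, lam_plus = block(m, n, k)
    ev = np.linalg.eigvalsh(M)          # ascending: 4 copies of each root
    assert np.allclose(ev, [lam_minus]*4 + [lam_plus]*4)
```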
\subsection{Proof of Theorem~\ref{Index-theorem-toro-sfera}} First, it is obvious that the subspace $S^{\lambda_0}$ yields a contribution of $+1$ for both ${\rm Index}\left( \varphi_{k} \right)$ and ${\rm Nullity}\left( \varphi_{k} \right)$. Next, we examine the subspaces of the type $S^{0,n}$. It is immediate to conclude that their contribution to ${\rm Index}\left( \varphi_{k} \right)$ is $2(k-1)$, because we have $k-1$ negative eigenvalues of multiplicity $2$. The contribution of $S^{0,n}$ to ${\rm Nullity}\left( \varphi_{k} \right)$ is always equal to $2$ because $\lambda^{-}_{0,n}$ vanishes if and only if $n=k$, and $\lambda^{+}_{0,n}$ is always positive. The same conclusions hold for the subspaces of the type $S^{m,0}$. Indeed, it is obvious that $\lambda_{m,0}^+$ is positive for all $m \geq 1$. As for $\lambda_{m,0}^-$, our claim is an immediate consequence of the following lemma: \begin{lemma}\label{lemma-tecnico1} If $1\leq m \leq k-1$, then $\lambda_{m,0}^- <0$. If $m=k$, then $\lambda_{m,0}^- =0$. If $m>k$, then $\lambda_{m,0}^- >0$. \end{lemma} \begin{proof} [Proof of Lemma~\ref{lemma-tecnico1}] The eigenvalue $\lambda_{m,0}^- $ has the same sign as the expression \begin{equation}\label{1} -k^4+5k^2 m^2+2 m^4- \sqrt{k^8+2 k^6 m^2+k^4 m^4+32k^2 m^6 } \,. \end{equation} Setting $m=ck$ in \eqref{1} we obtain: \[ k^4 \, \left ( -1+5c^2+2 c^4- \sqrt{1+2 c^2+c^4+32c^6 } \right ) = k^4 \, h(c)\,. \] Now a routine analysis shows that $h'(c)$ is positive for $c>0$ and that $h(1)=0$; from this the conclusion of the lemma follows immediately. \end{proof} By way of summary, the total contribution of the subspaces $S^{\lambda_0}$, $S^{0,n}$ and $S^{m,0}$ to the index and the nullity of $\varphi_k$ is $1+4(k-1)$ and $5$ respectively. To these values, we have to add the contributions coming from the subspaces of the type $S^{m,n}$ with $m,n \geq 1$. 
Now, we observe that all the eigenvalues of the type $\lambda_{m,n}^+$ are positive: this follows from the fact that the expression {\small \[ -k^4+k^2 \left(5 m^2+n^2\right)+2 \left(m^2+n^2\right)^2 + \sqrt{k^8+2 k^6 \left(m^2+n^2\right)+k^4 \left(m^2+n^2\right)^2+32 k^2 m^2 \left(m^2+n^2\right)^2} \]} vanishes when $m=n=0$ and it is increasing with respect to both $m$ and $n$. Therefore, the conclusion of the proof is an immediate consequence of the definition of the functions $f$ and $g$ in \eqref{definizione-f(k)} and \eqref{definizione-g(k)} respectively, together with the fact that each of the $\lambda_{m,n}^-\,$'s has multiplicity $4$. \begin{remark} Taking into account the expression of the eigenvectors corresponding to $\mu_0=\lambda^-_{k,0}=\lambda^-_{0,k}=0$ we have the following geometric description of the space where $I_2$ vanishes: $$ \Big\{d\varphi_k(X)\colon X\; {\rm Killing}\Big\}\oplus W^{0,k} V_{\eta}\oplus \Big\{ \frac{1}{k^2}d\varphi_k(\nabla f)+f V_{\eta}\colon f\in W^{k,0}\Big\}\,. $$ In a similar way, we can describe the space where $H(E_2)_{\varphi_k}$ is negative definite. For example, if $k=1$, the space where $H(E_2)_{\varphi_1}$ is negative definite is $\left \{ c_2 V_\eta \colon c_2 \in \R \right \}$. If $k=2$ the situation becomes more interesting and here we report the explicit description. 
More precisely, a computation shows that the space where $H(E_2)_{\varphi_2}$ is negative definite is \[ \left \{ c_2 V_\eta \colon c_2 \in \R \right \} \oplus W^{0,1}V_\eta \oplus \mathcal{A}_{1,0}\oplus \mathcal{A}_{1,1}\oplus \mathcal{A}_{2,1} \,, \] where \begin{eqnarray*} \mathcal{A}_{1,0}&=&\left \{ \frac{1}{4}(-5+\sqrt{33})d\varphi_2(\nabla f)+fV_\eta \,\,\colon f \in W^{1,0}\right \}\, ;\\ \mathcal{A}_{1,1}&=&\left \{ \frac{1}{4}(-3+\sqrt{17})d\varphi_2(\nabla f)+fV_\eta \,\,\colon f \in W^{1,1}\right \}\, ;\\ \mathcal{A}_{2,1}&=&\left \{ \frac{1231+41\sqrt{881}}{80\,(59+2\sqrt{881})}\,\,d\varphi_2(\nabla f)+fV_\eta \,\,\colon f \in W^{2,1}\right \} \,. \end{eqnarray*} Similar computations can also be performed in the cases $k \geq3$, but they are rather long and so we omit further details. \end{remark} \subsection{Proof of Theorem\link\ref{Index-theorem-r-harmonic-circles}} Again, we use vector fields $V_Y$ and $V_\eta$ defined precisely as in \eqref{def-V1 e V2}. The vectors $V_Y$, $V_\eta$ provide an orthonormal basis on $T\s^2$ at each point of the image of $\varphi_k$ and it is easy to conclude that each section $V \in \mathcal{C}\left( \varphi_k^{-1}T\s^2\right )$ can be written as \begin{equation}\label{general-section-toro-sfera-bis} V=f_1 \,V_Y +f_2\,V_\eta \,, \end{equation} where $f_j\in C^\infty \left ( \s^1 \right)$, $j=1,2$. The version of Proposition\link\ref{proposizione-I-esplicito-toro-sfera} in this context is: \begin{proposition}\label{proposizione-I-esplicito-s1-sfera} Assume that $f\in C^{\infty}\left ( \s^1 \right )$ is an eigenfunction of $\Delta$ with eigenvalue $\lambda$. Then \begin{equation}\label{prima-espressione-I-s1-sfera} I_2 (fV_Y)= \lambda \,\left ( \lambda+3k^2 \right )fV_Y +2 \sqrt 2 k \lambda f_{\gamma}V_\eta \end{equation} and \begin{equation}\label{seconda-espressione-I-s1-sfera} I_2 (fV_\eta)= \left ( \lambda^2-k^4 +2k^2\lambda\right )fV_\eta -2 \sqrt 2 k \lambda f_{\gamma} V_Y \,. 
\end{equation} \end{proposition} The proof is based again on the general formula \eqref{I2-general-case}. The necessary calculations are entirely similar to those of Proposition\link\ref{proposizione-I-esplicito-toro-sfera} and so we omit the details. Next, we decompose $\mathcal{C}\left( \varphi_k^{-1}T\s^2\right )$ in a similar fashion to \eqref{sottospazi-S-lambda}. We recall that the spectrum of $\Delta$ on $\s^1$ is $\{ m^2 \}_{m\in\n}$ and, for $m \in \n$, we define \begin{equation}\label{sottospazi-S-m} S^{m^2}=\left \{ f_1\,V_Y\,\,:\,\, \Delta f_1= m^2 f_1 \right \} \oplus \left \{ f_2\,V_\eta\,\,:\,\, \Delta f_2= m^2 f_2 \right \} \,. \end{equation} Then we know that $S^{m^2} \perp S^{m'^2}$ if $m \neq m'$, and $\oplus_{m=0}^{+\infty}\, S^{m^2}$ is dense in $\mathcal{C}\left( \varphi_k^{-1}T\s^2\right )$. Moreover, $I_2$ preserves all these subspaces. Now, we observe that $\dim \left (S^0 \right )=2$ and that an orthonormal basis of $S^0$ is $\left\{u_1,u_2 \right\}$, where \[ u_1= \frac{1}{\sqrt {2\pi}}\,V_Y,\,\quad u_2= \frac{1}{\sqrt {2\pi}}\,V_\eta\,. \] Now, using \eqref{prima-espressione-I-s1-sfera} and \eqref{seconda-espressione-I-s1-sfera}, it is immediate to construct the $(2 \times 2)$-matrix which describes the restriction of $I_2$ to $S^0$: \begin{equation}\label{matrix-S-0} \left (\begin{array}{rr} 0&0 \\ 0&-k^4 \end{array} \right ) \end{equation} from which we deduce immediately that the contribution of $S^0$ to the index and the nullity of $\varphi_k$ is $+1$ for both. Next, we study the subspaces $S^{m^2}$, $m \geq1$. First, we observe that $\dim \left (S^{m^2} \right )=4$ and that an orthonormal basis of $S^{m^2}$ is $\left\{u_1,u_2,u_3,u_4 \right\}$, where {\small \begin{equation}\label{on-bases} u_1= \frac{1}{\sqrt {\pi}}\,\cos (m\gamma) \,V_Y,\; u_2= \frac{1}{\sqrt {\pi}}\,\sin (m\gamma) \,V_Y,\; u_3= \frac{1}{\sqrt {\pi}}\,\cos (m\gamma) \,V_\eta,\; u_4= \frac{1}{\sqrt {\pi}}\,\sin (m\gamma) \,V_\eta\,. 
\end{equation} } Now, using \eqref{prima-espressione-I-s1-sfera} and \eqref{seconda-espressione-I-s1-sfera}, we construct the $(4 \times 4)$-matrices which describe the restriction of $I_2$ to $S^{m^2}$. The outcome is: \begin{equation}\label{matrici-I2-Sm} \left( \begin{array}{cccc} m^2 \left(3 k^2+m^2\right) & 0 & 0 & -2 \sqrt{2} k m^3 \\ 0 & m^2 \left(3 k^2+m^2\right) & 2 \sqrt{2} k m^3 & 0 \\ 0 & 2 \sqrt{2} k m^3 & m^4+2 m^2 k^2-k^4 & 0 \\ -2 \sqrt{2} k m^3 & 0 & 0 & m^4+2 m^2 k^2-k^4 \end{array} \right) \,, \end{equation} whose eigenvalues are \begin{equation}\label{autovalori-I2-Sm} \lambda_m^{\pm}=\frac{1}{2} \left(-k^4+2 m^4+5 k^2 m^2\pm \sqrt{k^8+2 k^6 m^2+k^4 m^4+32 k^2m^6}\right) \end{equation} with multiplicity equal to $2$. Now, all the $\lambda_m^+$'s are clearly positive and so they contribute neither to the index nor to the nullity of $\varphi_k$. As for the $\lambda_m^-$'s, we can apply Lemma\link\ref{lemma-tecnico1}: it follows that the contribution to the nullity of $\varphi_k$ is $+2$ (coming from $\lambda_{k}^-$), while the contribution to the index is $+2(k-1)$, arising from $1 \leq m \leq k-1$, so that the proof of Theorem\link\ref{Index-theorem-r-harmonic-circles} is completed. \subsection{Proof of Theorem\link\ref{th: Sasahara_T2_S5}} The first part of the proof follows the lines of Theorem\link\ref{Spectrum-theorem-toro-sfera} and \cite{BFO}. For the sake of completeness and clarity, we report here the relevant facts. Let $ X_1=d\pi(\partial/\partial \gamma)$, $X_2=d\pi(\partial/\partial \vartheta)$, where $\pi$ denotes the projection from $\mathbb{R}^2$ to ${\mathbb T}^2$. If $ U_1=d\varphi(X_1)$, $U_2=d\varphi(X_2)$, then $\tau(\varphi)=-\phi(U_1)$. Moreover, the sections $U_1$, $U_2$, $\phi(U_1)$, $\phi(U_2)$ and $\xi$ parallelize the pull-back bundle $\varphi^{-1}T\mathbb{S}^5$. 
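As an aside, the closed-form eigenvalues \eqref{autovalori-I2-Sm} obtained above admit a quick numerical cross-check: the matrix \eqref{matrici-I2-Sm} splits into two identical symmetric $2\times 2$ blocks on the index pairs $(1,4)$ and $(2,3)$. A minimal sketch (an illustration on small values of $k$ and $m$, not part of the proof):

```python
import math

def lam(k, m, sign):
    # closed-form eigenvalues from (autovalori-I2-Sm)
    return 0.5*(-k**4 + 2*m**4 + 5*k**2*m**2
                + sign*math.sqrt(k**8 + 2*k**6*m**2 + k**4*m**4 + 32*k**2*m**6))

def block_eigs(k, m):
    # each 2x2 block of (matrici-I2-Sm) is [[A, -B], [-B, D]] (up to the sign of B)
    A = m**2*(3*k**2 + m**2)
    D = m**4 + 2*m**2*k**2 - k**4
    B = 2*math.sqrt(2)*k*m**3
    disc = math.sqrt(((A - D)/2)**2 + B**2)
    return (A + D)/2 - disc, (A + D)/2 + disc

for k in range(1, 6):
    for m in range(1, 8):
        lo, hi = block_eigs(k, m)
        assert abs(lo - lam(k, m, -1)) < 1e-8*max(1.0, abs(lo))
        assert abs(hi - lam(k, m, +1)) < 1e-8*max(1.0, abs(hi))
    # the kernel contribution: lambda_k^- vanishes at m = k
    assert abs(lam(k, k, -1)) < 1e-6
```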
As in \eqref{sottospazi-S-lambda}, we consider $$ S^\lambda=\{fU_1\}_{f\in W_\lambda}\oplus\{fU_2\}_{f\in W_\lambda} \oplus\{f\phi(U_1)\}_{f\in W_\lambda} \oplus\{f\phi(U_2)\}_{f\in W_\lambda} \oplus\{f\xi\}_{f\in W_\lambda}, $$ where $W_{\lambda}=\{f\in C^{\infty}({\mathbb T}^2): \Delta f=\lambda f\}$. In this example, our torus is ${\mathbb T}^2=\s^1\times\s^1(1/\sqrt 2)$ and so the eigenvalues of its Laplace operator are of the form $\lambda=m^2+2n^2$, with $m,n \in \n$. As above, $S^{\lambda_i} \perp S^{\lambda_j}$ if $i \neq j$ and $\oplus_{i=0}^{+\infty}\, S^{\lambda_i}$ is dense in $\mathcal{C}\left( \varphi^{-1}T\s^5\right )$. The following version of Proposition\link\ref{proposizione-I-esplicito-toro-sfera} in this context was obtained in \cite{BFO}: \begin{proposition}\label{prop-Legendre} Assume that $f \in W_{\lambda}$. Then \begin{align*} I_2(fU_1)=&(\lambda^2f-4X_2(X_2(f)))U_1-4X_1(X_2(f))U_2\\ &+4(\lambda+1)X_2(f)\phi(U_2)+(2\lambda f-4X_2(X_2(f)))\xi,\\ \mbox{} I_2(fU_2)=&-4X_2(X_1(f))U_1+(\lambda^2+6\lambda)fU_2\\ & +4\lambda X_2(f)\phi(U_1)+ 4(\lambda+1)X_1(f)\phi(U_2) \\&-8X_1(X_2(f))\xi, \end{align*} \begin{align*} I_2(f\phi(U_1))=&-4\lambda X_2(f)U_2\\ &+(\lambda^2+4\lambda-4)f\phi(U_1)-8X_1(X_2(f))\phi(U_2)\\ &-4\lambda X_1(f)\xi,\\ \mbox{} I_2(f\phi(U_2))=&-4(\lambda+1)X_2(f)U_1-4(\lambda+1)X_1(f)U_2\\ &-8X_1(X_2(f))\phi(U_1)+((\lambda^2+6\lambda)f-4X_2(X_2(f)))\phi(U_2)\\ &-4(\lambda+1)X_2(f)\xi,\\ \mbox{} I_2(f\xi)=&(2\lambda f-4X_2(X_2(f)))U_1-8X_1(X_2(f))U_2\\ &+4\lambda X_1(f)\phi(U_1)+4(\lambda+1)X_2(f)\phi(U_2)\\ &+(\lambda^2+4\lambda)f\xi. 
\end{align*} \end{proposition} As for $\lambda_0=0$, we have: \begin{equation}\label{sottospazi-S-lambda-0-Legendre} S^{\lambda_0}=\left \{ c_1\,U_1 \right \} \oplus \left \{ c_2\,U_2 \right \}\oplus \left \{ c_3\,\phi(U_1) \right \}\oplus \left \{ c_4\,\phi(U_2) \right \}\oplus \left \{ c_5\,\xi \right \}\,, \end{equation} where $c_i \in \R$, $\,i=1, \ldots, 5$, so that $\dim \left( S^{\lambda_0} \right)=5$. Next, let us consider the case that $\lambda=m^2+2n^2 >0$. We decompose \begin{equation}\label{sottospazi-W-lambda-Legendre} W_{\lambda}=W^{m,0}\oplus_{m,n \geq 1}W^{m,n}\oplus W^{0,n}\,, \end{equation} where it is understood that in \eqref{sottospazi-W-lambda-Legendre} we have to consider all the possible couples $(m,n)\in \n \times \n$ such that $\lambda=m^2+2n^2$. Now, the subspaces of the type $W^{m,0}$ are $2$-dimensional and are spanned by the functions $\left \{\cos (m\gamma),\sin (m\gamma) \right \}$. Similarly, $W^{0,n}$ is $2$-dimensional and is generated by $\left \{\cos (\sqrt 2 n\vartheta),\sin (\sqrt 2 n\vartheta) \right \}$. Finally, the subspaces $W^{m,n}$, with $m,n \geq 1$, have dimension $4$ and are spanned by $\left \{g_1,g_2,g_3,g_4 \right \}$, where \begin{eqnarray}\label{definiz-g1-2-3-4} &g_1=\cos (m\gamma)\cos(\sqrt 2 n\vartheta),\,\quad g_2=\cos (m\gamma)\sin (\sqrt 2 n\vartheta),\\\nonumber &g_3=\sin (m\gamma)\cos(\sqrt 2 n\vartheta),\,\quad g_4=\sin (m\gamma)\sin (\sqrt 2 n\vartheta) \,.\nonumber \end{eqnarray} Now it becomes natural to define \begin{equation}\label{sottospazi-S-m,n-Legendre} S^{m,n}=\left \{ f_1\,U_1\right \} \oplus \left \{ f_2\,U_2\right \} \oplus \left \{ f_3\,\phi(U_1)\right \}\oplus \left \{ f_4\,\phi(U_2)\right \}\oplus \left \{ f_5\,\xi\right \}\,, \end{equation} where $f_i \in W^{m,n}$, $i=1, \ldots, 5$. All these subspaces are orthogonal to each other. 
Moreover, for any positive eigenvalue $\lambda_i$, we have \begin{equation}\label{decomposizioneS-lambda-in-coppie-m-n} S^{\lambda_i}= \oplus_{m^2+2n^2=\lambda_i}\,\,S^{m,n}\,. \end{equation} Since $X_1$ and $X_2$ are Killing vector fields on ${\mathbb T}^2$ it follows easily from Proposition\link\ref{prop-Legendre} that the operator $I_2$ preserves each of the subspaces $S^{m,n}$. Therefore, its spectrum can be computed by determining the eigenvalues of the matrices associated to the restriction of $I_2$ to each of the $S^{m,n}\,$'s. The contribution to the index and the nullity of $\varphi$ arising from the subspaces $S^{\lambda_0}$, $S^{m,0}$, $S^{0,n}$, $S^{1,1}$ and $S^{2,1}$ has already been calculated in \cite{BFO}: it is $1+6+0+4+0=11$ for the index and $4+2+8+0+4=18$ for the nullity. By way of summary, in order to complete the proof of the theorem, we just have to study the subspaces $S^{m,n}$ in the remaining cases and show that they contribute neither to the index nor to the nullity of $\varphi$. To this purpose, we first observe that $\dim (S^{m,n})=20$ for all $m,n \in \n^*$ and an orthonormal basis for these subspaces is: \begin{equation}\label{base-ortonormale-S-m-n} \left \{c^*g_1U_1,c^*g_2 U_1,c^*g_3U_1,c^*g_4U_1,c^*g_1U_2,\ldots,c^*g_1\phi(U_1),\ldots, c^*g_1\phi(U_2),\ldots,c^*g_1\xi,\ldots\right \}, \end{equation} where $g_1,g_2,g_3,g_4$ are the functions introduced in \eqref{definiz-g1-2-3-4} and $c^*=\sqrt[4]{2}/\pi$. Next, using Proposition\link\ref{prop-Legendre}, a long but straightforward calculation leads us to the expression of the $20 \times 20$-matrices which describe the restriction of $I_2$ to the $S^{m,n}$'s with respect to the orthonormal bases \eqref{base-ortonormale-S-m-n}. 
The result is ($\lambda=m^2+2n^2$): \begin{tiny} \begin{equation*} \setlength\arraycolsep{1.pt} \left ( \begin{array}{cccccccccc} 8 n^2+\lambda ^2 & 0 & 0 & 0 & 0 & 0 & 0 & -4 \sqrt{2} m n & 0 & 0 \cr 0 & 8 n^2+\lambda ^2 & 0 & 0 & 0 & 0 & 4 \sqrt{2} m n & 0 & 0 & 0 \cr 0 & 0 & 8 n^2+\lambda ^2 & 0 & 0 & 4 \sqrt{2} m n & 0 & 0 & 0 & 0 \cr 0 & 0 & 0 & 8 n^2+\lambda ^2 & -4 \sqrt{2} m n & 0 & 0 & 0 & 0 & 0 \cr 0 & 0 & 0 & -4 \sqrt{2} m n & \lambda (\lambda +6) & 0 & 0 & 0 & 0 & -4 \sqrt{2} n \cr 0 & 0 & 4 \sqrt{2} m n & 0 & 0 & \lambda (\lambda +6) & 0 & 0 & 4 \sqrt{2} n \lambda &0 \cr 0 & 4 \sqrt{2} m n & 0 & 0 & 0 & 0 & \lambda (\lambda +6) & 0 & 0 & 0 \cr -4 \sqrt{2} m n & 0 & 0 & 0 & 0 & 0 & 0 & \lambda (\lambda +6) & 0 & 0 \cr 0 & 0 & 0 & 0 & 0 & 4 \sqrt{2} n \lambda & 0 & 0 & \lambda ^2+4 \lambda -4 & 0 \cr 0 & 0 & 0 & 0 & -4 \sqrt{2} n \lambda & 0 & 0 & 0 & 0 & \lambda ^2+4 \lambda -4 \cr 0 & 0 & 0 & 0 & 0 & 0 & 0 & 4 \sqrt{2} n \lambda & 0 & 0 \cr 0 & 0 & 0 & 0 & 0 & 0 & -4 \sqrt{2} n \lambda & 0 & 0 & 0 \cr 0 & 4 \sqrt{2} n (\lambda +1) & 0 & 0 & 0 & 0 & 4 m (\lambda +1) & 0 & 0 & 0 \cr -4 \sqrt{2} n (\lambda +1) & 0 & 0 & 0 & 0 & 0 & 0 & 4 m (\lambda +1) & 0 & 0 \cr 0 & 0 & 0 & 4 \sqrt{2} n (\lambda +1) & \;\;\;-4 m (\lambda +1) & 0 & 0 & 0 & 0 & 8 \sqrt{2} m n \cr 0 & 0 & -4 \sqrt{2} n (\lambda +1) & 0 & 0 & -4 m (\lambda +1) & 0 & 0 & -8 \sqrt{2} m n & 0 \cr 2 \left(4 n^2+\lambda \right) & 0 & 0 & 0 & 0 & 0 & 0 & -8 \sqrt{2} m n & 0 & 0 \cr 0 & 2 \left(4 n^2+\lambda \right) & 0 & 0 & 0 & 0 & 8 \sqrt{2} m n & 0 & 0 & 0 \cr 0 & 0 & 2 \left(4 n^2+\lambda \right) & 0 & 0 & 8 \sqrt{2} m n & 0 & 0 & 4 m \lambda & 0\cr 0 & 0 & 0 & 2 \left(4 n^2+\lambda \right) & -8 \sqrt{2} m n & 0 & 0 & 0 & 0 & 4 m \lambda \end{array} \right. \end{equation*} \begin{equation*} \setlength\arraycolsep{-1pt} \left . 
\hspace{-5mm}\begin{array}{cccccccccc} 0 & 0 & 0 & -4 \sqrt{2} n (\lambda +1) & 0 & 0 & 2 \left(4 n^2+\lambda \right) & 0 & 0 & 0\cr 0 & 0 & 4 \sqrt{2} n (\lambda +1) & 0 & 0 & 0 & 0 & 2 \left(4 n^2+\lambda \right) & 0 & 0 \cr 0 & 0 & 0 & 0 & 0 & -4 \sqrt{2} n (\lambda +1) & 0 & 0 & 2 \left(4 n^2+\lambda \right) & 0 \cr 0 & 0 & 0 & 0 & 4 \sqrt{2} n (\lambda +1) & 0 & 0 & 0 & 0 & 2 \left(4 n^2+\lambda \right) \cr 0 & 0 & 0 & 0 & -4 m (\lambda +1) & 0 & 0 & 0 & 0 & -8 \sqrt{2} m n \cr 0 & 0 & 0 & 0 & 0 & -4 m (\lambda +1) & 0 & 0 & 8 \sqrt{2} m n & 0 \cr 0 & -4 \sqrt{2} n \lambda & 4 m (\lambda +1) & 0 & 0 & 0 & 0 & 8 \sqrt{2} m n & 0 & 0 \cr 4 \sqrt{2} n \lambda & 0 & 0 & 4 m (\lambda +1) & 0 & 0 & -8 \sqrt{2} m n & 0 & 0 & 0 \cr 0 & 0 & 0 & 0 & 0 & -8 \sqrt{2} m n & 0 & 0 & 4 m \lambda & 0 \cr 0 & 0 & 0 & 0 & 8 \sqrt{2} m n & 0 & 0 & 0 & 0 & 4 m \lambda \cr \lambda ^2+4 \lambda -4 & 0 & 0 & 8 \sqrt{2} m n & 0 & 0 & -4 m \lambda & 0 & 0 & 0 \cr 0 & \lambda ^2+4 \lambda -4 & -8 \sqrt{2} m n & 0 & 0 & 0 & 0 & -4 m \lambda & 0 & 0 \cr 0 & -8 \sqrt{2} m n & \;\;\; 8 n^2+\lambda (\lambda +6) & 0 & 0 & 0 & 0 & 4 \sqrt{2} n (\lambda +1) & 0 & 0 \cr 8 \sqrt{2} m n & 0 & 0 & 8 n^2+\lambda (\lambda +6) & 0 & 0 & -4 \sqrt{2} n (\lambda +1) & 0 & 0 & 0\cr 0 & 0 & 0 & 0 & 8 n^2+\lambda (\lambda +6) & 0 & 0 & 0 & 0 & 4 \sqrt{2} n (\lambda +1) \cr 0 & 0 & 0 & 0 & 0 & 8 n^2+\lambda (\lambda +6) & 0 & 0 & -4 \sqrt{2} n (\lambda +1) & 0 \cr -4 m \lambda & 0 & 0 & -4 \sqrt{2} n (\lambda +1) & 0 & 0 & \lambda (\lambda +4) & 0 & 0 & 0 \cr 0 & -4 m \lambda & 4 \sqrt{2} n (\lambda +1) & 0 & 0 & 0 & 0 & \lambda (\lambda +4) & 0 & 0 \cr 0 & 0 & 0 & 0 & 0 & -4 \sqrt{2} n (\lambda +1) & 0 & 0 & \lambda (\lambda +4) & 0 \cr 0 & 0 & 0 & 0 & 4 \sqrt{2} n (\lambda +1) & 0 & 0 & 0 & 0 & \lambda (\lambda +4) \end{array} \right) \end{equation*} \end{tiny} By means of a suitable software we find that their characteristic polynomial is: \begin{equation}\label{pol-char-matrici20x20} P(x)= \left[ P_5(x) 
\right ]^4 \,, \end{equation} where \begin{equation}\label{pol-char-grado-5} P_5(x)= a_5x^5+a_4x^4+a_3x^3+a_2x^2+a_1x+a_0 \end{equation} with coefficients given by: \begin{eqnarray}\label{coeff-pol-grado-5} \\\nonumber a_5(m,n)&=&-1 \,\,;\\ \nonumber a_4(m,n)&=&5 m^4+20 m^2 n^2+20 m^2+20 n^4+56 n^2-4\,\,; \\ \nonumber a_3(m,n)&=& -10 m^8-80 m^6 n^2-48 m^6-240 m^4 n^4-320 m^4 n^2-96 m^4-320 m^2 n^6\\ \nonumber &&-704 m^2 n^4-272 m^2 n^2+80 m^2-160 n^8-512 n^6-736 n^4+256 n^2\,\,;\\ \nonumber a_2(m,n)&=&10 m^{12}+120 m^{10} n^2+24 m^{10}+600 m^8 n^4+240 m^8 n^2-8 m^8+1600 m^6 n^6\\ \nonumber &&+960 m^6 n^4+1200 m^6 n^2-16 m^6+2400 m^4 n^8+1920 m^4 n^6+5024 m^4 n^4\\ \nonumber &&-320 m^4 n^2-320 m^4+1920 m^2 n^{10}+1920 m^2 n^8+5440 m^2 n^6-2368 m^2 n^4\\ \nonumber &&-832 m^2 n^2+64 m^2+640 n^{12}+768 n^{10}+512 n^8+512 n^6-2688 n^4+256 n^2 \,\,;\\ \nonumber a_1(m,n)&=& -5 m^{16}-80 m^{14} n^2+16 m^{14}-560 m^{12} n^4+256 m^{12} n^2+64 m^{12}-2240 m^{10}n^6\\ \nonumber && +1728 m^{10} n^4-560 m^{10} n^2+48 m^{10}-5600 m^8 n^8+6400 m^8 n^6-7072 m^8 n^4\\ \nonumber &&+1536 m^8 n^2+272 m^8-8960 m^6 n^{10}+14080 m^6 n^8-23936 m^6 n^6+6272 m^6 n^4\\ \nonumber &&+6080 m^6 n^2-64 m^6-8960 m^4 n^{12}+18432 m^4 n^{10}-34048 m^4 n^8+4608 m^4 n^6\\ \nonumber &&+12800 m^4 n^4-256 m^4-5120 m^2 n^{14}+13312 m^2 n^{12}-18176 m^2 n^{10}\\ \nonumber &&-11520 m^2 n^8+45312 m^2 n^6-5888 m^2 n^4+512 m^2 n^2-1280 n^{16}+4096 n^{14}\\ \nonumber &&-512 n^{12}-14336 n^{10}+18176 n^8-4096 n^6-2048 n^4\,\,;\\ \nonumber \end{eqnarray} \begin{eqnarray}\label{coeff-pol-grado-5-bis} &&\\ \nonumber a_0(m,n)&=&m^{20}+20 m^{18} n^2-12 m^{18}+180 m^{16} n^4-232 m^{16} n^2+44 m^{16}+960 m^{14} n^6\\ \nonumber &&-1984 m^{14} n^4+656 m^{14} n^2-112 m^{14}+3360 m^{12} n^8-9856 m^{12} n^6+4832 m^{12} n^4\\ \nonumber &&+576 m^{12} n^2+304 m^{12}+8064 m^{10} n^{10}-31360 m^{10} n^8+22592 m^{10} n^6\\ \nonumber &&+11456 m^{10} n^4-5056 m^{10} n^2-320 m^{10}+13440 m^8 n^{12}-66304 m^8 n^{10}\\ \nonumber 
&&+70400 m^8 n^8+44544 m^8 n^6-41152 m^8 n^4-1920 m^8 n^2+576 m^8+15360 m^6 n^{14}\\ \nonumber &&-93184 m^6 n^{12}+144128 m^6 n^{10}+52992 m^6 n^8-116224 m^6 n^6-21504 m^6 n^4\\ \nonumber &&+3328 m^6 n^2-256 m^6+11520 m^4 n^{16}-83968 m^4 n^{14}+184832 m^4 n^{12}\\ \nonumber &&-48128 m^4 n^{10}-121600 m^4 n^8-22528 m^4 n^6+11264 m^4 n^4+1024 m^4 n^2\\ \nonumber &&+5120 m^2 n^{18}-44032 m^2 n^{16}+134144 m^2 n^{14}-158720 m^2 n^{12}+54272 m^2 n^{10}\\ \nonumber &&-31744 m^2 n^8+44032 m^2 n^6-3072 m^2 n^4+1024 n^{20}-10240 n^{18}+41984 n^{16}\\ \nonumber &&-98304 n^{14}+142336 n^{12}-124928 n^{10}+60416 n^8-12288 n^6 \,\,. \nonumber \end{eqnarray} Now, the end of the proof of the theorem is an immediate consequence of the following technical lemma. To state the lemma, it is convenient to use the following notation: \begin{definition} Let $m,n,m',n' \in \n$. We shall write $[m,n] \leq [m',n']$ if $m \leq m'$ and $n \leq n'$. We shall write $[m,n] < [m',n']$ if $[m,n] \leq [m',n']$ and either $m < m'$ or $n < n'$. \end{definition} \begin{lemma}\label{lemma-pol-grado-5} Let $P_5(x)$ be the polynomial of degree $5$ defined in \eqref{pol-char-grado-5}--\eqref{coeff-pol-grado-5-bis}. Assume that \begin{equation}\label{uffa-uffa} [2,1]<[m,n] \quad {\rm or} \quad [1,2]\leq [m,n]\,. \end{equation} Then $P_5(x)$ does not admit any nonpositive root. \end{lemma} \begin{proof}[Proof of the lemma] By the classical criterion of Descartes, it suffices to show that, if \eqref{uffa-uffa} holds, we have \begin{equation}\label{criterio-cartesio} {\rm (i)}\,\,a_5 < 0 \,; \quad {\rm (ii)}\,\,a_4 \geq 0 \,;\quad {\rm (iii)}\,\, a_3 \leq 0\,; \quad{\rm (iv)}\,\, a_2 \geq 0\,; \quad{\rm (v)}\,\,a_1 \leq 0\,; \quad{\rm (vi)}\,\, a_0 > 0\,. \end{equation} The claims \eqref{criterio-cartesio}(i),(ii) and (iii) are obvious. As for \eqref{criterio-cartesio}(iv), we first observe, by direct computation, that $a_2 \geq0$ when \eqref{uffa-uffa} holds and $[m,n] \leq [3,3]$. 
Then we rewrite the coefficient $a_2$ as follows: \begin{eqnarray*}\label{riscrittura-a2} a_2&=& 256 n^2+(-2688 n^4+512 n^6)+512 n^8+768 n^{10}+640 n^{12}+64 m^2\\ &&+(-2368 n^4 m^2+5440 n^6 m^2)+ 1920 n^8 m^2+1920 n^{10} m^2\\ &&+(-320-320 n^2 +5024 n^4)m^4+1920 n^6 m^4+2400 n^8 m^4\\ &&+(-16 m^6+1200 n^2 m^6)+960 n^4 m^6+ 1600 n^6 m^6\\ &&+(-832 n^2 m^2+240 n^2 m^8)+600 n^4 m^8+24 m^{10}+120 n^2 m^{10}+(-8 m^8+10 m^{12}) \end{eqnarray*} Now it is easy to check that all the terms within parentheses are positive when $[3,3]<[m,n]$ and so \eqref{criterio-cartesio}(iv) is proved. As for \eqref{criterio-cartesio}(v), we first observe, by direct computation, that $a_1 \leq 0$ when \eqref{uffa-uffa} holds and $[m,n] \leq [3,3]$. Then we rewrite the coefficient $a_1$ as follows: \begin{eqnarray*}\label{riscrittura-a1} a_1&=& -2048 n^4-4096 n^6+(18176 n^8-14336 n^{10})-512 n^{12}\\ &&+(4096 n^{14}-1280 n^{16})+(512 n^2 m^2-5888 n^4 m^2)\\ &&+ (45312 n^6 m^2-11520 n^8 m^2)-18176 n^{10 }m^2+(13312 n^{12} m^2-5120 n^{14} m^2)\\ &&-256 m^4+(12800 +4608 n^2- 34048 n^4)n^4 m^4+(18432 n^{10} m^4-8960 n^{12} m^4)\\ &&+(-64 +6080 n^2+6272 n^4-23936 n^6 )m^6\\ &&+(14080 n^8-8960 n^{10} )m^6+ (272 +1536 n^2 -7072 n^4)m^8\\ &&+(6400 n^6 m^8-5600 n^8 m^8)+(48 m^{10}-560 n^2 m^{10})\\ &&+(1728 n^4 m^{10}-2240 n^6 m^{10})+(64 m^{12}+ 256 n^2 m^{12}-560 n^4 m^{12})\\ &&+(16 m^{14}-80 n^2 m^{14})-5 m^{16} \end{eqnarray*} Now it is not difficult to check that all the terms within parentheses are negative when $[3,3]<[m,n]$ and so \eqref{criterio-cartesio}(v) is proved. As for \eqref{criterio-cartesio}(vi), we first observe, by direct computation, that $a_0 > 0$ when \eqref{uffa-uffa} holds and $[m,n] \leq [3,3]$. 
Then we rewrite the coefficient $a_0$ as follows: \begin{eqnarray*}\label{riscrittura-a0} a_0&=& (-12288 n^6+60416 n^8)+(-124928 n^{10}+142336 n^{12})\\ &&+(-98304 n^{14}+41984 n^{16})+(-10240 n^{18}+1024 n^{20})+ (-3072 n^4 m^2+44032 n^6 m^2)\\ &&+(-31744 n^8 m^2+54272 n^{10} m^2)+(-158720 n^{12} m^2+134144 n^{14} m^2)\\ &&+ (-44032 n^{16} m^2+5120 n^{18} m^2)+1024 n^2 m^4+11264 n^4 m^4\\ &&+(-48128 n^{10} m^4+184832 n^{12} m^4)+ (-83968 n^{14} m^4+11520 n^{16} m^4)\\ &&+ (-256 m^6+3328 n^2 m^6)+(-121600 n^8 m^4+52992 n^8 m^6)+144128 n^{10} m^6\\ &&+ (-93184 n^{12} m^6+15360 n^{14} m^6)+ 576 m^8+(-41152 n^4 m^8+44544 n^6 m^8)\\ &&+(-116224 n^6 m^6+70400 n^8 m^8)+(-66304 n^{10} m^8+13440 n^{12} m^8)\\ &&+(-5056 n^2 m^{10}+ 11456 n^4 m^{10})+(-22528 n^6 m^4+22592 n^6 m^{10})\\ &&+(-31360 n^8 m^{10}+8064 n^{10} m^{10})+304 m^{12}+ (-1920 n^2 m^8+576 n^2 m^{12})\\ &&+(-21504 n^4 m^6+4832 n^4 m^{12})+(-9856 n^6 m^{12}+3360 n^8 m^{12})\\ &&+ (-112 m^{14}+656 n^2 m^{14})+(-1984 n^4 m^{14}+960 n^6 m^{14})\\ &&+44 m^{16}+ (-232 n^2 m^{16}+180 n^4 m^{16} )+(-12 m^{18}+20 n^2 m^{18})+(m^{20} -320 m^{10}) \end{eqnarray*} Now it is easy to check that all the terms within parentheses are positive when $[3,3]<[m,n]$ and so \eqref{criterio-cartesio}(vi) is proved and the proof of the lemma is ended. \end{proof} \begin{remark} It is easy to check, by direct inspection, that actually the inequalities \eqref{criterio-cartesio}(i)--(v) are true if \eqref{uffa-uffa} is replaced by the less restrictive condition $[1,1] \leq [m,n]$. By contrast, $a_0(1,1)<0$ and $a_0(2,1)=0$. Putting all these facts together we recover the result of \cite{BFO} according to which $S^{1,1}$ produces a contribution $4$ for the index and $0$ for the nullity, while the contribution of $S^{2,1}$ is $4$ for the nullity and $0$ for the index. 
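The stated values of $a_0$ can be confirmed by exact integer evaluation; a minimal sketch (the function below transcribes the coefficient $a_0$ from \eqref{coeff-pol-grado-5-bis}):

```python
def a0(m, n):
    # constant coefficient a0(m, n) of P_5, transcribed from (coeff-pol-grado-5-bis);
    # all arithmetic is exact since m, n are integers
    return (m**20 + 20*m**18*n**2 - 12*m**18 + 180*m**16*n**4 - 232*m**16*n**2
            + 44*m**16 + 960*m**14*n**6 - 1984*m**14*n**4 + 656*m**14*n**2
            - 112*m**14 + 3360*m**12*n**8 - 9856*m**12*n**6 + 4832*m**12*n**4
            + 576*m**12*n**2 + 304*m**12 + 8064*m**10*n**10 - 31360*m**10*n**8
            + 22592*m**10*n**6 + 11456*m**10*n**4 - 5056*m**10*n**2 - 320*m**10
            + 13440*m**8*n**12 - 66304*m**8*n**10 + 70400*m**8*n**8
            + 44544*m**8*n**6 - 41152*m**8*n**4 - 1920*m**8*n**2 + 576*m**8
            + 15360*m**6*n**14 - 93184*m**6*n**12 + 144128*m**6*n**10
            + 52992*m**6*n**8 - 116224*m**6*n**6 - 21504*m**6*n**4
            + 3328*m**6*n**2 - 256*m**6 + 11520*m**4*n**16 - 83968*m**4*n**14
            + 184832*m**4*n**12 - 48128*m**4*n**10 - 121600*m**4*n**8
            - 22528*m**4*n**6 + 11264*m**4*n**4 + 1024*m**4*n**2
            + 5120*m**2*n**18 - 44032*m**2*n**16 + 134144*m**2*n**14
            - 158720*m**2*n**12 + 54272*m**2*n**10 - 31744*m**2*n**8
            + 44032*m**2*n**6 - 3072*m**2*n**4 + 1024*n**20 - 10240*n**18
            + 41984*n**16 - 98304*n**14 + 142336*n**12 - 124928*n**10
            + 60416*n**8 - 12288*n**6)

# the two exceptional cases noted in this remark, plus one case covered by the lemma
assert a0(1, 1) < 0
assert a0(2, 1) == 0
assert a0(1, 2) > 0
```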
\end{remark} \subsection{Proof of Theorem\link\ref{th:noncompact}} The first step of the proof is to compute the explicit expression of $I_2$ using \eqref{I2-general-case}. The image of $\varphi$ is contained in the equator of $\s^2$, which we denote by $\s^1$. Next, we define $Y$ and $\eta$ as follows: \[ Y\left ( y_1,y_2,0\right )=\left ( -y_2,y_1,0\right )\quad {\rm and} \quad \eta \left ( y_1,y_2,0\right )=( 0,0,1)\,. \] We shall write $V_Y,\,V_\eta$ for $Y\circ \varphi,\,\eta\circ \varphi$ respectively. Clearly, $\left \{V_Y(\gamma),\,V_\eta(\gamma) \right \}$ is an orthonormal basis of $T_{\varphi(\gamma)}\s^2$ for all $\gamma \in \R$. Therefore, any section of $\varphi^{-1}T\s^2$ can be written as \[ V=f_1 \,V_Y+f_2 \,V_ \eta \,, \] where $f_1,\,f_2 \in C^{\infty}(\R)$. Moreover, $$ \tau(\varphi)=A'' \,V_Y\,, \quad \tau_2(\varphi)=A^{(4)} \,V_Y\,. $$ Next, performing computations similar to those of Proposition\link\ref{proposizione-I-esplicito-toro-sfera}, we compute the various terms of \eqref{I2-general-case}. The results are summarised in the following two lemmata. 
\renewcommand{\arraystretch}{1.3} \begin{lemma}\label{lemma13-noncompact} \[\begin{array}{rcl} \overline{\Delta}^2\left ( fV_Y\right ) &=& f^{(4)}V_Y\\ \nonumber \overline{\Delta}\left ( {\rm trace}\langle fV_Y,d\varphi \cdot \rangle d\varphi \cdot- |d\varphi|^2\,fV_Y \right) &=& 0\\ \nonumber 2\langle d\tau(\varphi),d\varphi \rangle fV_Y &=& 2\, A'''\,A'\,f\,V_Y\\ \nonumber |\tau(\varphi)|^2 fV_Y &=&(A'')^2\, f\,V_Y\\ \nonumber -2\,{\rm trace}\langle fV_Y,d\tau(\varphi) \cdot \rangle d\varphi \cdot &=& -2\, A'''\,A'\,f\,V_Y\\ \nonumber -2 \,{\rm trace} \langle \tau(\varphi),d(fV_Y) \cdot \rangle d\varphi \cdot &=&- 2\, A''\,A'\,f' \,V_Y\\ \nonumber - \langle \tau(\varphi),fV_Y \rangle \tau(\varphi)&=&-(A'')^2\, f\,V_Y \\ \nonumber {\rm trace} \langle d\varphi \cdot,\overline{\Delta}(fV_Y) \rangle d\varphi \cdot &=&-(A')^2\, f''\,V_Y\\ \nonumber {\rm trace}\langle d\varphi \cdot,\left ( {\rm trace}\langle fV_Y,d\varphi \cdot \rangle d\varphi \cdot \right ) \rangle d\varphi \cdot&=& (A')^4\, f\,V_Y\\ \nonumber -2 |d\varphi|^2\, {\rm trace}\langle d\varphi \cdot,fV_Y \rangle d\varphi \cdot&=&-2 (A')^4\, f\,V_Y \\ \nonumber 2 \langle d(fV_Y),d\varphi\rangle \tau(\varphi) &=& 2 A''\,A'\,f'\,V_Y\\ \nonumber -|d\varphi|^2 \,\overline{\Delta}(fV_Y)&=& (A')^2\,f''\,V_Y\\ \nonumber |d\varphi|^4 fV_Y&=& (A')^4\,f\,V_Y\nonumber \end{array} \] \end{lemma} \begin{lemma}\label{lemma14-noncompact} \[\begin{array}{rcl} \overline{\Delta}^2\left ( fV_\eta\right ) &=& f^{(4)}V_\eta\\ \nonumber \overline{\Delta}\left ( {\rm trace}\langle fV_\eta,d\varphi \cdot \rangle d\varphi \cdot- |d\varphi|^2\,fV_\eta \right) &=& \left [ 2 (A'')^2\,f+2\,A'''\,A'\,f+4\,A''\,A'\,f'+(A')^2 \,f''\right ]V_\eta\\ \nonumber 2\langle d\tau(\varphi),d\varphi \rangle fV_\eta &=& 2\, A'''\,A'\,f\,V_\eta\\ \nonumber |\tau(\varphi)|^2 fV_\eta &=&(A'')^2\, f\,V_\eta\\ \nonumber -2\,{\rm trace}\langle fV_\eta,d\tau(\varphi) \cdot \rangle d\varphi \cdot &=& 0\\ \nonumber -2 \,{\rm trace} \langle 
\tau(\varphi),d(fV_\eta) \cdot \rangle d\varphi \cdot &=&0\\ \nonumber - \langle \tau(\varphi),fV_\eta \rangle \tau(\varphi)&=&0 \\ \nonumber {\rm trace} \langle d\varphi \cdot,\overline{\Delta}(fV_\eta) \rangle d\varphi \cdot &=&0\\ \nonumber {\rm trace}\langle d\varphi \cdot,\left ( {\rm trace}\langle fV_\eta,d\varphi \cdot \rangle d\varphi \cdot \right ) \rangle d\varphi \cdot&=& 0 \\\nonumber -2 |d\varphi|^2\, {\rm trace}\langle d\varphi \cdot,fV_\eta \rangle d\varphi \cdot&=&0\\ \nonumber 2 \langle d(fV_\eta),d\varphi\rangle \tau(\varphi) &=&0\\ \nonumber -|d\varphi|^2 \,\overline{\Delta}(fV_\eta)&=& (A')^2\,f''\,V_\eta\\ \nonumber |d\varphi|^4 fV_\eta&=& (A')^4\,f\,V_\eta\nonumber \end{array} \] \end{lemma} \renewcommand{\arraystretch}{1.} Next, we insert the results given in Lemmata\link\ref{lemma13-noncompact} and \ref{lemma14-noncompact} into \eqref{I2-general-case}. Then, adding up all the terms, we obtain: \begin{equation}\label{prima-espressione-I-noncompact} I_2 (f\,V_Y)=f^{(4)}\,V_Y \end{equation} and \begin{equation}\label{seconda-espressione-I-noncompact} I_2 (f\,V_\eta)= \left [f^{(4)}+2(A')^2\,f''+4A''\,A'\,f'+\left(4 A'''\,A'+3(A'')^2+(A')^4\right)\,f \right ]\,V_\eta \,. \end{equation} Now, let $V=f_1 \,V_Y+f_2 \,V_ \eta\,$, assume that $f_1,\,f_2$ have compact support and denote by $D_V$ the support of $V$. 
Using \eqref{prima-espressione-I-noncompact} and \eqref{seconda-espressione-I-noncompact}, we find: \begin{eqnarray}\label{IV,V-noncompact} \int_{\R} \langle I_2(V),V \rangle \, d\gamma= \int_{D_V} &&\Big [f_1^{(4)}\,f_1 +f_2^{(4)}\,f_2+ \left((A')^2 f_2\right)'' f_2+(A')^2 f_2'' f_2 \\ \nonumber && +\left(2 A'''\,A'+(A'')^2+(A')^4\right)\,f_2^2\Big ] d\gamma \end{eqnarray} Since $f_1,\,f_2$ and all their derivatives vanish on the boundary of $D_V$, integrating by parts it is easy to verify that \eqref{IV,V-noncompact} can be rewritten in the following more convenient way: \begin{eqnarray}\label{IV,V-noncompact-bis} \int_{\R} \langle I_2(V),V \rangle \, d\gamma= \int_{D_V} &\Big [(f_1'')^2 +\left(f_2''+(A')^2\,f_2\right)^2\\ \nonumber & +\left((A'')^2+2A'''\,A'\right)\,f_2^2\Big ] d\gamma \end{eqnarray} It follows immediately that a sufficient condition to ensure that $\varphi$ is strictly stable is \begin{equation}\label{cond-suff-noncompact} (A'')^2+2A'''\,A' \geq 0 \quad {\rm on} \,\,\R\,. \end{equation} Finally, a routine verification, using the explicit expression \eqref{A-gamma}, shows that \eqref{cond-suff-noncompact} is equivalent to \eqref{condiz-noncompact} and so the proof of Theorem\link\ref{th:noncompact} is completed. \begin{remark} We point out that in \eqref{IV,V-noncompact-bis} the quantity $(f_1'')^2+\left(f_2''+(A')^2\,f_2\right)^2$ is exactly the square of the norm of $J(V)$, where $J$ is the classical Jacobi operator. \end{remark} \begin{remark} If \eqref{condiz-noncompact} is not satisfied, then $\varphi$ may be unstable. To see this, at least in the class of $C^5$-differentiable functions, we consider the case that $a=1, \,c=-2,\,b=d=0$, i.e., $A(\gamma)=\gamma^3-2\gamma$. We choose $f_1\equiv 0$ and \[ f_2(\gamma)=\cos^6 \gamma \quad {\rm if} \,-\,\frac{\pi}{2} \leq \gamma \leq \frac{\pi}{2}, \,\, {\rm and}\,\,f_2(\gamma)=0 \,\, {\rm elsewhere} \,. 
\] Then, substituting into \eqref{IV,V-noncompact-bis}, we find: \begin{eqnarray}\label{controesempio-noncompact}\nonumber \int_{\R} \langle I_2(V),V \rangle \, d\gamma=& &\int_{-\pi/2}^{\pi/2} \Big [\left(36 \gamma^2+12 \left(3 \gamma^2-2\right)\right) \cos ^{12}\gamma\\ && +\left(\left(3 \gamma^2-2\right)^2 \cos ^6\gamma-6 \cos ^6\gamma+30 \sin ^2\gamma \cos ^4\gamma\right)^2\Big ] d\gamma \simeq -3.537 <0\,. \end{eqnarray} \end{remark} \section{Reduced Index and Nullity}\label{variation-of-example-toro-sfera} A natural continuation of the study that we have undertaken in the previous sections would be to investigate the following family of equivariant maps: \begin{align}\label{mappegeneralizzanti-toro-sfera} \varphi_\alpha \,:\, \s^{n-1} (R) \times \s^1 & \to \s^n \hookrightarrow \R^n \quad \times \R \\ \nonumber (R\,\underline{\gamma},\qquad \vartheta)& \mapsto \, \left ( \sin \alpha(\vartheta) \, \underline{\gamma}, \,\cos \alpha (\vartheta) \right ) \quad (\underline{\gamma} \in \s^{n-1},\,n \geq 2) \,. \end{align} The bienergy of a map as in \eqref{mappegeneralizzanti-toro-sfera} depends only on the function $\alpha$. 
Then it is natural to define the \textit{reduced bienergy} on $C^{\infty} \left(\s^1\right)$, as follows: \begin{equation}\label{reduced-bienergy-generalizza-toro-sfera} E_{2,{\rm red}}(\alpha)=E_2 \left (\varphi_\alpha \right )= \frac{1}{2}\, {\rm Vol}\left(\s^{n-1}(R) \right )\, \int_0^{2\pi } \left [\alpha'' - \frac{(n-1)}{2\,R^2}\sin (2\,\alpha) \right ]^2 d\vartheta \end{equation} The condition of biharmonicity for $\varphi_{\alpha}$ is the following ODE, which can be derived as the Euler-Lagrange equation of the reduced bienergy (see \cite{Mont-Ratto3}): \begin{equation}\label{biarmoniatorosfera-generalizzato} \alpha^{(4)} - \alpha'' \, \left [ 2 \, \frac{(n-1)}{R^2} \, \cos (2 \alpha)\right ] + (\alpha')^2 \, \left [ 2 \, \frac{(n-1)}{R^2} \, \sin (2 \alpha)\right ] + \, \frac{(n-1)^2}{2 R^4} \, \sin (2\alpha) \, \cos (2 \alpha) \, = \, 0 \, \end{equation} and we still have the constant solution $\alpha \equiv \pi /4$. There are two basic differences between this example and the case of the maps which we have studied in Theorem\link\ref{Index-theorem-toro-sfera}: the higher dimension of the domain and the effects which derive from the fact that we admit a radius $R\neq1$. For these reasons, the computation of index and nullity becomes very complicated and therefore a rather natural approach in this context is to investigate what we shall refer to as the \textit{reduced index and nullity}. More precisely, let $\alpha^*$ denote the constant solution $\alpha\equiv \pi / 4$. We shall consider the reduced bienergy \eqref{reduced-bienergy-generalizza-toro-sfera} and its Hessian at the critical point $\alpha^*$ \begin{equation}\label{Hessian-definition-equiv} H(E_{2,\rm red})_{\alpha^*} (V,W)= \left . 
\frac{\partial^2}{\partial t \partial s}\right |_{(0,0)} E_{2,\rm red} (\alpha^*_{t,s}) \,, \end{equation} $\alpha^*_{t,s}$ being a two-parameter variation of $\alpha^*$ given by \begin{equation}\label{equiv-2-par-variation} \alpha^*_{t,s}(\vartheta)=\frac{\pi}{4}+t\, v(\vartheta)+s\, w(\vartheta) \, \end{equation} where $v,\,w \in C^{\infty}\left (\s^1 \right )$. As usual, the tangent vectors $V$ and $W$ to $C^{\infty}\left (\s^1 \right )$ at $\alpha^*$ are identified with $v$ and $w$, respectively. Then, thinking of $\s^n$ as the warped product $\left(\s^{n-1}\times [0,\pi], \sin^2\alpha \ g_{\s^{n-1}} + d\alpha^2\right)$, we can identify $V$ and $W$ with the following sections of $\varphi_{\alpha^*}^{-1} T\s^n$ \[ V=\left .\frac{d}{d t} \right |_{t=0}\varphi_{\alpha^*_{t,0}}= v(\vartheta)\, \frac{\partial}{\partial \alpha} \quad {\rm and}\quad W=\left .\frac{d}{d s} \right |_{s=0}\varphi_{\alpha^*_{0,s}}= w(\vartheta)\, \frac{\partial}{\partial \alpha} \,. \] From a geometric point of view, we observe that \eqref{Hessian-definition-equiv} is the restriction of the Hessian \eqref{Hessian-definition} to the following linear subspace of $\mathcal{C}\left(\varphi_{\alpha^*}^{-1} T\s^n\right)$: \begin{equation}\label{sottospazio-equiv} \mathcal{V}_{\rm red}=\left \{ V\in \mathcal{C}\left(\varphi_{\alpha^*}^{-1} T\s^n\right)\,\,:\,\, V\left(\underline{\gamma},\vartheta \right )= v(\vartheta)\, \frac{\partial}{\partial \alpha}, \,\,v \in C^{\infty}\left (\s^1 \right ) \right \} \,. \end{equation} In particular, we observe that \begin{equation}\label{I2-equiv} H(E_{2,\rm red})_{\alpha^*} (V,W)=H(E_2)_{\varphi_{\alpha^*}}(V,W)={\rm Vol}\left(\s^{n-1} (R)\right )\, \int_0^{2\pi} \langle I_2(V),W \rangle d\vartheta \, . 
\end{equation} Moreover, the operator $I_2$ preserves the subspace $\mathcal{V}_{\rm red}$: this is a natural consequence of the symmetries of the problem and can be formally verified by a direct application of the general expression for $I_2$ (see \eqref{I2-general-case}). Therefore, the restriction of $I_2$ to $\mathcal{V}_{\rm red}$, which we shall denote $I_{2,{\rm red}}$, has a discrete spectrum and so we can define ${\rm Index}_{\rm red}(\varphi_{\alpha^*})$ and ${\rm Nullity}_{\rm red}(\varphi_{\alpha^*})$ precisely as in \eqref{Index-definition} and \eqref{Nullity-definition}. Our result in this context is the following: \begin{proposition}\label{Index-theorem-equivariant} Let $\varphi_{\alpha^*}\,:\,\s^{n-1}(R) \times \s^1 \to \s^n$ be the proper biharmonic map defined by \eqref{mappegeneralizzanti-toro-sfera} with $\alpha(\vartheta) \equiv \alpha^*$, where $\alpha^*=\pi /4$. If \begin{equation}\label{condition-equiv-prop} \frac{\sqrt {n-1}}{R} \not\in \n^* \,, \end{equation} then \begin{align}\label{ind-null-equi} & {\rm Nullity}_{\rm red}(\varphi_{\alpha^*})= 0 \\ \nonumber & {\rm Index}_{\rm red}(\varphi_{\alpha^*})=1+2\,\left \lfloor \frac{\sqrt {n-1}}{R} \right \rfloor \,,\nonumber \end{align} where $\lfloor x\rfloor$ denotes the integer part of $x \in \R$. If \begin{equation}\label{condition-equiv-prop-bis} \frac{\sqrt {n-1}}{R}\in \n^* \,, \end{equation} then \begin{align}\label{ind-null-equi-bis} & {\rm Nullity}_{\rm red}(\varphi_{\alpha^*})= 2 \\ \nonumber & {\rm Index}_{\rm red}(\varphi_{\alpha^*})= 1+2\,\left ( \frac{\sqrt {n-1}}{R} -1 \right ) \,.\nonumber \end{align} \end{proposition} \begin{remark}\label{remark-reduced-index} Clearly, each eigenvalue $\lambda$ of $I_{2,{\rm red}}$ is also an eigenvalue of $I_2$ and $\mathcal{V}_{{\rm red},\lambda} \subseteq \mathcal{V}_{\lambda}$. Therefore, it is always true that ${\rm Index}_{\rm red}(\varphi) \leq {\rm Index}(\varphi)$ and ${\rm Nullity}_{\rm red}(\varphi) \leq {\rm Nullity}(\varphi)$. 
By way of example, if $\varphi_{k} : {\mathbb T}^2 \to \s^2$ is the proper biharmonic map defined in \eqref{equivdatoroasfera-bis*}, then it is not difficult to verify, using the technique of Proposition\link\ref{Index-theorem-equivariant}, that \begin{align*} & {\rm Nullity}_{\rm red}(\varphi_k)= 2 \\ \nonumber & {\rm Index}_{\rm red}(\varphi_k)= 1+2\,(k-1) \,.\nonumber \end{align*} \end{remark} \subsection{Proof of Proposition\link\ref{Index-theorem-equivariant}} We compute the reduced Hessian \eqref{Hessian-definition-equiv} with respect to a two-parameter variation $\alpha^*_{t,s}$ as in \eqref{equiv-2-par-variation}. We obtain: \begin{equation}\label{Hessian-equiv-explicit} \left . \frac{\partial^2}{\partial t \partial s}\right |_{(0,0)} E_{2,{\rm red}} (\alpha^*_{t,s})=c \int_0^{2 \pi} \left [v''(\vartheta ) w''(\vartheta )-\frac{(n-1)^2 v(\vartheta ) w(\vartheta )}{R^4}\right] d\vartheta \,, \end{equation} where $c={\rm Vol}\left(\s^{n-1} (R)\right )\,$. Comparing with \eqref{I2-equiv} and integrating by parts we find \begin{equation}\label{I2-equiv-explicit} I_{2,{\rm red}}(V)=\left [v^{(4)}(\vartheta )-\frac{(n-1)^2 }{R^4}\,v(\vartheta )\right ]\,\frac{\partial}{\partial \alpha} \,. \end{equation} Now, let \begin{small} \begin{equation}\label{base-equiv-case} \mathcal{B}= \left \{U_0= \frac{1}{\sqrt{2\pi}}\,\frac{\partial}{\partial \alpha} ,\,U_m= \frac{1}{\sqrt \pi}\,\cos (m\vartheta)\,\frac{\partial}{\partial \alpha} ,\,\, V_m= \frac{1}{\sqrt \pi}\,\sin (m\vartheta)\,\frac{\partial}{\partial \alpha}, \,\,m \geq 1 \right \} \,. \end{equation} \end{small} Then $\mathcal{B}$ is an orthonormal basis of $\mathcal{V}_{\rm red}$. Moreover, it is straightforward to check that the vectors $U_0,U_m$ and $V_m$ are eigenvectors for the operator \eqref{I2-equiv-explicit} with eigenvalues \begin{equation}\label{eigenvalues-reduced} \lambda_m=m^4-\frac{(n-1)^2 }{R^4}\,, \quad \,\, m \in \n .
\end{equation} Then the multiplicities are \begin{equation}\label{multiplicities-reduced} \nu(\lambda_0)=1,\,\,\,\,\nu(\lambda_m)=2 \quad\,\,\, {\rm for} \,\, m \geq 1 \,, \end{equation} and now the conclusion of the proof follows easily. \subsection{Conformal diffeomorphisms}\label{conformal-diffeo} Proper biharmonic conformal diffeomorphisms of $4$-dimensional Riemannian manifolds play an interesting role in the study of the bienergy functional. A basic example (see \cite{Baird}) is the inverse stereographic projection $\varphi:\R^4 \to \s^4$. We proved in \cite{MOR2} that its restriction to the open unit ball $B^4$ is strictly stable with respect to compactly supported equivariant variations. More generally, the same is true for homothetic dilations of $\varphi$ provided that we consider the restrictions to a ball whose radius ensures that the image is contained in the upper hemisphere of $\s^4$ (see \cite{MOR2} for details). Here we study another example which was found in \cite{Baird}. More precisely, using polar coordinates on $\R^4 \setminus\{ O \}$, let \begin{align}\label{mappe-Baird-Fardoun} \varphi_\alpha \,:\R^4\setminus \{O \}=& \s^{3} \times (0,+\infty) \to \s^{3} \times \R \\ \nonumber (&\underline{\gamma},\qquad r)\quad \quad \mapsto \, \left (\underline{\gamma}, \,\alpha( r) \right ) \,, \end{align} where $\alpha(r)=\log r $. It was observed in \cite{Baird} that the map $\varphi_\alpha$ in \eqref{mappe-Baird-Fardoun} is a proper biharmonic conformal diffeomorphism. As in Theorem\link\ref{th:noncompact}, we study the stability of $\varphi_\alpha$ with respect to compactly supported variations.
More precisely, since the domain here is not $1$-dimensional, we just consider equivariant variations: \begin{align}\label{mappe-Baird-Fardoun-equiv-var} \varphi_{\alpha_{t,s}} \,:\R^4\setminus\{O \}=& \s^{3} \times (0,+\infty) \to \s^{3} \times \R \\ \nonumber (&\underline{\gamma},\qquad r)\quad \quad \mapsto \, \left (\underline{\gamma}, \,\alpha( r)+t v(r)+sw(r) \right ) \,, \end{align} where $v,w$ are compactly supported functions on $(0,+\infty)$. We shall prove the following result: \begin{proposition}\label{prop-Baird-ex} Let $\varphi_\alpha \,:\R^4\setminus \{O \} \to \s^{3} \times \R $ be the proper biharmonic conformal diffeomorphism defined in \eqref{mappe-Baird-Fardoun}. Then $\varphi_\alpha$ is strictly stable with respect to compactly supported equivariant variations. \end{proposition} \begin{proof}The tension field of a map of the type \eqref{mappe-Baird-Fardoun} is \[ \tau \left ( \varphi_\alpha\right )= \left [\alpha ''(r) + \frac{3}{r}\, \alpha'(r) \right ] \frac{\partial}{\partial \alpha} \] and so the reduced $2$-energy becomes: \[ E_{2,{\rm red}} (\alpha)=\frac{1}{2}\,c\, \int_0^{+\infty} \left [\alpha ''(r) + \frac{3}{r}\, \alpha'(r) \right ]^2\, r^3 \,dr \,, \] where $c={\rm Vol}\left ( \s^3 \right )$. It is convenient to make the following change of variable: $r=e^u$, $\beta(u)=\alpha(e^u)$. In terms of $\beta$, the reduced $2$-energy becomes: \begin{equation}\label{Beta-2-energy} E_{2,{\rm red}} (\beta)=\frac{1}{2}\,c\, \int_{-\infty}^{+\infty} \left [\beta ''(u) + 2\, \beta'(u) \right ]^2 \,du \,. \end{equation} Next, we compute the reduced Hessian \eqref{Hessian-definition-equiv} with respect to a two-parameter variation \[ \beta_{t,s}= u + t v(u)+sw(u)\,, \] where $v,w$ are compactly supported smooth functions on $\R$ (note that, after the change of variable, $\alpha(r)= \log r$ corresponds to $\beta(u)=u$). We obtain: \begin{equation}\label{Hessian-equiv-Baird} \left . 
\frac{\partial^2}{\partial t \partial s}\right |_{(0,0)} E_{2,{\rm red}} (\beta_{t,s})=c \int_{-\infty}^{+\infty} \big [v''\,w''+2 v''\,w'+2 v'\, w''+4 v'\, w'\big] du \,. \end{equation} Now, since $v,w \in C_0^{\infty}(\R)$, integrating by parts in \eqref{Hessian-equiv-Baird} we find: \[ \int_{-\infty}^{+\infty} \,\langle I_{2,{\rm red}}(V),W \rangle \,du=\int_{-\infty}^{+\infty} \,\big[ v^{(4)}-4 v''\big] \,w \,\, du\,. \] Finally, again integrating by parts, we conclude that \[ \int_{-\infty}^{+\infty} \,\langle I_{2,{\rm red}}(V),V \rangle \,du=\int_{-\infty}^{+\infty} \,\big[ (v'')^2+4 (v')^2\big]\,\, du \left(=\frac{1}{c} \int_{\R^4\setminus\{O\}} |J(V)|^2 dv_M\right) \] and the proof is complete. \end{proof} \subsection{Further developments}\label{Further-developments} A further generalisation of \eqref{mappegeneralizzanti-toro-sfera} leads us to study the case of biharmonic maps into the rotationally symmetric ellipsoid defined by \begin{equation}\label{ellissoide} \mathcal{Q} ^n(b)= \left \{ (x_1,\ldots,x_{n+1})\in \R^{n+1}\,\,:\,\, x_1^2+\cdots +x_n^2+\frac{x_{n+1}^2}{b^2} =1\right \}\quad (b>0)\,. \end{equation} It is convenient to describe the ellipsoid $\mathcal{Q}^n(b)$ as \begin{equation}\label{ellissoide-metrica} \mathcal{Q} ^n(b)=\left ( \s^{n-1} \times [0,\pi],\sin^2 \alpha\,g_{\s^{n-1}}+K^2(\alpha)\,d\alpha^2 \right )\,, \end{equation} where $K(\alpha)=\sqrt{b^2\,\sin^2 \alpha+ \cos^2 \alpha}$. Then we study equivariant maps \begin{align}\label{mappegeneralizzanti-toro-ellissoide} \varphi_\alpha \,:\, \s^{n-1} (R) \times \s^1 & \to \mathcal{Q}^n(b) \hookrightarrow \R^n \quad \times \R \\ \nonumber (R\,\underline{\gamma},\qquad \vartheta)& \mapsto \, \left ( \sin \alpha(\vartheta) \, \underline{\gamma}, \,b\,\cos \alpha (\vartheta) \right ) \quad (\underline{\gamma} \in \s^{n-1},\,n \geq 2) \,.
\end{align} Now, the reduced bienergy of a map as in \eqref{mappegeneralizzanti-toro-ellissoide} is \begin{equation}\label{reduced-bienergy-generalizza-toro-ellissoide} E_{2,{\rm red}}(\alpha)= \frac{1}{2}\, {\rm Vol}\left(\s^{n-1}(R) \right )\, \int_0^{2\pi } \left [\alpha'' - \frac{(n-1)}{2\,R^2\,K^2(\alpha)}\sin (2\,\alpha)+\frac{K'(\alpha)}{K(\alpha)}\,(\alpha')^2 \right ]^2 K^2(\alpha) \,d\vartheta \,. \end{equation} By direct substitution into the Euler-Lagrange equation of \eqref{reduced-bienergy-generalizza-toro-ellissoide} we find that a constant function $\alpha(\vartheta)\equiv \alpha^*$ ($0< \alpha^*<\pi/2$) gives rise to a proper biharmonic map of the type \eqref{mappegeneralizzanti-toro-ellissoide} if and only if \begin{equation}\label{biarmonia-ellissoide-alpha*} \alpha^*=\frac{1}{2} \arccos \left [\frac{b-1}{b+1} \right ] \,. \end{equation} Following the method of proof of Proposition\link\ref{Index-theorem-equivariant}, we find that the reduced index operator is now given by: \begin{equation}\label{I2-equiv-explicit-ellissoide} I_{2,{\rm red}}(V)=\left [v^{(4)}-\frac{4 (n-1)^2}{b (b+1)^2 R^4}\,v\right ]\,\frac{\partial}{\partial \alpha} \,. \end{equation} Then it is easy to conclude that the eigenvalues of $I_{2,{\rm red}}$ are \begin{equation*} \lambda_m=m^4-\frac{4(n-1)^2 }{b(b+1)^2 R^4}\,, \quad \,\, m \in \n , \end{equation*} with multiplicities \begin{equation*} \nu(\lambda_0)=1,\,\,\,\,\nu(\lambda_m)=2 \quad\,\,\, {\rm for} \,\, m \geq 1 \,. \end{equation*} By way of summary, we conclude that an extension of Proposition\link\ref{Index-theorem-equivariant} holds in this context where the curvature of the target is nonconstant.
Indeed, we have \begin{proposition}\label{Index-theorem-ellissoide} Let $\varphi_{\alpha^*}\,:\,\s^{n-1}(R) \times \s^1 \to \mathcal{Q}^n(b)$ be the proper biharmonic map defined by \eqref{mappegeneralizzanti-toro-ellissoide} with $\alpha(\vartheta) \equiv \alpha^*$, where $\alpha^*$ is given in \eqref{biarmonia-ellissoide-alpha*}. If \begin{equation}\label{condition-ellissoide} \sqrt[4]{\frac{4(n-1)^2 }{b(b+1)^2 R^4}} \not\in \n^* \,, \end{equation} then \begin{align}\label{ind-null-ellissoide} & {\rm Nullity}_{\rm red}(\varphi_{\alpha^*})= 0 \\ \nonumber & {\rm Index}_{\rm red}(\varphi_{\alpha^*})=1+2\,\left \lfloor \sqrt[4]{\frac{4(n-1)^2 }{b(b+1)^2 R^4}} \right \rfloor \,,\nonumber \end{align} where $\lfloor x\rfloor$ denotes the integer part of $x \in \R$. If \begin{equation}\label{condition-ellissoide-bis} \sqrt[4]{\frac{4(n-1)^2 }{b(b+1)^2 R^4}}\in \n^* \,, \end{equation} then \begin{align}\label{ind-null-ellissoide-bis} & {\rm Nullity}_{\rm red}(\varphi_{\alpha^*})= 2 \\ \nonumber & {\rm Index}_{\rm red}(\varphi_{\alpha^*})= 1+2\,\left ( \sqrt[4]{\frac{4(n-1)^2 }{b(b+1)^2 R^4}} -1 \right ) \,.\nonumber \end{align} \end{proposition} \begin{remark}\label{remark-ellissoide} We observe that the value $\alpha^*$ in \eqref{biarmonia-ellissoide-alpha*} corresponds to the parallel proper biharmonic hypersphere in $\mathcal{Q}^n(b)$ which was found in \cite{Mont-Ratto2}. We also point out that, for fixed values of $n,R$, there exists $b^*>0$ such that if $ b \geq b^*$, then the reduced index is $1$. By contrast, the reduced index becomes arbitrarily large provided that $b$ is sufficiently small. \end{remark} A natural question is whether variations associated to vector fields in the nullity subspace give rise to variations made of biharmonic maps. We have checked by direct substitution into \eqref{biarmoniatorosfera-generalizzato} that, in general, this is not the case. Actually, variations of this type do not even preserve the bienergy.
For instance, assume $n=2$ and $R=b=1$. Then the variation \[ \alpha_t(\vartheta)=\frac{\pi}{4}+t \sin \vartheta \] gives rise to a vector field in the nullity subspace and a direct computation shows that, up to a constant factor, \[ \frac{d}{dt}\, E_{2,{\rm red}} (\alpha_{t})= \pi\, t -\frac{1}{2} \, \pi \, J_1(4t) \,, \] where $J_1$ denotes the Bessel function of the first kind of order $1$ (see \cite{Bessel} for definitions and properties of Bessel functions). From this it is possible to deduce that \begin{equation*} \left . \frac{d^j}{dt^j}\right |_{t=0} E_{2,{\rm red}} (\alpha_{t})=0\,,\quad \quad j=1,2,3. \qquad \left . \frac{d^4}{dt^4}\right |_{t=0} E_{2,{\rm red}} (\alpha_{t})=12 \pi \,. \end{equation*} In particular, along this direction in the nullity subspace we see that the bienergy has a local minimum.
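The displayed derivative identity can also be checked numerically. The following sketch is our own illustration, not part of the original computation: it evaluates the reduced bienergy of $\alpha_t$ (dropping the constant volume factor) by quadrature and compares a finite-difference derivative with $\pi t-\tfrac{\pi}{2}J_1(4t)$, where $J_1$ is computed from the integral representation $J_1(x)=\tfrac{1}{\pi}\int_0^\pi \cos(\tau-x\sin\tau)\,d\tau$; the resolutions and tolerances are illustrative choices.

```python
import math

def j1(x, n=20000):
    # Bessel function of the first kind of order 1, via the integral
    # representation J_1(x) = (1/pi) * int_0^pi cos(tau - x sin tau) dtau,
    # evaluated with the composite trapezoidal rule.
    h = math.pi / n
    s = 0.5 * (math.cos(0.0) + math.cos(math.pi - x * math.sin(math.pi)))
    for k in range(1, n):
        tau = k * h
        s += math.cos(tau - x * math.sin(tau))
    return s * h / math.pi

def reduced_bienergy(t, n=4000):
    # E(t) = (1/2) int_0^{2 pi} [alpha_t'' - (1/2) sin(2 alpha_t)]^2 dtheta
    # for alpha_t(theta) = pi/4 + t sin(theta), with n = 2 and R = b = 1,
    # dropping the volume factor; trapezoidal rule on the periodic integrand.
    h = 2.0 * math.pi / n
    s = 0.0
    for k in range(n):
        th = k * h
        s += (t * math.sin(th) + 0.5 * math.cos(2.0 * t * math.sin(th))) ** 2
    return 0.5 * s * h

def dE(t, eps=1e-4):
    # central finite difference approximation of dE/dt
    return (reduced_bienergy(t + eps) - reduced_bienergy(t - eps)) / (2.0 * eps)
```

For instance, $dE(t)$ agrees with $\pi t-\tfrac{\pi}{2}J_1(4t)$ to the accuracy of the finite difference, and its vanishing at $t=0$ reflects the first of the displayed derivative conditions.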
\begin{document} \maketitle \begin{abstract} We characterize fundamental domains of affine reflection groups as those polyhedral convex bodies which support a continuous billiard dynamics. We interpret this characterization in the broader context of Alexandrov geometry and prove an analogous characterization for isosceles tetrahedra in terms of continuous quasigeodesic flows. Moreover, we show that for general convex bodies the billiard dynamics is continuous if the boundary is sufficiently regular. In particular, billiard trajectories converge to geodesics on the boundary in this case. Our proof of the latter continuity statement is partly based on Alexandrov geometry methods that we discuss resp. establish first. \end{abstract} \section{Introduction} Billiards are a widely studied subject in dynamics and in mathematics in general with many interesting results and open questions, see e.g. the surveys \cite{Kat02,Ta05,Gut12}. For instance, it is not known if every obtuse triangle admits a periodic billiard trajectory, see e.g. \cite{Sc09}. One cause of difficulty is that billiard trajectories through corners are not well defined. The billiard dynamics usually exhibits discontinuities. Nevertheless, there is a reasonable, though ambiguous, notion of a billiard trajectory with bounces in non-smooth boundary points that makes sense in any dimension, see Section \ref{sub:billiards}. Such billiards have for instance been studied in \cite{BC89,Gh04,BB09}. This generalized notion is also essential in the context of the relationship between certain symplectic capacities and shortest billiard trajectories \cite{Ru22}. Another important topic in mathematics are reflection groups. After their prominent appearance in Lie theory they pervaded branches like algebra, topology and geometry, see e.g. \cite{MT02,Dol08,Da11}. Reflection groups are tied to billiards via the reflection law. 
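To fix ideas, at a smooth boundary point with unit normal $n$ the reflection law replaces an incoming direction $v$ by $v-2\langle v,n\rangle n$. The following minimal sketch is our own illustration of this map, not taken from the text:

```python
def reflect(v, n):
    """Specular reflection of a direction v in a wall with unit normal n:
    the tangential component is kept, the normal component is reversed."""
    dot = sum(vi * ni for vi, ni in zip(v, n))
    return tuple(vi - 2.0 * dot * ni for vi, ni in zip(v, n))
```

Reflection preserves the speed and is an involution; composing the reflections in the two walls of a planar wedge of angle $\pi/k$ gives a rotation by $2\pi/k$, so the group they generate is a finite dihedral group, the simplest instance of the discreteness condition appearing in Theorem \ref{thm:billiard_orbifold}.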
To each polyhedral billiard table one can associate a group generated by the reflections at the table's faces, which encodes interesting properties about the billiard. The case when this group is discrete is sometimes of special interest, e.g. in the context of Teichmüller theory \cite{MT02}. Many powerful methods can only be applied in this case. Here we show that it goes hand in hand with a sufficiently continuous billiard dynamics. \begin{maintheorem}\label{thm:billiard_orbifold} A polyhedral convex body in $\R^n$ admits a continuous billiard evolution if and only if it is an alcove, i.e. the fundamental domain of a discrete affine reflection group. In particular, irreducible such billiard tables are classified by connected (affine) Coxeter--Dynkin diagrams. \end{maintheorem} Roughly speaking, we say that a convex body admits a continuous billiard evolution, if there exists a global choice of billiard trajectories that are defined for all times so that convergence of initial conditions implies pointwise convergence of the trajectories, see Section \ref{sub:billiards} for more details. A convex body in $\R^n$ is an alcove if and only if it is an orbifold, see Proposition \ref{prp:reflection}, i.e. a metric space that is locally isometric to certain model spaces, see Section \ref{sub:orbifold}. In this case the continuous billiard evolution is given by the orbifold geodesic flow which in turn is induced by the geodesic flow of $\R^n$. For more background about orbifold geodesics we refer to e.g. \cite{La18,La19}. An interpretation of Theorem \ref{thm:billiard_orbifold} in the context of Alexandrov geometry will be given further below and in Section \ref{sec:continuous_billiard_quasigeodesic}. In the class of general convex billiard bodies the situation is much more flexible as the following result illustrates. \begin{maintheorem}\label{thm:convex_smooth} Let $K$ be a convex body in $\R^n$. 
For $n=2$ suppose that the boundary of $K$ has three bounded derivatives and is positively curved everywhere. For $n>2$ suppose that the boundary is of class $\mathcal{C}^{4}$ and has a positive definite second fundamental form. Then $K$ admits a continuous billiard evolution. In particular, billiard trajectories whose initial directions converge to a tangent vector of the boundary converge locally uniformly to the corresponding geodesic of the boundary. \end{maintheorem} Moreover, there are also examples of convex bodies in $\R^2$ which admit a continuous billiard evolution, but whose boundary is only of class $\mathcal{C}^1$, see Example \ref{exe:c1_example}. On the other hand, Theorem \ref{thm:convex_smooth} is optimal in dimension $2$ in the sense that there are examples of convex bodies in $\R^2$ whose boundary is positively curved everywhere and has three derivatives, but for which the conclusion of Theorem \ref{thm:convex_smooth} fails, see Proposition \ref{prp_noncontinuous_example}. In higher dimensions the regularity assumption can probably be improved, see in particular Lemma \ref{lem:distance_taylor}. \begin{qst} What is the optimal regularity assumption for the statement in Theorem \ref{thm:convex_smooth} in dimensions $n>2$? \end{qst} Nevertheless, some rigidity remains, at least locally. Namely, in dimension $2$ each boundary point of a convex body that admits a continuous billiard evolution is an orbifold point, see Proposition \ref{prp:local_rigidity_dim2}. Here a boundary point is said to be an orbifold point if its tangent cone is a Riemannian orbifold or, in dimension two, if its opening angle is of the form $\frac{\pi}{n}$ for some natural number $n$. We believe that the same conclusion also holds in higher dimensions. \begin{cnj}\label{conj:orb_point} If a convex body in $\R^n$ admits a continuous billiard evolution (resp. quasigeodesic flow, see below), then all its boundary points are orbifold points. 
\end{cnj} Theorems \ref{thm:billiard_orbifold} and \ref{thm:convex_smooth} admit the following interpretation in the context of Alexandrov geometry. A convex body is an example of an Alexandrov space with nonnegative curvature, and a (generalized) billiard trajectory corresponds to a so-called quasigeodesic on this Alexandrov space. In this context the question about the existence of a continuous billiard evolution naturally generalizes to the question about the existence of an everywhere defined quasigeodesic flow that satisfies a certain continuity condition, see Section \ref{sub:continuity_alex_sense}. A priori, it is shown in \cite{KLP21} that on many Alexandrov spaces (with empty boundary) the geodesic flow is defined almost everywhere for all times and that it is continuous on its domain, see Section \ref{sub:further_properties} for more details. In fact, our proof of Theorem \ref{thm:convex_smooth} in dimensions $n>2$ also relies on methods from Alexandrov geometry. For instance, we show and apply the statement that quasigeodesics of a convex body with sufficiently regular boundary that are contained in the boundary are also quasigeodesics of the boundary with respect to its intrinsic metric, see Lemma \ref{lem:quasigeodesic_boundary}. Another large class of Alexandrov spaces, including those in Theorem \ref{thm:billiard_orbifold}, that admit continuous quasigeodesic flows are quotients of Riemannian manifolds by proper and isometric Lie group actions \cite{LT10}. Like the billiard example, this already indicates that the question of which Alexandrov spaces admit a continuous quasigeodesic flow is complicated in general. Nevertheless, we can say something in the class of polyhedral Alexandrov spaces given by boundaries of convex polyhedral bodies. In this case the same proof as the one of Theorem \ref{thm:billiard_orbifold} shows that a continuous quasigeodesic flow exists if and only if the space is a (flat) orbifold.
The following (semi-)rigidity result classifies such spaces. Here a tetrahedron is called \emph{isosceles} if its opposite edges have equal length. Such a tetrahedron is also known as a \emph{disphenoid}. \begin{maintheorem} \label{thm_3_4_polyhedral} The boundary of a polyhedral convex body in $\R^n$ admits a continuous quasigeodesic flow if and only if it is a Riemannian orbifold with respect to its intrinsic metric. The only polyhedral convex bodies that are bounded by Riemannian orbifolds are isosceles tetrahedra in $\R^3$. \end{maintheorem} Theorem \ref{thm_3_4_polyhedral} adds to a large number of interesting properties and characterizations of isosceles tetrahedra, see e.g. \cite{AP18,Br26,FF07}. The proof of the ``only if'' statement of the first part of Theorem \ref{thm_3_4_polyhedral} works like the ``only if'' part of Theorem \ref{thm:billiard_orbifold} by induction on the dimension, see Section \ref{sec:polyhedral_billiard_table}. The $3$-dimensional case of the second part can for instance be obtained via the Gauß-Bonnet theorem or with Euler's polyhedral formula, see Section \ref{sub:_prove_C_dim3}. Examples in higher dimensions will be ruled out by a comparison of singular strata with respect to the orbifold structure and the polyhedral structure, see Section \ref{sub:proof_C_dimh}. Finally, we point out that the everywhere defined continuous quasigeodesic flows in Theorems \ref{thm:billiard_orbifold}, \ref{thm:convex_smooth} and \ref{thm_3_4_polyhedral} are unique, see Section \ref{sub:further_properties}. \subsection{Structure of the paper} In Section \ref{sec:preliminaries} we collect some preliminaries that are needed later in the paper. Theorem \ref{thm:billiard_orbifold} about continuous billiards on polyhedral convex bodies is then proved in Section \ref{sec:polyhedral_billiard_table}. To read it the subsections of Section \ref{sec:preliminaries} about Alexandrov spaces and quasigeodesics can be skipped.
The latter are required in Section \ref{sec:continuous_billiard_quasigeodesic} where we generalize the discussion to arbitrary convex bodies and Alexandrov spaces. Some of these considerations are then applied in Section \ref{sec:general_convex} in the proof of Theorem \ref{thm:convex_smooth}. Finally, in Section \ref{sec:boundary_polyhedral} we prove Theorem \ref{thm_3_4_polyhedral}. While the formulation of Theorem \ref{thm_3_4_polyhedral} relies on the notion of a quasigeodesic, Section \ref{sec:boundary_polyhedral} can be read independently from the sections about Alexandrov spaces and quasigeodesics, either by taking the first part of Theorem \ref{thm_3_4_polyhedral} for granted or by taking the characterization of quasigeodesics in Lemma \ref{cor:quasigeodesic_polyhedral} as a definition. \newline \newline \textbf{Acknowledgements.} This work came into being while the author was visiting ENS de Lyon and Ruhr-Universität Bochum. He thanks the geometry and dynamics groups there for their hospitality. He also would like to thank Alexander Lytchak and Artem Nepechiy for discussions about Alexandrov spaces and quasigeodesics. Moreover, he is grateful to Luca Asselle, Florian Lange, Bernhard Leeb, Alexander Lytchak, Anton Petrunin and Daniel Rudolf for useful comments and hints to the literature, respectively. \section{Preliminaries}\label{sec:preliminaries} \subsection{Billiards on polyhedral convex bodies} \label{sub:billiards} As a warm-up we consider billiards on (polyhedral) convex bodies. At a boundary point with a unique tangent plane, a billiard trajectory is reflected according to the usual reflection law, i.e. the angle of incidence equals the angle of reflection. We would like to have a reasonable notion of billiard trajectories that may pass through corners of the boundary of the table. One criterion should be that billiard trajectories are closed under pointwise limits. We first introduce the following notions related to a convex body $K\subset \R^n$. 
The \emph{tangent cone} of $K$ at a point $p$ in $K$ is defined to be \[ T_p K = \overline{\left\langle q-p \mid q \in K \right\rangle_{\R_{\geq 0}} }, \] i.e. the closure of the cone of all nonnegative multiples of $q-p$, $q\in K$. The \emph{normal cone} at $p$ is defined to be \[ N_p K = \{ v \in \R^n \mid \left\langle v, u \right\rangle \leq 0 \text{ for all } u \in T_p K \}. \] A point $p$ in $K$ lies in the interior of $K$ if and only if $N_p K$ is trivial, i.e. consists only of the origin. A boundary point for which $N_p K$ is $1$-dimensional is called \emph{smooth}. Two vectors $u,v \in T_p K$ are called \emph{polar} if $-(u+v) \in N_p K$. If $p$ is a smooth boundary point then for any unit vector $v\in T_p K$ there exists a unique polar unit vector $u\in T_p K$ and this correspondence specifies the reflection law at $p$. Let us record the following characterization. \begin{lem}\label{lem:polar_characterization} The following conditions are equivalent for two unit vectors $u,v \in T_p K$. \begin{compactenum} \item $u$ and $v$ are polar, i.e. $\left\langle u,w\right\rangle + \left\langle v,w\right\rangle\geq 0$ for all $w\in T_p K$. \item There exists a supporting hyperplane of $K$ at $p$ orthogonal to $u+v$. \item $\angle (u,w) + \angle (v,w) \leq \pi$ for all $w \in T_p K$, $w \neq 0$. \end{compactenum} \end{lem} Here $\angle (u,w)$ denotes the angle between $u$ and $w$. For a path $c:I \To K$ we denote by $c^+(t_0)$ the right derivative of $c$ at $t_0$ and by $c^-(t_0)$ the right derivative of $t \mapsto c(t_0-t)$ at $0$ if they exist. For now we will restrict ourselves to the case of a \emph{polyhedral} convex body $K$ \cite{Al05}. It will be more natural to consider the general case in the context of Alexandrov geometry, see Section \ref{sec:continuous_billiard_quasigeodesic}. \begin{dfn} \label{dfn:billiard_trajectory} Let $K$ be a polyhedral convex body.
A continuous path $c:\R\supset I \To K$ parametrized proportional to arclength is called \emph{billiard trajectory}, if it is locally length minimizing except at a discrete set of times $\mathcal{T}\subset I$ such that for each $t \in \mathcal{T}$ the vectors $c^+(t)$ and $c^-(t)$ are polar. \end{dfn} In particular, each constant path is a billiard trajectory, and in the $2$-dimensional case a parametrization of the boundary of $K$ is a billiard trajectory. Moreover, Lemma \ref{lem:polar_characterization}, $(i)$ implies that pointwise limits of billiard trajectories are indeed again billiard trajectories. We say that a billiard trajectory $c$ \emph{bounces} at time $t$ if $c^+(t) \neq -c^-(t)$. Billiard trajectories on polytopes are tame in the following sense. \begin{lem} \label{lem:collision_estimate} On a polyhedral convex body $K$ bounce times do not accumulate. \end{lem} \begin{proof} Suppose the bounce times of a billiard trajectory $c:[0,t_0) \To K$ on a polyhedral convex body $K$ accumulate at time $t_0$ and let $p$ be the limit of $c(t)$ as $t$ tends to $t_0$. We can assume that $K=T_pK$. Instead of following the billiard trajectory, we can follow a straight line and reflect the table at a bounce time at the respective supporting hyperplane, see Lemma \ref{lem:polar_characterization}, $(ii)$. Since these reflections fix the point $p$, the only possibility that $c$ runs into $p$ is that the straight line passes through $p$. However, in this case the billiard trajectory experiences only a single bounce near $p$. This contradiction completes the proof of the lemma. \end{proof} Alternatively, the statement can be deduced from \cite{Sin78} which provides a constant $C$ such that any regular billiard trajectory on a tangent cone of a polyhedral convex body experiences at most $C$ bounces. 
Moreover, in a similar way one can deduce from \cite[Corollary~1]{BFK98} that there exists a constant $C$ only depending on $K$ such that any unit speed billiard trajectory experiences at most $C(t+1)$ bounces in any time interval of length $t$. In particular, each billiard trajectory can be extended for all times. While the latter is still true in more general situations, see Section \ref{sub:quasigeodesics}, Lemma \ref{lem:collision_estimate} may fail on general convex (even smooth) tables, see Section \ref{sub:optimality} and \cite{Hal77}. It will be more natural to consider such cases in the context of Alexandrov geometry, see Section \ref{sub:Alex_spaces}, as already pointed out. The tangent cone bundle $TK$ of $K$ is defined to be the union of all tangent cones $T_pK$ of $K$ and it inherits a subspace topology from $T\R^n$. By a \emph{billiard flow} we mean a map $\phi: TK \times \R \To TK$ with $\phi(\cdot,0)=\mathrm{id}_{TK}$ such that for each $(p,v) \in TK$ the map $ \R \ni t \mapsto \pi(\phi((p,v),t)) \in K$ is a billiard trajectory with initial conditions $\phi((p,v),t)$ at time $t$, where $\pi: TK \To K$ is the natural projection. We say that $K$ admits a \emph{continuous billiard evolution}, if there exists a billiard flow for $K$ such that the composition $\pi\circ \phi$ is continuous. We also call $K$ continuous, if it admits a continuous billiard evolution. Observe that two convex bodies $K_1\subset \R^n$ and $K_2\subset \R^m$ are continuous billiard tables if and only if $K_1 \times K_2 \subset \R^{m+n}$ is so. Examples of polyhedral continuous billiard tables will be constructed in Section \ref{sec:polyhedral_billiard_table}. \subsection{Riemannian orbifolds}\label{sub:orbifold} An \emph{$n$-dimensional Riemannian orbifold} is a metric length space $\Orb$ such that each point in $\Orb$ has a neighborhood that is isometric to the quotient of an $n$-dimensional Riemannian manifold $M$ by an isometric action of a finite group $\Gamma$ \cite{La20}.
For a point $p$ in a Riemannian orbifold $\Orb$ the isotropy group of a preimage of $p$ in a Riemannian manifold chart is uniquely determined up to conjugation. Its conjugacy class in $\Or(n)$ is called the \emph{local group} of $\Orb$ at $p$ and we also denote it as $\Gamma_p$. The point $p$ is called \emph{regular} if this group is trivial and \emph{singular} otherwise. More precisely, an orbifold admits a stratification into manifolds, where the stratum of codimension $k$ is given by \[ \Sigma_k = \{ p\in \Orb \mid \mathrm{codim} \mathrm{Fix}(\Gamma_p)=k \}. \] In particular, $\Sigma_0$ is the set of regular points. Examples of Riemannian orbifolds arise as quotients of Riemannian manifolds by isometric and proper actions of discrete groups. Riemannian orbifolds that can be obtained in this way are called \emph{good} or \emph{developable}. The quotient map from the manifold to the orbifold is then an instance of a Riemannian orbifold covering, cf. e.g. \cite{La20}. It can be useful to have criteria for an orbifold to be good, cf. \cite{LR21}. For instance, if a complete Riemannian orbifold has constant curvature, meaning that all local manifold charts have constant curvature, then it is good \cite{MM91}. This fact has for instance been applied in \cite{Le15} and it will also be used in the proofs of Theorems \ref{thm:billiard_orbifold} and \ref{thm_3_4_polyhedral}. \subsection{Affine reflection groups}\label{sub:affine_reflection} An \emph{affine reflection group} is a discrete subgroup of the isometry group of a Euclidean vector space $\R^n$ that is generated by reflections. We also assume that it acts cocompactly. Affine reflection groups have been classified by Coxeter \cite{Co34}. Their classification can be conveniently stated in terms of (affine) \emph{Coxeter--Dynkin diagrams}. In our context they arise in the following way.
\begin{prp} \label{prp:reflection} A polyhedral convex body $K$ in $\R^n$ is an orbifold if and only if it is isometric to a quotient of $\R^n$ by an affine reflection group. \end{prp} \begin{proof} Suppose a polyhedral convex body $K$ in $\R^n$ is an orbifold. Then $K$ is actually a complete flat orbifold. Hence, it is a quotient of $\R^n$ by a discrete subgroup $\Gamma$ of the isometry group of $\R^n$. We can lift $K$ to a copy $K'$ in $\R^n$. The group $\Gamma$ must contain the reflections at the faces of $K'$. Let $\Gamma'$ be the subgroup of $\Gamma$ which is generated by these reflections. Since the $\Gamma'$-translates of $K'$ cover $\R^n$, we have $\Gamma=\Gamma'$. Conversely, if $\Gamma$ is an affine reflection group acting on $\R^n$, then a fundamental domain of this action is given by \begin{equation*} \begin{split} \Lambda &= \{ q \in \R^n \mid d(p,q)\leq d(p,gq) \text{ for all } g \in \Gamma \} \\ &= \bigcap_{g\in \Gamma, \; \mathrm{codim}(\mathrm{Fix}(g))=1} \{ q \in \R^n \mid d(p,q)\leq d(p,gq) \}, \end{split} \end{equation*} where $p$ is a point in $\R^n$ that is not fixed by any $g\in \Gamma$ \cite[Theorem~4.9]{Hum90}. It follows that $\R^n/\Gamma$ is isometric to $\Lambda$ and the latter is clearly a polyhedral convex body in $\R^n$. \end{proof} \subsection{Metric constructions} \label{sub:metric_constructions} For a metric space $(X,d)$ with $\mathrm{diam}(X)\leq \pi$ a metric $d_c$ on the open cone $CX:=(X\times [0,\infty)) / \sim$, where $\sim$ collapses $X\times \{0\}$ to a point, can be defined as follows, cf. \cite[Def.~3.6.12., Prop.~3.6.13]{MR1835418}. For $q,p \in CX$ with $p=(x,t)$ and $q=(y,s)$ set \[ d_c(p,q)=\sqrt{t^2+s^2 - 2ts \cos(d(x,y))}. \] The space $(CX,d_c)$ is referred to as the \emph{Euclidean cone} of $(X,d)$. If $X$ is the unit sphere in $\R^n$ with its induced length metric, then $(CX,d_c)$ is naturally isometric to $\R^n$. 
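This last observation can be tested numerically in the lowest-dimensional case: the Euclidean cone over the unit circle is the Euclidean plane in polar coordinates. The following sketch is an illustration only and not part of the text; the function name \texttt{cone\_dist} is ours.

```python
import math

def cone_dist(x, t, y, s):
    """Euclidean cone metric d_c over X = unit circle with its angular
    (length) metric; a cone point is (direction x, radius t)."""
    d = min(abs(x - y), 2 * math.pi - abs(x - y))  # intrinsic distance on S^1
    return math.sqrt(t * t + s * s - 2 * t * s * math.cos(d))

# The cone over the unit circle is the Euclidean plane, so the formula
# must agree with the planar distance in polar coordinates:
planar = math.dist((2.0 * math.cos(1.2), 2.0 * math.sin(1.2)),
                   (0.7 * math.cos(2.9), 0.7 * math.sin(2.9)))
```

Here the agreement of \texttt{cone\_dist} with \texttt{planar} is exactly the law of cosines that underlies the definition of $d_c$.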
The assumption $\mathrm{diam}(X)\leq \pi$ of this construction is in particular satisfied for Alexandrov spaces (see Section \ref{sub:Alex_spaces}) with curvature $\geq 1$ \cite[Thm.~10.4.1]{MR1835418}. An isometric action of a group $\Gamma$ on $X$ induces an isometric action of $\Gamma$ on $CX$ in the obvious way and the metric spaces $CX/\Gamma$ and $C(X/\Gamma)$ are isometric. For a metric space $X$ with a closed subspace $Y$ the natural metric on the double of $X$ along $Y$ is for instance described in \cite[Section~4]{La20} and the references therein. \subsection{Alexandrov spaces}\label{sub:Alex_spaces} The following two subsections might be skipped on first reading. An \emph{Alexandrov space} with curvature bounded below by $\kappa$ is, roughly speaking, a complete, locally compact, geodesic metric space in which triangles are not thinner than their comparison triangles in the model plane of constant curvature $\kappa$, see e.g. \cite{MR1835418,BGP92} for more details. The Hausdorff dimension of such a space is always an integer or infinite. In the following we always assume that the dimension is finite. We denote the class of $n$-dimensional Alexandrov spaces with curvature bounded below by $\kappa$ equipped with the $n$-dimensional Hausdorff measure by $\Alex^n(\kappa)$. Examples of Alexandrov spaces with nonnegative curvature are given by boundaries of convex bodies and by metric doubles of convex bodies along their boundary, cf. \cite[Thm.~10.2.6]{MR1835418}. Another class of examples of Alexandrov spaces is given by quotients of compact Riemannian manifolds by isometric actions of compact Lie groups. In particular, compact Riemannian orbifolds are Alexandrov spaces. For any pair of points $x$ and $y$ in an Alexandrov space $X$ there exists by definition a geodesic, i.e. a distance realizing path, that connects $x$ with $y$. We denote such a geodesic by $[xy]$, although it is in general not unique.
Consider \[ \Sigma_x' := \{[xy] \mid y \in X \backslash \{x\} \}/ \sim, \] where the equivalence relation is defined such that $[xy] \sim [xz]$ if and only if $[xy] \subset [xz]$ or $[xz] \subset [xy]$. We point out that geodesics in $X$ cannot branch. The measurement of angles defines a metric on $\Sigma_x'$ \cite[§4.3]{MR1835418}. More precisely, the angle between two geodesics $\gamma, \gamma' : [0,\varepsilon) \To X$ starting at $x$ is defined to be \[ \limsup_{t,s \downarrow 0} \angle_x (\gamma(s),\gamma'(t)). \] Here $\angle_p(q,q')$ denotes the comparison angle $\tilde\angle_{\kappa}(|pq|,|qq'|,|pq'|)$ of a comparison triangle with side lengths $|pq|$, $|qq'|$ and $|pq'|$ in the model plane of constant curvature $\kappa$ opposite to the comparison side of $qq'$. For $\kappa=0$, for instance, the law of cosines yields the explicit formula $\tilde\angle_{0}(a,b,c)=\arccos \frac{a^2+c^2-b^2}{2ac}$. The \emph{space of directions} $\Sigma_x$ of $X$ at $x$ is defined to be the metric completion of $\Sigma_x'$. The Euclidean cone over the space of directions is called the \emph{tangent cone} of $X$ at $x$ and is denoted by $T_x X$. Alternatively, the tangent cone $T_xX$ can be obtained as a pointed Gromov-Hausdorff limit of rescaled versions of $X$ \cite[Theorem 7.8.1]{BGP92}. If $X \in \Alex^{n}(\kappa)$ for some $\kappa$, then $\Sigma_x \in \Alex^{n-1}(1)$ and $T_x X\in \Alex^{n}(0)$. The boundary of an Alexandrov space can be defined inductively via the spaces of directions. A point belongs to the boundary if and only if the boundary of its space of directions is non-empty. The interior is the complement of the boundary. The two classes of examples arising from convex bodies mentioned above have empty boundary. \subsection{Quasigeodesics}\label{sub:quasigeodesics} Geodesics in Riemannian manifolds can be characterized among curves parametrized by arclength in terms of a certain concavity condition \cite{QG95}. A curve in an Alexandrov space parametrized by arclength that satisfies this condition need not be a geodesic and is called a quasigeodesic.
Here we recall a definition of quasigeodesics in terms of so-called developments from \cite{Pe07,QG95}. Consider a curve $\gamma:[a,b] \To X$ parametrized by arclength in an Alexandrov space $X\in \Alex^n(\kappa)$. We pick a point $p\in X\backslash \gamma$ and assume that $0<|p\gamma(t)|<\pi/ \sqrt{\kappa}$ for all $t \in [a,b]$ if $\kappa>0$. Then, given a reference point $o$, up to rotation there exists a unique curve $\tilde\gamma :[a,b] \To S_{\kappa}$ parametrized by arclength in the model plane $S_{\kappa}$ of constant curvature $\kappa$ such that $|o\tilde \gamma(t)| = |p \gamma(t)|$ for all $t$ and the segment $o\tilde \gamma(t)$ turns clockwise as $t$ increases. The curve $\tilde\gamma$ is called the \emph{development} of $\gamma$. We call a curve $\gamma:[a,b] \To X$ a (local) \emph{quasigeodesic} if for any $t_0 \in [a,b]$ there exists a neighborhood $U$ of $\gamma(t_0)$ and an $\varepsilon>0$ with $\gamma([t_0-\varepsilon,t_0+\varepsilon])\subset U$ such that the development $\tilde\gamma$ of the restriction $\gamma_{|[t_0-\varepsilon,t_0+\varepsilon]}$ with respect to any point in $U$ is convex in the sense that for every $t\in (t_0-\varepsilon,t_0+\varepsilon)$ and for every $\tau>0$ the region bounded by the segments $o\tilde\gamma(t \pm \tau)$ and the arc $\tilde\gamma_{|[t-\tau,t+\tau]}$ is convex whenever it is defined. Quasigeodesics have nice properties. For instance, they are unit speed curves \cite[Thm.~7.3.3]{Pe07} and for any point $x \in X$ and any direction $\xi \in \Sigma_x$ there exists a quasigeodesic $\gamma$ with $\gamma(0)=x$, $\gamma^+(0)=\xi$. Here $\gamma^+(0)$ is defined to be the limit in $\Sigma_x$ of the directions $[x\gamma(t)]$ for $t\searrow 0$, cf. \cite[Thm.~A.0.1]{Pe07}. Left and right derivatives $\gamma^-(t_0)$ and $\gamma^+(t_0)$ of a quasigeodesic $\gamma$ always exist and are polar, i.e. $\angle (\gamma^-(t_0),w)+ \angle ( \gamma^+(t_0),w) \leq \pi$ for any $w\in \Sigma_{\gamma(t_0)}$ \cite[Section~2.2]{QG95}.
Conversely, if $\gamma_1:(s_0,0] \To X$ and $\gamma_2:[0,t_0) \To X$ are quasigeodesics such that $\gamma_1(0)=\gamma_2(0)$ and such that $\gamma_1^-(0)$ and $\gamma_2^+(0)$ are polar, then also the concatenation of $\gamma_1$ and $\gamma_2$ is a quasigeodesic. Moreover, pointwise limits of quasigeodesics are quasigeodesics. Let us also record the nontrivial statement that quasigeodesics can be extended for all times \cite{QG95,Pe07}. Finally, we repeat the following two statements from \cite[2.3]{QG95}. \begin{lem}\label{lem:unique_geodesic} If a geodesic starts in a given direction in an Alexandrov space $X$, then any quasigeodesic with the same initial direction coincides with it for some positive time. \end{lem} \begin{cor}\label{cor:quasigeodesic_Riemannian} A quasigeodesic in a Riemannian manifold $M$ is a geodesic. \end{cor} \section{Polyhedral billiard tables} \label{sec:polyhedral_billiard_table} Let us first show the if direction of Theorem \ref{thm:billiard_orbifold}. \begin{prp} \label{prp:polyhedral_continuous} A polyhedral convex body $K$ in $\R^n$ that is a Riemannian orbifold admits a continuous billiard evolution. \end{prp} \begin{proof} By Proposition \ref{prp:reflection} we can realize $K$ as the quotient of $\R^n$ by an affine reflection group $\Gamma$. We claim that the geodesic flow on $\R^n$ induces a billiard flow on $K$. To see this, consider a geodesic $c$ in $\R^n$. If at some time $t$ it does not intersect the fixed-point subspace of an element in $\Gamma$ transversely, then the image of this geodesic in the quotient is locally length minimizing at time $t$. Since $\Gamma$ is discrete, the set of times where this condition is not satisfied is discrete. Since the induced quotient maps $ T^1_p \R^n \To T^1 (\R^n/\Gamma_p)=(T^1 \R^n)/\Gamma_p$ are $1$-Lipschitz, it follows that geodesics in $\R^n$ project to billiard trajectories in $K$, see Lemma \ref{lem:polar_characterization}, $(ii)$.
Moreover, since the geodesic flow on $\R^n$ is continuous, the billiard flow induced in this way yields a continuous billiard evolution. \end{proof} The billiard evolutions induced by this construction on equilateral triangles and on rectangles in $\R^2$ are illustrated in Figure \ref{fig:2d_continuous_billiards}. \begin{figure} \centering \def\svgwidth{0.7\textwidth} \input{2d_billiards.pdf_tex} \caption{Continuous billiard tables in $\R^2$ of type $A_1 \times A_1$ and $A_2$.} \label{fig:2d_continuous_billiards} \end{figure} Now we will prove the only if direction of Theorem \ref{thm:billiard_orbifold}. By Proposition \ref{prp:reflection} it is sufficient to show that a convex body which admits a continuous billiard evolution is an orbifold. In dimension 1 the claim holds trivially. For reasons that become clear soon, we need to start the induction in dimension $2$. \begin{figure} \centering \def\svgwidth{0.9\textwidth} \input{corner2.pdf_tex} \caption{Reflection at a corner with opening angle $\frac{\pi}{4}<\frac{2\pi}{7}<\alpha <\frac{\pi}{3}$. The boundary of the table is depicted in solid. On the unfolded table a billiard trajectory that does not hit the corner corresponds to a straight line. a) The two reflections that are obtained as limits of billiard trajectories that approximate the bisector in parallel from above (solid) and from below (dashed). b) Approximation of the bisector from above by a parallel trajectory that does not hit the corner.} \label{fig:2d_corner} \end{figure} \begin{lem} \label{lem:dim2_billiard} A polyhedral convex body $K$ in $\R^2$ that admits a continuous billiard evolution is an orbifold. The only possible billiard table shapes are rectangles and triangles with interior angles $(\pi/3,\pi/3,\pi/3)$, $(\pi/2,\pi/4,\pi/4)$ and $(\pi/2,\pi/3,\pi/6)$ corresponding to the affine reflection groups of type $A_1\times A_1$, $A_2$, $BC_2$ and $G_2$. \end{lem} \begin{proof} Let $K$ be a polyhedral convex body as in the statement of the lemma.
It is sufficient to show that $K$ is an orbifold in a neighborhood of each corner. Suppose there is a corner where this is not the case. The opening angle $\alpha$ of this corner then satisfies $\frac{\pi}{n+1} < \alpha < \frac{\pi}{n}$ for some $n\in \N_{\geq 2}$. Figure \ref{fig:2d_corner} illustrates this situation in the case $n=3$. Consider a billiard trajectory that runs into the corner along the bisector. We approximate this trajectory by parallel trajectories that approach the corner slightly above and slightly below the bisector. The approximation from above is illustrated in Figure \ref{fig:2d_corner}, (b). We can understand the continuations of these billiard trajectories by unfolding: instead of reflecting the trajectories we continue them as straight lines and reflect the table at its faces, as shown in Figure \ref{fig:2d_corner}. In particular, this shows that the approximating trajectories do not hit the corner and that their continuations are thus uniquely defined. Let $\beta$ be the angle at the corner between the continuation of the bisector and the first cushion in the development of the table that does not point into the interior of the upper half plane, see Figure \ref{fig:2d_corner}, (a). In formulas, $\beta = (\frac 1 2 + m) \alpha - \pi$, where $m$ is the minimal integer such that this expression is nonnegative. Depending on whether $n$ is even or odd, the limit of the billiard trajectories that approximate the bisector from above will then be reflected at the corner and form an angle of $\beta$ with the lower or upper face of the table, respectively, see Figure \ref{fig:2d_corner}, (a). Because of the $\Z_2$-symmetry with respect to the horizontal bisector, the limits of the two approximating sequences of the bisector are mirror images of each other with respect to the bisector as well. Therefore the two reflections coincide if and only if the angle $\beta$ satisfies $\beta=\frac \alpha 2$. This implies $\alpha = \frac{\pi}{m}$, in contradiction to our assumption.
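The choice of $m$ and the resulting angle $\beta$ can be checked numerically. The following sketch is only an illustration of the preceding formula (the function name \texttt{limit\_angle} is ours): for an orbifold angle $\alpha = \pi/n$ it returns $\beta = \alpha/2$, so the two limit reflections agree, while for other angles they differ.

```python
import math

def limit_angle(alpha):
    """beta = (1/2 + m)*alpha - pi for the minimal integer m
    making this expression nonnegative."""
    m = math.ceil(math.pi / alpha - 0.5)
    return (0.5 + m) * alpha - math.pi

# Orbifold corner alpha = pi/3: beta equals alpha/2.
assert abs(limit_angle(math.pi / 3) - math.pi / 6) < 1e-9

# Non-orbifold corner alpha = 0.3*pi (so pi/4 < alpha < pi/3, n = 3):
beta = limit_angle(0.3 * math.pi)   # beta = 0.05*pi, while alpha/2 = 0.15*pi
```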
The second claim follows from the classification of compact flat $2$-orbifolds. Explicitly, one can argue as follows. Let $k\geq 3$ be the number of corners and denote by $n_1,\ldots, n_k \in \N_{\geq 2}$ the natural numbers that specify the respective opening angles via the relation $\alpha=\frac{\pi}{n}$. Since the interior angles sum up to $(k-2)\pi$ we obtain the relation \[ \frac{1}{n_1}+ \ldots + \frac{1}{n_k} = k-2. \] Since each summand is at most $\frac{1}{2}$, this implies $k\leq 4$. For $k=4$ all $n_i$ must equal $2$, which yields rectangles, while for $k=3$ the only solutions up to order are $(n_1,n_2,n_3)=(3,3,3)$, $(2,4,4)$ and $(2,3,6)$. The tables of type $A_1\times A_1$ and $A_2$ are depicted in Figure \ref{fig:2d_continuous_billiards}. \end{proof} Now we can prove the general case. \begin{prp} \label{prp:dim_high} A polyhedral convex body $K$ in $\R^n$ that admits a continuous billiard evolution is an orbifold. \end{prp} \begin{proof} The proof is by induction on the dimension. Let $K$ in $\R^n$, $n\geq 3$, be as in the statement of the proposition. By Lemma \ref{lem:dim2_billiard} and by induction we can assume that the statement of the proposition is true in all lower dimensions. Let $p \in K$ be a point that lies in a face $\sigma$ of codimension $d<n$. The intersection of $K$ with a $d$-dimensional plane $H$ through $p$ orthogonal to $\sigma$ must then be an orbifold in a neighborhood of $p$. Indeed, we can consider the billiard flow restricted to $K \cap H$ in a neighborhood of $p$ and apply the induction assumption. Since a neighborhood of $p$ in $K$ splits as a product of a neighborhood in $K \cap H$ and an open set in $\R^{n-d}$, it follows that $K$ is an orbifold in a neighborhood of $p$. At this point the claim follows from \cite[Proposition~2.16]{Le15}, which is itself proved by induction on the dimension. For the convenience of the reader we sketch the last step of the argument: It remains to show that $K$ is an orbifold in a neighborhood of its vertices.
To do so one observes that the intersection of $K$ with a sufficiently small sphere centered at such a vertex $p$ is a Riemannian orbifold $\Orb$ of constant curvature $1$, since all its tangent cones are orbifolds by what has already been shown. The universal covering $\tilde \Orb$ of $\Orb$ is then a sphere of constant curvature $1$, and the Euclidean cone over $\tilde \Orb$ is isometric to $\R^n$. Since a neighborhood of $p$ in $K$ is isometric to a neighborhood of the tip of the Euclidean cone over $\Orb$, the claim follows, cf. Section \ref{sub:metric_constructions}. Hence, the table $K$ is an orbifold as claimed. \end{proof} Proposition \ref{prp:dim_high} completes the proof of Theorem \ref{thm:billiard_orbifold}. \section{Continuous quasigeodesic flows} \label{sec:continuous_billiard_quasigeodesic} \subsection{From billiard to quasigeodesic flows} By a unit speed \emph{billiard trajectory} on a convex body $K$ we simply mean a quasigeodesic on $K$. This definition is justified by the following lemma. \begin{lem}On a polyhedral convex body $K$ a curve parametrized by arclength is a billiard trajectory in the sense of Definition \ref{dfn:billiard_trajectory} if and only if it is a quasigeodesic. \end{lem} \begin{proof} Recall from Section \ref{sub:quasigeodesics} that the concatenation of two curves parametrized by arclength is a quasigeodesic if and only if both curves are quasigeodesics and the initial direction of the second curve is polar to the initial direction of the reversed first curve. Now the claim follows from the fact that quasigeodesics in a Riemannian manifold, in particular in the interior of $K$, are geodesics \cite[Corollary~2.3]{QG95}, and that bounce times of billiard trajectories and quasigeodesics do not accumulate by Lemma \ref{lem:collision_estimate}.
\end{proof} More generally, a compact Alexandrov space is called \emph{polyhedral} if it admits a triangulation such that each simplex is globally isometric to a simplex in Euclidean space, cf. \cite{LP15,Le15}. In particular, (boundaries and doubles of) polyhedral convex bodies are examples of polyhedral Alexandrov spaces. \begin{lem}\label{cor:quasigeodesic_polyhedral} A continuous curve $c$ in a polyhedral Alexandrov space is a quasigeodesic if and only if it is locally distance realizing except at a discrete set of times $\mathcal{T}$ such that for each $t \in \mathcal{T}$ the vectors $c^-(t)$ and $c^+(t)$ are polar. \end{lem} \begin{proof} If the image under a quasigeodesic $c$ of a small neighborhood around some time $t$ is contained in a simplex, then $c$ is locally length realizing on this interval by the fact that quasigeodesics in Riemannian manifolds are geodesics \cite[Corollary~2.3]{QG95}. The fact that times for which this is not the case do not accumulate follows as in the proof of Lemma \ref{lem:collision_estimate}. \end{proof} Given the notion of a billiard trajectory on a convex body, we can define the notions of a billiard flow and of a continuous billiard evolution precisely as in Section \ref{sub:billiards}. For general Alexandrov spaces we make the analogous definition: By a \emph{quasigeodesic flow} we mean a map $\Phi: TX \times \R \To TX$ with $\Phi(\cdot,0)=\mathrm{id}_{TX}$ such that for each $(p,v) \in TX$ the map $\R \ni t \mapsto \pi(\Phi((p,v),t)) \in X$ is constant or a constant speed curve with initial conditions $\Phi((p,v),t)$ at time $t$ that becomes a quasigeodesic when reparametrized by arclength. Here $\pi: TX \To X$ is the natural projection. \subsection{Continuity in the Alexandrov sense}\label{sub:continuity_alex_sense} We suggest the following continuity notion in order to talk about continuous quasigeodesic flows.
We say that the initial directions of a sequence of quasigeodesics $\gamma_i :[0,a)\rightarrow X$ \emph{converge (in the Alexandrov sense)} to the initial direction of a quasigeodesic $\gamma: [0,a)\rightarrow X$ if $\gamma_i(0)\rightarrow \gamma(0)=:p$ and \[ \limsup_{t,s \downarrow 0} \limsup_{i\rightarrow \infty} \angle_p (\gamma_i(s),\gamma(t)) = 0. \] In particular, this condition is satisfied if the $\gamma_i$ converge to a quasigeodesic with the same initial condition as $\gamma$. The following equivalence was suggested by Lytchak. \begin{lem}\label{lem:char_conv_alex} The initial directions of a sequence of quasigeodesics $\gamma_i :[0,a)\rightarrow X$ converge in the Alexandrov sense to the initial direction of a quasigeodesic $\gamma: [0,a)\rightarrow X$ if and only if $\gamma_i(0)$ converges to $\gamma(0)$ and the initial direction of the limit of any convergent subsequence of the $\gamma_i$ (which is again a quasigeodesic) coincides with the initial direction of $\gamma$. \end{lem} \begin{proof} The only if direction follows from the observation that the initial directions of a sequence of quasigeodesics $\gamma_i :[0,a)\rightarrow X$ cannot converge in the Alexandrov sense to two distinct initial directions. The contraposition of the if direction follows by passing to a convergent subsequence whose initial directions do not converge to the initial direction of $\gamma$ in the Alexandrov sense either. \end{proof} In order to compare this convergence notion with others, we prove the following lemma. \begin{lem}\label{lem:convergence_initial_directions} Let $K$ be a convex subset of a Riemannian manifold. Suppose a sequence of quasigeodesics $\gamma_i:[0,a) \To K$ converges to a quasigeodesic $\gamma:[0,a) \To K$ and that $\gamma^+(0)$ is contained in an $\R$-factor of $T_{\gamma(0)} K$. Then the initial directions $\gamma_i^+(0)$ converge to the initial direction $\gamma^+(0)$ with respect to the subspace topology of $TK$.
\end{lem} \begin{proof}Set $p=\gamma(0)=\lim_{i\rightarrow \infty} \gamma_i(0)$. Since $\gamma^+(0)$ is contained in an $\R$-factor of $T_{\gamma(0)}K$, we can choose a sequence of points $p_j\in K$ that converge to $p$ from the direction $-\gamma^+(0)$, i.e. initial directions of minimizing geodesics from $p$ to $p_j$ converge to $-\gamma^+(0)$. Now suppose the conclusion of the lemma does not hold. In this case we can assume that the $\gamma^+_i(0)$ converge to another direction $v\neq \gamma^+(0)$ in $T_pK$ with $\angle_p(v,\gamma^+(0))>\varepsilon$ for some $\varepsilon>0$. We can choose a small $t_0>0$ and a large $j$ such that $\angle_p(p_j,\gamma(t_0)) > \pi-\varepsilon/200$. Then for all sufficiently large $i$ we have $\angle_{\gamma_i(0)}(p_j,\gamma_i(t_0)) > \pi-\varepsilon/100$. Moreover, for sufficiently large $i$ and sufficiently small $t_1>0$ we have $\angle_{\gamma_i(0)}(p_j,\gamma_i(t_1)) < \pi- \varepsilon/2$. Hence, the comparison angle $[0,t_0)\ni t \mapsto \angle_{\gamma_i(0)}(p_j,\gamma_i(t))$ fails to be non-increasing, in contradiction to the fact that $\gamma_i$ is a quasigeodesic, see \cite[5.(iii), p.~36]{Pe07}. \end{proof} We deduce \begin{cor} \label{cor:convergence_sub_alex}Let $K$ be a convex body in $\R^n$. Suppose the initial directions $\gamma_i^+(0)$ of a sequence of quasigeodesics $\gamma_i:[0,a) \To K$ converge to the initial direction $\gamma^+(0)$ of a quasigeodesic $\gamma:[0,a) \To K$ with respect to the subspace topology of $TK$. Then convergence also holds in the Alexandrov sense. \end{cor} \begin{proof} After possibly enlarging $K$, we can assume that $\gamma^+(0)$ is contained in an $\R$-factor of $T_p K$, $p=\gamma(0)$. Let $\gamma_{i_j}$ be a convergent subsequence of the sequence $\gamma_{i}$ with limit quasigeodesic $\bar \gamma$. Then the initial directions $\gamma^+_{i_j}(0)$ converge to $\gamma^+(0)$ by assumption and to $\bar \gamma^+(0)$ by Lemma \ref{lem:convergence_initial_directions}.
Hence, $\bar \gamma^+(0)=\gamma^+(0)$ and so the claim follows with Lemma \ref{lem:char_conv_alex}. \end{proof} On the other hand, the converse implication fails at the boundary of $K$. However, an equivalence between different continuity notions for quasigeodesic flows still holds, see Proposition \ref{prp:equivalence_cont_def}. As another example, the tangent cone bundle of the quotient of a compact Riemannian manifold $M$ by an isometric action of a compact Lie group inherits a quotient topology from $TM$ and in this case convergence with respect to the quotient topology is equivalent to convergence in the Alexandrov sense. We say that a quasigeodesic flow gives rise to a \emph{continuous quasigeodesic evolution} if the following condition is satisfied: If for some sequence $(p_i,v_i) \in TX$ and for some $(p,v) \in TX$ the initial directions of the quasigeodesics $t \mapsto \Phi((p_i,v_i),t)$ converge to the initial direction of the quasigeodesic $t \mapsto \Phi((p,v),t)$ in the Alexandrov sense, then $\pi(\Phi((p_i,v_i),t_i))$ converges to $\pi(\Phi((p,v),t))$ for all $t$ and all sequences $t_i\rightarrow t$. A quasigeodesic flow gives rise to a continuous quasigeodesic evolution if and only if it is \emph{continuous} in the sense that in the condition above $\Phi((p_i,v_i),t_i)$ converges to $\Phi((p,v),t)$ in the Alexandrov sense for all $t$ and all sequences $t_i\rightarrow t$. In particular, the quasigeodesic flow of an Alexandrov space in which quasigeodesics do not branch is continuous: \begin{lem}\label{lem:unique_implies_continuity} If quasigeodesics on an Alexandrov space do not branch, then the uniquely defined quasigeodesic flow is continuous.
\end{lem} \begin{proof}Suppose to the contrary that there is a sequence of quasigeodesics $\gamma_i :[0,a)\rightarrow X$ whose initial directions converge to the initial direction of a quasigeodesic $\gamma: [0,a)\rightarrow X$ in the Alexandrov sense, but which does not converge pointwise to $\gamma$ respecting the parametrizations. Then there also exists a converging subsequence whose limit is a quasigeodesic distinct from $\gamma$ but with the same initial direction as $\gamma$, in contradiction to our assumption by Lemma \ref{lem:char_conv_alex}. \end{proof} We will apply this statement in the proof of Theorem \ref{thm:convex_smooth}, see Section \ref{sub:proof_thm_B}. In the class of examples given by quotients of compact Riemannian manifolds $M$ by isometric actions of compact Lie groups a continuous quasigeodesic flow is induced by the projection of the horizontal geodesic flow on $M$ to the quotient, cf. \cite{LT10}. In the case of a convex body $K$ the following proposition shows that continuity of a quasigeodesic flow is equivalent to continuity of the corresponding billiard evolution in our earlier sense. \begin{prp} \label{prp:equivalence_cont_def} A quasigeodesic flow on a convex body $K$ is continuous in the Alexandrov sense if and only if the induced billiard evolution is continuous with respect to the subspace topology of $TK$. \end{prp} \begin{proof}By Corollary \ref{cor:convergence_sub_alex} a continuous quasigeodesic flow on $K$ induces a continuous billiard evolution on $K$. Conversely, suppose we are given a quasigeodesic flow, i.e. a billiard flow, $\Phi: TK \times \R \To TK$ that induces a continuous billiard evolution.
To show that $\Phi$ is continuous in the Alexandrov sense it suffices to observe that if a sequence $(p_i,v_i)\in TK$ converges to $(p,v)\in TK$ in the Alexandrov sense (with respect to the quasigeodesics defined by $\Phi$), then there exists a sequence $s_i \searrow 0$ such that $\Phi((p_i,v_i),s_i)$ converges to $(p,v)\in TK$ with respect to the subspace topology of $TK$. \end{proof} \subsection{Uniqueness of continuous quasigeodesic flows and other digressions}\label{sub:further_properties} In this section we discuss uniqueness and related properties of continuous quasigeodesic flows. These discussions are not needed in later sections and can be skipped. Recall from Lemma \ref{lem:unique_geodesic} that no quasigeodesic can bifurcate from a geodesic. This implies that the continuous quasigeodesic flows in the settings of Theorems \ref{thm:billiard_orbifold} and \ref{thm_3_4_polyhedral}, which are induced by an orbifold geodesic flow, are unique: in a polyhedral Alexandrov space (or an orbifold) a quasigeodesic in the interior of a face (stratum) is a geodesic and can thus only branch when it hits a face (a singular stratum) of codimension at least $2$. Besides, in such spaces any quasigeodesic for which this happens at time $t_0$ can be approximated by quasigeodesics for which this does not happen before time $t_0+ \varepsilon$ for some $\varepsilon>0$. Therefore, in these cases a continuous quasigeodesic flow is determined by the behaviour of its geodesics. Moreover, the fact that the quasigeodesic flows in the setting of Theorems \ref{thm:billiard_orbifold} and \ref{thm_3_4_polyhedral} are induced by orbifold geodesic flows implies that they are \emph{reversible dynamical systems}, i.e. they satisfy $\Phi(\Phi(v,s),t)=\Phi(v,s+t)$ and $\Phi(v,t)=\Phi(-v,-t)$ for all $v\in TX$ and all $s,t \in \R$. Uniqueness of the quasigeodesic flow in the setting of Theorem \ref{thm:convex_smooth} will be shown in Section \ref{sub:proof_thm_B}.
In fact, in this case we first show that quasigeodesics cannot branch. Then the reversible dynamical system property is immediate and continuity follows from Lemma \ref{lem:unique_implies_continuity}. In summary, we have \begin{prp}\label{prp:uniqueness_rds} The continuous quasigeodesic flows in Theorems \ref{thm:billiard_orbifold}, \ref{thm:convex_smooth} and \ref{thm_3_4_polyhedral} are unique and reversible dynamical systems. \end{prp} We remark that in \cite{KLP21} it is shown that on an Alexandrov space without boundary which satisfies a certain measure theoretical condition the geodesic flow exists almost everywhere for all times. By Lemma \ref{lem:unique_geodesic} each quasigeodesic flow coincides with the geodesic flow on the domain of the latter. If each initial direction of a quasigeodesic is a limit in the Alexandrov sense of initial directions of geodesics from an almost everywhere defined geodesic flow, then a continuous quasigeodesic flow, if it exists, is uniquely determined by continuity and is a reversible dynamical system. For instance, this conclusion holds for quotients of compact Riemannian manifolds $M$ by isometric actions of compact Lie groups. In this case the measure theoretical condition above is satisfied \cite[Proposition~12.1]{KL20} and convergence in the Alexandrov sense is equivalent to convergence with respect to the quotient topology of the tangent cone bundle. The measure theoretical property from \cite{KLP21} is also satisfied for boundaries and doubles of convex bodies \cite{KLP21}. Uniqueness of continuous quasigeodesic flows on convex bodies would follow if each continuous quasigeodesic flow could be lifted to a continuous quasigeodesic flow on its double. Note in this respect that quasigeodesics on a convex body are precisely the projections of quasigeodesics of its double. A negative answer to Question \ref{sec:continuous_nonsmooth_bound} in Section \ref{sub:optimality} would imply that such a lift always exists.
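The flow property and the reversibility of positions can be made concrete on the simplest table $[0,L]\subset \R$, whose billiard flow is induced by the geodesic flow on $\R$ via the affine reflection group generated by the reflections at $0$ and $L$. The following sketch is an illustration under our own naming (\texttt{billiard\_1d}) and is not part of the proofs; at exact bounce times the returned velocity is a convention.

```python
import math

def billiard_1d(p, v, t, L=1.0):
    """Billiard flow on [0, L]: run the geodesic flow on R, then fold
    back by the reflection group generated by the reflections at 0 and L."""
    x = math.fmod(p + v * t, 2 * L)
    if x < 0:
        x += 2 * L               # representative in [0, 2L)
    if x > L:
        return 2 * L - x, -v     # fold [L, 2L] back onto [0, L]
    return x, v

# Flow property: evolving for 0.5 and then 0.4 equals evolving for 0.9.
a = billiard_1d(*billiard_1d(0.3, 1.0, 0.5), 0.4)
b = billiard_1d(0.3, 1.0, 0.9)
```

Since the folding map is equivariant with respect to the reflection group, the composition law holds exactly (up to floating point error), and the positions satisfy the time reversal symmetry discussed above.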
\section{Billiards on general convex bodies} \label{sec:general_convex} \subsection{Proof of Theorem \ref{thm:convex_smooth}}\label{sub:proof_thm_B} In this section we illustrate the flexibility of general continuous convex billiard tables by proving Theorem \ref{thm:convex_smooth}. \begin{lem}\label{lem:convergence_to_boundary} Let $K$ be a convex body in $\R^n$. For $n=2$ suppose that the boundary has three bounded derivatives and is positively curved everywhere. For $n>2$ suppose that the boundary is of class $\mathcal{C}^3$ and has a positive definite second fundamental form. If the initial directions of billiard trajectories converge to a tangent vector in the boundary, then the trajectories converge locally uniformly to the boundary. \end{lem} In particular, no quasigeodesic in the boundary can leave the boundary, and every billiard trajectory in the interior can be extended for all times without experiencing infinitely many bounces in finite time. \begin{proof} We need to show that a billiard trajectory restricted to any compact domain stays in arbitrarily small neighborhoods of the boundary if its initial direction is sufficiently close to a tangent direction of the boundary. We call the angle between the tangent plane and the forward velocity at a bounce point the \emph{bouncing angle}. To prove what we need it is sufficient to observe that there exist constants $C,C',C''>0$ that only depend on $K$ such that the following holds: \begin{compactenum} \item If a billiard trajectory bounces with bouncing angle at least $\alpha$, then the distance to the next bounce point is at least $C\alpha$. \item If a billiard trajectory bounces with a sufficiently small bouncing angle $\alpha$, then the next bouncing angle is bounded from above by $\alpha+C' \alpha^2$. \item If a billiard trajectory restricted to some compact interval bounces and all bouncing angles are bounded by $\alpha$, then this trajectory stays in a $C'' \alpha$ neighborhood of the boundary of $K$.
\end{compactenum} Indeed, by (ii) it requires at least $\frac{1}{4C'\alpha}$ bounces to increase the bouncing angle from a sufficiently small bouncing angle $\alpha$ to bouncing angle $2\alpha$. After these bounces the trajectory has traveled at least a distance $\frac{C}{4C'}$ within a small neighborhood of the boundary. By choosing the initial bouncing angle sufficiently small we can thus guarantee that the trajectory stays in a small neighborhood of the boundary for arbitrarily long time. Claims (i) and (iii) are easy consequences of the fact that the table has bounded diameter and curvature. In the $2$-dimensional case claim (ii) is shown at the end of the proof of \cite[Theorem~3]{Hal77}. We can reduce the general case to the $2$-dimensional case as follows. Any tangent vector $v$ of the boundary of $K$ specifies a $2$-dimensional plane spanned by $v$ and the normal vector to the boundary with the same foot point as $v$. The intersection of this plane with $K$ is a $2$-dimensional convex body which satisfies the assumptions of \cite[Theorem~3]{Hal77}, namely its boundary has three bounded derivatives and is everywhere positively curved. The constants $C'$ provided by this theorem are bounded in terms of the first derivative of the curvature of these boundary curves. We obtain a uniform bound by compactness. Moreover, for a small bounce angle at a boundary point $p$ the reflected trajectory will stay in the $2$-dimensional plane spanned by the projection of its velocity to the tangent space of the boundary at $p$ and the normal direction at $p$ until the next bounce. Since the bouncing angle at the next bounce is bounded from above by the bouncing angle inside this $2$-dimensional plane, the claim follows. \end{proof} For the proof in the higher dimensional case we moreover need to show that limits of billiard trajectories in the boundary are quasigeodesics of the boundary. The next lemma is the first step.
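The three estimates above are conveniently visualized in the simplest possible table, the unit disk, where the bouncing angle is in fact exactly preserved and the distance between consecutive bounce points equals $2\sin\alpha$. The following Python sketch (a numerical illustration only, not part of the proofs; the constants $C=C''=1$ are specific to the disk) traces a trajectory by ray--circle intersection and specular reflection and checks (i)--(iii):

```python
import numpy as np

def disk_billiard(p, v, n_bounces):
    """Billiard in the unit disk via ray-circle intersection and specular
    reflection.  Returns the bounce points and the bouncing angles (angle
    between the outgoing velocity and the tangent line)."""
    pts, angles = [p.copy()], []
    for _ in range(n_bounces):
        b, c = np.dot(p, v), np.dot(p, p) - 1.0
        t = -b + np.sqrt(b * b - c)          # forward hitting time (|v| = 1)
        p = p + t * v
        n = p                                # outward unit normal of the circle
        v = v - 2 * np.dot(v, n) * n         # specular reflection
        pts.append(p.copy())
        angles.append(np.arcsin(np.clip(np.dot(v, -n), -1.0, 1.0)))
    return np.array(pts), np.array(angles)

alpha = 0.05                                 # small initial bouncing angle
p0 = np.array([1.0, 0.0])
v0 = np.array([-np.sin(alpha), np.cos(alpha)])   # makes angle alpha with the tangent
pts, angles = disk_billiard(p0, v0, 200)

chords = np.linalg.norm(np.diff(pts, axis=0), axis=1)
# deepest points of the chords are their midpoints; depth = distance to boundary
depth = 1.0 - np.linalg.norm((pts[:-1] + pts[1:]) / 2, axis=1)

assert np.allclose(angles, alpha)            # (ii): in the disk the angle is constant
assert np.all(chords >= alpha)               # (i): chord length 2*sin(alpha) >= alpha
assert np.all(depth <= alpha)                # (iii): stays 1-cos(alpha)-close to the boundary
```

For a general table with positively curved boundary the bouncing angle is no longer preserved, which is exactly where the growth estimate (ii) and the regularity assumptions enter.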
\begin{lem}\label{lem:distance_taylor} Let $M$ be a submanifold of $\R^n$ of class $\mathcal{C}^{4}$. Then for each point in $M$ there exists a small neighborhood $U$ and functions $g_1\in o(s^2)$ and $g_2\in o(s)$ such that for each $p\in U$ and for all pairs of unit speed geodesics $\gamma_1,\gamma_2:[0,1] \To M$ with $\gamma_1(0)=\gamma_2(0)=p$, $u=\gamma_1'(0) \in T_p M$ and $v=\gamma_2'(0) \in T_p M$ we have for all sufficiently small $s$ that \[ d(\gamma_1(s),\gamma_2(t))^2\leq \left\| su-vt \right\|^2 +g_1(s)t+ g_2(s) t^2 + o(t^2) \] where $d$ denotes the induced intrinsic metric on $M$. In particular, if $\bar \gamma:[0,a) \To T_p M$ is a curve such that $\gamma(\tau)=\exp_p(\bar \gamma(\tau))$ is parametrized by arclength, then \[ d(\gamma_1(s),\gamma(\tau))^2 \leq \left\| su-\bar \gamma(\tau) \right\|^2 +g_1(s)\tau+ g_2(s) \tau^2 + o(\tau^2).\] \end{lem} \begin{proof} We can assume to work in a sufficiently small open subset $W$ of $TM$ on which $\exp\times \pi:TM \To M \times M$ defines a $\mathcal{C}^2$-diffeomorphism onto its image. For small $s,\varepsilon\in (0,1]$ we define a variation $\kappa:=\kappa_{s}:[0,\varepsilon]\times [0,1] \To M$ such that $\kappa(0,r)=\gamma_1(sr)$, $\kappa(t,0) = \gamma_2(t)$ and such that $r\mapsto \kappa(t,r) $ is a geodesic with $\kappa(t,s)=\kappa(0,s)$. The vector field \[ T=T_{s}(t,r) := \frac{\partial \kappa}{\partial r} (t,r) \] is of class $\mathcal{C}^{2}$, because of $(\exp\times \pi)(T)=(\gamma_2(t),\gamma_1(s))$, and satisfies $f(t):=\left\langle T,T \right\rangle = d(\gamma_1(s),\gamma_2(t))^2$. In particular, $f(0)=s^2\left\|u\right\|^2=s^2$. We set $U(t)=T(t,0)/s$ so that $\exp_{\gamma_2(t)}(sU(t))=\gamma_1(s):=q$. We want to Taylor expand $f(t)$ up to second order at $t=0$. For that we compute \begin{equation*} \begin{split} f'(0) & = 2 s^2\left\langle \nabla_t U,U \right\rangle, \\ f''(0) & = 2 s^2 \left\langle \nabla_t \nabla_t U,U \right\rangle+ 2 s^2 \left\langle \nabla_t U, \nabla_t U \right\rangle. 
\\ \end{split} \end{equation*} We denote the second fundamental form of $M$ by $\alpha$. The Taylor expansion of the exponential map at $\gamma_2(t)$ \cite{MMS14} yields \[ \gamma_1(s) = \gamma_2(t) + U(t) s + \frac{1}{2} \alpha_{\gamma_2(t)}(U(t),U(t)) s^2 + R(t,s) s^2 \] with a remainder term of integral form \[ R(t,s) = \int_{0}^{s} I_{rU(t)}\left(U(t),U(t)\right) dr \] where $I_v$, $v\in TM$, is bilinear. Here $U$, $\alpha$ and $I$ are of class $\mathcal{C}^2$. Differentiation with respect to $t$ gives \[ \nabla_t U (t) s = - \gamma_2'(t) - \alpha_{\gamma_2(t)}(\nabla_t U (t) s,U(t)) s + o(s). \] In particular, $\nabla_t U (0) s = -v +o(s)$. Another differentiation with respect to $t$ gives $\nabla_t \nabla_t U (0) s = o(1)$. Hence, \begin{equation*} \begin{split} f'(0) & = -2 \left\langle v, u \right\rangle s + o(s^2), \\ f''(0) & = 2 \left\| v \right\|^2 + o(s). \\ \end{split} \end{equation*} Now the claim follows since $g_1$ and $g_2$ can be chosen independently of the other choices by compactness. \end{proof} Now we apply the previous lemma in the proof of the following statement. \begin{lem}\label{lem:quasigeodesic_boundary} Let $K$ be a convex body in $\R^n$ whose boundary $M$ is of class $\mathcal{C}^{4}$. Then a quasigeodesic of $K$ which is contained in $M$ is also a quasigeodesic of $M$ (with respect to its intrinsic metric). \end{lem} \begin{proof} Let $\gamma$ be a quasigeodesic of $K$ that is contained in $M$. By \cite[Proposition~2.3.12]{MR1835418} it is also parametrized by arclength with respect to the intrinsic metric of $M$. Now we want to apply the following characterization. By \cite[Proposition~1.7]{QG95} a curve $\gamma:[a,b] \rightarrow X$ parametrized by arclength in an Alexandrov space $(X,d)$ (with curvature $\geq \kappa$) is a ($\kappa$-)quasigeodesic if and only if for every $t\in (a,b)$ and every point $q\in X$ \begin{equation}\label{eq:quasigeodesic_condition} \frac{1}{2}(d_q^2\circ \gamma)''(t) \leq 1+ o(d(q,\gamma(t))).
\end{equation} Here a continuous function $\phi$ on $(a,b)$ is said to satisfy $\phi''\leq B$ if $\phi(t+\tau)\leq \phi(t)+ A \tau + B\tau^2/2+o(\tau^2)$ for some $A \in \R$. We denote the extrinsic and the intrinsic distance function on $M$ by $d$ and $d_i$, respectively. Locally around a point $p=\gamma(t)$ we write $\gamma$ as the image of a curve $\bar \gamma$ in $T_pM$ under the exponential map of $M$. More precisely, we choose the parametrization of $\bar \gamma$ such that $\gamma(t+\tau)=\exp_p(\bar \gamma (\tau))$. Moreover, we write a point $q$ close to $p$ as $q=\exp_p(su)$ for some unit vector $u\in T_pM$. We obtain Taylor expansions \begin{equation*} \begin{split} \exp_p(su) &=p+su+\frac{1}{2} \alpha(u,u)s^2 + o(s^2), \\ \exp_p(\bar \gamma (\tau)) &=p+\bar \gamma (\tau)+\frac{1}{2} \alpha(\bar \gamma (\tau),\bar \gamma (\tau)) + o(\left\|\bar \gamma (\tau)\right\|^2), \\ \end{split} \end{equation*} as in the proof of Lemma \ref{lem:distance_taylor}. Evaluating $d^2(\exp_p(\bar \gamma (\tau)), q)$ using these expansions shows that condition (\ref{eq:quasigeodesic_condition}) for the extrinsic metric implies that \begin{equation}\label{eq:quasigeodesic_condition_ex} \frac 1 2 \left\|\bar \gamma (\tau) \right\|^2 - s \left\langle u,\bar \gamma(\tau) \right\rangle \leq A\tau + \left(1+ o(d(q,\gamma(t))) \right) \tau^2 + o(\tau^2) \end{equation} for some $A\in \R$. On the other hand, by Lemma \ref{lem:distance_taylor} we have \[ f(\tau):=d_i^2(\exp_p(su), \exp_p(\bar \gamma (\tau))) \leq \left\| su-\bar \gamma(\tau) \right\|^2 +g_1(s)\tau+ g_2(s) \tau^2 + o(\tau^2) \] for some $g_1 \in o(s^2)$, $g_2\in o(s)$. Moreover, $f(0)=\left\|u\right\|^2s^2$. Combining this with (\ref{eq:quasigeodesic_condition_ex}) completes the proof. \end{proof} Now we can complete the proof of Theorem \ref{thm:convex_smooth}. 
\begin{proof}[Proof of Theorem \ref{thm:convex_smooth}] By Lemma \ref{lem:convergence_initial_directions}, Lemma \ref{lem:convergence_to_boundary} and Lemma \ref{lem:quasigeodesic_boundary} the quasigeodesic flow on $K$ is uniquely defined and hence continuous by Lemma \ref{lem:unique_implies_continuity}. By Corollary \ref{cor:convergence_sub_alex} also the induced billiard evolution on $K$ is continuous with respect to the subspace topology of $TK$. The statement about the locally uniform convergence follows from the fact that our billiard trajectories are parametrized proportionally to arclength. \end{proof} The following construction provides an example of a convex body with a continuous quasigeodesic flow although the boundary is only $\mathcal{C}^1$. \begin{exl}\label{exe:c1_example} Attach two circular arcs of different radii and close the boundary smoothly. Then no quasigeodesic in the boundary can leave the boundary and so the uniquely defined quasigeodesic flow is continuous by Lemma \ref{lem:unique_implies_continuity}. \end{exl} \subsection{Optimality of Theorem \ref{thm:convex_smooth} in dimension $2$} \label{sub:optimality} The following example shows that the statement of Theorem \ref{thm:convex_smooth} may fail if the boundary is only three times differentiable and positively curved everywhere. \begin{prp}\label{prp_noncontinuous_example} There exists a convex body $K \subset \R^2$ whose boundary is three times differentiable and positively curved everywhere, but which does not admit a continuous quasigeodesic flow resp. billiard evolution. \end{prp} \begin{proof} An example of a convex billiard table $K$ whose boundary is three times differentiable and positively curved everywhere with a unit speed billiard trajectory $c:[0,t_0]\To X$ whose bounce times accumulate at $t_0$ (for the first time) is constructed in \cite{Hal77}. Suppose that the convex body $K$ from this construction admits a continuous quasigeodesic flow $\Phi: [0,\infty)\times TK \To TK$.
Then the curve $[0,t_0] \ni t\mapsto \Phi(t,c^+(0))$ projects to $c$, because the behaviour of $c$ on $[0,t_0]$ is uniquely determined by the initial condition $c^+(0)$. At time $t_0$ the velocity of $c$ converges to a tangent vector $v$ of the boundary of $K$ \cite[Theorem~1(a)]{Hal77}. Let $\gamma :\R \rightarrow K$ be a unit speed parametrization of the boundary of $K$ with $\gamma^+(0)=v$ and period $T$. The example constructed in \cite{Hal77} is such that the only possible extension of $c$ to a quasigeodesic $\tilde c:[0,\infty) \To X$ is the one which parametrizes the boundary after time $t_0$. This is because the boundary has three bounded derivatives in the complement of $\gamma((-\varepsilon,0))$ for any $\varepsilon>0$, see the proof of Lemma \ref{lem:convergence_to_boundary} and of Theorem 3 in \cite{Hal77}. Hence, this extension satisfies $\tilde c(t+T)=\tilde c(t)$ for all $t\geq t_0$. Moreover, by \cite[Theorem~4]{Hal77} for some small $\varepsilon$ we can approximate $\tilde c^+(t_0-\varepsilon)$ and $\tilde c^+(T+t_0-\varepsilon)$ by directions in which billiard trajectories $\gamma_i$ and $\bar\gamma_i$ exist forever without accumulating bounce times. By construction both sequences converge to the boundary after time $\varepsilon$. Reversing the billiard trajectories in these sequences yields a contradiction to the continuity of the quasigeodesic flow at initial directions $\tilde c^-(t_0+\varepsilon)$ for any $\varepsilon>0$. \end{proof} In the proof we used that for the convex body constructed in \cite{Hal77} there is only one tangent vector of the boundary in which two quasigeodesics $\gamma_i:[0,a) \To K$, $i=1,2$, start that are distinct on any nontrivial time interval $[0,\varepsilon)$. A negative answer to the following question would in particular imply that continuous quasigeodesic flows on convex bodies are unique, see Section \ref{sub:further_properties}.
\begin{qst} Does there exist a convex body with a continuous quasigeodesic flow on which a quasigeodesic can escape the boundary to the interior? \end{qst} \subsection{Continuous billiard flows on tables with non-smooth boundary}\label{sec:continuous_nonsmooth_bound} Now we illustrate in dimension $2$ that the presence of a continuous quasigeodesic flow on a convex body implies some local rigidity. \begin{prp}\label{prp:local_rigidity_dim2} If a convex body $K\subset \R^2$ admits a continuous quasigeodesic flow, then each tangent cone of $K$ does so. In particular, all boundary points are orbifold points. \end{prp} \begin{proof} Suppose that $K$ admits a continuous quasigeodesic flow and let $p$ be a point in the boundary of $K$. We can assume that $p$ is a non-smooth boundary point. The billiard dynamics on the tangent cone $T_p K$ is uniquely determined up to the tip of the cone. We specify the reflection law at this tip by the given reflection law at $p$. Then the following observations imply that the resulting billiard evolution on $T_pK$ is continuous. Firstly, the curvature of the boundary concentrated in $(B_{\varepsilon}\cap \partial K)\backslash \{p\}$ tends to $0$ when $\varepsilon$ does so. Secondly, a blow-up sequence of $K$ at $p$ converges to the tangent cone $T_pK$. And thirdly, the number of bounces of billiard trajectories in $T_p K$ is uniformly bounded (by a number inversely proportional to the opening angle of the cone $T_p K$) \cite{Sin78}. By these observations the deviation of billiard trajectories in $T_pK$ that approach $p$ from the corresponding billiard trajectories in $K$ becomes arbitrarily small when the initial conditions converge. \end{proof} We remark that considering the curvature of the boundary shows that there can be at most four non-smooth orbifold points in the boundary and that the table is a rectangle if there are four, see e.g. \cite{AP18}.
Moreover, such tables have continuous quasigeodesic flows if the boundary is sufficiently regular otherwise. We expect that the statements of Proposition \ref{prp:local_rigidity_dim2} are also true in higher dimensions, see Conjecture \ref{conj:orb_point}. \section{Boundaries of polyhedral convex bodies} \label{sec:boundary_polyhedral} In this section we prove Theorem \ref{thm_3_4_polyhedral} which relies on the notion of a (continuous) quasigeodesic flow. However, to read it one can either take the next paragraph for granted or recall the definition of a quasigeodesic flow from Section \ref{sec:continuous_billiard_quasigeodesic} and the characterization of quasigeodesics in polyhedral Alexandrov spaces provided in Lemma \ref{cor:quasigeodesic_polyhedral}: A quasigeodesic is a curve parametrized by arclength which is locally length realizing except at a discrete set of times at which the forward and backward derivatives are polar. As far as continuity is concerned one can alternatively demand it only at directions over points that are contained in the interior of a simplex of maximal dimension, where continuity can be defined in terms of the topology of the ambient Euclidean vector space. The ``only if'' statement of the first part of Theorem \ref{thm_3_4_polyhedral} works analogously by induction on the dimension as in the proof of Theorem \ref{thm:billiard_orbifold} in Section \ref{sec:polyhedral_billiard_table}. The statement that the orbifold geodesic flow defines a continuous quasigeodesic flow follows as in the proof of Proposition \ref{prp:polyhedral_continuous} and was also observed in general in Section \ref{sec:continuous_billiard_quasigeodesic}. \subsection{Proof of Theorem \ref{thm_3_4_polyhedral} in dimension $3$} \label{sub:_prove_C_dim3} For the proof of the second part of Theorem \ref{thm_3_4_polyhedral} we first deal with the $3$-dimensional statement.
\begin{prp}\label{prp:3d_tetrahedron} Suppose a polyhedral convex body $K$ in $\R^3$ is bounded by an orbifold. Then $K$ is a simplex with all four cone angles equal to $\pi$. \end{prp} We present two different proofs of Proposition \ref{prp:3d_tetrahedron}, one using the Gauß--Bonnet theorem and another one via Euler's polyhedral formula. \begin{proof}[Proof of Proposition \ref{prp:3d_tetrahedron} via Gauß--Bonnet] We assume that the reader is familiar with the curvature measure and the Gauß--Bonnet theorem for $2$-dimensional convex surfaces, see e.g. \cite{AP18,Al05}. The curvature of the boundary of a $3$-dimensional polyhedral convex body is concentrated in the vertices of the body. The curvature at a vertex equals the area of the intersection of the normal cone and a unit sphere at this vertex. However, it is determined by the intrinsic geometry of the surface alone. By the Gauß--Bonnet theorem applied to a neighborhood of the vertex, the curvature $\kappa$ and the cone angle $\alpha$ at a vertex are related via $\kappa=2\pi-\alpha$. Here the cone angle at a vertex $p$ is the length of the boundary of the intersection of $T_pK$ with the unit sphere, and it equals the sum of the face angles adjacent to $p$. We enumerate the vertices of $K$ from $1$ to $k$ and denote the curvature and the cone angle at the $i$-th vertex by $\kappa_i$ and $\alpha_i$, respectively. Applying Gauß--Bonnet to the entire surface yields that \[ 4\pi=\sum_{i=1}^k \kappa_i=2k\pi - \sum_{i=1}^k \alpha_i. \] By our orbifold assumption each $\alpha_i$ is of the form $\alpha_i=\frac{2\pi}{n_i}$ for some $n_i\in \N$. Since the normal cone at a vertex has nonempty interior, the curvature at a vertex is positive and the cone angle is strictly less than $2\pi$. This implies that $n_i \geq 2$ for all $i=1,\ldots,k$.
Hence, we obtain the same condition that we encountered in the proof of Lemma \ref{lem:dim2_billiard}, namely \[ \frac{1}{n_1}+ \ldots + \frac{1}{n_k} = k-2 \] with $n_i\in \N_{\geq2}$ for all $i=1,\ldots,k$. Since a convex body in $\R^3$ has at least four vertices, this time the only possible solution is $k=4$ and $n_1=n_2=n_3=n_4=2$. It corresponds to a simplex with all four cone angles equal to $\pi$. \end{proof} The formulation in Proposition \ref{prp:3d_tetrahedron} is related to the formulation in Theorem \ref{thm_3_4_polyhedral} via the following observation, cf. \cite[Section~4]{AP18} and Figure \ref{fig:_triangle}. \begin{exe} All cone angles of a $3$-simplex are $\pi$ if and only if opposite sides have equal length. For each acute triangle $T$ in the plane there exists precisely one such $3$-simplex all of whose faces are congruent to $T$, cf. Figure \ref{fig:_triangle}. \end{exe} For the alternative proof of Proposition \ref{prp:3d_tetrahedron} without the Gauß--Bonnet theorem we need the following two ingredients. \begin{lem}\label{lem:cone_angle_bound} The cone angle at a vertex of a polyhedral convex body $K$ in $\R^3$ is less than $2\pi$. \end{lem} \begin{proof} Recall that the cone angle at a vertex $p$ is the length of the boundary of the intersection $P$ of $T_pK$ with the unit sphere. This intersection $P$ can be seen as a finite intersection of at least $3$ hemispheres in the unit sphere. In this intersection it suffices to consider hemispheres that correspond to sides of the spherical polygon $P$. By the spherical triangle inequality the length of the boundary of this intersection strictly increases if we subsequently remove hemispheres from the intersection until only two hemispheres are left. At this final stage the length of the boundary is $2\pi$. Since the original number of hemispheres was at least $3$, the claim follows. \end{proof} Let $T$ be a triangulation of a disk.
We denote the number of vertices, edges and faces of such a triangulation by $V$, $E$ and $F$, respectively. The following second ingredient can for instance be easily obtained by induction on the number of faces. \begin{exe} \label{lem:disk_inequality} Let $T$ be a triangulation of a disk. Then $2V \leq E + 3$, or equivalently $V \leq F + 2$ by Euler's formula. \end{exe} Now we present the second proof of Proposition \ref{prp:3d_tetrahedron}. \begin{proof}[Proof of Proposition \ref{prp:3d_tetrahedron} via Euler's formula] We can triangulate the boundary of $K$ in such a way that every vertex of the triangulation is also a vertex of $K$. All interior angles of the triangulation sum up to $F\pi$. On the other hand, by Lemma \ref{lem:cone_angle_bound} and our orbifold assumption the cone angle at a vertex is at most $\pi$. Hence, we have that $F \leq V$. With Euler's polyhedral formula $V-E+F=2$ we deduce that \begin{equation} E \leq 2V-2. \end{equation} We claim that we actually have equality. To see this we can pick a vertex and remove its star from $T$ to obtain a new triangulation $T'$ of a disk, which satisfies $2V' \leq E' + 3$ by Exercise \ref{lem:disk_inequality}. Because of $V'=V-1$ and $E'\leq E-3$ we indeed have equality. Since we can start this argument with any vertex in $T$, the triangulation $T$ must be trivalent. This implies that there are only four vertices. The equality discussion moreover shows that the total angle at each vertex is $\pi$. \end{proof} \begin{figure} \centering \def\svgwidth{0.3\textwidth} \input{triangle.pdf_tex} \caption{Acute triangle subdivided into four similar triangles by segments between the midpoints of its sides.} \label{fig:_triangle} \end{figure} \subsection{Proof of Theorem \ref{thm_3_4_polyhedral} in higher dimensions} \label{sub:proof_C_dimh} In order to rule out examples in higher dimensions we will apply the following intrinsic characterization of faces.
In fact, we will only need the characterization of faces of codimension $3$, which we have already used, but for completeness we state the result for all codimensions. \begin{prp} \label{prp:face_characterization} Let $K$ be a polyhedral convex body in $\R^n$, $n\geq3$. Then a point $p$ in the boundary of $K$ belongs to the interior of a face of dimension $l<n-1$ if and only if a neighborhood of $p$ in $\partial K$ (with its induced intrinsic metric) isometrically splits off an open set in $\R^{l}$ but not an open set in $\R^{l+1}$. \end{prp} \begin{proof} We only need to show that no neighborhood of a point $p$ in a face of dimension $l<n-1$ splits off an open set in $\R^{l+1}$. Looking at the intersection of $K$ with an $(n-l)$-dimensional plane through $p$ and orthogonal to the supporting face of $p$ reduces the claim to the case $l=0$. We prove the latter by induction on $n$. For $n=3$ the cone angle at each vertex is strictly less than $2 \pi$ by Lemma \ref{lem:cone_angle_bound} or by the Gauß--Bonnet theorem, cf. proof of Proposition \ref{prp:3d_tetrahedron}. In this case no shortest curve can pass through a vertex \cite[1.8.1~(A)]{Al05}. This proves the claim for $n=3$. To prove the claim for some $n>3$, we first observe that in this case there are at least $n$ edges, i.e. $1$-dimensional faces, adjacent to a vertex $p$. By the induction assumption no point on these edges admits a neighborhood that splits off an open set in $\R^2$. Now suppose that a neighborhood of $p$ splits off an open set in $\R$. Then there exists an edge adjacent to $p$ that is not contained in this $\R$-factor. Therefore points on this edge have neighborhoods that split off open sets in $\R$ in two different directions. Now a splitting theorem of Milka \cite{Mi67} (applied to a tangent cone which is locally isometric to a neighborhood of the base point) implies that neighborhoods of such points actually split off an open set in $\R^2$. This contradiction completes the proof of the proposition.
\end{proof} \begin{prp} \label{prp:polyhedral_orbifold_4d} For $n\geq 4$ there does not exist a polyhedral convex body in $\R^n$ whose boundary is an orbifold. \end{prp} \begin{proof} Suppose such a polyhedral convex body $K$ exists in some $\R^n$, $n\geq 4$. We pick a point $p$ in the interior of a face of codimension $4$ and look at a small neighborhood of $p$ in the intersection of the boundary of $K$ with a $4$-dimensional plane through $p$ orthogonal to the supporting face of $p$. This intersection is a $3$-dimensional orbifold and by Proposition \ref{prp:face_characterization} the faces of codimension $3$ belong to the codimension $2$ stratum of this orbifold. On a polyhedral convex body each codimension $4$ face is adjacent to at least $4$ codimension $3$ faces. However, on the other hand, in a $3$-orbifold at most $3$ components of the codimension $2$ stratum can meet at a point. The latter follows from the classification of finite subgroups of $\Or(3)$. This contradiction completes the proof of the proposition and of Theorem \ref{thm_3_4_polyhedral}. \end{proof}
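The two arithmetic ingredients of the proofs above — the unit-fraction equation $\frac{1}{n_1}+\ldots+\frac{1}{n_k}=k-2$ and the total curvature $4\pi$ from Gauß--Bonnet — can be double-checked by brute force. The following Python sketch (an illustration only, not part of the proofs; the search bound $30$ loses nothing, since each term is at most $\frac12$, forcing $k\leq 4$, and the unit-fraction solutions for $k=3$ have denominators at most $6$):

```python
from fractions import Fraction
from itertools import combinations_with_replacement
from math import pi, isclose

# All solutions of 1/n_1 + ... + 1/n_k = k - 2 with 2 <= n_i <= 30, grouped by k.
sols = {k: [ns for ns in combinations_with_replacement(range(2, 31), k)
            if sum(Fraction(1, n) for n in ns) == k - 2]
        for k in range(1, 5)}

assert sols[1] == [] and sols[2] == []
assert sols[3] == [(2, 3, 6), (2, 4, 4), (3, 3, 3)]  # ruled out: at least 4 vertices
assert sols[4] == [(2, 2, 2, 2)]                     # all four cone angles equal pi

# Gauss-Bonnet sanity check on the cube: 8 vertices of cone angle 3*pi/2 each,
# so the total curvature is 8 * (2*pi - 3*pi/2) = 4*pi, as it must be.
assert isclose(8 * (2 * pi - 3 * pi / 2), 4 * pi)
```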
TITLE: Complete Linear system on Del Pezzo surfaces QUESTION [2 upvotes]: Is there always a reducible curve (EDIT: with exactly two irreducible components intersecting in at least 2 points) in a complete linear system (EDIT: of dimension at least 2 with curves of genus at least 1) on a Del Pezzo surface? REPLY [7 votes]: No. Let $S$ be a Del Pezzo surface of degree 1, namely $K^2_S=1$. The anticanonical system $|-K_S|$ has dimension 1 and it contains no reducible curve, since $-K_S$ is ample and has self-intersection $1$. EDIT: the system of lines in $\mathbb P^2$ shows that the answer to the edited question is also negative. Other counterexamples are given by the system of curves of $\mathbb P^1\times\mathbb P^1$ of bidegree $(1,n)$, $n\ge 1$.
TITLE: Normal subgroup of $A_4$ QUESTION [1 upvotes]: Prove that $\{id, (12)(34), (13)(24), (14)(23)\}$ is a normal subgroup of $A_4$. I see that it's possible to prove this with the following theorem: $$N \trianglelefteq G \text{ if } N\leq G \text{ and } \forall n\in N, g\in G: g^{-1}ng \in N.$$ But is there an easier way to do this? This way isn't hard but it takes quite some time to work through. REPLY [1 votes]: It is clear that the set is a subgroup. This requires minimal calculations. What you could do next is to calculate the conjugacy class of $(12)(34)$ in $A_4$. Almost hands-down (conjugation preserves cycle structures!!) you will see that this is $Cl_{A_4}((12)(34))=\{(12)(34),(13)(24),(14)(23)\}$. Hence your subgroup is the disjoint union of two conjugacy classes: $\{(1)\} \cup \{(12)(34),(13)(24),(14)(23)\}$ whence normal. Note that the subgroup is isomorphic to $V_4$, Klein's $4$-group. It is also equal to the commutator subgroup $[A_4,A_4]$ of $A_4$, which yields another way of showing normality. So what I used is the fact that if $N$ is a subgroup of $G$, then $N \unlhd G$ if and only if $N=\bigcup_{n \in N}Cl_G(n)$, which you might try to prove yourself. Note that in general, unions of $G$-conjugacy classes form a normal set (meaning: closed under conjugation), but do not need to form a subgroup!
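If you want to see the conjugacy-class computation done mechanically, here is a short brute-force check in Python (permutations of $\{0,1,2,3\}$ written as tuples $p$ with $p[i]$ the image of $i$):

```python
from itertools import permutations

def compose(p, q):                 # (p o q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def sign(p):                       # parity via counting inversions
    return (-1) ** sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

A4 = [p for p in permutations(range(4)) if sign(p) == 1]
V = {(0, 1, 2, 3), (1, 0, 3, 2), (2, 3, 0, 1), (3, 2, 1, 0)}  # id and the three (2,2)-cycles

# V is a subgroup: it is finite and closed under the group operation
assert all(compose(a, b) in V for a in V for b in V)
# normality: g^-1 n g stays in V for every g in A4
assert all(compose(inverse(g), compose(n, g)) in V for g in A4 for n in V)
# and V is exactly {id} together with one full A4-conjugacy class
cls = {compose(inverse(g), compose((1, 0, 3, 2), g)) for g in A4}
assert cls == V - {(0, 1, 2, 3)}
```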
TITLE: Simple inequality with a,b,c QUESTION [3 upvotes]: I'm looking for a proof of $$\sqrt{a(b+c)}+\sqrt{b(c+a)}+\sqrt{c(a+b)} \leq \sqrt{2}(a+b+c)$$ I tried using $m_g \leq m_a$, generating the permutations of $\sqrt{a(b+c)} \leq \frac{a+(b+c)}{2}$ and adding them together, but I get $\frac{3}{2}(a+b+c)$, and obviously $\frac{3}{2}>\sqrt{2}$, so I'm stuck and sure I'm missing something trivial, so thanks in advance. REPLY [3 votes]: Using the Cauchy-Schwarz inequality with the vectors: $$\begin{cases} u=(\sqrt{a},\sqrt{b},\sqrt{c})\\ v=(\sqrt{b+c},\sqrt{c+a},\sqrt{a+b}) \end{cases}$$ you get \begin{align}[\sqrt{a(b+c)}+\sqrt{b(c+a)}+\sqrt{c(a+b)}]^2 &=(u.v)^2\\ & \le \Vert u \Vert^2 \Vert v \Vert^2\\ &=(a+b+c)((b+c)+(c+a)+(a+b))\\ &=2(a+b+c)^2\end{align} which enables us to conclude the desired result.
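As a quick numerical sanity check of the inequality and its equality case (random sampling in Python — not a proof, of course):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = rng.uniform(0, 10, size=(3, 100_000))
lhs = np.sqrt(a * (b + c)) + np.sqrt(b * (c + a)) + np.sqrt(c * (a + b))
rhs = np.sqrt(2) * (a + b + c)
assert np.all(lhs <= rhs + 1e-9)

# Cauchy-Schwarz is an equality exactly when u and v are parallel, i.e.
# a/(b+c) = b/(c+a) = c/(a+b), which happens precisely when a = b = c:
t = 3.7
assert np.isclose(3 * np.sqrt(t * 2 * t), np.sqrt(2) * 3 * t)
```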
TITLE: Compact operator on $l^2$ QUESTION [4 upvotes]: Let $A$ be a bounded linear operator on $l^2$ defined by $A(a_n)= \left(\frac{1}{n} a_n\right)$. Would you help me to prove that $A$ is a compact operator. I guess the answer uses an approximation by a sequence of finite-rank operators. REPLY [6 votes]: Let $$T_k(a_n)_j=\begin{cases} \frac 1ja_j&\mbox{ if }j\leq k;\\ 0&\mbox{ otherwise.} \end{cases}$$ This gives a linear operator, and the range of $T_k$ is generated by $e_1,\dots,e_k$, a finite dimensional space. For $a\in\ell^2$, $$\lVert T(a)-T_k(a)\rVert^2=\sum_{j\geq k+1}\frac 1{j^2}|a_j|^2\leq\frac 1{(k+1)^2}\sum_{j\geq 1}|a_j|^2,$$ so $\lVert T-T_k\rVert\leq\frac 1{k+1}$ and $T$ is compact as the limit in operator norm of compact operators. Note that more generally, we can define $T(a)(k):=d_ka_k$, where $d_k\to 0$, and $T$ from $\ell^p$ to $\ell^p$, where $1\leq p<\infty$, and this will give a compact operator by the same argument.
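A numerical illustration of the norm estimate (a finite truncation of $\ell^2$ in Python; truncating can only shrink norms, and for this diagonal operator the supremum $\frac{1}{k+1}$ is already attained at the coordinate $j=k+1$):

```python
import numpy as np

# Diagonal model of T(a)_j = a_j / j on the first N coordinates of l^2.
# T - T_k is diagonal with entries 0, ..., 0, 1/(k+1), 1/(k+2), ...,
# and the operator norm of a diagonal operator is the sup of |entries|.
N = 2000
j = np.arange(1, N + 1)
for k in [1, 5, 50]:
    tail = np.where(j > k, 1.0 / j, 0.0)        # diagonal of T - T_k
    assert np.isclose(np.max(tail), 1.0 / (k + 1))

# the bound ||(T - T_k) a|| <= ||a|| / (k+1) on a random vector, for k = 5:
rng = np.random.default_rng(0)
a = rng.standard_normal(N)
tail = np.where(j > 5, 1.0 / j, 0.0)
assert np.linalg.norm(tail * a) <= np.linalg.norm(a) / 6 + 1e-12
```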
TITLE: Can the gauge boson respective of a spontaneously broken generator remain massless in the context of the Higgs Mechanism? QUESTION [3 upvotes]: I'm studying a 3-3-1 model which is an extension of the standard model. The breaking $$SU(3)\times U(1) \to SU(2)\times U(1)'$$ occurs in a single step and through a single scalar VEV. The problem is that implementing this model on Mathematica, evaluating the scalar covariant derivatives, assembling the vector mass matrix and diagonalizing it, I find that only 3 vector bosons acquire mass, not the 5 that would be expected realizing that $$\text{dim}[SU(3)\times U(1)]-\text{dim}[SU(2)\times U(1)']=5.$$ Assuming that my calculations are correct, is it possible that $$SU(3)\times U(1) \to SU(2)\times U(1)'$$ is indeed the breaking pattern caused by the vacuum of that single scalar even if the bosons remains massless or is the final symmetry strictly bigger than the one stated? REPLY [2 votes]: TL;DR: No, that is not possible under normal circumstances. Let us for simplicity consider the spontaneous symmetry breaking (SSB) of the group $$ G~=~ U(N+1) \quad \longrightarrow\quad H~=~ U(N), $$ i.e. there are $$ \dim_{\mathbb{R}} G-\dim_{\mathbb{R}} H~=~ (N+1)^2-N^2~=~2N+1$$ broken generators. At the Lie algebra level this corresponds to OP's example for $N=2$ and to electroweak SSB for $N=1$ because$^1$ $$u(N)\cong su(N)\oplus u(1), \qquad u(N\!+\!1)\cong su(N\!+\!1)\oplus u(1).$$ This is good enough to count DOFs. Let the scalar field $\Phi$ transform in the fundamental/defining representation $V\cong \mathbb{C}^{N+1}$ of $G$. Assume that it has a non-zero VEV $\Phi_0\neq 0$. To be concrete, by a global $G$ transformation we may assume that $$\Phi_0~\propto~ \begin{pmatrix} 0\cr \vdots \cr 0 \cr 1 \end{pmatrix}~\in~V.$$ The stabilizer/isotropy subgroup is $$H~\cong~\begin{pmatrix} H \cr & 1 \end{pmatrix}_{(N+1)\times (N+1)} ~ \subseteq~G. 
$$ In the unitary gauge$^2$ we may assume that the scalar field (including quantum fluctuations) is of the form $$\Phi ~\in~ \{0\}^N\times \mathbb{R}, $$ i.e. there is only 1 real physical Higgs boson. The remaining $2N+1$ fluctuations are eaten by gauge symmetry (along the broken directions). The mass terms for the gauge fields come from the Lagrangian term $|D_{\mu}\Phi_0|^2$. This makes precisely $2N+1$ components of the gauge fields $A_{\mu}\in \mathfrak{g}=u(N\!+\!1)$ massive, namely the ones in the last column. -- $^1$ $SU(1)=\{1\}$ and $su(1)=\{0\}$ are singletons. $^2$ The unitary gauge is here a partial gauge fixing condition that fixes the gauge symmetry along the broken directions. Ultimately we should also fix the $H$-gauge symmetry, but that's another story.
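A direct numerical cross-check of this counting (and of the asker's setup) is easy: build the tree-level vector mass matrix $M_{ab}\propto \mathrm{Re}\,[(T_a\Phi_0)^\dagger(T_b\Phi_0)]$ and compute its rank. In the sketch below all couplings and normalizations are set to $1$, which does not affect the rank; if your Mathematica computation yields rank $3$ for the $SU(3)\times U(1)$ case, a dropped generator or a VEV not aligned with the assumed direction is the likely culprit.

```python
import numpy as np

def su_basis(n):
    """Hermitian traceless basis of su(n) in the fundamental rep
    (generalized Gell-Mann matrices, unnormalized)."""
    gens = []
    for i in range(n):
        for j in range(i + 1, n):
            S = np.zeros((n, n), complex); S[i, j] = S[j, i] = 1
            A = np.zeros((n, n), complex); A[i, j] = -1j; A[j, i] = 1j
            gens += [S, A]
    for k in range(1, n):
        D = np.zeros((n, n), complex)
        for m in range(k):
            D[m, m] = 1
        D[k, k] = -k
        gens.append(D)
    return gens

def n_massive(n):
    """Rank of the vector mass matrix for SU(n) x U(1) broken by a VEV
    in the fundamental; equals the number of broken generators."""
    gens = su_basis(n) + [np.eye(n, dtype=complex)]      # U(1) charge ~ identity
    phi0 = np.zeros(n, dtype=complex); phi0[-1] = 1.0    # VEV direction, v = 1
    vecs = [g @ phi0 for g in gens]
    M = np.array([[(u.conj() @ v).real for v in vecs] for u in vecs])
    return np.linalg.matrix_rank(M)

assert n_massive(2) == 3   # electroweak-like: W+, W-, Z massive, photon massless
assert n_massive(3) == 5   # SU(3) x U(1) -> SU(2) x U(1)': five massive vectors
assert n_massive(4) == 7   # the pattern dim G - dim H = 2N + 1 continues
```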
\section{The menagerie: notations, background, and constructions} In this section, we will lay out the fundamental definitions and constructions that will be used in the proof of the main result. Along the way, we will also prove basic relations between these definitions, to alleviate the density of later arguments. \subsection{Linear and cyclic orders} \begin{defn}\label{defn:SimplexCats} The \emph{simplex category} $\Delta$ has objects the standard linearly ordered sets $[n]=\{0,1,\ldots, n\}$ for $n\geq 0$ and morphisms the order-preserving maps. The \emph{enlarged simplex category} $\bbDelta$ has objects finite non-empty linearly ordered sets, and morphisms order-preserving maps. The \emph{augmented simplex category} $\Delta_+$ (resp. the \emph{augmented enlarged simplex category} $\bbDelta_+$) is obtained from $\Delta$ (resp. $\bbDelta$) by appending an initial object $\emptyset$, which will also sometimes be denoted by $[-1]$. The \emph{interval category} $\nabla$ is the subcategory of $\Delta$ on the objects $[n]$ for $n\geq 1$, the morphisms of which preserve maximal and minimal elements. The \emph{enlarged interval category} $\bbNabla$ is the subcategory of $\bbDelta$ on those sets of cardinality $\geq 2$, whose morphisms preserve maximal and minimal elements. The \emph{augmented interval category} $\nabla_+$ (resp. the \emph{augmented enlarged interval category} $\bbNabla_+$) is the subcategory of $\Delta$ (resp. $\bbDelta$) whose objects have cardinality $\geq 1$ and whose morphisms preserve the maximal and minimal elements. \end{defn} We denote by $S^1$ the unit circle $\{z\in \mathbb{C}|\,|z|=1\}$, and equip it with the orientation inherited from $\mathbb{C}$. We will denote by $r(n)=\{e^{\frac{2\pi i k}{n+1}}|0\leq k<n+1\}$ the sets of roots of unity in $S^1$.
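Before passing to cyclic orders, the finite combinatorics of the definitions above can be checked by machine. The following Python sketch (an illustration only; monotone maps $[m]\to[n]$ are encoded as weakly increasing value tuples) verifies the count $|\mathrm{Hom}_{\Delta}([m],[n])|=\binom{m+n+1}{m+1}$ and the cardinality shadow of the classical duality $\nabla\cong\Delta^{\op}$, which the interstice construction later in this section realizes:

```python
from itertools import combinations_with_replacement
from math import comb

def hom_delta(m, n):
    """Order-preserving maps [m] -> [n], as weakly increasing value tuples."""
    return list(combinations_with_replacement(range(n + 1), m + 1))

def hom_nabla(m, n):
    """Interval-category maps: order-preserving and endpoint-preserving."""
    return [f for f in hom_delta(m, n) if f[0] == 0 and f[-1] == n]

# |Hom_Delta([m],[n])| counts multisets of size m+1 drawn from n+1 values:
for m in range(5):
    for n in range(5):
        assert len(hom_delta(m, n)) == comb(m + n + 1, m + 1)

# cardinality shadow of the duality nabla = Delta^op (via interstices):
# |Hom_nabla([m],[n])| = |Hom_Delta([n-1],[m-1])|
for m in range(1, 6):
    for n in range(1, 6):
        assert len(hom_nabla(m, n)) == len(hom_delta(n - 1, m - 1))
```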
{\color{red} define cyclic orders} \begin{defn}\label{defn:CyclicCats} The \emph{cyclic category} $\Lambda$ has as its objects the standard cyclically ordered sets $\langle n\rangle$ for $n\geq 0$, and as its morphisms the maps of finite sets respecting the cyclic order. The \emph{enlarged cyclic category} $\bbLambda$ has as its objects all finite, non-empty, cyclically ordered sets, and as its morphisms the maps which respect the cyclic order. \end{defn} \begin{defn}\label{defn:SetCats} The category of the standard finite sets $\underline{n}:=\{1,2,\ldots, n\}$ for $n\geq 0$ will be denoted $\Fin$. The category of the standard finite pointed sets $\langle n\rangle:=\underline{n}\amalg \{\ast\}$ will be denoted $\Fin_\ast$. The category of all finite sets (resp. the category of all finite pointed sets) will be denoted by $\bFin$ (resp. by $\bFin_\ast$). When convenient, we will denote by $\bbGamma$ (resp. by $\Gamma$) the opposites of the categories $\bFin_\ast$ (resp. $\Fin_\ast$). We additionally denote by $\Ass$ the associative operad\todo{expand this defn}. Given a pointed set $S\in \bFin_\ast$, we denote by $S^\circ$ the set $S\setminus \{\ast\}$, where $\ast$ denotes the basepoint of $S$. \end{defn} \begin{const}[Linear interstices]\label{const:LinInterstice} Given a linearly ordered set $S\in \bbDelta$, we define an \emph{inner interstice} of $S$ to be an ordered pair $(k,k+1)\in S\times S$, where $k+1$ denotes the successor to $k$. The set of inner interstices of $S$ is, itself, a linearly ordered set, with the order \[ (k,k+1)\leq (j,j+1)\Leftrightarrow k\leq j \] We will denote the linearly ordered set of inner interstices of $S$ by $\mathbb{I}(S)$. Note that $\mathbb{I}([0])=\emptyset$. Given a linearly ordered set $S\in \bbDelta_+$, let $\hat{S}$ be the set $\{a\}\amalg S\amalg \{b\}$, where $b$ is taken to be maximal and $a$ minimal. We define an \emph{outer interstice} of $S$ to be an inner interstice of $\hat{S}$.
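To fix ideas, here is a small example (added for illustration). For $S=[2]=\{0<1<2\}$ we have $\mathbb{I}([2])=\{(0,1),(1,2)\}$, while $\hat{S}=\{a<0<1<2<b\}$, so the outer interstices of $[2]$ are \[ (a,0),\quad (0,1),\quad (1,2),\quad (2,b). \] In general $[n]$ has $n$ inner interstices and $n+2$ outer interstices; note that the convention for $\emptyset$ below, with its single outer interstice $(a,b)$, continues this pattern.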
We will denote the linearly ordered set of outer interstices of $S$ by $\mathbb{O}(S)$. Note that $\mathbb{O}(\emptyset)=\{(a,b)\}$. We define functors \[ \mathbb{O}:\bbDelta_+^{\op}\to \bbNabla_+;\quad S\mapsto \mathbb{O}(S) \] and \[ \mathbb{I}: \bbNabla_+^\op\to \bbDelta_+;\quad S\mapsto \mathbb{I}(S) \] as follows (we will define $\mathbb{O}$ explicitly, the definition of $\mathbb{I}$ is similar). Given a morphism $f:S\to T$ in $\bbDelta_+$, we define a morphism $\mathbb{O}(f):\mathbb{O}(T)\to \mathbb{O}(S)$ by setting \[ \mathbb{O}(f)(j,j+1)=\begin{cases} (k,k+1) & f(k)\leq j\leq j+1\leq f(k+1)\\ (a,a+1) & j+1\leq f(k)\; \forall k\in S\\ (b-1,b) & j\geq f(k)\; \forall k\in S. \end{cases} \] Pictorially, we can represent the morphism $\mathbb{O}(f)$ as a forest as in Figure \ref{fig:DualForest}, thinking of leaves $j\in \mathbb{O}(T)$ as being attached to the root $k\in \mathbb{O}(S)$ if $\mathbb{O}(f)(j)=k$. \begin{figure}[htb] \begin{tikzpicture} \foreach \x/\lab/\nom in {0.5/a_0/a0,1.5/a_1/a1,2.5/a_2/a2,3.5/a_3/a3}{ \path (0,-\x) node (\nom) {$\lab$}; }; \begin{scope}[yshift=12] \foreach \x/\lab/\nom in {0.5/b_0/b0,1.5/b_1/b1,2.5/b_2/b2,3.5/b_3/b3,4.5/b_4/b4}{ \path (4,-\x) node (\nom) {$\lab$}; }; \end{scope} \draw[->] (a0) to (b0); \draw[->] (a1) to (b2); \draw[->] (a2) to (b3); \draw[->] (a3) to (b3); \end{tikzpicture} \qquad \begin{tikzpicture} \foreach \x/\lab/\nom in {0.5/a_0/a0,1.5/a_1/a1,2.5/a_2/a2,3.5/a_3/a3}{ \path (0,-\x) node (\nom) {$\lab$}; }; \foreach \y in {0,1,2,3,4}{ \draw[blue](-0.2,-\y) to (0.2,-\y); }; \begin{scope}[yshift=12] \foreach \x/\lab/\nom in {0.5/b_0/b0,1.5/b_1/b1,2.5/b_2/b2,3.5/b_3/b3,4.5/b_4/b4}{ \path (4,-\x) node (\nom) {$\lab$}; }; \foreach \y in {0,1,2,3,4,5}{ \draw[blue](4.2,-\y) to (3.8,-\y); } \end{scope} \draw[->] (a0) to (b0); \draw[->] (a1) to (b2); \draw[->] (a2) to (b3); \draw[->] (a3) to (b3); \end{tikzpicture} \caption{Left: a morphism $f$ of linearly ordered sets.
Right: the morphism $\mathbb{O}(f)$, visualized as a forest (blue). {\color{red} Finish!!!}}\label{fig:DualForest} \end{figure} Note that the functors $\mathbb{I}$ and $\mathbb{O}$ define an equivalence of categories. Since $\Delta_+$ (resp. $\nabla_+$) is the skeletal version of $\bbDelta_+$ (resp. $\bbNabla_+$), and is gaunt, we see that we get an induced \emph{isomorphism} of categories \[ O:\Delta_+^\op\overset{\cong}{\longleftrightarrow }\nabla_+:I \] Moreover, we can define a functor $\bbNabla_+\to \Fin_\ast$ by \[ S\mapsto (S\amalg \{\ast\})_{/\on{max}(S)\sim\on{min}(S)\sim \ast} \] We then find that the induced functor \[ \Delta^{\op}\hookrightarrow \Delta_+^\op\overset{O}{\to} \nabla_+\to \Fin_\ast \] is precisely the functor $\on{cut}:\Delta^\op\to \Fin_\ast$ defined in \cite[4.1.2.9]{LurieHA}. \end{const} \begin{defn} Given two linearly ordered sets $S,T\in \bbDelta_+$ define the \emph{ordinal sum} $S\oplus T$ to be the set $S\amalg T$ equipped with the linear order defined by the orders on $S$ and $T$ and the requirement that for all $s\in S$ and $t\in T$, $s\leq t$. The ordinal sum defines a monoidal structure on $\bbDelta_+$. Given two linearly ordered sets $S,T\in \bbNabla_+$, with $b$ the maximum of $S$ and $a$ the minimum of $T$, define the \emph{imbrication} $S\star T$ to be the linearly ordered set $(S\oplus T)_{/a\sim b}$ (note that since $a$ is the successor to $b$ in $S\oplus T$, there is a canonical linear order on $S\star T$ compatible with the quotient map). \end{defn} \begin{lem} The functor $\mathbb{O}$ is a monoidal functor sending the ordinal sum to the imbrication. \end{lem} \begin{proof} The outer interstices of $S\oplus T$ are precisely the outer interstices of $S$ together with those of $T$, with the maximal outer interstice of $S$ identified with the minimal outer interstice of $T$. This identifies $\mathbb{O}(S\oplus T)$ with the imbrication $\mathbb{O}(S)\star\mathbb{O}(T)$, naturally in $S$ and $T$. \end{proof} \begin{const}[Cyclic Duality]\label{const:CyclicDuality} In analogy to the construction of the linear interstice functors, we define a duality \[ \mathbb{D}:\bbLambda^{\op}\to \bbLambda \] on the cyclic category. Let $S\in \bbLambda$ be a cyclically ordered set.
We define a \emph{cyclic interstice} of $S$ to be an ordered pair $(a,a+1)\in S\times S$, where $a+1$ denotes the successor of $a$ under the cyclic order. We denote the set of cyclic interstices of $S$ by $\mathbb{D}(S)$. The set $\mathbb{D}(S)$ inherits a canonical cyclic order from $S$\todo{give explicit characterization}, which can be visualized as in Figure \ref{fig:CyclicInterstices}. \begin{figure}[htb] \begin{tikzpicture} \draw (0,0) circle (1); \foreach \th in {0,60,120,180,240,300}{ \draw[fill=black] (\th:1) circle (0.05); }; \foreach \th in {30,90,150,210,270,330}{ \draw[blue,fill=blue] (\th:1) circle (0.05); }; \end{tikzpicture} \caption{A cyclic set with its cyclic order visualized via an embedding into the oriented circle (black), together with its set of cyclic interstices (blue).}\label{fig:CyclicInterstices} \end{figure} The functor $\mathbb{D}$ is specified on morphisms by an analogue of Construction \ref{const:LinInterstice}: for $f:S\to T$ in $\bbLambda$, the morphism $\mathbb{D}(f):\mathbb{D}(T)\to \mathbb{D}(S)$ sends a cyclic interstice $(j,j+1)$ of $T$ to the unique cyclic interstice $(k,k+1)$ of $S$ such that $(j,j+1)$ lies in the cyclic interval from $f(k)$ to $f(k+1)$. This functor is an equivalence of categories\todo{Cite DK}. Since $\Lambda$ is the skeletal version of $\bbLambda$, $\mathbb{D}$ descends to an equivalence $D:\Lambda^\op\to \Lambda$. \end{const} \begin{const}[Cyclic closures]\label{const:CyclicClosures} We define a functor $\mathbb{K}:\bbDelta\to \bbLambda$ in the following way. Given a linearly ordered set $S$ of cardinality $n+1$, there is a unique order-preserving bijection $\phi:S\to [n]$. We define a bijection \[ S\to r(n); \quad j\mapsto \exp\left(\frac{2\pi i \phi(j)}{n+1}\right) \] to the $(n+1)^{\on{st}}$ roots of unity in $S^1$. The orientation on $S^1$ then yields a canonical cyclic order on $S$. \todo{on morphisms} Passing to skeletal versions yields the well-known functor $\kappa: \Delta\to \Lambda$.
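As a concrete example (added for illustration): $\mathbb{K}$ equips $[2]=\{0<1<2\}$ with the cyclic order obtained from the embedding \[ 0\mapsto 1,\qquad 1\mapsto e^{\frac{2\pi i}{3}},\qquad 2\mapsto e^{\frac{4\pi i}{3}} \] into the third roots of unity, so that in $\mathbb{K}([2])$ the successor of $2$ is $0$.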
Via the equivalences $\mathbb{O}$ and $\mathbb{D}$ we can then define a functor $\mathbb{C}:\bbNabla\to \bbLambda$ such that the diagram \[ \begin{tikzcd} \bbDelta^\op\arrow[d,"\mathbb{K}"']\arrow[r,"\mathbb{O}"] & \bbNabla\arrow[d,"\mathbb{C}"]\\ \bbLambda^\op\arrow[r,"\mathbb{D}"'] & \bbLambda \end{tikzcd} \] commutes up to natural isomorphism. The functor $\mathbb{C}$ admits the following explicit description on objects. Let $S\in \bbNabla$ with maximal element $b$ and minimal element $a$. Then $\mathbb{C}(S)$ can be identified with the quotient of $\mathbb{K}(S)$ by the identification $a\sim b$.\todo{write about cyclic order here} Once again, we have that $\mathbb{C}$ descends to a functor $C:\nabla\to \Lambda$. \end{const} \begin{defn}\label{defn:CyclicCatChosenTriv} We introduce one final equivalent variant of $\bbLambda$, which we will denote $\bfLambda$. The objects of $\bfLambda$ consist of pairs $(S,\phi)$ where $S\in \bbLambda$, and $\phi:S\to \langle |S|-1\rangle$ is an isomorphism in $\bbLambda$. The morphisms of $\bfLambda$ are simply the morphisms of $\bbLambda$. It is clear that the forgetful functor $\bfLambda\to\bbLambda$ is an equivalence. \end{defn} \subsection{Calabi-Yau algebras} \begin{defn} We define a category $\Ass_\CY$ to have objects given by $S\in \bFin_\ast$ together with a partition $S_1\amalg S_2=S^\circ$. We will denote these objects by the ordered pair $\langle S_1,S_2\rangle$. A morphism $\langle S_1,S_2\rangle\to \langle T_1,T_2\rangle$ consists of \begin{enumerate} \item A morphism $\phi:S\to T$ in $\bFin_\ast$ such that, for every $i\in T^\circ$ with $\phi^{-1}(i)\neq \emptyset$: \begin{itemize} \item if $i\in T_1$, then $\phi^{-1}(i)\subset S_1$; \item if $i\in T_2$, then either $\phi^{-1}(i)\subset S_1$, or $\phi^{-1}(i)$ consists of a single element of $S_2$. \end{itemize} \item If $i\in T_1$, a linear order on $\phi^{-1}(i)$. \item If $i\in T_2$ with $\phi^{-1}(i)\subset S_1$, a cyclic order on $\phi^{-1}(i)$.
\end{enumerate} Composition in $\Ass_\CY$ is defined by composition in $\bFin_\ast$, together with the induced linear and cyclic orders. \todo{Write whole up} Note that $\Ass_\CY$ comes equipped with a forgetful functor to $\bFin_\ast$. \end{defn} \begin{lem} The functor $\Ass_\CY\to \bFin_\ast$ displays $\Ass_\CY$ as an $\infty$-operad. \end{lem} \begin{proof} We first note that if $\phi:S\to T$ is an inert morphism in $\bFin_\ast$ and $\langle S_1,S_2\rangle$ is an object in $\Ass_\CY$ lying over $S$, we can define $T_1:=\phi(S_1)$ and $T_2:=\phi(S_2)$ to get an object $\langle T_1,T_2\rangle$ over $T$. Since $\phi$ is inert, we need specify no extra data to lift $\phi$ to a morphism $\tilde{\phi}:\langle S_1,S_2\rangle \to \langle T_1,T_2\rangle$. Let $\tilde{\psi}: \langle S_1,S_2\rangle \to \langle U_1,U_2\rangle$ be any morphism in $\Ass_\CY$, and let $\psi:S\to U$ be its image in $\bFin_\ast$. Suppose $\gamma:T\to U$ is a morphism in $\bFin_\ast$. \end{proof} \subsection{Cartesian monoidal structures} Throughout this paper, we will model (symmetric) monoidal structures by Cartesian fibrations, rather than the coCartesian fibrations used in \cite{LurieHA}. These fibrations will be defined via adjunctions with the following categories. Throughout this section, $\CC$ will denote an $\infty$-category which admits finite products. \begin{defn} The category $\Delta^{\amalg}$ has as its objects pairs $([n],\{i,j\})$, where $[n]\in \Delta$ and $i\leq j$ are elements in $[n]$. The morphisms $ ([n],\{i,j\})\to ([m],\{k,\ell\})$ consist of a morphism $\phi:[n]\to [m]$ such that $\phi(i)\leq k\leq \ell\leq \phi(j)$. We will, in general, think of $\{i,j\}$ as an interval inside $[n]$, and denote by $\{i\leq j\}$ the linearly ordered set \[ \{i\leq j\}:= \{i,i+1,\ldots,j\}\subset [n]. \] The category $\bFin_\ast^\amalg$ has as its objects pairs $(S, T)$ where $S\in \bFin_\ast$ and $T\subset S^\circ$.
A morphism $(S,T)\to (P,Q)$ consists of a morphism $\phi:S\to P$ in $\bFin_\ast$ such that $\phi(T)\subset Q$. We will sometimes denote by $\bbGamma^\amalg$ the category $(\bFin_\ast^\amalg)^\op$. \end{defn} \begin{rmk} We can provide an alternate characterization of $\Delta^\amalg$ and $(\bFin_\ast^\amalg)^\op$. They are the coCartesian fibrations defined as Grothendieck constructions of the functors\todo{provide definitions of $I_{[n]}$ etc.} \[ \Delta\to \Cat;\quad [n] \mapsto I_{[n]}^\op \] and the contravariant power set functor \[ \bFin_\ast^\op\to \Cat;\quad S\mapsto \mathcal{P}(S^\circ)^\op \] respectively. \end{rmk} \begin{const} The functor $\on{cut}:\Delta\to \bFin_\ast^\op$ yields a functor $\Delta^\amalg\to (\bFin_\ast^\amalg)^\op$. To see this, we first note that for $\{i,j\}\subset [n]$ in $\Delta^\amalg$, we have $\mathbb{O}(\{i\leq j\})\subset \mathbb{O}([n])$. On objects we therefore define $([n],\{i,j\})\mapsto (\mathbb{O}([n]),\mathbb{O}(\{i\leq j\}))$. Given a morphism $f:([n],\{i,j\})\to ([m],\{k,\ell\})$ in $\Delta^\amalg$, we get a morphism $\mathbb{O}(f): \mathbb{O}([m])\to \mathbb{O}([n])$. Moreover, the condition that $f(i)\leq k\leq \ell\leq f(j)$ ensures that $\mathbb{O}(f)\left( \mathbb{O}(\{k\leq \ell\})\right) \subset \mathbb{O}(\{i\leq j\})$. \end{const} \begin{const}[Cartesian monoidal structures] Given an $\infty$-category $\CC$ with finite products, we can associate two Cartesian fibrations to $\CC$ as follows. We define a functor of $\infty$-categories $\overline{\CC^\boxtimes} \to \Delta$ via the universal property \[ \Hom_\Delta(K, \overline{\CC^\boxtimes})\cong \Hom_{\Set_\Delta} (K\times_\Delta \Delta^\amalg, \CC). \] Similarly, we define a functor $\overline{\CC^\times}\to \bbGamma$ via the universal property \[ \Hom_{\bbGamma}(K, \overline{\CC^\times})\cong \Hom_{\Set_\Delta} (K\times_{\bbGamma} \bbGamma^\amalg, \CC). \] Both of these are Cartesian fibrations by dint of \cite[3.2.2.13]{LurieHTT}.
We now let $\CC^\boxtimes\subset \overline{\CC^\boxtimes}$ be the full subcategory on those objects $G:I_{[n]}^\op\to \CC$ for which $G$ displays $G(\{i\leq j\})$ as a product over $G(\{k\leq k+1\})$ for $i\leq k<j$. Similarly, we let $\CC^\times\subset \overline{\CC^\times}$ be the full subcategory on those objects $G:\mathcal{P}(S^\circ)^{\op}\to \CC$ for which $G$ displays $G(S^\circ)$ as a product over $G(\{i\})$ for $i\in S^\circ$. \todo{FINISH!!} \end{const} \begin{prop} The functor $\CC^\boxtimes\to \Delta$ is a Cartesian fibration exhibiting the Cartesian monoidal structure on $\CC$. \end{prop} \begin{proof} This is \cite[Prop. 10.3.8]{DKHSSI}. \end{proof} \begin{prop} The functor $\CC^\times\to \bbGamma$ is a Cartesian fibration exhibiting the Cartesian symmetric monoidal structure on $\CC$. \end{prop} \begin{proof} We first note that $\CC^\times\to \bbGamma$ is a\todo{Fill in. Mutatis mutandis same as in DKHSSI.} \end{proof} \subsection{Categories of Spans} Throughout this section, we will assume that $\CC$ is now an $\infty$-category with small limits. \begin{const}[Categories of Spans]\label{const:TwSpan} We define the functor $\Tw:\Delta\to \Set_\Delta$ by \[ [n]\mapsto N(I_{[n]})^\op. \] By left Kan extension along the Yoneda embedding and restriction, we get an adjunction, which we will abuse notation and denote by \[ \Tw:\Set_\Delta\leftrightarrow \Set_\Delta: \Span. \] For an $\infty$-category $\DD$, the simplicial set $\Tw(\DD)$ is an $\infty$-category, which we will call the \emph{twisted arrow $\infty$-category} of $\DD$. Given $B\in \Set_\Delta$, we can \end{const}
{"config": "arxiv", "file": "1905.06671/menagerie.tex"}
TITLE: Discrete Proof with Irrational Numbers QUESTION [0 upvotes]: I've been stuck on this problem below and have a few more identical to it on my assignment. And yes, I've Googled around and found a few links on here relating to the density of $\mathbb{Q} \subseteq \mathbb{R}$, but I'm just not getting it. Can somebody please dumb the steps down for me to prove it? It is an intro class, so I'm still trying to get down some of the vocab. Prove that, if $y$ and $z$ are irrational numbers such that $y < z$, then there exists some $x \in \mathbb{Q}$ such that $y < x < z$. (I'm unsure of the whole process, otherwise I'd attempt to show some work...) Thank you REPLY [1 votes]: Here's a sketch: take the decimal expansion of each of them. Since both numbers are irrational, the expansions go on forever and each is unique (no trailing string of 9s). Since $y < z$, there must be a first digit where the two expansions disagree. Truncate the decimal expansion of the larger one, $z$, just after that digit. You then have a terminating decimal, hence a rational, which lies strictly between the two numbers.
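The truncation step can be sketched in code; the function below and its digit-string inputs are illustrative only, assuming the expansions are given as strings with the same integer part:

```python
def truncate_between(y_digits: str, z_digits: str) -> str:
    """Given decimal-expansion prefixes of irrationals y < z (as strings),
    return the terminating decimal obtained by truncating z just after the
    first digit where the expansions disagree; it lies strictly between them."""
    for k, (a, b) in enumerate(zip(y_digits, z_digits)):
        if a != b:
            return z_digits[:k + 1]
    raise ValueError("expansions agree on the given prefixes")

# y = sqrt(2) = 1.41421356..., z = sqrt(2) + 0.002 = 1.41621356...
print(truncate_between("1.41421356", "1.41621356"))  # -> 1.416
```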
{"set_name": "stack_exchange", "score": 0, "question_id": 2459728}
\begin{document} \title{Landis-type conjecture for the half-Laplacian} \author{Pu-Zhao Kow} \address{Department of Mathematics and Statistics, P.O. Box 35 (MaD), FI-40014 University of Jyv\"{a}skyl\"{a}, Finland.} \email{pu-zhao.pz.kow@jyu.fi} \author{Jenn-Nan Wang} \address{Institute of Applied Mathematical Sciences, NCTS, National Taiwan University, Taipei 106, Taiwan.} \email{jnwang@math.ntu.edu.tw} \begin{abstract} In this paper, we study the Landis-type conjecture, i.e., unique continuation property from infinity, of the fractional Schr\"{o}dinger equation with drift and potential terms. We show that if any solution of the equation decays at a certain exponential rate, then it must be trivial. The main ingredients of our proof are the Caffarelli-Silvestre extension and Armitage's Liouville-type theorem. \end{abstract} \keywords{Unique continuation property, Landis conjecture, half-Laplacian, Caffarelli-Silvestre extension, Liouville-type theorem} \subjclass[2020]{Primary: 35A02, 35B40, 35R11. Secondary: 35J05, 35J15} \maketitle \begin{sloppypar} \section{Introduction} In this paper, we consider the following equation with the half Laplacian \begin{equation} (-\Delta)^{\frac{1}{2}}u+{\bf b}({\bf x})\cdot\nabla u+q({\bf x})u=0\quad\text{in}\;\;\mathbb{R}^{n},\label{eq:Schrodinger} \end{equation} where $n\ge 1$. Our aim is to investigate the minimal decay rate of nontrivial solutions of \eqref{eq:Schrodinger}. In other words, we consider the unique continuation property from infinity of \eqref{eq:Schrodinger}. This problem is closely related to the conjecture proposed by Landis in the 60's \cite{KL88}. Landis conjectured that, if $u$ is a solution to the classical Schr\"{o}dinger equation \begin{equation} -\Delta u+q({\bf x})u=0\quad\text{in}\;\;\mathbb{R}^{n},\label{eq:Schrodinger-classical} \end{equation} with a bounded potential $q$, satisfying the decay estimate \[ |u({\bf x})|\le\exp(-C|{\bf x}|^{1+}), \] then $u\equiv0$. 
Landis' conjecture was disproved by Meshkov \cite{Meshkov92}, who constructed a \emph{complex-valued} potential $q\in L^{\infty}(\mathbb{R}^{n})$ and a nontrivial solution $u$ of \eqref{eq:Schrodinger-classical} such that \[ |u({\bf x})|\le\exp(-C|{\bf x}|^{\frac{4}{3}}). \] In the same work, Meshkov showed that if \begin{equation*} |u({\bf x})|\le\exp(-C|{\bf x}|^{\frac{4}{3}+}), \end{equation*} then $u\equiv 0$. Based on a suitable Carleman estimate, a quantitative version of Meshkov's result was established in \cite{BK05quantitativeLandis}; see also \cite{Cruz-Sampedro99,Davey14magneticSch,DZ18Landis,DZ19Landis,KL19LandisNS,LUW11LandisNS,LW14Landis} for related results. We also refer to \cite[Theorem~2]{Zhu18LandisHighOrder} for some decay estimates at infinity for higher order elliptic equations. In view of Meshkov's example, Kenig modified Landis' original conjecture and asked in \cite{kenig06realLandis} whether Landis' conjecture holds true for \emph{real-valued} potentials $q$. The real version of Landis' conjecture in the plane was resolved recently in \cite{LMNNrealLandis}. We also refer to \cite{Davey20realLandis,DKW17realLandis,DKW20realLandis,KSW15realLandis} for the early development of the real version of Landis' conjecture. For the fractional Schr\"{o}dinger equation, the Landis-type conjecture was studied in \cite{RW19Landis}. The main theme of this paper is to extend the results in \cite{RW19Landis} to the fractional Schr\"{o}dinger equation with the half Laplacian \eqref{eq:Schrodinger}. Previously, the authors in \cite{KWLandisDrift} proved some partial results for the fractional Schr\"odinger equation \begin{equation} ((-\Delta)^{s}+b({\bf x}){\bf x}\cdot\nabla + q({\bf x}))u=0\quad\text{in}\;\;\mathbb{R}^{n},\label{eq:sch} \end{equation} where $s\in(0,1)$ and $b$, $q$ are scalar-valued functions. The main tools used in \cite{KWLandisDrift} are the Caffarelli-Silvestre extension and the Carleman estimate.
The particular form of the drift coefficient in \eqref{eq:sch} is due to the applicability of the Carleman estimate. It turns out that when $s=\frac 12$, i.e., the case of the half Laplacian, we can treat a general vector-valued drift coefficient ${\bf b}({\bf x})$ in \eqref{eq:Schrodinger}. The underlying reason is that the Caffarelli-Silvestre extension solution of $(-\Delta)^{\frac{1}{2}}u=0$ in $\R^n$ is a harmonic function in ${\mathbb R}_+^{n+1}$. Inspired by this observation, we show that if both ${\bf b}$ and $q$ are differentiable, then any nontrivial solution of \eqref{eq:Schrodinger} cannot decay exponentially at infinity. The detailed statement is described in the following theorem. \begin{thm} \label{thm:main1} \begin{subequations} Assume that there exists a constant $\Lambda>0$ such that \begin{equation} \|q\|_{L^{\infty}(\mathbb{R}^{n})}+\|\nabla q\|_{L^{\infty}(\mathbb{R}^{n})}+\|\nabla{\bf b}\|_{L^{\infty}(\mathbb{R}^{n})}\le\Lambda\label{eq:assumption-1} \end{equation} and, furthermore, there exists an $\epsilon>0$, depending only on $n$, such that \begin{equation} \|{\bf b}\|_{L^{\infty}(\mathbb{R}^{n})}\le\epsilon.\label{eq:assumption-2} \end{equation} If $u \in W^{2,p}(\mathbb{R}^{n})$ for some integer $p>n$ is a solution to \eqref{eq:Schrodinger} such that \begin{equation} |u({\bf x})|\le\Lambda e^{-\lambda|{\bf x}|}\label{eq:decay-assumption} \end{equation} for some $\lambda>0$, then $u\equiv0$. \end{subequations} \end{thm} \begin{rem} Note that both $(-\Delta)^{\frac{1}{2}}u$ and $\nabla u$ are first-order terms. In view of the $L^{p}$ estimate of the Riesz transform \eqref{eq:boundedness-Riesz-transform1}, \eqref{eq:boundedness-Riesz-transform2}, the assumption \eqref{eq:assumption-2} and the regularity requirement of $u$ are imposed to ensure that the non-local operator $(-\Delta)^{\frac{1}{2}}u$ is the dominated term in \eqref{eq:Schrodinger}. \end{rem} It is interesting to compare Theorem~\ref{thm:main1} with \cite[Theorem 1]{RW19Landis}.
Assume that $u\in H^{s}(\mathbb{R}^{n})$ is a solution to \begin{equation} (-\Delta)^{s}u+q({\bf x})u=0\quad\text{in}\,\,\mathbb{R}^{n}\label{eq:fractional-Sch} \end{equation} such that $|q({\bf x})|\le1$ and $|{\bf x}\cdot\nabla q({\bf x})|\le1$. If \begin{equation}\label{tx} \int_{\mathbb{R}^{n}}e^{|{\bf x}|^{\alpha}}|u|^{2}\,\mathsf{d}{\bf x}<\infty\quad\text{for some}\,\,\alpha>1, \end{equation} then $u\equiv0$. Therefore, for $s=\frac 12$, Theorem~\ref{thm:main1} extends their results by slightly relaxing the condition on $q$ and also adding a drift term. Another key improvement is that the exponential decay rate $e^{-\lambda|{\bf x}|}$ is sharper than \eqref{tx}. The proof of Theorem~\ref{thm:main1} consists of two steps. Inspired by \cite{RW19Landis}, we first pass the boundary decay \eqref{eq:decay-assumption} to the bulk decay of the Caffarelli-Silvestre extension solution (harmonic function) in the extended space $\mathbb{R}^{n}\times(0,\infty)$. In the second step, we apply the Liouville-type theorem (Theorem~\ref{thm:Liouville}) to the harmonic function. It is noted that we do not use any Carleman estimate here. On the other hand, using the harmonic function in the unit ball ${v}_{0}({\bf z}):=\Re(e^{-1/{\bf z}^{\alpha}})$, ${\bf z}\in{\mathbb C}$, $0<\alpha<1$ (see \cite{Jin93UCP}), it is not difficult to construct an example to show the optimality of the Liouville-type theorem. In view of this example, we believe that the decay assumption \eqref{eq:decay-assumption} is optimal. When ${\bf b}\equiv 0$, the following theorem can be found in \cite[Theorem 1.1.9]{Kow21dissertation}, which was obtained using similar ideas as in the proof of Theorem~\ref{thm:main1}. \begin{thm} \label{thm:main2}Let $q\in L^{\infty}(\mathbb{R}^{n})$ (not necessarily differentiable) satisfy \[ \|q\|_{L^{\infty}(\mathbb{R}^{n})}\le\Lambda. 
\] If $u \in H^{\frac{1}{2}}(\mathbb{R}^{n})$ is a solution to \eqref{eq:Schrodinger} with ${\bf b}\equiv0$ such that \begin{equation}\label{cx} \int_{\mathbb{R}^{n}}e^{|{\bf x}|}|u|^{2}\,\mathsf{d}{\bf x}<\infty, \end{equation} then $u\equiv0$. \end{thm} \begin{rem} Theorem~\ref{thm:main2} is an immediate consequence of \cite[Proposition~2.2]{RW19Landis} and Theorem~\ref{thm:Liouville} (without using Proposition~\ref{prop:boundary-bulk}). Therefore we only need \eqref{cx}, namely, \eqref{eq:decay-assumption} is unnecessary when ${\bf b} \equiv 0$. \end{rem} It is interesting to compare this result with \cite[Theorem 2]{RW19Landis}. There, it was proved that if $u\in H^{s}(\mathbb{R}^{n})$ solves \eqref{eq:fractional-Sch} with $|q(x)|\le 1$ and \begin{equation}\label{txx} \int_{\mathbb{R}^{n}}e^{|{\bf x}|^{\alpha}}|u|^{2}\,\mathsf{d}{\bf x}<\infty\quad\text{for some}\,\,\alpha>\frac{4s}{4s-1}, \end{equation} then $u\equiv0$. When $s=\frac 12$, \eqref{txx} becomes \[ \int_{\mathbb{R}^{n}}e^{|{\bf x}|^{\alpha}}|u|^{2}\,\mathsf{d}{\bf x}<\infty \] for $\alpha >2$, which is clearly stronger than \eqref{cx}. On the other hand, Theorem~\ref{thm:main2} holds regardless whether $q$ is real-valued or complex-valued. This paper is organized as follows. In Section~\ref{sec:Decay-gradient}, we will study the decaying behavior of $\nabla u$. In Section~\ref{sec:Caffarelli-Silvestre-extension}, we localize the nonlocal operator $(-\Delta)^{\frac{1}{2}}$ by the Caffarelli-Silvestre extension. In Section~\ref{sec:Some-estimates-CS}, we derive some useful estimates about the Caffarelli-Silvestre extension $\tilde{u}$ of the solution $u$, which is harmonic. In Section~\ref{sec:Boundary-Bulk}, we obtain the decay rate of $\tilde{u}$ from that of $u$. Finally, we prove Theorem~\ref{thm:main1} in Section~\ref{sec:main} by Armitage's Liouville-type theorem. Furthermore, we provide another proof of this Liouville-type theorem in Appendix~\ref{sec:Appendix1}. 
\section{\label{sec:Decay-gradient}Decay of the gradient} Let $1<p<\infty$. For each $u\in L^{p}(\mathbb{R}^{n})$, let $\psi$ satisfy $(-\Delta)^{\frac{1}{2}}\psi=u$ and let ${\bf u}:=\nabla\psi$. Using the $L^{p}$-boundedness of the Riesz transform \cite{stein2016singular} (see also \cite{BG13Riesz}), we can show that \begin{equation} \|{\bf u}\|_{L^{p}(\mathbb{R}^{n})}\le C(n,p)\|u\|_{L^{p}(\mathbb{R}^{n})}.\label{eq:boundedness-Riesz-transform1} \end{equation} We remark that this estimate is also used in the proof of \cite[Theorem 2.1]{CCW01criticalQG}. Note that we can formally write ${\bf u}=\nabla(-\Delta)^{-\frac{1}{2}}u$. Plugging $(-\Delta)^{\frac{1}{2}}\psi=u$ and ${\bf u}=\nabla\psi$ into \eqref{eq:boundedness-Riesz-transform1} implies \begin{equation} \|\nabla\psi\|_{L^{p}(\mathbb{R}^{n})}\le C(n,p)\|(-\Delta)^{\frac{1}{2}}\psi\|_{L^{p}(\mathbb{R}^{n})}.\label{eq:boundedness-Riesz-transform2} \end{equation} Thanks to \eqref{eq:boundedness-Riesz-transform2}, we can obtain the following lemma. \begin{lem} \label{lem:gradient-decay}Let $2 \le p < \infty$ be an integer. Assume that \eqref{eq:assumption-1} and \eqref{eq:assumption-2} hold. Let $u \in W^{2,p}(\mathbb{R}^{n})$ be a solution to \eqref{eq:Schrodinger} such that the decay assumption \eqref{eq:decay-assumption} holds. Then $(-\Delta)^{\frac{1}{2}}u$ satisfies \begin{equation} \int_{\mathbb{R}^{n}}e^{\frac{\lambda}{2}|{\bf x}|}|(-\Delta)^{\frac{1}{2}}u|^{2}\,\mathsf{d}{\bf x}+\int_{\mathbb{R}^{n}}e^{\frac{\lambda}{2}|{\bf x}|} |(-\Delta)^{\frac{1}{2}}u|^{p} \,\mathsf{d}{\bf x}\le C\label{eq:gradient-decay} \end{equation} for some positive constant $C=C(n,p,\lambda,\Lambda)$. \end{lem} \begin{proof} We first estimate the $L^{p}$-norm of $\nabla u$.
Taking the $L^{p}$-norm of \eqref{eq:Schrodinger} and using \eqref{eq:boundedness-Riesz-transform2}, \eqref{eq:assumption-2}, \eqref{eq:decay-assumption}, we have \begin{align*} \|\nabla u\|_{L^{p}(\mathbb{R}^{n})} & \le C(n,p)\|(-\Delta)^{\frac{1}{2}}u\|_{L^{p}(\mathbb{R}^{n})}\\ & \le C(n,p)\bigg( \|{\bf b}\|_{L^{\infty}(\mathbb{R}^{n})} \|\nabla u\|_{L^{p}(\mathbb{R}^{n})} + \|q\|_{L^{\infty}(\mathbb{R}^{n})}\|u\|_{L^{p}(\mathbb{R}^{n})}\bigg)\\ & \le\epsilon C(n,p) \|\nabla u\|_{L^{p}(\mathbb{R}^{n})} + C(n,p,\lambda,\Lambda). \end{align*} Choosing $\epsilon=(2C(n,p))^{-1}$ in the estimate above gives \begin{equation} \|\nabla u\|_{L^{p}(\mathbb{R}^{n})}\le C(n,p,\lambda,\Lambda).\label{eq:Lp-gradient-bound} \end{equation} Next, we estimate the $L^{p}$-norm of $\nabla^{2}u$. Differentiating \eqref{eq:Schrodinger} yields \begin{equation} (-\Delta)^{\frac{1}{2}}\partial_{j}u+{\bf b}({\bf x})\cdot\nabla(\partial_{j}u)+\partial_{j}{\bf b}({\bf x})\cdot\nabla u+q({\bf x})\partial_{j}u+\partial_{j}q({\bf x})u=0\label{eq:Schrodinger-differentiate} \end{equation} for each $j=1,\cdots,n$.
Taking the $L^{p}$-norm of \eqref{eq:Schrodinger-differentiate}, we have \begin{align*} \|\nabla(\partial_{j}u)\|_{L^{p}(\mathbb{R}^{n})} & \le C(n,p)\|(-\Delta)^{\frac{1}{2}}\partial_{j}u\|_{L^{p}(\mathbb{R}^{n})}\\ & \le C(n,p)\bigg( \|{\bf b}\|_{L^{\infty}(\mathbb{R}^{n})}\|\nabla(\partial_{j}u)\|_{L^{p}(\mathbb{R}^{n})} + \|\nabla{\bf b}\|_{L^{\infty}(\mathbb{R}^{n})}\|\nabla u\|_{L^{p}(\mathbb{R}^{n})}\\ & \quad+\|q\|_{L^{\infty}(\mathbb{R}^{n})}\|\nabla u\|_{L^{p}(\mathbb{R}^{n})} + \|\nabla q\|_{L^{\infty}(\mathbb{R}^{n})} \|u\|_{L^{p}(\mathbb{R}^{n})}\bigg)\\ & \le C(n,p) \bigg(\epsilon \|\nabla(\partial_{j}u)\|_{L^{p}(\mathbb{R}^{n})} + \Lambda \|\nabla u\|_{L^{p}(\mathbb{R}^{n})} + \Lambda \|u\|_{L^{p}(\mathbb{R}^{n})}\bigg)\\ & \le\frac{1}{2} \|\nabla(\partial_{j} u)\|_{L^{p}(\mathbb{R}^{n})} + C(n,p,\lambda,\Lambda), \end{align*} and hence \begin{equation} \|\nabla^{2}u\|_{L^{p}(\mathbb{R}^{n})}\le C(n,p,\lambda,\Lambda).\label{eq:Lq-hessian-bound} \end{equation} Hence, it follows from the Sobolev embedding that $\|\nabla u\|_{L^\infty(\R^n)}\le C(n,p,\lambda,\Lambda)$. Now we would like to derive the $L^{2}$-decay of $\nabla u$. 
Combining \eqref{eq:Lp-gradient-bound} and \eqref{eq:Lq-hessian-bound}, it is easy to see that \begin{align*} & \int_{\mathbb{R}^{n}}e^{\frac{\lambda}{2}|{\bf x}|}|\partial_{j}u|^{2}\,\mathsf{d}{\bf x}=\int_{\mathbb{R}^{n}}e^{\frac{\lambda}{2}|{\bf x}|}(\partial_{j}u)(\partial_{j}u)\,\mathsf{d}{\bf x}\\ & =-\frac{\lambda}{2}\int_{\mathbb{R}^{n}}\frac{x_{j}}{|{\bf x}|}e^{\frac{\lambda}{2}|{\bf x}|}u\partial_{j}u\,\mathsf{d}{\bf x}-\int_{\mathbb{R}^{n}}e^{\frac{\lambda}{2}|{\bf x}|}(\partial_{j}^{2}u)u\,\mathsf{d}{\bf x}\\ & \le\frac{\lambda}{2}\int_{\mathbb{R}^{n}}e^{\frac{\lambda}{2}|{\bf x}|}|u||\partial_{j}u|\,\mathsf{d}{\bf x}+\int_{\mathbb{R}^{n}}e^{\frac{\lambda}{2}|{\bf x}|}|u||\partial_{j}^{2}u|\,\mathsf{d}{\bf x}\\ & \le\frac{\lambda\Lambda}{2}\int_{\mathbb{R}^{n}}e^{-\frac{\lambda}{2}|{\bf x}|}|\partial_{j}u|\,\mathsf{d}{\bf x}+\Lambda\int_{\mathbb{R}^{n}}e^{-\frac{\lambda}{2}|{\bf x}|}|\partial_{j}^{2}u|\,\mathsf{d}{\bf x}\quad\text{(by \eqref{eq:decay-assumption})}\\ & \le\frac{\lambda\Lambda}{2}\bigg(\int_{\mathbb{R}^{n}}e^{-\frac{\lambda}{2}p'|{\bf x}|}\,\mathsf{d}{\bf x}\bigg)^{\frac{1}{p'}}\|\partial_{j}u\|_{L^{p}(\mathbb{R}^{n})} + \Lambda\bigg(\int_{\mathbb{R}^{n}}e^{-\frac{\lambda}{2}p'|{\bf x}|}\,\mathsf{d}{\bf x}\bigg)^{\frac{1}{p'}}\|\partial_{j}^{2}u\|_{L^{p}(\mathbb{R}^{n})}\\ & \le C(n,p,\lambda,\Lambda) \quad(\mbox{where $p'$ is the conjugate exponent of $p$}), \end{align*} that is, we obtain \begin{equation} \int_{\mathbb{R}^{n}}e^{\frac{\lambda}{2}|{\bf x}|}|\nabla u|^{2}\,\mathsf{d}{\bf x}\le C(n,p,\lambda,\Lambda).\label{eq:L2-decay-gradient} \end{equation} We now continue to obtain the $L^{2}$-decay of $(-\Delta)^{\frac{1}{2}}u$. 
In view of \eqref{eq:Schrodinger} and using \eqref{eq:assumption-1}, \eqref{eq:assumption-2}, \eqref{eq:decay-assumption}, \eqref{eq:L2-decay-gradient}, we have \begin{align} & \int_{\mathbb{R}^{n}}e^{\frac{\lambda}{2}|{\bf x}|}|(-\Delta)^{\frac{1}{2}}u|^{2}\,\mathsf{d}{\bf x}\nonumber \\ & \le\int_{\mathbb{R}^{n}}e^{\frac{\lambda}{2}|{\bf x}|}|{\bf b}({\bf x})\cdot\nabla u|^{2}\,\mathsf{d}{\bf x}+\int_{\mathbb{R}^{n}}e^{\frac{\lambda}{2}|{\bf x}|}|q({\bf x})u|^{2}\,\mathsf{d}{\bf x}\nonumber \\ & \le \|{\bf b}\|_{L^{\infty}(\mathbb{R}^{n})}^{2}\int_{\mathbb{R}^{n}}e^{\frac{\lambda}{2}|{\bf x}|}|\nabla u|^{2}\,\mathsf{d}{\bf x}+\|q\|_{L^{\infty}(\mathbb{R}^{n})}^{2}\int_{\mathbb{R}^{n}}e^{\frac{\lambda}{2}|{\bf x}|}|u|^{2}\,\mathsf{d}{\bf x}\nonumber \\ & \le C(n,\lambda,\Lambda).\label{eq:L2-decay-half-Lap} \end{align} Here we may choose a smaller $\epsilon$ if necessary. Our next task is to derive the $L^{p}$-decay of $\nabla u$. First of all, let $p$ be odd. We then have \begin{align} & \int_{\mathbb{R}^{n}}e^{\frac{\lambda}{2}|{\bf x}|}|\partial_{j}u|^{p}\,\mathsf{d}{\bf x} = \int_{\mathbb{R}^{n}}e^{\frac{\lambda}{2}|{\bf x}|}|\partial_{j}u|(\partial_{j}u)^{p-1}\,\mathsf{d}{\bf x}\nonumber \\ & = \int_{\{\partial_{j}u\neq0\}}e^{\frac{\lambda}{2}|{\bf x}|}|\partial_{j}u|(\partial_{j}u)^{p-2}(\partial_{j}u)\,\mathsf{d}{\bf x}\nonumber \\ & =-\frac{\lambda}{2}\int_{\mathbb{R}^{n}}\frac{x_{j}}{|{\bf x}|}e^{\frac{\lambda}{2}|{\bf x}|}|\partial_{j}u|(\partial_{j}u)^{p-2}u\,\mathsf{d}{\bf x} - \int_{\{\partial_{j}u\neq0\}} e^{\frac{\lambda}{2}|{\bf x}|}\frac{\partial_{j}u}{|\partial_{j}u|}(\partial_{j}^{2}u)(\partial_{j}u)^{p-2}u\,\mathsf{d}{\bf x}\nonumber \\ & \quad -(p-2) \int_{\mathbb{R}^{n}}e^{\frac{\lambda}{2}|{\bf x}|}|\partial_{j}u|(\partial_{j}u)^{p-3}(\partial_{j}^{2}u)u\,\mathsf{d}{\bf x}\nonumber \\ & \le\frac{\lambda\Lambda}{2}\int_{\mathbb{R}^{n}}e^{-\frac{\lambda}{2}|{\bf x}|}|\partial_{j}u|^{p-1}\,\mathsf{d}{\bf x} + 
\Lambda(p-1)\int_{\mathbb{R}^{n}}e^{-\frac{\lambda}{2}|{\bf x}|}|\partial_{j}u|^{p-2}|\partial_{j}^{2}u|\,\mathsf{d}{\bf x}\quad\text{(by \eqref{eq:decay-assumption})}\nonumber \\ & \le C(n,p,\lambda,\Lambda) + \Lambda(p-1) \int_{\mathbb{R}^{n}}e^{-\frac{\lambda}{2}|{\bf x}|}|\partial_{j}u|^{p-2}|\partial_{j}^{2}u|\,\mathsf{d}{\bf x}\quad\text{(using \eqref{eq:Lp-gradient-bound})}.\label{eq:Lr-decay-gradient1} \end{align} Note that \begin{align*} & \int_{\mathbb{R}^{n}}e^{-\frac{\lambda}{2}|{\bf x}|}|\partial_{j}u|^{p-2}|\partial_{j}^{2}u|\,\mathsf{d}{\bf x}\\ & \le\bigg(\int_{\mathbb{R}^{n}}e^{-\frac{\lambda}{2}r_{1}|{\bf x}|}\,\mathsf{d}{\bf x}\bigg)^{\frac{1}{r_{1}}}\bigg(\int_{\mathbb{R}^{n}}|\partial_{j}u|^{r_{2}(p-2)}\,\mathsf{d}{\bf x}\bigg)^{\frac{1}{r_{2}}}\bigg(\int_{\mathbb{R}^{n}}|\partial_{j}^{2}u|^{r_{3}}\,\mathsf{d}{\bf x}\bigg)^{\frac{1}{r_{3}}}, \end{align*} where $1<r_{1},r_{2},r_{3}<\infty$ satisfy \[ \frac{1}{r_{1}}+\frac{1}{r_{2}}+\frac{1}{r_{3}}=1. \] Since we consider odd $p \ge 3$, we can choose $r_{1}=p$, $r_{2}=\frac{p}{p-2}$ and $r_{3}=p$. Hence, we obtain from \eqref{eq:Lp-gradient-bound} and \eqref{eq:Lq-hessian-bound} that \begin{equation} \int_{\mathbb{R}^{n}}e^{-\frac{\lambda}{2}|{\bf x}|}|\partial_{j}u|^{p-2}|\partial_{j}^{2}u|\,\mathsf{d}{\bf x}\le C(n,p,\lambda,\Lambda).\label{eq:Lr-decay-gradient2} \end{equation} Combining \eqref{eq:Lr-decay-gradient1} and \eqref{eq:Lr-decay-gradient2} gives \begin{equation} \int_{\mathbb{R}^{n}}e^{\frac{\lambda}{2}|{\bf x}|}|\nabla u|^{p}\,\mathsf{d}{\bf x}\le C(n,p,\lambda,\Lambda).\label{eq:Lr-decay-gradient} \end{equation} When $p$ is even, estimate \eqref{eq:Lr-decay-gradient} follows from the same argument above by noting $|\partial_ju|^{p}=(\partial_ju)^{p}$. Finally, we estimate the $L^{p}$-decay of $(-\Delta)^{\frac{1}{2}}u$. 
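The exponent bookkeeping in the three-factor Hölder inequality above relies on two identities: $\frac{1}{r_{1}}+\frac{1}{r_{2}}+\frac{1}{r_{3}}=1$ and $r_{2}(p-2)=p$, the latter ensuring that the middle factor is exactly $\|\partial_{j}u\|_{L^{p}(\mathbb{R}^{n})}^{p-2}$. A short exact-arithmetic check (illustrative only; the helper name is ours):

```python
from fractions import Fraction

def holder_exponents(p):
    """The exponents r1 = p, r2 = p/(p-2), r3 = p chosen in the text
    for the three-factor Hoelder inequality (p >= 3)."""
    return Fraction(p), Fraction(p, p - 2), Fraction(p)

for p in (3, 5, 7, 9, 11):
    r1, r2, r3 = holder_exponents(p)
    # Hoelder-conjugate exponents: 1/r1 + 1/r2 + 1/r3 = 1 ...
    assert 1 / r1 + 1 / r2 + 1 / r3 == 1
    # ... and the middle factor carries exactly the power p of the gradient:
    assert r2 * (p - 2) == p
```

The same choice works for even $p\ge4$ as well, which is consistent with the remark that the even case follows from the same argument.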
Using equation \eqref{eq:Schrodinger} together with \eqref{eq:assumption-1}, \eqref{eq:assumption-2}, \eqref{eq:decay-assumption} and \eqref{eq:Lr-decay-gradient}, we have \begin{align} & \int_{\mathbb{R}^{n}}e^{\frac{\lambda}{2}|{\bf x}|}|(-\Delta)^{\frac{1}{2}}u|^{p}\,\mathsf{d}{\bf x}\nonumber \\ & \le C\bigg(\int_{\mathbb{R}^{n}}e^{\frac{\lambda}{2}|{\bf x}|}|{\bf b}({\bf x})\cdot\nabla u|^{p}\,\mathsf{d}{\bf x} + \int_{\mathbb{R}^{n}}e^{\frac{\lambda}{2}|{\bf x}|}|q({\bf x})u|^{p}\,\mathsf{d}{\bf x}\bigg)\nonumber \\ & \le C\bigg( \|{\bf b}\|_{L^{\infty}(\mathbb{R}^{n})}^{p} \int_{\mathbb{R}^{n}}e^{\frac{\lambda}{2}|{\bf x}|}|\nabla u|^{p}\,\mathsf{d}{\bf x} + \|q\|_{L^{\infty}(\mathbb{R}^{n})}^{p} \int_{\mathbb{R}^{n}}e^{\frac{\lambda}{2}|{\bf x}|}|u|^{p}\,\mathsf{d}{\bf x}\bigg)\nonumber \\ & \le C(n,p,\lambda,\Lambda).\label{eq:Lr-decay-half-Lap} \end{align} Estimate \eqref{eq:gradient-decay} is then a direct consequence of \eqref{eq:L2-decay-half-Lap} and \eqref{eq:Lr-decay-half-Lap}. \end{proof} \section{\label{sec:Caffarelli-Silvestre-extension}Caffarelli-Silvestre extension} In this section, we briefly discuss the Caffarelli-Silvestre extension \cite{CS07extension}. We also refer to \cite[Appendix~A]{GR19unique} for the higher-order fractional Laplacian. Let $\mathbb{R}_{+}^{n+1}:=\mathbb{R}^{n}\times\mathbb{R}_{+}=\begin{Bmatrix}\begin{array}{l|l} {\bf x}=({\bf x}',x_{n+1}) & {\bf x}'\in\mathbb{R}^{n},x_{n+1}>0\end{array}\end{Bmatrix}$ and ${\bf x}_{0}=({\bf x}',0)\in\mathbb{R}^{n}\times\{0\}$. For $R>0$, we denote \begin{align*} B_{R}^{+}({\bf x}_{0}) & :=\begin{Bmatrix}\begin{array}{l|l} {\bf x}\in\mathbb{R}_{+}^{n+1} & |{\bf x}-{\bf x}_{0}|\le R\end{array}\end{Bmatrix},\\ B_{R}'({\bf x}_{0}) & :=\begin{Bmatrix}\begin{array}{l|l} {\bf x}\in\mathbb{R}^{n}\times\{0\} & |{\bf x}-{\bf x}_{0}|\le R\end{array}\end{Bmatrix}. \end{align*} To simplify the notations, we also denote $B_{R}^{+}:=B_{R}^{+}(0)$ and $B_{R}':=B_{R}'(0)$. 
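Before recalling the precise mapping properties of the extension, it is instructive to see the $s=\frac{1}{2}$ case on a single Fourier mode: the harmonic extension of $u(x)=\cos(kx)$ to the upper half-plane is $\tilde{u}(x,y)=e^{-ky}\cos(kx)$, and its inward normal derivative on the boundary is $k\cos(kx)=(-\Delta)^{\frac{1}{2}}u$, since the Fourier symbol of $(-\Delta)^{\frac{1}{2}}$ is $|\xi|$. The following symbolic computation verifies this in one space dimension (illustrative only; not part of the argument):

```python
import sympy as sp

x, y, k = sp.symbols('x y k', positive=True)

# Boundary datum: a single Fourier mode u(x) = cos(kx), for which
# (-Delta)^{1/2} u = k cos(kx), since the symbol of (-Delta)^{1/2} is |xi|.
u = sp.cos(k * x)
# Its harmonic extension to the upper half-plane {y > 0}:
u_tilde = sp.exp(-k * y) * sp.cos(k * x)

# u_tilde is harmonic in the half-plane ...
assert sp.simplify(sp.diff(u_tilde, x, 2) + sp.diff(u_tilde, y, 2)) == 0
# ... attains u on the boundary {y = 0} ...
assert sp.simplify(u_tilde.subs(y, 0) - u) == 0
# ... and its inward normal derivative at y = 0 recovers (-Delta)^{1/2} u:
assert sp.simplify(-sp.diff(u_tilde, y).subs(y, 0) - k * u) == 0
```

This is the one-mode instance of the Dirichlet-to-Neumann characterization of the half-Laplacian recalled below.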
We define two Sobolev spaces \begin{align*} \dot{H}^{1}(\mathbb{R}_{+}^{n+1}) & :=\begin{Bmatrix}\begin{array}{l|l} v:\mathbb{R}_{+}^{n+1}\rightarrow\mathbb{R} & \int_{\mathbb{R}_{+}^{n+1}}|\nabla v|^{2}\,\mathsf{d}{\bf x}<\infty\end{array}\end{Bmatrix},\\ H_{{\rm loc}}^{1}(\mathbb{R}_{+}^{n+1}) & :=\begin{Bmatrix}\begin{array}{l|l} v\in\dot{H}^{1}(\mathbb{R}_{+}^{n+1}) & \int_{\mathbb{R}^{n}\times(0,r)}|v|^{2}\,\mathsf{d}{\bf x}<\infty\text{ for some constant }r>0\end{array}\end{Bmatrix}. \end{align*} Given any $\mu \in \mathbb{R}$ and $u\in H^{\mu}(\mathbb{R}^{n})$, it follows from \cite[Lemma~A.1]{GR19unique} that there exists $\tilde{u} \in \mathcal{C}^{\infty}(\mathbb{R}_{+}^{n+1})$ such that \begin{equation} \begin{cases} \Delta\tilde{u}=0 \quad \text{in}\;\;\,\mathbb{R}_{+}^{n+1}, \\ \displaystyle{\lim_{x_{n+1} \rightarrow 0}} \| \tilde{u}(\cdot,x_{n+1}) - u \|_{H^{\mu}(\mathbb{R}^{n})} = 0, \end{cases}\label{eq:extension-problem1} \end{equation} where $\nabla=(\nabla',\partial_{n+1})=(\partial_{1},\cdots,\partial_{n},\partial_{n+1})$. Moreover, the half-Laplacian can be identified with the Dirichlet-to-Neumann map of the extension problem \eqref{eq:extension-problem1}: \begin{equation} \lim_{x_{n+1}\rightarrow0} \|\partial_{n+1}\tilde{u}(\cdot ,x_{n+1}) +(-\Delta)^{\frac{1}{2}}u \|_{H^{\mu - 1}(\mathbb{R}^{n})} =0 \label{eq:fractional-Lap-DNmap} \end{equation} (see \cite[(A.3)]{GR19unique}). In particular, when $\mu = \frac{1}{2}$, it follows from \cite[Corollary~A.2]{GR19unique} that \[ \| \tilde{u} \|_{\dot{H}^{1}(\mathbb{R}_{+}^{n+1})} \le C \| u \|_{H^{\frac{1}{2}}(\mathbb{R}^{n})} \quad \text{for some positive constant }C. 
\] In view of this observation, if $u \in H^{1}(\mathbb{R}^{n})$ and both ${\bf b},q$ are bounded, we can reformulate \eqref{eq:Schrodinger} as the following local elliptic equation: \begin{equation} \begin{cases} \Delta\tilde{u}=0 & \text{in}\;\;\mathbb{R}_{+}^{n+1},\\ \tilde{u}({\bf x}',0)=u({\bf x}') & \text{on}\;\;\mathbb{R}^{n} \quad (\text{in}\;\; H^{1}(\mathbb{R}^{n})\text{-sense}),\\ {\displaystyle \lim_{x_{n+1}\rightarrow0}}\partial_{n+1}\tilde{u}({\bf x})={\bf b}({\bf x}')\cdot\nabla'u+q({\bf x}')u & \text{on}\;\;\mathbb{R}^{n} \quad (\text{in}\;\; L^{2}(\mathbb{R}^{n})\text{-sense}). \end{cases}\label{eq:sch-localized} \end{equation} Since $u \in H^{1}(\mathbb{R}^{n}) \equiv {\rm dom} \, ((-\Delta)^{\frac{1}{2}})$, it follows from \cite[pages 48--49]{Sti10} that $\tilde{u}\in H_{{\rm loc}}^{1}(\mathbb{R}_{+}^{n+1})$ and \begin{equation} \|\tilde{u}(\cdot,x_{n+1})\|_{L^{2}(\mathbb{R}^{n})}\le\|u\|_{L^{2}(\mathbb{R}^{n})}.\label{eq:extension-problem-est} \end{equation} \section{\label{sec:Some-estimates-CS}Some estimates related to the extension problem} The following lemma is a special case of \cite[Equation~(19)]{RW19Landis} (see also \cite[Lemma~3.2]{KWLandisDrift}). \begin{lem} Let $\tilde{u}\in \mathcal{C}^{\infty}(\mathbb{R}_{+}^{n+1})$ be a solution to \eqref{eq:extension-problem1}. Then the following estimate holds for any ${\bf x}_{0}\in\mathbb{R}^{n}\times\{0\}$: \begin{align} \|\tilde{u}\|_{L^{2}(B_{{c}R}^{+}({\bf x}_0))}\le& C\bigg(\|\tilde{u}\|_{L^{2}(B_{16R}^{+}({\bf x}_0))}+R^{\frac{1}{2}}\|u\|_{L^{2}(B_{16R}'({\bf x}_0))}\bigg)^{\alpha}\nonumber\\ &\times\bigg(R^{\frac 32}\bigg\|\lim_{x_{n+1}\rightarrow0}\partial_{n+1}\tilde{u}\bigg\|_{L^{2}(B_{16R}'({\bf x}_0))}+R^{\frac 12}\|u\|_{L^{2}(B_{16R}'({\bf x}_0))}\bigg)^{1-\alpha}\label{eq:recall-RW19-eqn19} \end{align} for some positive constants $C=C(n)$, $\alpha=\alpha(n)\in(0,1)$ and ${c}={c}(n)\in(0,1)$, all of which are independent of $R$ and ${\bf x}_{0}$. 
\end{lem} By choosing $\sigma=\frac{1}{2}$, $\nu=2$ and $a({\bf x}')\equiv0$ in \cite[Proposition 2.6(i)]{JLX14}, we obtain the following De Giorgi-Nash-Moser type estimate. \begin{lem} Let $\tilde{u}\in \mathcal{C}^{\infty}(\mathbb{R}_{+}^{n+1})$ satisfy \eqref{eq:extension-problem1} and let $p>n$. There exists a constant $C=C(n,p)>0$ such that \begin{equation} \|\tilde{u}\|_{L^{\infty}(B_{\frac{1}{4}}^{+})}\le C\bigg[\|\tilde{u}\|_{L^{2}(B_{1}^{+})}+\bigg\|\lim_{x_{n+1}\rightarrow0}\partial_{n+1}\tilde{u}\bigg\|_{L^{p}(B_{1}')}\bigg].\label{eq:Schauder-estimate} \end{equation} \end{lem} Combining \eqref{eq:recall-RW19-eqn19} and \eqref{eq:Schauder-estimate}, together with a suitable scaling argument, we obtain the following lemma. \begin{lem} Let $\tilde{u}\in \mathcal{C}^{\infty}(\mathbb{R}_{+}^{n+1})$ be a solution to \eqref{eq:extension-problem1} and let $p>n$. Then the following inequality holds for all ${\bf x}_{0}\in\mathbb{R}^{n}\times\{0\}$ and $R\ge1$: \begin{align} \|\tilde{u}\|_{L^{\infty}(B_{cR}^{+}({\bf x}_{0}))}& \le C\bigg(\|\tilde{u}\|_{L^{2}(B_{16R}^{+}({\bf x}_{0}))}+R^{\frac{1}{2}}\|u\|_{L^{2}(B_{16R}'({\bf x}_{0}))}\bigg)^{\alpha}\nonumber \\ & \quad\times\bigg(R^{\frac{3}{2}}\|(-\Delta)^{\frac{1}{2}}u\|_{L^{2}(B_{16R}'({\bf x}_{0}))}+R^{\frac{1}{2}}\|u\|_{L^{2}(B_{16R}'({\bf x}_{0}))}\bigg)^{1-\alpha}\nonumber \\ & \quad+R^{\frac{3}{2}}\|(-\Delta)^{\frac{1}{2}}u\|_{L^{p}(B_{R}'({\bf x}_{0}))}\label{eq:boundary-bulk1} \end{align} for some positive constants $C=C(n,p)$, $\alpha=\alpha(n)\in(0,1)$ and ${c}={c}(n)\in(0,1)$, all of which are independent of $R$ and ${\bf x}_{0}$. \end{lem} \begin{proof} Without loss of generality, it suffices to take ${\bf x}_{0}=0$. Setting $\tilde{v}({\bf x})=\tilde{u}(R{\bf x})$ and $v({\bf x}')=u(R{\bf x}')$, we observe that \[ \begin{cases} \Delta\tilde{v}=0 & \text{in}\;\;\mathbb{R}_{+}^{n+1},\\ \tilde{v}({\bf x}',0)=v({\bf x}') & \text{on}\;\;{\bf x}'\in\mathbb{R}^{n}. 
\end{cases} \] From \eqref{eq:recall-RW19-eqn19} and \eqref{eq:Schauder-estimate}, it follows that \begin{align} \|\tilde{v}\|_{L^{\infty}(B_{c}^{+})}& \le C\bigg(\|\tilde{v}\|_{L^{2}(B_{16}^{+})}+\|v\|_{L^{2}(B_{16}')}\bigg)^{\alpha}\bigg(\bigg\|\lim_{x_{n+1}\rightarrow0}\partial_{n+1}\tilde{v}\bigg\|_{L^{2}(B_{16}')}+\|v\|_{L^{2}(B_{16}')}\bigg)^{1-\alpha}\nonumber \\ & \quad+\bigg\|\lim_{x_{n+1}\rightarrow0}\partial_{n+1}\tilde{v}\bigg\|_{L^{p}(B_{1}')}.\label{eq:Schauder2} \end{align} Note that \begin{align*} \|\tilde{v}\|_{L^{\infty}(B_{c}^{+})} & =\|\tilde{u}\|_{L^{\infty}(B_{cR}^{+})},\\ \|\tilde{v}\|_{L^{2}(B_{16}^{+})} & =R^{-\frac{n+1}{2}}\|\tilde{u}\|_{L^{2}(B_{16R}^{+})},\\ \|v\|_{L^{2}(B_{16}')} & =R^{-\frac{n}{2}}\|u\|_{L^{2}(B_{16R}')},\\ \bigg\|\lim_{x_{n+1}\rightarrow0}\partial_{n+1}\tilde{v}\bigg\|_{L^{p}(B_{1}')} & =R^{1-\frac{n}{p}}\bigg\|\lim_{x_{n+1}\rightarrow0}\partial_{n+1}\tilde{u}\bigg\|_{L^{p}(B_{R}')},\quad p\ge2. \end{align*} Hence, \eqref{eq:Schauder2} becomes \begin{align*} \|\tilde{u}\|_{L^{\infty}(B_{cR}^{+})}& \le CR^{-\frac{n}{2}}R^{-\frac{\alpha}{2}}\bigg(\|\tilde{u}\|_{L^{2}(B_{16R}^{+})}+R^{\frac{1}{2}}\|u\|_{L^{2}(B_{16R}')}\bigg)^{\alpha}\\ & \quad\times R^{-\frac{1}{2}(1-\alpha)}\bigg(R^{\frac{3}{2}}\bigg\|\lim_{x_{n+1}\rightarrow0}\partial_{n+1}\tilde{u}\bigg\|_{L^{2}(B_{16R}')}+R^{\frac{1}{2}}\|u\|_{L^{2}(B_{16R}')}\bigg)^{1-\alpha}\\ & \quad+R^{-\frac{1}{2}-\frac{n}{p}}R^{\frac{3}{2}}\bigg\|\lim_{x_{n+1}\rightarrow0}\partial_{n+1}\tilde{u}\bigg\|_{L^{p}(B_{R}')}. \end{align*} Since $R\ge1$, \eqref{eq:boundary-bulk1} follows immediately. \end{proof} \section{\label{sec:Boundary-Bulk}Boundary decay to bulk decay} In this section, we will establish that the boundary decay implies the bulk decay. \begin{prop} \label{prop:boundary-bulk}Assume that \eqref{eq:assumption-1} and \eqref{eq:assumption-2} are satisfied. 
Let $u \in W^{2,p}(\mathbb{R}^{n})$, for some integer $n < p <\infty$, be a solution to \eqref{eq:Schrodinger} satisfying the decay assumption \eqref{eq:decay-assumption}. Then \begin{equation} |\tilde{u}({\bf x})|\le Ce^{-c|{\bf x}|}\quad\text{for }\;\;{\bf x}=({\bf x}',x_{n+1})\in\mathbb{R}_{+}^{n+1}.\label{eq:bulk-decay} \end{equation} \end{prop} \begin{proof} Given any $R\ge1$, choose ${\bf x}_{0}\in\mathbb{R}^{n}\times\{0\}$ with $|{\bf x}_{0}|=32R$. By \eqref{eq:decay-assumption}, we have \begin{equation} \|u\|_{L^{2}(B_{16R}'({\bf x}_{0}))}\le Ce^{-cR}.\label{eq:boundary-to-bulk1} \end{equation} Furthermore, \eqref{eq:extension-problem-est} yields \begin{equation} \|\tilde{u}\|_{L^{2}(B_{16R}^{+}({\bf x}_{0}))}\le\|\tilde{u}\|_{L^{2}(\mathbb{R}^{n}\times(0,16R))}\le4R^{\frac{1}{2}}\|u\|_{L^{2}(\mathbb{R}^{n})}\le C(n,\lambda,\Lambda)R^{\frac{1}{2}}.\label{eq:boundary-to-bulk2} \end{equation} Plugging \eqref{eq:gradient-decay}, \eqref{eq:boundary-to-bulk1} and \eqref{eq:boundary-to-bulk2} into \eqref{eq:boundary-bulk1} implies \[ \|\tilde{u}\|_{L^{\infty}(B_{cR}^{+}({\bf x}_{0}))}\le Ce^{-cR}. \] Following the chain-of-balls argument described in \cite[Proposition 2.2, Step 2]{RW19Landis}, we conclude the desired estimate \eqref{eq:bulk-decay}. \end{proof} \section{\label{sec:main}Proof of Theorem~{\rm \ref{thm:main1}}} Recall the following Liouville-type theorem from \cite[Theorem B]{Armitage85Liouville}. \begin{thm} \label{thm:Liouville}Suppose that $\Delta\tilde{u}=0$ in $\mathbb{R}_{+}^{n+1}$. If $\tilde{u}$ satisfies the decay property \eqref{eq:bulk-decay}, then $\tilde{u}\equiv0$. \end{thm} \noindent Theorem~\ref{thm:main1} is now an immediate consequence of Proposition~\ref{prop:boundary-bulk} and Theorem~\ref{thm:Liouville}. We now say a few words about the proof of Theorem~\ref{thm:main2}. As in Proposition~\ref{prop:boundary-bulk}, the boundary decay \eqref{cx} implies the bulk decay \eqref{eq:bulk-decay}. 
In the case of ${\bf b}\equiv 0$, the proof of Proposition~\ref{prop:boundary-bulk} remains valid when $q$ is bounded. To make the paper self-contained, we will give another proof of Theorem~\ref{thm:Liouville} in Appendix~\ref{sec:Appendix1}. \appendix \section{\label{sec:Appendix1}Proof of Theorem~\ref{thm:Liouville}} First, we introduce a mapping between the unit ball and the upper half-space which, combined with the Kelvin transform, preserves harmonicity. For convenience, we define \[ {\bf x}^{*}:=\frac{{\bf x}}{|{\bf x}|^{2}}\quad\text{for }\,\,{\bf x}\in\mathbb{R}^{n+1}\setminus\{0\}, \] see e.g. \cite{ABR01Harmonic}. Let ${\bf s}=(0,\cdots,0,-1)$ be the south pole of the unit sphere $\mathcal{S}^{n}$, and we define \[ \Phi({\bf z}):=2({\bf z}-{\bf s})^{*}+{\bf s}=\frac{(2{\bf z}',1-|{\bf z}|^{2})}{|{\bf z}'|^{2}+(1+z_{n+1})^{2}} \] for all ${\bf z}=({\bf z}',z_{n+1})\in\mathbb{R}^{n+1}\setminus\{{\bf s}\}$. It is easy to see that $\Phi\circ\Phi={\rm Id}$. Let $B_{1}(0)$ be the unit ball in $\mathbb{R}^{n+1}$. The following lemma can be found in \cite{ABR01Harmonic}. \begin{lem} \label{lem:conformal}The mapping $\Phi:\mathbb{R}^{n+1}\setminus\{{\bf s}\}\rightarrow\mathbb{R}^{n+1}\setminus\{{\bf s}\}$ is injective. Furthermore, it maps $B_{1}(0)$ onto $\mathbb{R}_{+}^{n+1}$, and maps $\mathbb{R}_{+}^{n+1}$ onto $B_{1}(0)$. It also maps $\mathcal{S}^{n}\setminus\{{\bf s}\}$ onto ${\mathbb R}^n$ and maps ${\mathbb R}^n$ onto $\mathcal{S}^{n}\setminus\{{\bf s}\}$. \end{lem} Given any function $w$ defined on a domain $\Omega\subset\mathbb{R}^{n+1}\setminus\{{\bf s}\}$, the Kelvin transform $\mathcal{K}[w]$ of $w$ is defined by \begin{equation} \mathcal{K}[w]({\bf z}):=2^{\frac{n-1}{2}}|{\bf z}-{\bf s}|^{1-n}w(\Phi({\bf z}))\quad\text{for all}\,\,{\bf z}\in\Phi(\Omega).\label{eq:Kelvin} \end{equation} The following lemma, which can also be found in \cite{ABR01Harmonic}, exhibits a crucial property of the Kelvin transform. 
\begin{lem} \label{lem:harmonic}Let $\Omega$ be any domain in $\mathbb{R}^{n+1}\setminus\{{\bf s}\}$. Then $w$ is harmonic on $\Omega$ if and only if $\mathcal{K}[w]$ is harmonic on $\Phi(\Omega)$. \end{lem} Now, we are ready to prove Theorem~\ref{thm:Liouville}. \begin{proof} [Proof of Theorem~{\rm \ref{thm:Liouville}}] To begin, it is not hard to compute \begin{align*} |\Phi({\bf z})| & =\frac{\sqrt{4|{\bf z}'|^{2}+(|{\bf z}|^{2}-1)^{2}}}{|{\bf z}'|^{2}+(1+z_{n+1})^{2}}=\bigg|\frac{2|{\bf z}'|+i((-z_{n+1})^{2}+|{\bf z}'|^{2}-1)}{(-z_{n+1}-1)^{2}+|{\bf z}'|^{2}}\bigg|\\ & =\bigg|\frac{(-z_{n+1}+1)+i|{\bf z}'|}{(-z_{n+1}-1)+i|{\bf z}'|}\bigg|. \end{align*} The decay assumption \eqref{eq:bulk-decay} implies that for ${\bf z}$ near the south pole ${\bf s}$, \begin{align*} |\mathcal{K}[\tilde{u}]({\bf z})| & =2^{\frac{n-1}{2}}|{\bf z}-{\bf s}|^{1-n}|\tilde{u}(\Phi({\bf z}))| \le C|{\bf z}-{\bf s}|^{1-n}e^{-c|\Phi({\bf z})|}\\ & =C|{\bf z}-{\bf s}|^{1-n}\exp\bigg(-c\bigg|\frac{(-z_{n+1}+1)+i|{\bf z}'|}{(-z_{n+1}-1)+i|{\bf z}'|}\bigg|\bigg)\\ & \le C|{\bf z}-{\bf s}|^{1-n}\exp\bigg(-c\frac{1}{|{\bf z}-{\bf s}|}\bigg)\\ & \le C\exp\bigg(-\frac{c}{2}\frac{1}{|{\bf z}-{\bf s}|}\bigg), \end{align*} where in the last step the polynomial factor $|{\bf z}-{\bf s}|^{1-n}$ is absorbed into the exponential at the cost of halving the constant $c$. From Lemma~\ref{lem:harmonic}, we know that $\mathcal{K}[\tilde{u}]$ is harmonic on $B_{1}(0)$. By \cite[Theorem 1]{Jin93UCP}, we obtain that $\mathcal{K}[\tilde{u}]\equiv0$. In view of \eqref{eq:Kelvin} and Lemma~\ref{lem:conformal}, we then conclude that $\tilde{u}\equiv0$ in $\Phi(B_{1})=\mathbb{R}_{+}^{n+1}$. \end{proof} \section*{Acknowledgments} Kow is partially supported by the Academy of Finland (Centre of Excellence in Inverse Modelling and Imaging, 312121) and by the European Research Council under Horizon 2020 (ERC CoG 770924). Wang is partially supported by MOST 108-2115-M-002-002-MY3 and MOST 109-2115-M-002-001-MY3. \end{sloppypar} \bibliographystyle{custom} \bibliography{ref} \end{document}